INHOPE - Association of Internet Hotline Providers | Microsoft: Protecting communities from abusive AI generated content
Article
Partner Updates

Microsoft: Protecting communities from abusive AI generated content

Microsoft has a longstanding commitment to digital safety: we recognise that we have a responsibility to support safe online experiences for our users, especially young people, and to contribute to a safer online ecosystem to support fundamental human rights.

Across our diverse services, we approach safety through four interrelated pillars: (1) platform architecture, including our commitment to safety by design; (2) content moderation, and the steps we take to reduce risks related to illegal and harmful content; (3) culture, including our efforts to help users foster safe online spaces; and (4) collaboration to address complex, whole-of-society harms.

Online child sexual exploitation and abuse remains one of the most complex and urgent issues of our time. We are committed to continuing to support and work closely with partners across industry, government, and civil society to create a safer online world for all. That includes critical partnerships with the National Center for Missing and Exploited Children (NCMEC), the Internet Watch Foundation (IWF), the Tech Coalition, and the WeProtect Global Alliance, and it is why we are pleased to be supporting INHOPE's annual Summit in New York this year.


What are you doing to address abusive AI-generated content?

In so many ways, AI will create exciting opportunities for all of us to bring new ideas to life. But as these tools come to market from Microsoft and across the tech sector, we must take additional steps to ensure they are resistant to abuse. The history of technology has long demonstrated that creativity is not confined to people with good intentions. We are collectively seeing the abuse of AI tools by bad actors, including the creation of synthetic child sexual exploitation material. At Microsoft, we are committed to a robust and comprehensive approach to addressing abusive AI, based on six focus areas:

  1. A strong safety architecture.
  2. Durable media provenance and watermarking.
  3. Safeguarding our services from abusive content and conduct.
  4. Robust collaboration across industry and with governments and civil society.
  5. Modernised legislation to protect people from the abuse of technology.
  6. Public awareness and education.

Are there any current projects you’d like to highlight?

We continue to take steps across each of these pillars to help address child sexual exploitation and abuse risks.

  • Recognising the potential risks arising from the abuse of AI technologies, Microsoft was pleased in April to join a range of other leading AI companies in supporting new safety-by-design principles for generative AI, led by Thorn and All Tech Is Human, which will guide us as we take steps across our services and partner with others to evolve best practices.
  • We have also been proud to announce we have joined the Tech Coalition’s flagship Lantern program, which enables cross-industry child safety signal sharing, alongside our ongoing engagement with the Tech Coalition’s critical industry leadership on these topics.
  • We’re continuing to invest in research to help understand how people are using and perceiving AI, including through our annual Global Online Safety Survey and our partnership with National 4H. And based on that, we’ve developed a Family Safety Toolkit, which includes tips for parents and caregivers navigating online safety and the age of AI.
  • And to support efforts to consider how modernised legislation can support cross-sectoral efforts to protect the public from abusive AI-generated content risks, we have recently released a new whitepaper for US policymakers. This outlines policy recommendations across a range of harms, including the urgent need to protect women and children from image-based sexual abuse.

What’s next?

We look forward to continuing to work with policymakers and experts to take these recommendations and steps forward, including as we discuss prevention and awareness-raising activities at the INHOPE Summit in October. And we will keep partnering across industry, governments, and civil society to advance safety across our services and the ecosystem as a whole.


Read more about Microsoft's commitment to responsible AI and digital safety.

by Microsoft

Learn more about the INHOPE Summit sponsored by Microsoft
