Google has released an AI-powered toolkit that will help organisations detect and report child sexual abuse material (CSAM) circulating on the internet. The company is making it available for free to NGOs and industry partners via its Content Safety API, which increases organisations' capacity to review content while exposing fewer human reviewers to it.
“Today we’re introducing the next step in this fight: cutting-edge artificial intelligence (AI) that significantly advances our existing technologies to dramatically improve how service providers, NGOs, and other technology companies review this content at scale,” Google said in a blog post. By using deep neural networks for image processing, the tools can assist reviewers in sorting through large volumes of images by prioritising the most likely CSAM content for review. The tech giant says that since the early 2000s it has been investing in technology and teams, and working closely with expert organisations such as the Internet Watch Foundation, to fight the spread of CSAM online.
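Google has not documented the API's interface in this announcement, so the Python sketch below only illustrates the prioritisation idea it describes: a classifier score is attached to each incoming image, and the review queue is ordered so the highest-scoring items surface first. Every name here (`classifier_score`, `build_review_queue`, the mock scores) is hypothetical, not part of Google's actual API.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class ReviewItem:
    # Score is negated so the highest-priority item pops first from the min-heap.
    neg_score: float
    image_id: str = field(compare=False)

def classifier_score(image_id: str) -> float:
    """Hypothetical stand-in for a deep-neural-network classifier call.

    In a real deployment this would send the image to a scoring service
    and return its likelihood estimate; it is mocked here so the example
    runs on its own.
    """
    mock_scores = {"img_001": 0.12, "img_002": 0.97, "img_003": 0.55}
    return mock_scores.get(image_id, 0.0)

def build_review_queue(image_ids):
    """Yield images in priority order, likeliest matches first."""
    heap = [ReviewItem(-classifier_score(i), i) for i in image_ids]
    heapq.heapify(heap)
    while heap:
        item = heapq.heappop(heap)
        yield item.image_id, -item.neg_score

if __name__ == "__main__":
    for image_id, score in build_review_queue(["img_001", "img_002", "img_003"]):
        print(f"review {image_id} (score {score:.2f})")
```

The design point is simple: ranking by model confidence means a fixed-size review team reaches the likeliest material sooner, which is how fewer reviewers can cover more content.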
Google says that quick identification of new images means that children who are being sexually abused are much more likely to be identified and protected from further abuse. “This initiative will allow greatly improved speed in review processes of potential CSAM. We’ve seen firsthand that this system can help a reviewer find and take action on 700 percent more CSAM content over the same time period,” the company says. Google added that it will continue to invest in technology and organisations “to help fight the perpetrators of CSAM and to keep its platforms and users safe from this type of abhorrent content.”