Tool able to find 700% more illegal content than human moderators
Menlo Park/London (pte/04.09.2018/11:30) Google has been working with the Internet Watch Foundation to develop a new tool that uses artificial intelligence (AI) to detect content depicting sexual abuse of children online. The free tool uses image recognition software to help human moderators find and delete illegal graphic content.
Moderators are being sheltered from exposure
The tool is designed so that moderators need not be exposed to content that could trigger traumatic reactions, while at the same time detecting far larger quantities of child abuse material. Existing detection systems can also identify such content by comparing it against a database of previously flagged images. But that approach only catches already known material, and it does not shelter moderators from having to inspect new abusive content themselves.
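Roughly, such database matching can be illustrated with a short Python sketch. The hash set below is hypothetical; real systems such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding, whereas the plain cryptographic hash here only matches byte-identical files.

```python
import hashlib

# Hypothetical database of hashes of already-identified illegal images.
# (Placeholder value for illustration only.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_abusive(image_bytes: bytes) -> bool:
    """Return True if the image matches a previously flagged item."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

# New, never-before-seen material produces an unknown hash, so a human
# moderator still has to review it: the gap the AI classifier aims to close.
```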
Google is trying to remove that step by providing service providers, NGOs and other technology companies with its new software. The AI uses deep neural networks to sort through massive amounts of data and flag suspect content. “We’re making this available for free to NGOs and industry partners via our Content Safety API, a toolkit to increase the capacity to review content in a way that requires fewer people to be exposed to it,” says Google.
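A partner integration might look something like the following sketch. The endpoint, key and response fields are assumptions made for illustration; the real Content Safety API is restricted to vetted partners and its request format is not public.

```python
import base64
import requests  # third-party HTTP client

# Hypothetical endpoint and field names, not the real API's.
API_URL = "https://example.googleapis.com/v1/images:classify"
API_KEY = "partner-api-key"

def prioritize_for_review(image_paths: list[str]) -> list[tuple[str, float]]:
    """Send images to the classifier and sort them by predicted severity,
    so human moderators look at the most likely violations first."""
    scored = []
    for path in image_paths:
        with open(path, "rb") as f:
            payload = {"image": base64.b64encode(f.read()).decode("ascii")}
        resp = requests.post(API_URL, params={"key": API_KEY}, json=payload)
        resp.raise_for_status()
        # Assumed response shape: {"score": <probability of abusive content>}
        scored.append((path, resp.json()["score"]))
    # Highest-risk images first, reducing how much material reviewers see.
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

The point of ranking rather than merely flagging is that reviewers spend their limited time on the images the classifier considers most likely to be illegal, which is how fewer people end up exposed to the material.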
The new tool is already showing results: in the same amount of time, a reviewer assisted by the tool found 700% more abusive content than a human investigator searching manually for content depicting sexual abuse of children online. “We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our ‘human’ experts review material to an even greater scale and keep up with offenders, by targeting imagery that hasn’t previously been marked as illegal material,” says Susie Hargreaves, Internet Watch Foundation chief executive.