“Amazingly realistic” child abuse images are being created using AI
Artificial intelligence could be used to generate “unprecedented amounts” of realistic child sexual abuse material, an online safety group has warned.
The Internet Watch Foundation (IWF) said it had already found “amazingly realistic” AI-produced images that many people would find “indistinguishable from real” ones.
The websites examined by the group, some of which were reported by the public, featured children as young as three years old.
The IWF, which is responsible for finding and removing child sexual abuse material online, warned the imagery is realistic enough that it could become more difficult to tell when real children are at risk.
IWF chief executive Susie Hargreaves called on Prime Minister Rishi Sunak to treat the issue as a “top priority” when the UK hosts a global AI summit later this year.
She said: “We are not currently seeing large numbers of these images, but we recognize that criminals have the potential to produce unprecedented amounts of lifelike images of child sexual abuse.”
“This would potentially be devastating for internet safety and the safety of children online.”
Risk of AI images “growing”
While AI-generated imagery of this type is illegal in the UK, the IWF said rapid advances in the technology and its increasing accessibility mean the scale of the problem could soon make it difficult for the law to keep up.
The National Crime Agency (NCA) said the risk was “rising” and was being taken “extremely seriously”.
Chris Farrimond, Director of Threat Leadership at the NCA, said: “There is a very real possibility that an increase in the volume of AI-generated material could significantly impact law enforcement resources and increase the time it takes us to identify real children who are vulnerable.”
Mr Sunak said the upcoming world summit, expected in the autumn, will debate the regulatory “guard rails” that could mitigate future risks from AI.
He has already met with key players in the industry, including Google as well as ChatGPT maker OpenAI.
A government spokesman told Sky News: “AI-generated child sexual exploitation and abuse content is illegal, whether or not it depicts a real child. This means tech companies will be required under the Online Safety Act to proactively identify and remove such content. The legislation is designed to keep pace with new technologies like AI.
“The Online Safety Act will require businesses to be proactive about all forms of child sexual abuse online, including grooming, live streaming, child sexual abuse material and banned child images – or face hefty fines.”
Offenders are helping each other use AI
The IWF said it also found an online “manual” written by offenders to help others use AI to create even more lifelike abuse images, bypassing security measures that image generators have put in place.
Like text-based generative AI such as ChatGPT, image tools such as DALL-E 2 and Midjourney are trained on data from across the web to understand prompts and produce appropriate results.
DALL-E 2, a popular image generator from ChatGPT developer OpenAI, and Midjourney both say they restrict their software’s training data to limit its ability to create certain content, and block some text inputs.
OpenAI also uses automated and human monitoring systems to prevent abuse.
Ms Hargreaves said AI companies need to adapt to ensure their platforms are not being exploited.
“Continued misuse of this technology could have deeply sinister consequences — and result in more and more people being exposed to this harmful content,” she said.