
An analysis by WIRED and Indicator found nearly 90 schools and 600 students around the world impacted by AI-generated deepfake nude images—and the problem shows no signs of going away.
By Matt Burgess | WIRED
It usually starts with a photo downloaded from social media.
Around the world, teenage boys are saving Instagram and Snapchat images of girls they know from school and using harmful “nudify” apps to create fake nude photos or videos of them. These deepfakes can quickly be shared across whole schools, leaving victims feeling humiliated, violated, hopeless, and scared the images will haunt them forever.
The deepfake crisis hitting schools started slowly a couple of years ago, but it has since grown considerably as the technology used to create the explicit imagery has become more accessible. Deepfake sexual abuse incidents have hit around 90 schools globally and have impacted more than 600 pupils, according to a review of publicly reported incidents by WIRED and Indicator, a publication focusing on digital deception and misinformation.
The findings show that since 2023, schoolchildren—most often boys in high schools—in at least 28 countries have been accused of using generative AI to target their classmates with sexualized deepfakes. The explicit imagery, containing minors, is considered to be child sexual abuse material (CSAM). This analysis is believed to be the first to review real-world cases of AI deepfake abuse taking place at schools globally.
Taken together, the analysis shows the worldwide reach of harmful AI nudification technology, whose creators can earn millions of dollars per year, and reveals that in many incidents, schools and law enforcement officials are not prepared to respond to these serious sexual abuse cases.