
The sun sets on the Illinois State Capitol on Feb. 18, 2026, in Springfield. State lawmakers recently updated Illinois’ child pornography laws to include AI-generated images. (Brian Cassella/Chicago Tribune)
By The Editorial Board | Chicago Tribune
Every parent should be paying attention to what’s been going on at Lake Zurich High School.
In an April 2 communication to families, school officials said police are investigating allegations that students used artificial intelligence to generate and share explicit, pornographic images using the likeness of other students. District officials have said that no staff members directly viewed the images, underscoring both the sensitivity of the material and the limits schools face once a police investigation begins. The conduct itself dates to late February but came to light only on April 2.
Kids have been bullying each other since the dawn of human existence. These allegations are different. Imagine being a victim’s mother or father and having to console them, to strategize how to show their face back at school, to process the feelings of violation, embarrassment and sadness that inevitably follow such exposure. Imagine being the parent of the child who did it and will have to face the consequences.
What’s going on is an uncomfortable tension between two difficult truths. Victims of AI manipulation are suffering real harm, including humiliation and lasting emotional damage. At the same time, many of the teens responsible are not fully equipped to grasp the permanence and scale of what they’re doing.
Adolescent minds today have easy access to technology that can create and distribute images instantly, without clear or consistently enforced guardrails. Schools, laws and parents are still operating under rules built for a world where harmful images had to be shot, not fabricated, and where the consequences unfolded more slowly.
Last month, two teenage Pennsylvania boys received probation after generating hundreds of fake nude photos of classmates using AI. The boys were 14 at the time of the crime. Last year, police in Louisiana discovered that several middle school boys had been sharing AI-generated nude photographs of female classmates on Snapchat. Advocates say there are thousands of instances of this kind of AI-enabled targeting each year, and as the technology improves, the problem grows with it.
A key challenge in attacking the problem is the nature of teenagers: Their decision-making and maturity are still developing. In the same way we don’t let kids drink until they’re 21 or drive until they’re 16, we cannot expect all teenagers to make responsible decisions with tools this powerful.