The pre-AI world is gone. Estimates suggest that already, as many as one in eight kids personally knows someone who has been the target of a deepfake photo or video, with numbers rising to one in four who have seen a sexualized deepfake of someone they recognize, whether a friend or a celebrity. This is a real problem, and it's one that lawmakers are suddenly waking up to.
In the 1980s, when I was a kid, it was a picture of a missing child on a milk carton from across the country that encapsulated parental fears. In 2026, it's an AI-generated suggestive image of a loved one.
The growing availability of AI nudification tools, such as those associated with Grok, has fueled skyrocketing reports of AI-generated child sexual abuse material, from roughly 4,700 in 2023 to over 440,000 in the first half of 2025 alone, according to the National Center for Missing & Exploited Children.
This is horrific, filthy stuff. It's particularly difficult to read about, and to write about, as a mom, because the ability to shield your child from it feels so beyond your control. Parents already struggle just to keep kids off social media, get screens out of classrooms or lock up household devices at night. And that's after a decade's worth of data on social media's impact on kids.
Before we've even solved that problem, AI is taking the world by storm, especially among the young. Nearly half (42%) of American teens report talking to AI chatbots as a friend or companion. The vast majority of students (86%) report using AI during the school year, according to Education Week. Even kids ages 5 to 12 are using generative AI. In several high-profile cases, parents say AI chatbots encouraged their teens to commit suicide.
Too many parents are out of the loop. Polling from Common Sense Media shows that parents consistently underestimate their children's use of AI. Schools, too. The same survey found that few schools had communicated, or arguably even developed, an AI policy.
But there's a shared sense of foreboding: Americans remain far more concerned (50%) than excited (10%) about the increased use of AI in daily life, and the vast majority (87%) believe they have little to no ability to control it.
Policymakers are on the move. On Jan. 13, the Senate unanimously passed a bill, the Defiance Act, to allow victims of deepfake porn to sue the people who created the images. The U.K. and the EU are investigating whether Grok was used to generate sexually explicit deepfake images of women and children without their consent, in violation of the U.K.'s Online Safety Act.
In the U.S., the Take It Down Act, passed by Congress and signed into law last year, criminalized sexual deepfakes and requires platforms to remove the images within 48 hours; those who share them can face jail time.
In my home state of Texas, we have some of the most aggressive AI laws in the country. The Securing Children Online through Parental Empowerment Act of 2024, among other things, requires platforms to implement a strategy to prevent minors from being exposed to “harmful material.” It has been illegal since Sept. 1, 2025, to create or distribute any sexually suggestive images without consent. Punishments range from felony charges and imprisonment to recurring fines. And starting this year, the Texas Responsible AI Governance Act goes into effect, banning AI development with the sole intent of creating deepfakes.
Texas may not be known for its bipartisanship, but these efforts have been pushed in a bipartisan manner and framed (correctly) as protecting Texas children and parental rights. “In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” said Attorney General Ken Paxton, announcing his investigation into Meta AI Studio and Character.AI.
But we don't yet know whether these laws will be effective. For one, it's all still so new. For another, the technology keeps changing.
And it doesn't help that the creators of AI are tight with Washington. Big Tech companies are the big players in D.C. these days; their lobbying has grown significantly. Closer to home, Texas Democrats are concerned that Paxton might not press Musk over the Grok debacle, given the billionaire's deep GOP connections.
Under the Trump administration, the Federal Trade Commission launched a formal inquiry into Big Tech, asking companies to detail how they test and monitor for potential negative impacts of chatbots on kids. But that's essentially self-disclosure, and those same companies haven't exactly inspired confidence on that score with social media or, in the case of Grok, with deepfake child nudes.
More external accountability is needed, and to that end, a multipronged approach. I'd like to see Health and Human Services incorporate AI's challenge to kids' well-being as part of the MAHA movement. A bipartisan commission could explore AI age limits, school policies and children's relational skills. (Concerningly, there was little mention of AI in MAHA's comprehensive report on child health last year.)
But even with federal and state action, the reality is that much of the AI world will have to be navigated by us parents ourselves. While there are steps that could limit children's exposure to AI at younger ages, avoidance alone is not the answer. We're only at the start, and already AI technology is unavoidable. It's in our computers, homes, schools, toys and workplaces, and the AI age is only just beginning.
More scaffolding is needed, but the deep work will fall to parents. Parents have always needed to raise children with strong spines, thick skins and moral virtue. The struggles of each era change; that doesn't. We'll now need to raise children who have the sense of purpose, critical-thinking skills and relational know-how to live with this new and already ubiquitous technology, with all its great promise and its dangers.
It's a brave new world out there, indeed.
