Chatbot website Character.ai is barring teenagers from having conversations with its virtual characters, after facing intense criticism over the kinds of interactions young people have been having with online companions.
The platform, founded in 2021, is used by millions of people to talk to chatbots powered by artificial intelligence (AI).
But it is facing several lawsuits in the US from parents, including one over the death of a teenager, with some branding it a "clear and present danger" to young people.
Now, Character.ai says that from 25 November, under-18s will only be able to generate content such as videos with their characters, rather than talk to them as they can currently.
Online safety campaigners have welcomed the move but said the feature should never have been available to children in the first place.
Character.ai said it was making the changes after "reports and feedback from regulators, safety experts, and parents", which have highlighted concerns about its chatbots' interactions with teenagers.
Experts have previously warned that the potential for AI chatbots to make things up, be overly encouraging, and feign empathy can pose risks to young and vulnerable people.
"Today's announcement is a continuation of our general belief that we need to keep building the safest AI platform on the planet for entertainment purposes," Character.ai boss Karandeep Anand told BBC News.
He said AI safety was "a moving target" but something the company had taken an "aggressive" approach to, with parental controls and guardrails.
Online safety group Internet Matters welcomed the announcement, but said safety measures should have been built in from the start.
"Our own research shows that children are being exposed to harmful content and put at risk when engaging with AI, including AI chatbots," it said.
Character.ai has been criticised in the past for hosting potentially harmful or offensive chatbots that children could talk to.
Avatars impersonating British teenagers Brianna Ghey, who was murdered in 2023, and Molly Russell, who took her own life at the age of 14 after viewing suicide material online, were found on the site in 2024 before being taken down.
Later, in 2025, the Bureau of Investigative Journalism (TBIJ) found a chatbot based on the paedophile Jeffrey Epstein, which had logged more than 3,000 chats with users.
The outlet reported that the "Bestie Epstein" avatar continued to flirt with its reporter after they said they were a child. It was one of several bots flagged by TBIJ that were subsequently taken down by Character.ai.
The Molly Rose Foundation, which was set up in memory of Molly Russell, questioned the platform's motivations.
"Yet again it has taken sustained pressure from the media and politicians to make a tech firm do the right thing, and it appears that Character AI is choosing to act now before regulators make them," said Andy Burrows, its chief executive.
Mr Anand said the company's new focus was on providing "even deeper gameplay [and] role-play storytelling" features for teenagers, adding that these would be "far safer than what they might be able to do with an open-ended bot".
New age verification methods will also be introduced, and the company will fund a new AI safety research lab.
Social media expert Matt Navarra said it was a "wake-up call" for the AI industry, which is moving "from permissionless innovation to post-crisis regulation".
"When a platform that builds a teen experience still then pulls the plug, it's saying that filtered chats aren't enough when the tech's emotional pull is strong," he told BBC News.
"This isn't about content slips. It's about how AI bots mimic real relationships and blur the lines for young users," he added.
Mr Navarra also said the big challenge for Character.ai would be to create an engaging AI platform that teenagers still want to use, rather than seeing them move to less safe alternatives.
Meanwhile, Dr Nomisha Kurian, who has researched AI safety, said restricting teenagers' use of the chatbots was "a sensible move".
"It helps to separate creative play from more personal, emotionally sensitive exchanges," she said.
"This is so important for young users still learning to navigate emotional and digital boundaries.
"Character.ai's new measures may reflect a maturing phase in the AI industry: child safety is increasingly being recognised as an urgent priority for responsible innovation."
