A woman has told the BBC she felt “dehumanised and reduced to a sexual stereotype” after Elon Musk’s AI Grok was used to digitally remove her clothes.
The BBC has seen several examples on the social media platform X of people asking the chatbot to undress women, making them appear in bikinis without their consent, as well as placing them in sexual situations.
XAI, the company behind Grok, did not respond to a request for comment, other than with an automatically generated reply stating “legacy media lies”.
Ms Smith shared a post on X about her image being altered, which was met with comments from others who had experienced the same, before others asked Grok to generate more images of her.
“Women are not consenting to this,” she said.
“While it wasn’t me that was in a state of undress, it looked like me and it felt like me, and it felt as violating as if somebody had actually posted a nude or a bikini picture of me.”
A Home Office spokesperson said it was legislating to ban nudification tools, and under a new criminal offence, anyone who supplied such tech would “face a prison sentence and substantial fines”.
The regulator Ofcom said tech firms must “assess the risk” of people in the UK viewing illegal content on their platforms, but did not confirm whether it was currently investigating X or Grok in relation to AI images.
Grok is a free AI assistant, with some paid-for premium features, which responds to X users’ prompts when they tag it in a post.
It is often used to provide responses or extra context to other posters’ remarks, but people on X can also edit an uploaded picture through its AI image-editing feature.
It has been criticised for allowing users to generate images and videos with nudity and sexualised content, and it was previously accused of creating a sexually explicit clip of Taylor Swift.
Clare McGlynn, a law professor at Durham University, said X or Grok “could prevent these forms of abuse if they wanted to”, adding that they “appear to enjoy impunity”.
“The platform has been allowing the creation and distribution of these images for months without taking any action, and we have yet to see any challenge by regulators,” she said.
XAI’s own acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner”.
In a statement to the BBC, Ofcom said it was illegal to “create or share non-consensual intimate images or child sexual abuse material” and confirmed this included sexual deepfakes created with AI.
It said platforms such as X were required to take “appropriate steps” to “reduce the risk” of UK users encountering illegal content on their platforms, and to take it down quickly when they become aware of it.
Additional reporting by Chris Vallance.
