The Internet Watch Foundation (IWF) says its analysts have found "criminal imagery" of girls aged between 11 and 13 which "appears to have been created" using Grok.
The AI tool is owned by Elon Musk's company xAI. It can be accessed via its website and app, or through the social media platform X.
The IWF said it found "sexualised and topless imagery of girls" on a "dark web forum" in which users claimed they had used Grok to create the imagery.
The BBC has approached X and xAI for comment.
The IWF's Ngaire Alexander told the BBC that tools like Grok now risked "bringing sexual AI imagery of children into the mainstream".
She said the material would be classified as Class C under UK law – the lowest severity of criminal material.
However, she said the user who uploaded it had then used a different AI tool, not made by xAI, to create a Class A image – the most serious category.
"We are extremely concerned about the ease and speed with which people can apparently generate photo-realistic child sexual abuse material (CSAM)," she said.
The charity, which aims to remove child sexual abuse material from the web, operates a hotline where suspected CSAM can be reported, and employs analysts who assess the legality and severity of that material.
Its analysts found the material on the dark web – the images were not found on the social media platform X.
X and xAI were previously contacted by Ofcom, following reports that Grok could be used to make "sexualised images of children" and to undress women.
The BBC has seen several examples on the social media platform X of people asking the chatbot to alter real photos to make women appear in bikinis without their consent, as well as putting them in sexual situations.
The IWF said it had received reports of such images on X, but these had so far not been assessed as meeting the legal definition of CSAM.
In a previous statement, X said: "We take action against illegal content on X, including CSAM, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.
"Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."
