cross-posted from: https://news.abolish.capital/post/16855

Elon Musk is facing calls for legal accountability after Grok, the AI chatbot built into his X social media platform, produced sexually suggestive images of children.

Politico reported on Friday that the Paris prosecutor’s office in France is opening an investigation into X after Grok, following prompts from users, created deepfake photographs of both adult women and underage girls that removed their clothes and replaced them with bikinis.

Politico added that the investigation into X over the images will “bolster” an ongoing investigation launched by French prosecutors last year into Grok’s dissemination of Holocaust denial propaganda.

France is not the only government putting pressure on Musk, as TechCrunch reported on Friday that India’s information technology ministry has given X 72 hours to restrict users’ ability to generate content deemed “obscene, pornographic, vulgar, indecent, sexually explicit, pedophilic, or otherwise prohibited under law.”

Failure to comply with this order, the ministry warned, could lead to the government ending X’s legal immunity from being sued over user-generated content.

In an interview with Indian cable news network CNBC TV18, cybersecurity expert Ritesh Bhatia argued that legal liability for the images generated by Grok should lie not just with the users whose prompts produced them but also with the creators of the chatbot itself.

“When a platform like Grok even allows such prompts to be executed, the responsibility squarely lies with the intermediary,” said Bhatia. “Technology is not neutral when it follows harmful commands. If a system can be instructed to violate dignity, the failure is not human behavior alone—it is design, governance, and ethical neglect. Creators of Grok need to take immediate action.”

Corey Rayburn Yung, a professor at the University of Kansas School of Law, argued on Bluesky that it was “unprecedented” for a digital platform to give “users a tool to actively create” child sexual abuse material (CSAM).

“There are no other instances of a major company affirmatively facilitating the production of child pornography,” Yung emphasized. “Treating this as the inevitable result of generative AI and social media is a harrowing mistake.”

Andy Craig, a fellow at the Institute for Humane Studies, said that US states should use their powers to investigate X over Grok’s generation of CSAM, given that it is unlikely the federal government under President Donald Trump will do so.

“Every state has its equivalent laws about this stuff,” Craig explained. “Musk is not cloaked in some federal immunity just because he’s off-again/on-again buddies with Trump.”

Grok first gained the ability to generate sexual content this past summer when Musk introduced a new “spicy mode” for the chatbot that was immediately used to generate deepfake nude photos of celebrities.

Weeks before this, Grok began calling itself “MechaHitler” after Musk ordered his team to make tweaks to the chatbot to make it more “politically incorrect.”


From Common Dreams via This RSS Feed.

  • neroiscariot [none/use name]@hexbear.net
    2 months ago

You’d think a functional facsimile of a government would make Elon take this offline until it passes the basic checks most other LLMs seem to manage…but nah, let the CSAM machine go brrr

  • SorosFootSoldier [he/him, they/them]@hexbear.net
    2 months ago

If people out there think a cabal of rich pedophiles blackmailing each other can’t exist, point to this. Look at what normal people get up to when they can undress any woman: A LOT of men instantly try to make nudes of children.

  • darkcalling [comrade/them, she/her]@hexbear.net
    2 months ago

They’re going to keep doing stuff like this, and they’re going to use the public outrage to ram through incredible levels of control over the internet. They’ll eventually get around to making these companies stop the machines from doing this, but first they’ll use the outrage to make platforms responsible for all user content, forcing the entire internet to be tightly censored to zionist, western imperialist dogma as part of this. The current regimes at Reddit, Facebook, etc., where they play whack-a-mole with anti-imperialist content, will at that point look like a bright but quickly vanishing past compared to what this foreboding future holds.

It would be so easy to just selectively punish the bad actors and narrowly target them, just like Australia could have simply forced social media companies to stop doing abusive stuff in their country, hand over algorithm details for review, that kind of thing. But instead, why pass up an opportunity to card all internet users under the guise of keeping children off social media?

In 5 years people in the west will have whiplash at how fast it all happened, much like 9/11 and the security-state expansion that followed. There will be some truly big outrage moment over this, authentic or ginned up, and they’ll just bulldoze over any opposition and use it as the perfect pretext.

  • Carl [he/him]@hexbear.net
    2 months ago

The threat of government punishment is the only thing keeping any corporation in any industry in line, but in a brand-new, fast-moving industry like generative AI it’s especially important. If the whole industry watches the first big public example of this go unpunished, they’re going to assume that means no consequences ever, all the safeties are going to come off, and the plagiarism machines will become CP generation machines.

    So look forward to that in a couple years.