In light of some of the disturbing, to put it mildly, developments out of the wreckage of what was Twitter… I feel compelled to correct a previous post I wrote here.
With the benefit of hindsight, I believe I was wrong to write the post which I published back in September ’25 – Exploring safe parenting in the social media era – because the goal of that item was to try and elaborate on some of the dangers inherent in putting your kids onto social media and help you navigate them.
It turns out, somewhat chillingly, that there’s a whole new danger out there and oh boy is it a big one.
You’ve heard of AI by now, of course, unless you’ve been living under a seriously big rock. (Which, if you have… can I join you? Please? I’ve had enough of this AI crap!) Well, xAI’s system – “Grok” – has turned out to be one of the worst things that could’ve happened to your kids’ online safety by a serious margin.
The Financial Times recently lambasted X (formerly Twitter) with the frankly incredible headline “Who’s who at X, the deepfake porn site formerly known as Twitter“. You might be wondering why, or think that this is a bit of an exaggeration.
Alas, it is not only thoroughly deserved but in my opinion a bit of an understatement. Let me explain why.
Why is this “Grok” thing so bad?
First, a bit of background. Grok is a generative AI (“GenAI”) system, which uses a number of other systems to create output in response to prompts. If you want to read about it in more detail, there’s always its Wikipedia page.
What’s horrifying is that thanks to X, absolute reprobates are were able to instantaneously create deepfaked explicit images of anyone. To do so, you’d need nothing more than a perfectly innocent image of that person – say, you standing fully clothed in front of a tree somewhere – and a request to Grok to make modifications.
Creating deepfakes in this way is already illegal in the UK, and I think also the US – though I am not a lawyer.
Sadly, things just keep getting worse.
How could that get worse?
Are you sure you want to know the answer to that? Really? Last chance to stop reading and go do something else, and remain – evidently – blissfully ignorant of this shitshow?
No?
Ok then. Turns out that the asshats using Grok to “nudify” people weren’t just using it on adults. Yes, you read that right. Grok was quite happily churning out imagery which, in the UK at least, would land you in serious legal trouble (to put it mildly). Don’t believe me? Read the news.
Oh… sh*t.
Quite.
What’s even more depressing is the platform’s response to this whole abhorrent affair. To begin with, X moved the feature behind a paywall – so that the only way you could create this kind of content using Grok was if you paid for it. That’s not better!
The position has since been walked back further, with xAI saying that they’ve made changes to the system to prevent Grok from creating this kind of material at all regardless of whether or not you’re paying to use the platform.
This seems all well and good… until you realise that these AI systems, including their often AI-based guardrails / security guards, can’t distinguish between the instructions given by their creators – the “system prompt” which dictates how the model ought to respond – and any overt/implied instructions delivered by the user as part of their instructions to the system.
If you’re interested in learning more about how GenAI can’t tell the difference between these “baseline rules” and what it’s being asked, and how easy it can be to subvert them, then check out Lakera’s Gandalf.
Grok isn’t alone, it’s just worst.
It would be wrong to single out Grok as the sole means of creating this type of abusive content, because it’s not the sole means of creating this type of abusive content. Other AI models exist which can do the exact same thing, and there’s non-AI ways of doing it also…
Grok is simply the worst one for it in my opinion because unlike other systems with the same capabilities, asking Grok to create “nudified” images of someone you knew by tagging it on X meant that the Grok account replied in public with the requested content. (You could also do the same in private, via DMs or whatever they’re called these days…)
So, it’s not just Grok that can make this shit… but it is, as far as I know, the only one which will create it and automatically post it to social media. Yuck.
That can’t be legal!
That’s the thing; it’s not, at least not in the UK. It was illegal to begin with, and in response to the ongoing scandal and the platform’s frankly pathetic attitude towards the criticism it has rightly received the UK Government is drafting additional legislation to address the issue.
Regulators across Europe and the world are also reviewing the situation and taking varying measures.
There is, however, a US-specific angle which I have yet to see covered anywhere which I find interesting. Let’s talk Section 230.
Section 230?
Specifically, I am referring to Section 230 of the Communications Decency Act in the US. Now, obvious caveat here: I am not a lawyer in any jurisdiction.
However, Section 230 is part of the US legal code that tech platforms – including social media – have long relied on to absolve themselves of any legal liability for what their users put onto their platforms. They’re not responsible for what their users put onto the platform, so long as they act in good faith to clean up anything which is “objectionable”. They are simply providing a means for you to arrive at that content.
What sticks in my mind, as far as §230 goes, is the idea that in this case I’m not sure this would fly.
If we were talking about a user on X taking an offline version of Grok, interacting with it in some way that wasn’t tagging it publicly in a feed, or had used a completely different GenAI system to create objectionable material before they themselves posted it to X then maybe the situation would be different.
This isn’t the case. People asked Grok to create objectionable content, and then the Grok account – owned, controlled, and operated by xAI / X – posted it to the feed. In public.
Publishing precedents…?
In addition, since Grok does have mechanisms which dictates what it can and cannot do – including what content it can and cannot generate, and presumably what it can and cannot post – does that not meet the precedent for what constitutes “publishing” established prior?
“Publication involves reviewing, editing, and deciding whether to publish or to withdraw from publication third-party content.” (Barnes v Yahoo!, Inc.)
The system prompt and any additional guardrails are involved in reviewing and deciding whether or not a given piece of content is produced by the GenAI system. Is that not inherently editorial in nature? Would that not, from a legal perspective, disqualify Grok in this instance from the protections afforded it by §230?
I don’t have a definitive answer to this; as I said, I’m not a lawyer. But I can’t see how you could reasonably claim that you’re “being held liable for someone else’s content” when it came direct from Grok. Did someone else, who isn’t Grok, prompt it to do so? Sure they did, but you couldn’t wriggle out of culpability for setting your neighbours woodshed on fire by saying it was your mates who prompted you into striking the match… right?
So, about that other post…
I’ve gone off on a bit of a tangent here, but to bring it back to my previous post from September where I was suggesting that there are ways to “safely” have your kids interact with / be present on social media.
I firmly believe I was wrong.
For the record, I barely use any social media myself now. I play the games on LinkedIn every morning to get the ol’ grey matter going, that’s about it. One thing I sure as hell don’t do is put anything relating to my kid there. No pictures, no details, no anything.
After everything that’s happened recently with Grok… I’m more convinced than ever that I’ve made the right decision there. Perhaps it’s a decision you should make too.
— TTFN

