We are genuinely sorry to start your Saturday off on this note, but this is a business newsletter after all, and this is an important business story, so here goes…
What happened: Elon Musk’s AI company, xAI, said it had restricted usage of Grok to paid subscribers after facing days of criticism that the chatbot was being used to generate sexualized deepfake images on X, including of children.
European leaders threatened action against X over the deepfake images, and British officials said the company’s move to limit Grok’s image generation feature to paying customers was “not a solution” and “simply turns an AI feature that allows the creation of unlawful images into a premium service.”
Plus, X may not have actually paywalled the feature as it claimed — as of yesterday, reporters could still find multiple easy ways to generate sexualized deepfakes with free accounts.
Why it matters: Musk has pitched Grok as an AI model with fewer safety guardrails, and it certainly is that — but rather than making the model “maximally truth-seeking,” as Musk has claimed, removing those guardrails has made it a handy tool for extremely unpleasant (and, in some cases, criminal) behaviour.
Laws concerning deepfakes are still patchy. In Canada, sexualized deepfake images of minors are banned under child pornography laws, but when adults are depicted it’s more of a grey area.
Our take: This is one of the more disgusting uses of AI, but unfortunately one that was entirely predictable, and a company that gave its chatbot a “spicy mode” deserves its share of the blame. We expect laws against this sort of activity to be among the first and most widely adopted regulations of the technology. —TS
