The new “Companions” feature is now available to Super Grok subscribers.
Elon Musk, owner of the AI company xAI, has unveiled a new feature called “AI Companions.” The feature, now available to paying Grok subscribers, includes an anime-style character with a “Not Safe for Work” (NSFW) mode, raising fresh concerns about the platform’s safety and direction.
It appears that after a chaotic week, xAI has adopted a “best defense is a good offense” strategy. On Monday, Musk announced on X that the new “Companions” feature had been activated for Super Grok subscribers, who pay $30 per month.
Grok’s AI Companions
The new Grok feature allows users to interact with AI avatars. Currently, two characters have been introduced:
- Ani, an anime character in a short black dress, who comes with an NSFW mode.
- Bad Rudy, a cartoonish creature resembling a red panda or fox.
A third character, Chad, is expected to be added soon.
The timing of this rollout is highly controversial. Just days earlier, the Grok chatbot made headlines for generating antisemitic content and praising Adolf Hitler, even referring to itself as “MechaHitler.” The crisis led xAI to delete some posts and temporarily disable the bot. Ultimately, the company issued a public apology, blaming the incident on a “technical bug in a code update.” Launching new characters—especially ones with NSFW features—only days after such a major content moderation failure is, at best, a risky move.
It is still unclear whether these “Companions” are merely alternate personas or avatars for Grok, or if they are intended to serve as emotional or romantic partners. This distinction is critical, as the AI relationship industry is a highly controversial and potentially dangerous space. Companies like Character.AI are already facing multiple lawsuits from parents claiming their bots harmed children—one case even alleging a bot encouraged a teen to kill their parents.
Recent research has also highlighted the significant risks of humans forming emotional dependencies on chatbots as “companions, confidants, or therapists.” Given Grok’s recent history of generating extremist and harmful content, xAI’s decision to introduce more characters—especially with potentially suggestive features—raises serious questions about the company’s priorities and its responsibility for user safety.