- Grok conversations shared by users have been found indexed by Google
- The interactions, though private, became accessible to anyone online
- The problem arose because Grok's share button did not add noindex tags to keep the pages out of search engines
If you have spent time talking to Grok, your conversations could be visible with a simple Google search, as first revealed in a Forbes report. More than 370,000 Grok chats have been indexed and made available on Google, without users' knowledge or authorization, after they used Grok's share button.
The unique URL created by the button does not mark the page as something Google should ignore, which leaves it publicly visible with little effort.
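For context, keeping a page out of search results usually comes down to a robots meta tag, an X-Robots-Tag HTTP header, or both. Below is a minimal sketch of the missing safeguard, assuming a simple Flask-style share endpoint; the route and page markup are hypothetical, not Grok's actual code.

```python
# Hypothetical share endpoint showing the noindex directives a shared-chat
# page could send. Only the directives themselves reflect how search-engine
# exclusion actually works; everything else is an illustrative assumption.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/share/<chat_id>")
def shared_chat(chat_id):
    # A robots meta tag in the page tells crawlers not to index it.
    html = (
        "<html><head>"
        '<meta name="robots" content="noindex, nofollow">'
        f"</head><body>Shared chat {chat_id}</body></html>"
    )
    resp = make_response(html)
    # The same directive as an HTTP header covers non-HTML responses
    # and crawlers that never parse the body.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp
```

Either directive alone is enough for Google to drop the page from its index; shipping shareable URLs with neither is what left these conversations one search query away.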
Passwords, private health problems, and relationship drama fill the public conversations. Even more disturbing questions to Grok about manufacturing drugs and planning murders also appear. Grok's transcripts are technically anonymized, but if identifying details are present, people could work out who raised petty grievances or criminal queries. These are not exactly the kinds of subjects you want linked to your name.
Unlike a screenshot or a private message, these links have no expiration or built-in access control. Once live, they stay live. It's more than a technical problem; it erodes trust in AI. If people use chatbots as ersatz therapists or romantic partners, they don't want the conversation leaking. Finding your deepest thoughts alongside recipe blogs in search results could put you off the technology forever.
No privacy with AI chats
So how do you protect yourself? First, stop using the "Share" function unless you are completely comfortable with the conversation being public. If you have already shared a chat and regret it, you can try to find the link and request its removal from Google using their content removal tool. But it's a cumbersome process, and there's no guarantee the page will disappear immediately.
If you talk to Grok through the X platform, you should also adjust your privacy settings. Revoking permission to use your messages for model training may give you more protection. It's less certain, but the rush to ship AI products has made many privacy protections vaguer than you might think.
If this problem seems familiar, that's because it is only the latest example of AI chatbot platforms fumbling user privacy while encouraging people to share individual conversations. OpenAI recently had to walk back an "experiment" in which shared ChatGPT conversations began appearing in Google's results. Meta faced its own backlash this summer when people discovered that their discussions with the Meta AI chatbot could show up in the app's Discover feed.
Conversations with chatbots can read more like diary entries than social media posts. And if an app's default behavior turns them into searchable content, users will push back, at least until the next time. As with Gmail ads scanning your inbox or Facebook apps scraping your friends list, the impulse is always to apologize after the privacy violation.
The best case is that Grok and others fix this quickly. But AI chatbot users should probably assume that anything they share could eventually be read by someone else. As with so many other supposedly private digital spaces, there are far more holes than anyone can see. And perhaps don't treat Grok as a trusted therapist.