- There is a new AI assistant called Confer, created by the founder of Signal
- Like Signal, Confer encrypts chats so no one else can read them
- Unlike ChatGPT or Gemini, Confer does not collect or store your data for training, logging, or legal access purposes.
The man who brought private messaging to the mainstream now wants to do the same for AI. Signal creator Moxie Marlinspike has launched a new AI assistant called Confer, built around similar privacy principles.
Conversations with Confer cannot be read by anyone else, not even its server administrators. The platform encrypts every part of the user interaction by default and runs inside what’s called a trusted execution environment, so sensitive user data never leaves that encrypted bubble. Nothing is logged for review, used for training, or sold to other companies. That makes Confer an exception: handing over data is usually the implicit price of a free AI chatbot.
But with consumer trust in AI privacy already under strain, the appeal is clear. People notice that what they say to these systems doesn’t always stay private. Last year, a court order forced OpenAI to retain all ChatGPT user logs, even deleted ones, for potential legal discovery, and ChatGPT chats even appeared in Google search results for a time, thanks to accidentally public links. There has also been an outcry over contractors reviewing anonymized chatbot transcripts containing personal health information.
Confer data is encrypted before it ever reaches the server, using keys that are stored only on the user’s device and are never uploaded or shared. Confer still supports syncing chats between devices, but thanks to its cryptographic design, even Confer’s creators can’t unlock them. It’s ChatGPT with Signal-style security.
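Confer hasn’t published its client code in this article, but the general pattern, client-side encryption with a key that never leaves the device, is well established. Here is a minimal, hypothetical sketch in Python using the cryptography package: a key is derived from a passphrase on the device, and only the ciphertext (plus a non-secret salt) would ever be sent to a server. The names and parameters are illustrative, not Confer’s actual implementation.

```python
import os, base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Turn a user passphrase into a 32-byte encryption key, on-device only.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

salt = os.urandom(16)  # stored alongside the ciphertext; not secret
key = derive_key(b"correct horse battery staple", salt)

# Encrypt the chat message locally; only ciphertext + salt would go to the server.
ciphertext = Fernet(key).encrypt(b"a private question for the assistant")

# Without the key, which never leaves the device, the server operator
# sees only opaque ciphertext.
print(Fernet(key).decrypt(ciphertext).decode())
```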
Private AI
Confer’s design goes further than most privacy-focused products by offering a feature called remote attestation, which lets any user verify exactly what code is running on Confer’s servers. The platform publishes its entire software stack and digitally signs each version.
This may not matter to every user. But for developers, organizations, and watchdogs trying to assess how their data is handled, it’s a radical level of transparency that could give worried fans of AI chatbots some breathing room.
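To make the idea concrete, here is a heavily simplified, hypothetical sketch of what an attestation check boils down to: the client compares the measurement (a hash) of the code the server claims to be running against the published value, then verifies a signature over it. In real TEE-based attestation the signature comes from the CPU hardware via a certificate chain rather than a single key, so treat the names and keys below as illustrative only.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Toy stand-ins: in practice the signing key belongs to the TEE hardware
# and the expected measurement is published alongside the signed release.
signing_key = Ed25519PrivateKey.generate()
published_measurement = hashlib.sha256(b"audited server build v1.2.3").digest()

# The server's attestation report: a measurement of what it is actually
# running, plus a signature so the claim can't be forged.
report = hashlib.sha256(b"audited server build v1.2.3").digest()
signature = signing_key.sign(report)

def verify_attestation(report: bytes, signature: bytes) -> bool:
    """Trust the server only if it proves it runs the published code."""
    if report != published_measurement:
        return False  # server is running something other than the audited build
    try:
        signing_key.public_key().verify(signature, report)
        return True
    except InvalidSignature:
        return False

print(verify_attestation(report, signature))  # True
```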
Not that other AI chatbots lack privacy settings. There are actually quite a few, even if users only go looking for them after they’ve already said something personal. ChatGPT, Gemini, and Meta AI all let you opt out of having your chats used for training, or delete your chat history outright. But the default state is data collection, and opting out is the user’s responsibility.
Confer flips that arrangement by making the most private configuration the default. And because the protection is built in rather than bolted on, it also highlights how reactive most privacy tools are. At the very least, that could raise awareness, and perhaps push consumers to demand more forgetful AI chatbots. Organizations such as schools and hospitals that are interested in AI might also be drawn to tools that guarantee privacy by design.