At a Glance
- Signal founder Moxie Marlinspike has built Confer, an end-to-end encrypted, open-source AI chatbot.
- All prompts and responses are encrypted on-device before reaching servers, blocking company access.
- Confer uses passkeys and confidential computing to run the AI inside a sealed hardware environment.
- Why it matters: Users can confide in an AI without risking their thoughts becoming ad-targeting data.
The founder of Signal has been quietly developing a fully end-to-end encrypted, open-source AI chatbot engineered to keep every user conversation secret. In a new series of blog posts, Moxie Marlinspike explains that while he values large language models, he is alarmed by the privacy gaps in today’s popular platforms.
Marlinspike contends that a chatbot’s interface should mirror its underlying privacy guarantees. Signal feels like a private one-on-one chat because it is one; ChatGPT and Claude feel like intimate journals even though the companies behind them can read inputs and fold them into future training.
The Privacy Problem With Today’s AI
Marlinspike’s central point: if software feels like a safe space, it should actually be one. He notes that LLMs are the first major medium that “actively invites confession,” encouraging users to reveal how their minds work: doubts, biases, and all. That trove of personal patterns could later be weaponized by advertisers to sell products or steer behavior.

To break that cycle, he created Confer, a service where, in his words, “you can explore ideas without your own thoughts potentially conspiring against you someday.”
How Confer Works Under the Hood
Confer encrypts prompts on a user’s phone or computer before any data leaves the device. Encrypted text travels to Confer’s servers and is decrypted only inside a secure data environment to generate a reply.
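Marlinspike’s posts don’t include client code, but the on-device step works roughly like the minimal sketch below, written in Python with the `cryptography` package. The key handling, the nonce scheme, and the `confer-prompt`/`confer-reply` labels are illustrative assumptions, not Confer’s actual wire format.

```python
# Illustrative only: on-device encryption of a prompt before it leaves the
# client. Assumes a 256-bit key already held on the device (key derivation
# is sketched in the next list); not Confer's actual implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_prompt(key: bytes, prompt: str) -> tuple[bytes, bytes]:
    """Encrypt a prompt with AES-256-GCM; returns (nonce, ciphertext)."""
    aead = AESGCM(key)
    nonce = os.urandom(12)  # 96-bit nonce, unique per message
    ciphertext = aead.encrypt(nonce, prompt.encode("utf-8"), b"confer-prompt")
    return nonce, ciphertext

def decrypt_reply(key: bytes, nonce: bytes, ciphertext: bytes) -> str:
    """Decrypt the enclave's reply once it arrives back on the device."""
    aead = AESGCM(key)
    return aead.decrypt(nonce, ciphertext, b"confer-reply").decode("utf-8")
```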
Key technical choices include:
- Passkeys, not passwords: Face ID, Touch ID, or a device PIN unlocks a hardware-bound passkey, from which the encryption keys are derived.
- Confidential computing: Code runs inside a Trusted Execution Environment (TEE). The host machine supplies CPU, memory, and power but cannot peek at the TEE’s memory or execution state.
- Attestation: The hardware produces cryptographic proof so a user’s device can confirm the environment is untampered. (A simplified sketch of the key-derivation and attestation steps follows this list.)
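The announcement doesn’t spell out the passkey flow or the attestation format, but the sketch below shows the general shape of both steps. The passkey PRF stand-in, the HKDF label, and the single Ed25519 attestation key are assumptions made for illustration; real TEE attestation (AMD SEV-SNP or Intel TDX, for example) involves a vendor-signed measurement report and a certificate chain rather than one raw key.

```python
# Illustrative only. Step 1: stretch a device-bound secret (a stand-in for
# a passkey's PRF output) into an encryption key. Step 2: accept the enclave
# only if a signature over its "measurement" checks out against a known
# public key; real attestation verifies a vendor-signed report of the exact
# code loaded into the TEE, so this single-key check is a simplification.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(prf_output: bytes) -> bytes:
    """Derive a 256-bit key from passkey PRF material via HKDF-SHA256."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"confer-e2e-key-v1",  # hypothetical domain-separation label
    ).derive(prf_output)

def verify_attestation(pubkey: bytes, measurement: bytes, sig: bytes) -> bool:
    """Trust the enclave only if its measurement carries a valid signature."""
    try:
        Ed25519PublicKey.from_public_bytes(pubkey).verify(sig, measurement)
        return True
    except InvalidSignature:
        return False
```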
The LLM’s inference happens within this sealed virtual machine. The resulting response is encrypted and routed back to the user, keeping the entire loop away from what Marlinspike calls “a data lake specifically designed for extracting meaning and context.”
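Put together, the loop inside the enclave would look roughly like this, where `run_model` is a hypothetical placeholder for inference and the shared key is assumed to have been established with the attested enclave beforehand:

```python
# Illustrative only: the decrypt -> infer -> encrypt loop as it would run
# inside the enclave. The host machine sees ciphertext on the way in and
# ciphertext on the way out; the plaintext exists only inside the TEE.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def run_model(prompt: str) -> str:
    """Hypothetical placeholder for LLM inference inside the enclave."""
    return f"(model reply to {prompt!r})"

def handle_request(key: bytes, nonce: bytes,
                   ciphertext: bytes) -> tuple[bytes, bytes]:
    """Decrypt a prompt, generate a reply, and re-encrypt it for the user."""
    aead = AESGCM(key)
    prompt = aead.decrypt(nonce, ciphertext, b"confer-prompt").decode("utf-8")
    reply = run_model(prompt)
    reply_nonce = os.urandom(12)
    sealed = aead.encrypt(reply_nonce, reply.encode("utf-8"), b"confer-reply")
    return reply_nonce, sealed
```

In this picture, the host machine only ever handles nonce-and-ciphertext pairs, which is exactly what keeps the conversation out of that “data lake.”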
Why This Builds on Signal’s Legacy
Signal launched in 2014 with similar open-source, privacy-first DNA. Its encrypted protocol was later adopted by Meta’s WhatsApp, showing that major tech players can embrace such designs. Confer’s architecture could, in theory, follow the same path.
By open-sourcing the project, Marlinspike hopes to set a new baseline: AI that feels private should actually be private, verified by code anyone can audit.
What Users Gain
- True end-to-end encryption for both prompts and model answers
- No storage of readable chat logs on company servers
- Cryptographic proof that the AI is running inside a protected enclave
- Freedom to brainstorm, confess, or experiment without feeding an ad profile
What Comes Next
Confer’s repository is public, allowing developers to inspect the implementation and propose improvements. If uptake grows, cloud providers may face pressure to offer confidential-computing hardware at scale, potentially reshaping how commercial AI services handle sensitive data.
For now, Marlinspike’s experiment stands as proof that privacy-preserving AI is technically viable, and that users don’t have to choose between helpful chatbots and control over their inner thoughts.
