🫖
highlight to read

TEE Talk is a private LLM chat. Your words are encrypted on your device and decrypted only inside a hardware-isolated Trusted Execution Environment. No other human can read them.

How it works

When you connect, your client exchanges cryptographic keys with a remote server, verifies that the server is running exactly the open-source code published here, on the hardware it claims to be, and establishes a secure channel. Nothing is stored between sessions. When it seems right, the AI may draw on old wisdom to help you think.
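The connect-time check can be sketched as: verify the attested measurement, then agree on a shared secret for the channel. This is an illustrative Python sketch, not the actual tee-talk Rust protocol; real SEV-SNP attestation involves an AMD-signed report, and real key exchange uses a vetted library, but a hash comparison and textbook Diffie-Hellman capture the shape of it.

```python
import hashlib
import secrets

# A well-known public Diffie-Hellman group (RFC 3526, 1536-bit MODP).
# Real clients use audited crypto libraries, never hand-rolled DH.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF05"
    "98DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB"
    "9ED529077096966D670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF",
    16,
)
G = 2

def verify_measurement(reported: str, expected: str) -> bool:
    """Refuse to proceed unless the TEE reports the published binary's hash."""
    return secrets.compare_digest(reported, expected)

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

# Client and TEE each generate a keypair and exchange the public halves.
c_priv, c_pub = dh_keypair()
s_priv, s_pub = dh_keypair()

# Both sides derive the same shared secret; chat traffic is then
# encrypted under a key derived from it.
c_secret = hashlib.sha256(str(pow(s_pub, c_priv, P)).encode()).hexdigest()
s_secret = hashlib.sha256(str(pow(c_pub, s_priv, P)).encode()).hexdigest()
assert c_secret == s_secret
```

If `verify_measurement` fails, an honest client aborts before any message is sent.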

[Diagram: your device ↔ hardware TEE. Matching keys are exchanged (🔑 🔑). Each turn, your device re-sends the full conversation history ("what is love?", "baby don't hurt me", "don't hurt me", "no more"); the AI reads it from scratch, responds, and the TEE's copy is wiped. Your device keeps the full conversation; the TEE keeps nothing (no thoughts head empty 🐣). When you log off, the session ends and memory is wiped (🔑 → 💨). No human can read your messages: ✗ your ISP · ✗ the cloud provider · ✗ the server operator. Encryption enforced by your device; isolation enforced by CPU hardware.]
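The stateless turn loop in the diagram can be sketched in a few lines. This is a toy illustration, not the real client: the client keeps the transcript, the TEE reads everything from scratch each turn and holds nothing afterward, and `respond` is a stand-in for the model.

```python
def respond(history):
    # Placeholder for the LLM running inside the TEE.
    return f"reply #{len(history) // 2 + 1}"

def tee_turn(full_history):
    """One request: read the whole transcript, answer, forget."""
    reply = respond(full_history)
    # Server-side state is wiped here; nothing survives the return.
    return reply

client_history = []
for msg in ["what is love?", "don't hurt me"]:
    client_history.append(("user", msg))
    # The client re-sends the FULL history every turn.
    reply = tee_turn(list(client_history))
    client_history.append(("assistant", reply))
```

This is why the TEE needs no database: the only durable copy of the conversation lives on your device.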

How to talk

By web UI (desktop only)

Download for your platform from the latest release, extract, and run. The app opens a web UI in your browser, end-to-end encrypted to the TEE. The first message may take a moment — the model loads on demand.

macOS

macOS will block the app because it isn't signed. Open System Settings → Privacy & Security, find the message about "tee-talk" being blocked, and click Allow Anyway. Or run in Terminal:

xattr -d com.apple.quarantine ~/Downloads/tee-talk
chmod +x ~/Downloads/tee-talk
~/Downloads/tee-talk

Windows

Click More info on the SmartScreen warning, then Run anyway.

Linux

chmod +x tee-talk
./tee-talk

By terminal

git clone https://github.com/reeeneeee/tee-talk.git
cd tee-talk
cargo run -- connect -a 34.60.196.117:9999

Requires Rust. Add --trust-server to skip attestation (still encrypted).

By text message

  1. Text HELLO or START to +1 (970) 717-2021 (details & disclosures)
  2. You'll receive a welcome message confirming your opt-in.
  3. Start the conversation.

SMS is not end-to-end encrypted — your carrier can see messages. For full encryption, use the web UI or terminal client above.

How to verify

The binary running inside the TEE is built reproducibly from public source code. You can rebuild it on any x86_64 Linux machine with Docker and check that the hardware-signed attestation report matches your build. (ARM/Apple Silicon won't work — the build uses target-cpu=x86-64-v3.)

gpg -d readings.txt.gpg > readings.txt
./scripts/build.sh
EXPECTED_BINARY_HASH=65f221...77d \
  cargo run -- connect -a 34.60.196.117:9999

Full details in VERIFY.md.
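The core of the check is just a file hash: rebuild the binary, hash it, and compare against the measurement in the hardware-signed attestation report. A minimal sketch, with the path and hash as placeholders (see VERIFY.md for the real procedure):

```python
import hashlib

def binary_hash(path: str) -> str:
    """SHA-256 of a file, read in chunks so large binaries are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the measurement the TEE attests to, e.g.:
#   expected = binary_hash("target/release/tee-talk")
#   then pass it as EXPECTED_BINARY_HASH when connecting.
```

Because the build is reproducible, anyone running it gets the same bytes and therefore the same hash.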

Frequently asked questions

What is a Trusted Execution Environment?

A TEE is a hardware-isolated area of a processor. Code and data inside it are protected from everything outside, including the operating system, the cloud provider, and anyone with physical access to the machine. TEE Talk uses AMD SEV-SNP, which encrypts all memory at the CPU level.

Can whoever's running the server read my messages?

No. Your messages are encrypted on your device and only decrypted inside the TEE. The server operator cannot access the TEE's memory; that's enforced by the hardware.

Can't you connect to the VM and read the logs?

I can connect via the serial port and run journalctl, which shows operational logs but not what was said. The binary is open source; you can read the code and confirm it doesn't log your messages. The hardware attestation proves the VM is running exactly the published code, byte for byte. If the binary were modified to log conversations, the attestation hash would change and your client would refuse to connect.
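The refusal logic can be sketched as follows. This is illustrative only: in reality the measurement comes from the CPU's launch attestation and is signed by AMD, but the principle is the same, any change to the binary changes the hash and the client aborts.

```python
import hashlib

# Hash of the published, reproducibly built binary (toy value here).
PUBLISHED_HASH = hashlib.sha256(b"published binary bytes").hexdigest()

def attested_measurement(binary_bytes: bytes) -> str:
    # Stand-in for the hardware's launch measurement of the loaded code.
    return hashlib.sha256(binary_bytes).hexdigest()

def connect(binary_bytes: bytes) -> bool:
    if attested_measurement(binary_bytes) != PUBLISHED_HASH:
        raise ConnectionError("attestation mismatch: refusing to connect")
    return True

assert connect(b"published binary bytes")
# A binary patched to log conversations measures differently,
# so connect() raises instead of opening a channel.
```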

How is this different from [big LLM]?

When you use [big LLM], your prompts are sent to their servers and processed in plain text. With TEE Talk, your prompts are encrypted end-to-end and processed inside hardware isolation. No one can read them.

What AI model is running?

Llama 3.2 3B, an open-weight model by Meta. It runs locally inside the TEE via Ollama. No data is sent to any third-party API.

What happens to my conversations?

Nothing is stored. When your session ends, everything disappears. There are no logs, no databases, no conversation history on the server.

Why should I trust this?

You don't have to. The binary running in the TEE is built reproducibly from public source code. You can rebuild it yourself and verify that the hardware-signed attestation matches.

Is the SMS option still encrypted?

No. SMS messages pass through your carrier in plain text. The AI still runs inside the TEE, but messages between you and the server are not end-to-end encrypted. For full privacy, use the desktop client or terminal.

What if my device is compromised?

Then all bets are off. A keylogger can read what you type before it's encrypted, and stolen keys let an attacker impersonate you. TEE Talk protects your messages in transit and during processing, but it can't protect a device you've already lost control of. This is true of every encrypted system.

Why is this so slow?

The model runs on a single confidential VM with no GPU. Hardware isolation constrains which resources the VM can use, so responses take longer than they would on a big cloud GPU cluster. A smarter model on faster hardware is on the roadmap.

How do you profit from this?

I don't. I pay GCP out of pocket to host the confidential VM. No other company gets paid from this project.