// Open Source · 100% Local · No Cloud

Your AI Memory. Your Machine. Your Call.

When you switch AI services, you lose months of accumulated context. Every project. Every preference. Every decision. Gone.

OwnYourContext fixes that — entirely on your machine.

↗ View on GitHub
→ See How It Works
489 Conversations Tested
98% Success Rate
5/5 Recall Validated
0 Cloud Calls Made
// The Problem

50 Million Tokens.
200K Window.

The Lock-In Nobody Talks About

Your ChatGPT export is 400–500MB of JSON — approximately 50 million tokens of conversation history. Claude's context window is 200,000 tokens. You can't upload 50 million tokens into a 200K window. The math doesn't work. So you start over. Every time.
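The scale mismatch is easy to check. A back-of-the-envelope sketch, using the page's own figures (the ~50 million token estimate comes from the export size quoted above):

```python
# Rough scale check: how many 200K-token context windows would the
# raw export need? Figures are the page's own estimates.
EXPORT_TOKENS = 50_000_000   # ~50M tokens in a 400-500MB ChatGPT export
CONTEXT_WINDOW = 200_000     # Claude's context window

windows_needed = EXPORT_TOKENS // CONTEXT_WINDOW
print(windows_needed)  # 250 full context windows just to hold the raw history
```

250 windows' worth of history, and a chat can use exactly one. Hence compression, not truncation.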

The Fix That Actually Works

OwnYourContext compresses your conversation history into a context window that fits. A local AI model running on your machine summarizes and categorizes every conversation. 489 conversations become 10 clean markdown files. Upload to Claude Projects. Context restored.

// How It Works

Four Steps.
One Afternoon.

01

Upload

Drop your ChatGPT export zip. The full, unmodified file. 500MB is fine.
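Under the hood, the upload step amounts to reading the conversation list straight out of the archive. A minimal sketch, assuming the export zip contains a `conversations.json` at its root (the standard layout of ChatGPT data exports) — `load_conversations` is an illustrative name, not the tool's actual API:

```python
import json
import zipfile

def load_conversations(export_zip: str) -> list:
    """Read conversations.json out of a ChatGPT export zip without
    unpacking the whole 500MB archive to disk."""
    with zipfile.ZipFile(export_zip) as zf:
        with zf.open("conversations.json") as f:
            return json.load(f)

# conversations = load_conversations("chatgpt-export.zip")
# print(f"Parsed {len(conversations)} conversations.")
```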

02

Analyze

Local Ollama + Llama 3.2 reads every conversation on your machine. No API calls. Nothing leaves.
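Each summarization is a POST to Ollama's local HTTP endpoint and nothing else. A sketch of what one such call looks like, using Ollama's `/api/generate` endpoint on its default port; the prompt wording and the `summarize` helper are illustrative assumptions, not the tool's actual code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(conversation_text: str) -> dict:
    # One summarization request; "llama3.2" assumes the model
    # has already been pulled locally (ollama pull llama3.2).
    return {
        "model": "llama3.2",
        "prompt": "Summarize this conversation in two sentences:\n\n" + conversation_text,
        "stream": False,
    }

def summarize(conversation_text: str) -> str:
    data = json.dumps(build_request(conversation_text)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:  # request never leaves localhost
        return json.loads(resp.read())["response"]
```

The destination is hard-wired to `localhost:11434` — there is no code path that could send a conversation to a remote API.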

03

Review

Auto-classified into topic buckets. Merge, rename, or reorganize before export.
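Merging is simple set operations on the topic buckets. A sketch of the idea, assuming buckets are held as a topic-to-summaries mapping — `merge_buckets` is a hypothetical helper, not the tool's actual API:

```python
def merge_buckets(buckets: dict[str, list], src: str, dst: str) -> dict[str, list]:
    """Fold one topic bucket into another before export (the Review step)."""
    merged = {topic: list(items) for topic, items in buckets.items()}
    merged.setdefault(dst, []).extend(merged.pop(src, []))
    return merged
```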

04

Export

Clean markdown files, one per topic. Ready for Claude Projects or any LLM service.
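The export step is plain file writing: one markdown file per topic bucket. A minimal sketch under assumed names (`export_topics` and the filename slug scheme are illustrative, not the tool's actual output format):

```python
from pathlib import Path

def export_topics(topics: dict[str, list[str]], out_dir: str) -> list[Path]:
    """Write one markdown file per topic: a heading plus each summary."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for topic, summaries in topics.items():
        slug = topic.lower().replace(" & ", "-").replace(" ", "-")
        body = f"# {topic}\n\n" + "\n\n".join(f"- {s}" for s in summaries)
        path = out / f"{slug}.md"
        path.write_text(body + "\n")
        written.append(path)
    return written
```

Plain markdown means the output works anywhere: Claude Projects, a local RAG setup, or just grep.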

$ python app.py
Extracting conversations... Parsed 489 conversations.
🧹 Cleaned up temporary files.
✅ Analysis complete — 481/489 conversations summarized

Detected Topics:
  Work & Career: 169 conversations
  Technical & Coding: 138 conversations
  Research & Learning: 32 conversations
  ...

✅ Export complete — 10 files written
📁 Output: ~/Documents/LLMMigrator/output
// Privacy Architecture

Local First.
Not a Principle.
The Point.

Your conversation history contains things you wouldn't share with a third party. Health decisions. Financial discussions. Career strategy. Personal relationships. Every summarization call goes to localhost:11434. Nothing leaves your machine until you decide otherwise.

100% Local Processing · No Cloud API Calls · No Data Sharing · Open Source (MIT License) · Temp Files Auto-Deleted · You Control the Output

Get Notified
When v0.2 Ships

Conversation selection, richer summaries, and export improvements. Drop your email and we'll ping you once.

Power user? Star the repo on GitHub

// no accounts. no email lists. no tracking. ever.