“Small surface. Tested rules. Honest memory.”

I spend my days writing enterprise code — multi-system integrations, strict conventions, dozens of services and stakeholders that all need to be explained to the AI before it can be useful. I spend my nights on side projects, including a tablet game called Ultima Mobile Classic. Different stacks, different scales, same problem: every morning I sat down and re-explained everything. The architecture. The conventions. The bugs we’d already fixed. The names of the systems. The personality of each project. The AI started from zero every single session.

So I started writing it down. A CLAUDE.md file. Then it grew. Then it grew more. Eventually it was 2,000 lines, and the AI was ignoring half of it. I was spending more time managing context than shipping code — at work and at home.

I needed something better. Not a bigger instructions file — a smaller one. One that stayed small no matter how much I knew, and only handed the AI what it actually needed for the task at hand.

That’s what RunawayContext is. It was built in production, for production — large codebases, strict governance, multiple contributors — and the same tool now runs my weekend side projects. Same patterns, both scales.

View on GitHub →

what it does

RunawayContext is a persistent-memory system for AI coding agents. Instead of stuffing every fact about your project into one giant instructions file, it keeps your knowledge in a structured SQLite database on your machine and serves the AI a small, focused brief — usually under 3,000 tokens — at the start of every session.

The AI knows where to look up deeper details when it needs them. Your token bill drops. Your project’s institutional memory survives between sessions. And nothing leaves your machine.
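The retrieval model underneath is plain SQLite. As a minimal sketch of the idea — the table and column names here are assumptions for illustration, not RunawayContext’s actual schema — an FTS5-backed lookup lets the agent pull only the lessons relevant to the task at hand:

```python
import sqlite3

# Hypothetical schema; the real knowledge.db layout may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE lessons USING fts5(title, body)")
conn.executemany(
    "INSERT INTO lessons VALUES (?, ?)",
    [
        ("naming", "Services use the Acme.* prefix; never abbreviate."),
        ("retries", "All HTTP clients retry 3x with exponential backoff."),
    ],
)

# The agent asks for what it needs, when it needs it, instead of
# carrying every lesson in its always-loaded context.
rows = conn.execute(
    "SELECT title, body FROM lessons WHERE lessons MATCH ? ORDER BY rank",
    ("retry OR backoff",),
).fetchall()
print(rows[0][0])  # → retries
```

Full-text search ships with stock SQLite, which is what keeps the whole thing local and dependency-light.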

≤3K tokens always loaded, no matter how much you’ve taught it
70–95% token reduction vs. a traditional instructions file
0 B network egress by default — everything is local
575 passing tests, 86% code coverage

why a v3

Every version was driven by something I learned by actually using the previous one. Each iteration kept the same goal — persistent context, small surface, lower tokens — and tightened the screws on how that goal is enforced.

v1 (April 2026)
Proof of concept. A single SQLite file and a markdown brief. It worked — well enough that I started using it on multiple projects. But its discipline was just policy, and policy drifts.

v2 (May 2026)
Because v1 drifted in real-world use. The DB became authoritative; markdown files became views, auto-generated from it. Drift detection ran in the background. Better, but its architecture still relied on convention.

v3 (May 2026)
This release. Every claim has a named, machine-checkable test. 15 hard rules (HR-1 through HR-15), each with an enforced contract. A six-tier maturation curve so the system grows with the project. MCP server with 13 tools. Hash-chained audit log. Non-destructive migration from v1/v2 — nothing is lost. Loopholes are closed by construction.

what’s in it

The pieces that make persistent context actually work in practice.

Local-first SQLite: Two files: knowledge.db and sessions.db. FTS5 full-text search built in. Optional semantic retrieval via sqlite-vec. Zero network calls by default.
Auto-Generated Briefs: Project briefs capped at 150 lines and regenerated whenever knowledge changes. The AI’s always-loaded surface stays small — the database can be huge.
15 Hard Rules: Named contracts (HR-1 through HR-15), each with a machine-checkable test. No policies, no vibes — rules either pass or fail, and CI catches the failures.
MCP Server: 13-tool Model Context Protocol server for Claude Code, Cursor, and other MCP-compatible agents. Retrieval, logging, searching, and lifecycle operations.
Python Client + CLI: Use the Python Client class inside custom agents, or call the CLI from shell scripts. Or skip the binaries entirely and just use the markdown template.
Lesson Maturation: Six-state lifecycle for lessons: draft → reviewed → active. Explicit approval gates so AI-suggested lessons don’t silently rewrite your conventions.
Project-Tagged Writes: Guards prevent one project’s lessons from leaking into another’s context. Multi-user with author attribution and conflict resolution at T3+.
Audit Log: Hash-chained, append-only audit log (T4+). Every change to the knowledge base is signed and recoverable. Soft-delete architecture: nothing is ever truly gone.
Drift Detection: Stop hook + cron watcher catch when generated views grow past their cap, when policy and schema disagree, or when lessons stop being applied.
Session Transcripts: Conversation snapshots indexed for FTS5 search. Find the moment a decision was made, not just what was decided. Telemetry stays local.
Non-Destructive Migration: v1 and v2 installs upgrade cleanly. Nothing is lost. Kind undo path with full export and archival. If v3 isn’t for you, you can roll back.
AI-Native Install: Your AI clones the repo, installs dependencies, runs the contract tests, and reports the diagnostics. You paste one prompt; setup takes about 5 minutes.
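The “machine-checkable” framing above is concrete: a hard rule is just a test that passes or fails. As an illustrative sketch — the function name and file path here are hypothetical, not one of the project’s actual HR contracts — the 150-line brief cap could be enforced like this:

```python
from pathlib import Path

BRIEF_CAP = 150  # lines; mirrors the brief cap described above

def check_brief_cap(brief_path: str) -> None:
    """A hard-rule-style check: it either passes or raises. No policy drift."""
    lines = Path(brief_path).read_text().splitlines()
    assert len(lines) <= BRIEF_CAP, (
        f"brief is {len(lines)} lines; cap is {BRIEF_CAP}"
    )

# Demo against a throwaway file rather than a real install.
demo = Path("demo_brief.md")
demo.write_text("\n".join(f"- fact {i}" for i in range(42)))
check_brief_cap("demo_brief.md")
print("HR check passed")
demo.unlink()
```

Wired into CI, a check like this turns “briefs should stay small” from a convention into a contract.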

it grows with the project

You don’t need every feature on day one. RunawayContext has a six-tier maturation ladder so a solo project can start with just a markdown template and graduate up to enterprise federation only when the team actually needs it. Each tier has a real promotion gate — not a feature toggle, but an earned milestone.

T0 (Hello World, solo): Just the markdown template. No install. Good for trying the discipline before committing to the tooling.
T1 (Solo, 1 person): Full SQLite DB, FTS5 search, drift detection. No MCP server yet — the markdown brief is plenty for one project.
T2 (Solo Power, 1 person): MCP server with 13 tools, semantic search, telemetry, drafts inbox. The sweet spot for serious solo work.
T3 (Pair / Squad, 2–5 people): Author attribution, git-based export/import, conflict resolution. Knowledge survives team members coming and going.
T4 (Team, 5–20 people): Visibility ACLs, hash-chained audit log, garbage detection, promotion gates. The level where governance starts to matter.
T5 (Org / Enterprise, 20+ people): Federation across projects, SSO bindings, OpenTelemetry export, fine-grained grants. Terminal tier.
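The hash-chained audit log that arrives at T4 is a simple construction: each entry commits to the hash of the previous one, so tampering with any historical entry breaks every later link. A minimal sketch of the idea — the entry fields here are assumptions, not the project’s actual log format:

```python
import hashlib
import json

def append_entry(log: list[dict], change: str) -> None:
    """Append an entry whose hash covers the change and the previous hash."""
    prev = log[-1]["hash"] if log else "genesis"
    body = {"change": change, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log: list[dict]) -> bool:
    """Walk the chain; any edited entry invalidates itself and its successors."""
    prev = "genesis"
    for entry in log:
        body = {"change": entry["change"], "prev": entry["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "lesson L-12 promoted to active")
append_entry(log, "lesson L-07 soft-deleted")
print(verify(log))  # → True
log[0]["change"] = "tampered"
print(verify(log))  # → False
```

Combined with soft deletes, a chain like this makes every change to the knowledge base recoverable and its history tamper-evident.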

how it installs

RunawayContext is AI-native — the install procedure is a prompt you paste to your coding agent. It clones the repo, installs dependencies, runs the contract tests, and reports diagnostics. Takes about five minutes.

# paste this to Claude Code, Cursor, or any MCP-aware AI
I want to install RunawayContext v3 from https://github.com/sms021/RunawayContext on this machine. Follow the procedure in INSTALL_PROMPT.md exactly.

Already on v1 or v2? The same prompt handles the upgrade non-destructively — reference the appropriate section of INSTALL_PROMPT.md. Nothing in your existing knowledge base is touched until the migration verifies clean.

why I open-sourced it

This started as a fix for my own problem — long days on the day-job codebase, late nights on side projects, and one AI that kept forgetting things across both. But the more I used it, the more it became obvious that the problem isn’t mine alone. Anyone working with AI coding agents hits this wall: the instructions file grows, the AI stops reading half of it, and you start spending more time on context management than on the actual work.

If RunawayContext is useful to you — in your team’s production codebase, on a personal project, or anywhere in between — I’d love to hear how. Issues, pull requests, and stories from the field all welcome. If it’s saved you a few thousand tokens or a few hours of re-explaining, and you want to chip in, the donate button is at the top of the page. Either way — thanks for taking a look.

try it out

RunawayContext is free and open source under a permissive license. Star the repo if it’s useful, open an issue if it’s broken, and tell a friend if it saves you a session.

View on GitHub →
Support This Project →