# How Engrams Works
Every time you start a new chat with your AI assistant, it has no memory of your previous conversations. Engrams changes that. It quietly sits alongside your AI tool, remembering your project decisions, design patterns, and progress — and automatically provides the right information when it's needed.
## How a typical interaction works
- You ask your AI something. For example: "Add a /users endpoint."
- Your AI checks with Engrams to see if there are any relevant decisions, patterns, or notes about this area of the project.
- Engrams picks the best matches — only the most useful items, so the response stays fast and affordable.
- Your AI writes code informed by real project context — not generic guesses.
- New decisions get saved. As you work, Engrams records what you decide so it's there next time.
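The steps above can be sketched as a small retrieval loop. Everything in this sketch (the store, the word-overlap scoring, the top-k cutoff) is an illustrative stand-in, not the real Engrams API:

```python
class FakeStore:
    """Stand-in for the Engrams store; the real search is far smarter than this."""
    def __init__(self):
        self.items = []

    def save(self, text):
        self.items.append(text)

    def search(self, question):
        # Naive relevance: count words shared between the question and each item.
        words = set(question.lower().split())
        return [{"text": t, "score": len(words & set(t.lower().split()))}
                for t in self.items]

def answer_with_context(question, store, top_k=2):
    candidates = store.search(question)                           # step 2: ask for context
    ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
    return [c["text"] for c in ranked[:top_k] if c["score"] > 0]  # step 3: keep the best

store = FakeStore()
store.save("use PostgreSQL for all storage")   # previously saved decision (step 5)
store.save("endpoints live under src/api")     # previously saved pattern
store.save("team standup notes")
used = answer_with_context("add a /users endpoint under the api", store)
print(used)  # ['endpoints live under src/api']
```

The AI would then generate code informed by `used` rather than guessing.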
## Storage: the filesystem-first architecture
Engrams uses a two-layer storage model designed for team collaboration:
| Layer | Location | Purpose |
|---|---|---|
| `.engrams/` | Project root (committed to Git) | Authoritative source for team decisions, patterns, and shared data — human-readable markdown files |
| `engrams/context.db` | Project root (in `.gitignore`) | Local SQLite cache for fast queries, FTS, semantic search, progress tracking, and session state |
The core invariant: `.engrams/` is the source of truth for team items. The SQLite database is a derived cache that can always be rebuilt from the filesystem. If the database is deleted, Engrams reconstructs it automatically from the markdown files.
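A minimal sketch of that rebuild idea, assuming a trivial one-table schema (the real Engrams schema and file layout will differ):

```python
import sqlite3
import tempfile
from pathlib import Path

def rebuild_cache(engrams_dir, db_path=":memory:"):
    """Rebuild the derived SQLite cache from the markdown files on disk."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS items (path TEXT PRIMARY KEY, body TEXT)")
    for md in sorted(Path(engrams_dir).rglob("*.md")):
        db.execute(
            "INSERT OR REPLACE INTO items (path, body) VALUES (?, ?)",
            (str(md), md.read_text()),
        )
    db.commit()
    return db

with tempfile.TemporaryDirectory() as d:
    # One committed decision file is enough to repopulate the cache.
    Path(d, "use-postgresql.md").write_text("# Use PostgreSQL for all storage\n")
    db = rebuild_cache(d)
    count = db.execute("SELECT COUNT(*) FROM items").fetchone()[0]
    print(count)  # 1
```

Because the cache is derived, deleting `context.db` loses nothing that matters to the team.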
## Visibility levels
Every item stored in Engrams — decisions, patterns, custom data — carries an optional visibility label that controls who sees it and how it's stored.
| Visibility | Stored in .engrams/? | Meaning |
|---|---|---|
team | ✅ Yes (write-through) | Shared across all teammates via Git |
individual | ❌ No (SQLite only) | Personal notes or local state — never committed |
proposed | ❌ No (until accepted) | Suggestions under review before becoming team decisions |
workspace | ✅ Yes | Workspace-wide defaults visible to anyone using this repo |
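The write-path routing implied by the table can be summarized in a few lines. The set membership below comes straight from the table; the function name is illustrative:

```python
# Visibility levels written through to the shared .engrams/ directory (per the table).
SHARED = {"team", "workspace"}

def written_to_filesystem(visibility):
    """team/workspace items reach .engrams/ and Git; the rest stay in local SQLite."""
    return visibility in SHARED

shared = [v for v in ("team", "individual", "proposed", "workspace")
          if written_to_filesystem(v)]
print(shared)  # ['team', 'workspace']
```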
## Team sync via Git
When your AI logs a team decision, Engrams performs a guaranteed write-through: the item goes into SQLite and is written as a markdown file in `.engrams/`. If the filesystem write fails, the entire operation fails — no silent data loss.
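One way to picture the guarantee: write the markdown file first, so that any filesystem failure aborts the operation before the cache row is committed. The path, filename, and schema below are illustrative, not the real Engrams layout:

```python
import sqlite3
import tempfile
from pathlib import Path

def log_team_decision(summary, engrams_dir, db):
    md_path = Path(engrams_dir) / "decisions" / "use-postgresql.md"
    md_path.parent.mkdir(parents=True, exist_ok=True)
    md_path.write_text(f"# {summary}\n")   # raises OSError on failure...
    db.execute("INSERT INTO decisions (summary) VALUES (?)", (summary,))
    db.commit()                            # ...so the cache row never lands without the file

with tempfile.TemporaryDirectory() as root:
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE decisions (summary TEXT)")
    log_team_decision("use PostgreSQL for all storage", root, db)
    saved = db.execute("SELECT COUNT(*) FROM decisions").fetchone()[0]
    print(saved)  # 1
```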
Sharing is then just normal Git:
- Commit the `.engrams/` directory alongside your code changes.
- Teammates pull and receive the new markdown files.
- On the next MCP tool call, Engrams auto-imports new files into each teammate's local SQLite cache.
Decisions are identified across machines by a content-addressed slug (a 12-character SHA-256 of the summary text), so the same decision is recognized as identical and never duplicated.
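The slug computation described above is simple to sketch (any normalization of the summary text before hashing, such as case folding or whitespace trimming, is an assumption not stated in the text):

```python
import hashlib

def decision_slug(summary: str) -> str:
    """Content-addressed slug: first 12 hex chars of SHA-256 over the summary text."""
    return hashlib.sha256(summary.encode("utf-8")).hexdigest()[:12]

# The same summary yields the same slug on every machine, so a decision
# pulled via Git is recognized as identical rather than duplicated.
slug = decision_slug("use PostgreSQL for all storage")
print(len(slug))  # 12
```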
See Team Sync for the full architecture, directory structure, and setup guide.
## Decision enforcement
Before making changes, your AI can check whether a planned action conflicts with existing team decisions using `tool_check_planned_action`. This pre-mutation check ensures the AI respects prior architectural choices — like "use PostgreSQL for all storage" — before writing any code.
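A toy sketch of such a pre-mutation check. The keyword-overlap conflict test and the data shape are stand-ins for whatever `tool_check_planned_action` actually does:

```python
def check_planned_action(action, decisions):
    """Return whether the planned action conflicts with any recorded decision."""
    action_words = set(action.lower().split())
    conflicts = [d["summary"] for d in decisions
                 if action_words & d["forbidden_terms"]]
    return {"allowed": not conflicts, "conflicts": conflicts}

decisions = [{
    "summary": "use PostgreSQL for all storage",
    "forbidden_terms": {"mongodb", "mysql", "dynamodb"},  # terms the decision rules out
}]

result = check_planned_action("add a MongoDB collection for sessions", decisions)
print(result)  # {'allowed': False, 'conflicts': ['use PostgreSQL for all storage']}
```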
## Workspace detection
Engrams figures out which project you're working in automatically by looking for common project files like `.git`, `package.json`, `pyproject.toml`, or `Cargo.toml`. You never need to tell it where your project lives.
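Marker-based detection like this usually means walking up the directory tree until a well-known file appears. A sketch, using the marker files named above (the walk-up strategy itself is an assumption about how Engrams implements it):

```python
import tempfile
from pathlib import Path

MARKERS = [".git", "package.json", "pyproject.toml", "Cargo.toml"]

def find_workspace_root(start):
    start = Path(start).resolve()
    for directory in [start, *start.parents]:
        if any((directory / marker).exists() for marker in MARKERS):
            return directory
    return None  # no project marker found anywhere up the tree

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "pyproject.toml").touch()      # marks the project root
    nested = root / "src" / "api"
    nested.mkdir(parents=True)
    matches = find_workspace_root(nested) == root.resolve()
    print(matches)  # True
```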
## How Engrams connects to your AI tool
| Mode | What it does | Best for |
|---|---|---|
| Local (stdio) | Runs directly alongside your AI tool as a local process | IDE extensions like Roo Code, Cline, Cursor, and Windsurf |
| Network (HTTP) | Runs as a web server that clients can connect to | Remote setups, or multiple people sharing one Engrams instance |
Most users will use Local (stdio) — it's the default and requires no extra configuration.
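For illustration only: MCP clients typically register a local stdio server with a configuration entry shaped roughly like the one below. The key names follow the common MCP client convention, but the `engrams` command name is a hypothetical placeholder; check your client's and Engrams' own setup docs for the real entry.

```json
{
  "mcpServers": {
    "engrams": {
      "command": "engrams"
    }
  }
}
```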
## Smart search (semantic search)
When you save a decision or pattern, Engrams also creates a compact representation of its meaning. Later, when your AI searches for something like "caching strategy," Engrams can find related items — even if they use completely different words, like "Redis configuration" or "query performance." This is called semantic search, and it's what makes Engrams feel like it actually understands your project.
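The core mechanism is comparing "meaning vectors" (embeddings) rather than words. A toy illustration with made-up 3-dimensional vectors (real embeddings have hundreds of dimensions and come from a model, not by hand):

```python
import math

def cosine(a, b):
    """Cosine similarity: how closely two meaning vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

embeddings = {
    "Redis configuration": [0.8, 0.3, 0.1],  # shares no words with the query
    "holiday schedule":    [0.0, 0.1, 0.9],
}

query = [0.9, 0.1, 0.0]  # pretend embedding for "caching strategy"
best = max(embeddings, key=lambda k: cosine(query, embeddings[k]))
print(best)  # Redis configuration
```

Despite zero word overlap, "Redis configuration" wins because its vector sits close to the query's.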
## Keyword search
Engrams also supports fast, traditional keyword search. If you know the exact term you're looking for — like a specific library name or error message — keyword search finds it instantly without any extra processing.
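The text says the local cache supports FTS, which in SQLite means the built-in FTS5 full-text index. A minimal sketch with an illustrative schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE items USING fts5(body)")  # full-text index
db.executemany("INSERT INTO items (body) VALUES (?)", [
    ("use PostgreSQL for all storage",),
    ("retry HTTP 429 with exponential backoff",),
])

# Exact-term lookup is a plain MATCH query; no embeddings involved.
rows = db.execute("SELECT body FROM items WHERE items MATCH 'PostgreSQL'").fetchall()
print(rows)  # [('use PostgreSQL for all storage',)]
```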
## The knowledge graph
Your project knowledge isn't isolated facts — it's all connected. Engrams lets you link related items together. For example, you can connect a design decision to the coding patterns that implement it, or link a task to the decisions it depends on. Your AI can follow these connections to understand the bigger picture.
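Following links like these is just a graph walk. A sketch with an illustrative edge table (item names and link structure are invented for the example):

```python
from collections import deque

# A decision implemented by a pattern, which a task depends on.
edges = {
    "decision:use-postgresql": ["pattern:repository-layer"],
    "pattern:repository-layer": ["task:migrate-users-table"],
    "task:migrate-users-table": [],
}

def related(start, edges):
    """Breadth-first walk over links, collecting everything reachable from start."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        queue.extend(edges.get(node, []))
    return seen - {start}

linked = sorted(related("decision:use-postgresql", edges))
print(linked)  # ['pattern:repository-layer', 'task:migrate-users-table']
```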
## Context budgeting
As your project grows, you might have hundreds of saved decisions and patterns. Loading all of them into every AI conversation would be slow and expensive. Engrams solves this by ranking items by how relevant they are to your current question and only including the most useful ones. The result: fast, focused, affordable responses. See Context Budgeting for details.
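The idea can be sketched as a greedy fill against a token budget. The scoring, the budget number, and the rough 4-characters-per-token estimate are all illustrative assumptions:

```python
def budget_context(items, budget_tokens):
    """Take items in relevance order until the estimated token budget is spent."""
    chosen, spent = [], 0
    for item in sorted(items, key=lambda i: i["score"], reverse=True):
        cost = len(item["text"]) // 4 + 1   # rough token estimate
        if spent + cost > budget_tokens:
            continue                         # too big for what's left; skip it
        chosen.append(item["text"])
        spent += cost

    return chosen

items = [
    {"text": "use PostgreSQL for all storage", "score": 0.9},
    {"text": "x" * 400, "score": 0.8},       # relevant, but too large for the budget
    {"text": "retry HTTP 429 with backoff", "score": 0.5},
]
selected = budget_context(items, budget_tokens=20)
print(selected)  # ['use PostgreSQL for all storage', 'retry HTTP 429 with backoff']
```

Note how the oversized middle item is skipped even though it outranks the last one: relevance decides the order, but the budget decides what fits.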