# How Engrams Works
Engrams sits between your AI agent and your project. It stores, indexes, and retrieves structured knowledge so your AI always has the right context — without you asking for it.
## The core loop
- You talk to your AI: "Add a `/users` endpoint."
- The AI calls Engrams tools to retrieve relevant decisions, patterns, and progress items.
- Engrams scores and returns the most relevant items within the configured token budget.
- The AI generates a response grounded in your actual project decisions.
- New knowledge is logged back into Engrams as the session progresses.
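The loop above can be sketched in a few lines of Python. Everything here is an illustrative stand-in — `log_item`, `retrieve`, and `generate` are not the real Engrams tool names, and the relevance ranking is a toy:

```python
# Illustrative sketch of the retrieve -> generate -> log loop.
# These names (log_item, retrieve, generate) are stand-ins,
# not the actual Engrams API.

knowledge = []  # logged decisions, patterns, progress items

def log_item(text):
    knowledge.append(text)

def retrieve(query):
    # Toy relevance: items sharing words with the query rank first.
    q = set(query.lower().split())
    return sorted(knowledge, key=lambda t: -len(q & set(t.lower().split())))

def generate(query, context):
    # Real generation happens in the AI agent; elided here.
    return f"Response to {query!r} grounded in {len(context)} item(s)"

log_item("Decision: REST endpoints are defined in routes/")
context = retrieve("Add a /users endpoint")          # steps 2-3
reply = generate("Add a /users endpoint", context)   # step 4
log_item("Progress: /users endpoint added")          # step 5
```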
## Storage
Each project gets its own SQLite database at `<project-root>/context_portal/context.db`. Data is never shared between projects, and the database is created automatically on first use.
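The layout can be reproduced with the standard library alone. The path segments mirror the docs; the `ensure_db` helper itself is hypothetical, not part of Engrams:

```python
import sqlite3
import tempfile
from pathlib import Path

def ensure_db(project_root: str) -> Path:
    """Create <project-root>/context_portal/context.db if it is missing."""
    db_path = Path(project_root) / "context_portal" / "context.db"
    db_path.parent.mkdir(parents=True, exist_ok=True)
    # Connecting creates the file on first use — one database per project.
    sqlite3.connect(db_path).close()
    return db_path

root = tempfile.mkdtemp()   # stands in for a project root
path = ensure_db(root)
```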
## Workspace detection
Engrams detects the active project automatically using a priority-ordered list of project indicators: `.git`, `package.json`, `pyproject.toml`, `Cargo.toml`, and others. You never need to pass a workspace path manually.
## Communication modes
| Mode | How it works | Best for |
|---|---|---|
| stdio | Direct inter-process communication via `stdin`/`stdout` | Local IDE extensions (Roo, Cline, Cursor, Windsurf) |
| HTTP | FastAPI server on a configurable port (default 8000) | Remote clients, multiple agents sharing one server |
## Vector embeddings & semantic search
When you log a decision or pattern, Engrams generates a vector embedding
using sentence-transformers and stores it in ChromaDB. When your AI searches for
"caching strategy", it finds decisions about Redis, TTL, and query indexes — because Engrams
understands meaning, not just words.
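The ranking step behind this is cosine similarity between embedding vectors. The tiny hand-made three-dimensional vectors below stand in for real sentence-transformers output (which has hundreds of dimensions), but the scoring math is the same:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend embeddings; dimensions loosely mean [caching, database, frontend].
store = {
    "Use Redis with a 60s TTL for hot queries": [0.9, 0.4, 0.0],
    "Add composite indexes on the users table": [0.3, 0.9, 0.1],
    "Adopt Tailwind for all new components":    [0.0, 0.1, 0.9],
}
query_vec = [0.8, 0.5, 0.0]  # pretend embedding of "caching strategy"

ranked = sorted(store, key=lambda text: cosine(store[text], query_vec),
                reverse=True)
```

The Redis decision ranks first even though it never contains the word "caching" — similarity in embedding space, not keyword overlap, drives the match.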
## Full-text search (FTS)
In addition to semantic search, Engrams maintains SQLite FTS5 virtual tables for decisions, custom data, and glossary items. FTS is fast and requires no embedding lookup — ideal for exact or partial keyword matches.
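A minimal FTS5 table shows the mechanism (the real schema is internal to Engrams; the column names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table with two indexed text columns (illustrative schema).
conn.execute("CREATE VIRTUAL TABLE decisions_fts USING fts5(summary, rationale)")
conn.executemany(
    "INSERT INTO decisions_fts VALUES (?, ?)",
    [
        ("Use Redis for caching", "Hot query results need sub-ms reads"),
        ("Adopt FastAPI", "Async support and automatic OpenAPI docs"),
    ],
)
# MATCH performs keyword search directly — no embedding lookup involved.
rows = conn.execute(
    "SELECT summary FROM decisions_fts WHERE decisions_fts MATCH ?",
    ("caching",),
).fetchall()
```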
## The knowledge graph
Entities (decisions, patterns, progress items, custom data) can be explicitly linked via `link_engrams_items`. These relationships form a queryable graph that lets your AI navigate from a decision to the patterns that implement it, or from a task to the decisions it depends on.
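Conceptually, the links form a directed graph, and navigation is graph traversal. A few lines of adjacency-list BFS show the kind of query this enables; the item IDs and relation names here are invented sample data:

```python
from collections import deque

# Each edge: (source, relation, target). Invented sample links.
links = [
    ("decision:use-redis", "implemented_by", "pattern:cache-aside"),
    ("pattern:cache-aside", "used_in", "progress:users-endpoint"),
]

graph = {}
for src, rel, dst in links:
    graph.setdefault(src, []).append((rel, dst))

def reachable(start):
    """Return every item reachable from `start` by following links (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for _, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

found = reachable("decision:use-redis")
```

Starting from the Redis decision, traversal reaches the pattern that implements it and the task that used that pattern.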
## Context budgeting
As your knowledge base grows, loading everything into every prompt becomes expensive. Engrams' budgeting system scores each item by relevance to the current query and loads only the highest-value items within a configurable token limit. See Context Budgeting for details.
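A greedy selection loop captures the idea: take items in relevance order until the token limit is reached. The scores, item texts, and whitespace token estimate below are all invented for illustration — Engrams' actual scoring is more involved:

```python
# Greedy sketch of budgeted selection (invented scores and token estimate).
items = [
    {"text": "Decision: Redis for caching with 60s TTL", "score": 0.92},
    {"text": "Pattern: cache-aside in services/cache.py", "score": 0.81},
    {"text": "Decision: Tailwind for new components",     "score": 0.15},
]

def select(items, budget_tokens):
    """Pick the highest-scored items whose total cost fits the budget."""
    chosen, used = [], 0
    for item in sorted(items, key=lambda i: i["score"], reverse=True):
        cost = len(item["text"].split())  # crude stand-in for a tokenizer
        if used + cost <= budget_tokens:
            chosen.append(item["text"])
            used += cost
    return chosen

picked = select(items, budget_tokens=15)
```

With a 15-token budget, the two high-relevance caching items fit and the low-relevance Tailwind decision is dropped.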