# Semantic Search
Engrams finds relevant context by understanding meaning, not just exact keyword matches. Ask about "performance optimization" and it finds decisions about caching, indexing, and query tuning.
## How it works
When you log a decision or pattern, Engrams generates a vector embedding
using the sentence-transformers library and stores it in ChromaDB. At retrieval
time, the query is embedded and compared against stored vectors using cosine similarity.
Items with the highest semantic similarity are returned first.
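The retrieval step can be sketched in plain Python. This is a minimal illustration of cosine-similarity ranking, not Engrams' actual code: it uses a toy bag-of-words vector in place of a sentence-transformers embedding (so it only captures word overlap, not meaning), and a list in place of ChromaDB.

```python
import math

def embed(text, vocab):
    # Toy bag-of-words "embedding": one dimension per vocabulary word.
    # A real deployment would use a sentence-transformers model here.
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    # Cosine similarity: dot product of the vectors over the product
    # of their magnitudes; 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(query, documents):
    # Embed the query and every document, then sort by similarity,
    # highest first — the shape of the retrieval step described above.
    vocab = sorted({w for d in documents + [query] for w in d.lower().split()})
    q = embed(query, vocab)
    scored = [(cosine(q, embed(d, vocab)), d) for d in documents]
    return [d for score, d in sorted(scored, reverse=True)]

docs = [
    "Use Redis for session caching",
    "Team standup moved to 10am",
    "Cache invalidation strategy using TTL",
]
print(rank("caching performance", docs)[0])  # the caching decision ranks first
```

With a real embedding model, "performance" would also pull in the indexing and query-tuning decisions, because semantically related text maps to nearby vectors even without shared words.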
## Example
```
You: "How should I handle caching for better performance?"

AI: Searching for relevant decisions...

Found:
• Decision #8: Use Redis for session caching
• Decision #15: Cache invalidation strategy (TTL-based)
• Decision #22: Database query optimization with indexes

Based on these decisions, I recommend Redis for session
caching with a 24-hour TTL...
```

## Semantic vs full-text search
| Type | Best for | Tool |
|---|---|---|
| Semantic | Conceptual queries — finds meaning-related items | get_relevant_context |
| Full-text (FTS) | Keyword or phrase search — fast exact/partial matches | search_decisions_fts, search_custom_data_value_fts |
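To make the contrast concrete, here is a sketch of full-text search using SQLite's FTS5 extension (bundled with most CPython builds). This is illustrative only — it is not Engrams' actual FTS backend or schema — but it shows the key difference: FTS matches literal tokens, so "caching" does not find the decision that only says "Cache".

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table: the column contents are tokenized for keyword search.
conn.execute("CREATE VIRTUAL TABLE decisions USING fts5(summary)")
conn.executemany(
    "INSERT INTO decisions VALUES (?)",
    [
        ("Use Redis for session caching",),
        ("Cache invalidation strategy (TTL-based)",),
        ("Database query optimization with indexes",),
    ],
)
# MATCH does exact token matching: only the row containing the literal
# word "caching" is returned; "Cache" and "optimization" are missed.
rows = conn.execute(
    "SELECT summary FROM decisions WHERE decisions MATCH ?", ("caching",)
).fetchall()
print(rows)
```

A semantic query for the same phrase would surface all three decisions; that trade-off — exact-and-fast versus meaning-aware — is why both tools exist.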
## MCP tools
- `get_relevant_context` — semantic search across all knowledge types
- `search_decisions_fts` — full-text search on decisions
- `search_custom_data_value_fts` — full-text search on custom data values
- `search_project_glossary_fts` — full-text search scoped to the glossary
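MCP tools are invoked over JSON-RPC with a `tools/call` request. The sketch below shows the general shape of such a call for `get_relevant_context`; the argument names (`query`, `limit`) are hypothetical and may not match Engrams' actual tool schema.

```python
import json

# Hypothetical MCP tools/call payload; "query" and "limit" are
# illustrative argument names, not Engrams' documented schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_relevant_context",
        "arguments": {"query": "caching for better performance", "limit": 5},
    },
}
print(json.dumps(request, indent=2))
```

In practice the AI client builds this request for you — the example above is only to show where the tool name and search query fit in the protocol envelope.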