So I built grov. It runs a local proxy that intercepts Claude Code's API calls, captures reasoning from each session (via LLM extraction), stores it in SQLite, and auto-injects relevant context into future sessions.
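For a sense of the moving parts: the proxy sits between Claude Code and the Anthropic API, forwards each request, and keeps a local copy of the exchange for later extraction. A minimal sketch of that loop (hypothetical: it assumes better-sqlite3 and an `exchanges` table, and isn't grov's actual internals):

```typescript
// Sketch only: forward Claude Code's API calls upstream and keep a copy
// of each exchange in local SQLite so reasoning can be extracted later.
import http from "node:http";
import Database from "better-sqlite3";

const db = new Database("grov.db");
db.exec(`CREATE TABLE IF NOT EXISTS exchanges (
  id INTEGER PRIMARY KEY,
  ts TEXT DEFAULT CURRENT_TIMESTAMP,
  request TEXT,
  response TEXT
)`);

http
  .createServer(async (req, res) => {
    let body = "";
    for await (const chunk of req) body += chunk;

    // Forward the original request to the real Anthropic endpoint.
    const upstream = await fetch(`https://api.anthropic.com${req.url}`, {
      method: req.method,
      headers: {
        "content-type": "application/json",
        "x-api-key": String(req.headers["x-api-key"] ?? ""),
        "anthropic-version": String(req.headers["anthropic-version"] ?? ""),
      },
      body: body || undefined,
    });
    const text = await upstream.text(); // real streaming (SSE) is more involved

    // Store the exchange; extraction happens later, out of band.
    db.prepare("INSERT INTO exchanges (request, response) VALUES (?, ?)").run(body, text);

    res.writeHead(upstream.status, {
      "content-type": upstream.headers.get("content-type") ?? "application/json",
    });
    res.end(text);
  })
  .listen(8787); // point Claude Code's base URL at this port
```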
In testing, a baseline task took 10-11 min and launched 3+ explore agents. With context injection: 1-2 min and zero explore agents. Claude just reads the files directly because it already knows the codebase patterns.
How it works:
- Run grov init
- Run grov proxy (keep it running)
- Use Claude Code normally
- Grov captures what Claude learns, injects it next session
It's local-only (SQLite); nothing leaves your machine. It currently has ~250 npm downloads with zero promotion, which is what pushed me to share it here.
Still early (v0.2.2). Would love feedback from Claude Code users. Please report bugs!
Example: My cofounder debugged the payment flow last week. When I touch payment code today, his reasoning is automatically injected into my session.
1. Automatic capture with structured extraction: Grov uses Haiku to extract reasoning_trace (conclusions + insights) and decisions (choice + why) from each session. You don't write anything; it captures automatically (see the extraction sketch below).
2. Intelligent injection by file match: When you edit src/auth/login.ts, Grov queries past sessions that touched auth files and injects only that context. A markdown notes file, by contrast, would be read in full every time, wasting tokens. (The next version will also include semantic search.)
3. Team sync: Grov automatically syncs memories to a team dashboard. When dev A explains the auth system, dev B's Claude already knows it while doing related work.
Technically, this was the core idea of Grov: for my coding agent to know the reasoning behind why my cofounder's coding agent chose to implement xyz a certain way.
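To make item 1 concrete, the extraction step is roughly "hand the transcript to Haiku, ask for structured JSON". A hedged sketch using the @anthropic-ai/sdk client; the Memory shape and extractReasoning helper are my own names mirroring the fields mentioned above, not grov's actual schema or prompt:

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Hypothetical memory shape, mirroring the field names mentioned above.
interface Memory {
  reasoning_trace: { conclusions: string[]; insights: string[] };
  decisions: { choice: string; why: string }[];
}

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Ask Haiku to boil a session transcript down to structured reasoning.
async function extractReasoning(transcript: string): Promise<Memory> {
  const response = await client.messages.create({
    model: "claude-3-5-haiku-latest", // assumption: any small, cheap model works here
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content:
          "Return JSON with reasoning_trace {conclusions, insights} and " +
          "decisions [{choice, why}] for this coding session:\n\n" + transcript,
      },
    ],
  });
  const block = response.content[0];
  // Real code would validate the JSON instead of trusting the model blindly.
  return JSON.parse(block.type === "text" ? block.text : "{}") as Memory;
}
```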
Two signals drive injection right now:
- File paths: if you're touching src/auth/, we inject memories that touched auth files.
- Recency: most recent memories first, limited to the top 10.
Semantic search is on the roadmap but not implemented yet. Right now it's straightforward file matching + recency.
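Concretely, that selection is just a query against the local store. A sketch of the idea, assuming a better-sqlite3 database and a memories table with reasoning / file_path / created_at columns (hypothetical schema, not grov's actual one):

```typescript
import Database from "better-sqlite3";

const db = new Database("grov.db");

// Newest memories whose files live under the same directory as the file
// being edited, capped at 10 (the behaviour described above).
function selectMemories(editedFile: string, limit = 10) {
  const dir = editedFile.split("/").slice(0, -1).join("/") + "/"; // e.g. "src/auth/"
  return db
    .prepare(
      `SELECT reasoning, file_path, created_at
         FROM memories
        WHERE file_path LIKE ? || '%'
        ORDER BY created_at DESC
        LIMIT ?`
    )
    .all(dir, limit);
}

// Editing src/auth/login.ts pulls in anything that previously touched src/auth/.
console.log(selectMemories("src/auth/login.ts"));
```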
(Appreciate the question. I really have to start working on this; it's not the first time someone has asked :))