feat: Honcho memory integration (opt-in) #38
Can you link me to more info on Honcho? I haven't heard of it, so I don't know what it adds vs. the USER.md user profile that the agent already builds for its user.
So USER.md is creating conclusions similar to how Honcho does, yes. To be clear, my PR currently runs alongside it -- I'll make one more commit to integrate them cleanly (still opt-in) if you're interested. Important to note -- the reasoning Honcho does is powered by Neuromancer, which was specifically fine-tuned for this task. (link to evals here) There's also dreaming -- the reasoning model ambiently draws conclusions and fills in the gaps between sessions. It's a better way to form identity, all accessible in the cloud (remap anything on the fly) or self-hosted with your own model infra. I'm using Honcho as a bridge between all of my agentic interactions, and that's the most valuable part for me. The peer-to-peer architecture also lets you control which peers can make conclusions about other peers -- super helpful for agentic orchestration going forward. Happy to discuss more -- erosika on Discord / @3rosika on Twitter.
Opt-in persistent cross-session user modeling via Honcho. Reads `~/.honcho/config.json` as single source of truth (shared with Claude Code, Cursor, and other Honcho-enabled tools). Zero impact when disabled or unconfigured.

- `honcho_integration/` package (client, session manager, peer resolution)
- Host-based config resolution matching the claude-honcho/cursor-honcho pattern
- Prefetch user context into the system prompt per conversation turn
- Sync user/assistant messages to Honcho after each exchange
- `query_user_context` tool for mid-conversation dialectic reasoning
- Gated activation: requires `~/.honcho/config.json` with `enabled=true`
When Honcho is active:

- System prompt uses Honcho prefetch instead of USER.md
- `memory` tool `target=user` `add` routes to Honcho
- MEMORY.md untouched in all cases

When disabled, everything works as before. Also wires up the `contextTokens` config to cap prefetch size.
USER.md stays in system prompt when Honcho is active -- prefetch is additive context, not a replacement. Memory tool user observations write to both USER.md (local) and Honcho (cross-session) simultaneously.
Okay, after reviewing, I like it a lot. I'll add more documentation on it after merge. Thanks!
…e integration, setup CLI

Authored by erosika. Builds on #38 and #243. Adds async write support, configurable memory modes, a context prefetch pipeline, 4 new Honcho tools (`honcho_context`, `honcho_profile`, `honcho_search`, `honcho_conclude`), a full `hermes honcho` CLI, session strategies, AI peer identity, recallMode A/B, gateway lifecycle management, and comprehensive docs. Cherry-picks fixes from PRs #831/#832 (adavyas).

Co-authored-by: erosika <erosika@users.noreply.github.com>
Co-authored-by: adavyas <adavyas@users.noreply.github.com>


Cross-session user modeling via Honcho. Hermes builds a persistent representation of who it's talking to -- preferences, patterns, context -- and carries that understanding across conversations. A model of the user.
Three integration points: prefetch user context into the system prompt each turn, sync exchanges for ongoing modeling, and a dialectic reasoning tool (`query_user_context`) for the agent to actively query its understanding of the user mid-conversation.

`honcho_integration/` reads `~/.honcho/config.json` as single source of truth. All imports lazy, all calls non-fatal. Optional dependency, zero behavior change when disabled.

Composes with USER.md
Both systems run in tandem. USER.md stays in the system prompt as the agent's local, curated snapshot of the user. Honcho prefetch is additive -- synthesized cross-session context layered on top.
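The additive, non-fatal prefetch step might look something like this sketch. `build_system_prompt`, the injected `fetch_user_context` callable, and the `<user_context>` wrapper tags are all illustrative assumptions, not the PR's actual implementation.

```python
from typing import Callable


def build_system_prompt(
    base_prompt: str,
    fetch_user_context: Callable[[], str],
) -> str:
    """Per-turn prompt assembly: layer synthesized cross-session
    context on top of the base prompt (which already contains the
    local USER.md snapshot). fetch_user_context stands in for the
    real Honcho client call; because all calls are non-fatal, any
    failure simply yields the unmodified prompt."""
    try:
        context = fetch_user_context()
    except Exception:
        return base_prompt  # Honcho unreachable: behave as if disabled
    if not context:
        return base_prompt  # nothing synthesized yet
    return f"{base_prompt}\n\n<user_context>\n{context}\n</user_context>"
```

The key property is that the base prompt is never replaced, only extended, which matches the "additive context, not a replacement" design.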
When the agent writes to `memory(target="user", action="add")`, observations go to both USER.md (local file) and Honcho (cross-session reasoning model). MEMORY.md (agent's own notes) is untouched in all cases.

When Honcho is disabled, USER.md works exactly as before. Nothing changes.
Prefetch size is configurable via `contextTokens` in the global config -- caps how much user context Honcho surfaces per turn. No value = uncapped.