Add Voyage AI embedding provider #437
Conversation
hardcoded embedding size is silly.
Chiming in as another data point for this fix. We're running Honcho on a local vLLM setup using Qwen3-Embedding (1024 dimensions) behind an OpenAI-compatible endpoint, and hit exactly the same wall described in #443. Before this PR we had to manually patch three files to get it working:
One note on the "no manual DB wipe" framing: the Alembic migration here doesn't eliminate re-embedding — it NULLs the existing vectors, resizes the column, and lets the reconciler re-embed asynchronously. The actual message and conclusion data is preserved, which is the important part. The re-embedding cost is still there, just handled automatically rather than requiring a manual DB rebuild. Happy to test this against our vLLM + OpenAI-compatible path (not Voyage) if a pre-merge test would be useful.
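To make the migrate-then-reconcile behavior concrete, here is a minimal, self-contained sketch of the reconcile step described above: after the migration NULLs out old vectors, a background pass re-embeds any row whose embedding is missing. All names (`reconcile`, `fake_embed`, the row shape) are illustrative assumptions, not Honcho's actual API.

```python
def fake_embed(text: str, dims: int = 1024) -> list[float]:
    # Stand-in for the real embedding call (Voyage, vLLM, etc.).
    return [float(len(text))] * dims

def reconcile(rows: list[dict], dims: int = 1024) -> int:
    """Re-embed every row whose vector was NULLed by the migration.

    Returns the number of rows that were re-embedded; rows that already
    have a current vector are left untouched.
    """
    fixed = 0
    for row in rows:
        if row["embedding"] is None:
            row["embedding"] = fake_embed(row["content"], dims)
            fixed += 1
    return fixed

rows = [
    {"content": "hello", "embedding": None},          # NULLed by migration
    {"content": "world", "embedding": [0.0] * 1024},  # already current
]
print(reconcile(rows))  # → 1
```

The point of the design is visible in the sketch: the expensive step (`fake_embed`) runs lazily per-row, so no manual dump/restore of the table is needed.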
Adds Voyage AI as an embedding provider.
VECTOR_STORE_DIMENSIONS was being silently ignored before because the embedding dimensions were hardcoded. With this change the setting is actually respected.
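A minimal sketch of what "respecting the setting" means, assuming the dimensions are read from the environment with a fallback default; the function name and the default value here are illustrative, not the PR's actual code.

```python
import os

# Hypothetical default when VECTOR_STORE_DIMENSIONS is unset
# (1536 is e.g. OpenAI's text-embedding-3-small).
DEFAULT_DIMENSIONS = 1536

def vector_store_dimensions() -> int:
    """Read the configured dimension count instead of hardcoding it."""
    raw = os.environ.get("VECTOR_STORE_DIMENSIONS")
    return int(raw) if raw else DEFAULT_DIMENSIONS

os.environ["VECTOR_STORE_DIMENSIONS"] = "1024"  # e.g. Qwen3-Embedding
print(vector_store_dimensions())  # → 1024
```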
Changes:
Set LLM_EMBEDDING_PROVIDER=voyage and LLM_VOYAGE_API_KEY=... to use it.
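A rough sketch of how an `LLM_EMBEDDING_PROVIDER` switch might select a backend; the dispatch function and the stub embedders are hypothetical, and only the `voyage` provider name comes from this PR.

```python
import os

def make_embedder(provider: str):
    """Return an embedding callable for the configured provider.

    The callables here are stubs that just describe the call; a real
    implementation would construct the provider's client instead.
    """
    if provider == "voyage":
        return lambda texts: f"voyage({len(texts)} texts)"
    if provider == "openai":
        return lambda texts: f"openai({len(texts)} texts)"
    raise ValueError(f"unknown embedding provider: {provider}")

os.environ["LLM_EMBEDDING_PROVIDER"] = "voyage"
embed = make_embedder(os.environ["LLM_EMBEDDING_PROVIDER"])
print(embed(["a", "b"]))  # → voyage(2 texts)
```

Failing loudly on an unknown provider (rather than silently falling back) matches the spirit of the fix above: configuration should either take effect or error.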