[Bug]: Ollama Models don't recognize environment (aka hermes-agent) #2074
Closed as not planned
Labels: bug (Something isn't working)
Description
Bug Description
If I run hermes with OpenAI Codex, it works great (but has daily/weekly limits).
So on the same hermes install I changed the provider/model to use local Ollama.
I tested these models: qwen3-coder, glm-4.7-flash, qwen3-vl:latest, llama3.1, hermes3:8b.
All gave the same result: they don't know that they are running inside hermes.
I asked them about the configured cronjobs (which worked when using Codex), and they don't understand the question.
Update:
- Using model=minimax-m2.5:cloud seems to work fine, so the problem appears to be specific to local models.
Steps to Reproduce
- Run hermes setup
- Select the custom provider
- Use your Ollama URL, e.g. http://localhost:11434
- Use locally downloaded models (ollama pull model-name)
- Run hermes
- Chat...
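A first debugging step for the behavior above would be checking whether the environment context (e.g. the cronjob list) actually reaches the local model in the system message. The sketch below builds the kind of JSON payload a client would POST to Ollama's /api/chat endpoint; the system-prompt text and helper name are illustrative placeholders, not hermes's real prompt or code.

```python
import json

def build_ollama_chat_payload(model, system_prompt, user_message):
    """Hypothetical helper: assemble an Ollama /api/chat request body.

    If a local model 'does not know' about cronjobs, inspecting this
    payload (e.g. via a proxy or debug logging) shows whether the
    environment context was injected into the system message at all.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "stream": False,
    }

payload = build_ollama_chat_payload(
    "qwen3-coder",
    "You are running inside hermes. Configured cronjobs: (placeholder)",
    "What cronjobs are configured?",
)
print(json.dumps(payload, indent=2))
```

If the system message is missing or empty for the custom (Ollama) provider but present for OpenAI Codex, that would explain why only local models are unaware of their environment.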
Expected Behavior
The model responds with my configured cronjobs.
Actual Behavior
Errors; the model has no knowledge of the cronjobs.
Affected Component
- CLI (interactive chat)
- Messaging Platform (if gateway-related)
- N/A (CLI only)
Operating System
MacOS
Python Version
3.11
Hermes Version
0.4.0
Relevant Logs / Traceback
Root Cause Analysis (optional)
No response
Proposed Fix (optional)
No response
Are you willing to submit a PR for this?
- I'd like to fix this myself and submit a PR