[Bug]: Ollama Models don't recognize environment (aka hermes-agent) #2074

@anibalardid

Description

Bug Description

If I run hermes with OpenAI Codex, it works fantastically (but has daily/weekly limits).
So, on the same hermes install, I changed the provider/model to use local Ollama.
I tested these models: qwen3-coder, glm-4.7-flash, qwen3-vl:latest, llama3.1, hermes3:8b.

All give the same result: they don't know that they are running inside hermes.
I asked about the configured cronjobs (which worked when using Codex), and they don't understand the question ...


Update:

  • Using model=minimax-m2.5:cloud, it seems to work fine, so the problem appears to be specific to local models.

Steps to Reproduce

  1. Run hermes setup
  2. Select custom provider
  3. Use your Ollama URL, e.g. http://localhost:11434
  4. Use downloaded models (ollama pull model-name)
  5. Run hermes
  6. Chat ...
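The steps above can be sketched as a quick shell check before pointing hermes at the local server. The `/api/tags` path is Ollama's documented endpoint for listing locally pulled models; the URL and port are Ollama's defaults and may differ on your setup:

```shell
# Sketch: verify the local Ollama server is reachable and see which models
# it has pulled, before configuring it as hermes's custom provider.
# (Run `ollama pull <model-name>` first if the list comes back empty.)
OLLAMA_URL="http://localhost:11434"

# List locally available models, or report that the server is down.
curl -s --max-time 5 "$OLLAMA_URL/api/tags" \
  || echo "Ollama not reachable at $OLLAMA_URL"
```

If the server responds, the JSON it returns should include every model name you intend to select in hermes.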

Expected Behavior

It responds with my configured cronjobs.

Actual Behavior

Errors, no cronjobs, etc.

Affected Component

CLI (interactive chat)

Messaging Platform (if gateway-related)

N/A (CLI only)

Operating System

macOS

Python Version

3.11

Hermes Version

0.4.0

Relevant Logs / Traceback

No response

Root Cause Analysis (optional)

No response

Proposed Fix (optional)

No response

Are you willing to submit a PR for this?

  • I'd like to fix this myself and submit a PR

Labels

bug (Something isn't working)
