[Bug]: agent delegation to custom inference provider does not work #1011
Closed
Labels
bug (Something isn't working)
Description
Bug Description
I have the following in my config.yaml:

```yaml
...
delegation:
  model: zh-qwen27-thinking
  provider: local
  max_iterations: 50
  default_toolsets:
    - terminal
    - file
    - web
...
custom_providers:
  - name: local
    base_url: http://1.2.3.4:1234/v1
    api_key: no-api-key...
    model: zh-qwen27-thinking
```

Setup
My main inference provider is through OpenRouter, and I have a local model hosted on a machine on my local network.
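Given that setup, the expectation is that a delegate agent resolves its endpoint from the custom_providers entry named in the delegation block. A minimal sketch of that lookup, using the config values above (the helper name and dict shapes are illustrative assumptions, not Hermes' actual internals):

```python
# Hypothetical sketch: resolve delegation.provider to a custom_providers
# entry. Names and structure are illustrative, not Hermes' real API.
config = {
    "delegation": {"model": "zh-qwen27-thinking", "provider": "local"},
    "custom_providers": [
        {
            "name": "local",
            "base_url": "http://1.2.3.4:1234/v1",
            "api_key": "no-api-key...",
            "model": "zh-qwen27-thinking",
        }
    ],
}

def resolve_delegation_provider(cfg: dict) -> dict:
    """Return the custom provider referenced by delegation.provider."""
    name = cfg.get("delegation", {}).get("provider")
    for provider in cfg.get("custom_providers", []):
        if provider.get("name") == name:
            return provider
    raise KeyError(f"no custom provider named {name!r}")

provider = resolve_delegation_provider(config)
print(provider["base_url"])  # → http://1.2.3.4:1234/v1
```

With this config, delegate inference should hit the local endpoint, which is exactly what does not happen.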
Steps to Reproduce
- Host local inference behind an OpenAI-API-compatible endpoint, with request logging enabled.
- Create a custom provider in config.yaml that points at the endpoint from step 1.
- Specify the custom provider in the config.yaml delegation block.
- Run a command that causes a delegate agent to launch.
- Watch the log output on the inference endpoint and observe that no inference requests reach it.
Expected Behavior
I expect that delegated tasks will run against the named custom_provider, in my case local.
Actual Behavior
When a delegate agent spins up, it continues to use the main model specified in config.yaml.
Affected Component
Configuration (config.yaml, .env, hermes setup)
Messaging Platform (if gateway-related)
No response
Operating System
Ubuntu 24.04
Python Version
3.11.15
Hermes Version
v1.0.0
Relevant Logs / Traceback
Root Cause Analysis (optional)
hermes_cli/runtime_provider.py:_get_model_config does not pull configuration for a custom provider when one is specified in the delegation block.
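The failure mode is consistent with _get_model_config resolving only the main model and never consulting custom_providers. A before/after sketch of that suspected shape (entirely hypothetical names and structures, not the actual Hermes source):

```python
# Illustrative only: the suspected bug shape, not Hermes' real code.
MAIN_MODEL = {"provider": "openrouter", "model": "main-model"}

def get_model_config_buggy(cfg: dict) -> dict:
    # Suspected behavior: delegation.provider is ignored, so delegate
    # agents silently inherit the main model.
    return MAIN_MODEL

def get_model_config_fixed(cfg: dict) -> dict:
    # Proposed behavior: honor delegation.provider when it names a
    # custom_providers entry; otherwise fall back to the main model.
    delegation = cfg.get("delegation", {})
    name = delegation.get("provider")
    for provider in cfg.get("custom_providers", []):
        if provider.get("name") == name:
            return {
                "provider": name,
                "model": delegation.get("model", provider.get("model")),
                "base_url": provider.get("base_url"),
                "api_key": provider.get("api_key"),
            }
    return MAIN_MODEL

cfg = {
    "delegation": {"model": "zh-qwen27-thinking", "provider": "local"},
    "custom_providers": [
        {"name": "local", "base_url": "http://1.2.3.4:1234/v1",
         "api_key": "no-api-key..."}
    ],
}
print(get_model_config_buggy(cfg)["model"])  # main-model (the reported bug)
print(get_model_config_fixed(cfg)["model"])  # zh-qwen27-thinking
```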
Proposed Fix (optional)
PR coming soon.
Are you willing to submit a PR for this?
- I'd like to fix this myself and submit a PR