[Bug]: agent delegation to custom inference provider does not work #1011

@comalice

Description

Bug Description

I have the following in my config.yaml:

```yaml
...
delegation:
  model: zh-qwen27-thinking
  provider: local
  max_iterations: 50
  default_toolsets:
  - terminal
  - file
  - web
...
custom_providers:
- name: local
  base_url: http://1.2.3.4:1234/v1
  api_key: no-api-key...
  model: zh-qwen27-thinking
```

Setup

My main inference provider is through OpenRouter, and I have a local model hosted on a machine on my local network.

Steps to Reproduce

  1. Host local inference with an OpenAI API-compatible endpoint, with logging enabled.
  2. Create a custom provider in config.yaml that points at the endpoint from step 1.
  3. Specify that custom provider in the `delegation` block of config.yaml.
  4. Run a command that causes a delegate agent to launch.
  5. Watch the log output on the inference endpoint and observe that no inference requests reach it.

Expected Behavior

I expect delegated tasks to run against the named entry in `custom_providers` — `local`, in my case.

Actual Behavior

When a delegate agent spins up, it continues to use the main model specified in config.yaml instead of the delegation provider.

Affected Component

Configuration (config.yaml, .env, hermes setup)

Messaging Platform (if gateway-related)

No response

Operating System

Ubuntu 24.04

Python Version

3.11.15

Hermes Version

v1.0.0

Relevant Logs / Traceback

Root Cause Analysis (optional)

`hermes_cli/runtime_provider.py:_get_model_config` does not pull the configuration for a custom provider when one is specified in the `delegation` block.
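For illustration, here is a minimal sketch of the lookup I would expect `_get_model_config` to perform. The function signature, config shape, and return value below are assumptions for the purpose of this report, not Hermes's actual API:

```python
# Hypothetical sketch: resolve the delegation block's named provider
# against custom_providers, falling back to the main model otherwise.
# All names here are illustrative assumptions, not Hermes internals.
def get_model_config(config: dict, role: str = "delegation") -> dict:
    block = config.get(role, {})
    provider_name = block.get("provider")
    if provider_name:
        for provider in config.get("custom_providers", []):
            if provider.get("name") == provider_name:
                return {
                    # Prefer the model named in the delegation block,
                    # falling back to the provider's default model.
                    "model": block.get("model", provider.get("model")),
                    "base_url": provider["base_url"],
                    "api_key": provider["api_key"],
                }
    # No (matching) custom provider: use the main model, as Hermes
    # currently does even when a provider is named.
    return {"model": config.get("model"), "base_url": None, "api_key": None}
```

With the config.yaml from this report, this lookup would return the `local` provider's `base_url` and `api_key` for delegated tasks, which is the behavior I expected.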

Proposed Fix (optional)

PR coming soon.

Are you willing to submit a PR for this?

  • I'd like to fix this myself and submit a PR

Labels

bug (Something isn't working)