OpenClaw sub-agent model override not working: complete fix guide
Problem statement: you configured multiple agents in your OpenClaw instance with different models, but sub-agents keep using the main agent's model instead of their assigned model. Requests that should route to a specialized agent end up using the wrong model, breaking workflows that depend on specific model capabilities. This most commonly appears after upgrading to version 2026.3.13 or when adding new agents to an existing multi-agent deployment.
- Issue #51545 (2026-03-21): sub-agent model override not working in 2026.3.13.
- Community reports: agents configured with specific models revert to instance defaults after restart.
- Self-hosted deployments: model settings in dashboard do not propagate to runtime configuration.
Why model override failures break multi-agent setups
Multi-agent OpenClaw deployments depend on model specialization. You might configure one agent for code generation with a capable Claude model, another for quick responses with a faster model, and a third for specialized tasks using a provider-specific model. When model overrides fail, every agent suddenly behaves like the main agent, destroying the specialization pattern you designed.
The impact is not limited to wrong answers. It also affects cost planning (you may be charged for using a more expensive model than intended), latency (fast agents become slow), and capability gaps (specialized features stop working). For teams that built workflows around agent-specific behavior, this failure can make the entire multi-agent architecture feel broken.
How multi-agent model configuration actually works
Understanding the intended behavior helps you diagnose where things go wrong. In a properly configured multi-agent instance:
- Agent-local model settings: each agent has its own model configuration stored separately from the main agent.
- Config generation: the runtime builder reads per-agent model settings and generates agents.list[] entries.
- Auth profiles: each enabled agent receives its own auth profile set at .openclaw/agents/<agent_id>/agent/auth-profiles.json.
- Hot-reload propagation: configuration changes should propagate without a full instance restart when hot-reload works correctly.
- Binding routing: Telegram, Slack, and other bindings route to the correct agent reference based on configured routing rules.
When any of these layers breaks, model overrides fail silently. The agent still runs, but using the wrong model.
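The fallback behavior described above can be sketched as a small resolution function. This is a hypothetical illustration, not OpenClaw's actual code: the entry shape and field names are assumptions modeled on the agents.list[] structure, and the model names are placeholders.

```python
# Hypothetical sketch of per-agent model resolution. Only agents.list[]
# is documented; the "model" field name and fallback rule are assumptions.
def resolve_model(agent_entry: dict, instance_default: str) -> str:
    """Return the agent's own model if set, otherwise the instance default.

    Override failure looks like every agent resolving to instance_default
    even though agent_entry["model"] was populated in the dashboard.
    """
    model = agent_entry.get("model")
    return model if model else instance_default

agents = [
    {"id": "main", "model": "provider-a/model-large"},  # placeholder names
    {"id": "fast", "model": None},  # lost override -> silently falls back
]
for agent in agents:
    print(agent["id"], "->", resolve_model(agent, "provider-a/model-large"))
```

The silent-failure property comes from the fallback itself: a missing per-agent model produces a valid (but wrong) result instead of an error.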
Evidence from the field: first-party implementation details
Hosted multi-agent deployments show that model override issues often trace back to config generation and auth profile wiring.
In our implementation, multi-agent configuration added InstanceAgent and InstanceAgentBinding tables with explicit model storage per agent.
The config generator must emit agents.list[] with each agent's model, and the K8s init bootstrap must write agent-local auth profiles for every enabled agent.
What our implementation confirmed
- Existing users migrate cleanly because every instance gets a seeded main agent as a baseline.
- Runtime writes agent-local auth profiles at .openclaw/agents/<agent_id>/agent/auth-profiles.json for each enabled agent.
- Non-main workspaces are created as .openclaw/workspace-<agent_id> to keep agent contexts separate.
- Provider credentials are shared across agents per instance in the current implementation, but model selection must be respected per agent.
This means the problem is usually not in the storage layer but in how configuration is read, generated, or applied at boot time. If you see model overrides failing, the most productive place to investigate is between the dashboard save and the runtime config file generation.
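If you have shell access, you can sanity-check the on-disk layout described above before digging into config generation. This sketch only uses the two path conventions stated in the text; the agent ID list is something you supply, and a production check would likely need more.

```python
from pathlib import Path

# Sketch: verify the documented per-agent file layout exists on disk.
# Paths follow the conventions quoted above; everything else is assumed.
def missing_agent_files(root: Path, agent_ids: list[str]) -> list[str]:
    """Return expected paths that do not exist for the given agents."""
    missing = []
    for agent_id in agent_ids:
        auth = root / ".openclaw" / "agents" / agent_id / "agent" / "auth-profiles.json"
        if not auth.exists():
            missing.append(str(auth))
        if agent_id != "main":  # non-main agents also get a workspace dir
            workspace = root / f".openclaw/workspace-{agent_id}"
            if not workspace.is_dir():
                missing.append(str(workspace))
    return missing
```

An empty result means the storage layer looks intact, which points the investigation at config generation instead.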
Fast triage: confirm the issue in 10 minutes
- Check agent configuration in dashboard: verify each agent shows the correct model in the Agents tab.
- Export instance configuration: capture the full config state without secrets.
- Inspect generated runtime config: check whether agents.list[] contains per-agent model entries.
- Test with a simple routing rule: create a test binding that routes to a specific agent and verify model selection.
- Review gateway logs: look for model loading messages that show which model each agent actually uses.
If dashboard shows correct models but runtime logs show the same model for all agents, you have confirmed a configuration propagation failure.
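The propagation-failure check above can be mechanized by diffing the two sources of truth. This is an illustrative sketch: both input shapes are assumptions (a dashboard export flattened to an id-to-model mapping, and runtime entries modeled on agents.list[]).

```python
# Sketch: diff dashboard-intended models against runtime-loaded models.
# Input shapes are assumptions; adapt them to your actual export format.
def find_propagation_failures(dashboard: dict, runtime_agents: list[dict]) -> list[str]:
    """Return agent IDs whose runtime model differs from the dashboard value."""
    runtime_by_id = {a["id"]: a.get("model") for a in runtime_agents}
    return [
        agent_id
        for agent_id, expected in dashboard.items()
        if runtime_by_id.get(agent_id) != expected
    ]
```

A non-empty result for correctly saved dashboard settings is exactly the confirmed propagation failure described above.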
Step-by-step diagnostic and fix playbook
Step 1: Verify dashboard state is correct
Before chasing runtime issues, confirm that the dashboard actually saved your model choices. Navigate to the Agents tab under your instance dashboard and check each agent's model setting. If an agent shows the wrong model there, fix it in the UI and save. If the UI shows the correct model but the agent still uses the wrong one, continue to the next step.
Step 2: Check for agent reordering issues
Some deployments reference agents by position instead of ID. If you reordered agents after setting models, references may point to the wrong agent. Try setting a distinctive model for each agent (different providers or clearly different model names) to make routing obvious in logs. This reveals whether agents are being confused during routing or if model settings themselves are not being applied.
Step 3: Force full instance restart
Hot-reload may not pick up agent model changes reliably. Trigger a full instance restart from the dashboard or CLI to force configuration regeneration from the database. After restart, test whether agents now use their assigned models. If a restart fixes it, the problem was hot-reload propagation. If restart does not help, the problem is in how configuration is generated or read.
Step 4: Inspect generated configuration directly
If you have shell access to the instance, inspect the generated OpenClaw configuration files. Look for the agents.list[] structure and verify
that each agent entry contains the expected model field. If agents.list[] is missing model entries or contains default values, the config generator
is not reading your agent model settings correctly. This is a backend bug that should be reported upstream with your configuration export.
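A scan like the following can flag suspect entries in a generated config. The file location and the exact JSON shape of agents.list[] are assumptions here; adjust both to whatever your instance actually writes.

```python
import json
from pathlib import Path

# Sketch: flag agents.list[] entries with missing or defaulted models.
# The JSON shape is an assumption modeled on the structure named above.
def agents_missing_models(config_path: Path, instance_default: str) -> list[str]:
    """Return IDs of agents whose model entry is absent or suspiciously default."""
    config = json.loads(config_path.read_text())
    flagged = []
    for entry in config.get("agents", {}).get("list", []):
        model = entry.get("model")
        # A non-main agent carrying the instance default is the classic symptom.
        if not model or (entry.get("id") != "main" and model == instance_default):
            flagged.append(entry.get("id", "<unknown>"))
    return flagged
```

Note the second condition can produce false positives if a non-main agent legitimately uses the default model, so treat flagged entries as leads, not proof.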
Step 5: Test with minimal reproduction
Create a new test instance with exactly two agents: main using one model, a second agent using a clearly different model. Add a simple routing rule and test whether the second agent uses its assigned model. If this minimal case works, your original instance may have corrupted state or conflicting bindings. If it fails even in a fresh instance, you have confirmed a platform-level bug in the multi-agent model override feature.
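The minimal reproduction can be captured as data so the expected state is unambiguous. The dict below mirrors the agents.list[] structure; the agent ID "probe" and the model names are placeholders, and the point is simply that the two models must be clearly distinct.

```python
# Minimal repro sketch: two agents with obviously different models so the
# wrong-model case is unmistakable in logs. Names are placeholders.
minimal_config = {
    "agents": {
        "list": [
            {"id": "main", "model": "provider-a/model-large"},
            {"id": "probe", "model": "provider-b/model-small"},
        ]
    }
}

models = {a["id"]: a["model"] for a in minimal_config["agents"]["list"]}
# Distinct models are the whole point of the repro: if both agents log
# the same model string, the override is being ignored.
assert len(set(models.values())) == len(models), "models must be distinct"
```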
Step 6: Verify binding routing is correct
Incorrect agent routing can masquerade as model override failure. If bindings route to the wrong agent, you see the wrong model even though agent configuration is correct. Review your binding configuration: Telegram channels should specify the correct agent ID, Slack connections should use the intended agent, and Built-In Chat routing rules should match your intended agent selection. Fix any binding misreferences and retest.
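A quick structural check can separate routing bugs from model bugs: every binding should reference an agent that actually exists. The binding shape ({"channel": ..., "agent_id": ...}) is an assumption for illustration.

```python
# Sketch: catch binding misreferences before blaming model overrides.
# The binding dict shape is an assumption; map it to your real config.
def dangling_bindings(bindings: list[dict], agent_ids: set[str]) -> list[dict]:
    """Return bindings whose agent_id does not match any configured agent."""
    return [b for b in bindings if b.get("agent_id") not in agent_ids]
```

Any dangling binding means requests on that channel are being routed somewhere you did not intend, which presents exactly like an override failure.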
Practical diagnostics teams skip (and regret skipping)
- Model alias confusion: some providers use multiple names for the same model, making logs hard to interpret.
- Provider credential sharing: if provider credentials are shared, verify auth profiles are correctly distributed per agent.
- Workspace separation: check that each agent has its own workspace directory and that contexts are not being mixed.
- Version mismatch: CLI and dashboard versions may differ, causing confusing UI behavior.
- Cache invalidation: old cached model references may persist after configuration changes.
Edge cases that can mislead your debugging
Not every model routing problem is caused by the multi-agent system. Watch for these edge cases before concluding you have an override bug:
- Provider-side model renaming: a provider may have renamed a model, breaking your configured model reference.
- Explicit model override in prompts: system prompts that hardcode model names can bypass agent-level settings.
- Skills that call models directly: some skills may make their own model choices, ignoring agent configuration.
- Partial upgrade state: some services restarted with new config while others still run old configuration.
- Environment-specific overrides: development and production environments may have different model availability.
How to verify the fix is working
- Each agent reports a different model in gateway startup logs.
- Test prompts to each agent show expected behavior differences (speed, capabilities, formatting).
- Routing tests via different channels reach the intended agent with its assigned model.
- Provider dashboard shows separate usage per model, confirming agents use different models.
- Instance restart preserves agent model settings without manual reconfiguration.
Common mistakes that prolong this issue
- Assuming the bug is in the agent itself rather than configuration generation or routing.
- Restarting services individually instead of the full instance, leaving stale config in some services.
- Changing multiple settings at once (models, bindings, prompts) making diagnosis impossible.
- Testing only through one channel when the problem is binding-specific, not agent-specific.
- Reporting the bug without collecting minimal reproduction details from a fresh test instance.
Prevention: hardening multi-agent configuration
Once you have model overrides working, add safeguards to prevent regression. Document your intended agent-to-model mapping in your runbook. Add a validation step after configuration changes that checks each agent's loaded model. Consider adding a test channel that sends known prompts to each agent and verifies responses match expected model behavior. These checks catch configuration drift before it affects production workflows.
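The validation step suggested above can be a few lines of script run after every configuration change. Both inputs are assumptions you supply from your own sources of truth: the intended mapping comes from your runbook, and the loaded mapping comes from wherever you observe runtime state (startup logs, a config export, or an admin API).

```python
# Regression-check sketch: compare the runbook's intended agent-to-model
# mapping against what each agent actually loaded. Inputs are assumed to
# be simple id -> model dicts you build from your own tooling.
def config_drift(intended: dict[str, str], loaded: dict[str, str]) -> dict:
    """Map agent ID -> (intended, loaded) for every mismatch."""
    return {
        agent_id: (model, loaded.get(agent_id))
        for agent_id, model in intended.items()
        if loaded.get(agent_id) != model
    }
```

Wiring this into CI or a post-deploy hook turns silent configuration drift into a loud failure.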
When to consider managed hosting
Multi-agent configuration adds operational complexity. If you spend more time debugging agent model routing than using the agents themselves, managed hosting may reduce the burden. Hosted environments handle configuration generation, hot-reload propagation, and binding routing at the platform level. Compare tradeoffs at /compare/. If you want multi-agent functionality without managing the plumbing yourself, review /openclaw-cloud-hosting/.
Fix once. Stop recurring multi-agent model configuration issues.
If this keeps coming back, you can move your existing setup to managed OpenClaw cloud hosting instead of rebuilding the same stack. Import your current instance, keep your context, and move to a runtime with lower ops overhead.
- Import flow in ~1 minute
- Keep your current instance context
- Run with managed security and reliability defaults
If you would rather compare options first, review OpenClaw cloud hosting or see the best OpenClaw hosting options before deciding.
FAQ
Will fixing this require recreating all my agents?
Usually not. Most model override issues are fixed by correcting configuration generation or routing. Only recreate agents if you have corrupted database state that cannot be repaired through the UI.
Should I downgrade to an earlier version?
Downgrade is an option if multi-agent model overrides are critical and no fix is available. Capture your current configuration first, downgrade to the last known-good version, and verify agents work correctly. Plan to upgrade again once a fix is released.
Does this affect Built-In Chat agent selection?
Yes. If Built-In Chat lets you select agents but they all behave like the main agent, model override failure affects chat routing just like it affects channel bindings. Verify chat-specific routing rules in addition to agent configuration.
Sources
- OpenClaw issue #51545 (opened 2026-03-21)
- OpenClaw issue #55050 (opened 2026-03-26) — related context/session configuration
- First-party implementation: docs/worklog/2026-03-13-multi-agent-instance-config.md — multi-agent configuration architecture