Choosing a model
Every chat uses an LLM behind the scenes, and you can pick which one. Open the model picker in the chat input bar to switch mid-conversation. The choice applies to the current project until you change it again.
Available models (cost tier in parentheses)
- Qwen 3.6 Plus ($) — Alibaba's open-weight model, free tier
- MiniMax M2.7 ($) — inexpensive open-weight model
- Kimi K2.5 ($) — Moonshot's open-weight model, large context
- Gemini 2.5 Flash ($) — fast and cheap, very large context
- Claude Haiku 4.5 ($$) — fast, good for short tasks and tool calls
- GLM 5.1 ($$) — Zhipu's open-weight model
- Gemini 3.1 Pro ($$) — Google's flagship, strong on analysis and multimodal
- GPT-5.3 Codex ($$) — tuned for code-related tasks
- GPT-5.4 ($$) — OpenAI's general-purpose model
- Claude Sonnet 4.6 ($$) — strong all-rounder, the default for most work
- Claude Opus 4.6 ($$$) — highest quality, best for long-form writing and complex reasoning
$ is cheapest and $$$ is most expensive. Cost tier reflects relative credit consumption per token; actual spend also depends on how much the agent reads and writes.
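As a rough illustration of why tier alone doesn't determine spend, here is a sketch in Python. The tier multipliers and base rate are invented for the example, not Magister's actual pricing:

```python
# Illustrative only: these multipliers and the base rate are made up,
# not Magister's real pricing.
TIER_MULTIPLIER = {"$": 1, "$$": 5, "$$$": 25}

def estimate_credits(tier: str, tokens_in: int, tokens_out: int,
                     base_rate: float = 0.001) -> float:
    """Rough credit estimate: tier multiplier x total tokens x base rate."""
    return TIER_MULTIPLIER[tier] * (tokens_in + tokens_out) * base_rate

# A heavy session on a $ model can still out-cost a light session on a
# $$$ model, because token volume matters as much as the tier:
light_opus = estimate_credits("$$$", 3_000, 1_000)
heavy_qwen = estimate_credits("$", 200_000, 20_000)
print(heavy_qwen > light_opus)  # True
```

The point of the sketch: watch how much context the agent is reading and writing, not just which tier the model sits in.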
Availability depends on your plan: models your plan doesn't include appear greyed out in the picker. If you connect a BYOK key for a provider, Magister routes requests for that provider's models directly to your key, bypassing credit usage at the proxy layer.
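The BYOK routing rule can be pictured as follows. This is a simplified sketch; the provider mapping and function are hypothetical, not Magister's actual internals:

```python
# Hypothetical sketch of the BYOK routing rule, not Magister's real code.
MODEL_PROVIDER = {
    "Claude Sonnet 4.6": "anthropic",
    "Gemini 2.5 Flash": "google",
    "GPT-5.4": "openai",
}

def route_request(model: str, byok_keys: dict[str, str]) -> str:
    """Route to the user's own key if one exists for the model's provider,
    otherwise through the credit-metered proxy."""
    provider = MODEL_PROVIDER[model]
    if provider in byok_keys:
        return f"direct:{provider}"   # your key; no credit usage at the proxy
    return f"proxy:{provider}"        # metered against plan credits

print(route_request("GPT-5.4", {"openai": "sk-..."}))         # direct:openai
print(route_request("Claude Sonnet 4.6", {"openai": "sk-..."}))  # proxy:anthropic
```

Note that a key only affects that one provider's models; requests to every other provider still go through the proxy and consume credits.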
Per-project default: the model picker sets a default for the current project, and the project keeps using whichever model was last selected there. To set a different default for another project, switch to that project and pick a model there.
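Conceptually, the per-project default behaves like a map from project to last-selected model, falling back to the app default when a project has never had a selection. A simplified sketch (not the app's actual storage format):

```python
# Simplified sketch: per-project defaults as a project -> model map.
defaults: dict[str, str] = {}
FALLBACK = "Claude Sonnet 4.6"  # the app-wide default for most work

def select_model(project: str, model: str) -> None:
    """Picking a model in the picker updates only that project's default."""
    defaults[project] = model

def model_for(project: str) -> str:
    return defaults.get(project, FALLBACK)

select_model("docs-site", "Claude Haiku 4.5")
print(model_for("docs-site"))   # Claude Haiku 4.5
print(model_for("api-server"))  # Claude Sonnet 4.6 (untouched project keeps the fallback)
```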
Picking well: start with Sonnet 4.6 for most work. Move up to Opus 4.6 when you need top quality on writing or reasoning and don't mind the higher credit usage. Drop down to Haiku 4.5 or Gemini 2.5 Flash when you're running lots of fast, simple tool-driven steps and want to save credits.