Claude Code and Opus 4.6: Agent Teams Are Here
Anthropic shipped Claude Opus 4.6 on 5 February 2026, alongside Claude Code v2.1.32, with three major advances: a 1-million-token context window (beta), adaptive thinking, and a 128K-token output limit. At the same time, Claude Code gained Agent Teams, a research preview for multi-agent orchestration that is directly relevant to enterprise automation.
Opus 4.6: context and control
Opus 4.6 is the first Opus-class model to offer a 1M-token context window (in beta), so entire codebases and long document sets can sit in a single context. Adaptive thinking means the model decides how much reasoning effort to apply, with manual override via an /effort parameter, so you can trade speed for depth. The 128K output limit supports much longer generated artefacts (docs, reports, configs) in one pass. Benchmark scores improved sharply: Terminal-Bench 2.0 at 65.4% and OSWorld at 72.7%.
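In practice, the override is a per-session dial. A sketch of what that might look like inside a Claude Code session follows; the /effort command itself comes from the release notes, but the specific levels shown are assumptions, not confirmed syntax:

```
/effort high    # ask for deeper reasoning before a tricky refactor
  ...work through the refactor...
/effort low     # switch back towards speed for routine edits
```

The trade-off is the usual one: higher effort costs latency and tokens, so it belongs on the steps where correctness matters most.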
Agent Teams (research preview)
Agent Teams let multiple Claude instances work in parallel, with a "team lead" coordinating the work. Features include shared task boards with dependency resolution, direct messaging between agents via SendMessage, and quality gates through TeammateIdle and TaskCompleted hooks for code review and validation. The preview is enabled with the environment variable CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1. For enterprises, this is a step toward multi-agent workflows in which different agents own different steps (e.g. one for extraction, one for validation, one for reporting).
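As a concrete sketch, enabling the preview might look like this. The variable name comes from the announcement; everything else is illustrative:

```shell
# Enable the Agent Teams research preview (variable name from the announcement)
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1
claude   # start Claude Code with the preview active
```

A TaskCompleted hook could then act as a quality gate, running a review script before work is marked done. The hook name comes from the announcement; the settings shape below mirrors Claude Code's existing hooks configuration, and the script path is a hypothetical placeholder:

```json
{
  "hooks": {
    "TaskCompleted": [
      { "hooks": [ { "type": "command", "command": "./scripts/review-gate.sh" } ] }
    ]
  }
}
```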
Desktop: preview, review, merge
By 20 February 2026, the Claude Code desktop app had added app preview (run dev servers and view apps in the interface), automated inline code review, and PR monitoring with auto-fix and auto-merge. Sessions can move between desktop, mobile, and CLI. For teams, that means tighter loops between AI-generated code and human review, a pattern we mirror in our own guardrailed automation.
Why it matters for ConvertToAI
Long context plus adaptive thinking fits document-heavy and multi-step workflows (legal, compliance, due diligence). Agent Teams points the way to orchestrated multi-agent pipelines. We use these capabilities to design solutions where the right model handles the right step — Opus 4.6 for deep analysis and long-context reasoning, with other models for speed or specialism where needed.
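The model-per-step idea reduces to a routing table: each pipeline stage is assigned the cheapest model class that is adequate for it, with the deep model as the safe default. A minimal Python sketch, in which the step labels and model names are illustrative placeholders rather than product identifiers:

```python
# Sketch: route each pipeline step to the model class suited to it.
# Step labels and model names are illustrative placeholders.
ROUTES = {
    "extraction": "fast-small-model",   # high volume, shallow reasoning
    "validation": "mid-tier-model",     # structured checks against rules
    "analysis":   "opus-class-model",   # deep, long-context reasoning
}

def pick_model(step: str) -> str:
    """Return the model assigned to a pipeline step, defaulting to the deep model."""
    return ROUTES.get(step, "opus-class-model")
```

Defaulting unknown steps to the deepest model is a deliberate choice: an unexpected step is exactly where shallow reasoning is riskiest.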