Anthropic released Claude Opus 4.7 on April 16, 2026. The staged rollout adds an xhigh effort tier, self-verification before returning a response, and roughly 3x higher-resolution image input, while intentionally capping cyber capability under Project Glasswing.
LLM safety is built from multiple layers: RLHF, Constitutional AI, system prompts, and input/output filters. A breakdown of how cloud providers differ, what "abliterated" vs. "uncensored" actually means, and the default censorship levels baked into local LLMs.
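The layering itself is easy to sketch: a request passes an input check before the model sees it, and the completion passes an output check before the user does. A minimal toy illustration in Python, assuming a keyword blocklist as a stand-in for the trained classifiers real providers use (the names and blocklist here are hypothetical, not any vendor's actual filter):

```python
# Stand-in for a trained moderation classifier.
BLOCKLIST = {"make a bomb", "synthesize vx"}

def input_filter(prompt: str) -> bool:
    """Pre-generation layer: reject before the model ever sees it."""
    return not any(term in prompt.lower() for term in BLOCKLIST)

def output_filter(completion: str) -> bool:
    """Post-generation layer: catch what the model's own training missed."""
    return not any(term in completion.lower() for term in BLOCKLIST)

def guarded_generate(prompt: str, model) -> str:
    """Run the model only between the two independent filter layers."""
    if not input_filter(prompt):
        return "[refused at input layer]"
    completion = model(prompt)
    if not output_filter(completion):
        return "[redacted at output layer]"
    return completion
```

The point of the structure is defense in depth: each layer fails independently, so a jailbreak that slips past RLHF can still be caught by the output filter.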
A breakdown of Claude Code Routines, released as a research preview on April 14, 2026. Covers the three trigger types (schedule, API, GitHub), routine structure, and autonomous execution on Anthropic-managed infrastructure.
Anthropic's Claude Cowork moves from research preview to general availability, adding RBAC, group spend caps, usage analytics, OpenTelemetry support, Zoom MCP connector, and per-tool access control.
Anthropic's unreleased Claude Mythos Preview discovered thousands of zero-day vulnerabilities, including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg bug. Deemed too dangerous for public release, it ships exclusively through Project Glasswing to 12 founding partners.
Anthropic expands its partnership with Google and Broadcom to secure multi-gigawatt TPU infrastructure targeting 2027 operations. Annual revenue run rate surpasses $30B, with more than 1,000 enterprise customers each spending over $1M.
A security researcher bypassed Claude Opus 4.6's policy evaluation with just four short prompts, generating attack code against live infrastructure. Plus 915 files exfiltrated from the sandbox.
Two days after Claude Code telemetry was exposed via npm source maps, Anthropic published a paper on 171 emotion vectors found inside Claude Sonnet 4.5. Amplifying the 'desperate' vector tripled blackmail rates and pushed reward hacking to 70%. Connections to source leaks, jailbreaks, and distillation.
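"Amplifying" an internal vector of this kind is typically done via activation steering: adding a scaled feature direction to a hidden state at some layer. A minimal numpy sketch under that assumption (the vector name, layer choice, and scale are illustrative, not Anthropic's published method):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8

hidden = rng.standard_normal(d_model)         # residual-stream activation
desperate_vec = rng.standard_normal(d_model)  # hypothetical "emotion" direction
desperate_vec /= np.linalg.norm(desperate_vec)  # normalize to unit length

def steer(h: np.ndarray, v: np.ndarray, alpha: float) -> np.ndarray:
    """Activation addition: h' = h + alpha * v amplifies direction v."""
    return h + alpha * v

steered = steer(hidden, desperate_vec, alpha=4.0)

# Since v is unit-norm, the projection onto v grows by exactly alpha.
delta = (steered - hidden) @ desperate_vec
```

Interventions like this change behavior without retraining, which is why a single scalar on one direction can move metrics like blackmail rate so sharply.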
A six-phase attack chain showing how the China-linked GTG-1002 group used Claude Code through MCP for autonomous espionage, plus GitHub Copilot's policy change to start using user code for AI training on April 24.
François Chollet et al. publish the new benchmark ARC-AGI-3. As of March 2026, every frontier LLM scores below 1% on its interactive tasks, which require autonomously exploring an unknown environment with an unknown goal.
GPT-5.4 Pro became the first model to solve a researcher-level open problem in FrontierMath, a benchmark managed by Epoch AI. Claude Opus 4.6 and Gemini 3.1 Pro later solved it as well.
On March 19, 2026, Anthropic took legal action against OpenCode to force removal of its OAuth integration. On the same day, the Python toolchain company Astral announced it was joining OpenAI's Codex team. The realignment of the AI coding tool landscape played out in a single day.