Claude Code full source exposed via npm; OpenAI Codex token-theft flaw disclosed at the same time
At the end of March 2026, two major security stories around AI coding tools landed at once. Anthropic’s Claude Code was found to have its entire source code browsable via the npm registry, and OpenAI’s Codex had a branch-name command-injection vulnerability that could steal GitHub tokens.
Full source disclosure from Claude Code’s npm source maps
Discovery
On March 31, security researcher Chaofan Shou (@Fried_rice) discovered that @anthropic-ai/claude-code version 2.1.88 published to npm included a 57 MB source map file.
Source maps are debugging artifacts, consumed by browser DevTools and similar tooling, that map minified JavaScript back to its original source. Because Claude Code is built with Bun’s bundler, which generates source maps by default, the maps end up inside the published npm package unless they are explicitly excluded via .npmignore or the files field in package.json.
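Excluding build artifacts at publish time is a small configuration fix. A minimal sketch, assuming a hypothetical Bun-bundled CLI whose output lives in dist/ (the package and file names are illustrative, not Anthropic’s actual layout): because the files field in package.json is an allowlist, listing only the bundle keeps .map files out of the tarball.

```json
{
  "name": "@example/my-cli",
  "version": "1.0.0",
  "bin": { "my-cli": "dist/cli.js" },
  "files": ["dist/cli.js"]
}
```

Running `npm pack --dry-run` afterwards prints the exact file list that would be published, which makes this class of leak easy to catch in CI.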
Scale of the disclosure
| Item | Value |
|---|---|
| TypeScript files | ~2,300 |
| Lines of code | 512,000+ |
| Source map size | 57 MB |
| Built-in tools | ~40 |
| Slash commands | ~50 |
What leaked was the client-side CLI code; model weights, training data, and the server-side API system were not included.
Internal architecture
The source maps reveal detailed internals of Claude Code.
| Module | Size | Contents |
|---|---|---|
| Query engine | ~46,000 LOC | LLM API calls, streaming, caching, orchestration |
| Tool system | ~29,000 LOC | ~40 permission-gated tools for file ops, Bash execution, web fetch, LSP integration |
| Multi-agent | — | A parallel execution mechanism called “Swarms,” running multiple agents concurrently in isolated contexts |
| IDE bridge | — | Bidirectional communication with VS Code and JetBrains extensions via JWT auth |
| Persistent memory | — | File-based context persisted across sessions |
The runtime uses Bun rather than Node.js. The terminal UI is component-based rendering with React + Ink, and validation is standardized on Zod v4.
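The table above notes that the IDE bridge authenticates with JWTs. As a rough illustration of what token-based auth between a CLI and an editor extension involves, here is a minimal HS256 sign/verify sketch using only the standard library. This is a hypothetical reconstruction, not Claude Code’s actual implementation; the claim names and secret are invented.

```python
import base64
import hashlib
import hmac
import json


def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign(payload: dict, secret: bytes) -> str:
    # Serialize header and payload, then HMAC-SHA256 the signing input.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"


def verify(token: str, secret: bytes) -> bool:
    # Recompute the signature and compare in constant time.
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)


token = sign({"ide": "vscode", "session": "abc"}, b"shared-secret")
print(verify(token, b"shared-secret"))  # True
print(verify(token, b"wrong-secret"))   # False
```

A shared-secret scheme like this only works because both endpoints run on the same machine; the bridge presumably exchanges the secret out of band when the extension connects.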
Unreleased features and feature flags
A total of 44 feature flags were found, revealing features that are built but not shipped.
The one that drew attention is BUDDY: a Tamagotchi-style AI pet shown in a speech bubble next to the input box. It implements species and rarity tiers, character stats, hat accessories, and animation sequences, and can be accessed via the hidden /buddy command.
Unannounced model “Capybara”
References to an unannounced model family also appeared in the codebase:
- capybara
- capybara-fast
- capybara-fast[1m]
These names have not been officially announced by Anthropic.
Telemetry and user-behavior tracking
The code also shows telemetry that logs detailed user behavior. It includes a frustration indicator (profanity detection), the number of repeated “continue” prompts, and other usage-pattern metrics.
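As a sketch of what such usage-pattern metrics might look like, here is a hypothetical session tracker combining a profanity-based frustration signal with a repeated-“continue” counter. All names and the word list are invented for illustration; this is not the leaked telemetry code.

```python
from dataclasses import dataclass

# Placeholder word list; a real detector would be far more thorough.
PROFANITY = {"damn", "wtf"}


@dataclass
class SessionMetrics:
    frustration_hits: int = 0
    continue_streak: int = 0

    def record_prompt(self, text: str) -> None:
        # Count prompts containing profanity as frustration signals.
        if set(text.lower().split()) & PROFANITY:
            self.frustration_hits += 1
        # Track consecutive bare "continue" prompts; reset otherwise.
        if text.strip().lower() == "continue":
            self.continue_streak += 1
        else:
            self.continue_streak = 0


m = SessionMetrics()
m.record_prompt("continue")
m.record_prompt("continue")
print(m.continue_streak)   # 2
m.record_prompt("wtf is this")
print(m.frustration_hits)  # 1
```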
Anthropic’s response
Anthropic removed the affected version from npm, but had not issued an official statement at the time of writing. This is not the first time; a similar source-map leak occurred in early 2025.
OpenAI Codex branch-name command injection
Vulnerability overview
BeyondTrust’s Phantom Labs discovered a command-injection vulnerability in OpenAI Codex: the GitHub branch-name parameter in HTTP requests to the task-creation API was not sufficiently sanitized.
Attack chain
graph TD
A["Attacker embeds a malicious<br/>command in a branch name"] --> B["Task-creation request<br/>sent to the Codex API"]
B --> C["Codex passes the branch name<br/>to a shell command"]
C --> D["No input sanitization;<br/>the command executes"]
D --> E["GitHub token read from auth.json<br/>inside the container"]
E --> F["Token sent via task output<br/>or to an external server"]
F --> G["Read/write access to the<br/>victim's repositories"]
Attackers used the Unicode Ideographic Space character (U+3000) to inject commands into the branch name. Although it looks like a normal space, when the branch name reached the shell it acted as a command separator. The GitHub OAuth access token, stored in plaintext in auth.json inside the Codex container, could then be extracted.
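The bypass works because blocklist-style filters tend to check only ASCII metacharacters. The sketch below contrasts a naive blocklist with an allowlist that accepts only characters expected in a branch name; the function names and character sets are illustrative, not Codex’s actual code.

```python
import re

IDEOGRAPHIC_SPACE = "\u3000"


def blocklist_ok(branch: str) -> bool:
    # Naive check: reject obvious ASCII shell metacharacters and spaces.
    # U+3000 is not in this set, so it slips through.
    return not any(c in branch for c in " ;|&$`()<>")


def allowlist_ok(branch: str) -> bool:
    # Safer: accept only characters expected in a Git branch name.
    # Anything outside the allowlist, including U+3000, is rejected.
    return re.fullmatch(r"[A-Za-z0-9._/\-]+", branch) is not None


malicious = f"feature/x{IDEOGRAPHIC_SPACE}cat{IDEOGRAPHIC_SPACE}auth.json"
print(blocklist_ok(malicious))  # True  -> passes the naive filter
print(allowlist_ok(malicious))  # False -> rejected
```

Even with an allowlist, the more robust fix is to never hand untrusted input to a shell at all, e.g. by invoking git with an argument vector instead of an interpolated command string.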
Impact
All components were affected: the ChatGPT web UI, the Codex CLI, the Codex SDK, and IDE extensions. The stolen token grants read/write access to the victim’s private repositories, enabling supply-chain attacks.
Timeline
| Date | Event |
|---|---|
| 2025-12-16 | BeyondTrust Phantom Labs reports the vulnerability |
| 2025-12-23 | OpenAI applies an initial hotfix (7 days later) |
| 2026-01-30 | Reinforced protections for shell command handling |
| 2026-02-05 | Marked Critical Priority 1; fix completed |
It took 7 days from report to the first response, and roughly 7 weeks to fully resolve. No evidence of exploitation was found.
GPT-5.4 references leaked from the Codex repository
Separate from the Codex vulnerability, references also leaked from OpenAI’s public GitHub repository (openai/codex).
When PR #13050 added full-resolution vision support on February 27, the minimum model version was written as (5, 4). On March 2, PR #13212 added the /fast slash command and a ServiceTier enum, which also referenced GPT-5.4.
OpenAI changed the version numbers with seven force pushes over five hours, and a screenshot posted by an employee showing GPT-5.4 in the model-selector dropdown was also deleted. No official statement has been issued.
Claude Code could have avoided its leak by excluding source maps via .npmignore, and Codex missed the most basic step of sanitizing a branch name. Neither case involved a sophisticated attack: one was a build-pipeline misconfiguration, the other a lack of input validation.