
Bun PR #30412 merged: 1M-line Zig-to-Rust rewrite hits main, try it via canary

Ikesan

When I first heard about this, it sounded like Rust had been rewritten in Bun. It was the other way around.
Bun, the JavaScript runtime, moved its main implementation from Zig to Rust.

On May 14, 2026, PR #30412 on oven-sh/bun was merged into main.
The PR title is, verbatim, “Rewrite Bun in Rust.”
It went from the claude/phase-a-port branch into main: 6,755 commits, 2,188 files changed, 1,009,257 lines added, 4,024 lines removed.

What Jarred Sumner wrote in the PR body

The PR description itself is short. The points:

  • Bun’s existing test suite passes on all platforms
  • Several memory leaks and flaky tests were fixed
  • Binary size shrank by 3–8 MB
  • Benchmarks are between neutral and faster
  • The main motivation is compiler-assisted detection of memory bugs
  • The codebase is largely the same: same architecture, same data structures, few third-party libraries, no async Rust
  • You can try it now with bun upgrade --canary
  • Some optimization work is still pending before non-canary. Cleanup will land in follow-up PRs

The most recent official release is Bun v1.3.14 from May 13, which is the final Zig-era release. Its release notes feature Bun.Image, the isolated linker’s global virtual store, and HTTP/2 + HTTP/3 work; the Rust rewrite is not mentioned there.

A language swap, not a redesign

In the PR body, Sumner is explicit that this is not a redesign.
Same architecture, same data structures, few third-party libraries, no async Rust: the approach was to port the structure of the Zig implementation into Rust as directly as possible.

Bun is still a JavaScript/TypeScript runtime sitting on top of JavaScriptCore — this is not a “rewrite V8 in Rust” story. The FFI to JavaScriptCore, the Node compatibility surface, the package manager, the bundler, and the test runner all keep their responsibilities. Only the implementation language behind each piece changed.

The primary goal Sumner names is memory safety, not performance.
After years of debugging memory bugs in the Zig implementation, the team wanted compiler-level catches for use-after-free, double-free, and leaks on free paths. Rust won’t automatically eliminate leaks from references held too long, or bugs caused by re-entrancy across the JavaScript boundary. But entire classes of bugs that used to surface only at runtime under Zig now move to compile time.
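A minimal illustration of what “moves to compile time” means in practice (this is a generic borrow-checker sketch, not code from the Bun port): holding a reference into a buffer while that buffer is mutated or freed is the shape of a use-after-free. In Zig this compiles and fails at runtime; in Rust it never compiles.

```rust
// Generic sketch of the bug class the PR body is about: a reference
// held into a buffer that is then mutated. The borrow checker rejects
// the unsound ordering before a binary exists.
fn main() {
    let mut buf = vec![1, 2, 3];
    let first = &buf[0];
    // Uncommenting the next line is a compile error, not a crash:
    // buf.clear(); // error[E0502]: cannot borrow `buf` as mutable
    println!("{first}");
    buf.clear(); // fine here: the borrow of `first` has already ended
    assert!(buf.is_empty());
}
```

The runtime crash (or silent corruption) you would chase in a debugger becomes an E0502 diagnostic at build time.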

Matching Zig behavior in Rust

“Same architecture, ported to Rust” sounds boring at the elevator-pitch level. The commits tell a different story — a lot of work went into reproducing Zig-specific behaviors in Rust. Some examples from commit messages:

  • LineOffsetTable SmallVec<[i32;256]> stack scratch — match Zig stack — the buffer the Zig version kept on the stack is now SmallVec<[i32; 256]> to preserve the same stack-usage pattern
  • #[inline] Method::which/find — match Zig comptime-map inlining — the comptime-forced inlining from Zig is recreated with explicit #[inline] on the Rust side
  • lazy JsonCache arena — avoid mi_heap_new per bundler worker — avoids paying mi_heap_new on every worker startup
  • TranspilerJob::run stack-local Arena — drop per-worker leaked memory — fixes per-worker leaks
  • ParseTask: dealloc Result without Drop (Zig destroy is struct-only) — Zig’s destroy only works on structs, so the Rust port had to adapt to Drop semantics
  • read VM thread-local directly + inline drain_microtasks — fetches the VM reference directly from TLS
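The `#[inline] Method::which` commit is the easiest of these to picture. A hypothetical sketch of the pattern, with made-up enum variants and method strings (Bun's actual `Method` type is not shown here): a Zig comptime string-to-enum map becomes a plain Rust `match` over byte strings, with `#[inline]` standing in for the inlining Zig guaranteed at comptime.

```rust
// Hypothetical sketch of the "comptime-map inlining" port. A match
// over byte-string literals compiles to a comparison chain with no
// hashing and no heap allocation, and #[inline] asks the optimizer
// to inline it at call sites the way Zig's comptime map was.
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
enum Method {
    Get,
    Post,
    Put,
    Delete,
}

impl Method {
    #[inline]
    fn which(name: &[u8]) -> Option<Method> {
        match name {
            b"GET" => Some(Method::Get),
            b"POST" => Some(Method::Post),
            b"PUT" => Some(Method::Put),
            b"DELETE" => Some(Method::Delete),
            _ => None,
        }
    }
}

fn main() {
    assert_eq!(Method::which(b"POST"), Some(Method::Post));
    assert_eq!(Method::which(b"PATCH"), None);
}
```

Unlike Zig's `comptime`, `#[inline]` is a hint rather than a guarantee, which is presumably why the commit calls it out as something that had to be matched deliberately.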

mimalloc heap operations (mi_heap_*), per-worker arenas, TLS-based VM access — the low-level control surface from the Zig version carries over. The Rust side reaches for #[inline(never)], SmallVec, and explicit Box drop paths where needed; bare-metal patterns remain visible wherever they are required.
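The "dealloc Result without Drop" commit points at a real friction between the two languages. Zig's `destroy` frees memory without running any per-field cleanup; Rust's `drop` runs Drop glue recursively. A hedged sketch of how such a port can opt out of Drop glue and free fields on an explicit path, using `std::mem::ManuallyDrop` (the struct and field names here are illustrative, not Bun's):

```rust
use std::mem::ManuallyDrop;

// Illustrative stand-in for a parse result whose teardown the port
// wants to control explicitly rather than leaving to Drop glue.
struct ParseResult {
    source: String,
    errors: Vec<String>,
}

// Take a result whose automatic Drop has been suppressed and free its
// fields on an explicit, Zig-destroy-style path.
fn consume(result: ManuallyDrop<ParseResult>) -> usize {
    let inner = ManuallyDrop::into_inner(result);
    let n = inner.source.len();
    drop(inner.errors); // explicit free, field by field
    drop(inner.source);
    n
}

fn main() {
    let result = ManuallyDrop::new(ParseResult {
        source: "let x = 1;".to_string(),
        errors: Vec::new(),
    });
    assert_eq!(consume(result), 10);
}
```

`ManuallyDrop` is the safe end of this spectrum; the point is that "Zig destroy is struct-only" semantics do not map one-to-one onto Rust ownership, and the port had to pick a mapping deliberately.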

The decision to skip async Rust fits the same plan. Bun’s I/O depends on a custom event loop (uws) and an existing threading model; rebuilding on top of tokio or a Future-based design would change the architecture itself. Sumner explicitly stated “same data structures,” so avoiding async Rust is consistent with that.
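To make the "no async Rust" shape concrete, here is a generic sketch (not Bun's code; the event names and channel are invented for illustration) of the difference in structure: a plain blocking loop draining a queue on its own thread, with callbacks running synchronously, rather than `Future`s scheduled by an executor.

```rust
use std::sync::mpsc;
use std::thread;

// Invented event type for illustration; a uws-style loop would be
// driven by epoll/kqueue readiness rather than a channel.
enum Event {
    Io(&'static str),
    Shutdown,
}

// A blocking event loop: no executor, no wakers, no Future state
// machines. Work runs synchronously on the loop's own thread.
fn run_loop(rx: mpsc::Receiver<Event>) -> Vec<String> {
    let mut log = Vec::new();
    while let Ok(ev) = rx.recv() {
        match ev {
            Event::Io(what) => log.push(format!("io:{what}")),
            Event::Shutdown => break,
        }
    }
    log
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || run_loop(rx));
    tx.send(Event::Io("read")).unwrap();
    tx.send(Event::Shutdown).unwrap();
    assert_eq!(handle.join().unwrap(), vec!["io:read".to_string()]);
}
```

Porting this shape onto tokio would mean changing data structures and threading assumptions throughout, which is exactly the redesign the PR says it avoided.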

What the branch names say about the AI workflow

The PR’s source branch was claude/phase-a-port. The “Phase A” naming hints at a Phase B and beyond. The PR body also mentions cleanup landing in follow-up PRs, so later phases are likely focused on dedup, internal API cleanup, and optimization.

Tracing the merge commits absorbed into the PR, multiple claude/* sub-branches ran in parallel:

  • claude/phase-a-port — the main port
  • claude/bench-until-green — a branch dedicated to looping benchmarks until they go green. Performance regressions are isolated here
  • claude/code-dedup — code consolidation. Successive commits fold -968, -692, -1320, -1946, -480, and -717 LOC into shared implementations, 6,123 lines consolidated in total
  • claude/ci-auto-fix-53852 / claude/ci-auto-fix-53863 — branches that auto-fix CI failures, scoped per issue number

A 6,755-commit, 1M-line diff isn’t realistically reviewable by a human in a normal PR flow.
What made it work instead was that Bun already had a thorough test suite and benchmark surface, and that the port, dedup, CI fixes, and benchmark tuning ran in separate themed branches that were then merged together. With “pass the same tests and benchmarks” as a machine-checkable goal, the AI generate → verify loop has something concrete to converge on.

The PR also shows Claude bot and CodeRabbit review traces — one AI generating, a different AI checking. Things tests and benchmarks can’t catch — performance regressions outside the bench surface, platform-specific behavior, unsafe boundaries, long-term maintainability — still need human review and long-term operational experience.

Why this landed directly on main, not a separate repo

The first thing that surprised me was that this went straight into oven-sh/bun on main. With 2,188 files changed and a million-plus lines added, you’d normally expect a separate repository (bun-rust or similar) or a long-lived branch where it stabilizes before being merged into the main project.

But following the PR and the commits, the choice to do it in-repo lines up. The port keeps the same architecture and the same data structures, so this is closer to a language swap than a fork — a separate repo would have ended up with the same codebase anyway. Tests and benchmarks live in the existing repo, and splitting them across two repos would have meant maintaining the verification surface twice; the claude/bench-until-green loop wants a single home for the benchmarks it gates on. And given the Phase A naming, Phase B and beyond are coming. Doing this in a separate repo would mean facing the same merge magnitude all over again at the eventual integration point.

The trade-off is that the history gets messy. git blame is dominated by this single merge commit; Zig-era author attribution is largely buried. The README still says “written in Zig” at the time of writing — the code transition and the documentation update aren’t in sync. Anyone wanting to read the Zig-era history has to follow the merge commit’s parents.

Why Anthropic is rewriting Bun

Bun became part of Anthropic in 2025 and is used as the runtime for Claude Code. As I noted in “Claude Code’s npm sourcemap revealed Bun usage,” Claude Code runs on Bun, not Node.js.

That relationship is the backdrop for this rewrite. Anthropic is going to keep paying the cost of maintaining Claude Code’s runtime. By Sumner’s account, the Zig version has cost the team significant time on memory bug investigation and fixes over the years. At the same time, the requirements that drove the original Zig choice — single-binary distribution, JavaScriptCore FFI, low-level control — still need to hold. And the existing test suite and benchmarks give an AI-driven port a real verification substrate to converge against.

When those conditions all line up, the investment becomes obvious: hand type, borrow, and ownership checking to the compiler, and spend less engineering time chasing memory bugs. The “1 million lines ported by AI” headline is the surface story. What’s behind it is reducing long-term maintenance cost for the runtime that Claude Code depends on.

Where this leaves the Zig ecosystem

Bun was the flagship large-scale production Zig project: for many developers, the first example that came to mind for real-world Zig adoption, and a project whose existence raised Zig’s profile.

The GitHub main branch still has “written in Zig” in the README at the time of writing. But GitHub’s language stats now show Rust at 46.6% and Zig at 32.2% — the codebase has already flipped in Rust’s favor. The README and the official site copy will catch up later.

“Zig was bad, that’s why they switched” doesn’t hold. The low-level control, JavaScriptCore FFI, and single-binary distribution Bun needed all worked fine in Zig, and the original decision to pick Zig still makes sense. What changed is that long-term maintenance is now owned by Anthropic, and the team took the trade-off of having the compiler do more of the memory-safety work.

What the Zig community keeps is the set of practical patterns Bun demonstrated — JSC integration, a custom event loop, single-binary distribution, low-level I/O implementation — and the fact that this implementation will keep being read as a reference point for future Zig projects. Bun’s own language choice has changed, but the path you walk to build a production runtime in Zig is preserved in its commit history.

How to try it now, and what to watch later

If you’re using Bun purely as a toolchain, there isn’t much to act on yet. The regular bun upgrade lands you on v1.3.14, which is the final Zig release — its release notes are dominated by Bun.Image and HTTP/3, not the Rust port. The entry point for the Rust version is canary, switched on with bun upgrade --canary.

If you run Bun in production CI or as a server runtime, before worrying about the Rust port itself, check the behavior you actually rely on:

  • The behavior of bun install and dependency resolution
  • Stability of bun test and whether your existing tests pass
  • Native addons and N-API-dependent package behavior
  • File watching / watcher behavior
  • Differences across Linux, macOS, and Windows

The PR body claims existing tests pass on all platforms, but that’s not the same as a guarantee for your specific dependency tree and execution pattern.

Once the Rust version graduates from canary into the default release, the real checking begins: crash rate, memory footprint, startup time, CI duration, and behavior across the three platforms, one item at a time. Only from there does it become visible whether the numbers actually change for your workload.
