
Anthropic Secures Multi-GW TPU/ASIC Infrastructure with Google and Broadcom

Ikesan

On April 6, 2026, Anthropic announced a new agreement expanding its partnership with Google Cloud and Broadcom to secure multi-gigawatt (GW) next-generation TPU capacity. Operations are expected to begin in 2027. Anthropic CFO Krishna Rao described it as “the largest compute investment in AI infrastructure scaling.”

The announcement gathered 164 points on Hacker News.

Scale of the Deal

“Multi-gigawatt” is hard to grasp on its own. According to Broadcom’s SEC filing, the specific figure is 3.5 GW of compute capacity. To put that in perspective, a typical data center has a power capacity of tens of MW to around 100 MW. 3.5 GW is equivalent to roughly 35–70 large-scale data centers.
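The conversion above is straightforward to check. A rough sketch, using the article's figures (the 50 MW and 100 MW facility sizes are typical ranges for large data centers, not deal-specific numbers):

```python
# Back-of-envelope check: how many typical data centers fit in 3.5 GW?
DEAL_CAPACITY_MW = 3_500          # 3.5 GW expressed in megawatts
SMALL_DC_MW, LARGE_DC_MW = 50, 100  # assumed range for a large facility

print(DEAL_CAPACITY_MW // LARGE_DC_MW)  # 35 data centers at 100 MW each
print(DEAL_CAPACITY_MW // SMALL_DC_MW)  # 70 data centers at 50 MW each
```

Hence the "roughly 35–70 large-scale data centers" figure.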

The backdrop is Broadcom’s March earnings. CEO Hock Tan said “2026 has gotten off to a strong start” and revealed that Broadcom was already providing 1 GW of capacity for Google-manufactured TPU supply to Anthropic. This announcement expands that to 3.5 GW.

Most of the new capacity will be located in the United States. This is an extension of Anthropic’s $50B U.S. compute infrastructure investment plan announced in November 2025.

Revenue Run Rate Surpasses $30B

The business numbers released alongside this announcement are staggering. The run rate (annualized revenue) has surpassed $30B. It was $9B at the end of 2025, meaning it has more than tripled in just four months.

The trajectory shows the acceleration clearly:

| Period | Run Rate |
| --- | --- |
| End of 2025 | $9B |
| February 2026 (Series G) | $14B |
| March 2026 | $19B |
| April 2026 | $30B+ |

That works out to over $5B per month being added. At the time of the February Series G announcement, enterprise customers spending over $1M annually numbered 500+. This announcement puts that figure at 1,000+—a doubling in under two months, virtually unprecedented in the SaaS world.
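The growth figures above can be sanity-checked two ways, linearly and as a compound rate (a sketch using the article's $9B and $30B endpoints over roughly four months):

```python
# Rough growth implied by the run-rate trajectory:
# $9B at end of 2025 to $30B+ by April 2026, ~4 months.
start_b, end_b, months = 9.0, 30.0, 4

avg_added_per_month = (end_b - start_b) / months        # linear view
monthly_growth = (end_b / start_b) ** (1 / months) - 1  # compound view

print(f"${avg_added_per_month:.2f}B added per month")    # $5.25B
print(f"{monthly_growth:.0%} compound monthly growth")   # 35%
```

Either way the "over $5B per month" claim holds up.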

Claude Code’s run rate alone exceeds $2.5B. For a product that hasn’t even been out a year since its May 2025 public launch, that number is extraordinary.

Multi-Hardware Strategy

Anthropic’s distinctive approach is a multi-hardware strategy that avoids dependence on any single chip vendor. Claude’s training and inference run on three hardware families:

```mermaid
graph LR
    A[Claude<br/>Training + Inference] --> B[AWS Trainium<br/>Project Rainier]
    A --> C[Google TPU<br/>Massively expanded in this deal]
    A --> D[NVIDIA GPU<br/>General-purpose GPU]
    B --> E[AWS Bedrock]
    C --> F[Google Vertex AI]
    D --> G[Microsoft Azure Foundry]
```

By matching chip types to workload characteristics, Anthropic optimizes performance while maintaining supply chain resilience. AWS, through Project Rainier, remains Anthropic’s primary cloud and training partner, and Anthropic has been explicit that the Google/Broadcom expansion does not replace that relationship.

It was also reaffirmed that Claude is the “only frontier AI model” available on all three major clouds: AWS Bedrock, Google Vertex AI, and Microsoft Azure Foundry.

TPUs and Custom ASICs

The deal involves Google TPUs and Broadcom-manufactured custom ASICs. Google’s current TPU is the 6th-generation “Trillium,” which achieved the following improvements over the previous TPU v5e:

| Metric | Trillium (TPU v6) |
| --- | --- |
| Peak compute performance | 4.7× vs. v5e |
| HBM capacity and bandwidth | 2× vs. v5e |
| Chip-to-chip interconnect | 2× vs. v5e |
| Energy efficiency | 67%+ improvement vs. v5e |
| Training cost-performance | 2.5× vs. v5e |

Each pod scales to a maximum of 256 chips, and clusters exceeding 100,000 chips can be assembled via Google AI Hypercomputer. Whether the “next generation” coming online in 2027 refers to a Trillium successor or Broadcom’s custom ASICs is not specified, but The Register notes that Broadcom handles contract manufacturing of Google’s TPU designs, making the two effectively inseparable.

A quick comparison with NVIDIA GPUs: the H100/B200 family is the de facto standard for PyTorch-based R&D thanks to the overwhelming advantage of the CUDA ecosystem. However, in large-scale production environments—especially inference workloads—TPUs and custom ASICs often deliver better cost-performance. At Anthropic’s scale, even a per-chip cost advantage of a few percentage points translates into hundreds of millions of dollars annually, making the multi-hardware strategy an economic inevitability.
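To make the scale argument concrete, here is an illustrative calculation. The fleet size, per-chip annual cost, and savings rate below are hypothetical round numbers chosen for illustration, not figures from the article:

```python
# Illustrative only: what a few percent of per-chip cost means at fleet scale.
fleet_chips = 500_000               # hypothetical fleet size
annual_cost_per_chip_usd = 10_000   # assumed power + amortized hardware
savings_rate = 0.03                 # a 3% per-chip cost advantage

annual_savings = fleet_chips * annual_cost_per_chip_usd * savings_rate
print(f"${annual_savings / 1e6:.0f}M saved per year")  # $150M
```

Even under these conservative assumptions, a small per-chip edge compounds into nine-figure annual savings.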

Why Multi-GW Infrastructure Is Necessary

Why does an AI company need power-plant-level electricity? 3.5 GW is equivalent to three or four nuclear reactors.

Training costs for frontier models are increasing by more than 10× per generation. If GPT-3’s training compute is set as 1, GPT-4-class models required roughly 100×, and current frontier models are estimated at 1,000× or more. Inference demand is also growing exponentially—Anthropic’s Claude handles production workloads for over 1,000 enterprise customers.
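The scaling claim above reduces to a simple exponential (a sketch using the article's rough 10×-per-generation estimate, with GPT-3 normalized to 1):

```python
# Training compute relative to GPT-3, growing ~10x per model generation
# (the article's rough estimate, not a precise measurement).
def relative_compute(generations_after_gpt3: int) -> int:
    return 10 ** generations_after_gpt3

print(relative_compute(2))  # GPT-4-class: ~100x
print(relative_compute(3))  # current frontier: ~1000x
```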

Power efficiency of AI-dedicated chips improves with each generation, but model size and demand are growing faster. The result: data center power capacity has become the bottleneck. CoinDesk reported that “bitcoin miners face a new rival for cheap power,” showing that AI infrastructure’s power grab is already spilling over into other industries.

Position in the AI Infrastructure Arms Race

AI companies’ infrastructure investments have escalated rapidly over the past year.

| Company | Investment Scale |
| --- | --- |
| Microsoft | $80B data center investment plan (FY2026) |
| Google | $75B infrastructure investment plan (FY2026) |
| Meta | $60B–$65B infrastructure investment plan (FY2026) |
| Amazon | AWS Trainium custom chips + Project Rainier |
| Anthropic | $50B U.S. compute infrastructure investment + this 3.5 GW deal |

While three hyperscalers each pour tens of billions of dollars into infrastructure, Anthropic—as a model provider—is competing at the same scale of infrastructure investment. In contrast to OpenAI’s reliance on Microsoft infrastructure through the Stargate JV, Anthropic maintains multi-cloud deployment across AWS, Google, and Microsoft while also diversifying at the chip level with Trainium, TPU, and NVIDIA GPU—a unique position.

Recent related news includes Claude Code’s third-party lockout and OpenAI Codex’s shift to token-based pricing, reflecting intensifying monetization competition in AI coding tools. Anthropic’s strategy of immediately channeling rapid revenue growth into infrastructure investment is a bet that this $30B run rate is sustainable.