
SoftBank, NEC, Honda, and Sony launch Japan AI Foundation Model Development for domestic physical AI

Ikesan

On April 12, 2026, SoftBank, NEC, Honda, and Sony Group established a new company called “Japan AI Foundation Model Development” to build domestic AI foundation models.
Japan’s three mega-banks (MUFG, SMBC, Mizuho), Nippon Steel, and Kobe Steel are also investing, with AI startup Preferred Networks (PFN) providing technical collaboration on model development.
METI is backing the project with approximately ¥1 trillion over five years, bringing the combined public-private investment to roughly ¥3 trillion.

The goal is not another ChatGPT-style conversational AI.
The strategy is to build a foundation model for “physical AI”—AI that autonomously controls robots and machinery—trained on Japan’s proprietary industrial data.

Company Overview

| Item | Details |
| --- | --- |
| Company name | Japan AI Foundation Model Development |
| Location | Shibuya, Tokyo |
| Core four | SoftBank, NEC, Honda, Sony Group (~10%+ equity each) |
| Other investors | MUFG Bank, SMBC, Mizuho Bank, Nippon Steel, Kobe Steel |
| Technical partner | Preferred Networks (PFN) |
| Team | ~100 AI engineers |
| CEO | Former SoftBank executive |
| Target | Trillion-parameter-class foundation model |

The four core companies each hold stakes of just over 10 percent, sharing management responsibility. The remaining shares are distributed among the other investors.

Their roles break down as follows.

| Company | Role |
| --- | --- |
| SoftBank / NEC | Lead foundation model development. SoftBank provides compute infrastructure and data centers |
| Honda / Sony Group | Deploy the finished model in their products: autonomous driving, general-purpose robots, gaming/entertainment, semiconductors |
| PFN | Technical collaboration on model architecture design and development |

The finished model is intended for broad distribution across Japanese industry, not just the investors.
The company plans to apply soon for NEDO’s open call for “Multimodal Foundation Model Development Toward AI Robotics and Physical AI,” which carries up to ¥383.4 billion in funding.

What Is Physical AI?

Physical AI is a concept championed by NVIDIA CEO Jensen Huang. It refers to AI that understands real-world physics and autonomously controls robots and machinery.
Where text-generation AI like ChatGPT and Claude operates in “digital space,” physical AI operates in “physical space.”

```mermaid
graph TD
    A[AI Foundation Model] --> B[Language AI<br/>Text generation / dialogue]
    A --> C[Physical AI<br/>Real-world control]
    B --> D[ChatGPT<br/>Claude<br/>Gemini]
    C --> E[Autonomous driving]
    C --> F[Industrial robots]
    C --> G[Construction machinery]
    C --> H[General-purpose humanoids]
    E --> I[Environment recognition<br/>& path planning]
    F --> J[Assembly & transport<br/>autonomous control]
    G --> K[Terrain recognition<br/>& task judgment]
    H --> L[Multi-joint<br/>coordinated motion]
```

The key technical building blocks of physical AI break down as follows.

| Technical element | Description | Example |
| --- | --- | --- |
| Multimodal perception | Unified understanding of composite data from cameras, LiDAR, tactile sensors, etc. | Obstacle detection in factories, part pose estimation |
| World model | Predicting outcomes of actions via physics simulation | "Grasping at this angle will drop it"-style pre-action reasoning |
| Real-time control | Millisecond-level actuation | Robot arm trajectory correction, vehicle steering |

Traditional industrial robots merely repeat programmed motions. Physical AI adapts autonomously to environmental changes.
An autonomous mobile robot (AMR) in a factory, for instance, instantly recalculates its route when someone walks across its path.
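That replanning behavior can be sketched with a toy grid world (a minimal illustration of the idea, not any vendor's actual planner): compute a shortest path, then recompute it when an obstacle appears mid-route.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 4-connected grid; 1 = blocked. Returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 2))    # initial plan

# A person steps into cell (1, 1): mark it blocked and replan immediately.
grid[1][1] = 1
new_path = bfs_path(grid, (0, 0), (2, 2))
```

A real AMR does the same loop continuously, with the "grid" rebuilt from live sensor data on every cycle.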

NVIDIA has built an ecosystem for physical AI on its GPUs through Isaac (robotics), DRIVE (autonomous driving), and Omniverse (digital twin).
Hyundai of South Korea has declared it will mass-produce 30,000 AI-equipped robots annually by 2028; the global race is accelerating.

Why Japan Is Betting on Physical AI

In general-purpose large language model (LLM) development, US players—OpenAI, Google, Anthropic—and China’s DeepSeek are overwhelmingly ahead.
The investment gap is stark: Japan’s total AI spending is roughly one-thirtieth of the US figure.

GPT-4 has an estimated 1.8 trillion parameters; DeepSeek-V3 has 671 billion.
Competing head-on with raw capital is a losing proposition.

Japan’s answer is an asymmetric strategy: specialize in physical AI.

```mermaid
graph LR
    A[Japan's strengths] --> B[High-quality<br/>manufacturing data]
    A --> C[Robotics<br/>know-how]
    A --> D[Materials &<br/>precision engineering]
    B --> E[Factory operation data<br/>QA inspection data<br/>Equipment sensor data]
    C --> F[Honda autonomous driving<br/>Sony sensing<br/>PFN control AI]
    D --> G[Nippon Steel materials<br/>Kobe Steel machinery<br/>Manufacturing processes]
    E --> H[Physical AI<br/>Foundation Model]
    F --> H
    G --> H
```

Japan’s manufacturing sector has accumulated decades of industrial data—equipment operation logs, quality-inspection images, time-series sensor readings—that is not publicly available.
A foundation model trained on this data cannot be replicated by general-purpose LLMs trained on English web crawls.

The investor lineup makes the strategy concrete.

| Company | Strength |
| --- | --- |
| Honda | Autonomous driving, humanoid robot ASIMO technology |
| Sony Group | aibo robotics, image sensor technology, PlayStation AI |
| Nippon Steel / Kobe Steel | Massive process data from steel and materials manufacturing |
| Three mega-banks | Financial data and capital support |

Rather than competing with the US and China on LLMs, the plan is to create a new arena: “Japanese industrial data × Physical AI.”

Compute Infrastructure and Data Centers

Training a trillion-parameter-class model demands massive GPU clusters.
SoftBank has committed to building this compute infrastructure in-house.
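A back-of-envelope calculation shows why. The numbers below are illustrative assumptions (fp16 weights, 80 GB per GPU, a commonly cited ~16 bytes/param for mixed-precision Adam training state), not figures from the project:

```python
# Rough memory sizing for a 1-trillion-parameter model (illustrative assumptions only).
params = 1.0e12            # 1T parameters
gpu_mem = 80e9             # assume 80 GB per GPU (H100-class)

weight_bytes = params * 2  # fp16/bf16 weights: 2 bytes each
gpus_for_weights = weight_bytes / gpu_mem

# Training state (fp16 weights + grads, fp32 master weights + Adam moments)
# is commonly estimated at ~16 bytes per parameter.
train_bytes = params * 16
gpus_for_training = train_bytes / gpu_mem

print(f"weights: {weight_bytes / 1e12:.0f} TB -> ~{gpus_for_weights:.0f} GPUs just to hold them")
print(f"training state: {train_bytes / 1e12:.0f} TB -> ~{gpus_for_training:.0f} GPUs minimum")
```

Even before counting activations, throughput, or redundancy, the memory floor alone implies hundreds of top-end GPUs; practical training clusters run to tens of thousands.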

| Facility | Details | Scale |
| --- | --- | --- |
| Osaka (Sakai) | Former Sharp LCD panel factory repurposed as a data center | 150 MW capacity, future 250 MW+ |
| Hokkaido (Tomakomai) | New AI data center | Details undisclosed |

SoftBank acquired the ~450,000 m² Sakai site from Sharp for approximately ¥100 billion in 2025.
The facility spans ~840,000 m² of floor area and is scheduled to begin operations during 2026.
SoftBank plans to invest approximately ¥2 trillion in data centers over six years starting FY2026.

KDDI also acquired a separate section of the same Sharp Sakai complex and began operating it as an AI data center in January 2026.
The former LCD factory is transforming into a core hub for Japan’s AI infrastructure.

Microsoft also announced in April 2026 a $10 billion (approximately ¥1.5 trillion) investment in Japan’s AI infrastructure.
It is partnering with Sakura Internet and SoftBank to deliver AI compute resources (GPUs) domestically.

Government Support

This domestic AI development effort is positioned as a national project under the government’s “Basic Plan for Artificial Intelligence.”

```mermaid
graph TD
    A[METI] --> B[NEDO]
    B --> C["Physical AI Foundation<br/>Model Development<br/>(up to ¥383.4B)"]
    B --> D["GENIAC<br/>(GenAI development support)"]
    A --> E["~¥1T over 5 years<br/>(FY2026–FY2030)"]
    E --> F["FY2026 budget<br/>~¥300B"]
    F --> G[Funded by GX transition bonds]
    C --> H["Japan AI Foundation<br/>Model Development<br/>(new company)"]
    D --> I[24 selected projects<br/>Rakuten, NRI, etc.]
```

NEDO’s open call for “Multimodal Foundation Model Development Toward AI Robotics and Physical AI” carries a staggering single-year budget of up to ¥383.4 billion for FY2026.
The program runs from FY2026 through the end of FY2030—five years—with annual stage-gate reviews.
It has two tracks: a development track (actually building and delivering models) and an exploratory track (frontier research).

Separately, METI has run GENIAC (Generative AI Accelerator Challenge) since 2024, selecting 24 organizations in its third round including Rakuten and Nomura Research Institute.
GENIAC focuses on providing compute resources and curating datasets; the new physical AI project sits above it as a larger initiative.

Can Japan Actually Win?

The vision is grand, but the challenges are real.

Parameter count alone doesn’t determine the winner.
DeepSeek-V3 achieved GPT-4o-level performance with 671 billion parameters at roughly one-seventeenth the training cost of GPT-4 ($6 million vs. $100 million).
Simply piling on parameters risks an inefficient model.
The era now rewards architectural innovation and MoE (Mixture of Experts—a technique that activates only a subset of specialized parameters per input) design.
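The MoE idea fits in a few lines. This is a generic sketch of top-k routing (not DeepSeek's or anyone's actual architecture): a gate scores all experts, but only the best two execute.

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_layer(x, experts, gate_w, top_k=2):
    """Minimal Mixture-of-Experts forward pass: route the input to its
    top_k experts and combine their outputs by softmax gate weight."""
    logits = x @ gate_w                    # one routing score per expert
    top = np.argsort(logits)[-top_k:]      # indices of the top_k experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()               # softmax over the selected experts only
    # Only the selected experts run; the rest stay inactive (the efficiency win).
    return sum(w * experts[i](x) for w, i in zip(weights, top))

dim, num_experts = 8, 4
expert_mats = [rng.standard_normal((dim, dim)) for _ in range(num_experts)]
experts = [lambda x, M=M: x @ M for M in expert_mats]  # each expert: a linear map
gate_w = rng.standard_normal((dim, num_experts))

x = rng.standard_normal(dim)
y = moe_layer(x, experts, gate_w, top_k=2)  # only 2 of the 4 experts executed
```

With top-2 routing over hundreds of experts, a model can carry trillion-parameter capacity while each token pays the compute cost of a far smaller dense model.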

Is 100 people enough?
OpenAI employs over 3,000; DeepSeek has hundreds.
A 100-person team is small even if it aggregates Japan’s top talent.
That said, PFN and teams inside each core company will contribute from outside the new entity, so the effective development workforce may be somewhat larger.

The industrial data barrier.
Japan’s industrial data is undeniably valuable, but the hurdle of providing it to an outside entity—the new company—is high.
Manufacturing data is a trove of trade secrets; anonymization, cleansing, and format standardization alone carry significant costs.
How long it takes for the “high-quality industrial data as our edge” strategy to materialize into an actual data pipeline remains an open question.

Timing.
The plan is to complete a trillion-parameter model by the late 2020s, but the US and China are already in that territory as of 2026.
By the time it’s finished, the competitive frontier will likely have moved further ahead.
Whether the “shift the playing field” strategy of physical AI specialization actually works is the key question.

Japan’s Language AI Layer Is Already Deep

The new company specializes in physical AI, but Japan’s language AI development already has considerable depth.
Even just the core members and their orbit are running multiple models trained from scratch.

| Organization | Model |
| --- | --- |
| PFN | PLaMo 2.0 (31B, trained from scratch) |
| NEC | cotomi v3 (trained from scratch) |
| NII | LLM-jp-4 (32B MoE, 11.7T-token training; MT-Bench JA score of 7.82 surpasses GPT-4o) |
| NVIDIA | Nemotron Nano 9B Japanese (top Japanese performance under 10B) |

PFN’s and NEC’s models are available via API on Sakura Internet’s “Sakura AI Engine.”
NII’s LLM-jp-4 is fully open under Apache 2.0, designed without synthetic data from commercial LLMs.

Physical AI foundation models are an extension of language AI.
Beyond text understanding, they require multimodal processing that integrates camera, LiDAR, and tactile sensor data.
PFN’s inclusion as a technical partner is likely aimed at repurposing the from-scratch training expertise built through PLaMo for physical AI.
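The core of that multimodal step is projecting very different sensor streams into one representation a policy can act on. A deliberately simplified sketch (the encoders here are random linear maps standing in for real deep networks; dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-modality encoders: project raw sensor features into a
# shared 16-dim embedding space (illustrative stand-ins for real encoders).
D = 16
proj = {
    "camera":  rng.standard_normal((3072, D)),  # e.g. a flattened 32x32 RGB patch
    "lidar":   rng.standard_normal((1024, D)),  # e.g. a 1024-point range scan
    "tactile": rng.standard_normal((64, D)),    # e.g. an 8x8 pressure grid
}

def fuse(readings):
    """Embed each sensor reading, then mean-pool into one state vector."""
    embeds = [readings[name] @ W for name, W in proj.items()]
    return np.mean(embeds, axis=0)

readings = {
    "camera":  rng.standard_normal(3072),
    "lidar":   rng.standard_normal(1024),
    "tactile": rng.standard_normal(64),
}
state = fuse(readings)  # a single vector a control policy can consume
```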

Meanwhile, the recent controversy over Rakuten AI 3.0—which received GENIAC subsidies but turned out to be based on DeepSeek-V3—is still fresh.
If the new company claims to be “domestic,” it will face the same transparency demands: what is original and what is borrowed must be disclosed from the start.

The Model’s “Exit” Is Already Taking Shape

A trillion-parameter foundation model is worthless if nobody uses it.
On this front, Japan’s AI distribution infrastructure has expanded rapidly over the past year.

Sakura Internet’s “Sakura AI Engine” is a platform that serves domestic LLMs via an OpenAI API-compatible interface.
LLM-jp-3.1, PLaMo 2.0, and cotomi v3 are already available through its API.
Data stays within domestic data centers, making it viable for municipalities and financial institutions that cannot send data to overseas clouds.
A free tier of 3,000 requests per month puts it within reach of individual developers.
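OpenAI API compatibility means existing client code needs only a different base URL and model name. Here is a sketch of what such a request looks like (the endpoint URL and model id below are placeholders I chose for illustration, not confirmed identifiers):

```python
import json

# Placeholder base URL: a compatible service swaps in its own host here.
BASE_URL = "https://example-domestic-endpoint/v1"

request = {
    "url": f"{BASE_URL}/chat/completions",  # OpenAI chat-completions convention
    "headers": {
        "Authorization": "Bearer <API_KEY>",  # placeholder credential
        "Content-Type": "application/json",
    },
    "body": {
        "model": "plamo-2.0",  # placeholder model id, not a confirmed name
        "messages": [
            {"role": "user", "content": "Summarize today's production-line anomalies."}
        ],
    },
}

payload = json.dumps(request["body"], ensure_ascii=False)
```

Because the shape matches the OpenAI schema, any existing SDK or tooling built against that schema can talk to the domestic endpoint without code changes.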

If the new company’s physical AI foundation model is completed, it could be deployed immediately on existing API infrastructure.
Not having to build a distribution platform from scratch is a quiet but significant advantage.

The other exit is edge inference.
Physical AI runs on robots and autonomous vehicles—round-trip latency to the cloud is unacceptable.
The model must run directly on the device.

Options are growing on this front too.
NVIDIA’s Nemotron Nano 9B Japanese is a 9B model that runs on a single GPU, ranking first in the sub-10B category on Nejumi Leaderboard 4.
Liquid AI’s LFM2.5-JP (1.2B) uses a Convolution+Attention hybrid architecture, achieving roughly 2x Transformer speed even on CPU.

Train on a large foundation model, distill and lighten via MoE for edge deployment—this pipeline’s success will determine whether physical AI becomes practical.
The fact that Japanese-specialized compact models are already appearing is a tailwind for foundation model development.
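The cloud-to-edge step in that pipeline is typically knowledge distillation: train the small model to match the large model's softened output distribution. A minimal sketch of the standard loss (a generic formulation, not the project's method):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's: the classic knowledge-distillation objective."""
    p = softmax(teacher_logits, T)  # soft targets from the large model
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [4.0, 1.0, 0.5]
aligned = [3.8, 1.1, 0.4]     # student close to the teacher
diverged = [0.2, 3.0, 1.0]    # student far from the teacher

loss_close = distill_loss(teacher, aligned)
loss_far = distill_loss(teacher, diverged)
```

Minimizing this loss over the training set pulls the compact edge model toward the large model's behavior while keeping inference cheap enough for on-device control loops.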

SoftBank’s Dual Position with OpenAI

One point nags after reading all of this.
SoftBank is also a major shareholder in OpenAI.

In 2025, SoftBank invested a total of $40 billion (approximately ¥6 trillion) in OpenAI, becoming its second-largest external shareholder after Microsoft.
Its stake exceeds 10%.
In January 2025, SoftBank launched the “Stargate” project with OpenAI and Oracle, committing up to $500 billion (approximately ¥75 trillion) to AI data centers in the US.
In January 2026, reports emerged of negotiations for an additional $30 billion.

The relationship runs deep domestically too.
In November 2025, the joint venture “SB OAI Japan” was established to exclusively deploy OpenAI’s enterprise AI “Crystal Intelligence” to Japanese businesses during 2026.

SoftBank’s current positioning, then, looks like this.

| Position | Details |
| --- | --- |
| OpenAI side | Major shareholder with $40B invested. Deploying OpenAI technology in the US via Stargate and in Japan via SB OAI Japan |
| Domestic AI side | Core investor in “Japan AI Foundation Model Development.” Leading physical AI foundation model development |

Selling OpenAI’s AI in Japan while simultaneously leading domestic AI development.
Isn’t this a conflict of interest?

SoftBank’s framing is that “the layers are different.”
OpenAI does language AI; the new company does physical AI.
Different target markets, no competition.

But it’s doubtful this clean separation will hold forever.
OpenAI is expanding into robotics and multimodal—there’s no guarantee it won’t enter the physical AI space.
Conversely, if the new company’s foundation model proves capable, language AI applications are a natural extension.
“Different layers” is a snapshot of the present, not a structural guarantee.

Another concern is the flow of public funds.
At the core of a project that could receive up to ¥383.4 billion in NEDO funding sits a major OpenAI shareholder.
The risk of technical know-how flowing from the new company to the OpenAI side, or SoftBank’s infrastructure investments effectively benefiting OpenAI, is a point that at minimum demands accountability.

Read charitably, Masayoshi Son’s strategy is a “SoftBank wins regardless of who prevails” position.
Bet on OpenAI for US LLM dominance; build Japan’s physical AI dominance yourself.
For SoftBank, it’s a rational hedge—but as the core of a company flying the “domestic AI” flag, its transparency will remain under scrutiny.