An IDOLM@STER Producer's Backend Speculation on Mirishita, Sparked by Arigasankyu's 10th Anniversary
The IDOLM@STER community recently had a small celebration for the 10th anniversary of “Arigasankyu.”
This is a tech blog, so writing “Arigasankyu 10th anniversary is the best!” wouldn’t resonate with anyone, and frankly isn’t much fun to write either. Instead, I’ll ride the anniversary moment and take a technical look at the backend architecture of Mirishita, a title that has been running remarkably stably for years, speculating from public materials.
What Is Arigasankyu
On April 17, 2016, during Day 2 of the Makuhari stop of “THE IDOLM@STER MILLION LIVE! 3rdLIVE TOUR,” voice actress Azusa Tadokoro made a slip of the tongue during the MC before “Welcome!!”: she meant to say “arigatou” (“thank you” in Japanese) but blurted out “sankyu” instead. The whole venue burst into laughter, and the performance continued immediately afterward. The moment has lived on in community memory ever since, and ten years later the catchphrase is still used affectionately. Tadokoro herself is probably more embarrassed than honored to see it elevated into “the ultimate expression of gratitude.”
This year marks the 10th anniversary. The IDOLM@STER franchise itself dates back to 2005 (the original arcade game), making it 21 years old. The Million Live! brand (the original Mobage browser game) started in February 2013, now 13 years old. The Idolm@ster Million Live! Theater Days (nicknamed Mirishita) launched on June 29, 2017, and will hit its 9-year mark this June.
I’ve been playing “Guri-Masu” (the original Mobage Million Live!) since launch, so I’m a Million Live Producer through and through. I tried Mobamas (the original Mobage Cinderella Girls) briefly but couldn’t keep up with the power inflation and dropped out. I know about other IDOLM@STER titles by name but barely touch them, and I don’t have the energy for the arcade games. So this post is Mirishita-focused.
Mirishita’s Operational Scale
Before discussing “nearly 9 years with no major outages,” here’s the scale of what that actually means.
- At launch: 13 members from 765 ALLSTARS + 39 members from Million Stars = 52 total
- 1 million pre-registrations; 3.9 million downloads within the first month
- “13-member Live” implemented April 2018; “39-member Live” in December 2019
- In-game anniversary events from 1st (June 2018) through 8th (June 2025, “Chouchou∞MUGEND@I!”), plus special events like Million Live! brand 10th anniversary “Beyond @ Crossing!” (February 2023)
- Anniversary real-world live concerts from the 1st (2014, Nakano Sunplaza) onward, with some cancellations and reschedules due to COVID-19; the recent run is the 12th (October 2025, Keio Arena TOKYO), followed by the postponed 11th (March 2026, Kitakyushu MESSE), with the 13th scheduled next (May 2026, Ariake Arena)
- TV anime aired October 2023; OVA “The IDOLM@STER Million Live! Someday, at the Center” Blu-ray released March 2026
- 52-week consecutive CD release campaign featuring all 52 idols running from July 2025 through July 2026
- Korean and Traditional Chinese versions ran alongside from 2019 to 2022 (overseas versions ended January 2022; Japanese version continues)
- Feature additions have been steady: Gravure Studio (Jan 2021), Season Pass (Jul 2021), two-person shooting mode (May 2022), real-photo backgrounds (Jun 2022), Lesson Room (Dec 2023), AR support (Feb 2024), new game mode “Idol Grand Prix” prologue beta (Jan 2025), new event series “BATTLE OF THEATER” launch (Apr 2025), player level cap raised from 999 to 2000 at 8th anniversary (Jun 2025)
Events and new songs are pushed out multiple times per month. Nine years of that pace with virtually no infrastructure-caused long outages is a real track record.
Why Mirishita Doesn’t Go Down
Announcements of “all servers down” on Mirishita are rare. Here’s a recent comparison that makes the point.
Between 16:40 and 17:43 JST on April 15, 2025, AWS’s Tokyo region (apne1-az4) suffered a simultaneous loss of primary and backup power. Project Sekai and other titles entered emergency maintenance during that window. Meanwhile, Mirishita’s official X account (@imasml_theater) posted no announcement whatsoever.
The reason is simple: Mirishita doesn’t run on AWS.
The Answer Was Already Public in 2017
Three months after Mirishita’s launch, on September 28, 2017, at the first “Google Cloud INSIDE Games & Apps” event, Kazunari Hoshina of Bandai Namco Studios gave a talk titled “GAE/Go Powering Mirishita.” The 29-slide deck is publicly available on SlideShare.
Key points:
| Component | Details |
|---|---|
| Runtime | Google App Engine Standard (1st gen), Go runtime |
| Datastore | Cloud Datastore |
| Log aggregation / analytics | BigQuery |
| Monitoring | Stackdriver Logging / Monitoring (now Cloud Logging / Monitoring) |
| Load testing | Executed at 2× expected load |
| Operational record | Six months post-launch with no configuration changes, absorbing all traffic including bots via autoscaling |
| Scale | Thousands of requests per second, hundreds of instances |
And the critical part: deployment. GAE has a versions feature that lets you deploy a new version on the side and switch traffic over instantly. This gives you Blue/Green-style zero-downtime releases. If something breaks, you can roll back just as quickly.
```mermaid
flowchart LR
    User[User Device<br/>Unity Client]
    LB[GAE Frontend<br/>auto-scaling]
    App[Go Application<br/>multiple versions<br/>Blue/Green switch]
    DS[(Cloud Datastore)]
    BQ[(BigQuery<br/>logs & analytics)]
    SD[Stackdriver<br/>Logging/Monitoring]
    User -->|HTTPS| LB
    LB --> App
    App --> DS
    App --> BQ
    App --> SD
```
Where Blue/Green Deployment Pays Off
Switching GAE versions comes down to deploying a new version and changing the traffic split from 0% → 100% (a single gcloud app services set-traffic command). Rolling back to the previous version is equally immediate.
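That flow can be sketched as a handful of gcloud invocations. This is a minimal illustration, not Mirishita's actual pipeline; the service and version names (`default`, `v1`, `v2`) are placeholders:

```shell
# Deploy v2 alongside the running version without routing traffic to it;
# --no-promote keeps 100% of traffic on the current version.
gcloud app deploy app.yaml --version v2 --no-promote

# Cut all traffic over to v2 in one step (the Blue/Green switch).
gcloud app services set-traffic default --splits v2=1

# Roll back instantly if v2 misbehaves.
gcloud app services set-traffic default --splits v1=1
```

The `--splits` flag also accepts fractional weights (e.g. `v1=0.9,v2=0.1`) for gradual canary rollouts, though the talk describes a straight 0% → 100% cutover.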
```mermaid
flowchart TD
    Dev[Deploy new<br/>version v2]
    Stage[v2 idle at zero traffic<br/>v1 serves 100%]
    Switch{Traffic cutover<br/>v1:0% / v2:100%}
    OK[Normal operation<br/>v2 only]
    NG[Instant rollback<br/>v1:100% / v2:0%]
    Dev --> Stage
    Stage --> Switch
    Switch -->|Healthy| OK
    Switch -->|Issue detected| NG
```
App updates don’t require taking the whole server fleet down for maintenance. This is the mechanism behind Mirishita’s reputation for “short maintenance windows” and “effectively no downtime.”
How Content Keeps Coming Without App Updates
If you’ve played Mirishita for a while, you notice something: even in periods without app store update notifications, new songs, costumes, and cards keep arriving, and sometimes even entirely new features and event mechanics show up.
In reality, the app does get updated through the store (Ver 8.0.100 shipped in August 2025, Ver 8.1.000 on April 16, 2026), so it’s not “zero app updates.” But the store-update cadence is remarkably low relative to the operational update cadence. This pattern is standard for Unity-based live-ops games, but Mirishita is a particularly clear example of how thoroughly the design commits to it.
The implementation combo looks roughly like this:
- Songs, MVs, costumes, 3D models, and UI textures are separated from the app binary and delivered via AssetBundles on a CDN. This is what’s behind the user-facing “Data Download” management screen and per-song download flow
- Event schedules, gacha rates, reward tables, and character stats are fetched from the server as master data, letting behavior change without modifying client scripts
- Feature flags: code for upcoming features is shipped with the app in advance, then enabled via server-side flags on release day
- Live performances are authored with Timeline plus AssetBundles, so adding a new song is purely a matter of adding assets
OS support updates, major UI overhauls, and fundamentally new live modes still require app-store updates. But day-to-day content updates and mid-sized feature additions bypass the store entirely, which is why the update frequency feels unusually light from the user’s perspective.
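The feature-flag item above can be sketched in Go, the backend's own language. Everything here is invented for illustration; the real flag schema and endpoint are not public:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// featureFlags mirrors a hypothetical server payload.
// Field names are made up; the real Mirishita schema is not disclosed.
type featureFlags struct {
	IdolGrandPrix bool `json:"idol_grand_prix"`
	ARMode        bool `json:"ar_mode"`
}

// fetchFlags stands in for a call to the game server's master-data
// endpoint; here it just decodes a canned JSON response.
func fetchFlags(payload []byte) (featureFlags, error) {
	var f featureFlags
	err := json.Unmarshal(payload, &f)
	return f, err
}

func main() {
	payload := []byte(`{"idol_grand_prix": true, "ar_mode": false}`)
	flags, err := fetchFlags(payload)
	if err != nil {
		panic(err)
	}
	// The feature's code ships in the store binary ahead of time and
	// stays dormant until operations flips the server-side flag.
	if flags.IdolGrandPrix {
		fmt.Println("Idol Grand Prix enabled")
	}
}
```

The point of the pattern is that "release day" becomes a server-side data change, not a client deploy, so it never waits on app-store review.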
A Handful of Known Incidents
Not zero. What shows up in public searches:
| Date | Summary |
|---|---|
| January 18, 2022 | AutoPath-related bug. Full-combo rewards could be unfairly obtained on songs other than Clover-class, triggering emergency maintenance and forced logout |
| September 29, 2022 | iOS paid item purchase error. This was an App Store side issue, not Mirishita’s |
| January 7, 2024, 16:00 | Display bug on “10-roll Platinum Gacha, once a day for free” allowed some users to effectively pull unlimited free 10-rolls. Operations chose not to roll back; compensation was distributed on January 10 (1 × 10-roll ticket) and January 17 (8 tickets to all users) |
| June 16, 2024, around 13:40 | Emergency maintenance to address unspecified issues (details not disclosed) |
What stands out is the decision not to roll back in the January 2024 gacha incident. Datastore is strongly consistent within an entity group, but there is no surgical way to undo only the exploited writes; restoring a point-in-time snapshot would rewind the entire game, legitimately progressing users included. Compensating with distributed items instead of rolling back is the correct call for a modern large-scale live-ops game.
Mystery Errors During Large-Scale Events
Speaking from player experience, you occasionally see brief “connection error” or “please reconnect” messages right at the start of anniversary-tied or special anniversary events. Official announcements are either absent or reduced to a terse “traffic was temporarily concentrated.”
Since there’s no public information, what follows is speculation. The typical culprits would be three:
First, Datastore hotspots. Ranking and score-aggregation workloads tend to concentrate writes on the same entity group. Google’s documentation notes roughly one write per second per entity group as a practical ceiling.
Second, Cron execution spikes. Event-launch jobs overlapping with regular traffic can outpace the speed at which new instances spin up.
Third, CDN cold starts. Event banners and song assets downloading all at once can hit the CDN before it’s warmed up.
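The first culprit has a standard mitigation: sharded counters, which spread writes for one logical counter across N entity groups so the sustainable write rate becomes roughly N per second instead of one. Below is a minimal in-memory Go sketch of the idea; the shard count, key names, and map stand-in are all illustrative (real code would increment Datastore entities inside transactions and sum them on read):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// numShards spreads writes for one ranking counter across several
// entity groups; with ~1 write/sec/group as the practical ceiling,
// 20 shards allow roughly 20 writes/sec in aggregate.
const numShards = 20

// shardFor deterministically picks a shard key from the write source
// (e.g. a user ID), so each user's writes always land on one shard.
func shardFor(userID string) string {
	h := fnv.New32a()
	h.Write([]byte(userID))
	return fmt.Sprintf("eventScore-%d", h.Sum32()%numShards)
}

// shards is an in-memory stand-in for the sharded counter entities.
var shards = map[string]int{}

// addScore increments one shard instead of a single global counter.
func addScore(userID string, points int) {
	shards[shardFor(userID)] += points
}

// totalScore sums all shards; reads pay a small fan-in cost in
// exchange for the much higher write throughput.
func totalScore() int {
	sum := 0
	for _, v := range shards {
		sum += v
	}
	return sum
}

func main() {
	for i := 0; i < 1000; i++ {
		addScore(fmt.Sprintf("producer-%d", i), 10)
	}
	fmt.Println("total:", totalScore()) // prints "total: 10000"
}
```

The trade-off is that reads must fan in across all shards (or read a periodically cached total), which is usually acceptable for event rankings that refresh on a fixed interval.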
Bandai Namco’s Infrastructure Strategy, Zoomed Out
Bandai Namco’s own blog explicitly states they’ve been using Google Cloud since 2016.
| Title | Stack / use case |
|---|---|
| Tekken 8 (2024) | GKE + Cloud Spanner + Diarkis (realtime networking) for global matchmaking infrastructure |
| Mirishita | GAE Standard 1st gen + Go (as described above) |
| The Idolm@ster Starlit Season | Published as a GCP customer case study |
| Arcade log collection | AWS serverless (presented at CEDEC 2019 and 2022) |
Meanwhile, Bandai Namco Holdings migrated its entire enterprise backbone to AWS in August 2017 (cloudpack/iret-assisted, with TCO reduced by roughly 30%).
Bandai Namco Nexus (formerly BXD, renamed March 2021) handles the data platform and runs primarily on GCP, using BigQuery as its DWH, Dataflow + Cloud Composer / TROCCO for ETL, with IaC on Terraform Cloud + GitHub Actions—this stack is documented on Zenn.
So Bandai Namco splits game-server-class workloads onto GCP and enterprise systems onto AWS as an explicit multi-cloud strategy. Mirishita sitting on GAE fits naturally into this broader direction.
For the record, on March 12, 2026, BANDAI NAMCO ID rolled out Passkey support. Biometric data stays on the device and is never sent to Bandai Namco servers—the auth infrastructure keeps getting refreshed in parallel.
Side Note on ENZA
Dug this up during research and found it interesting enough to include. The browser-game platform “ENZA”—something I’d only heard of through Shiny Colors (Shanimas)—has this lineage:
- In 2017, a Bandai Namco + Drecom joint venture called “BXD” (BNEI 51% / Drecom 49%) began operations
- In March 2020, BNEI acquired all Drecom shares and made it a wholly owned subsidiary
- In March 2021, the company was renamed Bandai Namco Nexus
- Shanimas on ENZA runs on an HTML5 + PIXI.js + Spine + pixi-particles stack (disclosed at CEDEC 2018)
Gakuen Idolm@ster (Gakumas) is separate: jointly developed with QualiArts (a CyberAgent subsidiary) as a native app on Unity 2022.3.21f1. Not an ENZA title.
IDOLM@STER games span wildly different dev teams and tech stacks per title. Mirishita—“internally developed at Bandai Namco Studios, on GAE/Go”—is actually one of the more straightforward setups in this lineup.
The Client Side: “AKANE Daisakusen”
Leaving the server side, a word on the client. “AKANE Daisakusen” (AKANE Grand Operation), presented at Unite Tokyo 2018, was the optimization project for enabling 13-member live performances.
The origin story is revealed in the final slide of the deck: the team started the optimization work around New Year’s 2017, which was the year of the Rooster (酉, tori) in the zodiac. The glyph 酉 visually resembles 西 (west), and 西 in turn resembles 茜 (akane, “madder red”)—a chain of visual association that led them to name the project “AKANE Daisakusen.” Then they retroactively fit the backronym “Android Kousoku-ka And NativE-ka” (Android Speed-up And Native-ization) to make it work. The official story sticks to “tori → akane,” but Million Live! has an actual idol named Akane Nonohara, so there’s no way whoever proposed or approved this name wasn’t aware of the double meaning.
Facts from the slides:
| Item | Value |
|---|---|
| Polygon count | ~10,000 per character; costume textures 1024×1024 |
| Resolution | Base 1280×720 (4:3 devices 960×720); older devices use MSAA low-res mode |
| SubMesh merging | Costumes, sequins, skin, and color merged to cut draw calls from 44 to 36 |
| Post effects | Lightweight CommandBuffer-based implementation |
| Shaders | Custom, in-house |
| Design rule | Never compromise visual quality; avoid re-authoring assets |
3D live performances are real-time Unity rendering on the client (not prerecorded MVs), with motion capture cleaned up in MotionBuilder and animated in 3ds Max (per the CGWORLD feature series).
It may be an odd thing to say on the Arigasankyu 10th anniversary, but quiet infrastructure is a legitimately praiseworthy piece of engineering. Because the servers almost never go down, Producers can simply concentrate on running each event.
By the way, here’s my Mirishita card.
