How CivicSurvival kept 158K lines of AI-written C# honest with CivicRAG and 300 Roslyn analyzers
A post on DEV Community describes building CivicSurvival, a large total conversion mod for Cities: Skylines II, in 3 months of actual work.
The author describes their programming background as “knowing only IF,” yet the output is no small prototype: 158,583 lines of C#, 31,219 lines of TypeScript/React, 542 ECS systems, 369 UI components, 300+ custom Roslyn analyzers, and 2,503 Git commits.
What stood out when reading it was not the speed but how much effort went into constraining AI from the outside.
The original post is by Valentyn Kurchenko-Hai: How I, knowing only IF, vibecoded CivicSurvival.
Published 2026-05-06. CivicSurvival is in closed beta at time of writing.
The tooling story overshadows the mod itself
CivicSurvival adds blackouts, generators, corruption, panic, air defense, shelters, and information warfare to Cities: Skylines II.
The game design alone is heavy, but the article focuses more on the development environment.
The stack is C#, Unity DOTS/ECS, Burst, Harmony patches, Coherent UI, TypeScript/React.
Around it sits a Python-based CivicRAG, MCP server, codebase indexing, SQLite with vector search and full-text search, export scripts, report generators, and audit tools.
The author writes “one mod file is not what you get — it’s a small ecosystem.” The numbers back that up.
This is quite different from the AI-built retro game with Phaser 3 and role-based skill splitting I wrote about earlier.
That post was about splitting a short-term booth game into designer, Phaser implementation, and 8-bit music roles.
CivicSurvival goes beyond role splitting. It stacks AI-readable indexes, compile-time rules AI cannot break, and build logs AI cannot fake.
CivicRAG is less about search and more about preventing AI from guessing
CivicRAG, described in the post, is a custom MCP server for codebase exploration.
Plain grep struggles to track who reads BlackoutState when ComponentLookup<BlackoutState> hides in fields and ECS system execution order is invisible to text search.
So the author indexed 542 systems, 369 UI components, and 1,427 cross-domain links.
rag_query("blackout permanence") runs semantic search. rag_component("BlackoutState") shows writers, readers, and execution order.
rag_styk("AirDefense", "Threats") inspects cross-domain connection surfaces.
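The post doesn't publish CivicRAG's internals, but the core of such an index can be sketched in Python: store (system, component, access) edges in SQLite, plus a full-text table for query lookup. The schema, sample rows, and function bodies below are all hypothetical, and the real tool also does vector search:

```python
import sqlite3

# Hypothetical schema -- the post does not share CivicRAG's actual internals.
# One row per (system, component, access) edge extracted from the C# source.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE edges (system TEXT, component TEXT, access TEXT, update_order INTEGER);
CREATE VIRTUAL TABLE docs USING fts5(system, summary);
""")

conn.executemany(
    "INSERT INTO edges VALUES (?, ?, ?, ?)",
    [
        ("BlackoutSystem",      "BlackoutState", "write", 10),
        ("PanicSpreadSystem",   "BlackoutState", "read",  20),
        ("GeneratorFuelSystem", "BlackoutState", "read",  30),
    ],
)
conn.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [("BlackoutSystem", "applies blackout permanence rules to districts")],
)

def rag_component(name: str):
    """Who writes and who reads a component, in system update order."""
    return conn.execute(
        "SELECT system, access FROM edges WHERE component = ? ORDER BY update_order",
        (name,),
    ).fetchall()

def rag_query(text: str):
    """Full-text search over indexed system summaries."""
    return [row[0] for row in conn.execute(
        "SELECT system FROM docs WHERE docs MATCH ?", (text,)
    )]

print(rag_component("BlackoutState"))
print(rag_query("blackout permanence"))
```

The point is not the storage engine but that the AI queries a precomputed graph instead of grepping text: "who writes BlackoutState" becomes a deterministic lookup even when the writer hides behind a ComponentLookup field.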
The RAG (Retrieval-Augmented Generation) here is not decoration to make the LLM look omniscient.
It is an index to stop the AI from guessing “probably this” without reading the neighboring system.
This is a different animal from the personal memory tool in YourMemory uses biological decay to discard old AI context.
YourMemory was an MCP for recalling in-session memory. CivicRAG is a tool for making AI rediscover the actual connection graph of a large codebase.
Same MCP/RAG umbrella, but “remembering a past conversation” and “not misreading the current code structure” are very different problems.
300+ Roslyn analyzers that remove AI’s freedom
The most practically useful part of the post was the Roslyn analyzer section.
The author had AI write 300+ custom Roslyn analyzers that run on every build.
Unity DOTS/ECS is accident-prone compared to regular C#.
Calling EntityManager.SetComponentData inside OnUpdate creates a sync point that tanks FPS.
Forgetting .Update(this) on ComponentLookup<T> causes a runtime error.
Creating an EntityQuery inside OnUpdate leaks.
The C# compiler does not catch these project-specific design violations.
So CIVIC051 bans EntityManager writes inside OnUpdate. CIVIC310 forces magic numbers through constants like GameRate.SECONDS_PER_HOUR.
CIVIC361 blocks cross-domain imports.
When AI forgets the rules, the compiler and analyzers stop it.
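The analyzer source isn't shared in the post, and a faithful version would be a C# Roslyn DiagnosticAnalyzer walking the syntax and semantic trees. As a deliberately simplified stand-in, here is a toy Python pre-build text check for a CIVIC051-style rule; the banned method list and the message format are assumptions:

```python
import re

# Toy stand-in for the post's CIVIC051 rule (no EntityManager writes inside
# OnUpdate). The real analyzers use Roslyn's C# syntax/semantic model; this
# line-based sketch only illustrates the "design rule as build failure" idea.
RULE_ID = "CIVIC051"
BANNED = re.compile(r"EntityManager\.(SetComponentData|AddComponent|RemoveComponent)\b")

def check_on_update(source: str) -> list[str]:
    """Flag banned EntityManager calls inside an OnUpdate body."""
    findings = []
    inside, depth = False, 0
    for lineno, line in enumerate(source.splitlines(), 1):
        if not inside and "OnUpdate" in line:
            inside, depth = True, 0
        if inside:
            if BANNED.search(line):
                findings.append(f"{RULE_ID} at line {lineno}: EntityManager write in OnUpdate")
            # Crude brace tracking to know when the method body ends.
            depth += line.count("{") - line.count("}")
            if depth <= 0 and "}" in line:
                inside = False
    return findings

bad = """protected override void OnUpdate()
{
    EntityManager.SetComponentData(entity, new BlackoutState());
}"""
print(check_on_update(bad))
```

A real Roslyn analyzer resolves symbols, so it also catches the call when it is wrapped in a convenience helper, which is exactly where a text rule like this one (and a CLAUDE.md instruction) leaks.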
In design principles for putting AI coding agents into production, I looked at Stripe Minions’ Devbox and Toolshed and the question of “how much breaks when the model gets it wrong.”
CivicSurvival is not enterprise production, but the idea is close.
The difference is that instead of limiting infrastructure blast radius, it catches design violations at compile time inside the codebase itself.
CLAUDE.md-style text rules alone are leaky. You can write “don’t create sync points” and AI will do the same thing inside a convenience helper.
When the rule is an analyzer, some of those escape routes become compile errors.
AI was weak at bugs you can only see on screen
In the second half of the post, the author describes spending nearly two weeks on a drone rendering bug.
It involved Unity ECS, DOTS, CS2’s render pipeline, BatchRendererGroup, HDRP, motion vectors, temporal smoothing, Coherent UI, custom .cok models, and decompiled Game.dll.
Logs looked correct.
ECS components matched stock objects.
CPU-side data coordinates looked identical to a stock car.
But the on-screen drone pulsated during acceleration.
AI was useful for building the diagnostic system, writing Harmony patches, and organizing decompiled code.
But it struggled to look at the screen and determine “this is motion blur pulsation, not mathematical jitter.”
In the end, the author gave up on properly controlling motion vector flags in CS2 1.5.5 and fell back to zeroing motion blur via HDRP Volume override when speed exceeded 1x.
This part felt honest for an AI development article.
Instead of ending at “AI can build a giant mod,” it says “AI misreads visual artifacts that don’t show up in logs.”
In game development, “runs,” “correct,” and “feels right” are three separate things.
AI gravitates toward the first two. The last one still needs a human eye and hands.
AI auditing works when you promote findings to fixed rules
CivicSurvival throws AI agents at the codebase like static testers at scale.
The post mentions AUDIT_HISTORY.md recording 9,700+ claimed findings and 5,500+ fixed.
March alone produced roughly 1,500 debugger-agent reports and around 2,600 unique findings.
Those numbers look suspicious on their own.
AI findings overlap and include false positives.
The author knows this and treats the process as: agents raise suspicions, other agents confirm, fixes pass the build, and recurring issue types get promoted to analyzers.
AI audit gains value not as bulk review but as a way to discover which patterns deserve a permanent rule.
Once a dangerous pattern is found, moving it to a Roslyn analyzer means the compiler stops it next time, regardless of AI mood.
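The post doesn't describe the promotion mechanics, but the loop it implies (collect findings, count recurrence, graduate frequent kinds into permanent rules) can be sketched; the finding schema and the threshold below are invented for illustration:

```python
from collections import Counter

# Hypothetical finding records -- the post reports thousands of findings
# but does not publish their format.
findings = [
    {"kind": "sync_point_in_onupdate", "file": "BlackoutSystem.cs"},
    {"kind": "sync_point_in_onupdate", "file": "PanicSystem.cs"},
    {"kind": "sync_point_in_onupdate", "file": "ShelterSystem.cs"},
    {"kind": "magic_number",           "file": "GeneratorSystem.cs"},
]

PROMOTION_THRESHOLD = 3  # recurs this often -> deserves a permanent analyzer

def promote(findings, threshold=PROMOTION_THRESHOLD):
    """Return finding kinds frequent enough to become compile-time rules."""
    counts = Counter(f["kind"] for f in findings)
    return [kind for kind, n in counts.items() if n >= threshold]

print(promote(findings))  # only the recurring sync-point pattern qualifies
```

The one-off magic-number finding stays a report; the pattern that keeps coming back is worth the cost of writing and maintaining an analyzer.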
In running Claude Code and Codex overnight via tmux to auto-build a game, I tried an automated loop where Claude Code implements and Codex reviews.
That was a small-scale overnight game experiment, so review results stayed as one-off fixes.
CivicSurvival goes further. Recurring findings get converted from human-readable reports into compile-time prohibitions.
“Knowing only IF” does not mean giving up responsibility
The original title is strong.
It reads as “I only knew IF and built a 158K-line mod with AI.”
But reading the body, the author has not let go of responsibility for what the code means.
They cannot write all C# syntax themselves.
But they track which domain writes state, which system reads it, whether a change touches save migration, whether UI binding is needed, and whether a new analyzer should be added.
They do not accept AI’s plan on the first pass. They make it re-read, adding risks, forbidden actions, expected behavior, and specific files.
This is close to file-based planning in planning-with-files.
But CivicSurvival goes beyond planning files alone, building “external state that keeps AI from getting lost” out of CivicRAG, analyzers, and build logs combined.
The useful distinction here is between “delegating syntax you don’t know to AI” and “delegating structure you don’t know to AI.”
The former works. The latter starts to collapse around 5,000 lines, according to the author’s line in the sand.