
I looked into VNCCS, a ComfyUI tool that auto-generates character sheets

Ikesan

Why I looked into this

I have been generating character illustrations with Qwen-Image-Edit, but I keep struggling with one problem: side twin-tails do not come out properly. Normal twin-tails tied in the back are fine, but side twin-tails, the kind that hang down from around the front of the ears, are apparently too structurally complex and either break or get ignored.

While I was experimenting with whether QWEN v16 Edit would improve the quality, I found a Reddit post called “VNCCS Utils 0.20 - QWEN Detailer”. VNCCS is a character-sprite generation tool for visual novels, and it apparently can auto-generate character sheets. If it includes mechanisms for keeping character consistency, it might also help with stable hairstyle generation.

So I looked into it.

What VNCCS is

VNCCS stands for Visual Novel Character Creation Suite. It is a collection of ComfyUI custom nodes for generating visual-novel character sprites with a consistent appearance. The developer is AHEKOT.

It is trying to solve the classic problem every image-generation user knows: “I want to generate the same character with different expressions and outfits, but the face changes every time.”

GitHub repo: ComfyUI_VNCCS

The five-step workflow

VNCCS splits character generation into five steps.

| Step | What it does | Workflow |
|------|--------------|----------|
| Step 1 | Base character generation | VN_Step1_QWEN_CharSheetGenerator |
| Step 1.1 | Clone an existing character | - |
| Step 2 | Outfit set creation | VN_Step2_QWEN_ClothesGenerator |
| Step 3 | Expression set generation | VN_Step3_QWEN_EmotionStudio |
| Step 4 | Final sprite output | VN_Step4_SpritesGenerator |
| Step 5 | LoRA training dataset creation (optional) | VN_Step5_DatasetCreator |

Step 1 generates a character sheet - front, side, back, and so on - and later steps keep referring to that sheet while producing outfit and expression variations.
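The data flow can be sketched in plain Python. Everything here is a stand-in of mine (the real steps are ComfyUI workflows, not functions); the point is that the Step 1 sheet is threaded through every later step:

```python
# Toy sketch of the five-step data flow; function names are illustrative
# stand-ins, not real VNCCS node APIs.

def generate_char_sheet(prompt):
    # Step 1: produce a multi-view character sheet
    return {"prompt": prompt, "views": ["front", "side", "back"]}

def generate_clothes(sheet):
    # Step 2: outfit variations, each anchored to the same sheet
    return [{"sheet": sheet, "outfit": name} for name in ("uniform", "casual")]

def generate_emotions(sheet):
    # Step 3: expression variations, again anchored to the sheet
    return [{"sheet": sheet, "emotion": name} for name in ("neutral", "smile")]

def run_pipeline(prompt):
    sheet = generate_char_sheet(prompt)
    outfits = generate_clothes(sheet)
    emotions = generate_emotions(sheet)
    # Step 4: final sprites combine outfits and expressions,
    # all still tied back to the single Step 1 sheet
    return [{"outfit": o["outfit"], "emotion": e["emotion"]}
            for o in outfits for e in emotions]
```

Because everything references the one sheet, an error in Step 1 propagates to every sprite, which matters for the hairstyle question later.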

How it keeps consistency

The core of VNCCS is a three-pass generation system.

  1. First Pass: rough generation, with Match around 0.5
  2. Stabilizer: stabilizes the result with a LoRA for character sheets, vn_character_sheet_v4.safetensors. Match 0.85 is recommended
  3. Third Pass: finalization with RMBG (background removal)

The Match parameter balances character consistency and variety. Too low and you get a different face every time; too high and you just copy the previous frame.
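As a toy model of that trade-off (my own simplification, not VNCCS code), treat Match as the weight pulling each pass's result toward the reference sheet:

```python
# Toy model of the three-pass idea: "Match" is the weight pulling the
# result toward the reference sheet. The real passes are diffusion
# sampling steps; a linear blend only stands in for the intuition.

def run_pass(latent, reference, match):
    return [(1 - match) * x + match * r for x, r in zip(latent, reference)]

def three_pass(latent, reference):
    rough = run_pass(latent, reference, match=0.5)    # First Pass: loose, more variety
    stable = run_pass(rough, reference, match=0.85)   # Stabilizer: sheet LoRA, tight match
    return stable                                     # Third Pass (RMBG cleanup) omitted here
```

At match=1.0 the output is an exact copy of the reference and at 0.0 the reference is ignored entirely, which is exactly the "copy the previous frame" versus "different face every time" trade-off described above.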

The dedicated character-sheet LoRA is the key to consistency.

What QWEN Detailer is

This is the headline feature of VNCCS Utils: a node that uses Qwen-Image-Edit 2511 to refine regions such as faces and hands, typically detected with YOLO.

How it differs from traditional detailers:

  • Vision-understanding based: QWEN looks at the image and decides what needs to be fixed
  • Drift suppression: it reduces the chance that the enhanced region drifts away from the original composition
  • Poisson blending: it blends the edited area smoothly with its surroundings
  • Built-in color matching: it automatically corrects color shifts

The flow is simple: detect a broken face with YOLO, then let QWEN fix it. No manual masking required.
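The blending stage is concrete enough to sketch. Below is a minimal stand-in for the built-in color matching, shifting the edited patch's per-channel mean and standard deviation to the original's; the node's actual method is not documented here, so treat this as an assumption about how such a correction works:

```python
import numpy as np

def match_color(patch, reference):
    """Shift the edited patch's per-channel mean/std to the reference's.

    A simple stand-in for the detailer's built-in color matching;
    the real node may use a different transfer method.
    """
    p = patch.astype(np.float64)
    r = reference.astype(np.float64)
    for c in range(p.shape[-1]):
        p_std, r_std = p[..., c].std(), r[..., c].std()
        scale = r_std / p_std if p_std > 1e-6 else 1.0
        # Recenter and rescale this channel to the reference statistics
        p[..., c] = (p[..., c] - p[..., c].mean()) * scale + r[..., c].mean()
    return np.clip(p, 0, 255).astype(np.uint8)
```

Poisson blending then handles the seam itself, so the color match only has to remove the global shift before the edit is pasted back.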

Model requirements

You need a fair number of models to run VNCCS.

Required:

  • SDXL checkpoint, preferably Illustrious-based
  • vn_character_sheet_v4.safetensors (character-sheet LoRA)
  • YOLO face-detection model, bbox/segm
  • SAM for segmentation

For QWEN Detailer:

  • Qwen-Image-Edit 2511

For ControlNet:

  • AnytestV4
  • IllustriousXL_openpose

It is SDXL-based because the character-LoRA ecosystem is concentrated around SDXL, especially the Illustrious family.
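A quick way to check you have everything is to scan the ComfyUI models folder. The subfolder layout below follows common ComfyUI/Impact Pack conventions, and the file names (apart from the sheet LoRA named above) are illustrative examples, not exact requirements:

```python
from pathlib import Path

# Typical ComfyUI model locations; ultralytics/ and sams/ follow the
# Impact Pack convention. File names other than the sheet LoRA are
# examples — substitute whatever you actually downloaded.
EXPECTED = {
    "checkpoints": ["illustriousXL.safetensors"],      # any Illustrious-based SDXL checkpoint
    "loras": ["vn_character_sheet_v4.safetensors"],
    "ultralytics/bbox": ["face_yolov8m.pt"],           # example YOLO face detector
    "sams": ["sam_vit_b_01ec64.pth"],
    "controlnet": ["AnytestV4.safetensors", "IllustriousXL_openpose.safetensors"],
}

def missing_models(models_dir):
    """Return the expected model files not present under models_dir."""
    root = Path(models_dir)
    return [f"{sub}/{name}"
            for sub, names in EXPECTED.items()
            for name in names
            if not (root / sub / name).is_file()]
```

Running `missing_models` against your ComfyUI `models/` directory before loading the workflows saves a round of red error nodes.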

Can it help with side twin-tails?

Honestly, there is no direct hairstyle control parameter. Hairstyle still depends on the prompt, usually Danbooru-style tags.

That said, there are a few promising points:

  1. Lock it in with the character sheet: if Step 1 correctly draws side twin-tails on the sheet, later steps are more likely to preserve them
  2. Stabilization through the three-pass pipeline: the Stabilizer pass heavily references the sheet, which may reduce hairstyle drift
  3. Fix drift with QWEN Detailer: broken parts can be repaired after the fact

The problem is whether Step 1 can render side twin-tails correctly in the first place. If that breaks, the later steps just amplify the mistake. In the end, it still depends on SDXL and prompt quality.

That said, compared with generating directly through Qwen-Image-Edit, the combination of SDXL + character-sheet LoRA + ControlNet should be structurally more stable. SDXL has lots of hairstyle LoRAs based on Danbooru tags, so finding or training a side-twin-tail-specific LoRA could improve results.

VNCCS limitations

Things that stood out during the investigation:

  • Full-body images are preferred: partial images can reduce consistency because the system tries to hallucinate missing parts
  • Complex layered outfits are unstable: the same kind of issue may happen with hairstyles like side twin-tails
  • Clothes LoRA is still beta: some parts may be missing

VNCCS Utils can also be used on its own

You do not have to install the whole VNCCS workflow to get value from it. The VNCCS Utils nodes can be used independently.

  • QWEN Detailer: add it to your existing workflow to fix faces and hands only
  • Camera control nodes: control azimuth and elevation for multi-angle LoRAs
  • Model management: download models from Civitai and HuggingFace

If you just want to add QWEN Detailer to your current environment and use it for post-generation face correction, that is the easiest entry point.
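For the camera control nodes, azimuth and elevation are just a spherical direction. A minimal conversion to a view vector, assuming a y-up, z-forward axis convention (VNCCS may use a different one):

```python
import math

def camera_direction(azimuth_deg, elevation_deg):
    """Convert azimuth/elevation angles to a unit view-direction vector.

    Assumes y-up, z-forward; azimuth rotates around the vertical axis,
    elevation tilts toward it. VNCCS's own axis order may differ.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = math.cos(el) * math.sin(az)   # left/right component
    y = math.sin(el)                  # up/down component
    z = math.cos(el) * math.cos(az)   # forward component
    return (x, y, z)
```

Sweeping azimuth in fixed steps at a constant elevation is how a multi-angle LoRA dataset gets its evenly spaced viewpoints.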

References