Running Qwen-Image-Edit-2511 on RunPod
In the previous article, I checked the specs for running Qwen-Image-Edit-2511 locally: an RTX 4090 runs about ¥450,000, and 64GB of RAM over ¥100,000. As of January 2026, it's terrible timing to build an AI PC.
So I went with a cloud GPU instead: RunPod. An RTX 4090 costs $0.34/hour (roughly ¥51), which works out to about ¥2–3 per generated image. For light users, cloud wins by a mile.
What Is Qwen-Image-Edit-2511?
An image editing model with 20B parameters released by Qwen (Tongyi Qianwen). It supports text-instruction-based image editing, inpainting, and outpainting.
What I actually wanted to use is fal/Qwen-Image-Edit-2511-Multiple-Angles-LoRA. It’s a LoRA that lets you specify 96 camera angles (4 elevations × 8 azimuths × 3 distances) to regenerate images. Built from 3D model training data, it can transform a character illustration into a view from a different angle.
What Is RunPod?
A cloud GPU service, roughly 5–6× cheaper than comparable GPU instances on AWS or GCP, and it offers ready-made ComfyUI templates so setup is easy.
| GPU | Community Cloud | Spot (with interruption risk) |
|---|---|---|
| RTX 4090 | $0.34/hr (¥51) | ~50% off |
| RTX 3090 | $0.22/hr (¥33) | ~50% off |
What You Need
- RunPod account
- Credit card or PayPal (charge $10+)
- 1–2 hours for initial setup
Step 1: Create a RunPod Account
- Go to https://www.runpod.io/
- Sign up (Google/GitHub login available)
- In the left menu: Billing → Add Credits, add about $10
Step 2: Launch a Pod
- Left menu: Pods → + Deploy
- Select GPU: RTX 4090 (Community Cloud)
- Search for “ComfyUI” in the template search and select it
- Container Disk: 50GB
- Volume Disk: 100GB (for model storage, reusable across sessions)
- Click Deploy
Takes 1–2 minutes to start.
Step 3: Access ComfyUI
Click “Connect” on the running Pod → HTTP Service (port 3000) opens ComfyUI.
The default workflow appears on first load; if it does, you're good to go.
Step 4: Install Custom Nodes
Click the “Manager” button in the top-right of ComfyUI (if ComfyUI Manager is pre-installed).
If Manager is missing, connect via SSH and install manually.
SSH Connection (only if needed)
Click “Connect” → SSH over exposed TCP to get the connection details.
```shell
ssh root@<ip> -p <port> -i ~/.ssh/id_rsa
```
Installing Custom Nodes
```shell
cd /workspace/ComfyUI/custom_nodes

# Camera Angle Selector (96-angle selection UI)
git clone https://github.com/NickPittas/ComfyUI_CameraAngleSelector.git

# ComfyUI Manager (if not installed)
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
```
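After cloning, it's worth installing each node's Python dependencies before restarting, since missing packages cause "import failed" errors. The helper below is a small sketch (paths follow the clone commands above; the actual `pip` call is left as a comment so you can review what gets installed):

```shell
# Print the requirements.txt of every custom node under the given directory.
# This only locates the files; piping them to pip is the commented step below.
list_node_requirements() {
  find "$1" -maxdepth 2 -name requirements.txt
}

# On the pod:
#   list_node_requirements /workspace/ComfyUI/custom_nodes | xargs -n1 pip install -r
```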
Choosing a Model Variant
Qwen-Image-Edit-2511 comes in multiple quantized variants. Choose based on your GPU’s VRAM.
| Variant | Size | Required VRAM | Quality | Use Case |
|---|---|---|---|---|
| BF16 (full precision) | ~57GB | 40GB+ | Best | A100/H100 |
| FP8 | ~20GB | ~20GB | Nearly equivalent | Recommended for RTX 4090 |
| NF4 | — | ~17GB | Good | RTX 3090 |
| GGUF Q4_K_M | ~13GB | 12GB+ | Practical | RTX 3060 |
FP8 is optimal for RTX 4090 (24GB). Maintains quality with VRAM headroom.
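If you're unsure which variant your pod can hold, you can read total VRAM from `nvidia-smi` and compare it against the table. The thresholds below are a heuristic mirroring the table above, not official guidance:

```shell
# Suggest a Qwen-Image-Edit-2511 variant from available VRAM (in MiB).
# Cutoffs follow the "Required VRAM" column above: 40GB+ -> BF16,
# ~20GB -> FP8, ~17GB -> NF4, otherwise GGUF Q4_K_M.
suggest_variant() {
  local vram_mib=$1
  if   [ "$vram_mib" -ge 40960 ]; then echo "BF16"
  elif [ "$vram_mib" -ge 20480 ]; then echo "FP8"
  elif [ "$vram_mib" -ge 17408 ]; then echo "NF4"
  else echo "GGUF Q4_K_M"
  fi
}

# On the pod, feed in the real number:
#   suggest_variant "$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits)"
suggest_variant 24564   # an RTX 4090 reports roughly 24GB
```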
Recommended Settings by Variant
| Variant | CFG | Steps | Notes |
|---|---|---|---|
| BF16 | 4.0 | 40 | High quality, slow |
| FP8 | 4.0 | 20 | Good balance |
| FP8 + Lightning LoRA | 1.0 | 4 | Fast preview |
Download Sources
- FP8: 1038lab/Qwen-Image-Edit-2511-FP8
- GGUF: unsloth/Qwen-Image-Edit-2511-GGUF
- Lightning (fast inference): lightx2v/Qwen-Image-Edit-2511-Lightning
The following steps assume FP8. For full precision, replace the download source with Qwen/Qwen-Image-Edit-2511.
Step 5: Download Models
Download the Qwen-Image-Edit-2511 base model and LoRA. Even the FP8 version is 20GB+, so it takes a while.
```shell
cd /workspace/ComfyUI/models

# Install the HuggingFace CLI
pip install -U huggingface-hub

# Base model, FP8 (~20GB)
huggingface-cli download 1038lab/Qwen-Image-Edit-2511-FP8 \
  --local-dir ./checkpoints/Qwen-Image-Edit-2511-FP8/

# Multiple-Angles-LoRA
mkdir -p loras
huggingface-cli download fal/Qwen-Image-Edit-2511-Multiple-Angles-LoRA \
  --local-dir ./loras/Qwen-Image-Edit-2511-Multiple-Angles-LoRA/
```
For full precision:
```shell
# Base model, BF16 (~57GB)
huggingface-cli download Qwen/Qwen-Image-Edit-2511 \
  --local-dir ./checkpoints/Qwen-Image-Edit-2511/
```
Lightning LoRA for Fast Inference (Optional)
Reduces inference steps from 40 to 4. Quality drops slightly but speed improves dramatically.
```shell
# Lightning LoRA (repository per the download sources listed above)
huggingface-cli download lightx2v/Qwen-Image-Edit-2511-Lightning \
  Qwen-Image-Edit-2511-Lightning-4steps-V1.0-bf16.safetensors \
  --local-dir ./loras/
```
Step 6: Restart ComfyUI
Restart ComfyUI so it picks up the new models and nodes.
```shell
supervisorctl restart comfyui
```
Or stop and start the Pod itself.
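If you want to script the restart, a small poll loop confirms ComfyUI is back up before you reload the browser tab. Port 3000 matches the RunPod template used above (stock ComfyUI defaults to 8188), and the retry count is an arbitrary choice:

```shell
# Poll a URL until it answers (or give up after N tries, 2s apart).
# Prints "up" or "down" so it can be used in scripts.
wait_for_comfyui() {
  local url=$1 tries=${2:-30}
  local i
  for i in $(seq "$tries"); do
    if curl -sf "$url" >/dev/null 2>&1; then
      echo up; return 0
    fi
    sleep 2
  done
  echo down; return 1
}

# On the pod:
#   wait_for_comfyui http://127.0.0.1:3000
```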
Step 7: Build the Workflow
Connect the following nodes in ComfyUI.
Basic Setup
```text
[Load Checkpoint] Qwen-Image-Edit-2511
        ↓
[Load LoRA] Multiple-Angles-LoRA (strength: 0.8–1.0)
        ↓
[Load Image] Input image
        ↓
[Camera Angle Selector] Select angle
        ↓
[CLIP Text Encode] Prompt
        ↓
[KSampler]
        ↓
[Save Image]
```
Using Camera Angle Selector
A 3D UI appears where you select from 96 angles.
- Direction: front, front-right, right, back-right, back, back-left, left, front-left
- Height: overhead, eye level, low angle, ground level
- Distance: close-up, medium shot, full shot
The selection outputs a prompt like `<sks> front view eye level medium shot`.
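Since the trigger phrase is just text, you can also enumerate all 96 combinations yourself, e.g. for batch rendering. The exact token order is inferred from the single example the node outputs, so treat it as an assumption:

```shell
# Emit every trigger-phrase combination: 8 directions x 4 heights x 3 distances.
angle_prompts() {
  local dir h dist
  for dir in front front-right right back-right back back-left left front-left; do
    for h in overhead "eye level" "low angle" "ground level"; do
      for dist in close-up "medium shot" "full shot"; do
        echo "<sks> $dir view $h $dist"
      done
    done
  done
}

angle_prompts | wc -l   # 96 prompts in total
```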
Recommended Parameters
| Parameter | Fast Mode | High Quality Mode |
|---|---|---|
| Steps | 4 (with Lightning LoRA) | 20 |
| CFG Scale | 3.0 | 4.0 |
| Sampler | euler | dpm++_2m_karras |
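Once the graph runs in the browser, you can also queue it headlessly through ComfyUI's HTTP API (export the graph with "Save (API Format)" first). The filename `workflow_api.json` and port 3000 are assumptions based on the template setup above:

```shell
# Wrap an API-format workflow JSON into the body ComfyUI's POST /prompt expects.
build_payload() {
  printf '{"prompt": %s}' "$1"
}

# On the pod:
#   curl -s -X POST http://127.0.0.1:3000/prompt \
#     -H "Content-Type: application/json" \
#     -d "$(build_payload "$(cat workflow_api.json)")"
build_payload '{"3": {"class_type": "KSampler"}}'
```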
Cost Management
RunPod charges by the hour, so manage your pods when done.
- Stop: pauses GPU billing; the Volume is retained but storage billing continues
- Delete: Removes Pod, you can choose to keep the Volume
- Terminate: Completely deletes Pod + Volume
Tips to Keep Costs Down
- Stop the Pod immediately when done
- Save models to Volume — no re-download needed next time
- Use Spot instances (~50% off, at the risk of interruption)
- Delete Volume too if you won’t use it for a while (storage is also billed)
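As a rough sanity check on the numbers above, a short helper can estimate a session's cost. The GPU rate comes from the pricing table earlier; the volume rate of $0.07/GB-month is an assumption, so check RunPod's current pricing page:

```shell
# Estimate session cost: GPU hours * hourly rate, plus the volume's
# monthly storage rate prorated over the session (730 hours/month).
session_cost() {
  local hours=$1 gpu_rate=$2 volume_gb=$3 volume_rate_month=$4
  awk -v h="$hours" -v r="$gpu_rate" -v g="$volume_gb" -v v="$volume_rate_month" \
    'BEGIN { printf "%.2f\n", h * r + g * v / 730 * h }'
}

# 3 hours on an RTX 4090 with a 100GB volume:
session_cost 3 0.34 100 0.07   # -> 1.05 (dollars)
```

As the result shows, storage is a rounding error during a session; it only adds up when a volume sits idle for weeks.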
Troubleshooting
Model Doesn’t Appear in the List
- Check that ComfyUI was restarted
- Verify the model file path is correct (/workspace/ComfyUI/models/checkpoints/)
- Confirm the download completed
Out of Memory (OOM) Error
- Even with RTX 4090’s 24GB, high resolutions or complex workflows can OOM
- If using BF16, switch to FP8
- Reduce input image size
- Use Lightning LoRA to reduce inference steps
- If all else fails, try the GGUF version
SSH Connection Fails
- Confirm the Pod is running in the RunPod console
- Check that your SSH public key is registered in RunPod
- Check whether the port is blocked by a firewall
NSFW Support (3D Model Reference Sheets, etc.)
The official model tends to suppress NSFW content. For use cases requiring bare skin, like reference sheets for 3D model base meshes, use a community NSFW variant.
Phr00t/Qwen-Image-Edit-Rapid-AIO
An all-in-one model with VAE/CLIP integrated. Separate NSFW and SFW versions available.
| Version | Notes |
|---|---|
| v18.1-NSFW | Stable, 28.4GB |
| v19-NSFW | Latest, Lightning 2511 8-step integrated |
Download: Phr00t/Qwen-Image-Edit-Rapid-AIO
Setup (Phr00t Version)
Download the Phr00t version instead of the official one.
```shell
cd /workspace/ComfyUI/models/checkpoints

# v18.1-NSFW (stable)
huggingface-cli download Phr00t/Qwen-Image-Edit-Rapid-AIO \
  v18/Qwen-Rapid-AIO-NSFW-v18.1.safetensors \
  --local-dir ./
```
Workflow Differences
Since Phr00t’s version has VAE/CLIP integrated, the node setup is simpler.
```text
[Load Checkpoint] Qwen-Rapid-AIO-NSFW-v18.1
        ↓
[Load LoRA] Multiple-Angles-LoRA (strength: 0.8–1.0)
        ↓
[Load Image] Input image
        ↓
[Camera Angle Selector] Select angle
        ↓
[CLIP Text Encode] Prompt
        ↓
[KSampler] steps: 4, cfg: 1.0, sampler: euler_ancestral, scheduler: beta
        ↓
[Save Image]
```
Recommended Parameters (Phr00t Version)
| Parameter | Value |
|---|---|
| Steps | 4~8 |
| CFG | 1.0 |
| Sampler | euler_ancestral |
| Scheduler | beta |
Notes
- Reports of reduced face consistency in v18+ compared to v16
- Worth trying v16 if consistency matters for editing tasks
- v18+ has better NSFW quality for text-to-image
Summary
- The RunPod + ComfyUI environment for Qwen-Image-Edit-2511 can be set up in 1–2 hours
- RTX 4090 at $0.34/hour, ~¥2–3 per inference
- With Volume reuse, subsequent sessions start in minutes
- Smarter to try it in the cloud before spending ¥700,000 on a local setup