This table provides a detailed overview of the most widely used Stable Diffusion models, evaluating their compatibility with GPU types, web interfaces (like ComfyUI or AUTOMATIC1111), and advanced features such as LoRA, ControlNet, and SDXL Refiner support. It also highlights whether additional components like FFmpeg are needed for audio/video models, and clarifies each model's licensing terms—critical for commercial or research deployments.
| Model Name | Size (fp16) | Recommended GPU | Images/sec | LoRA Support | ControlNet Support | Recommended UI | Refiner Support | Additional Components | License |
|---|---|---|---|---|---|---|---|---|---|
| stabilityai/stable-diffusion-v1-4 | ~4.27 GB | RTX 3060 / 5060 | 1.5-2 | ✅ | ✅ (requires extension) | AUTOMATIC1111 | ❌ | None | CreativeML OpenRAIL-M |
| stabilityai/stable-diffusion-v1-5 | ~4.27 GB | RTX 3060 / 5060 | 1.8-2.2 | ✅ | ✅ | AUTOMATIC1111 | ❌ | None | CreativeML OpenRAIL-M |
| stabilityai/stable-diffusion-xl-base-1.0 | ~6.76 GB | A4000 / A5000 | 1.2-1.5 | ✅ | ✅ (SDXL version required) | ComfyUI | ✅ | None | CreativeML OpenRAIL++-M |
| stabilityai/stable-diffusion-xl-refiner-1.0 | ~6.74 GB | A4000 / A5000 | 0.8-1.1 | ✅ | ❌ | ComfyUI | ✅ (as refiner) | None | CreativeML OpenRAIL++-M |
| stabilityai/stable-audio-open-1.0 | ~7.6 GB | A4000 / A5000 | -- | ❌ | ❌ | Web UI | ❌ | FFmpeg, TTS preprocessing | Non-commercial RAIL |
| stabilityai/stable-video-diffusion-img2vid-xt | ~8 GB | A4000 / A5000 | Depends on frame rate | ❌ | ❌ | Web UI | ❌ | FFmpeg | Non-commercial RAIL |
| stabilityai/stable-diffusion-2 | ~5.2 GB | RTX 3060 / 5060 | 1.6-2.0 | ✅ | ✅ | AUTOMATIC1111 | ❌ | None | CreativeML OpenRAIL-M |
| stabilityai/stable-diffusion-3-medium | ~10 GB | RTX 4090 / 5090 | 1.0-1.5 | ✅ | Partial | ComfyUI | ✅ | None | Not open source; API license required |
| stabilityai/stable-diffusion-3.5-large | ~20 GB | A100 40GB / RTX 5090 | 0.5-0.9 | Unknown | Unknown | Web UI / API | ✅ (paired with a refiner) | Unknown | API-only license |
| stabilityai/stable-diffusion-3.5-large-turbo | ~20 GB | A100 40GB / RTX 5090 | >2.0 | Unknown | Unknown | Web UI / API | ✅ (paired with a refiner) | Unknown | API-only license |
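The SDXL rows above flag which checkpoints pair with the refiner. Below is a minimal sketch of that base-plus-refiner handoff using the Hugging Face diffusers library; the 0.8 denoising split and the prompt are illustrative values, not recommendations from the table.

```python
# Minimal sketch: SDXL base + refiner handoff with Hugging Face diffusers.
# Model IDs are taken from the table above; the 0.8 split point and the
# prompt are illustrative, not recommendations.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,  # share components to save VRAM
    vae=base.vae,
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photograph of a snowy mountain village at dusk"

# The base model denoises the first 80% of the steps and hands latents over.
latents = base(
    prompt=prompt,
    num_inference_steps=40,
    denoising_end=0.8,
    output_type="latent",
).images

# The refiner finishes the remaining 20% and decodes to an image.
image = refiner(
    prompt=prompt,
    num_inference_steps=40,
    denoising_start=0.8,
    image=latents,
).images[0]

image.save("village.png")
```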
Run any version of Stable Diffusion—SD 1.5, 2.1, SDXL, or SD 3.5—on your terms. Choose your UI (ComfyUI or AUTOMATIC1111), customize pipelines, switch checkpoints, and fine-tune models with LoRA or ControlNet integration.
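As a rough illustration of checkpoint switching and LoRA loading, here is a short diffusers-based sketch. The model ID follows the table above, and `your-username/your-lora` is a hypothetical placeholder for whichever adapter you actually fine-tuned or downloaded.

```python
# Sketch: load a checkpoint from the table and attach a LoRA adapter.
# "your-username/your-lora" is a hypothetical placeholder repository.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-v1-5",  # swap in any checkpoint listed above
    torch_dtype=torch.float16,
).to("cuda")

# LoRA weights are layered on top of the base checkpoint at load time.
pipe.load_lora_weights("your-username/your-lora")

image = pipe(
    "a watercolor painting of a lighthouse, soft morning light",
    num_inference_steps=30,
).images[0]
image.save("lighthouse.png")
```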
Deploy on powerful GPUs (e.g. RTX 4090, A100) for fast, multi-user inference. Handle image, audio, or even video generation at scale, with support for batching, concurrency, and memory-efficient attention backends such as xFormers.
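A hedged sketch of what batched, memory-conscious inference can look like with diffusers; the prompts and the specific memory toggles are examples, and the right combination depends on your GPU.

```python
# Sketch: batched prompts plus memory-saving toggles in diffusers.
# Requires the `accelerate` package for CPU offloading.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)

# Memory-saving options; enable only what the GPU actually needs.
pipe.enable_model_cpu_offload()   # streams weights to the GPU on demand
pipe.enable_attention_slicing()   # lowers peak VRAM at a small speed cost

# Generate several prompts in one batched call.
prompts = [
    "an isometric illustration of a small data center",
    "a macro photo of frost patterns on a window",
    "a low-poly render of a sailboat at sea",
]
images = pipe(prompt=prompts, num_inference_steps=30).images

for i, img in enumerate(images):
    img.save(f"batch_{i}.png")
```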
Self-hosted means no third-party API calls. Keep your prompts, generations, and models completely private—ideal for secure environments or enterprise use cases. Run everything fully offline once models are downloaded.
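One way to verify that nothing reaches out to third-party services is to combine the `HF_HUB_OFFLINE` environment variable with `local_files_only=True` once the weights are cached locally, as in this minimal sketch.

```python
# Sketch: force fully offline inference once the weights are cached locally.
import os

os.environ["HF_HUB_OFFLINE"] = "1"  # block all Hugging Face Hub requests

import torch
from diffusers import StableDiffusionXLPipeline

# local_files_only makes the load fail loudly if anything is missing from
# the local cache instead of silently reaching out to the network.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    local_files_only=True,
).to("cuda")

image = pipe("a pencil sketch of an old library interior").images[0]
image.save("library.png")
```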
Use AUTOMATIC1111 for quick generation and ease of use, or ComfyUI for advanced, node-based workflows supporting Refiner stages, multi-model chaining, and fine-grained control—all with visual, drag-and-drop interfaces.