Compare performance in our RTX 4070 vs 4090 comparison.
On a budget? Check our budget AI GPU guide.
For image generation, read our Stable Diffusion GPU guide.
For large models, see our best GPU for LLMs guide.
Find the exact laptop or GPU you need for your AI workload
Answer a few quick questions and we’ll point you to the right laptop tier for Stable Diffusion, local LLMs, ComfyUI, and other real-world AI workloads.
AI hardware quiz
Find your ideal AI setup in about 30 seconds
The first question is already open below. Choose your main workload, move through a few quick steps, and get one clear recommendation plus realistic alternatives.
How this tool works
How to use this tool well
- Match your heaviest real workload, not your easiest test project.
- Treat minimum as the lowest workable tier, not the best buying choice.
- Use the recommended lane when you want smoother iteration and longer hardware life.
- If the calculator pushes you toward desktop-class hardware, that is usually a VRAM issue rather than a CPU issue.
How the calculator thinks
1. Workload first
Local LLMs, Stable Diffusion, fine-tuning, and general AI development push hardware in different ways. The calculator starts with the task, not the marketing label.
2. VRAM before hype
VRAM determines what fits. Raw GPU speed matters after that. The result shows both a minimum workable tier and the tier we would actually recommend.
3. Buying path next
Every result points you to the GTG page that matches the decision you still need to make: laptops, GPUs, VRAM planning, local LLMs, or image-generation hardware.
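The three steps above can be sketched as a simple lookup. This is an illustrative sketch only, not the calculator's actual code: the function and tier names are hypothetical, and the VRAM values mirror the ladder table in this guide.

```python
# Hypothetical sketch of the workload-first logic described above.
# Tier values come from this guide's VRAM ladder; names are illustrative.

VRAM_TIERS = {
    "7b_llm": (8, 12),             # (minimum GB, recommended GB)
    "13b_llm": (12, 16),
    "34b_llm": (24, 48),
    "stable_diffusion": (8, 12),
    "sdxl_controlnet": (12, 16),
    "fine_tuning": (24, 48),
}

def recommend(workload: str) -> dict:
    minimum, recommended = VRAM_TIERS[workload]
    # A 24GB+ recommendation usually means desktop-class hardware,
    # since laptop GPUs rarely ship with that much VRAM.
    form_factor = "desktop" if recommended >= 24 else "laptop or desktop"
    return {
        "minimum_vram_gb": minimum,
        "recommended_vram_gb": recommended,
        "form_factor": form_factor,
    }

print(recommend("13b_llm"))
# {'minimum_vram_gb': 12, 'recommended_vram_gb': 16, 'form_factor': 'laptop or desktop'}
```

Note how VRAM, not GPU speed, drives the branch toward desktop hardware, which is exactly why a desktop-class result usually signals a memory constraint.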
AI hardware guide
Find your hardware tier faster on mobile
Compare minimum and recommended VRAM by workload, then jump straight to the calculator or a matching buying guide.
Quick VRAM ladder
| Use case | Minimum | Recommended | What that usually means |
|---|---|---|---|
| 7B local LLMs | 8GB | 8–12GB | Entry local inference and experimentation |
| 13B local LLMs | 12GB | 16GB | Serious local AI on higher-end laptops or desktop GPUs |
| 34B local LLMs | 24GB | 24–48GB | Usually a desktop-class recommendation |
| Stable Diffusion | 8GB | 12GB | Entry image generation vs smoother higher-resolution use |
| SDXL + ControlNet / LoRAs | 12GB | 16GB | Heavier memory pressure and fewer compromises |
| Fine-tuning / training-adjacent work | 24GB | 48GB+ | Desktop-first territory in most cases |
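The LLM rows in the ladder follow a common rule of thumb: weight memory is roughly parameters times bits-per-weight divided by 8, plus overhead for context and framework buffers. The sketch below uses assumed values (4-bit quantization, ~30% overhead) and is a rough estimate, not the calculator's exact math.

```python
def estimate_llm_vram_gb(params_billion: float,
                         bits_per_weight: int = 4,
                         overhead: float = 1.3) -> float:
    """Rough VRAM estimate for local LLM inference.

    Weights take about params * bits / 8 gigabytes; the overhead
    factor (an assumed ~30% here) covers KV cache, activations,
    and framework buffers.
    """
    weight_gb = params_billion * bits_per_weight / 8
    return weight_gb * overhead

print(f"7B  @ 4-bit: ~{estimate_llm_vram_gb(7):.1f} GB")   # fits an 8GB card
print(f"13B @ 4-bit: ~{estimate_llm_vram_gb(13):.1f} GB")  # why 12GB is the floor
print(f"34B @ 4-bit: ~{estimate_llm_vram_gb(34):.1f} GB")  # desktop 24GB territory
```

Heavier quantization or longer context windows move these numbers, which is why the ladder recommends a tier above the bare minimum.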
7B local LLMs
Entry local inference. 8GB can work for lighter local inference; 12GB gives smoother iteration and fewer memory limits.
13B local LLMs
More comfortable at 16GB. 12GB is the practical floor; 16GB is a much safer target for regular local AI work.
34B local LLMs
Desktop usually wins. Once you are here, laptop-class hardware runs into compromises fast; desktop-class VRAM is usually the better value path.
Stable Diffusion
Image generation. 8GB works for simpler runs; 12GB feels better for faster iteration and fewer quality compromises.
SDXL + ControlNet / LoRAs
Heavier memory pressure. 12GB can work, but 16GB is where the workflow starts to feel meaningfully more usable on a regular basis.
Fine-tuning / training-adjacent work
Desktop-first territory. If your result lives here, desktop hardware usually delivers better value, more headroom, and fewer compromises than a laptop.
Not sure which tier you actually need?
Use the calculator to match your workload to a realistic minimum and a safer recommended setup.
FAQ
What matters more for AI hardware: VRAM or raw GPU speed?
VRAM usually matters first because it decides whether a model or workflow fits at all. Raw GPU speed matters after the model fits in memory.
Why does the calculator show minimum and recommended hardware?
The minimum tier is the lowest workable setup. The recommended tier is the one we would actually buy for smoother iteration, fewer out-of-memory problems, and longer useful life.
When should I choose a desktop instead of a laptop?
If your result calls for 24GB+ VRAM, frequent 34B-class model work, or fine-tuning, desktop hardware usually delivers better value and fewer compromises than a laptop.
Interactive AI Hardware Tool Suite
Use these quick calculators to estimate whether your GPU can run a model, how much VRAM you need, which laptop tier fits your workload, and whether cloud or local hardware makes more sense.
Can Your GPU Run This Model?
VRAM Requirement Estimator
AI Laptop Recommendation Tool
Cloud vs Local Cost Estimator
These tools provide practical estimates, not lab-certified benchmarks. For deeper recommendations, see the GPU ranking guide and AI laptop guide.
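The cloud-vs-local question in the last tool above reduces to a break-even calculation: how many months of your usage does it take for a one-time GPU purchase to overtake ongoing rental fees? This sketch uses purely illustrative prices and a hypothetical function name, not the tool's actual logic or current market rates.

```python
def breakeven_months(gpu_price: float,
                     cloud_rate_per_hr: float,
                     hours_per_month: float,
                     power_cost_per_month: float = 15.0) -> float:
    """Months of use at which buying a local GPU overtakes cloud rental.

    All inputs are illustrative assumptions, not real market prices.
    Local running cost is modeled only as electricity here.
    """
    cloud_monthly = cloud_rate_per_hr * hours_per_month
    if cloud_monthly <= power_cost_per_month:
        return float("inf")  # at this usage, cloud stays cheaper indefinitely
    return gpu_price / (cloud_monthly - power_cost_per_month)

# Example: a $1600 card vs $0.80/hr cloud at 100 hours/month
print(f"Break-even in ~{breakeven_months(1600, 0.80, 100):.1f} months")
```

The general takeaway matches the tool's purpose: heavy, regular workloads favor local hardware, while occasional bursts favor cloud.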