1. Hardware Input

Tell the app what you have

Recommendations are estimates, not benchmarks. Actual performance depends on CPU, GPU, quantization, context length, thermals, OS, and Ollama version.

2. Recommendations

Top local models for your setup

Choose your hardware and click Recommend Models to see the best matches.
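
A recommendation step like this can be sketched as a simple filter over a model catalog. The catalog entries, model names, and memory figures below are illustrative assumptions, not the app's actual dataset:

```javascript
// Hypothetical model catalog; names and memory figures are illustrative
// assumptions, not the app's real v1 dataset.
const MODELS = [
  { name: "llama3.2:1b", minRamGb: 4 },
  { name: "llama3.2:3b", minRamGb: 8 },
  { name: "qwen2.5-coder:7b", minRamGb: 16 },
];

// Keep only models whose memory requirement fits the user's RAM,
// largest (most capable) first.
function recommendModels(ramGb, models = MODELS) {
  return models
    .filter((m) => m.minRamGb <= ramGb)
    .sort((a, b) => b.minRamGb - a.minRamGb);
}
```

With 8 GB entered, such a filter would surface the 3B model as the top pick and hide anything that needs more memory than the machine has.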

3. Setup Steps

How to run the first recommendation

These steps update based on the OS you select and automatically use the top recommended model.
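
One way such OS-aware steps can be generated is a small lookup that pairs each OS with its install step and appends the run command for the top pick. The Linux install script and `ollama run` are the commands Ollama documents; the exact lookup shape and the model name are placeholder assumptions:

```javascript
// Map each OS choice to its install step. The install commands below are
// the ones Ollama documents; the structure is an illustrative sketch.
const INSTALL_STEPS = {
  linux: "curl -fsSL https://ollama.com/install.sh | sh",
  macos: "Download and run the Ollama installer from ollama.com/download",
  windows: "Download and run the Ollama installer from ollama.com/download",
};

// Build the step list for the selected OS; `ollama run` pulls the model
// on first use and then starts it. The model name is a placeholder for
// whatever the top recommendation happens to be.
function setupSteps(os, topModel) {
  return [INSTALL_STEPS[os], `ollama run ${topModel}`];
}
```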

4. Warning

Keep expectations realistic

• These recommendations are approximate, not benchmarks.
• Actual performance depends on CPU, GPU, quantization, context length, thermals, OS, background apps, and Ollama version.
• Vision and larger models often need more memory than text-only use suggests.
• A model can technically run and still feel too slow for daily use.

5. Upgrade Advice

What upgrade matters most next

6. FAQ

Common questions

What is Ollama?

Ollama is a tool for downloading and running local AI models on your own machine.

Can this website run the AI model for me?

No. This site only recommends models and shows commands. You run the model locally with Ollama.

Does this site send my specs anywhere?

No. Everything runs in your browser and no hardware data is sent anywhere.

Why are recommendations approximate?

Real-world speed and quality vary based on quantization, CPU, GPU, context length, thermals, drivers, OS, and Ollama version.

What should I choose for an 8 GB laptop?

Small 3B to 4B models are usually the practical ceiling. CPU-only systems may still prefer 1B to 3B models for speed.
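
The 3B-to-4B ceiling follows from a rough memory rule of thumb: at 4-bit quantization each parameter takes about half a byte, plus overhead for context and runtime. A hedged sketch of that estimate (the 1.2x overhead factor is an assumption, not a measured constant):

```javascript
// Rough memory estimate: parameters (in billions) * bytes per parameter
// * an overhead factor for context and runtime.
// Q4 quantization stores roughly 0.5 bytes per parameter; the 1.2x
// overhead factor is an illustrative assumption, not a measured constant.
function estimateMemoryGb(paramsBillions, bytesPerParam = 0.5, overhead = 1.2) {
  return paramsBillions * bytesPerParam * overhead;
}
```

By this estimate a 4B model needs roughly 2.4 GB and a 7B model roughly 4.2 GB; on an 8 GB machine the OS and other apps already claim several gigabytes, which is why 3B to 4B is the practical ceiling.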

What is the best model for coding?

Qwen coding models are strong picks when your RAM and VRAM are high enough, especially the 7B range for balanced local use.

What is the best model for vision?

Vision-capable Gemma 3 models are the main options in this v1 dataset. They need more memory than text-only models.

Can I use this on GitHub Pages?

Yes. It is a static HTML, CSS, and JavaScript site with no backend, build step, or API dependency.

7. Privacy

Everything stays in your browser

This app is fully static: no login, no database, no API calls, and no tracking logic. No hardware data is ever sent anywhere.

Guides

Ollama models for weak hardware

Practical starting guides for running Ollama on weak hardware, with model picks, commands, limits, and upgrade advice.

• Read the laptop local AI guide
• Read the 8GB RAM guide
• Read the Ollama RAM guide
• Read the 16GB RAM models guide
• Read the 8GB RAM and 4GB VRAM guide
• Read the 4GB VRAM models guide
• Read the low-end coding models guide