Models and sizes FAQ
Pick the right model for your laptop and workload. These FAQs cover model sizes, presets, and when to switch.
Choosing models
Which model for 8 GB RAM?
Use a quantized model in the 4–8B parameter range with the low-spec preset and concise prompts.
When to use a larger model?
For long-form, nuanced reasoning on hardware with at least 16 GB of RAM, ideally while plugged in.
Can I swap models per task?
Yes. Keep a manifest and switch presets based on task complexity and hardware.
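A per-task manifest can be as simple as a lookup table. The sketch below is illustrative only — the file names, preset names, and task tiers are assumptions, not PortableMind defaults:

```python
# Hypothetical manifest mapping task complexity to a local model and preset.
# Model paths and preset names are examples, not PortableMind defaults.
MANIFEST = {
    "quick":    {"model": "models/small-3b-q4.gguf",  "preset": "low-spec"},
    "standard": {"model": "models/mid-8b-q4.gguf",    "preset": "standard"},
    "deep":     {"model": "models/large-14b-q4.gguf", "preset": "high-spec"},
}

def pick_model(task_complexity: str) -> dict:
    """Return the model/preset entry for a task, falling back to 'standard'."""
    return MANIFEST.get(task_complexity, MANIFEST["standard"])
```

Falling back to a standard entry keeps the switch safe when a task label is misspelled or unknown.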
Performance impact
Do bigger models drain battery faster?
Yes. Run them plugged in; use smaller models on battery.
How to keep small models sharp?
Give clear context, cap tokens, and use bullet outputs for precision.
Is GPU required?
No. PortableMind is tuned for CPU; modern CPUs handle the included models.
Management
Can I remove models to save space?
Yes, but keep backups. Ensure at least one low-spec and one standard model remain.
How to add a new model safely?
Download on a trusted network, copy to the models folder, and validate offline.
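One way to validate a downloaded model before first use is to compare it against a checksum published by its source. A minimal sketch, assuming the publisher provides a SHA-256 digest (the function names here are illustrative):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MB chunks so large model files never load fully into RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path, expected_sha256: str) -> bool:
    """Compare a downloaded model against the checksum published by its source."""
    return sha256_of(path) == expected_sha256.lower()
```

Run the check once after copying the file into the models folder; if it fails, re-download on a trusted network rather than patching the file.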
Do models auto-update?
No. Update manually when you choose, then validate offline performance.
Quality vs. speed
Outputs feel generic
Add examples, use stricter formatting, or step up one model size if hardware allows.
Responses are slow
Drop to a smaller model, shorten your prompts, and cap the output length.
How to avoid rambling?
Set token caps, request bullet points, and include tone/style constraints.
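Those constraints can be baked into the prompt itself. The helper below is a sketch — the exact wording and the word-count stand-in for a token cap are assumptions, not a PortableMind feature:

```python
def constrain_prompt(task: str, max_words: int = 120, tone: str = "neutral") -> str:
    """Append explicit length, format, and tone constraints to a task prompt.

    The constraint wording is illustrative; a word limit stands in for a
    token cap, since word counts are easier to state in plain prompts.
    """
    return (
        f"{task}\n\n"
        "Constraints:\n"
        "- Answer in bullet points only.\n"
        f"- Keep the answer under {max_words} words.\n"
        f"- Use a {tone}, matter-of-fact tone.\n"
    )
```

Stating the limits in the prompt complements any hard token cap set in the app, so small models stop before they ramble.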
More resources
Related guides
Battery-optimized offline AI on laptops
Run offline AI without draining your battery. PortableMind includes presets and tips to keep power use low while staying offline.
Best offline AI models for Mac: what runs well on MacBooks
Model guidance for macOS: what runs well on MacBooks and how to stay stable without cloud dependencies.
Best offline AI models for Windows: fast vs smart presets
Model guidance for Windows laptops: choose lightweight models for speed or bigger ones for reasoning, with real-world tips.