
Ollama - Local LLMs for everyone!

A place to discuss Ollama: basic use, extensions and add-ons, integrations, and using it in custom code to create agents.

Do you use it to help with schoolwork or work? Maybe to help you with coding projects, or to teach you how to do something?

What are your preferred models and why?

[email protected] 2 points 1 week ago

My Mac mini (32 GB) can run 12B-parameter models at around 13 tokens/sec, and my 3060 gets roughly double that. However, both machines have a hard time keeping up with larger models. I'll have to look into some special-purpose models.
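
If anyone wants to measure tokens/sec on their own hardware, Ollama's API reports generation timing in its response. Here's a minimal sketch, assuming a local Ollama server on the default port (11434); the model tag and prompt are just placeholders, swap in whatever you've pulled:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
MODEL = "gemma3:12b"  # placeholder; use any model you've already pulled

# Non-streaming request so the final JSON includes the timing fields.
resp = requests.post(
    OLLAMA_URL,
    json={"model": MODEL, "prompt": "Explain what a token is.", "stream": False},
    timeout=300,
)
resp.raise_for_status()
data = resp.json()

# eval_count = number of generated tokens; eval_duration is in nanoseconds.
tokens_per_sec = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"{data['eval_count']} tokens in {data['eval_duration'] / 1e9:.1f}s "
      f"-> {tokens_per_sec:.1f} tokens/sec")
```

You can get the same numbers from the CLI with `ollama run <model> --verbose`, which prints an eval rate line after each response.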