Ollama can’t run on Android
That's fair, but in that case I think I'd rather self-host an Ollama server and connect to it with an Android client. Much better performance.
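For anyone wanting to try that setup: Ollama exposes an HTTP API (on port 11434 by default), so any Android client that can make HTTP requests will work. Here's a minimal sketch in Python; the LAN address and model name are hypothetical, adjust for your own server.

```python
import json
import urllib.request

# Hypothetical LAN address of the self-hosted Ollama box;
# 11434 is Ollama's default port.
OLLAMA_URL = "http://192.168.1.50:11434/api/generate"

payload = json.dumps({
    "model": "llama3.2",     # assumes this model was already pulled on the server
    "prompt": "Why is the sky blue?",
    "stream": False,         # one complete JSON response instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

With `"stream": False` you get the whole completion in a single JSON object, which keeps a quick client like this simple; a real chat app would stream tokens instead.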
How does Ollama compare to GPT models? I use the paid tier for work, and I'm curious how this stacks up.
It's decent, with the DeepSeek model anyway. It's not as fast and has a lower parameter count, though. You might just need to try it and see if it fits your needs.
Yes, that's my setup. But this will be useful for cases where the internet connection isn't reliable.
Is there any useful model you can run on a phone?
llama.cpp (which Ollama runs on) can, and many chat apps for phones can use it.
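As a rough sketch of what that looks like through the llama-cpp-python bindings (the model file here is hypothetical; any small quantized GGUF model works, and on a phone you realistically want something in the 1-3B parameter range):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical path to a small quantized model downloaded beforehand.
llm = Llama(model_path="./qwen2.5-1.5b-instruct-q4_k_m.gguf", n_ctx=2048)

# Calling the model object runs a plain text completion.
out = llm("Q: Name one planet. A:", max_tokens=32, stop=["Q:"])
print(out["choices"][0]["text"])
```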
Try PocketPal instead
You can run it in Termux
Has this actually been done? If so, I assume it would only be able to use the CPU.
Yeah, I have it in Termux. Ollama is in the package repos for Termux. The generation speed does feel like CPU-only speed, but I don't know for sure.
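For reference, Ollama in Termux serves the same API on localhost as it does on a desktop, so you can sanity-check it from Python on the device itself. A small sketch, assuming `ollama serve` is already running in another Termux session:

```python
import json
import urllib.request

# Ollama listens on localhost:11434 by default, in Termux as on desktop.
with urllib.request.urlopen("http://127.0.0.1:11434/api/tags") as resp:
    models = json.loads(resp.read())["models"]

# List whatever models have been pulled locally (e.g. via `ollama pull ...`).
for m in models:
    print(m["name"])
```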