
Self-Hosted Alternatives to Popular Services

This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/ShinyAnkleBalls on 2025-01-27 22:08:37+00:00.


Hi there, I keep seeing more and more posts about running Deepseek R1 locally. Some claim you can do it using Ollama with a few GB of RAM.

You can't run THE Deepseek R1 with Ollama. If you install Ollama and select Deepseek R1, what you are getting and using is one of the much, much smaller and much less performant distilled models. They are effectively fine-tunes of different existing models (Qwen2.5, Llama, etc.) trained on data generated by Deepseek R1. They are great, but not THE R1 OpenAI is scared of. You can verify this yourself, as sketched below.
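If you want to check what you actually pulled, Ollama's local REST API reports the model family and parameter count. A minimal sketch, assuming a local Ollama instance on the default port 11434 and a model pulled under the default `deepseek-r1` tag (the exact keys in the response may vary between Ollama versions):

```python
import json
import urllib.request

# Ask the local Ollama instance what "deepseek-r1" really is.
req = urllib.request.Request(
    "http://localhost:11434/api/show",
    data=json.dumps({"model": "deepseek-r1"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    details = json.load(resp)["details"]

# On the default tag this reports a small Qwen-family distill
# (single-digit billions of parameters), not the 671B actual R1.
print(details["family"], details["parameter_size"], details["quantization_level"])
```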

I don't know why Ollama decided to call these models Deepseek R1, but it's problematic. Running the actual Deepseek R1 in q4 requires more than 400GB of VRAM or RAM, depending on how long you are willing to sit there waiting for an answer...
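The 400GB figure follows from simple arithmetic: the real R1 has roughly 671B parameters, and a 4-bit quant stores about half a byte per weight, before you add quantization metadata, KV cache, and runtime overhead. A back-of-the-envelope sketch:

```python
# Rough memory estimate for the actual Deepseek R1 at 4-bit quantization.
params = 671e9                 # ~671B parameters
bytes_per_param = 0.5          # q4 ~= 4 bits per weight, ignoring block overhead
weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB")  # ~336 GB

# Real q4 GGUF files average closer to ~4.8 bits/weight once quantization
# scales are included; add KV cache and runtime overhead and you land
# comfortably above 400 GB of combined VRAM/RAM.
```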
