Can I download their model and run it on my own hardware? No? Then they're inferior to DeepSeek.
In fairness, unless you have about 800GB of VRAM/HBM, you're not running the real DeepSeek. The smaller models are Llama and Qwen models distilled from DeepSeek R1.
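Back-of-the-envelope, that 800GB figure checks out. A quick sketch — the ~20% overhead for KV cache and buffers is my rough assumption, not a measured number:

```python
# Rough VRAM estimate for running the full DeepSeek R1 (FP8 weights).
# The ~20% overhead for KV cache, activations, and runtime buffers
# is an assumption, not a measured figure.
params = 671e9           # R1's published total parameter count (MoE)
bytes_per_param = 1.0    # FP8 = 1 byte per weight
weights_gb = params * bytes_per_param / 1e9
total_gb = weights_gb * 1.2
print(f"weights ~{weights_gb:.0f} GB, with overhead ~{total_gb:.0f} GB")
# -> weights ~671 GB, with overhead ~805 GB: hence "about 800GB"
```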
I'm really hoping DeepSeek releases smaller models that I can fit on a 16GB GPU and try at home.
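For reference, here's the rough math on why ~14B at 4-bit is about the ceiling for a 16GB card. Sketch only — the bytes-per-param and overhead figures are my approximations, and real quant sizes vary:

```python
# Rough check of what fits on a 16GB GPU at 4-bit quantization.
# Assumptions: ~0.56 bytes/param effective (a Q4_K_M-style quant),
# plus a few GB for KV cache and buffers; all approximate.
vram_gb = 16
bytes_per_param = 0.56          # ~4.5 bits per parameter effective
kv_and_overhead_gb = 3          # grows with context length
for size_b in (7, 14, 32, 70):  # model sizes in billions of params
    need = size_b * 1e9 * bytes_per_param / 1e9 + kv_and_overhead_gb
    print(f"{size_b}B: ~{need:.1f} GB {'fits' if need <= vram_gb else 'does not fit'}")
# 7B and 14B fit comfortably; 32B and up do not.
```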
Qwen 2.5 is already amazing for a 14B, so I don't see how DeepSeek can improve much with a new base model, even if they continue training it.
Perhaps we need to meet in the middle: have quad-channel APUs like Strix Halo become more common, and release 40–80GB MoE models to match. Perhaps bitnet ones?
Or design them for asynchronous inference.
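Rough math on why MoE suits that kind of hardware: decode is bandwidth-bound, so what matters is the bytes of *active* parameters streamed per token, not total model size. Sketch below — the bandwidth and active-parameter numbers are illustrative assumptions, not verified Strix Halo specs:

```python
# Why MoE helps on bandwidth-limited APUs: decode speed scales with
# the bytes of ACTIVE parameters read per token, not total size.
# All numbers below are illustrative assumptions.
bandwidth_gbps = 256          # e.g. 256-bit LPDDR5X-8000 ~= 256 GB/s
active_params = 6e9           # hypothetical MoE with ~6B active per token
bytes_per_param = 0.56        # ~4-bit quantization
bytes_per_token = active_params * bytes_per_param
tokens_per_sec = bandwidth_gbps * 1e9 / bytes_per_token
print(f"~{tokens_per_sec:.0f} tokens/s upper bound")   # ~76 tok/s
# A dense 60B at the same quant streams ~10x the bytes -> ~8 tok/s.
```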
I just don't see how 20B-ish models can perform like ones an order of magnitude bigger without a paradigm shift.
I use a 14B and it's certainly great for my modest high-school physics and Python needs (helping the kids), but for party games and such it's a drag that its pop-culture knowledge stops at mid-2023.
Thing is, there are a lot of free APIs for 30B–70B-class models.
Self-hosting is great, of course, and if 14B does the job then it does the job.
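And since most of those hosted APIs (and local servers like Ollama or llama.cpp's server) expose an OpenAI-compatible endpoint, switching between a self-hosted 14B and a hosted 70B is basically a one-line change. Minimal sketch — the base URL and model names are placeholders, not a specific provider:

```python
# Minimal OpenAI-compatible chat request. Works against a local Ollama
# instance or most hosted providers by changing base_url and model.
# The base_url and model name here are placeholder assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # e.g. a local Ollama server
    api_key="not-needed-locally",          # hosted APIs need a real key
)
resp = client.chat.completions.create(
    model="qwen2.5:14b",                   # swap for a hosted 70B model name
    messages=[{"role": "user", "content": "Explain projectile motion simply."}],
)
print(resp.choices[0].message.content)
```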