this post was submitted on 31 Jan 2025
you are viewing a single comment's thread
I've run them on Intel CPUs before. When you put a CPU with more than two memory channels and a several-hundred-watt power budget up against a beefed-up mobile CPU, it's not a fair fight.
Second-hand Xeons are cheaper, though.
I'm talking about running them on a GPU, which favours the GPU even when the comparison is between an AMD Epyc and a mediocre GPU.
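To put rough numbers on the bandwidth point: token generation on a big model is mostly memory-bandwidth-bound, because each generated token streams roughly the whole set of weights. The figures below are ballpark assumptions for illustration, not benchmarks, and vary a lot by exact SKU and memory speed.

```python
# Back-of-envelope: if decoding is purely memory-bandwidth-bound, the ceiling is
# roughly bandwidth / bytes streamed per token (~ model size).
# Bandwidth figures are assumed ballpark peaks, not measurements.

def tokens_per_second_ceiling(model_size_gb: float, bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / model_size_gb

MODEL_GB = 50  # illustrative size for a large quantized model

assumed_bandwidth_gb_s = {
    "dual-channel DDR5-4800 desktop": 2 * 38.4,
    "8-channel DDR5-4800 server (Epyc-class)": 8 * 38.4,
    "Mac Studio class unified memory": 800.0,
    "A100 80GB (HBM2e)": 2039.0,
}

for system, bw in assumed_bandwidth_gb_s.items():
    print(f"{system}: ~{tokens_per_second_ceiling(MODEL_GB, bw):.1f} tok/s ceiling")
```

Even with this crude model, the ordering matches intuition: more memory channels help a CPU a lot, but GPU-class memory is in a different league.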
If you want to run a large version of DeepSeek R1 locally, with many quantized models being over 50 GB, I think the cheapest Nvidia GPU that fits the bill is an A100, which you might find used for around 6K (rough size math below).
For well under that price you can get a whole Mac Studio with the 192 GB the first poster in this thread mentioned (loading sketch below).
I'm not saying this is for everyone (it's certainly not for me), but I don't think we can dismiss that there is a real niche where Apple has a genuine value proposition.
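On the size math: a minimal sketch of how quantized model size scales with parameter count and bits per weight. These figures ignore KV cache, activations and runtime overhead, so real memory use is higher, and the parameter counts are just illustrative.

```python
# Rough size of a quantized model: parameters * bits-per-weight / 8.
# Ignores KV cache, activations and runtime overhead, so real usage is higher.

def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * bits_per_weight / 8  # billions of bytes ~= GB

print(model_size_gb(70, 4))      # ~35 GB  -> too big for a 24 GB consumer card
print(model_size_gb(70, 8))      # ~70 GB  -> needs an 80 GB A100 or unified memory
print(model_size_gb(671, 1.58))  # ~132 GB -> even extreme quants of full R1 are huge
```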
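And on the Mac side, this is roughly what loading one of those big quants looks like with llama-cpp-python on a machine whose GPU backend can address most of system memory (e.g. Metal on Apple Silicon). The file name is just a placeholder for whatever quant you actually download.

```python
# Minimal sketch: load a large GGUF where the GPU backend (e.g. Metal on
# Apple Silicon) can address most of system memory. File name is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./deepseek-r1-large-quant.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers to the GPU backend
    n_ctx=8192,       # context length; bigger contexts need more memory
)

result = llm("Why does memory bandwidth limit local LLM inference?", max_tokens=256)
print(result["choices"][0]["text"])
```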
My old flatmate has a PhD in NLP and used to work in research, and he'd have gotten soooo much use out of >100 GB of RAM accessible to the GPU.
I had found one for about 400 recently, a bit far away though. I ended up going with a GPU closer by. I don't need that many GBs.