this post was submitted on 16 May 2024
16 points (100.0% liked)

LocalLLaMA

3314 readers

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks of community members. I.e. no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency. I.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain/mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms. I.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms as <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago

Current situation: I've got a desktop with 16 GB of DDR4 RAM, a 1st-gen Ryzen CPU from 2017, and an AMD RX 6800 XT GPU with 16 GB of VRAM. I can run 7-13B models extremely quickly using ollama with ROCm (19+ tokens/sec). I can run Beyonder 4x7B Q6 at around 3 tokens/second.

I want to get to a point where I can run Mixtral 8x7B at the Q4 quant at an acceptable token speed (5+ tokens/sec). I can run the Mixtral Q3 quant at about 2 to 3 tokens per second. The Q4 quant takes an hour to load, and assuming I don't run out of memory, it also runs at about 2 tokens per second.
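In case it matters, here's roughly how I'm measuring those tokens/sec figures. This is just a sketch using the ollama Python client: the model tag is a placeholder for whatever Q4 build you've pulled, and I'm assuming the eval_count/eval_duration fields the Ollama API reports.

```python
import ollama

# Assumption: the ollama Python client exposes generate() and passes through
# the eval_count / eval_duration fields from the Ollama REST API.
resp = ollama.generate(
    model="mixtral:8x7b-instruct-v0.1-q4_K_M",  # placeholder tag; use whatever `ollama list` shows
    prompt="Explain mixture-of-experts models in two sentences.",
)

tokens = resp["eval_count"]            # tokens generated
seconds = resp["eval_duration"] / 1e9  # duration is reported in nanoseconds
print(f"{tokens / seconds:.1f} tokens/sec")
```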

What's the easiest/cheapest way to get my system to be able to run the higher quants of Mixtral effectively? I know that I need more RAM; another 16 GB should help. Should I upgrade the CPU?

As an aside, I also have an older Nvidia GTX 970 lying around that I might be able to stick in the machine. I'm not sure if ollama can split across different GPU brands yet, but I know this capability is in llama.cpp now.
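If I end up going the llama.cpp route, I assume partial offload would look something like this with the llama-cpp-python bindings; the GGUF path and layer count are placeholders I'd have to tune against 16 GB of VRAM, so treat it as a sketch rather than a working config.

```python
from llama_cpp import Llama

# Assumption: a ROCm/HIP (or Vulkan) build of llama-cpp-python is installed.
llm = Llama(
    model_path="./mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=20,  # offload as many layers as fit in VRAM; the rest run on CPU/RAM
    n_ctx=4096,       # context length also consumes memory
)

out = llm("Q: What is a mixture-of-experts model?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```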

Thanks for any pointers!

top 5 comments
[–] [email protected] 7 points 1 year ago* (last edited 1 year ago)

I hate to be the bearer of bad news, but as long as the model is too large to fit entirely in VRAM, getting 5 t/s on an 8x7B is going to be difficult. You can throw another 16 GB of RAM in the system, which could help with caching and context length, but since the model still has to juggle data in and out of VRAM, the speed will remain low.
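Here's a back-of-the-envelope way to see it. All the numbers are rough assumptions on my part (approximate Q4 file size, MoE active fraction, sustained DDR4 bandwidth), not measurements:

```python
# Rough upper bound on token speed from the RAM-resident part of the model alone.
# All numbers are ballpark assumptions, not measurements.
model_size_gb      = 26.0      # Mixtral 8x7B around Q4 (approx. GGUF file size)
active_fraction    = 13 / 47   # MoE: roughly 13B of 47B params are touched per token
vram_gb            = 16.0
ram_bandwidth_gbps = 40.0      # sustained dual-channel DDR4-3200, optimistically

spilled_gb            = max(model_size_gb - vram_gb, 0.0)  # ~10 GB lives in system RAM
ram_read_per_token_gb = spilled_gb * active_fraction       # weights streamed from RAM each token
print(f"~{ram_bandwidth_gbps / ram_read_per_token_gb:.0f} tokens/sec upper bound")
```

That optimistic bound looks fine on paper, but PCIe shuffling and CPU-side compute on the spilled layers usually eat most of it, which lines up with the 2-3 t/s you're already seeing.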

I wouldn't upgrade the CPU personally; focus on adding a beefier GPU. And it's probably not worth adding the 970 to the mix: its 4 GB isn't providing much room and will likely just slow down the 6800 XT.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago)

I don't know how important the CPU is for these workloads, tbh, but I feel like it's not as important, so maybe you're fine leaving it as it is.

I think AMD wanted to release a new GPU lineup (Radeon 8000 series) sometime this year/early next year. Maybe just wait for that, sell your old card on the used market, and buy a new one?

(And throw in the extra 16 GB of RAM as you said.)

[–] [email protected] 5 points 1 year ago (1 children)

Ollama doesn't currently support mixing CUDA & ROCm. https://github.com/ollama/ollama/issues/3723#issuecomment-2071134571

One thing to keep in mind about adding RAM: your speed could drop depending on how many slots you populate. For example, I have a 5700G, and with 2x16 GB it runs at 3200 MHz, but with 4x16 GB (the same exact product) it only runs at 1800 MHz. In my case, RAM speed has a huge effect on tokens/sec whenever a model has to use some RAM.

You can check AMD's spec page for your processor, but they don't really document a lot of this stuff.
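If you want to put rough numbers on that, peak dual-channel DDR4 bandwidth scales linearly with the transfer rate (I'm treating those MHz readings as MT/s here, which is an assumption on my part):

```python
# Theoretical peak bandwidth for DDR4: transfers/sec x 8-byte bus x channel count.
def ddr4_peak_gbps(mt_per_s: float, channels: int = 2) -> float:
    return mt_per_s * 1e6 * 8 * channels / 1e9

print(ddr4_peak_gbps(3200))  # ~51.2 GB/s with two DIMMs
print(ddr4_peak_gbps(1800))  # ~28.8 GB/s when four DIMMs force the lower speed
```

Since CPU-side generation is mostly memory-bandwidth bound, that drop translates pretty directly into fewer tokens/sec for whatever part of the model sits in RAM.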

[–] [email protected] 2 points 1 year ago (1 children)

Good callout on not being able to mix CUDA and ROCm; I wasn't aware of this.

[–] [email protected] 1 points 1 year ago

Yep, I had been hoping for the same thing.

Also, to @[email protected], you might want to wait and see what gets announced at Computex next month. Hopefully they announce some new stuff and the current gen prices drop.