this post was submitted on 06 Feb 2025
5 points (64.7% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.



I'm on Arch Linux btw, and I have an RTX 3060 with 12 GB of VRAM, which is nice because a 14B model fits entirely in VRAM. It works quite well, but I wonder if there's any way to squeeze out even more speed by also utilizing the iGPU in my Intel 14600K. It just sits there doing nothing.

But I don't know if it even makes sense to try. From what I've read in comments online, the bottleneck would be RAM speed: the iGPU uses my normal system RAM, which is an order of magnitude slower than VRAM.
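A rough way to sanity-check that bottleneck claim: during token generation, every token reads essentially all the weights once, so tokens/sec is roughly memory bandwidth divided by model size. The numbers below are assumptions for illustration (~360 GB/s for the 3060's GDDR6, ~80 GB/s for dual-channel DDR5, ~9 GB for a 4-bit-quantized 14B model):

```python
# Back-of-envelope: decode speed ~ memory bandwidth / model size,
# because each generated token streams all the weights through the chip once.
def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

model_gb = 9.0  # assumption: 14B model quantized to ~4 bits per weight
vram = est_tokens_per_sec(360, model_gb)    # assumption: RTX 3060 GDDR6 ~360 GB/s
sysram = est_tokens_per_sec(80, model_gb)   # assumption: dual-channel DDR5 ~80 GB/s
print(f"VRAM: ~{vram:.0f} tok/s, system RAM (iGPU): ~{sysram:.0f} tok/s")
```

Under those assumptions the iGPU path is bandwidth-starved by roughly 4-5x before you even count its weaker compute, which matches the comments you read.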

Does anyone have any experience with that?

top 7 comments
[–] [email protected] 8 points 1 day ago
[–] [email protected] 3 points 1 day ago (2 children)

Models are computed sequentially (the output of each layer is the input to the next layer in the sequence), so more GPUs do not offer any kind of performance benefit.

[–] [email protected] 6 points 22 hours ago* (last edited 22 hours ago) (1 children)

More GPUs do improve performance:

https://medium.com/@geronimo7/llms-multi-gpu-inference-with-accelerate-5a8333e4c5db

All large AI systems are built from multiple "GPUs" (AI processors like Blackwell). Really large AI models run on clusters of individual servers connected by 800 GB/s network interfaces.

However, iGPUs are so slow that it wouldn't offer a significant performance improvement.

[–] [email protected] 2 points 19 hours ago (1 children)

What I am talking about is when layers are split across GPUs. I guess what you mean is loading the full model into each GPU to parallelize layers and do batching?
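Splitting layers across GPUs usually means assigning each device a contiguous chunk of the layer stack, so each GPU holds only its share of the weights and hands activations to the next device. A minimal sketch of such a device map (illustrative only; device names like `cuda:0` are assumptions, not tied to any specific framework):

```python
def split_layers(n_layers: int, n_gpus: int) -> dict:
    # Assign contiguous blocks of layers to each GPU, the way a
    # pipeline-style device map does. Each GPU stores only its block;
    # activations cross device boundaries at the seams.
    per_gpu = -(-n_layers // n_gpus)  # ceiling division
    return {i: f"cuda:{i // per_gpu}" for i in range(n_layers)}

dm = split_layers(8, 2)
print(dm)  # layers 0-3 on cuda:0, layers 4-7 on cuda:1
```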

[–] [email protected] 1 points 14 minutes ago

No, full models are not loaded into each GPU to improve the tokens per second.

The full GPT-3 needs around 640 GB of VRAM to store the weights. There is no single GPU (AI processor like the A100) with 640 GB of VRAM, so the model is split across multiple GPUs (AI processors).
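Spelling out the arithmetic behind that claim (using the 640 GB figure from above and assuming 80 GB A100-class cards; this counts weights only, ignoring activations and KV cache):

```python
import math

# Minimum GPU count just to hold the weights in VRAM.
def min_gpus(weights_gb: float, vram_per_gpu_gb: float) -> int:
    return math.ceil(weights_gb / vram_per_gpu_gb)

print(min_gpus(640, 80))  # 8 cards minimum, just for the weights
```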

[–] [email protected] 1 points 1 day ago (1 children)

I see, that's a shame, thanks for explaining it.

[–] [email protected] 3 points 22 hours ago* (last edited 19 hours ago)

You can, but I don't think it will help, because the iGPU is so slow.

https://medium.com/@mayvic/llm-multi-gpu-batch-inference-with-accelerate-edadbef3e239