Nope.
Models are computed sequentially (the output of each layer is the input to the next layer), so more GPUs don't offer any kind of performance benefit.
More GPUs do improve performance:
https://medium.com/@geronimo7/llms-multi-gpu-inference-with-accelerate-5a8333e4c5db
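For example, with Hugging Face transformers + accelerate, a single `device_map="auto"` argument shards a model's layers across every visible GPU. A minimal sketch (the model name is only a placeholder, swap in whatever you actually run):

```python
# Minimal sketch of sharding one model's layers across several GPUs
# with Hugging Face transformers + accelerate (both pip-installable).
# "gpt2" is only a placeholder model name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" lets accelerate assign contiguous groups of layers
# to each visible GPU, spilling to CPU RAM if VRAM runs out.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

inputs = tokenizer("Self-hosting LLMs is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```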
All large AI systems are built from multiple GPUs (AI processors like Blackwell). Really large AI models run on a cluster of individual servers connected by 800 Gb/s network interfaces.
However, iGPUs are so slow that it wouldn't offer a significant performance improvement.
What I'm talking about is when layers are split across GPUs. I guess that means loading the full model into each GPU to parallelize layers and do batching?
No, the full model is not loaded into each GPU to improve tokens per second.
The full GPT-3 needs around 640 GB of VRAM just to store the weights, and there is no single GPU (AI processor like an A100) with 640 GB of VRAM, so the model is split across multiple GPUs (AI processors).
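The arithmetic is easy to sanity-check yourself. A rough sketch, weights only and assuming fp32 (the 640 GB figure above may assume a different precision):

```python
# Back-of-the-envelope VRAM math, weights only (no KV cache or
# activations). The precision assumption is mine, not the parent
# comment's.
params = 175e9           # GPT-3-scale parameter count
bytes_per_param = 4      # fp32; 2 for fp16/bf16, 1 for int8
vram_gb = params * bytes_per_param / 1e9
print(f"~{vram_gb:.0f} GB for the weights alone")  # ~700 GB at fp32
# An 80 GB A100 holds a small fraction of that, hence the sharding.
```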
I see, that's a shame, thanks for explaining it.
You can. But I don't think it will help because the iGPU is so slow.
https://medium.com/@mayvic/llm-multi-gpu-batch-inference-with-accelerate-edadbef3e239
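That article uses accelerate's `split_between_processes`: each GPU gets a full copy of a (small enough) model and a disjoint slice of the prompts. A rough sketch, run via `accelerate launch script.py` ("gpt2" is again only a placeholder):

```python
# Rough sketch of multi-GPU batch inference with accelerate: one model
# copy per GPU, prompts split across processes.
from accelerate import Accelerator
from transformers import AutoModelForCausalLM, AutoTokenizer

accelerator = Accelerator()
model_name = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(accelerator.device)

prompts = ["What is self-hosting?", "What is VRAM?", "What is an iGPU?"]

# split_between_processes hands each process (one per GPU) its own
# slice of the list, so the batch is worked through in parallel.
with accelerator.split_between_processes(prompts) as subset:
    for prompt in subset:
        inputs = tokenizer(prompt, return_tensors="pt").to(accelerator.device)
        out = model.generate(**inputs, max_new_tokens=20)
        print(tokenizer.decode(out[0], skip_special_tokens=True))
```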