Memory bandwidth is 256 GB/s, much less than the M4 Max (526 GB/s) or M2 Ultra (800 GB/s). Expect performance to reflect that.
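Rough back-of-envelope, assuming single-stream decoding is memory-bandwidth-bound and a hypothetical ~40 GB set of quantized weights (e.g. a 70B model at ~4-bit); tokens/sec is roughly bandwidth divided by weight size:

```python
# Back-of-envelope decode speed: generating each token streams all model
# weights from RAM once, so tokens/sec <= memory bandwidth / weight size.
# This is an upper bound and ignores KV cache, compute, and overhead.
MODEL_BYTES = 40e9  # hypothetical ~70B model quantized to ~4-bit (~40 GB)

machines = {
    "AI Max (256 GB/s)": 256e9,
    "M4 Max (526 GB/s)": 526e9,
    "M2 Ultra (800 GB/s)": 800e9,
}

for name, bandwidth in machines.items():
    print(f"{name}: ~{bandwidth / MODEL_BYTES:.1f} tok/s")
```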
It's comparable to the M4 Pro in memory bandwidth but has way more RAM for the price.
Good point. You can't even get an M* Pro with 128GB. Only the Max and Ultra lines go that high, and then you'll end up spending at least twice as much.
I think it has potential but I would like to see benchmarks to determine how much. The fact that they have 5Gbps Ethernet and TB4 (or was it 5?) is also interesting for clusters.
Indeed!
(It should be TB4 if I remember correctly)
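Putting rough numbers on the cluster angle (a sketch; assumes TB4-class links at the nominal 40 Gbps and ignores protocol overhead):

```python
# Quick unit conversion: interconnect bandwidth vs. local memory bandwidth.
# Real usable throughput will be lower than these nominal link rates.
links_gbps = {"5 GbE": 5, "Thunderbolt 4 / USB4 (40 Gbps)": 40}
local_mem_gbs = 256  # GB/s, the AI Max's quoted memory bandwidth

for name, gbps in links_gbps.items():
    gbs = gbps / 8
    print(f"{name}: ~{gbs:.2f} GB/s "
          f"(~{local_mem_gbs / gbs:.0f}x less than local RAM bandwidth)")
```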
It's absolutely positioned to be a cheap AI PC. Mac Studio started gaining popularity due to its shared RAM, Nvidia responded with their home server thing, and now AMD responds with this.
It being way cheaper and potentially faster is huge. The bad news is that it's probably going to be scalped and out of stock for some time.
Wonder how it will compare to NVIDIA Digits on price.
Update: Depending on specs, it seems to be priced similarly: approx. 3000 EUR for a decent complete prosumer build.
Update 2: Holy boaty, it's modular and supports third-party hardware: https://frame.work/dk/en/products/desktop-mainboard-amd-ai-max300?v=FRAMBM0006
Perhaps I'm mistaken, but I read this as "Nvidia takes 3k just for the chip" vs. Framework's "2.5k for the whole system"?
Either way, exciting news for self-hosters in the coming years!
Huge!
Awesome! We need benchmarks ASAP!