I don't get the point. Framework laptops are interesting because they are modular, but for desktop PCs that's the default. And Framework's desktop is less modular than a standard PC because the RAM is soldered.
Soldered-on RAM and GPU. Strange for Framework.
Apparently AMD couldn’t make the signal integrity work out with socketed RAM. (source: LTT video with Framework CEO)
IMHO: Up until now, using soldered RAM was lazy and cheap bullshit. But I do think we are at the limit of what's reasonable to do over socketed RAM. In high-performance datacenter applications, socketed RAM is on its way out (see: MI300A, Grace-{Hopper,Blackwell}, Xeon Max), with onboard memory gaining ground. I think we'll see the same trend on consumer stuff as well. Requirements on memory bandwidth and latency are going up with recent trends like powerful integrated graphics and AI slop, and socketed RAM simply won't keep up.
It's sad, but in a few generations I think only the lower-end consumer CPUs will still work with socketed RAM. I'm betting the high-performance consumer CPUs will require not only soldered, but on-package RAM.
Finally, some Grace Hopper to make everyone happy: https://youtube.com/watch?v=gYqF6-h9Cvg
There's even the next iteration already happening: Cerebras is making wafer-scale chips with integrated SRAM. If you want the highest memory bandwidth to your CPU cores, the memory has to sit right next to them ON the chip.
Ultimately, RAM and processor will probably be indistinguishable to the human eye.
Sounds like a downgrade to me. I'd rather have the ability to add more RAM than a soldered, fixed amount, no matter how high-performance it is. Especially for consumer stuff.
Looking at my actual PCs built in the last 25 years or so, I tend to buy a lot of good spec ram up front and never touch it again. My desktop from 2011 has 16GB and the one from 2018 has 32GB. With both now running Linux, it still feels like plenty.
When I go to build my next system, if I could get a motherboard with 64 or 128GB soldered to it, AND it was like double the speed, I might go for that choice.
We just need to keep competition alive in that space to avoid the dumb price gouging you get with phones and Macs and stuff.
I definitely wouldn't mind soldered RAM if there's still an expansion socket. Solder in at least a reasonable minimum (16G?) and not the cheap stuff, but memory that can actually use the signal-integrity advantage. I may want more RAM, and it's fine if that extra RAM is a bit slower. You could leave out the DIMM slot, but then have at least one free PCIe x16 expansion slot, one in addition to the GPU slot. PCIe latency isn't stellar, but on the upside, expansion boards would come with their own memory controllers, and if push comes to shove you can configure the faster RAM as cache and the expansion RAM as swap.
Heck, throw the memory into the CPU package. It's not like there's ever a situation where you don't need RAM.
All your RAM needs to be the same speed unless you want to open up a rabbit hole. All attempts at that thus far have kinda flopped. You can make very good use of such systems, but I’ve only seen it succeed with software specifically tailored for that use case (say databases or simulations).
The way I see it, RAM in the future will be on package and non-expandable. CXL might get some traction, but naah.
Couldn't you just treat the socketed RAM like another layer of memory, effectively meaning that L1-L3 are on the CPU, the soldered RAM is "L4", and the extra socketed RAM is "L5"? Alternatively, couldn't you just treat it like really fast swap?
Wrote a longer reply to someone else, but briefly, yes, you are correct. Kinda.
Caches won't help with bandwidth-bound compute (read: "AI") if the streamed dataset is significantly larger than the cache. A cache will only speed up repeated access to a limited set of data.
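A rough back-of-envelope sketch of that point (all numbers below are illustrative assumptions, not benchmarks): once the streamed working set exceeds the cache, every pass costs a full DRAM traversal, so runtime is set by bandwidth alone and the cache is irrelevant.

```python
# Back-of-envelope model: cache only helps if the working set fits in it.
# All figures are illustrative assumptions, not measurements.
CACHE_BYTES   = 32 * 2**20   # assumed 32 MiB last-level cache
DRAM_GBPS     = 100          # assumed socketed dual-channel DDR5
SOLDERED_GBPS = 256          # assumed wide soldered LPDDR5X bus

def stream_time_s(working_set_bytes, bandwidth_gbps, passes=10):
    """Estimated seconds to stream the data `passes` times."""
    if working_set_bytes <= CACHE_BYTES:
        # Fits in cache: after the first pass, reads hit cache,
        # so DRAM is traversed only once.
        traversals = 1
    else:
        # Doesn't fit: every pass streams the whole set from DRAM.
        traversals = passes
    return traversals * working_set_bytes / (bandwidth_gbps * 1e9)

# A 16 GiB model streamed 10 times: only the memory bandwidth matters.
big = 16 * 2**30
print(f"socketed: {stream_time_s(big, DRAM_GBPS):.2f} s")
print(f"soldered: {stream_time_s(big, SOLDERED_GBPS):.2f} s")
```

The same math is why the "treat slow RAM as L5/swap" idea only works for capacity, not for this kind of streaming workload.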
These little buggers are loud, right?
The Noctua fan option should be pretty quiet.
I have a Noctua fan in my PC. Quiet AF. I don't hear it and it sits beside me.
Hmm, probably not. I think it just has the single 120mm fan that probably doesn't need to spin up that fast under normal load. We'll have to wait for reviews.
I also just meant given the size constraints in tiny performance PCs. More friction in tighter spaces means the fans work harder to push air. CPU/GPU fans sit closer to the fan grille than in larger cases. And larger cases can even have a bit of insulation to absorb sound better. So, without having experimented with this myself, I would expect a particularly small and particularly powerful (as opposed to efficient) machine to be particularly loud under load. But yes, we'll have to see.
Calling it a gaming PC feels misleading. It's definitely geared more towards enterprise/AI workloads. If you want upgradeable just buy a regular framework. This desktop is interesting but niche and doesn't seem like it's for gamers.
I think it's an Apple-like niche.
Question about how shared VRAM works
So do I need to specify the split in the BIOS, with the allocation then fixed at runtime, or can I allocate VRAM dynamically as the workload needs it?
On macOS you don't really have to think about this, so I'm wondering how this compares.
On my 7800, it’s static. The 2GB I allocate is not usable for the CPU, and compute apps don’t like it “overflowing” past that.
This is on Linux, on a desktop, ASRock mobo. YMMV.
It's typically dynamic
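For what it's worth, on Linux with the amdgpu driver you can inspect both pools: the BIOS-carved UMA region shows up as "VRAM", and the dynamically shared system-memory pool as "GTT". A small sketch (the `card0` sysfs path is an assumption; the card index varies per system):

```python
from pathlib import Path

def read_mib(path):
    """Read an amdgpu mem_info sysfs value (bytes) and convert to MiB.

    Returns None if the file doesn't exist (non-amdgpu system,
    different card index, etc.).
    """
    p = Path(path)
    return int(p.read_text()) // 2**20 if p.exists() else None

# VRAM = the static carve-out set in the BIOS; GTT = the pool the driver
# can map dynamically from system RAM at runtime.
vram = read_mib("/sys/class/drm/card0/device/mem_info_vram_total")
gtt = read_mib("/sys/class/drm/card0/device/mem_info_gtt_total")
print("VRAM (MiB):", vram, "| GTT (MiB):", gtt)
```

On typical APU systems the GTT pool is much larger than the BIOS carve-out, which is why the split feels "dynamic" in practice even though the VRAM number itself is static.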