this post was submitted on 24 Jan 2025
121 points (96.2% liked)

PC Gaming

For PC gaming news and discussion. PCGamingWiki

[–] [email protected] 48 points 1 week ago* (last edited 1 week ago) (3 children)

They used ray tracing for the hit registration so that's presumably why.

It's a really interesting idea ... presumably that means there are some really flashy guns and there is a very intricate damage system that runs at least partially on the GPU.

[–] [email protected] 25 points 6 days ago* (last edited 6 days ago) (1 children)

really flashy guns and there is a very intricate damage system that runs at least partially on the GPU.

Short opinion: no, CPUs can do that fine (possibly better), and it's a tiny corner of game logic.

Long opinion: Intersecting projectile paths with geometry gains nothing from being moved from the CPU to the GPU unless you're dealing with a ridiculous number of projectiles every single frame. In most games this is less than 1% of CPU time, and moving it to the GPU will probably reduce overall performance due to the latency costs (...but a lot of modern engines already have awful frame latency, so it might fit right in).

You would only do this if you have been told by higher-ups that you have to, OR if you have a really unusual and new game design (thousands of new projectile paths every frame, i.e. hundreds of thousands of bullets per second). Even detailed multi-layer enemy models with vital components are just a few extra traces; using the GPU to calculate that would make the job harder for the engine dev for no gain.
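To put a rough number on "a tiny corner of game logic": the core of a hitscan query is a ray/triangle intersection, a handful of dot and cross products per candidate triangle. A minimal CPU-side sketch (Möller–Trumbore against a naive flat triangle list; a real engine would walk a BVH, and all names here are illustrative):

```cpp
// Minimal CPU hitscan sketch: Möller–Trumbore ray/triangle intersection.
// Illustrative only -- a real engine walks a BVH rather than a flat list.
#include <cmath>
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Triangle { Vec3 v0, v1, v2; };

// Returns the distance t along the ray if it hits the triangle, nothing otherwise.
std::optional<float> intersect(Vec3 origin, Vec3 dir, const Triangle& tri) {
    const float kEps = 1e-7f;
    Vec3 e1 = sub(tri.v1, tri.v0), e2 = sub(tri.v2, tri.v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return std::nullopt;   // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(origin, tri.v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return std::nullopt;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    float t = dot(e2, q) * inv;
    return (t > kEps) ? std::optional<float>(t) : std::nullopt;
}

// One hitscan shot: a few dot/cross products per candidate triangle,
// i.e. a trivial amount of CPU time per frame for typical shot counts.
std::optional<float> hitscan(Vec3 origin, Vec3 dir, const std::vector<Triangle>& tris) {
    std::optional<float> best;
    for (const auto& tri : tris)
        if (auto t = intersect(origin, dir, tri))
            if (!best || *t < *best) best = t;
    return best;
}
```

A single shot against a model's worth of triangles is on the order of microseconds of CPU work, which is why offloading it tends to cost latency rather than buy speed.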

Fun answer: check out CNlohr's noeuclid. Sadly there's no Windows build (I tried cross-compiling but ended up in dependency hell), but it still compiles and runs under Linux. The physics run on the GPU and the world geometry is very non-traditional. https://github.com/cnlohr/noeuclid

[–] [email protected] 9 points 6 days ago* (last edited 6 days ago) (2 children)

https://www.pcguide.com/news/doom-the-dark-ages-promises-accurate-hit-detection-with-help-from-cutting-edge-ray-tracing-implementation/

Honestly, I'm not interested in debating its validity, especially with the exact details of what they've done still under wraps ... I have no idea whether they're really onto something, and the details are scarce, but I did find the article I read.

[–] [email protected] 9 points 6 days ago* (last edited 6 days ago) (1 children)

Ooh, thank you for the link.

“We can leverage it [ray tracing] for things we haven’t been able to do in the past, which is giving accurate hit detection”

“So when you fire your weapon, the [hit] detection would be able to tell if you’re hitting a pixel that is leather sitting next to a pixel that is metal”

“Before ray tracing, we couldn’t distinguish between two pixels very easily, and we would pick one or the other because the materials were too complex. Ray tracing can do this on a per-pixel basis and showcase if you’re hitting metal or even something that’s fur. It makes the game more immersive, and you get that direct feedback as the player.”

It sounds like they're assigning materials based on the pixels of a texture map, rather than each mesh in a model being a different material, i.e. you paint materials onto a character rather than selecting chunks of the character and assigning them.

I suspect this either won't be noticeable to players at all or will be a very minor improvement (at best). It's not something worth going for in exchange for losing compatibility with other GPUs. It will require a different work pipeline for the 3D modellers (they have to paint materials on now rather than assign them per mesh), but that's neither here nor there; it might be easier for them or it might be hell-awful depending on the tooling.

This particular sentence upsets me:

Before ray tracing, we couldn’t distinguish between two pixels very easily

Uhuh. You're not selling me on your game company.

"Before" ray tracing, the technology that has been around for decades. That you could do on a CPU or GPU for this very material-sensing task without the players noticing for around 20 years. Interpolate UVs across the colliding triangle and sample a texture.

I suspect the "more immersion" and "direct feedback" are veils over the real reasoning:

During NVIDIA's big GeForce RTX 50 Series reveal, we learned that id has been working closely with the GeForce team on the game for several years (source)

With such a strong emphasis on RT and DLSS, it remains to be seen how these games will perform for AMD Radeon users

No one sane implements Nvidia- or AMD-exclusive (or anyone-else-exclusive) libraries in their games unless they're paid to do it. A game dev that cares about its players will make their game run well on all brands and flavours of graphics card.

At the end of the day this hurts consumers. If games work competitively on all GPU brands, then you have more choice and card companies are better motivated to compete. Whatever amount of money Nvidia is paying game devs to do this must be smaller than what they earn back from consumers buying more of their product instead of competitors'.

[–] [email protected] 4 points 6 days ago* (last edited 6 days ago)

Well, basically every shooter currently uses hitboxes for hitscan, and those never match the model 1:1. The hitboxes are typically far less detailed, and the weak points are just a different part of the hitbox that is similarly less detailed.

I think what they're doing is using the specialized RT hardware to evaluate the bullet path (just like a ray of light from a point) more cheaply than can traditionally be done on the GPU (effectively what Nvidia enabled when they introduced hardware designed for ray tracing).

If I'm guessing correctly, it's not so much that they're disregarding the mesh as that they're disregarding hitbox design. The hit damage is likely based on the mesh and the actual rendered model rather than a simplified hitbox ... so there's no "you technically shot past their ear, but it's close enough so we're going to call it a headshot" sort of stuff.

If you're doing a simulated shotgun blast, that could also be a hundred pellets being simulated through the barrel heading towards the target. Then add in more enemies that shoot things and a few new gun designs, and... maybe it starts to make sense.
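Even the hundred-pellet case is just a hundred of those single-bullet traces. A rough sketch of generating the pellet rays, with the spread model, cone angle, and pellet count invented purely for illustration:

```cpp
// Sketch: a shotgun blast as N jittered hitscan rays, each one the same cheap
// ray/geometry query as a single bullet. Spread model and counts are made up.
#include <cmath>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Build pellet directions scattered inside a small cone around `aim` (assumed normalized).
std::vector<Vec3> pelletDirections(Vec3 aim, int pellets = 100,
                                   float spreadRadians = 0.05f,
                                   unsigned seed = 42) {
    // Any two vectors perpendicular to `aim` work as a basis for the jitter.
    Vec3 up = (std::fabs(aim.y) < 0.99f) ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 right = normalize({aim.y*up.z - aim.z*up.y,
                            aim.z*up.x - aim.x*up.z,
                            aim.x*up.y - aim.y*up.x});
    Vec3 upOrtho = {right.y*aim.z - right.z*aim.y,
                    right.z*aim.x - right.x*aim.z,
                    right.x*aim.y - right.y*aim.x};

    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> angle(0.0f, 2.0f * 3.14159265f);
    std::uniform_real_distribution<float> radius(0.0f, spreadRadians);

    std::vector<Vec3> dirs;
    dirs.reserve(pellets);
    for (int i = 0; i < pellets; ++i) {
        float a = angle(rng), r = radius(rng);   // small-angle offset in the aim plane
        dirs.push_back(normalize({aim.x + r * (std::cos(a)*right.x + std::sin(a)*upOrtho.x),
                                  aim.y + r * (std::cos(a)*right.y + std::sin(a)*upOrtho.y),
                                  aim.z + r * (std::cos(a)*right.z + std::sin(a)*upOrtho.z)}));
    }
    return dirs;   // then trace each one exactly like a single bullet
}
```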

[–] [email protected] 3 points 6 days ago

It sounds like they're tying the effect of attacks to the actual fine-detail game textures/materials, which I guess are only available on the GPU? It's a weird thing to do and a bad description of it IMO, but that's what I got from that summary. It wouldn't be anywhere near as fast as normal hitscan on the CPU, and it also takes GPU time, which is generally the more limited resource given how many threads modern CPUs have.

Since there is probably only one bullet fired in any given frame most of the time, and the minimum size of a dispatch on the GPU is usually 32-64 cores (out of maybe 1k-20k), you'd be dispatching that whole group just to calculate this one singular bullet on a single core. GPU cores are also much slower than CPU cores, so clearly the only possible reason to do this is if the data needed literally only exists on the GPU, which it sounds like it does in this case. You would also first have to transfer to the GPU that a shot was taken, and then transfer the result back to the CPU, adding a small amount of latency in both directions.

This also only makes sense if you already use ray tracing elsewhere, because you generally need a BVH for ray tracing, and BVHs are expensive to build.
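For a sense of what "expensive to build" means, here is a toy median-split BVH builder. It's purely illustrative (not any engine's actual data layout), but it shows that the build has to bound, sort, and partition every triangle you want rays to be able to hit, and has to be rebuilt or refit as geometry changes:

```cpp
// Toy median-split BVH builder, to illustrate why "needs a BVH" is a real cost.
#include <algorithm>
#include <vector>

struct AABB { float min[3], max[3]; };

struct BVHNode {
    AABB bounds{};
    int left = -1, right = -1;       // child node indices; -1 means this is a leaf
    int firstTri = 0, triCount = 0;  // leaf: range into the triangle reference list
};

struct TriRef { int triIndex; float centroid[3]; AABB bounds; };

// Returns the index of the node built for tris[begin, end). Roughly O(n log n) overall.
int build(std::vector<BVHNode>& nodes, std::vector<TriRef>& tris, int begin, int end) {
    BVHNode node;
    node.bounds = tris[begin].bounds;
    for (int i = begin + 1; i < end; ++i)
        for (int a = 0; a < 3; ++a) {
            node.bounds.min[a] = std::min(node.bounds.min[a], tris[i].bounds.min[a]);
            node.bounds.max[a] = std::max(node.bounds.max[a], tris[i].bounds.max[a]);
        }

    if (end - begin <= 4) {                          // few enough triangles: make a leaf
        node.firstTri = begin;
        node.triCount = end - begin;
        nodes.push_back(node);
        return static_cast<int>(nodes.size()) - 1;
    }

    int axis = 0;                                    // split on the widest axis
    float extent[3];
    for (int a = 0; a < 3; ++a) extent[a] = node.bounds.max[a] - node.bounds.min[a];
    if (extent[1] > extent[axis]) axis = 1;
    if (extent[2] > extent[axis]) axis = 2;

    int mid = (begin + end) / 2;                     // median split by centroid
    std::nth_element(tris.begin() + begin, tris.begin() + mid, tris.begin() + end,
                     [axis](const TriRef& a, const TriRef& b) {
                         return a.centroid[axis] < b.centroid[axis];
                     });

    int self = static_cast<int>(nodes.size());
    nodes.push_back(node);                           // placeholder; children filled below
    int left  = build(nodes, tris, begin, mid);
    int right = build(nodes, tris, mid, end);
    nodes[self].left  = left;
    nodes[self].right = right;
    return self;
}
```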

Although this uses ray tracing, the only reason not to support cards without hardware ray tracing is that it would take more effort to do so (you would have to maintain both a normal raytracer and a DXR version).

[–] [email protected] 17 points 6 days ago (1 children)

Not disputing you, but hasn't hitscan been a thing for decades? Or is what you're saying a different thing?

Also, I always thought that the CPU and GPU either couldn't communicate with each other, or that it was a very difficult problem to solve. Have they found a way to make this intercommunication work on a large scale? Admittedly I only scanned the article quickly, but it looks like they're only talking about graphics quality. I'd love to know if they're leveraging the GPU for more than just visuals!

[–] [email protected] 16 points 6 days ago (1 children)

It's a different thing. This is pixel-perfect accuracy for the entire projectile. There aren't hitboxes as I understand it; it's literally whatever the model on screen is.

[–] [email protected] 6 points 6 days ago

Ooh, that makes sense. Sounds like it could be much cheaper to process than heavy collision models. Thanks for the clarification!

[–] BCsven 3 points 5 days ago

W10: OK, slow, but OK. W11: so much jank and buggy bullshit. I moved all my games to Linux. With Proton and Vulkan all my games work, including the RTX settings.

[–] [email protected] 6 points 5 days ago

Damn, and I was actually excited for this. Now I get the feeling that it'll be like every other AAA game.

[–] [email protected] 4 points 5 days ago* (last edited 5 days ago) (1 children)

And Quake III required a 3D accelerator. RTX has been around long enough.

[–] [email protected] 5 points 5 days ago* (last edited 5 days ago)

And I agree that's a good thing and the natural progression/evolution of the tech.
What I don't like is Nvidia's chokehold on the tech, with insane prices (for the card and the power draw) as a result. I know other cards are catching up and all that, but the difference is still huge because some methods and functions are locked to CUDA cores and Nvidia's tech.
I will not be giving Nvidia money in the next 7 years. We will see where they stand once I have to replace my (AMD) GPU.

[–] ILikeBoobies 6 points 6 days ago (2 children)

8GB is entry level for GPUs anyway, so that's not a big deal.

I suppose if you're going to have ray tracing, it cuts down development time not to have to redo the lighting for when the feature is off.

[–] [email protected] 6 points 6 days ago (2 children)

Fair, but it's been shown time and time again that most users are either on "entry level" GPUs or weaker. Heck, a midrange card from two years ago is an 8GB card. I'm not sure how they expect to sell this game at all unless it's just planned as a bundle add-on for the 50xx/90xx series cards.

[–] saigot 1 points 5 days ago (1 children)

Currently the most popular GPU according to the Steam survey is the 3060. That plays the only other mandatory-RT game, Indiana Jones, at 60 fps on high. A 2080 can play on low at 50.

[–] [email protected] 1 points 5 days ago

Yeah, but in this case I'm referring to VRAM. RT is what it is, and most "recent" cards support some kind of RT, even if not well. The concern is more that, for instance, the 3070 only has 8GB, and I wouldn't ever say the 3070 is nearing its EoL. The 3060 is the top card on Steam, sure, but the next two dozen or so places are almost universally 8GB cards (with varying degrees of RT support), including several 40xx series. I'm just saying that I don't see a hard RT and >8GB VRAM requirement playing out as well as a lot of people think.

[–] ILikeBoobies 1 points 6 days ago

I would say from the 3000 series on, 8GB was entry level.

Anything lower than that between then and now was just a scam.

[–] [email protected] 3 points 6 days ago (1 children)

The amount of VRAM isn't really the issue; even an extremely good GPU like the 7900 XTX (with 24GB of VRAM) struggles with some ray tracing workloads, because ray tracing requires specially designed hardware to run efficiently.

[–] ILikeBoobies 1 points 6 days ago

Different points

  1. On min vram

  2. On ray tracing

[–] [email protected] 2 points 6 days ago* (last edited 6 days ago) (3 children)

How do RT-only titles work on consoles? Their RT really isn't that powerful; aren't they supposed to be roughly equivalent to an RTX 2070 at best? It sounds like the graphics difference between PC and consoles will be quite large.

[–] [email protected] 8 points 6 days ago (1 children)

Consoles have been seriously struggling to run anything past 30 fps without being a blurry mess for the past several years. It's real bad on that side of the fence.

Although PC gamers aren’t much better off, having to buy $900 GPUs every year just to run the latest AAAA blurfest at 30 FPS with AI frame gen on top of upscaling on top of interpolation frame gen.

[–] [email protected] 4 points 6 days ago

No one should spend $900 on a GPU when about $500 gets you a product that's about 90% as good

[–] [email protected] 2 points 6 days ago

Both current-gen consoles are RT-capable, so they'll just use lowered graphical settings, some amount of optimization, and upscaling. Indiana Jones ran great, though, way better than you'd expect. I was getting a perfectly smooth 75 fps on a 6750 XT at 1080p high, with no upscaling or frame gen in use.

[–] [email protected] 1 points 6 days ago

They'll just run at 30fps lol

[–] ryper 83 points 1 week ago* (last edited 1 week ago) (3 children)

Reminder that Bethesda is owned by Microsoft, the company that insists it's going to end support for Windows 10 in October and wants everyone to move to Windows 11, which doesn't officially support perfectly functional but somewhat old CPUs. So of course they don't care about GPUs too old to support ray tracing.

[–] [email protected] 8 points 6 days ago (1 children)

At some point it was going to happen; this is just earlier than many thought. The real question is: when is AMD going to have an answer to Nvidia when it comes to RT performance?

[–] MystikIncarnate 4 points 5 days ago (1 children)

Earlier than they thought?

How long did they think it would take before RT was a requirement? It was introduced with the GeForce 20 series more than six years ago.

For technology, six years is vintage.

The only people this should affect are people still using GTX 10 and 16 series cards. I dunno what's happening with AMD/Radeon. Since Radeon was purchased by AMD, the naming schemes have gotten more and more nonsensical, so I always have a hard time knowing WTF generation a card is from by the model number.

In any case. Yeah, people using 5+ year old tech are going to be unable to play the latest AAA games. And?

Has there ever been a time when a 5+ year old system could reasonably play a modern AAA title without it being a slideshow?

[–] [email protected] 4 points 5 days ago* (last edited 5 days ago) (1 children)

I'm still hearing from people that they're using an Nvidia 1000-series card. I was expecting to hear 2000 instead of 1000 before something like this happened.

[–] MystikIncarnate 4 points 5 days ago (1 children)

I have a 20 series card, albeit one of the higher tier ones, and I probably won't be upgrading this year. I probably also won't be playing any new AAA titles either.

It's fine to have an older card, but nobody in that position should be expecting to play the latest and greatest games at reasonable framerates, if at all.

It is the way of things.

I am personally rather miffed about the fact that if you want any performance from a GPU, you basically need to spend $800+. Even though some cards are nominally available for less, they almost never are, either due to scalping or greed (which are kind of the same thing), or something else like idiotic tariffs. I don't have nearly a grand to burn every year on upgrading my GPU; the last GPU I bought was a 1060, and my current card was a gift. I haven't had the budget for a decent GPU in many, many years.

When I upgrade, I'm likely going Intel Arc, because the value proposition makes sense to me. I can actually spend less than $600 and get a card with some reasonable level of performance.

[–] [email protected] 0 points 5 days ago (2 children)

The current Intel GPUs aren't better than an RTX 2070, so that won't be an upgrade if you're on a higher-tier 2000 series.

I just went from a 2080 Ti up to a 4070 Ti, because it was the only worthwhile upgrade. $660 used. So you don't need to spend $800.

[–] [email protected] 2 points 5 days ago (1 children)

Euh, no. The Intel Battlemage cards are way, way better than an RTX 2070. They even beat the 4060... for $250.
Intel Battlemage GPUs are really good cards if you don't need pure, raw power because everything must be in 4K and on ultra, etc.
Which is good value, since that raw, pure power comes with an electricity bill I would not want to pay.

[–] [email protected] 1 points 5 days ago (1 children)

Yeah, I got the cards wrong. They are around a 2080, which is around the same as a 4060. Still not much of an upgrade from an upper-end 2000 series, which to me means a 2070 and up.

[–] [email protected] 1 points 5 days ago (1 children)

I would actually want to see the actual performance differences, because the Intel cards have much faster memory bandwidth, which is what gives them their performance. Still, $250 for 4060 performance (which is way, way more) is one hell of a good deal in comparison.

[–] [email protected] 1 points 4 days ago (1 children)
[–] [email protected] 1 points 4 days ago* (last edited 4 days ago)

This isn't saying much without context. What game, settings, CPU, ...?
I am also missing Battlemage in that list.

[–] MystikIncarnate 1 points 4 days ago

Yeah, the gifted card I'm using is a 2080 Ti. The friend who gifted it went from a dual 2080 Ti SLI setup to a 4090, IIRC; he kept one for his old system so it's still useful, but gave me the other since SLI is dead and he doesn't need the extra card in a system he's not frequently using.

11GB of memory is an odd choice, but it was a huge uplift from the 3GB I was using before. I had a super-budget GTX 1060 3GB (I think it was made by Palit?) before then.

I still have to play on modest settings for anything modern, but my only real challenge has been feeding it fresh air. My PC case puts the GPU on a riser with front-to-back airflow and very little space front-to-back and top-to-bottom. The card uses a side intake, which is fairly typical for GPUs, so it's basically starved for air if I install it normally. For now, I've got it on a riser sitting on top of the system with the cover off, so my GPU is in open air. Not ideal; I need to work on a better solution... but it works great otherwise.

[–] [email protected] 3 points 6 days ago (1 children)

Somewhat old CPUs are 8+ years old now? Windows 11 is crap, but I don't think the hardware requirements are the reason.

[–] [email protected] 4 points 5 days ago (1 children)

Honestly? Yeah.

They're still perfectly functional and capable of pretty much any modern workload, spec depending... If they can run Win 11 fine (and they should be able to if they can run 10), then the cutoff is arbitrary and will cause more systems to find their way to landfills sooner than they otherwise would have.

[–] [email protected] 1 points 4 days ago

How many 8 year old computers still function halfway decently? Most of those people are probably due for an upgrade whether they know it or not.

Even if we go with this popular narrative, no one is actually going to immediately run to throw out their working Win10 PC when Microsoft cuts off updates. They’ll just continue to use it insecurely. Just like millions of people did and still do with Win7.

This is the issue with using a proprietary operating system in general. Eventually they’ll cut you off arbitrarily because there’s a profit motive to do so. Relying on them to keep your system updated and secured indefinitely is a naive prospect to begin with.
