
DLSS is only marginally better than FSR 2.0.

Sure, you can see the difference if you're carefully looking for it in comparison videos. But when you're actually playing it's usually not noticeable to any meaningful degree. You're gonna get occasional artefacts with both.

As for ray tracing - to be honest, playing at 4K with an RTX 3070, there are very few games where I can turn it on without the game running unacceptably slowly, even with DLSS.

Ray traced shadows and AO are nice but hardly a deal breaker.

I think it's more the desire to have the latest and greatest tech that makes Nvidia cards desirable, rather than any material difference that you'll actually notice while gaming.

Some good comparison shots of DLSS 2 and FSR 2 in this video.

https://youtu.be/1WM_w7TBbj0




I mean, you talk about "comparison videos", when in reality the issue is that a significant number of games support DLSS and pretty much no one supports FSR 2.0.

You can do any comparison you want, but it won't help you when your software won't make use of the library your GPU supports.

(Also, we're at DLSS 3.0 now with frame generation, which moves the bar higher.)


FSR 2 is supported out of the box by Gamescope on Linux, for all games.
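For anyone who wants to try it: you wrap the game in gamescope and pick the render vs. output resolution, e.g. in Steam launch options (the exact flag has changed between gamescope releases - newer builds use -F fsr / --filter fsr, older ones -U - so check gamescope --help):

    gamescope -w 1280 -h 720 -W 2560 -H 1440 -F fsr -- %command%

That renders the game at 720p and upscales it to 1440p at the compositor level.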


That's not FSR2, that's FSR1.

FSR2 needs game-specific integration (IIRC because it needs access to motion vectors) and cannot be applied globally.
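To make it concrete, the per-frame dispatch in the FidelityFX FSR2 C API looks roughly like this (a sketch from memory of ffx_fsr2.h - field names approximate, and the resource handles are placeholders the game itself has to supply):

    // Every frame the game hands FSR2 its color, depth and, crucially,
    // per-pixel motion vectors - data a generic wrapper never sees.
    FfxFsr2DispatchDescription desc = {};
    desc.color         = lowResColor;    // frame at render resolution
    desc.depth         = lowResDepth;    // per-pixel depth
    desc.motionVectors = gbufferMotion;  // why global injection can't work
    desc.output        = upscaledColor;  // target-resolution result
    desc.jitterOffset.x = jitterX;       // camera jitter applied this frame
    desc.jitterOffset.y = jitterY;
    desc.renderSize.width  = renderWidth;
    desc.renderSize.height = renderHeight;
    desc.frameTimeDelta    = deltaMs;
    ffxFsr2ContextDispatch(&fsr2Context, &desc);

FSR1, by contrast, is a pure spatial filter over the finished color buffer, which is exactly why Gamescope can bolt it onto anything.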


Nope, that's FSR 1.0 which looks terrible by any standard.


Thanks for the correction


Have you tried DLSS 3 with frame generation yet? Ray tracing performance hits are mitigated substantially, 4K included. What is more shocking to me is how low they got the latency on it; I have yet to notice perceptible lag when turned on (contrasting this of course with TVs that attempt to increase frames to achieve the 'soap opera effect', where lag becomes substantial). I just hope more games support it one day.

FSR may one day get much better, but in my personal preferences and tastes, I greatly prefer DLSS for upscaling. It tends to be sharper, better at fine details at long distances, and tends to do better with certain (literal) edge cases.


I haven't; like most people, I guess it'll be at least half a decade until I upgrade to a 40-series card.

> What is more shocking to me is how low they got the latency on it

Most games are perfectly playable at a stable 30fps (at least 33.3ms of latency). Frame generation is hopefully giving you 90-120fps on screen, but only every other frame is rendered from real input, so input latency is roughly that of a stable 45-60fps, i.e. around 16.6-22.2ms. That will usually only be noticeable in drawing apps or twitchy FPS games.

Maybe they're doing some other tricks like decoupling input latency from frame latency; I haven't looked into it. It's common to do this with physics, in the opposite direction (e.g. run physics at a fixed 33.3ms interval while generating frames as fast as possible, aiming for 60fps), as in the loop sketched below.
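The classic version of that pattern is an accumulator-based fixed-timestep loop. A minimal C++ sketch (update_physics/render_frame are placeholder stand-ins, not any real engine's API):

    #include <chrono>

    // Placeholder hooks standing in for a real engine's functions.
    void update_physics(double dt) { /* advance the simulation by dt seconds */ }
    void render_frame(double alpha) { /* draw, blending the last two states by alpha */ }

    int main() {
        using clock = std::chrono::steady_clock;
        const double dt = 1.0 / 30.0;   // fixed 33.3ms physics step
        double accumulator = 0.0;
        auto previous = clock::now();

        for (;;) {
            auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - previous).count();
            previous = now;

            while (accumulator >= dt) {  // simulation catches up in fixed steps
                update_physics(dt);
                accumulator -= dt;
            }
            // Render as fast as the GPU allows; alpha in [0,1) interpolates
            // between the last two physics states so motion stays smooth.
            render_frame(accumulator / dt);
        }
    }

Frame generation inverts this: the displayed frame rate exceeds the simulated/input rate instead.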


Seems like it's still better to buy Nvidia, then. Nvidia cards support both DLSS and FSR, so you can choose whichever tech works best with the particular game.

Meanwhile AMD only supports FSR, and if the game looks better with DLSS, there's nothing you can do.

AMD cards also aren't that much cheaper, so it doesn't make much sense to buy worse tech that you'll be using every day for years just to save a few euros.


It's not as if Nvidia is always a better choice. Things turned around a bit when a new crop of games (The Last of Us, Hogwarts Legacy, etc.) came out, requiring more than 8GB of VRAM. They do work with 8GB, but with a lot more stuttering, as textures have to be constantly transferred from main RAM. For these games the amount of VRAM made a huge difference. At that point in time, AMD had more mid-level cards with 12GB (6700) and 16GB (6800), and for considerably less money than the equivalent Nvidia cards, at least in Europe. Nvidia cheaped out and stuck to 8GB even for the 3070, which was more expensive than the 16GB 6800 around here. This became a big issue and pushed Nvidia to put more VRAM into the next generation.


If you actually try to find proof of the 8GB fiasco, there's mostly one source, which put together an artificial scenario by pushing a bottom-end card to 4K on newly launched console ports that then got optimized.

NVIDIA said as much, released a 16GB version, and lo and behold it's absolutely pointless, even performing worse than the 8GB at times: https://www.tomshardware.com/reviews/nvidia-geforce-rtx-4060...

Most graphics cards in use are still under 8GB of VRAM, and raw memory capacity means nothing for performance in isolation.


I believe it's the 128-bit interface hindering performance more than the amount of VRAM.
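Some quick arithmetic supports that: bandwidth = (bus width / 8) × data rate, so the 4060 Ti's 128-bit bus at 18 Gbps gives 16 × 18 = 288 GB/s, while the 3060 Ti's 256-bit bus at 14 Gbps gave 32 × 14 = 448 GB/s. Nvidia leans on a much larger L2 cache to paper over the difference, which works in some workloads and not in others.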


The framing here is a little unfair: it's DLSS that doesn't support AMD hardware, not the other way around. This is clearly a deliberate decision by Nvidia. Meanwhile AMD's FSR supports both AMD and Nvidia hardware.


DLSS directly leverages the tensor cores of particular Nvidia cards.

Whether or not this is just a marketing gimmick is up to you and your own conclusions on the matter, but AMD doesn't seem to have equivalent hardware tech for this yet. It seems to be taking steps in the right direction with its WMMA instructions for accelerating matrix multiplication in a similar fashion, but head-to-head performance so far seems lacking, and I haven't seen anything definitive out of AMD suggesting this is aimed at gaming.
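For a sense of what the hardware actually exposes: tensor cores execute warp-level mixed-precision matrix multiply-accumulates. A minimal CUDA sketch of one 16x16x16 tile (this only illustrates the primitive - DLSS's actual network and kernels are proprietary):

    #include <mma.h>
    #include <cuda_fp16.h>
    using namespace nvcuda;

    // One warp computes acc = A*B + acc on a single 16x16x16 tile
    // using tensor cores (fp16 inputs, fp32 accumulation).
    __global__ void tile_mma(const half* a, const half* b, float* c) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

        wmma::fill_fragment(acc, 0.0f);
        wmma::load_matrix_sync(a_frag, a, 16);   // 16 = leading dimension
        wmma::load_matrix_sync(b_frag, b, 16);
        wmma::mma_sync(acc, a_frag, b_frag, acc);
        wmma::store_matrix_sync(c, acc, 16, wmma::mem_row_major);
    }

AMD's RDNA 3 WMMA instructions expose a similar tile primitive, so the gap is less the raw operation than the software trained on top of it.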


The framing is fair unless you expect NVIDIA to ditch their ML-based solution for a class of inferior implementations that, in turn, can run just about anywhere.

DLSS isn't "NVIDIA-flavored FSR"; it's a more advanced pipeline that requires significantly more architectural alignment. FSR 3 is going to be the closest thing to an equivalent, and it won't run everywhere FSR 2 does.


Hardware Unboxed's viewership draw is placating "Team Red", or the people who treat hardware like a sports competition, so when they say:

- the overwhelming conclusion is DLSS is the superior upscaling technology with near universally better results

- it was a brutal result for AMD

- DLSS gives NVIDIA a clear selling point

I'm not sure if it backs your suggestion...


Weird you didn't post the comparison he made a couple weeks later where he says that sometimes DLSS beats native rendering: https://www.youtube.com/watch?v=O5B_dqi_Syc



