“Well, first of all, they’re completely wrong,” Huang said in response to a question from Tom’s Hardware editor-in-chief Paul Alcorn about the criticism.
“The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI,” Huang continued.
Just an elongated way to say AI slop.

“Well, first of all, they’re completely wrong,”
Proceeds to explain exactly what everyone hates about it.
I mean he’s gotta say people are wrong for not liking it, it’s his job to sell it
True. I mean we can’t exactly expect the CEO of an AI company to admit his AI tool produces AI slop. Got to think about shareholder value after all.
Anyway outlaw the stock market and shareholders and this won’t happen again. It’s just a way for rich gambling addicts to bet and rope regular people into it in the hopes of getting rich only for them to lose everything.
His statement reeks of “Don’t you all have phones” energy.
It’s worse…
People are saying it’s an AI slop filter, and he won’t shut the fuck up that the slop isn’t a filter, it’s a main ingredient.
When enabled, it fundamentally changes how the game renders.
Developers will either be stuck with the default version we’ve seen, or have to spend a shit ton of money doing the work twice for the people who use this, and those players won’t see the other version.
Like, this shit is going to get worse and worse the more people understand it.
He won’t be able to afford his new jacket then.


holy shit
“The consumers don’t know what they want. I, the CEO, know what the consumers want. And the consumers want to give me money!”
“Do you guys not have two GPUs?”
Jensen Huang has all the GPUs, he can probably play games where each character has its own dedicated GPU and every atom and molecule of the environment is rendered in real time with a hyper-realistic physics engine, with built-in AI that plays for you so that even your idle pastimes are automated giving you more time to WORK AND PRODUCE VALUE FOR THE OWNER-CASTE.
“This game only needs two GPUs to run, what’s the problem?”
None of this is done for the average consumer/gamer. We’re not the consumers he’s addressing.
DLSS is a consumer product.
The more you buy, the more you save!
Remember, he’s never talking to us, he’s talking to major stockholders.
Even some of those can read the room and reach the conclusion that “if people won’t buy it, I won’t make profits”
Don’t you guys have phones?
Same exact vibes.
But we thought everyone was okay with repackaged interpolation! Why not repackaged Instagram filters!?
I think most people are ok with frame gen because it doesn’t touch the actual content. It just moves things around a bit with motion vectors which actually was kind of a thing even before AI although not very good. It didn’t repaint the game into some different art style.
Also there were real frames in there.
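For what it’s worth, the “moves things around a bit with motion vectors” idea is easy to sketch. Below is a toy version in Python/NumPy — the array names and shapes are made up for illustration, and real frame generation also blends between two rendered frames, fills disocclusions, and runs a learned network — but it shows the basic point: every output pixel is fetched from an already-rendered frame, nothing gets repainted.

```python
import numpy as np

def warp_by_motion_vectors(prev_frame, motion_px):
    """Toy frame-gen step: reproject an already-rendered frame along
    per-pixel motion vectors (hypothetical inputs, for illustration only).

    prev_frame : (H, W, 3) float array, the last real rendered frame
    motion_px  : (H, W, 2) float array, per-pixel (dx, dy) motion in pixels
    """
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Follow each motion vector back to its source pixel, clamping at the
    # screen edges (real implementations also have to fill disocclusions).
    src_x = np.clip(np.round(xs - motion_px[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - motion_px[..., 1]).astype(int), 0, h - 1)
    # Every output pixel comes from the real frame; nothing is repainted.
    return prev_frame[src_y, src_x]
```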
This is going to 100% replace the game graphics.
Naw, most people are not ok with fake frames, and, like raytracing, it’s getting less and less likely to be left on. Most people, however, hate fake frames not because of the frames themselves but because of the motion blur effect that seems to be needed to make things look ok on top of the frame gen (no one likes motion blur).
You are right that this is going to replace game graphics to some degree, since it’s another shortcut game studios can use to cut costs (and the industry is kinda struggling at the moment). Why spend effort, time and money making a model look good when you can use a tool to gloss over the work? While it does not look “good” per se, it will look better than it should.
Yep, I’m more suggesting that this was the logical path they would have continued down.
I personally don’t like the generation because it’s functionally noise and can affect the feel / responsiveness of the game. Upscaling seems pretty reasonable - but like many I just can’t abide by the notion that we are counting a generated frame as a frame for benchmarking’s sake.
I’m not against framegen existing. It’s a preference. Same as that feature on TVs. To each their own.
Back to the new DLSS though: yeah, it was inevitable they’d go here… and I’m personally thrilled this was the line everyone more or less took issue with.
Looking forward to the day I boot up a new game and all the characters look like this because I don’t have DLSS 9 enabled

On the bright side, more games already exist than anybody can play in a lifetime. Sucks ass for my favourite hobby and lifelong companion from before my memory begins, but hey.
And, indies are still going strong. AAA died ten years ago, but we still get classics like Outer Wilds
Yeah, it’s actually been kind of a relief to have fewer new games to look forward to every year. I have a backlog of something like 700 unplayed games already in my library. I know I’m not going to play them all as much as they deserve before I die, but being able to make a much bigger dent in them is nice.
Gamers do not want x.
Gamers are wrong about this.
That is definitely how it works. Keep pushing that line.
I don’t like this movie.
You are wrong and must like this movie because I like it!
Fuck. Off.
We’re at the “the customer is always wrong” stage of capitalism. Wheeeee
That’s because everyone remembers that particular saying incorrectly, similarly to “a few bad apples” or “the blood is thicker than water.” Everything and every saying is being twisted to mean the exact opposite of what it should, to protect the pedophile capitalists that own everything.
Maybe so. It’s too nice a day to get upset.
In this case the customers are giving an opinion in a matter of taste.
The blood thingy always meant what it means today:
No the fuck it didn’t. That page is wrong. The full quote is “The blood of the covenant is thicker than the water of the womb.” Meaning that you can choose your family as an adult, and should do so with care and consideration.
Ah, and your sources? Like they have on the Wiki page. Cause the usual source for your version is “this blog, where somebody wants to sound smart”
My grandmother, as well as The Gospel of Thomas, IIRC. It’s somewhere in The Apocrypha, I remember reading it in Catholic school
So is nvidia’s plan to just not have any market share in a few years?
When the AI bubble pops they’re going to need someone to buy their stuff again. Perhaps pissing everybody off isn’t the best long-term strategy. But what do I know? I’m only their target demographic.
They’ll just force remote computing on everyone. They’ll find a way to milk this further.
No. They force us to like their slop to keep their mouths full of shareholder-dicks to get more of that thick and juicy shareholder sap.
Gamers are not the primary market for GPUs anymore. It’s AI, sadly. So they get away with shit like that…
DLSS has no application in AI, though, so that point is not really valid here.
It’s a sideshow, a marketing gag, and they get away with shitty marketing because another industry buys their stuff anyways. I don’t see where that is not a relevant or valid argument.
There’s absolutely no application for DLSS 5 for anything other than gaming
And they’ll keep being wrong. But that’s ok, flat earthers still exist too.
That’s been the line they have pushed for at least three decades now. It didn’t seem to be hurting them until rather recently, when it started affecting the larger software world.
Same with DLSS 1, in a few years people will like it.
Probably… not? I’m not sure about it anymore, but I think everything is going to look the same in a few years with this tech, and we gamers are becoming more and more aware of our power to turn the tides, so our hate could bury the sloppy part of the tech. Epic’s CEO thought the same (“in a few years people will love me”) with the Epic Store, and people mostly still hate it even with 100 free games in their accounts.
“completely wrong” and proceeds to say it’s just a “fusion”.
It IS an AI slop filter, and you can take it and shove it up your ass alongside all of your stupid jackets.
he needs the money so he can buy more JACKETS.
If he shoves them all up his rectum and pulls them out, they’ll all be tied together cause he’s a clown.
I would never trust a 70-year-old tech bro wearing a leather jacket.
Wasn’t it always an AI-driven filter? What is different about 5 that makes it detestable in comparison?
It completely changes the graphics into “AI slop”. If you look at their examples, it makes the game look like slop videos.
What did dlss 4 and 3 do?
@Lemming6969 @dovahking DLSS 4/3 just had fake frame generation and upscaling techniques. DLSS 5 is the first one that introduces this slop filter that replaces the image entirely.
This one uses generative AI to “add details” instead of just adding pixels for more resolution. It can decide what a character “should” look like, pushing a more homogeneous design on anyone using this, since it will be built on training models of who knows what origin.
This is long-winded, but I firmly believe it explains a lot about the industry’s frenzied push into all these odd directions… All of it. Here seems as good a place as any to dump this mess I’ve been stewing on:
I really think it’s important that raytracing, while novel, wasn’t created to improve visuals. It wasn’t created to make a programmer’s life easier. It was created because it was computationally difficult and could be optimized for. It was a fantastic play by Nvidia. They created a feature that functionally did very little, but they could get an entire cycle ahead of the competition in that optimization. Differentiation of products, in a duopoly, is a big deal. AMD dove right into it, knowing full well that this would leave them brutally behind… But this was a fortuitous event, despite the disadvantage.
Why? Simple. GPUs have been struggling against Moore’s law. Framerates were exceeding what even monitors can refresh at. And worse yet, there was another hard limit: our eyes. How do you sell cards that have no perceivable value?
Reality is we may well be reaching a point where additional resolutions and framerates don’t matter. Badly optimized games only buy so much time.
These companies aren’t stupid. Crypto? They loved it. Computationally expensive. Always need faster… until we didn’t. What now? Demand was plummeting for overpriced high-end cards.
Go back and look at when AI and Nvidia got in bed. The earnings call was due to be a bloodbath after all these cards were rotting on shelves, unpurchased, and depreciating daily. It was coming to light that they had been selling cards to miners under the table, and that was going to get ugly fast. I have never, in my life, heard a company talk so much about a product on an earnings call – that wasn’t theirs. Not a word breathed about unsold cards, barely any numbers discussed. ChatGPT was referenced so many times that there was confusion as to whether Nvidia actually owned it. The Q/A at the end was comedy gold. People were so confused.
AI was the perfect save. AI is a power virus. Want to fix the black box? Train a black box to manage that black box. It’s a computational sinkhole. They’ve extracted value from gamers to diminishing returns. Meanwhile they can sell the ultimate snake oil to investors: virtual slave labor. Unpaid workers. In floods private equity. Gamers stopped mattering immediately. All of these advances are software. From a GPU design company. Why? It shuts up the peasants while they continue rebranding the “snake oil” to get whoever is buying. We’ve nearly achieved the panacea. Just a bit longer!
Behold: we have dressed our industry in the finest of the emperor’s newest clothes. You can either start selling them or be the only one who doesn’t.
🫧
From a programming and visuals standpoint: ray tracing was always sought after, and it is peak graphical fidelity. It makes visuals better and (shader) programming easier and more physics-based. It’s not just differentiation; the industry has been dreaming of realtime ray tracing for 30 years, with slow, continuous movement in that direction.
Don’t get me wrong. It’s absolutely a very novel and useful feature. It made shit look great. I’m not down on the tech: I’m just saying the push for it wasn’t for the industry. It was to kill framerates and sell cards.
I doubt it. This thing was in the pipeline for decades. It wasn’t just Nvidia doing the thing because of Moore’s law. Everybody was interested and excited while Moore’s law was alive and well. I literally can’t find a better-quality version, but Intel was pushing tech demos such as this.
The actual push for adoption and the walled garden of NV RTX is… honestly, just business as usual. Nvidia did the exact same with PhysX. Once they have the technological edge, they push hard to pump their ecosystem. They always played evil.
It is good business. Shit for the consumer (unsurprising)… But really, aside from Jensen’s apparent ego, I’m curious why Nvidia has any interest in the gaming sector. I feel like they accomplished the perfect transition.
Trillions invested to make unoptimized games barely run, and make it look worse at the same time, instead of just investing like 1/10th into optimization during dev cycles.
NVIDIA really is like a parasitic cancerous growth on the side of the games industry, its existence increasingly predicated on the destruction of current standards, overtaking their function to ensure survival and its continuous, ever-expanding cancerous growth.
By the amount and intensity with which this man tries to sell his garbage through sheer bluff and BS, you’d think he’s running for President someday.
Don’t give him ideas
Achievement unlocked: Whoever is wealthiest gets to be President.
WHAT HAVE I DONE !?!?!?
(。╯︵╰。)

So this dumb fuck’s own marketing material has said this operates off final pixel colour and motion vectors (for temporal stability presumably) - that says to me that it’s not working with actual geometry info at all. It probably has a step to infer geometry but it’s still just a fancy Instagram filter working with limited data and an obviously ill-suited training set.
The previous versions at least need the software to supply motion vectors. Otherwise it’s just guesswork. I’m assuming there will be some way to supply lighting information as well.
Whatever the final product can do, they certainly didn’t show it off in their examples.
Technically, at least on Vulkan, these things can be inferred or intercepted with just an injected layer, though it’s not trivial. If you store a buffer history for depth, you can fairly accurately compute an approximation of the actual (isolated) mesh surfaces from the POV of the view. But that isn’t the same as the real polygons and meshes that the textures and everything map to… pretty sure you can’t run that pipeline in real time even with tiled temporal SS. It almost definitely works on the output directly, perhaps with some buffers like the motion vectors and depth for the same frame that they’ve needed since DLSS 2 anyway. But it’s pretty suspect to claim full polygons unless it’s running with tight integration from the game itself, and even then the frame budgets are crazy tight as it is, never mind running extra passes on that level.
Probably not meshes, since that is way too expensive. But these guys write the GPU drivers, so they of course have access to the different frame buffers, texture buffers, and light-source data. So just from depth and normal-map data you can get a good representation of the geometry. Just like deferred rendering lights the scene with the data in the G-buffer, which is 2D, not geometry.
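To make the depth-to-geometry point concrete, here’s a minimal sketch in Python/NumPy. It assumes a Vulkan/D3D-style [0, 1] depth range and a standard perspective projection (both assumptions for illustration): one depth-buffer sample plus the inverse projection matrix recovers a per-pixel view-space position, and neighbouring samples give an approximate normal — which is roughly the “geometry” a screen-space pass can see, with no actual meshes involved.

```python
import numpy as np

def view_pos_from_depth(u, v, depth, inv_proj):
    """Reconstruct a view-space position from one depth-buffer sample.

    u, v     : screen UVs in [0, 1]
    depth    : depth-buffer value, assumed in the [0, 1] clip range
    inv_proj : 4x4 inverse of the camera projection matrix
               (assumed standard perspective projection)
    """
    # Screen UV -> clip space, then undo the projection.
    clip = np.array([u * 2.0 - 1.0, v * 2.0 - 1.0, depth, 1.0])
    view = inv_proj @ clip
    return view[:3] / view[3]  # perspective divide -> view-space xyz

def approx_normal(p_center, p_right, p_down):
    """Approximate a surface normal from three neighbouring positions."""
    n = np.cross(p_right - p_center, p_down - p_center)
    return n / np.linalg.norm(n)  # sign depends on handedness convention
```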
Oh, thanks for pointing that out.
Ignoring that the current version looks sloppy: as a gamedev I would accept an extra AI beautification post-processing step as an additional feature, but I would never accept a corporation getting its hands into my beloved geometry.
That’s what he’s saying. That it doesn’t change the geometry or textures (still completely controlled by the devs) and that the parts that it does change are also tunable by the devs.
He’s responding to the backlash about how it changes models/textures (which it doesn’t) by saying those are still fully in the hands of the devs, and the parts people are seeing in the demos can be fine-tuned by the dev teams to match their vision for what they want it to do or not do (like change lighting on material surfaces and hair but not character faces, as an example).
It’s a post-processing screen-space effect. At that point, there’s zero control the game can have over the geometry. If the AI model wants to change it, it can. It fundamentally can’t operate only on lighting like the marketing claims; it can only make a hallucinating, best-effort statistical guess at what the geometry in the final image should be.