I'm completely speechless. This looks so terrible I thought it was a joke, but apparently Nvidia released these demos to impress people. DLSS 5 runs the entire game through an AI filter, making every character look like they've been run through an ultra-realistic beauty filter.

The photo above is used as the promo image for the official blog post by the way. It completely ignores artistic intent and makes Grace's face look "sexier" because apparently that's what realism looks like now.

I wouldn't be so baffled if this were some experimental setting they were testing, but they're advertising this as next-gen DLSS. As in, this is their image of what the future of gaming should be. A massive F U to every artist in the industry. Well done, Nvidia.

Going to go with 0% consistency and characters flipping between multiple faces

Even if it looked good, it has zero context of the original artists’ intent. This is like having AI summarize pages of a book as you read. You’re now locked a layer away from the original artist’s work and it’s a layer controlled by corpos. No thank you.

At least 2 layers.

LLMs don't think. They copy-paste what's been found repeatedly in the data they were trained on: statistical probability of words going with other words. Hell, they don't even know what words are, much less what they mean. So it's at least 2+ layers removed from the truth, one layer being the one you pointed out, and another being an amalgamation (mishmash) of the data they were trained on.

I get that lemmy hates AI, and I’m not going to try to talk you out of that, but please stop repeating this factually incorrect myth. LLMs are not stochastic parrots, despite what you may have heard. And they do think… to a degree. Note that they’re by no means everything CEOs and tech bros want them to be, but if you’re going to criticize them, please do it accurately.

They do know the meaning of words, but only in relation to other words. It's how they work. It's not a statistical thing like word frequency patterns; they're not doing the same thing autocomplete does. Instead, they're doing math on words in a vector space with hundreds or thousands of dimensions, where placement on this grid indicates the meaning of the word: one vector direction indicates plurals, another indicates rudeness or politeness, another indicates frog-like, another might indicate related to 1993 Intel Pentium CPUs, etc., etc. The model developed this space via training on terabytes of text, but it's not storing a copy of that text, nor looking it up, nor copying anything from it. It's defining words based on how they are used, then doing math on those definitions to figure out the most appropriate thing to say next: not the most likely thing according to raw word statistics, but the most meaningful thing based on the definitions of the words it understands.
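
To make that concrete, here's a toy sketch in Python. The vectors and the "dimensions" below are invented purely for illustration; real models learn hundreds or thousands of dimensions from data:

```python
import numpy as np

# Invented toy embeddings: 3 dimensions standing in for the hundreds
# or thousands a real model learns. Rough intuition for this toy:
# dim 0 ~ "royalty", dim 1 ~ "gender", dim 2 ~ unused here.
emb = {
    "king":  np.array([0.9, -0.8, 0.0]),
    "queen": np.array([0.9,  0.8, 0.0]),
    "man":   np.array([0.1, -0.8, 0.0]),
    "woman": np.array([0.1,  0.8, 0.0]),
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point the same way.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The classic analogy: king - man + woman lands near queen, because
# the "gender" direction is shared across word pairs in the space.
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))
print(best)  # queen
```

That "directions encode meaning" property is the point: the analogy works because the space was shaped by how the words were used, not because any sentence containing them was stored.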

They really do not copy and paste. They do use definitions. They do think about the words in a very real way.

They don't apply logical consistency or fact-checking on their own. There are hacks to make them talk to themselves, so that following the meaningful definitions of words is more likely to lead to fact-checking and logical consistency, but it's by no means foolproof.
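
For what it's worth, those "talk to themselves" hacks look roughly like this: a minimal sketch of a self-critique loop, where generate() is a hypothetical placeholder for whatever LLM API you actually call, not a real library function:

```python
# Minimal sketch of a self-critique loop. generate() is a hypothetical
# stand-in for a real LLM API call; plug in an actual client to run it.
def generate(prompt: str) -> str:
    raise NotImplementedError("swap in a real LLM client here")

def answer_with_self_check(question: str, rounds: int = 2) -> str:
    draft = generate(f"Answer the question:\n{question}")
    for _ in range(rounds):
        # Ask the model to criticize its own draft.
        critique = generate(
            "List any factual errors or logical inconsistencies in this "
            f"answer, or reply OK if there are none:\n{draft}"
        )
        if critique.strip().upper() == "OK":
            break
        # Ask it to rewrite the draft with the critique in hand.
        draft = generate(
            "Rewrite the answer, fixing the listed problems.\n"
            f"Question: {question}\nAnswer: {draft}\nProblems: {critique}"
        )
    return draft  # better odds of consistency, still not foolproof
```

It raises the odds of the model catching its own contradictions, but as said above, it's nowhere near foolproof.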

but if you’re going to criticize them, please do it accurately

You should take your own advice.

They do know the meaning of words, but only in relation to other words.

That's only one part of meaning, and it's the only part LLMs have. It's fascinating what this one part can do, but we don't operate this way. LLMs have no world model, no logic model to associate a word with. It doesn't think; it's still just an input-output machine.

It’s not a statistical thing like word frequency pattern.

Instead, they're doing math on words in a vector space with hundreds or thousands of dimensions, where placement on this grid indicates the meaning of the word

I'm sorry, how is this not statistics?
The training is by its very nature statistical. We give millions of text inputs with expected outputs and tune the model until they match. How is this anything but statistics?

The model developed this space via training on terabytes of text, but it's not storing a copy of that text, nor looking it up, nor copying anything from it

Yes and no? Yes, it's not storing a copy of the training data in text form. No, it most definitely can "memorize" text, and if that's not a copy I don't know what is.
I could memorize foreign-script text without understanding it and then recreate it. Did I make a copy? No. Can I make a copy? Yes.

Having a number that relates words to other words is not understanding words. Stop believing the hype for fuck's sake. What they 'know' is NOT knowledge. They do not know anything. Period.

There is a reason they start to fail when trained on other slop: because they don't know what any of it means!

Their 'knowledge' comes from the basic weights of what word is most likely to follow. Period. The importance of that weight comes from humans. It is not intrinsic knowledge even after training. It is pure association, and not association like you or I do word association.

Seen a bit of a rise of that sort of person since moltbook or whatever it's called emerged, trying to sucker people into believing the random bullshit generator is sentient or cognizant of its assets in any way.

What's worse, homie said "nu-uh, it's not statistical probability" and then proceeded to describe a statistical probability mesh.

Might help a bit if we all stop slapping the AI term on everything and start calling things what they are, such as scripting, large language models, cron jobs, etc.

Trying to argue with those people just makes me sad and tired :(

Saying that an LLM knows words is not a value judgement. It doesn't mean "LLMs are sentient" or "LLMs are smart like humans". It doesn't imply they have real-world experiences. It's just a description of what they do. That word has been used to describe much more basic kinds of information and functionality in computers already. What makes it so offensive now?

There is a reason they start to fail when trained on other slop: because they don't know what any of it means!

If you taught children slop at school they would not get far either. Although training LLMs on LLM output is more akin to getting rid of books and relying on what teachers remember to teach the students.

The importance of that weight comes from humans. It is not intrinsic knowledge even after training.

It comes from the LLM and not from the outside; that's what intrinsic means. How is it not intrinsic knowledge? I think you mean to say that without humans to read it, an LLM's output holds no inherent value. That is true, and nobody is claiming that it does. LLMs don't derive pleasure from talking like humans do, so the only value LLM output has comes from the person reading it.

Their 'knowledge' comes from the basic weights of what word is most likely to follow. It is pure association, and not association like you or I do word association.

LLM weights are anything but basic, but regardless, this is also true, and lunnrais said as much:

They do know the meaning of words, but only in relation to other words.

The difference between human knowledge and LLM knowledge is that an LLM's entire universe is words, while humans understand words in relation to real-world experiences. Again, nobody is claiming those two understandings are equivalent, just that they exist.

Also, on the point of statistics: I think the statistics people have in mind and the statistics used in LLMs are vastly different. It is true that an LLM finds which word is most likely to come next, but how it does that is not a classical statistical method. An LLM itself is a statistical model. When one says an LLM 'knows' or 'understands', they mean it has captured abstract information in an incomprehensibly complex neural network, not dissimilar to how we do it. That it can only use that information for word prediction doesn't change the fact that it has captured information beyond what is present in any single word prediction.
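
To be concrete about what "statistical model" means here: the only place the classical-looking statistics shows up is the very last step, where the network's scores get turned into a probability distribution over the next token. A toy sketch (the vocabulary and scores are invented; in a real model the logits come out of the whole network):

```python
import numpy as np

# Toy final step of an LLM: the network emits one score (logit) per
# vocabulary word, and softmax turns those scores into a probability
# distribution over the next word. The numbers here are invented.
vocab  = ["mat", "moon", "fridge", "banana"]
logits = np.array([4.0, 1.5, 0.5, -1.0])  # scores for "the cat sat on the ..."

probs = np.exp(logits - logits.max())  # subtract max for numerical stability
probs /= probs.sum()

for word, p in zip(vocab, probs):
    print(f"{word:>7}: {p:.3f}")
# "mat" dominates, but everything about *why* it dominates lives in the
# network upstream of this softmax; the statistics are just the last step.
```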

It seems to me that 'statistics' is often brought up to devalue LLMs by associating them with basic statistics. This association is wrong, as I've explained in the previous paragraph. And being statistical models themselves doesn't mean their ability to express knowledge (although limited to the textual domain) has to be inferior to a human's.

I understand the need to warn people of the limitations of LLMs. Their limitation is that they are text models with no concept of real life, not that they are statistical models or copy-paste machines.

Even simply using the word "know" is anthropomorphising them and is wholly incorrect.

You are suffering from the ELIZA effect and it is just... sad.

Computers have been getting anthropomorphised for a long time. Why is it only when talking about LLMs that you start clutching your pearls about it? Why do you think that verb has to be exclusive to humans? To me that seems like a strange and inconsequential thing to dig your heels in over.

And I struggle to see how you could genuinely believe I was suffering from 'ELIZA effect' after reading my comment. You need more nuance and less absolutism in your world view if you genuinely do.

Why is it only when talking about LLMs that you start clutching your pearls about it?

I am of the opinion now, and this is entirely AI's fault, that for the collective mental health of our society, a grocery store self-checkout should not even be allowed to "thank" you for your purchase.

Your eagerness to fool yourself is beyond sad.

They do build a representation of words and sequences of words and use that representation to predict what should come next.

A simplistic representation is the classic embedding diagram that shows how, in certain vector spaces, you can relate man/woman/king/queen/royal together (the picture where "king − man + woman" lands near "queen").

The thing is, these are static representations, bound only to the information provided to the model. Meaning there is nothing enforcing real-world representations, and only statistically consistent representations will be learned.

They don't "learn" anything, though. They're 'trained' (still a bad term but at least the industry uses it) to spit the correct answer out.

People, especially CEOs and advertising firms, need to stop anthropomorphizing them. They do not learn. They do not "know". They have statistically derived associations and that's it. That's all.

Holy hell, the ELIZA effect is in full swing, and it's beyond sad. They don't build the association themselves. They don't know what the representations mean. They absolutely do not know why two words are strongly associated. It's just a bunch of math that computes a path through that precomputed vector space. That's it.

I didn't use the word learn, although that's really just a matter of semantics. I said they build a representation of words/sequences in a vector space to understand the interplay of words.

You can downvote me all you want, but that's literally just the math that's happening behind the scenes. Whether any of that approaches something called "learning", probably not, but I'm not a neuroscientist.

You're right that there is an internal representation for tokens and token sequences, but they also do copy. There is a whole area of research on this, and here is an example article on extracting image datasets.

AI summary of the last 240 pages:

Leopold Bloom looks at a dog on a beach and thinks about sex and death

Wow that's hyper realistic!

Only artist intent matters. Personal preference be damned!

Yes, the artist's intent is the part that matters in art.

That's a fine opinion you have there. It reminds me of "Only the chef's taste matters during a meal".

Why am I ordering from a chef if I don't like their cooking?

Implying that it’s the only thing that matters is dumb. If you want uncanny valley faces, go for it. I’m not interested in dumb AI permeating yet another corner of my life.

Looks horrid.

This will be the new motion plus shit that ruined all TV. Now, the kids think it IS good.

I can't express how much I hate motion plus and the fact that YOU CAN'T TURN IT OFF on a lot of TVs.

Much like severely compressed and limited music. People today hear a dynamic song and don't like it because it's too peaky or hurts their ears. They want a sausage waveform.

I'm not old, and I'm already yelling at clouds, ha. Just can't stand these corpos brainwashing people into thinking their shit is good. It's not.

"A bird who has lived its life in a cage learns to fear the sky"

I hate that this is our reality. People growing up without ever knowing what true freedom feels like. Even we never truly knew it; we just had a bigger cage.

Damn. That is exactly how I've been feeling lately. I think a lot about how sad a childhood would be right now unless you have a really good parent teaching you that everything today sucks and is corporate trash that needs to be destroyed, like capitalism.

I always wondered why Samsung phones and TVs had that "vivid" high-contrast color tuning turned on by default that just blows out the contrast and saturation. I thought surely no one actually prefers this kind of look. Reading some of the comments on here and on YouTube, now I understand.

they could never seem to get past the cartoonish look.

Hard disagree on motion interpolation. Bad interpolation looks awful, of course, but when it's good, it's like night and day to my eyes, and every TV I've ever used can disable it.

Sometimes you can't disable "jitter reduction" or whatever that's branded as, but that's not the same thing.

First off: downvoted for a lukewarm opinion? Come on, Lemmy, be better.

I’ve thought about this subject a lot and my thoughts are that it boils down to whether someone has been raised on movies (specifically 24fps) or video games (specifically 60fps).

For me, movies look like a jittery mess. I have two TVs, and the motion smoothing on one is very good, but I've never been able to get it just right on the other. They're the same brand of TV, just a decade apart.

Yeah, the ASICs in newer TVs are crazy powerful, and crazy good at it. They're nothing like what you'd find in a phone or even a PC, and even a one-generation jump for our Sony TVs was an improvement.

That's what I was trying to emphasize. I think interpolation on old TVs, and maybe early versions of SVP, left a bad taste in people's mouths. Kind of like fake HDR.


...But I also think there's a lot more sentiment against any kind of "processing" since the rise of AI slop.

As an example I often cite, there was this old TV show I helped touch up for a "fan" release, a long time ago. One small component in a very long pipeline was a GAN upscaler... It worked fine. The original TV release was broken as hell, and people loved the improvement.

Fast forward many years later, and I mention this was used in the "remaster" still floating around, and the same subreddit goes ballistic. They literally did not believe me, or cooed about the "flaws" of the original, or called it slop and against the rules and wanted me banned.

And I suspect frame interpolation and resolution scaling in other contexts get tossed in that same bucket. Not that I blame anyone. AI does suck.

Funny enough, it's actually the older of my two TVs that does it well. I think it marks a noticeable drop in product quality for that particular manufacturer. So still the same idea, that worse hardware gives bad results, but it's not limited to the age of the TV, just its component quality.

Oh yeah, definitely. Lines enshittify.

I just mean, generally, if you look at a 2014 TV and a 2025 one, the experience of that old one is likely not representative of the new.

It’s great for sports. And some sitcoms. And maybe news (but why are you even watching cable news these days). That’s it.

Persistence of vision serves a real purpose in cinematography. "Optimizing" it away is very literally a corruption of the art and a betrayal of the director's and cameraman's skills and intent.

Yeah sorry I'm not into high def TV myself. It looks awful unless all you watch is sports and brand new marvel movies (hard no).

You may think you're disabling it, until you compare it with another TV that actually does zero processing. Night and day.

Same effect as me thinking "huh, I guess the lag on my flat screen isn't too bad for gaming", then plugging into my CRT and holy snap, the clarity and precision response. (Clarifying: this is with old and new consoles. Obviously anything with an analog output into a new TV is horrible without an upscaler, but even with a RetroTINK 2X upscaler it still sucks. You need to spend over $700 to make it look decent enough.)

People don't know what They took from us.

I have. I A/B test it all the time. I pause and pixel peep.

And I don't watch any sports, nor any marvel movies.

"huh, I guess the lag on my flat screen isn’t too bad for gaming"

I've had CRTs. And I have one of those "zero latency" overclocked LCD monitors with no internal scaler. As much as I like them, they feel sluggish compared to something newer.

Yeah sorry I’m not into high def TV myself.

In that case, I suspect you haven't tried it on more modern displays, or when it's baked into transcoded footage with one of the better filters.

Yes, it looks awful and artifacty processed by older LCDs. But it looks really good these days.

Yeah, I'm not one to pay a lot for TVs. I'd like an oled, but with the prices, I really have no need for it for gaming and the TV I have is fine for normal watching.

Also, isn't it crazy how it's taken this long for a display to be as good as a CRT (blacks- and response-time-wise)? Kind of the same thing with audio: how bad digital sucked originally, and how we are just now fixing that with great DACs. Humans got it right the first time with tube amps and CRTs! Not to mention they're repairable.

I’d like an oled, but with the prices, I really have no need for it for gaming and the TV I have is fine for normal watching.

That is entirely fair. Electronics are all crazy expensive, really.

Yeah, LCDs went from bad to “mixed” and stayed that way for a long time. Granted, some things like absolute sharpness are not great on a CRT, but still.

Two 5090s for this shit lol. The first 5090 calculates all the shadows and then the second 5090 takes it back out again lmao. What a fucking joke.

That's impressively bad

Lmaooo I will stick to turning down all the settings on my shitbox computer because apparently that's the same experience hahah.

Now every frame can consume gigajoules of energy.

If the next Nvidia card drops, it'll need a 3-phase power supply.

This is exactly the opposite of what I want a graphics card doing in the background. Just leave the games the way the developers made them, for fuck's sake. If they suck, they suck... if they don't, they don't. But this just makes them all suck.

it's an unwoke/untumblrisation filter, right here on your graphics card.

https://knowyourmeme.com/memes/original-vs-un-tumblrized

This shit would excite certain people, if 90% of the population could afford to buy graphics cards anymore.

You also apparently need a separate GPU to run this. So not only can 90% of people not afford one, but the article states they used 2 to accomplish this.

Sloppyfilter 9000X-tream

Only on Nvidia

I wouldn't have called this generative AI, but Jensen did. Great for stills, uncanny valley in motion. People are claiming it generates completely new images, but in this instance it keeps the same geometry and textures and just processes motion and color vectors to create a hyper-rendered version of the characters.

Ignoring the obvious problems of the hardware it requires and of supporting the AI-bubble-feeding monopoly that is NVIDIA, it is interesting technology that doesn't actually seem to act as a medium for IP theft, my beef with what I call AI slop. It might only be practically good for photo mode in games, but it will be interesting to see how it works out. It could kickstart interest in making Let's Plays in a more machinima style.

It invents light sources for objects that don't exist. This is generative.

My problems with AI, by the way, are faaar beyond IP theft.

Almost makes me wish I hadn’t already switched to Team Red, so that I could switch to Team Red due to how comically bullshit this is (on top of their recent vibe coded driver releases)

Well there is an off toggle.

You still pay for it 🤷

This seems like it’s intended as a texture and lighting improver, not an “AI Slopificator”

Among the other screenshots, a lot of them seem to have a marked improvement.

Aspects of that Grace image comparison definitely look bad, but this is a work in progress that we’re getting a glimpse into. I really hope that bimbofication doesn’t make it into the final product.

This reminds me a lot of the Smile EQ comparison that speaker sellers would make to impress average 45-year-old men at Best Buy.

In the Grace picture alone, it removes the distant fog, it destroys the mood, it overrides the art style, it over-brightens the scene, it adds light sources that don't exist, it removes the warm light spilling out of the shop window, it makes the color palette colder, it hyper-contrasts everything—there is no world in which I would call this an improvement.

Oh look, DF prostituting their audience to make a buck from NVIDIA. Britain, how far you have fallen to behave just like yanks...

Bro they taught them that shit.

To be fair though, this is the kind of AI enhancement that could be an actual enhancement.

Most AI solutions are a race-to-the-bottom strategy. They claim to reduce the quality of the product only a little while massively reducing cost (where in reality it's a massive reduction in quality and it still costs a decent bit).

This is what I imagined the AI revolution would look like 10 years ago: having AI enhance the product on top of the same level of quality as before, not really trying to get rid of the artists and developers.

Having said all that, the faces looked kind of creepy…

I agree, but also I see the other side. This is actually a neat usage of AI, it's not slop in my book, it's akin to upscaling.

That being said, those limitations are what drove the original artwork. The artists used those limitations to make the styles and characters we now love.

Master Chief's classic armor was designed as much by the polygon limitations as by what they imagined could be done.

Exactly, this is forcing a lowest-common-denominator Instagram art "style" onto existing art. Should we do this for all the paintings in the Louvre too? It would be more realistic that way, right?

Nice (/s)

It seems like it's just making her more realistic rather than "beautifying" her. The original picture doesn't have any flaws or blemishes or anything to indicate that the modelers wanted her to be anything but attractive. If anything, the other one gives more detail to the skin and makes it look real rather than ultra-smooth clay.

She is a nervous, FBI desk lady who's never seen a day of action and stutters while giving case reports. Compared to Leon, she is an ordinary person. The filter makes her look like any star on The CW.

AI hate is so strong!

I think it looks extremely promising. It may be a bit uncanny with the faces, probably fixable, but environments are dope!

This is fantastic, this is probably the way to get completely realistic graphics. This is the way, this is finally actual progress in realism.

Strong disagree. I do not want these GPUs changing the game to what they "think" the artist intended.

LLMs have a place. Art is not one of them.

Not art, realism.

I don't know how to tell you this, man, but videogames aren't realism. They're art.

You have no idea what you are talking about.

The model only works on the rendered image and motion vectors. Other than the image, it has no information about the lighting in the scene, the weather, or anything else. So in its current form it really doesn't have much to do with realism.

That's like saying that no ai image can be realistic, because it was ai generated.

Yeah, I agree. Looks like this will begin to finally solve the uncanny valley problem. The crying is so loud, though. I almost feel sorry for them. This is unstoppable. I wish they could see that, but they won't. It's crazy to me that these anti-AI cultists think they're going to shame AI into going away. It's just not going to happen, and they're going to just get more screechy and moralistic and blind. I hope they get the help they need.

Yeah, but you know what will happen, all of a sudden the games will become amazing and everybody will want to play. All of them 😁

Meh, progress on graphical realism basically stalled out in the last 10 years. It was a matter of time before AI became the tech that pushes it to the next level. I personally don’t think it looks terrible. It looks more realistic if anything. If you don’t like this particular case or the beautification filter, that is one thing, but I don’t see it as refuting the use of the technology as a whole.

Edit: yep, I clicked the post and watched the promo clip. I gotta say, it looks great. I think if you're saying otherwise, you just aren't being honest. Lemmy has an ax to grind about a lot of weird shit. It's odd as hell. I couldn't care less about the downvotes. Doesn't change my view.

"The party told you to reject the evidence of your eyes and ears. It was their final, most essential command." — George Orwell

The Lemmy mob is no different.

The self-aggrandizement of the edit is priceless.

The only thing aibros love more than ai is their own farts.

Nothing says 'I don't care if you downvote me' like editing in a George Orwell quote to cope. lol
