SiliconFly
Golden Member
- Mar 10, 2023
OK. I think I see why you want to make a distinction. Technically they can be predicting all the frames but add latency because fake frames need to be paced? So it's not interpolation but comes with a key downside of it.

Maybe latency is still there.
Damn, even game characters are going to be applying Instagram filters in Nvidia's future...
OK. I think I see why you want to make a distinction. Technically they can be predicting all the frames but add latency because fake frames need to be paced? So it's not interpolation but comes with a key downside of it.
Even if the generated frames are instantly produced, they still need to be paced. If the system generates one fully rendered frame every 20ms, then 3 extra frames would have to be paced every 5ms to have proper cadence. However, if the cost of generated frames is non-zero, let's say 3ms per frame, then that cost adds to the latency of the system, as fully rendered frames are now generated every ~30ms (20+3+3+3).

I don't understand. Just to clarify, I'm not sure exactly how long a 5090, for example, takes to render a real frame vs predict a fake one. But I can tell it'll be almost an order of magnitude different. Well, almost. At least like 1:5, or maybe even 1:10 or something similar (approx). Fake frames are faster and cheaper to produce. And when I say transformers, I'm assuming Jensen also meant the same (standard mini transformers).
Just for example, if a real frame is rendered in ~20ms, the 3 fake frames can be generated right after that, say within the ~10ms that follows. In essence, in 30ms the GPU has generated 4 frames. I dunno how they pace all the frames, but I don't think DLSS4 adds extra latency over the previous gen. In fact, it should be less (not more).
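The pacing arithmetic from the two posts above can be sketched in a few lines. Note that the 20ms render time, 3ms per-frame generation cost, and 3 generated frames are the hypothetical numbers from this discussion, not measured figures for any real GPU or for DLSS itself:

```python
# Illustrative sketch of the frame-pacing arithmetic discussed above.
# All timing numbers are the hypothetical values from the posts,
# not measurements of any real hardware.
RENDER_MS = 20.0   # time to fully render one "real" frame
GEN_MS = 3.0       # serial cost per generated ("fake") frame
GENERATED = 3      # extra frames inserted per real frame (4x mode)

# Serial view (first post): generation delays the next real frame,
# so the full cycle stretches to 20 + 3 + 3 + 3 = 29 ms.
serial_cycle = RENDER_MS + GENERATED * GEN_MS
serial_pace = serial_cycle / (1 + GENERATED)  # even display cadence

# Overlapped view (second post): the fake frames are produced while
# the next real frame is already rendering, so the cycle stays at the
# render time and only the pacing divides it.
overlap_pace = RENDER_MS / (1 + GENERATED)

print(f"serial: {serial_cycle:.0f} ms cycle, one frame every {serial_pace:.2f} ms")
print(f"overlapped: {RENDER_MS:.0f} ms cycle, one frame every {overlap_pace:.2f} ms")
```

Under the serial assumption the real-frame cadence stretches to 29ms (frames paced every 7.25ms); under the overlapped assumption it stays at 20ms (frames paced every 5ms), which is the crux of the disagreement about whether MFG adds latency.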
PERFECT. ALL I NEEDED TO HEAR.

This means the FPS is massively increasing.
The entire thread is about upscaling artifacts and includes screenshots from ONLY the upscaling sections of DF's video; there are zero references to MFG/FG or screenshots from those sections of the video. He's directly comparing the artifacts found in DLSS4 to motion smoothing artifacts and tries to push the idea that being okay with artifacts in highest-end graphics (5080) is equivalent to being okay with motion smoothing artifacts in a movie theatre.

He is comparing motion smoothing in movies to the multi-frame generation, not to the upscaling. It's a very apt comparison.
The entire thread is about upscaling artifacts and includes screenshots from ONLY the upscaling sections of DF's video; there are zero references to MFG/FG or screenshots from those sections of the video. He's directly comparing the artifacts found in DLSS4 to motion smoothing artifacts and tries to push the idea that being okay with artifacts in highest-end graphics (5080) is equivalent to being okay with motion smoothing artifacts in a movie theatre.
Even if he was talking about MFG/FG motion smoothing, I don't see how it's a "very apt" comparison to compare the use of motion smoothing in PC games to the use of motion smoothing in MOVIE THEATRES, not on your smart TV. Being okay with visual artifacts in your PC gaming experience is not equivalent to being okay with visual artifacts while viewing movies in a movie theatre FFS. And that is the idea he is directly implying here:
View attachment 114609
Just because you're okay with bugs, the occasional crash, loading screens, and some microtransactions like DLCs doesn't mean you're suddenly okay with those in movie theatres. They are two entirely separate markets and situations.
He's comparing motion artifacts in gaming to motion artifacts in a movie theatre, not your smart TV applying motion smoothing effects. He's not only comparing the two, but equating them by saying you can't be okay with one of them without being okay with the other. And no, he's not just talking about frame generation; I can break it down screenshot by screenshot if you'd like:

Pretty much all of his examples were frame generation artifacts, so I don't know why you insist that he is pointing out upscaling artifacts (some examples are a combination, but his focus is on the motion generated portion).
It's apt because it's essentially doing the same thing and causes many of the same visual issues and his concern is clearly visual fidelity which is important for both gaming and movies. You may think it is less of a problem for gaming, but that doesn't make his comparison wrong. Whether the artifacts are "acceptable" or not is just an opinion, so you can disagree with his statement but you can't disqualify it from the conversation just because you disagree.
He's comparing motion artifacts in gaming to motion artifacts in a movie theatre, not your smart TV applying motion smoothing effects. He's not only comparing the two, but equating them by saying you can't be okay with one of them without being okay with the other. And no, he's not just talking about frame generation; I can break it down screenshot by screenshot if you'd like:
Screenshot 1: 0:55, no frame gen listed but DLSS4 is listed.
Screenshots 2-4: 5:00-5:45, ALL DIRECTLY FROM THE DLSS4 vs DLSS3 comparison, NO FRAME GEN.
Screenshot 5: 6:50, SAME DEAL, ALL DIRECTLY FROM THE DLSS4 vs DLSS3 comparison, NO FRAME GEN.
Screenshots 6-10: 7:15-7:30. These actually are MFG artifacts; I was mistaken before, since this 15-second clip doesn't list MFG, but these are in fact frame generation artifacts.
View attachment 114613
Most importantly, if you read his section below the screenshots, when he talks about improving artifacts and getting artifact-free results, he is referring to the CNN vs Transformer model and how Nvidia claims the Transformer model allows for much better AI upscaling and fewer artifacts, which directly refers to upscaling, not MFG/FG. Watch 3:40-4:40:
Let's assume, for the sake of argument, that he is purely talking about MFG artifacts in his last statement here. He is comparing having motion artifacts in high-end gaming to having motion artifacts in movie theatre screenings.
View attachment 114615
Refuting an argument different from the one actually under discussion: refuting motion artifacts in movie theatres vs the actual discussion at hand, motion artifacts in gaming.
While not recognizing or acknowledging the distinction: he never acknowledges the difference between accepting visual bugs in games vs in movie theatres.
I absolutely can disqualify his statements from the discussion, not because I necessarily disagree with his criticism of MFG artifacts, but because he makes false comparisons between two entirely different scenarios to argue that it shouldn't be the norm to accept some level of motion smoothing artifacts in gaming.
Meh, I just wanted to point out a completely ludicrous comparison trying to disqualify the use of FG/DLSS technologies, but IG people have a problem with that.

Now we are arguing about what other points some other forum member might be making in their arguments.
Can we stay the course a bit here?
Part of the hype seems to be based on legitimate-looking leaks such as the leaked Time Spy scores, but there is also lots of hype surrounding clearly false/misrepresentative information.

That said, how is it possible the AMD hype train is stoking the boiler while the Nvidia one seems pretty played out?
On the day of the announcement, Nvidia seemed the clear winner, and if they had had hardware available to buy they would have cleared it out. Now it's seemingly more likely they may have a tepid launch outside of the 5070 and 5090? They likely still will sell out, but it is still weird to me that neither party was ready to ship on the day of the announcement.
My work here is done. See you again in 2 years.

PERFECT. ALL I NEEDED TO HEAR.
off to the preorder line I go!
A false equivalence is not necessarily a straw man, but a straw man usually involves a false equivalence. He is guilty of both.

There is no straw man; if it's a logical fallacy at all, it's one of false equivalence (which it's not either). He's simply making a comparison to another high-fidelity visual medium and saying he doesn't find these artifacts acceptable in visual media 'a', just like they're not acceptable in visual media 'b', and thinks the industry is going down the wrong path by doing things this way. It's a fine argument that is really just his opinion and his reasoning for it. If you disagree and think the artifacts are fine for gaming, great, that's your opinion and you can state your reasons for it. What you should not do is try to dismiss someone's opinion simply because you disagree.
BTW, contrast his thread bashing DLSS4 with his reaction to FSR4. Ofc there was no FG involved in the FSR4 demo, but much of his DLSS4 thread was in fact criticizing upscaling, not MFG. He has worked at both AMD and Nvidia; dk why he seems to favor one over the other so much.
IDK, I would think that such an educated and experienced industry veteran might be less biased, but IG not.

The last one he worked at was AMD and he still feels closer to them??
or saying it like this: "We gonna make RDNA4 event"

But they could have given some indication that they would NOT present new GPU info at CES.
A false equivalence is not necessarily a straw man, but a straw man usually involves a false equivalence. He is guilty of both.
Two parts of his statement which he tries to pass off as equivalent things:
1. "So the question people should be asking, do we want "smart" TV motion smoothing artifacts to be the norm of the highest end graphics?"
2. "If you walk in to a theater and watch Dune III and it has motion smoothing artifacts, would that be ok to you?"
It's not simply a comparison. He's clearly trying to equate the two things by bringing up a different scenario immediately after posing the original question. And the scenario he brings up, movie theatres, isn't even a plausible scenario, since there is zero indication that upscaling/FG will be used in movie theatres any time soon. If it were a plausible scenario (like the aforementioned smart TV AI motion smoothing), maybe I could see how he is making a simple comparison, but since the scenario is entirely ludicrous, the only conclusion you can draw is that he is using this outlandish scenario to try to prove something.
Let's use the exact same sentence structure, except with different terms:
1. "So the question people should be asking, do we want completely random filler parts to be in our fast food items?"
2. "If you walk into a three-star Michelin restaurant and they serve you a $500 dish consisting of random cheap filler parts, would that be okay to you?"
Let's combine the statements, because separately they might be okay, but when used together to equate two different ideas it becomes misrepresentation at best.
"So the question people should be asking, do we want completely random filler parts to be in our fast food items? If you walk into a three-star Michelin restaurant and they serve you a $500 dish consisting of random cheap filler parts, would that be okay to you?"
Clearly it is in fact a false equivalence, and a false equivalence is part of a straw man fallacy. The other important part of a straw man is using the false equivalence to disqualify something that is not in fact equivalent. He doesn't outright say that having motion artifacts in movie theatres is unacceptable, but he strongly implies it. By doing this he combines false equivalence with the disqualification of a completely separate scenario to try to disqualify the acceptance of some level of motion smoothing artifacts in high-end computer graphics.