Originally posted by: Azn
Really? No one pays attention to 3DMark? Like who? Almost every single reviewer uses 3DMark to measure performance. 3DMark is a tool for PC gamers that measures each sub-section of the card. :laugh: It actually does reflect gaming situations, like when AA is on and off, etc. It behaves the same as it would in a game.
To answer your question though, the 9600GT only keeps up when AA is applied, and even then it still trails the 8800GT. With AA disabled the 8800GT pulls further ahead than it does with AA. Notice the bottleneck: memory bandwidth! All that texture fillrate is useless if the card doesn't have the bandwidth to use it properly. As for shaders, I guess you missed the 9600GT thread. Do a search.

Actually, most reviewers use 3DMark less and less because of its obvious problems, and they certainly don't bother with all of the individual synthetic results the way TR does. We've already covered a few glaring examples where those results directly contradict your arguments in real-world games: multi-texture fillrate with G80 and G92, where G80 still matches G92 in games; G92 vs. G94, where the 9600GT still keeps pace with G92 in games; or better yet, ATI's R600, which wins in 3DMark but still can't beat any G80/G92 part in actual games.
I think you need to revisit the 9600GT reviews. It keeps up with the 8800GT (and even G80/9800 parts) up until higher resolutions or with AA enabled. Bandwidth isn't a factor compared to the 8800GT because it has the same 256-bit bus and similarly clocked RAM at 900-1000MHz, and it manages all this despite having half the SP and texturing ability of the 8800GT. Wonder why?
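A rough back-of-the-envelope sketch of that comparison, in Python. The unit counts and clocks below are commonly cited reference specs for the two cards rather than figures taken from this thread, so treat them as assumptions:

```python
# Hedged sketch: peak memory bandwidth and texel fillrate for the two cards
# being compared above. Specs are commonly cited reference numbers (assumed).

def mem_bandwidth_gbs(bus_width_bits: int, effective_rate_mtps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bytes) x effective data rate."""
    return bus_width_bits / 8 * effective_rate_mtps / 1000

def texel_fillrate_gtps(tmus: int, core_clock_mhz: float) -> float:
    """Peak bilinear texel fillrate in GTexels/s: TMU count x core clock."""
    return tmus * core_clock_mhz / 1000

cards = {
    # name: (bus width in bits, effective memory rate in MT/s, TMUs, core MHz)
    "9600GT (G94)": (256, 1800, 32, 650),
    "8800GT (G92)": (256, 1800, 56, 600),
}

for name, (bus, rate, tmus, core) in cards.items():
    print(f"{name}: {mem_bandwidth_gbs(bus, rate):.1f} GB/s, "
          f"{texel_fillrate_gtps(tmus, core):.1f} GTex/s")
# Both land at the same ~57.6 GB/s, while the 9600GT has far less
# texturing throughput -- which is the point being argued here.
```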
Originally posted by: Azn
I said that about G92 because it has massive texture fillrate that could use a wider bus or more bandwidth, not about the 2900XT. In some situations, yes, the 3870 wins because of its stronger shader, slightly higher texture fillrate, etc.

I think you said it because you have no clue what you're arguing about. Once again: do you think the number of memory controllers/bus width has an impact on performance when total bandwidth is not an issue? Yes or no?
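To make the "bus width vs. total bandwidth" question concrete, here is a minimal sketch; the two configurations are hypothetical and chosen only so that bus width times data rate comes out identical:

```python
# Minimal illustration: total bandwidth is bus width x data rate, so a
# narrow bus at a higher data rate can equal a wide bus at a lower one.
# Both configurations below are hypothetical examples, not real cards.

def bandwidth_gbs(bus_width_bits: int, data_rate_mtps: float) -> float:
    return bus_width_bits / 8 * data_rate_mtps / 1000

wide_slow = bandwidth_gbs(512, 1600)    # 512-bit bus at 1600 MT/s -> 102.4 GB/s
narrow_fast = bandwidth_gbs(256, 3200)  # 256-bit bus at 3200 MT/s -> 102.4 GB/s
print(wide_slow, narrow_fast)  # identical peak GB/s despite half the bus width
```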
Originally posted by: Azn
Guru3D tests where the 8800GTX would look good: testing medium settings in Crysis, turning off soft shadows in FEAR, disabling soft particles in Quake Wars. Of course you handpicked benches run at medium settings. You just don't understand where the performance hit is coming from, so you blame me like I'm too dumb to read bar graphs.
High settings in Crysis put emphasis on everything. High settings use bigger textures, better shadows, better shaders, etc. They stress the card.

Rofl, ya, except the other cards are running the same settings, and in the case of QW the effects were disabled specifically because the Radeon HD parts couldn't use them. But that doesn't change the fact that you can't read benchmarks or make sense of anything beyond synthetic 3DMark results. Also, the effects turned off in FEAR and QW, like soft shadows and particles, have nothing to do with your arguments about texturing and bandwidth; those features stress the ROPs and shaders more than anything else. They also happen to be some of the most expensive features to enable in games, while modern GPUs handle large textures and filtering gracefully. If you actually spent time playing games rather than talking about them, you'd know that turning up texture and filtering quality costs far less performance than any shadowing, particle or post-processing effect.
Originally posted by: Azn
So it isn't a 9800GTX. It's an overclocked 8800GTS with 1GB. There you have an 8800GTS 1GB beating the 8800GTX in most of the benches, even with AA, with much lower memory bandwidth. What a surprise.

Uh, so what's a 9800GTX? It's an overclocked 8800GTS with a few power tweaks and slightly faster RAM. In this case the Gainward uses the same 0.8ns Samsung memory, so for all intents and purposes the cards are identical. And no, it's no real surprise the Gainward G92 wins many tests when it's clocked 155MHz faster and closer to that 756MHz number I quoted you earlier. Oh ya, that's where its fillrate would begin to make up for the 33% difference in ROPs.
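A hedged sketch of the fillrate arithmetic behind that claim: peak pixel fillrate is roughly ROP count times core clock. The 16/24 ROP counts and the 756MHz figure come from the discussion above; the other clocks are commonly cited reference speeds and should be treated as assumptions:

```python
# Hedged sketch: peak pixel fillrate scales with ROP count x core clock.
# ROP counts (16 for G92, 24 for G80) are the ones discussed above; clocks
# other than the 756 MHz figure are assumed reference speeds.

def pixel_fillrate_gpixels(rops: int, core_clock_mhz: float) -> float:
    return rops * core_clock_mhz / 1000

configs = [
    ("G92, 16 ROPs @ 675 MHz", 16, 675),
    ("G92, 16 ROPs @ 756 MHz", 16, 756),
    ("G80 GTX, 24 ROPs @ 575 MHz", 24, 575),
    ("G80 Ultra, 24 ROPs @ 612 MHz", 24, 612),
]
for name, rops, clk in configs:
    print(f"{name}: {pixel_fillrate_gpixels(rops, clk):.1f} GPixels/s")
# At these clocks a heavily overclocked G92 narrows the ROP fillrate gap
# against a 24-ROP G80 but does not fully close it.
```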
Originally posted by: Azn
No, you get an overclocked 8800GTX. And for your info, the 8800 Ultra has 17% more bandwidth than the 8800GTX, not 8%. I read that TechReport review a while ago, the one with an overclocked G80 GTS in the mix. They were testing in extremely bandwidth-limited situations at uber-high resolutions with AA. That extra VRAM and memory bandwidth sure is kicking in under those extreme conditions, isn't it? G80 has more bandwidth; that is why it does well there. G80 is a weaker chip than G92, which is why G92 can easily beat the G80 GTS with much lower bandwidth.

Uber-high resolutions like Crysis at 1280 with Medium settings? Weren't you knocking another review site for using similar settings? But I guess it's OK when TR does it, right? Just as long as they include detailed 3DMark results. :laugh:
The Ultra is an overclocked 8800GTX, plain and simple. It has an updated cooler and a few power tweaks but any of the OC 8800GTX will perform identically to it clock for clock, as shown in numerous reviews. G80 is clearly the faster chip clock for clock, so I really have no clue what you're talking about. It'll be pretty obvious once GT200 rolls out with similar clock speeds to G92, only with more ROPs to give it the boost in performance lacking with G92.
Originally posted by: Azn
Why is it that the 9800GTX can beat the 8800GTX at modest settings even with a 33% reduction in ROPs? :brokenheart: How can that be the bottleneck when pixel fillrate is limited by memory bandwidth? For your information, G92 has lower bandwidth than G80. I don't know which GTS you are talking about, but if you mean the G92 GTS, the 9800GTX beats it. If you mean the G80 GTS, the 9800GTX beats that too.

It beats the 8800GTX in some benchmarks and loses in others, and that's with a 100MHz core speed increase to help close the gap in fillrate. G92 certainly benefits from some of its other enhancements, but overall it still can't beat the Ultra, where it only has a 50MHz lead. So once again, clock for clock, G80 is clearly far superior to G92, and that's due to its ROPs more than anything else.
Originally posted by: Azn
Saying crap like "bandwidth makes no performance impact at lower resolutions" is full of $hit. BFG already tested this on his bandwidth-happy Ultra. Decreasing his memory bandwidth by 20% gave him lower performance even without AA, and much more with AA at 1600x1200. Now what would happen if he downclocked to the same GB/s as the 9800GTX's memory speed? I'll tell you this much: it wouldn't be pretty against the 9800GTX. G92 is starved for bandwidth, with massive texture fillrate that sits there waiting for the bandwidth to catch up.
http://episteme.arstechnica.co...7909965/m/453004231931
Before you say something ignorant like "increasing the core clock will show big improvements," let me remind you that it is also tied to the texture clock, and that's a G80 with lower texture fillrate, not a G92. :brokenheart:

Too late! LMAO! More proof you can't read benchmarks, or comprehend anything beyond what you see fit.
From BFG's link:
Core: -12.64% Memory: -5.45%
Commentary
The biggest performance difference clearly comes from the core clock where some games are almost seeing a 1:1 performance delta with it. I expected it would be shader clocks making the biggest difference but clearly that isn't the case with the 8800 Ultra.
Of course texturing ability is also raised by a core clock increase, but that's not an issue when comparing against G92, since G92 already has the texturing advantage over G80 with its improved 1:1 TMUs. Yet even with the improved texturing units, G92 still can't surpass G80 without extreme core clock increases, and it certainly doesn't scale nearly as well as G80. Why? Because it has a third fewer ROPs. Also notice his results mirror what I said earlier: performance on G80 scales nearly 1:1 with core clock. You can't say the same for G92, because again, it needs much faster core clocks to make up for the third fewer ROPs, which have the biggest impact on performance.
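A hedged sketch of the scaling reasoning above: divide the performance drop by the clock cut that caused it. The 20% / 8-10% pair is the one Azn cites from BFG's Ultra test earlier in the thread; the core-clock numbers are hypothetical placeholders, not BFG's data:

```python
# Hedged sketch of the "scaling" argument: how closely a performance drop
# tracks the clock cut that caused it. A ratio near 1.0 means that clock
# domain is close to the bottleneck; well below 1.0 means it is not.

def scaling_ratio(clock_cut_pct: float, fps_drop_pct: float) -> float:
    return fps_drop_pct / clock_cut_pct

# Figures Azn cites from BFG's Ultra test above: a ~20% bandwidth cut
# costing roughly 8-10% of the frame rate.
print(scaling_ratio(20, 8), scaling_ratio(20, 10))   # ~0.40 to 0.50

# Hypothetical core-clock example (placeholder numbers, not BFG's data):
# a 12% core cut costing ~11% of the frame rate would be close to 1:1.
print(scaling_ratio(12, 11))                          # ~0.92
```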
Originally posted by: Azn
Since there is no way to test just ROP performance, I think we can rest assured it's just chizow's fantasy for now.
G92's biggest bottleneck compared to an Ultra is bandwidth. If it had the bandwidth it could overcome the Ultra with AA. It can beat an Ultra without AA in most situations anyway, as long as it's not some obscure setting where the extra VRAM and pixel fillrate make the difference. A 2fps difference with AA, on 60% less bandwidth, is phenomenal when a 20% bandwidth cut cost BFG's Ultra 8-10% of its frame rate.
More ROP power helps at uber-high resolutions and with AA, and does improve performance, no doubt. Anything higher will give you more performance, but once the card is bottlenecked you get minimal returns, much like playing a game on a Pentium 3 with an 8800GT stuck in it: the CPU limits your frame rate.
Look at the 3870, which has 20% more ROP throughput plus slightly more bandwidth than the 8800GT, yet it loses to the 8800GT, with more GFLOPS to boot. Since you can only fit so much into a single die, a balanced card is the way to go. G80 does just that, except it costs much more money than G92. Now stick GDDR5 on G92 and it could easily outpace an Ultra at its own anti-aliasing game.

No need to talk about my fantasies; I'd be content if you properly comprehended the benchmarks and arguments you present. As for the last comment about GDDR5, that was the point of showing the 9800GTX compared to the G92 8800GTS: a 270MHz bump in memory speed isn't what G92 needs the most. Don't believe me? Go check out some of the 9800GTX OC Edition reviews (756-770MHz core, 2300MHz memory). You'll find, once again, that what I've been saying is true: G92 needs fillrate more than it needs bandwidth (or anything else), and it gets that from extreme core clock increases.
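On the GFLOPS comparison quoted above (3870 vs. 8800GT): shader throughput is usually estimated as ALU count times shader clock times ops per clock, and the counting convention (MADD only vs. MADD+MUL) is one reason quoted figures for these cards differ. A minimal sketch using commonly cited reference specs, which should be treated as assumptions:

```python
# Hedged sketch of how shader GFLOPS is typically estimated:
# ALU count x shader clock x ops issued per ALU per clock. The specs are
# commonly cited reference numbers (assumed), and the ops-per-clock
# convention is why published GFLOPS figures for these cards vary.

def gflops(alus: int, shader_clock_mhz: float, ops_per_clock: int) -> float:
    return alus * shader_clock_mhz * ops_per_clock / 1000

# HD 3870: 320 ALUs at the 775 MHz core clock, counted at 2 ops (MADD)
hd3870 = gflops(320, 775, 2)          # ~496 GFLOPS
# 8800GT: 112 ALUs at a 1500 MHz shader clock, counted at 2 ops (MADD only)
g92_madd = gflops(112, 1500, 2)       # ~336 GFLOPS
# ...or at 3 ops per clock (MADD + MUL), the more generous convention
g92_madd_mul = gflops(112, 1500, 3)   # ~504 GFLOPS
print(hd3870, g92_madd, g92_madd_mul)
```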