Way to dodge the question. This is why no one pays any serious attention to 3DMark: it tells you what you already know and doesn't reflect real-world performance in actual games. But nice try dodging the question. No need to look at the high end, just look at mid-range with the 9600GT. Once again, how does a card with half the SPs and half the texturing ability keep up with the 8800GT? Oh right, because it has the same number of ROPs and the same fillrate.
Really? No one pays attention to 3DMark? Like who? Almost every single reviewer uses 3DMark to measure performance. 3DMark is a tool for PC gamers that measures each sub-section of the card. :laugh: Actually it does reflect gaming situations, like when AA is on and off, etc. It behaves the same as it would in a game.
To answer your question though, the 9600GT keeps up only when AA is applied, and even then it still trails the 8800GT. With AA disabled the 8800GT pulls further ahead than it does with AA on. Notice the bottleneck: memory bandwidth! All that texture fillrate is useless if the card doesn't have the bandwidth to use it properly. As for shaders, I guess you missed the 9600GT thread. Do a search.
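For reference, here's a rough back-of-the-envelope on those two cards, assuming NVIDIA's reference clocks (retail cards varied, so treat the numbers as a sketch, not gospel):

[code]
# Rough spec comparison at assumed reference clocks; retail cards varied.
cards = {
    #           ROPs  TMUs  core(MHz)  mem_eff(MT/s)  bus(bits)
    "9600GT":  (16,   32,   650,       1800,          256),
    "8800GT":  (16,   56,   600,       1800,          256),
}

for name, (rops, tmus, core, mem_eff, bus) in cards.items():
    pixel_fill = rops * core / 1000          # Gpixels/s
    texel_fill = tmus * core / 1000          # Gtexels/s
    bandwidth  = mem_eff * bus / 8 / 1000    # GB/s
    print(f"{name}: {pixel_fill:.1f} Gpix/s, {texel_fill:.1f} Gtex/s, {bandwidth:.1f} GB/s")

# 9600GT: 10.4 Gpix/s, 20.8 Gtex/s, 57.6 GB/s
# 8800GT: 9.6 Gpix/s, 33.6 Gtex/s, 57.6 GB/s
[/code]

Same ROP count, the same bandwidth, and a big gap in texel rate and SPs; which of those is the actual limit in a given game is what this whole argument is about.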
No, you said "bigger bus is just better" without any proof whatsoever, and in this case despite evidence to the contrary. Once again, I think you need to take a look at those benchmarks, because the 3870 is beating the 2900XT by 1-2FPS more often than not, probably due to that "massive" 30MHz increase that in reality is negligible. The cards are nearly identical in performance and clearly show that the number of memory controllers/bus width has no impact on performance in cases where bandwidth is not a bottleneck.
I said that about the G92, not about the 2900XT, because the G92 has massive texture fillrate that could actually use a wider bus or more bandwidth. In some situations, yes, the 3870 wins because of its stronger shaders, slightly higher texture fillrate, etc.
I didn't handpick any benches; I showed you a review that clearly shows the 8800 and 9800 neck and neck. You came back with Crysis and COD4 as examples to the contrary... but only proved you can't read simple bar graphs.
So what do higher settings in Crysis do? They emphasize G92's SP advantage over G80, which was never in question, and Crysis was one of the *few* games, if not the only one, that showed a significant advantage from more SPs in Keys' and BFG's own in-house testing. I believe COD4 also showed a massive drop-off below 64SPs, but that's about it. That still doesn't change the fact that you can't read benchmarks.
Guru3D tests the settings where the 8800GTX would look good: medium settings in Crysis, soft shadows turned off in FEAR, soft particles disabled in Quake Wars. Of course you handpicked benches run at medium settings. You just don't understand where the performance difference is coming from, so you blame me like I'm some dummy who can't read bar graphs.
High settings in Crysis put the emphasis on everything. They use bigger textures, better shadows, heavier shaders, etc. They stress the whole card.
Xbit does more than review a 9800GTX; they review a card that's faster than the 9800GTX with more VRAM as well, the review I linked in the OP: the Gainward Bliss GTS 1GB @ 730/2100.
So it isn't a 9800GTX, it's an overclocked 8800GTS with 1GB. There you have an 8800GTS 1GB beating the 8800GTX in most of the benches, even with AA, with much lower memory bandwidth. What a surprise.
Rofl, you obviously haven't been paying attention. When you overclock a G80 GTX by 8% you get an Ultra, and you see at least that much difference in performance. When you overclock a G80 GTS by 15%, from 500 to 575, the difference is losing to the G92 GT by that much or more vs. performing nearly identically to it. Again, if you look at the Tech Report 3870 review and pay attention to the 640MB GTS numbers, you'll see it's very competitive with the G92 GT, whereas it loses badly in every other review. Why? Because TR did what most other reviewers ignored: they used the 575MHz clockspeed based on what was actually available on the market, not the old reference clockspeed. FiringSquad also did this in their comparisons, which I've linked to numerous times. You don't see nearly that level of scaling with G92, given all the different GT, GTS and now GTX parts clocked from 600-675MHz. So yes, G80 sees a much higher return on core clock vs. any other adjustments to shader or memory clock, which is what anyone who has owned a G80 will tell you.
No, you get an overclocked 8800GTX. And for your info the 8800 Ultra has 17% more bandwidth than the 8800GTX, not 8%. I read that Tech Report review a while ago, the one with an overclocked G80 GTS in the mix. They were testing in extremely bandwidth-limited situations, at uber-high resolutions with AA. That extra VRAM and memory bandwidth sure is kicking in in those extreme conditions, isn't it. G80 has more bandwidth, and that is why it does well in those extreme conditions. G80 is a weaker chip compared to G92, which is why G92 can easily beat the G80 GTS with much lower bandwidth.
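Just to pin down what an Ultra actually is relative to a stock GTX, here's the arithmetic assuming NVIDIA's reference clocks (retail GTX cards often shipped higher, which shifts the percentages):

[code]
# 8800 GTX vs 8800 Ultra at assumed reference clocks (MHz); both use a 384-bit bus,
# so the bandwidth delta equals the effective memory clock delta.
gtx   = {"core": 575, "shader": 1350, "mem_eff": 1800}
ultra = {"core": 612, "shader": 1512, "mem_eff": 2160}

for key in gtx:
    gain = (ultra[key] / gtx[key] - 1) * 100
    print(f"{key}: +{gain:.0f}%")

# core: +6%, shader: +12%, mem_eff (= bandwidth): +20%
[/code]

So at reference clocks the core bump is closer to 6% and the bandwidth bump closer to 20%; which of those the Ultra's real-world gains actually track is the question here.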
No I didn't. I said the 9800GTX can't overcome the 33% reduction in ROPs despite all of its enhancements in the way of texture units and SPs and its increase in bandwidth over the GTS. This emphasizes my point that ROPs are still the biggest bottleneck on G92, not bandwidth, texturing ability or anything else. Clock for clock, G80 is better at every resolution due to its fillrate advantage.
Why is it that the 9800GTX can beat the 8800GTX at modest settings even with a 33% reduction in ROPs? :brokenheart: How can ROPs be the bottleneck when pixel fillrate is limited by memory bandwidth? For your information, G92 has lower bandwidth than G80. I don't know which GTS you are talking about, but if it's the G92 GTS, the 9800GTX beats it. If it's the G80 GTS, the 9800GTX beats it too.
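Since this keeps going in circles, here are the raw deficits between the two GTX cards, again assuming reference clocks:

[code]
# 9800 GTX (G92) vs 8800 GTX (G80) at assumed reference clocks.
cards = {
    #            ROPs  core(MHz)  mem_eff(MT/s)  bus(bits)
    "9800 GTX": (16,   675,       2200,          256),
    "8800 GTX": (24,   575,       1800,          384),
}

for name, (rops, core, mem_eff, bus) in cards.items():
    pixel_fill = rops * core / 1000          # Gpixels/s
    bandwidth  = mem_eff * bus / 8 / 1000    # GB/s
    print(f"{name}: {pixel_fill:.1f} Gpix/s, {bandwidth:.1f} GB/s")

# 9800 GTX: 10.8 Gpix/s, 70.4 GB/s   (~22% less fillrate, ~19% less bandwidth)
# 8800 GTX: 13.8 Gpix/s, 86.4 GB/s
[/code]

The two deficits are nearly the same size, which is exactly why it's so hard to separate "ROP-limited" from "bandwidth-limited" on these two cards using game benchmarks alone.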
Huh? No, I'm pretty sure BFG came to the same conclusion I've been stating for months in his recent testing: that core clock differences on G80 lead to the biggest performance gains. Likewise, I can increase my core clock to 621MHz without touching memory and see significant gains. Why? Because bandwidth isn't the greatest limiting factor with G80, due to its wider bus and greater bandwidth compared to G92. Sure, G92 might be bottlenecked by its 256-bit bus in some situations and at some settings, but realistically it's not going to be saturating its bandwidth all of the time, i.e. using bandwidth at 100% efficiency, so increases to core clock will still have the biggest impact on its performance, just not as much as with G80. Which is exactly what we've seen with G92.
Saying crap like "bandwidth makes no performance impact at lower resolutions" is full of $hit. BFG already tested this on his bandwidth-happy Ultra. Decreasing his memory bandwidth by 20% gave him lower performance even without AA, and much more so with AA, at 1600x1200. Now what would happen if he downclocked to the same GB/s as the 9800GTX's memory speed? I'll tell you this much, it wouldn't be pretty against the 9800GTX. G92 is starved for bandwidth, with massive texture fillrate that sits there waiting for the bandwidth to catch up.
http://episteme.arstechnica.co...7909965/m/453004231931
Before you say something ignorant like "increasing the core clock will show big improvements," let me remind you that the core clock is also tied to the texture clock, and that's a G80 with lower texture fillrate, not a G92. :brokenheart: Too late!
Since there is no way to test ROP performance in isolation, I think we can rest assured it's just Chizow's fantasy for now.
G92's biggest bottleneck compared to an Ultra is bandwidth. If it had the bandwidth it could overcome the Ultra with AA. It can beat an Ultra without AA in most situations anyway, as long as it's not some obscure setting where the extra VRAM and pixel fillrate make the difference. A 2FPS difference with AA on, with 60% less bandwidth, is phenomenal when a 20% bandwidth cut cost BFG's Ultra 8-10% of its frame rate.
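To put that in GB/s terms (again assuming reference memory clocks, so BFG's exact figures may differ):

[code]
# Memory bandwidth, assuming reference memory clocks (effective MT/s).
def gbps(mem_eff, bus_bits):
    return mem_eff * bus_bits / 8 / 1000   # GB/s

ultra       = gbps(2160, 384)              # 8800 Ultra stock  -> ~103.7 GB/s
ultra_cut20 = ultra * 0.80                 # Ultra minus 20%   -> ~82.9 GB/s
g92_gtx     = gbps(2200, 256)              # 9800 GTX          -> ~70.4 GB/s

print(f"Ultra {ultra:.1f}, Ultra -20% {ultra_cut20:.1f}, 9800 GTX {g92_gtx:.1f} GB/s")
print(f"Ultra has {(ultra / g92_gtx - 1) * 100:.0f}% more bandwidth than the 9800 GTX")
print(f"(i.e. the 9800 GTX has {(1 - g92_gtx / ultra) * 100:.0f}% less)")
[/code]

The exact percentage depends on which cards and memory clocks you compare, but even after the 20% cut the Ultra still sits well above what any 256-bit G92 gets.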
More ROPs help at uber-high resolutions and with AA, and they do improve performance, no doubt... Anything higher will give you more performance, but once something else is the bottleneck you get minimal returns, much like trying to play a game on a Pentium 3 with an 8800GT stuck in it: the CPU limits your frame rates.
Look at the 3870, which has 20% more ROP throughput + slightly more bandwidth than the 8800GT, and more GFLOPS to boot, yet it loses to the 8800GT. Since you can only fit so much into a single die, a balanced card is the way to go. G80 does just that, except it costs much more money than G92. Now stick GDDR5 on a G92 and it could easily outpace an Ultra at its own anti-aliasing game.
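And for whoever wants the raw numbers behind that comparison, again assuming reference clocks (note the G92 GFLOPS figure depends on whether you count the co-issued MUL):

[code]
# HD 3870 (RV670) vs 8800 GT (G92) at assumed reference clocks.
hd3870_gflops  = 320 * 2 * 0.775           # 320 SPs, MADD @ 775 MHz  -> 496 GFLOPS
g92gt_gflops   = 112 * 2 * 1.5             # 112 SPs, MADD @ 1500 MHz -> 336 GFLOPS (504 counting the MUL)

hd3870_pixfill = 16 * 775 / 1000           # 16 ROPs @ 775 MHz -> 12.4 Gpix/s
g92gt_pixfill  = 16 * 600 / 1000           # 16 ROPs @ 600 MHz ->  9.6 Gpix/s

hd3870_bw      = 2250 * 256 / 8 / 1000     # GDDR4, 256-bit -> 72.0 GB/s
g92gt_bw       = 1800 * 256 / 8 / 1000     # GDDR3, 256-bit -> 57.6 GB/s

print(hd3870_gflops, g92gt_gflops, hd3870_pixfill, g92gt_pixfill, hd3870_bw, g92gt_bw)
[/code]

Same ROP count, roughly 25-30% more fillrate and bandwidth on the 3870, peak GFLOPS in the same ballpark (higher or lower depending on how you count the MUL), and it still loses more often than not, which is really the point: peak numbers only matter where they're the actual bottleneck.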