9800GTX 1GB Performance Preview?


jjzelinski

Diamond Member
Aug 23, 2004
3,750
0
0
Am I the only one noticing the Hellgate bench that shows the 8800GT 1024MB is 32.6% faster than its 512MB counterpart?

EDIT: The above was at 1024; it's even greater at 1900: 35%.

Furthermore, Call of Juarez shows a 52% increase at 1600.

A 26% increase in COD4 at 1600.

A 25% increase at 1024 in TR: Legend.

Now, of course, I've cherry-picked the favorable scores, but these are instances where the extra 512MB reveals its use. Hopefully someone better versed than myself can extrapolate more useful info from those benches, as I simply don't have the time or expertise; just pointing out what I felt was overlooked.
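
For anyone checking these deltas against other benches, the math is just the relative FPS difference. A quick Python sketch; the FPS pair here is a made-up placeholder, not a number from the review:

    # Percent uplift of the 1GB card over the 512MB card.
    def uplift_pct(fps_512, fps_1gb):
        return (fps_1gb - fps_512) / fps_512 * 100

    # Hypothetical FPS values, for illustration only:
    print(f"{uplift_pct(30.0, 39.8):.1f}%")  # 32.7% -- same ballpark as the Hellgate delta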
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: chizow


Unfortunately the review focuses on the super-high resolution of 2560, so the differences are very small and it's hard to see any true benefit of 1GB over 512MB. I think the Palit part's performance is disappointing overall, though, as there is very little difference between the Sonic and the presumably stock GTS (650-675MHz core, 512MB RAM), at 2560 at least. For my part, this just further confirms that VRAM and bandwidth aren't as big a bottleneck on G92 as the number of ROPs.

How do you figure? Bandwidth doesn't have any impact at 2560x1600, when pixel fillrate gets limited by bandwidth?

It also depends on the game and what you are testing. Some games are more shader heavy, some texture heavy, some pixel heavy...

However, at higher resolutions you need more pixel fillrate and bandwidth. They go hand in hand.
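
Rough numbers on why they go hand in hand: every pixel written costs memory traffic, so peak fillrate implies a minimum bandwidth to sustain it. A back-of-the-envelope Python sketch, assuming a plain 4-byte color plus 4-byte Z write per pixel and ignoring compression, caches, blending, and texture reads:

    # Crude lower bound on the memory traffic implied by a pixel fillrate.
    BYTES_PER_PIXEL = 4 + 4  # assumed: 32-bit color + 32-bit Z, no compression

    def traffic_gb_s(fillrate_mpix_s):
        return fillrate_mpix_s * 1e6 * BYTES_PER_PIXEL / 1e9

    # G92 at 16 ROPs * 675MHz = 10800 Mpix/s theoretical:
    print(f"{traffic_gb_s(10800):.1f} GB/s")  # 86.4 GB/s, above the 9800GTX's 70.4 GB/s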
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
Biggest bottleneck for G92 is memory bandwidth.

Already been through this before; the 9800GTX was supposed to prove this theory correct but it did just the opposite. The 9800GTX has nearly identical core/shader clocks compared to many of the G92 GTS cards available, but it uses faster RAM clocked at 2200+. This should help alleviate the bandwidth bottleneck and allow G92 to stretch its legs...but it doesn't. That leaves only frame buffer size and ROPs as the main *known* differences with G80. The G92 1GB reviews show there isn't much gain from 512MB to 1GB, and in either case the 768MB GTX typically outperforms both. Which leaves only ROPs as the main *known* difference. There are some other factors that aren't accounted for in tech specs and don't generally receive much press, like triangle set-up units and the impact of memory controllers, but going by the main known differences between G92 and G80, the 16 vs. 24 ROPs stands out even more than before with what we know now.
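
For reference, the bandwidth figures behind this argument fall straight out of bus width times effective memory data rate. A quick sketch using commonly quoted reference clocks (partner cards varied, so treat these as approximate):

    # Peak memory bandwidth = bus width (bytes) * effective data rate.
    def bandwidth_gb_s(bus_bits, effective_mt_s):
        return bus_bits / 8 * effective_mt_s * 1e6 / 1e9

    for name, bus, mem in [
        ("9800GTX (G92, 256-bit, 2200MT/s)", 256, 2200),     # ~70.4 GB/s
        ("8800GTS 512 (G92, 256-bit, 1940MT/s)", 256, 1940), # ~62.1 GB/s
        ("8800GTX (G80, 384-bit, 1800MT/s)", 384, 1800),     # ~86.4 GB/s
    ]:
        print(f"{name}: {bandwidth_gb_s(bus, mem):.1f} GB/s")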
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: chizow
Originally posted by: Azn
Biggest bottleneck for G92 is memory bandwidth.

Already been through this before; the 9800GTX was supposed to prove this theory correct but it did just the opposite. The 9800GTX has nearly identical core/shader clocks compared to many of the G92 GTS cards available, but it uses faster RAM clocked at 2200+. This should help alleviate the bandwidth bottleneck and allow G92 to stretch its legs...but it doesn't. That leaves only frame buffer size and ROPs as the main *known* differences with G80. The G92 1GB reviews show there isn't much gain from 512MB to 1GB, and in either case the 768MB GTX typically outperforms both. Which leaves only ROPs as the main *known* difference. There are some other factors that aren't accounted for in tech specs and don't generally receive much press, like triangle set-up units and the impact of memory controllers, but going by the main known differences between G92 and G80, the 16 vs. 24 ROPs stands out even more than before with what we know now.

So you are saying the 9800GTX is the same speed as the 8800GTS? No it isn't. It's still a little faster, especially in AA situations.

The bump to 2200MHz is actually quite small in bandwidth terms, only about 8GB/s more, still limited by the four 64-bit memory controllers.

There is definitely a difference between the 1GB and 512MB cards in games like Crysis, Call of Juarez, and WiC, especially with AA. The 8800GTX doesn't outperform both cards; it just depends on the situation. Usually extreme AA situations are where the 8800GTX prevails, with its greater pixel fillrate and memory bandwidth. At modest settings the 8800GTS 512 or 9800GTX easily beats the 8800GTX.

ROPs have always helped at higher resolutions and with AA. This was already known.

G92 is only a 16-ROP card. Unless you change the chip configuration it's not a G92 anymore; it would be something else. It's limited by bandwidth, considering its pixel fillrate doesn't really hit the theoretical peak rate.

If you are trying to say today's GPUs are bottlenecked only by pixel fillrate, that's only partially right.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
So basically, 1GB of RAM is not useless after all. It is very useful under specific circumstances.

Also, 8800GTX SLI vs 9800GTX SLI isn't what we were looking for. We were looking for a 512MB vs 1GB 9800GTX SLI review.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
Originally posted by: chizow
Originally posted by: Azn
Biggest bottleneck for G92 is memory bandwidth.

Already been through this before; the 9800GTX was supposed to prove this theory correct but it did just the opposite. The 9800GTX has nearly identical core/shader clocks compared to many of the G92 GTS cards available, but it uses faster RAM clocked at 2200+. This should help alleviate the bandwidth bottleneck and allow G92 to stretch its legs...but it doesn't. That leaves only frame buffer size and ROPs as the main *known* differences with G80. The G92 1GB reviews show there isn't much gain from 512MB to 1GB, and in either case the 768MB GTX typically outperforms both. Which leaves only ROPs as the main *known* difference. There are some other factors that aren't accounted for in tech specs and don't generally receive much press, like triangle set-up units and the impact of memory controllers, but going by the main known differences between G92 and G80, the 16 vs. 24 ROPs stands out even more than before with what we know now.

So you are saying the 9800GTX is the same speed as the 8800GTS? No it isn't. It's still a little faster, especially in AA situations.

The bump to 2200MHz is actually quite small in bandwidth terms, only about 8GB/s more, still limited by the four 64-bit memory controllers.

There is definitely a difference between the 1GB and 512MB cards in games like Crysis, Call of Juarez, and WiC, especially with AA.

ROPs have always helped at higher resolutions and with AA. This was already known.

G92 is only a 16-ROP card. Unless you change the chip configuration it's not a G92 anymore; it would be something else. It's limited by bandwidth, considering its pixel fillrate doesn't really hit the theoretical peak rate.

If you are trying to say today's GPUs are bottlenecked only by pixel fillrate, that's only partially right.

The differences between the GTX and GTS are exactly what you might expect based on slight differences in clock speed, similar to the differences between a GT and GTS. If bandwidth were the major bottleneck for G92, as was argued for both the GTS and GT, easing that bottleneck should result in a performance increase greater than just the typical % scaling seen from increasing clock frequencies.

The bandwidth increase is actually significant: it's more than 10% and finally pushes G92's bandwidth past the G80 GTS. If bandwidth were the major bottleneck you might expect very close to a 10% increase in performance, but instead you still see linear performance gains with core clock between the GTS and GTX.

Lastly, at lower resolutions where ROPs and bandwidth shouldn't be an issue (based on your points), the G92 should absolutely destroy the G80 given its much higher clock speeds, 1:1 TMU enhancements, and much higher shader clocks. Yet it doesn't. ROPs aren't only important for AA, they're also responsible for rasterization and post-processing.

When referring to "biggest" there is no partial or shared title. It's either the biggest bottleneck or it's not. With each G92 variant and release we've gotten a little more of a glimpse of why G92 still can't beat the 16-month-old G80, and I think we're closer now to an answer.
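
The scaling logic here can be made concrete with a toy bottleneck model: frame rate is capped by the scarcest resource, so raising a non-binding resource buys nothing, while raising the binding one shows up nearly in full. A sketch; the 20% bandwidth headroom is purely an assumed illustration, not a measured figure:

    # Toy model: performance = min(fillrate budget, bandwidth budget).
    def relative_perf(fillrate=1.0, bandwidth=1.2):  # assume bandwidth has headroom
        return min(fillrate, bandwidth)

    base = relative_perf()
    print(relative_perf(bandwidth=1.2 * 1.13) / base)  # 1.0  -> +13% bandwidth buys nothing
    print(relative_perf(fillrate=1.04) / base)         # 1.04 -> +4% core clock shows up in full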
 

AzN

Banned
Nov 26, 2001
4,112
2
0
The differences between the GTX and GTS are exactly what you might expect based on slight differences in clock speed, similar to the differences between a GT and GTS. If bandwidth were the major bottleneck for G92, as was argued for both the GTS and GT, easing that bottleneck should result in a performance increase greater than just the typical % scaling seen from increasing clock frequencies.

The bandwidth increase is actually significant: it's more than 10% and finally pushes G92's bandwidth past the G80 GTS. If bandwidth were the major bottleneck you might expect very close to a 10% increase in performance, but instead you still see linear performance gains with core clock between the GTS and GTX.

What is 10% more memory bandwidth when it's still limited by four 64-bit controllers? And why would it give a 10% increase in performance when there are other variables in a game and in the efficiency of the chip?

pixel fillrate test using different memory clocks

Lastly, at lower resolutions where ROPs and bandwidth shouldn't be an issue (based on your points), the G92 should absolutely destroy the G80 given its much higher clock speeds, 1:1 TMU enhancements, and much higher shader clocks. Yet it doesn't. ROPs aren't only important for AA, they're also responsible for rasterization and post-processing.

That would also depend on the game. Like you said, performance isn't linear. One game might be more shader intensive; another would use fewer textures and lean on pixel performance, and so on. The 9800GTX beats the 8800GTX in more games than the 8800GTX beats it.


When referring to "biggest" there is no partial or shared title. It's either the biggest bottleneck or it's not. With each G92 variant and release we've gotten a little more of a glimpse of why G92 still can't beat the 16-month-old G80, and I think we're closer now to an answer.

Listen to your own logic. If pixel performance were the biggest bottleneck the 8800GTX would win every game against the 9800GTX, but it doesn't. It only wins in limited situations, usually with high amounts of AA, where it has the extra frame buffer, bigger bandwidth, and pixel fillrate.

Also, consider that if G92 did add more ROPs it would also have to add memory controllers, just like the G80 GTS and 8800GTX.

More pixel fillrate is always good, but if one resource is bottlenecking another, the surplus goes to waste. A balanced card is the way to go.

http://techreport.com/r.x/gefo...mb/3dm-single-1600.gif

Why does the 8800GTS destroy a 7900GTX that has more pixel fillrate? The answer is obvious.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
What is 10% in memory bandwidth that is limited by 4 64bit clusters? Why would it be 10% increase in performance when there are variables in a game and efficiency of the chip?
You're arguing the number of memory controllers (and bus width) has an impact on performance beyond total memory bandwidth, which is something I've considered but discounted due to no hard evidence supporting the theory. Historically, board makers have repeatedly chosen to go with fewer controllers and faster RAM to increase bandwidth, and it doesn't appear to carry any penalty, especially when history has shown us time and time again that unused bandwidth is a complete waste. No need to look any further than the 512-bit 2900XT this generation. Do the 2900XT's extra memory controllers benefit performance over the nearly identical 3870 on a 256-bit bus? No, they don't; the cards perform nearly identically at the same clock speeds.
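
The raw math behind that comparison, at reference clocks: the 2900XT's wide bus gives it roughly 47% more bandwidth, which is exactly the advantage that fails to show up in frame rates:

    # Peak bandwidth at reference clocks = bus bits / 8 * effective MT/s.
    def bw_gb_s(bus_bits, mt_s):
        return bus_bits / 8 * mt_s * 1e6 / 1e9

    print(f"HD 2900 XT: {bw_gb_s(512, 1656):.0f} GB/s")  # 512-bit GDDR3 -> ~106 GB/s
    print(f"HD 3870:    {bw_gb_s(256, 2250):.0f} GB/s")  # 256-bit GDDR4 -> ~72 GB/s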

The 10%+ is significant when you argue bandwidth is the greatest limiting factor, especially when you see very little gain on G92 from increasing the traditional core clocks. This is especially significant given how well G80 scales when core clock is increased. Look at the G80 GTX, for instance: less than a 10% core overclock results in an Ultra. A 10% core overclock on a G92 is the difference between stock and a SC edition, with barely any difference in performance. Others, including myself, have overclocked their G92s to extremely high clock speeds and saw very little difference in performance. Many thought memory bandwidth was the bottleneck keeping G92 from scaling similarly to G80, but again, the 9800GTX with an additional 10%+ bandwidth showed this was not the case.

That would also depend on the game. Like you said, performance isn't linear. One game might be more shader intensive; another would use fewer textures and lean on pixel performance, and so on. The 9800GTX beats the 8800GTX in more games than the 8800GTX beats it.
Depends on what review you read, I suppose; in most I saw the 8800GTX as fast as or faster than the 9800GTX, within ~1FPS. Still, all you have to look at is the differences between G92 and G80. G92 should clearly outclass G80 given its enhancements and higher clocks, yet it doesn't. What's keeping G92 from pulling away? Bandwidth? Doubtful, since it means very little to G80 performance and since the extra bandwidth did so little for the 9800GTX compared to the G92 GTS. VRAM? Maybe, but again, it only becomes an issue at higher resolutions/AA, and the G80 still keeps up at lower resolutions. ROPs? That seems to explain how a G80 clocked 100-150MHz slower can still keep pace with a G92 that improves everywhere else. Or, conversely, the improvements to G92 simply aren't enough to overcome a 33% ROP reduction.

Listen to your own logic. If pixel performance were the biggest bottleneck the 8800GTX would win every game against the 9800GTX, but it doesn't. It only wins in limited situations, usually with high amounts of AA, where it has the extra frame buffer, bigger bandwidth, and pixel fillrate.

Also, consider that if G92 did add more ROPs it would also have to add memory controllers, just like the G80 GTS and 8800GTX.
That's exactly it though...if you clocked them similarly there is no doubt the G80 GTX would win every benchmark. Meanwhile you have very little difference with the G92 even if you increase its bandwidth (GTX), SP (G92 or G94 GT), and frame buffer (1GB versions of G92) as we've seen with all of the different variants released since G92 launched.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
You're arguing the number of memory controllers (and bus width) has an impact on performance beyond total memory bandwidth, which is something I've considered but discounted due to no hard evidence supporting the theory. Historically, board makers have repeatedly chosen to go with fewer controllers and faster RAM to increase bandwidth, and it doesn't appear to carry any penalty, especially when history has shown us time and time again that unused bandwidth is a complete waste. No need to look any further than the 512-bit 2900XT this generation. Do the 2900XT's extra memory controllers benefit performance over the nearly identical 3870 on a 256-bit bus? No, they don't; the cards perform nearly identically at the same clock speeds.

The 10%+ is significant when you argue bandwidth is the greatest limiting factor, especially when you see very little gain on G92 from increasing the traditional core clocks. This is especially significant given how well G80 scales when core clock is increased. Look at the G80 GTX, for instance: less than a 10% core overclock results in an Ultra. A 10% core overclock on a G92 is the difference between stock and a SC edition, with barely any difference in performance. Others, including myself, have overclocked their G92s to extremely high clock speeds and saw very little difference in performance. Many thought memory bandwidth was the bottleneck keeping G92 from scaling similarly to G80, but again, the 9800GTX with an additional 10%+ bandwidth showed this was not the case.

Bigger bus is just better. It's wider and able to hit some peaks a smaller bus can't sustain.

Look at what 10% more bandwidth did compared to the 8800GTS 512: it gave better AA performance and improved over the GTS 512 by an average of 5%.

What does the 2900XT have to do with it? The 2900XT was too underpowered to use its 512-bit memory controller properly, but the bigger bandwidth did help its pixel performance compared to the 3870.

http://techreport.com/r.x/rade...0xt/gpu-3dm-single.gif

http://techreport.com/r.x/rade...x2/3dm-single-1920.gif


Depends on what review you read, I suppose; in most I saw the 8800GTX as fast as or faster than the 9800GTX, within ~1FPS. Still, all you have to look at is the differences between G92 and G80. G92 should clearly outclass G80 given its enhancements and higher clocks, yet it doesn't. What's keeping G92 from pulling away? Bandwidth? Doubtful, since it means very little to G80 performance and since the extra bandwidth did so little for the 9800GTX compared to the G92 GTS. VRAM? Maybe, but again, it only becomes an issue at higher resolutions/AA, and the G80 still keeps up at lower resolutions. ROPs? That seems to explain how a G80 clocked 100-150MHz slower can still keep pace with a G92 that improves everywhere else. Or, conversely, the improvements to G92 simply aren't enough to overcome a 33% ROP reduction.

Why would G92 outclass G80? They are differently configured cards. One is good at high resolutions with AA, and the other has more texture and SP power.

There are many reviews. I'm not limited to one one-sided review I read somewhere, ignoring everything else. The 9800GTX wins in more benchmarks, even with 4x AA.

AA has very little to do with SPs. It has everything to do with memory bandwidth, VRAM, and pixel and texel fillrate.


That's exactly it though...if you clocked them similarly there is no doubt the G80 GTX would win every benchmark. Meanwhile you have very little difference with the G92 even if you increase its bandwidth (GTX), SP (G92 or G94 GT), and frame buffer (1GB versions of G92) as we've seen with all of the different variants released since G92 launched.

If it were clocked the same as the 9800GTX, the 8800GTX would have a bigger advantage and would close the gap in texture fillrate. You are not even making sense now, because the 9800GTX's pixel fillrate is 10800 and the 8800GTX's is 13800. Why would it need to bump up the clock speed when it has more anyway? Even the 3DMark pixel fillrate test shows this.

http://techreport.com/r.x/gefo...9800gtx/3dm-single.gif
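
Those two fillrate figures are just ROP count times core clock:

    # Theoretical pixel fillrate (Mpixels/s) = ROPs * core clock (MHz).
    print(16 * 675)  # 9800GTX (G92): 10800
    print(24 * 575)  # 8800GTX (G80): 13800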

The 9800GTX wins where a game needs more power, like Crysis for instance, by 20%, and the 8800GTX closes the gap when AA is used. Only at some ridiculous resolution with AA, which favors pixel fillrate and bandwidth, does the 8800GTX win.

G92 is not as efficient as G80 when it comes to texture fillrate. Although G92 should more than double the texture fillrate, it doesn't in the real world. It's more like a 50% advantage.

http://techreport.com/r.x/geforce-9800gtx/3dm-multi.gif


 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
Bigger bus is just better. It's wider and able to hit some peaks a smaller bus can't sustain.

Look at what 10% more bandwidth did compared to the 8800GTS 512: it gave better AA performance and improved over the GTS 512 by an average of 5%.

What does the 2900XT have to do with it? The 2900XT was too underpowered to use its 512-bit memory controller properly, but the bigger bandwidth did help its pixel performance compared to the 3870.

http://techreport.com/r.x/rade...0xt/gpu-3dm-single.gif

http://techreport.com/r.x/rade...x2/3dm-single-1920.gif
Huh? You just said a bigger bus "is just better", but then you ask why the 2900XT has any relevance? I'm not arguing 512-bit offers no memory bandwidth advantage to the 2900XT; I've said many times extra bandwidth is wasted bandwidth. You're arguing that simply having a wider bus at the same bandwidth equates to greater performance. While there may be some theoretical merit to that, there is no evidence of it, and the 2900XT is a perfect example. The 2900XT and the 3870 perform nearly identically at the same clock speeds even though the 3870 has less bandwidth overall and fewer memory controllers. I'm not sure what relevance your links have, and you do realize they are from different reviews with different test platforms. Given how CPU-dependent 3DMark is, I'm not sure what you're trying to show between results taken at different points in time.


Why would G92 outclass G80? They are differently configured cards. One is good at high resolutions with AA, and the other has more texture and SP power.

There are many reviews. I'm not limited to one one-sided review I read somewhere, ignoring everything else. The 9800GTX wins in many benchmarks, even with 4x AA.

AA has very little to do with SPs. It has everything to do with memory bandwidth, VRAM, and pixel performance.
The point is it doesn't win them all, and it's so close that many feel it doesn't deserve the 9800 designation, as there still isn't any new part that beats the 16-month-old G80. The 9800GTX was supposed to alleviate the bottleneck that supposedly most limited G92, bandwidth, but again, as we've seen, that's not the case.


If it were clocked the same as the 9800GTX, the 8800GTX would have a bigger advantage and would close the gap in texture fillrate. You are not even making sense now, because the 9800GTX's pixel fillrate is 10800 and the 8800GTX's is 13800. Why would it need to bump up the clock speed when it has more anyway? Even the 3DMark pixel fillrate test shows this.

http://techreport.com/r.x/gefo...9800gtx/3dm-single.gif
It makes perfect sense, as it would emphasize the factor that yields the biggest performance gain on the G80: core clock speed. G92 already has the advantage in texture fillrate yet still fails to convincingly surpass G80, because it simply can't overcome the 33% decrease in ROPs. To put it simply, even with its TMU enhancements and much faster shaders, it would still need a ~33% increase in core clock to make up for the missing ROPs and convincingly beat the G80 (i.e. 765MHz). That to me clearly says ROPs are the biggest limiting factor with G92. Now, imagine a G80 clocked at 765MHz; I highly doubt you'd see the tiny gains you see with G92. Again, looking at the Ultra, a 50MHz increase on the G80 makes a huge difference.

G92 is not as efficient as G80 when it comes to texture fillrate. Although G92 should more than double the texture fillrate, it doesn't in the real world.

http://techreport.com/r.x/geforce-9800gtx/3dm-multi.gif
Actually, that's pretty much what I'd expect with texture fillrate. Remember, G92 isn't supposed to be 2x as efficient; it only had its TMUs upgraded to a 1:1 address-to-filter ratio instead of 1:2 on G80. A 50% increase is actually pretty close, based on the 32:64 vs 64:64 alignment.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Huh? You just said a bigger bus "is just better", but then you ask why the 2900XT has any relevance? I'm not arguing 512-bit offers no memory bandwidth advantage to the 2900XT; I've said many times extra bandwidth is wasted bandwidth. You're arguing that simply having a wider bus at the same bandwidth equates to greater performance. While there may be some theoretical merit to that, there is no evidence of it, and the 2900XT is a perfect example. The 2900XT and the 3870 perform nearly identically at the same clock speeds even though the 3870 has less bandwidth overall and fewer memory controllers. I'm not sure what relevance your links have, and you do realize they are from different reviews with different test platforms. Given how CPU-dependent 3DMark is, I'm not sure what you're trying to show between results taken at different points in time.

A bigger bus is better. The 2900XT has no relevance to G92. G92 has massive texture fillrate, which could help its AA performance if given more bandwidth. The 2900XT was too underpowered to tap all of its bandwidth, but the bandwidth did help its pixel performance by 20%.

Different reviews? Why does that matter when I'm trying to show you how much bandwidth improved pixel performance between the 2900XT and 3870? If you have a 2900XT and a 3870, be my guest and prove to me that it has no effect on pixel performance.

The point is it doesn't win them all, and it's so close that many feel it doesn't deserve the 9800 designation, as there still isn't any new part that beats the 16-month-old G80. The 9800GTX was supposed to alleviate the bottleneck that supposedly most limited G92, bandwidth, but again, as we've seen, that's not the case.

Of course it won't win them all, but it does win most of them where it's not limited by bandwidth and pixel performance.

What 9800GTX reviews did you read? I've checked a lot of sites and the 9800GTX wins consistently over the 8800GTX. Only in some extreme cases does the 8800GTX win, where the 9800GTX's 512MB of VRAM runs out and it starts to texture thrash.

It makes perfect sense, as it would emphasize the factor that yields the biggest performance gain on the G80: core clock speed. G92 already has the advantage in texture fillrate yet still fails to convincingly surpass G80, because it simply can't overcome the 33% decrease in ROPs. To put it simply, even with its TMU enhancements and much faster shaders, it would still need a ~33% increase in core clock to make up for the missing ROPs and convincingly beat the G80 (i.e. 765MHz). That to me clearly says ROPs are the biggest limiting factor with G92. Now, imagine a G80 clocked at 765MHz; I highly doubt you'd see the tiny gains you see with G92. Again, looking at the Ultra, a 50MHz increase on the G80 makes a huge difference.

How do you figure? It makes no sense that the 8800GTX has more pixel fillrate than the 9800GTX when your whole theory is that pixel fillrate is what bottlenecks G92. G92 is limited by bandwidth. If it had more, it could easily do better at higher resolutions and with AA compared to G80.

So you're saying if it has more texture fillrate it should win every time? Why do you even think that? That would depend entirely on the game. The 9800GTX beats the 8800GTX 90% of the time. I don't know what benchmarks you saw, but in the ones I saw the 9800GTX easily outpaced it in every game, from various sources. Now, if you are talking about the Ultra: its shader performance is quite close to a 9800GTX's, not to mention it has a bigger advantage in pixel fillrate and 60% more bandwidth. Even the Ultra doesn't beat the 9800GTX in raw frame rates most of the time. It just depends on the game.


 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
A bigger bus is better. The 2900XT has no relevance to G92. G92 has massive texture fillrate, which could help its AA performance if given more bandwidth. The 2900XT was too underpowered to tap all of its bandwidth, but the bandwidth did help its pixel performance by 20%.
Again, you have no proof of that, and in the same breath you dispute evidence to the contrary. The 2900XT has no direct relevance to G92, but when compared to the 3870 it directly refutes your assertion that bus width/memory controllers impact performance. From TR:

The primary reason for the reduction in transistors is that AMD essentially halved the R600's memory subsystem for the RV670. Externally, that means the RV670 has a 256-bit path to memory. Internally, the RV670 uses the same ring bus-style memory architecture as the R600, but the ring bus is down from 1024 to 512 bits. Thus, the RV670 has half as many wires running around the perimeter of the chip and fewer ring stops along the way. Also, since the I/O portions of a chip like this one don't shrink linearly with fabrication process shrinks, removing half of them contributes greatly to the RV670's more modest footprint.

Again, this is particularly relevant because the 3870 cuts both internal and external memory controllers while keeping everything else virtually the same (ROPs, SPs, texture units, etc.), and the resulting performance is nearly identical in every test in their 3870 review.
Different reviews? Why does that matter when I'm trying to show you how much bandwidth improved pixel performance between the 2900XT and 3870? If you have a 2900XT and a 3870, be my guest and prove to me that it has no effect on pixel performance.
It does matter when you throw out random synthetic benchmark graphs that clearly do not reflect any kind of real-world performance. It's hard to take any such graphs seriously, much less any comparison between results taken at different points in time. When you compare the parts in the same review on the same hardware, there is clearly no difference between them at the same clock speeds, as seen in their 3870 review or any other.

Of course it won't win them all, but it does win most of them where it's not limited by bandwidth and pixel performance.

What 9800GTX reviews did you read? I've checked a lot of sites and the 9800GTX wins consistently over the 8800GTX. Only in some extreme cases does the 8800GTX win, where the 9800GTX's 512MB of VRAM runs out and it starts to texture thrash.

Here's one from Guru3D. You can clearly see the 8800 and 9800 GTX are neck and neck, typically tied or within 1 FPS of one another at varying resolutions and AA settings. You can also see how unremarkable the differences in clock speeds are with the different G92 variants we've seen to date. Unfortunately many 9800GTX reviews compared it to the Ultra, which I think most would agree still convincingly tops the 9800GTX.

How do you figure? It makes no sense that the 8800GTX has more pixel fillrate than the 9800GTX when your whole theory is that pixel fillrate is what bottlenecks G92. G92 is limited by bandwidth. If it had more, it could easily do better with AA compared to G80.
You keep saying that, but the 9800GTX proves it isn't the case, or at least that it isn't the biggest bottleneck. Again, compare the G92 GTS to the G92 GTX and you'll see that even with ~230MHz faster memory, or 8GB/s greater bandwidth, the difference in performance is closer to its 4% difference in core clock. If memory bandwidth were the greatest bottleneck on G92, the increase in performance from the GTS to the GTX would be closer to the 13% difference in bandwidth.
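
Spelled out with the commonly quoted reference clocks (which actually put the memory delta nearer 260MHz, so treat the figures as approximate):

    # G92 GTS (8800GTS 512) vs G92 GTX (9800GTX), both on a 256-bit bus.
    gts_core, gtx_core = 650, 675    # MHz
    gts_mem, gtx_mem = 1940, 2200    # MT/s effective

    print(f"core clock: +{(gtx_core / gts_core - 1) * 100:.1f}%")            # +3.8%
    print(f"bandwidth:  +{(gtx_mem / gts_mem - 1) * 100:.1f}%")              # +13.4%
    print(f"extra GB/s: {(gtx_mem - gts_mem) * 1e6 * (256 / 8) / 1e9:.1f}")  # ~8.3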

So you're saying if it has more texture fillrate it should win every time? Why do you even think that? That would depend entirely on the game. The 9800GTX beats the 8800GTX 90% of the time. I don't know what benchmarks you saw, but in the ones I saw the 9800GTX easily outpaced it in every game, from various sources. Now, if you are talking about the Ultra: its shader performance is quite close to a 9800GTX's, not to mention it has a bigger advantage in pixel fillrate and 60% more bandwidth. Even the Ultra doesn't beat the 9800GTX in raw frame rates most of the time. It just depends on the game.

No I never said anything about texture fillrate and performance, as it was very clear with G92 that texture fillrate improvements didn't matter much at all. You were the one touting the importance of texture fillrate and bandwidth last time, but at least you've de-emphasized the texture fillrate part this time around.

I'm not sure what reviews you're looking at but even your favorite site TR shows the Ultra beating the 9800GTX handily once you get away from synthetic benchmarks. Also notice the difference in performance when the Ultra wins is much greater than the few instances where the 9800GTX wins. Pretty much every review I've seen shows the Ultra still handily outclassing the 9800GTX while performing very similarly to the 8800GTX.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Again, you have no proof of that, and in the same breath you dispute evidence to the contrary. The 2900XT has no direct relevance to G92, but when compared to the 3870 it directly refutes your assertion that bus width/memory controllers impact performance. From TR:

Proof?

http://techreport.com/articles.x/14168/4

The single-textured fill rate test is typically limited by memory bandwidth, which helps explain why the Palit 9600 GT beats out our stock GeForce 8800 GT. The multitextured test is more generally limited by the GPU's texturing capabilities, and in this case, the 8800 GT pulls well away from its upstart sibling.

The 3870 is clocked higher and has stronger SPs. Even in your benchmarks the 2900XT, with slightly lower pixel, texture, and SP rates, beats the 3870.

Again, this is particularly relevant because the 3870 cuts both internal and external memory controllers while keeping everything else virtually the same (ROPs, SPs, texture units, etc.), and the resulting performance is nearly identical in every test in their 3870 review.

Virtually? How do you figure, when the 3870 is clocked higher and its SPs are stronger than the 2900XT's? That memory bandwidth is surely giving the 2900XT some leeway, considering the 3870 is better than the 2900XT in pixel, texture, and SP rates.

It does matter when you throw out random synthetic benchmark graphs that clearly do not reflect any kind of real-world performance. It's hard to take any such graphs seriously, much less any comparison between results taken at different points in time. When you compare the parts in the same review on the same hardware, there is clearly no difference between them at the same clock speeds, as seen in their 3870 review or any other.

Sure it does. The 2900XT is slower than the 3870 when you consider SP FLOPS and texture fillrate, but that 512-bit memory bus is creeping in, isn't it, considering the 2900XT beat a 3870 that is clocked higher. Again, the 2900XT never had the fillrate to use all that bandwidth, but G92 does.

Here's one from Guru3D. You can clearly see the 8800 and 9800 GTX are neck and neck, typically tied or within 1 FPS of one another at varying resolutions and AA settings. You can also see how unremarkable the differences in clock speeds are with the different G92 variants we've seen to date. Unfortunately many 9800GTX reviews compared it to the Ultra, which I think most would agree still convincingly tops the 9800GTX.

Look at your own benchmarks; and is that the only source you can provide? 9800GTX > 8800GTX in Crysis and COD with AA, despite lower memory bandwidth and pixel fillrate.

Would you like to see it without AA, and more sources where it's less bottlenecked by memory bandwidth?
http://www.neoseeker.com/Artic...ews/xfx9800gtx/12.html

http://www.hothardware.com/Art...BFG_EVGA_Zogis/?page=7

You keep saying that, but the 9800GTX proves it isn't the case, or at least that it isn't the biggest bottleneck. Again, compare the G92 GTS to the G92 GTX and you'll see that even with ~230MHz faster memory, or 8GB/s greater bandwidth, the difference in performance is closer to its 4% difference in core clock. If memory bandwidth were the greatest bottleneck on G92, the increase in performance from the GTS to the GTX would be closer to the 13% difference in bandwidth.

Memory bandwidth is the bottleneck when you are testing a bunch of games with AA against an Ultra with 60% more bandwidth.

Either stick to your 'pixel fillrate = everything' position or don't change the subject. It's not only pixel fillrate being raised; it's also texture fillrate and memory bandwidth. Gaming performance doesn't increase linearly the way you think. Just because the bandwidth is clocked 8% higher doesn't equate to an 8% performance increase.

No I never said anything about texture fillrate and performance, as it was very clear with G92 that texture fillrate improvements didn't matter much at all. You were the one touting the importance of texture fillrate and bandwidth last time, but at least you've de-emphasized the texture fillrate part this time around.

Oh, but you did think texture fillrate should outperform everything else. That was a different situation, when I was arguing with BFG, and it was already settled in a later thread with keysplyr. Last time? I'm more educated since then.


I'm not sure what reviews you're looking at but even your favorite site TR shows the Ultra beating the 9800GTX handily once you get away from synthetic benchmarks. Also notice the difference in performance when the Ultra wins is much greater than the few instances where the 9800GTX wins. Pretty much every review I've seen shows the Ultra still handily outclassing the 9800GTX while performing very similarly to the 8800GTX.

Like what? With AA at 2560x1600, where the 8800 Ultra has 60% more bandwidth? When did it change from 8800GTX vs 9800GTX to 9800GTX vs Ultra? AA and uber-high resolutions are more dependent on memory bandwidth, which is where G80 tends to dominate.

I love TR. They have really smart guys there who stand out from many hardware sites and affiliates of Beyond3D.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
Proof?

http://techreport.com/articles.x/14168/4

The single-textured fill rate test is typically limited by memory bandwidth, which helps explain why the Palit 9600 GT beats out our stock GeForce 8800 GT. The multitextured test is more generally limited by the GPU's texturing capabilities, and in this case, the 8800 GT pulls well away from its upstart sibling.

The 3870 is clocked higher and has stronger SPs. Even in your benchmarks the 2900XT, with slightly lower pixel, texture, and SP rates, beats the 3870.
And how does that 9600GT result translate into real-world performance? Again, your reliance on synthetic benchmarks, and 3DMark no less, is laughable when none of it translates into ANY real-world performance gains. In reality the 9600GT performs similarly to the 8800GT despite having half the SP and texturing power, because it has the same number of ROPs and the same raw fillrate. It's really that simple, and further proof that ROPs are the most important factor on modern GPUs. Even Carmack confirmed this recently in an interview, saying any entry from Intel will need to be a strong rasterizer or it will flop.
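
The fillrate parity is easy to see from the reference specs:

    # Both the 9600GT (G94) and 8800GT (G92) have 16 ROPs, so theoretical
    # pixel fillrate differs only by core clock.
    print(16 * 650)  # 9600GT @ 650MHz: 10400 Mpix/s
    print(16 * 600)  # 8800GT @ 600MHz:  9600 Mpix/s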

Virtually? How do you figure, when the 3870 is clocked higher and its SPs are stronger than the 2900XT's? That memory bandwidth is surely giving the 2900XT some leeway, considering the 3870 is better than the 2900XT in pixel, texture, and SP rates.
Rofl, a 30MHz difference, or 4%. I think you'll find the results well within those bounds. For all intents and purposes the cards perform identically, showing the number of memory controllers and bus width bears no significance for performance.

Sure it does. The 2900XT is slower than the 3870 when you consider SP FLOPS and texture fillrate, but that 512-bit memory bus is creeping in, isn't it, considering the 2900XT beat a 3870 that is clocked higher. Again, the 2900XT never had the fillrate to use all that bandwidth, but G92 does.
You keep saying it "beats" it, but every single benchmark I see has the 3870 and 2900XT within 1FPS of one another. I'd say that's virtually identical performance. Even if you downclocked the 3870 or overclocked the 2900XT by 30MHz I think you'd find the same results. It's obvious that R600/RV670 don't have the fillrate or processing power to warrant a 512-bit bus and GDDR4. Similarly, it's unclear whether G92 does either, given the results we've seen with the 9800GTX and the additional bandwidth it provides.

Look at your own benchmarks; and is that the only source you can provide? 9800GTX > 8800GTX in Crysis and COD with AA, despite lower memory bandwidth and pixel fillrate.

Would you like to see it without AA, and more sources where it's less bottlenecked by memory bandwidth?
http://www.neoseeker.com/Artic...ews/xfx9800gtx/12.html

http://www.hothardware.com/Art...BFG_EVGA_Zogis/?page=7

LMAO, you really need to learn how to read benchmarks. The 9800GTX and 8800GTX perform nearly identically in that Guru3D review...tied or within 1FPS in every single test except a low-resolution Crysis bench that has the 8800 winning by 5FPS. Any reasonable person would look at those results and say the cards perform identically. And what do you have in response? Reviews from a search engine? LOL. Anyways, if you bothered to look at the other benchmarks, you'd also see the 8800GTX wins as many as it loses, typically all within the same 3-4FPS difference. Again, any reasonable person would acknowledge the results are more or less a wash and that the cards perform similarly. But if you want a few more, try FiringSquad or Xbit for more of the same, showing the 8800GTX and 9800GTX perform similarly and the Ultra still beats the 9800GTX convincingly.

Memory bandwidth is the bottleneck when you are testing a bunch of games with AA against an Ultra with 60% more bandwidth.

Either stick to your 'pixel fillrate = everything' position or don't change the subject. It's not only pixel fillrate being raised; it's also texture fillrate and memory bandwidth. Gaming performance doesn't increase linearly the way you think. Just because the bandwidth is clocked 8% higher doesn't equate to an 8% performance increase.
Except it is linear in the case of G80 core increases, just not with G92. Why? Because each increase in core clock has a much greater impact on fillrate due to the greater number of ROPs on G80. Bandwidth doesn't matter at lower resolutions or without AA, yet the G80 GTX and Ultra still manage to keep up.

Oh, but you did think texture fillrate should outperform everything else. That was a different situation, when I was arguing with BFG, and it was already settled in a later thread with keysplyr. Last time? I'm more educated since then.
Where did I say anything about texture fillrate outperforming everything else? Didn't think so. Last go around you were trumpeting the same crap with your 3DMark graphs and theoretical bandwidth numbers with no real-world benches to back it up. Looks like nothing has changed.

Like what? With AA at 2560x1600, where the 8800 Ultra has 60% more bandwidth? When did it change from 8800GTX vs 9800GTX to 9800GTX vs Ultra? AA and uber-high resolutions are more dependent on memory bandwidth, which is where G80 tends to dominate.

I love TR. They have really smart guys there who stand out from many hardware sites and affiliates of Beyond3D.
No, you don't need to enable AA or go to high resolutions for the Ultra to beat the 9800GTX. The G80 has always had more bandwidth than it has needed. That's something easily tested and confirmed by simply lowering/raising the memory frequency on a G80, or by comparing reviews of OC 8800GTX editions vs. the 8800 Ultra (same clock, different memory).

 

AzN

Banned
Nov 26, 2001
4,112
2
0
And how does that 9600GT result translate into real-world performance? Again, your reliance on synthetic benchmarks, and 3DMark no less, is laughable when none of it translates into ANY real-world performance gains. In reality the 9600GT performs similarly to the 8800GT despite having half the SP and texturing power, because it has the same number of ROPs and the same raw fillrate. It's really that simple, and further proof that ROPs are the most important factor on modern GPUs. Even Carmack confirmed this recently in an interview, saying any entry from Intel will need to be a strong rasterizer or it will flop.

Now what is wrong with 3DMark fillrate tests? If it's good enough for TechReport, it's good enough for me. 3DMark is a real engine running a 3D scenario, if you didn't know.

You bring no evidence other than your words. What does Carmack's comment have to do with G92 bottlenecks? The rasterizer is important, but without bandwidth a lot of that power gets wasted.

Rofl, a 30MHz difference, or 4%. I think you'll find the results well within those bounds. For all intents and purposes the cards perform identically, showing the number of memory controllers and bus width bears no significance for performance.

The 2900XT does edge ahead by a few frames even though it's clocked lower. The bandwidth is useless without fillrate, and a lot of it sits idle, but its pixel fillrate did get fed enough to edge out the 3870 by 1 or 2FPS.

You keep saying it "beats" it, but every single benchmark I see has the 3870 and 2900XT within 1FPS of one another. I'd say that's virtually identical performance. Even if you downclocked the 3870 or overclocked the 2900XT by 30MHz I think you'd find the same results. It's obvious that R600/RV670 don't have the fillrate or processing power to warrant a 512-bit bus and GDDR4. Similarly, it's unclear whether G92 does either, given the results we've seen with the 9800GTX and the additional bandwidth it provides.

Isn't that what I've been saying? That the 2900XT doesn't have the fillrate to warrant a 512-bit bus. :laugh: G92 is different, however; it needs all the bandwidth it can get. Again, the 3870 is a slightly stronger chip than the 2900XT: it's clocked higher and has SP tweaks. The 2900XT does saturate its ROPs, however, as the earlier links show, which lets it edge ahead in some games.

LMAO, you really need to learn how to read benchmarks. The 9800GTX and 8800GTX perform nearly identically in that Guru3D review...tied or within 1FPS in every single test except a low-resolution Crysis bench that has the 8800 winning by 5FPS. Any reasonable person would look at those results and say the cards perform identically. And what do you have in response? Reviews from a search engine? LOL. Anyways, if you bothered to look at the other benchmarks, you'd also see the 8800GTX wins as many as it loses, typically all within the same 3-4FPS difference. Again, any reasonable person would acknowledge the results are more or less a wash and that the cards perform similarly. But if you want a few more, try FiringSquad or Xbit for more of the same, showing the 8800GTX and 9800GTX perform similarly and the Ultra still beats the 9800GTX convincingly.

Guru3D and their medium settings for Crysis? Funny how you only hand-pick benches where the 8800GTX ties the 9800GTX. Crysis at high or very high detail tells a different story, with the 9800GTX beating the 8800GTX by 15-20%. FiringSquad? Where they test with AA in all their benches? Even then it's within 2FPS of the Ultra. Xbit never reviewed the 9800GTX. What don't you understand? G80 is superior with AA because it has more memory bandwidth and a bigger frame buffer.

Try high settings, where most people would actually play with a card like the 9800GTX; maybe 8800GTX owners should stick with medium settings instead. :laugh:

http://www.neoseeker.com/Artic...ews/xfx9800gtx/12.html

http://www.hothardware.com/Art...BFG_EVGA_Zogis/?page=7


Except it is linear in the case of G80 core increases, just not with G92. Why? Because each increase in core clock has a much greater impact on fillrate due to the greater number of ROPs on G80. Bandwidth doesn't matter at lower resolutions or without AA, yet the G80 GTX and Ultra still manage to keep up.

G80 linear? :laugh: Bandwidth doesn't matter even at lower resolutions? What is a low resolution these days, 1280x1024? I can even test this theory for you with a couple of benches, downclocking my memory and running them.


Where did I say anything about texture fillrate outperforming everything else? Didn't think so. Last go around you were trumpeting the same crap with your 3DMark graphs and theoretical bandwidth numbers with no real-world benches to back it up. Looks like nothing has changed.

That's what you implied. You said the 9800GTX has more texture fillrate and should beat the 8800GTX in all the tests, but it doesn't. I said they are differently configured cards: G80 is better at high resolutions with AA because of its bandwidth and pixel fillrate, and G92 has the texture and SP advantage, usually at modest settings.

No, you don't need to enable AA or go to high resolutions for the Ultra to beat the 9800GTX. The G80 has always had more bandwidth than it has needed. That's something easily tested and confirmed by simply lowering/raising the memory frequency on a G80, or by comparing reviews of OC 8800GTX editions vs. the 8800 Ultra (same clock, different memory).

No AA? Play medium settings like Guru3D? :laugh: BFG could easily downclock his Ultra to match the 9800GTX's memory speed and watch his AA performance drop below the 9800GTX's. The 9800GTX is the more powerful chip; it's just limited by bandwidth. The 9800GTX gets really close with 60% less memory bandwidth, even when AA is applied.

The whole thread was about G92 bottlenecks, was it not? G92 is memory-bandwidth bottlenecked. :thumbsup:
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
Now what is wrong with 3DMark fillrate tests? If it's good enough for TechReport, it's good enough for me. 3DMark is a real engine running a 3D scenario, if you didn't know.

You bring no evidence other than your words. What does Carmack's comment have to do with G92 bottlenecks? The rasterizer is important, but without bandwidth a lot of that power gets wasted.
Way to dodge the question. This is why no one pays any serious attention to 3DMark: it tells you what you already know and doesn't reflect real-world performance in actual games. Nice try dodging it, though. No need to look at the high end; just look at the mid-range with the 9600GT. Once again, how does a card with half the SPs and texturing ability keep up with the 8800GT? Oh right, because it has the same number of ROPs and the same fillrate.

Isn't that what I've been saying? That the 2900XT doesn't have the fillrate to warrant a 512-bit bus. :laugh: G92 is different, however; it needs all the bandwidth it can get. Again, the 3870 is a slightly stronger chip than the 2900XT: it's clocked higher and has SP tweaks. The 2900XT does saturate its ROPs, however, as the earlier links show, which lets it edge ahead in some games.
No, you said a "bigger bus is just better" without any proof whatsoever, and in this case despite evidence to the contrary. Once again I think you need to take another look at those benchmarks, because the 3870 is beating the 2900XT by 1-2FPS more often than not, probably due to that "massive" 30MHz increase that in all reality is negligible. The cards are nearly identical in performance and clearly show the number of memory controllers/bus width has no impact on performance in cases where bandwidth is not a bottleneck.

Guru3D and their medium settings for Crysis? Funny how you only hand-pick benches where the 8800GTX ties the 9800GTX. Crysis at high or very high detail tells a different story, with the 9800GTX beating the 8800GTX by 15-20%. FiringSquad? Where they test with AA in all their benches? Even then it's within 2FPS of the Ultra. Xbit never reviewed the 9800GTX. What don't you understand? G80 is superior with AA because it has more memory bandwidth and a bigger frame buffer.
I didn't handpick any benches; I showed you a review that clearly shows the 8800 and 9800 neck and neck. You came back with Crysis and COD4 as examples to the contrary...but only proved you can't read simple bar graphs.

So what do higher settings in Crysis do? They emphasize G92's SP advantage over G80, which was never in question; Crysis is one of the *few*, if not the only, games that showed a significant advantage from more SPs in Keys' and BFG's own in-house testing. I believe COD4 also showed a massive drop-off below 64 SPs, but that's about it. That still doesn't change the fact that you can't read benchmarks.

Try high settings, where most people would actually play with a card like the 9800GTX; maybe 8800GTX owners should stick with medium settings instead. :laugh:

http://www.neoseeker.com/Artic...ews/xfx9800gtx/12.html

http://www.hothardware.com/Art...BFG_EVGA_Zogis/?page=7
Xbit does more than review a 9800GTX; they review a card that's faster than the 9800GTX and has more VRAM as well: the Gainward Bliss GTS 1GB @ 730/2100, in the review I linked in the OP.

G80 linear? :laugh: Bandwidth doesn't matter even at lower resolutions? What is a low resolution these days, 1280x1024? I can even test this theory for you with a couple of benches, downclocking my memory and running them.
Rofl, you obviously haven't been paying attention. When you overclock a G80 GTX 8% you get an Ultra, and you see at least that much difference in performance. When you overclock a G80 GTS 15%, from 500 to 575, the difference means losing by that much or more to the G92 GT vs. performing nearly identically to it. Again, if you look at the TechReport 3870 review and pay attention to the 640MB GTS numbers, you'll see it's very competitive with the G92 GT, where it loses badly in every other review. Why? Because TR did what most other reviewers ignored: they used a 575MHz clock speed based on what was available on the market, not the old reference clock. FiringSquad also did this in their comparisons, which I've linked numerous times. You don't see nearly the level of scaling with G92 as we've seen across all the different GT, GTS, and now GTX parts clocked from 600-675MHz. So yes, G80 sees a much higher return on core clock vs. any other adjustment to shader or memory clock, which is what anyone who has owned a G80 will tell you.


That's what you implied. You said the 9800GTX has more texture fillrate and should beat the 8800GTX in all the tests, but it doesn't. I said they are differently configured cards: G80 is better at high resolutions with AA because of its bandwidth and pixel fillrate, and G92 has the texture and SP advantage, usually at modest settings.
No I didn't. I said the 9800GTX can't overcome the 33% reduction in ROPs despite all of its enhancements in the way of texture units and SPs and its increase in bandwidth over the GTS. This emphasizes my point that ROPs are still the biggest bottleneck on G92, not bandwidth, texturing ability, or anything else. Clock for clock, G80 is better at every resolution due to its fillrate advantage.

No AA? Play medium settings like Guru3D? :laugh: BFG could easily downclock his Ultra to match the 9800GTX's memory speed and watch his AA performance drop below the 9800GTX's. The 9800GTX is the more powerful chip; it's just limited by bandwidth. The 9800GTX gets really close with 60% less memory bandwidth, even when AA is applied.

The whole thread was about G92 bottlenecks, was it not? G92 is memory-bandwidth bottlenecked. :thumbsup:
Huh? No, I'm pretty sure BFG came to the same conclusion I've been stating for months in his recent testing: that core clock differences on G80 lead to the biggest performance gains. Likewise, I can increase my core clock to 621MHz without touching memory and see significant gains. Why? Because bandwidth isn't the greatest limiting factor with G80, due to its wider bus and greater bandwidth compared to G92. Sure, G92 might be bottlenecked by its 256-bit bus in some situations and at some settings, but realistically it's not saturating its bandwidth all of the time, i.e. using bandwidth at 100% efficiency, so increases to core clock will still have the biggest impact on its performance, just not as much as with G80. Which is exactly what we've seen with G92.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Way to dodge the question. This is why no one pays any serious attention to 3DMark: it tells you what you already know and doesn't reflect real-world performance in actual games. Nice try dodging it, though. No need to look at the high end; just look at the mid-range with the 9600GT. Once again, how does a card with half the SPs and texturing ability keep up with the 8800GT? Oh right, because it has the same number of ROPs and the same fillrate.

Really? No one pays attention to 3DMark? Like who? Almost every single reviewer uses 3DMark to measure performance. 3DMark is a tool for PC gamers that measures each sub-section of the card. :laugh: Actually, it does reflect gaming situations, like when AA is on and off, etc. It behaves the same as it would in a game.

To answer your question though: the 9600GT keeps up only when AA is applied, and even then it still trails the 8800GT. When AA is disabled the 8800GT's lead is bigger than with AA. Notice the bottleneck: memory bandwidth! All that texture fillrate is useless if it doesn't have the bandwidth to use it properly. As for shaders, I guess you missed the 9600GT thread. Do a search.



No, you said "bigger bus is just better" without any proof whatsoever and in this case despite evidence to the contrary. Once again I think you need to take a look at those benchmarks again because the 3870 is beating the 2900XT by 1-2FPS more often than not, probably due to that "massive" 30MHz increase that in all reality is negligible. The cards are nearly identical in performance and clearly show number of memory controllers/bus width has no impact on performance in cases where bandwidth is not a bottleneck.

I said that about G92, because it has massive texture fillrate that could use a wider bus or more bandwidth, not about the 2900XT. In some situations, yes, the 3870 wins because of stronger shaders, slightly more texture fillrate, etc.

I didn't handpick any benches; I showed you a review that clearly shows the 8800 and 9800 neck and neck. You came back with Crysis and COD4 as examples to the contrary...but only proved you can't read simple bar graphs.

So what do higher settings in Crysis do? They emphasize G92's SP advantage over G80, which was never in question; Crysis is one of the *few*, if not the only, games that showed a significant advantage from more SPs in Keys' and BFG's own in-house testing. I believe COD4 also showed a massive drop-off below 64 SPs, but that's about it. That still doesn't change the fact that you can't read benchmarks.

Guru3d tests where 8800gtx would look good like testing medium settings in crysis, turn off soft shadows in FEAR, disable soft particles in Quake Wars. Of course you handpicked benches for medium settings. You just don't understand where the performance negligence is coming from so you blame me like I'm dumb who can't read bar graphs.

High settings in Crysis lean on everything: bigger textures, better shadows, heavier shaders, etc. They stress the card.

Xbit does more than review a 9800GTX; they review a card that's faster than the 9800GTX, with more VRAM as well: the Gainward Bliss GTS 1GB @ 730/2100 from the review I linked in the OP.

So it isn't a 9800GTX; it's an overclocked 8800GTS with 1GB. There you have an 8800GTS 1GB beating the 8800GTX in most of the benches, even with AA, on much lower memory bandwidth. What a surprise.


Rofl, you obviously haven't been paying attention. When you overclock a G80 GTX by 8% you get an Ultra, and you see at least that much difference in performance. When you overclock a G80 GTS by 15%, from 500 to 575, the difference is losing to the G92 GT by that much or more versus performing nearly identically to it. Again, if you look at the Tech Report 3870 review and pay attention to the 640MB GTS numbers, you'll see it's very competitive with the G92 GT, while it loses badly in every other review. Why? Because TR did what most other reviewers ignored: they used the 575MHz clockspeed actually available on the market, not the old reference clockspeed. FiringSquad also did this in their comparisons, which I've linked numerous times. You don't see nearly that level of scaling with G92, given all the different GT, GTS and now GTX parts clocked from 600-675MHz. So yes, G80 sees a much higher return on core clock than on any adjustment to shader or memory clock, which is what anyone who has owned a G80 will tell you.

No, you get an overclocked 8800GTX. And for your info, the 8800 Ultra has 17% more bandwidth than the 8800GTX, not 8%. I read that Tech Report review a while ago, the one with an overclocked G80 GTS in the mix. They were testing in extremely bandwidth-limited situations, at uber-high resolutions with AA. That extra VRAM and memory bandwidth sure kicks in under those extreme conditions, doesn't it? G80 has more bandwidth; that's why it does well there. G80 is a weaker chip than G92, which is why G92 can easily beat the G80 GTS on much lower bandwidth.
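(For anyone following along, bandwidth is just effective memory clock times bus width; a minimal sketch with the commonly listed reference clocks, which are assumptions on my part.)

# Memory bandwidth = effective memory clock (MHz) x bus width (bits) / 8
# -> MB/s; divide by 1000 for GB/s. Reference clocks assumed.

def bandwidth_gbps(effective_mem_mhz, bus_bits):
    return effective_mem_mhz * bus_bits / 8 / 1000.0

gtx = bandwidth_gbps(1800, 384)     # 8800GTX: 900MHz GDDR3, 384-bit
ultra = bandwidth_gbps(2160, 384)   # 8800 Ultra: 1080MHz GDDR3, 384-bit
g92gts = bandwidth_gbps(1940, 256)  # G92 8800GTS: ~970MHz GDDR3, 256-bit

print(f"GTX {gtx:.1f}, Ultra {ultra:.1f}, G92 GTS {g92gts:.1f} GB/s")
print(f"Ultra vs GTX: +{ultra / gtx - 1:.0%}; GTX vs Ultra: {gtx / ultra - 1:.0%}")

That prints +20% one way and -17% the other, so the "17% vs 8%" dispute partly comes down to which card you treat as the baseline, and to whether you mean core clock or bandwidth.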


No I didn't. I said the 9800GTX can't overcome the 33% reduction in ROPs despite all of its enhancements: more texture units and SPs, and more bandwidth than the GTS. That emphasizes my point that ROPs are still the biggest bottleneck on G92, not bandwidth, texturing ability or anything else. Clock for clock, G80 is better at every resolution due to its fillrate advantage.

Why is it that the 9800GTX can beat the 8800GTX at modest settings even with 33% fewer ROPs? :brokenheart: How can ROPs be the bottleneck when pixel fillrate is itself limited by memory bandwidth? For your information, G92 has lower bandwidth than G80. I don't know which GTS you are talking about, but if you mean the G92 GTS, the 9800GTX beats it. If you mean the G80 GTS, the 9800GTX beats that too.
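(The "pixel fillrate is limited by bandwidth" claim is easy to sanity-check with rough arithmetic. A sketch assuming ~16 bytes of framebuffer traffic per blended pixel, 32-bit color plus 32-bit Z with read-modify-write; real traffic is lower thanks to Z/color compression, so treat this as illustrating the pressure, not measuring it.)

# Can the memory bus actually feed the ROPs at their peak rate?
BYTES_PER_PIXEL = 16  # assumed: 32-bit color + 32-bit Z, each read + written

def bus_fed_gpix(bandwidth_gbps):
    """GPixels/s the bus could sustain if ALL bandwidth fed the ROPs."""
    return bandwidth_gbps / BYTES_PER_PIXEL

rop_peak = 675 * 16 / 1000.0   # 9800GTX (assumed): 675MHz x 16 ROPs
fed = bus_fed_gpix(70.4)       # 9800GTX bandwidth: 2200MHz eff. x 256-bit

print(f"ROP peak {rop_peak:.1f} GPix/s vs bus-fed {fed:.1f} GPix/s")

Under those crude assumptions the bus feeds well under half the ROPs' peak, which is the sense in which fillrate can be bandwidth-bound; compression moves the numbers, not the shape of the argument.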


Huh? No, I'm pretty sure BFG's recent testing came to the same conclusion I've been arguing for months: core clock increases on G80 lead to the biggest performance gains. Likewise, I can raise my core clock to 621MHz without touching memory and see significant gains. Why? Because bandwidth isn't the greatest limiting factor on G80, thanks to its wider bus and greater bandwidth compared to G92. Sure, G92 might be bottlenecked by its 256-bit bus in some situations and at some settings, but realistically it isn't saturating its bandwidth all of the time, i.e. running at 100% bandwidth efficiency, so core clock increases will still have the biggest impact on its performance, just not as big an impact as on G80. Which is exactly what we've seen with G92.

Saying crap like "bandwidth makes no performance impact at lower resolutions" is full of $hit. BFG already tested this on his bandwidth-happy Ultra: decreasing memory bandwidth by 20% gave him lower performance even without AA, and much more with AA, at 1600x1200. Now what would happen if he downclocked to the same GB/s as the 9800GTX's memory? I'll tell you this much: it wouldn't be pretty against the 9800GTX. G92 is starved for bandwidth, with massive texture fillrate that sits there waiting for the memory to catch up.

http://episteme.arstechnica.co...7909965/m/453004231931

Before you say something ignorant like "increasing the core clock shows big improvements," let me remind you that the core is also tied to the texture clock, and that's a G80, not a G92, so it has lower texture fillrate to begin with. :brokenheart: Too late!
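(If you want the arithmetic behind "downclock the Ultra to the same GB/s": a minimal sketch, again with assumed reference clocks.)

# What effective memory clock gives a 384-bit Ultra the same bandwidth
# as a 9800GTX (256-bit at 2200MHz effective)? Reference clocks assumed.

target = 2200 * 256 / 8 / 1000.0   # 9800GTX: 70.4 GB/s
stock = 2160 * 384 / 8 / 1000.0    # stock Ultra: 103.7 GB/s

needed_mhz = target * 1000 * 8 / 384
print(f"Ultra at ~{needed_mhz:.0f}MHz effective would match; "
      f"stock Ultra carries {stock / target - 1:.0%} more bandwidth")

So matching the 9800GTX means cutting the Ultra's memory to roughly 1467MHz effective, a far bigger cut than the 20% BFG tested.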

Since there is no way to test ROP performance in isolation, I think we can rest assured it's just Chizow's fantasy for now.

G92's biggest bottleneck compared to an Ultra is bandwidth. Given the bandwidth, it could overcome the Ultra with AA. It can beat an Ultra without AA in most situations anyway, as long as it's not some obscure setting where the extra VRAM and pixel fillrate make the difference. A 2fps deficit with AA on 60% less bandwidth is phenomenal, when 20% less bandwidth cost BFG's Ultra 8-10% in frame rates.

More ROPs help at uber-high resolutions and with AA, and they do improve performance, no doubt. More of anything gives you more performance, but once something else is the bottleneck you get minimal returns, much like pairing an 8800GT with a Pentium 3: the CPU limits your frame rates.

Look at the 3870: it has 20% more ROPs and slightly more bandwidth than the 8800GT, plus more GFLOPS to boot, yet it loses to the 8800GT. Since you can only fit so much into a single die, a balanced card is the way to go. G80 does just that, except it costs much more money than G92. Stick GDDR5 on G92 and it could easily outpace an Ultra at its own anti-aliasing game.




 

AzN

Banned
Nov 26, 2001
4,112
2
0
I decided to do a test to show that G92 is bottlenecked by memory bandwidth more than by pixel or texel fillrate. I lowered my core clock by ~23%, which reduces both pixel and texel fillrate, and separately lowered my memory clock by the same ~23%, so the two reductions can be compared directly.

I ran some Crysis benches with my 8800GS. The 8800GS is basically a full G92 with a quarter of its clusters disabled. Settings: 1440x900, no AA, High.

STOCK OC CLOCKS 729/1728/1040
37.55 fps

CORE REDUCTION 561/1728/1040
34.87 fps (-7.2%)

BANDWIDTH REDUCTION 729/1728/800
33.70 fps (-10.1%)


Conclusion: memory bandwidth is G92's biggest bottleneck.
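(A quick way to normalize those results: divide each fps drop by the clock drop that caused it. A sketch in Python using the figures above; "sensitivity" is just my shorthand, not a standard metric.)

# Normalize the Crysis runs: fps lost per unit of clock removed.
# A sensitivity near 1.0 would mean fps scales 1:1 with that clock.

stock_fps = 37.55
runs = {
    "core -23% (729->561MHz)":  (34.87, 1 - 561 / 729),
    "mem  -23% (1040->800MHz)": (33.70, 1 - 800 / 1040),
}

for label, (fps, clock_drop) in runs.items():
    fps_drop = 1 - fps / stock_fps
    print(f"{label}: fps -{fps_drop:.1%}, sensitivity {fps_drop / clock_drop:.2f}")

On this one run the card is more sensitive to memory clock (~0.44) than to core clock (~0.31), which is the pattern being claimed, though one game at one setting is a thin sample.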
 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
it's a botched, rehashed, shrunk G80 that's bottlenecked by every spec they've crippled.

Think 256 shader units, 2GB of memory on a 512-bit bus, 48 ROPs, a 1GHz core and 2800MHz memory, and imagine the real performance you'd see in this last G80/G92/G94 rehash.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
G80 is different. I can see how Chizow would think his G80 is bottlenecked by the core, but G92 is different: its core is already stronger than its memory bandwidth can feed.

The 8800GS is exactly 75% of a full G92 GTS, not a G80. A full G92 will show the same thing. Anyone with a full G92 is welcome to try.
 