*** Unofficial 8800 GTS review thread ***


AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: chizow

Huh? The whole point of measuring in gigapixels and gigatexels per second is to give you an idea of how many pixels/textures can be rendered per second, since multiple textures are simply a composite of rendered pixels/texels. Since the 3dfx days, fillrate and pixel pipelines have always been the main measuring stick of a GPU's performance. Only with G80 was there a divergence from the in-line pipe, with pixel/vertex shaders going to a unified architecture while being separated from the render back-ends and running at independent clock speeds.

You are basically talking about pixel performance, not texel performance, when you talk about FLOPS and ROPs. Pixel fillrate is only an issue when you are dealing with higher resolutions or AA, but texel fillrate deals with texture performance, which can have a dramatic impact depending on what the card is targeted for. Measuring FLOPS is irrelevant, as shown by many cards before G92. You are treating the FLOP figure as the measure of how powerful a GPU is, which isn't the case with modern video cards.


if anything "saturating" your memory bandwidth will result in worse performance.

You've got to be kidding. Why do we need better memory then? Why not stick with SDR at 1 MHz? Hell, we wouldn't even need a memory subsystem to relay any information if that were the case.


Once again, show me a single benchmark or user-test that shows a benefit from only increased memory bandwidth. There's a relatively new G92 GTS OC'ing thread that's just started up and many users are expecting their G92 GTSes in the next few days. It's really simple: ask a few people to run some tests with increased memory clock speeds vs. stock memory clock speeds and see if there is any difference in performance.

Buy me a G92 8800GTS and I would happily test this theory for you. To put it into perspective, I don't even need an 8800GT, because the 8600GTS I have does the exact same thing, just on a smaller scale.
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Originally posted by: Azn
Originally posted by: chizow

Huh? The whole point of measuring in gigapixels and gigatexels per second is to give you an idea of how many pixels/textures can be rendered per second, since multiple textures are simply a composite of rendered pixels/texels. Since the 3dfx days, fillrate and pixel pipelines have always been the main measuring stick of a GPU's performance. Only with G80 was there a divergence from the in-line pipe, with pixel/vertex shaders going to a unified architecture while being separated from the render back-ends and running at independent clock speeds.

You are basically talking about pixel performance, not texel performance, when you talk about FLOPS and ROPs. Pixel fillrate is only an issue when you are dealing with higher resolutions or AA, but texel fillrate deals with texture performance, which can have a dramatic impact depending on what the card is targeted for. Measuring FLOPS is irrelevant, as shown by many cards before G92. You are treating the FLOP figure as the measure of how powerful a GPU is, which isn't the case with modern video cards.


if anything "saturating" your memory bandwidth will result in worse performance.

You've got to be kidding. Why do we need better memory then? Why not stick with SDR at 1 MHz? Hell, we wouldn't even need a memory subsystem to relay any information if that were the case.


Once again, show me a single benchmark or user-test that shows a benefit from only increased memory bandwidth. There's a relatively new G92 GTS OC'ing thread that's just started up and many users are expecting their G92 GTSes in the next few days. It's really simple: ask a few people to run some tests with increased memory clock speeds vs. stock memory clock speeds and see if there is any difference in performance.

Buy me a G92 8800GTS and I would happily test this theory for you. To put it into perspective, I don't even need an 8800GT, because the 8600GTS I have does the exact same thing, just on a smaller scale.

The 8600GTS is shader bound. It has about 1/4th the shader power of the 8800GTS (G92) and about 1/2 the memory bandwidth. This tells me that the 8600GTS's problem is not the 128-bit bus, but the core only having 32 SPs...

To me, the 8600GTS is in no way comparable to the G92 8800GTS, yet you keep bringing it up.

Let's put it this way... If the 8600GTS is limited by memory bandwidth as you say, then the 8800GTS (G92) would see no increase going from 64 to 128 shaders... Yet we can test this theory. Let's clock the core down from 650 to 325 and run your tests again! Something tells me the FPS will drop like a rock... Yet, according to you, there should be no performance difference because the memory bus is already fully saturated... I think basic logic shows this to be false.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
You are basically talking about pixel performance, not texel performance, when you talk about FLOPS and ROPs. Pixel fillrate is only an issue when you are dealing with higher resolutions or AA, but texel fillrate deals with texture performance, which can have a dramatic impact depending on what the card is targeted for.
No, pixel fillrate is relevant all of the time, at any resolution and any setting. We've already gone through this before, yet you still don't seem to understand. ROPs are responsible for rendering a frame to the buffer. It's always relevant because the faster you can render one frame, the faster you can begin rendering the next frame. This is why you see higher frame rates at lower resolutions (fewer pixels per frame) and lower frame rates (along with other reasons, like shaders etc.) at higher resolutions (more pixels per frame). There is, of course, a diminishing return at lower resolutions, and less difference between GPUs, as the CPU becomes the bottleneck rather than the GPU. These are relatively elementary concepts when comparing GPUs/CPUs.

Measuring FLOPS is irrelevant, as shown by many cards before G92. You are treating the FLOP figure as the measure of how powerful a GPU is, which isn't the case with modern video cards.
Name one. Show me one card that has similar specs compared to another with fewer ROPs/pixel pipes that performs worse at the same core clock speeds.

I'm not making the argument that fillrate is the sole factor in determining how powerful a GPU is; however, in the case of the G92 vs. G80, it clearly is. G92 has the advantage in texel fillrate (twice the texture mapping units and higher clock speeds). G92 has the advantage in shader power (30% or so more). G92 has fewer ROPs and less bandwidth. You insist the disadvantage is bandwidth, yet you can't find a single example where G92 actually benefits from additional bandwidth, and you ignored a benchmark you specifically asked for (3DMark06) that proved you wrong.


You've got to be kidding. Why do we need better memory then? Why not stick with SDR at 1 MHz? Hell, we wouldn't even need a memory subsystem to relay any information if that were the case.
Um...you do realize that bandwidth is NOT going to be fully utilized all of the time, right? Seriously, it's really simple. Bandwidth is not an issue until you run out of it. Of course you need enough bandwidth so that it doesn't become your bottleneck, but additional bandwidth beyond that is going to be WASTED. Having more bandwidth than you can use results in NO performance gain. There are about a bajillion examples of this, from DDR3 on the desktop, to GDDR4 and 512-bit on the 2900XT, to the G80 and G92. Increasing memory clock speed (i.e. increasing bandwidth) yields little to no performance gain. In simple terms, you have more bandwidth, but you can't do anything with it... how hard is that to understand?
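To put rough numbers on that: theoretical memory bandwidth is just bus width times effective data rate. The quick sketch below (Python) uses the commonly quoted reference bus widths and memory clocks for these cards, so treat the figures as approximate theoretical peaks, not measurements:

    # Theoretical memory bandwidth = bus width (bytes) x effective data rate.
    # Bus widths and memory clocks are the commonly quoted reference specs (approximate).
    def bandwidth_gb_s(bus_bits, mem_clock_mhz, pumps=2):   # GDDR3 transfers twice per clock
        return bus_bits / 8 * mem_clock_mhz * pumps * 1e6 / 1e9

    cards = {
        "8800 GTX (G80)":     (384, 900),
        "8800 GTS 640 (G80)": (320, 800),
        "8800 GT (G92)":      (256, 900),
        "8800 GTS 512 (G92)": (256, 970),
    }

    for name, (bus, clk) in cards.items():
        print(f"{name}: {bandwidth_gb_s(bus, clk):.1f} GB/s")
    # A bigger number here only helps if the GPU can actually consume it;
    # that is the whole "wasted bandwidth" point above.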


Buy me a G92 8800GTS and I would happily test this theory for you. To put it into perspective, I don't even need an 8800GT, because the 8600GTS I have does the exact same thing, just on a smaller scale.
LOL, buy you a G92 GTS... ya, ok. Or just read over the countless other reviews/user-feedback results etc. that come to the conclusion that more memory bandwidth is pointless on an 8-series card.

The 8600 is comparing apples to oranges. What exactly are you comparing an 8600 to? How would you know it's bandwidth limited when it's gimped in so many other areas that would adversely impact performance? 8600GTS to 8600GT? I wouldn't even need to go back to guess that the "massive" differences between the two parts are probably due to differences in shader/core clocks.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: ArchAngel777
Originally posted by: Azn
Originally posted by: chizow

Huh? The whole point of measuring in gigapixels and gigatexels per second is to give you an idea of how many pixels/textures can be rendered per second, since multiple textures are simply a composite of rendered pixels/texels. Since the 3dfx days, fillrate and pixel pipelines have always been the main measuring stick of a GPU's performance. Only with G80 was there a divergence from the in-line pipe, with pixel/vertex shaders going to a unified architecture while being separated from the render back-ends and running at independent clock speeds.

You are basically talking about pixel performance, not texel performance, when you talk about FLOPS and ROPs. Pixel fillrate is only an issue when you are dealing with higher resolutions or AA, but texel fillrate deals with texture performance, which can have a dramatic impact depending on what the card is targeted for. Measuring FLOPS is irrelevant, as shown by many cards before G92. You are treating the FLOP figure as the measure of how powerful a GPU is, which isn't the case with modern video cards.


if anything "saturating" your memory bandwidth will result in worse performance.

You've got to be kidding. Why do we need better memory then? Why not stick with SDR at 1 MHz? Hell, we wouldn't even need a memory subsystem to relay any information if that were the case.


Once again, show me a single benchmark or user-test that shows a benefit from only increased memory bandwidth. There's a relatively new G92 GTS OC'ing thread that's just started up and many users are expecting their G92 GTSes in the next few days. It's really simple: ask a few people to run some tests with increased memory clock speeds vs. stock memory clock speeds and see if there is any difference in performance.

Buy me a G92 8800GTS and I would happily test this theory for you. To put it into perspective, I don't even need an 8800GT, because the 8600GTS I have does the exact same thing, just on a smaller scale.

The 8600GTS is shader bound. It has about 1/4th the shader power of the 8800GTS (G92) and about 1/2 the memory bandwidth. This tells me that the 8600GTS's problem is not the 128-bit bus, but the core only having 32 SPs...

To me, the 8600GTS is in no way comparable to the G92 8800GTS, yet you keep bringing it up.

Let's put it this way... If the 8600GTS is limited by memory bandwidth as you say, then the 8800GTS (G92) would see no increase going from 64 to 128 shaders... Yet we can test this theory. Let's clock the core down from 650 to 325 and run your tests again! Something tells me the FPS will drop like a rock... Yet, according to you, there should be no performance difference because the memory bus is already fully saturated... I think basic logic shows this to be false.

Why even try when you don't fully understand what I'm saying?
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Originally posted by: Azn

Why even try when you don't fully understand what I'm saying?

I think the real problem here is that you don't even understand what you are saying.

 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: chizow
No, pixel fillrate is relevant all of the time, at any resolution and any setting. We've already gone through this before, yet you still don't seem to understand. ROPs are responsible for rendering a frame to the buffer. It's always relevant because the faster you can render one frame, the faster you can begin rendering the next frame. This is why you see higher frame rates at lower resolutions (fewer pixels per frame) and lower frame rates (along with other reasons, like shaders etc.) at higher resolutions (more pixels per frame). There is, of course, a diminishing return at lower resolutions, and less difference between GPUs, as the CPU becomes the bottleneck rather than the GPU. These are relatively elementary concepts when comparing GPUs/CPUs.

No, really? :roll: We are dealing with games that have multiple textures. Pixel fillrate is irrelevant; texel fillrate is what's more relevant today, since texture fillrate is usually double or triple the pixel fillrate. Same reason the 3870 cannot beat an 8800GT.

Name one. Show me one card that has similar specs compared to another with fewer ROPs/pixel pipes that performs worse at the same core clock speeds.

I'm not making the argument that fillrate is the sole factor in determining how powerful a GPU is; however, in the case of the G92 vs. G80, it clearly is. G92 has the advantage in texel fillrate (twice the texture mapping units and higher clock speeds). G92 has the advantage in shader power (30% or so more). G92 has fewer ROPs and less bandwidth. You insist the disadvantage is bandwidth, yet you can't find a single example where G92 actually benefits from additional bandwidth, and you ignored a benchmark you specifically asked for (3DMark06) that proved you wrong.

I didn't understand what you were trying to get at with the first paragraph. I don't know what you are asking, but the 2600XT, a 4-pipe card, can hang with the 8600GT, which has 8 pixel pipes. They do not have the same specs. Actually the 8600GT is the clear victor looking at the specs, other than memory.

Measuring FLOPS is like measuring MHz. It's a waste of time.

3DMark is not a real game. It could be a game, but it is not. It is only a benchmarking tool whose total scores are swayed by shader performance and CPU performance. I think we had a three-page debate about it. I don't need to tell you again.


Um...you do realize that bandwidth is NOT going to be fully utilized all of the time, right? Seriously, it's really simple. Bandwidth is not an issue until you run out of it. Of course you need enough bandwidth so that it doesn't become your bottleneck, but additional bandwidth beyond that is going to be WASTED. Having more bandwidth than you can use results in NO performance gain. There are about a bajillion examples of this, from DDR3 on the desktop, to GDDR4 and 512-bit on the 2900XT, to the G80 and G92. Increasing memory clock speed (i.e. increasing bandwidth) yields little to no performance gain. In simple terms, you have more bandwidth, but you can't do anything with it... how hard is that to understand?

I thought "saturating" your memory bandwidth loses performance? Now you say it hardly matters? Which is it? You keep changing back and forth, so I don't know what to believe from you.


LOL, buy you a G92 GTS... ya, ok. Or just read over the countless other reviews/user-feedback results etc. that come to the conclusion that more memory bandwidth is pointless on an 8-series card.

The 8600 is comparing apples to oranges. What exactly are you comparing an 8600 to? How would you know it's bandwidth limited when it's gimped in so many other areas that would adversely impact performance? 8600GTS to 8600GT? I wouldn't even need to go back to guess that the "massive" differences between the two parts are probably due to differences in shader/core clocks.

When is anything ever apples to apples other than the same apple? You can still reach the same conclusion when dealing with its siblings, the "oranges".

The 8800GT and the 8600 are not that different. Nvidia's design concept is put to use in both of these cards.

The 8600GTS has raw fillrate that rivals the 8800GTS. It might have only 32 SPs, but it rivals an X1950 Pro in most situations. Its shader performance is actually slightly better than an X1950 XTX's, and its raw fillrate rivals the X1950 XTX as well, but it can never reach peak performance because it is limited by memory bandwidth, so it performs more like an X1950 Pro than an X1950 XTX. Its raw fillrate can never hit its peak, as I showed you with the 3DMark multi-texture fillrate test in the previous thread, while the G80 8800GTS or 8800GTX can hit their peak.

The 8800GT can never hit its peak, just like the 8600GTS. That is why I compare them. Yet as I increase my memory speed and leave the core clock at default, the measured fillrate goes up in the 3DMark fillrate test. So you are telling me it is not being tied down by memory bandwidth? :light:

When you increase the core clock you are also increasing the shader clock. That might have an impact on some shader-heavy games, but you can also increase the shader clock independently instead of raising the core clock. I leave my card at default core clocks because raising them just adds more heat and does nothing in the 3DMark fillrate test; instead I raise the hell out of the SP clock and memory speed to get the card's full potential.

 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: ArchAngel777
Originally posted by: Azn

Why even try when you don't fully understand what I'm saying?

I think the real problem here is that you don't even understand what you are saying.

You "think" but I "know" you don't even understand what I'm saying.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
No, really? :roll: We are dealing with games that have multiple textures. Pixel fillrate is irrelevant; texel fillrate is what's more relevant today, since texture fillrate is usually double or triple the pixel fillrate.
Ya, which is why the G92 GT, with its higher texel fillrate and more texture mapping and address units, still doesn't outperform the G80 112SP or even the G80 96SP when run at the same clock speeds. This is why I compare G92 to G80... because you can clearly see the differences between the parts.

Same reason the 3870 cannot beat an 8800GT.
The 3870 doesn't beat the 8800GT for lots of reasons.

I didn't understand what you were trying to get at with the first paragraph. I don't know what you are asking, but the 2600XT, a 4-pipe card, can hang with the 8600GT, which has 8 pixel pipes.
Of course you don't understand, especially the part where I said "same clock speeds", since that would have an impact on fillrate (both pixel and texel), same as if you overclocked one card over another with the same number of ROPs and texture units. The 2600XT "hangs with" the 8600GT because it's run at much higher clock speeds, compensating for its lack of ROPs, just as the G92 compensates for its fewer ROPs with higher clock speeds, but not enough to overcome the 4/8 disadvantage it has over G80.

2600XT = 800MHz x 4 ROPs = 3200 Mpixels/s
8600GT = 540MHz x 8 ROPs = 4320 Mpixels/s

They do not have the same specs. Actually the 8600GT is the clear victor looking at the specs, other than memory.
No they don't have the same specs, which is why I'm comparing G92 to G80 at similar clock speeds to isolate any differences.

Measuring FLOPS is like measuring MHz. It's a waste of time.
Certainly not as much as measuring wasted bandwidth.

3DMark is not a real game. It could be a game, but it is not. It is only a benchmarking tool whose total scores are swayed by shader performance and CPU performance. I think we had a three-page debate about it. I don't need to tell you again.
Yet no real games substantiate your claims either. Honestly, it should be really simple: find a single benchmark from any game where core/shader speed is the same and only memory frequency is changed, where there's a difference that scales anywhere close to the difference in memory clock speed.


I thought "saturating" your memory bandwidth loses performance? Now you say it hardly matters? Which is it? You keep changing back and forth, so I don't know what to believe from you.
Huh? Read again, you still don't seem to understand. But you must be a big fan of the R600 and its 8 billion GB/sec 512-bit memory interface with GDDR15 that results in a whole lotta bandwidth that it can't use.

The 8800GT and the 8600 are not that different. Nvidia's design concept is put to use in both of these cards.
The architecture is similar but the proportions are very different. The 8600 has about 50% of the fillrate but only 25-30% of the shader power and 50% of the bandwidth. It's pretty obvious where this card is crippled and disproportionate compared to the 8800-series parts.

The 8600GTS has raw fillrate that rivals the 8800GTS.
LOL, what? Besides the fact that you're completely wrong here, it's completely crippled elsewhere compared to the 8800GTS. But since you don't seem to understand simple concepts such as bottlenecks, you wouldn't understand that a part will only perform as well as its weakest link, just as you don't understand that bandwidth means nothing if you can't take advantage of the additional bandwidth.

Its raw fillrate can never hit its peak, as I showed you with the 3DMark multi-texture fillrate test in the previous thread, while the G80 8800GTS or 8800GTX can hit their peak.

The 8800GT can never hit its peak, just like the 8600GTS. That is why I compare them. Yet as I increase my memory speed and leave the core clock at default, the measured fillrate goes up in the 3DMark fillrate test. So you are telling me it is not being tied down by memory bandwidth? :light:
Wait, so the multi-texture test allows the G80s to hit their peak, but doesn't allow the G92 to? Even when you increase memory bandwidth, which yields no performance gain on the G92? But 3DMark isn't a game, right? Oh wait, games don't show any performance gain either...

But again, it's really simple. If memory bandwidth were holding the G92 back, increasing the memory clock would yield tangible performance gains. Except raising the memory clock does very little, and nowhere close to the benefits of raising core and shader clocks.

When you increase the core clock you are also increasing the shader clock. That might have an impact on some shader-heavy games, but you can also increase the shader clock independently instead of raising the core clock. I leave my card at default core clocks because raising them just adds more heat and does nothing in the 3DMark fillrate test; instead I raise the hell out of the SP clock and memory speed to get the card's full potential.
Yep, which is why you have no clue what you're talking about.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
Don't you think that has something to do with its massive fillrate advantage over the GTX as well?
Some of it perhaps, but it's mainly about the shader processing capability.

Also you keep using archaic terms like fillrate and multi-texturing as if you're discussing graphics cards in the context of ten years ago or something.

Like I explained earlier the fixed pipeline is gone. You don't have pixel pipes running static operations while doing fixed multitexturing through TMUs anymore.

Now you have shaders running a whole bunch of instructions on each pixel (and vertex too but to a lesser degree) supported by texture units if texture operations are required, and then ROPs render the final calculated output from the shaders to the framebuffer.

A lot of the time shaders are not using texturing or fillrate because they're modifying data with arithmetic.

Raw fillrate gives you better raw performance, especially when memory bandwidth isn't constrained by AA, post-processing, etc.
Again what exactly does "raw fillrate" mean in the context of a modern rendering system? If I have a fillrate of X gigapixels/sec and Y gigatexels/sec what performance predictions can you make in a game that requires 7 shader operations for every texture operation?

Not much I'm afraid because a ROP can't do anything until the shaders have done their thing. Likewise given texture operations are a minority compared to shader ops, texturing fillrate won't play as large a factor.

Also you can't tell much from memory bandwidth given a shader operation might not even touch memory but instead re-use data from caches and/or registers.

The critical area of performance is shader performance and factors like shader architecture, efficiency, count, clock speed (etc) are usually the best indicators of performance.

This has all been explained to you but you insist on repeating legacy terms in an archaic context. They can measure things of use but you need to be looking elsewhere to get the full picture.
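To make that concrete, here is a deliberately crude sketch of the idea: per-pixel cost is set by whichever unit is slowest for the workload. The function name, rates, and op counts below are illustrative only, not real hardware figures:

    # Crude per-pixel cost model: the slowest unit for the workload sets the pace.
    def limiting_unit(alu_ops, tex_ops, alu_rate=3.0, tex_rate=1.0):
        alu_time = alu_ops / alu_rate   # time spent on shader arithmetic
        tex_time = tex_ops / tex_rate   # time spent on texture fetch/filter
        return "shader-bound" if alu_time > tex_time else "texture-bound"

    print(limiting_unit(alu_ops=1, tex_ops=1))   # legacy-style 1:1 workload -> texture-bound
    print(limiting_unit(alu_ops=7, tex_ops=1))   # FEAR-style 7:1 workload   -> shader-bound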

FEAR isn't really as shader-intensive a game as modern titles like Crysis or Unreal Tournament, because even a 7900GTX does really well in it, and you know the whole 7-series is prone to weak shader performance.
Again the GTX does "well" because of the texture filtering fiasco of the GF7 series; disable those optimizations and you start seeing a different picture.

But that's a topic for another time.

And FEAR is quite shader heavy for a 2005 title (up to 7 shader ops for every texture op).

Shaders are important, but not as important as having better fillrate abilities.
Again, "fillrate" is meaningless in the context of running shaders, which 100% of games today use according to the slides.

Memory acts as a carrier for the GPU; it determines how fast you can relay that information.
Which again means little if the shaders can't generate information fast enough.

G92 has more power, but it is being tied down from its full potential, much like the 8600GT is.
The 8600 is nothing like the GTS, and if you had looked at even the basic specs of the 8600 series you'd see seriously cut-down shading power (32 SPs compared to the next closest configuration, 96 SPs in the classic GTS).
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: chizow

Ya, which is why the G92 GT, with its higher texel fillrate and more texture mapping and address units, still doesn't outperform the G80 112SP or even the G80 96SP when run at the same clock speeds. This is why I compare G92 to G80... because you can clearly see the differences between the parts.

Except that the G80 112SP was superclocked and had more memory bandwidth than the 8800GT to saturate it.

The 3870 doesn't beat the 8800GT for lots of reasons.

What are they? Because you think pixel fillrate somehow dictates performance. But for your info, the 3870 has more pixel fillrate than the 8800GT or the G92 GTS.


Of course you don't understand, especially the part where I said "same clock speeds", since that would have an impact on fillrate (both pixel and texel), same as if you overclocked one card over another with the same number of ROPs and texture units. The 2600XT "hangs with" the 8600GT because it's run at much higher clock speeds, compensating for its lack of ROPs, just as the G92 compensates for its fewer ROPs with higher clock speeds, but not enough to overcome the 4/8 disadvantage it has over G80.

2600XT = 800MHz x 4 ROPs = 3200 Mpixels/s
8600GT = 540MHz x 8 ROPs = 4320 Mpixels/s

I didn't understand what you were trying to get at; now I see what you were saying.

Then why not the 3870 vs. the 8800GT?

HD3870 = 775MHz x 16 ROPs = 12.4 Gpixels/s
8800GT = 600MHz x 16 ROPs = 9.6 Gpixels/s
G92GTS = 650MHz x 16 ROPs = 10.4 Gpixels/s

So by your logic the 3870 should be the clear victor, which is off by a mile. The only thing the 8800GT excels at over the 3870 is texel fillrate.
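For reference, the arithmetic behind those numbers (plus the texel side) works out as below; the ROP/TMU counts are the commonly quoted specs, so treat the output as theoretical peaks only:

    # Theoretical fillrates: pixel = core clock x ROPs, texel = core clock x TMUs.
    cards = {
        # name:           (core MHz, ROPs, TMUs)
        "HD 3870":        (775, 16, 16),
        "8800 GT (G92)":  (600, 16, 56),
        "8800 GTS (G92)": (650, 16, 64),
    }

    for name, (clk, rops, tmus) in cards.items():
        print(f"{name}: {clk * rops / 1000:.1f} Gpixels/s, {clk * tmus / 1000:.1f} Gtexels/s")
    # HD 3870:        12.4 Gpixels/s, 12.4 Gtexels/s
    # 8800 GT (G92):   9.6 Gpixels/s, 33.6 Gtexels/s
    # 8800 GTS (G92): 10.4 Gpixels/s, 41.6 Gtexels/s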



No they don't have the same specs, which is why I'm comparing G92 to G80 at similar clock speeds to isolate any differences.

Of course they don't have the same specs. The 8600GT is the clear winner by your logic that pixel performance rules everything, but clearly the 2600XT can hang with the 8600GT in many instances.

Certainly not as much as measuring wasted bandwidth.

Kind of like the 8800GT's texture fillrate.

Yet no real games substantiate your claims either. Honestly, it should be really simple: find a single benchmark from any game where core/shader speed is the same and only memory frequency is changed, where there's a difference that scales anywhere close to the difference in memory clock speed.

Because these mainstream hardware sites only test 3DMark scores and don't do enough to dissect video cards.


Huh? Read again, you still don't seem to understand. But you must be a big fan of the R600 and its 8 billion GB/sec 512-bit memory interface with GDDR15 that results in a whole lotta bandwidth that it can't use.

Read what? Your foolishness that FLOPS dictate performance, or that memory bandwidth doesn't have an impact on a card like the 8800GT? The 2900XT might have been a big waste of bandwidth, but it's not wasted on these G92s.



The architecture is similar but the proportions are very different. The 8600 has about 50% of the fillrate but only 25-30% of the shader power and 50% of the bandwidth. It's pretty obvious where this card is crippled and disproportionate compared to the 8800-series parts.

You said it exactly right this time. "Proportions" is the magic word. G92 uses the same concept as G84.


LOL, what? Besides the fact that you're completely wrong here, it's completely crippled elsewhere compared to the 8800GTS. But since you don't seem to understand simple concepts such as bottlenecks, you wouldn't understand that a part will only perform as well as its weakest link, just as you don't understand that bandwidth means nothing if you can't take advantage of the additional bandwidth.

LOL, what don't you understand? It has a lower ROP count, but its texel performance is up there with the 8800GTS: the 8600GTS has 10.8 Gtexels/s and the G80 8800GTS has 12 Gtexels/s. In theory it should handle an X1950 XTX easily if it had a 256-bit memory bus, as long as it is not running too high a resolution.


Wait, so the multi-texture test allows the G80s to hit their peak, but doesn't allow the G92 to? Even when you increase memory bandwidth, which yields no performance gain on the G92? But 3DMark isn't a game, right? Oh wait, games don't show any performance gain either...

But again, it's really simple. If memory bandwidth were holding the G92 back, increasing the memory clock would yield tangible performance gains. Except raising the memory clock does very little, and nowhere close to the benefits of raising core and shader clocks.

I posted a link showing this in the other thread. The 8800GT never hits its peak, while the 8800GTX or G80 GTS does. That's the reason the 8800GT beats the G80 GTS but not the GTX.

Games don't show any performance gain? Where? Did you post a single benchmark other than 3DMark scores, which are swayed by shader performance? Oh wait, 3DMark isn't a game but a benchmarking utility. Are you thinking in terms of your GTX and not the G92, which has massive texel fillrate over your card?


Yep, which is why you have no clue what you're talking about.

What do I have to do, show you a picture so you can understand?


 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
Except that the G80 112SP was superclocked and had more memory bandwidth than the 8800GT to saturate it.
The stock 513MHz G80 96SP has more memory bandwidth than the 8800GT "to saturate it" <---- LOL, yet it gets stomped by the 8800GT. But again, this just proves you have no clue what you're talking about, since anyone who owns a G80 (or G92) knows that increasing core/shader clocks without touching memory yields actual performance increases.

What are they? Because you think pixel fillrate somehow dictates performance. But for your info, the 3870 has more pixel fillrate than the 8800GT or the G92 GTS.
Nope, only in the case of G92 vs. G80. I never once made the direct comparison to RV670 because the architectures are different enough that the same comparisons don't apply. But to start, the RV670 suffers similarly to the G84 with an unbalanced proportion of shader performance, since it has 64 shaders at worst and 320 at best (5x1 scalar) that run at half the speed of a G92's and about 60% of a G80's. Once again, a part is only going to perform as well as its weakest link, and in the case of the RV670 shader performance is its weakest point.


I didn't understand what you were trying to get at; now I see what you were saying.

Then why not the 3870 vs. the 8800GT?

HD3870 = 775MHz x 16 ROPs = 12.4 Gpixels/s
8800GT = 600MHz x 16 ROPs = 9.6 Gpixels/s
G92GTS = 650MHz x 16 ROPs = 10.4 Gpixels/s

So by your logic the 3870 should be the clear victor, which is off by a mile. The only thing the 8800GT excels at over the 3870 is texel fillrate.
As covered above, the 3870 also suffers in shader performance, which would render any performance gains from fillrate useless if the GPU is waiting for shader ops to finish (again, the simple concept of bottlenecks). When comparing the G92 to the G80, no such bottleneck exists, since the G92 has far superior shader performance (i.e. no shader bottleneck), yet it still falls behind the G80 GTX in performance in many cases.

Of course they don't have the same specs. The 8600GT is the clear winner by your logic that pixel performance rules everything, but clearly the 2600XT can hang with the 8600GT in many instances.
Yawn, same reason the 3870 doesn't beat the GT, but it's obvious you don't even understand the relationships involved or even the simple concept of bottlenecking.

Because these mainstream hardware sites only test 3DMark scores and don't do enough to dissect video cards.
No, they test real games too, and all of them disprove any claim that the G92 is bandwidth deprived. But honestly, it's not brain surgery. It simply involves moving the memory clock slider up and down in RivaTuner and then running the same game or benchmark to show that you have no clue what you're talking about.


Read what? Your foolishness that FLOPS dictate performance, or that memory bandwidth doesn't have an impact on a card like the 8800GT? The 2900XT might have been a big waste of bandwidth, but it's not wasted on these G92s.
You wouldn't know either way; you dismissed the benchmarks given to you and refuse to look at the GT OC'ing thread on these forums that proves you wrong. I'm sure you'll refuse to look at the GTS OC'ing thread as well, and I'm sure others will come to the same conclusion that increasing memory bandwidth does very little for performance, and much less than increasing core/shader clocks.

Honestly, I think you have an original GeForce DDR or something, since that's the last time I can recall OC'ing memory having any meaningful impact on performance. There's a reason memory bus sizes haven't changed all that much from 256-bit since the R300. The natural increases from faster memory used on video cards provide enough of a bandwidth increase that additional bandwidth goes to waste...

You said it exactly right this time. "Proportions" is the magic word. G92 uses the same concept as G84.
Except I'm not comparing G92 to G84, I'm comparing G80 to G92, with very different proportions and performance. When comparing G84 to G92 it's pretty obvious that with 25-30% of the shader power (again, weakest link) you're going to get overall performance in a similar proportion. When comparing G80 to G92, you have about 80% of the pixel fillrate, 150% of the shader power, and 70% of the bandwidth. Yet the G80 still beats the G92 in many cases.

You seem to think it's because of bandwidth, which might be true, but again, REAL WORLD tests show that there is no benefit from simply increasing memory speed. Furthermore, increases in core and shader clocks DO yield scaling increases in performance, once again showing that memory bandwidth is not the bottleneck (see any review with a G92 GTS clocked above 650MHz). It really doesn't get any simpler than this.


LOL, what don't you understand? It has a lower ROP count, but its texel performance is up there with the 8800GTS: the 8600GTS has 10.8 Gtexels/s and the G80 8800GTS has 12 Gtexels/s. In theory it should handle an X1950 XTX easily if it had a 256-bit memory bus, as long as it is not running too high a resolution.
Um, without even checking the 8600GTS's texel fillrate, your comparison only proves that texel fillrate has the least impact on actual performance. If it actually meant anything, the 8600GTS would perform similarly to the 8800GTS, but instead it performs at about 33% of it, which seems to coincide with its pixel fillrate and shader power...


I posted a link showing this in the other thread. The 8800GT never hits its peak, while the 8800GTX or G80 GTS does. That's the reason the 8800GT beats the G80 GTS but not the GTX.
Really? I only saw the same gibberish you're posting here. But again, that brings us back to the FiringSquad review, where the G80 GTS does outperform the 8800GT when both cards are run at similar clock speeds in real games. Why doesn't the GT beat the GTS there as well? Because it's not hitting its "peak"? lol.

Games don't show any performance gain? Where? Did you post a single benchmark other than 3DMark scores, which are swayed by shader performance? Oh wait, 3DMark isn't a game but a benchmarking utility. Are you thinking in terms of your GTX and not the G92, which has massive texel fillrate over your card?
I'm not talking about 3DMark or shader performance with the G80 or G92. I'm talking about ROPs and fillrate. And yes, every single review you look at will show that increases to core (fillrate) and shader speeds yield an increase in performance. Increases to memory show very little, if any, performance gain. And no, I'm not "thinking with my GTX" when I say memory bandwidth has no impact on G92 performance; I'm "thinking with my GT", which I had for 3 weeks before I sold it to a member of these forums.

What do I have to do, show you a picture so you can understand?
Like I said, it's really simple. Have someone run a few benchmarks with a G92, or show anything in a review where only increased memory clock speeds show any benefit in performance. You think the G92 is bandwidth limited. Increase ONLY bandwidth by increasing memory frequency and show it to be true. I already know the results, which is the only reason I bother replying to you.



 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: chizow

The stock 513MHz G80 96SP has more memory bandwidth than the 8800GT "to saturate it" <---- LOL, yet it gets stomped by the 8800GT. But again, this just proves you have no clue what you're talking about, since anyone who owns a G80 (or G92) knows that increasing core/shader clocks without touching memory yields actual performance increases.

Talk about gibberish. The G80 GTS 96SP has less texel fillrate than the 8800GT or the 112SP GTS. The 112SP G80 GTS was also clocked much higher, a difference of 15% in core clock. That's where the difference lies. A card like the G80 GTS is not bottlenecked by memory bandwidth, so it takes all the fillrate it can get and fully utilizes it.

Nope, only in the case of G92 vs. G80. I never once made the direct comparison to RV670 because the architectures are different enough that the same comparisons don't apply. But to start, the RV670 suffers similarly to the G84 with an unbalanced proportion of shader performance, since it has 64 shaders at worst and 320 at best (5x1 scalar) that run at half the speed of a G92's and about 60% of a G80's. Once again, a part is only going to perform as well as its weakest link, and in the case of the RV670 shader performance is its weakest point.

Only in the case of G92 vs. G80, huh? How about the G80 GTS vs. the 8800GT?

GTS = 10 Gpixels/s
GT = 9.6 Gpixels/s

Which performs better? Talk to the hand.


As covered above, the 3870 also suffers in shader performance, which would render any performance gains from fillrate useless if the GPU is waiting for shader ops to finish (again, the simple concept of bottlenecks). When comparing the G92 to the G80, no such bottleneck exists, since the G92 has far superior shader performance (i.e. no shader bottleneck), yet it still falls behind the G80 GTX in performance in many cases.

Poof. The 3870 does not suffer from weak shader performance. Actually, the 3870 does quite well in 3DMark, whose scores are dictated by shader performance. Not to mention it hangs with the GTX in games like Unreal Tournament, BioShock, Call of Juarez, and the list goes on.



Yawn, same reason the 3870 doesn't beat the GT, but it's obvious you don't even understand the relationships involved or even the simple concept of bottlenecking.

Understand? Your "pixel performance gives you faster frame rates" argument is out the window. I think you clearly can't tell what is actually bottlenecked.


No, they test real games too, and all of them disprove any claim that the G92 is bandwidth deprived. But honestly, it's not brain surgery. It simply involves moving the memory clock slider up and down in RivaTuner and then running the same game or benchmark to show that you have no clue what you're talking about.

You keep giving the same old numbers again and again. That isn't dissecting anything; it's just showing mainstream people like you how it performs in games. If it's not brain surgery, you should try it yourself and tell us.



You wouldn't know either way; you dismissed the benchmarks given to you and refuse to look at the GT OC'ing thread on these forums that proves you wrong. I'm sure you'll refuse to look at the GTS OC'ing thread as well, and I'm sure others will come to the same conclusion that increasing memory bandwidth does very little for performance, and much less than increasing core/shader clocks.

Honestly, I think you have an original GeForce DDR or something, since that's the last time I can recall OC'ing memory having any meaningful impact on performance. There's a reason memory bus sizes haven't changed all that much from 256-bit since the R300. The natural increases from faster memory used on video cards provide enough of a bandwidth increase that additional bandwidth goes to waste...

What benchmarks did you give? 3DMark, or the FiringSquad review with the 112SP G80 beating out the 8800GT with its higher shader clock? You don't even understand that when you overclock the core you are also raising the shader clock. That whole thread is useless unless someone raises just the shader clock, leaves the core clock intact, and tests it in games.


Except I'm not comparing G92 to G84, I'm comparing G80 to G92, with very different proportions and performance. When comparing G84 to G92 it's pretty obvious that with 25-30% of the shader power (again, weakest link) you're going to get overall performance in a similar proportion. When comparing G80 to G92, you have about 80% of the pixel fillrate, 150% of the shader power, and 70% of the bandwidth. Yet the G80 still beats the G92 in many cases.

You seem to think it's because of bandwidth, which might be true, but again, REAL WORLD tests show that there is no benefit from simply increasing memory speed. Furthermore, increases in core and shader clocks DO yield scaling increases in performance, once again showing that memory bandwidth is not the bottleneck (see any review with a G92 GTS clocked above 650MHz). It really doesn't get any simpler than this.

The GTX wins because it has enough memory bandwidth to fully saturate its fillrate. It is not being bottlenecked.

Now you say the GTX's bandwidth might have something to do with it. Are you crumbling under your own ignorance?


Um, without even checking the 8600GTS's texel fillrate, your comparison only proves that texel fillrate has the least impact on actual performance. If it actually meant anything, the 8600GTS would perform similarly to the 8800GTS, but instead it performs at about 33% of it, which seems to coincide with its pixel fillrate and shader power...

Are your emotions getting in the way? The 8600GTS's texel fillrate is 10.8 Gtexels/s: 675MHz x 16 TMUs = 10,800 Mtexels/s. But again, you keep forgetting it is being bottlenecked by memory bandwidth down to about 7,600. If it did have enough bandwidth it would be as fast as an X1950 XTX instead of an X1950 Pro.

http://techreport.com/r.x/gefo...600/3dm-multi-1280.gif


Really? I only saw the same gibberish you're posting here. But again, that brings us back to the FiringSquad review, where the G80 GTS does outperform the 8800GT when both cards are run at similar clock speeds in real games. Why doesn't the GT beat the GTS there as well? Because it's not hitting its "peak"? lol.


http://images.vnu.net/gb/inqui...-dx10-hit/fillrate.jpg

Here it is again. The 8800GT has a theoretical fillrate of 33.6 Gtexels/s. Clearly, in that graph, it can barely reach half of its theoretical fillrate.

You see, the GTX can fully utilize all of its texel fillrate. The 8800GT and the new 8800GTS cannot. That's why the G92 loses to the 8800GTX but beats the G80 GTS.



I'm not talking about 3DMark or shader performance with the G80 or G92. I'm talking about ROPs and fillrate. And yes, every single review you look at will show that increases to core (fillrate) and shader speeds yield an increase in performance. Increases to memory show very little, if any, performance gain. And no, I'm not "thinking with my GTX" when I say memory bandwidth has no impact on G92 performance; I'm "thinking with my GT", which I had for 3 weeks before I sold it to a member of these forums.

"Pixel fillrate = high performance" is out the window, as shown by the G80 GTS losing to the 8800GT.

Post a benchmark instead. Why not? You can't. Let me guess, 3DMark scores, right?


Like I said, it's really simple. Have someone run a few benchmarks with a G92, or show anything in a review where only increased memory clock speeds show any benefit in performance. You think the G92 is bandwidth limited. Increase ONLY bandwidth by increasing memory frequency and show it to be true. I already know the results, which is the only reason I bother replying to you.

Oh really, you tested this on a G92? Or was it your GTX? The GTX doesn't have this problem.

The only reason you bother replying is so you can say "you don't know what you're talking about." LOL :thumbsdown:

 

Zambien

Member
Oct 14, 2004
100
0
0
Please stop feeding the troll. This is getting ridiculous. I have seen him personally attack multiple people throughout his tirades in multiple threads. This guy is the epitome of a troll and is adding nothing to this board but mindless clutter.

BAN!

Edit: In case I'm not being clear, I'm talking about Azn.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
You don't even understand that when you overclock the core you are also raising the shader clock.
Not necessarily, given that the G80 & G92 have separate core and shader clocks that can be individually controlled.

In any case, even if you were raising both, it still disproves your claims about memory bandwidth, since according to your logic there should be no performance difference from raising them, but there is.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Zambien
Please stop feeding the troll. This is getting ridiculous. I have seen him personally attack multiple people throughout his tirades in multiple threads. This guy is the epitome of a troll and is adding nothing to this board but mindless clutter.

BAN!

Edit: In case I'm not being clear, I'm talking about Azn.

Oh really? Which multiple threads were those?
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: BFG10K
You don't even understand that when you overclock the core you are also raising the shader clock.
Not necessarily, given that the G80 & G92 have separate core and shader clocks that can be individually controlled.

In any case, even if you were raising both, it still disproves your claims about memory bandwidth, since according to your logic there should be no performance difference from raising them, but there is.

You can raise them separately with RivaTuner or a BIOS edit, but if you raise the core it automatically raises the SP clock as well.

Shader increases have a tangible effect in some games. I never said shaders don't do anything. They sure do, but not as much as having bigger texel fillrate and the memory bandwidth to saturate it. So did these people raise the memory speed as well, or did they leave memory speed at default and raise only the core, or the shaders too?

The 3DMark test shows that the G92 isn't being fully utilized. If it were being fully utilized it would also beat an 8800GTX in the test, but it does not.

http://images.vnu.net/gb/inqui...-dx10-hit/fillrate.jpg
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
Talk about gibberish. The G80 GTS 96SP has less texel fillrate than the 8800GT or the 112SP GTS. The 112SP G80 GTS was also clocked much higher, a difference of 15% in core clock. That's where the difference lies. A card like the G80 GTS is not bottlenecked by memory bandwidth, so it takes all the fillrate it can get and fully utilizes it.
The G80 GTS has less texel fillrate regardless, due to its 1:2 TMUs compared to the 1:1 TMUs on the G92. This holds true even for the OC/SSC versions, where the G80 GTS will never match the G92's texel fillrate, yet both the 96SP and 112SP G80s perform as well as or better than the G92 once they are run at similar clock speeds. Of course a 10% increase in clock speed matters if you actually increase the clocks that matter. Even the 513MHz G80 GTS has more bandwidth than the G92, yet it can't beat a 600MHz GT. Why? Because bandwidth means nothing if you can't utilize it. Once you increase shader and core clocks to be similar to the G92's, the G80 pulls even or ahead.

576MHz 96SP G80 GTS vs. 600MHz 112SP G92 GT @ TechReport
576MHz 112SP G80 GTS vs. 600MHz 112SP G92 GT vs. 576MHz 128SP G80 GTX @ FiringSquad

The FiringSquad review summarizes the situation nicely. The G80 GTS @ 513MHz, despite its bandwidth advantage over the 8800GT, still gets stomped, but once its core/shader clocks are increased it pulls ahead of or even with the G92 GT. Even at the same clock speeds as the GTX (the SSC runs at GTX clock speeds), it still falls considerably behind the GTX. So, in summary, the factors you think mean the most (texel fillrate and bandwidth) actually mean very little when it comes to actual increases in performance. ROPs (pixel fillrate) with proportionate shader increases yield a much bigger gain.


Only in the case of G92 vs. G80, huh? How about the G80 GTS vs. the 8800GT?

GTS = 10 Gpixels/s
GT = 9.6 Gpixels/s

Which performs better? Talk to the hand.
Yep, pixel fillrate is close enough that the GT pulls ahead due to the other enhancements it has over the GTS (1:1 TMUs running at a higher clock speed, 30-40% faster shaders, etc.). Bandwidth is similar, yet the GTS still loses at stock speeds, again disproving your claim that bandwidth matters. The situation changes, though, as you bring the GTS core/shader clocks closer to the GT's, as seen in both reviews above.


Poof. The 3870 does not suffer from weak shader performance. Actually, the 3870 does quite well in 3DMark, whose scores are dictated by shader performance. Not to mention it hangs with the GTX in games like Unreal Tournament, BioShock, Call of Juarez, and the list goes on.
Um, that's the point of testing real games, because real games will show the real-world advantages of one part over another depending on their strengths and weaknesses. As I said elsewhere, the 3870 has at worst 64 shaders and at best 320 shaders. The GTX has 128 shaders that always run almost 2x faster (nearly 256-equivalent compared to RV670). In a perfect world where the 3870's ALUs were always fully utilized the 3870 would have the edge, but that's clearly not the case in real-world situations.

Again, when looking at bottlenecks, a part is only going to perform as well as its weakest link, and it's pretty obvious a worst-case scenario of 64 SPs puts the 3870 at a considerable disadvantage to the 128 SPs running at faster clocks on the GTX. I don't think anyone would dispute that NV's unified shader implementation is far superior to R600's. Separating the shader core from the raster core was ingenious, as current shader clocks on G92 are pushing 2GHz compared to ~800MHz on RV670.

Understand? Your "pixel performance gives you faster frame rates" argument is out the window. I think you clearly can't tell what is actually bottlenecked.
Which is once again why I compared G92 to G80, since everything favors the G92 except fillrate and bandwidth (and bandwidth yields no tangible gains on G92 or G80 anyway). Oh ya, it also happens that the G80 still outperforms the G92 at the same core clock speeds despite all its disadvantages elsewhere.

You keep giving the same old numbers again and again. That isn't dissecting anything; it's just showing mainstream people like you how it performs in games. If it's not brain surgery, you should try it yourself and tell us.
I have tried it, just as many others have. But again, it's really simple. You increase memory frequency, and by doing so increase memory bandwidth. You see no difference in performance. You raise core/shader clocks and you see actual increases.

What benchmarks did you give? 3DMark, or the FiringSquad review with the 112SP G80 beating out the 8800GT with its higher shader clock? You don't even understand that when you overclock the core you are also raising the shader clock.
Sadly, you don't even understand that the 112SP G80 is still at a massive disadvantage in shader power compared to the G92 GT, since the G92's shader core runs about 30% faster at stock speeds. And no, since the 163 drivers you can run the shader/core clocks independently on G80 and G92.

But the benchmarks I gave also show the G80 96SP and 112SP both perform similarly despite the advantage the 112SP version has over the 96SP. I also gave a benchmark that showed the same with the 112SP and 128SP G92.

That whole thread is useless unless someone raises just the shader clock, leaves the core clock intact, and tests it in games.
How would raising only the shader clock make the thread useless, when raising memory clock speeds is all that's needed to prove you have no clue what you're talking about? Again, you simply don't know what you're arguing, or you don't understand the impact different clock speeds have on actual performance.


The GTX wins because it has enough memory bandwidth to fully saturate its fillrate. It is not being bottlenecked.

Now you say the GTX's bandwidth might have something to do with it. Are you crumbling under your own ignorance?
Nope, G80 GTX bandwidth was never an issue to those who understand that extra bandwidth means nothing unless you can use it. My point was that the G92 isn't being bottlenecked by bandwidth either, which is easily tested by simply raising memory clock frequencies and seeing no tangible gains in performance. It's also easily proven by increasing only core/shader clocks, which always yields an improvement in performance (there are G92 GTSes pushing 800MHz core with memory clocks capping out around 1050MHz). If the G92 were bandwidth limited, as you seem to think, 1) memory clock increases would yield a bigger gain than shader/core increases, and 2) core/shader clock increases would yield little to no performance gain since the card would already be bottlenecked by bandwidth. Neither is true.
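That test logic can be written down directly. A rough sketch; the function name, threshold, and example numbers are illustrative, not measurements:

    # Bump one clock domain at a time and see which one the frame rate follows.
    def likely_bottleneck(fps_stock, fps_mem_oc, fps_core_oc,
                          mem_oc_frac, core_oc_frac, noise=0.02):
        # scaling efficiency = relative fps gain / relative clock increase
        mem_scaling = (fps_mem_oc / fps_stock - 1) / mem_oc_frac
        core_scaling = (fps_core_oc / fps_stock - 1) / core_oc_frac
        if mem_scaling < noise and core_scaling < noise:
            return "CPU/other limited"
        return "memory bandwidth" if mem_scaling > core_scaling else "core/shader"

    # Hypothetical runs: +18% memory barely moves fps, +12% core/shader moves it ~4%.
    print(likely_bottleneck(fps_stock=60.0, fps_mem_oc=61.0, fps_core_oc=62.5,
                            mem_oc_frac=0.18, core_oc_frac=0.12))   # -> core/shader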


Are your emotions getting in the way? The 8600GTS's texel fillrate is 10.8 Gtexels/s: 675MHz x 16 TMUs = 10,800 Mtexels/s. But again, you keep forgetting it is being bottlenecked by memory bandwidth down to about 7,600. If it did have enough bandwidth it would be as fast as an X1950 XTX instead of an X1950 Pro.

http://techreport.com/r.x/gefo...600/3dm-multi-1280.gif
Again, it's being bottlenecked in other areas before bandwidth is even an issue, not that it matters since I'm not talking about the 8600.

http://images.vnu.net/gb/inqui...-dx10-hit/fillrate.jpg

Here it is again. The 8800GT has a theoretical fillrate of 33.6 Gtexels/s. Clearly, in that graph, it can barely reach half of its theoretical fillrate.

You see, the GTX can fully utilize all of its texel fillrate. The 8800GT and the new 8800GTS cannot. That's why the G92 loses to the 8800GTX but beats the G80 GTS.
Huh? This is why we don't base performance solely on theoreticals, kiddies. The Ultra and GTX have theoretical texel fillrates of 39.2 and 36.8 Gtexels/s, respectively, compared to the 33.6 of the GT. Neither the GTX nor the Ultra comes ANYWHERE close to its theoretical max; in fact, they scale nearly identically to the GT relative to theoretical max. Feel free to measure the distance between "15000" and "20000" to figure out that all 3 are at about 50% of their theoretical max. But I guess the GTX and Ultra are bandwidth limited as well, right? LMAO. Thanks for showing us you can't even read a simple graph that you provided.
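A quick sanity check of that reading, using only the theoretical peaks quoted above (the measured bars live in the linked graph, so this just shows where ~50% of peak would sit on its axis):

    theoretical = {"8800 Ultra": 39.2, "8800 GTX": 36.8, "8800 GT": 33.6}   # Gtexels/s

    for name, peak in theoretical.items():
        half = peak * 0.5 * 1000   # Mtexels/s, the unit the graph appears to use
        print(f"{name}: peak {peak} Gtexels/s, 50% of peak ~ {half:.0f} Mtexels/s")
    # All three land between the 15000 and 20000 gridlines mentioned above, consistent
    # with every card sitting at roughly half its theoretical maximum.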

"Pixel fillrate = high performance" is out the window, as shown by the G80 GTS losing to the 8800GT.
But the G80 GTS doesn't lose to the GT when run at similar clock speeds (minimizing differences elsewhere and playing to the GTS's strength, 20 vs. 16 ROPs).

Post a benchmark instead. Why not? You can't. Let me guess, 3DMark scores, right?
Yep, I know you're going to ignore 3DMark (even though it's what you base your laughable texel fillrate/bandwidth argument on), but I did run some LOTRO tests:
    GT @650/850 (Stock 8800GT SC)
    2007-11-20 11:27:16 - lotroclient
    Frames: 7174 - Time: 120000ms - Avg: 59.783 - Min: 30 - Max: 83

    GT @650/1000
    2007-11-20 11:32:31 - lotroclient
    Frames: 7294 - Time: 120000ms - Avg: 60.783 - Min: 31 - Max: 85

    GT @675/1000
    2007-11-20 11:36:28 - lotroclient
    Frames: 7437 - Time: 120000ms - Avg: 61.975 - Min: 33 - Max: 87

    GT @700/1000
    2007-11-20 11:41:17 - lotroclient
    Frames: 7467 - Time: 120000ms - Avg: 62.225 - Min: 25 - Max: 96

    GT @729/1000
    2007-11-20 11:50:40 - lotroclient
    Frames: 7611 - Time: 120000ms - Avg: 63.425 - Min: 33 - Max: 105

    GT @729/1050 (Unstable in ATITool)
    2007-11-20 11:59:53 - lotroclient
    Frames: 7601 - Time: 120000ms - Avg: 63.342 - Min: 28 - Max: 102
I would've run more if I'd known some troll who never owned a G80 or G92 would come along and argue that bandwidth on an 8800 actually mattered.
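For anyone who wants to sanity-check the scaling, here's a trivial script using the averages above; the clock pairs are assumed to be core/memory in MHz, exactly as logged:

# Average FPS copied straight from the FRAPS logs above; clock pairs assumed
# to be core/memory in MHz. The last run was flagged unstable in ATITool.
runs = [
    (650,  850, 59.783),
    (650, 1000, 60.783),
    (675, 1000, 61.975),
    (700, 1000, 62.225),
    (729, 1000, 63.425),
    (729, 1050, 63.342),
]

def pct(new, old):
    return 100.0 * (new - old) / old

for (c0, m0, f0), (c1, m1, f1) in zip(runs, runs[1:]):
    print(f"core {c0}->{c1} ({pct(c1, c0):+.1f}%), "
          f"mem {m0}->{m1} ({pct(m1, m0):+.1f}%): avg FPS {pct(f1, f0):+.1f}%")

The +17.6% memory bump alone was worth less than 2% average FPS, while the core bumps at fixed memory kept adding frames, which is exactly what you'd expect from a card that isn't bandwidth limited.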

Oh really, you tested this on a G92? Or was it your GTX? The GTX doesn't have this problem.
Sure did; I still have my copy of QW:ET and the mouse pad that came with it (the new owner didn't want 'em). I tested enough to see it performed very similarly to my G80 GTS @ 621/1000 (same as the linked benches above), which gave me enough confidence to trade the GT plus cash for my GTX.

The only reason you bother replying is because you want to say "you don't know what you are talking about" LOL LOL LOL :thumbsdown:
Yep, which is pretty obvious in your case. :thumbsup:

 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
You can raise it separately with RivaTuner or a BIOS edit, but if you raise the core it automatically raises the SP clocks as well.
Sorry, no, that's false. You can raise the two independently, though there's some limit in place so you can't blow the ratio too far out of proportion.

Anyway, that's still irrelevant to your claim given memory bandwidth isn't changing in either scenario.

It sure does, but not as much as having a bigger texel fillrate and the memory bandwidth to saturate it.
Again, this is utter nonsense. You keep repeating these terms but you've been proven wrong time and time again. And the G92's "raw texture fillrate" isn't a factor in many modern games since they use FP rendering.

Again, you use terminology as if you were describing legacy games running on a ten-year-old Voodoo or something. You need to brush up on modern rendering, since a lot has changed since shaders were introduced around 2001.

Did you see the graphs I posted in the last thread?

Look at the bottom graph and see how the ratio of shader : texture ops is increasing.

Do you understand the significance of this?

Do you understand why this is taking games away from relying on texture fillrate and making them more reliant on shading?

Answer the question. Do you understand what the chart is showing or not?

Answer the question.

I don't think you understand even that simple chart; instead you keep parroting terms like "raw texture fillrate" and "bandwidth to saturate".

So did these people raise the memory speed as well, or did they leave the memory speed at default and raise only the core, or the shaders too?
The 3DMark scores clearly showed that a memory increase alone had the lowest impact compared to raising the other clocks separately.

The 3DMark test shows that the G92 isn't being fully utilized.
Which test? You mean the 3DMark multitexturing one, which does nothing except test multitexturing, hence making it useless for modern games that are moving away from texture-based operations?

Again I'll ask whether you understand what the chart above is showing?
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
I decided to run a quick test with my own personal 8800 GTS 512 MB.

FEAR @ 1680x1050, 16xAA/16xAF, TSAA, HQ, soft shadows on, everything on max

750/1825/1600

16 Min
30 Avg

750/1825/2200

20 Min
39 Avg

The memory clock was increased 38% and it increased performance by 25% in minimum frame rates and 30% in average frame rates.

Now, before AZN comes in here to say "See, I told you!" I'm going to be pre-emptive and say, don't bother posting. Keep in mind the settings I used are extremely bandwidth intensive... Most people will not run with those settings... So the test was slanted in the first place to give AZN the benefit of the doubt. There are indeed situations where the new 8800 GTS is bandwidth starved, but only in extreme situations with very high AA and TSAA. I will rerun the tests later with 'normal' settings. But this gives you an idea of why the 384-bit memory bus is superior to the 256-bit one in certain limited scenarios.

For most people and most game settings, you will hit the 'unplayable' frame rate BEFORE the memory bottlenecking occurs. In other words, by the time your memory speed is no longer capable of keeping up with the shaders, frame rates are already below the playable mark, which makes memory bandwidth starvation a highly unlikely scenario.
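To put some rough numbers on that bus difference, here's a small sketch; the memory clocks below are my assumption (1800 MHz effective for the GTX, 1940 MHz effective for the GTS 512), and peak bandwidth is just bus width times effective data rate divided by eight:

# Peak memory bandwidth = bus width (bits) / 8 x effective data rate.
# Stock clocks are assumed: GTX at 1800 MHz effective on a 384-bit bus,
# GTS 512 at 1940 MHz effective on a 256-bit bus.
def bandwidth_gbs(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

print(f"8800 GTX (384-bit @ 1800): {bandwidth_gbs(384, 1800):.1f} GB/s")
print(f"GTS 512  (256-bit @ 1940): {bandwidth_gbs(256, 1940):.1f} GB/s")

# My FEAR runs above: memory 1600 -> 2200 on the 256-bit bus
# (assuming those figures are effective data rates).
low, high = bandwidth_gbs(256, 1600), bandwidth_gbs(256, 2200)
print(f"GTS 512 @ 1600 -> 2200: {low:.1f} -> {high:.1f} GB/s "
      f"(+{100 * (high - low) / low:.0f}%), and avg FPS went 30 -> 39 (+30%)")

That roughly 86 GB/s vs 62 GB/s gap is the headroom the GTX has to play with at these extreme AA settings.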

Now to clarify - BFG and CHIZOW never said memory bandwidth didn't matter. They said shaders mattered more, which I agree with.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: ArchAngel777
I decided to run a quick test with my own personal 8800 GTS 512 MB.

FEAR @ 1680x1050, 16xAA/16xAF, TSAA, HQ, soft shadows on, everything on max

750/1825/1600

16 Min
30 Avg

750/1825/2200

20 Min
39 Avg

The memory clock was increased 38% and it increased performance by 25% in minimum frame rates and 30% in average frame rates.

Now, before AZN comes in here to say "See, I told you!" I'm going to be pre-emptive and say, don't bother posting. Keep in mind the settings I used are extremely bandwidth intensive... Most people will not run with those settings... So the test was slanted in the first place to give AZN the benefit of the doubt. There are indeed situations where the new 8800 GTS is bandwidth starved, but only in extreme situations with very high AA and TSAA. I will rerun the tests later with 'normal' settings. But this gives you an idea of why the 384-bit memory bus is superior to the 256-bit one in certain limited scenarios.

For most people and most game settings, you will hit the 'unplayable' frame rate BEFORE the memory bottlenecking occurs. In other words, by the time your memory speed is no longer capable of keeping up with the shaders, frame rates are already below the playable mark, which makes memory bandwidth starvation a highly unlikely scenario.

Cool, thanks for the tests. To clarify, I never said bandwidth didn't matter; I said it didn't matter if you weren't using it. Those tests clearly show that with increases to core/shader, under certain conditions you'll need a proportionate increase in bandwidth to keep up, which makes sense. I'd also be interested to see some tests with stock core/shader clocks and increases in memory bandwidth, but I know it's not exactly "fun" running benchmarks all the time when you'd rather be gaming. It does show that the G92 can be bandwidth limited, though; it will be interesting to see if NV releases a GDDR4 variant or a part with faster GDDR3 similar to the ICs found on the Ultra.
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Originally posted by: chizow

Cool, thanks for the tests. To clarify, I never said bandwidth didn't matter; I said it didn't matter if you weren't using it. Those tests clearly show that with increases to core/shader, under certain conditions you'll need a proportionate increase in bandwidth to keep up, which makes sense. I'd also be interested to see some tests with stock core/shader clocks and increases in memory bandwidth, but I know it's not exactly "fun" running benchmarks all the time when you'd rather be gaming. It does show that the G92 can be bandwidth limited, though; it will be interesting to see if NV releases a GDDR4 variant or a part with faster GDDR3 similar to the ICs found on the Ultra.


Yep, I knew exactly what you and BFG meant. It is AZN that is setting up straw man arguments like "So if you had your memory clocked at 1 MHz you are saying it would still perform fine!", which is not what anyone is saying...

I will see about rerunning some of these tests. You are right that I sort of skewed them in his favor even more by running really high core/shader clocks in the first place. If I set them to the stock 650/1625 I am sure the difference would be even smaller...
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
Azn should be checking the Bit-Tech and Behardware links I recently posted to see just how many times the GTS 512 is scoring victories over the GTX. Clearly its shader performance is allowing it to pull ahead.

Interestingly, in the Behardware review, Crysis with AA demonstrates that VRAM is the issue, but it's the amount rather than the bandwidth (i.e. the game is choking because it needs more than 512 MB).

The scores there pretty much line up with the amount of VRAM on the cards, with the 640 MB GTS coming out ahead of the 8800 GT and 8800 GTS 512.
 