9800GTX 1GB Performance Preview?


chizow

Diamond Member
Jun 26, 2001
Originally posted by: Azn
Really? No one pays attention to 3dmark? Like who? Almost every single reviewer uses 3dmark to measure performance. 3dmark is a tool for pc gamers that measures each sub-section of the card. :laugh: Actually it does reflect gaming situations, like when AA is on and off etc... It behaves the same as it would in a game.

To answer your question though, 9600gt keeps up only when AA is applied, and even then it still trails 8800gt. When AA is disabled 8800gt's lead is bigger than with AA. Notice the bottleneck. Memory Bandwidth! All that texture fillrate is useless if it doesn't have the bandwidth to use it properly. As for shaders, I guess you missed the 9600gt thread. Do a search.
Actually most reviewers use 3DMark less and less due to the obvious problems with it, and they certainly don't bother with all of the individual synthetic results like TR does. We've already talked about a few glaring examples where the results directly contradict your arguments in real-world games, like multi-texture fillrate with G80 and G92, yet the G80 still matches G92 in games. Or G92 vs. G94, where 9600GT still matches G92 in games. Or better yet, how the R600 from ATI wins in 3DMark but still can't beat any G80/G92 parts in actual games.

I think you need to review 9600GT reviews. It keeps up with 8800GT (and even G80/9800) up until higher resolutions or with AA enabled. Bandwidth isn't a factor compared to 8800GT because it has the same 256-bit bus and similarly clocked RAM at 900-1000MHz.....and it manages all this despite having half the SP and Texturing ability of 8800GT. Wonder why?

I said that about g92 because it has massive texture fillrate that could use a wider bus or more bandwidth, not about 2900xt. In some situations yes, 3870 wins because of stronger shaders, slightly more texture fillrate, etc...
I think you said it because you have no clue what you're arguing about. Once again, do you think number of memory controllers/bus width has an impact on performance when total bandwidth is not an issue? Yes or no.

Guru3d tests where 8800gtx would look good, like testing medium settings in Crysis, turning off soft shadows in FEAR, disabling soft particles in Quake Wars. Of course you handpicked benches at medium settings. You just don't understand where the performance deficit is coming from, so you blame me like I'm dumb and can't read bar graphs.

High settings in Crysis put emphasis on everything. High settings use bigger textures, better shadows, better shaders, etc. It stresses the card.
Rofl ya, except the other cards are running the same settings and in the case of QW, effects were disabled specifically because Radeon HD parts couldn't use them. But that doesn't change the fact you can't read benchmarks or make sense of anything more than synthetic 3DMark results. Also, the effects turned off in FEAR and QW, like soft shadows and particles have nothing to do with your arguments about performance with texturing and bandwidth, as those features in games stress the ROPs and shaders more than anything. They also happen to be some of the most expensive features to enable in games, with modern GPUs handling large textures and filtering gracefully. But if you actually spent time playing games rather than talking about them, you'd know turning up texture and filtering quality is much less performance expensive than any shadowing, particle or post-processing effects.

So it isn't 9800gtx. It's an overclocked 8800gts with 1 gig. There you have 8800gts 1gig beating 8800gtx in most of the benches even with AA with much lower memory bandwidth. What a surprise.
Uh, so what's a 9800GTX? It's an overclocked 8800GTS with a few power tweaks and slightly faster RAM. In this case the Gainward uses the same .8ns Samsung, so for all intents and purposes the cards are identical. And no, it's not any real surprise the Gainward G92 wins many tests, as it's only clocked 155MHz faster and closer to that 756MHz number I quoted you earlier. Oh ya, that's where its fillrate would begin to make up the 33% difference in ROPs.
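For anyone keeping score, the fillrate math behind that point looks roughly like this. A quick Python sketch, assuming the commonly published specs (24 ROPs at 575MHz for the 8800GTX, 16 ROPs for G92 at its 675MHz stock clock and at the 756MHz figure mentioned above); the helper name is just for illustration:

# Theoretical peak pixel fillrate = ROPs x core clock (illustrative, not measured).
def pixel_fillrate_gpix(rops, core_mhz):
    return rops * core_mhz / 1000.0  # Gpixels/s

cards = {
    "8800GTX (G80): 24 ROPs @ 575MHz": pixel_fillrate_gpix(24, 575),
    "9800GTX (G92): 16 ROPs @ 675MHz": pixel_fillrate_gpix(16, 675),
    "G92 overclocked: 16 ROPs @ 756MHz": pixel_fillrate_gpix(16, 756),
}
for name, rate in cards.items():
    print(f"{name}: {rate:.1f} Gpix/s")
# 13.8 vs 10.8 vs 12.1 Gpix/s -- the higher G92 core clock narrows, but does
# not close, the gap left by the missing 8 ROPs.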


No, you get an overclocked 8800gtx. And for your info, 8800ultra has 17% more bandwidth than 8800gtx, not 8%. I read that techreport review a while ago with an overclocked G80 GTS in the mix. They were testing in extremely bandwidth-limited situations at uber high resolutions with AA. That extra vram and memory bandwidth is sure kicking in, isn't it, in those extreme conditions. G80 has more bandwidth, that is why it's doing well in those extreme conditions. G80 is a weaker chip compared to G92, that is why G92 can easily beat G80 GTS with much lower bandwidth.
Uber high resolutions like Crysis at 1280 with Medium settings? Weren't you knocking another review site for using similar settings? But I guess it's OK when TR does it, right, just as long as they include detailed 3DMark results. :laugh:

The Ultra is an overclocked 8800GTX, plain and simple. It has an updated cooler and a few power tweaks but any of the OC 8800GTX will perform identically to it clock for clock, as shown in numerous reviews. G80 is clearly the faster chip clock for clock, so I really have no clue what you're talking about. It'll be pretty obvious once GT200 rolls out with similar clock speeds to G92, only with more ROPs to give it the boost in performance lacking with G92.

Why is it that 9800gtx can beat 8800gtx at modest settings even with a 33% reduction in ROPs? :brokenheart: How can that be the bottleneck when pixel fillrate is limited by memory bandwidth? For your information G92 has lower bandwidth than G80. I don't know what GTS you are talking about, but if you are talking about the G92 GTS, 9800gtx beats it. If you are talking about the G80 GTS, 9800gtx beats it.
It beats 8800GTX in some benchmarks and loses in others. That's with a 100MHz core speed increase to help close the gap in fillrate. G92 certainly benefits from some of its other enhancements, but overall it still can't beat the Ultra where it only has a 50MHz lead. So once again, clock for clock, G80 is clearly far superior to G92 and that's due to its ROPs more than anything else.


Saying crap like bandwidth makes no performance impact at lower resolutions is full of $hit. BFG already tested this on his bandwidth-happy ultra. Decreasing his memory bandwidth by 20% gave him lower performance even without AA, and much more with AA at 1600x1200. Now what would happen if he downclocked to the same GB/s as 9800gtx memory speeds? I'll tell you this much, it won't be pretty against 9800gtx. G92 is starved for bandwidth, with massive texture fillrate that sits there waiting for the bandwidth to catch up.

http://episteme.arstechnica.co...7909965/m/453004231931

Before you say something ignorant like increasing the core clocks will show big improvements, let me remind you that the core is also tied to texture clocks, and it isn't a g92 but a G80 with lower texture fillrate. :brokenheart: Too late!
LMAO! More proof you can't read benchmarks, or comprehend anything more than what you see fit.

From BFG's link:
Core: -12.64% Memory: -5.45%
Commentary
The biggest performance difference clearly comes from the core clock where some games are almost seeing a 1:1 performance delta with it. I expected it would be shader clocks making the biggest difference but clearly that isn't the case with the 8800 Ultra.

Of course texturing ability would be included with increase to core clocks, but that's not an issue when comparing to G92 since it already has the advantage over G80 with improved 1:1 TMUs. But we see with G92 that even with improved texturing units, it still can't surpass G80 without extreme core clock increases, and certainly doesn't scale nearly as well as G80. Why? Because it has 1/3rd fewer ROPs. Also notice his results mirror what I've said earlier...that performance with G80 scales linearly at nearly 1:1. You can't say the same for G92 because again, you need much faster core clocks to make up for the 1/3rd fewer ROPs which have the biggest impact on performance.

Since there is no way to test just ROP performance, I think we can rest assured it's just Chizow's fantasy for now.

G92's biggest bottleneck compared to an ultra is bandwidth. If it had the bandwidth it could overcome the ultra with AA. It can beat an ultra without AA in most situations anyway, as long as it's not some obscure settings where the extra vram and pixel fillrate make the difference. A 2fps difference with AA on 60% less bandwidth is phenomenal, when 20% less bandwidth gave BFG's Ultra 8-10% lower frame rates.

More ROPs help at uber high resolutions and with AA and do improve performance, no doubt... Anything higher will give you more performance, but once it's bottlenecked you get minimal returns, much like trying to play a game on a Pentium 3 with an 8800gt stuck in it, where the CPU limits your frame rates.

Look at 3870, which has 20% more ROP and slightly more bandwidth than 8800gt, yet it loses to 8800gt, with more GFLOPS to boot. Since you can only fit so much into a single die, a balanced card is the way to go. G80 does just that, except it costs much more money than G92. Now stick GDDR5 in G92 and it could easily outpace an ultra at its own anti-aliasing game.
No need to talk about my fantasies, I'd be content if you properly comprehended the benchmarks and arguments you present. As for the last comment about GDDR5, that was the point in showing 9800GTX compared to G92 8800GTS, that a 270MHz increase in bandwidth isn't what G92 needs the most. Don't believe me? Go check out some of the 9800GTX OC Edition Reviews (756-770MHz core, 2300MHz memory). You'll find once again what I've been saying to be true. G92 needs fillrate more than it needs bandwidth (or anything else), which it gets with extreme core clock increases.
 

chizow

Diamond Member
Jun 26, 2001
Originally posted by: Azn
I decided to do a test to show how G92 is bottlenecked by memory bandwidth more than by pixel or texel fillrate. I lowered my core clock by 24%, which reduces both my pixel and texel fillrate. I then lowered my memory clock by 24% separately to emphasize the point of this test...

I did some Crysis benches with my 8800gs... 8800gs is basically a full G92 with 1/4 of its clusters disabled. 1440x900, no AA, High settings

STOCK OC CLOCKS 729/1728/1040
37.55 fps

CORE REDUCTION 561/1728/1040
34.87 fps -7.2% difference

BANDWIDTH REDUCTION 729/1728/800

33.70 fps -10.1% difference


Conclusion. Memory bandwidth is G92 biggest bottleneck.

Yes we already know what happens when you cripple the bus/bandwidth on modern GPUs (see 8600s). Didn't you already learn that lesson with an 8600? :laugh:

My results were different, although they're more CPU limited than anything else as is the case with Crysis.

NEXT BENCH RUN- 4/17/2008 12:39:56 PM - Vista 64
Beginning Run #1 on Map-island, Demo-benchmark_gpu
DX10 1900x1200, AA=No AA, Vsync=Disabled, 64 bit test, FullScreen
Demo Loops=3, Time Of Day= 9
Global Game Quality: Custom
Custom Quality Values:
VolumetricEffects=Medium
Texture=High
ObjectDetail=High
Sound=High
Shadows=Medium
Water=Medium
Physics=Medium
Particles=Medium
Shading=Medium
PostProcessing=Medium
GameEffects=Medium


575/900
==============================================================
TimeDemo Play Started , (Total Frames: 2000, Recorded Time: 111.86s)
!TimeDemo Run 0 Finished.
Play Time: 71.97s, Average FPS: 27.79
Min FPS: 19.27 at frame 1954, Max FPS: 41.54 at frame 1007
Average Tri/Sec: -14563790, Tri/Frame: -524069
Recorded/Played Tris ratio: -1.75
!TimeDemo Run 1 Finished.
Play Time: 62.53s, Average FPS: 31.99
Min FPS: 19.27 at frame 1954, Max FPS: 42.82 at frame 984
Average Tri/Sec: -16272098, Tri/Frame: -508720
Recorded/Played Tris ratio: -1.80
!TimeDemo Run 2 Finished.
Play Time: 62.80s, Average FPS: 31.85
Min FPS: 19.12 at frame 1967, Max FPS: 42.92 at frame 992
Average Tri/Sec: -16181058, Tri/Frame: -508082
Recorded/Played Tris ratio: -1.80
TimeDemo Play Ended, (3 Runs Performed)
==============================================================
DX10 1900x1200 AA=No AA, 64 bit test, Quality: Custom ~~ Overall Average FPS: 31.92


621/900
==============================================================
TimeDemo Play Started , (Total Frames: 2000, Recorded Time: 111.86s)
!TimeDemo Run 0 Finished.
Play Time: 71.19s, Average FPS: 28.09
Min FPS: 18.72 at frame 1938, Max FPS: 43.36 at frame 1000
Average Tri/Sec: -14728736, Tri/Frame: -524287
Recorded/Played Tris ratio: -1.75
!TimeDemo Run 1 Finished.
Play Time: 62.30s, Average FPS: 32.10
Min FPS: 18.72 at frame 1938, Max FPS: 44.02 at frame 1005
Average Tri/Sec: -16334562, Tri/Frame: -508804
Recorded/Played Tris ratio: -1.80
!TimeDemo Run 2 Finished.
Play Time: 61.66s, Average FPS: 32.44
Min FPS: 18.72 at frame 1938, Max FPS: 44.02 at frame 1005
Average Tri/Sec: -16525225, Tri/Frame: -509460
Recorded/Played Tris ratio: -1.80
TimeDemo Play Ended, (3 Runs Performed)
==============================================================
DX10 1900x1200 AA=No AA, 64 bit test, Quality: Custom ~~ Overall Average FPS: 32.27



575/1000
==============================================================
TimeDemo Play Started , (Total Frames: 2000, Recorded Time: 111.86s)
!TimeDemo Run 0 Finished.
Play Time: 71.89s, Average FPS: 27.82
Min FPS: 20.88 at frame 1954, Max FPS: 40.90 at frame 1777
Average Tri/Sec: -14587433, Tri/Frame: -524365
Recorded/Played Tris ratio: -1.75
!TimeDemo Run 1 Finished.
Play Time: 62.70s, Average FPS: 31.90
Min FPS: 20.36 at frame 1978, Max FPS: 42.39 at frame 982
Average Tri/Sec: -16240110, Tri/Frame: -509135
Recorded/Played Tris ratio: -1.80
!TimeDemo Run 2 Finished.
Play Time: 62.78s, Average FPS: 31.86
Min FPS: 20.36 at frame 1978, Max FPS: 42.39 at frame 982
Average Tri/Sec: -16223292, Tri/Frame: -509263
Recorded/Played Tris ratio: -1.80
TimeDemo Play Ended, (3 Runs Performed)
==============================================================
DX10 1900x1200 AA=No AA, 64 bit test, Quality: Custom ~~ Overall Average FPS: 31.88



575/900
DX10 1680x1050 AA=No AA, 64 bit test, Quality: Custom ~~ Overall Average FPS: 34.555

621/900
DX10 1680x1050 AA=No AA, 64 bit test, Quality: Custom ~~ Overall Average FPS: 35.815

575/1000
DX10 1680x1050 AA=No AA, 64 bit test, Quality: Custom ~~ Overall Average FPS: 34.66

621/1000
DX10 1680x1050 AA=No AA, 64 bit test, Quality: Custom ~~ Overall Average FPS: 36.11

Feel free to do the math, you can see very clearly there's virtually no difference when I adjust memory clocks, but a small to significant gain when I increase core clocks. This is no different than any of the other benches I've run in the past with G80 GTS and G92 GT.
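Doing that math explicitly, here is a small Python sketch using the overall averages from the runs above (the FPS numbers are taken straight from the logs; the helper function is just for illustration):

# Percent change in clock vs. percent change in FPS, from the averages above.
def pct(new, old):
    return (new - old) / old * 100.0

runs = [
    # (label, clock delta %, new fps, baseline fps)
    ("1900x1200 core 575->621", pct(621, 575), 32.27, 31.92),
    ("1900x1200 mem  900->1000", pct(1000, 900), 31.88, 31.92),
    ("1680x1050 core 575->621", pct(621, 575), 35.815, 34.555),
    ("1680x1050 mem  900->1000", pct(1000, 900), 34.66, 34.555),
    ("1680x1050 both 621/1000", None, 36.11, 34.555),
]
for label, clk, fps, base in runs:
    gain = pct(fps, base)
    note = f" for a {clk:+.1f}% clock change" if clk is not None else ""
    print(f"{label}: {gain:+.1f}% FPS{note}")
# The ~8% core bump yields +1.1% and +3.6% FPS; the ~11% memory bump yields
# -0.1% and +0.3%.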
 

AzN

Banned
Nov 26, 2001
Originally posted by: chizow
Originally posted by: Azn
I decided to do a test to show how G92 is bottlenecked by memory bandwidth more than by pixel or texel fillrate. I lowered my core clock by 24%, which reduces both my pixel and texel fillrate. I then lowered my memory clock by 24% separately to emphasize the point of this test...

I did some Crysis benches with my 8800gs... 8800gs is basically a full G92 with 1/4 of its clusters disabled. 1440x900, no AA, High settings

STOCK OC CLOCKS 729/1728/1040
37.55 fps

CORE REDUCTION 561/1728/1040
34.87 fps -7.2% difference

BANDWIDTH REDUCTION 729/1728/800

33.70 fps -10.1% difference


Conclusion. Memory bandwidth is G92 biggest bottleneck.

Yes we already know what happens when you cripple the bus/bandwidth on modern GPUs (see 8600s). Didn't you already learn that lesson with an 8600? :laugh:

My results were different, although they're more CPU limited than anything else as is the case with Crysis.

Feel free to do the math, you can see very clearly there's virtually no difference when I adjust memory clocks, but a small to significant gain when I increase core clocks. This is no different than any of the other benches I've run in the past with G80 GTS and G92 GT.

You are confusing G80 with a G92. :laugh:

G80 has 1/3 less texture fillrate on 3dmark fillrate test than G92 no wonder it's more hungry for fillrate with all that bandwidth. :laugh:

Where's G92 benches? It doesn't work this way on G92. My 8800gs is more accurate because my card is actually a REAL G92 not G80 like your 8800gtx.
 

chizow

Diamond Member
Jun 26, 2001
Originally posted by: Azn
You are confusing G80 with a G92. :laugh:

G80 has 1/3 less texture fillrate on 3dmark fillrate test than G92 no wonder it's more hungry for fillrate with all that bandwidth. :laugh:

Where's G92 benches? It doesn't work this way on G92. My 8800gs is more accurate because my card is actually a REAL G92 not G80 like your 8800gtx.

Nope I'm not confusing G80 with G92, I've already given you the benchmarks to draw conclusions, they're just not all lined up in a simple 3DMark synthetic test. Yes G92 is less responsive to core clock increases than G80 and yes it is bandwidth limited in some cases, however, ROPs are still its greatest bottleneck as bench after bench and part after part demonstrate. Once again, take a look at G92 GTS vs. G92 GTX and G92 GTX vs. G92 GTX OC. Make sure to take note of the core and memory clock frequencies. It's pretty obvious which increase gives the largest gains.

And I did give you G92 results months ago... the first time we had this discussion.....

Yep I know you're going to ignore 3DMark (even though it's what you base your laughable texel fillrate/bandwidth argument on), but I did run some LOTRO tests:

GT @650/850 (Stock 8800GT SC)
2007-11-20 11:27:16 - lotroclient
Frames: 7174 - Time: 120000ms - Avg: 59.783 - Min: 30 - Max: 83

GT @650/1000
2007-11-20 11:32:31 - lotroclient
Frames: 7294 - Time: 120000ms - Avg: 60.783 - Min: 31 - Max: 85

GT @675/1000
2007-11-20 11:36:28 - lotroclient
Frames: 7437 - Time: 120000ms - Avg: 61.975 - Min: 33 - Max: 87

GT @700/1000
2007-11-20 11:41:17 - lotroclient
Frames: 7467 - Time: 120000ms - Avg: 62.225 - Min: 25 - Max: 96

GT @729/1000
2007-11-20 11:50:40 - lotroclient
Frames: 7611 - Time: 120000ms - Avg: 63.425 - Min: 33 - Max: 105

GT @729/1050 (Unstable in ATITool)
2007-11-20 11:59:53 - lotroclient
Frames: 7601 - Time: 120000ms - Avg: 63.342 - Min: 28 - Max: 102

Notice some things never change with your 3DMark synthetics
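The same kind of arithmetic on the LOTRO averages above (numbers pulled straight from the FRAPS lines; a rough sketch only):

# Clock change vs. FPS change for the 8800GT LOTRO runs above.
def pct(new, old):
    return (new - old) / old * 100.0

steps = [
    ("memory 850->1000 (core held at 650)", pct(1000, 850), pct(60.783, 59.783)),
    ("core 650->729 (memory held at 1000)", pct(729, 650), pct(63.425, 60.783)),
]
for label, clk, fps in steps:
    print(f"{label}: {clk:+.1f}% clock -> {fps:+.1f}% FPS")
# memory: +17.6% clock -> +1.7% FPS; core: +12.2% clock -> +4.3% FPS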
 

AzN

Banned
Nov 26, 2001
Actually most reviewers use 3DMark less and less due to the obvious problems with it, and they certainly don't bother with all of the individual synthetic results like TR does. We've already talked about a few glaring examples where the results directly contradict your arguments in real-world games, like multi-texture fillrate with G80 and G92, yet the G80 still matches G92 in games. Or G92 vs. G94, where 9600GT still matches G92 in games. Or better yet, how the R600 from ATI wins in 3DMark but still can't beat any G80/G92 parts in actual games.

I think you need to review 9600GT reviews. It keeps up with 8800GT (and even G80/9800) up until higher resolutions or with AA enabled. Bandwidth isn't a factor compared to 8800GT because it has the same 256-bit bus and similarly clocked RAM at 900-1000MHz.....and it manages all this despite having half the SP and Texturing ability of 8800GT. Wonder why?

Maybe those reviewers are cheap and couldn't afford 3dmark professional like Techreport.

G80 only matches G92 in memory limited situations.

Bandwidth and fillrate go hand in hand. You have no idea. :light:

I think you said it because you have no clue what you're arguing about. Once again, do you think number of memory controllers/bus width has an impact on performance when total bandwidth is not an issue? Yes or no.

Is this getting personal for you? Either talk about what's on topic or stop with the personal attacks. With current ddr3, a wider bus is needed on cards like G92. It would benefit tremendously.

Rofl ya, except the other cards are running the same settings and in the case of QW, effects were disabled specifically because Radeon HD parts couldn't use them. But that doesn't change the fact you can't read benchmarks or make sense of anything more than synthetic 3DMark results. Also, the effects turned off in FEAR and QW, like soft shadows and particles have nothing to do with your arguments about performance with texturing and bandwidth, as those features in games stress the ROPs and shaders more than anything. They also happen to be some of the most expensive features to enable in games, with modern GPUs handling large textures and filtering gracefully. But if you actually spent time playing games rather than talking about them, you'd know turning up texture and filtering quality is much less performance expensive than any shadowing, particle or post-processing effects.

Were Radeon cards tested in that review? :laugh: Thanks Guru3d for making the 8800gtx look good! :roll:

Uh, so what's a 9800GTX? It's an overclocked 8800GTS with a few power tweaks and slightly faster RAM. In this case the Gainward uses the same .8ns Samsung, so for all intents and purposes the cards are identical. And no, it's not any real surprise the Gainward G92 wins many tests, as it's only clocked 155MHz faster and closer to that 756MHz number I quoted you earlier. Oh ya, that's where its fillrate would begin to make up the 33% difference in ROPs.

So where are you going with this? 8800gtx gets beat.

Uber high resolutions like Crysis at 1280 with Medium settings? Weren't you knocking another review site for using similar settings? But I guess it's OK when TR does it, right, just as long as they include detailed 3DMark results.

The Ultra is an overclocked 8800GTX, plain and simple. It has an updated cooler and a few power tweaks but any of the OC 8800GTX will perform identically to it clock for clock, as shown in numerous reviews. G80 is clearly the faster chip clock for clock, so I really have no clue what you're talking about. It'll be pretty obvious once GT200 rolls out with similar clock speeds to G92, only with more ROPs to give it the boost in performance lacking with G92.

There we go with medium settings, where the card is stressed less. The only time G80 GTS wins is with AA. Look at the bench with no AA. 8800gt wins.

The Ultra uses better ram and handpicked cores that run at higher frequencies. It's not an overclocked 8800gtx. G80 is faster where bandwidth has the upper hand; otherwise it's a weaker chip compared to G92.

It beats 8800GTX in some benchmarks and loses in others. That's with a 100MHz core speed increase to help close the gap in fillrate. G92 certainly benefits from some of its other enhancements, but overall it still can't beat the Ultra where it only has a 50MHz lead. So once again, clock for clock, G80 is clearly far superior to G92 and that's due to its ROPs more than anything else.

9800gtx is actually a bit better than 8800gtx. It wins more benches against 8800gtx than it loses. 8800gtx wins only in some ridiculous bandwidth-limited situations. Now if you are trying to compare clock for clock, 9800gtx would prevail if it had the same bandwidth, as long as it's not some ridiculous resolution where pixel fillrate prevails.

LMAO! More proof you can't read benchmarks, or comprehend anything more than what you see fit.

Is that another personal attack? :roll:

Of course texturing ability would be included with increase to core clocks, but that's not an issue when comparing to G92 since it already has the advantage over G80 with improved 1:1 TMUs. But we see with G92 that even with improved texturing units, it still can't surpass G80 without extreme core clock increases, and certainly doesn't scale nearly as well as G80. Why? Because it has 1/3rd fewer ROPs. Also notice his results mirror what I've said earlier...that performance with G80 scales linearly at nearly 1:1. You can't say the same for G92 because again, you need much faster core clocks to make up for the 1/3rd fewer ROPs which have the biggest impact on performance.

Why isn't it an issue when texture fillrate improves performance? :light: It doesn't scale as well as G80 because it has lower bandwidth.


No need to talk about my fantasies, I'd be content if you properly comprehended the benchmarks and arguments you present. As for the last comment about GDDR5, that was the point in showing 9800GTX compared to G92 8800GTS, that a 270MHz increase in bandwidth isn't what G92 needs the most. Don't believe me? Go check out some of the 9800GTX OC Edition Reviews (756-770MHz core, 2300MHz memory). You'll find once again what I've been saying to be true. G92 needs fillrate more than it needs bandwidth (or anything else), which it gets with extreme core clock increases.

That's really 260mhz ddr. That just isn't enough to tap 9800gtx's massive texturing ability. With a minimal core clock and SP increase, 9800gtx was able to gain 5% over 8800gts 512. This just proves it needs all the bandwidth it can get. Your link doesn't even have anything that proves your fantasy.
 

AzN

Banned
Nov 26, 2001
Originally posted by: chizow
Originally posted by: Azn
You are confusing G80 with a G92. :laugh:

G80 has 1/3 less texture fillrate on 3dmark fillrate test than G92 no wonder it's more hungry for fillrate with all that bandwidth. :laugh:

Where's G92 benches? It doesn't work this way on G92. My 8800gs is more accurate because my card is actually a REAL G92 not G80 like your 8800gtx.

Nope I'm not confusing G80 with G92, I've already given you the benchmarks to draw conclusions, they're just not all lined up in a simple 3DMark synthetic test. Yes G92 is less responsive to core clock increases than G80 and yes it is bandwidth limited in some cases, however, ROPs are still its greatest bottleneck as bench after bench and part after part demonstrate. Once again, take a look at G92 GTS vs. G92 GTX and G92 GTX vs. G92 GTX OC. Make sure to take note of the core and memory clock frequencies. It's pretty obvious which increase gives the largest gains.

And I did give you G92 results months ago... the first time we had this discussion.....

Yep I know you're going to ignore 3DMark (even though it's what you base your laughable texel fillrate/bandwidth argument on), but I did run some LOTRO tests:

GT @650/850 (Stock 8800GT SC)
2007-11-20 11:27:16 - lotroclient
Frames: 7174 - Time: 120000ms - Avg: 59.783 - Min: 30 - Max: 83

GT @650/1000
2007-11-20 11:32:31 - lotroclient
Frames: 7294 - Time: 120000ms - Avg: 60.783 - Min: 31 - Max: 85

GT @675/1000
2007-11-20 11:36:28 - lotroclient
Frames: 7437 - Time: 120000ms - Avg: 61.975 - Min: 33 - Max: 87

GT @700/1000
2007-11-20 11:41:17 - lotroclient
Frames: 7467 - Time: 120000ms - Avg: 62.225 - Min: 25 - Max: 96

GT @729/1000
2007-11-20 11:50:40 - lotroclient
Frames: 7611 - Time: 120000ms - Avg: 63.425 - Min: 33 - Max: 105

GT @729/1050 (Unstable in ATITool)
2007-11-20 11:59:53 - lotroclient
Frames: 7601 - Time: 120000ms - Avg: 63.342 - Min: 28 - Max: 102

Notice some things never change with your 3DMark synthetics

You are confused alright. You know why it's less responsive to the core? Because it just doesn't have the bandwidth. :music:

What benchmark is that? What resolution? AA? Most of your tests are core benchmarks. Of course core improvements will show an increase. Did my crysis benchmark not show an increase? Could it be more shader intensive and pixel hungry?

My benchmark is more accurate. WHY? Because I leave the SP clocks alone. You, however, don't mention your SP clocks. Also, when you consider that the texture clock is also being raised, you have to account for that as well.
 

chizow

Diamond Member
Jun 26, 2001
Originally posted by: Azn
You are confused alright. You know why it's less responsive to the core? Because it just doesn't have the bandwidth. :music:
Uh no. It's less responsive to core clock changes than G80, but it still clearly benefits more from core clock increases than from memory bandwidth increases despite the fact it has less bandwidth than G80, which again shows ROP/fillrate is its greatest bottleneck.

What benchmark is that? What resolution? AA? Most of your tests are core benchmarks. Of course core improvements will show an increase. Did my crysis benchmark not show an increase? Could it be more shader intensive and pixel hungry?
It's an in-game sample from LOTRO using a 2 minute time demo from Thorin's Hall travel route to Edhelion. Travel route and camera angle are scripted by the game engine so it's fully automated. Benchmark was done at 1920x1200, no AA, 16x AF, with Ultra High settings in DX10, DX10 dynamic shadows off. My results very clearly show little increase in performance when memory bandwidth is increased from 850 to 1000 but consistent gains when core clock is increased. The one test I would've done back then is 729/850 to further show bandwidth doesn't impact performance as much as core clock increases. And yes LOTRO is "shader intensive pixel hungry", as it benefits more from both core and shader clock increases than memory bandwidth with its heavy use of particle effects, post-processing, bloom, and dynamic shadows.

My benchmark is more accurate. WHY? Because I leave the SP clocks alone. You, however, don't mention your SP clocks. Also, when you consider that the texture clock is also being raised, you have to account for that as well.
No, your benchmark cripples an already crippled card even further. I leave shaders linked as there's simply no reason not to, as numerous tests have shown time and again shader speed (or number) has little impact on performance until you fall below a certain threshold (64 in tests I've seen). Regardless, any increase/decrease in shader would impact core/memory significance equally. So what if texture clocks are being raised, it's still not enough to match G92 as your handy 3DMark synthetics show. There's no advantage gained relative to G92 which already sports far superior synthetic numbers. Yet G92 still can't match G80 clock for clock. Wonder why?
 

chizow

Diamond Member
Jun 26, 2001
Originally posted by: Azn
Maybe those reviewers are cheap and couldn't afford 3dmark professional like Techreport.

G80 only matches G92 in memory limited situations.

Bandwidth and fillrate go hand in hand. You have no idea. :light:
Once again, you dodge the question. You said G94/9600GT was bandwidth limited but it has the same bus size and similar memory speeds to the rest of the G92 family. It has half the shading and texturing ability, yet it still remains competitive to full G92 parts. Same with G92 GT compared to G92 GTS. Why? Because it has the same number of ROPs.

And no one bothers with detailed 3DMark synthetics because it's a waste of press space. 3DMark has already come under fire for favoring ATI parts when those results don't play out in real world game performance.

Is this getting personal for you? Either talk about what's on topic or stop with the personal attacks. With current ddr3, a wider bus is needed on cards like G92. It would benefit tremendously.
No, I'm merely pointing out my observation that you have no clue what you're arguing about. You still haven't answered the question. If you have a 512-bit bus and 500MHz memory, do you think that's faster than a 256-bit bus and 1000MHz memory? That's what you're arguing. You said "wider bus is just better" when bandwidth isn't a concern, when you have no proof of that. It's no wonder that you somehow think a 320-bit bus @ 800MHz is better than a 256-bit bus @ 1100MHz, which is exactly the numbers we're seeing with G80 GTS compared to G92 GTX.
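The arithmetic behind that question, for reference: bandwidth is just bus width times effective (double-pumped) memory clock. A short Python sketch with the figures quoted in this exchange:

# Bandwidth (GB/s) = bus width in bytes * effective transfer rate.
# GDDR3 is double data rate, so effective rate = 2 * memory clock.
def bandwidth_gbs(bus_bits, mem_mhz):
    return (bus_bits / 8) * (2 * mem_mhz) / 1000.0

for label, bus, mem in [
    ("512-bit @ 500MHz", 512, 500),
    ("256-bit @ 1000MHz", 256, 1000),
    ("320-bit @ 800MHz (G80 GTS)", 320, 800),
    ("256-bit @ 1100MHz (G92 GTX)", 256, 1100),
]:
    print(f"{label}: {bandwidth_gbs(bus, mem):.1f} GB/s")
# The first two both come to 64.0 GB/s, and 320-bit @ 800MHz is also 64.0 GB/s
# versus 70.4 GB/s for 256-bit @ 1100MHz -- the narrower bus actually carries more.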

Were Radeon cards tested in that review? :laugh: Thanks Guru3d for making the 8800gtx look good! :roll:
I could link any review from any site, but you wouldn't be able to understand or comprehend it. Why bother?

So where are you going with this? 8800gtx gets beat.
Except in reviews that aren't from search engines or 3DMark synthetics, in which case you ignore them.

There we go with medium settings, where the card is stressed less. The only time G80 GTS wins is with AA. Look at the bench with no AA. 8800gt wins.

The Ultra uses better ram and handpicked cores that run at higher frequencies. It's not an overclocked 8800gtx. G80 is faster where bandwidth has the upper hand; otherwise it's a weaker chip compared to G92.
Rofl....handpicked cores...that are identical other than the fact that they're *handpicked*. Faster RAM that increases memory clocks that were less necessary to begin with due to the wider 384-bit bus. Once again, look up reviews showing GTX OC vs. Ultra and you will see that clock for clock they perform identically. There is nothing magical about the Ultra, it's an OC'd GTX.

And the point wasn't about medium settings, it was to show that the G80 GTS performs very differently once its core clock is raised similarly to what its compared against instead of the 500/513MHz speeds many reviewers tested it at compared to G92 GT.

9800gtx is actually a bit better than 8800gtx. It wins more benches against 8800gtx than it loses. 8800gtx wins only in some ridiculous bandwidth-limited situations. Now if you are trying to compare clock for clock, 9800gtx would prevail if it had the same bandwidth, as long as it's not some ridiculous resolution where pixel fillrate prevails.
I'd take my chances on that. Reduce the GTX memory clockspeed to 733MHz to match the 9800GTX @2200MHz, but downclock the 9800GTX to 575MHz. Linked shaders for both so it'd be 8800GTX @ 575/733 and 9800GTX @ 575/1100. I went ahead and ran a few benches:

575/600 (simulates 256-bit bus, 900MHz RAM)
DX10 1900x1200 AA=No AA, 64 bit test, Quality: Custom ~~ Overall Average FPS: 30.73

575/733 (matches bandwidth to 9800GTX @ 256-bit/1100MHz)
DX10 1900x1200 AA=No AA, 64 bit test, Quality: Custom ~~ Overall Average FPS: 30.965

575/900 (stock GTX benched earlier)
DX10 1900x1200 AA=No AA, 64 bit test, Quality: Custom ~~ Overall Average FPS: 31.92

Now you can see the difference a 20-33% reduction in bandwidth has on G80. Now how do you think a G92 GTX will do when you lower its core clock by 100MHz to 575MHz? That's why I know G80 is superior clock for clock and isn't bandwidth limited.
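For clarity on why those downclocks stand in for a 256-bit card, the same GB/s formula applied to the 384-bit G80 (a sketch, not a benchmark):

# Why 575/600 and 575/733 simulate 256-bit configurations on a 384-bit card.
def bandwidth_gbs(bus_bits, mem_mhz):
    return (bus_bits / 8) * (2 * mem_mhz) / 1000.0

print(f"384-bit @  600MHz: {bandwidth_gbs(384, 600):.1f} GB/s")   # 57.6
print(f"256-bit @  900MHz: {bandwidth_gbs(256, 900):.1f} GB/s")   # 57.6 (8800GT-class)
print(f"384-bit @  733MHz: {bandwidth_gbs(384, 733):.1f} GB/s")   # ~70.4
print(f"256-bit @ 1100MHz: {bandwidth_gbs(256, 1100):.1f} GB/s")  # 70.4 (9800GTX)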

Is that another personal attack? :roll:
No, it's an observation. You linked a reference and once again decided to selectively pick and choose what suited your argument when in reality the link substantiated everything I said. Not only that, but it also clearly shows memory bandwidth has the LEAST impact of the 3 variables, core, shader and memory clock, with AA or without.

Why isn't it an issue when texture fillrate improves performance? :light: It doesn't scale as well as G80 because it has lower bandwidth.
It's not an issue because G92 already has the edge in texture fillrate as your 3DMark synthetics show, however you keep ducking the answer of how the 9600 manages to perform similarly despite half the texturing power. Any increase in core clock only further emphasizes an advantage G92 holds over G80, so any increase to G80 would be insignificant. This would be similar to me complaining about G92 memory speed increases knowing G80 already holds an advantage. I know it's complicated, feel free to read it over a few times. :light:

That's really 260mhz ddr. That just isn't enough to tap 9800gtx's massive texturing ability. With a minimal core clock and SP increase, 9800gtx was able to gain 5% over 8800gts 512. This just proves it needs all the bandwidth it can get. Your link doesn't even have anything that proves your fantasy.
Which is still ~12% increase in memory bandwidth, the area G92 was supposed to be most starved. Yet the difference in performance is much closer to the 4% difference in core clock. That's even further emphasized with the benchmark I linked with the 9800 OCX where the difference in core clock is 12% (756 vs 675) and memory clock is 4.5% (2300 vs 2200) and the performance differences are closer to the 12% difference in core clock.
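The percentages in that last comparison, spelled out (clocks as quoted above: 675 vs. 756MHz core, 2200 vs. 2300MHz effective memory):

# Relative clock differences between the stock 9800GTX and the OC edition.
def pct(new, old):
    return (new - old) / old * 100.0

print(f"core  675 -> 756MHz:  {pct(756, 675):+.1f}%")   # +12.0%
print(f"mem  2200 -> 2300MHz: {pct(2300, 2200):+.1f}%")  # +4.5%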
 

AzN

Banned
Nov 26, 2001
Uh no. It's less responsive to core clock changes than G80, but it still clearly benefits more from core clock increases than from memory bandwidth increases despite the fact it has less bandwidth than G80, which again shows ROP/fillrate is its greatest bottleneck.

Uh yes. It's less responsive because it's limited by bandwidth. 70 GB/s isn't enough for a card with 50% more texture fillrate than 8800 ultra, which has 103 GB/s of bandwidth. AA uses a certain amount of bandwidth, which the Ultra has plenty of, and that's why it is able to beat 9800gtx with AA.

It's an in-game sample from LOTRO using a 2 minute time demo from Thorin's Hall travel route to Edhelion. Travel route and camera angle are scripted by the game engine so it's fully automated. Benchmark was done at 1920x1200, no AA, 16x AF, with Ultra High settings in DX10, DX10 dynamic shadows off. My results very clearly show little increase in performance when memory bandwidth is increased from 850 to 1000 but consistent gains when core clock is increased. The one test I would've done back then is 729/850 to further show bandwidth doesn't impact performance as much as core clock increases. And yes LOTRO is "shader intensive pixel hungry", as it benefits more from both core and shader clock increases than memory bandwidth with its heavy use of particle effects, post-processing, bloom, and dynamic shadows.

When the memory was downclocked it dropped 1fps on average. Perhaps you should have used AA in the benchmark, since the Ultra really beats a 9800gtx with AA, not really without AA. So the game is SP and pixel hungry, which you handpicked again.

No, your benchmark cripples an already crippled card even further. I leave shaders linked as there's simply no reason not to, as numerous tests have shown time and again shader speed (or number) has little impact on performance until you fall below a certain threshold (64 in tests I've seen). Regardless, any increase/decrease in shader would impact core/memory significance equally. So what if texture clocks are being raised, it's still not enough to match G92 as your handy 3DMark synthetics show. There's no advantage gained relative to G92 which already sports far superior synthetic numbers. Yet G92 still can't match G80 clock for clock. Wonder why?

No, my benchmark is not crippled and neither is my card. It's a g92 on a smaller scale. My card is exactly 75% of a FULL G92, so it should scale exactly like a full G92. Why?

My card:
12 ROP
96 SP
192 bit

FULL g92
16 ROP
128 SP
256 bit

You do the math...
75% of the rop
75% of the SP and TMU
75% of the bandwidth

What are you talking about? Shader clocks give huge improvements depending on the game. I don't know what 3 year old engine you've been testing. Without AA, SP helps as much as core (both pixel and texture) depending on the game, as shown by BFG's benches. Crysis is SP hungry especially at high settings. Perhaps you should do medium setting benches instead. :laugh:

I can test more games for you if you'd like... oh, my card is crippled, that's right. :roll:
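Spelling out the "75% of a full G92" ratios from the list above (unit counts as listed; just a sketch of the scaling argument):

# 8800GS vs. a full G92, using the counts listed above.
gs  = {"ROPs": 12, "SPs": 96,  "bus (bits)": 192}
g92 = {"ROPs": 16, "SPs": 128, "bus (bits)": 256}

for key in gs:
    print(f"{key}: {gs[key]}/{g92[key]} = {gs[key] / g92[key]:.0%}")
# Every unit count (and therefore the bandwidth at equal memory clocks) is
# scaled by the same 75% factor, which is the basis of the claim above.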
 

AzN

Banned
Nov 26, 2001
Once again, you dodge the question. You said G94/9600GT was bandwidth limited but it has the same bus size and similar memory speeds to the rest of the G92 family. It has half the shading and texturing ability, yet it still remains competitive to full G92 parts. Same with G92 GT compared to G92 GTS. Why? Because it has the same number of ROPs.

And no one bothers with detailed 3DMark synthetics because it's a waste of press space. 3DMark has already come under fire for favoring ATI parts when those results don't play out in real world game performance.

When did I say 9600gt was bandwidth limited again? Show me where. G94 has much lower texture fillrate. It's a perfect combination; that is why it's able to keep up with a 8800gt in most instances where bandwidth comes into play.

http://techreport.com/r.x/geforce-9800gtx/3dm-multi.gif

Look at this graph.

9600gt =16192
9800gtx=26490

Now it seems to me the card is being bottlenecked by bandwidth, since 9800gtx should have double the fillrate of 9600gt but it's missing 35% of its texture fillrate, since it's exactly the same architecture except 9800gtx has twice as many TMUs.
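For context, the theoretical peaks behind those two 3DMark scores, assuming the commonly published TMU counts and core clocks (32 TMUs at 650MHz for the 9600gt, 64 TMUs at 675MHz for the 9800gtx); those specs are assumptions on my part, not numbers taken from the review:

# Measured 3DMark multitexture score vs. theoretical peak (TMUs x core clock).
def texel_peak(tmus, core_mhz):
    return tmus * core_mhz  # Mtexels/s

measured = {"9600gt": 16192, "9800gtx": 26490}   # scores quoted above
peak = {"9600gt": texel_peak(32, 650),           # 20800
        "9800gtx": texel_peak(64, 675)}          # 43200

for card in measured:
    print(f"{card}: {measured[card]} measured vs {peak[card]} peak "
          f"({measured[card] / peak[card]:.0%} of theoretical)")
# The 9800gtx lands much further below its paper peak than the 9600gt does,
# which is the gap being argued over here.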

That's why techreport is a better website than Guru3D, which tells its readers they turned some settings off because of Radeon cards when there weren't any Radeon cards. They give you detailed information, unlike those hardware sites that put emphasis on the 3dmark score only.

No, I'm merely pointing out my observation that you have no clue what you're arguing about. You still haven't answered the question. If you have a 512-bit bus and 500MHz memory, do you think that's faster than a 256-bit bus and 1000MHz memory? That's what you're arguing. You said "wider bus is just better" when bandwidth isn't a concern, when you have no proof of that. It's no wonder that you somehow think a 320-bit bus @ 800MHz is better than a 256-bit bus @ 1100MHz, which is exactly the numbers we're seeing with G80 GTS compared to G92 GTX.

Actually I do know what I'm talking about. You are just confused about why the 8800gtx performs well with AA but gets beat by 9800gtx when the 9800gtx isn't deprived of bandwidth.

I personally think a wider bus is actually better, and I also think more ROPs beat fewer ROPs clocked higher to match pixel performance, not to mention a wider bus improves pixel performance like it did for 2900xt. Would you pick a 4-cylinder turbocharged engine that has 200HP, or would you pick a v8 engine with 200HP, if the weight of the car is the same and you are driving for performance and consistency?

I could link any review from any site, but you wouldn't be able to understand or comprehend it. Why bother?

Really, were there Radeon cards in that review or not? Answer the question! You didn't think it was strange GURU3D mentioned that, and you ate it up? Oh that's right, you handpicked benches where 8800gtx looks good because you are a proud owner of a 8800gtx.

Except in reviews that aren't from search engines or 3DMark synthetics, in which case you ignore them.

What benches did you see? in Xbit review that overclocked 8800gts 1gig was whooping the 8800gtx.

Rofl....handpicked cores...that are identical other than the fact that they're *handpicked*. Faster RAM that increases memory clocks that were less necessary to begin with due to the wider 384-bit bus. Once again, look up reviews showing GTX OC vs. Ultra and you will see that clock for clock they perform identically. There is nothing magical about the Ultra, it's an OC'd GTX.

And the point wasn't about medium settings, it was to show that the G80 GTS performs very differently once its core clock is raised similarly to what its compared against instead of the 500/513MHz speeds many reviewers tested it at compared to G92 GT.

Better yield cores is what I really meant to say. Does your 8800gtx overclock to ultra core clocks? Is that why you have it at 600mhz and not 612mhz. I'm sure ultra can easily do more than stock as well. :light:

Medium settings don't stress the card. Either way 8800gts was still beaten in crysis @ medium settings. Only with AA did 8800gts's bandwidth prevail.

I'd take my chances on that. Reduce the GTX memory clockspeed to 733MHz to match the 9800GTX @2200MHz, but downclock the 9800GTX to 575MHz. Linked shaders for both so it'd be 8800GTX @ 575/733 and 9800GTX @ 575/1100. I went ahead and ran a few benches:

575/600 (simulates 256-bit bus, 900MHz RAM)
DX10 1900x1200 AA=No AA, 64 bit test, Quality: Custom ~~ Overall Average FPS: 30.73

575/733 (matches bandwidth to 9800GTX @ 256-bit/1100MHz)
DX10 1900x1200 AA=No AA, 64 bit test, Quality: Custom ~~ Overall Average FPS: 30.965

575/900 (stock GTX benched earlier)
DX10 1900x1200 AA=No AA, 64 bit test, Quality: Custom ~~ Overall Average FPS: 31.92

Now you can see the difference a 20-33% reduction in bandwidth has on G80. Now how do you think a G92 GTX will do when you lower its core clock by 100MHz to 575MHz? That's why I know G80 is superior clock for clock and isn't bandwidth limited.

Also you have to leave the SP clocks @ 1350mhz and 9800gtx has to be tested @ 1688mhz.

custom quality??? Either High or Very High and resolution of 1600x1200 where pixel performance doesn't have advantage. No custom benches trying to make your 8800gtx look good either.

Too bad you don't have a 9800gtx though. So your G80 benches don't mean squat.


No, it's an observation. You linked a reference and once again decided to selectively pick and choose what suited your argument when in reality the link substantiated everything I said. Not only that, but it also clearly shows memory bandwidth has the LEAST impact of the 3 variables, core, shader and memory clock, with AA or without.

You keep saying I can't read a benchmark. You are implying I'm stupid and can't read numbers. That's a personal attack. Either prove g92 is being bottlenecked by pixel performance or don't tell me I can't read a freakin' number...

It's not an issue because G92 already has the edge in texture fillrate as your 3DMark synthetics show, however you keep ducking the answer of how the 9600 manages to perform similarly despite half the texturing power. Any increase in core clock only further emphasizes an advantage G92 holds over G80, so any increase to G80 would be insignificant. This would be similar to me complaining about G92 memory speed increases knowing G80 already holds an advantage. I know it's complicated, feel free to read it over a few times.

Fillrate gets limited by bandwidth. It is a simple concept you can't even understand. This was already discussed by Scott Wasson of Techreport.

Which is still ~12% increase in memory bandwidth, the area G92 was supposed to be most starved. Yet the difference in performance is much closer to the 4% difference in core clock. That's even further emphasized with the benchmark I linked with the 9800 OCX where the difference in core clock is 12% (756 vs 675) and memory clock is 4.5% (2300 vs 2200) and the performance differences are closer to the 12% difference in core clock.

Performance increase isn't linear. A 12% increase in memory clocks alone doesn't mean a 12% improvement in frame rates. Please get a clue. Considering that when you raise core clocks you're raising pixel, texture, and sp together, I don't doubt the performance increase is bigger in a pixel and shader heavy game. Your whole argument still doesn't prove that rop performance is G92's biggest bottleneck over G80.
 

chizow

Diamond Member
Jun 26, 2001
Originally posted by: Azn
When did I say 9600gt was bandwidth limited again? Show me where. G94 has much lower texture fillrate. It's a perfect combination; that is why it's able to keep up with a 8800gt in most instances where bandwidth comes into play.

http://techreport.com/r.x/geforce-9800gtx/3dm-multi.gif

Look at this graph.

9600gt =16192
9800gtx=26490

Now it seems to me the card is being bottlenecked by bandwidth, since 9800gtx should have double the fillrate of 9600gt but it's missing 35% of its texture fillrate, since it's exactly the same architecture except 9800gtx has twice as many TMUs.

That's why techreport is a better website than Guru3D, which tells its readers they turned some settings off because of Radeon cards when there weren't any Radeon cards. They give you detailed information, unlike those hardware sites that put emphasis on the 3dmark score only.
You said:

To answer your question though, 9600gt keeps up only when AA is applied, and even then it still trails 8800gt. When AA is disabled 8800gt's lead is bigger than with AA. Notice the bottleneck. Memory Bandwidth! All that texture fillrate is useless if it doesn't have the bandwidth to use it properly. As for shaders, I guess you missed the 9600gt thread. Do a search.

You must really hate how you can't explain away the 9600GT lol. Once again, 9600GT has 1/2 the shading and texturing ability of full G92 parts, yet it performs comparably.

Actually I do know what I'm talking about. You are just confused about why the 8800gtx performs well with AA but gets beat by 9800gtx when the 9800gtx isn't deprived of bandwidth.

I personally think a wider bus is actually better, and I also think more ROPs beat fewer ROPs clocked higher to match pixel performance, not to mention a wider bus improves pixel performance like it did for 2900xt. Would you pick a 4-cylinder turbocharged engine that has 200HP, or would you pick a v8 engine with 200HP, if the weight of the car is the same and you are driving for performance and consistency?
So you agree G80 is the superior chip. We already know that.

And there's no evidence a wider bus is actually better when bandwidth is not an issue, as 2900XT and 3870 show us with a 512-bit bus and 105GB/s compared to a 256-bit bus and 70GB/s. Your HP comparison is flawed, since you're comparing top speed. If a 4-cylinder turbo did 0-60 faster than a V8 that would be a relevant comparison since we're not looking at maximum speed (or bandwidth in the example).
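Those bandwidth figures, worked out (assuming the reference memory clocks of roughly 828MHz GDDR3 on the 2900XT and 1125MHz GDDR4 on the 3870):

# Bandwidth comparison for the 2900XT vs. 3870 example above.
def bandwidth_gbs(bus_bits, mem_mhz):
    return (bus_bits / 8) * (2 * mem_mhz) / 1000.0

print(f"2900XT: 512-bit @  828MHz -> {bandwidth_gbs(512, 828):.0f} GB/s")   # ~106
print(f"3870:   256-bit @ 1125MHz -> {bandwidth_gbs(256, 1125):.0f} GB/s")  # ~72
# Roughly the 105 vs 70 GB/s quoted above: the 3870 halves the bus width yet
# games don't fall apart, which is the point being made.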

Really, were there Radeon cards in that review or not? Answer the question! You didn't think it was strange GURU3D mentioned that, and you ate it up? Oh that's right, you handpicked benches where 8800gtx looks good because you are a proud owner of a 8800gtx.
Rofl? You obviously don't understand Guru3D carries benchmark settings from one review to another as much as possible, that's the point of using a bench suite, so that you can actually compare benches from one review to another and gain a relative idea of overall performance. But we already know you have enough trouble comparing results within a benchmark, much less drawing conclusions comparing results over multiple reviews.

What benches did you see? in Xbit review that overclocked 8800gts 1gig was whooping the 8800gtx.
And that bench only proves my point, that G92 is most bottlenecked by ROPs. It took an 80MHz or 12% core increase to convincingly beat the 8800GTX, which still keeps up in many benchmarks regardless.

Better yield cores is what I really meant to say. Does your 8800gtx overclock to ultra core clocks? Is that why you have it at 600mhz and not 612mhz. I'm sure ultra can easily do more than stock as well. :light:
That's irrelevant, it's the same core architecture, just as G92 GTS is the same as G92 GTX. There's nothing magical about it, period. And yes my 8800GTX core/shader can overclock to Ultra speeds in every game but COD4, which is known to be more sensitive to overclocks. I can't get memory to Ultra speeds but as bench after bench has shown, memory bandwidth is the least important factor on G80.

And how can you be sure Ultra can easily overclock beyond stock as well? You clearly don't own one. From what I've seen from user reports, all A3 G80 tend to cap out between 650-675MHz, regardless of whether it's an Ultra or GTX. One thing is obvious with factory "overclocked" parts: they tend to yield lower % overclocks than comparable stock parts.....because the factory already ate up that overhead in the overclock.

Medium settings don't stress the card. Either way 8800gts was still beaten in crysis @ medium settings. Only with AA did 8800gts's bandwidth prevail.
You don't even know what stresses a GPU in games, but there are other games out there besides Crysis.

Also you have to leave the SP clocks @ 1350mhz and 9800gtx has to be tested @ 1688mhz.
Wouldn't matter to me if you kept SP at 1688, 9800GTX already has massive SP advantage and does nothing with it even at stock speeds.

custom quality??? Either High or Very High and resolution of 1600x1200 where pixel performance doesn't have advantage. No custom benches trying to make your 8800gtx look good either.

Too bad you don't have a 9800gtx though. So your G80 benches don't mean squat.
Except I'm running 1920x1200 and not 1600x1200. I could just as easily run High or Very High but 1) I don't play at those settings because they're not playable and 2) they'll only stress shaders and ROPs more than bandwidth. But I already showed you what custom settings were being run, here's the actual cvars:

con_restricted=0

d3d9_TripleBuffering=1
r_ssao_quality=1
r_ssao_amount=0.4
r_SSAO_darkening=1.3
r_TerrainAO_FadeDist=1
r_HDRlevel=1
r_TexturesStreaming=0
r_ColorGradingDOF=1
r_ShadowJittering=1.5
r_ShadowBlur=3.0
e_gsm_lods_num=5
e_shadows_from_terrain_in_all_lods=0
r_UseEdgeAA=2
e_shadows_max_texture_size=768
e_view_dist_ratio=80
e_particles_lod=0.7
e_vegetation_min_size=1.5
e_view_dist_ratio_vegetation=48
r_sunshafts=1
e_water_ocean_fft=1
e_detail_materials_view_dist_xy=4096
e_detail_materials_view_dist_z=256
r_UsePOM=1
e_lod_ratio=8
e_terrain_lod_ratio=0.6
e_vegetation_sprites_distance_ratio=1.7
r_GeomInstancing=1
e_vegetation_static_instancing=1
e_particles_thread=1
e_cull_veg_activation=70
e_max_entity_lights=20
es_MaxPhysDist=300
es_MaxPhysDistInvisible=35
r_BeamsMaxSlices=250
r_DetailDistance=12
r_TexturesStreaming=0

r_EyeAdaptationBase=0.15
e_precache_level=1

The results are very clear although they require deductive reasoning skills beyond reading simple 3DMark graphs. It's obvious G80 is NOT most bottlenecked by bandwidth, which is a point I've made continuously. I already knew memory bandwidth was excessive on both G80 GTS and GTX, I just didn't think I could get down to 600MHz (256-bit-equivalent bandwidth) with so little adverse effect.

So we know with modern GPUs 128-bit is not enough (8600) and 384-bit is too much (G80). I'm sure NV picked up on this as well and cut bus width back to 256-bit to cut costs, but in doing so they also had to cut ROPs since they're linked to memory controllers. End result is the situation we have now, where G92 is a cheaper die shrink of G80 but offers similar performance despite core enhancements and clock speed increases. Good for the masses, but bad for the high-end enthusiast.
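A sketch of why the bus cut drags the ROP count down with it: on G80/G92 the ROPs are grouped into partitions, each tied to a 64-bit memory channel (4 ROPs per partition is the commonly cited figure; treat this as an illustration of the idea rather than a die diagram):

# ROP partitions are tied to 64-bit memory channels on G80/G92.
ROPS_PER_PARTITION = 4
CHANNEL_WIDTH_BITS = 64

def config(partitions):
    return {"bus_bits": partitions * CHANNEL_WIDTH_BITS,
            "rops": partitions * ROPS_PER_PARTITION}

print("G80 (6 partitions):", config(6))  # {'bus_bits': 384, 'rops': 24}
print("G92 (4 partitions):", config(4))  # {'bus_bits': 256, 'rops': 16}
# Dropping from 6 partitions to 4 gives the 256-bit bus -- and takes 8 ROPs with it.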

You keep saying I can't read a benchmark. You're implying I'm stupid and can't read numbers. That's a personal attack. Either prove g92 is being bottlenecked by pixel performance or don't tell me I can't read a freakin' number...
I've already shown multiple examples, but you're going to come back with Crysis/COD4 benchmarks that show 1FPS difference between 8800GTX and 9800GTX and declare a win for 9800GTX (even when the 1FPS was in favor of 8800GTX). Or you'll claim they're "super hungry pixel cookie monster intensive" and favor 8800GTX. Or you'll attack the source and instead link a search engine. Or you'll claim my benches favor the 8800GTX and instead link 3DMark synthetics that don't hold up in real-world games and contradict your arguments. Or you'll point to another user's results and claim they support your argument when in reality they completely contradict everything you've said.

Fillrate gets limited by bandwidth. It is a simple concept you can't even understand. This was already discussed by Scott Wasson of Techreport.
Only when there isn't enough bandwidth, ie. you're running at 100% peak efficiency always.....we've gone over this so many times and you still don't get it. I just ran benches where I cut bandwidth by 1/3rd with little adverse effects on G80. This is why there is *some* benefit from more bandwidth when peak levels hit maximum bandwidth, however, if that peak isn't sustained at near 100% efficiency then obviously it will not have as large a return on % increase. In comparison, an increase in ROPs will always show a benefit as that directly determines how quickly you can render a frame and begin rendering the next frame. ROP increase eventually is limited by other factors but again, the fact that both G80 and G92 continue to scale in performance without adjustments to memory or shader clocks show ROPs are clearly the biggest bottleneck.

Performance increase isn't linear. A 12% increase in memory clocks alone doesn't mean a 12% improvement in frame rates. Please get a clue. Considering that raising core clocks raises pixel, texture, and SP rates together, I don't doubt the performance increase is bigger in a pixel- and shader-heavy game. Your whole argument still doesn't prove that ROP performance is G92's biggest bottleneck over G80.
Except it IS very close to linear on a 1:1 basis with G80 and even with G92. Again, I've shown multiple sources and results that show this. For G92 look at the 9800GTX vs GTX OCX reviews and its obvious....core increase/ROP increase yields the biggest gain, hence it is the biggest bottleneck.
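
If anyone wants to put a number on "close to linear", the usual way is to compare the % FPS gain against the % clock gain. The FPS figures below are purely hypothetical placeholders to show the calculation, not measured results:

def scaling_efficiency(fps_stock, fps_oc, clk_stock, clk_oc):
    # fraction of the clock increase that shows up as an FPS increase (1.0 = perfectly linear)
    return (fps_oc / fps_stock - 1) / (clk_oc / clk_stock - 1)

# e.g. a 12% core overclock (675 -> 756MHz) with hypothetical results of 40 vs 44 FPS:
print(scaling_efficiency(40, 44, 675, 756))  # ~0.83, i.e. about 83% of linear scaling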
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
Uh yes. It's less responsive because it's limited to bandwidth. 70 GB/s isn't enough for a card with 50% more texture fillrate than 8800 ultra that has 103 GB/s of bandwidth. AA uses certain amount of bandwidth which Ultra has plenty of and is able to beat 9800gtx with AA.
Except 8800 Ultra doesn't need AA to beat a stock clocked 9800GTX. In fact I'd be willing to bet, same as with my 8800GTX, you could decrease memory bandwidth and see little adverse effect. Oh wait, BFG already did that.

When memory was downclocked it dropped 1fps on average. Perhaps you should have used AA in the benchmark, since the Ultra really beats a 9800gtx with AA, not so much without AA. So the game is SP and pixel hungry, which you handpicked again.
What does Ultra have to do with this? Those benches were done with G92 and once again show memory bandwidth increases have a marginal impact on performance, however, performance continues to scale with core clock increases. And actually the game is ROP hungry (like most modern games), as it heavily relies on post-processing effects for its visuals.
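
For reference, the bandwidth figures being argued over (the ~103 GB/s and ~70 GB/s quoted above) fall out of the same bus-width-times-clock arithmetic; a quick check using the commonly quoted reference clocks:

def gbs(bus_bits, effective_mhz):
    # peak bandwidth in GB/s from bus width (bits) and effective memory transfer rate (MT/s)
    return bus_bits / 8 * effective_mhz / 1000

print(gbs(384, 2160))  # 8800 Ultra: 384-bit @ 1080MHz GDDR3 (2160 effective) -> ~103.7 GB/s
print(gbs(384, 1800))  # 8800 GTX:   384-bit @ 900MHz (1800 effective)        -> ~86.4 GB/s
print(gbs(256, 2200))  # 9800 GTX:   256-bit @ 1100MHz (2200 effective)       -> ~70.4 GB/s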

No my benchmark is not crippled and neither is my card. It's a g92 on a smaller scale. My card is exactly 75% of FULL G92 so it should scale exactly like a full G92. Why?
The problem is you're still trying to run modern resolutions and settings with a gimp card, same as with that 8600. How'd that work out for you? :laugh: We already know decreasing bus/bandwidth too much has a drastic impact on performance, if you wanted to show memory bandwidth was more important to G92 you would've increased your 75% closer to 100%. If your 8800GS performed like a 8800GT at that point it'd be obvious memory bandwidth is the biggest bottleneck.
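
For what it's worth, the "exactly 75% of a full G92" claim does check out on unit counts; a quick comparison against the full-fat G92 configuration (clocks and memory speed still differ between the actual boards, so real scaling won't be a clean 75%):

full_g92 = {"SPs": 128, "ROPs": 16, "bus_bits": 256}   # 9800GTX / 8800GTS 512
g92_8800gs = {"SPs": 96, "ROPs": 12, "bus_bits": 192}  # 8800GS

for unit in full_g92:
    print(f"{unit}: {g92_8800gs[unit]}/{full_g92[unit]} = {g92_8800gs[unit] / full_g92[unit]:.0%}")
# every unit type works out to 75% of the full chip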

What are you talking about? Shader has huge improvements depending on the game. I don't know what 3 year old engine you've been testing. Without AA SP helps much as core (both pixel and texture) depending on the game as shown by BFG's benches. Crysis is SP hungry especially at high settings. Perhaps you should do medium setting benches instead. :laugh:

I can test more games for you if you'd like oh my card is crippled that's right. :roll:
Rofl, you're going to reference BFG's findings and misquote them, again. Its no wonder you cling to Crysis at unplayable settings as your lone example.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: jjzelinski
god damn, you two are animals! lol

Anyone else would've long ago ditched the thread

Rofl ya I really don't care, its good for a laugh more than anything else at this point. I can probably just shelve it til GT200 releases and proves me right, again.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
You said:

To answer you question though 9600gt keeps up only when AA is applied even then it still trails 8800gt. When AA is disabled 8800gt is more faster than with AA. Notice the bottleneck. Memory Bandwidth! All that texture fillrate is useless if it doesn't have bandwidth to use it properly. As for shader I guess you missed 9600gt thread. Do a search.


You must really hate how you can't explain away the 9600GT lol. Once again, 9600GT has 1/2 the shading and texturing ability of full G92 parts, yet it performs comparably.

Are you dense? I was talking about 8800gt when I was talking about "All that texture fillrate is useless if it doesn't have bandwidth to use it properly."


So you agree G80 is the superior chip. We already know that.

And there's no evidence a wider bus is actually better when bandwidth is not an issue, as 2900XT and 3870 show us with a 512-bit bus and 105GB/s compared to a 256-bit bus and 70GB/s. Your HP comparison is flawed, since you're comparing top speed. If a 4-cylinder turbo did 0-60 faster than a V8 that would be a relevant comparison since we're not looking at maximum speed (or bandwidth in the example).

No G92 is much superior chip. It is neck and neck with an ultra that has 60% more bandwidth with modest AA settings.

Again you keep mentioning 2900xt vs 3870, which have a fillrate of around 12000 mtexels. 256-bit is plenty with such low fillrate, but again it did improve pixel performance as shown in the 3dmark fillrate test I linked you to earlier. G92 on the other hand has 43200 mtexels and could use a wider bus to its advantage.
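
For the record, the R600/RV670 numbers being traded here line up roughly like this at reference clocks (both chips have 16 texture units, so their texture rates are close while their buses differ wildly):

def gtexels(tmus, core_mhz):
    return tmus * core_mhz / 1000             # peak texture fillrate in GTexels/s

def gbs(bus_bits, eff_mem_mhz):
    return bus_bits / 8 * eff_mem_mhz / 1000  # peak bandwidth in GB/s

print(gtexels(16, 743), gbs(512, 1650))  # 2900XT:  ~11.9 GTexels/s on ~105.6 GB/s (512-bit)
print(gtexels(16, 775), gbs(256, 2250))  # HD3870:  ~12.4 GTexels/s on ~72 GB/s (256-bit)
print(gtexels(64, 675), gbs(256, 2200))  # 9800GTX: ~43.2 GTexels/s on ~70.4 GB/s (256-bit)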

Rofl? You obviously don't understand Guru3D carries benchmark settings from one review to another as much as possible, that's the point of using a bench suite, so that you can actually compare benches from one review to another and gain a relative idea of overall performance. But we already know you have enough trouble comparing results within a benchmark, much less drawing conclusions comparing results over multiple reviews.

Half assed job more like it. So they use the exact same words from their previous articles? That's a load of crock if I ever heard one. :laugh: Way to stick up for Guru3d; I suppose that makes your 8800gtx look good, when on every other website the 9800gtx is beating the 8800gtx in Crysis. I've brought multiple reviews to the table where 9800gtx > 8800gtx; you on the other hand are grabbing for dear life at Guru3d benchmarks. :laugh:

And that bench only proves my point, that G92 is most bottlenecked by ROPs. It took an 80MHz or 12% core increase to convincingly beat the 8800GTX, which still keeps up in many benchmarks regardless.

It doesn't prove crap when the 8800gtx, the chip with more ROPs, is getting its ass handed to it by an overclocked 8800gts with fewer ROPs. :roll:

That's irrelevant, it's the same core architecture, just as G92 GTS is the same as G92 GTX. There's nothing magical about it, period. And yes my 8800GTX core/shader can overclock to Ultra speeds in every game but COD4, which is known to be more sensitive to overclocks. I can't get memory to Ultra speeds but as bench after bench has shown, memory bandwidth is the least important factor on G80.

And how can you be sure an Ultra can easily overclock beyond stock as well? You clearly don't own one. From what I've seen from user reports, all A3 G80s tend to cap out between 650-675MHz, regardless of whether it's an Ultra or GTX. One thing is obvious with factory "overclocked" parts: they tend to yield lower % overclocks than comparable stock parts.....because the factory already ate up that overhead in the overclock.

Of course that's relevant. Your whole argument is that the 8800 ultra is an overclocked 8800 gtx. I proved it isn't, since it uses better memory and better-yielding chips.

You don't even know what stresses a GPU in games, but there are other games out there besides Crysis.

Of course I don't. :brokenheart: 9800gtx>8800gtx CRYSIS. :laugh:

Wouldn't matter to me if you kept SP at 1688, 9800GTX already has massive SP advantage and does nothing with it even at stock speeds.

Crysis is SP hungry. Go try some benchmarks @ high settings.


Except I'm running 1920x1200 and not 1600x1200. I could just as easily run High or Very High but 1) I don't play at those settings because they're not playable and 2) they'll only stress shaders and ROPs more than bandwidth. But I already showed you what custom settings were being run, here's the actual cvars:

con_restricted=0

d3d9_TripleBuffering=1
r_ssao_quality=1
r_ssao_amount=0.4
r_SSAO_darkening=1.3
r_TerrainAO_FadeDist=1
r_HDRlevel=1
r_TexturesStreaming=0
r_ColorGradingDOF=1
r_ShadowJittering=1.5
r_ShadowBlur=3.0
e_gsm_lods_num=5
e_shadows_from_terrain_in_all_lods=0
r_UseEdgeAA=2
e_shadows_max_texture_size=768
e_view_dist_ratio=80
e_particles_lod=0.7
e_vegetation_min_size=1.5
e_view_dist_ratio_vegetation=48
r_sunshafts=1
e_water_ocean_fft=1
e_detail_materials_view_dist_xy=4096
e_detail_materials_view_dist_z=256
r_UsePOM=1
e_lod_ratio=8
e_terrain_lod_ratio=0.6
e_vegetation_sprites_distance_ratio=1.7
r_GeomInstancing=1
e_vegetation_static_instancing=1
e_particles_thread=1
e_cull_veg_activation=70
e_max_entity_lights=20
es_MaxPhysDist=300
es_MaxPhysDistInvisible=35
r_BeamsMaxSlices=250
r_DetailDistance=12
r_TexturesStreaming=0

r_EyeAdaptationBase=0.15
e_precache_level=1

The results are very clear although they require deductive reasoning skills beyond reading simple 3DMark graphs. It's obvious G80 is NOT most bottlenecked by bandwidth, which is a point I've made continuously. I already knew memory bandwidth was excessive on both G80 GTS and GTX; I just didn't think I could get down to 600MHz (256-bit equivalent) with so little adverse effect.

So we know with modern GPUs 128-bit is not enough (8600) and 384-bit is too much (G80). I'm sure NV picked up on this as well and cut bus width back to 256-bit to cut costs, but in doing so they also had to cut ROPs since they're linked to memory controllers. End result is the situation we have now, where G92 is a cheaper die shrink of G80 but offers similar performance despite core enhancements and clock speed increases. Good for the masses, but bad for the high-end enthusiast.

Yeah you are using settings making your 8800gtx look good. Either high or very high @ 1600x1200.

Your whole argument about slower ROP is pointless. Why? 8800gt with smaller rop > G80GTS... 9800gtx with smaller ROP>8800gtx.

I've already shown multiple examples, but you're going to come back with Crysis/COD4 benchmarks that show 1FPS difference between 8800GTX and 9800GTX and declare a win for 9800GTX (even when the 1FPS was in favor of 8800GTX). Or you'll claim they're "super hungry pixel cookie monster intensive" and favor 8800GTX. Or you'll attack the source and instead link a search engine. Or you'll claim my benches favor the 8800GTX and instead link 3DMark synthetics that don't hold up in real-world games and contradict your arguments. Or you'll point to another user's results and claim they support your argument when in reality they completely contradict everything you've said.

No you haven't shown anything except Guru3d 8800gtx fanboi benchmarks. In that same benchmark the 9800gtx is winning at some resolutions. What I say just doesn't make sense to you because your brain might not cope as well as mine.


Only when there isn't enough bandwidth, ie. you're running at 100% peak efficiency always.....we've gone over this so many times and you still don't get it. I just ran benches where I cut bandwidth by 1/3rd with little adverse effects on G80. This is why there is *some* benefit from more bandwidth when peak levels hit maximum bandwidth, however, if that peak isn't sustained at near 100% efficiency then obviously it will not have as large a return on % increase. In comparison, an increase in ROPs will always show a benefit as that directly determines how quickly you can render a frame and begin rendering the next frame. ROP increase eventually is limited by other factors but again, the fact that both G80 and G92 continue to scale in performance without adjustments to memory or shader clocks show ROPs are clearly the biggest bottleneck.

Your chip is a G80, which has much lower fillrate than G92. G80 doesn't need that much bandwidth but G92 does, so the G80 doesn't drop performance as much when you lower bandwidth. Of course with AA it is a different story. G92 is the opposite: it has enough fillrate but is starving for bandwidth, so lowering the core doesn't cost as much performance as lowering bandwidth. What is good about G80's bandwidth is that it's able to use all that bandwidth with AA with a much smaller performance hit.

Why don't you argue with Scott Wasson of Techreport and Dave Baumann of Beyond3d about what you think of G92 and bandwidth limitations and how bandwidth doesn't affect AA. I bet they will laugh in your face.


Except it IS very close to linear on a 1:1 basis with G80 and even with G92. Again, I've shown multiple sources and results that show this. For G92 look at the 9800GTX vs GTX OCX reviews and its obvious....core increase/ROP increase yields the biggest gain, hence it is the biggest bottleneck.

G80 scales better when raising core clocks because it has plenty of bandwidth. G92 is hungry for bandwidth so it gets limited when raising core clocks. What sources? Your half assed LOTR benchmarks? :laugh:

Your sources don't prove a thing about ROPs being G92's biggest bottleneck, because when you overclock the core you are also raising texture fillrate, not ROP throughput alone. I've shown you plenty where it makes perfect sense with bandwidth limitations and AA performance but you keep ignoring it and go back to your Guru3d medium-setting Crysis benches. Ask Nvidia or Cookie Monster who actually know what they are talking about. Cookie has mentioned this as well. G92 is starved for bandwidth, which I agree with 100%.

 

AzN

Banned
Nov 26, 2001
4,112
2
0
Except 8800 Ultra doesn't need AA to beat a stock clocked 9800GTX. In fact I'd be willing to bet, same as with my 8800GTX, you could decrease memory bandwidth and see little adverse effect. Oh wait, BFG already did that.

9800gtx>ultra when not using AA. It beats it in Crysis and many others.

What does Ultra have to do with this? Those benches were done with G92 and once again show memory bandwidth increases have a marginal impact on performance, however, performance continues to scale with core clock increases. And actually the game is ROP hungry (like most modern games), as it heavily relies on post-processing effects for its visuals.

Because bandwidth improves performance much more with AA, which is where the Ultra is dominant over the 9800gtx, not in raw performance. G80 doesn't behave like G92. You just don't get it.

The problem is you're still trying to run modern resolutions and settings with a gimp card, same as with that 8600. How'd that work out for you? We already know decreasing bus/bandwidth too much has a drastic impact on performance, if you wanted to show memory bandwidth was more important to G92 you would've increased your 75% closer to 100%. If your 8800GS performed like a 8800GT at that point it'd be obvious memory bandwidth is the biggest bottleneck.

Gimp card? :roll: It is exactly 75% of a full G92 that scales just like G92. You are contradicting yourself. You said bandwidth makes no difference; now all of a sudden it makes a big difference because the card has 12GB/s less bandwidth. How about the Ultra that has over 30GB/s more bandwidth than the 9800gtx? :disgust:

8800gtx is limp underpowered crapola with a lot of bandwidth that only beats the 9800gtx @ 2560x1600 with 8xAA, which is unplayable. :laugh:

Modern Resolutions? :laugh: 1440x900 is a rather low resolution these days.

Rofl, you're going to reference BFG's findings and misquote them, again. Its no wonder you cling to Crysis at unplayable settings as your lone example.

Is that why you benchmark Crysis @ medium settings? :laugh: I'm actually the one that pushed BFG to do the benches because of our arguments in the 8800gt vs 9600gt thread, which proves my point about fillrate and bandwidth over SP. :thumbsup: Maybe you need glasses. BFG's benchmarks show SP makes quite a big difference in some games compared to both pixel and texel fillrate without AA, and not so much with AA.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: chizow
Originally posted by: jjzelinski
god damn, you two are animals! lol

Anyone else would've long ago ditched the thread

Rofl ya I really don't care, its good for a laugh more than anything else at this point. I can probably just shelve it til GT200 releases and proves me right, again.

How is GT200 going to prove anything? You've already been proven wrong here on multiple counts, including your whole self-contradiction about bandwidth limitations.

I don't even think you understand a single thing about GPUs except for the labels.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn

Are you dense? I was talking about 8800gt when I was talking about "All that texture fillrate is useless if it doesn't have bandwidth to use it properly."
You mean all that extra texture fillrate is useless period, as shown with G92's TMU improvements compared to G80 and G94. 9600GT keeps up with G92 even at lower resolutions and without AA where bandwidth is less of an issue, so again, your claim that texture fillrate and/or bandwidth are G92's biggest bottleneck simply does not hold up.


No G92 is much superior chip. It is neck and neck with an ultra that has 60% more bandwidth with modest AA settings.
Simple question: clock for clock, same bandwidth, which chip performs better? G80. It's really that simple. Why? It has more ROPs and superior pixel fillrate.
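
Putting numbers on "more ROPs and superior pixel fillrate": pixel fillrate is just ROP count times core clock, so clock for clock the 24-ROP G80 comes out ahead. A quick sketch at stock reference clocks:

def pixel_fill_gpixels(rops, core_mhz):
    # peak pixel fillrate in GPixels/s
    return rops * core_mhz / 1000

print(pixel_fill_gpixels(24, 575))  # 8800GTX:   ~13.8 GPixels/s
print(pixel_fill_gpixels(24, 612))  # 8800Ultra: ~14.7 GPixels/s
print(pixel_fill_gpixels(16, 675))  # 9800GTX:   ~10.8 GPixels/s
# clock for clock (say both at 675MHz) it would be 16.2 vs 10.8 GPixels/s in G80's favor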

Again you keep mentioning 2900xt vs 3870, which have a fillrate of around 12000 mtexels. 256-bit is plenty with such low fillrate, but again it did improve pixel performance as shown in the 3dmark fillrate test I linked you to earlier. G92 on the other hand has 43200 mtexels and could use a wider bus to its advantage.
Except G92 doesn't do much with additional bandwidth when its given more (9800GTX and 230MHz increase). You're still arguing bus width has a greater impact than memory speed increases when total bandwidth is the same, which may have some merit, but I'm certain you lack the requisite technical knowledge and experience to make that assertion in the absence of definitive evidence. The 3870 and 2900XT certainly indicate the opposite.

Half assed job more like it. So they use the exact same words from their previous articles? That's a load of crock if I ever heard one. :laugh: Way to stick up for Guru3d; I suppose that makes your 8800gtx look good, when on every other website the 9800gtx is beating the 8800gtx in Crysis. I've brought multiple reviews to the table where 9800gtx > 8800gtx; you on the other hand are grabbing for dear life at Guru3d benchmarks. :laugh:
No, its a template that is updated based on testing results. It offers consistency, efficiency and the ability to compare results over periods of time from review to review. The only thing the reader needs to be aware of are hardware/driver changes, but the testing methodology and observations made should be similar over time.

It doesn't prove crap when the 8800gtx, the chip with more ROPs, is getting its ass handed to it by an overclocked 8800gts with fewer ROPs. :roll:
Rofl sure it does, it shows the G92 GTS, which loses more often than not to an 8800GTX, finally manages to surpass the 8800GTX with an 80MHz or 12% *CORE* overclock. Conclusion: G92 ROPs are still its greatest bottleneck, not memory bandwidth.

Of course that's relevant. Your whole argument is that the 8800 ultra is an overclocked 8800 gtx. I proved it isn't, since it uses better memory and better-yielding chips.
No, you implied the chips are different when they are not. They're both G80. I acknowledged the memory chips are different and never said otherwise, but then again to me (and anyone who owns a G80) that's irrelevant since we know memory bandwidth isn't what gives the G80 its balls. Same holds true for G92.

Crysis is SP hungry. Go try some benchmarks @ high settings.
Which does nothing to support your argument that G92 is bandwidth limited. We already know G92 has far superior SP performance relative to G80, but it only manages to prove worthwhile in a couple of games and even then its performance gains are small.

Yeah you are using settings making your 8800gtx look good. Either high or very high @ 1600x1200.
You wouldn't know either way. I'm actually running the settings more likely to prove your point about G92 being bandwidth/texture bottlenecked than the ones you suggested. I'm running High textures and object detail with 16x AF at a higher resolution than you prefer. Adjusting other settings upwards, like post-processing and particles would only impact ROP and SP performance, neither of which support your claim G92 is bandwidth bottlenecked or that G80 has a significant advantage due to bandwidth. But again, playing games and knowing what certain features do and how they impact performance, instead of just flapping your lips about them actually helps.

Your whole argument about slower ROP is pointless. Why? 8800gt with smaller rop > G80GTS... 9800gtx with smaller ROP>8800gtx.
Yep the G92 parts perform about the same despite all their core improvements, but worse at higher resolutions or with AA. You think it's bandwidth, except ROPs also impact performance at high resolutions or with AA. G80 proves bandwidth isn't its biggest bottleneck, and G92 GTX proves bandwidth isn't its biggest bottleneck either. The 1GB variants show frame buffer size isn't a huge factor either, which was one of the last arguments for G80's advantage over G92.

No you haven't shown anything except Guru3d 8800gtx fanboi benchmarks. In that same benchmark the 9800gtx is winning at some resolutions. What I say just doesn't make sense to you because your brain might not cope as well as mine.
Rofl, ya you specifically link a page with Crysis and CoH benches and claim a convincing win for 9800GTX, when there's mostly ties and only a few 1FPS differences between the parts. But you somehow only see the benches 9800 wins. People who can actually read benchmarks understand the 9800 and 8800 GTX perform similarly enough that its not clear if one is faster than the other.

Your chip is a G80, which has much lower fillrate than G92. G80 doesn't need that much bandwidth but G92 does, so the G80 doesn't drop performance as much when you lower bandwidth.
No it only has lower texture fillrate, but has higher pixel fillrate. If G80 doesn't need that much bandwidth (which I agree, it doesn't), why do you keep on insisting G80 Ultra has some massive advantage over G92 or even G80 GTX with its additional bandwidth?

Of course with AA it is a different story. G92 is the opposite: it has enough fillrate but is starving for bandwidth, so lowering the core doesn't cost as much performance as lowering bandwidth. What is good about G80's bandwidth is that it's able to use all that bandwidth with AA with a much smaller performance hit.
Except G80 doesn't need to enable AA to keep up with G92....that's the whole point. Even in resolutions and settings where AA isn't used and bandwidth isn't an issue for either card, G80 is still competitive with G92. On top of that, both cards continue to benefit the most from core clock increases even at resolutions and settings you claim bandwidth to be the biggest bottleneck.

Why don't you argue with Scott Wasson of Techreport and Dave Baumann of Beyond3d about what you think of G92 and bandwidth limitations and how bandwidth doesn't affect AA. I bet they will laugh in your face.
I haven't read B3D in years, but if either of them wanted to do a comprehensive test with G80 and G92 and focus on testing the impact of memory bandwidth, I'd be very interested in seeing their results. Fact of the matter is, I haven't seen a single site explore the impact of core/memory clockspeeds on performance in detail. Truthfully I think its because NV doesn't want people to know where their cards are crippled so they can continue to sell variants that only differ in nomenclature and a few MHz at vastly different prices. Or so they can perpetuate the myth SP has the biggest impact on performance and show 64SP on the 9600GT compared to 128SP on the GTS and justify 2x the price.

G80 scales better when raising core clocks because it has plenty of bandwidth. G92 is hungry for bandwidth so it gets limited when raising core clocks. What sources? Your half assed LOTR benchmarks? :laugh:
Rofl, G80 still scales based on core clock even when I reduce bandwidth to 256-bit or 9800GTX bandwidth in settings and resolutions where bandwidth isn't an issue. You even acknowledged G80 doesn't need all its bandwidth, yet you still don't get it.

Once again refer to any G92 GTX OC benchmarks for more proof G92 still gets more increase from core clock vs. bandwidth. G92 still scales better with ROPs compared to memory bandwidth, it just doesn't scale as well as G80 because it gets less return per clock increase due to fewer ROPs in its design.

Your sources don't prove a thing about ROPs being G92's biggest bottleneck, because when you overclock the core you are also raising texture fillrate, not ROP throughput alone. I've shown you plenty where it makes perfect sense with bandwidth limitations and AA performance but you keep ignoring it and go back to your Guru3d medium-setting Crysis benches. Ask Nvidia or Cookie Monster who actually know what they are talking about. Cookie has mentioned this as well. G92 is starved for bandwidth, which I agree with 100%.
And what do texture fillrate increases do when the part is already bandwidth starved, as you claim? It also has 50% higher texture fillrate than G80 yet performs similarly. Texture fillrate increases from core clock shouldn't result in any performance increase if you're already pushing peak bandwidth efficiency, yet they do...and with a much bigger increase than actually raising bandwidth. :light:
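
A back-of-envelope on that point: if every one of those 43200 MTexels/s actually had to be fetched uncompressed from memory, the bandwidth demand would dwarf what any of these cards have, which is why sustained demand depends on texture caches, mipmapping and compression rather than the headline fillrate. A rough sketch, assuming 4 bytes per texel and ignoring caches entirely, so it is only a worst-case upper bound:

def worst_case_texture_gbs(mtexels_per_s, bytes_per_texel=4):
    # naive bandwidth demand if every texel were fetched uncompressed from memory
    return mtexels_per_s * bytes_per_texel / 1000

demand = worst_case_texture_gbs(43200)  # 9800GTX peak texture rate
available = 70.4                        # 9800GTX peak memory bandwidth in GB/s
print(f"worst-case demand ~{demand:.1f} GB/s vs ~{available} GB/s available")
# ~172.8 GB/s "needed" vs ~70.4 GB/s on tap, so peak fillrate and peak bandwidth
# can never be compared one-to-one in the first place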

I'd love to see what Cookie has to say about it; I haven't seen him post anything of the sort. But it's clearly obvious to anyone who has used a G80 or G92 that raising that first slider that says "core" has a much greater impact than raising that slider that says "memory". I'm not interested at all in what NV has to say about it, as they're interested in selling product and shooting for big numbers. But hey, I got a 1GB 8500GT to sell you for $250 if you don't believe me.

 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
Originally posted by: chizow
Originally posted by: jjzelinski
god damn, you two are animals! lol

Anyone else would've long ago ditched the thread

Rofl ya I really don't care, its good for a laugh more than anything else at this point. I can probably just shelve it til GT200 releases and proves me right, again.

How is GT200 going to prove anything? You've already been proven wrong here on multiple counts, including your whole self-contradiction about bandwidth limitations.

I don't even think you understand a single thing about GPUs except for the labels.

Because from early reports, GT200 is going to make improvements where they're needed the most: 32 ROPs or more along with a 512-bit bus. I'm sure GT200 will need some of the additional bandwidth, but again, I'm betting core/ROP has a much greater impact than memory bandwidth (just as it always has, historically). I'll be sure to test on mine when it releases.
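
Taking those early reports at face value, the same arithmetic used earlier in the thread gives a feel for what 32 ROPs and a 512-bit bus would mean. The clocks below are placeholders purely for illustration, not leaked figures:

def pixel_fill(rops, core_mhz):
    return rops * core_mhz / 1000         # GPixels/s

def bandwidth(bus_bits, eff_mhz):
    return bus_bits / 8 * eff_mhz / 1000  # GB/s

# hypothetical GT200 clocks for illustration only: 600MHz core, 2200MHz effective memory
print(pixel_fill(32, 600))    # ~19.2 GPixels/s vs ~13.8 on 8800GTX
print(bandwidth(512, 2200))   # ~140.8 GB/s vs ~86.4 on 8800GTX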

As for not understanding a single thing about GPUs except for the labels. LMAO. Coming from the guy who purchased not one, but two 8-series parts thinking they performed anywhere close to real 8-series parts based on labels alone. 8600GT and now 8800GS......
 