Why ditch 640MB?

Zenoth

Diamond Member
Jan 29, 2005
5,201
214
106
The GeForce 8800 GTS 640MB is the only card from nVidia ever offered in a 640MB version. You guys remember that when it came out alongside the GTX, the GTS was already 640MB? And no 512MB variant was ever released (or was it?). I'm just trying to understand what was "wrong" with having a card with just a little more than 512MB for higher resolutions on a single-GPU solution. Was it too expensive to put that amount on lots of cards? Was it more valuable to give fewer cards a higher amount like 768MB or 1024MB instead?

I always thought that 640MB was the "perfect spot" for a single-GPU solution: not too much, and just above 512MB so you won't run out of video memory during gameplay and force data to spill into system RAM (or the OS page file) to compensate. No? I mean, even AMD/ATi never touched anything in between 512MB and 1024MB; they never made a single 640MB or 768MB card. Was it because nVidia had some sort of architectural advantage on the GeForce 8 allowing more RAM than "usual" on a graphics card, and ATi's graphics architecture simply couldn't do it? Or maybe because the GPU's board was too small or restrictive or "something"?

Basically, all I want to know is why the latest graphics cards from both nVidia and AMD/ATi are always either 512MB or 1024MB, and no longer seem to come in 320MB, 640MB or 768MB. I'm making this thread because sometimes I feel like it's time for me to upgrade my GPU, either by going SLI or getting a dual-GPU solution (X2 family). The thing is, my G80 GTS is the "old" 640MB variant, and those are a heck of a lot more difficult to find over here, and no, I never buy anything online. I think it'd be safe to say that, at least locally, those cards were discontinued (I mean that the stores are not ordering them anymore, I already asked, but I don't know if that's because nVidia actually discontinued them or because the store itself thinks they're not selling anymore).

Now, if I look at the G92 variants... I can't SLI those with my G80, can I? But if I can, I'll lose 128MB, because in SLI the card with the smaller amount of memory dictates the effective memory for both, right? And, well, as I said, I don't know if SLI'ing a G80 and a G92 is even possible. Even if it were... there's the memory thing... which brings me right back to asking and repeating myself again: why can't the G92 family have a 640MB variant too? Heck, why are there no such memory amounts for the GeForce 9 series either? I'm asking for some enlightenment on this matter, please.
 

geoffry

Senior member
Sep 3, 2007
599
0
76
It has to do with the memory bus.

The G80 GTS had a 320-bit bus and the GTX had a 384-bit bus, while the G92 GT, GTS and GTX are on a 256-bit bus. The amount of memory on a card is always a multiple set by the bus width, hence the strange memory amounts on the G80 cores: their "odd" bus sizes weren't the usual 64 - 128 - 256 - 512 steps.
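A rough way to see it in numbers (assuming 32-bit-wide GDDR3 chips, which was typical back then; the per-chip capacities here are just example values):

```python
# Rough sketch: how bus width constrains the "natural" memory sizes.
# Assumes every memory chip has a 32-bit interface (typical for GDDR3)
# and that all chips on a card share the same capacity.

CHIP_BUS_WIDTH = 32  # bits per memory chip (assumption)

def memory_configs(total_bus_bits, chip_sizes_mb=(32, 64, 128)):
    """Card memory sizes you get for a given total bus width."""
    chips = total_bus_bits // CHIP_BUS_WIDTH  # chips needed to fill the bus
    return [chips * size for size in chip_sizes_mb]

for bus in (256, 320, 384):
    print(f"{bus}-bit bus -> {memory_configs(bus)} MB")

# 256-bit bus -> [256, 512, 1024] MB
# 320-bit bus -> [320, 640, 1280] MB   (8800 GTS 320/640)
# 384-bit bus -> [384, 768, 1536] MB   (8800 GTX 768)
```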
 

Zenoth

Diamond Member
Jan 29, 2005
5,201
214
106
I see. Then the question suddenly becomes: why are the memory buses suddenly back to "normal"? Less expensive? More efficient?
 

geoffry

Senior member
Sep 3, 2007
599
0
76
While I'm no expert, I think a narrower bus is indeed less complex and less expensive to build, and the benefits of a wider bus in current gaming applications are negligible (the 2900 XT had a 512-bit bus, yet the HD 3870 on a 256-bit bus with higher clocks performs better).
 

secretanchitman

Diamond Member
Apr 11, 2001
9,352
23
91
cause the 256bit bus is faster, more efficient.

makes me realize how pointless those cards are now... well, when they first came out they weren't, but it's funny to see how a $150-$200 card (8800GT/8800GTS) can give you 90% or more of the 8800GTX's performance.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: Zenoth
I see. Then the question suddenly becomes: why are the memory buses suddenly back to "normal"? Less expensive? More efficient?

I believe the wider the bus is, the more complex the PCB has to be to support those extra connections, which makes the card more expensive. Nvidia probably figured that a 256-bit bus is 'good enough' and lets them sell the current 8800GTs for under $200. If they had kept the wider bus, the cards would cost more. Also, you have to remember memory bandwidth is more than just the width of the bus; it's also the speed at which the memory runs. They cut down on the bus width but can use faster memory to partially make up for the narrower bus, and still keep prices where they feel is competitive.

If you have 1GHz memory over a 128-bit bus you get pretty much the same bandwidth as 500MHz memory over a 256-bit bus, but the 128-bit bus is probably less complex to build and cheaper.
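A quick sketch of that math (the clocks are just the illustrative numbers from above, with GDDR counted as two transfers per clock):

```python
# Bandwidth = (bus width in bytes) x (effective transfer rate).
# GDDR memory is double data rate, so the effective rate is 2x the clock.

def bandwidth_gb_s(bus_bits, mem_clock_mhz, ddr_multiplier=2):
    bytes_per_transfer = bus_bits / 8
    transfers_per_sec = mem_clock_mhz * 1e6 * ddr_multiplier
    return bytes_per_transfer * transfers_per_sec / 1e9

print(bandwidth_gb_s(128, 1000))  # 1 GHz over a 128-bit bus   -> 32.0 GB/s
print(bandwidth_gb_s(256, 500))   # 500 MHz over a 256-bit bus -> 32.0 GB/s
```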
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
IMO, it was mostly for cost-cutting reasons that the g9x cards have fewer ROPs and, as a result, use a narrower memory bus. There are 1GB versions available, but they offer almost no performance gain over the 512MB variants. While it seems like the 256-bit bus limits performance somewhat, Nvidia decided it was good enough for a product targeting the mainstream and midrange markets.
 

Zenoth

Diamond Member
Jan 29, 2005
5,201
214
106
Ok, well, all of your explanations are making this a lot clearer for me now, thanks everyone.

Looks like I'm going to wait for the GeForce 10 or Radeon HD 4000 series for an upgrade, because I believe the current X2 cards from both AMD and nVidia consume a little too much power for my personal taste. SLI is tempting, but that's even worse than a GX2/3870X2 in terms of power consumption... I think (anyone willing to clarify that?).
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: secretanchitman
cause the 256bit bus is faster, more efficient.

makes me realize how pointless those cards are now... well, when they first came out they weren't, but it's funny to see how a $150-$200 card (8800GT/8800GTS) can give you 90% or more of the 8800GTX's performance.

G92 is just a superior chip compared to G80. It's able to pull more fps with less memory bandwidth, not because a 256-bit bus is faster or more efficient.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: geoffry
It has to do with the memory bus.

The G80 GTS had a 320-bit bus and the GTX had a 384-bit bus, while the G92 GT, GTS and GTX are on a 256-bit bus. The amount of memory on a card is always a multiple set by the bus width, hence the strange memory amounts on the G80 cores: their "odd" bus sizes weren't the usual 64 - 128 - 256 - 512 steps.

Correct.

Not only memory bus but ROP count as well.
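A small sketch of how the two are tied together on these chips (each ROP partition sits on its own 64-bit memory channel; 4 ROPs per partition is the commonly quoted figure for G80/G92, so treat this as an approximation):

```python
# Each ROP partition on G80/G92 is paired with a 64-bit memory channel,
# so dropping partitions shrinks both the ROP count and the bus width.

ROPS_PER_PARTITION = 4   # commonly quoted figure for G80/G92
BITS_PER_PARTITION = 64  # memory channel width per partition

for name, partitions in (("G80 GTX", 6), ("G80 GTS", 5), ("G92 GT/GTS", 4)):
    rops = partitions * ROPS_PER_PARTITION
    bus = partitions * BITS_PER_PARTITION
    print(f"{name}: {partitions} partitions -> {rops} ROPs, {bus}-bit bus")

# G80 GTX: 6 partitions -> 24 ROPs, 384-bit bus
# G80 GTS: 5 partitions -> 20 ROPs, 320-bit bus
# G92 GT/GTS: 4 partitions -> 16 ROPs, 256-bit bus
```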
 

Cheex

Diamond Member
Jul 18, 2006
3,123
0
0
Originally posted by: error8
Originally posted by: secretanchitman
cause the 256bit bus is faster, more efficient.

makes me realize how pointless those cards are now... well, when they first came out they weren't, but it's funny to see how a $150-$200 card (8800GT/8800GTS) can give you 90% or more of the 8800GTX's performance.

Are you sure about that? In what games is the 8800GTS 512 90% faster than the GTX? The 8800GT is quite a bit slower than the GTX.
From what I know, the GTX is only some 20% slower than the GTS in the majority of games, and the difference is even smaller at higher resolutions, just because of that memory bus and the amount of VRAM.

What he meant was....90% of the performance of a GTX.
 

JujuFish

Lifer
Feb 3, 2005
11,379
1,020
136
Originally posted by: error8
Originally posted by: secretanchitman
cause the 256bit bus is faster, more efficient.

makes me realize how pointless those cards are now... well, when they first came out they weren't, but it's funny to see how a $150-$200 card (8800GT/8800GTS) can give you 90% or more of the 8800GTX's performance.

Are you sure about that? In what games is the 8800GTS 512 90% faster than the GTX? The 8800GT is quite a bit slower than the GTX.
Secretanchitman never said it was 90% faster.
 

error8

Diamond Member
Nov 28, 2007
3,204
0
76
Originally posted by: JujuFish
Originally posted by: error8
Originally posted by: secretanchitman
cause the 256bit bus is faster, more efficient.

makes me realize how pointless those cards are now... well, when they first came out they weren't, but it's funny to see how a $150-$200 card (8800GT/8800GTS) can give you 90% or more of the 8800GTX's performance.

Are you sure about that? In what games is the 8800GTS 512 90% faster than the GTX? The 8800GT is quite a bit slower than the GTX.
Secretanchitman never said it was 90% faster.

Oops, that means I've got it backwards. I'm stupid, I know...
 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
It's funny, I wondered (before the launch of the 9800GTX) why none of the OEMs had made an 8800GTS 512MB using GDDR4 memory to enhance the bandwidth (which is definitely limited by the 256-bit interface).

I still wonder why nVidia would then launch the "new & improved" 9800GTX using fast GDDR3 instead of much faster GDDR4. At least DAAMIT had the brains to offset the bandwidth shortage a bit with faster memory (not that it makes their cards any faster, but anyway).
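Just to show the kind of gap in question (stock memory clocks as commonly listed; the GDDR4 clock is a hypothetical example, since no such SKU existed):

```python
# Memory bandwidth comparison. The GDDR4 clock is a hypothetical example,
# not a real SKU; the other clocks are the commonly listed stock figures.

def bandwidth_gb_s(bus_bits, mem_clock_mhz):
    return bus_bits / 8 * mem_clock_mhz * 2 * 1e6 / 1e9  # DDR: 2 transfers per clock

print(bandwidth_gb_s(384, 900))   # 8800GTX, 384-bit GDDR3          -> 86.4 GB/s
print(bandwidth_gb_s(256, 1100))  # 9800GTX, 256-bit GDDR3          -> 70.4 GB/s
print(bandwidth_gb_s(256, 1400))  # hypothetical 256-bit GDDR4 card -> 89.6 GB/s
```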
 

Lithan

Platinum Member
Aug 2, 2004
2,919
0
0
Mainly because g80's texture processing fails compared to g92. Think of G80 as G92's Beta, Nvidia tested their new texture processor, then tweaked it, and made the card quite a bit better. It also rolls in additional efficiency due to smaller process, a couple new features, and higher shader clock. For this it sacrificed a little memory and memory bus that the card couldn't really make use of in 99% of settings anyway.

Basically, 8800gts 320 ~= 8800gs (A $100 card) 8800gts 640 ~= 9600gt ($125 card) and 8800gtx ~= 8800 512 GTS ($200 card).

Essentially, Nvidia tuned the system to get much better performance with less power on paper by improving the cards overall efficiency and managed ~= performance as the last round of cards at much lower production costs. Which didn't make everyone happy (the guys who wanted to buy a faster card), that's why everyone's saying wait till next gen in summer.

Denithor, I think they probably didn't go with ddr4 for the same reason they didn't keep the 384bit bus. Too much production cost for a benefit that only really shows itself @ the highest settings. Since ATI can't touch their performance lead right now without using dual gpu's, they probably made the right choice. A $300 card that is the fastest single core gpu available if you aren't running a 30" monitor will likely sell a lot better than a $400 one that was the fastest single core available even if you are. Plus they make the guys who spent $500+ on Ultra's and/or overclocked 8800GTX's even happier. Their cards got a MASSIVE run of being the fastest kid on the block.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Given how huge the g80 chip was, I think that was the main reason why they decided to use only half as many TA units as TF units. Or, you could look at it as having twice TF as TA, with the added marketing benefit of claiming "free" trilinear filtering. In either case, the g80 still had a huge texturing performance lead over the competition, so in no way do I see the g80 as having the texture units crippled or failed. That was probably a decision made early in the design process, and not the result of some mistakes or failures.
 

hooflung

Golden Member
Dec 31, 2004
1,190
1
0
Nvidia still hasn't found the sweet spot of their technology. Look at the 9600GT and 8800GT. Both have the same GPU. Both have the same memory bus. The 9600GT comes in crippled (yeah, I would bet the farm it's an 8800GT with broken SPs) and the thing can almost keep up. Also, the driver issues are pissing me off. 8800GS SC x 2 here and I am still using January drivers on Vista 64. Don't want to do betas. The 9 series gets new drivers for no GOOD reason. I know what Nvidia said, I'm just not buying it. They are having a marketing meltdown imo.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
I'd say it was a design they did at the time because it worked for what they needed. But it has been quite apparent for 2-3 years that memory bandwidth is not holding us back as much as shader power.

In the end, a 320-bit bus requires a more complicated memory controller and physical design, which leads to a higher cost for OEMs. When it was feasible, they went with a 256-bit bus.

I will say my 8800GTS 640 is a great card, the best card I have ever bought. It should last me another 12-15 months with the games I play and expect to play, which means it will have lasted 2.5 years.

 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Lithan
Mainly because g80's texture processing fails compared to g92. Think of G80 as G92's Beta, Nvidia tested their new texture processor, then tweaked it, and made the card quite a bit better. It also rolls in additional efficiency due to smaller process, a couple new features, and higher shader clock. For this it sacrificed a little memory and memory bus that the card couldn't really make use of in 99% of settings anyway.

Basically, 8800gts 320 ~= 8800gs (A $100 card) 8800gts 640 ~= 9600gt ($125 card) and 8800gtx ~= 8800 512 GTS ($200 card).

Essentially, Nvidia tuned the system to get much better performance with less power on paper by improving the cards overall efficiency and managed ~= performance as the last round of cards at much lower production costs. Which didn't make everyone happy (the guys who wanted to buy a faster card), that's why everyone's saying wait till next gen in summer.

Denithor, I think they probably didn't go with ddr4 for the same reason they didn't keep the 384bit bus. Too much production cost for a benefit that only really shows itself @ the highest settings. Since ATI can't touch their performance lead right now without using dual gpu's, they probably made the right choice. A $300 card that is the fastest single core gpu available if you aren't running a 30" monitor will likely sell a lot better than a $400 one that was the fastest single core available even if you are. Plus they make the guys who spent $500+ on Ultra's and/or overclocked 8800GTX's even happier. Their cards got a MASSIVE run of being the fastest kid on the block.

G80 texture processing fails compared to G92? You mean G92 has a higher bilinear texel fillrate and a lower FP16 texel fillrate, depending on which G92.

So what you're saying is 8800gts 320 ~= 8800gs and 8800gts 640 ~= 9600gt.

Aren't the 8800gts 320 and 640 the exact same card except for VRAM? So 8800gs ~= 9600gt :shocked: That's pretty shocking coming out of you.

How does the 8800gs beat the 9600gt in some games with only 12 ROPs and a 192-bit memory bus?
 

Lithan

Platinum Member
Aug 2, 2004
2,919
0
0
Originally posted by: hooflung
Nvidia still hasn't found the sweet spot of their technology. Look at the 9600GT and 8800GT. Both have the same GPU. Both have the same memory bus. The 9600GT comes in crippled (yeah, I would bet the farm it's an 8800GT with broken SPs) and the thing can almost keep up. Also, the driver issues are pissing me off. 8800GS SC x 2 here and I am still using January drivers on Vista 64. Don't want to do betas. The 9 series gets new drivers for no GOOD reason. I know what Nvidia said, I'm just not buying it. They are having a marketing meltdown imo.

The betas aren't out for a 64-bit OS? Not that it matters, they suck anyway. No real performance difference.
Really? Then Nvidia has hit the sweet spot, and it's the 9600gt. The sweet spot is a hard thing to quantify unless you know what you're looking at. 1280x1024 with max AA is one sweet spot and 2560xwhatever is another. The 9600gt is pretty much even with the top-of-the-line cards in most games in the first case, and can't even hit double-digit frames in the second. So it's not that the extra shaders on the 8800gt go to waste any more than the extra memory bandwidth on the 8800gtx goes to waste; these cards DO show improvement when their features are required. Now, if you dropped the 8800gt's memory bandwidth by 50% or reduced the 8800gtx's shader count to 32, then we'd be talking about "missing the sweet spot". Right now we can only conjecture that aspects of these cards are overkill in most scenarios.

Munky, "fails compared to" means is beaten by.
 

BlizzardOne

Member
Nov 4, 2006
88
0
0
Originally posted by: Azn
How does the 8800gs beat the 9600gt in some games with only 12 ROPs and a 192-bit memory bus?

My guess would be the 50% more SPs compared to the 9600GT... though that only helps in some circumstances...

 

Mr. Lennon

Diamond Member
Jul 2, 2004
3,492
1
81
Originally posted by: Lithan
Mainly because g80's texture processing fails compared to g92. Think of G80 as G92's Beta, Nvidia tested their new texture processor, then tweaked it, and made the card quite a bit better. It also rolls in additional efficiency due to smaller process, a couple new features, and higher shader clock. For this it sacrificed a little memory and memory bus that the card couldn't really make use of in 99% of settings anyway.

Basically, 8800gts 320 ~= 8800gs (A $100 card) 8800gts 640 ~= 9600gt ($125 card) and 8800gtx ~= 8800 512 GTS ($200 card).

Essentially, Nvidia tuned the system to get much better performance with less power on paper by improving the cards overall efficiency and managed ~= performance as the last round of cards at much lower production costs. Which didn't make everyone happy (the guys who wanted to buy a faster card), that's why everyone's saying wait till next gen in summer.

Denithor, I think they probably didn't go with ddr4 for the same reason they didn't keep the 384bit bus. Too much production cost for a benefit that only really shows itself @ the highest settings. Since ATI can't touch their performance lead right now without using dual gpu's, they probably made the right choice. A $300 card that is the fastest single core gpu available if you aren't running a 30" monitor will likely sell a lot better than a $400 one that was the fastest single core available even if you are. Plus they make the guys who spent $500+ on Ultra's and/or overclocked 8800GTX's even happier. Their cards got a MASSIVE run of being the fastest kid on the block.

At higher resolutions the 8800GTX still spanks the 9800GTX. The 8800 GTS 512 in no way replaces the 8800GTX.
 

Lithan

Platinum Member
Aug 2, 2004
2,919
0
0
Zepp, read the entire post.
The 8800gtx is about the same as the 8800gts unless you're running very high resolutions; at extreme resolutions the 8800gtx is still faster. But at stock speeds and lower resolutions the 8800gts is almost exactly even with it, leading me to suspect Nvidia chose the 8800gts' stock clocks to intentionally put it right on par with the 8800gtx, so the 9800gtx would show a decent advantage (again, not at very high resolutions). I'm not sure exactly how high the 8800gtx clocks, but people brag about ones that hit Ultra clocks, and the gts seems to hit 9800gtx clocks without sneezing... so unless you're running 1920 or above, I'd say that yes, the 8800gts is an equivalent card.

Toms review seems to be done right at the settings that pull these cards apart.

9800gtx > 8800ultra @ 1920 no AA by a tiny amount but < by a tiny amount with AA... still better than 8800gtx.

@ 1680x1050, or 1920 AA off 8800gts and 8800gtx are basically neck and neck.

So yes, if you're at 1920 or above the 8800gtx is still definitely the better card, but below that it's practically a dead heat.
The exceptions seem to be World in Conflict and Crysis, which apparently chew up vram, because as soon as AA is enabled, the g80 cores leap waaaaaay ahead.
Of course, there are exceptions in the other direction as well (though less extreme): the GTS beats the GTX in Test Drive with AA enabled.

Do you have a link to a review that shows a stock 8800gtx beating a stock 9800gtx in most games? I'm having trouble finding non-SLI reviews that test above 1920.

But anyway, my point was that on the lower resolutions, the 8800gts performance is almost the same as the 8800gtx performance. GTS wins some and GTX wins some. 1920 with AA seems about the point where GTX pulls ahead.
 