Why ditch 640MB?


Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Originally posted by: Zenoth
I see; then the question becomes: why are the memory buses suddenly back to "normal"? Less expensive? More efficient?

When creating a graphics card, there are many ways in which costs can go down over time.
During the manufacturing of the GPU itself, yields improve, which means a combination of more dies per wafer and more dies clocking at the required speeds, so you get better output over time. This reduces the price of each individual die, contributing to a lower overall cost for the card.

On top of this, the cost of the memory chips also comes down over time, for basically similar reasons.

Cheaper memory + cheaper GPU = cheaper card.

One component whose cost you can't really reduce is the PCB. PCBs are made up of multiple layers, and the layers are of varying complexity. A larger memory bus requires a more complex PCB. If you reduce the number of layers and their complexity, you reduce the cost of the PCB. You can't easily reduce the complexity of the PCB, though, except by doing things like narrowing the memory bus.

The reason we now have cheaper products is that the PCB can cost less thanks to the narrower memory bus, while the RAM and GPU, whose prices fall over time, allow further cost cuts.
Since we're now a fair bit on from the initial release of the 8800 cards, the price of fast memory chips has come down, which means you can fit higher-speed memory and make do with a narrower memory bus. Add in improved GPU efficiency, which lets the chip get by with less bandwidth without giving up much performance, and you have an effective way of reducing an essentially fixed cost (the PCB) while letting your memory and GPU costs have more influence on the prices you can set.
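
To put rough numbers on that trade-off, here's a minimal Python sketch of the usual bandwidth arithmetic (bus width in bytes times effective data rate); the bus widths and memory clocks are the commonly quoted retail specs, used purely for illustration:

# Rough memory bandwidth = (bus width / 8) bytes * effective data rate.
# Bus widths and effective memory clocks are approximate retail specs,
# used only to illustrate the point, not authoritative figures.
def bandwidth_gb_s(bus_width_bits, effective_mhz):
    return bus_width_bits / 8 * effective_mhz / 1000  # GB/s (decimal)

cards = {
    "8800 GTS 640 (320-bit, 1600 MHz eff.)": (320, 1600),
    "8800 GTX     (384-bit, 1800 MHz eff.)": (384, 1800),
    "8800 GT      (256-bit, 1800 MHz eff.)": (256, 1800),
    "9800 GTX     (256-bit, 2200 MHz eff.)": (256, 2200),
}

for name, (bus, rate) in cards.items():
    print(f"{name}: ~{bandwidth_gb_s(bus, rate):.1f} GB/s")

So a 256-bit card with fast enough chips (the 9800 GTX at ~70 GB/s) ends up ahead of the 320-bit 8800 GTS 640 (~64 GB/s) while sitting on a simpler, cheaper PCB.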


Or at least that's my understanding.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Zeppelin2282
Originally posted by: Lithan
Mainly because G80's texture processing fails compared to G92. Think of G80 as G92's beta: Nvidia tested their new texture processor, then tweaked it and made the card quite a bit better. It also rolls in additional efficiency from the smaller process, a couple of new features, and a higher shader clock. For this it sacrificed a little memory and memory bus width that the card couldn't really make use of in 99% of settings anyway.

Basically, 8800 GTS 320 ~= 8800 GS (a $100 card), 8800 GTS 640 ~= 9600 GT (a $125 card), and 8800 GTX ~= 8800 GTS 512 (a $200 card).

Essentially, Nvidia tuned the design to get much better performance with less power on paper by improving the card's overall efficiency, and managed roughly equal performance to the last round of cards at much lower production cost. That didn't make everyone happy (the guys who wanted to buy a faster card), which is why everyone's saying wait till next gen in summer.

Denithor, I think they probably didn't go with GDDR4 for the same reason they didn't keep the 384-bit bus: too much production cost for a benefit that only really shows itself at the highest settings. Since ATI can't touch their performance lead right now without using dual GPUs, they probably made the right choice. A $300 card that is the fastest single-GPU card available if you aren't running a 30" monitor will likely sell a lot better than a $400 one that was the fastest single-GPU card available even if you are. Plus they make the guys who spent $500+ on Ultras and/or overclocked 8800GTXs even happier: their cards got a MASSIVE run of being the fastest kid on the block.

At higher resolutions the 8800GTX still spanks the 9800GTX. The 8800 GTS 512 in no way replaces the 8800GTX.

How high are you talking about? In most situations the full G92 is equal to the 8800GTX. It usually beats G80 at lower resolutions, while G80 beats G92 at high resolutions with AA, at frame rates that aren't smoothly playable anyway.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
You guys fail to realise that G80 and G9x are much the same, in the sense that they share the same architectural traits.

The only major differences I can think of are the integration of the NVIO chip on G9x, a slight variation in TMU design (TA:TF ratio), and the PureVideo engine.

If you look at the 8800GTS 640MB specs, you will see that its core/shader clocks are far behind those of an 8800GT (500/1200 compared to 600/1500), not to mention the difference in SP count. This is where most of the performance deficit comes from when comparing the G80 generation cards to the G9x generation cards. The TMU difference could be another factor, but not as big as shader performance, i.e. the SPs.
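
As a rough sketch of that point, relative shader throughput scales with SP count times shader clock; the SP counts and shader clocks below are the commonly quoted specs, used only for illustration:

# Rough relative shader throughput = SP count * shader clock (MHz).
# SP counts and clocks are the commonly quoted specs, illustration only.
cards = {
    "8800 GTS 640 (G80)": (96, 1200),
    "8800 GT (G92)":      (112, 1500),
    "8800 GTX (G80)":     (128, 1350),
    "9800 GTX (G92)":     (128, 1688),
}

baseline = cards["8800 GTS 640 (G80)"][0] * cards["8800 GTS 640 (G80)"][1]
for name, (sps, shader_mhz) in cards.items():
    print(f"{name}: {sps * shader_mhz / baseline:.2f}x the 8800 GTS 640")

On those numbers the 8800 GT has roughly 1.5x the shader throughput of the old GTS 640, and the 9800 GTX nearly 1.9x, which is where most of the G9x gains come from.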

This is why an 8800 Ultra is faster across the board compared to the 9800GTX. The only reason a 9800GTX beats an 8800GTX at low res is its faster shader/core clocks. The tables are turned when you start introducing AA at high resolutions. This is where the 8800GTX's bandwidth and 768MB frame buffer play a big role.

The reason they chose a 256-bit memory bus is that it was more economical than, say, shrinking the full G80 to 65nm and disabling parts of it to produce a 256-bit part. However, I think this is one of the major culprits for the high-end derivatives of G9x. Sure, the mainstream/mid-high-end parts look great, but what about the high end? It lacks the bandwidth and framebuffer to challenge the old G80 GTX/Ultra in high-end gaming scenarios.

As Lonyo pointed out, memory bus width can determine PCB cost. There are other costs involved too, such as the HSF, framebuffer size, GPU cost, etc. It's true (came straight from the horse's mouth) that the initial wave of 8800GTs had very low margins because they used a more complex PCB (8 layers, I believe) and the GPU itself was very expensive (almost a full-fledged G92).

Note that GDDR3 has already hit its speed limit (0.8ns is the fastest, I believe), and the reason IHVs are not using GDDR4 is mainly cost. (There are other issues with GDDR4 that make it less favorable than GDDR3.)

 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Cookie Monster

This is why an 8800 Ultra is faster across the board compared to the 9800GTX. The only reason a 9800GTX beats an 8800GTX at low res is its faster shader/core clocks. The tables are turned when you start introducing AA at high resolutions. This is where the 8800GTX's bandwidth and 768MB frame buffer play a big role.

Thank god you are here. You need to be more active and educate the masses.
 

Rusin

Senior member
Jun 25, 2007
573
0
0
Originally posted by: Azn

How does the 8800 GS beat the 9600 GT in some games with only 12 ROPs and 192-bit memory?
The 8800 GS loses to the 9600 GT in pretty much every game if AA and AF are enabled, in other words when the best playable settings are used.

And the 8800 GTS 320MB and 640MB already show a performance difference between them, since 320MB isn't enough. Especially in DX10 with AA and AF enabled, the difference can reach 30-40%.
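
As a very rough back-of-envelope for why 320MB runs out, here's a Python sketch that only counts render-target memory at a few resolutions with 4x MSAA; it ignores textures, geometry, and driver overhead, and the assumptions (4 bytes per pixel for colour and depth, two resolved buffers) are purely illustrative:

# Very rough render-target memory at a given resolution with MSAA.
# Ignores textures, geometry and driver overhead -- illustration only.
def render_target_mb(width, height, msaa_samples, bytes_per_pixel=4):
    pixels = width * height
    color = pixels * bytes_per_pixel * msaa_samples   # multisampled colour buffer
    depth = pixels * bytes_per_pixel * msaa_samples   # multisampled depth/stencil
    resolved = pixels * bytes_per_pixel * 2            # resolved front + back buffers
    return (color + depth + resolved) / 1024**2

for (w, h) in [(1280, 1024), (1680, 1050), (1920, 1200)]:
    print(f"{w}x{h} with 4x AA: ~{render_target_mb(w, h, 4):.0f} MB before any textures")

At 1920x1200 with 4x AA that's already on the order of 90 MB before a single texture is loaded, so a 320MB card starts spilling data over the PCIe bus long before a 640MB card does.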

Cookie Monster:
One reason why Nvidia doesn't use GDDR4 is that it doesn't really produce any performance difference over GDDR3. The highest-clocked GDDR4 product currently runs at 1200MHz (HD3870 Atomic), and the highest-clocked GDDR3 product runs at 1180MHz (Foxconn 9800 GTX EX OC).


 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Rusin
Originally posted by: Azn

How does the 8800 GS beat the 9600 GT in some games with only 12 ROPs and 192-bit memory?
The 8800 GS loses to the 9600 GT in pretty much every game if AA and AF are enabled, in other words when the best playable settings are used.

And the 8800 GTS 320MB and 640MB already show a performance difference between them, since 320MB isn't enough. Especially in DX10 with AA and AF enabled, the difference can reach 30-40%.

Cookie Monster:
One reason why Nvidia doesn't use GDDR4 is that it doesn't really produce any performance difference over GDDR3. The highest-clocked GDDR4 product currently runs at 1200MHz (HD3870 Atomic), and the highest-clocked GDDR3 product runs at 1180MHz (Foxconn 9800 GTX EX OC).

Rusin, I already know the answer, but I was asking Lithan specifically.

To correct you, it's only AA performance, not AA and AF.
 

Lithan

Platinum Member
Aug 2, 2004
2,919
0
0
Originally posted by: Cookie Monster
You guys fail to realise that G80 and G9x are much the same, in the sense that they share the same architectural traits.

The only major differences I can think of are the integration of the NVIO chip on G9x, a slight variation in TMU design (TA:TF ratio), and the PureVideo engine.

If you look at the 8800GTS 640MB specs, you will see that its core/shader clocks are far behind those of an 8800GT (500/1200 compared to 600/1500), not to mention the difference in SP count. This is where most of the performance deficit comes from when comparing the G80 generation cards to the G9x generation cards. The TMU difference could be another factor, but not as big as shader performance, i.e. the SPs.

This is why an 8800 Ultra is faster across the board compared to the 9800GTX. The only reason a 9800GTX beats an 8800GTX at low res is its faster shader/core clocks. The tables are turned when you start introducing AA at high resolutions. This is where the 8800GTX's bandwidth and 768MB frame buffer play a big role.

The reason they chose a 256-bit memory bus is that it was more economical than, say, shrinking the full G80 to 65nm and disabling parts of it to produce a 256-bit part. However, I think this is one of the major culprits for the high-end derivatives of G9x. Sure, the mainstream/mid-high-end parts look great, but what about the high end? It lacks the bandwidth and framebuffer to challenge the old G80 GTX/Ultra in high-end gaming scenarios.

As Lonyo pointed out, memory bus width can determine PCB cost. There are other costs involved too, such as the HSF, framebuffer size, GPU cost, etc. It's true (came straight from the horse's mouth) that the initial wave of 8800GTs had very low margins because they used a more complex PCB (8 layers, I believe) and the GPU itself was very expensive (almost a full-fledged G92).

Note that GDDR3 has already hit its speed limit (0.8ns is the fastest, I believe), and the reason IHVs are not using GDDR4 is mainly cost. (There are other issues with GDDR4 that make it less favorable than GDDR3.)

The 8800 Ultra isn't faster across the board.

The Ultra wins about 3/4 of the tests @ 1920 w/ AA and loses almost all of them @ 1920 without AA. I'd say it is the faster card, yes. But it's not an ass-whooping, not even close... and given the price difference, I'd say the 9800gtx is by far the better card to buy right now.

Not sure what overclocks 9800gtx's are hitting, but I'd expect they can reach higher core and shader clocks than most 8800 Ultras, so those speeds come down to the architecture in the end.

As for high-end gaming... honestly there are only a couple of instances where the spread is noticeable, and I don't foresee that number growing, since even now these cards are skirting the bottom end of playable in those cases. A big part of why the 9800gtx went 256-bit and kept itself from being the fastest was probably the simple fact that the architecture itself isn't going to have the muscle to survive long in the environment where the additional memory subsystem strength will really get to stretch its legs. Better to release a card that's as fast for 90% of users, slower for 10%, costs much less and has a better margin, and let the next gen that can actually take full advantage of that strong memory subsystem do so... a few months down the road. Reminds me of when Intel started slapping huge caches onto their top-tier P4s: it only made them a percent or two faster in most apps (if that), and probably raised cost and defect rate enormously. Basically, throw a ton of muscle at a card, much more than it can efficiently use, and it'll show a boost of course, but the cost/performance will be absurdly high.

This applies much less to the Ultras than to the old GTS's, but I feel it's still the case to some degree.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: Rusin
Cookie Monster:
One reason why Nvidia doesn't use GDDR4 is that it doesn't really produce any performance difference over GDDR3. The highest-clocked GDDR4 product currently runs at 1200MHz (HD3870 Atomic), and the highest-clocked GDDR3 product runs at 1180MHz (Foxconn 9800 GTX EX OC).

I wouldn't compare max GDDR3/4 speeds using retail cards.

Here, take a look at this:
The fastest Samsung GDDR3 is rated at 0.833ns, i.e. 1200MHz (2400MHz effective).

Whereas for GDDR4:
Link
The fastest Samsung GDDR4 is 1400MHz (2800MHz effective). They used to have a 1600MHz part (0.6ns, I believe), but I guess it didn't work out.
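
For a rough sense of what those ratings translate to, cycle time converts to clock as 1/ns and DDR memory transfers twice per clock; the little Python sketch below works that through on a 256-bit bus, using the rated speeds above purely as an illustration:

# Convert memory cycle time / clock to effective data rate and bandwidth
# on a 256-bit bus. Rated speeds taken from the figures above; illustration only.
def mem_specs(cycle_ns=None, clock_mhz=None, bus_bits=256):
    if clock_mhz is None:
        clock_mhz = 1000 / cycle_ns        # e.g. 0.833 ns -> ~1200 MHz
    effective_mhz = 2 * clock_mhz          # DDR: two transfers per clock
    bandwidth_gb_s = bus_bits / 8 * effective_mhz / 1000
    return clock_mhz, effective_mhz, bandwidth_gb_s

for label, kwargs in [("GDDR3 rated 0.833 ns", {"cycle_ns": 0.833}),
                      ("GDDR4 rated 1400 MHz", {"clock_mhz": 1400})]:
    clk, eff, bw = mem_specs(**kwargs)
    print(f"{label}: ~{clk:.0f} MHz core, ~{eff:.0f} MHz effective, ~{bw:.1f} GB/s on 256-bit")

So the fastest rated GDDR4 would offer roughly 90 GB/s on a 256-bit bus versus about 77 GB/s for the fastest rated GDDR3, a real but modest gap that has to be weighed against the extra cost.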

GDDR4 is more expensive. It may offer lower power consumption and higher clocks, but the cost is probably its downfall, making it unfavorable for IHVs to use.

 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
Originally posted by: Cookie Monster
GDDR4 is more expensive. It may offer lower power consumption and higher clocks, but the cost is probably its downfall, making it unfavorable for IHVs to use.

Which is probably why we've even started seeing HD3870 cards from some OEMs built with GDDR3 instead of the specified GDDR4.
 