nVidia GT200 Series Thread


Aberforth

Golden Member
Oct 12, 2006
1,707
1
0
Originally posted by: Cookie Monster
I would say that the G200 is a bit different from the G8x architecture, since a lot of the underlying aspects of the architecture (triangle setup, the thread scheduler, etc.) have changed.

Not really, they have only increased the shader processors, which is the main limiting factor for DX10 apps, since most DX10 apps are more heavily dependent on the shaders than on the rest of the GPU. I hope the software side of GTX 2xx is improved.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: Ocguy31
Originally posted by: Janooo
NV under pressure.

Another article with nothing in it, just speculation from sources such as "we've been hearing".

I'd really like for these cards to come out already!!!!

and by "we have been hearing" they mean "we have read on the anandtech forums"

I have seen it happen before.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: BFG10K
Take a look at G80 -> GT200.

GT200 is around ~2x G80 in terms of specs and performance.

G80 = 484mm^2 @ 90nm
GT200 = 576mm^2 @ 65nm

So for 2x the performance, you are talking about a 100mm^2 larger chip at the next full process.
Sure, until GT200 moves to 55nm, when the cycle begins anew (i.e. I would expect a competitive die size at 55nm and a smaller one at 45nm).

It is indeed possible that we will see a bit more of the single-GPU approach, since TSMC is ramping up their move to advanced process nodes. We will see 32nm from TSMC in early 2010 and 40nm sometime in 2009. But after that, the single GPU dies as far as I am concerned, if not before then. TSMC might have 32nm in 2010, but then it will be a 2-year wait until 2012 for 22nm.
In addition to process shrinks there are other elements constantly being explored and developed such as different materials and manufacturing processes. That and we haven't even touched organic or laser parts.

People have been screaming for years about limits but in reality traditional silicon + electricity doesn't even scratch the surface of the potential out there.

My view of the future GPU is one where a number of GPUs (likely 2-4) are connected via hardware just like we see Intel's MCM quad-cores. The future GPU will be multi-GPU but I don't think it will always rely on software scaling.
Like I said earlier if single GPUs hit a wall then so will multi-GPU as multi-GPU is built up of single GPUs. If the R600 hadn't been shrunk to 55nm the 3870 X2 wouldn't have been possible.

The only way forward from that point would be to add more and more PCIe slots and then start building server racks after that to hold extra cards, none of which are viable in consumer space.

You also can't expect to peddle multi-GPU to the mid or low range so they need single GPU upgrades or they won't buy your product.

GT200b going to 55nm doesn't have anything to do with what I was saying. I'm talking about after GT200/b..... G80 was shrunk to 65nm with G92 and it was still rather large (324mm^2) but then we have GT200, the next gen chip, 576mm^2. Let's say nVidia continues down the single GPU path and to make an easy comparison, let's say the next chip is just another GT200 refresh.

If we follow the same scaling (performance vs. die size) from G80 -> GT200, a hypothetical GT300 on 45nm could be ~2x GT200 and be 600-700mm^2 in size. Obviously that is not a possible die size for a consumer part. The only chip today of that size that I know of is Tukwila, but that's targeting a market where the selling price of the chip warrants the cost and low yields.

I think you are misinterpreting what I am saying. I am not talking about the limits of silicon here; that is another discussion for another time. I am talking about the ever-increasing die size of GPUs, even as we move to a new full process every 2 years or so.

This is not hard to see. Look at NV40 & R420... R420 was 260mm^2 and NV40 was 288mm^2. If you look ahead to 2005, G70 was ~300mm^2, R520 was ~265mm^2. In early 2006, we see R580 at 314mm^2. Then fast forward to the end of 2006, we see G80 at 484mm^2.... early next year, R600 is 420mm^2 (would have been same/larger than G80 if it were on 90nm). Then this year, we see GT200 at 576mm^2. So except for a few outliers, die size just keeps going up, despite smaller and smaller processes. We were on 130nm four years ago, yet die size was 260-288mm^2..... now we are on 65nm, and die size is 576mm^2, despite the process fitting ~4x the number of transistors in the same die space.

This trend cannot keep increasing. Die size is now approaching limits that cannot be broken, unless you want a chip that will cost several hundred dollars and graphics cards costing $1,000. In the last four years, die sizes have more than doubled. Four years from now, die sizes cannot be doubled. A 1,200mm^2 chip isn't going to happen.

Multi-GPU, or I should say Multi-die GPU, is the future and as far as I am concerned, is the only way to go to sustain current growth. As I said, there may be a last hurrah for single-die GPUs in the next two years, since TSMC's process technology is going to increase rapidly during that time. By early 2010, TSMC should have 32nm, and next year they should have 40nm. But past that.... it's multi-GPU or bust from what I see.
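A back-of-the-envelope sketch of the die-size trend argued in this post, using the areas quoted in the thread; R600's 80nm node is an assumption (the post only says it wasn't on 90nm), and the projection at the end simply reuses the G80 -> GT200 area growth ratio for illustration, nothing more.

```python
# Rough sketch of the die-size trend argued above. Areas (mm^2) are the ones
# quoted in this thread; R600's 80nm node is an assumption, not from the posts.
history = [
    ("NV40",  130, 288),
    ("R420",  130, 260),
    ("G70",   110, 300),
    ("R520",   90, 265),
    ("R580",   90, 314),
    ("G80",    90, 484),
    ("R600",   80, 420),
    ("G92",    65, 324),
    ("GT200",  65, 576),
]

for name, node, area in history:
    # Ideal transistor density scales roughly with 1/node^2, so 130nm -> 65nm
    # fits ~4x the transistors into the same area, as the post notes.
    density = (130 / node) ** 2
    print(f"{name:6s} {node:3d}nm  {area:3d}mm^2  ~{density:.1f}x density vs 130nm")

# Purely illustrative projection: reuse the G80 -> GT200 area growth ratio
# (576/484, which already includes one full-node shrink) for a hypothetical
# "2x GT200" part on the next full node.
print(f"Hypothetical next-gen: ~{576 * 576 / 484:.0f}mm^2")
```

That last line lands at roughly 686mm^2, which is where the "600-700mm^2" figure in the post comes from.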

 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
The next thing to do is to "stack" the dies in the same package. And of course we all know that the "nm" isn't the smallest object in the universe. Shrinkage does not stop when we hit 1nm. They will have to come up with other terminology for smaller processes. Who knows, it may not even be silicon. I have heard that Intel has had some luck with carbon nanotubes. Carbon. Organic. Limitless possibilities. So, I wouldn't worry too much about how big or how small a die is, as it is really not important from a technological point of view. They are always looking to shrink something anyway to save costs and such.

Just look at what Nvidia did going from G70 (7800) to G71 (7900): they considerably reduced the number of transistors while at the same time shrinking the die. I forget the exact sq.mm size difference and transistor counts.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
well.. considering 1nm is about 100 atoms of hydrogen in a row, there isn't much more shrinking past 1nm... 0.02 or 0.03nm, where there is only a single atom apart (an atom of insulation, an atom of conductive material), would be the smallest you could physically get, and that is IF it is even possible to have a 1-atom-wide process...
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: taltamir
well.. considering 1nm is about 100 atoms of hydrogen in a row, there isn't much more shrinking past 1nm... 0.02 or 0.03nm, where there is only a single atom apart (an atom of insulation, an atom of conductive material), would be the smallest you could physically get, and that is IF it is even possible to have a 1-atom-wide process...

How many atoms in a sq. nm? 100x100? 10,000? Sounds like a lot of playing room to me.
And we are also only talking about 2 dimensions here as well. Eventually, they will figure out how to go 3D.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
After 32nm/22nm (at a 16nm process), I think it's impossible to go any further since transistors will consist of only 1~2 atoms! But this will probably lead to another era of process technology based on something other than the transistors of today.

Anyway, what makes you think that G300 is 2x G200? We aren't going to see or need 1024-bit memory interfaces (plus it looks like a 256-bit memory bus paired with GDDR5 may well be the best alternative to, say, a 512-bit memory bus with GDDR3), nor 64 ROPs for that matter. The transistor cost of ALUs isn't as high as that of other circuitry like TMUs and the like. Extelleron, you can't really compare apples (55nm) to oranges (45nm), can you? Unlike the transition from a full node to a half node process (65 -> 55nm), where an optical shrink is possible, 45nm is a whole different beast since the nodes differ in a lot of areas, mainly transistor density (normally along with greatly reduced power consumption and heat). A good example is G70 to G71, a transition from the 110nm process (half node) to the 90nm process. The die size basically went from 334mm² to 196mm². That is a whopping ~40% reduction in die size alone. The same will happen each and every time a transition is made to the next full node process.

Die sizes are all relevant to this. Die size basically determines how many chips fit per wafer; obviously, the smaller they are, the more you get. Having a bigger die is an economic issue more than anything else, especially if you want to have the product in volume. Obviously the G200 IS NOT a volume product. It's targeted at a different market segment, higher than what AMD/ATi is releasing in the shape of RV770. Yields can be a problem, but nobody here knows what the yields will be like without basing claims on rumour sources. But seeing as 65nm is a really mature process, I can safely assume that yields aren't all that bad.

I agree with keys also. Die sizes aren't really the problem; there are multiple ways of solving this issue, so people being alarmed at the current G200 die size is just a short-term thing, I suppose. I think it's certain that this card will run cool at idle (since it consumes 25~35W) and only draw its max load during 3D-intensive tasks. No need to worry about power/heat unless you're playing Crysis 24/7. I'm kind of impressed by the techniques nVIDIA has implemented to offset almost all the disadvantages of a monolithic GPU die in the heat/power department. Shutting down parts of the GPU sounds easy when you read about it, but taking the concept and implementing it is one of the most difficult things to do.

Originally posted by: Aberforth
Not really, they have only increased the shader processors, which is the main limiting factor for DX10 apps, since most DX10 apps are more heavily dependent on the shaders than on the rest of the GPU. I hope the software side of GTX 2xx is improved.

There's more (well, a whole lot more) to the GPU than just shaders/ALUs.

edit - it also depends on what atom you're basing your calculations on, because not all atoms are created equal

edit2 - The transistors of today cannot scale beyond 16nm. I should have clarified what I meant when I said "after 32nm".

And here's an interesting link on that topic.
Link

Enough being OT though.
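To put numbers on two of the points above (the G70 -> G71 shrink and why die size determines chips per wafer), here is a quick sketch; the 300mm wafer and the area-only dies-per-wafer formula are simplifying assumptions that ignore edge loss and defects.

```python
import math

# G70 -> G71 die shrink quoted above (110nm half node -> 90nm full node).
g70, g71 = 334.0, 196.0  # mm^2, figures from the post
print(f"Shrink: {(1 - g71 / g70) * 100:.0f}% smaller die")  # ~41%

# Naive dies-per-wafer estimate: usable wafer area / die area.
# Real fabs lose dies to the wafer edge and to defects; this ignores both.
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

for area in (196, 334, 576):
    print(f"{area} mm^2 die -> ~{dies_per_wafer(area)} candidates per 300mm wafer")
```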
 

uclaLabrat

Diamond Member
Aug 2, 2007
5,629
3,039
136
Silicon bond lengths are about 2 angstroms, or about 0.2nm. Carbon bond lengths are about 1.5 angstroms, depending on the bond type. 32nm of silicon would be about 150 atoms wide, not 1-2.
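A quick check of that arithmetic (and of the "10nm, or 50 atoms" estimate a few posts further down), taking the ~0.2nm silicon bond length quoted above:

```python
# Atoms spanning a given feature size, using the ~0.2nm Si bond length quoted above.
si_bond_nm = 0.2
for feature_nm in (32, 10, 1):
    print(f"{feature_nm:>2} nm  ~ {feature_nm / si_bond_nm:.0f} atoms across")
# 32nm ~ 160 atoms, 10nm ~ 50 atoms, 1nm ~ 5 atoms
```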
 

JPB

Diamond Member
Jul 4, 2005
4,064
89
91
GT200 scores revealed

THANKS TO NVIDIA'S shutting us out, we are not handcuffed about the GT200 numbers, so here they are. Prepare to be underwhelmed, Nvidia botched this one badly.

Since you probably care only about the numbers, lets start out with them. All 3DMark scores are rounded to the nearest 250, frame rates to the nearest .25FPS. The drivers are a bit old, but not that old, and the CPU is an Intel QX9650 @ 3.0GHz on the broken OS, 32-bit, SP1

The short story: not enough, not nearly enough. It is barely faster than a GX2, yields are crap, and there will only be a syphilitic trickle of parts on launch, slowing to next to nothing after that.

Remember though, these parts are $449 for the 260, $649 for the 280, and they are barely faster than the ATI 770/4870. On price/performance, they lose badly, really badly, to the 770. On the high end, the R700 spanks them by wide margins, but those numbers will have to wait a bit.

If you are thinking that NV will put out a dual card, don't hold your breath, they are power limited, die size limited, cost limited and production limited.

They can't make it until the shrink in late fall, and even then it is questionable. Is the card quick? Yeah, it is decent. Is it good enough? Nope, not even close. This card is a dinosaur, too hot, too late. µ

:laugh:

Numbers
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
GT200b going to 55nm doesn't have anything to do with what I was saying.
Sure it does. It has everything to do with it because a die shrink usually makes the die size smaller (assuming you haven't added transistors of course).

If we follow the same scaling (performance vs. die size) from G80 -> GT200, a hypothetical GT300 on 45nm could be ~2x GT200 and be 600-700mm^2 in size.
This is making the assumption process improvements aren't happening which they are. The fact is you have no idea what the die size will be when they decide to make the GT300 because you have no idea how manufacturing will improve by the time 45 nm rolls around.

I think you are misinterpreting what I am saying. I am not talking about the limits of silicon here; that is another discussion for another time. I am talking about the ever-increasing die size of GPUs, even as we move to a new full process every 2 years or so.
You seem to be saying that die size is getting out of hand but you're ignoring the fact that manufacturing processes are also improving. If they weren't improving you'd have a point but that isn't the case.

This is not hard to see. Look at NV40 & R420... R420 was 260mm^2 and NV40 was 288mm^2. If you look ahead to 2005, G70 was ~300mm^2, R520 was ~265mm^2. In early 2006, we see R580 at 314mm^2. Then fast forward to the end of 2006, we see G80 at 484mm^2.... early next year, R600 is 420mm^2 (would have been same/larger than G80 if it were on 90nm).
Sure, but what size would the NV40 be on the current 65 nm process? Tiny enough to be passively cooled and running off a low-end PSU I'll bet.

In that situation you'd have very small die sizes so are you saying you'd rather have sixteen (or whatever) NV40s on a single die rather than a single GT200 and rely on the driver to deliver sixteen-way scaling? I sure as heck wouldn't as that kind of scaling is simply not going to happen.

If die sizes get out of hand for current manufacturing processes they'll simply hold off until manufacturing catches up. That's pretty much what they've been doing anyway since these GPUs are often on the bleeding edge of current technologies.

Multi-GPU, or I should say Multi-die GPU, is the future and as far as I am concerned, is the only way to go to sustain current growth.
How? If you're saying the die sizes are getting too big for single GPUs how can they work for multi-GPUs?

Case in point: the R600 before the 55nm shrink.

Without a die shrink a 3870 X2 wouldn't be possible so your only option is to put more of them into a system. When you run out of PCIe slots what then? Without a 55 nm shrink you've already hit your thermal limits so how do you go faster for next gen?

About the only way is to start building server farms with heaps of PCIe slots in them and populate them with more R600s. And good luck trying to get n-way scaling out of Crossfire/SLI given two way is already brittle compared to a single GPU.

That's my point - the 55nm shrink on a single R600 GPU provided the building blocks necessary for a dual-GPU 3870 X2. Without manufacturing improvements multi-GPU is hampered just as much as single GPUs are, especially if you're trying to go multi-die.

Like I said earlier, people have been proclaiming the demise of Moore's law for years but it never happens.
 

Aberforth

Golden Member
Oct 12, 2006
1,707
1
0
Originally posted by: JPB
GT200 scores revealed

THANKS TO NVIDIA'S shutting us out, we are not handcuffed about the GT200 numbers, so here they are. Prepare to be underwhelmed, Nvidia botched this one badly.

Since you probably care only about the numbers, lets start out with them. All 3DMark scores are rounded to the nearest 250, frame rates to the nearest .25FPS. The drivers are a bit old, but not that old, and the CPU is an Intel QX9650 @ 3.0GHz on the broken OS, 32-bit, SP1


Numbers

Ouch, are these numbers real? I've never seen such an incompetent company really...they've been pushing their SLI agenda just to prove it they are out of brains.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
THANKS TO NVIDIA'S shutting us out, we are not handcuffed about the GT200 numbers, so here they are. Prepare to be underwhelmed, Nvidia botched this one badly.
The INQ really don't like nVidia, do they?

Having said that, the 9800 GX2 scores 30 FPS at similar settings, so assuming the same benchmark was used, the GTX 280 is 19% faster than the 9800 GX2 while the GTX 260 is actually slower.
 

Dkcode

Senior member
May 1, 2005
995
0
0
If those scores are real then it's pretty piss poor.

However, going on the attitude with which that article was written (and their previous ones), I'll wait for a more reliable source of info.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Originally posted by: uclaLabrat
Silicon bond lengths are about 2 angstroms, or about .2nm Carbon bond lengths are about 1.5, depending on the bond type. 32nm silicon would be about 150 atoms wide, not 1-2.

The current process is estimated to be limited to 10nm (or 50 atoms).
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: Aberforth
Originally posted by: JPB
GT200 scores revealed

THANKS TO NVIDIA'S shutting us out, we are not handcuffed about the GT200 numbers, so here they are. Prepare to be underwhelmed, Nvidia botched this one badly.

Since you probably care only about the numbers, lets start out with them. All 3DMark scores are rounded to the nearest 250, frame rates to the nearest .25FPS. The drivers are a bit old, but not that old, and the CPU is an Intel QX9650 @ 3.0GHz on the broken OS, 32-bit, SP1


Numbers

Ouch, are these numbers real? I've never seen such an incompetent company really...they've been pushing their SLI agenda just to prove it they are out of brains.

When I see real reviews showing NVIDIA has been surpassed in performance or image quality, I'll believe they're second best in the world at what they do.

Until then, I'll keep thinking they're the best in the world at what they do, rather than "incompetent".

 

JPB

Diamond Member
Jul 4, 2005
4,064
89
91
Originally posted by: nRollo
Originally posted by: Aberforth
Originally posted by: JPB
GT200 scores revealed

THANKS TO NVIDIA'S shutting us out, we are not handcuffed about the GT200 numbers, so here they are. Prepare to be underwhelmed, Nvidia botched this one badly.

Since you probably care only about the numbers, lets start out with them. All 3DMark scores are rounded to the nearest 250, frame rates to the nearest .25FPS. The drivers are a bit old, but not that old, and the CPU is an Intel QX9650 @ 3.0GHz on the broken OS, 32-bit, SP1


Numbers

Ouch, are these numbers real? I've never seen such an incompetent company really...they've been pushing their SLI agenda just to prove it they are out of brains.

When I see real reviews showing NVIDIA has been surpassed in performance or image quality, I'll believe they're second best in the world at what they do.

Until then, I'll keep thinking they're the best in the world at what they do, rather than "incompetent".

No refuting heh ?

I think the *keyword* in your post is *until* :thumbsup:

You are under NDA though right nRollo ?
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: JPB
Originally posted by: nRollo
Originally posted by: Aberforth
Originally posted by: JPB
GT200 scores revealed

THANKS TO NVIDIA'S shutting us out, we are not handcuffed about the GT200 numbers, so here they are. Prepare to be underwhelmed, Nvidia botched this one badly.

Since you probably care only about the numbers, lets start out with them. All 3DMark scores are rounded to the nearest 250, frame rates to the nearest .25FPS. The drivers are a bit old, but not that old, and the CPU is an Intel QX9650 @ 3.0GHz on the broken OS, 32-bit, SP1


Numbers

Ouch, are these numbers real? I've never seen such an incompetent company really...they've been pushing their SLI agenda just to prove it they are out of brains.

When I see real reviews showing NVIDIA has been surpassed in performance or image quality, I'll believe they're second best in the world at what they do.

Until then, I'll keep thinking they're the best in the world at what they do, rather than "incompetent".

No refuting heh ?

I think the *keyword* in your post is *until* :thumbsup:

You are under NDA though right nRollo ?

I can't comment on the performance of these parts due to NDAs I've signed.

You're reading too much into my "until" comment. When I posted it my thought was:
"NVIDIA has been, and is, the world leader in graphics performance and image quality. We haven't seen any benchmarks or slides showing that has changed, so until we do, I'll have to assume they are still."

That had nothing to do with upcoming reviews; I know nothing about the RV770 or R700 that I can compare to what I know of the GT200 series.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: Aberforth
Originally posted by: JPB
GT200 scores revealed

THANKS TO NVIDIA'S shutting us out, we are not handcuffed about the GT200 numbers, so here they are. Prepare to be underwhelmed, Nvidia botched this one badly.

Since you probably care only about the numbers, lets start out with them. All 3DMark scores are rounded to the nearest 250, frame rates to the nearest .25FPS. The drivers are a bit old, but not that old, and the CPU is an Intel QX9650 @ 3.0GHz on the broken OS, 32-bit, SP1


Numbers

Ouch, are these numbers real? I've never seen such an incompetent company really...they've been pushing their SLI agenda just to prove it they are out of brains.

nVidia isn't an incompetent company at all.... they just made one mistake. That's assuming that these results will prove true on launch day. They're probably close, I don't think INQ would blatantly lie 6 days before launch.... but I'll be more interested in what Anand thinks.

I think with GT200 nVidia pushed the boundaries a bit too much and they might end up suffering because of it. In one way it is awesome to see a huge, no-compromise chip like GT200.... but as I have said before, it is not going to be the best strategy now or in the future. I'm pretty sure the problem with GT200 is not design but yields and nVidia not being able to reach target clocks with such a large chip.

This is making the assumption process improvements aren't happening which they are. The fact is you have no idea what the die size will be when they decide to make the GT300 because you have no idea how manufacturing will improve by the time 45 nm rolls around.

I have definitely accommodated for moving to smaller processes in what I said. If by process improvements you mean actual improvements in the process used, such as changes to the transistors (i.e. high-k dielectric) or the interconnects, then that is another story. But that will not affect die size; it will only improve transistor performance (allowing for clock increases) and reduce leakage (improving power consumption, thus allowing for either higher clocks or lower power usage).

What I am talking about, and I've said this before, is that the die size of GPUs keeps rising despite the move to smaller processes at a rapid rate. I've supported this with more than enough evidence. So tell me why this won't continue? Why will it suddenly be different now than it has been for years? The bottom line is unless you can find some way to stop this trend, GPUs will simply get too large. I think we can see with GT200 that 576mm^2 is already excessive. Yet as I said, if the current trend continues, die sizes can only get larger even on more and more advanced processes. This is a clear problem, there is no denying it. How can it be fixed? There are two options that I can see. One way is to slow down the rate of progress in the GPU industry. The other is to split GPUs into multiple die.

Let's imagine GT200 if it were built like I think the future GPU should be. Instead of a single chip, let's imagine that it were built from four die, connected via Hypertransport links. These four die would all be located right next to each other, under the IHS. A very similar setup to Intel's Kentsfield/Yorkfield CPUs, except we have four die instead of two.

Our four chips would have a die size of around ~144mm^2. That is a very acceptable die size for a chip and the yields would be excellent. But it gets even better.... we don't need to use a 65nm process anymore, we can go to 55nm. This reduces heat/power consumption, allows us to increase clocks, and allows for lower die sizes. Given a 100% shrink, the die size of our chips would actually be 103mm^2 (obviously this doesn't take into account that very few die shrinks will be 100%). Our hypothetical GT200 will have significantly better yield, lower power consumption, and higher performance than the single-GPU GT200 nVidia will put out. And creating an additional SKU is extremely easy; we just put 3 die instead of 4. There we have our GTX 260, but we aren't wasting any die space.

Which GT200 do you think is better? The single-GPU one, or one made out of 4 die? I think it is pretty obvious.

There is a reason why Intel does the exact thing I am saying; the yield of two 143mm^2 chips makes it much cheaper to produce than a single 286mm^2 chip. Intel doesn't suffer any noticeable performance loss from having two die, so nVidia shouldn't have a problem either. And just to put yields in perspective here.... if Intel thinks a 286mm^2 chip is too big for its own process, and prefers to make it as a two die CPU, imagine what the yields of a chip 576mm^2 are on a foundry process.
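A rough sketch of the yield argument in this post, using a toy Poisson defect model; the defect density below is an arbitrary illustrative number, not a real TSMC figure, and it ignores the packaging and interconnect cost of going multi-die.

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dies with zero random defects (simple Poisson model)."""
    defects_per_die = defects_per_cm2 * die_area_mm2 / 100.0  # mm^2 -> cm^2
    return math.exp(-defects_per_die)

D0 = 0.5  # defects per cm^2 -- purely illustrative, not a real process figure

single = poisson_yield(576, D0)  # one monolithic GT200-sized die
quad   = poisson_yield(144, D0)  # one of four hypothetical 144 mm^2 dies

print(f"576 mm^2 monolithic die yield: {single:.1%}")
print(f"144 mm^2 die yield:            {quad:.1%}")
print(f"Chance four specific small dies are all good: {quad**4:.1%}")
```

Under this toy model the chance that four specific small dies are all good works out the same as the big-die yield, so the real economic win is binning: a bad 144mm^2 die is discarded on its own instead of scrapping an entire 576mm^2 chip, which is the Intel two-die argument the post is making.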





 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: Extelleron
Originally posted by: Aberforth
Originally posted by: JPB
GT200 scores revealed

THANKS TO NVIDIA'S shutting us out, we are not handcuffed about the GT200 numbers, so here they are. Prepare to be underwhelmed, Nvidia botched this one badly.

Since you probably care only about the numbers, lets start out with them. All 3DMark scores are rounded to the nearest 250, frame rates to the nearest .25FPS. The drivers are a bit old, but not that old, and the CPU is an Intel QX9650 @ 3.0GHz on the broken OS, 32-bit, SP1


Numbers

Ouch, are these numbers real? I've never seen such an incompetent company really...they've been pushing their SLI agenda just to prove it they are out of brains.

nVidia isn't an incompetent company at all.... they just made one mistake. That's assuming that these results will prove true on launch day. They're probably close, I don't think INQ would blatantly lie 6 days before launch.... but I'll be more interested in what Anand thinks.

I think with GT200 nVidia pushed the boundaries a bit too much and they might end up suffering because of it. In one way it is awesome to see a huge, no-compromise chip like GT200.... but as I have said before, it is not going to be the best strategy now or in the future. I'm pretty sure the problem with GT200 is not design but yields and nVidia not being able to reach target clocks with such a large chip.

This is making the assumption process improvements aren't happening which they are. The fact is you have no idea what the die size will be when they decide to make the GT300 because you have no idea how manufacturing will improve by the time 45 nm rolls around.

I have definitely accommodated for moving to smaller processes in what I said. If by process improvements you mean actual improvements in the process used, such as changes to the transistors (i.e. high-k dielectric) or the interconnects, then that is another story. But that will not affect die size; it will only improve transistor performance (allowing for clock increases) and reduce leakage (improving power consumption, thus allowing for either higher clocks or lower power usage).

What I am talking about, and I've said this before, is that the die size of GPUs keeps rising despite the move to smaller processes at a rapid rate. I've supported this with more than enough evidence. So tell me why this won't continue? Why will it suddenly be different now than it has been for years? The bottom line is unless you can find some way to stop this trend, GPUs will simply get too large. I think we can see with GT200 that 576mm^2 is already excessive. Yet as I said, if the current trend continues, die sizes can only get larger even on more and more advanced processes. This is a clear problem, there is no denying it. How can it be fixed? There are two options that I can see. One way is to slow down the rate of progress in the GPU industry. The other is to split GPUs into multiple die.

Let's imagine GT200 if it were built like I think the future GPU should be. Instead of a single chip, let's imagine that it were built from four die, connected via Hypertransport links. These four die would all be located right next to each other, under the IHS. A very similar setup to Intel's Kentsfield/Yorkfield CPUs, except we have four die instead of two.

Our four chips would have a die size of around ~144mm^2. That is a very acceptable die size for a chip and the yields would be excellent. But it gets even better.... we don't need to use a 65nm process anymore, we can go to 55nm. This reduces heat/power consumption, allows us to increase clocks, and allows for lower die sizes. Given a 100% shrink, the die size of our chips would actually be 103mm^2 (obviously this doesn't take into account that very few die shrinks will be 100%). Our hypothetical GT200 will have significantly better yield, lower power consumption, and higher performance than the single-GPU GT200 nVidia will put out. And creating an additional SKU is extremely easy; we just put 3 die instead of 4. There we have our GTX 260, but we aren't wasting any die space.

Which GT200 do you think is better? The single-GPU one, or one made out of 4 die? I think it is pretty obvious.

There is a reason why Intel does the exact thing I am saying; the yield of two 143mm^2 chips makes it much cheaper to produce than a single 286mm^2 chip. Intel doesn't suffer any noticeable performance loss from having two die, so nVidia shouldn't have a problem either. And just to put yields in perspective here.... if Intel thinks a 286mm^2 chip is too big for its own process, and prefers to make it as a two die CPU, imagine what the yields of a chip 576mm^2 are on a foundry process.

You lost me somewhere. How will 4 chips in the same package and equal in power to a GT200 equate to 144mm2? And furthermore, why are you sort of obsessing over die size? Don't take this the wrong way, not at all. It's just that it's not that important for the most part.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: keysplayr2003
Originally posted by: Extelleron
Originally posted by: Aberforth
Originally posted by: JPB
GT200 scores revealed

THANKS TO NVIDIA'S shutting us out, we are not handcuffed about the GT200 numbers, so here they are. Prepare to be underwhelmed, Nvidia botched this one badly.

Since you probably care only about the numbers, lets start out with them. All 3DMark scores are rounded to the nearest 250, frame rates to the nearest .25FPS. The drivers are a bit old, but not that old, and the CPU is an Intel QX9650 @ 3.0GHz on the broken OS, 32-bit, SP1


Numbers

Ouch, are these numbers real? I've never seen such an incompetent company really...they've been pushing their SLI agenda just to prove it they are out of brains.

nVidia isn't an incompetent company at all.... they just made one mistake. That's assuming that these results will prove true on launch day. They're probably close, I don't think INQ would blatantly lie 6 days before launch.... but I'll be more interested in what Anand thinks.

I think with GT200 nVidia pushed the boundaries a bit too much and they might end up suffering because of it. In one way it is awesome to see a huge, no-compromise chip like GT200.... but as I have said before, it is not going to be the best strategy now or in the future. I'm pretty sure the problem with GT200 is not design but yields and nVidia not being able to reach target clocks with such a large chip.

This is making the assumption process improvements aren't happening which they are. The fact is you have no idea what the die size will be when they decide to make the GT300 because you have no idea how manufacturing will improve by the time 45 nm rolls around.

I have definitely accommodated for moving to smaller processes in what I said. If by process improvements you mean actual improvements in the process used, such as changes to the transistors (i.e. high-k dielectric) or the interconnects, then that is another story. But that will not affect die size; it will only improve transistor performance (allowing for clock increases) and reduce leakage (improving power consumption, thus allowing for either higher clocks or lower power usage).

What I am talking about, and I've said this before, is that the die size of GPUs keeps rising despite the move to smaller processes at a rapid rate. I've supported this with more than enough evidence. So tell me why this won't continue? Why will it suddenly be different now than it has been for years? The bottom line is unless you can find some way to stop this trend, GPUs will simply get too large. I think we can see with GT200 that 576mm^2 is already excessive. Yet as I said, if the current trend continues, die sizes can only get larger even on more and more advanced processes. This is a clear problem, there is no denying it. How can it be fixed? There are two options that I can see. One way is to slow down the rate of progress in the GPU industry. The other is to split GPUs into multiple die.

Let's imagine GT200 if it were built like I think the future GPU should be. Instead of a single chip, let's imagine that it were built from four die, connected via Hypertransport links. These four die would all be located right next to each other, under the IHS. A very similar setup to Intel's Kentsfield/Yorkfield CPUs, except we have four die instead of two.

Our four chips would have a die size of around ~144mm^2. That is a very acceptable die size for a chip and the yields would be excellent. But it gets even better.... we don't need to use a 65nm process anymore, we can go to 55nm. This reduces heat/power consumption, allows us to increase clocks, and allows for lower die sizes. Given a 100% shrink, the die size of our chips would actually be 103mm^2 (obviously this doesn't take into account that very few die shrinks will be 100%). Our hypothetical GT200 will have significantly better yield, lower power consumption, and higher performance than the single-GPU GT200 nVidia will put out. And creating an additional SKU is extremely easy; we just put 3 die instead of 4. There we have our GTX 260, but we aren't wasting any die space.

Which GT200 do you think is better? The single-GPU one, or one made out of 4 die? I think it is pretty obvious.

There is a reason why Intel does the exact thing I am saying; the yield of two 143mm^2 chips makes it much cheaper to produce than a single 286mm^2 chip. Intel doesn't suffer any noticeable performance loss from having two die, so nVidia shouldn't have a problem either. And just to put yields in perspective here.... if Intel thinks a 286mm^2 chip is too big for its own process, and prefers to make it as a two die CPU, imagine what the yields of a chip 576mm^2 are on a foundry process.

You lost me somewhere. How will 4 chips in the same package and equal in power to a GT200 equate to 144mm2? And furthermore, why are you sort of obsessing over die size? Don't take this the wrong way, not at all. It's just that it's not that important for the most part.

They wouldn't be equal to the GT200 at all. I'm saying if you could split the GT200 into four chips, which together would give you 240 SP / 80 TMU / 32 ROP, then each die would (in theory) be 144mm^2.

Die size doesn't matter much at all for consumers, but for nVidia it does. Die size is the most important thing, along with power/heat, that gets in the way of advancing performance. The faster you want a chip to be, the bigger you need to make it.... and obviously there are limits on that.

Here are supposed overclocking results/performance for the GTX 280:

http://digidownload.libero.it/hackab321/gtx280.bmp

It seems to scale very well with clocks, but the overclocking results are not very good. I was hoping GT200 would be capable of 700MHz+; it looks like heat is restricting the clocks. Could be that they just got a bad chip.
 

Aberforth

Golden Member
Oct 12, 2006
1,707
1
0
Originally posted by: Extelleron
Originally posted by: Aberforth
Ouch, are these numbers real? I've never seen such an incompetent company really...they've been pushing their SLI agenda just to prove it they are out of brains.

nVidia isn't an incompetent company at all.... they just made one mistake. That's assuming that these results will prove true on launch day. They're probably close, I don't think INQ would blatantly lie 6 days before launch.... but I'll be more interested in what Anand thinks.

I think with GT200 nVidia pushed the boundaries a bit too much and they might end up suffering because of it. In one way it is awesome to see a huge, no-compromise chip like GT200.... but as I have said before, it is not going to be the best strategy now or in the future. I'm pretty sure the problem with GT200 is not design but yields and nVidia not being able to reach target clocks with such a large chip.

First of all, there are no boundaries in the technical world. When a person or a company says they are "at the boundaries of technical limitation", it means they are running out of options due to inadequate resources/research, and they push alternate technologies to stay in the competition - like, for instance, the introduction of SLI/Crossfire, the acquisition of Ageia, or buy-one-get-one-free schemes. That being the case, people take hardware power for granted and write extremely bloated software for it; the code doesn't make efficient use of the hardware because of sloppy algorithms and unbalanced use of resources, like overusing DX10 APIs to make games look realistic, writing horrible drivers, and such. The same thing has happened with multi-core technologies: the software side is too bloated. We should remember that software and hardware are interrelated, so one cannot perform well while the other is underpowered. When one writes really kick-ass algorithms, it might be possible to make games like Crysis run on old P4 machines, but in reality that is only theory for now. I know some people believe there comes a time when we cannot go beyond a 12nm fab, blah blah... but similar thinkers existed years ago who believed 90nm was the last node achievable. But the question is, does it really matter? If in 5 years there is 50% progress in the hardware industry, there will be only 5% progress in software, mainly because software is more on the intellectual side and, unlike hardware, it's customizable.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: Aberforth
Originally posted by: Extelleron
Originally posted by: Aberforth
Ouch, are these numbers real? I've never seen such an incompetent company really...they've been pushing their SLI agenda just to prove it they are out of brains.

nVidia isn't an incompetent company at all.... they just made one mistake. That's assuming that these results will prove true on launch day. They're probably close, I don't think INQ would blatantly lie 6 days before launch.... but I'll be more interested in what Anand thinks.

I think with GT200 nVidia pushed the boundaries a bit too much and they might end up suffering because of it. In one way it is awesome to see a huge, no-compromise chip like GT200.... but as I have said before, it is not going to be the best strategy now or in the future. I'm pretty sure the problem with GT200 is not design but yields and nVidia not being able to reach target clocks with such a large chip.

First of all, there are no boundaries in the technical world. When a person or a company says they are "at the boundaries of technical limitation", it means they are running out of options due to inadequate resources/research, and they push alternate technologies to stay in the competition - like, for instance, the introduction of SLI/Crossfire, the acquisition of Ageia, or buy-one-get-one-free schemes. That being the case, people take hardware power for granted and write extremely bloated software for it; the code doesn't make efficient use of the hardware because of sloppy algorithms and unbalanced use of resources, like overusing DX10 APIs to make games look realistic, writing horrible drivers, and such. The same thing has happened with multi-core technologies: the software side is too bloated. We should remember that software and hardware are interrelated, so one cannot perform well while the other is underpowered. When one writes really kick-ass algorithms, it might be possible to make games like Crysis run on old P4 machines, but in reality that is only theory for now. I know some people believe there comes a time when we cannot go beyond a 12nm fab, blah blah... but similar thinkers existed years ago who believed 90nm was the last node achievable. But the question is, does it really matter? If in 5 years there is 50% progress in the hardware industry, there will be only 5% progress in software, mainly because software is more on the intellectual side and, unlike hardware, it's customizable.

There definitely are boundaries in the technical world; fortunately, boundaries can be pushed back by ever-improving process technology. But there are boundaries: die size boundaries, heat boundaries, and power consumption boundaries. They can be pushed back every time we see a new fabrication process, but we will never be rid of them.

GT200 pushes those boundaries in all three respects: it is the largest consumer chip ever produced AFAIK, it is the hottest GPU ever produced, and it uses more power than any GPU we have seen before.

I really don't think there is anything wrong with the GT200 design, as I said in my post. It has 1.875x the number of shaders that G80 had, more TMUs, more ROPs, a huge bus supplying plenty of bandwidth... that seems like a successful design plan for a chip, doesn't it? G80 was amazing in terms of performance, so why should a chip based on the same architecture with increased execution resources and some optimizations not be amazing as well?

The problem is that the yield on GT200 absolutely sucks (obviously I have no official data regarding this, but it is quite evident.) And clearly the chips are not capable of the same clocks that even G80 could hit on a 90nm process, much less what G92 hit on 65nm. I think this is related to the size of the chip and the extreme complexity. GT200 pushes against the boundaries of 65nm technology too much, and this caused problems.

The latest news is that the launch date for the GTX 280 & GTX 260 is being moved up to June 16th, so Monday:

http://www.theinquirer.net/gb/...ia-changes-gt200-dates

INQ seems to believe that the 260 won't be available until June 26th.... if that's true it makes my life easier actually. I won't have to make a decision, it's HD 4870, because my Step-up expires on June 23rd.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: geokilla
I did a quick search in the topic and I didn't find any info regarding the following so here it is:

Geforce GTX specifications are leaking all over the web, so why not share them here, since they have been everywhere in the past 24 hours.

1.4 billion transistors and 240 cores

GeForce GTX 280:

Codenamed GT200, the GTX 280 GPU is built on a 65nm process, has 1.4 billion transistors and is clocked at 602MHz. The processor clock, what we used to call the shader clock, runs at 1296MHz, and Nvidia's new chip has an impressive 240 cores.

The GTX 280 card uses GDDR3 memory with 512-bit memory interface clocked to 1107MHz (2214MHz). The card has 141.7GB/s bandwidth and it comes with a total of 1GB memory.

The GT200 chip has 32 ROPs, 80 texture filtering units and a 48.2 Gigatexels/sec texture filtering rate. The card supports HDCP and HDMI via a DVI-to-HDMI adapter, and comes with two dual-link DVI-I ports and a single HDTV out.

Ramdac is set to 400MHz, and the card itself is dual-slot with PCIe 2.0 interface and has one 8-pin and one 6-pin power connector. So, now you know



GeForce GTX 260:

Nvidia's second GT200-based card is the GeForce GTX 260. The GeForce GTX 260 is likewise based on the 65nm GT200 core with 1.4 billion transistors, but this time clocked at 576MHz. Some of these transistors will sit disabled, as the GTX 260 has one of eight clusters disabled.

The Shaders are clocked to 1242MHz and the card has a total of 192 Shaders (what used to be called Shader units are now called processor cores).

The card has an odd amount of memory: 896MB of GDDR3 clocked at 999MHz (1998MHz effective), which is enough for 111.9GB/s of bandwidth. The slower of the chips has 28 ROPs, 64 texture filtering units and a 36.9 GigaTexels/second texture filtering rate.

If you look at these specs closely, you will see that GTX 260 is the same as GTX 280, but with one cluster disabled. If GTX 280 has eight clusters, GTX 260 ends up at seven.

The card has HDCP and HDMI via DVI, but this time two 6-pin power connectors, and it launches next Tuesday.



When it comes to power the leaks are saying this:

Geforce GTX 280 will need a lot of power. Its maximal board power is set to ultra high 236W, which is about the same number that we reported months ago.

One 8-pin power connector can provide up to 150W of power, as a 6-pin is stuck at 75W and so is the PCIe 2.0 slot. If you use one 8-pin and one 6-pin together with PCIe 2.0 you can end up with up to 300W.

The GTX 260 is happy with two 6-pin connectors (2x75W) plus the PCIe 2.0 slot, which provides an additional 75W. The GTX 260 can get up to 225W, and the card actually needs much less than that.

The chip has a thermal threshold at 105 degrees Celsius, and once the GPU reaches this temperature the clock speed will automatically drop down.

The GTX 260 is a bit better, as its maximum board power is 182 Watts, so 2x6-pin plus the power from the PCIe 2.0 slot tends to be enough. The GPU threshold is again 105 degrees Celsius, but as we said before it is the same chip, just with one cluster disabled.



Legit Reviews will as always have a full review on launch day.

doesn't pci-e 2.0 provide 150w?
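A quick sanity check of the bandwidth and power figures quoted above; the 448-bit bus for the GTX 260 is inferred from the 896MB memory amount rather than stated in the leak, and GDDR3 is assumed to transfer on both clock edges.

```python
# Memory bandwidth = bus width (in bytes) * effective (double-data-rate) clock.
def bandwidth_gbps(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz / 1000.0  # GB/s

print(f"GTX 280: {bandwidth_gbps(512, 2214):.1f} GB/s")  # ~141.7 GB/s, as quoted
print(f"GTX 260: {bandwidth_gbps(448, 1998):.1f} GB/s")  # ~111.9 GB/s (448-bit inferred)

# Power budget per the connector limits quoted above (slot 75W, 6-pin 75W, 8-pin 150W).
print(f"GTX 280 available: {75 + 75 + 150} W vs 236 W board power")
print(f"GTX 260 available: {75 + 75 + 75} W vs 182 W board power")
# Per the quoted leak the PCIe 2.0 x16 slot itself still supplies 75W; the
# 300W figure comes from adding the auxiliary connectors on top of the slot.
```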
 