Originally posted by: Extelleron
Originally posted by: Aberforth
Originally posted by: JPB
GT200 scores revealed
THANKS TO NVIDIA shutting us out, we are not handcuffed about the GT200 numbers, so here they are. Prepare to be underwhelmed; Nvidia botched this one badly.
Since you probably care only about the numbers, let's start with them. All 3DMark scores are rounded to the nearest 250, frame rates to the nearest 0.25 FPS. The drivers are a bit old, but not that old, and the CPU is an Intel QX9650 @ 3.0GHz on the broken OS, 32-bit, SP1.
Numbers
Ouch, are these numbers real? I've never seen such an incompetent company, really... they've been pushing their SLI agenda just to prove they are out of brains.
nVidia isn't an incompetent company at all... they just made one mistake. That's assuming these results prove true on launch day. They're probably close; I don't think the INQ would blatantly lie six days before launch... but I'll be more interested in what Anand thinks.
I think with GT200 nVidia pushed the boundaries a bit too much, and they might end up suffering because of it. In one way it is awesome to see a huge, no-compromise chip like GT200... but as I have said before, it is not going to be the best strategy now or in the future. I'm pretty sure the problem with GT200 is not the design but the yields, and nVidia not being able to reach target clocks with such a large chip.
This is making the assumption that process improvements aren't happening, which they are. The fact is you have no idea what the die size will be when they decide to make the GT300, because you have no idea how much manufacturing will improve by the time 45nm rolls around.
I have definitely accounted for the move to smaller processes in what I said. If by process improvements you mean actual improvements in the process used, such as changes to the transistors (i.e. high-k dielectrics) or the interconnects, then that is another story. But those will not affect die size; they only improve transistor performance (allowing for clock increases) and reduce leakage (improving power consumption, thus allowing for either higher clocks or lower power usage).
What I am talking about, and I've said this before, is that the die size of GPUs keeps rising despite the rapid move to smaller processes. I've supported this with more than enough evidence. So tell me why this won't continue? Why will it suddenly be different now when it has held for years? The bottom line is that unless something stops this trend, GPUs will simply get too large. I think we can see with GT200 that 576mm^2 is already excessive. Yet as I said, if the current trend continues, die sizes will only get larger, even on more and more advanced processes.
This is a clear problem; there is no denying it. How can it be fixed? There are two options that I can see. One is to slow down the rate of progress in the GPU industry. The other is to split GPUs into multiple dies.
Let's imagine GT200 if it were built the way I think future GPUs should be. Instead of a single chip, imagine it were built from four dies connected via HyperTransport links. These four dies would sit right next to each other under the IHS, a very similar setup to Intel's Kentsfield/Yorkfield CPUs, except with four dies instead of two.
Each of our four dies would have an area of around 144mm^2. That is a very acceptable die size, and the yields would be excellent. But it gets even better... we don't need to use a 65nm process anymore, we can go to 55nm. This reduces heat and power consumption, allows us to raise clocks, and shrinks the dies further. Given an ideal full shrink (area scaling with the square of the feature-size ratio), each die would actually come in at about 103mm^2 (obviously very few real shrinks are ideal). Our hypothetical GT200 would have significantly better yields, lower power consumption, and higher performance than the single-die GT200 nVidia will put out. And creating an additional SKU is extremely easy: we just put in 3 dies instead of 4. There we have our GTX 260, without wasting any die space.
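Just to sanity-check that arithmetic, here is a quick sketch of the numbers above. The 4-way split and the 65nm-to-55nm move are from the post itself; the "ideal shrink" assumption (area scaling with the square of the feature-size ratio) is only a first-order approximation, not what a real shrink necessarily delivers.

```python
# Quick check of the multi-die GT200 arithmetic (illustrative only).
gt200_area_mm2 = 576                      # monolithic GT200 on 65nm
per_die_65nm = gt200_area_mm2 / 4         # split into four dies
per_die_55nm = per_die_65nm * (55 / 65) ** 2  # assumed ideal optical shrink

print(f"per-die area at 65nm: {per_die_65nm:.0f} mm^2")  # ~144 mm^2
print(f"per-die area at 55nm: {per_die_55nm:.0f} mm^2")  # ~103 mm^2
```

That reproduces the ~144mm^2 and ~103mm^2 figures quoted above.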
Which GT200 do you think is better? The single-die one, or the one made of 4 dies? I think it is pretty obvious.
There is a reason why Intel does exactly what I am describing: the yields of two 143mm^2 dies make them much cheaper to produce than a single 286mm^2 chip. Intel doesn't suffer any noticeable performance loss from having two dies, so nVidia shouldn't have a problem either. And just to put yields in perspective... if Intel thinks a 286mm^2 chip is too big for its own process and prefers to build it as a two-die CPU, imagine what the yields of a 576mm^2 chip are on a foundry process.
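To make the yield argument concrete, here is a rough sketch using the simple Poisson yield model Y = exp(-D * A). The defect density used below is an assumed illustrative value, not a figure from the article or from Intel/TSMC, so only the relative comparison matters.

```python
# Rough sketch: why two small dies beat one big die on yield.
# Poisson model: fraction of defect-free dies = exp(-D * A).
import math

def die_yield(area_mm2: float, defects_per_cm2: float) -> float:
    """Expected fraction of defect-free dies under a Poisson defect model."""
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

D = 0.5  # assumed defect density in defects per cm^2 (illustrative)

print(f"286 mm^2 monolithic die: {die_yield(286, D):.0%} good")
print(f"143 mm^2 die:            {die_yield(143, D):.0%} good")
print(f"576 mm^2 monolithic die: {die_yield(576, D):.0%} good")
```

With that assumed defect density, the 143mm^2 dies come out roughly twice as often good as the 286mm^2 chip, and the 576mm^2 chip fares far worse still; since the small dies can be paired up after test, far less silicon is thrown away, which is the Kentsfield/Yorkfield point in a nutshell.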