Originally posted by: Extelleron
They wouldn't be equal to the GT200 at all. I'm saying that if you could split the GT200 into four chips which together still gave you 240 SP / 80 TMU / 32 ROP, then each chip would (in theory) be 144mm^2.
Die size doesn't matter much at all for consumers, but for nVidia it does. Die size is the most important thing, along with power/heat, that gets in the way of advancing performance. The faster you want a chip to be, the bigger you need to make it.... and obviously there are limits on that.
Here are supposed overclocking results/performance for the GTX 280:
http://digidownload.libero.it/hackab321/gtx280.bmp
It seems to scale very well with clocks, but the overclocking results are not very good. I was hoping GT200 would be capable of 700MHz+, but it looks like heat was restricting the clocks. Could be that they just got a bad chip.
Originally posted by: keysplayr2003
Originally posted by: Extelleron
They wouldn't be equal to the GT200 at all. I'm saying that if you could split the GT200 into four chips which together still gave you 240 SP / 80 TMU / 32 ROP, then each chip would (in theory) be 144mm^2.
Die size doesn't matter much at all for consumers, but for nVidia it does. Die size is the most important thing, along with power/heat, that gets in the way of advancing performance. The faster you want a chip to be, the bigger you need to make it.... and obviously there are limits on that.
Here are supposed overclocking results/performance for the GTX 280:
http://digidownload.libero.it/hackab321/gtx280.bmp
It seems to scale very well with clocks, but the overclocking results are not very good. I was hoping GT200 would be capable of 700MHz+, but it looks like heat was restricting the clocks. Could be that they just got a bad chip.
144mm^2 per chip, you mean? But you would still need four of them to equal a single GTX 280, and now you're using even more wafer real estate. 144 x 4 = 576mm^2, PLUS all the transistors needed to connect them together (HyperTransport-like or whatever).
I don't get your line of thinking here. You end up using more wafer. Each 144mm^2 chip would need additional transistors to be able to connect with the others.
At any rate, that is neither here nor there. It is what it is.
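For anyone who wants to put rough numbers on the wafer real estate point, here is a minimal back-of-the-envelope sketch in Python. It assumes a standard 300mm wafer and the die sizes discussed above, uses a common first-order dies-per-wafer approximation, and ignores any interconnect overhead:

```python
# Back-of-the-envelope check of the "four small die vs. one big die" argument.
# All numbers are illustrative assumptions from the thread, not vendor specs.
import math

WAFER_DIAMETER_MM = 300.0   # standard 300mm wafer, assumed
BIG_DIE_MM2 = 576.0         # rumored GT200 die size
SMALL_DIE_MM2 = 144.0       # hypothetical quarter-GT200 chip

def dies_per_wafer(die_area_mm2: float, diameter_mm: float = WAFER_DIAMETER_MM) -> int:
    """First-order approximation: usable wafer area minus round-edge loss."""
    radius = diameter_mm / 2.0
    wafer_area = math.pi * radius ** 2
    # The second term approximates candidates lost to the round wafer edge.
    return int(wafer_area / die_area_mm2 - math.pi * diameter_mm / math.sqrt(2.0 * die_area_mm2))

big = dies_per_wafer(BIG_DIE_MM2)
small = dies_per_wafer(SMALL_DIE_MM2)
print(f"576mm^2 candidates per wafer: {big}")                       # ~94
print(f"144mm^2 candidates per wafer: {small} -> {small // 4} four-die sets")  # ~435 -> ~108
```

Before yield enters the picture, the raw silicon area is the same either way; smaller die actually pack the round wafer edge a little better, though any extra interconnect logic would eat into that.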
Originally posted by: Aberforth
Extelleron, the boundaries you talk about are only an illusory assumption of a human mind unable to think far enough; such assumptions existed several years ago and will continue to exist.
Originally posted by: Extelleron
Originally posted by: Aberforth
Extelleron, the boundaries you talk about are only an illusory assumption of a human mind unable to think far enough; such assumptions existed several years ago and will continue to exist.
I don't know what you are getting at here. There are boundaries in the tech world; as I said, those boundaries can be pushed back with newer fabrication technology. But you can't just produce whatever you want at any point in time. Intel couldn't produce an 820M-transistor quad-core CPU back in 2000 when we were on 180nm, just like you can't produce a 4 billion transistor CPU w/ 16 cores right now on 45nm. I shouldn't say you couldn't, because you could... but you couldn't build it to sell.
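The process-limit point can be put in rough numbers. A quick sketch, assuming ideal transistor density scaling with the square of the linear feature size (real scaling is messier):

```python
# Rough node-scaling arithmetic behind the "you can't build it yet" argument.
# Assumes ideal density scaling with the square of the linear feature size.
def density_factor(old_nm: float, new_nm: float) -> float:
    """How many times more transistors fit in the same area on the new node."""
    return (old_nm / new_nm) ** 2

print(density_factor(180, 45))  # 16.0: 45nm packs ~16x the density of 180nm
# So an 820M-transistor quad-core built on 180nm would have needed roughly
# 16x the die area of its 45nm incarnation -- far beyond a sellable die.
```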
Originally posted by: Aberforth
Originally posted by: Extelleron
Originally posted by: Aberforth
Extelleron, the boundaries you talk about are only an illusory assumption of a human mind unable to think far enough; such assumptions existed several years ago and will continue to exist.
I don't know what you are getting at here. There are boundaries in the tech world; as I said, those boundaries can be pushed back with newer fabrication technology. But you can't just produce whatever you want at any point in time. Intel couldn't produce an 820M-transistor quad-core CPU back in 2000 when we were on 180nm, just like you can't produce a 4 billion transistor CPU w/ 16 cores right now on 45nm. I shouldn't say you couldn't, because you could... but you couldn't build it to sell.
I am talking about architecture limitations rather than the die itself. When you come up with a good architecture, you wouldn't need a 16-core fiasco (*especially* when they don't know how to make use of 2 cores); one core would equal the speed of 16 cores. Such things rarely happen, as you know. Multi-core itself is a big compromise. The architectures you see these days aren't innovations; they are forced and backed by commercial gains.
Originally posted by: Extelleron
What I am talking about, and I've said this before, is that the die size of GPUs keeps rising despite the move to smaller processes at a rapid rate. I've supported this with more than enough evidence. So tell me why this won't continue?

It will continue. Now tell me why you think this doesn't affect multi-GPU, given that single GPUs are its building blocks. If die size is a problem for a single core, why won't it eventually become a problem for four such cores slapped onto one die?
Originally posted by: Extelleron
Our four chips would have a die size of around ~144mm^2. That is a very acceptable die size for a chip and the yields would be excellent. But it gets even better... we don't need to use a 65nm process anymore; we can go to 55nm. This reduces heat/power consumption, allows us to increase clocks, and allows for smaller die sizes. Given a 100% shrink, the die size of our chips would actually be 103mm^2 (obviously this doesn't take into account that very few die shrinks will be 100%).

This is all well and good until it's time to move forward again and you already have four cores on a GPU. What then? You'll have to make each of those four GPUs faster, and as you do, their die sizes will get bigger until eventually you hit die size limits again, just like you did with a single core.
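For reference, the 103mm^2 figure quoted above follows from simple geometric scaling; a quick sketch of the arithmetic, assuming the ideal full shrink the post itself flags as rare:

```python
# Sanity check of the 65nm -> 55nm shrink math quoted above.
def shrunk_area(area_mm2: float, old_node_nm: float, new_node_nm: float) -> float:
    """Die area scales with the square of the linear feature-size ratio."""
    return area_mm2 * (new_node_nm / old_node_nm) ** 2

print(shrunk_area(144.0, 65.0, 55.0))  # ~103.1 mm^2, the per-chip figure above
print(shrunk_area(576.0, 65.0, 55.0))  # ~412.5 mm^2 for the monolithic die
```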
Originally posted by: Extelleron
Our hypothetical GT200 will have significantly better yield, lower power consumption, and higher performance than the single-GPU GT200 nVidia will put out.

The only way it'll have higher performance is if four-way scaling works substantially in every game, which of course will never happen. Four-way scaling in cherry-picked benchmarks from reviews doesn't count.
Originally posted by: Extelleron
And creating an additional SKU is extremely easy; we just use 3 die instead of 4. There we have our GTX 260, but we aren't wasting any die space.

But we'll get absolutely no performance gain unless the driver can scale from 3-way to 4-way in all applications. That will never happen.
Originally posted by: Extelleron
Which GT200 do you think is better? The single-GPU one, or one made out of 4 die? I think it is pretty obvious.

It is pretty obvious: the single-GPU GT200, by far. To think otherwise demonstrates a fundamental lack of understanding of the limitations of multi-GPU systems. Slapping X cores on either a CPU or GPU doesn't guarantee X times the speedup; in fact, it doesn't guarantee any kind of speedup.
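The scaling objection can be framed as an Amdahl-style bound. A minimal sketch with invented parallel fractions (not benchmark data), just to show how quickly four-way gains evaporate when a game doesn't scale:

```python
# Amdahl-style bound on multi-GPU speedup: only the fraction of frame time
# that parallelizes across GPUs benefits from extra die. The fractions below
# are illustrative assumptions, not measurements.
def effective_speedup(n_gpus: int, parallel_fraction: float) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_gpus)

for p in (1.0, 0.9, 0.7, 0.0):
    print(f"parallel fraction {p:.0%}: 4 GPUs -> {effective_speedup(4, p):.2f}x")
# 100% -> 4.00x, 90% -> 3.08x, 70% -> 2.11x, 0% -> 1.00x
```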
Originally posted by: Extelleron
There is a reason why Intel does the exact thing I am saying; the yield of two 143mm^2 chips makes it much cheaper to produce than a single 286mm^2 chip. Intel doesn't suffer any noticeable performance loss from having two die, so nVidia shouldn't have a problem either.

You're assuming Intel chose between a bigger single core and several smaller cores, when that isn't necessarily the case. It could be that they designed a single core to be as fast as possible in line with current market competition and then found they could add the other cores for "free" because the manufacturing process was advanced enough.
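For what it's worth, the yield side of Extelleron's argument is easy to illustrate with the classic Poisson yield model Y = exp(-A * D0). A minimal sketch; the defect density here is an assumed illustrative value, not a real foundry figure:

```python
import math

D0 = 0.005  # defects per mm^2 (0.5 per cm^2) -- assumed for illustration only

def poisson_yield(area_mm2: float, d0: float = D0) -> float:
    """Fraction of die with zero defects under a Poisson defect model."""
    return math.exp(-area_mm2 * d0)

for area in (143.0, 286.0, 144.0, 576.0):
    print(f"{area:6.0f} mm^2 -> yield {poisson_yield(area):.1%}")
# 143 -> ~48.9%, 286 -> ~23.9%, 144 -> ~48.7%, 576 -> ~5.6%

# Wafer area spent per good chip is area / yield:
print(576.0 / poisson_yield(576.0))      # ~10,260 mm^2 per good monolithic GPU
print(4 * 144.0 / poisson_yield(144.0))  # ~1,183 mm^2 per good four-die set
```

That is the yield half of the story; whether the four-die part actually delivers the performance is exactly the scaling question above.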
Originally posted by: HOOfan 1
Originally posted by: Aberforth
I've never seen such an incompetent company really...
more like I've never seen such an incompetent "tech news" site
Originally posted by: BassBomb
$411 - PALIT VCX GTX280 896MB GDDR3 DUAL-DVI HDCP HDMI & CRT PCI-E
XNE/TX260+T394
interesting
Originally posted by: Extelleron
Way overpriced at this point.
The latest rumors put the GTX 260 at $399 and the GTX 280 at $499, and IMO even that might be overpriced.
HD 4870 will be $299, and HD 4850 CF for $400 should equal the GTX 280 in most cases (in Vantage Extreme, it will beat the 280).
See here: http://forums.vr-zone.com/showthread.php?t=287874
Originally posted by: tuteja1986
rape fest begins :! the 512MB 7800GTX all over again :!
Originally posted by: nRollo
Originally posted by: tuteja1986
rape fest begins :! the 512MB 7800GTX all over again :!
It may be a little premature to forecast pricing and availability on the GTX260/280 as they haven't launched yet.
Often parts listed as "for sale" pre-launch command a higher price: supply is short because most vendors honor the NDA, while demand is high.
Originally posted by: Piuc2020
I think ATI is going to win this generation; those are outrageous prices. CrossFire HD4870s would cost less and (according to rumours) might end up being faster than a GTX 280.
This is nice because ATI will finally get back in the game and the fierce competition will make it even better for us customers.
Originally posted by: Piuc2020
Crysis just doesn't play well with the current architecture of cards. Something is limiting cards heavily in Crysis, and simply scaling up shaders, ROPs, etc. linearly is obviously not going to fix performance until the bottleneck is discovered and addressed.