Nvidia reveals Specifications of GT300


SunnyD

Belgian Waffler
Jan 2, 2001
32,674
146
106
www.neftastic.com
Originally posted by: Idontcare
Originally posted by: thilan29
Originally posted by: Keysplayr
Originally posted by: SunnyD
Originally posted by: OCguy
Wow...that could be an amazing chip. :Q

Amazingly HUGE and HOT and POWER HUNGRY... yeah. Oh yeah, also amazingly EXPENSIVE too.

You don't know the size, you don't know the heat dissipation, you don't know the power it will draw, you don't know the price. Thanks for crapping by.

They're pretty good guesses though. If what he said was completely unfathomable (like saying GT300 would perform worse than GT200) I could see your issue with it. But can you honestly say GT300 (and the ATI 5000 series for that matter) WON'T be larger and more power hungry than this generation's cards?

There's a 55nm -> 40nm transition involved in there too, which makes most assertions regarding power consumption and die size a pointless debate until we have data.

All that aside, we're talking something roughly twice the "power" of a GT200 - literally, as the article reads, 50% more functional units. Last I knew, these things took transistors to make, which means 50% more transistors. Okay, I'll grant you some changes in architecture, so being generous we'll say a grand total of 33% more transistors than a GT200 (very debatable given that the article says they're moving to MIMD - more logic needed), which weighed in at what - 1.4 billion transistors? So we're moving to roughly 1.8 billion transistors. That also doesn't account for the DX11 spec calling for two new types of shaders, plus tessellation hardware as well (among other things). Even factoring in the die shrink, you still have a humongous die pushing an awful lot of power through it. So pardon me while I take a well educated dump here - it's going to be hotter, bigger, and more expensive (for Nvidia). :roll:
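Quick back-of-the-envelope on those numbers (just a sketch - the ~1.4 billion GT200 count and the 33% bump are the assumptions above, nothing official):

```python
# Back-of-the-envelope transistor estimate; both inputs are assumptions
# from the post above, not official figures.
gt200_transistors = 1.4e9  # GT200's rough transistor count
growth = 0.33              # generous guess for 50% more functional units

gt300_estimate = gt200_transistors * (1 + growth)
print(f"GT300 estimate: {gt300_estimate / 1e9:.2f} billion transistors")
# -> ~1.86 billion, before DX11's extra shader stages or tessellation hardware
```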
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: SunnyD
Originally posted by: Idontcare
Originally posted by: thilan29
Originally posted by: Keysplayr
Originally posted by: SunnyD
Originally posted by: OCguy
Wow...that could be an amazing chip. :Q

Amazingly HUGE and HOT and POWER HUNGRY... yeah. Oh yeah, also amazingly EXPENSIVE too.

You don't know the size, you don't know the heat dissipation, you don't know the power it will draw, you don't know the price. Thanks for crapping by.

They're pretty good guesses though. If what he said was completely unfathomable (like saying GT300 would perform worse than GT200) I could see your issue with it. But can you honestly say GT300 (and the ATI 5000 series for that matter) WON'T be larger and more power hungry than this generation's cards?

There's a 55nm -> 40nm transition involved in there too, which makes most assertions regarding power consumption and die size a pointless debate until we have data.

All that aside, we're talking something roughly twice the "power" of a GT200 - literally, as the article reads, 50% more functional units. Last I knew, these things took transistors to make, which means 50% more transistors. Okay, I'll grant you some changes in architecture, so being generous we'll say a grand total of 33% more transistors than a GT200 (very debatable given that the article says they're moving to MIMD - more logic needed), which weighed in at what - 1.4 billion transistors? So we're moving to roughly 1.8 billion transistors. That also doesn't account for the DX11 spec calling for two new types of shaders, plus tessellation hardware as well (among other things). Even factoring in the die shrink, you still have a humongous die pushing an awful lot of power through it. So pardon me while I take a well educated dump here - it's going to be hotter, bigger, and more expensive (for Nvidia). :roll:

Bottom line and the only things that should concern you are..................

Performance, power consumption, heat, price. You should not give a rat's arse if it takes 90 billion transistors to make. 55 -> 40nm and the architecture has changed. For all you know, GT200 was, as somebody said in here, just a test run. Maybe there are millions of transistors found not actually needed after years of testing the GT200 (in house and out). Who knows, it could be a 2 billion transistor GPU. So what?

So please stop pulling numbers out of the air. They mean nothing, yet.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Have you guys ever heard of Tukwila? More than 2 billion transistors. More than 700 sq mm. Yeah, I know Intel makes better parts than TSMC, and I know Tukwila is mostly cache whereas GT300 is mostly FPU, but if it's feasible on 65nm, it's going to be feasible at 40nm also. Has any form of GT300 taped out yet?

The ROP count has got to increase. Shaders have more than doubled, but we are going down to 256-bit for the RAM. Everywhere you read about the morbid obesity of a 512-bit bus on 40nm, so dropping to 256-bit would imply exorbitant savings for this generation - this being their first 256-bit flagship in 4 years (7800 GTX, June 22, 2005). 1.8 billion is a nice minimum, I think.
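Rough bandwidth math behind that (a sketch with assumed per-pin data rates, since no memory specs are confirmed):

```python
# Peak bandwidth = (bus width in bytes) * effective per-pin data rate.
# The data rates below are assumptions, not confirmed GT300 specs.
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(512, 2.2))  # GT200-style 512-bit GDDR3: ~140.8 GB/s
print(bandwidth_gb_s(256, 4.8))  # hypothetical 256-bit GDDR5: ~153.6 GB/s
# A 256-bit GDDR5 bus can match a 512-bit GDDR3 one with half the pins.
```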



hey, 100.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Originally posted by: SunnyD

All that aside, we're talking something roughly twice the "power" of a GT200 - literally, as the article reads, 50% more functional units. Last I knew, these things took transistors to make, which means 50% more transistors. Okay, I'll grant you some changes in architecture, so being generous we'll say a grand total of 33% more transistors than a GT200 (very debatable given that the article says they're moving to MIMD - more logic needed), which weighed in at what - 1.4 billion transistors? So we're moving to roughly 1.8 billion transistors. That also doesn't account for the DX11 spec calling for two new types of shaders, plus tessellation hardware as well (among other things). Even factoring in the die shrink, you still have a humongous die pushing an awful lot of power through it. So pardon me while I take a well educated dump here - it's going to be hotter, bigger, and more expensive (for Nvidia). :roll:


Bigger - What, the card? The die size? If they keep the actual card the same size as the GT200 cards, who cares? And if you are referring to die size, who cares? How does that worry a consumer?

Hotter - How much hotter are you talking about? If they average a couple degrees warmer but double the performance, does the consumer care?

More Expensive - New tech is always going to be more expensive at launch. The market will set the price.

nV tried to set the price of a 280 @ $649, but the market corrected it due to a competitive and cheaper ATi product.

If nV is far ahead of ATi, or the other way around, they will price higher accordingly. This is all basic economics.

Assuming that ATi would not have to increase costs, size, and heat in order to keep up with this theoretical chip is laughable at best, intellectually dishonest at worst.

 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
OK, so you say a 33% increase in transistors (and thus die space)...
55 to 40nm means a 27% decrease in die space, applied on top of that 33% increase from before...
So if you start with X and add transistors, you have 1.33X (1X + 0.33X); factoring in the size decrease gives you 0.97X (1.33X * (1 - 0.27)), about the size of the current die.

Let's say a flat 50% increase in transistor count... so 1.5X die size on current tech, 1.095X die size on 40nm. A 9.5% increase in overall size to cram 50% more transistors in there.
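Sanity-checking both scenarios (a sketch; the flat 27% area reduction is the assumption above, and real shrinks rarely scale perfectly):

```python
# Relative die size = (1 + transistor growth) * (1 - area shrink).
# The 27% area reduction for 55nm -> 40nm is an assumption, not a given.
shrink_factor = 1 - 0.27

for growth in (0.33, 0.50):
    relative_die = (1 + growth) * shrink_factor
    print(f"{growth:.0%} more transistors -> {relative_die:.3f}x current die")
# 33% -> 0.971x (slightly smaller), 50% -> 1.095x (slightly larger)
```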

The magic of percentages is that it gives different results depending on whether you are adding or subtracting, the order of operations, and the reference point... And no, I don't mean for MANIPULATING... there is only ONE way to calculate it correctly. And getting 50% more functional units in 33% more space is not so unbelievable, because a lot of the space is things other than functional units - controllers, logic, etc... They specifically said they are increasing the size of each shader GROUP by 50%, not adding 50% more groups (aka, they are eliminating a lot of overhead).

Granted, 33% more is not necessarily the case - it might be more, it might be less. Time will tell.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,001
126
Originally posted by: SunnyD

Amazingly HUGE and HOT and POWER HUNGRY... yeah. Oh yeah, also amazingly EXPENSIVE too.
It won't necessarily be that bad given it's almost certain to use 40nm.
 

schneiderguy

Lifer
Jun 26, 2006
10,801
91
91
Wouldn't this be the first time since the GeForce FX that Nvidia has done a high end card on a brand new (for them) process? Or do they have a midrange 40nm card like RV740 planned that I'm not aware of? GTX 250, anyone?
 

Elfear

Diamond Member
May 30, 2004
7,163
819
126
Woot. Bring it on Nvidia. Hopefully the new cards from both camps can get close to doubling the performance of the current generation.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Just learning
So will ATI 58xx have to come out with 2000 stream processors to compete? Or will they continue to stick with a similar die size?

I am betting 1600 stream processors will be enough as long as ATI comes out with HD58xx before Nvidia releases GT300.

SP count alone doesn't dictate the whole picture.

And this is way too early to tell.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Or do they have a midrange 40nm card like RV740 planned that I'm not aware of? GTX 250, anyone?

They are planning on a GT2x-based 40nm part to hit a while before GTX3xx is ready to roll. Honestly, I haven't paid much attention to the details, as it will be much like the most recent round of launches - parts plugging holes.

The ROP count has got to increase.

It likely won't increase much. We are very close to fillrate complete; moving forward, ROPs are going to increase far more slowly than shader hardware.

but we are going down to 256-bit for the RAM. Everywhere you read about the morbid obesity of a 512-bit bus on 40nm

If the chip ends up being huge, then a 512-bit bus wouldn't be all that bad. Not saying they won't go 256-bit, just pointing out that either way seems like a viable option if the chip really ends up being huge.

So pardon me while I take a well educated dump here - it's going to be hotter, bigger, and more expensive (for Nvidia).

It certainly matches the dump part. As far as the rest of your prediction - how is the power draw comparison for the current generation? Seems to me like one company has a significantly smaller die size, yet fails to end up with anything resembling significant power savings. In a realistic sense, no matter how you approach it, if you shoot for the top spot you are going to push one end of the envelope or the other. ATi currently has smaller chips running higher clocks, nV larger chips with lower clocks. Which way is 'right'? Seems to me they are neck and neck in almost every price segment, probably the closest they have ever been overall that I can recall. IIRC, they haven't been further apart in terms of general design philosophy than they are right now either. As a consumer, it should be all about the end results. I'm hoping that we see a very tight grouping as we did this generation, to help force down the obscene prices that first nV (260 and 280 at launch) and then ATi (4870X2 at launch) pulled this generation.

If they are both neck and neck, we win. I wish some of the die hard loyalists would have the sense to understand that.
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
Regarding the 40nm launch, wasn't it reported that nV had huge issues trying to shrink GT200 to 40nm and it just wasn't going to happen? That would require a 40nm launch with GT300, OR a 40nm G9x - which would make me laugh.

Dual machines take care of the issues with power consumption quite nicely. Dedicate the power hungry desktop to gaming and an ultraportable laptop with a docking station as the main machine. Screens are power hungry (my 24"+20" setup uses 100W), but a docking station and a KVM make it really easy to switch back and forth. It's a good setup.
 

akugami

Diamond Member
Feb 14, 2005
6,210
2,551
136
Everyone needs to keep in mind that any specs put out now (whether for the ATI RV870 or GT300) are mostly speculation and rumors.

As far as heat output goes, I don't think the GT300 is going to be much different from what we've seen on the top end of video cards today. Companies design for a certain heat output and power draw, and try to squeeze as much performance as they can while staying within those heat and power specs.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Originally posted by: akugami
Everyone needs to keep in mind that any specs put out now (whether for the ATI RV870 or GT300) are mostly speculation and rumors.

As far as heat output goes, I don't think the GT300 is going to be much different from what we've seen on the top end of video cards today. Companies design for a certain heat output and power draw, and try to squeeze as much performance as they can while staying within those heat and power specs.

Agreed. Assuming it comes in at less than an HD48x0X2 or GTX 295, it won't be a massive problem.
We are talking about the flagship card here, with a theoretical doubling/tripling of performance. With a die shrink it should (based on speculated specs) be able to match something like the GTX 295, and with the 40nm process probably come in somewhat lower in power.

It seems reasonable to expect it could consume more than a single GT200 based card, but if it's sufficiently higher in performance, that shouldn't necessarily matter. We already have "single" cards which consume more power than the GT200, and people buy them.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,001
126
Originally posted by: schneiderguy

Wouldn't this be the first time since the GeForce FX that Nvidia has done a high end card on a brand new (for them) process? Or do they have a midrange 40nm card like RV740 planned that I'm not aware of? GTX 250, anyone?
There's a midrange 40nm part supposedly coming in Q2.
 

Bateluer

Lifer
Jun 23, 2001
27,730
8
0
Nvidia never debuts a new chip design on a new process. They've used the older, reliable process ever since their FX fiasco. The first GT300s will likely be 55nm, with a 40nm refresh coming later.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,001
126
Originally posted by: Bateluer

The first GT300s will likely be 55nm, with a 40nm refresh coming later.
I find that highly unlikely, for the simple reason that the performance increase necessary to remain competitive would produce a die larger than the original G80.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: Bateluer
Nvidia never debuts a new chip design on a new process. They've used the older, reliable process ever since their FX fiasco. The first GT300s will likely be 55nm, with a 40nm refresh coming later.

nVIDIA never debuts their next generation high end chip on a new process unless they have first tested the new process on their mid/low range GPUs.

However, nVIDIA sticking with 55nm process technology is rather doubtful, since this strategy of sticking with an older/mature process backfired with GT200. With rumours of GT300 boasting roughly twice the computational power (and probably logic) of GT200, it's obvious that this chip will be much larger and use more transistors. 55nm is already at its limits, so I'm guessing that nVIDIA will definitely use the 40nm process for its next generation architecture.

However, it won't be the first 40nm product from nVIDIA (this is in reference to GT214/5/6/8, cards that will replace the current mid/low range).
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: Keysplayr
Bottom line and the only things that should concern you are..................

Performance, power consumption, heat, price. You should not give a rat's arse if it takes 90 billion transistors to make.
We all know that, as a rule of thumb, the more complex a GPU is, the higher its power consumption. That translates to more heat. The more heat a GPU puts out, generally the lower the clock speed it will stably run at. It is also more expensive to produce a higher transistor count GPU than one on the same process that uses fewer transistors, since you get fewer dies per wafer. And typically, yields will be lower on the GPU with the higher transistor count, simply from a mathematical point of view. That ties directly into price.
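To put a number on the yield point (a sketch using the classic Poisson yield model; the defect density and die areas are illustrative assumptions, not real TSMC figures):

```python
import math

# Poisson yield model: Y = exp(-D0 * A). D0 and the die areas below are
# illustrative assumptions, not real TSMC numbers.
d0 = 0.5  # defects per cm^2 (assumed)

for area_mm2 in (300, 576):  # a mid-size die vs a GT200-class ~576 mm^2 die
    yield_fraction = math.exp(-d0 * area_mm2 / 100)  # convert mm^2 to cm^2
    print(f"{area_mm2} mm^2 die -> {yield_fraction:.1%} estimated yield")
# The bigger die yields far worse AND fits fewer candidates per wafer,
# which is exactly how transistor count feeds into price.
```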

So yes, we should give a rat's arse what its transistor count is, as it relates to everything you just mentioned (performance, power consumption, heat, price).
 

fleshconsumed

Diamond Member
Feb 21, 2002
6,486
2,363
136
If the following statement is accurate: "Before the chip tapes-out, there is no way anybody can predict working clocks, but if the...", then it's too early to get excited. It looks like ATI will be first to market with the next generation card; nVidia got a little scared and wants to steal the thunder by leaking very vague specs for a chip that hasn't even taped out.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: fleshconsumed
If the following statement is accurate: "Before the chip tapes-out, there is no way anybody can predict working clocks, but if the...", then it's too early to get excited. It looks like ATI will be first to market with the next generation card; nVidia got a little scared and wants to steal the thunder by leaking very vague specs for a chip that hasn't even taped out.

Actually, that statement you quoted is in error.

Before tape-out, a chip design is fully simulated for clockspeed envelope and power consumption, among many other things that incorporate elements of DFM to ensure yields will not catastrophically suffer from inherent process variability.

Where the wheels fall off the wagon is two-fold. First, you can have critical speedpaths that were not uncovered during design/layout verification and simulation. It happens (unnoticed speedpath limiters) because time is critically short in the process and not everything can be tested and checked in reality.

The second thing that can cause the wheels to fall off the wagon in terms of chips not meeting their designed clockspeed/power-consumption envelope is the underlying process technology failing to match the rudimentary spice-model parametric targets that were communicated/committed by the process development team to the layout/design teams.

Hitting Ion but failing to meet Ioff or IDDQ, for example, results in clockspeed targets being met but dynamic/static leakage being higher than anticipated, and thus you can end up with TDP-limited clockspeeds.

Alternatively, you could find yourself with xtors that fail to deliver spec'ed Ion (not enough Idrive normalized to Vcc) but whose Ioff and IDDQ are well within spec, resulting in chips that just won't clock very high but at the same time aren't barn burners even when overclocked.

But IF critical speedpaths are discovered/corrected and the process technology matches the spice-model parametrics, then the clockspeed/power-consumption envelopes determined from simulations and models done prior to tape-out actually work out just fine. I've seen this happy occasion numerous times with TI's DSPs and Sun CPUs, and I doubt our pre-tapeout design verification procedures were anything unique or special in this industry, where employees change employers every 3-4 years.
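A toy illustration of the TDP-limited scenario (all numbers are placeholders chosen to make the point, not simulation data): total power is roughly dynamic switching power plus static leakage, so silicon that leaks over spec burns its clockspeed headroom.

```python
# Toy power model: P_total = C_eff * Vdd^2 * f (dynamic) + Vdd * I_leak (static).
# Every value here is a placeholder to illustrate the tradeoff, not real data.
def total_power_w(freq_ghz: float, leak_a: float,
                  c_eff_f: float = 100e-9, vdd: float = 1.1) -> float:
    dynamic = c_eff_f * vdd**2 * freq_ghz * 1e9  # C * V^2 * f
    static = vdd * leak_a                        # V * I_leak
    return dynamic + static

print(total_power_w(1.5, leak_a=20.0))  # leakage in spec: ~203.5 W
print(total_power_w(1.5, leak_a=30.0))  # leaky silicon:   ~214.5 W
# Same clock, higher power: to stay inside the TDP envelope, the leaky
# part has to ship at a lower clock - i.e. TDP-limited clockspeeds.
```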
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
I wonder how much this will actually benefit gaming, though. It sounds to me like they are going after the high computational market that needs more flexibility from the units?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
However nVIDIA sticking with 55nm process technology is rather doubtful since this strategy of sticking with older/mature process backfired with GT200.

How would you say it backfired? It seems to me nV hit everything they were shooting for this generation, as did ATi. ATi didn't choke as they had in the two prior generations, which meant they were competitive, but I don't see how that made the GT200 any sort of backfire. Clearly nV's focus was on GPGPU performance this generation; that was obvious from launch. Whether that strategy works out for them won't really be obvious for a while yet; it will depend on how much market penetration and mind share they manage to grab before OpenCL makes its presence felt at some point in the future. I don't see that as being related to their build process so much as their overall strategy as to what front they chose to focus on for this generation. I'm not going to say it was right or wrong, simply that they seemed to do exactly what they were trying to, much as ATi did this generation.

I wonder how much this will actually benefit gaming, though. It sounds to me like they are going after the high computational market that needs more flexibility from the units?

The considerably higher percentage of die space dedicated to shader hardware will considerably benefit titles that are shader limited. Given that, as of right now, Crysis is still the only title really pushing hardware, I would expect to see its performance improve by a decent amount. Going forward, we don't really have a great perspective on what elements game developers are going to use to continue to improve visuals. DX11 CS, PhysX and OpenCL open up a lot of opportunities for developers to exploit GPGPU processing power to help with in-game elements, if developers choose to use them. As far as big improvements in existing games go, the GTX 295 pushes 50+ FPS in the overwhelming majority of games running 25x16 with 4x AA and 16x AF - really, how much more is ramping up fill going to give us? We don't have much to gain on that end at this point; incremental improvements moving forward are the best bet for fillrate, from ATi or nV. General computational power is what all GPUs are going to be focusing on, if not this generation then very shortly.
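Rough numbers behind that fill argument (a sketch; the overdraw factor is an assumption, and AA multiplies demand further):

```python
# Fill demand vs theoretical fill supply. Overdraw is an assumption; AA
# and blending would raise demand, but the gap is still enormous.
pixels = 2560 * 1600   # 25x16 display
fps = 60
overdraw = 4           # assumed average overdraw per frame

demand_gpix = pixels * fps * overdraw / 1e9
supply_gpix = 32 * 0.648  # e.g. GTX 280: 32 ROPs at a 648 MHz core clock

print(f"demand ~{demand_gpix:.2f} Gpix/s vs theoretical fill ~{supply_gpix:.1f} Gpix/s")
# ~1 vs ~21 Gpix/s: raw fill is rarely the bottleneck anymore; shaders are.
```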
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: BenSkywalker

The ROP count has got to increase.

It likely won't increase much. We are very close to fillrate complete; moving forward, ROPs are going to increase far more slowly than shader hardware.

Fillrate complete? Is there even such a thing? The more fillrate a card has, the faster it performs. So until games aren't programmed that way, we will always need more fillrate to run them faster.
 