nVidia GT200 Series Thread

Page 29

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: Nemesis 1
Why do you find the bolded part interesting, Rollo? I have been receiving free ATI cards since the X800 XT PE. The difference between why I receive my parts is very different from why you receive yours. You get yours to lead people around like sheep and to confuse issues like DX10.1.

I, on the other hand, get mine to test the hardware. Big difference, Rollo. Even more interesting now, isn't it? I was only receiving one GPU per generation, no other hardware. Now I get almost all of it, for beta testing, not marketing.

Confuse people about DX10.1? You were the one who started saying here that MS took the DX10.1 parts out of the DX10 spec to help nVidia... When I confronted you with dates showing nVidia had a DX10 part ready five months before ATI (Vista released on the same DAY as the first DX10 card), and that ATI did not have DX10.1 until 11 whole months after Vista was released (and then MS added DX10.1 to Vista with SP1, two and a half months after that), you admitted to having no facts or links to support this; it is just speculation on your part based on your "gut feeling".
If MS took things out of the spec (and that is a big if), it was done to get Vista on the shelf a year earlier.

I can't believe you would actually use that as an example.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: Janooo
NV does not look into the future. AMD/ATI does, and they set directions. It happened with the 580, and it's happening with multi-GPU. NV will just follow.

[sarcasm]Heck, while you're at it, add Intel to the list of followers too.[/sarcasm]

Not to be rude or anything, but do you even have the faintest idea of what you've just said?





 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: bryanW1995
Originally posted by: SickBeast
Originally posted by: Extelleron
nVidia needs to license SLI to Intel
NV's insane CEO has already pissed Intel off to the point that they are revoking their chipset license for Nehalem (a future CPU). I don't think they will be giving them SLI any time soon. They seem to be aligning themselves with VIA, which could create a 3-horse race of sorts. Of the 3, AMD looks like the most balanced company. VIA has terrible CPUs and Intel has horrific graphics.

NV may be in a bit of a situation if they indeed cannot make a chipset for Nehalem. In a way I suppose it's Intel's way of forcing them to license SLI. IMO it's fair game if NV wouldn't give it to them to begin with.

Huang isn't insane, he's a genius. He took a company from nowhere to a $20 billion+ market cap in 15 years. Now he's set his sights on the ultimate target: Intel. I'm not a blind fanboy of Intel, AMD, or nVidia, but as an impartial third party you have to admire the cojones of that guy. Intel should be scared.
Genius and insanity tend to go hand-in-hand.

I agree that he's a great CEO. That said, I really don't see NV as being ready to take on Intel. If they do indeed partner up with VIA, it will take years and years for them to work together on engineering a competitive desktop/server CPU. Not only that, but it would require dedicating much of NV's talented engineering staff to such a project and would detract from their GPU development.
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
Originally posted by: BFG10K
That's not a scaling issue.
Scaling is a big part of it. If the game's not scaling properly then the multi-GPU solution is broken for that game.

A single card wouldn't have that problem as increasing its specs automatically increases the game's performance (assuming you're not CPU limited of course).

Often single cards do not perform as well as expected with individual games. Anyway, you are correct that current dual-GPU solutions are not for most of us. This is about future cards, and pre-judging based on last year's model is decidedly questionable.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I bet you it's an American site hoping people wouldn't notice the EUR part and order it from them for "50 less than the 650 it is supposed to cost" (damn, since when is the euro so powerful and the dollar so weak?).
And suddenly there is a $950 charge on your CC and an extra "foreign transaction charge".
$300 above MSRP... delicious...
 

Heatlesssun

Junior Member
Jan 19, 2006
11
0
66
Originally posted by: taltamir
I bet you it's an American site hoping people wouldn't notice the EUR part and order it from them for "50 less than the 650 it is supposed to cost" (damn, since when is the euro so powerful and the dollar so weak?).
And suddenly there is a $950 charge on your CC and an extra "foreign transaction charge".
$300 above MSRP... delicious...

Call me crazy, but I think the average American would have noticed, well, maybe the German on the site. Call me crazy.

 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Originally posted by: Cookie Monster
Originally posted by: Janooo
NV does not look into the future. AMD/ATI does, and they set directions. It happened with the 580, and it's happening with multi-GPU. NV will just follow.

[sarcasm]Heck, while you're at it, add Intel to the list of followers too.[/sarcasm]

Not to be rude or anything, but do you even have the faintest idea of what you've just said?

Yeap...
I believe multi-GPU is here to stay. I don't think we'll see another >500mm2 GPU.
I am surprised that you are surprised.
Look at what Charlie wrote on Nov 17, 2006.
The 700 design started when AMD/ATI were on top, with no financial issues at the time. The decision was made by choice.
 

golem

Senior member
Oct 6, 2000
838
3
76
Originally posted by: ronnn
Originally posted by: BFG10K
That's not a scaling issue.
Scaling is a big part of it. If the game's not scaling properly then the multi-GPU solution is broken for that game.

A single card wouldn't have that problem as increasing its specs automatically increases the game's performance (assuming you're not CPU limited of course).

Often single cards do not perform as well as expected with individual games. Anyway, you are correct that current dual-GPU solutions are not for most of us. This is about future cards, and pre-judging based on last year's model is decidedly questionable.

Isn't it even more questionable to disregard history based on... rumors?
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
Originally posted by: golem

Isn't it even more questionable to disregard history based on... rumors?


Exactly; now is not the time to call the next generation of multi-GPUs a success or failure. Most of us may be skeptical, but... wait three or four weeks for the dust to settle before judging.
 

golem

Senior member
Oct 6, 2000
838
3
76
Originally posted by: ronnn
Originally posted by: golem

Isn't it even more questionable to disregard history based on... rumors?


Exactly; now is not the time to call the next generation of multi-GPUs a success or failure. Most of us may be skeptical, but... wait three or four weeks for the dust to settle before judging.

That makes perfect sense, and I don't disagree with it. But I don't think it hurts to speculate, as long as you don't get carried away and remember it's just speculation. It helps to kill time during that long three- or four-week wait.
 

krnmastersgt

Platinum Member
Jan 10, 2008
2,873
0
0
Originally posted by: taltamir
I bet you it's an American site hoping people wouldn't notice the EUR part and order it from them for "50 less than the 650 it is supposed to cost" (damn, since when is the euro so powerful and the dollar so weak?).
And suddenly there is a $950 charge on your CC and an extra "foreign transaction charge".
$300 above MSRP... delicious...

Actually, Europe is routinely screwed on the price front when it comes to hardware; they usually pay the same numerical value as people here in the States do, e.g. a $300 card here is 300 euros there. Currency conversion doesn't really get taken into account. Don't ask me why; it's just what I've noticed.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
And how do you know it's going to be AFR?
The only other thing it can be is SFR, and if it is it won't scale as well because vertex performance isn't combined due to the frame split at the pixel level.

And even if it's AFR how do you know it's going to suffer all the AFR issues?
Because of the fundamental principles of how AFR works?

What is your biggest AFR concern?
Games not scaling properly is the biggest one. That's a concern because it makes the extra GPUs paperweights. As for others: input lag, micro-stutter, and rendering issues, none of which a different bridge or shared memory will cure.
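
As a toy illustration of the scaling and pacing points (made-up frame times, a hypothetical two-GPU setup, nothing like a real driver's scheduler):

Code:
# Toy model of AFR: frame i goes to GPU (i mod N). Numbers are invented purely
# to illustrate the two failure modes above (poor scaling and uneven pacing);
# this is not how a real driver schedules work.

def afr_frame_ready_times(num_gpus, render_ms, num_frames, serial_ms=0.0):
    """When each frame becomes ready under naive AFR.

    render_ms -- GPU time per frame
    serial_ms -- per-frame work that must happen in order (e.g. the game
                 reading back last frame's results), which caps scaling
    """
    gpu_free_at = [0.0] * num_gpus
    serial_done = 0.0
    ready = []
    for frame in range(num_frames):
        gpu = frame % num_gpus          # AFR assignment
        start = max(gpu_free_at[gpu], serial_done)
        serial_done = start + serial_ms
        finish = serial_done + render_ms
        gpu_free_at[gpu] = finish
        ready.append(finish)
    return ready

def gaps(times):
    """Intervals between consecutive frames, in ms."""
    return [round(b - a, 1) for a, b in zip(times, times[1:])]

if __name__ == "__main__":
    # Perfectly parallel game: throughput doubles, but frames arrive in bursts
    # (0, 30, 0, 30, ...) -- a crude picture of micro-stutter without frame pacing.
    print("1 GPU :", gaps(afr_frame_ready_times(1, 30.0, 8)))
    print("2 GPUs:", gaps(afr_frame_ready_times(2, 30.0, 8)))
    # Game with a 40 ms serial dependency per frame: the second GPU only gets
    # you from ~70 ms/frame to ~40 ms/frame instead of a clean 2x.
    print("1 GPU, serial :", gaps(afr_frame_ready_times(1, 30.0, 8, serial_ms=40.0)))
    print("2 GPUs, serial:", gaps(afr_frame_ready_times(2, 30.0, 8, serial_ms=40.0)))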
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
Often single cards do not perform as well as expected with individual games.
Right, and multi-GPU cards have multi-GPU scaling issues on top of this. So if one GPU has issues, chances are the multi-GPU card will have even more, often to the point of running slower than a single card.

This is about future cards, and pre-judging based on last year's model is decidedly questionable.
Why? They'll either use AFR or SFR, and the mechanics of both are soundly understood. All the vendors can do is optimize drivers for individual titles more, but the fundamental problem will always be there because of the nature of multi-GPU rendering.

Again, doubling a single GPU's specs is far more robust than slapping two slower GPUs together.

It's exactly the same as taking a CPU core and either doubling everything on it, or leaving everything the same and adding a second CPU core. The single CPU will always be more robust because it doesn't rely on application optimization to run faster.
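
A back-of-the-envelope way to put numbers on that (the scaling-efficiency values are invented; the point is only that one path depends on per-game optimization and the other doesn't):

Code:
# Hypothetical comparison: a single GPU with doubled units/clocks speeds up
# every game by roughly the same factor, while a dual-GPU card only gains
# whatever fraction of the ideal 2x the game/driver profile can extract.
# Efficiency values below are invented for illustration.

DOUBLED_SINGLE_GPU = 2.0  # assumed uniform speedup from doubling the chip

def dual_gpu_speedup(scaling_efficiency):
    """Speedup of two GPUs over one, given per-game scaling efficiency in [0, 1]."""
    return 1.0 + scaling_efficiency

scenarios = [
    ("perfect AFR profile", 1.0),
    ("decent profile", 0.7),
    ("no profile / broken scaling", 0.0),
]

for label, eff in scenarios:
    print(f"{label:30s} dual GPU: {dual_gpu_speedup(eff):.1f}x   "
          f"doubled single GPU: {DOUBLED_SINGLE_GPU:.1f}x")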
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Adding to what BFG said, unless there's a new rendering mode that solves those above-mentioned issues, I can't see multi-GPU solutions being the future. (This doesn't mean the concept of multi-GPU is rubbish; rather, it's a technology placeholder until a more cost/performance-efficient solution takes over, e.g. 7950GX2 --> 8800GTX.) I almost forgot about supertiling. Yep, that doesn't work either, seeing as it provides almost no performance increase. I think it's been abandoned by ATi, even after how they hyped the rendering mode up compared to SFR/AFR and claimed that it required no game profiles/optimizations.

People are also forgetting that process technology hasn't reached a wall yet on the GPU front. Until it's physically impossible to pack any more transistors without going overkill on die size/power/heat, we are still going to see single-chip solutions as the better option for a long time to come. I guess the rumored 24x24 die size of the G200 has some people alarmed, but it's by no means the end of single-GPU solutions. nVIDIA has never risked using newer processes for their next-gen architecture ever since the NV30 fiasco, hence the use of 65nm. (R600 also suffered from severe leakage on the 80nm process, which was relatively new at the time.) A die shrink to 55nm (the rumored G200b) will reduce the die size by 15% or so, which makes it much less alarming than the 576mm2 figure. Heck, they could get rid of the 512bit memory interface (and in doing so shed a lot of die size/transistors) and use a combination of a 256bit bus + fast GDDR5 to reach an even higher level of bandwidth. See where I am going? IHVs release a GPU based on a new architecture. Then over the course of 1~2 years, they cut off the unnecessary fat while tweaking other aspects of the architecture to make it more efficient. Then they release another GPU based on a new architecture. Repeat the cycle over and over.

Note - TSMC is already preparing the 40nm process, and is maybe 2~3 years away from 32nm.
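
For what it's worth, here is the rough area arithmetic behind that shrink estimate, assuming ideal optical scaling (real shrinks recover less because pads and analog blocks don't scale):

Code:
# Rough die-area arithmetic for a 65nm -> 55nm shrink of the rumoured 576mm^2
# chip. Ideal optical scaling goes with the square of the node ratio; actual
# shrinks recover less, which is why ~15% is a reasonable conservative guess.

die_65nm = 576.0                     # rumoured G200 die size, mm^2
ideal = (55.0 / 65.0) ** 2           # ~0.72
print(f"Ideal 55nm shrink : {die_65nm * ideal:.0f} mm^2 ({(1 - ideal) * 100:.0f}% smaller)")
print(f"Conservative ~15% : {die_65nm * 0.85:.0f} mm^2")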

 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Originally posted by: Cookie Monster
Adding to what BFG said, unless there's a new rendering mode that solves those above-mentioned issues, I can't see multi-GPU solutions being the future. (This doesn't mean the concept of multi-GPU is rubbish; rather, it's a technology placeholder until a more cost/performance-efficient solution takes over, e.g. 7950GX2 --> 8800GTX.) I almost forgot about supertiling. Yep, that doesn't work either, seeing as it provides almost no performance increase. I think it's been abandoned by ATi, even after how they hyped the rendering mode up compared to SFR/AFR and claimed that it required no game profiles/optimizations.

People are also forgetting that process technology hasn't reached a wall yet on the GPU front. Until it's physically impossible to pack any more transistors without going overkill on die size/power/heat, we are still going to see single-chip solutions as the better option for a long time to come. I guess the rumored 24x24 die size of the G200 has some people alarmed, but it's by no means the end of single-GPU solutions. nVIDIA has never risked using newer processes for their next-gen architecture ever since the NV30 fiasco, hence the use of 65nm. (R600 also suffered from severe leakage on the 80nm process, which was relatively new at the time.) A die shrink to 55nm (the rumored G200b) will reduce the die size by 15% or so, which makes it much less alarming than the 576mm2 figure. Heck, they could get rid of the 512bit memory interface (and in doing so shed a lot of die size/transistors) and use a combination of a 256bit bus + fast GDDR5 to reach an even higher level of bandwidth. See where I am going? IHVs release a GPU based on a new architecture. Then over the course of 1~2 years, they cut off the unnecessary fat while tweaking other aspects of the architecture to make it more efficient. Then they release another GPU based on a new architecture. Repeat the cycle over and over.

Note - TSMC is already preparing the 40nm process, and is maybe 2~3 years away from 32nm.

So what's the limit? 10nm? 10 years from now? The question is how long they'll wait.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
People have been saying there's a limit for about 5 years now and it's never the case; they always find new ways to advance technology.

Besides, if single GPUs hit a wall then so will multi-GPUs since the former are the building blocks of the latter.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: BFG10K
And how do you know it's going to be AFR?
The only other thing it can be is SFR, and if it is it won't scale as well because vertex performance isn't combined due to the frame split at the pixel level.

And even if it's AFR how do you know it's going to suffer all the AFR issues?
Because of the fundamental principles of how AFR works?

What is your biggest AFR concern?
Games not scaling properly is the biggest one. That's a concern because it makes the extra GPUs paperweights. As for others: input lag, micro-stutter, and rendering issues, none of which a different bridge or shared memory will cure.

My question is, even though the memory is "shared" between two or more cards, do they each have their own data in their RAM, or do they still have a carbon copy of one another?

How does this shared memory work? I mean, theoretically.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: BFG10K
People have been saying there's a limit for about 5 years now and it's never the case; they always find new ways to advance technology.

Besides, if single GPUs hit a wall then so will multi-GPUs since the former are the building blocks of the latter.

Take a look at G80 -> GT200.

GT200 is around ~2x G80 in terms of specs and performance.

G80 = 484mm^2 @ 90nm
GT200 = 576mm^2 @ 65nm

So for 2x the performance, you are talking about a 100mm^2 larger chip at the next full process node.

So no, there isn't a way to continue this. If G80 -> GT200 scaling continued, we would see "GT300" being 2x GT200 and 700mm^2 on a 45nm process. That isn't going to happen, I can promise you that.

AMD, meanwhile, has a small ~260mm^2 chip on 55nm. The problem nVidia seems to be facing as well is that their architecture appears to take up more room. G80 -> GT200 gives 87.5% more SPs / ~2x TMUs (not exactly sure of the TF/TA arrangement in GT200, so it's hard to tell) / 1.33x ROPs / 1.50x bus size. And GT200 is well above 2x G80 in terms of transistors/die size.

RV770, meanwhile, is 2.5x SPs / 2-2.5x TMUs / optimized RBEs, and it is only 30-40% larger than RV670. For the most part, the jump from RV670 -> RV770 is larger than G80 -> GT200, yet we see a 2x+ jump in die size for nVidia, while we see a 30-40% jump for AMD. So nVidia probably needs multi-GPU more than AMD, actually. Their architecture takes up a lot of space.

It is indeed possible that we will see a bit more single-GPU, since TSMC is ramping up their move to advanced process nodes. We will see 32nm from TSMC in early 2010 and 40nm sometime in 2009. But after that, the single-GPU dies as far as I am concerned, if not before that. TSMC might have 32nm in 2010, but then it will be a 2 year wait until 2012 for 22nm.

My view of the future GPU is one where a number of GPUs (likely 2-4) are connected via hardware, just like we see with Intel's MCM quad-cores. The future GPU will be multi-GPU, but I don't think it will always rely on software scaling.
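
A rough sanity check of the numbers above, normalising both dies to one process node and assuming (optimistically) that area scales with the square of the node:

Code:
# Rough check of the scaling argument, using the die sizes quoted above and
# assuming ideal area scaling with the square of the process node.

g80_mm2, g80_node = 484.0, 90.0      # G80 @ 90nm
gt200_mm2, gt200_node = 576.0, 65.0  # GT200 @ 65nm

# GT200 expressed as if it were built on G80's 90nm process:
gt200_at_90nm = gt200_mm2 * (g80_node / gt200_node) ** 2
print(f"GT200 normalised to 90nm: ~{gt200_at_90nm:.0f} mm^2 "
      f"(~{gt200_at_90nm / g80_mm2:.1f}x the silicon of G80 for ~2x the performance)")

# A hypothetical 2x-GT200 chip on 45nm, under the same ideal-scaling assumption;
# real shrinks recover less area, so actual estimates come out higher.
print(f"'2x GT200' on 45nm (ideal): ~{2 * gt200_mm2 * (45.0 / 65.0) ** 2:.0f} mm^2")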

 

allies

Platinum Member
Jun 18, 2002
2,572
0
71
Originally posted by: BFG10K
And how do you know it's going to be AFR?
The only other thing it can be is SFR, and if it is it won't scale as well because vertex performance isn't combined due to the frame split at the pixel level.

And even if it's AFR how do you know it's going to suffer all the AFR issues?
Because of the fundamental principles of how AFR works?

What is your biggest AFR concern?
Games not scaling properly is the biggest one. That's a concern because it makes the extra GPUs paperweights. As for others: input lag, micro-stutter, and rendering issues, none of which a different bridge or shared memory will cure.

What about tile based rendering? That would be the shit.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: Extelleron
Originally posted by: BFG10K
People have been saying there's a limit for about 5 years now and it's never the case; they always find new ways to advance technology.

Besides, if single GPUs hit a wall then so will multi-GPUs since the former are the building blocks of the latter.

Take a look at G80 -> GT200.

GT200 is around ~2x G80 in terms of specs and performance.

G80 = 484mm^2 @ 90nm
GT200 = 576mm^2 @ 65nm

So for 2x the performance, you are talking about a 100mm^2 larger chip at the next full process node.

So no, there isn't a way to continue this. If G80 -> GT200 scaling continued, we would see "GT300" being 2x GT200 and 700mm^2 on a 45nm process. That isn't going to happen, I can promise you that.

AMD, meanwhile, has a small ~260mm^2 chip on 55nm. The problem nVidia seems to be facing as well is that their architecture appears to take up more room. G80 -> GT200 gives 87.5% more SPs / ~2x TMUs (not exactly sure of the TF/TA arrangement in GT200, so it's hard to tell) / 1.33x ROPs / 1.50x bus size. And GT200 is well above 2x G80 in terms of transistors/die size.

RV770, meanwhile, is 2.5x SPs / 2-2.5x TMUs / optimized RBEs, and it is only 30-40% larger than RV670. For the most part, the jump from RV670 -> RV770 is larger than G80 -> GT200, yet we see a 2x+ jump in die size for nVidia, while we see a 30-40% jump for AMD. So nVidia probably needs multi-GPU more than AMD, actually. Their architecture takes up a lot of space.

It is indeed possible that we will see a bit more single-GPU, since TSMC is ramping up their move to advanced process nodes. We will see 32nm from TSMC in early 2010 and 40nm sometime in 2009. But after that, the single-GPU dies as far as I am concerned, if not before that. TSMC might have 32nm in 2010, but then it will be a 2 year wait until 2012 for 22nm.

My view of the future GPU is one where a number of GPUs (likely 2-4) are connected via hardware, just like we see with Intel's MCM quad-cores. The future GPU will be multi-GPU, but I don't think it will always rely on software scaling.

Another thing you might not be considering, Extelleron, is that all of those transistors might not be for graphics purposes only. You are forgetting about compute ability and what new transistors may be dedicated to CUDA. I have no specific information about this yet, but I am on the edge of my seat to find out. Nvidia has been cooking more than just graphics stew since G80.

P.S. Sorry, I accidentally hit edit instead of quote. Nothing was altered in your post. I hate when I do that.
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Originally posted by: allies
Originally posted by: BFG10K
And how do you know it's going to be AFR?
The only other thing it can be is SFR, and if it is it won't scale as well because vertex performance isn't combined due to the frame split at the pixel level.

And even if it's AFR how do you know it's going to suffer all the AFR issues?
Because of the fundamental principles of how AFR works?

What is your biggest AFR concern?
Games not scaling properly is the biggest one. That's a concern because it makes the extra GPUs paperweights. As for others: input lag, micro-stutter, and rendering issues, none of which a different bridge or shared memory will cure.

What about tile based rendering? That would be the shit.

The idea of tile-based rendering is great. However, based on my experience with the Kyro, it seems to be hit or miss. In many ways it doesn't seem any more reliable than current multi-GPU setups. Its gains aren't consistent; they depend on the type of environment being rendered. For example, rendering downtown New York would see better gains than rendering a beach. So in some cases, tile-based rendering wouldn't be much faster at all.
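
A toy example of that scene dependence, applied to a tiled multi-GPU split (per-tile costs are made up):

Code:
# Toy illustration of why a static tile split is scene-dependent: if the
# expensive tiles cluster in one GPU's share, the other GPU sits mostly idle.
# Tile costs are invented for illustration.

def frame_time(tile_costs, num_gpus):
    """Frame time when tiles are dealt out round-robin; the slowest GPU decides."""
    per_gpu = [0.0] * num_gpus
    for i, cost in enumerate(tile_costs):
        per_gpu[i % num_gpus] += cost
    return max(per_gpu)

even_scene = [5.0] * 16                        # cost spread evenly across tiles
skewed_scene = [20.0, 0.5] * 4 + [0.5] * 8     # heavy tiles all land in one GPU's share

for name, scene in [("even scene ", even_scene), ("skewed scene", skewed_scene)]:
    t1, t2 = frame_time(scene, 1), frame_time(scene, 2)
    print(f"{name}: 1 GPU {t1:5.1f}, 2 GPUs {t2:5.1f}, speedup {t1 / t2:.2f}x")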
 

WT

Diamond Member
Sep 21, 2000
4,816
59
91
Has anybody gotten any solid, reliable numbers on the power requirements? I am looking at a Step-Up from a 9800GTX to the GTX260. I'm running a Fortron Epsilon 600W quad-rail with 15A per rail. This seems like it would be fine for the card paired with an eVGA 750FTW and a Q9550. Thanks in advance!
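
Not solid numbers, but here's a rough 12V budget check; the ~182W GTX 260 board power used below is the commonly reported figure and should be treated as an assumption until reviews confirm it:

Code:
# Rough 12V budget check. The GTX 260 board power below is the commonly
# reported ~182 W figure and is an assumption, not a confirmed spec.

rails, amps_per_rail, volts = 4, 15.0, 12.0
combined_12v = rails * amps_per_rail * volts      # 720 W of rail capacity on paper,
                                                  # capped by the PSU's 600 W total
gtx260_watts = 182.0                              # assumed board power
print(f"Combined 12V rail capacity: {combined_12v:.0f} W (PSU total is 600 W)")
print(f"GTX 260 draw: ~{gtx260_watts:.0f} W, or ~{gtx260_watts / volts:.1f} A at 12 V,")
print("split across the PCIe slot and the card's two 6-pin connectors,")
print("so no single 15 A rail has to carry it all.")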
 