NVIDIA NV30 & NV40 Scoopage: Details


Leo V

Diamond Member
Dec 4, 1999
3,123
0
0
Originally posted by: vash
Multi-chip solutions are an "evil" to some and welcome to others, but let's face it: it's going to happen.

Chips run hot, and putting more onto a single chip takes time and money. So why not develop a chip that is fully scalable when you add multiple GPUs to the mix (I'm guessing the cards will come with 3-5 GPUs)? Instead of putting A LOT of effort into one chip that conquers all (and costs plenty of cash), bring out a chip that is pretty fast on its own but, when combined with more of itself, yields 100% gains.

Also, each chip has its own memory interface--your effective memory bandwidth is multiplied by the number of GPUs. If each of 3 GPUs has a 14.4GB/sec 900MHz DDR interface, we're talking 43.2GB/sec effective bandwidth--assuming NVIDIA is still using a 128-bit memory interface and not 256-bit.

Personally I would prefer a single GPU with a 256-bit interface (900MHz 256-bit DDR memory-->28.8GB/sec). NVIDIA's history is one of delivering single-chip cards, and each new GPU was always greatly improved (with the latest fab process, latest standards, etc.) so they never needed a multi-chip solution.
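Those bandwidth figures are just the effective transfer rate times the bus width in bytes; a quick sketch of the arithmetic (assuming "effective bandwidth" simply adds up across chips, which real-world scaling rarely achieves):

```python
# Peak memory bandwidth = effective transfer rate x bus width in bytes.
# "900MHz DDR" means 900 million transfers/sec (a 450MHz clock, double data rate).
def peak_bandwidth_gbs(effective_mhz: float, bus_bits: int) -> float:
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

print(peak_bandwidth_gbs(900, 128))      # 14.4 GB/s per 128-bit interface
print(3 * peak_bandwidth_gbs(900, 128))  # 43.2 GB/s aggregate across 3 GPUs
print(peak_bandwidth_gbs(900, 256))      # 28.8 GB/s for one 256-bit interface
```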

Chief Scientist David Kirk said in an old interview that NVIDIA had no intention of making multi-chip cards because OEMs hate them. I don't see why they'd change their minds now.
 

GoodRevrnd

Diamond Member
Dec 27, 2001
6,801
581
126
For the multi-processor cards, would they have, say, 3 identical mid-power processors, or 1 super pixel-rendering processor, 1 super texture-rendering processor, and 1 super feature-set processor (just as an example)?
 

vash

Platinum Member
Feb 13, 2001
2,510
0
0
Personally I would prefer a single GPU with a 256-bit interface (900MHz 256-bit DDR memory-->28.8GB/sec). NVIDIA's history is one of delivering single-chip cards, and each new GPU was always greatly improved (with the latest fab process, latest standards, etc.) so they never needed a multi-chip solution.
Not needing a multi-chip solution and building a chip that can do it anyway are two different things. For OEMs, they can take the single-chip ("budget") solution and do just fine selling the card to the masses. But if the chip could be designed for simple "bolt-on" performance, Nvidia gets not only the OEMs, but also the gamers and the specialist OEMs (Alienware) that cater to higher-end customers (gamers).

Chief Scientist David Kirk said in an old interview that NVIDIA had no intention of making multi-chip cards because OEMs hate them. I don't see why they'd change their minds now.
That interview was a while ago (back in the heyday of people boasting about their desire for multi-chip cards). We also have to consider that Nvidia's name now carries a lot of weight. Back then, when Nvidia was still fighting for position, they could sit back and say "our single-chip solution is better". Now that they are up against only single-chip solutions (ATI and Matrox, in the OEM space), they can say "our single-chip solution kicks butt, but our multi-chip solutions are that much faster". Considering Nvidia's market share, brand name, and money put into R&D, we cannot discount the possibility of them going multi-chip (at least, perhaps, for the Quadro workstation line).

vash
 

rahvin

Elite Member
Oct 10, 1999
8,475
1
0
Rampage was going to be a two-chip product, with any combination of chips used to build a board. 3dfx designed Rampage as a pixel/texel engine (Rampage) plus a programmable geometry engine (Sage). The board combinations would supposedly have been Rampage, Rampage + Sage, and 2x Rampage + Sage. 3dfx hinted they could run this out almost ad infinitum, such as a board with X Rampages and Y Sages.

The supposed advantage to this approach was that the individual chips would require much less die space, would run much cooler and be much cheaper to produce (lower failure rates).

Nvidia's monster chip sizes appear to be reaching their limits; current dies are nearly the size of a microprocessor's. Nvidia may have decided to finally pursue a multi-chip architecture to save production cost and lower the total overall cost of production. This modular design would, I think, be easier to integrate into nForce as well, because the logic would have been separated into individual units that could be positioned on the north bridge as die space allows.

Just a few thoughts... this may be where the rumors are from, regardless of whether they are based in fact.
 

NFS4

No Lifer
Oct 9, 1999
72,636
47
91
Update from 3DChipset:

News post pulled upon request.

Solomon - I pulled the news post at the request of our source, not Nvidia, as the information was too accurate and could be traced back to it/her/him. Yeah, like we would fold if Nvidia's great PR clan told us to remove it! LOL I respect his decision and pulled the information.
 

GoodRevrnd

Diamond Member
Dec 27, 2001
6,801
581
126
Originally posted by: NFS4
Update from 3DChipset:

News post pulled upon request.

Solomon - I pulled the news post at the request of our source, not Nvidia, as the information was too accurate and could be traced back to it/her/him. Yeah, like we would fold if Nvidia's great PR clan told us to remove it! LOL I respect his decision and pulled the information.

Pffff... pathetic ploy for credibility.
 

NFS4

No Lifer
Oct 9, 1999
72,636
47
91
Originally posted by: GoodRevrnd
Originally posted by: NFS4
Update from 3DChipset:

News post pulled upon request.

Solomon - I pulled the news post at the request of our source, not Nvidia, as the information was too accurate and could be traced back to it/her/him. Yeah, like we would fold if Nvidia's great PR clan told us to remove it! LOL I respect his decision and pulled the information.

Pffff... pathetic ploy for credibility.

And you would know this because....................?????????????

I can say for sure that a few of the specs looked dead on from what I understand about NV30.
 

GoodRevrnd

Diamond Member
Dec 27, 2001
6,801
581
126
Originally posted by: NFS4
Originally posted by: GoodRevrnd
Originally posted by: NFS4
Update from 3DChipset:

News post pulled upon request.

Solomon - I pulled the news post at the request of our source, not Nvidia, as the information was too accurate and could be traced back to it/her/him. Yeah, like we would fold if Nvidia's great PR clan told us to remove it! LOL I respect his decision and pulled the information.

Pffff... pathetic ploy for credibility.

And you would know this because....................?????????????

I can say for sure that a few of the specs looked dead on from what I understand about NV30.

I'm just trying to play devil's advocate and debunk it.

I'd say some of the specs are pretty reasonable (memory clock), and it's vague enough, and far enough in the future, that they *should* be able to fulfill it. We'll see, I guess.
 

Leo V

Diamond Member
Dec 4, 1999
3,123
0
0
OK, so suppose a multi-chip solution is realistic.

But here's another curiosity: they talked about having an "odd" number of NV30 chips. For the sake of OEM space, NVIDIA must allow a 1-chip solution--but this means that each NV30 chip has to be self-sufficient (i.e. it has to do everything including texels and T&L, no dedicated chips like 3dfx Rampage/Sage).

But then, how do you explain the need for 1, 3, 5, etc. chips? If each chip is self-sufficient, you could just as easily have 2.
 

Leo V

Diamond Member
Dec 4, 1999
3,123
0
0
Since they pulled it, here's all the content:

"NV30 & NV40 Scoopage: Details - Tuesday, July 9 | Solomon

As I pointed out earlier in the ATi scoopage post, the mice I converse with have some very interesting new information regarding the NV30 and the features of this new chipset. Plus, talks about the NV40 have already been under way for quite some time, which I didn't know either. I won't take the glory, as this info is credited to my mice in high places.

I've received some information regarding the NV30 and NV31, and some insight into the NV40 Nvidia is working on. First off, the NV30 is AGP 8X and is still a go with 900MHz DDR memory. Also what is now known is that the NV30 is multi-chip capable. The number of chips is an odd number. That's all I can say without giving out the exact number. I'll say this though: it's under 8. With regards to FSAA performance, the NV30 has a zero (0) performance cost using 4X FSAA. Specifications have been discussed, and the NV30 is able to churn out 16 textures/pixel per pass. The NV30 also uses a 4:1 colour compression technique. Nvidia has explained to us/me that the performance we will all see with the NV30 is about double the performance of the Xbox/GF Ti4600. Another feature that is not known to the public is that IEEE 1394, also known as FireWire, will be on NV30 video cards. What this feature is for wasn't discussed thoroughly; all we know is that it has to do with the hardware MPEG-2 acceleration built into the chip.

The NV31 has also been discussed with us/me. The NV31 comes out one month after the NV30 and will not suffer the fate of the MX by being branded a GeForce 4. You remember the debate about how the GF4 MX shouldn't have been called a GeForce 4. Well, Nvidia has listened, and the NV31 line will feature everything the NV30 has, just in lower-clocked cards.

Now on to the NV40. Very few details were presented to us on the future chip, but we know that Nvidia is shooting for 600 million polys per second with a 4 Gigapixel fillrate.

So let's recap all of this, shall we? What is now known is this:

- Zero (0) performance cost using 4X FSAA
- 4:1 Colour Compression Technique
- 16 Textures/Pixel Per Pass
- 900MHz DDR is still a go
- Multi Chip capable
- Firewire Port

That is some pretty interesting information. The NV30 sounds like a major improvement over the GeForce 4 Ti line. What is still unknown, though, is the name of the chip or GPU line featuring the NV30."
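The zero-cost 4X FSAA claim and the 4:1 colour compression fit together: four colour samples per pixel, compressed at up to 4:1, generate roughly the same framebuffer traffic as one uncompressed sample. A rough back-of-the-envelope sketch (all figures hypothetical, counting colour writes only):

```python
# Rough framebuffer colour-traffic estimate (hypothetical figures, writes only).
# 4X multisampling quadruples colour samples; a 4:1 compression scheme can,
# in the best case, cancel that factor out -- hence "zero cost".
def color_traffic_gbs(width, height, fps, bytes_per_px, samples, compression):
    return width * height * bytes_per_px * samples * fps / compression / 1e9

no_aa  = color_traffic_gbs(1024, 768, 60, 4, samples=1, compression=1)
fsaa4x = color_traffic_gbs(1024, 768, 60, 4, samples=4, compression=4)
print(no_aa, fsaa4x)  # both ~0.19 GB/s: the extra samples are compressed away
```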
 

bluemax

Diamond Member
Apr 28, 2000
7,182
0
0
I'm just really glad that they're taking FSAA (or other AA) seriously!! If they do it right, it looks a LOT better than 1600x1200.
(Well 1024x768 4x FSAA *should* look better. And it does.)

At this point it's all speculation.
 

mchammer187

Diamond Member
Nov 26, 2000
9,114
0
76
Originally posted by: bluemax
I'm just really glad that they're taking FSAA (or other AA) seriously!! If they do it right, it looks a LOT better than 1600x1200.
(Well 1024x768 4x FSAA *should* look better. And it does.)

At this point it's all speculation.

Let's just hope it's not some hack job like Quincux (sp?)
 

Leo V

Diamond Member
Dec 4, 1999
3,123
0
0
"lets just hope its not some hack job like Quincux (sp?)"

Quincunx is not a hack job; it's similar to 3dfx 2X FSAA (and about equally cheap). It does its job correctly. If anything could be accused of being a "hack" job, it's Parhelia's "16X" FSAA, which fails to detect (and antialias) certain edges.
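For background on what Quincunx actually does: it is 2X multisampling resolved with a five-tap filter, each pixel blending its own samples with ones shared from neighbouring pixels, arranged like the five pips on a die face. A minimal sketch of that style of resolve (run here on a plain image rather than a real 2-sample buffer, with the commonly cited 1/2-centre, 1/8-corner weights taken as an assumption):

```python
import numpy as np

# Quincunx-style resolve over a plain image: each output pixel blends its
# centre value with its four diagonal neighbours, weighted 1/2 and 1/8 each.
def quincunx_resolve(img: np.ndarray) -> np.ndarray:
    p = np.pad(img, 1, mode="edge")          # repeat edge pixels at the border
    centre  = p[1:-1, 1:-1]
    corners = p[:-2, :-2] + p[:-2, 2:] + p[2:, :-2] + p[2:, 2:]
    return 0.5 * centre + 0.125 * corners    # weights sum to 1

frame = np.random.rand(768, 1024)            # stand-in for a rendered frame
print(quincunx_resolve(frame).shape)         # (768, 1024)
```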
 

vash

Platinum Member
Feb 13, 2001
2,510
0
0
Originally posted by: Leo V
OK, so suppose a multi-chip solution is realistic.

But here's another curiosity: they talked about having an "odd" number of NV30 chips. For the sake of OEM space, NVIDIA must allow a 1-chip solution--but this means that each NV30 chip has to be self-sufficient (i.e. it has to do everything including texels and T&L, no dedicated chips like 3dfx Rampage/Sage).

But then, how do you explain the need for 1, 3, 5, etc. chips? If each chip is self-sufficient, you could just as easily have 2.
Without knowing anything of their hardware, I'll speculate a bit.

Nvidia's design can be broken down into two or three different rendering areas. With the single-chip solution, one piece of silicon does all two or three steps. When they want to go multi, they REALLY break out the rendering to different chips, so that each of the rendering chips has specific goals instead of all sharing the same goals.

The design could really be nifty to think of: a pipeline with specific chips. One chip that handles the basic 2D and *can* render everything will work, but when you break out the intense rendering into a few separate chips, the performance will be too good to pass up. If any company can make this multi-chip design work, it's Nvidia -- they have the money, R&D, brand name, and loyalty that'll carry them far.

As long as Nvidia keeps selling chips and doesn't ever get into selling boards, they'll be fine. 3dfx's downfall was trying to make and sell the cards themselves -- it cost them everything.

vash
 

Pabster

Lifer
Apr 15, 2001
16,986
1
0
Ah, the competition is beginning to heat up. The big R300 vs. NV30 showdown will be at hand before you realize it. If ATi can come up with a solid, full-featured driver set at launch, they just might be in business. I'm not holding my breath. I have a sneaky suspicion that I had better keep dropping a few bucks here and there into the "NV30 Fund" at this point.
 

jbond04

Senior member
Oct 18, 2000
505
0
71
I've been putting away money for the NV30 ever since I bought my GeForce 3 in June of 2001 (just when they became available on the retail market). Although my GeForce 3 was plenty powerful, I knew that the NV30 was the one to spring for... You just can't go wrong with any of nVIDIA's first-gen cards (NV10, NV20, NV30, etc.).
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
multi-chip solutions:

3 chip = 2xGPU + memory controller
5 chip = 4xGPU + memory controller

just a guess.
 

MadRat

Lifer
Oct 14, 1999
11,965
279
126
Multi-chip manufacturing makes sense in the realm of graphics chips. If you can increase yields substantially by splitting functions across multiple chip cores, then you can also simplify the design of each core. The total energy consumption may even hover around the same, if not decrease, using a multiple-core approach. Under certain processing loads they may even be able to turn off peripheral cores to save energy.

I'm guessing that they will use something akin to HDT to talk between cores... yes?
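The yield argument can be made concrete with the standard Poisson defect model, where the probability that a die is defect-free falls exponentially with its area. A quick sketch (the defect density is an illustrative number, not a real fab figure):

```python
import math

# Poisson yield model: P(die has no defects) = exp(-defect_density * area).
DEFECT_DENSITY = 0.005   # defects per mm^2 -- illustrative, not a real fab figure

def die_yield(area_mm2: float) -> float:
    return math.exp(-DEFECT_DENSITY * area_mm2)

print(f"{die_yield(200):.1%}")   # ~36.8%: one monolithic 200 mm^2 GPU
print(f"{die_yield(100):.1%}")   # ~60.7%: one of two 100 mm^2 chips
# Both halves being good is still ~36.8% under this model, but each defect
# now scraps only half as much silicon, which is where the cost savings live.
```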
 

Degenerate

Platinum Member
Dec 17, 2000
2,271
0
0
multi-chip solutions:

3 chip = 2xGPU + memory controller
5 chip = 4xGPU + memory controller

just a guess.
Just curious, is there any reason why there couldn't be an odd number of GPUs? Or is it just natural that, since data sizes come in multiples of 2, it's logical to have an even number?
 

kgraeme

Diamond Member
Sep 5, 2000
3,536
0
0
Originally posted by: Leo V
OK, so suppose a multi-chip solution is realistic.

But here's another curiosity: they talked about having an "odd" number of NV30 chips. For the sake of OEM space, NVIDIA must allow a 1-chip solution--but this means that each NV30 chip has to be self-sufficient (i.e. it has to do everything including texels and T&L, no dedicated chips like 3dfx Rampage/Sage).

But then, how do you explain the need for 1, 3, 5, etc. chips? If each chip is self-sufficient, you could just as easily have 2.

You may be reading too much into it. He only says that this board has an odd number of chips, not that the architecture requires it. There is a difference.
 

Leo V

Diamond Member
Dec 4, 1999
3,123
0
0
kgraeme: you're right, but if the NV30 architecture didn't require an odd number, then a 2-chip solution makes a lot more sense to me than a 3+ chip one. Why eliminate such an attractive possibility from the start?

3 chip = 2xGPU + memory controller
5 chip = 4xGPU + memory controller
That seems to make sense! Someone please correct me if I'm wrong here: a memory controller means a unified memory architecture. Bad: memory bandwidth is shared amongst all GPUs. Good: no need to store duplicate textures in each GPU's own memory bank.
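To put toy numbers on that trade-off (all figures hypothetical):

```python
# Toy comparison: private memory per GPU vs one unified pool (hypothetical figures).
GPUS     = 4
BANK_MB  = 128     # memory attached to each GPU in the private case
BANK_GBS = 14.4    # bandwidth of each private 128-bit bank

# Private banks: aggregate bandwidth scales, but every bank holds the same textures.
print(BANK_MB, GPUS * BANK_GBS)          # 128 MB of unique data, 57.6 GB/s total

# Unified pool of the same total size: one texture copy, but one shared bus.
SHARED_GBS = 28.8                        # e.g. a single 256-bit controller
print(GPUS * BANK_MB, SHARED_GBS / GPUS) # 512 MB unique, 7.2 GB/s per GPU
```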
 

kgraeme

Diamond Member
Sep 5, 2000
3,536
0
0
Originally posted by: Leo V
kgraeme: you're right, but if the NV30 architecture didn't require an odd number, then a 2-chip solution makes a lot more sense to me than a 3+ chip one. Why eliminate such an attractive possibility from the start?

A 2-chip board may be a possibility. We simply can't know from the quoted information. There is nothing in the quote that says the system precludes a 2-chip design, just that what he saw had an odd number of chips and it was less than 8.

I personally don't care, but if I had to guess, NVIDIA created a board to wow people with the new multi-gpu architecture. They looked at how many cores they could cram on the board and it turned out to be seven. That doesn't mean that boards have to have seven, it may just happen to be what they could cram on an over-the-top demo board.

Like I said, don't read too much into it. The paragraph doesn't support most of the speculation here.
 