Discussion: Nvidia Blackwell in Q4 2024?


MrTeal

Diamond Member
Dec 7, 2003
3,572
1,710
136
2x L2 per die seems odd, but it's possible. H100 had 60MB of L2, though with one of the six MCs disabled, most parts only had 50MB. If the layout is similar and they are referring to L2, that would be 30MB per MC.
 

SmokSmog

Member
Oct 2, 2020
56
88
61
104B xtors at 850mm² (858mm² is the reticle limit) = 122.3 MTr/mm².
So similar to AD102, which has 125 MTr/mm² on 4N.

They probably used the same libraries as AD102.
Don't expect more than 130 MTr/mm² on GB202.

Navi31's GCD has 150 MTr/mm² because it's just a compute die without memory controllers.
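
Quick sanity check on those densities in Python (transistor counts and die areas are the publicly reported figures, so treat the results as approximate; the Navi31 GCD count is the commonly cited ~45.4B):

Code:
# Transistor density = transistors / die area, using reported figures (approximate)
chips = {
    "Blackwell (per die)": (104.0, 850.0),   # billions of transistors, mm^2
    "AD102": (76.3, 608.5),
    "Navi31 GCD": (45.4, 304.35),
}
for name, (xtors_b, area_mm2) in chips.items():
    print(f"{name}: {xtors_b * 1000 / area_mm2:.1f} MTr/mm^2")
# ~122 for Blackwell, ~125 for AD102, ~149 for the Navi31 GCD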
 

MoogleW

Member
May 1, 2022
56
28
61
To process these datasets efficiently on GPUs, the Blackwell architecture introduces a hardware decompression engine that can natively decompress compressed data at scale and speed up analytics pipelines end-to-end. The decompression engine natively supports decompressing data compressed using LZ4, Deflate, and Snappy compression formats.

https://developer.nvidia.com/blog/n...rameter-llm-training-and-real-time-inference/

Nvidia could perhaps improve on their previous-gen DirectStorage implementation for consumer cards.
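
Those three formats are easy to exercise on the CPU today if anyone wants a feel for them; a minimal Python sketch of round trips in each (the lz4 and python-snappy packages are third-party and only here for illustration, nothing Nvidia-specific):

Code:
# Round trips in the three formats the decompression engine reportedly supports.
# Requires: pip install lz4 python-snappy
import zlib
import lz4.frame
import snappy

payload = b"example analytics column data " * 1000

# Deflate (raw stream, no zlib header/trailer)
c = zlib.compressobj(wbits=-15)
deflated = c.compress(payload) + c.flush()
assert zlib.decompress(deflated, wbits=-15) == payload

# LZ4 frame format
assert lz4.frame.decompress(lz4.frame.compress(payload)) == payload

# Snappy block format
assert snappy.decompress(snappy.compress(payload)) == payload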
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,228
5,228
136
Looks like the rumors were true: B100 is an MCM (not quite a chiplet design, IMO) composed of two reticle-limit TSMC N4P dies connected by a high-bandwidth interconnect, likely NVLink 5 C2C. There are 8 stacks of HBM3E memory in total, or 4 stacks per die. It's an engineering marvel and an absolute beast, but I'm kind of numb to all this AI stuff, especially since it's the new buzzword and impossible not to be reminded of on a daily basis. It doesn't impact me directly, so while it is impressive, it's largely forgettable for me as an average consumer and casual gamer.

Essentially, this is the same path Apple took with the M1 Ultra: two massive chips joined by a massive interconnect.
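
Back-of-the-envelope on the memory side, assuming roughly 8 Gbps pins on the standard 1024-bit HBM stack interface (the exact pin speed Nvidia ships is an assumption on my part):

Code:
# Aggregate HBM3E bandwidth estimate; pin speed is an assumption, not a confirmed spec
stacks = 8
pin_speed_gbps = 8
pins_per_stack = 1024
per_stack_gb_s = pin_speed_gbps * pins_per_stack / 8
print(f"~{per_stack_gb_s:.0f} GB/s per stack, ~{stacks * per_stack_gb_s / 1000:.1f} TB/s total")
# ~1024 GB/s per stack, ~8.2 TB/s across 8 stacks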
 
Reactions: Mopetar

gdansk

Platinum Member
Feb 8, 2011
2,145
2,676
136
Why are both the gaming and the data center/AI chips called Blackwell this time? Are they more similar than Ada and Hopper were? It sounds like they will be very different chips.

Did Ada and Hopper have different ISAs?
 

blckgrffn

Diamond Member
May 1, 2003
9,131
3,072
136
www.teamjuchems.com
https://developer.nvidia.com/blog/n...rameter-llm-training-and-real-time-inference/

Nvidia could perhaps improve on their previous-gen DirectStorage implementation for consumer cards.

Whatever they do in that regard, it should accelerate the calls that MS DirectStorage uses. Otherwise, why bother, other than to sponsor three titles that use it and plant a victory flag for yet another proprietary tech that will likely see little industry uptake?

HW acceleration for DirectStorage does seem like something we could see in the next major release or two of GPUs and might be a feature people actually care about.
 

MoogleW

Member
May 1, 2022
56
28
61
Fat chance anyone working there believes in improving old stuff for the good of mankind.
I feel Nvidia trying to be better than everyone else at some aspect of cutting-edge graphics tech is about as Nvidia as Nvidia ever gets.

On another note, I have yet to see Nvidia's work graphs performance to compare against AMD's demo.
 

Mopetar

Diamond Member
Jan 31, 2011
7,870
6,103
136
Cache in general is stupidly expensive on N3E.

PHYs are even worse. It's probably worth taking whatever space you'd save by dropping from a 512-bit bus to a 384-bit bus and turning it into cache.
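
For scale, here's the bandwidth gap between those two bus widths, assuming 28 Gbps GDDR7 (the launch data rate the memory vendors have been quoting; the speed Nvidia actually uses is an assumption):

Code:
# Bandwidth for each bus width at an assumed 28 Gbps GDDR7 data rate
pin_rate_gbps = 28
for bus_bits in (512, 384):
    print(f"{bus_bits}-bit: {bus_bits * pin_rate_gbps / 8:.0f} GB/s")
# 512-bit: 1792 GB/s, 384-bit: 1344 GB/s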

Question: is consumer Blackwell also going to be a multi-chip solution? If so, that would make it very expensive since the wafer yield will essentially get halved.

This is not a consumer chip, and there's no reason to use it as the basis for guesses about what the dies used for consumer products will look like.

Unless Nvidia is willing to release a consumer card with HBM, you won't see one of these dies on a GeForce card, let alone two of them.
 
Reactions: coercitiv and Tlh97

Aapje

Golden Member
Mar 21, 2022
1,395
1,885
106
Makes sense. The AI people will pay whatever it costs. I do think that the 5090 will get a significant cut; they can sell a 176 SM 5090 for $2,000 or so. Then, if RDNA5 is really good or the AI market crashes, they can sell a 5090 Ti later on.
 

Mahboi

Senior member
Apr 4, 2024
342
580
91
Is it really 2x the SMs compared to the 5080?
> 512 bit GB202
I wish we would stop with this meme. Every card that did this was an incredibly expensive waste of money. Look at the Titans.
Giant buses are not coming back except as the most desperate last ditch attempt to punch above your weight.
 

jpiniero

Lifer
Oct 1, 2010
14,645
5,273
136
Makes sense. The AI people will pay whatever it costs. I do think that the 5090 will get a significant cut; they can sell a 176 SM 5090 for $2,000 or so. Then, if RDNA5 is really good or the AI market crashes, they can sell a 5090 Ti later on.

I think it's going to be a lot less than 170 SMs for the 5090, with the memory bus also cut down to 384-bit.

All the fully enabled memory controllers will go to AI cards.
 

Mahboi

Senior member
Apr 4, 2024
342
580
91
That's exactly what I said, yes. With GDDR7 having 3GB chips, the bus makes even less sense.
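
Rough capacity math on why module density matters more than bus width here, assuming one 32-bit GDDR7 module per 32 bits of bus:

Code:
# Total VRAM for each bus width and per-module density (one 32-bit module per channel)
for bus_bits in (384, 512):
    modules = bus_bits // 32
    for density_gb in (2, 3):
        print(f"{bus_bits}-bit with {density_gb}GB modules: {modules * density_gb} GB")
# 384-bit with 3GB modules gives 36GB, already more than 512-bit with 2GB modules (32GB)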
 

MrTeal

Diamond Member
Dec 7, 2003
3,572
1,710
136
> 512 bit GB202
I wish we would stop with this meme. Every card that did this was an incredibly expensive waste of money. Look at the Titans.
Giant buses are not coming back except as the most desperate last ditch attempt to punch above your weight.
Even the Titans didn't have 512-bit buses; Nvidia hasn't made one since GT200, 15 years ago. Since then, anything that's needed more than what a 384-bit bus could provide has used HBM.
 
Reactions: Tlh97 and Mahboi

Aapje

Golden Member
Mar 21, 2022
1,395
1,885
106
That's exactly what I said, yes. With GDDR7 having 3GB chips, the bus makes even less sense.

There aren't any 3GB (24Gb) chips until at least the refresh.

I wish we would stop with this meme. Every card that did this was an incredibly expensive waste of money. Look at the Titans.
Giant buses are not coming back except as the most desperate last ditch attempt to punch above your weight.

You seem to be judging it as a gaming chip. It's not; it's a professional and prosumer card. If gamers want to pay through the nose for it, that's great, but it's not what it's designed for.
 

jpiniero

Lifer
Oct 1, 2010
14,645
5,273
136
You seem to be judging it as a gaming chip. It's not; it's a professional and prosumer card. If gamers want to pay through the nose for it, that's great, but it's not what it's designed for.

Yeah. That's what I was getting at. These are AI first, AI second, and very expensive gaming cards third.
 

Aapje

Golden Member
Mar 21, 2022
1,395
1,885
106
Even the Titans didn't have 512-bit buses; Nvidia hasn't made one since GT200, 15 years ago. Since then, anything that's needed more than what a 384-bit bus could provide has used HBM.

There is an HBM shortage, so companies are using GDDR cards for AI and other tasks that would benefit from HBM.
 
Reactions: Tlh97 and maddie