Discussion RDNA4 + CDNA3 Architectures Thread


DisEnchantment

Golden Member
Mar 3, 2017
1,774
6,757
136





With the GFX940 patches in full swing since the first week of March, it looks like MI300 is not far off!
Usually AMD takes around three quarters to get support into LLVM and amdgpu. Lately, since RDNA2, the window in which they push support for new devices has been much reduced, to prevent leaks.
But looking at the flurry of code in LLVM, it is a lot of commits. Maybe it is because the US Govt is starting to prepare the SW environment for El Capitan (perhaps to avoid a slow bring-up situation like Frontier's, for example).

See here for the GFX940-specific commits
Or Phoronix

There is a lot more if you know whom to follow in the LLVM review chains (before things get merged to GitHub), but I am not going to link AMD employees.
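If you want to check how far your local LLVM build has come, here is a minimal sketch (assuming an llc binary on PATH; passing -mcpu=help is the usual LLVM way to dump the processors a target knows about, though the exact output format can vary between releases):

```python
# Minimal sketch: list the AMDGPU processors a local LLVM build knows about
# and check whether gfx940 is among them. Assumes an `llc` binary on PATH;
# passing -mcpu=help makes LLVM print its "Available CPUs for this target:" list.
import subprocess

def supported_amdgpu_cpus() -> set[str]:
    # Feed an empty module on stdin so llc does not wait for an input file;
    # the CPU listing may go to stdout or stderr depending on the LLVM version.
    result = subprocess.run(
        ["llc", "-march=amdgcn", "-mcpu=help"],
        input="", capture_output=True, text=True,
    )
    cpus = set()
    for line in (result.stdout + result.stderr).splitlines():
        line = line.strip()
        if line.startswith("gfx"):
            cpus.add(line.split()[0])
    return cpus

if __name__ == "__main__":
    print("gfx940 supported:", "gfx940" in supported_amdgpu_cpus())
```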

I am starting to think MI300 will launch around the same time as Hopper, probably only a couple of months later!
Although I believe Hopper had the problem of there being no host CPU capable of PCIe 5 in the very near future, so it might have been pushed back a bit until SPR and Genoa arrive later in 2022.
If PVC slips again, I believe MI300 could launch before it.

This is nuts; the MI100/200/300 cadence is impressive.



Previous thread on CDNA2 and RDNA3 here

 
Last edited:

JustViewing

Senior member
Aug 17, 2022
267
470
106
If AMD really wants to sell more graphics cards, they can simply increase the memory. If the 9070 XT had something like 48GB, it would sell like hot cakes because of the AI craze. This would in turn help AMD with open-source AI frameworks. AMD is even boasting that the Zen 5 Halo CPU is 2.4 times faster than the NVIDIA 4090 on 70GB models.
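For scale, a rough back-of-the-envelope sketch of why 48GB matters for local LLMs (the per-parameter byte counts are the standard ones for fp16/int8/int4 weights; the flat 10% overhead allowance is my own assumption, since real KV-cache/activation overhead depends on context length):

```python
# Rough back-of-the-envelope: VRAM needed just to hold LLM weights at common
# precisions, plus a flat ~10% allowance for runtime overhead (an assumption;
# KV cache and activations actually scale with context length and batch size).
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weights_gib(params_billions: float, precision: str, overhead: float = 0.10) -> float:
    total_bytes = params_billions * 1e9 * BYTES_PER_PARAM[precision]
    return total_bytes * (1 + overhead) / 2**30

for model_b in (8, 33, 70):
    for prec in ("fp16", "int8", "int4"):
        print(f"{model_b:>3}B @ {prec}: ~{weights_gib(model_b, prec):6.1f} GiB")

# A 70B model needs roughly 140 GB at fp16, 70 GB at int8 and 35 GB at int4,
# which is why it spills out of a 24 or 32 GB card but fits on a 48 GB one at int4.
```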
 

Meteor Late

Senior member
Dec 15, 2023
289
314
96
If AMD really wants to sell more graphics cards, they can simply increase the memory. If the 9070 XT had something like 48GB, it would sell like hot cakes because of the AI craze. This would in turn help AMD with open-source AI frameworks. AMD is even boasting that the Zen 5 Halo CPU is 2.4 times faster than the NVIDIA 4090 on 70GB models.

And that's exactly why they won't do it, because they want to sell you Strix Halo for that exact purpose.
 

JustViewing

Senior member
Aug 17, 2022
267
470
106
Because PyTorch is already open source, and getting more RAM than an MI325X is hard.
This has no relation to what I mentioned. What connection does having 48GB in a consumer card have to PyTorch or the MI325X supercomputer/server part? Having 48GB in an AMD consumer card would bring in loads of new developers. You know, most open-source AI tools/frameworks don't work optimally, or at all, with AMD graphics cards.
 

gaav87

Senior member
Apr 27, 2024
652
1,272
96
If AMD really wants to sell more graphics cards, they can simply increase the memory. If the 9070 XT had something like 48GB, it would sell like hot cakes because of the AI craze. This would in turn help AMD with open-source AI frameworks. AMD is even boasting that the Zen 5 Halo CPU is 2.4 times faster than the NVIDIA 4090 on 70GB models.
The AI experience on AMD is so bad that I gave up about 8 months ago; even Bing is better... At the beginning of the AI craze, 2+ years ago, you literally had to know Python to use it, with far, far worse results and performance than the RTX 3000 and 4000 series. There are so many models for NVIDIA it's not even fair.
 

JustViewing

Senior member
Aug 17, 2022
267
470
106
The AI experience on AMD is so bad that I gave up about 8 months ago; even Bing is better... At the beginning of the AI craze, 2+ years ago, you literally had to know Python to use it, with far, far worse results and performance than the RTX 3000 and 4000 series. There are so many models for NVIDIA it's not even fair.
That is exactly the point I was making: if there were a consumer card with a huge amount of memory, it would encourage more AI developers/researchers to buy AMD cards. This in turn would benefit AMD and consumers as well.
BTW, Amuse and LM Studio work flawlessly with Radeon graphics cards with a single-click installation. ComfyUI with ZLUDA also works most of the time as-is.
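For what it's worth, here is a minimal sanity check, assuming a ROCm build of PyTorch is installed (on those builds HIP devices are exposed through the torch.cuda API and torch.version.hip is set):

```python
# Quick sanity check that a ROCm build of PyTorch actually sees the Radeon GPU.
# On ROCm wheels, HIP devices show up through the torch.cuda API and
# torch.version.hip is a version string (it is None on CUDA builds).
import torch

print("ROCm/HIP build:", torch.version.hip)
print("GPU visible:   ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:        ", torch.cuda.get_device_name(0))
    # Tiny smoke test: run a matmul on the GPU.
    x = torch.randn(1024, 1024, device="cuda")
    print("Matmul OK:     ", (x @ x).shape)
```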
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,981
6,562
136
That is exactly the point I was making: if there were a consumer card with a huge amount of memory, it would encourage more AI developers/researchers to buy AMD cards. This in turn would benefit AMD and consumers as well.
BTW, Amuse and LM Studio work flawlessly with Radeon graphics cards with a single-click installation. ComfyUI with ZLUDA also works most of the time as-is.

If memory for AI is the demand, that is what Strix Halo is all about. They covered this at the press event, where it seemed primarily aimed at AI. With up to 128GB of RAM, 96GB of it usable by the GPU for AI, it blows away any GPU.
 

gdansk

Diamond Member
Feb 8, 2011
4,156
6,913
136
That is exactly the point I was making: if there were a consumer card with a huge amount of memory, it would encourage more AI developers/researchers to buy AMD cards. This in turn would benefit AMD and consumers as well.
I'm not following: why would they do that? Because their model doesn't fit in 32GB? I know some people like a challenge, but I don't think it's significant enough to justify switching to clamshell 24Gb GDDR6 chips that don't seem to exist.
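The arithmetic behind that, as a quick sketch (the 256-bit bus and 32-bit-wide GDDR6 devices are the standard figures for Navi 48; which densities actually ship is the open question):

```python
# Back-of-the-envelope for what a 48 GB Navi 48 board would take.
# GDDR6 devices are 32 bits wide; a 256-bit bus means 8 chips in normal mode
# or 16 in clamshell (two chips sharing each 32-bit channel).
BUS_WIDTH_BITS = 256
CHIP_WIDTH_BITS = 32

def board_capacity_gb(chip_density_gbit: int, clamshell: bool) -> int:
    chips = BUS_WIDTH_BITS // CHIP_WIDTH_BITS * (2 if clamshell else 1)
    return chips * chip_density_gbit // 8  # Gbit per chip -> GB total

for density in (16, 24):        # 16 Gbit (2 GB) ships today; 24 Gbit GDDR6 does not
    for clamshell in (False, True):
        print(f"{density} Gbit, clamshell={clamshell}: "
              f"{board_capacity_gb(density, clamshell)} GB")
# -> 16 GB (stock 9070 XT), 32 GB (clamshell), 24 GB, 48 GB
#    (48 GB is the clamshell 24 Gbit case, i.e. the chips that don't seem to exist)
```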
 

JustViewing

Senior member
Aug 17, 2022
267
470
106
I'm not following: why would they do that? Because their model doesn't fit in 32GB? I know some people like a challenge, but I don't think it's significant enough to justify switching to clamshell 24Gb GDDR6 chips that don't seem to exist.
Well, as I mentioned in the previous post, if AMD wants to increase sales of the 9070 they can do that by increasing the amount of RAM. Whether it is practical or not, I have no idea. But it would benefit AMD in the long run, as more AI devs and students would buy the card (only if it is cost-competitive, of course).

If memory for AI is the demand, that is what Strix Halo is all about. They covered this at the press event, where it seemed primarily aimed at AI. With up to 128GB of RAM, 96GB of it usable by the GPU for AI, it blows away any GPU.
That is true; however, the 9070 is a more powerful card than Strix Halo. I think multi-card solutions work for AI, so two 48GB cards can function as a 96GB card. Anyway, I doubt AMD will do this. They may release a Pro version later with a higher markup.
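As a rough illustration of the multi-card idea, a sketch using the Hugging Face transformers + accelerate stack, which can shard a model's layers across every visible GPU (the model name below is just a placeholder, not a recommendation):

```python
# Minimal sketch of "two cards acting as one pool": with accelerate installed,
# device_map="auto" splits the model layer-wise across all visible GPUs, so a
# model too large for one card can still be loaded across two.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-70b-model"   # placeholder model name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",                 # spread layers over every visible GPU
    torch_dtype=torch.float16,
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```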
 
Reactions: Tlh97 and Win2012R2