Question Zen 6 Speculation Thread


Doug S

Diamond Member
Feb 8, 2020
3,120
5,362
136
I never see it mentioned by others and I think it's a major key point:

computing requirements are plateauing.

Every time people have claimed that, it has quickly proven false. You're assuming that AI is the end-all and be-all of innovation and we'll never (or at least not for the next decade) come up with anything that needs more computing power. Before LLMs became something everyone was pouring money into, you could have made the exact same claims you made here and been quickly proven wrong.

The only things that plateau are things where human senses are a limitation in the loop. That's why I think broadband speeds have plateaued - 1 gigabit is more than fast enough for most of us and there is no use on the horizon where anyone needs 10 gigabits. The most bandwidth-intensive thing you can do with your broadband is video streaming, but one person can only watch one thing at once - or maybe 4 if they do one of those 4-way sports viewing things, but those are usually delivered at lower resolution than a single stream, since there's no point streaming 4 4K streams to put on a single 4K display. We could go higher if VR/AR ever goes mainstream, but even that's limited, especially if you use foveal trickery to avoid delivering everything at the max resolution of the center of vision. But as far as computing requirements in terms of CPU, RAM and storage performance? Nope, there is always going to be something on the horizon that wants "more", beyond the eventual limits of technology to deliver "more" or to deliver it at a price anyone is able to pay.

Now for CONSUMERS I largely agree with you, with the proviso that I would have said (and did say) the same thing 20 years ago. Not that the "typical PC" in 2005 was fast enough for 100% of consumers, but that it was fast enough for some of them and still would be today. The percentage of consumers who need "more" has continually shrunk this whole century and will continue to do so. There will always be some who want "more" (and they are wildly overrepresented on a site like this one) but there will be fewer and fewer of them as time goes by, unless some new "killer app" appears that causes some of those for whom today's hardware offers more than enough to suddenly say "I can't buy anything today that gives me everything I need/want".
 

fastandfurious6

Senior member
Jun 1, 2024
498
643
96
Every time people have claimed that, it has quickly proven false
At no other point in time would I have said the same; it didn't apply.

There are good and specific reasons I'm saying this now in 2025.

The only things that plateau are things where human senses are a limitation in the loop.
Right. Computers are not lagging anymore, everything's fluid, fast, quick, spacious.

there is always going to be something on the horizon that wants "more"
Name one.

Now for CONSUMERS I largely agree with you, with the proviso that I would have said (and did say) the same thing 20 years ago.
No, there was no other point in time where consumer tech was good enough to say "it's plateauing". Mobile is key. From Ivy Bridge (2012) until Alder Lake, Intel's mobile was shit, and AMD was on Bulldozer for almost a decade lmao. Shit lagged all the time: stutters, slow/hot laptops, small/expensive SSDs, 4K was unplayable for a very long time, gaps all around.

Zen 2/3 (2020+) and TSMC N7 - N5 were the real breakthrough.

For servers/enterprise, things can always get more and more efficient, but a single 128c Zen 5 server is already powerful enough to serve a huge number of clients.
 
Last edited:

Hulk

Diamond Member
Oct 9, 1999
5,098
3,608
136
I taught my kids to memorize our phone number. Now, the only phone number I know is my wife's and mine .... and only because they are both used for store accounts (Kroger and CVS ). Otherwise, I might not even remember those .

You have a good point though. I have actually argued the opposite in the framework of engineering. I have been designing and running design teams for decades. My youngest daughter is taking Calc III right now for her ME degree. I couldn't do a triple integral if my life depended on it . I can do some pretty scary FEA on CAD, and can make SPICE jump through hoops for circuit analysis.

Does this mean I am no longer a good engineer though? I simply have better tools than I had 40 years ago to solve problems with.

FWIW, I still use the pythagorean theorem and SOH, CAH, TOA (Soak-ah-toe-ah) .

Ironically, there is still wild debate in the digital mixing world about the benefits of 96 kHz vs 48 kHz processing, even though by the Nyquist theorem, 48 kHz sampling can reproduce 24 kHz sound perfectly. Shoot, I am lucky to hear 16 kHz these days.
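For anyone curious, the Nyquist point can be sketched in a few lines of Python (a toy illustration, not anything from the thread): a tone at or below half the sample rate keeps its true frequency, while anything above it folds back down as aliasing.

```python
def alias_frequency(f_tone: float, fs: float) -> float:
    """Frequency at which a pure tone appears after sampling at rate fs,
    folding around the Nyquist frequency fs / 2."""
    f = f_tone % fs
    return f if f <= fs / 2 else fs - f

fs = 48_000  # 48 kHz sample rate

# Anything at or below fs/2 = 24 kHz is represented at its true frequency...
assert alias_frequency(16_000, fs) == 16_000
assert alias_frequency(24_000, fs) == 24_000
# ...while a 30 kHz tone folds back down to 18 kHz (aliasing).
assert alias_frequency(30_000, fs) == 18_000
```

Which is the whole 96 kHz argument in a nutshell: the extra headroom only matters for content above 24 kHz, which nobody in this thread can hear anyway.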

I wonder how useful LP cores are, though, compared to the utility of a Zen 6c? Cores are getting so small that unless you have a butt-ton of them, you spend more die area on other things. In desktop, I just wonder how important it is to have more than 16 cores, especially with 32T. Also, with super high core counts comes the need to feed all these cores with more memory controllers and more memory channels. All of these things smack of HPC, like Threadripper, to me.

In very high core count applications, I think you end up being power and bandwidth limited per socket. In order to get around the problem, you need a bigger socket with more pins and more power.

No it's not. Chiplet technology, packaging, memory speed, and a bunch of other technologies are all outside Moore's law and contribute greatly to performance/$
I'm also an ME. 5 semesters of calc, but I still have to review things just to help my daughter with pre-calc these days. Triple integrals? They used to be easy, I remember, as was solving differential equations. Not anymore! Reminds me of "Spock's Brain": "Why, it's child's play!" Then 10 minutes later, "It's impossible!"

Sometimes I wonder if sitting through four 80-minute lectures while I watched the Navier-Stokes equations being derived was the best use of class time? I *think* some understanding of the theory behind the equations, tables, and software helps well-trained engineers avoid "garbage in, garbage out" mistakes, a problem which began with FEA and is now off the charts with AI.

I remember one of my first assignments as a junior engineer was to verify stress in some beams used for belt filter presses for a municipal authority (sewage processing) our firm was designing. Yes, the manufacturer had done the calcs using FEA, but my boss wanted me to get an "estimate" by hand to check those numbers. Maybe he needed the verification or maybe he was testing to see if I could do it? I don't know to this day.

Reminds me of another story... Way back I interviewed for a job at Princeton Plasma Physics. I thought it'd be cool to work on fusion development. The interview process was about 5 hours long with 5 or 6 different people, and most of them gave me problems to solve. One guy showed me a model of the reactor and asked, "How do you imagine the magnetic fields look for this?" It was really complicated and I was pretty confused. Another gave me a relatively simple continuum mechanics problem and I solved it, but he said, "Did you ever think of doing it this way?" He of course showed me the smart, elegant way, and I had done it the dumb way. They said I didn't get the job due to funding cuts (probably to save my ego), but I was way out of my league!
I had some hubris back then though to even go through with that.

Now I pontificate on microprocessors! Just kidding, I'm generally out of my league here as it were. I guess it's my nature to reach for the fruit I can't reach.

I do a lot of audio stuff and can hear to about 12kHz these days. Most of the "high end" in audio is actually around 8kHz or maybe 10kHz.

I could see something like 12 Zen 6 cores on one chiplet and 18 Zen 6c on the other chiplet. More ST-oriented apps could be directed to the faster chiplet, while big MT jobs go to the 2nd, or both if necessary. After 8 or 10 or perhaps 12 cores, strong ST becomes redundant and more MT is more productive, especially as applications become better threaded. I know this is a subject akin to religion and politics around here, so I DO understand this is my subjective opinion formed from my workloads and therefore not representative of computer users as a whole.
 
Reactions: Tlh97 and Schmide

Hulk

Diamond Member
Oct 9, 1999
5,098
3,608
136
Every time people have claimed that, it has quickly proven false. You're assuming that AI is the end-all and be-all of innovation and we'll never (or at least not for the next decade) come up with anything that needs more computing power. Before LLMs became something everyone was pouring money into, you could have made the exact same claims you made here and been quickly proven wrong.

The only things that plateau are things where human senses are a limitation in the loop. That's why I think broadband speeds have plateaued - 1 gigabit is more than fast enough for most of us and there is no use on the horizon where anyone needs 10 gigabits. The most bandwidth-intensive thing you can do with your broadband is video streaming, but one person can only watch one thing at once - or maybe 4 if they do one of those 4-way sports viewing things, but those are usually delivered at lower resolution than a single stream, since there's no point streaming 4 4K streams to put on a single 4K display. We could go higher if VR/AR ever goes mainstream, but even that's limited, especially if you use foveal trickery to avoid delivering everything at the max resolution of the center of vision. But as far as computing requirements in terms of CPU, RAM and storage performance? Nope, there is always going to be something on the horizon that wants "more", beyond the eventual limits of technology to deliver "more" or to deliver it at a price anyone is able to pay.

Now for CONSUMERS I largely agree with you, with the proviso that I would have said (and did say) the same thing 20 years ago. Not that the "typical PC" in 2005 was fast enough for 100% of consumers, but that it was fast enough for some of them and still would be today. The percentage of consumers who need "more" has continually shrunk this whole century and will continue to do so. There will always be some who want "more" (and they are wildly overrepresented on a site like this one) but there will be fewer and fewer of them as time goes by, unless some new "killer app" appears that causes some of those for whom today's hardware offers more than enough to suddenly say "I can't buy anything today that gives me everything I need/want".
Exactly. I recently got my Comcast (Xfinity) internet down to $25/month, but I had to "downgrade" to 150 Mbps. That's about 18.75 MB/s, more than enough for 4 Netflix streams, which is all we need. I downgraded and no one in the family has noticed a thing. So it takes a minute longer for the rare big download. Big deal. It's not like I'm back on an analog modem. Now that was pain.
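As a quick sanity check on that arithmetic (the 25 Mbps per-stream figure below is just an assumed 4K bitrate for illustration, not Comcast's or Netflix's official number):

```python
def mbps_to_mb_per_s(mbps: float) -> float:
    """Convert megabits per second to megabytes per second (8 bits per byte)."""
    return mbps / 8

plan_mbps = 150     # the downgraded Comcast plan from the post
stream_mbps = 25    # assumed bitrate for one 4K stream

print(mbps_to_mb_per_s(plan_mbps))  # 18.75 MB/s, matching the post
print(plan_mbps // stream_mbps)     # 6 simultaneous streams at that assumed rate
```

So even with a generous per-stream bitrate, the 150 Mbps plan comfortably covers the 4 streams the household actually uses.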
 

OneEng2

Senior member
Sep 19, 2022
512
742
106
I think MLID is also confused about the utility of the LP cores in modern client computers. Their role is to keep the lights on in the PC during idle and light loads, allowing the full-speed core complex to shut down.

What is not the role of the LP cores is to meaningfully add to Cinebench scores. Improving benchmarks like Cinebench is the primary role of the dense cores, like Zen 6c.

There may be a whole new role for the LP cores in the server environment - as an alternative to Arm cores for very light loads - but that is unrelated to client.
100% agree!

If there is a market for a VERY high core count processor with low computing requirements, I still wonder if a Zen 6c isn't a better use of silicon area than an LP core. After all, Zen 5c gets 1.4x performance from SMT. So it seems to me that an LP core would have to be about 1/2 the size of a Zen Xc core for an LP core to make sense.
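That break-even logic can be sketched as simple throughput-per-area arithmetic. The 1.4x SMT uplift comes from the post above; the 0.7x relative LP-core performance is purely an assumed illustration value:

```python
def lp_break_even_area(c_core_area: float, smt_uplift: float = 1.4,
                       lp_relative_perf: float = 0.7) -> float:
    """Largest LP-core area at which throughput per area still matches a
    dense (Zen Xc-style) core with SMT. All figures are assumptions."""
    c_core_throughput = smt_uplift    # one dense core + SMT ~ 1.4 "thread units"
    lp_throughput = lp_relative_perf  # one LP core, no SMT
    return c_core_area * lp_throughput / c_core_throughput

# With an assumed LP core at 70% of a dense core's single-thread performance,
# it must be no bigger than half the dense core's area just to break even:
print(lp_break_even_area(1.0))  # 0.5
```

Which lines up with the "1/2 the size" intuition: the weaker the LP core relative to a dense core, the smaller it has to be before it pays for itself on pure throughput.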

I think the high end Nova Lake variants will have some LP cores, so we can see how that works out.
If you would deliver arguments instead of 1-liners without any meaningful content....
Funny, I made the same comment a couple of weeks ago to the same poster and got replies of "No it isn't" and such. At the time I said that we should just go back to the "My dad is bigger than your dad" line of discussion, as it is equally stimulating.
I think the way the discussion is framed, i.e. Moore's law etc., is kinda outdated. Maybe that's the point of saying Moore's law is dead.
I agree. Perhaps we should be talking about the performance / $ vs. the density of transistors? As mentioned, there are many ways to increase performance that do not include transistor density. This is particularly true for specialty instructions and processors.
There will be always be some who want "more" (and they are wildly overrepresented on a site like this one) but there will be fewer and fewer of them as time goes by, unless some new "killer app" appears that causes some of those who for whom today offers them more than enough to suddenly say "I can't buy anything today that gives me everything I need/want".
I agree that as time moves forward, the % of the market that needs more for that "killer app" is decreasing. This is also my argument against very high core count desktop processors. We are approaching the point where you can put so many cores in a socket that you easily overrun the ability of RAM to feed them in highly threaded apps ..... and there simply aren't that many of those apps, or people who use them (Mr. Hulk being one of the exceptions of course).
I'm also an ME.
I actually hold a degree in EE, but in the Navy, I was a nuke mechanic .... so kind of a strange bird.
Some understanding of the theory behind equations, tables, software, helps to avoid "garbage in, garbage out" mistakes I think.
I agree. I just think that College overdoes the math and algebra at the expense of focus on theory and why things behave the way they do. Too many engineering students spend 90% of their thought on the calculus and algebra and 10% on how to setup and solve the problem.
Yes, the manufacturer had done the calcs using FEA but my boss wanted to me to get an "estimate" by hand to check those numbers.
In the Navy nuclear power program, all engineers were taught rules of thumb and general theory so that you could quickly assess what you were seeing in the instrumentation and determine causes through intuition. Lots of thinking on your feet instead of thinking on a computer.
Way back I interviewed for a job at Princeton Plasma Physics. I thought it'd be cool to work on fusion development.
LOL. I worked my way through college at UIUC by working with post docs in the Fusion Studies Lab where I did work on the dense plasma focus fusion propulsion program and the Tokamak hydrogen pellet rail gun fueling system . Lots of plasma physics and lots of fluid modeling on the CRAY .
Now I pontificate on microprocessors!
Me too .
I do a lot of audio stuff and can hear to about 12kHz these days. Most of the "high end" in audio is actually around 8kHz or maybe 10kHz.
I have been playing in a band since high school. My right ear (the one closest to the drummer when I was in my first band) doesn't do much over 10kHz I think. Left ear is ~14-16kHz though. I do mostly live audio, but make some live recordings and post process them for band videos.
I could see something like 12 Zen 6 cores on one chiples and 18 6c on the other chiplet.
That would be a very interesting combination.... so 30c, 60T. Do you think that will be enough to overcome the Cinebench scores from the 52-core Nova Lake? It really only has 48 cores that can breathe well enough to push CB scores, though, IMO.

I have been hearing more lately about AMD's new memory controller setup on Zen 6. Can someone please comment on how having 2 memory controllers vs 1 might help with bandwidth and/or latency over the current single memory controller, dual-channel configuration?
 
Reactions: Tlh97 and Joe NYC

yottabit

Golden Member
Jun 5, 2008
1,588
676
146
I never see it mentioned by others and I think it's a major key point:

computing requirements are plateauing.

no real new-demanding-thing on the horizon that requires more compute other than LLM/AI.

N2 chips, performance-wise, whether CPU or GPU or whatever, will have little reason to be replaced for a very, very long time. They will play/run everything at top speed for a very long time. My guess: up to 10+ years.

We will be after better form factors rather than higher performance.

By 2030, smartphone-sized devices will have Medusa Halo performance (i.e. top-end N2 performance). You could still get a desktop with 2x the performance, but what for, really? 8K gaming?

Tablet-sized already exists (Halo ROG X13, 1.3 kg); this is where all the future is going. We're hitting golden ratios in cost/performance with the 9070 XT and 9800X3D; tomorrow will bring these to minimal power/thermal/size ratios.

I think the way the discussion is framed i.e. moore's law etc is kinda outdated. maybe that's the point around saying moore's law is dead.
Don’t worry, Win11 turned half the GUI into Electron apps, software developers are working hard to make sure they can tax your hardware
 
Reactions: Tlh97 and dr1337

branch_suggestion

Senior member
Aug 4, 2023
647
1,366
96
With N3, cost per adjusted area did increase for the first time in history.
But looking beyond just client, consider all the skyrocketing CapEx/OpEx in DC, costs of networking, cooling and system architecture are colossal compared to even 5 years ago.
 

Hulk

Diamond Member
Oct 9, 1999
5,098
3,608
136
That would be a very interesting combination.... so 30c, 60T. Do you think that will be enough to overcome the Cinebench scores from the Nova Lake 52 core? It really only has 48 cores that can breath well enough to push any CB scores though IMO.
For a minute there I was going to do some napkin math. Then I realized that AMD and Intel know what the other is doing almost before they do. So, just like the past few generations, CB will be very close. It's a game of leapfrog: Raptor Lake topped Zen 4, then Zen 5 got on top, and now Arrow Lake is on top. Keep in mind there are no blowouts, just a bit more. I expect the next gens will be close again, with the one that comes out 2nd ending up on top for a while.

I have been playing in a band since high school. My right ear (the one closest to the drummer when I was in my first band) doesn't do much over 10kHz I think. Left ear is ~14-16kHz though. I do mostly live audio, but make some live recordings and post process them for band videos.
Okay now this is weird because playing in a band has been a side job for me for 35 years. I have a small studio as well. My right ear (closest to the drummer) is my good one because I wear ear plugs and that one stays all the way in, the left one I keep loose so that's the bad ear.

For some reason my degree reads "Mechanical and Aerospace Engineering," but I only had one aerospace-specific class besides aerodynamics, and that was aerospace structures. Interesting class. The prof worked for RCA labs and told us once he had a bad weekend because someone had left a small piece of something in a satellite, like a washer or something. It would have been horrendously expensive to disassemble, so they did some calculations to find out how much momentum/impact it could present to the inside of the satellite during the g's of launch and decided it was okay to leave it in there. I always remember that because it seemed "real life." After that I'm sure we got back to deriving the "stiffness matrix" equations or something.
 
Reactions: Tlh97 and OneEng2

LightningZ71

Platinum Member
Mar 10, 2017
2,134
2,587
136
AMD has done multiple CCXs in the same die a few times already, so that's not very surprising if it is indeed the case. However, that looks like just a single giant CCX.
 

GTracing

Senior member
Aug 6, 2021
478
1,109
106
It does look like a ringbus and nothing like a mesh. And why would anybody split a mesh into 2 different parts in the same silicon?

AMD has publicly revealed that Zen5 switched to a mesh interconnect.


I assume 2*8 is referring to the layout of the mesh; it's a two by eight grid.
 

Josh128

Senior member
Oct 14, 2022
766
1,268
106
For a minute there I was going to do some napkin math. Then I realized that AMD and Intel know what the other is doing almost before they do. So, just like the past few generations, CB will be very close. It's a game of leapfrog: Raptor Lake topped Zen 4, then Zen 5 got on top, and now Arrow Lake is on top. Keep in mind there are no blowouts, just a bit more. I expect the next gens will be close again, with the one that comes out 2nd ending up on top for a while.


Okay now this is weird because playing in a band has been a side job for me for 35 years. I have a small studio as well. My right ear (closest to the drummer) is my good one because I wear ear plugs and that one stays all the way in, the left one I keep loose so that's the bad ear.

For some reason my degree reads "Mechanical and Aerospace Engineering," but I only had one aerospace-specific class besides aerodynamics, and that was aerospace structures. Interesting class. The prof worked for RCA labs and told us once he had a bad weekend because someone had left a small piece of something in a satellite, like a washer or something. It would have been horrendously expensive to disassemble, so they did some calculations to find out how much momentum/impact it could present to the inside of the satellite during the g's of launch and decided it was okay to leave it in there. I always remember that because it seemed "real life." After that I'm sure we got back to deriving the "stiffness matrix" equations or something.

Saying Arrow Lake is on top of Zen 5 is quite a stretch, especially when you consider it has a full node advantage and 50% more cores, yet still loses out to Zen 5 more often than not.





 
Reactions: Joe NYC and Racan

naukkis

Golden Member
Jun 5, 2002
1,004
844
136
AMD has publicly revealed that Zen5 switched to a mesh interconnect.
Mesh is a different topology. A ring bus is circular, for obvious reasons, and all I have seen AMD state is a "ladder" optimization of the ring bus: with two bidirectional rings, Zen 5 cores get ring stops in core pairs, effectively cutting the maximum ring distance in half without needing complex mesh logic at all. It's not a mesh - data can't take a different path at different points - which has an obvious advantage at clock speeds: a ring bus can operate at core speed, where mesh logic can only operate at much lower frequencies. And because it's not a mesh but a way to cut the ring distance in half, don't expect AMD to grow their CCD past 16 cores without going to a real mesh network - which, as with other CPU manufacturers using mesh L3 networks, would kill L3 performance vs. ring-bus designs, so those designs will be server-only.
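The distance-halving argument is easy to sketch with toy topology arithmetic (a simplification of any real interconnect; hop counts only, ignoring link widths, clocks, and routing):

```python
def ring_max_hops(stops: int) -> int:
    """Worst-case hop count on a bidirectional ring with `stops` ring stops."""
    return stops // 2

def mesh_max_hops(rows: int, cols: int) -> int:
    """Worst-case (Manhattan) hop count across a rows x cols mesh grid."""
    return (rows - 1) + (cols - 1)

# 16 cores, one ring stop each: worst case 8 hops.
print(ring_max_hops(16))        # 8
# Pair two cores per ring stop (the "ladder" idea): 8 stops, worst case 4 hops.
print(ring_max_hops(16 // 2))   # 4
# A 2x8 mesh of the same 16 cores has a 1 + 7 = 8-hop diameter,
# but with routing logic that typically clocks lower than a ring.
print(mesh_max_hops(2, 8))      # 8
```

The takeaway matches the post: pairing cores per ring stop halves the worst-case distance while keeping simple ring logic at core clocks, whereas a mesh only pays off at core counts where a ring's diameter becomes unmanageable.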
 
Last edited:
Reactions: Tlh97

Hulk

Diamond Member
Oct 9, 1999
5,098
3,608
136
Saying Arrow Lake is on top of Zen 5 is quite a stretch, especially when you consider it has a full node advantage and 50% more cores, yet still loses out to Zen 5 more often than not.
Please, please, please don't misquote me. My reply was specifically in reference to the question, which was about Cinebench performance.

Now please post some CB scores to show that I was correct with my reply.
 

fastandfurious6

Senior member
Jun 1, 2024
498
643
96
there has to be a better way to gauge speed and responsiveness of cpus...

like a series of 10000 actions done rapidly with macro/automation

e.g. open 3 browsers, 20 pages in each, open 12 apps, do this do that etc

CB and GB are so silly, PCmark 3Dmark even more
 

Hulk

Diamond Member
Oct 9, 1999
5,098
3,608
136
there has to be a better way to gauge speed and responsiveness of cpus...

like a series of 10000 actions done rapidly with macro/automation

e.g. open 3 browsers, 20 pages in each, open 12 apps, do this do that etc

CB and GB are so silly, PCmark 3Dmark even more
Remember Winstone? It was fun to watch but very dependent on the storage system of the computer.
It was also very prone to various errors and configuration issues.

Seems like there is a Heisenberg uncertainty principle with computer testing: the more realistic the test, the less reliable the scores are across systems, and vice-versa! As the test probes "deeper" into the computer, the setup, configuration, installed software, and other factors create lots of testing variability.

The ideal bench has yet to be devised, unfortunately.
 
Last edited:
Reactions: Makaveli and Tlh97

Abwx

Lifer
Apr 2, 2011
11,783
4,691
136
Please, please, please don't misquote me. My reply was specifically in reference to the question, which was about Cinebench performance.

Now please post some CB scores to show that I was correct with my reply.
Depends on the CB version. Besides, it's easy to do slightly better if you use 240W instead of 200W; I wouldn't call this a win.
 

gdansk

Diamond Member
Feb 8, 2011
4,070
6,734
136
there has to be a better way to gauge speed and responsiveness of cpus...

like a series of 10000 actions done rapidly with macro/automation

e.g. open 3 browsers, 20 pages in each, open 12 apps, do this do that etc

CB and GB are so silly, PCmark 3Dmark even more
That ends up being an operating system benchmark.
 

OneEng2

Senior member
Sep 19, 2022
512
742
106
Okay now this is weird because playing in a band has been a side job for me for 35 years. I have a small studio as well. My right ear (closest to the drummer) is my good one because I wear ear plugs and that one stays all the way in, the left one I keep loose so that's the bad ear.
I have been using IEMs since the mid-'90s. Sadly, my hearing loss happened back in the early '80s.
Saying Arrow Lake is on top of Zen 5 is quite a stretch, especially when you consider it has a full node advantage and 50% more cores, yet still loses out to Zen 5 more often than not.
I understood Hulk to mean CB only .... and at stock.

What I do find interesting is that CB23 still favors Zen 5, I believe. It makes me think that CB24 is perhaps more bandwidth-dependent than compute-dependent. Anyway, just a theory.
Interesting. I was certain that the 2nm Turin D would be a 32c CCD using Zen 6c cores while the standard Turin would be 16c of full Zen 6.

Are you certain?
 