Question: AMD Phoenix/Zen 4 APU Speculation and Discussion

Page 64

adroc_thurston

Platinum Member
Jul 2, 2023
2,809
4,121
96
Do you remember all those super interesting accelerated things we were exposed to at those Fusion Dev Summits?
On the plus side, the exascale APU from the DOE FastForward programs is real!
Drivers regularly breaking browser HW acceleration, OpenCL kernels being disabled due to instability, etc.
I mean half the issue is there being no unified accelerator programming model on Windows ever since MS killed off C++ AMP.
 

gdansk

Platinum Member
Feb 8, 2011
2,279
2,958
136
AMD is right to include it. Microsoft seems pretty hellbent on cramming ML crap everywhere. Might lose sales if some new BingOS (née Windows) feature doesn't work even if no one uses it.
 

Joe NYC

Platinum Member
Jun 26, 2021
2,150
2,727
106
That's a newer development/toy.

It's kind of funny how much AI has blown up recently. AI has been part of mobile phones for ages and nobody really cared (at least compared to today's extent).

I wonder how much Artificial Intelligence it takes to distinguish between humans typing and butt dialing / texting. But Apple, with all the AI in its iPhones, is still not capable of doing it.
 
Reactions: igor_kavinski

moinmoin

Diamond Member
Jun 1, 2017
4,988
7,758
136
Microsoft is pushing ARM so hard because Qualcomm is willing to give them what Intel and AMD have been unable to.
Aside from frequency, the Qualcomm SoCs Microsoft uses aren't anything special in the context of high-end ARM SoCs. They even lagged behind the SoCs in contemporary high-end phones when they originally launched.

I wonder how much Artificial Intelligence it takes to distinguish between humans typing and butt dialing / texting. But Apple, with all the AI in its iPhones, is still not capable of doing it.
Considering ChatGPT already perfectly mimics humans' ability to engage in empty sweet talk without regard to facts, the chance that AI will ever be able to detect actual human typing has become nil.
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Aside from frequency, the Qualcomm SoCs Microsoft uses aren't anything special in the context of high-end ARM SoCs. They even lagged behind the SoCs in contemporary high-end phones when they originally launched.
Nuvia should help that substantially. But regardless, you need to think of this from Microsoft's perspective. Which means asking, "What will fundamentally change how someone uses their computer?".

10% better CPU performance, while nice, isn't going to change how you use your device. But true all-day battery life, pervasive cellular connectivity, and ubiquitous AI - those will. Microsoft is willing to put up with Qualcomm's shortcomings in order to have a partner willing to push those experiences with them.
 

adroc_thurston

Platinum Member
Jul 2, 2023
2,809
4,121
96
10% better CPU performance
The focus of 8cx g4 is literally CPU performance (and perf in general).
Like lol.
Nuvia should help that substantially
Ugh, no.
It's good but not that good.
But true all day battery life
Ugh, no.
Still very efficient.
pervasive cellular connectivity
Modem is separate since 8cx g3 and that ain't changin'.
and ubiquitous AI
I too love free dark silicon!
Microsoft is willing to put up with Qualcomm's shortcomings
They literally aren't.
 
Reactions: moinmoin

SpudLobby

Senior member
May 18, 2022
788
488
106
The focus of 8cx g4 is literally CPU performance (and perf in general).
Like lol.

Ugh, no.
It's good but not that good.

Ugh, no.
Still very efficient.

Modem is separate since 8cx g3 and that ain't changin'.

I too love free dark silicon!

They literally aren't.

RE: efficiency and battery life: why do you say that?
 

NTMBK

Lifer
Nov 14, 2011
10,254
5,059
136
The current edge/notebook AI hype brings back AMD memories.

Do you remember all those super interesting accelerated things we were exposed to at those Fusion Dev Summits? So much Microsoft, so much heterogeneous, so much APU, so much faster, so much notebooks, so much wow.

A decade later we are pretty much struggling to accelerate much of anything. Drivers regularly breaking browser HW acceleration, OpenCL kernels being disabled due to instability, etc.
We've spent a decade seeing more and more of our workflow move into badly written Electron apps, with web tech running on top of an entire JavaScript engine. And now we all need top end single thread CPU performance just to run a basic chat app smoothly (I'm looking at you, Slack).
 

yuri69

Senior member
Jul 16, 2013
400
651
136
I'm looking at you, Slack.
It sounds like you have not been corporately forced to use Microsoft Teams... It's way worse than Slack: utterly slow rendering of even the tiniest element combined with terrible backend response times, broken search functionality, broken formatting, terrible Office365 webhooks, laughable notification rate limits, etc. Slack is great, trust me.
 

moinmoin

Diamond Member
Jun 1, 2017
4,988
7,758
136
Slack may be better than Teams, but that still doesn't make it great. Slack's current move to Canvas makes the UI much less productive by no longer offering a sticky right sidebar. I wish I didn't have to use it at all.
 

Glo.

Diamond Member
Apr 25, 2015
5,743
4,632
136
I think there's some truth to that, but they're heavily pushing AI on the client side as well. No doubt it will be deeply integrated into Windows and Office in the not-so-distant future. There's a reason Intel and AMD are suddenly so concerned with it. This first gen isn't good for much more than Windows Studio effects, but I think we'll see big jumps with Strix Point. It's quite possible we see a future where the AI accelerator is as big and important as the GPU.
Oh, suddenly people are seeing the light on this topic?

Visual AI and spatial computing will be much, much bigger than people UNDERSTAND right now. It's an emerging market, and a lot of paradigm shifts still have to happen, but the writing is on the wall.

The compute hardware WILL have to adapt.
 
Reactions: BorisTheBlade82

NTMBK

Lifer
Nov 14, 2011
10,254
5,059
136
It sounds like you have not been corporately forced to use Microsoft Teams... It's way worse than Slack: utterly slow rendering of even the tiniest element combined with terrible backend response times, broken search functionality, broken formatting, terrible Office365 webhooks, laughable notification rate limits, etc. Slack is great, trust me.
They're both bad software. I miss when desktop developers cared about performance.
 

MadRat

Lifer
Oct 14, 1999
11,915
258
126
Why do we allow developers from BRIC countries that have every incentive to create mediocre results? There once was a time when such behavior was considered supporting the enemy. Justifications often come down to price. Cheap is cheap, and it never breeds good results.
 

gdansk

Platinum Member
Feb 8, 2011
2,279
2,958
136
Why do we allow developers from BRIC countries that have every incentive to create mediocre results? There once was a time when such behavior was considered supporting the enemy. Cheap is cheap, and it never breeds good results.
Zoom's developers are from China - the C in that initialism - and it's pretty good about resource usage. Maybe not good software, but at least it uses a reasonable amount of memory.

Electron bloat is popular in the US because developers are expensive and using web tools expands the developer labor pool to just about every developer in the US.
 

MadRat

Lifer
Oct 14, 1999
11,915
258
126
I wonder how much statistical analysis by AMD goes into CPU design and microcode instruction design. If 25% of your microcode is used by 95% of programs, do you target performance across recognized bottlenecks, at that 25%, at the 5% of programs using less common microcode, or spread efforts evenly across all instructions? My intuition would guess effort is directed at the biggest bottlenecks. I would imagine there is always low hanging fruit to tackle as processes shrink. But I do not know.
 
Last edited:

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
I wonder how much statistical analysis by AMD goes into CPU design and microcode instruction design. If 25% of your microcode is used by 95% of programs, do you target performance across recognized bottlenecks, at that 25%, at the 5% of programs using less common microcode, or spread efforts evenly across all instructions? My intuition would guess effort is directed at the biggest bottlenecks. I would imagine there is always low hanging fruit to tackle as processes shrink. But I do not know.
Generally you'd want hardware to handle the simpler/common cases, and ucode for the rarer and more complex ones. To quote Jim Keller:
80% of core execution is only six instructions - you know, load, store, add, subtract, compare and branch. With those you have pretty much covered it.
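As a rough illustration of the kind of instruction-mix statistics being asked about (nothing to do with AMD's actual methodology), here is a minimal Python sketch that builds a static mnemonic histogram from objdump output. Static counts are only a crude proxy for the dynamic mix Keller describes; a real study would trace execution (perf, Pin, etc.). The binary path is just an example.

```python
# Rough sketch: static instruction-mix histogram for an x86-64 ELF binary.
# Static counts only approximate the dynamic mix (a real study would trace
# execution), but they show the flavor of the analysis.
import re
import subprocess
from collections import Counter

def instruction_mix(binary_path: str) -> Counter:
    """Count mnemonics in the disassembly produced by GNU objdump."""
    dump = subprocess.run(
        ["objdump", "-d", "--no-show-raw-insn", binary_path],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter()
    for line in dump.splitlines():
        # Disassembly lines look like "  401000:\tmov    %rdi,%rax"
        m = re.match(r"\s*[0-9a-f]+:\s+([a-z][a-z0-9.]*)", line)
        if m:
            counts[m.group(1)] += 1
    return counts

if __name__ == "__main__":
    mix = instruction_mix("/bin/ls")  # example path; any ELF binary works
    total = sum(mix.values())
    for mnemonic, n in mix.most_common(10):
        print(f"{mnemonic:10s} {n:8d}  {100 * n / total:5.1f}%")
```

Even a static count like this tends to show a handful of moves, loads, stores, compares and branches dominating, which is roughly the point of the quote.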
 

SpudLobby

Senior member
May 18, 2022
788
488
106
Nuvia should help that substantially. But regardless, you need to think of this from Microsoft's perspective. Which means asking, "What will fundamentally change how someone uses their computer?".

10% better CPU performance, while nice, isn't going to change how you use your device. But true all-day battery life, pervasive cellular connectivity, and ubiquitous AI - those will. Microsoft is willing to put up with Qualcomm's shortcomings in order to have a partner willing to push those experiences with them.
AI is broad, but for what it's worth: in the near future, on a 128-bit bus with basic LPDDR5, running generalized LLMs locally won't be as much of a hit as people think, nor as plausible in the sense of a good experience.

With superior data quality and training data or training time (see: Chinchilla-optimal models, or conversely pruning existing ones) and more aggressive quantization (to a point, plus quantization-aware training), you can massively lower the floor for useful LLMs, especially in a specialized sense - we have 2.7B-param models passing medical exams - but I still don't think it will be a particularly fun experience, especially keeping the actual bandwidth requirements in line if it's running constantly (rough math below). With a 256-bit bus (and more memory, though the bandwidth is arguably a bigger deal for e.g. a 3-10B-param model) things change, but that's still niche and certainly not coming to Windows save for what will likely be a gaming-laptop-but-chiplet APU (AMD's Halo) more than a monolithic device with a broad market.
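To make the bandwidth point concrete, a back-of-envelope sketch (my numbers, purely illustrative): token generation on a weight-streaming LLM is roughly memory-bandwidth bound, so the decode rate is capped at about bandwidth divided by model size in bytes. This ignores KV-cache traffic, compute, and real-world efficiency, and assumes peak LPDDR5-6400 figures.

```python
# Back-of-envelope only: decode on a weight-streaming LLM is roughly
# memory-bandwidth bound, so tokens/s <= bandwidth / model size in bytes.
# All figures are illustrative assumptions, not measurements.

def peak_bandwidth_gbs(bus_bits: int, mt_per_s: int) -> float:
    """Theoretical peak DRAM bandwidth in GB/s."""
    return bus_bits / 8 * mt_per_s / 1000  # bytes per transfer * MT/s

def max_tokens_per_s(params_billion: float, bits_per_weight: int, bw_gbs: float) -> float:
    """Upper bound on decode rate assuming one full weight read per token."""
    model_gb = params_billion * bits_per_weight / 8  # e.g. 7B @ 4-bit ~ 3.5 GB
    return bw_gbs / model_gb

for bus_bits in (128, 256):
    bw = peak_bandwidth_gbs(bus_bits, 6400)  # LPDDR5-6400, peak, no losses
    for size_b in (3, 7, 13):
        rate = max_tokens_per_s(size_b, 4, bw)
        print(f"{bus_bits}-bit bus ({bw:.0f} GB/s), {size_b}B params @ 4-bit: <= {rate:.0f} tok/s")
```

The 128-bit numbers are workable for small models in bursts, but sustaining that while the rest of the system shares the same bus is a different story, which is the "running constantly" problem above.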

Plus, all of what I described is still useful for cloud instances; it just means we'll see more advanced models become plausible at the same compute as these techniques evolve and are increasingly adopted (in particular the better training data), and network latency is just low these days.

Wrapping back around, there are still various things on-device AI is particularly useful for, with a better perf/mm^2 or perf/W case than guzzling power doing it on the CPU (the GPU I think is defensible, but still). Audio transcription, some image editing, noise isolation, stuff like that.

But I strongly suspect that if they execute, the main win from QC will be performance at low power - ST especially, which is important - and then overall runtime (going back to idle power), more than AI or 5G. In part because the others will compete on the former and on-device AI is only so relevant, and on the latter the market still isn't big enough yet and it only buys them so much of an advantage.

So a slightly watered-down M-chip of sorts for Windows. If they don't hit that bar on runtimes or low-power performance, then AMD and Intel will wreck them, because no one cares that much about having a slightly better aptX, and I think AMD et al. will eventually adopt 5G modems as they become more common.

As for modems, the modem is still discrete, so no power savings on that front from integration; it's rumored to be an X65. It probably just doesn't make sense to integrate it given the market interest today.
 
Last edited:

LightningZ71

Golden Member
Mar 10, 2017
1,645
1,929
136
In a theoretical sense, your optimization targets change over time as well. The instruction mix that a typical PC or server saw in 2003 is considerably different in the last 50% of code (the statement quoted above is still broadly relevant) from what they see today. Why? There are many new instructions from the various ISA extensions that are now very relevant, and the type of programs being run on these systems is also quite different.

Each generation, they have to profile CURRENT code and instruction behavior and also speculate on what will be in use 5 years down the road as they begin a new core development cycle.
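As a crude illustration of "the instruction mix shifts with ISA extensions" (again, not anything resembling an actual core-development workflow), here is a minimal sketch that buckets a binary's disassembled instructions by the widest vector register they touch. Register width is only a proxy for extension (xmm can be SSE or AVX-128), static counts say nothing about how hot the code is, and the path is just an example.

```python
# Crude proxy for "which vector extensions does this binary actually use?":
# bucket instructions by the widest vector register named in the line.
# Static, width-based bucketing is only a rough hint - illustrative only.
import subprocess
from collections import Counter

def vector_width_mix(binary_path: str) -> Counter:
    dump = subprocess.run(
        ["objdump", "-d", "--no-show-raw-insn", binary_path],
        capture_output=True, text=True, check=True,
    ).stdout
    buckets = Counter()
    for line in dump.splitlines():
        if ":\t" not in line:  # keep only disassembly lines
            continue
        if "zmm" in line:
            buckets["512-bit (AVX-512)"] += 1
        elif "ymm" in line:
            buckets["256-bit (AVX/AVX2)"] += 1
        elif "xmm" in line:
            buckets["128-bit (SSE/AVX-128)"] += 1
        else:
            buckets["scalar/other"] += 1
    return buckets

if __name__ == "__main__":
    # Example path; substitute something SIMD-heavy (codecs, BLAS) to see a shift.
    for bucket, n in vector_width_mix("/bin/ls").most_common():
        print(f"{bucket:25s} {n}")
```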
 

Gideon

Golden Member
Nov 27, 2007
1,687
3,837
136
They're both bad software. I miss when desktop developers cared about performance.
Yeah, Discord is the only chat app that has a decent native PC client, and by far (memory consumption, performance, etc.). It used to be Golang and progressively switched to Rust. A far cry from the Electron apps most other chats use.

Too bad it has the reputation of being quirky gamer-only software.
 

yuri69

Senior member
Jul 16, 2013
400
651
136
Yeah, Discord is the only chat app that has a decent native PC client, and by far (memory consumption, performance, etc.). It used to be Golang and progressively switched to Rust. A far cry from the Electron apps most other chats use.

Too bad it has the reputation of being quirky gamer-only software.
Last time I checked the official "Discord Desktop app for Linux" it was Electron based. Is there an alt client?
 

Gideon

Golden Member
Nov 27, 2007
1,687
3,837
136
Last time I checked the official "Discord Desktop app for Linux" it was Electron based. Is there an alt client?
Damn, it seems I was mistaken. The server side uses Rust, Elixir, Python and C++. It seems the client side is still either Electron or React Native (for the mobile clients).
 