What are these AVX512-heavy apps, aside from the PS3 emulator?
> Some math stuff but they're all limited by the bandwidth available.

Also, the AV1 decoder dav1d and Intel's AV1 encoder SVT-AV1 have a fair chunk of AVX512 SIMD at this point.
> Computex and WWDC 2024.

Ok so this is now in full blown ideology mode, "anything that we have touched is ours".
> Zen4 has a trimmed AVX512 support, where they run some kind of dual 256bit design, I forget the details, but it's more of a "can do AVX512" than a "is custom made for AVX512".

Zen 4 uses dual-cycle 256-bit computation for AVX512, yes, but beyond that it supports most of the AVX512 instructions that Intel did prior to Alder Lake gimping their AVX512 support in the consumer market.
> Ok so this is now in full blown ideology mode, "anything that we have touched is ours".

What I mean by this is: watch those two events.
Computex has been here decades before AI. It has nothing to do with AI, it's a computing event. AI just grafted (grifted?) itself onto it.
Never heard of WWDC 2024 and I think I'm happy with that. And the fact that I don't even know about it confirms my opinion about AI's success.
Also I asked for crowds. Not for nerds and computing professionals. B2C. Large, as in millions, of people looking for an AI product and who have found one at an acceptable price.
> Can I petition for the Silicon Gang to bait WTFTech as their next target?

Been there, done that: https://chipsandcheese.com/2022/10/27/why-you-cant-trust-cpuid/
> Been there done that https://chipsandcheese.com/2022/10/27/why-you-cant-trust-cpuid/

Yeah, even more recently too I realized:
> A party trick is a party trick. Like clippy, or wordart templates.

Nah, you're just saying things for the sake of it; you wouldn't actually make this comparison if you'd used both old MS Office and new AI. There is a massive difference between having some random tips and templates and having the program literally generate new and novel things for you specifically.
> Sadly, even sending fake news to a trash website like WTFTech won't ever kill their revenues. It's like cutting the head off a hydra: for every one article that gets outed as fake, they've already written two or more rumor articles.

The press is playing StarCraft with us: we're just Terrans hiding in a bunker, trying to share some stories, and they Zerg-rush articles all over us until we catch fire.
> There is a massive difference between having some random tips and templates, and having the program literally generate new and novel things for you specifically.

Yeah, so... that'll probably up your productivity by a few percent of daily worktime.
> even more no?

iOS and their goggles OS actually could use “AI”. More so than Windows at the consumer level.
> iOS and their goggles OS actually could use “AI”. More so than Windows at the consumer level.

I don't believe what they are building will happen anytime soon. Basic AI, sure.
Android is good too.
Windows is so meh with consumer AI.
> Ok so this is now in full blown ideology mode, "anything that we have touched is ours".
> Computex has been here decades before AI. It has nothing to do with AI, it's a computing event. AI just grafted (grifted?) itself onto it.
> Never heard of WWDC 2024 and I think I'm happy with that. And the fact that I don't even know about it confirms my opinion about AI's success.
> Also I asked for crowds. Not for nerds and computing professionals. B2C. Large, as in millions, of people looking for an AI product and who have found one at an acceptable price.

AI is essential to compute, and to say otherwise is dishonest and ignorant. It's just not that ready/useful for the masses, except in a few cases. Completely different story in the enterprise space, though. Regardless, we need more compute, and more efficient compute.
> When did this become an AI thread?

That's why I started one... Back to Zen 5.
> When did this become an AI thread?

So about Turin-AI: is it real, or just another manifestation of MLID schizo? Replacing 1, 2, or even 3 chiplets with AI-oriented ones seems feasible and interesting to me. Thoughts?
> 1.25X, the actual core power is lower for reasons you can guess.

Dumb question... +50% performance at +25% power increase... are we still talking 1T, or, since power is now involved, nT?
> Dumb question... +50% performance at +25% power increase... are we still talking 1T or since power is now involved its nT?

This is Turin socket perf, so rate N, aka nT.
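Back-of-the-envelope on those figures: +50% socket performance at +25% socket power works out to roughly a 20% perf/W gain. A quick sketch (the inputs are the rumored numbers from this thread, not official specs):

```python
# Rumored Turin socket-level figures from this thread (not official numbers).
perf_gain = 1.50   # +50% rate-N (nT) performance vs. the predecessor
power_gain = 1.25  # +25% socket power

perf_per_watt_gain = perf_gain / power_gain
print(f"perf/W improvement: {perf_per_watt_gain:.2f}x")  # prints "perf/W improvement: 1.20x"
```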
> When did this become an AI thread?

It’ll be a part of Zen5 via NPU or other means, depending on model. Just like the iGPU, IOD, main CPU, etc. So when discussing Zen5 it’s relevant to cover all those aspects, including AI-related compute.
> Given things seem to point to Zen5 doing a Zen2 and doubling the FP/SIMD throughput [...]

There was talk about doubled pipeline width, not necessarily doubled throughput. Remember that many FP-heavy / vector-math-centric scenarios are memory-bandwidth limited or power limited.
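The "wider pipes don't guarantee more throughput" point is the classic roofline argument: attainable FLOP/s is the minimum of the compute roof and bandwidth times arithmetic intensity. A minimal sketch with made-up numbers (nothing below is a real Zen 5 spec):

```python
def roofline(peak_gflops, bw_gbs, intensity_flops_per_byte):
    """Attainable GFLOP/s = min(compute roof, memory roof)."""
    return min(peak_gflops, bw_gbs * intensity_flops_per_byte)

bw = 100.0  # GB/s, assumed memory bandwidth
# Hypothetical 2x FP widening: only the compute-bound (high-intensity) case benefits.
for peak in (1000.0, 2000.0):          # GFLOP/s before/after the widening
    for ai in (0.25, 4.0, 64.0):       # FLOPs per byte: streaming vs. cache-friendly
        print(f"peak={peak:.0f} ai={ai}: {roofline(peak, bw, ai):.0f} GFLOP/s")
```

At low arithmetic intensity (streaming workloads like video filters) both configurations land on the same memory roof; doubling peak compute only moves the high-intensity results.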
> Zen4 has a trimmed AVX512 support, where they run some kind of dual 256bit design, I forget the details, but it's more of a "can do AVX512" than a "is custom made for AVX512".

The addition of AVX512 instruction set support to Zen 4 was not just about "can do so now too", though; IOW, not just a mere feature checkmark. It was also about increasing FP pipeline utilization and power efficiency in practice. (An analysis: mersenneforum.org → Zen4's AVX512 Teardown.)
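The utilization point can be illustrated with simple arithmetic: on a double-pumped 256-bit datapath, a 512-bit instruction doesn't add raw FLOPs per cycle, but it halves the number of instructions the front end must fetch, decode, and schedule for the same work (the figures below are illustrative, not measured Zen 4 numbers):

```python
elements = 1024             # 32-bit floats to process in a hypothetical loop
lanes_256 = 256 // 32       # 8 elements per 256-bit (AVX2) instruction
lanes_512 = 512 // 32       # 16 elements per 512-bit (AVX-512) instruction

insns_avx2 = elements // lanes_256    # 128 instructions through the front end
insns_avx512 = elements // lanes_512  # 64 instructions: half the fetch/decode work
print(insns_avx2, insns_avx512)       # prints "128 64"
```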
> So about Turin-AI, is it real or just another manifestation of MLID schizo. Replacing 1 or 2 or even 3 chiplets with AI oriented ones seems feasible and interesting to me. Thoughts?

Maybe MLID schizo; maybe it's possible to get a decent-enough thing running on oneAPI or whatever equivalent AMD will pull out of their hat. But IIRC AI is about memory bandwidth and streaming very large quantities of data, and I just can't picture a CPU reaching a GPU's value in that. And AMD sells GPUs too, so even at the height of the AI-hype craze I do not expect them to do that, even if the craziest clients demand it. Not unless they're paying mad money for it.
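The bandwidth argument can be made concrete for LLM-style inference, where generating each token streams the whole model through memory, so tokens/s ≈ bandwidth / model size. A rough sketch with assumed round numbers (not any specific CPU or GPU):

```python
def tokens_per_second(bandwidth_gbs, model_gb):
    # Memory-bound generation: every token reads all weights once.
    return bandwidth_gbs / model_gb

model_gb = 14.0  # assumed: a ~7B-parameter model at 16-bit weights
print(f"CPU-ish (100 GB/s):  {tokens_per_second(100.0, model_gb):.1f} tok/s")
print(f"GPU-ish (1000 GB/s): {tokens_per_second(1000.0, model_gb):.1f} tok/s")
```

Under this simple model, throughput scales linearly with memory bandwidth, which is why a ~10x bandwidth gap between a CPU socket and a GPU translates directly into a ~10x token-rate gap.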
> including AI related compute.

Free dark silicon is funny, but so was the literal free space on Cezanne.