But Genoa is 5 nm, and Turin is supposed to be 3 or 4 nm?
> I'm preparing a little video (no, I'm not trying to be a MLID/RGT, it's a different kind of video) about Zen 5. Can we recap what we know about its internals?
> - 8-wide decode
> - Same or higher clocks
> - SPECINT +40%
> - full-width AVX-512 implementation
> What else?
We know it isn't 8-wide decode; it does something in decode, but we don't know exactly what. Two fetch blocks is all that is listed. Is that parallel, used for branches, etc.?
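As an aside on the "full-width AVX-512 implementation" bullet in the quoted list: Zen 4 is generally described as executing 512-bit instructions over 256-bit datapaths, while the claim for Zen 5 is native 512-bit execution. Purely as an illustration of the kind of code affected (generic AVX-512F intrinsics, nothing AMD-specific), a minimal sketch:

```c
/* Illustrative only: 16 single-precision FMAs issued as one 512-bit operation.
 * On a full-width implementation this is one pass through the FP units;
 * on a double-pumped 256-bit design it takes two.
 * Build with something like: gcc -O2 -mavx512f fma512.c (needs AVX-512 hardware to run). */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    float a[16], b[16], c[16], out[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f; c[i] = 1.0f; }

    __m512 va = _mm512_loadu_ps(a);
    __m512 vb = _mm512_loadu_ps(b);
    __m512 vc = _mm512_loadu_ps(c);
    __m512 vr = _mm512_fmadd_ps(va, vb, vc);  /* out[i] = a[i] * b[i] + c[i] */
    _mm512_storeu_ps(out, vr);

    for (int i = 0; i < 16; i++)
        printf("%.1f ", out[i]);
    printf("\n");
    return 0;
}
```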
> Turin is N4P or N4X (not sure which tbh) which is in the same family as N5. Just better/more refined.
I’ve heard from people that I consider reliable that it uses N4X. I’ve got a hard time believing it, since N4X was regarded as a bit of a meme.
> Does anyone know when the 9000 series will hit the stores? Microcenter here I come!
Early Q3 is one estimate.
> What about @adroc_thurston saying it would already be available? Surely he is an insider, no?
April, but now July.
> April, but now July.
April was the launch, not the availability window.
> Lol, you are just making crap up.
There's no need to be upset.
> So what was April, when they started HVM?
Mobo vendor timeline.
> I have 352 Genoa cores myself.
On a single socket? Are you able to utilize them properly, or do you need to resort to putting your workloads in VMs for better core occupancy?
> Some distributed computing enthusiasts do have Intel p+e CPUs, but even though an e core performs roughly similar to one p HT thread, these CPUs are still awkward to handle in a distributed computing node. Just recently I heard of weird issues with Windows' CPU time accounting on these CPUs. And way before that, I saw several reports of performance problems with multithreaded distributed computing applications on these CPUs, which are completely to be expected and can only be worked around by restricting the application to run on cores of the same type.
Not sure if the Win11 scheduler has been improved, but Linux is supposedly better at dealing with hybrid cores: https://www.phoronix.com/news/Linux-6.5-Intel-Hybrid-Sched
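For what it's worth, a minimal sketch of the "restrict to one core type" workaround on Linux. It leans on a heuristic (an assumption, not something from the thread): on current Intel hybrid parts, P-cores typically expose two SMT siblings while E-cores expose one, so any CPU whose sibling list has more than one entry is treated as a P-core thread. Check the real topology before relying on it.

```c
/* Sketch: build an affinity mask of P-core threads only and apply it to the
 * current process, so a distributed-computing worker stays on one core type.
 * Heuristic (assumed): a CPU with an SMT sibling is a P-core thread. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    cpu_set_t pcores;
    CPU_ZERO(&pcores);

    for (int cpu = 0; cpu < CPU_SETSIZE; cpu++) {
        char path[128], buf[64];
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list", cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break;                      /* no more CPUs present */
        if (fgets(buf, sizeof(buf), f) &&
            (strchr(buf, ',') || strchr(buf, '-')))
            CPU_SET(cpu, &pcores);      /* has a sibling -> assume P-core thread */
        fclose(f);
    }

    if (CPU_COUNT(&pcores) == 0) {
        fprintf(stderr, "no SMT-capable cores found, not restricting\n");
        return 1;
    }
    if (sched_setaffinity(0, sizeof(pcores), &pcores) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("restricted to %d P-core threads\n", CPU_COUNT(&pcores));
    /* exec the actual worker here; it inherits the mask */
    return 0;
}
```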
> - 8-wide decode
> - Same or higher clocks
> - SPECINT +40%
> - full-width AVX-512 implementation
> What else?
Special instructions to accelerate AI workloads. (Source: AMD slides)
> I'm preparing a little video (no, I'm not trying to be a MLID/RGT, it's a different kind of video) about Zen 5. Can we recap what we know about its internals?
> - 8-wide decode
> - Same or higher clocks
> - SPECINT +40%
> - full-width AVX-512 implementation
> What else?
You forgot the AI gimmick.
> I think Stefan was talking about 32-128 fat cores with AVX-512 for DT performance.
Intel can't even afford to put in 16 fat cores. Power consumption would either shoot beyond 500 W, or the cores would be power-starved if Intel limits the TDP.
Now that we've smoked out the rat, I can't wait for details on Zen 5 LP.
Not just the perf, but what did they take out, power draw, etc.
> Be interesting to see areal density too.
Apparently there will be only a small number of LP cores. [Purpose: to host background tasks in idle situations / connected standby, maybe; not to prop up Cinebench. ;-)] Thus areal density, while not unimportant, may not be a central design goal. For Zen 5 LP, that is.
> Given Bergamo was only a 1.33x increase in cores over Genoa, and the Zen 5 successor is supposed to be more like 1.5x, there must be a significant difference in layout there too.
Genoa and Bergamo still have some spare room under the lid. (According to published photos; not that I'd delidded one myself.) I guess the new IOD for Turin and Turin-Dense could be a more slender rectangle than Genoa's and Bergamo's IOD, for a bit more "shoreline"-to-area ratio, both to fit the additionally needed GMI links on the chip and to facilitate their routing on the package.
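To put the shoreline-to-area point in numbers (illustrative figures only, not actual die dimensions): a hypothetical 400 mm² die laid out as a 20 mm x 20 mm square has an 80 mm perimeter, while the same 400 mm² as a 10 mm x 40 mm rectangle has a 100 mm perimeter, i.e. 25% more edge length available for GMI links and other PHYs at identical area.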
> I have 352 Genoa cores myself.
> On a single socket?
4x 64c/128t and 1x 96c/192t, according to Mark's signature. All 1P, I think.
> Are you able to utilize them properly or do you need to resort to putting your workloads in VMs for better core occupancy?
In distributed computing, we often run n instances of single-threaded processes. This scales without problems to that many threads (on Linux; I am not up to date with Windows). Sometimes we run fewer instances of multi-threaded processes. With some of those applications, performance suffers a lot if the threads of one process end up running on different CCXs. That has been an issue with Zen 1...4 and will obviously remain one with Zen 5. Hard to say what will happen with Zen 6 and its substantially changed SoCs. (Or with Strix Halo already, in fact.) The problem is two-fold: inter-thread shared data ends up in more caches than strictly needed, and inter-thread communication across CCX boundaries is slow and energy-costly. But we don't need VMs or even containers to solve this; we can do it with helper tools, or in the case of EPYCs use a BIOS option which (ab)uses NUMA hints to coerce a NUMA-aware operating system into cache-aware thread scheduling. (Neither Windows' nor Linux's kernel implements a cache-aware scheduling policy. The kernel developers probably have their reasons to leave this to userspace.)
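A minimal sketch of such a helper tool on Linux, under the assumption (hypothetical; read /sys/devices/system/cpu/cpu*/cache/index3/shared_cpu_list for the real layout) that CPUs 0-7 share one L3, i.e. form one CCX. It pins itself to that CCX and then execs the multi-threaded worker, whose threads inherit the mask:

```c
/* Hypothetical helper: confine the worker to one CCX so its threads share one L3.
 * CCX0_CPUS = 8 and the 0..7 numbering are assumptions; adjust per machine. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

#define CCX0_CPUS 8   /* assumed: CPUs 0..7 share one L3 */

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <worker> [args...]\n", argv[0]);
        return 1;
    }

    cpu_set_t mask;
    CPU_ZERO(&mask);
    for (int cpu = 0; cpu < CCX0_CPUS; cpu++)
        CPU_SET(cpu, &mask);

    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* The exec'd worker and every thread it creates inherit the mask. */
    execvp(argv[1], &argv[1]);
    perror("execvp");
    return 1;
}
```

From the shell, `taskset -c 0-7 <worker>` achieves the same thing without a custom tool.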
> Some distributed computing enthusiasts do have Intel p+e CPUs, but even though an e core performs roughly similar to one p HT thread, these CPUs are still [...troublesome...]
> Not sure if the Win11 scheduler has been improved, but Linux is supposedly better at dealing with hybrid cores: https://www.phoronix.com/news/Linux-6.5-Intel-Hybrid-Sched
When Intel's offer was 8c/16t + 8c/8t, even with scheduling like that (regardless of whether it is implemented in kernelspace or userspace), you are left with a large asymmetry. It is now better with 8c/16t + 16c/16t at the top end, but still not symmetric.
> Possibly more performant SMT (due to beefier execution resources). (Source: Hopium)
What about: possibly stagnant or even lower SMT uplift, despite beefier execution resources, due to a much improved frontend? (Source: Hopium :-))
> What about: possibly stagnant or even lower SMT uplift, despite beefier execution resources, due to a much improved frontend? (Source: Hopium :-))
My hopium is for >30% ST uplift.
> My hopium is for >30% ST uplift.
It will hopefully be higher in certain cases. Some applications/games will benefit more from the expanded resources than others that are bottlenecked elsewhere, either due to bad programming or the simple limits of x86 instruction execution.
> My hopium is for >30% ST uplift.
Get copium instead: https://wccftech.com/intel-royal-co...nther-cove-cpu-architecture-tackle-amd-zen-5/
> WtfTech using Mlid as source. ROTFLMAO
Somebody post the human_centipede.png again.
> WtfTech using Mlid as source. ROTFLMAO
They have also used comments from this thread to source articles.
From August 2021