Whatever the solution is, it will certainly involve cycling threads between CPUs, so that any threads holding back dependents get moved to big cores. That will incur a penalty, but the efficiency gain from the small cores, and the resulting TDP headroom afforded to the big cores, is...
It reads more like a Zen-based processor with a small cluster of 2-4 ARM-style cores that can take up small tasks, rather than an actual big.LITTLE processor. But a layout where each Zen core contains a small co-processor might work as well.
This is the first data we've seen for the 10SF process at higher power. It seems to scale well and can clock high. I would expect Intel to keep the single-thread crown w/r/t Alder Lake vs. Ryzen 6xxxx.
I doubt it's that. Those issues seemed more difficult to fix, and I think AMD would have promoted the fact if they had managed to fix them so quickly.
Meanwhile, Genoa is expected to offer 96 cores and 12 channels of DDR5 (which should help offset the lack of HBM) at a 10% lower TDP. It seems 2022 will only widen the server gap between Intel and AMD.
Given that it's fabbed on 10SF, which should be comparable to TSMC 7nm in density and efficiency, I think the Ice Lake Xeon review makes it pretty clear that Intel's newest cores are simply less efficient than current Zen. Tiger Lake cores wouldn't change the equation much there.
The benefits of big.LITTLE are more apparent on laptops, where there will be a very real increase in battery life. But there should be benefits on desktop too. Hypothetically, if you just wanted to maximize multicore performance on desktop, you would ONLY use little cores. They're more...
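To illustrate the little-cores-only idea with some back-of-the-envelope math: under a fixed power budget, you can fit more efficient little cores than big ones, and their combined throughput can come out ahead. All the numbers below (TDP, per-core performance and power) are invented for illustration, not real Alder Lake figures.

```python
# Hypothetical comparison: fill a fixed core power budget with either
# all-big or all-little cores and compare total throughput.
# Every number here is an assumption for the sake of the sketch.
TDP = 65.0                            # watts available for cores (assumed)
BIG_PERF, BIG_WATTS = 1.0, 10.0       # relative perf / power per big core
LITTLE_PERF, LITTLE_WATTS = 0.6, 3.0  # little: slower, but far more efficient

big_cores = int(TDP // BIG_WATTS)        # how many big cores fit: 6
little_cores = int(TDP // LITTLE_WATTS)  # how many little cores fit: 21

big_throughput = big_cores * BIG_PERF          # 6.0 units
little_throughput = little_cores * LITTLE_PERF # ~12.6 units
print(big_throughput, round(little_throughput, 1))
```

With these made-up ratios the little-core config roughly doubles multicore throughput at the same power; the real answer obviously depends on the actual perf/watt curves of the two core types.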
Optimizing homogeneous multicore performance would likely require periodically moving threads from little to big cores if keeping the threads in step is important to the program. That would entail a performance hit whether or not the scheduler is smart enough to do it (either cores will be waiting, or there...
Does anyone have an idea about what percentage of multicore workloads will scale properly with asymmetrical cores? I know some programs divide a workload evenly between cores but need to wait for all threads to finish before moving forward. Might this be a common problem for Alder Lake?
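The fork-join pattern described above can be sketched in a few lines: split the work evenly, and the barrier means wall time is set by the slowest core while the fast cores sit idle. The core speeds here are invented (big cores assumed 2x faster) purely to show the effect.

```python
# Even work split across asymmetric cores with a barrier at the end.
# Speeds are hypothetical: 4 "big" cores at 2.0, 4 "little" cores at 1.0.
work_per_core = 100.0
core_speeds = [2.0, 2.0, 2.0, 2.0,   # big cores
               1.0, 1.0, 1.0, 1.0]   # little cores (assumed half speed)

per_thread_time = [work_per_core / s for s in core_speeds]
wall_time = max(per_thread_time)      # barrier: everyone waits for the last thread
idle_time = [wall_time - t for t in per_thread_time]

print(wall_time)        # 100.0 -- gated entirely by the little cores
print(idle_time[:4])    # big cores each waste 50.0 units waiting at the barrier
```

A scheduler that's aware of the asymmetry would instead hand the big cores proportionally more work (or migrate the laggard threads onto them), which is exactly the scheduling problem being discussed.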
Density did NOT go up for the 14nm pluses. They actually relaxed density slightly to boost clock speeds. The reason for the good scaling is the parts of the die that didn't double, such as the memory controller, GPU, etc.
I agree with you about the CPU, but a reduction in resolution also brings a reduction in graphics memory consumption (RAM is shared between the CPU and GPU). They just need it to scale approximately so the CPU gets equal resources.
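For a rough sense of that scaling: an uncompressed render target grows linearly with pixel count, so dropping from 4K to 1080p cuts its footprint by 4x. This sketch assumes 4 bytes per pixel (RGBA8) and ignores compression, MSAA, and the many intermediate buffers a real engine allocates.

```python
# Back-of-the-envelope framebuffer size vs. resolution.
# Assumes 4 bytes/pixel; real engines use many more buffers than this.
def framebuffer_mib(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 2**20

mib_4k = framebuffer_mib(3840, 2160)
mib_1080p = framebuffer_mib(1920, 1080)
print(round(mib_4k, 2))     # ~31.64 MiB per 4K target
print(round(mib_1080p, 2))  # ~7.91 MiB per 1080p target, a 4x reduction
```

The per-buffer numbers are small, but a deferred renderer holds many such targets at once, so the 4x factor applies across a much larger pool of shared RAM.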