LOL. Look in the mirror. Go ask your leaker friends about what design landing zones are, maybe one of them is an actual engineer and not a PowerPoint jockey and might be nice enough to explain it until you maybe get it through your thick skull. Maybe.
This is just pitiful now.
Here is yet another engineering revelation for you: a typical simulation point is at, or slightly past, the voltage where frequency gains flatten (i.e. where that bend in the V/f curve is). It is usually the highest voltage simulated. Hence, "designed to x ghz" (at that voltage...
LOL. It is not even semantics, it is just basic engineering terminology and knowledge. Look at what just happened. You say "design to x ghz" to any engineer in the business, they will instantly know what that means. Then we can have a meaningful conversation about simulation corners.
But...
Because Alderlake atoms have a higher acceptable wattage/thermal limit than Tremont, that's how. Christ, how thick are you?
Why are you still grasping at straws on this? You already demonstrated you are utterly clueless on what silicon design targets are, and conflated that with maximum...
The Austin atom team is smaller and less funded. It is also on the receiving end of quite a bit of abuse from the "prestige" teams (I use that term sarcastically).
JFC... this is just utterly sad on your part.
"Designed to 3.3ghz" (what I said) and "run at 4ghz" (what you said) are two entirely different things. "Designed to 3.3ghz" means a specific voltage/frequency corner that the design is modeled to run at during the design process. It is called...
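The "designed to" corner described above can be sketched numerically. This is a toy model with entirely made-up numbers (the curve shape, voltages, and knee threshold are all hypothetical, not real Gracemont or Tremont data); it just illustrates the idea that GHz-per-volt gains flatten, and that the design corner sits near that bend.

```python
import math

def freq_at_voltage(v):
    """Toy V/f model: frequency (GHz) rises with voltage but flattens.
    The constants are illustrative, not fitted to any real silicon."""
    return 4.0 * (1 - math.exp(-(v - 0.5) / 0.2))

def find_design_corner(v_min=0.6, v_max=1.3, step=0.05, knee_slope=1.0):
    """Return the first voltage where the marginal GHz-per-volt gain drops
    below knee_slope, i.e. where the V/f curve bends and further voltage
    buys little frequency."""
    v = v_min
    while v < v_max:
        slope = (freq_at_voltage(v + step) - freq_at_voltage(v)) / step
        if slope < knee_slope:
            return round(v, 2), round(freq_at_voltage(v), 2)
        v += step
    return round(v_max, 2), round(freq_at_voltage(v_max), 2)

corner_v, corner_f = find_design_corner()
print(f"design corner ~ {corner_v} V, {corner_f} GHz")  # 1.1 V, 3.8 GHz
```

Everything past that corner is chasing diminishing returns with rapidly growing power, which is exactly why "designed to X GHz" and "can be pushed to Y GHz" are different statements.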
If you had the slightest bit of honesty, you would have remembered that I stated Intel can take Gracemont perf higher than my estimates if they pumped crazy power into it, but I did not expect Intel to because it would defeat the entire purpose of a low power core to augment the big core. To...
Hah, that is nothing compared to the mess Intel is in. For a long time Intel was in a position where it had no competition and a culture where all new ideas had to beat x86 xeon margins. This essentially killed all incentive to innovate and take risks. The Opteron threat was largely handled...
Nah, that is not true. The ARM team was in Intel Massachusetts, which was about as far away from the Intel mothership as you can get, and as a result did not get infected by nearly as much of the mothership toxicity and politics. There were a lot of smart people passing through that place, I...
Heh, besides not knowing anything about silicon engineering, you also have no clue on tech history either. Anyone with a basic knowledge of how the Wintel monopoly came to dominate computing knows Wintel is dead, and more importantly, that the individual advantages that MS and Intel had to keep...
Don't you worry, Motorola still makes 68k parts to power dinosaur tech, just like your current software will be in a decade. Intel's path to a trillion dollar market cap is clear: DIY gamers and software that is not important enough to be rebuilt for other platforms.
Yes, Intel should spend engineering effort on an even more obese CPU (when their p-core already has the worst area efficiency on the market), and end up with a product which would be ridiculously expensive to manufacture, be worthless for servers due to high area/dollar cost for core scaling...
Well, I don’t bother divining the future from non-technical powerpoint slide decks like you do. It is not my fault you have to stoop that low to come up with garbage scraps.
“I wasn't able to get confirmation of the name, but when I mentioned that 2x IPC target to my source, he just laughed...
Heh, an annual 15% jump to reach 2x is absurd enough, but now it is a single 2x leap in one moonshot? Man, you twitterati will say anything for clicks.
I suppose that is one way to move on from your delusional fantasy predictions about Alderlake. Quick reminder: 10 watt Gracemont cores.
That is fine because if your goal is to chart core efficiency and how far some product is willing to go to score a benchmark win, you need to let it turbo/boost to its programmed range to plot the full curve. If it matters to you, prune out workloads that you believe are skewing results due to...
If you want to remove variables, run ST and generate the perf/power curve. Leaving aside the fact that ADL has much bigger p-cores and will not scale as well on core counts per socket, running ST lets you see the architectural intent in how much area and power engineers are willing to spend...
Because that is the wrong way to look at CB MT. CB MT is an exercise in perf through core scaling, which is fundamentally a question of how many cores you can cram into a socket with some number of tolerable watts. If you set an anchor at some perf number but compare two different die sizes, it...
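To make the point concrete: here is a toy comparison with entirely made-up scores, die areas, and power figures (none of these numbers correspond to real parts). It shows why anchoring on a raw MT score while ignoring die area and socket power is misleading.

```python
# name: (mt_score, die_mm2, package_watts) -- all values hypothetical
chips = {
    "chip_A": (24000, 200, 240),
    "chip_B": (26000, 420, 230),
}

# Normalize the raw score by power and by area.
normalized = {
    name: (round(score / watts, 1), round(score / area, 1))
    for name, (score, area, watts) in chips.items()
}
print(normalized)
# chip_A: (100.0 pts/W, 120.0 pts/mm^2)
# chip_B: (113.0 pts/W,  61.9 pts/mm^2)
```

chip_B "wins" the headline score and even perf/W here, but at nearly half the perf per mm² of silicon, which is exactly the metric that decides how many cores you can afford to cram into a server socket.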
The funny thing about Dunning Kruger is the people on Peak Ignorance are literally incapable of absorbing actual knowledge.
1: Apple M1 core cluster at 30 watts is the highest power it will ever draw, and therefore the lowest efficiency it will ever operate at. 30W for a 12900k is about as low as it...
Look at the M1 review if you want to isolate the single core power, Pro/Max have much more significant power overhead with the other portions of the SoC. 5-6W is correct for 1C max. All core loading will be lower per core.
Heh, I said the new atoms will suck down 5 watts, looks like I was off by a factor of 2... in the correct direction. The efficiency gap is already increasing. Try comparing the Apple Icestorm core vs the new Atom for extra laughs.
Note the constantly shifting goalposts.
ADL wins in raw performance!
*But only when the obese core sucks down 7x the power
Apple is not 500% more efficient!
*When the Intel CPU is downvolted, downclocked and performance crippled, and it sure is not Apple’s fault Intel allows their parts to run...
Pretty much. Next week the Alderlake reviews will come out and it will win some ST benchmarks and the “I told ya so” will flow. But you can imagine what the results would be if Apple/AMD allowed their big cores to use 35 watts apiece. The key word is allowed, there is no engineering...
Hah. This guy claimed Apple made a huge mistake moving to custom silicon instead of sticking with x86, a week before M1 Pro/Max was announced, then poo-poohed all the benchmarks because *his* software didn't run on ARM. He also proclaimed Intel had a great/fantastic chip with Alderlake before...
It is slow for compilation and development, which is mainly what I care about. The other thing I care about is video encoding and I haven't even tried running that on an Atom, but I suspect it wouldn't be so stellar at that workload.
LOL, I don't know what workloads you care about. Go look up whatever you want. The differentiation is fairly obvious.
BTW, dispatch port count is a red herring. If you can't feed the CPU backend, having all those ports does not help anything. Notice the ROB/LQ/SQ sizes in Gracemont haven't...
One of the most deceptive marketing claims is that the Atoms are now equivalent to fairly recent big cores. The big cores spend area to extract parallelism and reduce latency. The atoms scale area back and sacrifice some of that capability. In some workloads, the atoms do just as well as the big...
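The "feed the backend" point above has a simple Little's-law flavor: sustained IPC is bounded by the in-flight instruction window divided by average completion latency, no matter how many dispatch ports exist. A minimal sketch, with hypothetical structure sizes and latencies:

```python
def ipc_ceiling(window_entries, avg_latency_cycles):
    """Little's-law style bound: instructions in flight / cycles each
    occupies the window. Dispatch ports beyond this bound sit idle."""
    return window_entries / avg_latency_cycles

# Hypothetical memory-bound code: 256-entry ROB, instructions averaging
# 40 cycles in flight due to cache misses.
print(ipc_ceiling(256, 40))  # 6.4
```

If the ROB/LQ/SQ stay the same size, adding dispatch ports does nothing for latency-bound code; the window, not the port count, is the ceiling.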
AMD could have set the maximum power to 250W, but chose not to because they achieved total victory on perf/W. But please, crank that 5950x to 250W and post benchmark results. It is always fun to see where things really stand, again. We already know that would be a massacre even against ADL.
So to summarize, you missed both 3.9/4.0ghz marks even with unlimited power feeding those cores, and your extrapolation is an iso-power straight line so your technical justification is also wrong.
Man, you are a disaster.
Hah, no 4ghz in sight. And wasn't your extrapolation literally a straight line you drew from iso-power Icelake to Tigerlake frequency increase due to magical superfin scaling, therefore according to you, Gracemont ought to be able to hit 4ghz iso-power w.r.t Tremont max boost? Is that the...
Haha, it is called engineering reality. Go look at the Tremont reviews and datasheets. Tremont-based products have a strict thermal envelope, Alderlake obviously does not. It is not that hard to understand.
Francois? Is that you?
Bunch of dilettante nonsense. That is like saying that when you ignore the TDP, turn the voltage to 11, and the chip gets to some frequency, it is overclocking, but when the manufacturer does the exact same thing it is stock.
Either way, you have no point, because 257 package watts and still no 4ghz...
So you actually think a manufacturer doing exactly what a consumer overclocker does (throw more volts at a chip to get frequency and ignore thermal constraints) is not overclocking, just because… reasons?
You might have a point if Intel/AMD just stopped specifying a stock frequency and TDP...
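The back-of-envelope physics behind "throw more volts at a chip" is the standard dynamic power relation, P ∝ C·V²·f. A quick sketch with illustrative ratios (the +15% voltage for +20% frequency figure is hypothetical, just typical of operating on the flat part of the V/f curve):

```python
def rel_dynamic_power(v_ratio, f_ratio):
    """Relative dynamic power when voltage and frequency scale together,
    from P ~ C * V^2 * f (capacitance held constant)."""
    return v_ratio ** 2 * f_ratio

# Chasing +20% frequency with +15% voltage (hypothetical figures):
print(round(rel_dynamic_power(1.15, 1.20), 2))  # 1.59
```

Roughly 1.6x the power for 1.2x the clock, which is why shipping a part at a high V/f point looks exactly like consumer overclocking on the power meter, whoever signs off on it.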