LOL. Look in the mirror. Go ask your leaker friends about what design landing zones are, maybe one of them is an actual engineer and not a PowerPoint jockey and might be nice enough to explain it until you maybe get it through your thick skull. Maybe.
This is just pitiful now.
Here is yet another engineering revelation for you: a typical simulation point sits at, or slightly past, the voltage where frequency gains flatten (i.e., where that bend in the V/F curve is). It is usually the highest voltage simulated. Hence, "designed to X GHz" (at that voltage...
LOL. It is not even semantics, it is just basic engineering terminology and knowledge. Look at what just happened. Say "designed to X GHz" to any engineer in the business and they will instantly know what that means. Then we can have a meaningful conversation about simulation corners.
But...
Because Alder Lake Atoms have a higher acceptable wattage/thermal limit than Tremont, that's how. Christ, how thick are you?
Why are you still grasping at straws on this? You already demonstrated you are utterly clueless on what silicon design targets are, and conflated that with maximum...
The Austin Atom team is smaller and less funded. It is also on the receiving end of quite a bit of abuse from the "prestige" teams (I use that term sarcastically).
JFC... this is just utterly sad on your part.
"Designed to 3.3 GHz" (what I said) and "run at 4 GHz" (what you said) are two entirely different things. "Designed to 3.3 GHz" means a specific voltage/frequency corner that the design is modeled to run at during the design process. It is called...
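To make the distinction concrete, here is a minimal sketch of the idea in the two posts above: a voltage/frequency curve bends as gains flatten, and the design corner sits at or just past that bend, regardless of what frequency the part can be pushed to with more power. The curve values and the flattening threshold are made-up illustrative numbers, not real silicon data.

```python
# Hypothetical V/F curve: (voltage in V, achievable frequency in GHz).
# The shape is typical (diminishing returns at high voltage), the
# numbers are invented for illustration only.
vf_curve = [
    (0.60, 1.2),
    (0.70, 2.0),
    (0.80, 2.6),
    (0.90, 3.0),
    (1.00, 3.2),
    (1.10, 3.3),  # gains have flattened: ~0.1 GHz per extra 100 mV
]

def marginal_gain(curve):
    """GHz gained per 100 mV step between consecutive curve points."""
    return [
        (v2, (f2 - f1) / ((v2 - v1) * 10))
        for (v1, f1), (v2, f2) in zip(curve, curve[1:])
    ]

# Find the first voltage where the marginal gain drops below an
# (arbitrarily chosen) flattening threshold of 0.25 GHz per 100 mV.
# The simulation corner described above would sit at or just past it.
knee = next(v for v, g in marginal_gain(vf_curve) if g < 0.25)
print(knee)  # -> 1.0
```

The point of the sketch: the "design corner" is the highest modeled point on this curve, while "run at 4 GHz" would be somewhere off the right edge of it, bought with disproportionate voltage and power.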
If you had the slightest bit of honesty, you would have remembered that I stated Intel can take Gracemont perf higher than my estimates if they pumped crazy power into it, but I did not expect Intel to because it would defeat the entire purpose of a low power core to augment the big core. To...
Hah, that is nothing compared to the mess Intel is in. For a long time Intel had no competition, and a culture where every new idea had to beat x86 Xeon margins. That essentially killed all incentive to innovate and take risks. The Opteron threat was largely handled...
Nah, that is not true. The ARM team was in Intel Massachusetts, which was about as far away from the Intel mothership as you can get, and as a result did not get infected by nearly as much of the mothership toxicity and politics. There were a lot of smart people passing through that place, I...
Heh, besides not knowing anything about silicon engineering, you also have no clue on tech history either. Anyone with a basic knowledge of how the Wintel monopoly came to dominate computing knows Wintel is dead, and more importantly, that the individual advantages that MS and Intel had to keep...
Don't you worry, Motorola still makes 68k parts to power dinosaur tech, just like your current software will be in a decade. Intel's path to a trillion dollar market cap is clear: DIY gamers and software that is not important enough to be rebuilt for other platforms.
Yes, Intel should spend engineering effort on an even more obese CPU (when their p-core already has the worst area efficiency on the market), and end up with a product which would be ridiculously expensive to manufacture, be worthless for servers due to high area/dollar cost for core scaling...
Well, I don't bother divining the future from non-technical PowerPoint slide decks like you do. It is not my fault you have to stoop that low to come up with garbage scraps.
“I wasn't able to get confirmation of the name, but when I mentioned that 2x IPC target to my source, he just laughed...