Discussion Intel Meteor, Arrow, Lunar & Panther Lakes Discussion Threads


Tigerick

Senior member
Apr 1, 2022
762
717
106






As Hot Chips 34 starts this week, Intel will unveil technical details of the upcoming Meteor Lake (MTL) and Arrow Lake (ARL), the new generation of platforms after Raptor Lake. Both MTL and ARL represent a new direction in which Intel moves to multiple chiplets combined into one SoC platform.

MTL also introduces a new compute tile based on the Intel 4 process, which is Intel's first to use EUV lithography. Intel expects to ship MTL mobile SoCs in 2023.

ARL will come after MTL, so Intel should be shipping it in 2024 — that is what Intel's roadmap is telling us. The ARL compute tile will be manufactured on the Intel 20A process, Intel's first to use GAA transistors, which it calls RibbonFET.



Comparison of Intel's upcoming U-series CPUs: Core Ultra 100U, Lunar Lake and Panther Lake

Model | Code Name | Date | TDP | Node | Tiles | Main Tile | CPU | LP E-Core | LLC | GPU | Xe-cores
Core Ultra 100U | Meteor Lake | Q4 2023 | 15 - 57 W | Intel 4 + N5 + N6 | 4 | tCPU | 2P + 8E | 2 | 12 MB | Intel Graphics | 4
? | Lunar Lake | Q4 2024 | 17 - 30 W | N3B + N6 | 2 | CPU + GPU & IMC | 4P + 4E | 0 | 12 MB | Arc | 8
? | Panther Lake | Q1 2026 ? | ? | Intel 18A + N3E | 3 | CPU + MC | 4P + 8E | 4 | ? | Arc | 12



Comparison of die sizes of each tile of Meteor Lake, Arrow Lake, Lunar Lake and Panther Lake (die-size figures are for Meteor Lake):

 | Meteor Lake | Arrow Lake (N3B) | Lunar Lake | Panther Lake
Platform | Mobile H/U only | Desktop & Mobile H&HX | Mobile U only | Mobile H
Process Node | Intel 4 | TSMC N3B | TSMC N3B | Intel 18A
Date | Q4 2023 | Desktop: Q4 2024, H&HX: Q1 2025 | Q4 2024 | Q1 2026 ?
Full Die | 6P + 8E | 8P + 16E | 4P + 4E | 4P + 8E
LLC | 24 MB | 36 MB ? | 12 MB | ?
tCPU (mm²) | 66.48 | | |
tGPU (mm²) | 44.45 | | |
SoC (mm²) | 96.77 | | |
IOE (mm²) | 44.45 | | |
Total (mm²) | 252.15 | | |



Intel Core Ultra 100 - Meteor Lake



As reported by Tom's Hardware, TSMC will manufacture the I/O, SoC, and GPU tiles. That means Intel will manufacture only the compute tile and the Foveros base tile. (Notably, Intel calls the I/O tile an 'I/O Expander,' hence the IOE moniker.)



 

Attachments

  • PantherLake.png (283.5 KB)
  • LNL.png (881.8 KB)

511

Platinum Member
Jul 12, 2024
2,403
2,123
106
Everything sucks about Lion Cove; the only saving grace was that it was made on the densest node available. Otherwise the PPA would have been so embarrassing, a donkey could have done a better job.
The Lion Cove on LNL is fine though it's the ARL LNC that is the issue
You hear that, Intel HR??? Start looking for donkeys

Or maybe don't. You already have a few. Put them to work!
HR are already donkeys; you want them to look for more of their own kind 🤣.
 

DavidC1

Golden Member
Dec 29, 2023
1,518
2,483
96
The Lion Cove on LNL is fine though it's the ARL LNC that is the issue
I would say 3x core size with 10% gain on a "Big" core is not fine.
Even so, I feel that Lion Cove has become much smaller than Raptor Cove.
Raptor is on Intel 7 and Lion Cove is on TSMC N3B. It's like a 2.5x density difference.
It's like trying to get a child to think better. What is 2+2? 5? Are you sure? Ummm....6? Think harder!
A child will often understand a new concept from a single example, without the arrogance of insisting that what they are saying is 100% correct. If you show a 3-year-old a cat, they will be able to identify future cats regardless of color, size, or weight.

A glorified comparator doesn't even come close to a child. Even animals have a sort of intuition, which exists exactly nowhere in modern "AI".

ChatGPT is trained on billions of parameters, with hundreds of millions of users "training" it via Google reCAPTCHAs, and even manual labor from people in third-world countries classifying what is what for human-rights-violating wages and hours. You could change the slightest detail and it'll go from correctly identifying a STOP sign to calling it a squirrel.
 
Reactions: Io Magnesso

Doug S

Diamond Member
Feb 8, 2020
3,193
5,477
136
You can't trust "AI" for facts, because it often makes stuff up, but with the arrogance that it's completely right, until it's corrected.*

The problem with AI isn't that it makes things up, whether arrogantly or not. It is that as the consumer of that information you have no cues to help you make a judgment about whether or not to accept the answer.

I recently encountered a Linux system (a DD-WRT router) that had fully deprecated 'ifconfig', so I was forced to use the 'ip' command for the first time, and the behavior of its "change" / "replace" options didn't match my expectations. I thought there must be a single command to change the IP address of an interface, like there is with ifconfig, but I couldn't figure out how to manage it. So I did a DDG search, basically asking "how do I change the IP address of an interface using the Linux ip command", tried several links, then did the same search with Google and tried a few more links there. All told me the same unsatisfying answer: it's a two-step process, and stupidly that's by design. Just for the hell of it I tried ChatGPT, and it told me the same thing, and it gave examples showing the exact command syntax, except it said you should remove the old IP first and then add the new one! That's stupidly wrong for obvious reasons.

There's an interesting lesson in this. With web search, if I get an answer I don't like (as I did in this case), I can check other sources by clicking on other links. When I decide to click on a link, I'm making a semi-conscious evaluation based on the URL and the preview text. If I click on it, I make other semi-conscious evaluations based on context: what's the purpose of the site this link is taking me to, and do I have reason to trust the answer I'm being provided and the person providing it, whether that's someone giving their real name or some Reddit moniker. Internet search is part art; you get better at it by doing it — by necessity, as SEOs are always trying to poison the well, trolls try to mislead people, and if it's a search about something with political overtones, astroturfers pile on. Now we're having to learn to discern pages written by AI and figure out how much to discount them.

Asking a question directly to an AI is different. You have no cues, no context. You either accept its answer or you don't. The only way you can determine how much credence to give the answer is your previous experience with AI. If it has given you a lot of correct answers in the past — or at least answers you have decided to believe are correct — human nature means you're more likely to believe its answers in the future. That's a dangerous game, though. Just because it has been right in one domain doesn't mean it will do as well in another, or on a different type of problem. There's also the "you're too dumb to know when it's wrong" problem: unless you are asking questions you already know the answer to, how do you know it has been right in the past?

The answer it gave me shows the perils. It is almost right - it has the correct information in it, but because it can't reason it doesn't "understand" that removing the IP first is a real problem if that's the only way you have of connecting over the network. Typically you'd use the 'ifconfig' command and in a single step change the IP. It causes your connection to lock up but that's no problem you just reconnect on the new IP. ChatGPT can't reason, so it doesn't know that deleting the IP first will cause your connection to hang and leave you with no way of re-establishing that connection without physical access to the device in question.
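For the record, the two orderings look like this (the interface name and addresses are just placeholders, not the exact ones from my router):

```shell
# Safe order when you're connected over the network: add first, delete second.
ip addr add 192.0.2.10/24 dev eth0   # interface now answers on both addresses
ip addr del 192.0.2.5/24 dev eth0    # reconnect on the new IP, then drop the old

# ChatGPT's suggested order: delete first, then add. If eth0 carries your
# SSH session, the first command cuts you off before the second ever runs:
#   ip addr del 192.0.2.5/24 dev eth0
#   ip addr add 192.0.2.10/24 dev eth0
```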

The best way to describe it is that using AI is like if Google's only option was the "I'm feeling lucky" button.
 

511

Platinum Member
Jul 12, 2024
2,403
2,123
106
I would say 3x core size with 10% gain on a "Big" core is not fine.
Yeah, the only thing it has to show for it is AVX-512 support, but that is fused off (stupid Intel, why can't we have it?).
Ehhhh part of it is frequency targets.
Atom trying to do 5.5-ish will also be xboxhueg (relatively).
It will still be relatively smaller than Lame Cove.
Raptor is on Intel 7 and Lion Cove is on TSMC N3B. It's like a 2.5x density difference.
Yup, the same difference as Intel 14nm to Intel 10nm++. At least Golden Cove was not an embarrassment.
 

Io Magnesso

Member
Jun 12, 2025
38
18
36
I would say 3x core size with 10% gain on a "Big" core is not fine.

Raptor is on Intel 7 and Lion Cove is on TSMC N3B. It's like a 2.5x density difference.

A child will often understand a new concept from a single example, without the arrogance of insisting that what they are saying is 100% correct. If you show a 3-year-old a cat, they will be able to identify future cats regardless of color, size, or weight.

A glorified comparator doesn't even come close to a child. Even animals have a sort of intuition, which exists exactly nowhere in modern "AI".

ChatGPT is trained on billions of parameters, with hundreds of millions of users "training" it via Google reCAPTCHAs, and even manual labor from people in third-world countries classifying what is what for human-rights-violating wages and hours. You could change the slightest detail and it'll go from correctly identifying a STOP sign to calling it a squirrel.
It's a little smaller than Redwood Cove...?
Well, I don't know what transistor density Intel actually gets from the N3B process it's using...
 

DavidC1

Golden Member
Dec 29, 2023
1,518
2,483
96
also removal of HT as well
The logic itself is extremely small to enable SMT:

Registers and buffers are essentially caches, at tiny capacities. The commonly quoted number is 3-5% at the core level, so excluding L2 in this case. I think even 3% might be too high.

What really matters is the increased complexity of validation — making everything work without corner-case bugs and errata. And you have to do that for every new design. I'm 95% convinced that SMT is one reason the x86 vendors are falling further and further behind.
The problem with AI isn't that it makes things up, whether arrogantly or not. It is that as the consumer of that information you have no cues to help you make a judgment about whether or not to accept the answer.
Yea, absolutely that matters. With search engines, you can narrow down to what you want/need; AI just throws one answer at you. Come to think of it, it reminds me of the mobile era, which focused on simplicity even at the cost of detail. The "hamburger" icon sites use actually hinders you on a computer, because it's an extra step instead of having all the options available immediately.

If I search on a laptop for a definition of a word, it gives me options that are detailed with the etymology of a word and multiple examples, the way to pronounce it, etc. Even the search engine summary is more detailed. On a phone? It gives you just one sentence!

LLMs in a way do the same thing. Now you get one answer! Simple!
 
Reactions: Thunder 57 and 511

511

Platinum Member
Jul 12, 2024
2,403
2,123
106
The logic itself is extremely small to enable SMT:

Registers and buffers are essentially caches, at tiny capacities. The commonly quoted number is 3-5% at the core level, so excluding L2 in this case. I think even 3% might be too high.

What really matters is the increased complexity in validation, to make everything work without corner case bugs and erratas. And you have to do that for every new design. I'm 95% convinced that SMT is one reason the x86 vendors are falling further and further behind.
Not to mention only 16 GPRs vs ARM's 32. Also, x86 validation takes additional time, something people in the ARM camp don't have to worry about.
 

Io Magnesso

Member
Jun 12, 2025
38
18
36
The logic itself is extremely small to enable SMT:

Registers and buffers are essentially caches, at tiny capacities. The commonly quoted number is 3-5% at the core level, so excluding L2 in this case. I think even 3% might be too high.

What really matters is the increased complexity in validation, to make everything work without corner case bugs and erratas. And you have to do that for every new design. I'm 95% convinced that SMT is one reason the x86 vendors are falling further and further behind.

Yea, absolutely that matters. With search engines, you can narrow to what you want/need. AI just throws it at you. Come to think of it, it seems to remind me of the mobile era, where it focused on simplicity even at the cost of details. The "hamburger" icon used in sites actually hinders you if you are on a computer because it's an extra step instead of having all the options available immediately.

If I search on a laptop for a definition of a word, it gives me options that are detailed with the etymology of a word and multiple examples, the way to pronounce it, etc. Even the search engine summary is more detailed. On a phone? It gives you just one sentence!

LLMs in a way do the same thing. Now you get one answer! Simple!
Maybe it's x86's SMT implementation — maybe it's just worse than other companies' SMT implementations.
Even NVIDIA has adopted SMT in its next-generation VERA architecture.
Still, AMD's SMT seems to work well because it's newer...
So SMT isn't that bad? It depends on the implementation?
(Even so, AMD only introduced SMT with Zen, and the first Zen generation will be 10 years old in two years. The flow of time is cruel.)
 

511

Platinum Member
Jul 12, 2024
2,403
2,123
106
I’m guessing here but it looks like coyote cove is only on N2 on the Ultra 9 SKU? Rest are 18A?
Nope, anything with more than 4+8 is using the 8+16 tile on N2. Also, I think the 6+8 U5 is incorrect and should be 4+8. And Coyote Cove is on both 18AP and N2.
I'm getting major AdoredTV vibes.

But hey, bring on all the free cores!
Adored has been MIA
 
Reactions: poke01