Apple A14 - 5 nm, 11.8 billion transistors


IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
What the A14 is showing is that once they exceeded the previous best by some margin, they also ran into the same problems and limitations.

Think of DRAM. DRAM cells are far, far smaller than any used for CPUs or GPUs. They are a few process generations ahead in that respect. But they are scaling at an absolute snail's pace now, because they were the first to reach the fundamental limits.

All will reach the same conclusion, even if they took radically different steps to get there.

I had a suspicion that TSMC's 5 nm claims were overblown, because their previous 10 nm and 20 nm nodes showed similarly lackluster gains. Actually, most manufacturers skipped those processes.

I assume it's getting really hard now, so such huge jumps don't make sense anymore. Intel, the one company insisting on full-node jumps with full-node terminology, is also the one having problems transitioning to one. Is that a coincidence? Maybe not. In the future they may have to use half-node jumps as well. The plusses and the P's will always exist regardless.
 

Doug S

Platinum Member
Feb 8, 2020
2,833
4,819
136
I think we need to see more than one CPU on N5 before claiming it is a "half node". We also need to see the results in ARM Macs (and what differences that "A14T" has from the A14 in the iPhone) before saying it was a "placeholder".

There are other explanations for what was observed - maybe they changed the cache design to use less power, which would make it take up more room. Maybe they focused all their efforts on power saving in the CPU because they knew that 5G would require more power. Even if 5G sits mostly unused in most iPhone 12s today, you don't want to overwhelm the phone's ability to deliver power if you use 5G and run the CPU flat out at the same time...
 
Reactions: Gideon and Tlh97

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Not sure what you are comparing, but Apple could have just used higher-performance SRAM, which is larger. In addition, Apple is using custom SRAM - not the generators provided by TSMC. So comparing iPhone SRAM density improvements with TSMC's claims is a moot point.
 
Reactions: Gideon and Tlh97

name99

Senior member
Sep 11, 2010
511
395
136
What the A14 is showing is that once they exceeded the previous best by some margin, they also ran into the same problems and limitations.

Think of DRAM. DRAM cells are far, far smaller than any used for CPUs or GPUs. They are a few process generations ahead in that respect. But they are scaling at an absolute snail's pace now, because they were the first to reach the fundamental limits.

All will reach the same conclusion, even if they took radically different steps to get there.

I had a suspicion that TSMC's 5 nm claims were overblown, because their previous 10 nm and 20 nm nodes showed similarly lackluster gains. Actually, most manufacturers skipped those processes.

I assume it's getting really hard now, so such huge jumps don't make sense anymore. Intel, the one company insisting on full-node jumps with full-node terminology, is also the one having problems transitioning to one. Is that a coincidence? Maybe not. In the future they may have to use half-node jumps as well. The plusses and the P's will always exist regardless.

You're building a huge tower of speculation on a SINGLE data point -- and a data point that, as I've already pointed out, does not mean what you think it means.
This is not the behavior of a scientist or an engineer; it's the behavior of the PR man.
Is that how you want to be thought of?
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
You're building a huge tower of speculation on a SINGLE data point -- and a data point that, as I've already pointed out, does not mean what you think it means.
This is not the behavior of a scientist or an engineer; it's the behavior of the PR man.
Is that how you want to be thought of?

It's not a single data point. The L2 caches of both the cores and the shared L3 cache show the same 19-20% gain.

It was also a few years ago that I noticed SRAM scaling slowing down drastically. An IEDM article also talked about how SRAM scaling slowed noticeably once they started using FinFETs.
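
For perspective, here's the back-of-the-envelope version. (The ~1.8x figure is TSMC's headline N5-over-N7 logic-density claim, used purely as a reference point; the 19-20% is the observed cache gain above. Quick Swift sketch, nothing more.)

Code:
import Foundation

// Rough sketch: what the observed ~19-20% cache-density gain implies for area,
// next to TSMC's headline N5 logic-density claim (a marketing figure for logic,
// not a measured SRAM number).
let observedCacheDensityGain = 1.20   // ~19-20% observed, as noted above
let claimedLogicDensityGain  = 1.80   // TSMC's public N5-vs-N7 logic-density claim
print(String(format: "SRAM area scale:  %.2fx", 1.0 / observedCacheDensityGain))  // ~0.83x
print(String(format: "Logic area scale: %.2fx", 1.0 / claimedLogicDensityGain))   // ~0.56x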

I don't care how people think of me. My thoughts are my thoughts, and people will judge them however they want.
 

Antey

Member
Jul 4, 2019
105
153
116
A14X in CPU-monkey


A12Z iGPU is 1.59 GHz??? Has to be a mistake (it's the same freq as the CPU).
 

Eug

Lifer
Mar 11, 2000
23,870
1,438
126
A14X in CPU-monkey

Hmm... AV1 hardware decode.

And up to 8 GB RAM for the GPU and support for 3 displays. I suspect that doesn't apply to the iPad Pros, just the Macs.

Geekbench 5: 1634 / 7220

And they mention MacBook 12 and MacBook Pro 13. That'd make for one helluva MacBook 12!

Is this legit?

EDIT:

They mention a "TDP" of 15 W. If that's what these numbers reflect, then that wouldn't be in a 12" MacBook. The 12" MacBook could run a downclocked one though.

OTOH, I still say I'd be perfectly fine with an A14 non-X MacBook 12", at least from a performance perspective (and not a marketing perspective), if it could handle 16 GB RAM, etc.

P.S. I just ordered my first A14 device today, a 6 GB iPhone 12 Pro Max. I'm coming from a 3 GB iPhone 7 Plus with A10 from 2016, and it's only been in the last year or so that I've started to notice some occasional lag. Otherwise that A10 machine works just fine.
 
Last edited:

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
So what is the core configuration going to be with the A14X? 4+4, 8+0, 6+2? I ask because the i9-9980HK (45 W) gets about 1096 / 6870 in Geekbench 5 single- and multi-thread.

Is there some specific thing that can make the Apple silicon outperform 8 big Intel cores to such an extent? Like a benchmark where Apple has dedicated silicon for something like cryptography and so on?
 

insertcarehere

Senior member
Jan 17, 2013
639
607
136
Hmm... AV1 hardware decode.

And up to 8 GB RAM for the GPU and support for 3 displays. I suspect that doesn't apply to the iPad Pros, just the Macs.

Geekbench 5: 1634 / 7220

And they mention MacBook 12 and MacBook Pro 13. That'd make for one helluva MacBook 12!

Is this legit?

EDIT:

They mention a "TDP" of 15 W. If that's what these numbers reflect, then that wouldn't be in a 12" MacBook. The 12" MacBook could run a downclocked one though.

OTOH, I still say I'd be perfectly fine with an A14 non-X MacBook 12", at least from a performance perspective (and not a marketing perspective), if it could handle 16 GB RAM, etc.

P.S. I just ordered my first A14 device today, a 6 GB iPhone 12 Pro Max. I'm coming from a 3 GB iPhone 7 Plus with A10 from 2016, and it's only been in the last year or so that I've started to notice some occasional lag. Otherwise that A10 machine works just fine.

The A12Z in the comparison had a "15 W" TDP and ended up in iPad Pros, so the A14X spec here should suit the non-Pro MacBook just fine; 8 GB max RAM support would not work for MacBook Pro implementations anyway. Now I really wanna see what they put in the MacBook Pros...
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
The A12Z in the comparison had a "15 W" TDP and ended up in iPad Pros, so the A14X spec here should suit the non-Pro MacBook just fine; 8 GB max RAM support would not work for MacBook Pro implementations anyway. Now I really wanna see what they put in the MacBook Pros...
If I understand the earlier page, up to 32 GB of memory, but memory can be dynamically shared with the graphics, which can access up to 8 GB of that 32 GB.
 
Reactions: Tlh97 and Eug

Eug

Lifer
Mar 11, 2000
23,870
1,438
126
If I understand the earlier page, up to 32 GB of memory, but memory can be dynamically shared with the graphics, which can access up to 8 GB of that 32 GB.
Yes. My statement was to indicate the 8 GB may be meaningless for the iPad Pro. It references the max seen by the GPU for a Mac, at least according to that page.

I wouldn’t be surprised if the next iPad Pro only has 6GB, although one can hope for 8 GB.
 

jeanlain

Member
Oct 26, 2020
159
136
116
Yes. My statement was to indicate the 8 GB may be meaningless for the iPad Pro. It references the max seen by the GPU for a Mac, at least according to that page.
But this doesn't make sense. The GPU and CPU share the same memory pool, as stated in the WWDC session. So if the CPU sees 32 GB, so does the GPU.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
Is there some specific thing that can make the Apple silicon outperform 8 big Intel cores to such an extent? Like a benchmark where Apple has dedicated silicon for something like cryptography and so on?

There's no single answer to that.

Even on an iso-everything basis (node, process, R&D funds), some companies just execute better than the rest.

Also, Intel has been stuck with Skylake for 5 years now. The 9980HK is Skylake. They are getting their asses kicked from all directions by AMD, a company a fraction of their size, and one they were beating significantly not too long ago.

Even if you are a world-class marathon runner, if you stand still in one spot for 50 minutes after leading for a bit, it's not going to look good for you.
 
Reactions: Tlh97

Eug

Lifer
Mar 11, 2000
23,870
1,438
126
But this doesn't make sense. The GPU and CPU share the same memory pool, as stated in the WWDC session. So if the CPU sees 32 GB, so does the GPU.
I might guess that although they can access the same physical memory pool, there is nothing to say the GPU can actually utilize more than 8 GB of RAM at any one time. It hasn't mattered in any of the iDevices, though, since none of them even has 8 GB of RAM in the first place.
 

name99

Senior member
Sep 11, 2010
511
395
136
Yes. My statement was to indicate the 8 GB may be meaningless for the iPad Pro. It references the max seen by the GPU for a Mac, at least according to that page.

I wouldn’t be surprised if the next iPad Pro only has 6GB, although one can hope for 8 GB.

The iPhone Pro is already at 6 GB.
8 GB for the iPad Pro is the way I would bet, at the very least for the higher-end (i.e. larger-SSD) models, like they did with the A12X iPad Pros.
 

name99

Senior member
Sep 11, 2010
511
395
136
But this doesn't make sense. The GPU and CPU share the same memory pool, as stated in the WWDC session. So if the CPU sees 32 GB, so does the GPU.

Both sides see what the MMU is programmed to allow them to see...
But I agree, there's no obvious reason why the GPU should be constrained to only be able to see some subset of the DRAM.
(UNLESS Apple is, even in this generation, giving us something interesting like a pool of HBM and a separate pool of DRAM... )
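
For what it's worth, once these machines are in people's hands this is trivial to check empirically. A minimal Metal probe (these are standard MTLDevice properties; whatever it prints is simply what the OS chooses to report, not proof of any particular memory partitioning):

Code:
import Metal

// Print what Metal reports for the default GPU on this machine.
// Note: recommendedMaxWorkingSetSize is a macOS-only property.
if let device = MTLCreateSystemDefaultDevice() {
    print("GPU:", device.name)
    print("Unified memory:", device.hasUnifiedMemory)
    // Upper bound Metal recommends for resident GPU resources, in GiB.
    print("Recommended max working set (GiB):",
          device.recommendedMaxWorkingSetSize / (1024 * 1024 * 1024))
    // Largest single buffer this device will allocate, in GiB.
    print("Max buffer length (GiB):",
          device.maxBufferLength / (1024 * 1024 * 1024))
}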
 

name99

Senior member
Sep 11, 2010
511
395
136
So what is the core configuration going to be with the A14X? 4+4, 8+0, 6+2? I ask because the i9-9980HK (45 W) gets about 1096 / 6870 in Geekbench 5 single- and multi-thread.

Is there some specific thing that can make the Apple silicon outperform 8 big Intel cores to such an extent? Like a benchmark where Apple has dedicated silicon for something like cryptography and so on?

Actually crypto (as measured by GB5) is one of the few areas where Intel is still substantially ahead of Apple even for A14.

It's not dedicated silicon; it's, as others have said, competence. Saying more would require thousands of pages, but the highlights are:
- x86 (AMD and Intel) is substantially more complex than ARM, meaning that it takes a lot longer to design and validate an x86 design. As a result, x86 is always operating with the best ideas of seven years ago, while Apple/ARM is operating with the best ideas of two to three years ago and can more rapidly update designs to better match evolving needs.

- Intel apparently made a decision around 10 years ago to prioritize making money over doing research and development. Everything since then has been done on the cheap. Meaning, among other things, designs that stand still. Intel also suffers from severe NIH -- they still haven't fully come to terms with the fact that they're not the only one in the world with good ideas. So they're designing around understandings that are 15 to 20 years old, without updating that understanding to the new reality.

- Apple and ARM are designing to the strengths of current and future processes, meaning designing in a way that gets maximum value out of slower but many, many, many transistors. Intel (and to some extent AMD; they are trying to pivot, but it's slow, see my earlier point) prioritizes frequency over smart use of many transistors.


- What's the value of many transistors? That gets into CPU design. Suffice it to say that EVERYTHING that slows down a CPU (branch mispredictions, cache misses, instruction dependencies, ...) can be *substantially* improved with clever design and the use of many transistors. That's how Apple can match Intel performance at essentially half the frequency.

An A14 has 11.8B transistors. A Pentium Pro had 5.5M transistors.
Slightly different (SoC vs core), but still, that's a factor of over 2000! To get optimal value from all those transistors you have to think very differently; you don't just try to scale up what made sense for the Pentium Pro. But for the most part that's where Intel's design mentality still is: creating ever-faster Pentium Pros.
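
(For anyone who wants the arithmetic behind that "factor of over 2000", SoC-vs-core caveat and all, it's a three-line sketch:)

Code:
// Back-of-the-envelope for the comparison above. Note the mismatch:
// the A14 figure is a whole SoC, the Pentium Pro figure is a single CPU die.
let a14Transistors        = 11.8e9
let pentiumProTransistors = 5.5e6
print(a14Transistors / pentiumProTransistors)   // ~2145, i.e. a factor of roughly 2000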
 
Reactions: yeshua

SAAA

Senior member
May 14, 2014
541
126
116
So what is the core configuration going to be with the A14X? 4+4, 8+0, 6+2? I ask because the i9-9980HK (45 W) gets about 1096 / 6870 in Geekbench 5 single- and multi-thread.

Is there some specific thing that can make the Apple silicon outperform 8 big Intel cores to such an extent? Like a benchmark where Apple has dedicated silicon for something like cryptography and so on?

There's no single answer to that.

Even on an iso-everything basis (node, process, R&D funds), some companies just execute better than the rest.

Also, Intel has been stuck with Skylake for 5 years now. The 9980HK is Skylake. They are getting their asses kicked from all directions by AMD, a company a fraction of their size, and one they were beating significantly not too long ago.

Even if you are a world-class marathon runner, if you stand still in one spot for 50 minutes after leading for a bit, it's not going to look good for you.

This, 10x. Besides the fact that Apple can probably pay more engineers, and pay them more, than Intel ever did (and look at how Intel put billions into repurchasing stock rather than tech), the main issue still lies in a 5-6 year gap of nothing but Skylake.
Consider that Intel's own Tiger Lake chips outclass that i9.
Heck:

Intel Core i7-1165G7 - 2.8 GHz (4 cores): 1426
Intel Core i9-10900K - 3.7 GHz (10 cores): 1411

That's for single core.

Intel Core i7-1165G7 - 2.8 GHz (4 cores): 4837
Intel Core i7-7740X - 4.3 GHz (4 cores): 4833

And that's Geekbench multi-core. Skylake on desktops losing to slim laptops... tells you how late they are with the 10 nm node.
Even Apple wouldn't compete well if they were still using TSMC 16 nm.
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
- x86 (AMD and Intel) is substantially more complex than ARM, meaning that it takes a lot longer to design and validate an x86 design. As a result, x86 is always operating with the best ideas of seven years ago

I disagree. The high-level fundamentals of how to make x86 code run fast have been largely the same for 25 years, and AMD is able to churn out Zen generations at a faster clip than Intel. x86 complexity is not the reason Intel's design is in the gutter.
 
Reactions: Tlh97 and moinmoin

Entropyq3

Junior Member
Jan 24, 2005
22
22
81
I disagree. The high-level fundamentals of how to make x86 code run fast have been largely the same for 25 years, and AMD is able to churn out Zen generations at a faster clip than Intel. x86 complexity is not the reason Intel's design is in the gutter.
I don’t want to argue your point, but you are saying something subtly different from name99, who said that Intel has spent the last decades trying to make the Pentium Pro run faster.
You essentially make a statement regarding the reason for that - the x86 target code base having similar requirements over time. Of course, that’s due to history and architecture both.

Regarding the point of contention, I agree with both of you - yes, decades of accumulated cruft make design, validation and debugging a more complex process. But no, that is probably not the primary reason for the position Intel is in right now, which (from the outside) seems to have more to do with a combination of process woes and suboptimal management than with the x86 ISA per se.
 
Feb 17, 2020
108
289
136
Focusing on the differences between x86 and ARM misses the point. From the perspective of a chip designer, the ISA doesn't matter.

Intel's in the gutter because their management wanted to make their team leaner and speed up their schedule, and decided the best way to do that was to cut pre-silicon validation.

That's literally the dumbest possible "solution", since now they're stuck taping out broken products 4 or 5 times and then debugging said broken silicon. And some of them still see this as being fine because they own the fabs.

It has nothing to do with x86 and everything to do with whichever executives decided to go that route.

They had a chance to turn it around when they brought Keller on board, but then forced him out when he tried to fix this glaring issue.

What other companies like Apple are doing isn't rocket science. Since they don't have Intel's luxury of owning their own fabs, they throw an army of verification engineers at their designs and get the bugs fixed before silicon.
 
Reactions: Tlh97 and moinmoin

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
I don’t want to argue your point, but you are saying something subtly different from name99, who said that Intel has spent the last decades trying to make the Pentium Pro run faster.
You essentially make a statement regarding the reason for that - the x86 target code base having similar requirements over time. Of course, that’s due to history and architecture both.

Regarding the point of contention, I agree with both of you - yes, decades of accumulated cruft make design, validation and debugging a more complex process. But no, that is probably not the primary reason for the position Intel is in right now, which (from the outside) seems to have more to do with a combination of process woes and suboptimal management than with the x86 ISA per se.

AMD faces the same x86 cruft; that has not stopped them from making progress.

Intel is behind because they still believe they have the best silicon designers in the world and won't change their mindset or methods. They tape out chips with a hundred bugs in silicon and work on them for 15 months to bring the chip to market. Their competitors have a half-dozen bugs (on more complex designs, no less) and are in market after 6 months. Intel's answer is to just stay the course, because the politics allow no other option.

They are falling off a cliff faster than Kodak: at least Kodak maintained their technical dominance of traditional film to the bitter end, while Intel is already in second place or worse in every CPU market.
 
Last edited:
Reactions: Tlh97