Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
23,870
1,438
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24,576 concurrent threads
2.6 Teraflops (see the sketch below)
82 Gigatexels/s
41 Gigapixels/s

16-core neural engine
Secure Enclave
USB 4
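
For reference, the 2.6-teraflop figure above falls out of the EU count and the widely reported GPU clock. A back-of-envelope sketch; the 8 FP32 ALUs per EU and the ~1.278 GHz clock come from third-party reporting, not Apple's spec sheet:

```python
# Back-of-envelope FP32 throughput for the 8-core M1 GPU.
# Assumptions (third-party reporting, not Apple's spec sheet):
# 8 FP32 ALUs per EU, FMA = 2 ops/clock, ~1.278 GHz GPU clock.
eus = 128
alus_per_eu = 8
ops_per_clock = 2          # fused multiply-add counts as two FP ops
clock_ghz = 1.278

tflops = eus * alus_per_eu * ops_per_clock * clock_ghz / 1000
print(f"{tflops:.2f} TFLOPS")  # ~2.62, matching the quoted 2.6 teraflops
```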

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options: 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from the occasional slight clock speed difference).

EDIT:



M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC (H.265), and ProRes

M3 Family discussion here:


M4 Family discussion here:

 

Eug

Lifer
Mar 11, 2000
23,870
1,438
126
OK, here it probably matters how much you travel, especially on airplanes and during the day (I usually book long flights overnight and just sleep). I mean, how often do you need 15 hrs of battery life? I can't think of ever having needed it.
If I ration/plan my usage, 6-8 hours is fine. But I don't like to ration/plan my usage. Plus batteries lose capacity over time.

Battery life is going to be a very big selling point. Not for everyone of course, but a lot of people.

When my colleagues buy laptops for their business work, they often come to me to ask questions. In fact, the things I tell them to pay the most attention to are (in no particular order):

1) Battery life
2) Screen quality. Not resolution per se, but quality.
3) Screen size, but bigger isn't always better.
4) Machine weight.
5) Amount of memory. It should have enough, but you don't need 32 GB to run PowerPoint.
6) Does it have an SSD?
7) Keyboard type. International keyboards suck, unless you need them.
8) Fan noise.

Notice I have not mentioned CPU speed as a priority. Yeah, it matters, and you shouldn't get some lame Atom setup, but most of the time the CPU is not the limiting factor. For most such users, any low-power Core iX is usually OK.

However, I can't tell you the number of times they've ignored my advice and gone with their teenage son's recommendation of an inexpensive gaming laptop with a big screen and mediocre viewing angles. They buy it, and then hate it because they have to lug this giant thing around, with its lousy battery life and so-so image quality. Then a year later they buy another one closer to what I recommended, at higher cost yet often with a slower CPU.


If you factor in R&D for the SoC itself, and more so production costs (masks etc.; TSMC isn't cheap, especially when buying up all the bleeding-edge capacity), I'm betting they actually pay significantly more than before.
I don't buy that at all.


I'm trying, but something in my mind is preventing me from being more impressed with this M1.
I mean, is it a surprise that 6 ALUs perform better than 4 ALUs, for example? Yes, Apple deserves praise for delivering such a wide architecture, and they didn't start yesterday; they've been at it for years now. But even so, if with such a wide architecture the x86 competition still gets so close, is it really that impressive? Both AMD and Intel seem to be transitioning to somewhat wider architectures, so can't we expect a similar boost in performance? The real M1 advantage is all the other specialized hardware built in, but again, this seems to be becoming an industry standard; AMD even joined with Xilinx.

This is good for Apple and will increase their market share a bit, but it's no reason to make such a fuss. At least, I can't, even if I try (for some reason).
I've never understood this argument. "If you do this, of course it will be better, so it's no big deal." If it's so easy to implement, why don't others do it? Esp. the big boys?
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
If you factor in R&D for the SoC itself, and more so production costs (masks etc.; TSMC isn't cheap, especially when buying up all the bleeding-edge capacity), I'm betting they actually pay significantly more than before.

This is the "arbitrage" of accounting. Do we count all the R&D costs and pretend other Apple silicon for phones do not exist to make the number look bigger, or do we acknowledge that most of the R&D already existed for you are already paying thousands of silicon designers already so adding a few hundred more to make the m1 version of the chip is a "valued added expense" but also a "value added benefit" since you do not have to pay the initial cost.

Put another way, Apple iPhones now subsidize Apple's macOS chip development, and we are playing language games about accounting arbitrage, when in reality you either do it or you do not, no matter what the accountants and their spreadsheets say.

-----

To illustrate that last sentence with a concrete example: many people assume prices are "fixed", because their day-to-day lived experience of prices is like that when they go to the grocery store, the gas station, Best Buy, and so on. But Apple spends billions of dollars with TSMC, and thus gets to negotiate prices for individual steps in the bill. Apple can say, "I want a cheaper rate for the masks than you would give a smaller customer who wants fewer chips, because I am promising you 20 million 120 mm² chips." So that is what, 400 to 480 chips per wafer depending on defects? Roughly 50k wafers? I.e., $850 million to $1 billion of TSMC revenue, if TSMC is selling a wafer for $17k, which is what some analysts predict.

TSMC will negotiate in order to guarantee this revenue; its counter-argument is everyone else who wants 5 nm. If there are lots of customers, TSMC gets a higher price, and if lots of customers are fine with 7 nm for now, then TSMC can't charge Apple as much for their chips. Regardless, for just the raw wafer cost, before masks and so on, we are talking $38 to $50 per chip ($17k / 400 = $42.50).
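
Here's that dies-per-wafer and cost-per-die arithmetic as a quick sketch, using the analyst figures above and the standard dies-per-wafer approximation; the 400-480 good-die range is the same guess as above:

```python
import math

# Analyst figures quoted above: ~$17k per 5 nm 300 mm wafer, ~120 mm^2 die.
wafer_price = 17_000          # $
die_area = 120.0              # mm^2
wafer_diameter = 300.0        # mm

# Standard dies-per-wafer approximation (area ratio minus edge loss).
gross_dies = (math.pi * (wafer_diameter / 2) ** 2) / die_area \
             - (math.pi * wafer_diameter) / math.sqrt(2 * die_area)
print(f"gross dies per wafer: {gross_dies:.0f}")  # ~528 before defects

for good_dies in (400, 480):  # the defect-adjusted range from the post
    print(f"{good_dies} good dies -> ${wafer_price / good_dies:.2f} per die")
# 400 -> $42.50, 480 -> $35.42
```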
 

insertcarehere

Senior member
Jan 17, 2013
639
607
136
I'm trying, but something in my mind is preventing me from being more impressed with this M1.
I mean, is it a surprise that 6 ALUs perform better than 4 ALUs, for example? Yes, Apple deserves praise for delivering such a wide architecture, and they didn't start yesterday; they've been at it for years now. But even so, if with such a wide architecture the x86 competition still gets so close, is it really that impressive? Both AMD and Intel seem to be transitioning to somewhat wider architectures, so can't we expect a similar boost in performance? The real M1 advantage is all the other specialized hardware built in, but again, this seems to be becoming an industry standard; AMD even joined with Xilinx.

This is good for Apple and will increase their market share a bit, but it's no reason to make such a fuss. At least, I can't, even if I try (for some reason).

A "wide" architecture is absolutely no guarantee of performance, the custom performance cores in Exynos SoCs were wide designs but went nowhere against the narrower ARM Cortex cores. If it was so trivial to implement a wide x86 design for improved IPC without screwing up elsewhere, Intel/AMD would have done it ages ago just for their server parts alone.

What's really amazing about the Apple-designed cores is not that they are merely a wide design with high performance. It's that they combine such performance with very low power consumption, while keeping their core sizes in line with, or smaller than, the x86 competition's.
 

beginner99

Diamond Member
Jun 2, 2009
5,234
1,611
136
This is the "arbitrage" of accounting. Do we count all the R&D costs and pretend other Apple silicon for phones do not exist to make the number look bigger, or do we acknowledge that most of the R&D already existed for you are already paying thousands of silicon designers already so adding a few hundred more to make the m1 version of the chip is a "valued added expense" but also a "value added benefit" since you do not have to pay the initial cost.

Put another way, Apple iPhones now subsidize Apple's macOS chip development, and we are playing language games about accounting arbitrage, when in reality you either do it or you do not, no matter what the accountants and their spreadsheets say.

True, see my edit in the wrong post. Of course you can't put the full R&D on this M1, but you also can't completely ignore it and just take the plain wafer costs, die size, and yields. Even this SoC needed some design R&D, validation, mask costs (easily double-digit millions on 5 nm), and so forth. Plus, Apple basically pays for TSMC's node development. They pay a lot more than, say, AMD.
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
True, see my edit in the wrong post. Of course you can't put the full R&D on this M1, but you also can't completely ignore it and just take the plain wafer costs, die size, and yields. Even this SoC needed some design R&D, validation, mask costs (easily double-digit millions on 5 nm), and so forth. Plus, Apple basically pays for TSMC's node development. They pay a lot more than, say, AMD.
Laughs. I did an edit as well, to give more concrete numbers for the rough silicon cost if we just ignore R&D. We are talking about a $40 to $50* chip, if analysts are correct about the going rate for a 5 nm 300 mm wafer.

*It is going to be higher at first due to defects, but in the best-case scenario with no defects (impossible), we are talking more or less 480 chips per wafer.
 

Eug

Lifer
Mar 11, 2000
23,870
1,438
126
True, see my edit in the wrong post. Of course you can't put the full R&D on this M1, but you also can't completely ignore it and just take the plain wafer costs, die size, and yields. Even this SoC needed some design R&D, validation, mask costs (easily double-digit millions on 5 nm), and so forth. Plus, Apple basically pays for TSMC's node development. They pay a lot more than, say, AMD.
My point earlier was that I suspect the average cost per chip charged by Intel would be less than $200-$250, while the cost to Apple for its own chips is more than $50 once you factor in incremental* R&D costs, masks, and such.

So, I don't believe the $150-200 saved-per-laptop claim that some analysts are suggesting, but I don't for a second believe this is costing Apple more than Intel is charging, at least if you amortize these incremental R&D costs over a couple of years.

*I say "incremental" because of the bulk of these costs should be attributed to A series development, and they sell far, far, far more iPhone units than Mac units.
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
My point earlier was that I suspect the average cost per chip charged by Intel would be less than $200-$250, while the cost to Apple for its own chips is more than $50 once you factor in incremental* R&D costs, masks, and such.

So, I don't believe the $150-200 saved-per-laptop claim that some analysts are suggesting, but I don't for a second believe this is costing Apple more than Intel is charging, at least if you amortize these incremental R&D costs over a couple of years.

*I say "incremental" because of the bulk of these costs should be attributed to A series development, and they sell far, far, far more iPhone units than Mac units.

Even if BOM/hardware costs are identical, having to maintain the APIs and core OS on two different hardware architectures, plus compilers to target those architectures, is undoubtedly costly. To do this, they probably have duplication of work within the teams that create the OS and all its core components, and in particular the teams responsible for Xcode and the various APIs that developers use.

Edit: I'm referring to having a library/API for iOS (ARM) targets and x86 (macOS) targets.
 

deathBOB

Senior member
Dec 2, 2007
569
239
116
Intel's R&D isn't free either, and they also take huge margins. And it's not like the M1 is a ground-up design; Apple can spread that cost over Macs and the hundreds of millions of iOS devices it sells.
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
I wouldn't be surprised if Apple makes close to nothing on the base model Air and Mini.

But it's +$200 for 8 GB of RAM over base, and +$200 to go from a 256 GB to a 512 GB SSD. Each upgrade probably costs Apple like $15-$20. That would be like +$360 profit to Apple.

The base models, specced as they are, are just bait, and not out of line given their quality. It's the (slightly) higher-spec models where prices get whack: $1,100 for a Mini with 16 GB RAM and a 512 GB SSD vs. $700 for 8 GB RAM and a 256 GB SSD.
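
Spelling out that margin math (using my guessed ~$20 per-upgrade component cost):

```python
# Margin on the two +$200 upgrades, using my ~$20 component-cost guess.
ram_margin = 200 - 20   # +$200 RAM upgrade, ~$20 of DRAM
ssd_margin = 200 - 20   # +$200 SSD upgrade, ~$20 of NAND
print(f"+${ram_margin + ssd_margin} extra profit per upgraded unit")  # +$360
```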
 

Doug S

Platinum Member
Feb 8, 2020
2,833
4,819
136
I have very little use case for it, if I'm being honest, which is why I was agreeing with you. I think the lizard part of my brain wants 8 performance cores, while rationally I have little use for them. I do a few tasks regularly that completely peg the CPU on my 2016 MBP 16" (OCR, bizarrely, is one, which I do very frequently. I wonder if that could be sped up by the neural engine though?). But in general, I'm only very, very occasionally pushing the CPU to the max. The M1 is probably twice as fast as my current laptop in ST and thrice as fast in MT. I really don't need more than that.

<MontyPython>
8 cores! Luxury! We were lucky to have even a single core, and all 26 of us had to timeshare it!
</MontyPython>
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,347
5,471
136
<MontyPython>
8 cores! Luxury! We were lucky to have even a single core, and all 26 of us had to timeshare it!
</MontyPython>

That was my experience when I started Computer Science many ages ago: timesharing on a mainframe, shared among the entire campus, though probably only hundreds of CompSci and engineering students actually used it. You were only allotted so much computer time for your batch submissions. Burn it up with an infinite loop and you had to slink, shamefaced, to the office to have your allotment increased.

At least I avoided punch cards by a couple of years...

Now you can buy a watch that has more computing power than the "Super Computer" that our campus used.
 

Doug S

Platinum Member
Feb 8, 2020
2,833
4,819
136
The part that surprised me a bit is the full DRAM on package. It makes sense, but I didn't think they'd do it. I'm still wondering what they'll do for the higher end Macs. I can see up to 64 GB being feasible, but will they actually do that in-package? What about 128 GB?

The higher-end Macs need a better GPU, and at some point you don't want the GPU competing too much with the CPU for memory access.

So I would guess that, for at least some of the higher end, the closely coupled DRAM will be GDDR6 or HBM2e, paired with a traditional DDR4/DDR5 controller. With double (or maybe more?) the number of GPU cores the M1 has, and a ton of additional memory bandwidth, that GPU would fly.

If they did this for the 8+4 chip, that could serve as the midrange, and also work as a chiplet that serves the high end. That gets you 4 memory controllers and high DRAM capacity in the Mac Pro to go along with your 32 big cores, and assuming the 4 GPUs spread across the chiplets work well enough together, you'd have a hell of a beefy GPU.

How many times higher would the performance of M1's GPU have to be before it could match the current fastest dGPU from AMD or Nvidia?
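
As a crude first cut, comparing peak FP32 throughput only; the dGPU numbers are the published FP32 peaks for the 2020 flagships, and peak TFLOPS ignores bandwidth, drivers, and architecture, so treat the ratios as loose upper bounds:

```python
# Crude FP32-only comparison; peak TFLOPS ignores memory bandwidth,
# drivers, and architecture, so treat these ratios as loose upper bounds.
m1_tflops = 2.6
dgpus = {
    "Radeon RX 6900 XT": 23.0,  # AMD 2020 flagship, peak FP32
    "GeForce RTX 3090": 35.6,   # Nvidia 2020 flagship, peak FP32
}
for name, tflops in dgpus.items():
    print(f"{name}: {tflops / m1_tflops:.1f}x the M1's peak FP32")
# ~8.8x and ~13.7x respectively
```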
 

Doug S

Platinum Member
Feb 8, 2020
2,833
4,819
136
True, see my edit in the wrong post. Of course you can't put the full R&D on this M1, but you also can't completely ignore it and just take the plain wafer costs, die size, and yields. Even this SoC needed some design R&D, validation, mask costs (easily double-digit millions on 5 nm), and so forth. Plus, Apple basically pays for TSMC's node development. They pay a lot more than, say, AMD.

As I've said before, I'm willing to bet heavily that the M1 silicon is the EXACT same silicon that will be called A14X when it goes into the next version of the iPad Pro early next year. So really, the incremental R&D costs of the M1 were basically zero, since they were going to do an A14X anyway.

What you say will be true of the rumored upcoming 8+4 chip, since that will be Mac-only. Obviously it will leverage a lot of the M1's design, but I think there will be differences beyond just more big cores and more GPU cores.
 

Eug

Lifer
Mar 11, 2000
23,870
1,438
126
Yes, I firmly believe that Apple purpose-built this chip (and the A-series chips beforehand, in preparation) to be smooth for video editors. No, I don't think Apple is only targeting them, but this has been a very big issue for a long time, and the previous solutions have mainly been to add more cores to brute-force it, with some GPU acceleration thrown in. That hasn't really fared all that well (unless you had an unlimited budget), since the codecs have also advanced at the same time. Who in 2010 would have thought that by 2015 we'd be recording 4Kp30 video on our phones?

Mac mini beats out Mac Pro for Final Cut Pro editing. The Mac Pro is faster for export, but he says timeline playback freezes. In contrast, the Mac mini is perfectly smooth for timeline playback and scrubbing with the same project. The video is 4K ProRes (Better Quality setting).

He didn't spell out the exact Mac Pro specs, but it is running the XDR monitor, and he did say it cost $20,000 CAD with 16 cores, ~200 GB RAM, and dual GPUs with a combined 64 GB of GPU RAM. So it would appear the Mac Pro he is comparing against is:

2019 Mac Pro
16-core 3.2 GHz Intel Xeon W with Turbo to 4.4 GHz
192 GB RAM
2 x Radeon Pro Vega II 32 GB

Total price: CAD $20,249 (US $15,483)

Export times for 4K h.264:
Mac mini 3'14"
Mac Pro 2'33"

I'm now looking forward to picking up a Mac Pro cheap on the used market.

I wouldn't be surprised if Apple makes close to nothing on the base model Air and Mini.

But it's +$200 for 8 GB of RAM over base, and +$200 to go from a 256 GB to a 512 GB SSD. Each upgrade probably costs Apple like $15-$20. That would be like +$360 profit to Apple.

The base models, specced as they are, are just bait, and not out of line given their quality. It's the (slightly) higher-spec models where prices get whack: $1,100 for a Mini with 16 GB RAM and a 512 GB SSD vs. $700 for 8 GB RAM and a 256 GB SSD.
What? No. Did you take a look inside the Mac mini? It's basically a tablet mobo without the screen, plus a giant fan and a power supply.

Basically you're getting an iPad Pro with better I/O and a little bit more RAM, but no screen at all. You can be sure they have a nice healthy margin on these.
 

name99

Senior member
Sep 11, 2010
511
395
136
With all the extra functional units inside the SoC, what use case do you have that still needs more than 4 performance cores?

Oh come on!
For some people it is compiling. For some people it is various types of engineering. For me it's running Mathematica.

Yes, we all know most people will be comfortable with the low end -- that's why the low end sells best!
But that doesn't change the fact that there are some people using computers for more "traditional" tasks (like compiling, or large scale compute) who can keep using ever more compute as it becomes available.
 

Eug

Lifer
Mar 11, 2000
23,870
1,438
126
Oh come on!
For some people it is compiling. For some people it is various types of engineering. For me it's running Mathematica.

Yes, we all know most people will be comfortable with the low end -- that's why the low end sells best!
But that doesn't change the fact that there are some people using computers for more "traditional" tasks (like compiling, or large scale compute) who can keep using ever more compute as it becomes available.
He already responded and said he probably doesn't need more cores.
 

name99

Senior member
Sep 11, 2010
511
395
136
He already responded and said he probably doesn't need more cores.
I think it's always good for people to be reminded that others use their computers in very different ways.
Some people care about video games, some care about FCP editing. Some care about XCode, and some about Mathematica...
 

Hitman928

Diamond Member
Apr 15, 2012
6,328
11,123
136
Has anyone else noticed how the same Tiger Lake chip performed 20% slower in SPEC 2017 in the M1 review compared to the review they did 2 months ago?

It would help a lot if Anandtech documented what compilers and settings they used to build the SPEC 2017 benchmarks.

If you are looking at the total scores, I believe it's because they didn't run all the sub-tests in the M1 review, since not all of them would work on the M1. So you can't compare the total score in the M1 review to the one in the Tiger Lake review, because not all sub-tests are included.

I did take a quick look at the individual test scores, and they are all identical, except povray, which gets a huge boost in the new Tiger Lake score for some reason. The Renoir score is unchanged and all the other scores are identical, so I don't know if there was an actual change in settings/compiler or if one of the povray scores is a typo.
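
To make the total-score caveat concrete: SPEC's overall score is the geometric mean of the per-subtest ratios, so omitting subtests shifts the composite even when every shared subtest is identical. A minimal illustration with made-up ratios:

```python
from math import prod

def geomean(ratios):
    """SPEC-style composite: geometric mean of per-subtest ratios."""
    return prod(ratios) ** (1 / len(ratios))

full_run = [8.0, 5.0, 12.0, 3.0, 9.0]  # made-up subtest ratios
partial_run = full_run[:-2]            # same chip, two subtests omitted

print(f"full:    {geomean(full_run):.2f}")     # 6.65
print(f"partial: {geomean(partial_run):.2f}")  # 7.83 -- "faster" without running faster
```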
 

Panino Manino

Senior member
Jan 28, 2017
873
1,128
136
I've never understood this argument. "If you do this, of course it will be better, so it's no big deal." If it's so easy to implement, why don't others do it? Esp. the big boys?
A "wide" architecture is absolutely no guarantee of performance, the custom performance cores in Exynos SoCs were wide designs but went nowhere against the narrower ARM Cortex cores. If it was so trivial to implement a wide x86 design for improved IPC without screwing up elsewhere, Intel/AMD would have done it ages ago just for their server parts alone.

What's really amazing about the Apple-designed cores is not that they are merely a wide design with high performance. It's that they combine such performance with very low power consumption, while keeping their core sizes in line with, or smaller than, the x86 competition's.

I'm not saying that it's easy; Apple didn't suddenly come up with an architecture this big from the start. It's just that I feel x86 will not get left behind, seeing what AMD and Intel are doing right now.

The Exynos disaster was a shame, but maybe they tried to move too far too fast?
 

nxre

Member
Nov 19, 2020
60
103
66
But the problem with this approach is that you can have a ton of unused silicon at any given time.
I think what you claim to be the problem is actually the reason behind this switch to accelerators. Lithography is getting increasingly hard and complex, so much of a chip nowadays is just dark silicon. Accelerators provide a way out of this problem: only the CPU needs to hit increasingly higher clock speeds, while a big chunk of the chip is accelerators that can operate at much lower power. But I may be wrong on this one, as lithography nowadays is too complex a problem for just one person to understand.
 

nxre

Member
Nov 19, 2020
60
103
66
The Exynos disaster was a shame, but maybe they tried to move too far too fast?
The Exynos custom cores always struck me as if someone had tried to reverse-engineer Apple's cores and make their own based on them; it definitely seemed weird, and not a natural progression of their uarch. I definitely think they tried to move too fast with that, and it's a shame, as now only ARM and Apple make custom cores.
 

nxre

Member
Nov 19, 2020
60
103
66
Also, how do you guys think Apple will handle GPUs in their higher-end MacBooks and, eventually, desktops next year?
I'm going to guess Apple is not yet moving to their own GPUs in the higher-end parts. They seem to want the GPU packed into the SoC itself, functioning as integrated graphics no matter how performant it is. They could make huge monolithic dies and bin them by active GPU cores, but at a certain point it just becomes far too expensive to produce for such a small market.
What I would guess is that they are going for a chiplet approach to GPUs, but they would likely do that in late 2021/early 2022 with the M2 architecture. They don't seem to want to change everything at the same time, so I think they would first change the CPU, keeping discrete graphics options for those who need it, and then go after GPUs if everything goes well in round one.
 