What the A14 shows is that once TSMC exceeded the previous best by some margin, they also ran into the same problems and limitations.
Think of DRAM. DRAM cells are far, far smaller than anything used for CPUs or GPUs; in that respect they are a few process generations ahead. But they are scaling at an absolute snail's pace now, because they were the first to reach the fundamental limits.
All will reach the same conclusion, even if they took radically different steps to get there.
I had a suspicion that TSMC's 5nm claims were overblown, because their previous 10nm and 20nm processes showed similarly lackluster gains. In fact, most manufacturers skipped those processes.
I assume it's getting really hard now, so such huge jumps don't make sense anymore. Intel, the one company insisting on full-node jumps with full-node terminology, is also the one having problems transitioning to one. Is that a coincidence? Maybe not. In the future they may have to use half-node jumps as well. The plusses and the P's will always exist regardless.
You're building a huge tower of speculation on a SINGLE data point -- and a data point that, as I've already pointed out, does not mean what you think it means.
This is not the behavior of a scientist or an engineer; it's the behavior of the PR man.
Is that how you want to be thought of?
A14X in CPU-monkey
Apple A12Z Bionic Benchmark, Test and specs (www.cpu-monkey.com)
Hmm... AV1 hardware decode.
And up to 8 GB RAM for the GPU and support for 3 displays. I suspect that doesn't apply to the iPad Pros, just the Macs.
Geekbench 5: 1634 / 7220
And they mention MacBook 12 and MacBook Pro 13. That'd make for one helluva MacBook 12!
Is this legit?
EDIT:
They mention a "TDP" of 15 W. If that's what these numbers reflect, then that wouldn't be in a 12" MacBook. The 12" MacBook could run a downclocked one, though.
OTOH, I still say I'd be perfectly fine with an A14 non-X MacBook 12", at least from a performance perspective (and not a marketing perspective), if it could handle 16 GB RAM, etc.
P.S. I just ordered my first A14 device today, a 6 GB iPhone 12 Pro Max. I'm coming from a 3 GB iPhone 7 Plus with A10 from 2016, and it's only been in the last year or so that I've started to notice some occasional lag. Otherwise that A10 machine works just fine.
If I understand the page earlier, it supports up to 32 GB of memory, but memory can be dynamically shared with the graphics, which can access up to 8 GB of that 32 GB.
The A12Z in the comparison had a "15W" TDP and ended up in iPad Pros, so the A14X spec here should suit the non-Pro MacBook just fine; 8 GB max GPU RAM support would not work for MacBook Pro implementations anyway. Now I really wanna see what they put in the MacBook Pros...
Yes. My statement was to indicate the 8 GB may be meaningless for the iPad Pro. It references the max seen by the GPU for a Mac, at least according to that page.
This site also indicates a 15W TDP for the A12Z.
But this doesn't make sense. The GPU and CPU share the same memory pool, as stated in the WWDC session. So if the CPU sees 32 GB, so does the GPU.
Is there some specific thing that can make the apple silicon outperform 8 big cores of intel to such an extent? Like a benchmark where apple has dedicated silicon to do something like cryptography and so on?
I might guess that although they can access the same physical memory pool, there is nothing to say the GPU can actually utilize more than 8 GB RAM at any one time. It hasn't mattered in any of the iDevices, though, since none of them even have 8 GB RAM in the first place.
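To make the disagreement above concrete: the question is whether the 8 GB figure is a cap on the GPU's working set *within* a shared pool, or a separate partition. Here is a toy model of the "shared pool with a per-client cap" reading. The 32 GB pool and 8 GB cap are just the numbers from the spec page quoted in this thread, not confirmed hardware behavior, and the class and method names are invented for illustration.

```python
# Toy model of a unified memory pool: CPU and GPU draw from the same
# physical RAM, but the GPU's total allocation is capped.
# Pool size (32 GB) and GPU cap (8 GB) are taken from the cpu-monkey
# spec page discussed above -- NOT confirmed Apple Silicon behavior.

class UnifiedMemory:
    def __init__(self, pool_gb: float, gpu_cap_gb: float):
        self.pool_gb = pool_gb
        self.gpu_cap_gb = gpu_cap_gb
        self.cpu_used = 0.0
        self.gpu_used = 0.0

    def alloc(self, client: str, gb: float) -> bool:
        # Both clients compete for the same physical pool...
        if self.cpu_used + self.gpu_used + gb > self.pool_gb:
            return False
        # ...but the GPU additionally may not exceed its own cap.
        if client == "gpu":
            if self.gpu_used + gb > self.gpu_cap_gb:
                return False
            self.gpu_used += gb
        else:
            self.cpu_used += gb
        return True

mem = UnifiedMemory(pool_gb=32, gpu_cap_gb=8)
assert mem.alloc("gpu", 8)      # GPU can use up to 8 GB
assert not mem.alloc("gpu", 1)  # but no more, even though the pool has room
assert mem.alloc("cpu", 20)     # CPU is free to use the rest of the pool
```

Under this reading, both posters are right: the memory is one pool (as the WWDC session says), yet the GPU still has an 8 GB ceiling within it.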
I wouldn’t be surprised if the next iPad Pro only has 6 GB, although one can hope for 8 GB.
So what is the core configuration going to be like with the A14X? 4+4, 8+0, 6+2? I ask because the i9-9980HK (45 W) gets about 1096 / 6870 in Geekbench 5 single and multi-thread.
There's no single answer to that.
Even with everything else equal (node, process, R&D funds), some companies just execute better than the rest.
Also, Intel has been stuck on Skylake for 5 years now; the 9980HK is Skylake. They are getting their asses kicked from all directions by AMD, a company a fraction of their size, and one they were beating significantly not too long ago.
Even if you are a world-class marathon runner, standing still in one spot for 50 minutes after leading for a bit is not going to look good for you.
Geekbench 5 single-core:
Intel Core i7-1165G7 2.8 GHz (4 cores) | 1426
Intel Core i9-10900K 3.7 GHz (10 cores) | 1411
Geekbench 5 multi-core:
Intel Core i7-1165G7 2.8 GHz (4 cores) | 4837
Intel Core i7-7740X 4.3 GHz (4 cores) | 4833
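Taking the scores quoted in this thread at face value (the leaked A14X figures of 1634 / 7220 at a claimed 15 W "TDP", and the i9-9980HK's roughly 1096 / 6870 at 45 W), a quick back-of-the-envelope perf-per-watt comparison shows why the question above comes up. TDP is a rough thermal envelope, not measured power draw, so the ratios are illustrative only:

```python
# Back-of-the-envelope perf-per-watt from the Geekbench 5 scores and
# TDP figures quoted in this thread. TDP is a thermal envelope, not
# measured power under the benchmark, so treat these as rough ratios.

chips = {
    # name: (single-core, multi-core, claimed TDP in watts)
    "A14X (leaked)":  (1634, 7220, 15),
    "Core i9-9980HK": (1096, 6870, 45),
}

for name, (single, multi, tdp) in chips.items():
    print(f"{name}: ~{multi / tdp:.0f} multi-core points per watt "
          f"({single} single / {multi} multi @ {tdp} W)")
```

Even if the real sustained power figures differ, a ~3x gap in nominal envelope for comparable multi-core scores is why "dedicated silicon" alone can't be the explanation.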
- x86 (AMD and Intel) is substantially more complex than ARM, which means it takes a lot longer to design and validate an x86 design, so x86 is always operating with the best ideas of seven years ago
I disagree. The high-level fundamentals of how to make x86 code run fast have been largely the same for 25 years, and AMD is able to churn out Zen generations at a faster clip than Intel. x86 complexity is not the reason Intel's design is in the gutter.
I don’t want to argue your point, but you say something subtly different than name99, who said that Intel has spent the last decades trying to make the Pentium Pro run faster.
You essentially make a statement regarding the reason for that - the x86 target code base having similar requirements over time. Of course, that’s due to history and architecture both.
Regarding the point of contention, I agree with both of you: yes, decades of accumulated cruft make design, validation, and debugging a more complex process. But no, that is probably not the primary reason for the position Intel is in right now, which (from the outside) seems to have more to do with a combination of process woes and suboptimal management than with the x86 ISA per se.