Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
23,870
1,438
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops (see the sanity check below the spec list)
82 Gigatexels/s
41 Gigapixels/s

16-core neural engine
Secure Enclave
USB 4
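
A quick sanity check on the 2.6 Teraflops figure (assuming the commonly reported ~1.28 GHz GPU clock, which Apple doesn't publish): 128 EUs × 8 ALUs/EU = 1024 ALUs, and 1024 ALUs × 2 FLOPs per FMA × ~1.278 GHz ≈ 2.6 TFLOPS. The texture and pixel rates line up with the same clock (64 texture units × 1.278 GHz ≈ 82 Gigatexels/s; 32 ROPs × 1.278 GHz ≈ 41 Gigapixels/s).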

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options are 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from the GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from maybe slight clock speed differences occasionally).

EDIT:



M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops (see the sanity check below)

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC, and ProRes
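
Same sanity check as for the M1 above (again assuming the commonly reported ~1.4 GHz GPU clock): 10 cores × 16 EUs × 8 ALUs = 1280 ALUs, and 1280 ALUs × 2 FLOPs per FMA × ~1.398 GHz ≈ 3.6 TFLOPS.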

M3 Family discussion here:


M4 Family discussion here:

 
Last edited:

HurleyBird

Platinum Member
Apr 22, 2003
2,761
1,462
136
I don't see the point of running a server suite against a low-end consumer laptop part vs. a part drawing about 8 times the wattage, in benchmarks designed for unlimited power draw.

Server suite, perhaps.

Everything else, not really.

As a forward-looking exercise, a 5600X should line up pretty well with a 5800U when Cezanne comes out. Actually, that's probably conservative. The 3600/X and 4800U were pretty comparable. But common sense says that Vermeer is much more bottlenecked by the IOD than Matisse (when everything else becomes faster, the thing that was already a bottleneck becomes that much more significant). Add to that the fact that Cezanne gets 1/2 the L3 cache of its desktop and server brethren, while Renoir made do with a paltry 1/4.
 
Reactions: Tlh97

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Enjoy Apple's engineering-driven management right now. It won't last 😢. We're probably safe as long as Tim is in charge; maybe even as long as Jony is in charge of his division.
But in the end the parasites always win...

Maybe it will last for a bit longer. The engineering is certainly hugely impressive.

I guess it's also becoming quite important for them to at least be able to offer a reasonable capability to do this for quite a wide range of developers.
 

IvanKaramazov

Member
Jun 29, 2020
56
102
66
What I mean by "favor mobile CPUs" is that the benchmarks themselves are very short, so as to not cause throttling.
That is fair. Honestly, for that same reason I've been trying to wrap my mind around why Geekbench doesn't unfairly favor Intel, and to a lesser extent AMD, over chips like the M1. A roughly 30-second benchmark should allow many x86 chips to run at PL2 essentially the entire time, and both Intel's and AMD's mobile chips exhibit far higher boosts in clock and power draw than the M1 seems to. You'd expect, then, that the GB scores for the M1 would actually be more representative of sustained workloads than the GB scores for the x86 competitors. I don't think longer benchmarks agree with that characterization, though.
 

jeanlain

Member
Oct 26, 2020
159
136
116
Honestly, for that same reason I've been trying to wrap my mind around why Geekbench doesn't unfairly favor Intel, and to a lesser extent AMD, over chips like the M1. A roughly 30-second benchmark should allow many x86 chips to run at PL2 essentially the entire time, and both Intel's and AMD's mobile chips exhibit far higher boosts in clock and power draw than the M1 seems to.
Geekbench should unfairly favour Intel CPUs that are more prone to throttling than the M1, which should be the case in most laptop configurations. But the M1 core is just faster than any Intel core, even at its boost frequency. This is also true in SPEC tests, which are longer.
This is not a general trend, though. For instance, the M1 core isn't the fastest in Cinebench. Why does this program run better on x86? It could just be due to Cinebench having only recently been ported to ARM and lacking some optimisations.
Cinebench is not designed as a cross-architecture benchmark tool. I believe SPEC tests give a more accurate picture of the power (and bandwidth) of a CPU core/package.
 

moinmoin

Diamond Member
Jun 1, 2017
5,094
8,098
136
That is fair. Honestly, for that same reason I've been trying to wrap my mind around why Geekbench doesn't unfairly favor Intel, and to a lesser extent AMD, over chips like the M1. A roughly 30-second benchmark should allow many x86 chips to run at PL2 essentially the entire time, and both Intel's and AMD's mobile chips exhibit far higher boosts in clock and power draw than the M1 seems to. You'd expect, then, that the GB scores for the M1 would actually be more representative of sustained workloads than the GB scores for the x86 competitors. I don't think longer benchmarks agree with that characterization, though.
PL2 is an Intel concept. AMD is actually doing the opposite of Intel with its latest mobile chips, not boosting until some form of timeout has passed, in order to preserve efficiency:
 
Reactions: Tlh97 and Viknet

LightningZ71

Golden Member
Mar 10, 2017
1,827
2,203
136
Or a smaller NPU and a smaller GPU, and a bit less cache, or a slightly larger die.

I think we can be very certain that they wouldn't have dropped below 4 performance cores even if they were still on 7nm.

It's a ridiculous argument, and a rather pointless hypothetical.

I have to agree that they would have retained four performance cores. I also agree that they would have reduced the size of the NPU and likely the SLC to compensate. Unfortunately, they would have had to sacrifice some clock speed, and power draw would have been higher across the board. The Air would have had more issues with thermal throttling, and battery life would have been shorter.

Again, I'm not saying that the M1 is a bad chip. I'm pointing out that the node advantage is more significant than many on here are claiming.
 
Reactions: Tlh97 and Carfax83

DrMrLordX

Lifer
Apr 27, 2000
22,117
11,783
136
I thought the M1 was an 8 core CPU?

It is, but it's a DynamIQ/big.LITTLE type arrangement with four smaller cores that can't perform as much work as the four larger cores. So it's more of an alternative to SMT.

Geekbench has issues and arguably shouldn't be used as the primary benchmark by as many sites as it is, but I've never seen compelling evidence that it actually "favors mobile CPUs", despite constant claims to that effect. If you read the many extended discussions on it by Torvalds and everyone at RealWorldTech, for example, there is much debate about whether the selected workloads and means of testing are really representative of meaningful, real-world workloads, but no one's decrying it as a mobile-friendly benchmark or arguing that it unfairly favors Apple.

GB5 and SPEC numbers have been slung around by some rather unsavory posters here (cough cough) as a way to try to prove that ARM - in particular Apple's ARMv8 implementations - are so good that AMD and Intel should just go out of business now because of how far behind they are. And you think I'm making that last bit up, but I'm not. You may know which posters I'm talking about, and if not . . . good on you for not having read that dross. So when we get M1 running MacOS and I see people hiding behind GB5 and SPEC numbers again, it sort of makes me twitch a bit in agony that we've got to go through all that nonsense.

GB5 in particular has always irked me since it seems not to tax my CPUs much, even in the MT test segments. Only a few of the MT subtests really get the CPU hot (indicating full use of available execution resources). Why am I supposed to care that such-and-such CPU produces higher benchmark results in a benchmark that isn't making my CPU actually do anything?

It's going to take time before there are a lot of applications compiled and optimized for M1 that we can use to compare it to CPUs utilizing other ISAs on other operating systems. One thing we can do is maybe lean on this a little bit:


Now some of you are saying, "Oh, that just benchmarks Java performance if you run Java applications", but that's not really true. Anything you can do in C, C++, Go, Rust, etc., you can more-or-less do in Java/Scala/Kotlin if that's your preference. There's overhead that can make JVM-based applications slower, but not THAT much slower, and besides, you still get a pretty faithful representation of performance if you know what you're doing when coding to make use of the JVM. I can tell you that OpenJDK has done an excellent job of optimizing the JVM for ARM in existing ARM Linux builds, and I'm confident that Azul will deliver binaries based on OpenJDK that work just as well. As an added bonus, I even put together a Java-based version of Dr. Ian Cutress' 3DPM (first mode only) ages ago that I've since run on my Snapdragon 855+ phone. If someone were to run that bench on an M1 laptop, that would be the bee's knees. And if anyone here actually buys one of the things, they could do just that. Hopefully Azul will offer their builds for free (my assumption is they'll charge for support where necessary).

edit:

in fact, you can get the Azul Zulu builds of OpenJDK here:


(I'm assuming you want the ARMv8 builds)

Let me know if anyone wants to run the Java version of 3DPM and I'll dig up a link or just embed some files. It ran quite well on a Snapdragon 855+, so it would be interesting to see how it does on M1. It's very fp-intensive. The JVM should utilize NEON.
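
For anyone curious what that kind of test looks like, here is a minimal sketch in the same spirit - to be clear, a hypothetical stand-in written for illustration, not the actual 3DPM code. It's a single-threaded, fp-heavy particle-movement loop (sqrt/sin/cos dominated), the sort of thing the JIT handles well on ARMv8:

import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch in the spirit of 3DPM's first mode (NOT the real code):
// move each particle one unit step in a uniformly random direction on the
// sphere and report throughput. Heavy on sqrt/sin/cos, light on memory.
public class ParticleBench {
    public static void main(String[] args) {
        final int particles = 100_000;
        final int steps = 1_000;
        double[] x = new double[particles];
        double[] y = new double[particles];
        double[] z = new double[particles];
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        long t0 = System.nanoTime();
        for (int s = 0; s < steps; s++) {
            for (int i = 0; i < particles; i++) {
                double theta = 2.0 * Math.PI * rnd.nextDouble(); // azimuth
                double cosPhi = 2.0 * rnd.nextDouble() - 1.0;    // uniform cos(polar)
                double sinPhi = Math.sqrt(1.0 - cosPhi * cosPhi);
                x[i] += sinPhi * Math.cos(theta);
                y[i] += sinPhi * Math.sin(theta);
                z[i] += cosPhi;
            }
        }
        long ns = System.nanoTime() - t0;
        double movesPerSec = (double) particles * steps * 1e9 / ns;
        // print a checksum so the JIT can't dead-code-eliminate the loop
        System.out.printf("%.1f million moves/sec (checksum %.3f)%n",
                movesPerSec / 1e6, x[0] + y[0] + z[0]);
    }
}

Run it a few times so the JIT warms up; spreading the particle array across threads would be the obvious next step if you want to load all eight of the M1's cores.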

Yes, he is grasping at straws trying to debunk a benchmark. If he doesn't like Geekbench, why not use SPEC CPU?

Here we go again.
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
8,141
6,838
136
Yes, he is grasping at straws trying to debunk a benchmark. If he doesn't like Geekbench, why not use SPEC CPU?

Why not use both and several other benchmarks on top of that? The idea that you can get a single benchmark to tell you everything you need or want to know is rather silly, never mind the problem of getting everyone to actually agree on what constitutes "real world use" and whether or not a benchmark adequately captures that.

People have all manner of different real world uses for these products and trying to encapsulate that in a single benchmark is only going to leave something or someone out and open itself up to the same criticisms and arguments about real world use all over again.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,347
5,471
136
GB5 and SPEC numbers have been slung around by some rather unsavory posters here (cough cough)

Resorting to personal attacks on posters who disagree with you?

as a way to try to prove that ARM - in particular Apple's ARMv8 implementations - are so good that AMD and Intel should just go out of business now because of how far behind they are. And you think I'm making that last bit up, but I'm not.

Yes, it does look like you are making that up. Where did I say anything remotely like that? If someone says that, address that actual comment, don't make vague innuendos after the fact.

So when we get M1 running MacOS and I see people hiding behind GB5 and SPEC numbers again, it sort of makes me twitch a bit in agony that we've got to go through all that nonsense.

Hiding behind these benchmarks? Just another attempt to belittle people you disagree with. Are you lumping Andrei Frumusanu of AnandTech into your attack? Geekbench is a perfectly reasonable benchmark, and so are the SPEC suites. It looks a lot more like you just want to discount any results you don't like, without providing sound backing.

Here is what I see with M1. It's an extremely impressive low power SoC and here is why:

Top-class single-threaded CPU performance, matching even desktop CPUs, while consuming a fraction of the power and running at a MUCH lower clock speed. This points to extremely powerful desktop versions to come when more power and cores are used. But this may actually be the least impressive part of the mighty M1.

Best iGPU performance: beats any PC iGPU by a large margin. While the single-threaded CPU performance merely matches best in class, the iGPU trounces it, and this is hardly even mentioned.

Superb HW media encoders: I have seen comments that it is almost like a mini Afterburner card.

Hefty ML core allocation: less demonstrable at this point, but it seems reasonable that this will also have the best ML performance of any APU/SoC.

Incredible perf/watt.

Here is my opinion on where this device stands: M1 is the best overall mobile SoC/APU on the market.

Just because I am saying it's the best overall mobile SoC/APU on the market doesn't mean it has to win every single esoteric benchmark. It won't win embarrassingly parallel benchmarks against devices with many more performance cores. While all the individual parts are quite impressive, the real win here is that the whole is greater than the sum of the parts. This device has many great subsystems, all together in one chip, that deliver more than just synthetic individual benchmark wins. They are delivering amazing real-world performance and experience in end-user applications like Final Cut Pro, where they are destroying much higher-end devices, being much more responsive and faster in real-world comparisons. All while sipping battery juice and remaining cool and quiet. This is the true benchmark that matters for a laptop.
 

jeanlain

Member
Oct 26, 2020
159
136
116
Best iGPU performance: beats any PC iGPU by a large margin. While the single-threaded CPU performance merely matches best in class, the iGPU trounces it, and this is hardly even mentioned.
Some may say that it is expected given its specs, since the M1 GPU has more EUs than the competition. But what matters is, as always, perf/W.
We know that the M1 GPU consumes 7W to achieve 39 fps on RoTR 1080p "enthusiast" settings. To achieve, say, 25 fps, which may be the performance of the best competing iGPU, how much power would the M1 GPU consume? 4W?
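For a rough bound, assuming that 7 W / 39 fps figure and naive linear scaling: 7 W × 25/39 ≈ 4.5 W at the same clocks. Since GPU power scales superlinearly with frequency (roughly f·V²), hitting 25 fps at a lower DVFS point should land below that, so ~4 W seems plausible.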
 

IvanKaramazov

Member
Jun 29, 2020
56
102
66
I know some people treat Geekbench or SPEC like they're the holy grail of benchmarks, which is misguided. Apple people tend to focus almost exclusively on GB, while in Windows land everyone is enamored with Cinebench for reasons I can't understand. Linux people are obsessed with all the obscure benchmarks Phoronix uses, some of which are insanely niche. If it seems like I'm defending Geekbench in this thread, it's only because it feels like at the exact moment in time when Arm chips started posting high numbers, a lot of PCMR types adopted a highly convenient scorn for it. SPEC was generally treated with respect, but since AnandTech began using it to compare the A-series chips with the x86 world, I see more and more people on r/hardware and similar haunts dismissing SPEC as well. The fact is all these benchmarks are mostly fine. A chip that is really fast in one tends to be fast in the others. One can quite often accurately predict scores in Cinebench or whatever just by extrapolating GB numbers, and vice versa, with a surprisingly small margin of error.

Generally speaking, I don't particularly care which company has the best or fastest CPU at the moment in any particular benchmark. I just want my devices to get my work done better, and I want a better overall experience using them. I suspect that some of the very non-PC aspects of the M1 will drive that user experience forward in interesting ways, so I'm probably most excited about Apple's stuff at the moment.

And like most of us here I just like numbers and comparing them; it's fun! Seems like a shame to make a religion out of it though.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,347
5,471
136
I know some people treat Geekbench or SPEC like they're the holy grail of benchmarks, which is misguided. Apple people tend to focus almost exclusively on GB, while in Windows land everyone is enamored with Cinebench for reasons I can't understand. Linux people are obsessed with all the obscure benchmarks Phoronix uses, some of which are insanely niche. If it seems like I'm defending Geekbench in this thread, it's only because it feels like at the exact moment in time when Arm chips started posting high numbers, a lot of PCMR types adopted a highly convenient scorn for it. SPEC was generally treated with respect, but since AnandTech began using it to compare the A-series chips with the x86 world, I see more and more people on r/hardware and similar haunts dismissing SPEC as well. The fact is all these benchmarks are mostly fine. A chip that is really fast in one tends to be fast in the others. One can quite often accurately predict scores in Cinebench or whatever just by extrapolating GB numbers, and vice versa, with a surprisingly small margin of error.

Agreed, and some additional thoughts.

Geekbench was used in Apple comparisons because it was the primary cross-platform benchmark that would actually run on iPhones/iPads. I can't even think of anything else that easily fits the bill, unless you want to compile your own. Thus everyone used Geekbench. It's understandable that when it's pretty much all you have, you use it.

IMO, scorn for Geekbench grew as recent generations of iPhone/iPad showed the iPhone delivering desktop performance. The mindset developed that this was "too good to be true" performance for a smartphone SoC, and therefore Geekbench must be faulty.

BTW, I am not an "Apple person". I feel like some kind of endangered species these days, in that I have never owned a single Apple product in my life. The last Mac I used was in a university multimedia class in the '90s, and it was 68K-based.

I also found myself wondering the same about Geekbench: was it misrepresenting Apple SoC performance?

IMO this has been one of the revelations of the M1 Macs. Geekbench was not faulty. Apple SoC performance is legitimately desktop class.

Cinebench I don't remember being prominent until Ryzen hit the scene. Since then it seems to be the benchmark of choice for AMD, and "AMD people", to show the core-count advantage over Intel. If anything, it's even less applicable than Geekbench to the real world. Geekbench is a composite benchmark; CB is just one single task. In a way, it's one of the most simple embarrassingly parallel benchmarks out there.

AFAIK, the Phoronix benchmarks are exclusively open source, for the open-source community. They most often seem to be used to show that Linux is a faster OS than Windows (2nd place) and macOS (3rd place). In the comparisons so far, the M1 Mac has faced the double burden of having the "slowest OS" vs. hand-optimized x86 on the "fastest OS" (Linux) in this suite.
 
Last edited:
Reactions: jeanlain

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
Cinebench I don't remember being prominent until Ryzen hit the scene. Since then it seems to be the benchmark of choice for AMD, and "AMD people", to show the core-count advantage over Intel. If anything, it's even less applicable than Geekbench to the real world. Geekbench is a composite benchmark; CB is just one single task. In a way, it's one of the most simple embarrassingly parallel benchmarks out there.
Just a reminder: AnandTech has been using some version of Cinebench as one of its main tests going all the way back to 2006.
 
Reactions: Tlh97 and Mopetar

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
I know it was used, but it wasn't so prominent.
Maybe.

I am not going to argue other people's subjective experiences, so without external data like Google search trends I am only going to speak to my own experience (understanding that others' subjective experience may be different).

I subjectively felt it was used prominently around 2012 with the Cedar Trail platform (Saltwell architecture), and later with 2013's Bay Trail (Silvermont architecture), showing how those slow Atom cores sucked against the "big" Intel cores, but at least Silvermont was catching up and might someday be decent.

But its use in Intel vs. AMD comparisons only started to occur when AMD had a good architecture with Zen derivatives instead of Bulldozer derivatives.

2006 to 2008 was before my time; I know of it, but I did not get jacked into the deeper comparisons till about 2008.
 
Reactions: Tlh97 and coercitiv

coercitiv

Diamond Member
Jan 24, 2014
6,727
14,499
136
But its use in Intel vs. AMD comparisons only started to occur when AMD had a good architecture with Zen derivatives instead of Bulldozer derivatives.
I distinctly remember CB being used as an evaluation and comparison tool between Intel and AMD silicon well before Zen, at the very least during the Kaveri and Carrizo mobile product launches.

More importantly than that, Intel themselves used Cinebench as proof of performance/watt improvements in their initial Broadwell demo (2013). The irony of that demo from long ago can easily be summed up by the last sentence in the AnandTech article:
A physical size reduction is necessary to get Broadwell into fanless tablet designs that can have competitive battery capacities to ARM based designs.
 
Reactions: Tlh97 and Mopetar

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
More importantly than that, Intel themselves used Cinebench as proof of performance/watt improvements in their initial Broadwell demo (2013). The irony of that demo from long ago can easily be summed up by the last sentence in the AnandTech article:
By Jove 7 years ago. 7 [expletive] years ago!!!

Now, 14nm did not mean the same universal performance for all 14nm products; it did get better over the years...

But such stagnation... from 2013 to 2018, when the rest of the silicon marketplace was not stagnant over that time. (The only good Intel 10nm silicon started around Sept 2019; there was earlier Intel 10nm, but it performed worse than 14nm and did not have an integrated GPU, so it was sold only so Intel could tell Wall Street it had 10nm in 2018.)

I was so hyped way back when, and I feel so "old" nowadays 😑 (this is 2020 malaise speaking).
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
8,141
6,838
136
Arguing over whether Cinebench is any good is just as pointless as arguments about Geekbench or SPEC. It's just another benchmark and should be treated only as indicative of what it measures and the kinds of workloads that might be similar. It's not as though Cinebench was the only thing that Ryzen did well on, but it did a pretty good job of showcasing what kinds of workloads the chip handled well, so it's no surprise that it gets used as a stand-in for all of those other things. If it seems more prominent, it's likely for that reason and not because it really is any more prominent.
 

Doug S

Platinum Member
Feb 8, 2020
2,833
4,819
136
I'm not so sure about this. With all the fanfare (especially from Andrei F.) surrounding Apple's ultra-wide designs, I think people are being premature in assuming that wider is the way to go, but that could be because of their novelty more than anything. Zen 3 is a four-wide design that is still able to achieve incredible performance relative to the eight-wide M1, despite being on a bigger node and using a chiplet design.

What people miss is that a wider design only matters when the code actually allows many instructions to be executed in parallel. Wider designs mostly shine during certain tight loops that can be unrolled/register-renamed, and there are still diminishing returns the wider you get. Those tight loops also happen to be the place where higher clock rates pretty much always work to your advantage.

Now obviously Apple believes further benefit remains for going wider (at least for the codebases they consider important) since they keep doing so, but I agree that people shouldn't just be assuming wider is better. There are a lot of things that trip up a CPU and limit its performance, and thus a lot of ways to improve it.

Anyone thinking "oh, Apple's SoCs perform as well as they do mainly because they are so wide" is missing all the other things they've done to reduce cache latency (though obviously, going lower when measured in cycles is easier when your cycle time is 1/3.2 GHz instead of 1/5.0 GHz), improve branch prediction (apparently better than Intel's, which had been considered the gold standard in prediction efficiency), and countless other little things that alone probably aren't even close to a 1% improvement but collectively add up.

Going wider alone isn't going to get you much more than getting another hundred MHz higher in clock rate will. You still need to do a lot of that other stuff to avoid having other bottlenecks cancel out most of the gain from a wider or faster CPU.
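
To make that concrete, here's a toy Java sketch (mine, not anything from the thread): both methods do the same number of fp additions, but one is a single serial dependency chain while the other exposes four independent chains. A wide out-of-order core can overlap the independent chains, while the serial version is limited by fp-add latency no matter how wide the machine is:

// Toy ILP demo: same amount of work, different instruction-level parallelism.
public class IlpDemo {
    static double serial(double[] a) {
        double sum = 0.0;
        for (double v : a) sum += v;   // every add depends on the previous one
        return sum;
    }

    static double fourChains(double[] a) {
        double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        for (int i = 0; i < a.length; i += 4) {
            s0 += a[i];                // four independent dependency chains
            s1 += a[i + 1];            // that the out-of-order scheduler can
            s2 += a[i + 2];            // execute in parallel
            s3 += a[i + 3];
        }
        return s0 + s1 + s2 + s3;
    }

    public static void main(String[] args) {
        double[] a = new double[1 << 24];          // 16M doubles, multiple of 4
        java.util.Arrays.fill(a, 1.0);
        for (int i = 0; i < 5; i++) { serial(a); fourChains(a); }  // JIT warm-up
        long t = System.nanoTime();
        double r1 = serial(a);
        long t1 = System.nanoTime() - t;
        t = System.nanoTime();
        double r2 = fourChains(a);
        long t2 = System.nanoTime() - t;
        System.out.printf("serial: %.1f ms, four chains: %.1f ms (sums %.0f / %.0f)%n",
                t1 / 1e6, t2 / 1e6, r1, r2);
    }
}

On anything reasonably wide, the four-chain version should come out several times faster; the gap, not the absolute numbers, is the point, and it only appears because the code exposes parallelism for the width to exploit.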
 

mikegg

Golden Member
Jan 30, 2010
1,847
471
136
Source:



Posting just links or images without comment is not allowed in CPU forums. This is your one zero point warning.


esquared
Anandtech Forum Director
 
Last edited by a moderator:
Reactions: scannall

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
M1 is 4 big cores + 4 little cores (Firestorm + Icestorm; the Firestorm cores use more power and are faster, while the Icestorm cores are the energy-efficient ones that also take up a smaller amount of die area compared to the "large" cores).

This is kind of dancing around the terminology, if you ask me. If the smaller, more efficient cores are being tapped during multithreaded workloads, then the M1 is an 8-core CPU no matter which way you slice it.

The fact that a 65W 6-core (12-thread) CPU is competing with a 15W 4+4-core CPU is more a compliment to the 15W mobile chip.

Yes, I've already stated multiple times that the M1 is an impressive CPU, especially from a performance per watt perspective.

But Apple has marketed the M1 as having the "world's fastest CPU core", so they should be able to back that up, if you ask me:

 
Last edited:
Reactions: Tlh97 and amrnuke

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Talking about browsers: the M1 mops the floor with every single browser benchmark out there. Even real-world performance impressions of page loads, scrolling, and running multiple tabs show a major uplift in performance. Originally people thought it was because Safari was so well optimized, but guess what? Chrome was just compiled for the M1 and shows a similarly superior score.

Browsers have tons of hardware acceleration these days so that's not really surprising.

And you cannot dismiss all the other real-world benchmarks out there. Lightroom and Premiere Pro running under Rosetta in many cases outperform the best x86 processor out there.

Again, hardware acceleration. Not to say that takes away from the M1's blistering performance, but if you look at the recent benchmarks that @senttoschool posted above, you can see why hardware acceleration can make valid architectural comparisons futile.
 
Reactions: beginner99

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
But mobile CPUs throttle because they are placed in a chassis that cannot dissipate heat well (y'know, a smartphone), not because of any flaw in the chips themselves. Given that the M1 with active cooling doesn't throttle anyway, I don't see how that favors it vs. x86.

This is a valid point, and it opens up a broader conversation about the actual wisdom of using a mobile-oriented benchmark to assess the performance of a desktop/workstation/server-grade CPU.

I know I've said it multiple times, but when I ran Geekbench on my old 6900K, it didn't even get my CPU to boost into turbo clocks, nor did it recognize my CPU's quad-channel memory. That was a while ago, but it left a bad taste in my mouth as to the worth of Geekbench... for desktop CPUs, at any rate.
 