Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
23,725
1,261
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from the GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from maybe slight clock speed differences occasionally).

EDIT:



M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC (H.265), and ProRes

M3 Family discussion here:


M4 Family discussion here:

 

SpudLobby

Senior member
May 18, 2022
912
611
106
What happens if you allow SME?
In PRINCIPLE LLVM should
- detect loops that look like matrix multiples or similar (and also appropriate long vector loops)
- map them to linalg operations
- which should then be lowered to SME or SSVE if the compiler has been given permission to do so

The various steps in this process are newish, in the sense that they've been written over the past two or three years, and haven't had much real world testing. But in THEORY they should work.
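For concreteness, here is the kind of loop nest that pipeline is aimed at. This is purely an illustrative sketch (not anyone's actual code), and whether it really lowers to SME/SSVE depends entirely on the compiler version and the target flags (something like -O3 -march=armv9-a+sme2 on a recent clang; the exact flag spelling varies by release):

Code:
// Illustrative C++ sketch: a plain matrix multiply that the vectorizer (and, via
// the MLIR/linalg path, a matrix-aware lowering) is supposed to recognize.
// __restrict promises the arrays don't alias, which is usually a precondition.
#include <cstddef>

void matmul(const float* __restrict a, const float* __restrict b,
            float* __restrict c, std::size_t n) {
    // C = A * B, all n x n, row-major.
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            float acc = 0.0f;
            for (std::size_t k = 0; k < n; ++k)
                acc += a[i * n + k] * b[k * n + j];
            c[i * n + j] = acc;
        }
}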

You could also try the multiversioning support, as described here,
for a single function that looks like it should use SSVE2 or SME2, and see what happens.
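To make the multiversioning idea concrete, a sketch of what it looks like in source, assuming a toolchain that implements the AArch64 (ACLE) function-multiversioning attributes; the attribute spelling and the feature strings ("sve2", "sme") vary by compiler release, so treat this as illustrative rather than a recipe:

Code:
// Illustrative only: function multiversioning. The compiler emits several copies
// of the function and one is selected at runtime based on detected CPU features;
// feature-string names and support depend on the toolchain.
__attribute__((target_clones("sve2", "sme", "default")))
void saxpy(float* __restrict y, const float* __restrict x, float a, int n) {
    for (int i = 0; i < n; ++i)
        y[i] += a * x[i];
}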
Isn’t autovectorization still pretty shoddy for SVE?
 

name99

Senior member
Sep 11, 2010
427
324
136
Isn’t autovectorization still pretty shoddy for SVE?
I JUST GAVE a reply answering exactly that question.
If people refuse to look at the references the first time, why would I bother to answer again?

If you're interested in scoring debating points around the precise meaning of "pretty shoddy", well go find someone to fight with.
Is 2/3 or so of relevant loops (in 2022, based on work probably in 2021, and with on-going improvement since then) "pretty shoddy" or "not bad"?

 

SarahKerrigan

Senior member
Oct 12, 2014
539
1,161
136
I JUST GAVE a reply answering exactly that question.
If people refuse to look at the references the first time, why would I bother to answer again?

If you're interested in scoring debating points around the precise meaning of "pretty shoddy", well go find someone to fight with.
Is 2/3 or so of relevant loops (in 2022, based on work probably in 2021, and with on-going improvement since then) "pretty shoddy" or "not bad"?


This kind of tone is beneath you, Maynard.

Your linked paper says that manual vectorization of a test suite of loops explicitly designed to be vectorizable gets another 40% boost over the tested vectorizing compiler. I don't find that enormously whelming given my experience with real-world autovec across a few compilers (Intel, LLVM, GCC, NEC.) It's clearly doing a decent job of autovectorization, but a decent job doesn't actually buy you that much across a large spectrum of real applications IME.

It is nifty that ARM is able to make a Fortran Sudoku solver in the SPEC suite go twice as fast with autovectorization, but it's also the smallest subtest in SPECint. That's not to say it's bad or that it's unimpressive, or that I'm ignoring the smaller but still substantial improvements in other subtests in previous LLVM versions, but autovectorization is a long, long way from a solved problem.
 

SpudLobby

Senior member
May 18, 2022
912
611
106
The ARM blog doesn't think so.

If you go through their annual changes to LLVM and GCC, every year they call out some big change in one of the SPEC benchmarks enabled by some new vectorization, though each year it tends to be a different function.

eg

It seems like the linalg stuff might be lagging (in the compilers being used) as opposed to the leading edge of Flang 18 and LLVM 18, but that just means we should see big boosts after WWDC? (Look at e.g. the Flang numbers at the above link.)

The real question is how aggressively (and sensibly) the compiler routes to SME and SSVE, and the answer to both may be "not at all" and "terribly", until Xcode 16 (which, while surely far from perfect, will presumably also at least make some sort of intelligent effort when given an M4 target).

This kind of tone is beneath you, Maynard.

Your linked paper says that manual vectorization of a test suite of loops explicitly designed to be vectorizable gets another 40% boost over the tested vectorizing compiler. I don't find that enormously whelming given my experience with real-world autovec across a few compilers (Intel, LLVM, GCC, NEC.)


It's clearly doing a decent job of autovectorization, but a decent job doesn't actually buy you that much across a large spectrum of real applications IME.
Exactly this: doing a decent job in a narrow set of benchmarks is fine but just because we use those as proxies for general performance, I don’t think it directly implies codegen is similar enough for autovectorization elsewhere.
It is nifty that ARM is able to make a Fortran Sudoku solver in the SPEC suite go twice as fast with autovectorization, but it's also the smallest subtest in SPECint.
Lol, this is exactly the kind of narrow domain-specific emission of autovectorization I had in mind. Thank you Sarah.
That's not to say it's bad or that it's unimpressive, or that I'm ignoring the smaller but still substantial improvements in other subtests in previous LLVM versions, but autovectorization is a long, long way from a solved problem.
Yep!
 
Reactions: SarahKerrigan

SpudLobby

Senior member
May 18, 2022
912
611
106
I am actually crying with laughter at this, lol. This is amazing.

Maynard, you have a lot of insight and much to offer the hardware community, but this sort of characteristic skittishness, where you attempt to be condescending to people about things you yourself clearly haven’t thought about as much as you imply, is tiresome and I think counterproductive.

Autovectorization is known to be a tough problem by almost all “real” accounts I’ve seen. Of course Arm will play it up, and by no means am I suggesting it is a joke or on a decline from here on out (the opposite, it seems), but come on man. This is eerily reminiscent of reading too much into (some) patents.
 

Doug S

Platinum Member
Feb 8, 2020
2,420
3,913
136
I am actually crying with laughter at this, lol. This is amazing.

Maynard, you have a lot of insight and much to offer the hardware community, but this sort of characteristic skittishness, where you attempt to be condescending to people about things you yourself clearly haven’t thought about as much as you imply, is tiresome and I think counterproductive.

Autovectorization is known to be a tough problem by almost all “real” accounts I’ve seen. Of course Arm will play it up, and by no means am I suggesting it is a joke or on a decline from here on out (the opposite, it seems), but come on man. This is eerily reminiscent of reading too much into (some) patents.


Yep, he obviously has not tried to get a compiler to convert real world code into SIMD instructions. It is a giant pain in the butt, unless you can figure out exactly what the compiler is looking for (maybe that's something ChatGPT is good for)

The idea that looking at compiler changelogs and seeing a mention of "SME support" means that SPEC would compile to use it is just so laughable I can't even. Clearly he has never tried to compile SPEC, and if he has, has never looked at the assembly output to see what the compiler is doing and wondered "why the heck did it do it that way when it should be obvious there's a better way?"

There is approximately 0% chance that anything in SPEC will be compiled to use SME with today's version, and an exactly 0% chance that Geekerwan's SPEC results compiled to SME because he told us. Maynard is fighting the same battle on RWT, and everyone there disagrees with him too. Not sure why he's fighting this particular battle. You'd think he was a compiler writer defending his craft, but he doesn't seem to have a horse in the fight at all other than this is the initial position he took so now he's going to defend it to the death no matter how many people tell him he's wrong.
 

poke01

Golden Member
Mar 8, 2022
1,201
1,389
106
Fwiw, Geekerwan seems to be measuring now with internal Apple APIs. There are other figures putting total M4 power at 11W, which is what I actually expect it’s running at platform level (idle normalized).
Agree, 11 watts should be platform level. Using Apple Internal APIs is okay when only comparing to Apple SoCs.

Geekerwan will probably use VRMs when testing 8 Gen 4 vs A18 Pro.
 

SpudLobby

Senior member
May 18, 2022
912
611
106
Yep, he obviously has not tried to get a compiler to convert real world code into SIMD instructions. It is a giant pain in the butt, unless you can figure out exactly what the compiler is looking for (maybe that's something ChatGPT is good for)
Yep. Also, totally agree re: LLMs: this sort of natural language with a mild bit of heuristic reasoning could be a significant enhancement to compilers. It’s just insane to act like it’s a standard, solved problem. We would be living in a somewhat different world today re: CPU performance if it were, in a certain sense.
The idea that looking at compiler changelogs and seeing a mention of "SME support" means that SPEC would compile to use it is just so laughable I can't even. Clearly he has never tried to compile SPEC, and if he has, has never looked at the assembly output to see what the compiler is doing and wondered "why the heck did it do it that way when it should be obvious there's a better way?
Yeah, dude, I literally was crying laughing at how funny this was. Like, I almost feel sadistic at how predictable it was that he’d BS (like Adroc but on the other end of the fence), except I was also confused that he could really believe this. For a moment I thought he might come to his senses.
There is approximately 0% chance that anything in SPEC will be compiled to use SME with today's version, and an exactly 0% chance that Geekerwan's SPEC results compiled to SME because he told us. Maynard is fighting the same battle on RWT, and everyone there disagrees with him too.
Hahah. I haven’t kept up, but I’ve seen him do similar. Will check it out. Of course I agree there is zero chance SPEC is throwing this in anytime soon on today’s model.
Not sure why he's fighting this particular battle. You'd think he was a compiler writer defending his craft, but he doesn't seem to have a horse in the fight at all other than this is the initial position he took so now he's going to defend it to the death no matter how many people tell him he's wrong.
My guess is it’s that, and, as usual, an interest in believing whatever Apple does will be revolutionary overnight or whatever they don’t adopt is doomed and useless. I mean, Maynard I think has been in an awkward position after he’s hyped up how far ahead Apple would be while IPC improvements have been a bit slower, if you will (still significantly ahead overall, yes, but a slowing lead), and now they’re standardizing a significant little low-latency accelerator, and he just can’t help himself.

It’s less blatantly manipulative than with some others, and maybe more just stubborn, and Maynard is a smart guy, but it’s every bit as unnecessary, annoying and abrasive.

People here got mad at me when I talked about “the problem”, and AMD/Apple’s fan bases at times, but this is it fellas, right here. The loudest and most numerous. You don’t see this type of indignity from anybody else these days.
 

SpudLobby

Senior member
May 18, 2022
912
611
106
Agree, 11 watts should be platform level. Using Apple Internal APIs is okay when only comparing to Apple SoCs.

Geekerwan will probably use VRMs when testing 8 Gen 4 vs A18 Pro.
Yes, thank you. Agreed. It’s extremely stupid that people don’t get this, though; the inconsistencies are so stark you have to be a motivated fan to believe otherwise. I am fine with the APIs or powermetrics for Apple-only comparisons, but everything else needs to be VRM or the wall.
 

SpudLobby

Senior member
May 18, 2022
912
611
106
Apple blowing up their power consumption to 11 watts makes Qualcomm not look so bad with their 14W power consumption.
Mx chip platform power for ST has always been more than ~5W, though. So you have to keep that in mind. More like 5-9W (idle normalized). The second thing is that Apple is getting more performance: around 25-30% more at 20-22% less power simultaneously, or about 30-35% more at the same power (minus SME stuff). That's still more than an easy node change and frequency boost, but with some modest IPC gains plus process gains alone, they should be able to narrow the gap.

But yes dude, your freakouts on this were unnecessary. You should freak out if 8 Gen 4 sucks; that’s our tell for whether Oryon, where they had extra time to do the physical design and use N3E, sucks in phones. That, and if V2 is like a single-digit IPC improvement. Otherwise, time to chill out.
 
Reactions: carancho

SpudLobby

Senior member
May 18, 2022
912
611
106
I would add android fan bases too in there.
Agree but in my experience even on Reddit they are far more willing to be objective about hardware itself than AMD/Apple, it’s a humongous upgrade. Imperfect though and it might change if QC starts pounding Apple in CPUs or X5 closes gaps.
 

FlameTail

Platinum Member
Dec 15, 2021
2,903
1,636
106
Agree but in my experience even on Reddit they are far more willing to be objective about hardware itself than AMD/Apple, it’s a humongous upgrade. Imperfect though and it might change if QC starts pounding Apple in CPUs or X5 closes gaps.
r/Android is more critical of Android phones than even r/Apple. It's ironic.

But it might not necessarily be a bad thing, because it's coming from a place of open-mindedness.
 
Reactions: SpudLobby

FlameTail

Platinum Member
Dec 15, 2021
2,903
1,636
106
But yes dude, your freakouts on this were unnecessary. You should freak out if 8 Gen 4 sucks; that’s our tell for whether Oryon, where they had extra time to do the physical design and use N3E, sucks in phones. That, and if V2 is like a single-digit IPC improvement. Otherwise, time to chill out.
The rumour mill is not optimistic. We'll find out when the time comes.
X5 closes gaps.
When is X5 being announced? Isn't it sometime this month?

I am tired of waiting. Reveal Blackhawk!
 

name99

Senior member
Sep 11, 2010
427
324
136
This kind of tone is beneath you, Maynard.

Your linked paper says that manual vectorization of a test suite of loops explicitly designed to be vectorizable gets another 40% boost over the tested vectorizing compiler. I don't find that enormously whelming given my experience with real-world autovec across a few compilers (Intel, LLVM, GCC, NEC.) It's clearly doing a decent job of autovectorization, but a decent job doesn't actually buy you that much across a large spectrum of real applications IME.

It is nifty that ARM is able to make a Fortran Sudoku solver in the SPEC suite go twice as fast with autovectorization, but it's also the smallest subtest in SPECint. That's not to say it's bad or that it's unimpressive, or that I'm ignoring the smaller but still substantial improvements in other subtests in previous LLVM versions, but autovectorization is a long, long way from a solved problem.
Like I said, now we are getting into semantics about what counts as "pretty shoddy".

I'm frustrated that people (frequently the same people) get excited about some chip being able to boost by 100MHz, but still insist that a free boost of their code by 5% or so from the compiler is not interesting...

Yes vectorization is not a solution to the problem of traversing a complex data structure faster.
But it often *is* a solution to a small chunk of throughput code otherwise embedded in a mess of latency code. And yet people refuse to acknowledge this because of a mindset that's still stuck in 1997 when the only way to get value from MMX was via intrinsics.

Plenty of stuff in computer engineering is not a solved problem. But insisting on perfection gets us nowhere! If heuristics frequently work well, why not acknowledge that?
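To make that "throughput island inside latency code" point concrete, here is a hypothetical example (not from any real codebase): the pointer-chasing walk is latency-bound and not vectorizable, but the small dense kernel inside it is, and the compiler can pick that part up for free.

Code:
// Hypothetical illustration: a latency-bound list walk wrapping a small
// throughput kernel. The walk can't be vectorized; the inner loop can.
struct Node {
    float* samples;   // dense buffer attached to this node
    int    count;
    Node*  next;
};

float total_energy(const Node* head) {
    float total = 0.0f;
    for (const Node* n = head; n != nullptr; n = n->next) {   // latency-bound walk
        float acc = 0.0f;
        for (int i = 0; i < n->count; ++i)                    // throughput kernel:
            acc += n->samples[i] * n->samples[i];             // vectorizes cleanly
        total += acc;                                         // (given the usual FP-reduction flags)
    }
    return total;
}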
 
Reactions: Tlh97 and Mopetar

name99

Senior member
Sep 11, 2010
427
324
136
Yep, he obviously has not tried to get a compiler to convert real world code into SIMD instructions. It is a giant pain in the butt, unless you can figure out exactly what the compiler is looking for (maybe that's something ChatGPT is good for)

The idea that looking at compiler changelogs and seeing a mention of "SME support" means that SPEC would compile to use it is just so laughable I can't even. Clearly he has never tried to compile SPEC, and if he has, has never looked at the assembly output to see what the compiler is doing and wondered "why the heck did it do it that way when it should be obvious there's a better way?"

There is approximately 0% chance that anything in SPEC will be compiled to use SME with today's version, and an exactly 0% chance that Geekerwan's SPEC results compiled to SME because he told us. Maynard is fighting the same battle on RWT, and everyone there disagrees with him too. Not sure why he's fighting this particular battle. You'd think he was a compiler writer defending his craft, but he doesn't seem to have a horse in the fight at all other than this is the initial position he took so now he's going to defend it to the death no matter how many people tell him he's wrong.
I'm curious.
Smart people at LLVM, for example, have been working on Linalg for 4+ years.
https://mlir.llvm.org/docs/Dialects/Linalg/
What do you think drives them? What do you think their endgame is?

Likewise for SVE and SME. You expect the outcome hoped for ten years from now is that everyone is writing assembly for these?

As pointed out I don't have a dog in this fight in terms of any stake in LLVM or GCC. I just don't understand why people look at something (anything) new -- we are seeing this right now with AI -- and the first response is to mock it.
Or this weird motte-and-bailey thing where someone will admit, when pushed, that sure, certain loops can be and are being auto-vectorized every day but, hey, this super-complicated loop can't be auto-vectorized therefore ?something? It's the same as in AI: "Well sure, translation is now good enough to be used every day but it can't poetically translate the Iliad into Japanese therefore ?something?"
 

SarahKerrigan

Senior member
Oct 12, 2014
539
1,161
136
I'm curious.
Smart people at LLVM, for example, have been working on Linalg for 4+ years.
https://mlir.llvm.org/docs/Dialects/Linalg/
What do you think drives them? What do you think their endgame is?

Likewise for SVE and SME. You expect the outcome hoped for ten years from now is that everyone is writing assembly for these?

As pointed out I don't have a dog in this fight in terms of any stake in LLVM or GCC. I just don't understand why people look at something (anything) new -- we are seeing this right now with AI -- and the first response is to mock it.
Or this weird motte-and-bailey thing where someone will admit, when pushed, that sure, certain loops can be and are being auto-vectorized every day but, hey, this super-complicated loop can't be auto-vectorized therefore ?something? It's the same as in AI: "Well sure, translation is now good enough to be used every day but it can't poetically translate the Iliad into Japanese therefore ?something?"

"Autovec exists" is just not the deep and controversial and heterodox revelation you seem to think it is. Like... we know it exists. It's even improved a fair bit in recent years. That does not mean that compilers are gonna start pulling amazing autovectorized or auto-matrixized (is that a word?) codegen out of most random apps. Hell, "this is about to get really good" was basically the argument for Itanium loop pipelining back in the day, which in a lot of ways has similar concerns for compilers as autovec does (detection of inter-iteration dependencies, detection of loop length.)

Did Intel's SoA-AoS conversion trick with libquantum raise eyebrows? Did it have legitimate uses? Yes to both. But it was looked at critically. I'm comfortable both being impressed by the clear gains being made with newer compilers and hesitant to generalize from them.
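For readers who don't remember the libquantum episode: the transformation in question was a data-layout change of roughly the following shape. This is an illustrative sketch, not the actual libquantum code or the compiler's output.

Code:
// Illustrative AoS vs SoA. A loop that touches only one field of an
// array-of-structures strides through memory and wastes cache and vector lanes;
// splitting the structure into parallel arrays gives unit-stride access.
#include <cstdint>
#include <vector>

struct NodeAoS { std::uint64_t state; float amplitude; };   // interleaved fields

struct NodesSoA {                                           // each field contiguous
    std::vector<std::uint64_t> state;
    std::vector<float>         amplitude;
};

void flip_bit_aos(std::vector<NodeAoS>& nodes, std::uint64_t mask) {
    for (auto& n : nodes) n.state ^= mask;        // strided access, vectorizes poorly
}

void flip_bit_soa(NodesSoA& nodes, std::uint64_t mask) {
    for (auto& s : nodes.state) s ^= mask;        // unit-stride, vectorizes trivially
}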
 

name99

Senior member
Sep 11, 2010
427
324
136
I'm curious.
Smart people at LLVM, for example, have been working on Linalg for 4+ years.
https://mlir.llvm.org/docs/Dialects/Linalg/
What do you think drives them? What do you think their endgame is?

Likewise for SVE and SME. You expect the outcome hoped for ten years from now is that everyone is writing assembly for these?

As pointed out I don't have a dog in this fight in terms of any stake in LLVM or GCC. I just don't understand why people look at something (anything) new -- we are seeing this right now with AI -- and the first response is to mock it.
Or this weird motte-and-bailey thing where someone will admit, when pushed, that sure, certain loops can be and are being auto-vectorized every day but, hey, this super-complicated loop can't be auto-vectorized therefore ?something? It's the same as in AI: "Well sure, translation is now good enough to be used every day but it can't poetically translate the Iliad into Japanese therefore ?something?"
You can read the full set of tweets here:
and assume I am wrong.
OR you can actually understand what I am saying, which is equivalent to what Chris Lattner and many others are saying:
you can only do so much if you are forced to use C's low-level abstractions (in particular specification of data layout), but no one says C is the only language in the world. Even with Fortran you escape some of these shackles, and as you move up to truly abstracted languages like Mojo or Python or Mathematica the compiler has vastly more freedom.

But even IN the low-level C case, you can certainly do a bunch of easy obvious things to make a substantial amount of autovectorization happen.
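As a sketch of those "easy obvious things" in plain C-style code (hypothetical example; the pragma is a compiler-specific hint, not a guarantee, and floating-point reductions usually also need the appropriate fast-math or reduction options before the compiler will reorder them):

Code:
// Small source-level changes that typically unlock autovectorization without
// intrinsics: non-aliasing pointers, a simple countable loop with no calls or
// early exits, a local accumulator, and (optionally) an explicit hint.
float dot(const float* __restrict a, const float* __restrict b, int n) {
    float acc = 0.0f;
#pragma clang loop vectorize(enable)   // clang hint; GCC has e.g. #pragma GCC ivdep
    for (int i = 0; i < n; ++i)
        acc += a[i] * b[i];
    return acc;
}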
 

SarahKerrigan

Senior member
Oct 12, 2014
539
1,161
136
You can read the full set of tweets here:
and assume I am wrong.
OR you can actually understand what I am saying, which is equivalent to what Chris Lattner and many others are saying:
you can only do so much if you are forced to use C's low-level abstractions (in particular specification of data layout), but no one says C is the only language in the world. Even with Fortran you escape some of these shackles, and as you move up to truly abstracted languages like Mojo or Python or Mathematica the compiler has vastly more freedom.

But even IN the low-level C case, you can certainly do a bunch of easy obvious things to make a substantial amount of autovectorization happen.

I am in broad agreement with this.

Fortran's "elemental" keyword was a huge leap on this front, to the extent that it is used. C barely has arrays at all - really just syntactic sugar around bare pointers - and it makes detecting the effects of a given dereference unnecessarily hard.

The belligerent tone remains inappropriate.
 

name99

Senior member
Sep 11, 2010
427
324
136
"Autovec exists" is just not the deep and controversial and heterodox revelation you seem to think it is. Like... we know it exists. It's even improved a fair bit in recent years. That does not mean that compilers are gonna start pulling amazing autovectorized or auto-matrixized (is that a word?) codegen out of most random apps. Hell, "this is about to get really good" was basically the argument for Itanium loop pipelining back in the day, which in a lot of ways has similar concerns for compilers as autovec does (detection of inter-iteration dependencies, detection of loop length.)

Did Intel's SoA-AoS conversion trick with libquantum raise eyebrows? Did it have legitimate uses? Yes to both. But it was looked at critically. I'm comfortable both being impressed by the clear gains being made with newer compilers and hesitant to generalize from them.
So then what are we fighting about?

I reacted to the claim "Isn’t autovectorization still pretty shoddy for SVE?" by saying "no it's not pretty shoddy, for any reasonable definition of the term, and here's my evidence for why".

And was piled on for doing so. But it seems that you AGREE with what I am saying.

I never claimed that any random code anywhere could be recognized as a matrix multiply. I claimed that it is a specific goal of LLVM (and has been for quite a few years) to handle appropriately written matrix (or vector) code on the optimal hardware

You seem to be hung up on the idea of "autovectorization means I take the crappiest spaghetti C code, feed it into the compiler, and SME comes out".
But I don't care about that. What I care about is that a person writes code using appropriate abstractions (for example a templated class in C++) that allows them to write what they want and still compile to NEON or SME. This is perfectly feasible and being done today.

I don't understand why the uninteresting case of "take some bad code that is 40 years old and refuse to modify it" rather than "use modern languages and modern methodology" is your benchmark of success.
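A minimal sketch of that "appropriate abstraction" idea (hypothetical; in practice libraries like Eigen, or std::transform over contiguous containers, play this role): the caller states intent once, and the instantiated loop is simple, alias-free, and countable, so the backend is free to emit NEON, SVE, or, in principle, streaming-mode code for it.

Code:
// Hypothetical sketch of writing against a generic abstraction and letting the
// compiler choose the SIMD target for the instantiated elementwise loop.
#include <cstddef>
#include <vector>

template <typename T, typename Op>
void transform_each(std::vector<T>& dst, const std::vector<T>& src, Op op) {
    const std::size_t n = dst.size() < src.size() ? dst.size() : src.size();
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = op(src[i]);          // simple elementwise loop: vectorizer-friendly
}

// Usage: the intent ("scale every element") is expressed once, portably.
void scale(std::vector<float>& out, const std::vector<float>& in, float k) {
    transform_each(out, in, [k](float x) { return x * k; });
}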
 

SarahKerrigan

Senior member
Oct 12, 2014
539
1,161
136
So then what are we fighting about?

I reacted to the claim "Isn’t autovectorization still pretty shoddy for SVE?" by saying "no it's not pretty shoddy, for any reasonable definition of the term, and here's my evidence for why".

And was piled on for doing so. But it seems that you AGREE with what I am saying.

I never claimed that any random code anywhere could be recognized as a matrix multiply. I claimed that it is a specific goal of LLVM (and has been for quite a few years) to handle appropriately written matrix (or vector) code on the optimal hardware

You seem to be hung up on the idea of "autovectorization means I take the crappiest spaghetti C code, feed it into the compiler, and SME comes out".
But I don't care about that. What I care about is that a person writes code using appropriate abstractions (for example a templated class in C++) that allows them to write what they want and still compile to NEON or SME. This is perfectly feasible and being done today.

I don't understand why the uninteresting case of "take some bad code that is 40 years old and refuse to modify it" rather than "use modern languages and modern methodology" is your benchmark of success.

@Doug S expressed skepticism that SPEC is, at this point, getting anything out of SME with any actually-existing compiler. I agree with him; I seriously doubt it is. (SPECfp might be a little bit, but probably not to any sort of breakthrough degree.)

Feel free to do a run yourself and check. That would be interesting and valuable information.
 