Info LPDDR6 @ Q3-2024: Mother of All CPU Upgrades

moinmoin

Diamond Member
Jun 1, 2017
4,993
7,763
136
I wonder whether this means AMD's Sound Wave won't happen at all, or will be repurposed the way the Steam Deck's APU was?

We know that:
  • Sound Wave was only designed for MS as an alternative to this Nvidia chip
  • This "leak" (straight from the horse's mouth) tells us beyond any reasonable doubt that MS will use Nvidia
So what purpose would it serve?
Does anybody know what's the deal with those MS-initiated semi-custom chips (first Van Gogh, now apparently Sound Wave)? Was MS fronting the development cost and then just allowing AMD to sell the result to other market partners?

While Apple has not stated this directly, there have been indications they could pull out of the EU market if the EU puts too much pressure on them.
Just like Apple is pulling out of the Chinese market due to the increasing pressure?
 

FlameTail

Platinum Member
Dec 15, 2021
2,909
1,646
106


LPDDR5X-9600 -> LPDDR6-14400 is a 50% increase in data rate, but they are saying the effective per-channel bandwidth actually doubled, from 19.2 GB/s to 38.4 GB/s.

Can someone explain what's going on here?
 

soresu

Platinum Member
Dec 19, 2014
2,883
2,092
136
Microsoft will never be Apple, and neither will NV, no matter how hard they copy Steve Jobs.
IMHO they don't need to.

They just need to stop 'innovating' with unnecessary changes to everything in every new version of Windows and just start optimising.

Windows has a HUGE installed base to leverage if they can just stop shooting themselves in the foot consistently.
 
Reactions: igor_kavinski

soresu

Platinum Member
Dec 19, 2014
2,883
2,092
136
Does anybody know what's the deal with those MS-initiated semi-custom chips (first Van Gogh, now apparently Sound Wave)? Was MS fronting the development cost and then just allowing AMD to sell the result to other market partners?


Just like Apple is pulling out of the Chinese market due to the increasing pressure?
I wonder if Samsung might use it in the future in place of SDXE given they are already in a relationship with AMD for gfx.
 

Doug S

Platinum Member
Feb 8, 2020
2,420
3,915
136
Lmao yeah

Only issue for the Pro/Max/Ultra is the modules are huge; my understanding is currently doing a 512-bit bus with LPCAMM would be a mess in a laptop. But maybe with round LPCAMM for LPDDR6 this isn’t as much of a problem.

They don't need 512 bits, with LPDDR6 they'll probably do 384 bits for a Max whether or not they adopt CAMMs.

Apple is using custom carriers now, they can route things however they want - and having the modules attach from underneath would seem to make the most sense. They could easily connect two CAMMs for the Max, and four for the Ultra.

Now like I said I'm skeptical about this on a laptop. But I would be shocked if the Pro doesn't adopt it. In fact, I wouldn't be surprised to see an M4-based Mac Pro using LPCAMM2 before the end of this year. People assume Apple is soldering memory because they're ***holes, but they didn't have a choice to get the bandwidth they needed for their GPU, because DIMMs/SODIMMs are so much slower. Now at least they have the option. Even if they don't do it for laptops, they could do it on the Studio/Pro so long as they produce a different carrier for the Max. Even if they don't want to do that, there is absolutely no reason not to do it for the Ultra, since that's a custom carrier anyway and it is only used in those higher-end products with cases designed to be opened.

As for soldering storage (which they aren't doing on the high end; those are socketed): they have a controller on their SoC, so they use raw NAND, and nobody makes raw NAND modules. Now sure, a lot of it is "we don't want to compromise our laptops' design" by giving people easy access to swap stuff out, and if it isn't (easily) customer serviceable, soldering is better from the standpoint of avoiding failures due to loose connections. So for the stuff that's designed for easy access (other than the Mini maybe, I'm not sure) they are socketing storage, though apparently it isn't as easy as just swapping things out (whether that's tying parts together or protecting you from putting the wrong module in the wrong slot/host and losing data, take your pick).
 

SpudLobby

Senior member
May 18, 2022
913
618
106
They don't need 512 bits, with LPDDR6 they'll probably do 384 bits for a Max whether or not they adopt CAMMs.
You mean just because of the BW improvement? But why would they reduce the width and compromise what would otherwise be more BW improvement? I guess I see where at a certain point it ceases to matter and you can meet in the middle, yeah. 512 to 384 wouldn’t be a big deal whatsoever at the rates LPDDR6 is throwing, no one would complain.
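To put rough numbers on it, here is a quick back-of-the-envelope sketch in Python. The configs are assumptions (LPDDR5X-8533 on today’s 512-bit Max vs a hypothetical 384-bit LPDDR6-14400 part), and the ~11% metadata overhead follows the figure used elsewhere in this thread:

```python
# Rough sanity check on the 512-bit -> 384-bit question. Assumed parts:
# LPDDR5X-8533 on a 512-bit bus today vs a hypothetical LPDDR6-14400 on a
# 384-bit bus, with ~11% of the LPDDR6 raw bandwidth reserved for metadata.

def bus_bw_gbps(data_rate_mtps, bus_bits, overhead=0.0):
    """Peak bus bandwidth in GB/s, minus a fractional overhead."""
    return data_rate_mtps * bus_bits / 8 / 1000 * (1 - overhead)

max_today  = bus_bw_gbps(8533, 512)           # ~546 GB/s
max_lpddr6 = bus_bw_gbps(14400, 384, 0.111)   # ~615 GB/s

print(f"512-bit LPDDR5X: {max_today:.0f} GB/s")
print(f"384-bit LPDDR6:  {max_lpddr6:.0f} GB/s ({max_lpddr6 / max_today - 1:+.0%})")
```

Even with a third of the width gone, the narrower LPDDR6 bus still comes out ahead.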

Apple is using custom carriers now, they can route things however they want - and having the modules attach from underneath would seem to make the most sense. They could easily connect two CAMMs for the Max, and four for the Ultra.
Yeah
Now like I said I'm skeptical about this on a laptop. But I would be shocked if the Pro doesn't adopt it.
Agreed, but I’d love to see it in a laptop.
In fact, I wouldn't be surprised to see an M4-based Mac Pro using LPCAMM2 before the end of this year. People assume Apple is soldering memory because they're ***holes, but they didn't have a choice to get the bandwidth they needed for their GPU, because
Yeah of course.
DIMMs/SODIMMs are so much slower. Now at least they have the option. Even if they don't do it for laptops, they could do it on the Studio/Pro so long as they produce a different carrier for the Max. Even if they don't want to do that, there is absolutely no reason not to do it for the Ultra, since that's a custom carrier anyway and it is only used in those higher-end products with cases designed to be opened.

As for soldering storage (which they aren't doing on the high end; those are socketed): they have a controller on their SoC, so they use raw NAND, and nobody makes raw NAND modules. Now sure, a lot of it is "we don't want to compromise our laptops' design" by giving people easy access to swap stuff out, and if it isn't (easily) customer serviceable, soldering is better from the standpoint of avoiding failures due to loose connections. So for the stuff that's designed for easy access (other than the Mini maybe, I'm not sure) they are socketing storage, though apparently it isn't as easy as just swapping things out (whether that's tying parts together or protecting you from putting the wrong module in the wrong slot/host and losing data, take your pick).
 

FlameTail

Platinum Member
Dec 15, 2021
2,909
1,646
106
The channel size was increased by 50%. From 16 --> 24 bits wide.
Oh okay that checks out.

100 × 150% (50% increase in data rate)
= 150

150 × 150% (50% increase in channel size)
= 225

225 × (100-11)% (subtracting 11% for metadata)
= ~200
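The same arithmetic as a small Python sketch, for anyone who wants to play with the numbers (the ~11% metadata overhead is the figure from above; the exact fraction depends on the final spec):

```python
# Per-channel bandwidth: LPDDR5X-9600 (16-bit channel) vs LPDDR6-14400
# (24-bit channel), with ~11% of the LPDDR6 raw bandwidth going to metadata.

def channel_bw_gbps(data_rate_mtps, channel_bits, overhead=0.0):
    """Peak bandwidth of one channel in GB/s, minus a fractional overhead."""
    return data_rate_mtps * channel_bits / 8 / 1000 * (1 - overhead)

lpddr5x = channel_bw_gbps(9600, 16)           # 19.2 GB/s
lpddr6  = channel_bw_gbps(14400, 24, 0.111)   # ~38.4 GB/s

print(f"LPDDR5X-9600: {lpddr5x:.1f} GB/s -> LPDDR6-14400: {lpddr6:.1f} GB/s "
      f"({lpddr6 / lpddr5x:.2f}x)")
```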
 

SpudLobby

Senior member
May 18, 2022
913
618
106
Oh okay that checks out.

100 × 150% (50% increase in data rate)
= 150

150 × 150% (50% increase in channel size)
= 225

225 × (100-11)% (subtracting 11% for metadata)
= ~200
The channel size increase is interesting. Wonder why.
 

Doug S

Platinum Member
Feb 8, 2020
2,420
3,915
136
You mean just because of the BW improvement? But why would they reduce the width and compromise what would otherwise be more BW improvement? I guess I see where at a certain point it ceases to matter and you can meet in the middle, yeah. 512 to 384 wouldn’t be a big deal whatsoever at the rates LPDDR6 is throwing, no one would complain.

Yes. You only want to pay for bandwidth improvement if you can benefit from it. Where is the evidence that Apple can benefit from another 33% more memory bandwidth on base Apple Silicon (M3, M4, etc.) beyond what they'd already be getting from 96 bits of LPDDR6? I wonder if anyone can quantify how much (if at all) M4 is benefiting from the faster LPDDR5X? Or did they do it more for the power savings?

Obviously at some point as you ramp up clock rates and performance you need to ramp up memory bandwidth to keep pace, but unless they do a totally new GPU that offers much more performance and requires much more bandwidth to deliver it, getting the bump from 128-bit LPDDR5X to 96-bit LPDDR6 is probably all they need / can use in the near term. By the time they need more, LPDDR6X arrives.
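To put rough numbers on that (a sketch; LPDDR5X-8533 for the 128-bit case and LPDDR6-14400 for the 96-bit case are assumptions, with the ~11% metadata overhead quoted earlier in the thread):

```python
# Base-chip comparison: 128-bit LPDDR5X vs 96-bit LPDDR6. The speed grades
# are assumptions; ~11% of LPDDR6 raw bandwidth is assumed to be metadata.

def bus_bw_gbps(data_rate_mtps, bus_bits, overhead=0.0):
    return data_rate_mtps * bus_bits / 8 / 1000 * (1 - overhead)

base_now  = bus_bw_gbps(8533, 128)          # ~137 GB/s
base_next = bus_bw_gbps(14400, 96, 0.111)   # ~154 GB/s, roughly a 13% bump

print(f"128-bit LPDDR5X: {base_now:.0f} GB/s -> 96-bit LPDDR6: {base_next:.0f} GB/s")
```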

Heck we saw them REDUCE bandwidth by 33% with M3 Pro vs M2 Pro. What sort of regressions were observed? I'm sure there were some, but I doubt they were meaningful and the impact was probably bigger on memory configs because of the lack of flexibility Apple offers in LPDDR stack densities than it was on actual performance.

I look at it the other way around from you. The entry-level Apple Silicon really punches above its weight. Apple doesn't sell you SKUs with hobbled clock rates like Intel/AMD do to maintain their market segmentation. They're letting them run at the performance the silicon is capable of, so even if you were very slightly limited by 96-bit LPDDR6 versus the 120 bits I guess you'd otherwise get (since you sure can't do 128 bits with LPDDR6), that's entry level and you're paying entry-level prices (entry-level APPLE prices, but they're entry-level products regardless).

While people buying a Mac Pro with an Ultra with 1024-bit wide memory today might have loads that actually are limited by 768-bit LPDDR6 - even if it is faster, it could be faster still if it was wider. But I suspect that they'd trade that extra few percent on super bandwidth-heavy loads for the flexibility of being able to expand memory with LPCAMM2 rather than taking only the few configs Apple offers today.
 

SpudLobby

Senior member
May 18, 2022
913
618
106
Yes. You only want to pay for bandwidth improvement if you can benefit from it. Where is the evidence that Apple can benefit from another 33% more memory bandwidth on base Apple Silicon (M3, M4, etc.) beyond what they'd already be getting from 96 bits of LPDDR6? I wonder if anyone can quantify how much (if at all) M4 is benefiting from the faster LPDDR5X? Or did they do it more for the power savings?
My bet is it’s for AI but also some creative workloads, and the power savings don’t hurt either.

I do agree 384 is just fine. I was mostly pointing out that you could take this argument to an extreme: a 64-bit LPDDR5X-8400 chip would match the M1’s bandwidth and probably save Apple money and power and be “good enough” in some sense, but gain them no bandwidth, yet we don’t see them do that. Now, since in practice we literally don’t have 128-bit or 512-bit buses as options with LPDDR6, it complicates things.
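Quick check on that comparison (the 64-bit LPDDR5X-8400 part is hypothetical; M1 is 128-bit LPDDR4X-4266):

```python
# A hypothetical 64-bit LPDDR5X-8400 config vs the M1's 128-bit LPDDR4X-4266.

def bus_bw_gbps(data_rate_mtps, bus_bits):
    """Peak bus bandwidth in GB/s."""
    return data_rate_mtps * bus_bits / 8 / 1000

print(f"64-bit LPDDR5X-8400: {bus_bw_gbps(8400, 64):.1f} GB/s")    # 67.2
print(f"M1, 128-bit LPDDR4X: {bus_bw_gbps(4266, 128):.1f} GB/s")   # ~68.3
```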

What will be really interesting is to see

A) what bus widths phones will go for: 48-bit seems most likely. It will be funny seeing them move down, not that it matters.
B) others have pointed out they may make 192-bit the standard for M laptops. I’m skeptical of this, though, particularly since Apple doesn’t give a crap about serious gaming. (Yes Doug, we can debate this, but no one really games on a Mac.)
Obviously at some point as you ramp up clock rates and performance you need to ramp up memory bandwidth to keep pace, but unless they do a totally new GPU that offers much more performance and requires much more bandwidth to deliver it, getting the bump from 128-bit LPDDR5X to 96-bit LPDDR6 is probably all they need / can use in the near term. By the time they need more, LPDDR6X arrives.

Heck we saw them REDUCE bandwidth by 33% with M3 Pro vs M2 Pro. What sort of regressions were observed? I'm sure there were some, but I doubt they were meaningful and the impact was probably bigger on memory configs because of the lack of flexibility Apple offers in LPDDR stack densities than it was on actual performance.

I look at it the other way around from you. The entry-level Apple Silicon really punches above its weight. Apple doesn't sell you SKUs with hobbled clock rates like Intel/AMD do to maintain their market segmentation. They're letting them run at the performance the silicon is capable of, so even if you were very slightly limited by 96-bit LPDDR6 versus the 120 bits I guess you'd otherwise get (since you sure can't do 128 bits with LPDDR6), that's entry level and you're paying entry-level prices (entry-level APPLE prices, but they're entry-level products regardless).

While people buying a Mac Pro with an Ultra with 1024-bit wide memory today might have loads that actually are limited by 768-bit LPDDR6 - even if it is faster, it could be faster still if it was wider. But I suspect that they'd trade that extra few percent on super bandwidth-heavy loads for the flexibility of being able to expand memory with LPCAMM2 rather than taking only the few configs Apple offers today.
 

FlameTail

Platinum Member
Dec 15, 2021
2,909
1,646
106
I thought the implication of LPDDR6 increasing channel width was that memory bus widths are going to increase across the board, not DECREASE.

Low End Mobile SoCs : 32 bit -> 48 bit
High End Mobile SoCs : 64 bit -> 96 bit
Low End Laptop SoCs: 64 bit -> 96 bit
Mainstream Laptop SoCs: 128 bit -> 192 bit

If we specifically focus on smartphone SoCs, let's say Snapdragon 8 Gen 5 supports LPDDR6 and downgrades the memory bus from 64-bit to 48-bit.

Snapdragon 8 Gen 4
= 64 bit × LPDDR5X-9600
= 76.8 GB/s

Snapdragon 8 Gen 5
= 48 bit × LPDDR6-10667
= 64 GB/s × (100-11)% [subtracting metadata]
= 57 GB/s

So bandwidth actually goes down gen-on-gen, and by a huge amount. This is clearly unacceptable, and will never happen.
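Running the same arithmetic for the other plausible LPDDR6 widths (a sketch; the LPDDR6-10667 speed grade is the assumption from above):

```python
# Gen-on-gen comparison: 64-bit LPDDR5X-9600 vs hypothetical LPDDR6-10667
# at 48/72/96 bits (2, 3 or 4 of the new 24-bit channels), with ~11% of
# LPDDR6 raw bandwidth going to metadata.

def bus_bw_gbps(data_rate_mtps, bus_bits, overhead=0.0):
    return data_rate_mtps * bus_bits / 8 / 1000 * (1 - overhead)

gen4 = bus_bw_gbps(9600, 64)    # 76.8 GB/s
for bits in (48, 72, 96):
    gen5 = bus_bw_gbps(10667, bits, 0.111)
    print(f"{bits:2d}-bit LPDDR6-10667: {gen5:5.1f} GB/s ({gen5 / gen4 - 1:+.0%} vs Gen 4)")
```

Only 48-bit regresses; 72-bit or 96-bit keeps bandwidth moving forward.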

Actually, these smartphone SoCs will need significantly more bandwidth because of the push for on-device AI. The Snapdragon 8 Gen 3 already has a 45 TOPS NPU (yes, the same one as the X Elite; source: Revegnus). 8 Gen 5 is probably going to have double that.

You guys know how critical memory bandwidth is for AI workloads. Unlike a CPU or GPU, you can't sate an NPU by throwing more cache at it, because NPUs pull gigabytes of data from AI models stored in RAM, and that cannot fit in megabyte-scale on-chip caches.

It's not just smartphones. There is a push for AI, particularly on-device, throughout the industry. Microsoft recently unveiled their Copilot+ PC standard (terrible name, btw), with a minimum 40 TOPS requirement. They are rumoured to increase this requirement to ~100 TOPS for the next generation of AI PCs (2025/2026). There needs to be a significant bandwidth improvement to feed those huge NPUs.

Hence why I believe the industry came together at JEDEC and jointly decided to increase the channel width by 50%.
 
Last edited:
Reactions: Tlh97 and SpudLobby

SpudLobby

Senior member
May 18, 2022
913
618
106
I thought the implication of LPDDR6 increasing channel width was that memory bus widths are going to increase across the board, not DECREASE.

Low End Mobile SoCs : 32 bit -> 48 bit
High End Mobile SoCs : 64 bit -> 96 bit
Low End Laptop SoCs: 64 bit -> 96 bit
Mainstream Laptop SoCs: 128 bit -> 192 bit

If we specifically focus on smartphone SoCs, let's say Snapdragon 8 Gen 5 supports LPDDR6 and downgrades the memory bus from 64-bit to 48-bit.

Snapdragon 8 Gen 4
= 64 bit × LPDDR5X-9600
= 76.8 GB/s

Snapdragon 8 Gen 5
= 48 bit × LPDDR6-10667
= 64 GB/s × (100-11)% [subtracting metadata]
= 57 GB/s
Great post, and exactly what my initial impression leaned towards: that we’d see 96 in phones and 192 in laptops. Yeah, in this case you are right. And since LPDDR6X with higher data rates won’t come first, but they’ll still want the features of LPDDR6, most likely we are going to see bus width expand for both high-end phones and laptops.

96 = high end phones
192 = standard laptops

Wow lol, fun times ahead.
So bandwidth actually goes down gen-on-gen, and by a huge amount. This is clearly unacceptable, and will never happen.

Actually, these smartphone SoCs will need significantly more bandwidth because of the push for on-device AI. The Snapdragon 8 Gen 3 already has a 45 TOPS NPU (yes, the same one as the X Elite; source: Revegnus). 8 Gen 5 is probably going to have double that.

You guys know how critical memory bandwidth is for AI workloads. Unlike a CPU or GPU, you can't sate an NPU by throwing more cache at it, because NPUs pull gigabytes of data from AI models stored in RAM, and that cannot fit in megabyte-scale on-chip caches.

It's not just smartphones. There is a push for AI, particularly on-device, throughout the industry. Microsoft recently unveiled their Copilot+ PC standard (terrible name, btw), with a minimum 40 TOPS requirement. They are rumoured to increase this requirement to ~100 TOPS for the next generation of AI PCs (2025/2026). There needs to be a significant bandwidth improvement to feed those huge NPUs.

Hence why I believe the industry came together at JEDEC and jointly decided to increase the channel width by 50%.
 

Doug S

Platinum Member
Feb 8, 2020
2,420
3,915
136
I thought the implication of LPDDR6 increasing channel width was that memory bus widths are going to increase across the board, not DECREASE.

Low End Mobile SoCs : 32 bit -> 48 bit
High End Mobile SoCs : 64 bit -> 96 bit
Low End Laptop SoCs: 64 bit -> 96 bit
Mainstream Laptop SoCs: 128 bit -> 192 bit

If we specifically focus on smartphone SoCs, let's say Snapdragon 8 Gen 5 supports LPDDR6 and downgrades the memory bus from 64-bit to 48-bit.

Snapdragon 8 Gen 4
= 64 bit × LPDDR5X-9600
= 76.8 GB/s

Snapdragon 8 Gen 5
= 48 bit × LPDDR6-10667
= 64 GB/s × (100-11)% [subtracting metadata]
= 57 GB/s

So bandwidth actually goes down gen-on-gen, and by a huge amount. This is clearly unacceptable, and will never happen.

Actually, these smartphone SoCs will need significantly more bandwidth because of the push for on-device AI. The Snapdragon 8 Gen 3 already has a 45 TOPS NPU (yes, the same one as the X Elite; source: Revegnus). 8 Gen 5 is probably going to have double that.

You guys know how critical memory bandwidth is for AI workloads. Unlike a CPU or GPU, you can't sate an NPU by throwing more cache at it, because NPUs pull gigabytes of data from AI models stored in RAM, and that cannot fit in megabyte-scale on-chip caches.

It's not just smartphones. There is a push for AI, particularly on-device, throughout the industry. Microsoft recently unveiled their Copilot+ PC standard (terrible name, btw), with a minimum 40 TOPS requirement. They are rumoured to increase this requirement to ~100 TOPS for the next generation of AI PCs (2025/2026). There needs to be a significant bandwidth improvement to feed those huge NPUs.

Hence why I believe the industry came together at JEDEC and jointly decided to increase the channel width by 50%.

I just don't buy that ANYONE is going to do 96 bits on a smartphone. Maybe they do 72 rather than 48. It isn't like there is any reason the number of controllers must be a power of 2, as Apple demonstrated with M3 Pro.

Apple has historically lagged when adopting new memory standards - so I don't look for them to use LPDDR6 in M5/A19 unless they have been waiting impatiently to jump on ECC or memory tagging. Android OEMs have always pushed in first on that front.

Whether Apple lags with memory standard adoption due to not being able to guarantee sufficient supply given how many more phones they ship than all high end Androids combined, due to concerns over higher cost of the newer standards, or due to waiting until they see a meaningful performance gain and/or power reduction from the new standard is unknown.
 

SpudLobby

Senior member
May 18, 2022
913
618
106
I just don't buy that ANYONE is going to do 96 bits on a smartphone. Maybe they do 72 rather than 48. It isn't like there is any reason the number of controllers must be a power of 2, as Apple demonstrated with M3 Pro.

Apple has historically lagged when adopting new memory standards - so I don't look for them to use LPDDR6 in M5/A19 unless they have been waiting impatiently to jump on ECC or memory tagging. Android OEMs have always pushed in first on that front.

Whether Apple lags with memory standard adoption due to not being able to guarantee sufficient supply given how many more phones they ship than all high end Androids combined, due to concerns over higher cost of the newer standards, or due to waiting until they see a meaningful performance gain and/or power reduction from the new standard is unknown.
Would they lag to the point of straight up not adopting LPDDR6 until LPDDR6x? That would be an anomaly even for Apple. I think Flametail has a point here, a quite credible one. I absolutely don’t see them doing 48. I expect Samsung, Qualcomm, MediaTek and Apple to use the same bus widths here just like they do today for non-coincidental reasons, so we’ll see what they settle on. Maybe it’s 72 yeah.
 

Doug S

Platinum Member
Feb 8, 2020
2,420
3,915
136
Would they lag to the point of straight up not adopting LPDDR6 until LPDDR6x? That would be an anomaly even for Apple. I think Flametail has a point here, a quite credible one. I absolutely don’t see them doing 48. I expect Samsung, Qualcomm, MediaTek and Apple to use the same bus widths here just like they do today for non-coincidental reasons, so we’ll see what they settle on. Maybe it’s 72 yeah.

LPDDR5X was already sampling before Apple shipped their first LPDDR5, so it wouldn't be something they haven't done before.

Now that I think about it, Apple could very well use bus width / bandwidth as one of the differentiators between Pro and non Pro iPhones. The non Pro SoC uses two controllers for a 48 bit bus, the Pro three for 72 bits and gets a newer/faster standard a year earlier. With such a split, Apple might be a bit more aggressive in adopting new standards than they were in the past, where they needed to secure sufficient supply for nearly 200 million devices in a year and needed the margins to work down to the base $799 iPhone.

That would also go along with the segmentation they're already doing on the amount of DRAM. With 50% more controllers you have 50% more DRAM assuming each uses the same size DRAMs.
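As a sketch of how that segmentation would scale (the speed grade and the per-controller capacity here are purely hypothetical):

```python
# Segmentation sketch: bandwidth and capacity both scale with controller
# count. Assumptions: one 24-bit LPDDR6 channel per controller, LPDDR6-10667,
# ~11% metadata overhead, and a hypothetical 12 GB of DRAM per controller.

CHANNEL_BITS, DATA_RATE_MTPS, OVERHEAD, GB_PER_CTRL = 24, 10667, 0.111, 12

for name, controllers in (("non-Pro", 2), ("Pro", 3)):
    bits = controllers * CHANNEL_BITS
    bw_gbps = DATA_RATE_MTPS * bits / 8 / 1000 * (1 - OVERHEAD)
    print(f"{name}: {bits}-bit bus, {bw_gbps:.0f} GB/s, {controllers * GB_PER_CTRL} GB DRAM")
```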
 

SpudLobby

Senior member
May 18, 2022
913
618
106
LPDDR5X was already sampling before Apple shipped their first LPDDR5, so it wouldn't be something they haven't done before.
Well, but isn’t that actually my point? You yourself point out how much Apple lags: they adopted LPDDR5 only after LPDDR5X was already sampling.


In this case with LPDDR6, Apple would still be lagging, because that’s what they do with new versions, and the early versions don’t offer Apple enough bandwidth gain to cut their smartphone bus width to 48 bits. Now I know what you’re thinking: Apple could just hold out on LPDDR5X for as long as possible until LPDDR6X, but again, that would be very uncharacteristic. They still follow everybody else in small steps, just one at a time, and LPDDR6 offers other benefits I think they’ll be interested in, like power savings or ECC.

Now that having been said:
Now that I think about it, Apple could very well use bus width / bandwidth as one of the differentiators between Pro and non Pro iPhones. The non Pro SoC uses two controllers for a 48 bit bus, the Pro three for 72 bits and gets a newer/faster standard a year earlier.
This part I agree with, yeah. The newer-standard part is already happening, obviously. But I don’t know if it’s as likely as for the M-stuff, which might even see further segmentation.

Honestly it’s really hard to tell how this will shake out.
With such a split, Apple might be a bit more aggressive in adopting new standards than they were in the past, where they needed to secure sufficient supply for nearly 200 million devices in a year and needed the margins to work down to the base $799 iPhone.
Yeah.
That would also go along with the segmentation they're already doing on the amount of DRAM. With 50% more controllers you have 50% more DRAM assuming each uses the same size DRAMs.
Yeah true
 

Doug S

Platinum Member
Feb 8, 2020
2,420
3,915
136
So what about DDR6?

There's no way they are going to add host bits to LPDDR6 but not to the next DDR standard. If it is true that laptops, and all desktops except the ones running basically server-class CPUs, will go LPDDR6, then I could easily see them making DDR6 DIMMs 80 bits wide.

Alternatively, they could do something similar to LPDDR6 and make DIMMs 96 bits wide, with four 24-bit channels and BL24 giving a total of 64 host bits to play with.

What I think there's almost no chance of is having two types of DDR6, with and without ECC support. When even smartphones will at least have the capability of supporting ECC, I don't see the utility in splitting up the DDR6 market. They'll have ECC bits, but those who care more about overclocking their RAM than data integrity will be able to turn it off.
 
Reactions: Tlh97 and MadRat

MadRat

Lifer
Oct 14, 1999
11,922
259
126
Whichever memory standard rules, they need to speed up the integration of optical - or at least universal-standard pinless - connectors to simplify the interfaces without losing transfer speeds. Bonus if it's hot-swappable. Such technology would abstract away the actual physical memory type at that point. Do we care if it's RDR, DDR, QDR, or any other type, as long as it's meeting latency and bandwidth expectations? Want ultra-low latency? Pay for additional cache features.
 

FlameTail

Platinum Member
Dec 15, 2021
2,909
1,646
106
I have been looking at past generations of LPDDR, and my observation is that it takes at least a year from JEDEC ratification for a given LPDDR standard to first appear in devices (usually flagship smartphones).

So if LPDDR6 is ratified by JEDEC in H2 of this year, we will not be seeing devices with it before H2 2025, at the earliest.
 
Reactions: dr1337

Tuna-Fish

Golden Member
Mar 4, 2011
1,415
1,732
136
Whichever memory standard rules, they need to speed up the integration of optical - or at least universal-standard pinless - connectors to simplify the interfaces without losing transfer speeds. Bonus if it's hot-swappable. Such technology would abstract away the actual physical memory type at that point. Do we care if it's RDR, DDR, QDR, or any other type, as long as it's meeting latency and bandwidth expectations? Want ultra-low latency? Pay for additional cache features.

Cheap optical transceivers have unacceptable latency for main memory use. The universal standard connector for the next memory generation will be CAMM. Maybe there will be cheap low-latency transceivers for the DDR7 generation, but I doubt that.
 
Reactions: Hitman928