Discussion Speculation: Zen 4 (EPYC 4 "Genoa", Ryzen 7000, etc.)


Vattila

Senior member
Oct 22, 2004
800
1,364
136
Apart from the details of the microarchitectural improvements, we now know pretty well what to expect from Zen 3.

The leaked presentation by AMD Senior Manager Martin Hilgeman shows that EPYC 3 "Milan" will, as promised and expected, reuse the current platform (SP3), and the system architecture and packaging look to be the same, with the same 9-die chiplet design and the same maximum core and thread count (no SMT-4, contrary to rumour). The biggest change revealed so far is the enlargement of the compute complex from 4 cores to 8 cores, all sharing a larger L3 cache ("32+ MB", likely to double to 64 MB, I think).

Hilgeman's slides also showed that EPYC 4 "Genoa" is in the definition phase (or was at the time of the presentation in September, at least), and will come with a new platform (SP5) and new memory support (likely DDR5).



What else do you think we will see with Zen 4? PCI-Express 5 support? Increased core-count? 4-way SMT? New packaging (interposer, 2.5D, 3D)? Integrated memory on package (HBM)?

Vote in the poll and share your thoughts!
 
Last edited:
Reactions: richardllewis_01

JasonLD

Senior member
Aug 22, 2017
485
445
136
There is always a bunch of stuff about the empire striking back, which was true previously, but isn't actually true any more. Intel will have a tough time surpassing AMD. Even if Intel had executed much better, they would still have needed to accelerate their plans significantly because of ARM. I read a Phoronix review a while ago (Linux server and HPC) where AMD and ARM were trading blows, with Intel in third place in most benchmarks. There are still a few that Intel can win, but not many, and I suspect that number will be near zero after Genoa hits. Bergamo seems to be aimed more at the ARM competitors than at Intel.

Why is it different now? It is different because this isn't AMD vs. Intel; it is AMD + TSMC vs. Intel. TSMC holds more than 50% of the foundry market. By breaking the Intel monopoly, we may actually be creating another monopoly on the fab side. Anyway, the idea that Intel will suddenly be able to dominate again is likely just not true. I am fine with that, since really breaking the Intel monopoly requires AMD to dominate for a bit longer. A large part of Intel's previous success was process-tech superiority. That no longer seems to be the case, and they do not appear poised to retake the lead in the near future. It would be great if AMD (or Intel) could knock Nvidia down a bit too, but the CUDA vendor lock-in is strong.

Even during Intel's era of process superiority, Intel lagged behind AMD for almost 7 years, from the Athlon's launch in 1999 to Intel's Core architecture in 2006. I expect it will take a similar amount of time for Intel to make a comeback. It took the Zen architecture for AMD to regain the lead, and Intel needs a brand-new architecture to retake it (which might be Lunar Lake on 18A with High-NA EUV, if Intel's 18A doesn't suffer a significant setback).
 

Henry swagger

Senior member
Feb 9, 2022
388
245
86
Even during Intel's era of process superiority, Intel lagged behind AMD for almost 7 years, from the Athlon's launch in 1999 to Intel's Core architecture in 2006. I expect it will take a similar amount of time for Intel to make a comeback. It took the Zen architecture for AMD to regain the lead, and Intel needs a brand-new architecture to retake it (which might be Lunar Lake on 18A with High-NA EUV, if Intel's 18A doesn't suffer a significant setback).
Intel is already in the lead, what are you talking about? 78% market share in server and desktop.
 

eek2121

Platinum Member
Aug 2, 2005
2,934
4,033
136
Stop trying to make SMT4 a thing, it's not happening. And for good reasons too.
Hah, if you took my little comment seriously, you need to take a break from online forums and social media. 😉
Isn't their big APU what powers Xbox and PS?

Not really. What is in the consoles is a custom SoC.

Right, but currently, if you are a performance user who will spend thousands of dollars on your PC and are at least passingly aware of the hardware capabilities you'll be getting for that money, you want a dGPU that you can swap out depending on market conditions. Unless we go through another dGPU drought where buying one becomes prohibitively expensive, I do not expect most performance PC users to want a big APU, in the same way an M1 Ultra user can't effectively use a dGPU (and therefore doesn't really want one; why else would they lock themselves into the Mac experience?).



Probably not. Siena seems to be going after Intel's comms equipment, which is a little weird since most of that stuff is low-power/ultra-low-power, like Snow Ridge. Unless I'm missing something.

You do not understand Apple buyers, at all. Most don't care about the insides of their machines outside of "is it faster?" (at most). If they did, they wouldn't be buying these things.

As someone who has supervised the purchase of many computers, both Mac and PC, I can tell you the lion's share of buyers don't care. They don't know what an APU or a 'discrete' GPU is. Only a small minority actually cares. More importantly, as I was trying to indicate earlier, it does not matter. Apple isn't competing with Intel, NVIDIA, or AMD. They have their own niche, and that is something many folks here fail to understand. The M1 is very good at running Mac software; however, it can't run anything else. That includes something like 90% of all games released, a very large number of industrial applications, etc. Apple doesn't even ship an industrial version of the Mac, nor do they ship rackmount servers. More than 90% of the PC market is not currently served, and cannot be served, by an Apple product. If you walk into a telco data center, you'll find nothing but PCs running Windows and Linux from end to end. If you walk into a cloud data center, you'll find between zero and a few Mac Minis tucked into a corner of a broom closet, with the rest being servers running Windows and Linux.

I could go on, but my point is that Apple is not competing with "the big 3", and vice versa. Apple has a niche (I've been in IT and related areas long enough to know very well which markets they serve and which they do not) and that is it. They've shown no indication of wanting to do otherwise. If anything, Microsoft is showing early signs of making inroads into Apple's niche. Note that you cannot rely on Apple's own data either. "X% of all users that purchased a Mac are new users" is a very misleading statement. Despite having a Mac at home, if I walked into an Apple store today and purchased one, they would treat me as a new user unless I stated otherwise. Scalpers, for example, are all new users, until they aren't.

Note that I'm absolutely not bashing Apple. I'm simply combating this asinine argument that Apple is somehow competing with PC vendors. Get back to me when Apple hits 15% of total market share. Not in terms of shipments either (though they haven't even hit that goal yet), but in terms of daily users. Until the day they manage to plow well ahead of the historical "7-10%, but maybe 5 or 12%, depending on who you ask", they are in their own little bubble and aren't a competitor at all.

They most certainly aren't even in the running for competing with Zen 4 and Raptor Lake, especially for folks like me, who heavily utilize their machines in a wide variety of workloads.

Hence, my Apple to Oranges comparison.

Good night.
 

jamescox

Senior member
Nov 11, 2009
637
1,103
136
Apple has demonstrated that they can sell a "big APU" (more like "big SoC") to the fans of their software ecosystem. It's not entirely clear that AMD could do the same thing since they don't have control over the software stack they'll be running. There were rumours of enterprise-class APUs that never came to fruition, so odds of them going in that direction seem low.
It seems like PC laptop makers are going to want something to compete with Apple, and AMD can obviously make something like that easily, since it is basically the same as a console chip. The software stack obviously already exists. I have to wonder if Microsoft would request such a thing for their devices. If they do make a very high-end APU, will it just be a really big die? Two dies connected with a silicon bridge? Some form of stacked device? Making an APU with the option to connect to another APU via a silicon bridge would be very modular and flexible. Stacking a GPU chiplet on top of the IO portion of an APU die could also be a very modular solution.

I suspect we will also see an HPC device. I don't know whether it will use a Genoa-like package or wait until they use stacking. The rumour for the next-generation GPUs is a modular set-up with 2 GPU chiplets, likely connected with a silicon bridge, and 2 HBM stacks, one per chiplet, also likely connected by silicon bridges. Products could then be made with 2, 4, or 8 GPU dies (1, 2, or 4 modules). I don't know how solid that rumour is. I have wondered if they may use IFOP for the connections between modules on larger GPUs. The Epyc IO die would have 6 IFOP links along each side; if all of those could be used to connect to one of these GPU modules, it would provide massive bandwidth directly to the GPUs. It might make sense to build such a device by stacking some CPU cores on top of the IO die, or they could have the GPU on one side and CPU cores on the other. Once stacking comes into play, it is very difficult to speculate about such a thing, for both CPUs and GPUs.

Anyway, I consider Nvidia's Grace Hopper "Superchip" to essentially be an HPC APU. It is multiple chips, but it is a CPU and a GPU in one package. I suspect Nvidia talked about it to make it seem like they are leading the market, even though they may not be the first to have a chiplet-based GPU. I think AMD and Intel will almost certainly have devices competing with the Grace Hopper "APU". AMD seems to be in super-stealth mode still, while Intel and Nvidia are talking about things. Nvidia is obviously in a strong position with their CUDA vendor lock-in, but they do not have a powerful CPU. Both AMD and Intel will likely make combined devices that allow very high bandwidth to system memory, but only within their integrated package. Nvidia will likely be stuck with comparatively low-performance ARM cores until they can acquire a higher-end design or make one themselves. A lot of GPU processing still has CPU stages that do not always parallelize easily, so being forced to use a rather weak CPU core may be problematic. Grace does have a lot of cores, but parallelizing code isn't always easy, or sometimes even possible.
 

Kedas

Senior member
Dec 6, 2018
355
339
136
Also on the slide:
"ISA extensions for AI and AVX-512"
So are AI and AVX-512 two different things, or is it just a strange sentence?

For Zen5:
"Integrated AI and Machine Learning optimizations"

Some AI library that works with the integrated GPU as an accelerator?
But you would use your bigger GPU for that.

Zen 4c is on 4nm instead of 5nm; I didn't expect that, but it makes it easier to keep the cores small. A 4nm process variant that is focused on density and doesn't support stacking.
And the same again for Zen 5: 4nm with stacking, with Zen 5c on 3nm without stacking.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I know, but the point is that if there were a market for a large x86 APU/SoC, AMD could make one. I just don't believe it makes sense outside closed ecosystems like the Mac or consoles to have huge SoCs/APUs.

Yea, it makes little to no sense. There needs to be a reason for integration. Is it going to end up faster? What about cheaper? What about greater power efficiency? Does it transition more quickly between power states, so that in light loads it has battery life like an iGPU device?

Like, why did Kaby Lake-G fail? People say "oh, it's just a dGPU attempt", but with an integrated HBM module and power sharing, it held promise in some of the areas mentioned above. It's a big APU in technical terms. Don't you think putting Arc tiles on Meteor Lake is the same thing?

With Kaby Lake-G, it cost just as much as a dGPU device, it had battery life like one, and it performed about the same. So you save some area, whoopee! It was like Lakefield: the theoretical advantages translated to zero in the real world. There were articles, and even some people, saying it failed because of Nvidia's anticompetitive practices. Maybe a little bit, but mostly BS. It failed because it sucked. If the product had been good, people would have demanded it and bought it.

If it's faster/more power efficient = more expensive
If it's cheaper = lower performing/maybe equal in power efficiency

Consoles have almost guaranteed high volume. Having a product in the PC space requires a reason for people to buy it. What makes you think AMD/Intel will bring a faster, more power-efficient, better-battery-life device at equal or lower cost? That's nonsense!
 

DrMrLordX

Lifer
Apr 27, 2000
21,709
10,983
136
You do not understand Apple buyers, at all. Most don't care about the insides of their machines outside of "is it faster?" (at most). If they did, they wouldn't be buying these things.

I never indicated that they did. They're there for the software ecosystem and the, uh, lifestyle? Experience? Yeah. That.

In any case, I clearly indicated that Mac users - particularly M1 Ultra users, who do tend to be tech savvy and are the ones with the "big" SoC on the Apple side right now - willingly sacrifice dGPU options because they just don't give a darn. They're happy with the integrated graphics since they get the Apple experience they wanted, faster than before.

PC users tend to obsess over hardware specs and then choose software that helps them make the most out of their purchase.

For that reason, I do not think AMD would profit much from a "big" APU in the vein of the M1 Ultra for x86. Such an APU would not have its own operating system, custom APIs, custom compiler environment, etc. It would have to compete with dGPUs and dedicated CPUs across a wide variety of TDP ranges. Such an APU wouldn't really be the best choice for anything in the x86 world right now.

Apple isn't competing with Intel, NVIDIA, or AMD.

At the level of M1 Ultra, they kind of are. Those users are at least aware of what "the other side" has and are actively interested in what "the other side" can and can't do faster or better. Those users still wind up with Macs for reasons other than performance much of the time, but they're aware.
 

biostud

Lifer
Feb 27, 2003
18,281
4,806
136
Yea, it makes little to no sense. There needs to be a reason for integration. Is it going to end up faster? What about cheaper? What about greater power efficiency? Does it transition more quickly between power states, so that in light loads it has battery life like an iGPU device?

Like, why did Kaby Lake-G fail? People say "oh, it's just a dGPU attempt", but with an integrated HBM module and power sharing, it held promise in some of the areas mentioned above. It's a big APU in technical terms. Don't you think putting Arc tiles on Meteor Lake is the same thing?

With Kaby Lake-G, it cost just as much as a dGPU device, it had battery life like one, and it performed about the same. So you save some area, whoopee! It was like Lakefield: the theoretical advantages translated to zero in the real world. There were articles, and even some people, saying it failed because of Nvidia's anticompetitive practices. Maybe a little bit, but mostly BS. It failed because it sucked. If the product had been good, people would have demanded it and bought it.

If it's faster/more power efficient = more expensive
If it's cheaper = lower performing/maybe equal in power efficiency

Consoles have almost guaranteed high volume. Having a product in the PC space requires a reason for people to buy it. What makes you think AMD/Intel will bring a faster, more power-efficient, better-battery-life device at equal or lower cost? That's nonsense!
For me, the PC desktop is all about being able to exchange and customize your hardware, and that goes against a big SoC/APU.
For laptops it might make sense, but I don't think the market is big enough compared to the traditional route of a CPU/APU plus a dGPU. Low-end dGPUs like the GT 1030 are being made obsolete, but it will take a long time before the midrange is overtaken by an APU. But maybe the chiplet design of RDNA3 is the first step...
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Also on the slide:
"ISA extensions for AI and AVX-512"
So are AI and AVX-512 two different things, or is it just a strange sentence?

It's just AMD marketing's way of highlighting the AVX-512 BF16 and AVX-512 VNNI instructions as "AI" stuff. In the case of BF16 support it is a bit relevant to call it out separately, because Intel's Ice Lake server chips don't currently have it.
I think the slides were prepared with Intel's AMX in mind ("we can do slow and power-inefficient AI too, put it in the slides"). Kinda irrelevant honestly; with Intel's Sapphire Rapids delayed, all this "AI on CPU" stuff is as irrelevant as always.
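To make the "AI instructions" bit concrete, here is a minimal sketch of what VNNI actually does, assuming a VNNI-capable CPU (Ice Lake-SP or Zen 4) and GCC/Clang with -mavx512f -mavx512vnni; the data and function name are just illustrative. The vpdpbusd instruction behind _mm512_dpbusd_epi32 fuses the u8 x s8 multiply-accumulate at the heart of int8 inference:

```c
#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Int8 dot product, 64 byte-pairs per iteration. One vpdpbusd does
 * 64 u8*s8 multiplies and sums them in groups of 4 into sixteen
 * 32-bit accumulators -- this is the "AI" part of AVX-512 VNNI. */
static int32_t dot_u8s8(const uint8_t *a, const int8_t *b, size_t n) {
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i + 64 <= n; i += 64) {
        __m512i va = _mm512_loadu_si512(a + i);
        __m512i vb = _mm512_loadu_si512(b + i);
        acc = _mm512_dpbusd_epi32(acc, va, vb);
    }
    return _mm512_reduce_add_epi32(acc); /* horizontal sum of 16 lanes */
}

int main(void) {
    uint8_t a[64];
    int8_t b[64];
    for (int i = 0; i < 64; i++) { a[i] = 2; b[i] = 3; }
    printf("%d\n", dot_u8s8(a, b, 64)); /* 64 * 2 * 3 = 384 */
    return 0;
}
```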
 

Mopetar

Diamond Member
Jan 31, 2011
7,936
6,233
136
I am pretty much certain that with stock memory the 5800X3D will beat the hell out of Zen 4 in gaming, and it will take Zen 4 + 3D cache to finally beat it. There is just not enough of a performance increase to beat the brute force of 96MB of L3.

Even without IPC improvements, Zen 4 is nearly there on clock speeds alone. Zen 3D doesn't see impressive gains in every title, and once you average it out, it's easily within the realm of possibility for Zen 4 to beat it on average. Here's a compilation of results from TechSpot:



We don't have final clocks for a 7800X yet, but even if it's only the 5.5 GHz we've already seen, that would be a 17% improvement over the stock 4.7 GHz of the 5800X. Even if we're more generous and use a value of 4.85 GHz, that still gets us to 13%. Both are easily within the margin of error for the average. There will almost certainly be games where Zen 3D reigns supreme, but there are games where the extra cache doesn't do anything and the vastly higher clock speeds will.
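As a quick sanity check on that arithmetic (remembering the 5.5 GHz figure is rumoured, not a confirmed spec):

```c
#include <stdio.h>

int main(void) {
    const double zen4_clock  = 5.5;            /* rumoured 7800X boost, GHz */
    const double zen3_base[] = { 4.70, 4.85 }; /* 5800X stock / generous    */
    for (int i = 0; i < 2; i++)
        printf("5.5 GHz vs %.2f GHz: +%.0f%%\n", zen3_base[i],
               (zen4_clock / zen3_base[i] - 1.0) * 100.0);
    return 0; /* prints +17% and +13% */
}
```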

What is really interesting is how far they will be able to clock the chip with 3D stacked cache on top of it. Power density has increased and clocks have risen, so it might struggle to gain much over the Zen 3 X3D incarnation, leaving just IPC gains plus several hundred MHz.

The issue with Zen 3D was that the stacked cache had much tighter voltage limits. If AMD is able to achieve the higher clock speeds on Zen 4 without having to push the voltage, I doubt we'll lose much. Alternatively, if they figure out how to make their V-Cache more tolerant of the higher voltages that a CPU can reach, they won't have this issue either. We just don't have any solid information to make a definite statement either way.
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
It's just AMD marketing's way of highlighting the AVX-512 BF16 and AVX-512 VNNI instructions as "AI" stuff. In the case of BF16 support it is a bit relevant to call it out separately, because Intel's Ice Lake server chips don't currently have it.
I think the slides were prepared with Intel's AMX in mind ("we can do slow and power-inefficient AI too, put it in the slides"). Kinda irrelevant honestly; with Intel's Sapphire Rapids delayed, all this "AI on CPU" stuff is as irrelevant as always.
For better or worse, in the real world, a lot of AI is done on CPUs. By silicon area, probably the majority. Even Apple, who basically has the most idealistic environment imaginable in terms of their ability to tailor hardware and software, includes AI acceleration in their CPU cores. Tough to say what an equilibrium would look like, but I certainly wouldn't dismiss AVX-512 and AMX as useless for either company.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
For better or worse, in the real world, a lot of AI is done on CPUs. By silicon area, probably the majority. Even Apple, who basically has the most idealistic environment imaginable in terms of their ability to tailor hardware and software, includes AI acceleration in their CPU cores. Tough to say what an equilibrium would look like, but I certainly wouldn't dismiss AVX-512 and AMX as useless for either company.

That's kinda the wrong argument for cores that go into desktop and server CPUs. For mobile phones, so-called "edge" AI stuff is OK: some sort of dedicated accelerator is there to provide the TOPS needed, while serious work is done on GPUs and dedicated accelerators. So phone and other dedicated accelerators are fine. The jury is still out on AMX.

Let's set aside the portion of AMD's/Intel's chips used for model training and focus on the inference part, where all those IFMA, VNNI, BF16, etc. extensions are used. That's where they get destroyed, with an order of magnitude better performance and an order of magnitude better efficiency, by contemporary GPUs.
It's good for marketing though; larger numbers are always better.

EDIT: here is my recent take on AVX512:

AVX512 is a complex topic that can be split into two parts:

1) The marketing-driven and misguided part, where a core can now do up to 2x512-bit FMAs and looks great on paper in GFlops, or the equally silly "neural network" pushes with byte multiplications or BF16 support that provides the unique benefit of accumulating results in registers larger than the operand size, and so on.
This ship has already sailed: the leading supercomputer has 50 GFlops-per-watt efficiency, and hint, it does not come from CPUs. GPUs already rule this roost, and I think this year NV and/or AMD might come out with a SKU that touches 100 TFlops on a gaming card.
The game has been over for AVX512 in this area for quite some time. No one cares about your FMA/IFMA throughput or latency when it is at minimum an order of magnitude slower and less efficient than GPUs.

2) The really useful part: AVX512 is an amazing instruction set. Despite the mess of support levels requiring Venn diagrams, it is actually very simple, and by ICL-SP you get some amazing instructions that let you parallelize plenty more algorithms and that enable completely new capabilities, where one AVX512 instruction does what would otherwise require a chain of instructions.
For example, going from an already very parallel and fast baseline, AVX512 gains 60% in a JSON parsing library, which is a very important gain if you deal with bulk data in that format:

And there are plenty of hashing, crypto, bit-bashing, etc. algorithms that can be made several times faster using the AVX512 GFNI instructions, which are some of the most flexible vector bit-manipulation instructions.

The fun thing is that this case does not even require 512-bit width; it could be done on AVX2's 256-bit registers perfectly fine, with some of the same speedups and efficiency gains. Intel has realised that somewhat, and they provide a subset of the very useful stuff on Gracemont. A minimal sketch of the kind of primitive involved is below.
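My own illustrative sketch (not the JSON library referenced above): a single masked byte-compare turns 64 input bytes into a 64-bit bitmap of quote positions, the kind of primitive SIMD parsers are built from. It assumes AVX-512BW (-mavx512bw); with AVX2 the same idea works at 256-bit width via _mm256_cmpeq_epi8 plus _mm256_movemask_epi8.

```c
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitmask with bit i set where buf[i] == '"', for one 64-byte block.
 * One compare-to-mask replaces a 64-iteration scalar compare loop. */
static uint64_t quote_mask(const char *buf) {
    __m512i block  = _mm512_loadu_si512(buf);
    __m512i quotes = _mm512_set1_epi8('"');
    return _mm512_cmpeq_epi8_mask(block, quotes);
}

int main(void) {
    char buf[64];
    memset(buf, ' ', sizeof buf);
    memcpy(buf, "{\"key\": \"value\"}", 16);
    /* Quotes at offsets 1, 5, 8, 14 -> 0x4122 */
    printf("%016llx\n", (unsigned long long)quote_mask(buf));
    return 0;
}
```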

Fun times ahead with AMD bringing AVX512 support across all its CPUs; that would really spur adoption and put a fire under Intel's bottom.
 
Last edited:

DisEnchantment

Golden Member
Mar 3, 2017
1,623
5,894
136
"ISA extensions for AI and AVX-512"
So are AI and AVX-512 two different things, or is it just a strange sentence?
When you see "ISA extensions for AI and AVX-512", it means they are part of the CPU ISA, i.e. running on the CPU, not on an accelerator block.
While the AI extensions could possibly mean new instructions, most likely they are instructions belonging to the AVX-512 family for now (like VNNI, BF16, etc.). But this is an unknown; AMD could introduce new instructions, though it is unlikely.
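For a concrete picture of one of those AVX-512-family AI instructions, here is a hedged sketch using AVX512-BF16 (present on Zen 4; compile with -mavx512bf16), where vdpbf16ps multiplies BF16 pairs and accumulates into FP32 lanes. The values are arbitrary:

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m512 a = _mm512_set1_ps(1.5f);
    __m512 b = _mm512_set1_ps(2.0f);

    /* Pack two FP32 vectors into one vector of 32 BF16 values. */
    __m512bh abh = _mm512_cvtne2ps_pbh(a, a);
    __m512bh bbh = _mm512_cvtne2ps_pbh(b, b);

    /* 32 BF16 products, accumulated pairwise into 16 FP32 lanes:
     * each lane = 1.5*2.0 + 1.5*2.0 = 6.0 */
    __m512 acc = _mm512_dpbf16_ps(_mm512_setzero_ps(), abh, bbh);

    float out[16];
    _mm512_storeu_ps(out, acc);
    printf("%f\n", out[0]); /* 6.000000 */
    return 0;
}
```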

Some AI library that works with the integrated GPU as accelerator?
But you would use your bigger GPU to do that.
It is mostly for inferencing. They are integrating the AIE engine.
But only for the Phoenix SoCs; it seems it is not there for Raphael.
For training, GPUs are best suited because the process inherently needs lots of data and parameters, which play to GPUs' large memory throughput and parallel nature.

For better or worse, in the real world, a lot of AI is done on CPUs
I believe you meant inferencing only, otherwise it is not really true.
 

biostud

Lifer
Feb 27, 2003
18,281
4,806
136
Even without IPC improvements, Zen 4 is nearly there on clock speeds alone. Zen 3D doesn't see impressive gains in every title, and once you average it out, it's easily within the realm of possibility for Zen 4 to beat it on average. Here's a compilation of results from TechSpot:


We don't have final clocks for a 7800X yet, but even if it's only the 5.5 GHz we've already seen, that would be a 17% improvement over the stock 4.7 GHz of the 5800X. Even if we're more generous and use a value of 4.85 GHz, that still gets us to 13%. Both are easily within the margin of error for the average. There will almost certainly be games where Zen 3D reigns supreme, but there are games where the extra cache doesn't do anything and the vastly higher clock speeds will.



The issue with Zen 3D was that the stacked cache had much tighter voltage limits. If AMD is able to achieve the higher clock speeds on Zen 4 without having to push the voltage, I doubt we'll lose much. Alternatively, if they figure out how to make their V-Cache more tolerant of the higher voltages that a CPU can reach, they won't have this issue either. We just don't have any solid information to make a definite statement either way.
Also, we know Zen 4 will come with V-Cache models as well, and if they can get them to run at least 5.5/5.3 GHz, they're going to do pretty well in games.
 

randomhero

Member
Apr 28, 2020
183
249
116
Does anyone have a link to the AMD FAD video (or a transcript) that doesn't require registration?
I want to hear the exact wording regarding the roadmaps. A few news outlets reported Zen 5 and RDNA 4 coming out by 2024. That would mean quite a fast succession of products, but reporting can be really misleading.
 

turtile

Senior member
Aug 19, 2014
617
296
136
Does anyone have a link to the AMD FAD video (or a transcript) that doesn't require registration?
I want to hear the exact wording regarding the roadmaps. A few news outlets reported Zen 5 and RDNA 4 coming out by 2024. That would mean quite a fast succession of products, but reporting can be really misleading.

You don't need to give a real email address to get to the video link, and all of the slides are available without registering on the webcast page.

In the Q&A, Lisa Su stated that Zen 5 will have both 4nm and 3nm CCD designs. The slides seem to show the 3D and 'c' versions coming out in 2024, which seems to imply that Zen 5 itself will be out in less than two years. Mike Clark alluded to the design being finished in the January interview here, so it sounds plausible.
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,153
1,099
136
You don't need to give a real email address to get to the video link, and all of the slides are available without registering on the webcast page.

In the Q&A, Lisa Su stated that Zen 5 will have both 4nm and 3nm CCD designs. The slides seem to show the 3D and 'c' versions coming out in 2024, which seems to imply that Zen 5 itself will be out in less than two years. Mike Clark alluded to the design being finished in the January interview here, so it sounds plausible.
Intel's 7nm Meteor Lake CPUs (Intel 4) will be out by the end of 2023. Intel has already said they are accelerating their processes and designs because they fell behind AMD. If Zen 4 is only 30-60 days ahead of Raptor Lake, that means a lot, especially if Raptor Lake outperforms or comes very close to Zen 4. Remember, Zen 4 gets a silicon die shrink this round, while Intel goes down from 10nm to 7nm in the next generation. So Intel gets a huge benefit in fabrication next year.

So it's possible Intel can leave AMD in the rear-view mirror or smash them like a bug. That is why I said over a month ago that Zen 4 needed to be out on the market yesterday.
 

inf64

Diamond Member
Mar 11, 2011
3,706
4,050
136
So it's possible Intel can leave AMD in the rear-view mirror or smash them like a bug.
I'm not sure if you're serious or joking. If serious, you've lost your grip on reality. Intel is not competing in a vacuum, and AMD is not standing still. If Zen 5 launches on time, Intel had better hope they somehow achieve parity, as they will be going up against massive 256-core (fat/wider) Zen 5 parts in 2024.
 