Question Raptor Lake - Official Thread


Hulk

Diamond Member
Oct 9, 1999
4,269
2,089
136
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From AnandTech's Intel process roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% improvement in performance per watt (PPW)
Last non-tiled consumer CPU as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitecture changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current Z690 motherboards? If so, that could be a major selling point for people to move to ADL rather than wait.
 
Reactions: vstar

Schmide

Diamond Member
Mar 7, 2002
5,587
719
126
Meanwhile, HWUB does their testing at GPU limited settings or with RT disabled, which paints a false picture of the gaming capabilities of these newer CPUs. But if it makes AMD look good, I suppose it's alright.

The above is absolutely false and borders on libel. You shouldn't review and comment on things you do not watch.

I just watched the Hardware Unboxed GPU review of Hogwarts Legacy. Almost everything you say above is contradicted by the video. They test at all settings and resolutions, RT on and off, the Hogwarts grounds and Hogsmeade.

Please don't complain about the lack of CPU variance. They mention it, but state that it would need to be covered in a future video. Complaining about that would be disingenuous.

Edit: You don't even have to watch it. Just read the chapter index:

Code:
00:00 - Welcome back to Hardware Unboxed
00:54 - Test System Specs
01:02 - 1080p Medium
02:52 - 1440p Medium
04:20 - 4K Medium
05:19 - 1080p Ultra
07:00 - 1440p Ultra
07:54 - 4K Ultra
09:12 - 1080p Ultra, Ray Tracing Ultra
10:11 - 1440p Ultra, Ray Tracing Ultra
11:05 - 4K Ultra, Ray Tracing Ultra
11:34 - Hogsmeade
12:17 - Radeon 1080p Medium
12:36 - Radeon 1440p Medium
13:02 - Radeon 4K Medium
13:37 - Radeon 1080p Ultra
14:01 - Radeon 1440p Ultra
14:14 - Radeon 4K Ultra
14:24 - Radeon 1080p Ultra RT
14:43 - Radeon 1440p Ultra RT
14:55 - Radeon 4K Ultra RT
15:07 - GeForce 1080p Medium
15:21 - GeForce 1440p Medium
15:33 - GeForce 4K Medium
15:55 - GeForce 1080p Ultra
16:16 - GeForce 1440p Ultra
16:44 - GeForce 4K Ultra
16:59 - GeForce 1080p Ultra RT
17:40 - GeForce 1440p Ultra RT
17:59 - GeForce 4K Ultra RT
18:14 - Hogsmeade 1080p Medium
18:27 - Hogsmeade 1440p Medium
18:46 - Hogsmeade 4K Medium
18:57 - Hogsmeade 1080p Ultra
19:12 - Hogsmeade 1440p Ultra
19:25 - Hogsmeade 4K Ultra
19:34 - Hogsmeade 1080p Ultra RT
20:38 - Hogsmeade 1440p Ultra RT
21:05 - Hogsmeade 4K Ultra RT
21:32 - Final Thoughts
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Since you don't watch them, the inaccuracies in your summation really show. Their latest video is titled "Intel keeps dominating Intel Rolls AMD's ryzen" (note: I haven't watched it, so it could go either way). Also, they don't avoid RT and often have a segment for it in most videos.

I used to be a subscriber to their channel, and I would also recommend them to other people. But I could no longer tolerate their lazy and half-assed methodology, which had tons of anomalies.

RT has its merits, but they only reach so far. In a sandbox game, where fidelity reigns supreme, it shows its worth; in competitive titles, not so much. I wonder how big the viewer group of FPS-plus-RT extremists is? Regardless, they are two different metrics, and you seem to put all your weight on one side of the issue, causing a conflict with Hardware Unboxed. It is what it is.

I can't think of any major game releasing this year that doesn't have RT in some form. Competitive titles are different because they put a premium on performance, unless they have an SP campaign. But RT is only projected to increase in prevalence.

One day, games will have full RT lighting without the hybrid element.

There is a place for RT testing. Spider-Man really showed that, although in a strange way. (I believe it was compiling, or at the very least updating, the shaders in real time, but I didn't go down that rabbit hole.) For the most part, though, it's mostly a GPU load. Reducing resolution seems counterintuitive to eliminating the bottleneck.

Spider-Man Remastered was very CPU intensive due not only to the use of RT, but also because the assets had to be decompressed and streamed in by the CPU. Spider-Man Remastered is the only title I've played so far that actually utilized my entire CPU... including efficiency cores.

Final thought on Hardware Unboxed: their brand is their brand. If you choose to drag them over the coals due to your agenda, that's on you. I find their metrics clear, concise, and forthright. They do the work, so they get to call their shots.

I used to regard them highly, but their testing methods leave much to be desired compared to other reviewers. Nowadays, the German review sites are to me the best reviewers for CPUs and GPUs... and I don't even speak German.
 

Hitman928

Diamond Member
Apr 15, 2012
5,392
8,281
136
I'd have to see where the "vast majority" of people who follow them said they don't care about testing with RT on. I guess they must be all AMD fans, which would explain a lot.

They run polls on their Twitter from time to time to get feedback from their viewers. My guess is you see anyone who disagrees with you as an "AMD fan," no matter the situation.

Yep, and like I said last time, there's no way on Earth anything other than the GPU could account for a 120w power draw in a gaming workload. I could probably stretch my mind and swallow this nonsensical result if it were some CPU bound test, but this was a gaming workload that was GPU bottlenecked.

And this is where you are wrong. Using a different motherboard with everything else the same could account for half of that alone and that is on the same platform. In this situation you are talking about a completely different platform, with faster memory, and a situation where 1 CPU uses significantly more power when gaming. Add to that PSU efficiency curves and 120W is not out of the realm of possibility. A possible breakdown could be 40W for the CPU, 50W for the motherboard, and 30W split between memory and PSU efficiency differences. Maybe the fans are accounting for some small amount of that too if the cooling needs to work harder to hit whatever fan curve is set for each.
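A breakdown like that is easy to sanity-check with back-of-the-envelope arithmetic. Everything in the sketch below is hypothetical (the component wattages and PSU efficiencies are illustrative stand-ins, not measurements), but it shows how component deltas plus a worse PSU efficiency curve can stack into a large wall-power gap:

```python
# Rough wall-power model: sum the DC component draws, then divide by PSU
# efficiency to get AC power at the wall. All numbers are hypothetical.

def wall_power(component_watts, psu_efficiency):
    """AC power drawn at the wall for a given set of DC component loads."""
    dc_total = sum(component_watts.values())
    return dc_total / psu_efficiency

# Platform A: CPU-heavy but efficient board; Platform B: hungrier board/memory.
platform_a = {"cpu": 120, "motherboard": 60, "memory": 15, "gpu": 350, "fans": 10}
platform_b = {"cpu": 80, "motherboard": 110, "memory": 25, "gpu": 350, "fans": 15}

# A slightly worse efficiency curve on platform B amplifies the component gap.
delta = wall_power(platform_b, 0.88) - wall_power(platform_a, 0.92)
print(f"wall-power delta: {delta:.0f} W")
```

The point is only that modest per-component differences, multiplied through PSU efficiency, can add up to tens of watts at the wall without any single part being wildly off.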
 
Last edited:

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
28,664
21,174
146
Competitive titles not so much. I wonder how big the viewer group for FPS + RT extremists is?
It's a very small and vocal minority. I leave it to your deductive reasoning to determine why that is.

Linus had nearly 87K responses to his poll about RT. Only a very small percentage responded they had used it in the last week. The number that had tried it in the last month was a small percentage as well. Almost 66% of respondents voted that they didn't even care about it.

BTW, I am enjoying this thread immensely now. Lively discussion and diverse opinions. My favorite part is reading a blue blood, painting us the picture of a company that has borne the moniker Chipzilla for decades, as being unfairly treated by big reviewers. You can't make this stuff up; it's comedy gold.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
The above absolutely false and borders on libel. You shouldn't review and comment on things you do not watch.

That comment was in reference to their CPU reviews, not GPU reviews. They wouldn't dare not include RT in a GPU review.

They also will just bench anywhere in the game rather than attempt to find CPU intensive areas. This is why game testing can be much more complicated than what is seen on the surface. A good reviewer will isolate and test certain areas in a game that are either CPU or GPU dependent and use them in their reviews.

And not using CPU intensive settings in a CPU test. For example, when they tested the 13900KS, they didn't even bother to put crowd density to max settings.


Which explains why their benchmark looked like this, with all the CPUs bunched up together. If they had even maxed out the crowd density, (and enabled RT) there would have been a lot more variance between the CPUs.

 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
And this is where you are wrong. Using a different motherboard with everything else the same could account for half of that alone and that is on the same platform. In this situation you are talking about a completely different platform, with faster memory, and a situation where 1 CPU uses significantly more power when gaming. Add to that PSU efficiency curves and 120W is not out of the realm of possibility. A possible breakdown could be 40W for the CPU, 50W for the motherboard, and 30W split between memory and PSU efficiency differences. Maybe the fans are accounting for some small amount of that too if the cooling needs to work harder to hit whatever fan curve is set for each.

Talk about hand-wavy. The 13600K in a gaming workload has a slight power draw increase over the 7600X, especially in a GPU-limited game, but also has increased performance. I don't know of any reviewer that ranked the 7600X as faster in gaming than the 13600K. DDR5 uses negligible power, even at high frequencies; we're talking about less than 10W. The motherboard they used for the 7600X is a higher-end motherboard than what they used for the 13600K, an MSI MEG X670E ACE EATX vs an MSI Z790 Tomahawk ATX, so I doubt that the chipset is to blame here. And it was just in that one particular game, A Plague Tale: Requiem.

Tomshardware, TechPowerUp and many other reviewers also ranked the 13600K as faster than the 7700x in gaming, much less the 7600x. AMD Unboxed is the only website that shows the 7600x beating the 13600K in gaming that I could find.




 
Reactions: Henry swagger

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
For someone who doesn't watch HWUB you seem to have a lot of information about their recent videos and charts.

I was a subscriber for years and watched most of their videos, so I'm very familiar with how they conduct their tests. I ignored a lot of discrepancies in their testing for a long time, but eventually I could no longer tolerate it.

I think they started going downhill when Steve started doing all the reviews. Dude cuts too many corners and is sloppy.
 

Hitman928

Diamond Member
Apr 15, 2012
5,392
8,281
136
Talk about hand wavey. The 13600K in a gaming workload has a slight power draw increase over the 7600x, especially in a GPU limited game, but also has increased performance. I don't know of any reviewer that ranked the 7600x as faster in gaming than the 13600K. DDR5 uses negligible power, even at high frequencies. Talking about less than 10w. The motherboard that they used for the 7600x is a more high end motherboard than what they used for the 13600K. MSI Meg x670e ACE EATX vs MSI Z790 Tomahawk ATX, so I doubt that the chipset is to blame here. And it was just in that one particular game, A Plague Tale Requiem.

Tomshardware, TechPowerUp and many other reviewers also ranked the 13600K as faster than the 7700x in gaming, much less the 7600x. AMD Unboxed is the only website that shows the 7600x beating the 13600K in gaming that I could find.





What do you consider a slight power draw increase?

As for performance, there are multiple other outlets that have the 7600x beating, or very near the 13600k in gaming.






For a couple of them, they didn't compile an average themselves, so I had to calculate it:

7600X relative gaming performance vs. 13600K:
Eurogamer: 97.4%
Tweaktown: 102.6%
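For outlets that only publish per-game charts, a relative average like the one above can be compiled with a geometric mean of per-game ratios. A quick sketch (the game count and FPS figures here are invented for illustration, not taken from any review):

```python
from math import prod

# Hypothetical per-game average FPS for the two CPUs across the same titles.
fps_7600x  = [142, 188, 95, 210]
fps_13600k = [150, 180, 101, 205]

# Geometric mean of per-game ratios keeps one high-FPS title from
# dominating the average the way an arithmetic mean of FPS would.
ratios = [a / b for a, b in zip(fps_7600x, fps_13600k)]
relative = prod(ratios) ** (1 / len(ratios)) * 100
print(f"7600X relative to 13600K: {relative:.1f}%")
```

This is the standard way review aggregates are built, which is why small methodology differences (game list, test scene, memory) can swing the headline percentage a few points either way.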

BTW, if someone was deciding between a 7600X and a 13600K for gaming today, I would suggest they get the 13600K, because I think it will age better with the extra cores. The only thing that might change that is if they strongly felt they would later upgrade to Zen 5/6 on the same motherboard, but I usually don't advise buying for future platform upgrades when they are so far away.
 

Henry swagger

Senior member
Feb 9, 2022
389
246
86
Talk about hand wavey. The 13600K in a gaming workload has a slight power draw increase over the 7600x, especially in a GPU limited game, but also has increased performance. I don't know of any reviewer that ranked the 7600x as faster in gaming than the 13600K. DDR5 uses negligible power, even at high frequencies. Talking about less than 10w. The motherboard that they used for the 7600x is a more high end motherboard than what they used for the 13600K. MSI Meg x670e ACE EATX vs MSI Z790 Tomahawk ATX, so I doubt that the chipset is to blame here. And it was just in that one particular game, A Plague Tale Requiem.

Tomshardware, TechPowerUp and many other reviewers also ranked the 13600K as faster than the 7700x in gaming, much less the 7600x. AMD Unboxed is the only website that shows the 7600x beating the 13600K in gaming that I could find.




YouTubers don't test as in-depth as websites. Websites are better for reviews.
 
Reactions: inf64

coercitiv

Diamond Member
Jan 24, 2014
6,257
12,197
136
The problem with memory tuning is that it is not for the faint of heart. Most users just select an XMP profile and it either works or it doesn't. The problem is that vendors are hell-bent on making profiles work, and in the process they destroy performance with poor secondary/tertiary timings and also love feeding in ridiculous voltages, like VCCSA, that are not actually required for said speeds.
Here's one example linked to what @JoeRambo is talking about: HUB just published a new video on memory scaling for both RPL and Zen 4. The conclusion is that Zen 4 is very sensitive to memory timings. Judging by the primary timings, you would think the second DDR5-6000 kit HUB used would outperform the one given to them by AMD. It turns out it brings a 10% loss in minimum FPS instead. Also, notice the brutal drop in performance with the cheap DDR5-6000 kit: it shows that bandwidth matters less than latency in this game.

 

coercitiv

Diamond Member
Jan 24, 2014
6,257
12,197
136
BTW, if someone was deciding between a 7600x and 13600k for gaming today, I would suggest to them to get the 13600k because I think it will age better with the extra cores.
If someone was deciding between 7600X and 13600K for gaming today I would first ask them to adjust their options to 7700X ($340) versus 13600K ($320) because the 7600X is in a different price class.

And if I had to choose between these 2 CPUs today, I would make a choice based on motherboard availability in my region (form factor, features, price). If motherboard comparison were a tie, I would go AM5 for later upgrades. Likewise, if I had a fast DDR4 kit at hand, I would go 13600K for the extra value today.

RPL and Zen4 are so tied in performance for the average gamer that the CPUs themselves are no longer the main deciding factor, but rather the rest of the build (board, memory, platform longevity vs. DDR4 compatibility).
 

Hitman928

Diamond Member
Apr 15, 2012
5,392
8,281
136
Here's one example linked to what @JoeRambo is talking about: HUB just published a new video on memory scaling for both RPL and Zen4. Conclusion is Zen4 is very sensitive to memory timings. You would think from the primary timings matter more and the second DDR5 6000 kit used by HUB would outperform the one given to them by AMD. Turns out it brings a 10% loss in min FPS instead. Also, notice the brutal drop in performance with the cheap DDR5 6000 kit: it shows that bandwidth matters less than latency in this game.


You know, someone who was interested might see something really strange with HWUB's results from this video, like really strange. Just look at their average results when both the 13900K and 7700X are using slow memory: they show the 13900K as only 19% faster. Clearly they have no idea what they are doing and can't be trusted.



Obviously we need to refer to the reviewers who know how to benchmark CPUs and do it right. Let's check...




Oh wait. What an unexpected result. When both outlets use slow memory for both CPUs, their results line up exactly. It's almost as if Zen 4 gains significantly more from faster memory than RPL does, and reviewers who use faster memory for both would show Zen 4 being much more competitive with RPL than those who restrict their memory speeds. If anything, Computerbase is overselling Zen here, since they give Zen 4 even slower memory than RPL, yet the 13900K is still only 19% faster. If only we had reviewers who use fast memory on both platforms and test a large number of games, so we could see how the CPUs perform across a wide variety of gaming workloads. I guess we can only hope.
 
Last edited:

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,345
4,967
136
If someone was deciding between 7600X and 13600K for gaming today I would first ask them to adjust their options to 7700X ($340) versus 13600K ($320) because the 7600X is in a different price class.

And if I had to choose between these 2 CPUs today, I would make a choice based on motherboard availability in my region (form factor, features, price). If motherboard comparison were a tie, I would go AM5 for later upgrades. Likewise, if I had a fast DDR4 kit at hand, I would go 13600K for the extra value today.

RPL and Zen4 are so tied in performance for the average gamer that the CPUs themselves are no longer the main deciding factor, but rather the rest of the build (board, memory, platform longevity vs. DDR4 compatibility).

If you're willing to invest $500+ into a memory OC Z-series board, $400+ for highly binned Hynix A-die, and $700 for a 13900KS and run tuned DDR5-7800+ then you will stand as the king of most benchmarks.*

*Until X3D chips launch
 
Reactions: lightmanek

Mopetar

Diamond Member
Jan 31, 2011
7,941
6,242
136
So in light of this last HWUB video, can we finally move past this "RPL significantly faster than Zen 4 in latest RT heavy games?"

You're probably going to have to find some other niche use case where some combination of settings, workload, and hardware gives Intel a pretty clear advantage or I doubt you'll have much success getting the people desperately clinging to RT gaming to loosen their grip.

I wager dollars against doughnuts that Intel is winning with whatever the UserBenchmark score measures these days. Perhaps if we all just pretend it's really important they'll latch on to that instead and we can have more sensible conversations surrounding other topics.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
What do you consider a slight power draw increase?

Computerbase.de did some testing on gaming power consumption and found that the 13600K used 88W on average across 12 titles, while the 7600X used 60W: just a 28W difference. Computerbase.de also ranked the 13600K 14% faster than the 7600X, and 7% faster than the 7700X, at 720p with an RTX 3090 Ti.

Core i9-13900K, i7-13700K & i5-13600K: Gaming-Könige im Test: Benchmarks in Games - ComputerBase
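Plugging the ComputerBase figures quoted above into a quick performance-per-watt calculation makes the efficiency side of the comparison explicit (performance is indexed to the 7600X at 100, per the 14% figure in the post; the arithmetic, not the data, is the illustration here):

```python
# Gaming power and relative performance from the ComputerBase numbers
# cited above (720p averages; perf indexed to the 7600X = 100).
power_w = {"13600K": 88, "7600X": 60}
perf    = {"13600K": 114, "7600X": 100}  # 13600K ~14% faster per the post

for cpu in power_w:
    ppw = perf[cpu] / power_w[cpu]
    print(f"{cpu}: {ppw:.2f} perf points per watt")
```

By this measure the 7600X comes out ahead on efficiency even while losing on absolute performance, which is why "faster" and "more efficient" can both be true depending on which metric a reviewer leads with.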

It makes you wonder why there are such varying conclusions. I think a lot of it has to do with the settings used, available GPU power, and the types of games. Regardless of what some may think, the RTX 4090 can be a bottleneck at 1080p for these new CPUs. The only time 1080p isn't a bottleneck is if the game is not properly threaded, is using the lowest settings, or is running DX11.

720p is the best method to ensure there is no GPU bottleneck. And most games hammer the GPU more than the CPU. The Linus Tech Tips screenshot you posted is a good example of that: they ran their gaming tests at 1080p with the highest presets, which made their results GPU-bound.
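The bottleneck argument can be reduced to a toy model: the frame rate you measure is roughly the minimum of what the CPU and the GPU can each deliver in isolation. All the frame rates below are invented purely to illustrate the shape of the effect:

```python
# Toy bottleneck model: measured FPS ~ min(CPU-limited FPS, GPU-limited FPS).
def measured_fps(cpu_limit, gpu_limit):
    return min(cpu_limit, gpu_limit)

cpu_a, cpu_b = 240, 200            # hypothetical CPU-limited frame rates
gpu_1080p_ultra, gpu_720p = 180, 320  # hypothetical GPU-limited frame rates

# At 1080p Ultra the GPU caps both CPUs, so the chart shows a tie...
assert measured_fps(cpu_a, gpu_1080p_ultra) == measured_fps(cpu_b, gpu_1080p_ultra)
# ...while at 720p the GPU ceiling lifts and the CPU gap becomes visible.
assert measured_fps(cpu_a, gpu_720p) > measured_fps(cpu_b, gpu_720p)
print("GPU-bound settings mask CPU differences")
```

This is why two outlets testing the same CPUs can produce "bunched up" or widely separated charts depending only on resolution and preset.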

As for performance, there are multiple other outlets that have the 7600x beating, or very near the 13600k in gaming.

I see that, but compared to the bulk of reviews it's definitely not common, I'd wager. Also, Eurogamer used an RTX 3090 at 1080p, which means a GPU bottleneck. TweakTown used a 3090 Ti, but at GPU-limited settings, which is why their benchmarks looked bunched up compared to other outlets.

 
Reactions: Henry swagger

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You know, someone who was interested might see something really strange with HWUB's results from this video, like really strange. Just look at their average results when both the 13900k and 7700x are using slow memory, they are showing the 13900k as only 19% faster. Clearly they have no idea what they are doing and can't be trusted.

For the record, I never had any issues with using faster memory. I myself am using it. But people need to understand why reviewers may opt to use the standard memory configuration for a CPU. The number one reason cited is because it's the "out of the box experience" and guaranteed to work with no fuss. XMP or Expo memory aren't guaranteed, because they're not standardized. Then there's the question of how far are you willing to go to optimize the setup?

In the HWUB video, why stop at DDR5 7200 for RPL? I'm running at DDR5 7600, and others are running at DDR5 8000 and above. All of this is arbitrary of course, but my point is that Zen 4 is going to cap out at much lower memory frequencies than RPL which brings up fairness issues.

Also, the fact that Raptor Lake gets significantly higher performance at DDR5 5200 (which is below RPL's stock memory speed, btw) than Zen 4 does at DDR5 5200 (the stock memory speed for Zen 4) says that Zen 4 has an Achilles heel, and that is memory performance... which is why it is so sensitive to memory frequency and timings.

Obviously we need to refer to the reviewers who know how to benchmark CPUs and do it right. Let's check. . .

Nothing wrong with using the standardized memory settings. That was the status quo for a long time. Zen 4 being pummeled by RPL when it's using standardized memory demonstrates a weakness in the CPU's memory controller/interface. Zen 4 practically needs overclocked memory to perform up to par, whereas RPL does not.

That's not really a problem for the tester, it's a problem for AMD and Zen 4 for making an underperforming memory controller.
 
Reactions: Henry swagger

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Well, to illustrate my point about how there is massive room both for reviewers to make honest mistakes and for unscrupulous ones to shape results as their vendor requires, I ran two tests with the exact same 5.5 GHz clock, 5 GHz uncore, 8C RPL, same memory, same primary timings, same everything:









As you can see, there is a gap of one second between these results with the EXACT same primaries and everything. What would the review boys show you on their lovely graphs? Nothing but the effect of secondary/tertiary timings detuned to whatever the motherboard's auto values happen to be for these settings.

1) Now obviously PyPrime is very sensitive to memory, but so are games.
2) The same applies to AMD.
3) I was not doing anything truly evil here: 8.118 is a great score, PD is off, RTLs are reasonable. If I really wanted to make my 7950X shine vs RPL, I could do it, easily.

Even the best-intentioned reviewer depends on the motherboard having reasonable secondary/tertiary timings, and there is variance between vendors in how inclined each one is to make things just work at XMP/EXPO.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Even best intending reviewer is depending on motherboard to have reasonable secondary/tertiary timings and there is variance between vendors and how inclined said vendor is to make things just work "@XMP/EXPO".

Quoted for truth. Motherboards can vary immensely when it comes to secondary and tertiary timings. I found this out myself when I started messing around with subtimings. Regardless of the frequency I used, I was never able to get under 60 ns latency; I tried everything from DDR5 7200 to DDR5 8000.

Come to find out, the problem was that my motherboard used very relaxed secondary and tertiary timings, especially tREFI. I found tREFI to have an enormous impact on DDR5 memory performance, which I believe is because it scales with frequency. My stock value was 7,000 or something, and when I changed it to 31,000 (not even maxed out), my latency dropped from 60-plus nanoseconds to 54 ns, and my read, write, and copy bandwidth all increased.
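The tREFI effect is easy to quantify: tREFI sets how many memory cycles pass between refresh commands, and each refresh occupies the bank for tRFC cycles, so raising tREFI shrinks the fraction of time the DRAM is unavailable. The sketch below uses the 7,000 and 31,000 tREFI values from this post, but the tRFC figure of 700 cycles is a hypothetical value in a realistic DDR5 range, not a measured one:

```python
# Fraction of time the DRAM is busy refreshing: every tREFI cycles,
# a refresh command occupies the bank for tRFC cycles.
def refresh_overhead(trefi_cycles, trfc_cycles):
    return trfc_cycles / trefi_cycles

# tREFI values from the post; tRFC of 700 cycles is a hypothetical example.
stock = refresh_overhead(trefi_cycles=7_000,  trfc_cycles=700)
tuned = refresh_overhead(trefi_cycles=31_000, trfc_cycles=700)
print(f"stock tREFI: {stock:.1%} of time spent refreshing")
print(f"tuned tREFI: {tuned:.1%} of time spent refreshing")
```

Cutting refresh overhead from roughly a tenth of the time to a few percent lines up with the kind of latency and bandwidth gains described above, though running refresh that loosely also leans on the DIMM's thermal headroom.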

With tuned sub timings, I have better memory performance at DDR5 7600 than I did using XMP and stock values at DDR5 8000.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,723
3,912
136
Quoted for truth. Motherboards can vary immensely when it comes to secondary and tertiary timings. I found this out myself when I started messing around with subtimings. Regardless of the frequency I used, I was never able to get under 60 ns latency; I tried everything from DDR5 7200 to DDR5 8000.

Come to find out, the problem was that my motherboard used very relaxed secondary and tertiary timings, especially tREFI. I found tREFI to have an enormous impact on DDR5 memory performance, which I believe is because it scales with frequency. My stock value was 7,000 or something, and when I changed it to 31,000 (not even maxed out), my latency dropped from 60-plus nanoseconds to 54 ns, and my read, write, and copy bandwidth all increased.

With tuned sub timings, I have better memory performance at DDR5 7600 than I did using XMP and stock values at DDR5 8000.

Did you notice a difference in actual use, or just in benchmarks?
 