Gemini (FuryX2) looms near


Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
They know NV's loyalists will go buy another mid-range $550-600 GP204, so would NV be interested in launching their flagship in the spring? GP100 may come out as a Tesla/Quadro card and cost $3000+. I could see that. Another alternative could be a heavily cut-down GP100 aka GTX780 that NV can milk for 12 months and then the real flagship comes in 2017.

It's okay RS, you can call us fanboys if it makes you feel any better.

lol. Dude. You're living in the past. Moore's law died. It's time to accept it. There's nothing about NV's "greed" about that.

AMD's approach has been to release their flagship right away and then sit on their hands for two years. Do you prefer that? The end result is the same.



"Taiwan Semiconductor Manufacturing Co. has successfully produced the first samples of Nvidia Corp.’s code-named GP100 graphics processing unit. Nvidia has already started to test the chip internally and should be on-track to release the GPU commercially in mid-2016." ~ Kit-Guru

Yup. Don't know where Shintai's getting his crazed schedule from.


Getting the Mac Pro design isn't about making a lot of $ or market share. It's about mind-share for AMD, and for Apple/AMD it's also about pushing OpenCL and moving away from proprietary closed standards of CUDA.

If Apple chooses AMD for the next Mac Pro, it also reinforces the point that the sales and profits were meeting Apple's expectations. If the current Mac Pro sold poorly as a result of having an AMD GPU per the customer feedback, then surely Apple would do everything possible to switch back to CUDA/NV.

Doubtful it's all about mindshare. The Mac Pro is a bit of a joke among professionals. Go to the visual effects studios and see how many use a Mac Pro. Winning that contract is unimportant.



NV's push with CUDA starting with the G80 generation in 2007, and NV's dominance in the GPGPU market with supercomputer design wins, suggests they strategically locked most of the market into their eco-system. First-mover advantage and no competition from Intel/AMD or anyone else ensured that NV built the GPGPU compute eco-system uncontested. As a result, many popular compute apps/programs are still using CUDA. It will take time for the most popular programs to be converted to OpenCL.

If OpenCL could take full advantage of AMD's shaders, then sure, it would dominate NV in performance and perf/watt for compute. In the real world, you still need the software infrastructure to take advantage of AMD's compute capabilities.





The other issue is Fury is 4GB limited

Basically yes.

As a tangent, there's an interesting thread on Beyond3D where one poster, RecessionCone, talks about why AMD is losing in the HPC space.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
It's okay RS, you can call us fanboys if it makes you feel any better.

I just found it ironic that the person predicting AMD to be 6-9 months late with their Arctic Islands roll-out spent $500 on a mid-range Kepler and $550 on a mid-range Maxwell and, to my knowledge, has never owned an AMD product, or at least not in the last 10 years. In other words, even if AMD released a card 70% faster than the 980Ti tomorrow, it's not as if it would matter. Furthermore, it's nothing personal - just business and facts: NV managed to sell mid-range GPUs for $500-550, so why, according to him, would they release a $650 consumer GP100 in the spring?

lol. Dude. You're living in the past. Moore's law died. It's time to accept it. There's nothing about NV's "greed" about that.

Where did I blame NV? NV is doing what any successful capitalistic organization would -- maximizing profits by selling mid-range products at flagship prices. You can accept it if you like; I won't. I'll pay mid-range prices for mid-range products or wait until real flagships launch. In any case, your post right there only supports my earlier point about GP100. You got all defensive about $500-550 mid-range cards in 2016, so clearly it bothers you that it's the truth.

AMD's approach has been to release their flagship right away and then sit on their hands for two years. Do you prefer that? The end result is the same.

Who said anything about AMD doing it better? It's not about AMD vs. NV but them vs. our wallets.

Doubtful it's all about mindshare. The Mac Pro is a bit of a joke among professionals. Go to the visual effects studios and see how many use a Mac Pro. Winning that contract is unimportant.

Did you read my reply? If the current Mac Pro were a joke and sold like crap, and consumers demanded CUDA, don't you think Apple would respond with a CUDA GPU for 2016 and beyond? Apple isn't that stupid. If they could increase sales 50-100%, they would. According to your suggestion, AMD's GPUs made the Mac Pro a horrible product for professionals. If that were true, Mac Pro sales would have bombed over the last 600-700 days and Apple wouldn't even contemplate AMD GPUs.

As a tangent, there's an interesting thread on Beyond3D where one poster, RecessionCone, talks about why AMD is losing in the HPC space.

Ya, another instance of you not reading my posts. The discussion in that thread is the exact point I already made.

"The exact same bullshit has always applied, to 3DFX Glyde, to building a Direct X based rendering engine instead of a higher level open abstraction, to etc. etc. It's brilliant on Nvidia's part, it worked. Someone, anyone, could put out a GPU with 64gb of Ram and quadruple the performance of a Nvidia card and DNN builders would still hesitate. It's why Nvidia did what it did in the first place. Vendor lock in, software, hardware, etc. pretty much always costs more than it's worth. It's only after you've gone down the hole too far to get out that most tend to realize their mistake."

That guy RecessionCone just decided to cherry-pick some compute program made specifically for NV:

"A FirePro W9100 (5.24 TFlops peak) is 11X slower than an original Titan (4.5 TFlops peak) using CUDNN 2.0"

Do you actually believe that in a brand-agnostic compute application an NV card would be 11X faster than an AMD GPU? Ha!
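A quick back-of-the-envelope check of the peak numbers in that quote makes the point (a rough sketch, using only the figures quoted above):

```python
# Peak FP32 figures from the quoted post.
w9100_peak = 5.24   # TFLOPS, FirePro W9100
titan_peak = 4.5    # TFLOPS, original Titan

# On raw throughput the W9100 should be ~16% *faster*, so an 11x deficit in cuDNN
# says far more about the software stack than about the silicon.
print(w9100_peak / titan_peak)   # ~1.16
```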

Why doesn't that RecessionCone dude pull up open-source compute applications to see how good GCN is for compute? He doesn't want to and he doesn't care. He loves vendor lock-in, and frankly he probably uses whatever his employer buys, so he has zero choice in the matter.






Interesting how in open-source compute applications, Fury X is very competitive. That just reinforces my point that NV created the CUDA eco-system and now unless everyone moves to open-source applications, it's going to be almost impossible for Intel/AMD to compete.

NV's compute strategy is the same thing as GameWorks, just on the compute side, except they were quietly doing it for almost a decade. Now most of the professional HPC community is NV's b*tc* because NV locked them in -- it'll take everybody getting together to embrace open-source code/apps and abandon CUDA, or they are forever locked in.

It's no wonder AMD is losing in the HPC space, and so is Intel. It's not about AMD losing -- the big picture is NV created a system that locked almost everyone else out. To get back in, Intel/AMD/Samsung/ARM, etc. would need to create open-source software that becomes the industry standard and send free GPUs to these professionals. Not happening.
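For what it's worth, the "industry standard" part is the easy bit to show: a vendor-neutral kernel really does run unchanged on AMD, NV or Intel hardware; the hard part is rebuilding a decade of CUDA libraries on top of it. A minimal sketch (assuming PyOpenCL and NumPy are installed; the kernel is purely illustrative, not taken from any of the applications discussed):

```python
import numpy as np
import pyopencl as cl

# Picks whatever OpenCL device is available -- AMD, NVIDIA or Intel.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The same kernel source compiles for any vendor's OpenCL driver.
prg = cl.Program(ctx, """
__kernel void add(__global const float *a, __global const float *b, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
```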

==========

On topic of Fury X2, I already outlined why I thought it was not a good fit for the Mac Pro. As far as the gaming GPU goes, if they price it at $999, it could find its customers. At $1,499, not so much.
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Why would NV release a card 70-90% faster than 980Ti right away when they can repeat Kepler/Maxwell roll-out? They know NV's loyalists will go buy another mid-range $550-600 GP204, so would NV be interested in launching their flagship in the spring? GP100 may come out as a Tesla/Quadro card and cost $3000+. I could see that. Another alternative could be a heavily cut-down GP100 aka GTX780 that NV can milk for 12 months and then the real flagship comes in 2017.

The 6-9 months timeframe is also something you just made up, considering NV would have to launch in January 2016 for that to occur based on existing rumors for AI. It's already December 2015 and there are no credible rumors about a Pascal launch in January or even February 2016. It sounds more reasonable that both NV and AMD won't have anything worthwhile to buy on 14nm/16nm until Q2 2016 at the earliest.

"Taiwan Semiconductor Manufacturing Co. has successfully produced the first samples of Nvidia Corp.’s code-named GP100 graphics processing unit. Nvidia has already started to test the chip internally and should be on-track to release the GPU commercially in mid-2016." ~ Kit-Guru



Getting the Mac Pro design isn't about making a lot of $ or market share. It's about mind-share for AMD, and for Apple/AMD it's also about pushing OpenCL and moving away from proprietary closed standards of CUDA.

If Apple chooses AMD for the next Mac Pro, it also reinforces the point that the sales and profits were meeting Apple's expectations. If the current Mac Pro sold poorly as a result of having an AMD GPU per the customer feedback, then surely Apple would do everything possible to switch back to CUDA/NV.



Even if true, NV's push with CUDA starting with the G80 generation in 2007, and NV's dominance in the GPGPU market with supercomputer design wins, suggests they strategically locked most of the market into their eco-system. First-mover advantage and no competition from Intel/AMD or anyone else ensured that NV built the GPGPU compute eco-system uncontested. As a result, many popular compute apps/programs are still using CUDA. It will take time for the most popular programs to be converted to OpenCL.

If OpenCL could take full advantage of AMD's shaders, then sure, it would dominate NV in performance and perf/watt for compute. In the real world, you still need the software infrastructure to take advantage of AMD's compute capabilities.





The other issue is Fury is 4GB limited, but the current Mac Pro has a 6GB option. The current Mac Pro is also limited to HDMI 1.4a. Surely if Apple is updating it, they will want HDMI 2.0 capability, and Fiji doesn't provide that in its current form.

I think it makes more sense for Apple to update the Mac Pro in Mid-2016 when they can use Broadwell-E Xeons + next gen 8-12GB GPUs.

I'm not so sure Apple cares about HDMI 2.0. DP/mDP/TB is their preferred connection option.
 

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
AMD officially confirms, FuryX2 delayed until 2016 to align with VR

I'm honestly really skeptical about this approach. Even if we don't see the Greenland flagship until September(ish), we're talking about a market window of no more than 6 months for this card.

Also, the initial VR game selection is likely to be quite poor, and many games are little more than half-assed exploration games, like Crytek's "The Climb", which basically consists of a few mountains to climb and that's it.

If there were a decent selection of genuine AAA games at launch, I could see the FuryX2 selling to the ultra-hardcore crowd if it were incredibly well optimised in terms of frame latency etc., but as things stand the entire GPU just smells of "too little, too late". Sept/October of this year was probably the last moment they could have launched it and had it make (some) sense in the timing of the market, with around a year left until the next GPUs hit the shelves and just in time for the major AAA releases that happened in Nov/Dec.
 

mysticjbyrd

Golden Member
Oct 6, 2015
1,363
3
0
Hardware.fr
Q. On the E3 Livecast, Lisa committed to shipping Fiji Gemini by Xmas. What happened? Is Fiji Gemini delayed?
A. The product schedule for Fiji Gemini had initially been aligned with consumer HMD availability, which had been scheduled for Q415 back in June. Due to some delays in overall VR ecosystem readiness, HMDs are now expected to be available to consumers by early Q216. To ensure the optimal VR experience, we’re adjusting the Fiji Gemini launch schedule to better align with the market.

Read more: http://wccftech.com/amd-fury-x2-delayed/#ixzz3v6LO4Lvx
 

maddie

Diamond Member
Jul 18, 2010
5,086
5,413
136
If the information that 14nm transistor cost is equivalent to 28nm is correct, and you get double the flops/watt, then a max-size-die 14nm GPU will have roughly the same performance as this FuryX2.

With the possibility of a max-die GPU being released early being remote, this card will run with the big boys of 14nm for some time to come.

Power consumption, of course, will be another matter.
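The arithmetic behind that, as a rough sketch (the Fury X figures are the public ones; the 2x flops/watt and the ~300 W board limit are the assumptions stated above):

```python
# Fury X: 4096 shaders x 1050 MHz x 2 FLOPs/clock at ~275 W board power.
fury_x_tflops = 4096 * 1050e6 * 2 / 1e12               # ~8.6 TFLOPS FP32
gemini_tflops = 2 * fury_x_tflops                       # dual-Fiji Gemini, ~17.2 TFLOPS peak

fiji_tflops_per_watt = fury_x_tflops / 275              # ~0.031 TFLOPS/W on 28nm
max_die_14nm_tflops = 2 * fiji_tflops_per_watt * 300    # double the flops/watt, ~300 W budget

print(gemini_tflops, max_die_14nm_tflops)               # ~17.2 vs ~18.8 -> the same ballpark
```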
 

MrTeal

Diamond Member
Dec 7, 2003
3,901
2,631
136
If the information that 14nm transistor cost is equivalent to 28nm is correct, and you get double the flops/watt, then a max-size-die 14nm GPU will have roughly the same performance as this FuryX2.

With the possibility of a max-die GPU being released early being remote, this card will run with the big boys of 14nm for some time to come.

Power consumption, of course, will be another matter.

The big caveat to that is you need a game that works well with CF in order to do so, which can unfortunately mean waiting for issues to be resolved before getting the full benefit from your card. That's why this would seem so perfect to market towards VR; you don't need to worry about deferred rendering messing up AFR.
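The frame-budget arithmetic shows why (a sketch with made-up round numbers, not measured data):

```python
refresh_hz = 90
frame_budget_ms = 1000 / refresh_hz      # ~11.1 ms of render time per displayed frame

single_gpu_frame_ms = 16.0               # assume one GPU needs ~16 ms for the full stereo frame
# AFR: the two GPUs alternate whole frames, so each frame still takes ~16 ms to produce,
# and any inter-frame data reuse (common in deferred renderers) breaks the scheme.
afr_frame_ms = single_gpu_frame_ms
# Per-eye split: both GPUs work on the *same* frame, one eye each, so the frame is ready
# in roughly half the time and there is no inter-frame dependency to manage.
split_eye_frame_ms = single_gpu_frame_ms / 2

print(frame_budget_ms, afr_frame_ms, split_eye_frame_ms)   # 11.1 / 16.0 / 8.0
```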

Still, it's odd they aren't launching now. It would give them the crown of the most powerful single card by a good margin. Maybe they're worried that they won't be able to sell at a $1000 price point. Alternatively, maybe Gemini is a little more interesting than two Fiji chips bridged with a PLX chip. It might be a long shot, but a guy can always hope.
 

maddie

Diamond Member
Jul 18, 2010
5,086
5,413
136
The big caveat to that is you need a game that works well with CF in order to do so, which can unfortunately mean waiting for issues to be resolved before getting the full benefit from your card. That's why this would seem so perfect to market towards VR; you don't need to worry about deferred rendering messing up AFR.

Still, it's odd they aren't launching now. It would give them the crown of the most powerful single card by a good margin. Maybe they're worried that they won't be able to sell at a $1000 price point. Alternatively, maybe Gemini is a little more interesting than two Fiji chips bridged with a PLX chip. It might be a long shot, but a guy can always hope.
Ryan Smith at Anandtech. Interesting read.

http://www.anandtech.com/show/9874/amd-dual-fiji-gemini-video-card-delayed-to-2016


"Out of the 7 games I investigated, 3 of them outright did not (and will not) support multi-GPU. Furthermore another 2 of them had 60fps framerate caps, leading to physics simulation issues when the cap was lifted. As a result there were only two major fall of 2015 games that were really multi-GPU ready: Call of Duty: Black Ops III and Star Wars Battlefront."

"Regardless of what RTG’s original plans were though, I believe positioning Gemini as a video card for VR headsets is the best move RTG can make at this time. With the aforementioned AFR issues handicapping multi-GPU performance in traditional games, releasing Gemini now likely would have been a mistake for RTG from both a reviews perspective and a sales perspective."


Market this card as the best-in-the-world VR solution.
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
Good analysis by Ryan. The card will be less of a dud for VR, considering the state of Crossfire support in 2015 games. Still, the risk of waiting until next year is competing with Pascal.
 

moonbogg

Lifer
Jan 8, 2011
10,731
3,440
136
For VR? What? Why? It might be "the best card for VR" when it comes out, but it will only be the best for 3 weeks and then new stuff is out. This is a bad card. 4GB still hurts like hell and is terrible for longevity. No getting around that.
 

MrTeal

Diamond Member
Dec 7, 2003
3,901
2,631
136
For VR? What? Why? It might be "the best card for VR" when it comes out, but it will only be the best for 3 weeks and then new stuff is out. This is a bad card. 4GB still hurts like hell and is terrible for longevity. No getting around that.

4GB would probably be fine for VR for the near future, though you never know what will come down the pipe. The Oculus Rift and HTC Vive are 1080x1200 per eye, which shouldn't be terribly stressful.
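The raw pixel throughput backs that up (panel resolution only; real VR render targets are typically supersampled above the panel, which this ignores):

```python
vr_pixels_per_sec  = 2 * 1080 * 1200 * 90    # both eyes at 90 Hz  -> ~233 Mpix/s
qhd_pixels_per_sec = 2560 * 1440 * 60        # 1440p at 60 Hz      -> ~221 Mpix/s
uhd_pixels_per_sec = 3840 * 2160 * 60        # 4K at 60 Hz         -> ~498 Mpix/s

print(vr_pixels_per_sec / 1e6, qhd_pixels_per_sec / 1e6, uhd_pixels_per_sec / 1e6)
```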

The bigger problem will be Pascal, if it does launch in the same time frame. Gemini will still likely be decently faster than GP104 for VR since there are no CF scaling issues, but GP104 will give better performance out of the box for regular gaming if CF doesn't work well or at all, and it will use a lot less power. Much like the 980 vs the 295X2, AMD might have to severely discount Gemini to move units even if it is faster in CF-enabled games.
 

maddie

Diamond Member
Jul 18, 2010
5,086
5,413
136
For VR? What? Why? It might be "the best card for VR" when it comes out, but it will only be the best for 3 weeks and then new stuff is out. This is a bad card. 4GB still hurts like hell and is terrible for longevity. No getting around that.
I suggest you read this and digest it fully.

If the information that 14nm transistor cost is equivalent to 28nm is correct, and you get double the flops/watt, then a max-size-die 14nm GPU will have roughly the same performance as this FuryX2.

With the possibility of a max-die GPU being released early being remote, this card will run with the big boys of 14nm for some time to come.

Power consumption, of course, will be another matter.

So NO. The new 14nm cards will NOT render this irrelevant. The same thing happened with the 295X2. Are you so incapable of impartial thought?
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I suggest you read this and digest it fully.

If the information that 14nm transistor cost is equivalent to 28nm is correct, and you get double the flops/watt, then a max-size-die 14nm GPU will have roughly the same performance as this FuryX2.

With the possibility of a max-die GPU being released early being remote, this card will run with the big boys of 14nm for some time to come.

Power consumption, of course, will be another matter.

So NO. The new 14nm cards will NOT render this irrelevant. The same thing happened with the 295X2. Are you so incapable of impartial thought?

But like someone else said, what about a dual-chip 14nm card? Since this is a dual-chip card too.

It'll be interesting to see what comes out of the woodwork.

EDIT: Say a 14nm medium single chip is roughly 25-30% faster than a large 28nm chip (that's just using the GTX 680 vs GTX 580 node shift). Assume a similar gain for AMD. If they opt not to use HBM, they could possibly put out a dual medium AI card with 8GB of GDDR5X per chip. With the power reduction, it might even offer a little more OC headroom.

Now, this is a lot of 'what ifs', but I wouldn't say it's unreasonable. That's why I argued they should get this card out and gather as much revenue/sales as they can.
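Putting rough numbers on that 'what if' (everything here is the assumption from the EDIT above, just normalized; CrossFire scaling hits both dual cards equally so it cancels out):

```python
fiji = 1.0                        # normalize one Fury X (big 28nm chip) to 1.0
medium_14nm = 1.275 * fiji        # midpoint of the assumed 25-30% uplift per chip
gemini = 2 * fiji                 # dual-Fiji Gemini, ideal scaling
dual_medium_ai = 2 * medium_14nm  # the hypothetical dual medium Arctic Islands card

print(dual_medium_ai / gemini)    # ~1.28x -> it would beat Gemini, if every what-if holds
```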
 
Last edited:

maddie

Diamond Member
Jul 18, 2010
5,086
5,413
136
But like someone else said, what about a dual-chip 14nm card? Since this is a dual-chip card too.

It'll be interesting to see what comes out of the woodwork.

EDIT: Say a 14nm medium single chip is roughly 25-30% faster than a large 28nm chip (that's just using the GTX 680 vs GTX 580 node shift). Assume a similar gain for AMD. If they opt not to use HBM, they could possibly put out a dual medium AI card with 8GB of GDDR5X per chip. With the power reduction, it might even offer a little more OC headroom.

Now, this is a lot of 'what ifs', but I wouldn't say it's unreasonable. That's why I argued they should get this card out and gather as much revenue/sales as they can.
Yes, an X2 card built from new-gen medium-sized GPUs will beat this, but as you say, that's a lot of what-ifs. It probably won't happen initially, or even for a few months after, which translates to not in 2016.

By the way, how certain is it that Pascal is coming in the 1st quarter, or even the 2nd, as some rabid pro-Nvidia posters are claiming? New info seems to be scarce. I would expect leaks to be happening by now.

With regards to releasing now rather than alongside the VR headsets, I only have my speculation. The card would face a lot of criticism from the usual suspects, as it is Fury X Crossfire without the VR angle. Using VR allows AMD to truly differentiate this card, assuming the whole AMD-superiority-in-latency story this gen is true, as stated by David Kanter and many others.

I maintain that this card will be at the top or very close for all of 2016 because of the above.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I suggest you read this and digest it fully.

If the information that 14nm transistor cost is equivalent to 28nm is correct, and you get double the flops/watt, then a max-size-die 14nm GPU will have roughly the same performance as this FuryX2.

With the possibility of a max-die GPU being released early being remote, this card will run with the big boys of 14nm for some time to come.

Power consumption, of course, will be another matter.

So NO. The new 14nm cards will NOT render this irrelevant. The same thing happened with the 295X2. Are you so incapable of impartial thought?

Your post is worthless because it's an "all things being equal" argument, which is not at all the case when moving to a new node AND a new architecture.

1. The 295X2 is on the same node as Maxwell, and a Titan X with max OCs can match a 295X2 with significantly fewer transistors and less power consumption. You simply cannot compare different architectures to each other (like you are trying to do) and expect 1:1 performance on a transistor level. It's an even more invalid comparison when you compare unreleased architectures with existing ones from two completely different companies.
2. New architectures often bring improved perf/transistor. Look at Maxwell's perf/transistor vs. Kepler: same node, massive perf/transistor increase (rough numbers sketched below). Look at Kepler vs. Fermi: same thing, perf/transistor scaled higher than the transistor % increase. You don't think Nvidia and AMD realize that they need to focus more on perf/transistor in the face of rising manufacturing costs?
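Rough public numbers for that Maxwell-vs-Kepler point (approximate transistor counts; the ~1.05x performance ratio is a ballpark launch-review average, not an exact figure):

```python
gk110_transistors = 7.1e9    # GTX 780 Ti (Kepler)
gm204_transistors = 5.2e9    # GTX 980 (Maxwell)
perf_ratio = 1.05            # GTX 980 vs 780 Ti, roughly, at launch

perf_per_transistor_gain = perf_ratio * gk110_transistors / gm204_transistors
print(perf_per_transistor_gain)  # ~1.4x more performance per transistor on the same 28nm node
```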
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
For VR? What? Why? It might be "the best card for VR" when it comes out, but it will only be the best for 3 weeks and then new stuff is out. This is a bad card. 4GB still hurts like hell and is terrible for longevity. No getting around that.


 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
That's a BS slide and you know it.

And how can people defend a dual 28nm product vs. dual 14nm? It doesn't fit in anywhere.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
But like someone else said, what about a dual-chip 14nm card? Since this is a dual-chip card too.

It'll be interesting to see what comes out of the woodwork.

EDIT: Say a 14nm medium single chip is roughly 25-30% faster than a large 28nm chip (that's just using the GTX 680 vs GTX 580 node shift). Assume a similar gain for AMD. If they opt not to use HBM, they could possibly put out a dual medium AI card with 8GB of GDDR5X per chip. With the power reduction, it might even offer a little more OC headroom.

Now, this is a lot of 'what ifs', but I wouldn't say it's unreasonable. That's why I argued they should get this card out and gather as much revenue/sales as they can.

Oh god, don't tell me that we're back to this dual-mid-GPU theory again! It was dumb when people expected Fiji to be a dual Tonga, and it's dumb now.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Really??? Care to explain?

Because it requires a setup that isn't practically possible for gaming in any way. It would help if you didn't take every AMD PR slide as gospel. Now tell me, what real-world case of this type has AMD demoed? They said the same with Mantle, but were unwilling to show any. And it's only getting worse as they try to defend the 4GB flagship failure.

It's like claiming I've got 4+X GB because I use my IGP for post-processing.

But gaming-wise I've got 4GB like before, and the same limitations as a single GPU with 4GB.

What 14nm dual product??? There is no such product mentioned by AMD or NVIDIA to date.

NVidia isn't making any 14nm products.

If you read the posts, some are defending a dual 28nm product in a 14nm AMD GPU period.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Because it requires a setup that isn't practically possible for gaming in any way. It would help if you didn't take every AMD PR slide as gospel. Now tell me, what real-world case of this type has AMD demoed? They said the same with Mantle, but were unwilling to show any. And it's only getting worse as they try to defend the 4GB flagship failure.

It's like claiming I've got 4+X GB because I use my IGP for post-processing.

But gaming-wise I've got 4GB like before, and the same limitations as a single GPU with 4GB.

I don't see any technical reasons here proving that this slide is BS.


NVidia isn't making any 14nm products.

If you read the posts, some are defending a dual 28nm product in a 14nm AMD GPU period.

That is not what you said. You said DUAL 14nm, not in a 14nm GPU period.
 

moonbogg

Lifer
Jan 8, 2011
10,731
3,440
136
I suggest you read this and digest it fully.

If the information that 14nm transistor cost is equivalent to 28nm is correct, and you get double the flops/watt, then a max-size-die 14nm GPU will have roughly the same performance as this FuryX2.

With the possibility of a max-die GPU being released early being remote, this card will run with the big boys of 14nm for some time to come.

Power consumption, of course, will be another matter.

So NO. The new 14nm cards will NOT render this irrelevant. The same thing happened with the 295X2. Are you so incapable of impartial thought?

And you seem to have missed the massive 4GB issue. Even if the card came with more VRAM, which it won't, it's still a bad card. It's better to buy two of the next-gen cards: similar cost and better performance with more RAM. Any enthusiast will have the rig to house and support two GPUs. This card is a bust.


Why is it that the viability of AMD products so often hinges upon the success and implementation of questionable future software technologies? Rather than spend a fortune on a crappy dual-GPU card and simply HOPE that DX12 comes to save the day, I'd rather just buy two next-gen cards and do nothing but WIN for the next 2-3 years.
Nothing wrong with dual-GPU cards, but this one is coming too late and it's also coming with an already gimped VRAM capacity. It's a bad card. Bad card is bad.
 
Last edited: