nVidia GT200 Series Thread


ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Originally posted by: keysplayr2003
Originally posted by: Extelleron
Originally posted by: BFG10K
People have been saying there's a limit for about 5 years now and it's never the case; they always find new ways to advance technology.

Besides, if single GPUs hit a wall then so will multi-GPUs since the former are the building blocks of the latter.

Take a look at G80 -> GT200.

GT200 is around ~2x G80 in terms of specs and performance.

G80 = 484mm^2 @ 90nm
GT200 = 576mm^2 @ 65nm

So for 2x the performance, you are talking about 100mm^2 larger chip at the next full process.

So no, there isn't a way to continue this. If G80 -> GT200 scaling continued, we would see "GT300" being 2x GT200 and 700mm^2 on a 45nm process. That isn't going to happen, I can promise you that.

AMD, meanwhile, has a small ~260mm^2 chip on 55nm. The other problem nVidia seems to be facing is that their architecture appears to take up more room. G80 -> GT200 gives 87.5% more SPs / ~2x TMUs (not exactly sure of the TF/TA arrangement in GT200 so it's hard to tell) / 1.33x ROPs / 1.50x bus size. And GT200 is well above 2x G80 in terms of transistors/die size.

RV770, meanwhile, is 2.5x SPs / 2-2.5x TMUs / optimized RBEs, and it is only 30-40% larger than RV670. For the most part, the jump from RV670 -> RV770 is larger than G80 -> GT200, yet we see a 2x+ jump in die size for nVidia while we see a 30-40% jump for AMD. So nVidia probably needs multi-GPU more than AMD, actually. Their architecture takes up a lot of space.

It is indeed possible that we will see a bit more single-GPU, since TSMC is ramping up their move to advanced process nodes. We will see 32nm from TSMC in early 2010 and 40nm sometime in 2009. But after that, the single-GPU dies as far as I am concerned, if not before that. TSMC might have 32nm in 2010, but then it will be a 2 year wait until 2012 for 22nm.

My view of the future GPU is one where a number of GPUs (likely 2-4) are connected via hardware just like we see Intel's MCM quad-cores. The future GPU will be multi-GPU but I don't think it will always rely on software scaling.

Another thing you might not be considering, Extelleron, is that all of those transistors just might not be all for graphics purposes only. You are forgetting about compute ability and what new transistors may be dedicated to CUDA. I have no specific information about this, yet, but I am on the edge of my seat to find out. Nvidia has been cooking more than just graphics stew since G80.

P.S. Sorry, I accidentally hit edit instead of quote. Nothing was altered in your post. I hate when I do that.
CUDA doesn't have any notable dedicated transistors AFAIK. It's all set up by the drivers and then run on the existing shaders.
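
As a rough cross-check of the die-size arithmetic in the quoted post above, here is a minimal back-of-the-envelope sketch in Python. The area scaling is an idealized assumption (area shrinks with the square of the linear node ratio); real shrinks fall short of that, which is exactly the post's point:

```python
# Back-of-the-envelope check of the quoted die-size scaling argument.
# Idealized assumption: die area scales with (new_node / old_node)^2.

def scaled_area(area_mm2, old_node_nm, new_node_nm):
    """Ideal die area after a process shrink with perfect linear scaling."""
    return area_mm2 * (new_node_nm / old_node_nm) ** 2

G80_AREA, GT200_AREA = 484.0, 576.0  # mm^2, figures quoted in the thread

# G80 -> GT200: roughly 2x the logic moved from 90nm to 65nm.
ideal_gt200 = 2 * scaled_area(G80_AREA, 90, 65)
print(f"Ideal 2x G80 at 65nm: ~{ideal_gt200:.0f} mm^2 (actual GT200: {GT200_AREA:.0f} mm^2)")

# The hypothetical "GT300" scenario from the post: 2x GT200 on 45nm.
ideal_gt300 = 2 * scaled_area(GT200_AREA, 65, 45)
print(f"Ideal 2x GT200 at 45nm: ~{ideal_gt300:.0f} mm^2")
# About 505 mm^2 ideal vs 576 mm^2 actual for GT200, and about 552 mm^2 ideal
# for the doubled 45nm chip. Real scaling falls short of ideal, which is why
# the post pegs such a part at ~700 mm^2 and calls the approach unsustainable.
```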
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I don't think of multi-GPU as either a placeholder or a marketing jam; it is here to stay, but it is not becoming mainstream. The high end will always crave more power and multi-GPU will be there for it. And faster single GPUs will do it for most.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Long term everyone needs to be careful not to extrapolate die size increases based on transistor budgets for parts that are close to 'solving' certain rendering elements. To give one example, AF is a rendering element that is not going to contribute to large increases in die size anymore- despite it imposing extremely large performance hits only a relatively short time ago. We are getting very close to this level with pixel and texture fill- and those are HUGE factors in transistor budgets as of now. When we can push 30" displays w/4x AA the pixel/texel fill issue will be largely solved. Take a look at the benches on this page- going from 1280x1024 w/o AA to 2560x1600 w/4x AA is in the 50% range for performance hit, and that is roughly a 3x increase in pixel/texel draw demands and a ~12.5x increase in sampling demands on the AA end (for a relatively meager 50% performance hit). Obviously we still have a ways to go there, but I am seeing the generation after the G200 as having largely solved those issues.
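
The fill-rate comparison above works out roughly as follows. A minimal sketch; the four-samples-per-pixel figure for 4x AA is an assumption about the benchmark settings, not something stated in the thread:

```python
# Relative pixel and AA-sample workload: 1280x1024 no AA vs 2560x1600 4x AA.

base_pixels = 1280 * 1024   # 1,310,720 pixels, one sample each
high_pixels = 2560 * 1600   # 4,096,000 pixels
msaa_samples = 4            # assumed samples per pixel for 4x AA

pixel_ratio = high_pixels / base_pixels                    # pixel/texel demand
sample_ratio = (high_pixels * msaa_samples) / base_pixels  # AA sampling demand

print(f"Pixel/texel demand: {pixel_ratio:.1f}x")   # ~3.1x
print(f"AA sample demand:   {sample_ratio:.1f}x")  # ~12.5x
# Roughly 3x the pixels and 12.5x the samples for about a 50% frame-rate hit,
# which is the post's argument that raw fill is close to a 'solved' problem.
```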

When we get to the point that AF, pixel/texel fill and AA are all relatively 'solved' issues, that only leaves us with shaders to address in a meaningful way. Due to this, if we were to take a look at the hypothetical GT300 vs GT400 we may very well see a considerably smaller die size when advancing the build process by a generation, while at the same time seeing a 4-6x improvement in performance. Right now, in order to get a large performance improvement for GPUs we need to throw die space at a multitude of areas; this isn't going to continue to be the case for much longer (even in the example I posted benches for, despite being a relatively older tech game, it does have its share of shaders and what performance hit we are seeing may be at least partly due to this).

If at some point we do end up going with multiple chips to render graphics again, a far more viable approach is separating the rasterizer from the programmable segment instead of trying to load balance across multiple identical GPUs. This eliminates almost all of the issues with SLI style setups, but I really don't see that being an issue anywhere on the horizon we can view at this point.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I always thought multiple SPECIALIST dies might be in order...

A core die with communications and general functions. A die made entirely of shaders (that comes in different sizes depending on what model you buy) that communicates directly with the main die, and so on... (or maybe multiple shader dies connecting to different premade locations on the core die).
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: bryanW1995
Originally posted by: ArchAngel777
Originally posted by: Nemesis 1
But I will also receive a 4850/4870/4870x2. The 4850 I will give to my daughter. The 4870 goes in the wife's gamer. I will likely have to buy 1 more 4870 to xfire. My gamer will get the 4870x2. So I will in fact have all 4 cards. All using great hardware, so I will know what's what just like the review sites. I actually did it today, I am getting a K10. LOL. YA I had to have it. It was a forced-down-my-throat deal. But it's the NV 280 that I wanted. Just to see.

I call shens...

dude, come on, this makes perfect sense! ATI marketing always got their asses handed to them by nvidia, and amd continually gets clobbered by intel. this has created the perfect marketing storm, resulting in nemesis getting free ati hardware to "test". ATI is too stupid to realize that they need people like rollo on their side. Whom would you rather have spouting propaganda for you, rollo or nemesis? dan quayle or nemesis? stalin or...never mind.


LOL! Come on now, the Stalin one hurt. I get nothing from ATI. A partner gives me the cards. I guess I could have explained myself better. I am a water cooling nut case. I designed and sold a block to another company. We stopped doing gaming PCs during this transition period. I got the hardware so I can fit water blocks on them. The free stuff will stop when we release our own watercooling system in Dec. Our system can be bought only with our complete PC system, much like Apple. Are my blocks that good? Yes they are. Is my rad that good? Yes it is. Voodoo just released a new case. It shook me up because they did a lot of what we're doing. You can see the new Voodoo case (HP) at the XS forums news section; it's a sweet setup. But ours is also very nice. Nehalem is worth the wait tho. Besides, the rads are giving me fits on price until we can do more volume. The rads were made to my specs by another company. We did the prototype, then they manufacture it for us. I hope this explains everything. It's more info than I was prepared to give, but I couldn't let you guys think other things. Things change so fast that something ya spent months on is sometimes a complete wash by the time ya complete it.
Sorry for the off-topic, but I had to straighten this out. Too many PMs asking for info I can't give.

 

angry hampster

Diamond Member
Dec 15, 2007
4,232
0
0
www.lexaphoto.com
Originally posted by: Ocguy31
Originally posted by: Quiksilver
Oops, a Euro company accidentally listed the GTX 280 (no, don't bother looking, it's gone) for 600 Euros (almost 1 grand USD)...

I really hope that this is not a sign of what is to come here in the US, and that rather that price is what's left after all the silly taxes and whatnot are added in...

No, like many people have stated, they tend to charge the same numerical value, and disregard the exchange rate.

$600 here, 600 Euro there.

The card listed is 1024MB. I didn't think the GT200 series was going to be on a 512-bit bus?
 

Quiksilver

Diamond Member
Jul 3, 2005
4,725
0
71
Originally posted by: angry hampster
Originally posted by: Ocguy31
Originally posted by: Quiksilver
Oops, a Euro company accidentally listed the GTX 280 (no, don't bother looking, it's gone) for 600 Euros (almost 1 grand USD)...

I really hope that this is not a sign of what is to come here in the US, and that rather that price is what's left after all the silly taxes and whatnot are added in...

No, like many people have stated, they tend to charge the same numerical value, and disregard the exchange rate.

$600 here, 600 Euro there.

The card listed is 1024MB. I didn't think the GT200 series was going to be on a 512-bit bus?

you need to re-read this entire thread.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Originally posted by: nitromullet
Originally posted by: Janooo
NV under pressure.

Sweet... Nothing like price competition between products that aren't even out yet.

NV is surprised by the 770. You can tell by the reaction of the NV people around the web.
If 40% above the 9800GTX stands, then it could be getting into GTX 260 performance territory.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Originally posted by: Janooo
NV under pressure.

Another article with nothing in it, just speculation from sources such as "we've been hearing".

I'd really like for these cards to come out already!!!!
 

geokilla

Platinum Member
Oct 14, 2006
2,012
3
81
I did a quick search in the thread and didn't find any info regarding the following, so here it is:

GeForce GTX specifications have been leaking all over the web in the past 24 hours, so why not share them here.

1.4 billion transistors and 240 cores

GeForce GTX 280:

Codenamed GT200, the GTX 280 GPU is built on a 65nm process and has 1.4 billion transistors, with the core clocked at 602MHz. The processor clock, for what we used to call shaders, runs at 1296MHz, and Nvidia's new chip has an impressive 240 cores.

The GTX 280 card uses GDDR3 memory on a 512-bit memory interface clocked at 1107MHz (2214MHz effective). The card has 141.7GB/s of bandwidth and comes with a total of 1GB of memory.

The GT200 chip has 32 ROPs, 80 texture filtering units and a 48.2 GigaTexels/sec texture filtering rate. The card supports HDCP and HDMI via a DVI-to-HDMI adapter, and comes with two dual-link DVI-I ports and a single HDTV out.

The RAMDAC is set to 400MHz, and the card itself is dual-slot with a PCIe 2.0 interface and one 8-pin and one 6-pin power connector. So, now you know.



GeForce GTX 260:

Nvidia's second GT200-based card is the GeForce GTX 260. The GeForce GTX 260 is likewise based on the 65nm GT200 core with its 1.4 billion transistors, but this time clocked at 576MHz. Some of those transistors will sit disabled, as the GTX 260 has part of the chip switched off.

The shaders are clocked at 1242MHz and the card has a total of 192 shaders (what used to be called shader units are now called processor cores).

The card has an odd amount of memory, 896MB of GDDR3 clocked at 999MHz (1998MHz effective), which is enough for 111.9GB/s of bandwidth. The slower of the two chips has 28 ROPs, 64 texture filtering units and a 36.9 GigaTexels/second texture filtering rate.

If you look at these specs closely, you will see that the GTX 260 is the same chip as the GTX 280 with units disabled: two of its ten shader clusters are off (240 cores down to 192) and one of its eight ROP/memory partitions is off (32 ROPs down to 28, with a 448-bit bus and 896MB instead of 512-bit and 1GB).

The card has HDCP and HDMI via DVI, but this time two 6-pin power connectors, and it launches next Tuesday.
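
The derived figures in the leaked specs can be reproduced from the quoted clocks and bus widths. A quick sanity-check sketch; the GTX 260's 448-bit width follows from its 896MB/28-ROP configuration:

```python
# Recomputing the bandwidth and texel fill-rate figures from the leaked specs.

def bandwidth_gbs(mem_clock_mhz, bus_bits):
    """GDDR3 bandwidth: memory clock x 2 (DDR) x bus width in bytes."""
    return mem_clock_mhz * 2 * (bus_bits / 8) / 1000

def texel_rate_gtexels(core_clock_mhz, tmus):
    """Texture filtering rate: core clock x number of texture filtering units."""
    return core_clock_mhz * tmus / 1000

# GTX 280: 1107MHz GDDR3 on a 512-bit bus, 602MHz core, 80 TMUs
print(f"GTX 280: {bandwidth_gbs(1107, 512):.1f} GB/s, "
      f"{texel_rate_gtexels(602, 80):.1f} GTexels/s")   # 141.7 GB/s, 48.2 GTexels/s

# GTX 260: 999MHz GDDR3 on a 448-bit bus, 576MHz core, 64 TMUs
print(f"GTX 260: {bandwidth_gbs(999, 448):.1f} GB/s, "
      f"{texel_rate_gtexels(576, 64):.1f} GTexels/s")    # 111.9 GB/s, 36.9 GTexels/s
```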



When it comes to power the leaks are saying this:

The GeForce GTX 280 will need a lot of power. Its maximum board power is set to an ultra-high 236W, which is about the same number that we reported months ago.

One 8-pin power connector can provide up to 150W of power, while a 6-pin is limited to 75W, as is the PCIe 2.0 slot. If you use one 8-pin and one 6-pin together with the PCIe 2.0 slot, you can end up with up to 300W.

The GTX 260 is happy with two 6-pin connectors (2x75W) plus the PCIe 2.0 slot, which provides an additional 75W. The GTX 260 can get up to 225W, and the card actually needs much less than that.

The chip has a thermal threshold at 105 degrees Celsius, and once the GPU reaches this temperature the clock speed will automatically drop down.

The GTX 260 is a bit better, as its maximum board power is 182 watts, and 2x6-pin plus the power from the PCIe 2.0 interface tends to be enough. The GPU threshold is again 105 degrees Celsius, but as we said before it is the same chip, just with some units disabled.
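
The connector budget described above adds up as follows; a small sketch using only the wattages quoted in the leak:

```python
# Power delivery budget vs quoted maximum board power.

PCIE_SLOT_W = 75    # PCIe 2.0 x16 slot
SIX_PIN_W   = 75    # 6-pin PEG connector
EIGHT_PIN_W = 150   # 8-pin PEG connector

gtx280_budget = PCIE_SLOT_W + SIX_PIN_W + EIGHT_PIN_W  # 300W deliverable
gtx260_budget = PCIE_SLOT_W + 2 * SIX_PIN_W            # 225W deliverable

gtx280_tbp, gtx260_tbp = 236, 182  # maximum board power figures from the leak

print(f"GTX 280: {gtx280_tbp}W of {gtx280_budget}W "
      f"({gtx280_budget - gtx280_tbp}W headroom)")
print(f"GTX 260: {gtx260_tbp}W of {gtx260_budget}W "
      f"({gtx260_budget - gtx260_tbp}W headroom)")
```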



Legit Reviews will as always have a full review on launch day.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: ViRGE
Originally posted by: keysplayr2003
Originally posted by: Extelleron
Originally posted by: BFG10K
People have been saying there's a limit for about 5 years now and it's never the case; they always find new ways to advance technology.

Besides, if single GPUs hit a wall then so will multi-GPUs since the former are the building blocks of the latter.

Take a look at G80 -> GT200.

GT200 is around ~2x G80 in terms of specs and performance.

G80 = 484mm^2 @ 90nm
GT200 = 576mm^2 @ 65nm

So for 2x the performance, you are talking about 100mm^2 larger chip at the next full process.

So no, there isn't a way to continue this. If G80 -> GT200 scaling continued, we would see "GT300" being 2x GT200 and 700mm^2 on a 45nm process. That isn't going to happen, I can promise you that.

AMD, meanwhile, has a small ~260mm^2 chip on 55nm. The other problem nVidia seems to be facing is that their architecture appears to take up more room. G80 -> GT200 gives 87.5% more SPs / ~2x TMUs (not exactly sure of the TF/TA arrangement in GT200 so it's hard to tell) / 1.33x ROPs / 1.50x bus size. And GT200 is well above 2x G80 in terms of transistors/die size.

RV770, meanwhile, is 2.5x SPs / 2-2.5x TMUs / optimized RBEs, and it is only 30-40% larger than RV670. For the most part, the jump from RV670 -> RV770 is larger than G80 -> GT200, yet we see a 2x+ jump in die size for nVidia while we see a 30-40% jump for AMD. So nVidia probably needs multi-GPU more than AMD, actually. Their architecture takes up a lot of space.

It is indeed possible that we will see a bit more single-GPU, since TSMC is ramping up their move to advanced process nodes. We will see 32nm from TSMC in early 2010 and 40nm sometime in 2009. But after that, the single-GPU dies as far as I am concerned, if not before that. TSMC might have 32nm in 2010, but then it will be a 2 year wait until 2012 for 22nm.

My view of the future GPU is one where a number of GPUs (likely 2-4) are connected via hardware just like we see Intel's MCM quad-cores. The future GPU will be multi-GPU but I don't think it will always rely on software scaling.

Another thing you might not be considering, Extelleron, is that all of those transistors just might not be all for graphics purposes only. You are forgetting about compute ability and what new transistors may be dedicated to CUDA. I have no specific information about this, yet, but I am on the edge of my seat to find out. Nvidia has been cooking more than just graphics stew since G80.

P.S. Sorry, I accidentally hit edit instead of quote. Nothing was altered in your post. I hate when I do that.
CUDA doesn't have any notable dedicated transistors AFAIK. It's all set up by the drivers and then run on the existing shaders.

Who told you that? I'm not saying you're wrong, but it would be kewl to know who provided you with your GT200 info.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: Ocguy31
Originally posted by: Janooo
NV under pressure.

Another article with nothing in it, just speculation from sources such as "we've been hearing".

I'd really like for these cards to come out already!!!!

It's always best to take ALL pre-launch information with a big grain of salt; for whatever reason, people seem to want to steer expectations (which doesn't make sense to me, because most people buy based on independent reviews).

For example, nordichardware tells us today "NVIDIA is lowering prices based on ATi news".

Might be true (hopefully for gamers it is) but before the R600/HD2900XT launch nordichardware had this to say:
R600 faster than an overclocked GeForce 8800GTX
OCWorkbench has now published figures that point to the R600 performing better than an overclocked GeForce 8800GTX, and we should not forget the early stage of the drivers, which means that there's more to come.

And it was, but only at the 3Dmark noted and without AA.

Then of course when the RV670 was in development they said:

RV670 is a die-shrink of the R600 with some redistributed resources; a better proportion between shader processors and TMUs and ROPs. Basically this means less shader processors in favor of more TMUs and ROPs, which should seriously improve the gaming performance of the Radeon HD 29XX

And of course the RV670 had no more TMUs or ROPs than the R600, and didn't offer "seriously improved gaming" for the most part, but did solve heat, yield, and cost issues in a big way.

So, while I hope the 4870 = GT260, personally I think we need to wait for some tests by independent review sites of all new gen hardware.

Heck, I remember reading the 7900GTX would have 32 ROPs, and look how that turned out.

 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: nRollo
Originally posted by: Ocguy31
Originally posted by: Janooo
NV under pressure.

Another article with nothing in it, just speculation from sources such as "we've been hearing".

I'd really like for these cards to come out already!!!!

It's always best to take ALL pre-launch information with a big grain of salt; for whatever reason, people seem to want to steer expectations (which doesn't make sense to me, because most people buy based on independent reviews).

For example, nordichardware tells us today "NVIDIA is lowering prices based on ATi news".

Might be true (hopefully for gamers it is) but before the R600/HD2900XT launch nordichardware had this to say:
R600 faster than an overclocked GeForce 8800GTX
OCWorkbench has now published figures that point to the R600 performing better than an overclocked GeForce 8800GTX, and we should not forget the early stage of the drivers, which means that there's more to come.

And it was, but only at the 3Dmark noted and without AA.

Then of course when the RV670 was in development they said:

RV670 is a die-shrink of the R600 with some redistributed resources; a better proportion between shader processors and TMUs and ROPs. Basically this means less shader processors in favor of more TMUs and ROPs, which should seriously improve the gaming performance of the Radeon HD 29XX

And of course the RV670 had no more TMUs or ROPs than the R600, and didn't offer "seriously improved gaming" for the most part, but did solve heat, yield, and cost issues in a big way.

So, while I hope the 4870 = GT260, personally I think we need to wait for some tests by independent review sites of all new gen hardware.

Heck, I remember reading the 7900GTX would have 32 ROPs, and look how that turned out.

What you are saying is 100% true for the most part; pre-release expectations tend to be a bit overblown compared to what ends up happening (for most parts, at least). The clearest example of this, probably ever, is R600 of course. Everyone thought it would destroy G80. But who can blame them? X1900 beat GeForce 7900, and R600 was supposed to be a 240W monster with 1GB of GDDR4 and a 512-bit bus... specs that blew away G80. But we all know how that turned out. The other example of this is actually G80 IMO; nobody expected it to be so good. Most people didn't even expect it to have unified shaders.

But this is not quite the same as with R600.... we know the specs of RV770 & GT200 (RV770 is still a bit sketchy in # of TMUs), and for the most part they are rehashes of G80 & R600. So it is easier to estimate performance based on specs.

And remember that NH article you are looking at came 3 months before RV670 was released..... back in March we were talking about RV770 being 480SP and the whole chip being clocked at 1050MHz, with 1GB of GDDR5. Back in March, we didn't know much at all about GT200; it wasn't even known if it was single-GPU or dual-GPU.

Two weeks before launch though, we can estimate things more accurately. The only thing we don't know for sure is performance. We will find that out pretty soon. I just wish ATI was releasing their cards the same day as nVidia, because I'm deciding between a 4870 & GTX 260. My step-up expires on June 23rd I believe, so I won't be able to wait until the 25th to find out how RV770 performs.


 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: keysplayr2003
Originally posted by: ViRGE
CUDA doesn't have any notable dedicated transistors AFAIK. It's all set up by the drivers and then run on the existing shaders.

Who told you that? I'm not saying you're wrong, but it would be kewl to know who provided you with your GT200 info.

Isn't that the point of GPGPU? If they add hardware that is "CUDA only", it's not really "General Purpose Computing On GPUs" anymore is it? That sounds more like a CUDA CPU than a GPU to me.
 

Hauk

Platinum Member
Nov 22, 2001
2,806
0
0
Look at everyone playing nicely.

Come on though Rollo, 4870 = GTX 260? I think you're playing too nice. You know better..
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
I would say that the G200 is a bit different from the G8x architecture, since a lot of the underlying aspects of it (the triangle setup, the thread scheduler, etc.) have changed.
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Originally posted by: keysplayr2003
Originally posted by: ViRGE
Originally posted by: keysplayr2003
Another thing you might not be considering, Extelleron, is that all of those transistors just might not be all for graphics purposes only. You are forgetting about compute ability and what new transistors may be dedicated to CUDA. I have no specific information about this, yet, but I am on the edge of my seat to find out. Nvidia has been cooking more than just graphics stew since G80.

P.S. Sorry, I accidentally hit edit instead of quote. Nothing was altered in your post. I hate when I do that.
CUDA doesn't have any notable dedicated transistors AFAIK. It's all set up by the drivers and then run on the existing shaders.

Who told you that? I'm not saying you're wrong, but it would be kewl to know who provided you with your GT200 info.
I'm talking about the G8X/G9X, though I wouldn't expect GT200 to be any different.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: SteelSix
Look at everyone playing nicely.

Come on though Rollo, 4870 = GTX 260? I think you're playing too nice. You know better..

Actually I don't- I have no "inside info" from any source on the 4870, so to me a rumor of it having 800 SPs is as possible as 480.

I know a whole lot about the GTX260, but without reliable RV770 info I'm in the same boat as the rest of you on that one.

Would be nice to see ATi make a big leap forward; I like having AMD CPUs* as an option, and a strong mid-range offering would help AMD.


*My Phenom 9850/780A with a 9800GX2 on it is a good enough gaming rig for anyone, self included.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
My question is, even though the memory is "shared" between two or more cards, do they each have their own data in their RAM, or do they still have a carbon copy of one another?

How does this shared memory work? I mean, theoretically.
My guess is there would be much less duplication. Each card would still need its own vertex data for the current scene but things like shaders, textures and models only need to be stored once and both cards could access them.

Also rendering that relies on the content of the previous frame for subsequent frames (e.g. render to texture operations) would only need to be stored once as well.

Overall it should significantly reduce VRAM usage compared to the traditional method of each card having to duplicate everything.
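
A purely illustrative sketch of the duplication argument; the resource sizes below are made-up placeholders, not measurements from any real game or driver:

```python
# Comparing VRAM footprints: full per-card duplication vs a hypothetical shared pool.

textures_models_mb = 400  # shared assets: textures, models, shaders (placeholder)
render_targets_mb  = 100  # render-to-texture surfaces reused between frames (placeholder)
per_gpu_scene_mb   = 120  # per-GPU working set: framebuffer, vertex/scene data (placeholder)

gpus = 2

# Traditional SLI/CrossFire: every card keeps a carbon copy of everything.
duplicated = gpus * (textures_models_mb + render_targets_mb + per_gpu_scene_mb)

# Shared-memory scheme described above: one copy of shared assets,
# only the per-GPU working set is duplicated.
shared = (textures_models_mb + render_targets_mb) + gpus * per_gpu_scene_mb

print(f"Duplicated footprint:  {duplicated} MB")  # 1240 MB
print(f"Shared-pool footprint: {shared} MB")      # 740 MB
```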
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
Take a look at G80 -> GT200.

GT200 is around ~2x G80 in terms of specs and performance.

G80 = 484mm^2 @ 90nm
GT200 = 576mm^2 @ 65nm

So for 2x the performance, you are talking about 100mm^2 larger chip at the next full process.
Sure, until GT200 moves to 55nm, when the cycle begins anew (i.e. I would expect a competitive die size at 55nm and a smaller one at 45nm).
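
Under the same idealized area-scaling assumption, the shrink works out roughly like this (a sketch, not a claim about actual 55nm or 45nm parts):

```python
# Ideal shrink of GT200's 576 mm^2 die, assuming area scales with the
# square of the linear node ratio.

GT200_MM2 = 576.0

for node_nm in (55, 45):
    area = GT200_MM2 * (node_nm / 65) ** 2
    print(f"GT200 at {node_nm}nm: ~{area:.0f} mm^2")
# Roughly 412 mm^2 at 55nm and 276 mm^2 at 45nm, back in far more
# conventional territory for a high-end GPU die.
```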

It is indeed possible that we will see a bit more single-GPU, since TSMC is ramping up their move to advanced process nodes. We will see 32nm from TSMC in early 2010 and 40nm sometime in 2009. But after that, the single-GPU dies as far as I am concerned, if not before that. TSMC might have 32nm in 2010, but then it will be a 2 year wait until 2012 for 22nm.
In addition to process shrinks there are other elements constantly being explored and developed such as different materials and manufacturing processes. That and we haven't even touched organic or laser parts.

People have been screaming for years about limits but in reality traditional silicon + electricity doesn't even scratch the surface of the potential out there.

My view of the future GPU is one where a number of GPUs (likely 2-4) are connected via hardware just like we see Intel's MCM quad-cores. The future GPU will be multi-GPU but I don't think it will always rely on software scaling.
Like I said earlier, if single GPUs hit a wall then so will multi-GPU, as multi-GPU is built up of single GPUs. If the R600 hadn't been shrunk to 55nm the 3870 X2 wouldn't have been possible.

The only way forward from that point would be to add more and more PCIe slots and then start building server racks after that to hold extra cards, none of which is viable in consumer space.

You also can't expect to peddle multi-GPU to the mid or low range, so they need single GPU upgrades or they won't buy your product.
 