Gigabyte GTX680 retail pictures


chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
That same poster posted results of Crysis benchmarks in that same thread on OCN and got pretty terrible results, to be honest, but he's using an i3....

I'm not sure the CPU is a limiting factor for a game that old; it's mostly GPU limited IIRC, especially at very high quality settings. Hopefully we'll get data on a proper test bed soon. I sure hope his Crysis results are a fluke or simply inaccurate.

Yeah, I thought the i3 affected the results somehow, but a few sites have done comparisons with different configurations of 7970s. In the end, with a single card, an i3 and up showed little to no difference.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Nice to see there will be actual cards to buy when Kepler launches.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
We can't really draw too many meaningful conclusions from his results. On one hand, Crysis is typically GPU limited, but running it at a lower 1600x900 resolution makes the CPU a bigger factor. On the other hand, you have the fact that his system only has 3.5GB of usable RAM.

A system with a stock 2400, at least 4GB of usable RAM, and a 1920x1080 or 1920x1200 monitor would have been perfect.
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
I think the obvious thing here is that GK104 is likely NOT the highest-performing chip that will be released under the Kepler architecture, regardless of time of release, whereas Tahiti is likely the highest-performing chip of the SI architecture (it may of course be re-released with higher clocks etc., but it would still be the same chip).
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
I think the obvious thing here is that GK104 is likely NOT the highest-performing chip that will be released under the Kepler architecture, regardless of time of release, whereas Tahiti is likely the highest-performing chip of the SI architecture (it may of course be re-released with higher clocks etc., but it would still be the same chip).

Yeah?
 

Awkward

Senior member
Mar 29, 2011
274
0
0
I think the obvious thing here is that GK104 is likely NOT the highest-performing chip that will be released under the Kepler architecture, regardless of time of release, whereas Tahiti is likely the highest-performing chip of the SI architecture (it may of course be re-released with higher clocks etc., but it would still be the same chip).
This barely even makes sense.


This is flamebaiting. Please see post #18.

Administrator Idontcare
 
Last edited by a moderator:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I think the obvious thing here is that GK104 is likely NOT the highest-performing chip that will be released under the Kepler architecture, regardless of time of release, whereas Tahiti is likely the highest-performing chip of the SI architecture (it may of course be re-released with higher clocks etc., but it would still be the same chip).

Those benchmarks are inconclusive.

The Gigabyte card in question is running with GPU clocks capped at 706 MHz vs. 1006 MHz based on the leaked slides. With a possible Turbo Boost to 1058 MHz, the GPU may be running 33% slower than rumored maximum TDP clocks. Alternatively, the GPU-Z program may be incorrectly reading the GPU clocks, and perhaps GTX680 isn't that fast. Either way, we cannot say which is true unless he pulls up MSI Afterburner and runs benches with a manual clock of 1006 MHz.
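As a quick sanity check on that 33% figure, here is a minimal Python sketch. The 706/1006/1058 MHz numbers are the unverified leaked readings discussed above, not confirmed specs:

```python
# All clocks below are rumored/leaked figures, not confirmed specs.
observed_mhz = 706   # clock GPU-Z reports on the Gigabyte sample
base_mhz = 1006      # base clock from the leaked slides
boost_mhz = 1058     # rumored Turbo Boost clock

# How far below the rumored boost clock is the observed clock?
deficit = (boost_mhz - observed_mhz) / boost_mhz
print(f"Observed clock is {deficit:.0%} below the rumored boost clock")
```

Running this prints a deficit of 33%, which is where the "running 33% slower" claim comes from.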

 
Last edited:

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
Those benchmarks are inconclusive.

The Gigabyte card in question is running with GPU clocks capped at 706 MHz vs. 1006 MHz based on rumors. With a possible Turbo Boost to 1058 MHz, the GPU may be running 33% slower than rumored maximum TDP clocks. Alternatively, the GPU-Z program may be incorrectly reading the GPU clocks, and in fact GTX680 isn't that fast. Either way, we cannot say which is true.


agreed
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Those benchmarks are inconclusive.

The Gigabyte card in question is running with GPU clocks capped at 706 MHz vs. 1006 MHz based on rumors. With a possible Turbo Boost to 1058 MHz, the GPU is clocked 33% slower than rumored maximum shipping clocks.


He's using the installation driver. You'd think the installation driver would set the proper clock speed? How does that work? Does the card require you to manually input the proper clock speed? Also, GPU-Z does not support GK104 yet, IIRC.
 

Arzachel

Senior member
Apr 7, 2011
903
76
91
I think the obvious thing here is that GK104 is likely NOT the highest-performing chip that will be released under the Kepler architecture, regardless of time of release, whereas Tahiti is likely the highest-performing chip of the SI architecture (it may of course be re-released with higher clocks etc., but it would still be the same chip).

Everything from the AMD/ATI 2000 series up to the 5000 series is based on the same underlying VLIW5 arch, but I haven't heard anyone say that AMD/ATI totally spanked Nvidia's G80 due to how awesome Cypress was, because that would be just plain stupid.
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
They should've included DX11.1 (assuming they didn't), because DX10.1 was a huge improvement over DX10. If Nvidia's DX10.1 parts had been about 2x as fast as they were and had offered 2GB of GDDR5, then I wouldn't have bothered to upgrade to Fermi.

Still, Nvidia's biggest problem is their drivers, not their hardware.

If this isn't DX11.1, then I wonder if this is just a 28nm refresh, and whether my EVGA GTX 560 Ti 2GB actually has adaptive VSync and the new AA mode.
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
Everything from the AMD/ATI 2000 series up to the 5000 series is based on the same underlying VLIW5 arch, but I haven't heard anyone say that AMD/ATI totally spanked Nvidia's G80 due to how awesome Cypress was, because that would be just plain stupid.

And NV has been using hot clocks and a similar arch since G80, but each generation has changes that make it a new architecture (G80, GT200, GF100, probably GK110), just like each new chip for AMD has significant changes (Evergreen, NI, SI).
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Everything from the AMD/ATI 2000 series up to the 5000 series is based on the same underlying VLIW5 arch, but I haven't heard anyone say that AMD/ATI totally spanked Nvidia's G80 due to how awesome Cypress was, because that would be just plain stupid.

Not sure how this is relevant to G80 in context. Cypress was slower than the GTX480/570/580, while G80 dominated the 2900 and 3800 series entirely. Either way, his implication is obvious: why would Nvidia spend 2 years to launch a "high-end" Kepler (whatever the name will be, GTX780/880, etc.) only for that card to end up losing to the HD7970, which is just 25% faster than a GTX580? Doubtful.

How does that work exactly? The 680 driver inputs the wrong clockspeed? Huh?

Could be Nvidia locked out the full functionality until the launch date drivers.
Could be GTX680 is not as fast as predicted.
Could be user error (didn't activate some +20% TDP maximum boost / some other feature in the driver panel)

Food for thought: why is the 28nm GTX680, supposedly high-end Kepler, clocked at 706 MHz when the GTX580 was clocked at 772 MHz on the 40nm process? GTX680 also has a lower ROP count and the same memory bandwidth.

Any of these are possible:

1) NV flopped and abandoned the large die strategy, there is no GK110;
2) GTX670 --> GTX680 is like HD5870 --> HD6870 (where the marketing name is not reflective of the true standing in the Kepler lineup)
3) Nvidia couldn't get the flagship out on time so they had to resort to using GTX680; GK110 will be GTX780. Perhaps the plan was always to counter HD7970 with a card +/-5% within HD7970 until they sort out whatever problems they are having with power, yields on the large die flagship.
4) Nvidia is loving the profit margins on a 294mm^2 chip. Milk the market/consumer. It seems consumers are eager to pay $550 for HD7970, why not pay $500 for GTX680?
5) Nvidia is having yield issues. It wouldn't be feasible to get GK110 out right now at reasonable profit levels without pricing it at $800 (good luck with that).

Either way, it's looking more and more like this entire 28nm generation is lackluster. Even if GTX680 is 10% faster than HD7970, that's still not fast enough two years after the GTX480 (my opinion).
 
Last edited:

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
I just noticed it has 32 ROPs. That must mean it does RGBA16 at full speed, since they went with only 32 ROPs. It more or less (depending on clock speed) has the same zixel rate as AMD, since Nvidia does 2x as many zixels per clock as AMD does.

It would be nice if they increased the CUDA cores to 1/4-speed FP64 precision from 1/8, but it's unlikely since they doubled the core count from the GTX580.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
I think the obvious thing here is that GK104 is likely NOT the highest-performing chip that will be released under the Kepler architecture, regardless of time of release, whereas Tahiti is likely the highest-performing chip of the SI architecture (it may of course be re-released with higher clocks etc., but it would still be the same chip).

What makes you think AMD can't release an HD 8970 with more compute units and higher clocks than the HD 7970?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
It would be nice if they increased the CUDA cores to 1/4-speed FP64 precision from 1/8, but it's unlikely since they doubled the core count from the GTX580.

I am guessing 1/4 DP speed is ruled out for GK104. That's how they are able to keep the chip so lean/small. If you look at HD7870, that's exactly what AMD has done: they made HD7870 a full-blown gaming chip. It only has 1/16th DP performance, which makes it worthless for any real-world compute work. But then they are able to have a 212mm^2 chip that can come close to HD7950 with some overclocking.

I wouldn't doubt it if GTX680 is a straight-up gaming chip with neutered DP performance. Nvidia also likes to sell full-featured compute chips in their Tesla and Quadro lines for $3-5k+. They artificially neuter their consumer line. For years, AMD has been the brand to go to for consumer DP performance.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Not sure how this is relevant to G80 in context. Cypress was slower than the GTX480/570/580, while G80 dominated the 2900 and 3800 series entirely. Either way, his implication is obvious: why would Nvidia spend 2 years to launch a "high-end" Kepler (whatever the name will be, GTX780/880, etc.) only for that card to end up losing to the HD7970, which is just 25% faster than a GTX580? Doubtful.



Could be Nvidia locked out the full functionality until the launch date drivers.
Could be GTX680 is not as fast as predicted.
Could be user error (didn't activate some +20% TDP maximum boost / some other feature in the driver panel)

Food for thought: why is the 28nm GTX680, supposedly high-end Kepler, clocked at 706 MHz when the GTX580 was clocked at 772 MHz on the 40nm process? GTX680 also has a lower ROP count and the same memory bandwidth.

Any of these are possible:
2) GTX670 --> GTX680 is like HD5870 --> HD6870 (where the marketing

That's not a likely possibility, IMO.
Unless NV changes their entire naming scheme and decides to release a new product ending in a 5, there's no room for two products above the GTX680. You could only do a single GTX690, or do a 695/685 thing.
695/685 seems highly unlikely when that's typically a refresh designation, which suggests a higher-end card than the 680 would be a 7xx.
This in turn suggests it will not appear until a while later, unless NV decides to eat up some more series, like they did with the 300 series, and have a GTX780 as high end in 6-9 months, with the GTX680 as mid-range.

Calling it a GTX680 seems very odd for a mid-range card, because it doesn't leave NV any real space for new cards, and zero space for a dual-GPU card that doesn't have a weird number like "GTX699".
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Not sure how this is relevant to G80 in context. Cypress was slower than the GTX480/570/580, while G80 dominated the 2900 and 3800 series entirely. Either way, his implication is obvious: why would Nvidia spend 2 years to launch a "high-end" Kepler (whatever the name will be, GTX780/880, etc.) only for that card to end up losing to the HD7970, which is just 25% faster than a GTX580? Doubtful.



Could be Nvidia locked out the full functionality until the launch date drivers.
Could be GTX680 is not as fast as predicted.
Could be user error (didn't activate some +20% TDP maximum boost / some other feature in the driver panel)

Food for thought: why is the 28nm GTX680, supposedly high-end Kepler, clocked at 706 MHz when the GTX580 was clocked at 772 MHz on the 40nm process? GTX680 also has a lower ROP count and the same memory bandwidth.

Any of these are possible:

1) NV flopped and abandoned the large die strategy, there is no GK110;
2) GTX670 --> GTX680 is like HD5870 --> HD6870 (where the marketing name is not reflective of the true standing in the Kepler lineup)
3) Nvidia couldn't get the flagship out on time so they had to resort to using GTX680; GK110 will be GTX780. Perhaps the plan was always to counter HD7970 with a card +/-5% within HD7970 until they sort out whatever problems they are having with power, yields on the large die flagship.
4) Nvidia is loving the profit margins on a 294mm^2 chip. Milk the market/consumer. It seems consumers are eager to pay $550 for HD7970, why not pay $500 for GTX680?
5) Nvidia is having yield issues. It wouldn't be feasible to get GK110 out right now at reasonable profit levels without pricing it at $800 (good luck with that).

Either way, it's looking more and more like this entire 28nm generation is lackluster. Even if GTX680 is 10% faster than HD7970, that's still not fast enough two years after the GTX480 (my opinion).

The GTX 480 was 6 months late, and for those six months you could just as easily have argued that the HD 5870 was faster than the GTX 285. Also, the HD 5000 series didn't compete with the 500 series.

And as technology advances, you get more and more diminishing returns.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Calling it a GTX680 seems very odd for a mid-range card, because it doesn't leave NV any real space for new cards, and zero space for a dual-GPU card that doesn't have a weird number like "GTX699".

Well, like I said, we have no official benchmarks, so the whole mid-range vs. high-end debate is impossible to settle. For me, I keep in mind that since the GeForce3, Nvidia's next-generation mid-range card has always either beaten or tied the previous Nvidia high-end chip (GeForce3 Ti 500 < GF4 Ti 4200, 5950U < 6600GT, 6800U < 7800GT, 7900GTX < 8800GT/GTS, 8800GTX/Ultra < GTS250/GTX260, GTX285 < GTX460 1GB). Nvidia can call the GTX680 a GTX980 for all I care. If it performs like a GTX660Ti (i.e., 20% faster than GTX580), then some people will view it as a "next generation" mid-range card priced at $550. Someone buying next week might not even care about this classification. To them, if it's faster than GTX580, it's Nvidia's "high-end" chip at that moment in time.

Without knowing benchmarks, it's all speculation. Most of us would agree that a $450+ price for any card signals the high-end 3-4% market segment, so GTX680 is going to be high-end based on price. Still, even price is relative (the Sennheiser HD650 @ $600 used to be the highest-end headphones of its lineup generation and was later superseded by the $1,500 high-end HD800, with the $999 HD700 as "mid-range"...). So even going by price, it's subjective. Last generation, the HD6970 was high-end at $370, and now a $350 HD7870 is accepted as mid-range. I think NV and AMD will price their cards as high as they can get away with; their goal is to maximize profits for shareholders.

Whether we agree as gamers that GTX680's performance constitutes a sufficient improvement over the Fermi generation to warrant the high-end GTX680 brand name is debatable, and impossible to say without official benches. Either way, people who want to upgrade will buy a GTX680/HD7970. They aren't going to care whether the cards are "mid-range", "mid-high-end" or "high-end".

They should've included DX11.1 (assuming they didn't), because DX10.1 was a huge improvement over DX10.

Sarcasm?

I had an HD4890 for about a year and saw no improvement from DX10.1 over DX10. Perhaps there was one game that ran faster, BattleForge I believe. Other than that, DX10.1 was marketing fluff. DX11.1 doesn't even have any new instructions over DX11 (at least DX10.1 did), so DX11.1 matters even less.
 
Last edited: