OFFICIAL KEPLER "GTX680" Reviews


BoFox

Senior member
May 10, 2008
689
0
0
That's funny, because hwbot.org says the average air overclock for a GTX 680 is 1175MHz: http://hwbot.org/hardware/videocard/geforce_gtx_680/ . Looks like your lame attempt at cherry-picking benchmarks failed again. Also, you failed to address my comment that the 7970 starts at 925MHz whereas the GTX 680 starts at 1006MHz. Therefore, when the 7970 overclocks to a proven average of 1202MHz, that's a 30% overclock, whereas the GTX 680 only hits 1175MHz, or 17%.
It's ironic that you speak of "fairness" in all your biased posting. You're comparing more expensive AIB models just to bolster your "price/performance" argument, when in reality a reference 7970 has no problem hitting those clocks either: http://hardocp.com/article/2012/01/09/amd_radeon_hd_7970_overclocking_performance_review/ . But once again you cherry-pick benchmarks and lie instead of presenting the situation honestly and completely.
Where did I ever claim that? Go post a quotation, except you won't find it because once again you are lying.
Never said any of that either, here's what I said:
So once again you are lying and committing libel since you can't properly rebut any of my arguments.
I've actually owned twice as many NV cards in my life as ATI/AMD. I don't support underhanded tactics that harm consumers, but it seems you do as long as they favor Nvidia. It's clear to me that you're posting with some agenda or a personal vendetta, not as a contributing member of the forum. In this one post you:

A) Failed to rebut any of my arguments
B) Ignored, deflected, or changed the subject when you couldn't
C) Flat-out personally attacked me and lied when you couldn't

Shameful.

The first source that you linked to (http://hwbot.org/hardware/videocard/geforce_gtx_680/ ) only accounts for the default reported clock, which does not include the GPU boost of roughly 100MHz, right?

At least that's how programs like GPU-Z should report the GPU base clock, even when overclocked. Unless I'm wrong??

But that still doesn't refute the point that when it was overclocked by 30% and still had GPU boost enabled, the average gains were less than 18%. So GPU boost was included in the GPU benchmarks automatically.

The average 680 overclock is 16-17%, which means a 10% performance gain including GPU boost.

Well, the GTX 680 usually hovers at around 1070-1080MHz on average, not the ~1050MHz Nvidia states.
So that's actually only about a 20% overclock.

Here, in this chart:

http://bjorn3d.com/read.php?cID=2199&pageID=11682

The performance increase due to the ~20% overclock is 20.7%. EDIT- The memory was overclocked by only 15%, and this is pretty good considering that the card is already somewhat bottlenecked by its 256-bit bus.
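
For anyone who wants to sanity-check the percentages being thrown around, here's a quick Python sketch. The clocks are the ones quoted in this thread; the ~1300MHz overclocked boost clock in the last line is my assumption for where the "about 20%" figure comes from:

Code:
# Overclock expressed as a percentage of the stock clock.
def relative_oc(stock_mhz, oc_mhz):
    return (oc_mhz / stock_mhz - 1.0) * 100.0

print(relative_oc(925, 1202))   # 7970: ~29.9%, the "30%" figure
print(relative_oc(1006, 1175))  # GTX 680 base clock: ~16.8%, the "17%" figure
print(relative_oc(1080, 1300))  # GTX 680 vs a ~1080MHz typical boost: ~20.4%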
 
Last edited:

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
But that still doesn't refute the point that when it was overclocked by 30% and still had GPU boost enabled, the average gains were less than 18%. So GPU boost was included in the GPU benchmarks automatically.

The average 680 overclock is 16-17%, which means a 10% performance gain including GPU boost.

Oh I wasn't trying to refute that point, merely throwing some light on what one can expect when overclocking a 680. As for the average performance increase, that is frankly too much work for me to bother calculating.

I agree that there are some indications that the 680 scales somewhat poorly with overclocking, but again, more data is probably needed.

The performance increase due to the ~20% overclock is 20.7%. EDIT- The memory was overclocked by only 15%, and this is pretty good considering that the card is already somewhat bottlenecked by its 256-bit bus.

Actually, if you look at the improvements in games rather than synthetics in that review, the performance increase is on average 15.8% at 1920x1080, and since they measure a clock rate of 1305MHz when overclocking including GPU boost (an overclock of 23.3% from 1058MHz), the performance only scales at about 67.7% of the clock increase. But again, that is only one data point.
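
Here's the arithmetic behind that 67.7%, as a minimal Python sketch using the figures above:

Code:
# Scaling efficiency = average fps gain divided by clock gain.
perf_gain_pct = 15.8                      # average gain in games at 1920x1080
clock_gain_pct = (1305 / 1058 - 1) * 100  # measured OC incl. boost, vs stock
print(round(clock_gain_pct, 1))                  # 23.3 (%)
print(round(perf_gain_pct / clock_gain_pct, 3))  # 0.677, i.e. ~67.7%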
 
Last edited:

Imouto

Golden Member
Jul 6, 2011
1,241
2
81
You can't claim that a +150 GPU boost is actually a +150 OC. While a bunch of apps will squeeze the max boost out of it, some of them will see no advantage at all.

A few examples:

Base Clock: 725 MHz.
Max OC while mining: 925 MHz.
Max OC while running 3DM11: 900 MHz.
Max OC actually gaming: 885 MHz.

These are a few tests I did with my 5850. I wouldn't say that it has a +200 MHz OC clock offset.

That's just dishonest.
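
If you run my 5850 numbers through a quick Python sketch, the per-workload spread is obvious:

Code:
base = 725  # MHz
for workload, clock in [("mining", 925), ("3DMark11", 900), ("gaming", 885)]:
    pct = (clock / base - 1) * 100
    print(f"{workload}: +{clock - base} MHz ({pct:.1f}%)")
# mining: +200 MHz (27.6%), 3DMark11: +175 MHz (24.1%), gaming: +160 MHz (22.1%)

The headline overclock is the mining one, but the clock you actually game at is 40 MHz lower.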

In fact, with this launch I lost all my faith in every single reviewer out there. They're more and more prone to pleasing the brand of whatever product they're reviewing. It's just an immense turd, and you have to excavate really deep to find anything close to the truth.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Just for the hell of it I went through all the reviews in the OP (plus a couple of extra ones from the DailyTech link), and the average core clock offset achieved was 151MHz, for a clock of 1157MHz (fairly close to your link from hwbot).

However, GPU boost comes on top of this. Unfortunately there weren't many sites that measured its effect when overclocking, but an extra 50-100MHz seems somewhat common (take this with a grain of salt, due to the lack of proper data).

So an average GTX 680 would run at 1206-1256MHz when overclocked, compared to ~1058MHz at stock (again due to GPU boost), for a relative overclock of 14-19%.
I believe you're right that the GPU clock reported by 3DMark11 is the GPU offset only. However, that's all that's guaranteed, isn't it? You'll see people throw around that "stock performance is all that matters because that's all that's guaranteed!"; then that principle should apply here as well. But in the end, what performance are those clocks actually producing? Here's a link to a review with the highest GTX 680 clocks I could find:
http://vr-zone.com/articles/asus-gtx-680-2gb-overclocking-review-win-some-lose-some/15322-5.html
The GTX 680 is clocked at 1335MHz (32.7%) and the 7970 at 1250MHz (35.2%). Now take a look at the scaling, especially compared to the 7970:

AvP - GTX 680 - 17.7% vs. 7970 - 30.9%
BF3 - GTX 680 - 19.34% vs. 7970 - 30.6%
Crysis 2 - GTX 680 - 16.1% vs. 7970 - 30.7%
Batman: AA - GTX 680 - 16.9% vs. 7970 - 28.6%
L4D2 - GTX 680 - 12.3% vs. 7970 - 22.6%

Notice how poorly the GTX 680 scales? So then, is it that the architecture doesn't scale well with clocks, or is it that GPU Boost really doesn't do much once you start pushing the card? Either way, does it matter if the performance increase simply isn't there? GTX 680s in the wild are producing the same results, for example: http://hardforum.com/showthread.php?t=1681820

Therefore, dynamic clocking gives you fantastic results in synthetic benchmarks that only load some portions of the GPU at a time (keeping the TDP low and GPU Boost active), but in real gaming situations it's not so useful. This is why static clocks are the most important - when the GPU actually gets stressed, that's what it will be running at.
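
To put a number on "poorly," here's a small Python sketch of per-game scaling efficiency (fps gain divided by core clock gain, so 1.0 would be perfect scaling), using the VR-Zone figures above:

Code:
oc_680, oc_7970 = 32.7, 35.2  # core overclocks, %
games = {
    "AvP":        (17.7,  30.9),
    "BF3":        (19.34, 30.6),
    "Crysis 2":   (16.1,  30.7),
    "Batman: AA": (16.9,  28.6),
    "L4D2":       (12.3,  22.6),
}
for game, (gain_680, gain_7970) in games.items():
    print(f"{game}: 680 {gain_680 / oc_680:.2f} vs 7970 {gain_7970 / oc_7970:.2f}")
# The 680 lands around 0.38-0.59; the 7970 around 0.64-0.88.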
 
Last edited:

Rvenger

Elite Member / Super Moderator / Video Cards
Apr 6, 2004
6,283
5
81
I believe you're right that the GPU clock reported by 3DMark11 is the GPU offset only. However, that's all that's guaranteed, isn't it? You'll see people throw around that "stock performance is all that matters because that's all that's guaranteed!"; then that principle should apply here as well. But in the end, what performance are those clocks actually producing? Here's a link to a review with the highest GTX 680 clocks I could find:
http://vr-zone.com/articles/asus-gtx-680-2gb-overclocking-review-win-some-lose-some/15322-5.html
The GTX 680 is clocked at 1335MHz (32.7%) and the 7970 at 1250MHz (35.2%). Now take a look at the scaling, especially compared to the 7970:

AvP - GTX 680 - 17.7% vs. 7970 - 30.9%
BF3 - GTX 680 - 19.34% vs. 7970 - 30.6%
Crysis 2 - GTX 680 - 16.1% vs. 7970 - 30.7%
Batman: AA - GTX 680 - 16.9% vs. 7970 - 28.6%
L4D2 - GTX 680 - 12.3% vs. 7970 - 22.6%

Notice how poorly the GTX 680 scales? So then, is it that the architecture doesn't scale well with clocks, or is it that GPU Boost really doesn't do much once you start pushing the card? Either way, does it matter if the performance increase simply isn't there? GTX 680s in the wild are producing the same results, for example: http://hardforum.com/showthread.php?t=1681820

Therefore, dynamic clocking gives you fantastic results in synthetic benchmarks that only load some portions of the GPU at a time (keeping the TDP low and GPU Boost active), but in real gaming situations it's not so useful. This is why static clocks are the most important - when the GPU actually gets stressed, that's what it will be running at.


Wow, that's roughly 40% worse scaling than the 7970.
 

BoFox

Senior member
May 10, 2008
689
0
0
I believe you're right that the GPU clock reported by 3DMark11 is the GPU offset only. However, that's all that's guaranteed, isn't it? You'll see people throw around that "stock performance is all that matters because that's all that's guaranteed!"; then that principle should apply here as well. But in the end, what performance are those clocks actually producing? Here's a link to a review with the highest GTX 680 clocks I could find:
http://vr-zone.com/articles/asus-gtx-680-2gb-overclocking-review-win-some-lose-some/15322-5.html
The GTX 680 is clocked at 1335MHz (32.7%) and the 7970 at 1250MHz (35.2%). Now take a look at the scaling, especially compared to the 7970:

AvP - GTX 680 - 17.7% vs. 7970 - 30.9%
BF3 - GTX 680 - 19.34% vs. 7970 - 30.6%
Crysis 2 - GTX 680 - 16.1% vs. 7970 - 30.7%
Batman: AA - GTX 680 - 16.9% vs. 7970 - 28.6%
L4D2 - GTX 680 - 12.3% vs. 7970 - 22.6%

Notice how poorly the GTX 680 scales? So then, is it that the architecture doesn't scale well with clocks, or is it that GPU Boost really doesn't do much once you start pushing the card? Either way, does it matter if the performance increase simply isn't there? GTX 680s in the wild are producing the same results, for example: http://hardforum.com/showthread.php?t=1681820

Therefore, dynamic clocking gives you fantastic results in synthetic benchmarks that only load some portions of the GPU at a time (keeping the TDP low and GPU Boost active), but in real gaming situations it's not so useful. This is why static clocks are the most important - when the GPU actually gets stressed, that's what it will be running at.

Well, if the memory is not overclocked by an equal amount (which some reviewers do not really bother with), the card is probably being bottlenecked by the bandwidth in some games.

EDIT: Also, I'd take VR-Zone's numbers with a huge grain of salt, given that they have the HD 7970's memory clocked at 7480MHz. VR-Z is one of the less reputable sources.
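
For reference, the bandwidth math behind the bottleneck argument, as a quick Python sketch (6008MHz is the 680's stock effective memory clock and 5500MHz the 7970's; both results match the 192GB/sec and 264GB/sec figures quoted later in this thread):

Code:
def bandwidth_gbs(bus_bits, eff_mem_mhz):
    # Bytes per transfer across the bus, times effective transfers per second.
    return bus_bits / 8 * eff_mem_mhz * 1e6 / 1e9

print(bandwidth_gbs(256, 6008))  # GTX 680: ~192 GB/s
print(bandwidth_gbs(384, 5500))  # HD 7970: ~264 GB/s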
 
Last edited:

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Once again, if the memory is not overclocked by an equal amount, the card is probably being bottlenecked by the bandwidth in some games.
Do you have any proof that it's memory bandwidth bottlenecked, especially at 1080p?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Temporarily closing this thread while I figure out what is going on with all the insults and baiting.

If, upon review, I am left with the impression that this thread is a lost cause then the thread will remain closed.

Administrator Idontcare

edit: Wow! Call me impressed. Outside of just a few heated posts (which were truly few and far between), you guys have managed to keep this discussion well on-topic and robustly productive :thumbsup:

I closed this thread preparing myself for the worst based on the reported post queue, but I am pleasantly surprised, having my faith completely restored in you all for keeping what would otherwise be a heated debate a reasonably active but even-keeled discussion about the GTX680.

Re-opening the thread, realizing now that there was no real need for me to close it in the first place this morning. I should have had more faith in you all, but you've restored my confidence in this community today :thumbsup:

One favor is all I ask: if you feel things are getting personal, or you feel like people are calling you a liar instead of talking about benches, take it to PM or just take a break from the keyboard and clear your thoughts. That's all, now get back to it, folks.

Administrator Idontcare
 
Last edited:

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
I am hoping for a comprehensive AT OC review of the 680. The core looks like it would clock to the moon with a great cooler and internals, but I wonder if the memory bandwidth is holding it back. If this is any precursor at all, 'big Kepler' with a 512-bit memory bus should be a monster.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Do you have any proof that it's memory bandwidth bottlenecked, especially at 1080p?

Most review sites found that overclocking the memory resulted in a higher performance gain than overclocking the core itself.

And just to clarify, if the base clock has been shifted up 150MHz from default via the offset, i.e. to ~1150MHz, doesn't that mean that the highest clock the card can reach is 1250~1300MHz?
 

battletoad

Member
Mar 21, 2012
29
0
0
I'm having trouble getting anything close to respectable performance out of Arkham City. What is going on? Stock settings on the 680:


1080p
2xMSAA (Several reviews showed good FPS at 4xMSAA)
DX11 all at highest settings
Everything else at max settings, PhysX OFF (on just makes things even worse)

Lots of reviews are listing a 50-60FPS average in their benches with a stock-speed 680 at 4xMSAA, including this very site and HardwareCanucks, just off the top of my head.

I'm talking wild and frequent drops to the very low 30s at 2xMSAA in the Ivy fight, the early waves of the Funhouse Extreme Brawl challenge map, and lots of other indoor environments with minimal draw distances. Again, PhysX OFF. Tried this with Adaptive VSync both off and on. This game seems unplayable at the supposed bench settings in many spots right off the bat. Latest drivers, latest patch, 2500K OC'd to 4.2GHz.

I want to know what parts of the game they played when doing this bench, because I am finding their bench figures to be misleading, to put it kindly.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
I believe you're right that the GPU clock reported by 3DMark11 is the GPU offset only. However, that's all that's guaranteed, isn't it? You'll see people throw around that "stock performance is all that matters because that's all that's guaranteed!"; then that principle should apply here as well. But in the end, what performance are those clocks actually producing? Here's a link to a review with the highest GTX 680 clocks I could find:
http://vr-zone.com/articles/asus-gtx-680-2gb-overclocking-review-win-some-lose-some/15322-5.html
The GTX 680 is clocked at 1335MHz (32.7%) and the 7970 at 1250MHz (35.2%). Now take a look at the scaling, especially compared to the 7970:

AvP - GTX 680 - 17.7% vs. 7970 - 30.9%
BF3 - GTX 680 - 19.34% vs. 7970 - 30.6%
Crysis 2 - GTX 680 - 16.1% vs. 7970 - 30.7%
Batman: AA - GTX 680 - 16.9% vs. 7970 - 28.6%
L4D2 - GTX 680 - 12.3% vs. 7970 - 22.6%

Notice how poorly the GTX 680 scales? So then, is it that the architecture doesn't scale well with clocks, or is it that GPU Boost really doesn't do much once you start pushing the card? Either way, does it matter if the performance increase simply isn't there? GTX 680s in the wild are producing the same results, for example: http://hardforum.com/showthread.php?t=1681820

Therefore, dynamic clocking gives you fantastic results in synthetic benchmarks that only load some portions of the GPU at a time (keeping the TDP low and GPU Boost active), but in real gaming situations it's not so useful. This is why static clocks are the most important - when the GPU actually gets stressed, that's what it will be running at.

The problem with that review is that the 680 isn't actually clocked at 1335MHz; that is merely the max clock observed during their overclocking. The actual average clock during testing is probably considerably lower.

And just to clarify, if the base clock has been shifted up 150MHz from default via the offset, i.e. to ~1150MHz, doesn't that mean that the highest clock the card can reach is 1250~1300MHz?

As far as I understand it, the highest boost provided by GPU boost is 100MHz, which would put the highest reachable clock at around 1250MHz with a 150MHz offset.
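
As a trivial sketch of that ceiling arithmetic in Python (the +100MHz boost cap is just what has been reported so far, not a confirmed spec):

Code:
base, offset, max_boost = 1006, 150, 100  # MHz
print(base + offset)              # 1156, the "~1150MHz" shifted base clock
print(base + offset + max_boost)  # 1256, i.e. the ~1250MHz ceiling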
 

Quantos

Senior member
Dec 23, 2011
386
0
76
Seems like what we really need to come up with is a way to average out the 680's OC clocks. The issue is that not only does the performance differ depending on the application, but the clocks do as well. More than ever, this depends on what you're planning to do with the card. Also, even if we only look at a single application, which figure do we use here? Max clock? Average clock? Boost clock?

Also, as others have said, the memory seems to be more important than the core.
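
One option would be to log the core clock during an actual benchmark run (GPU-Z can write a sensor log) and report the time-weighted average rather than the max. A minimal Python sketch, assuming a stripped-down two-column CSV of (seconds, core MHz) samples; the file name and column layout here are placeholders, not GPU-Z's actual log format:

Code:
import csv

def average_clock(log_path):
    # Time-weighted average core clock from (seconds, mhz) samples.
    with open(log_path) as f:
        rows = [r for r in csv.reader(f) if r]
    samples = [(float(t), float(mhz)) for t, mhz in rows]
    total = samples[-1][0] - samples[0][0]
    weighted = sum((t2 - t1) * mhz
                   for (t1, mhz), (t2, _) in zip(samples, samples[1:]))
    return weighted / total

print(average_clock("bf3_run.csv"))  # one log per game/benchmark run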
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,329
126
Seems like what we really need to come up with is a way to average out the 680's OC clocks. The issue is that not only does the performance differ depending on the application, but the clocks do as well. More than ever, this depends on what you're planning to do with the card. Also, even if we only look at a single application, which figure do we use here? Max clock? Average clock? Boost clock?

Also, as others have said, the memory seems to be more important than the core.

GPU boost completely muddies the waters on getting any sort of consistent foundation for benchmarking the 680. It boosts to different clocks in different games and fluctuates within those games.

It's a good feature for keeping temperatures, noise and power consumption in as ideal a state as possible. I do wish you could disable it, though; people like to have control over their hardware, and GPU boost is something I would rather have off than on. Looking at it like Intel's Turbo Boost, which at times causes instabilities in overclocks, I can see it doing the same and limiting the potential for overclocking your GPU.

With your CPU you can make loads of adjustments to get a stable overclock while using turbo to keep your voltage and clock fluctuations stable. GPU boost is entirely automated.
 

Quantos

Senior member
Dec 23, 2011
386
0
76
GPU boost completely muddies the waters on getting any sort of consistent foundation for benchmarking the 680. It boosts to different clocks in different games and fluctuates within those games.

It's a good feature for keeping temperatures, noise and power consumption in as ideal a state as possible. I do wish you could disable it, though; people like to have control over their hardware, and GPU boost is something I would rather have off than on. Looking at it like Intel's Turbo Boost, which at times causes instabilities in overclocks, I can see it doing the same and limiting the potential for overclocking your GPU.

With your CPU you can make loads of adjustments to get a stable overclock while using turbo to keep your voltage and clock fluctuations stable. GPU boost is entirely automated.

Possibly you can't disable it because the card wouldn't be able to reach such high clocks otherwise? If, for instance, you set 1300MHz as the boost target, the card might be able to reach it, or even exceed it, in some circumstances, but not in others, where it would run too warm, draw too much power, or spin the fan too quickly.

Considering that, the boost is pretty useful, because it allows you to get a higher-than-usual clock in the circumstances it sees fit, and throttles down a bit where it would otherwise crash at that frequency.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
I would think GPU boost could be controlled in the BIOS. It's conceivable that a custom BIOS (or better yet, a handy software tool) that can disable it may be out soon. It has great benefit for those who never plan to OC, but I could see wanting to disable it for benchmarking and OCing purposes.
 

felang

Senior member
Feb 17, 2007
594
1
81
Seems like what we really need to come up with is a way to average out the 680's OC clocks. The issue is that not only does the performance differ depending on the application, but the clocks do as well. More than ever, this depends on what you're planning to do with the card. Also, even if we only look at a single application, which figure do we use here? Max clock? Average clock? Boost clock?

Also, as others have said, the memory seems to be more important than the core.

I would just ignore the clock... just go by min and avg fps.

Does it really matter if it's running at 1 or 1.2 or 1.3GHz? We should just try to measure actual performance and compare it against whatever other card you're interested in.

I would think GPU boost could be controlled in the BIOS. It's conceivable that a custom BIOS (or better yet, a handy software tool) that can disable it may be out soon. It has great benefit for those who never plan to OC, but I could see wanting to disable it for benchmarking and OCing purposes.

Yes, but then you would have people pushing the cards beyond their capability and then trying to return/RMA them. IMO this is just a way for NVIDIA to make sure that no one pushes clocks, volts, etc. higher than what the card is engineered for, probably diminishing warranty claims considerably, not to mention letting them avoid over-engineering the cards in the first place just so the small percentage of customers who overclock can actually do so.

Custom designs featuring beefed-up PCBs and power delivery will be released to cater to overclockers, and the higher failure rate caused by enthusiasts pushing those cards further will already be factored into the purchase price, effectively saving/making NVIDIA even more money in the long run... sort of like Intel's K-series CPUs.
 
Last edited:

Termie

Diamond Member
Aug 17, 2005
7,949
48
91
www.techbuyersguru.com

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
HardOCP's SLI review is up:
http://hardocp.com/article/2012/03/28/nvidia_kepler_geforce_gtx_680_sli_video_card_review/1


Check out the very interesting results for BF3 multiplayer (bottom of page):
http://hardocp.com/article/2012/03/28/nvidia_kepler_geforce_gtx_680_sli_video_card_review/5

Seems that multiplayer not only uses far more VRAM, it almost doesn't work correctly on the 7970 in CrossFire, requiring much lower settings than the 680. Maybe a driver issue...

Thanks for this. The following is from the [H] conclusion:

"Memory Capacity and Bandwidth
We know exactly what you guys are thinking. The Radeon HD 7970 has 3GB of VRAM, the GeForce GTX 680 has 2GB; the Radeon HD 7970 has 264GB/sec of memory bandwidth and the GeForce GTX 680 has 192GB/sec of memory bandwidth. You'd expect Radeon HD 7970 CrossFireX to simply blow GeForce GTX 680 SLI out of the water at 5760x1200. The simple fact is, it does not, and in fact GeForce GTX 680 SLI provides a better gameplay experience with better performance. Amazing, but true. Obviously AMD's driver engineers need to figure out how to utilize the hardware more efficiently, because at the moment, NVIDIA is taking AMD to school.

There is one interesting game to look at right now for video card memory usage, and that is Battlefield 3 multiplayer. There is a big difference between the VRAM usage in single player and multiplayer. When we cranked this game up in a 64 player server at the highest in-game settings we saw it get near to 5 GB of VRAM usage on Radeon HD 7970 at 4X AA at 5760x1200. This game seems certainly capable of maximizing VRAM usage on video cards in multiplayer in NV Surround or Eyefinity resolutions. It makes us really want to try out 4GB, or higher video cards. A couple of 4GB GTX 680 video cards are looking real interesting to us right now in this game, and from what rumors we have heard, Galaxy is very likely to make this happen for us."


Looks like AMD caught up to NVidia's multi-GPU solution last generation, only to fall behind again. So much for the "7970's +1 GB VRAM will allow it to surpass the GTX 680 at Eyefinity-scale resolutions" hypothesis...

Given that AMD's multi-GPU setup again has issues, that extra VRAM is apparently not as good as it sounds in CF. Therefore I will revise my estimate of what a fair price is for the 7970, with the rest cascading downwards, since the last advantage the 7970 seemed to have is apparently broken:

7970 - $425
7950 3GB - $320
7950 1.5GB - $300
7870 2GB - $260
7850 2GB - $220
 
Last edited: