Tessellation power means nothing if they don't do anything with it.
I'm not supporting a company that pushes for a game to tessellate an entire ocean under the level or concrete barriers (like Crysis 2), or that adds oodles of tessellation to a poor console port (like Batman: AC), just so they gain a few more FPS over the competition while making the game run poorly for everyone.
I thought we were fooled on April 1st. Anyone seen this article... a 2GHz GTX 680 coming?
http://videocardz.com/31492/zotac-announcing-geforce-gtx-680-with-2ghz-core-clock
The bolded is simply incorrect: it wasn't nVidia that brought tessellation into gaming but Microsoft, when they made it part of DX11. Saying nVidia was the one putting money into games to include tessellation would imply that no game had tessellation before the GTX 480 released, which again is incorrect.

The 2 statements appear to be a contradiction. You think Tessellation is a marketing gimmick since it's not widely or effectively used in games. However, when 1 company actually pushes for this graphics-demanding feature to be present in games, you find it a "cheating tactic". Also, for better or worse, Tessellation is a PC-specific feature PC gamers are enjoying vs. what are otherwise straight-up console ports with high-resolution texture packs. I would even go as far as to say DX11 will be remembered as the beginning of the Tessellation era.
I agree with you that the way Tessellation is utilized in Crysis 2 is not very efficient (i.e., concrete barriers, the ocean). However, the fact that the Tessellation coding is inefficient doesn't override the notion that it is because of NV that Tessellation as a feature has been elevated into the spotlight in the last 2 years. After all, AMD's Cayman architecture uses a 7th-generation Tessellation engine. AMD actually introduced tessellation in its basic form with TruForm on the Radeon 8500 in 2001. Unfortunately, we didn't see much use of Tessellation until NV put $$ behind it. If NV didn't work closely with developers, who knows how long it would have taken before we would have seen tessellation. For all we know, AMD might have been on their 12th-generation tessellation videocard and none of us would have known this feature even mattered.
It appears to me your two main issues are the inefficiency of Tessellation coding and that one company is willing to spend more $ to implement features in which its products offer a performance advantage. That's a valid point, but AMD can do the same. At the same time, the HD7970 wins against the HD6970 in the very heavily tessellated games you are criticizing: Crysis 2, Batman: AC. If you aren't going to use next-generation features in new games, what's the point of spending $500 on a 7970 in the first place? The HD6970/GTX570 can play them perfectly fine if you lay off that particular feature.
If you are upset that NV pushes PC-centric features, then who else is going to do it? Nothing stops AMD from making games using Bullet physics, or using its enormous GPGPU compute advantage for shaders, etc. I think I'd rather see NV spend marketing $ on Tessellation than play console ports, even if my AMD card is much slower than the competitor's.
That's funny you accuse me of providing an "NV marketing post" while you are the one missing the entire picture here - a more efficient, cheaper card is faster. You can discount the review I linked if you don't like its results. That review is not the only review which shows that in certain games the 680 leads by 20-30%. Furthermore, their card reached 1300mhz on reference cooling, and so did Xbitlabs' card. That's at least 2 samples that have achieved >1300mhz overclocks. Many reviewers were able to achieve 1180-1250mhz as well. Regardless of how "cherry-picked" it could have been, I haven't seen any HD7970 accomplish 1300mhz on air. In fact, 2 out of 3 Lightning 7970s couldn't even get to 1250mhz on air.
In fairness, if we are going to compare the best 7970 card vs. a reference 680, it makes sense to wait until after-market 680s arrive. There is little doubt that the HD7970 will be left in the dust completely once the Direct CU, Windforce 3X, Lightning, and Classified editions of those cards launch. The fact that we are even discussing a $600 Lightning card vs. a reference $500 card is in itself telling how great the 680 is.
It's interesting how last generation the HD6970 was 90% as good as the GTX580 and you claimed it offered amazing bang for the buck at $370, yet now the HD7970 offers 90% of the performance of a 680 and it's OK that it's priced at $500? The 2nd fastest card should always cost less unless it offers some unique features worth paying a premium for.
You say 3GB is a key advantage, but so far no benchmarks have shown this to be true. I'd much rather recommend someone spend $500 on a card with proper working drivers and proper working features (H.264 encoding):
The 3GB on the HD7970 didn't really help it in SKYRIM, where the 680 handily won on 3 monitors.
Previously you claimed that HardOCP's focus on newer games is preferable since testing older games is irrelevant. That was before all the 680 reviews came out. Now the only way the HD7970 even manages to get tangible wins is when 2-4 year old games are used in the reviews: Crysis Warhead, Metro 2033 and AvP. Do those games matter more than BF3, Dirt 3, SKYRIM? Maybe to some gamers, but probably not to most.
I don't believe that a reference HD7970 can be justified even at $500.
If you don't like the Bjorn3D review, look at any other notable ones: Hardware Canucks, ComputerBase, AnandTech and even your #1 go-to source - HardOCP. The GTX680 wins in all of them.
"We could prattle on and on extolling the GTX 680s virtues but heres what really matters: NVIDIAs newest flagship card is superior to the HD 7970 in almost every way. Whether it is performance, power consumption, noise, features, price or launch day availability, it currently owns the road and wont be looking over its shoulder for some time to come." ~ Hardware Canucks
In their review, the GTX680 was 16% faster at 1080P 4AA and 15% faster at 2560x1600 4AA. The HD7970 needs to cost at most 90% of the GTX680 -- that's $450. And that's being generous since people always pay a premium for the fastest single GPU, especially one that's better in most other metrics too: power consumption, noise, the 3-year warranty most 680s carry, working H.264 encoding, the new TXAA mode that might be good (or maybe marketing), etc.
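Just to show the math behind that $450 figure, here's a quick sketch (my own illustration; the 15-16% numbers are the Hardware Canucks results quoted above, everything else is assumed):

```python
# Rough price/performance parity check (illustrative numbers only).
gtx680_price = 500.0   # reference GTX 680 launch MSRP
gtx680_lead = 0.155    # GTX 680 ~15-16% faster per the Hardware Canucks numbers above

# If the 680 is ~15.5% faster, the 7970 delivers roughly this fraction of its performance:
hd7970_relative_perf = 1 / (1 + gtx680_lead)        # ~0.87

parity_price = gtx680_price * hd7970_relative_perf  # strict performance parity: ~$433
generous_price = gtx680_price * 0.90                # the "at most 90%" figure: $450

print(f"7970 relative performance: {hd7970_relative_perf:.0%}")
print(f"Parity price: ${parity_price:.0f}, generous 90% price: ${generous_price:.0f}")
```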
I find it odd that you are trying to defend the 7970's pricing and yet accuse me of spitting out an NV "marketing pamphlet post". Last time I checked, lower prices are better for consumers, and here you are advocating that the HD7970 doesn't need to fall in price by more than $50. It is a rather interesting position, especially after you've promoted AMD's bang-for-the-buck philosophy for years. Yet now you think consumers should pay the same for a card that needs to be overclocked to match guaranteed performance. I don't see that as a reasonable expectation since overclocking is always just a nice bonus, while performance out of the box is guaranteed. Actually, part of the enthusiasm behind overclocking is to get a card that's cheaper and performs as fast as a higher-SKU part. In this case, a reference card is faster than a 1050mhz 7970, and that performance is guaranteed for everyone.
Since you already stated you won't support NV because of their business practices, thank you for finally admitting that you prob. won't buy NV products in the first place. If you prefer AMD cards for any reason, there is nothing wrong with that. Plenty of posters buy NV cards for Folding@Home for example. However, by more or less stating that you won't support NV as a company due to their business practices, don't expect most posters to view your opinion as objective when it comes to videocard recommendations.
Okay, being mostly an FPS guy, I'm looking at the BF3 benchmarks. I know that is a strong NVIDIA title...
I guess we're stuck with paying stupid amounts of $$ for mid-range products for a long time to come.
Playing video games is the best part about hardware.

Overclocking is the best part about hardware :|
What are you running now for everything non-gpu in your system, 225w or so? The hx 520 is rated for 41a @ 50c continuous usage. Mine has been going literally 24/7/365 on SETI for around 4 years with various cpu/gpu combos (e6750 + 4850, q6600 + 4850, x3350 + gtx 260, i7 920 + 9600gso). The older cpus ran around 3.5GHz, but the i7 920 is running around 3.95GHz these days, so overclocking clearly never bothered me, either. As long as you remain under 300w peak for your gpu, you won't have any psu-related issues.
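As a rough back-of-the-envelope check of that headroom (my own sketch; it pessimistically assumes the entire ~225W non-GPU load sits on the +12V rail):

```python
# Rough PSU headroom check for the HX 520 scenario described above.
rail_voltage = 12.0    # volts on the +12V rail
rail_current = 41.0    # amps, continuous rating at 50C (as quoted above)
non_gpu_load = 225.0   # watts, rough non-GPU draw; worst case: all of it on +12V

rail_capacity = rail_voltage * rail_current    # ~492W available on the +12V rail
gpu_headroom = rail_capacity - non_gpu_load    # ~267W left for the GPU in that worst case

print(f"+12V capacity: {rail_capacity:.0f}W, worst-case GPU headroom: {gpu_headroom:.0f}W")
# In practice some of the non-GPU load sits on the 3.3V/5V rails, so the real
# headroom is a bit better than this, which is presumably why a ~300W GPU peak
# has worked out fine on this unit.
```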
Of course, I did all this bragging on my hx 520 and then remembered that I switched it out a couple of months ago for an xfx 850w unit that was just sitting around taking up space in my closet... I think I'll pair it with my i7 rig again this winter when I make my BF upgrade.
Well, if someone has a 30 inch monitor, many games can't manage 60fps at 25x16 with everything on max.
Guru 3D OC review is in. About a 20% OC yielding a 20% performance gain, very solid. Even with this running 300mhz faster, the noise is STILL the same as a stock 7970. The noise is really a negative for me on the 7970; I can't wait for a higher-TDP version of the 680.
http://www.guru3d.com/article/geforce-gtx-680-overclock-guide/1
Thanks for the heads-up.. yummy! However, a higher-TDP version would probably mean more noise also (even though GTX 590, out of all the cards, was still relatively quiet).
In your very quote (the one that you quoted RussianSensation), he actually said "GTX680 can do 1200-1300mhz on air with dynamic voltages in Precision X in 15 seconds by moving the slider."

I see this being repeated again and again. Correct me if I am wrong, but the GTX 680 dynamically overclocks and overvolts, correct? If so, then all this talk of no voltage adjustment is nonsense because the card is doing the volting for you.
At least that's the way I read about it.
In your very quote (the one that you quoted RussianSensation), he actually said "GTX680 can do 1200-1300mhz on air with dynamic voltages in Precision X in 15 seconds by moving the slider."
To add to RussianSensation's sincere context, GTX 680 hardly consumes much more power when overclocked. From the same source that he quoted, see:
source: http://bjorn3d.com/read.php?cID=2199&pageID=11686
Pretty impressive compared against the other 2 cards above it, with one being wayyyyy above it for a single-GPU card. The authors never edited the chart after the article was published, so I guess it was indeed correct, even though it raised some questions.
That's funny you accuse me of providing an "NV marketing post" while you are the one missing the entire picture here - a more efficient, cheaper card is faster. You can discount the review I linked if you don't like its results. That review is not the only review which shows that in certain games the 680 leads by 20-30%. Furthermore, their card reached 1300mhz on reference cooling, and so did Xbitlabs' card. That's at least 2 samples that have achieved >1300mhz overclocks. Many reviewers were able to achieve 1180-1250mhz as well. Regardless of how "cherry-picked" it could have been, I haven't seen any HD7970 accomplish 1300mhz on air. In fact, 2 out of 3 Lightning 7970s couldn't even get to 1250mhz on air.

That's funny, because hwbot.org says the average air overclock for a GTX 680 is 1175: http://hwbot.org/hardware/videocard/geforce_gtx_680/ . Looks like your lame attempt at cherry-picking benchmarks failed again. Also, you failed to address my comment that the 7970 is clocked at 925MHz whereas the GTX 680 starts at 1006MHz. Therefore, when the 7970 overclocks to a proven average of 1202MHz, it's a 30% overclock, whereas the GTX 680 only hits 1175MHz, or 17%.
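For anyone who wants to check those percentages, here's the arithmetic (clock figures are the ones quoted in the post above; the script itself is just my illustration):

```python
# Quick check of the overclock-headroom percentages quoted above.
hd7970_base, hd7970_oc = 925, 1202    # MHz: stock clock vs. reported average air overclock
gtx680_base, gtx680_oc = 1006, 1175   # MHz: stock clock vs. hwbot average air overclock

def oc_percent(base: int, oc: int) -> float:
    """Return the overclock as a percentage gain over the base clock."""
    return (oc / base - 1) * 100

print(f"HD 7970: +{oc_percent(hd7970_base, hd7970_oc):.0f}%")   # ~30%
print(f"GTX 680: +{oc_percent(gtx680_base, gtx680_oc):.0f}%")   # ~17%
```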
In fairness, if we are going to compare the best 7970 card vs. a reference 680, it makes sense to wait until after-market 680s arrive. There is little doubt that the HD7970 will be left in the dust completely once the Direct CU, Windforce 3X, Lightning, and Classified editions of those cards launch. The fact that we are even discussing a $600 Lightning card vs. a reference $500 card is in itself telling how great the 680 is.

It's ironic that you speak of "fairness" in all your biased posting. You're comparing more expensive AIB models just to bolster your "price/performance" argument, when in reality a reference 7970 has no problem hitting those clocks either: http://hardocp.com/article/2012/01/09/amd_radeon_hd_7970_overclocking_performance_review/ . But once again you cherry-pick benchmarks and lie instead of presenting the situation honestly and completely.
It's interesting how last generation the HD6970 was 90% as good as the GTX580 and you claimed it offered amazing bang for the buck at $370, yet now the HD7970 offers 90% of the performance of a 680 and it's OK that it's priced at $500? The 2nd fastest card should always cost less unless it offers some unique features worth paying a premium for.

Where did I ever claim that? Go post a quotation, except you won't find it because once again you are lying.
You say 3GB is a key advantage, but so far no benchmarks have shown this to be true.

Never said any of that either; here's what I said: "The 7970 base MSRP needs to come down to $500 to be competitive. Extras like 3GB of vRAM mean little to the vast majority of the market, and although the 7970 ends up being faster, you do have to work for it." So once again you are lying and committing libel since you can't properly rebut any of my arguments.
Since you already stated you won't support NV because of their business practices, thank you for finally admitting that you prob. won't buy NV products in the first place. If you prefer AMD cards for any reason, there is nothing wrong with that. Plenty of posters buy NV cards for Folding@Home for example. However, by more or less stating that you won't support NV as a company due to their business practices, don't expect most posters to view your opinion as objective when it comes to videocard recommendations.

I've actually owned twice as many NV cards in my life as ATI/AMD. I don't support underhanded tactics that harm consumers, but it seems you do as long as they support nvidia. It's clear to me that you're posting on some agenda or a personal vendetta, not as a contributing member of the forum. In this one post you:
The 2 statements appear to be a contradiction. You think Tessellation is a marketing gimmick since it's not widely or effectively used in games. However, when 1 company actually pushes for this graphics-demanding feature to be present in games, you find it a "cheating tactic". Also, for better or worse, Tessellation is a PC-specific feature PC gamers are enjoying vs. what are otherwise straight-up console ports with high-resolution texture packs. I would even go as far as to say DX11 will be remembered as the beginning of the Tessellation era.

I agree with you that the way Tessellation is utilized in Crysis 2 is not very efficient (i.e., concrete barriers, the ocean). However, the fact that the Tessellation coding is inefficient doesn't override the notion that it is because of NV that Tessellation as a feature has been elevated into the spotlight in the last 2 years. After all, AMD's Cayman architecture uses a 7th-generation Tessellation engine. AMD actually introduced tessellation in its basic form with TruForm on the Radeon 8500 in 2001. Unfortunately, we didn't see much use of Tessellation until NV put $$ behind it. If NV didn't work closely with developers, who knows how long it would have taken before we would have seen tessellation. For all we know, AMD might have been on their 12th-generation tessellation videocard and none of us would have known this feature even mattered.

The statements don't contradict each other in the least. Having tessellation power means nothing when they waste it tessellating an ocean beneath the level in order to exploit a game to win in reviews. Once again you're trying with all your might to put nvidia in a positive light instead of writing an honest opinion. I was playing games like STALKER: CoP on my 5870 and 5850s well before nvidia even had a tessellation-capable video card.
It appears to me your two main issues are the inefficiency of Tessellation coding and that one company is willing to spend more $ to implement features in which its products offer a performance advantage. That's a valid point, but AMD can do the same. At the same time, the HD7970 wins against the HD6970 in the very heavily tessellated games you are criticizing: Crysis 2, Batman: AC. If you aren't going to use next-generation features in new games, what's the point of spending $500 on a 7970 in the first place? The HD6970/GTX570 can play them perfectly fine if you lay off that particular feature.

Actually, the 6970 plays it fine with tessellation as long as it's scaled back in the drivers to run appropriately on its tessellation engine. That's something the GTX570 can't do, but I'm sure you'd never mention a feature AMD has done better. However, tons of tessellation doesn't save the mediocre yet poorly running graphics of Batman: AC. Luckily, the gameplay makes up for it.
If you are upset that NV pushes PC-centric features, then who else is going to do it? Nothing stops AMD from making games using Bullet physics, or using its enormous GPGPU compute advantage for shaders, etc. I think I'd rather see NV spend marketing $ on Tessellation than play console ports, even if my AMD card is much slower than the competitor's.

I find it a shame that you go through anything and everything to defend nvidia rather than present an accurate or fair argument. I'd like to see open development of games and technology that gives the best experience for the consumer for the least cost. You support proprietary technology and companies funding developers while universally harming gameplay, just to win a few more frames in a benchmark. That's idiotic.
Yeah, I'm thinking that GTX 580 and HD 7970 should switch places, lol... probably a "charto", like a "typo"! I guess nobody really brought it to their attention?!?
EDIT: Even though HD 7970 can be "caught" consuming 50W more than GTX 680 at times, I went ahead and posted a comment over there asking them if it's a typo or not.
Because GTX 580 pretty much always eats at least 50W more than GTX 680 in any demanding game.