I'm not buying the card for new games. I'm buying it to play a preset list of games. I don't use AA at all and am looking to drive a high res then downsample in a list of games where I know CrossFire works. I don't want to use CrossFire 290s for new games.
No, that part about the 970/Fury X was sarcasm.
The other part, about the unicorn memory-management drivers that were going to give the HD 7970 some kind of performance boost, is not.
I.e., I wouldn't put faith in AMD delivering on it.
You're better off buying this version of the Fury X (if it gets better, kudos) over 290(X) CFX, period. At least that's my opinion. I had 290X CFX for a good 2-3 hours before ripping it out of my computer. I was going to return the cards anyway, but the microstutter is still very much visible.
290 CFX:
1440p Ultra + 4xSSAA + CMAA == 60 FPS
GTX 680:
1440p Good + 0xAA == 60 FPS
And I chose to return to the 680, even though I knew I'd have the 290s for another 3-4 days, because it was smoother. Two people tested the CFX system, and both felt the 60 FPS reading was false. It felt more like 30-40 FPS.
I don't plan on playing any game that has bad support; I already said I'm a small segment of people. I am perfectly happy playing the games that work on the setup I buy. I don't have a lot of time to play games; GTA 5 alone will kill me, and for The Witcher 3 I'll turn off GameWorks, I don't care. Those two games alone will probably be the only two I play this year. Really, every title I'd want to play supports CrossFire well, so I don't care, and this isn't about me anyway. Not sure why people want me to spend more for a card I have zero intention of using in new games, where I'd need to worry about single vs. dual or about CrossFire support, etc.

@railven
What games did you test to conclude 680 SLI was smoother than R290 CF?
I have that setup. It's smooth in every game I've played. But I don't play any GameWorks titles, except for Witcher 3, which works fine with a few tweaks (tessellation override, temporalAA off).
I'm not buying the card for new games. I'm buying it to play a preset list of games. I don't use AA at all and am looking to drive a high res then downsample in a list of games where I know CrossFire works. I don't want to use CrossFire 290s for new games.
I'll be playing:
BioShock Infinite
Alien: Isolation
and a few others that I know work in CrossFire well.
I don't really game the way you guys do, so yeah, I'm weird, don't mind me. I'm playing in a far more casual setting, I only want eye candy, and the 290s will give me enough power. The Fury X is only worth looking at if there are significant improvements to 4K performance, and even then the benefit of Fury over the 290s is, for me, minimal. Good thing I have a 4770K, I guess. And that my roommate cranks the AC nonstop.
I know it's premature, but...
If Fury puts up this performance in reviews with only 3.5 GB of RAM and 387 GB/s of bandwidth, what will it do with the full 4 GB and 512 GB/s of bandwidth?
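(For context, and this is my arithmetic, not the review's: Fiji's headline 512 GB/s is the 4096-bit HBM interface ÷ 8 bits per byte × 1 Gbps effective per pin at 500 MHz DDR = 512 GB/s. A measured 387 GB/s would therefore mean the stacks delivering only about 75% of spec.)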
CrossFire test, with more details and game tests:
http://www.hardware.fr/focus/111/crossfire-radeon-r9-fury-x-fiji-vs-gm200-round-2.html
The GPU driver can and does do memory management. AMD has already said they are going to change their memory management specifically to deal with 4GB, and that it would be transparent to the game. They are going to manage more intelligently what gets stored in dedicated memory. Yes, I am suggesting they could dump the lesser-used data to system RAM as the 4GB capacity is approached.

Why would you use a virtual disk when system RAM is still many, many times faster than SSDs? Have you ever heard of a RAM disk? I'm not suggesting they are using a RAM disk, BTW. Also, why would you dump to a virtual disk if the data is already on disk in the game install? The GPU can already read and write to system memory, even if it has to go through the OS. The OS shows you how much system memory it can use for graphics-related tasks if it needs to. As this memory scheme is already in use, the idea that 16GB is necessary is clearly false.

Back to sontin's observations: the new management scheme might be too aggressive or too slow (overhead), and that's slowing the memory subsystem way down.
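To make that concrete, here is a toy sketch of the kind of least-recently-used spill policy such a scheme could use. This is purely my illustration of the concept, not anything AMD has published; the type names and budget numbers are made up.

```cpp
#include <cstdint>
#include <list>
#include <unordered_map>
#include <utility>

// Toy residency policy: keep hot resources in VRAM, "spill" the
// least-recently-used ones to system RAM as the budget is approached.
struct ResidencyManager {
    uint64_t budget;     // VRAM budget in bytes (e.g. 4GB)
    uint64_t used = 0;
    std::list<int> lru;  // resource ids, most recently used at front
    std::unordered_map<int, std::pair<std::list<int>::iterator, uint64_t>> map;

    explicit ResidencyManager(uint64_t b) : budget(b) {}

    void touch(int id, uint64_t size) {
        auto it = map.find(id);
        if (it != map.end()) {            // already resident: just mark hot
            lru.erase(it->second.first);
        } else {                          // new resource: make room first
            used += size;
            while (used > budget && !lru.empty()) {
                int victim = lru.back();  // coldest resource
                lru.pop_back();
                used -= map[victim].second; // "evict" it to system RAM
                map.erase(victim);
            }
        }
        lru.push_front(id);
        map[id] = { lru.begin(), size };
    }
};

int main() {
    ResidencyManager vram(4ull << 30); // 4GB budget, like Fiji
    vram.touch(1, 3ull << 30);         // 3GB of render targets
    vram.touch(2, 2ull << 30);         // pushes past 4GB: resource 1 spills
}
```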
Direct3D drivers are free to implement the driver managed textures capability, indicated by D3DCAPS2_CANMANAGERESOURCE, which allows the driver to handle the resource management instead of the runtime. For the (rare) driver that implements this feature, the exact behavior of the driver's resource manager can vary widely, and you should contact the driver's vendor for details on how this works for their implementation.
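For anyone curious, that cap bit is easy to query yourself. A minimal D3D9 sketch, assuming the legacy DirectX headers and d3d9.lib:

```cpp
#include <d3d9.h>
#include <cstdio>

// Prints whether the installed driver reports the
// D3DCAPS2_CANMANAGERESOURCE capability described above.
int main() {
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;
    D3DCAPS9 caps = {};
    if (SUCCEEDED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        std::printf("driver-managed resources: %s\n",
                    (caps.Caps2 & D3DCAPS2_CANMANAGERESOURCE) ? "yes" : "no");
    d3d->Release();
    return 0;
}
```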
You know what the page file is and how it works, right? If you have 4GB of RAM and 4GB of page file space, you more or less have "8GB" of RAM. As RAM fills up, Windows will move pages that have not been touched lately to the page file (this, of course, is relative: if there is A LOT of pressure on RAM, you could be moving a page that was last touched 50 seconds ago into the page file). When the program comes back and asks for its data, it'll get loaded back into RAM, but at extreme cost.
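A quick way to see that "RAM + page file" commit limit on your own box; an illustrative sketch using the Win32 API:

```cpp
#include <windows.h>
#include <cstdio>

// Prints physical RAM vs. the commit limit (physical + page file).
// With 4GB of RAM and a 4GB page file, the second line reads ~8GB.
int main() {
    MEMORYSTATUSEX ms = {};
    ms.dwLength = sizeof(ms);
    if (!GlobalMemoryStatusEx(&ms)) return 1;
    std::printf("physical RAM : %llu MB\n",
                (unsigned long long)(ms.ullTotalPhys >> 20));
    std::printf("commit limit : %llu MB\n",
                (unsigned long long)(ms.ullTotalPageFile >> 20));
    return 0;
}
```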
RAM disks aren't in the realm of possibility for AMD, especially not for people gaming with 8GB of RAM or less while running things in the background. Very, very few people disable their page file (I do, and I know I do it at the risk of some programs refusing to run and the dumps from BSODs being lost).
Unless you have something saying that the AMD driver actually tells DX it'll do memory management, I'm going by the SDK.
https://msdn.microsoft.com/en-us/library/windows/desktop/ee418784(v=vs.85).aspx
AMD's CTO, Joe Macri:
"If you actually look at frame buffers and how efficient they are and how efficient the drivers are at managing capacities across the resolutions, you'll find that there's a lot that can be done."
WTH? That power reading is way different from the one in the other thread from Digital Storm.
Them pumps must be sucking down some juice!
EDIT: Makes me think, now, if the pump is part of the ref design, should it be counted toward power draw? I wonder how much power a pump will even require? TO THE GOOGLES!!!
The power numbers look so messed up that I doubt the credibility of the performance results.
You didn't read, or if you did, you didn't comprehend what I was saying. Yes, I know what a page file does: it's an extended memory pool for when a user runs out of RAM. We are talking about GPUs, though, which use system memory, not hard drive space, as their extended pool. Why? It's faster, and since the game data already exists on disk, why would you need to do anything special to access it? As I said, you can see this in DXDIAG and in the advanced display properties. As this is already a thing, it stands to reason AMD would be using, perhaps in a novel way, the thing that already exists.
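Those pools DXDIAG shows are queryable directly, for what it's worth. A minimal DXGI sketch (my illustration, assuming dxgi.lib is linked):

```cpp
#include <dxgi.h>
#include <cstdio>

// Lists each adapter's dedicated VRAM and the shared system memory
// pool the OS lets the GPU spill into -- the numbers DXDIAG reports.
int main() {
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return 1;
    IDXGIAdapter* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC desc = {};
        adapter->GetDesc(&desc);
        std::wprintf(L"%ls\n  dedicated VRAM: %llu MB\n  shared system memory: %llu MB\n",
                     desc.Description,
                     (unsigned long long)(desc.DedicatedVideoMemory >> 20),
                     (unsigned long long)(desc.SharedSystemMemory >> 20));
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```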
As for the RAM disk: again, read what I wrote. I said very specifically that I was not suggesting they are using a RAM disk. It was only mentioned to highlight just how much faster RAM is than hard disks or SSDs.
Sure, double down on the SDK, when AMD is already on record as moving to a new memory management scheme for Fury X.
http://arstechnica.co.uk/informatio...hbm-why-amds-high-bandwidth-memory-matters/2/
What is messed up about them?
The Fury X number more or less lines up with a doubling of the non-idle part of the power plus the idle power usage, which is roughly what you would expect.
980 Ti sees a much smaller jump in power when going SLI, but again that is to be expected given the poor performance scaling, which would obviously indicate that the GPUs aren't being properly fed in the first place (and would thus use less power).
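Spelling out the rule of thumb being applied here, with made-up numbers (not the review's):

```cpp
#include <cstdio>

// Dual-GPU load power ~= system idle + 2 x (single-card load delta),
// assuming both cards are kept fully fed. All wattages are hypothetical.
int main() {
    const double idle        = 75.0;   // whole-system idle (W), hypothetical
    const double single_load = 350.0;  // load with one card (W), hypothetical
    const double delta       = single_load - idle;        // 275 W per card
    std::printf("expected dual-GPU load: %.0f W\n", idle + 2.0 * delta); // 625 W
    return 0;
}
```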
Well, not really. They are losing a lot because of boost: a card that normally boosts to 1200MHz is limited to 1000MHz in SLI. Looking at their single-card results, they don't seem to have good airflow.
This is, though, an advantage for Fury. This could be exactly what a person at home might experience. If I were running a 980 Ti, though, I would surely move my power/temp slider up so it wouldn't throttle at all. Then you would have two cards at 1200MHz instead of two at 1000MHz. That is a whopping 20% lost.
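For the record, the 20% figure depends on which direction you count from. A quick back-of-envelope, assuming performance scales roughly with clock (which it only approximately does):

```cpp
#include <cstdio>

// 1200MHz vs. 1000MHz: the unthrottled config clocks 20% higher,
// while the throttled cards give up about 17% of their peak.
int main() {
    const double throttled = 1000.0; // MHz, observed in SLI
    const double boosted   = 1200.0; // MHz, typical single-card boost
    std::printf("unthrottled advantage: +%.0f%%\n",
                (boosted / throttled - 1.0) * 100.0);  // +20%
    std::printf("throttled deficit:     -%.0f%%\n",
                (1.0 - throttled / boosted) * 100.0);  // -17%
    return 0;
}
```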
We had a guy running GTX 970 SLI; the cards were Leadtek dual-fan custom coolers. It's an open-bench setup, optimal for air-cooled GPUs since there's no warm ambient air recycling: cool air enters the fans, and hot air is exhausted away at the rear. A closed-case setup (most users) will generally be worse for air coolers than open-bench setups (most reviews).
The problem with such an air-cooled SLI setup is that the back of the bottom card radiates heat, so the air intake on the top GPU is taking in warmer air than the bottom GPU. Just visualize it in your mind and you'll understand why it throttles at auto settings.
Air-cooled multi-GPU will require higher fan speeds than auto to prevent throttling; that's basically what the hw.fr review finds. On auto, it will throttle down to 1GHz and "lose" a lot of performance. Except by NV's definition it's actually operating as normal, since it isn't falling below the 1GHz base clock.
Well, not really. They are losing a lot because of boost: a card that normally boosts to 1200MHz is limited to 1000MHz in SLI. Looking at their single-card results, they don't seem to have good airflow.
This is, though, an advantage for Fury. This could be exactly what a person at home might experience. If I were running a 980 Ti, though, I would surely move my power/temp slider up so it wouldn't throttle at all. Then you would have two cards at 1200MHz instead of two at 1000MHz. That is a whopping 20% lost.
See, with Maxwell, if your temps approach 80°C, your boost clocks will suffer. Looking at the single-card results, the boost is already restricted; no way two cards are going to stay cool. The options are many, but they require interaction: turn up the power/temp slider (though you may end up running at temps over 80°C), turn up your fans, or better ventilate the case.
Fury X is water-cooled, and you don't have to worry about these things. But obviously there is more going on here than CF scaling. So the other reviews showing Fury X CF vs. Titan X SLI are probably the same: the Titan X not boosting because of temps.
Aftermarket 980 Tis would fare much better, by at least 20%, and that's not even considering the factory overclock on most custom models. The results here are from two 980 Tis running at 1000MHz, which is severely crippled. People with custom models will not be running at 1000MHz. Neither will people who know how to overclock their GPUs, people with great airflow, or people who adjust their temp or fan targets.
The Fury X's water cooler is contributing to the great results here. Those 980 Tis are running at 1000MHz, just saying.
I want to be clear:
No doubt about it, Fury X is amazing when it comes to temps. If you have enough room for two, it is a great solution.
But just saying, most people building SLI rigs are more knowledgeable. They know about the power/temp slider, they know about fan profiles and custom cards. Many of them have their chips on liquid.
Fury X CF looks great, though.
You have not produced actual technical data saying they're doing the management... which is what I'm asking someone to provide.
As for RAM: you do realize that accessing system RAM from the GPU is an order of magnitude slower than accessing data in VRAM itself, right? There's a lot more latency there. Moreover, a RAM disk would not work very well on low-memory systems, which is what I was getting at. If you remember Vista, it used to actually store all the contents of VRAM in system RAM (under the game's process). The problem becomes that you're going to be eating up some large chunk of system RAM just to compensate for VRAM (and more or less ruining any gains from HBM by slowing the whole system down).
How is AMD determining when the data will be needed again? They can't know, so they'd be causing page faults.
Hardware.fr created their own "uber" mode for GK110 cards, where they set the fan to 85% and the temp target to 95°C. They did that to stop throttling, and it allowed the cards to run the full boost clocks they hit when cold. It also made the cards as hot and loud as the reference 290X. I guess that was fine, considering the competition. With a cool and quiet Fury X, though, I don't think that's acceptable.