Originally posted by: Extelleron
Originally posted by: Regs
Throughout the whole review I got the impression that Derek was trying to say, "Well, at least it's not a complete flop."
Brand loyalty is much stronger for a GPU part than it is for a CPU part, though. You can label brand loyalty "fanboi" all you want, but people are going to buy what they feel most comfortable with.
If they want to ignore the signs that this card has obvious drawbacks and needs a revision to show its worth in the near future, then I almost feel sorry for them for wasting 400 dollars on games that won't be out until later this year. DX10, I agree with many others, is for the 65nm parts coming from both Nvidia and hopefully ATi later this year.
However, if you want better performance today than your X800 or X1900 gives, and want to stay loyal to ATi, then I see no reason not to buy the X2900. Just don't ask me to recommend it to you.
It really depends on the drivers for me. If drivers can improve performance by at least 15-20%, which I think they will be able to, then the HD 2900XT is an excellent value at $400. If things really don't change, the 2900XT is still, for the most part, a decent deal. And if you intend to keep your card for a year or more, I would recommend the 2900XT over the 8800GTS.
Originally posted by: Regs
Originally posted by: apoppin
is it a *feeling*?
Originally posted by: MadBoris
I'm starting to think some of the great specs are being crippled by some bottleneck internally. A 512-bit memory bus and 320 shaders are great and all, but... something is holding this thing back, and it isn't just drivers.
Part of the bottleneck was explained by Derek in the conclusion:
And here's what AMD did wrong:
First, they refuse to call a spade a spade: this part was absolutely delayed, and it works better to admit this rather than making excuses. Forcing MSAA resolve to run on the shader hardware is less than desirable and degrades both pixel throughput and shader horsepower as opposed to implementing dedicated resolve hardware in the render back ends. Not being able to follow through with high end hardware will hurt in more than just lost margins. The thirst for wattage that the R600 displays is not what we'd like to see from an architecture that is supposed to be about efficiency. Finally, attempting to extract high instruction-level parallelism using a VLIW design when something much simpler could exploit the huge amount of thread-level parallelism inherent in graphics was not the right move.
Assuming what Derek said was accurate, it's a clear case of ATi having too many "good ideas" at once: a design too complicated and ahead of its time. The card could have spent another year or two in development.
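For anyone wondering what "forcing MSAA resolve to run on the shader hardware" actually means in practice, here is a rough sketch in plain C++ (names and the 4x sample count are made up, purely to illustrate the idea): the resolve step is just averaging every pixel's sub-samples, and on R600 that averaging work runs on the shader ALUs, competing with the game's own shaders, instead of being handled by dedicated resolve logic in the render back ends.
[code]
// Rough illustration of an MSAA resolve done "in software" on the shader
// core instead of in dedicated render back end hardware. Plain C++ and
// made-up names; the 4x sample count below is just an example.
#include <cstdio>
#include <vector>

struct Color { float r, g, b, a; };

// A multisampled render target: 'samples' sub-samples stored per pixel.
struct MsaaSurface {
    int width, height, samples;
    std::vector<Color> data;  // width * height * samples entries
    const Color& at(int x, int y, int s) const {
        return data[(y * width + x) * samples + s];
    }
};

// The resolve: average each pixel's sub-samples into one final color.
// On R600 this loop runs on the shader ALUs, so it competes with the
// game's own pixel shaders for throughput and bandwidth.
std::vector<Color> resolve(const MsaaSurface& src) {
    std::vector<Color> out(src.width * src.height);
    for (int y = 0; y < src.height; ++y) {
        for (int x = 0; x < src.width; ++x) {
            Color acc{0.0f, 0.0f, 0.0f, 0.0f};
            for (int s = 0; s < src.samples; ++s) {  // extra reads + math per pixel
                const Color& c = src.at(x, y, s);
                acc.r += c.r; acc.g += c.g; acc.b += c.b; acc.a += c.a;
            }
            const float inv = 1.0f / src.samples;
            out[y * src.width + x] = { acc.r * inv, acc.g * inv,
                                       acc.b * inv, acc.a * inv };
        }
    }
    return out;
}

int main() {
    // Tiny 2x2 surface with 4x MSAA, every sub-sample solid red.
    MsaaSurface surf{2, 2, 4, std::vector<Color>(2 * 2 * 4, Color{1.0f, 0.0f, 0.0f, 1.0f})};
    std::vector<Color> resolved = resolve(surf);
    std::printf("resolved %zu pixels, first pixel red = %.1f\n",
                resolved.size(), resolved[0].r);
}
[/code]
It's a trivial loop, but at 1920x1200 with 4xAA that is over nine million sub-sample reads plus the averaging math every frame, done with general shader resources, which would line up with the AA hit the reviews are showing.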
Originally posted by: apoppin
Originally posted by: Regs
Originally posted by: apoppin
is it a *feeling*?
Originally posted by: MadBoris
I'm starting to think some of the great specs are being crippled by some bottleneck internally. A 512-bit memory bus and 320 shaders are great and all, but... something is holding this thing back, and it isn't just drivers.
Part of the bottleneck was explained by Derek in the conclusion:
And here's what AMD did wrong:
First, they refuse to call a spade a spade: this part was absolutely delayed, and it works better to admit this rather than making excuses. Forcing MSAA resolve to run on the shader hardware is less than desirable and degrades both pixel throughput and shader horsepower as opposed to implementing dedicated resolve hardware in the render back ends. Not being able to follow through with high end hardware will hurt in more than just lost margins. The thirst for wattage that the R600 displays is not what we'd like to see from an architecture that is supposed to be about efficiency. Finally, attempting to extract high instruction-level parallelism using a VLIW design when something much simpler could exploit the huge amount of thread-level parallelism inherent in graphics was not the right move.
Assuming what Derek said was accurate, it's a clear case of ATi having too many "good ideas" at once: a design too complicated and ahead of its time. The card could have spent another year or two in development.
that 'VLIW design' is evidently 'intentional' ... something that appears to continue throughout AMD's future designs
... so pretty hard to call it a "flaw" .. or even a 'bottleneck' ... just yet
the reviewer doesn't call it a 'bottleneck'
and that "thirst for wattage" is easily explained by being forced to run at a higher clock --by nvidia's g80
NOR does it *imply* the far-fetched conclusion you are drawing from his comments
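To unpack the VLIW point a bit (my own toy model, numbers purely illustrative): each of R600's 64 shader blocks is 5 ALUs wide, so the compiler has to find 5 independent operations in the *same* thread every cycle to keep a block full. If the shader only exposes, say, 3 independent ops, 2 slots sit idle. A scalar design like G80's just issues one op per thread and leans on the thousands of threads in flight instead. Quick sketch of what that utilization question looks like:
[code]
// Toy model of VLIW slot utilization, purely illustrative numbers.
// A 5-wide unit needs 5 independent ops from the SAME thread each issue
// cycle; whatever the compiler can't pack is a wasted slot.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    const int vliw_width = 5;  // R600-style 5-wide shader block
    // Hypothetical shader: independent ops available per issue cycle
    // (a vec3 multiply-add exposes ~3, a lone scalar op just 1, etc.)
    std::vector<int> independent_ops = {3, 5, 2, 4, 1, 5, 3};

    int filled = 0, total = 0;
    for (int ops : independent_ops) {
        filled += std::min(ops, vliw_width);  // slots the compiler can fill
        total  += vliw_width;                 // slots that exist
    }
    std::printf("VLIW slot utilization: %d/%d = %.0f%%\n",
                filled, total, 100.0 * filled / total);
    // A scalar design issues one op per thread instead, so it depends on
    // having enough threads in flight rather than per-thread ILP.
    return 0;
}
[/code]
how close the real shader compiler gets to filling those slots is exactly the kind of thing driver updates can move ... which is part of why i don't think we've seen what the 320 SPs can actually do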
Now ATi made the same mistake: they thought their card could handle it, and it can't, and now the AA performance is pretty much dreadful
Originally posted by: Stoneburner
so, architecturally what is the reason the R600 is not performing as well as one would expect?
Originally posted by: Wreckage
Originally posted by: Stoneburner
so, architecturally what is the reason the R600 is not performing as well as one would expect?
Well, if the R600 had come out last year, before the G80, it probably would not have been considered too bad a card.
Originally posted by: apoppin
Now ATi made the same mistake: they thought their card could handle it, and it can't, and now the AA performance is pretty much dreadful
no it isn't
where do you get "aa perf is dreadful"?
Forcing MSAA resolve to run on the shader hardware is less than desirable and degrades both pixel throughput and shader horsepower as opposed to implementing dedicated resolve hardware in the render back ends.
Originally posted by: Stoneburner
but there seems to be some gap between the performance you'd expect from its specifications and its actual performance. Is there one thing that's holding it back? Is it the failure of the anti-aliasing hardware? Or is it the decision to include only 16 texture units?
So while the AA performance isn't "dreadful," it isn't great either, especially considering how much memory bandwidth the 2900 has.
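Rough paper math, going from the specs quoted in the reviews (from memory, so treat the exact figures as approximate): the 2900XT's 512-bit bus with GDDR3 around 828MHz works out to roughly 106GB/s, versus about 86GB/s for the 8800GTX on its 384-bit bus. Texturing, though, is only 16 units at 742MHz, about 11.9 GTexels/s, against roughly 18.4 GTexels/s on the GTX (32 address units at 575MHz). So the card has noticeably more bandwidth than the GTX but less texture throughput, and its AA resolve runs on the shaders on top of that, which would go a long way toward explaining why the gap between the paper specs and the real-world numbers shows up most once AA is on.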
Originally posted by: apoppin
try loading the release drivers that came with the GTX ... right now ... then test performance ... i bet the HD wins over the GTX and certainly the GTS
Originally posted by: apoppin
read the rest of the reviews
i am seeing variability in performance due to drivers
Yes, Matt2, it's called *immature drivers*
they make a HUGE difference in performance
try loading the release drivers that came with the GTX ... right now ... then test performance ... i bet the HD 'wins' - much more - over the GTX and certainly the GTS
:Q
if nvidia can improve drivers, so can AMD ... and i think AMD will execute faster and better.
can you say "new" HW?
Originally posted by: Nightmare225
Originally posted by: apoppin
try loading the release drivers that came with the GTX ... right now ... then test performance ... i bet the HD wins over the GTX and certainly the GTS
Judging by what Extelleron (spelling?) posted in another topic, nowhere close. Don't get your hopes up
then do some research and you will be enlightened
I don't see how drivers are causing the card to have ZERO performance increase when overclocking the GPU by 115MHz and the memory by ~400MHz.
Especially since it was overclocked with the OC utility provided to reviewers by AMD.
Originally posted by: PhatoseAlpha
Even if there were huge improvements, would it be reasonable to expect the same from the HD2X00 line? IIRC, the 8800 was a new architecture for nVidia, so you'd expect fairly immature drivers. The HD2X00, though, is a second generation of the X1800 etc. architecture, so you'd really expect the drivers to be a bit more mature from the get-go. I'd really expect better stability out the door, but less room for optimization, since at least some of that would've been done last generation.
Maybe Vista will work by then.
Originally posted by: DaveSimmons
It looks like the hack to get AA working hurts performance badly.
Numbers for the Unreal 3 engine (Rainbow 6, no AA) and Oblivion without AA were both good. But given the power draw (19 watts above a GTX) and the poor AA speed, I think I'd wait for the 2950 refresh.
Luckily I'm waiting until the end of the year for my next card anyway, so I'll hopefully have 2950 and 8900 to pick from. nvidia Vista drivers might even work by then
IMO, this was the most insightful point from Anand's review, and unfortunately this bottleneck isn't something that can be corrected with better drivers. I'm sure efficiency can be improved, but this won't mask the fact that the shaders are also calculating AA. I would bet AMD is waiting to release the XTX until this AA problem is fixed and the full potential of this architecture can be realized.
Originally posted by: Nightmare225
It's that:
Forcing MSAA resolve to run on the shader hardware is less than desirable and degrades both pixel throughput and shader horsepower as opposed to implementing dedicated resolve hardware in the render back ends.
Maybe Vista will work by then.
Originally posted by: GundamSonicZeroX
and 8900 to pick from. nvidia Vista drivers might even work by then
Originally posted by: Regs
Far-fetched, maybe. I can't help wanting to help out AMD a little, since I am a loyal AMD customer.
But if you want the blunt truth, they did too much, too late, with too little.