Originally posted by: Cookie Monster
Firstly, what the heck are "madd4 shaders"? People who don't know these things shouldn't act like they do. You do realise the G80 uses scalar shaders which are MADD+MUL capable. The current prospect is that R600 is going to be vec4+scalar (like Xenos), or pure vec4, or even vec2+vec3. Although theoretically 64 vec4 shaders clocked at ~750MHz roughly match 128 scalar shaders clocked at 1350MHz, each layout has its advantages and disadvantages. A vec4 unit can do more work per cycle (4 ops compared to 1), but utilisation is far lower than with scalar units, because any instruction that doesn't fill all four lanes leaves ALUs sitting idle. This is why nVIDIA chose scalar shaders: you are guaranteeing near-100% utilisation, which vec4 shaders can only dream of matching (and pushing a vec4 core past 1GHz to compensate is going to be mission impossible). In real-world performance terms, 100% utilisation sounds more promising than lower utilisation with more work per cycle — the rough numbers in the sketch below show why.
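To put some illustrative numbers on that trade-off, here's a quick back-of-the-envelope sketch in Python. The unit counts and clocks are the rumoured/known figures from above; the 70% vec4 utilisation is purely an assumption picked for the example, not a measured number.

```python
# Back-of-the-envelope: rumoured R600 vec4 setup vs G80 scalar shaders.
# Unit counts and clocks come from the post; the utilisation figures
# are illustrative assumptions, not measurements.

def effective_gops(units, ops_per_clock, clock_ghz, utilisation):
    """Effective shader throughput in billions of ops per second."""
    return units * ops_per_clock * clock_ghz * utilisation

# Rumoured R600: 64 vec4 units @ ~750MHz, 4 ops/clock each.
# Vec4 loses throughput whenever code can't fill all 4 lanes.
r600 = effective_gops(64, 4, 0.75, utilisation=0.70)   # assumed 70%

# G80: 128 scalar units @ 1350MHz, 1 op/clock each (ignoring the extra
# MUL); near-perfect utilisation since every lane runs independently.
g80 = effective_gops(128, 1, 1.35, utilisation=1.00)

print(f"R600 (rumoured, 70% util): {r600:.1f} Gops/s")  # 134.4
print(f"G80 (100% util):           {g80:.1f} Gops/s")   # 172.8
```

Note that at 100% utilisation on both sides the vec4 design would actually win on raw ops (192 vs 172.8), which is exactly why the whole argument hinges on how often real shader code fills all four lanes.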
This is why ATi is rumoured to hate the G80 so much. The current drivers may be a mess, but the architecture itself is very impressive, and the bar nVIDIA has raised is VERY high. So right now, the R600 isn't going to demolish anything in that sense. Sites claiming ATi has 32 ROPs are just spreading BS, because a) more ROPs don't automatically translate into more performance and b) it would be a waste of transistor budget. The only upside is that it makes good marketing material for ATi PR. Not to mention 2GB of GDDR4? Unless you want AMD to lose money by selling these rumoured cards at affordable prices, I think they've made the whole thing up. Half of the listed specs were already floating around the internet via the INQ and word of mouth.
Seriously, don't get your hopes up, because it will sting once the real benchmarks hit. I'm thinking it's either 5~15% faster in some 3D apps while slower in others compared to the G80. AVIVO will be there, I bet, along with the "new" CrossFire. I think the transistor count is so high (720 million rumoured) because of the ring bus design (which also causes the chip to need far more pins, e.g. the R600 die shot showed 2240 pins), its unified shader architecture using vec4+scalar shaders, and the 512-bit memory interface (see the quick bandwidth sketch below for why that bus width matters).
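For scale, here's what a 512-bit bus buys you over the G80's 384-bit one. The 8800 GTX figures are its known specs; the R600 memory clock is just an example figure I've assumed, since nothing is confirmed.

```python
# Peak theoretical memory bandwidth = bus width (bytes) x effective data rate.
# GTX numbers are known specs; the R600 data rate is an assumed example.

def bandwidth_gb_s(bus_width_bits, effective_rate_mhz):
    """Peak bandwidth in GB/s for a bus at its effective (DDR) data rate."""
    return (bus_width_bits / 8) * effective_rate_mhz * 1e6 / 1e9

# Rumoured R600: 512-bit bus, assuming 1000MHz GDDR4 (2000MHz effective).
print(f"512-bit @ 2000MHz eff: {bandwidth_gb_s(512, 2000):.1f} GB/s")  # 128.0

# 8800 GTX: 384-bit bus, 900MHz GDDR3 (1800MHz effective).
print(f"384-bit @ 1800MHz eff: {bandwidth_gb_s(384, 1800):.1f} GB/s")  # 86.4
```

So even with GDDR3 at fairly modest clocks, a 512-bit interface comfortably clears the GTX's 86.4 GB/s — which is the whole point of spending all those pins on it.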
GDDR3? I guess GDDR4 isn't as readily available as ATi hoped. Still, hearing that their second revision works is great, and 800MHz is a pretty amazing clock. But weren't they aiming for 750MHz initially? 800MHz sounds a bit high for such a complex chip at 80nm. The G80, on the other hand, has much more headroom, with GTX cards managing an average of ~650MHz from cherry-picked cores (which means a ~1.5GHz shader clock — see the quick ratio check below).
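Quick sanity check on that shader-clock claim, assuming the G80's shader domain scales with the core at the stock 8800 GTX ratio (that linkage is how the cards behaved when overclocked, but treat it as an assumption here):

```python
# Stock 8800 GTX: 575MHz core / 1350MHz shader domain.
# Assumption: shader clock scales with core at this fixed ratio when OC'd.
core_stock, shader_stock = 575, 1350
ratio = shader_stock / core_stock        # ~2.35x

oc_core = 650                            # the cherry-picked average from above
print(f"{oc_core}MHz core -> ~{oc_core * ratio:.0f}MHz shaders")  # ~1526MHz
```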
Originally Posted by Morgoth the Dark Enemy from beyond3d
This is going bad... it's worse than the whole 512-bit stuff. It wasn't like this with the G80; it seems people expect ATi to make a card that would upgrade even starship Enterprise's systems AND wipe your ass at the same time.