G71 to sample next week


nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: Capt Caveman
Originally posted by: solofly
Originally posted by: BenSkywalker

When will Vista launch?

Latest rumors indicate much earlier than predicted. I thought I saw somewhere late third quarter to early fourth.

Microsoft launch something early? Has hell frozen over? Oh wait, I see a flying pig.

Hmmm... So that would put G80/R600 into Q4, provided they attempt to coordinate with Vista... As far as Vista being "early"... wasn't Longhorn originally supposed to launch sometime in 2003 or 2004? I don't think it's possible for Vista to be early no matter how you cut it.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: BouZouki
Originally posted by: keysplayr2003
Originally posted by: munky
I doubt a 16 "pipe" card with 48 pixel shaders at 700MHz will have much trouble keeping up with a 32 pipe card. What's more interesting is how the 32 pipe card will compensate for its fillrate disadvantage.

Why is that more interesting? You seem to be saying that a 32 pipe G71 will have no advantages over a 16 pipe "48 pixel shader" card. We don't even know if all 48 pixel shaders will be used at all times. It depends on the developers, I suppose, and the TWIMTBP campaign may try to see to it that all of those 48 pixel shaders are not always used. IMHO.


LOL, so you're saying Nvidia is going to convince game developers to make their games visually worse? Go Nvidia.

Is that what I said? Geez, because it doesn't look that way. Just the way you wanted to interpret it, I guess. But that is natural when somebody likes one company's product and hates the other's.
Games seem to look just fine with the number of shaders they use currently, don't they?
Anyway, I am not condoning TWIMTBP pressuring devs, but that's just the way it is. If you can change that, by all means please do.

 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: munky
Originally posted by: keysplayr2003
Originally posted by: munky
I doubt a 16 "pipe" card with 48 pixel shaders at 700MHz will have much trouble keeping up with a 32 pipe card. What's more interesting is how the 32 pipe card will compensate for its fillrate disadvantage.

Why is that more interesting? You seem to be saying that a 32 pipe G71 will have no advantages over a 16 pipe "48 pixel shader" card. We don't even know if all 48 pixel shaders will be used at all times. It depends on the developers, I suppose, and the TWIMTBP campaign may try to see to it that all of those 48 pixel shaders are not always used. IMHO.

The game only has pixel shader code - how the code gets executed by the individual pixel shaders is determined by the driver and the hardware. I suppose NV may try to convince the devs to use fewer shaders and more texture ops, but that would go against the trend in modern games, and I doubt the devs would agree to write Quake 4 using Quake 3 code.

The game? What game? What game has "only" pixel shader code?

 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: keysplayr2003
Originally posted by: munky
Originally posted by: keysplayr2003
Originally posted by: munky
I doubt a 16 "pipe" card with 48 pixel shaders at 700MHz will have much trouble keeping up with a 32 pipe card. What's more interesting is how the 32 pipe card will compensate for its fillrate disadvantage.

Why is that more interesting? You seem to be saying that a 32 pipe G71 will have no advantages over a 16 pipe "48 pixel shader" card. We don't even know if all 48 pixel shaders will be used at all times. It depends on the developers, I suppose, and the TWIMTBP campaign may try to see to it that all of those 48 pixel shaders are not always used. IMHO.

The game only has pixel shader code - how the code gets executed by the individual pixel shaders is determined by the driver and the hardware. I suppose NV may try to convince the devs to use fewer shaders and more texture ops, but that would go against the trend in modern games, and I doubt the devs would agree to write Quake 4 using Quake 3 code.

The game? What game? What game has "only" pixel shader code?

Any game that uses shaders. What I meant is that games only tell the hardware what instructions to execute, but have no control over low-level details like activating only half the pipes or disabling 2/3 of the pixel shader ALUs.
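
To put rough numbers on the pipes-versus-shaders question, here is a back-of-the-envelope Python sketch. Every figure in it is either a rumor quoted in this thread or a placeholder assumption (the G71 clock in particular is not known):

def gops(units, mhz):
    # Peak rate: units * clock, in billions of ops (or pixels) per second.
    return units * mhz / 1000.0

r580_shader_rate = gops(48, 700)  # 33.6 G shader ops/s (rumored 48 ALUs @ 700MHz)
g71_shader_rate  = gops(32, 700)  # 22.4 G shader ops/s (clock assumed equal)

r580_pixel_fill = gops(16, 700)   # 11.2 Gpix/s from 16 "pipes"
g71_pixel_fill  = gops(32, 700)   # 22.4 Gpix/s from 32 pipes

Whichever limit a game hits first decides which card looks faster: shader-heavy scenes would favor the 48-ALU part, raw-fill scenes the 32-pipe part.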
 

Steelski

Senior member
Feb 16, 2005
700
0
0
Here is something to consider: the X1600XT beats out a 6600GT consistently. One has 8 pipes (the GT) and the other has 4, with 3 shaders on each (XT). I know there is a lot flawed with what I am saying, but it is the closest comparison where one card has double the pipelines of the other yet loses out in the end. Think about it (they also have the same crippling 128-bit bus).
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: Steelski
Here is something to consider: the X1600XT beats out a 6600GT consistently. One has 8 pipes (the GT) and the other has 4, with 3 shaders on each (XT). I know there is a lot flawed with what I am saying, but it is the closest comparison where one card has double the pipelines of the other yet loses out in the end. Think about it (they also have the same crippling 128-bit bus).

Let's see, there are many advantages in favor of the RV530 in this case as well:

38% more memory bandwidth
18% higher core clock
The 6600 is still based on the older NV4x technology.

The margin will narrow once you compare against a GeForce 7 part at equal memory and core clocks.
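
For what it's worth, those two percentages fall straight out of the commonly published launch clocks, as a quick Python check shows (treat the clock figures as assumptions and verify before citing):

# X1600 XT: 590MHz core, 1380MHz effective memory; 6600 GT: 500MHz core, 1000MHz effective.
# Both have a 128-bit bus, so the bandwidth ratio is just the memory clock ratio.
x1600xt = {"core": 590, "mem_eff": 1380}
gt6600  = {"core": 500, "mem_eff": 1000}

bw_adv   = x1600xt["mem_eff"] / gt6600["mem_eff"] - 1  # 0.38
core_adv = x1600xt["core"] / gt6600["core"] - 1        # 0.18
print(f"{bw_adv:.0%} more bandwidth, {core_adv:.0%} higher core clock")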
 

RichUK

Lifer
Feb 14, 2005
10,341
678
126
If there is a 7900 Ultra, do you think that it will use the same heatsink as the current GTX 512? Or a lower profile single slot solution?
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: munky
Originally posted by: keysplayr2003
Originally posted by: munky
Originally posted by: keysplayr2003
Originally posted by: munky
I doubt a 16 "pipe" card with 48 pixel shaders at 700MHz will have much trouble keeping up with a 32 pipe card. What's more interesting is how the 32 pipe card will compensate for its fillrate disadvantage.

Why is that more interesting? You seem to be saying that a 32 pipe G71 will have no advantages over a 16 pipe "48 pixel shader" card. We don't even know if all 48 pixel shaders will be used at all times. It depends on the developers, I suppose, and the TWIMTBP campaign may try to see to it that all of those 48 pixel shaders are not always used. IMHO.

The game only has pixel shader code - how the code gets executed by the individual pixel shaders is determined by the driver and the hardware. I suppose NV may try to convince the devs to use fewer shaders and more texture ops, but that would go against the trend in modern games, and I doubt the devs would agree to write Quake 4 using Quake 3 code.

The game? What game? What game has "only" pixel shader code?

Any game that uses shaders. What I meant is that games only tell the hardware what instructions to execute, but have no control over low-level details like activating only half the pipes or disabling 2/3 of the pixel shader ALUs.

Understood. But that just brings me around to my other point: how do we know (we can't) that all 48 pixel shaders will be operating all of the time? Chances are, they will not. For all we know, ATI found out that 3DMark06/07, or whatever the next one is, relies 95% on pixel shader processing power, and went with 48 pixel shaders to totally dominate 3DMark with ridiculously phenomenal scores. Unless future games are programmed to utilize this hardware (GITG needs to throw more money at devs), those shaders might just be sitting there doing nothing and all that goodness is wasted. What I'm trying to say is: stop talking about it as if you know what is going to happen. None of us do, except the dudes who designed the damned things, and they are not always the most honest of marketing saints. This includes Red and Green.

 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: RichUK
If there is a 7900 Ultra, do you think that it will use the same heatsink as the current GTX 512? Or a lower profile single slot solution?

I would be willing to bet that whether the cooler is single or dual slot, it will look a bit different from the GTX 512 sink. My reasoning is simply that the newer cards pretty much always have a different-looking HSF. I imagine they do this to differentiate from each other more than out of necessity. That's all just IMO, so take it for what it's worth.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: keysplayr2003
Originally posted by: munky
Originally posted by: keysplayr2003
Originally posted by: munky
Originally posted by: keysplayr2003
Originally posted by: munky
I doubt a 16 "pipe" card with 48 pixel shaders at 700MHz will have much trouble keeping up with a 32 pipe card. What's more interesting is how the 32 pipe card will compensate for its fillrate disadvantage.

Why is that more interesting? You seem to be saying that a 32 pipe G71 will have no advantages over a 16 pipe "48 pixel shader" card. We don't even know if all 48 pixel shaders will be used at all times. It depends on the developers, I suppose, and the TWIMTBP campaign may try to see to it that all of those 48 pixel shaders are not always used. IMHO.

The game only has pixel shader code - how the code gets executed by the individual pixel shaders is determined by the driver and the hardware. I suppose NV may try to convince the devs to use fewer shaders and more texture ops, but that would go against the trend in modern games, and I doubt the devs would agree to write Quake 4 using Quake 3 code.

The game? What game? What game has "only" pixel shader code?

Any game that uses shaders. What I meant is that games only tell the hardware what instructions to execute, but have no control over low-level details like activating only half the pipes or disabling 2/3 of the pixel shader ALUs.

Understood. But that just brings me around to my other point: how do we know (we can't) that all 48 pixel shaders will be operating all of the time? Chances are, they will not. For all we know, ATI found out that 3DMark06/07, or whatever the next one is, relies 95% on pixel shader processing power, and went with 48 pixel shaders to totally dominate 3DMark with ridiculously phenomenal scores. Unless future games are programmed to utilize this hardware (GITG needs to throw more money at devs), those shaders might just be sitting there doing nothing and all that goodness is wasted. What I'm trying to say is: stop talking about it as if you know what is going to happen. None of us do, except the dudes who designed the damned things, and they are not always the most honest of marketing saints. This includes Red and Green.

Of course, nobody can predict the future. But it would be a really dumb oversight on ATI's part if they did not manage all 48 shaders efficiently. So, if a game had only one really simple shader instruction for each texel, then the simplistic solution would result in only 16 pixel shaders being used (1 for each pipe) and the rest just sitting idle. The ideal solution is to keep all 48 shaders busy and store the results from the remaining 32 to be used by the next batch of texels. Seeing how the R5xx GPUs are multithread-oriented and have a separate array of texture units, it should be possible in theory. OTOH, in this case the performance will probably be limited by the TMUs anyway, so we'll just have to wait and see how things turn out.
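
As a toy illustration of that batching point (a sketch only; real R5xx thread scheduling is far more involved than this):

import math

PIPES, SHADER_ALUS = 16, 48  # the rumored R580 layout discussed in this thread

def cycles_naive(pixels, instructions_per_pixel):
    # One pixel per pipe per cycle; the 32 extra ALUs sit idle.
    return math.ceil(pixels / PIPES) * instructions_per_pixel

def cycles_batched(pixels, instructions_per_pixel):
    # Ideal case: every shader ALU issues one instruction every cycle.
    return math.ceil(pixels * instructions_per_pixel / SHADER_ALUS)

print(cycles_naive(4800, 1))    # 300 cycles, with 2/3 of the ALUs wasted
print(cycles_batched(4800, 1))  # 100 cycles if the scheduler keeps all 48 busy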
 

Skriptures17

Member
Jan 4, 2006
106
0
0
Seriously, the big green giant and its enemy, the big RED, have got to stop. Another card? My god, I just got the 7800 GTX and now it's already a past card. I swear they haven't even made any games that utilize most of the features on the dang thing. I give up...
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Originally posted by: Skriptures17
Seriously, the big green giant and its enemy, the big RED, have got to stop. Another card? My god, I just got the 7800 GTX and now it's already a past card. I swear they haven't even made any games that utilize most of the features on the dang thing. I give up...

Games that use all the features of the latest cards - not checked (for the most part).
Games that push the latest cards to near breaking point - checked.

Even if the newest features aren't used, the cards will not be able to play all the upcoming games in all their glory. Thus, the release of refresh cards.
Besides, Nvidia and ATI know that uber-rich enthusiasts will always strive to have the latest and greatest tech.
So even if it's not super fast compared to the previous cards, it will sell to someone.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: RobertR1
Originally posted by: Creig
Originally posted by: Rollo
Originally posted by: RobertR1
If it's going to take Nvidia a 32-pipe card with high clock speeds and expensive 1.1ns mem to beat the R580, then I'd have to say there are some major inefficiencies in their architecture that they're trying to mask with pure horsepower.

I'll be curious to see the high-res, max AA/AF results in newer GPU-limited games to see which one is better.

You could turn that around and say, "ATI didn't release the X1800XT until they could get high enough clocks and faster RAM to beat the much, much lower clocked 256MB GTX and make up for the deficiencies of their re-tread 16 pipe design"

Your post is pointless and argumentative, the sort of thing the board needs less of, not more.


I see nothing "pointless and argumentative" about his post. He was simply commenting that if rumored specs hold true, ATI will be countering the 32 pipeline G71 with a 16 pipeline R580. Nvidia keeps adding pixel pipelines to obtain higher performance levels while ATI is somehow managing to make do with 16.


Yep. That's pretty much what I was saying. I think this will help determine whether adding traditional pipelines is still the answer for newer games or not. The R580/G71 will basically let us know which methodology is truly better. As a consumer, I just want the fastest card, but I am still curious how a 16-pipeline card will be able to keep up with a 32-pipeline card, especially when pipelines really seemed to matter in older games.

The 7800 GTX 512 has much more bandwidth, faster RAM and 8 more pipes, yet manages to lose to the X1800 XT OC in newer games such as FEAR and COD2, and even the somewhat older BF2. This is somewhat puzzling.


That's because of how efficient the AA is on the R520 (minimal impact). In terms of raw power, the 7800 GTX 512 is of course much faster. However, both have similar shader performance.

I guess only UT2007 will prove which card is indeed faster for future games.
 

rmed64

Senior member
Feb 4, 2005
237
0
0
Originally posted by: keysplayr2003
Originally posted by: munky
Originally posted by: keysplayr2003
Originally posted by: munky
Originally posted by: keysplayr2003
Originally posted by: munky
I doubt a 16 "pipe" card with 48 pixel shaders at 700MHz will have much trouble keeping up with a 32 pipe card. What's more interesting is how the 32 pipe card will compensate for its fillrate disadvantage.

Why is that more interesting? You seem to be saying that a 32 pipe G71 will have no advantages over a 16 pipe "48 pixel shader" card. We don't even know if all 48 pixel shaders will be used at all times. It depends on the developers, I suppose, and the TWIMTBP campaign may try to see to it that all of those 48 pixel shaders are not always used. IMHO.

The game only has pixel shader code - how the code gets executed by the individual pixel shaders is determined by the driver and the hardware. I suppose NV may try to convince the devs to use fewer shaders and more texture ops, but that would go against the trend in modern games, and I doubt the devs would agree to write Quake 4 using Quake 3 code.

The game? What game? What game has "only" pixel shader code?

Any game that uses shaders. What I meant is that games only tell the hardware what instructions to execute, but have no control over low-level details like activating only half the pipes or disabling 2/3 of the pixel shader ALUs.

Understood. But that just brings me around to my other point: how do we know (we can't) that all 48 pixel shaders will be operating all of the time? Chances are, they will not. For all we know, ATI found out that 3DMark06/07, or whatever the next one is, relies 95% on pixel shader processing power, and went with 48 pixel shaders to totally dominate 3DMark with ridiculously phenomenal scores. Unless future games are programmed to utilize this hardware (GITG needs to throw more money at devs), those shaders might just be sitting there doing nothing and all that goodness is wasted. What I'm trying to say is: stop talking about it as if you know what is going to happen. None of us do, except the dudes who designed the damned things, and they are not always the most honest of marketing saints. This includes Red and Green.


Meh, all I know is games are definitely using more and more pixel shader processing - FEAR, Call of Duty 2, a lot of the new games make heavy use of pixel shading. I'm usually an Nvidia kind of guy (I own a 6600GT right now), but I'm betting on the R580, with its massive pixel ops and better memory controller, to win in most games like FEAR. I'd put money on the R580 winning in FEAR.
 

maskingtape

Junior Member
Jan 5, 2006
6
0
0
You can't really have the best card; each one will have an area where it betters the other somewhere. So it's all down to preference. I think overall Nvidia will end up beating ATI in some games and ATI beating Nvidia in others. ATI will probably have better visual quality while Nvidia will have very good quality and raw power. ATI will probably have better quality video playback than Nvidia, etc. It's all down to the individual and what he wants.

I always opt for a card that will give me the best overall results from playing games, video editing, watching DVDs, etc. Since my X850 XT died I just bought an X1800 XT (fitted it a few hours ago) and, to be honest, it's a very good card that does just what I want. I think I will shy away from this batch of GFX cards and wait for the ones coming after Windows Vista is established, some time during December and next January. That's probably also when I will see the biggest performance gains, jumping from an R520 core to the next top-end card, be it ATI or Nvidia.

I don't think it's worth splashing out up to 500 pounds on a GFX card and then doing the same again 4 months later. Obviously if there is a massive leap in performance then I can see the benefit, but as someone said, there aren't many games out there that use the features of the current cards, and only 2 games I know of push the current cards to their limits. Although there are some awesome games coming out this year, we just have to wait and see if these new cards will live up to expectations.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: RobertR1
Originally posted by: Creig
Originally posted by: Rollo
Originally posted by: RobertR1
If it's going to take Nvidia a 32-pipe card with high clock speeds and expensive 1.1ns mem to beat the R580, then I'd have to say there are some major inefficiencies in their architecture that they're trying to mask with pure horsepower.

I'll be curious to see the high-res, max AA/AF results in newer GPU-limited games to see which one is better.

You could turn that around and say, "ATI didn't release the X1800XT until they could get high enough clocks and faster RAM to beat the much, much lower clocked 256MB GTX and make up for the deficiencies of their re-tread 16 pipe design"

Your post is pointless and argumentative, the sort of thing the board needs less of, not more.


I see nothing "pointless and argumentative" about his post. He was simply commenting that if rumored specs hold true, ATI will be countering the 32 pipeline G71 with a 16 pipeline R580. Nvidia keeps adding pixel pipelines to obtain higher performance levels while ATI is somehow managing to make do with 16.


Yep. That's pretty much what I was saying. I think this will help determine whether adding traditional pipelines is still the answer for newer games or not. The R580/G71 will basically let us know which methodology is truly better. As a consumer, I just want the fastest card, but I am still curious how a 16-pipeline card will be able to keep up with a 32-pipeline card, especially when pipelines really seemed to matter in older games.

The 7800 GTX 512 has much more bandwidth, faster RAM and 8 more pipes, yet manages to lose to the X1800 XT OC in newer games such as FEAR and COD2, and even the somewhat older BF2. This is somewhat puzzling.

The 7800 GTX 512 has about 13.34% more memory bandwidth compared to the standard X1800 XT; if you're talking about the X1800 XT OC, this advantage dwindles to 6.25%. I don't consider this a huge advantage in Nvidia's favor. Also, more memory bandwidth is the result of faster memory - no need to state basically the same thing twice.

On the other hand, the X1800 XT OC has a 27.3% higher core clock than the 7800 GTX 512, so it can output more pixels per clocked pipe, and its vertex shader power is 27.3% higher as well.

On the total number of pixels that can be worked on, Nvidia has about a 17.85% advantage over the X1800 XT OC.

Also, ATI traditionally keeps its AA/AF performance hit lower than Nvidia's, so at equivalent memory clock speeds I would expect ATI to win AA & AF benches.

In addition, F.E.A.R. seems to favor the ATI design much more; it's one of the only games where the ATI X1600 XT has a better than 50% performance advantage over the 6600 GT at 1024x768 with 4xAA + 8xAF.

If G71 is indeed going to use 1.9GHz GDDR3, it's because they are aware of their need for it, as they take larger hits when AA & AF are applied - at least until Nvidia decides to revamp its memory controller technology or enhance its AA & AF methods.
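
Those figures check out against the published clocks, assuming the "X1800 XT OC" here is the retail 700MHz core / 1600MHz effective memory part (an assumption on my part):

# Published specs (assumed): core MHz, pixel pipes, effective memory MHz.
gtx512  = {"core": 550, "pipes": 24, "mem": 1700}  # 7800 GTX 512
x1800xt = {"core": 625, "pipes": 16, "mem": 1500}  # X1800 XT
xt_oc   = {"core": 700, "pipes": 16, "mem": 1600}  # X1800 XT OC (assumed)

print(f"{gtx512['mem'] / x1800xt['mem'] - 1:.2%}")  # ~13.33% bandwidth edge vs stock XT
print(f"{gtx512['mem'] / xt_oc['mem'] - 1:.2%}")    # 6.25% vs the OC part
print(f"{xt_oc['core'] / gtx512['core'] - 1:.2%}")  # ~27.27% core clock advantage
pixel_ratio = (gtx512["core"] * gtx512["pipes"]) / (xt_oc["core"] * xt_oc["pipes"])
print(f"{pixel_ratio - 1:.2%}")                     # ~17.86% fill advantage for Nvidia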
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
AFAIK, die size matters more than core speed, because it seems ultimately IHVs price cards based on die size (assuming similar yields, yadda yadda). If you're fab-constrained, you probably want to crank out as many GPUs per wafer as possible, so a smaller process at a higher speed seems the way to go.
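
A crude illustration of that economics point (every number below is made up for the example):

import math

WAFER_DIAMETER_MM = 300

def gross_dies(die_area_mm2):
    # Naive area ratio; ignores edge loss, scribe lines, and yield entirely.
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return math.floor(wafer_area / die_area_mm2)

print(gross_dies(200))  # ~353 candidate dies from a hypothetical small die
print(gross_dies(330))  # ~214 from a hypothetical large die -> higher cost per chip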

As for keeping all 48 pixel shader units busy, a pretty simple way would be to just crank the resolution. Ben, I thought RV530 (and so R580) kept R300's basic pixel shader unit design, namely a full (ADD/MUL/MADD) + mini (ADD + specialty) ALU setup, as opposed to G70's full (ADD/MUL/MADD/tex + spcl) + full (ADD/MUL/MADD + spcl) setup, and that (so far) only Xenos went to 48 "simplified" shader units (no mini ALU)?

Actually, CoD2 may be chewing up TMUs rather than pixel shaders (which seem more FEAR's domain), in which case G71, with more TMUs, may do better.

Ironically (given where GPUs seem headed), G7x's pixel shader units seem like "unified" versions of R5x0's split pixel shaders and texture units. In a sense, they may be more flexible in that, in limiting cases, you can do more of one operation at the expense of the other (discounting any effect R5x0's memory controller has on utilization efficiency). Ah, tech irony--good times.
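
Putting rough per-clock numbers on those ALU layouts (unit counts per the description above; clocks are the shipping 550MHz GTX 512 and a rumored 650MHz R580, both assumptions):

# R580: 48 units x (1 full ALU + 1 mini ALU) -> 48 full MADDs plus 48 mini ADDs per clock
# G70:  24 units x (2 full ALUs, the first shared with the TMU) -> up to 48 MADDs when not texturing
r580_madds_per_clock = 48 * 1
g70_madds_per_clock  = 24 * 2

print(r580_madds_per_clock * 650 / 1000)  # 31.2 GMADD/s at a rumored 650MHz
print(g70_madds_per_clock * 550 / 1000)   # 26.4 GMADD/s at 550MHz, minus whatever texturing steals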
 

xtknight

Elite Member
Oct 15, 2004
12,974
0
71
Originally posted by: Pete
AFAIK, die size matters more than core speed, because it seems ultimately IHVs price cards based on die size (assuming similar yields, yadda yadda). If you're fab-constrained, you probably want to crank out as many GPUs per wafer as possible, so a smaller process at a higher speed seems the way to go.

As for keeping all 48 pixel shader units busy, a pretty simple way would be to just crank the resolution. Ben, I thought RV530 (and so R580) kept R300's basic pixel shader unit design, namely a full (ADD/MUL/MADD) + mini (ADD + specialty) ALU setup, as opposed to G70's full (ADD/MUL/MADD/tex + spcl) + full (ADD/MUL/MADD + spcl) setup, and that (so far) only Xenos went to 48 "simplified" shader units (no mini ALU)?

Actually, CoD2 may be chewing up TMUs rather than pixel shaders (which seem more FEAR's domain), in which case G71, with more TMUs, may do better.

Ironically (given where GPUs seem headed), G7x's pixel shader units seem like "unified" versions of R5x0's split pixel shaders and texture units. In a sense, they may be more flexible in that, in limiting cases, you can do more of one operation at the expense of the other (discounting any effect R5x0's memory controller has on utilization efficiency). Ah, tech irony--good times.

:Q This is all too confusing. Is there an article on this stuff somewhere? It's easy to find stuff on the graphics pipeline itself but not all the TMUs, ALUs, god knows what else.
 

TheRyuu

Diamond Member
Dec 3, 2005
5,479
14
81
Originally posted by: xtknight
Originally posted by: Pete
AFAIK, die size matters more than core speed, because it seems ultimately IHVs price cards based on die size (assuming similar yields, yadda yadda). If you're fab-constrained, you probably want to crank out as many GPUs per wafer as possible, so a smaller process at a higher speed seems the way to go.

As for keeping all 48 pixel shader units busy, a pretty simple way would be to just crank the resolution. Ben, I thought RV530 (and so R580) kept R300's basic pixel shader unit design, namely a full (ADD/MUL/MADD) + mini (ADD + specialty) ALU setup, as opposed to G70's full (ADD/MUL/MADD/tex + spcl) + full (ADD/MUL/MADD + spcl) setup, and that (so far) only Xenos went to 48 "simplified" shader units (no mini ALU)?

Actually, CoD2 may be chewing up TMUs rather than pixel shaders (which seem more FEAR's domain), in which case G71, with more TMUs, may do better.

Ironically (given where GPUs seem headed), G7x's pixel shader units seem like "unified" versions of R5x0's split pixel shaders and texture units. In a sense, they may be more flexible in that, in limiting cases, you can do more of one operation at the expense of the other (discounting any effect R5x0's memory controller has on utilization efficiency). Ah, tech irony--good times.

:Q This is all too confusing. Is there an article on this stuff somewhere? It's easy to find stuff on the graphics pipeline itself but not all the TMUs, ALUs, god knows what else.

:Q !!!! I got some of it, if not most of it... I think...
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
You could start by reading the introductory G70 and R520 p/reviews, the ones which go into the marketing material and describe the various features using ATI's and NV's respective PR pics. The more articles you read, the more likely one of those poor, overworked and underpaid reviewers will spell it out to you intelligibly. My first stops would be Beyond3D, BeHardware, TechReport, Anandtech, XbitLabs, Digit-Life (all .com), and Hexus.net. 3DCenter.org is also one of the more detail-oriented sites, but it's German; English translations only follow later, and only sometimes. TR and AT may explain things at a higher level and therefore be easier to understand (TR's Scott [aka Damage] in particular is a wicked good writer, eh); the rest may not be written as approachably or edited as well, but they really get into the nitty gritty, especially with custom programs to explore a GPU's details. These programs are often community-supplied, in which case the related forum threads may be equally enlightening.

Basically, the 3D pipeline is like this: CPU ---> VS (vertex shaders) ---> PS (pixel [actually fragment] shaders) + TMU (texture units) --> ROPs (render output processors: among other things, apply AA and write scene fragments as screen pixels to the framebuffer which in turn gets piped to your monitor). Obviously I'm glossing over a lot of detail, much of which you can glean from carefully reading the above reviews.
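
Here is a heavily simplified Python mock-up of that flow; the names and the one-op "shader" are illustrative only, not how any real driver works:

def vertex_shader(vertex):
    # VS: transform the vertex; a pass-through here for simplicity.
    return vertex

def texture_fetch(u, v):
    # TMU: sample a texture; here a constant mid-gray texel.
    return (128, 128, 128)

def pixel_shader(fragment):
    # PS: one toy op - fetch a texel and halve it (a single MUL, in effect).
    texel = texture_fetch(*fragment["uv"])
    return tuple(channel // 2 for channel in texel)

def rop(framebuffer, fragment, color):
    # ROP: write the shaded fragment to the framebuffer (no AA or blending here).
    framebuffer[(fragment["x"], fragment["y"])] = color

framebuffer = {}
for frag in [{"x": 0, "y": 0, "uv": (0.5, 0.5)}]:
    frag = vertex_shader(frag)  # stand-in for the VS stage
    rop(framebuffer, frag, pixel_shader(frag))

print(framebuffer)  # {(0, 0): (64, 64, 64)}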

ALU = arithmetic logic unit; basically, it does stuff to your data like ADD, MULtiply, Multiply-and-ADD (MADD), Multiply-and-SUBtract, etc. A pixel shader can be composed of more than one ALU, and each so-called ALU can do more than one operation (though usually not at once, thus "ADD/MUL/MADD"). Googling can get you some nice diagrams of R520 and G70 capabilities, with all that crazy detail. I'm not even sure if those diagrams are entirely accurate, but they jibe with what I've read. You can see how R520 has separated its TMUs (texture units), while G70 has its TMU integrated with - or at least sharing transistors with - its first pixel shader unit.

Don't ask me to explain more, I'll just get you more mixed up. I'm not entirely clear on some distinctions, either (like whether an ALU can technically only perform one or two functions, or if the term applies to a group of execution units, as those diagrams imply of G70's "FP32 Shader Unit 1"). B3D also has some past articles that go into detail on AA and AF, which you might be interested in.
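
For a concrete taste of those ops: MADD folds a multiply and an add into one issue slot, which is why it is the workhorse of shading math. The lighting line below is a hypothetical example:

def madd(a, b, c):
    # One multiply-add op: d = a * b + c, a single ALU issue slot.
    return a * b + c

# e.g. one color channel of a toy lighting equation: lit = albedo * light + ambient
print(madd(0.8, 0.6, 0.1))  # ~0.58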

Now, enough of this 3D voodoo--I've got a toaster oven to research. Why can't I find one that consistently toasts without burning and doesn't melt a month after the warranty expires? WHY, DAMMIT, WHY?!

 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Pete, that was an awesome post. Thanks for the leads.

P.S. Black & Decker Pete, Black & Decker
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Heh, hope it helps.

(And thanks for the advice, keys. I'm leaning Cuisinart or Kenmore just b/c they look better. What? We're in the video forum, right? Looks matter. )
 

Ackmed

Diamond Member
Oct 1, 2003
8,498
560
126
Originally posted by: munky


Anyways, did everyone miss the part in the OP where it says Nv might paper launch the card before you can buy it? Can I expect the Nv trolls to continue their anti-paper launch campaign if that happens?

Don't forget the certain people who put down ATI cards because they were "old tech". Now NV has "old tech", but it's ok.
 