G71 to sample next week


RichUK

Lifer
Feb 14, 2005
10,341
678
126
Originally posted by: apoppin
Originally posted by: RichUK
Originally posted by: apoppin



Nvidia's G71 still not in OEMs', distributors' hands

ATI has been shipping its R580 - the Radeon X1900 XTX - to them for some time. More than one source confirmed that ATI is preparing for a massive hard launch of its new line, as it will be able to ship the cards when it announces them. In fact, the cards are being sent to shops and OEMs as we speak and they are in production, so everything looks hunky dory for the Canadians right now.

We asked the same sources - and further afield - about Nvidia's cards and so far no one could confirm having seen a G71, not even a sample, which suggests that Nvidia is lagging somewhat behind ATI's agenda. There are still three more weeks till the end of January so the situation might change, but it's looking unlikely now.

Nevertheless, Nvidia did surprise us with the Quad SLI implementation it just announced in cooperation with Dell. That, indeed, was a well-kept secret that Nvidia managed to keep from its key partners, so we were not the only ones surprised by it. Nvidia keeps its cards close to its chest these days.
:Q

Ohh dear

Maybe nVidia are purposely waiting to release soon after, so they can tweak clocks for better competition.

i really doubt it . . . we already know ATi's clocks. . . as usual, nVidia will have their outrageously priced "Ultra" waiting in the wings [with 1GB vRAM] to demolish the x1900xtx . . . and of course ATi has their own "PE" edition [they dropped the PE naming finally].

where does it end? . . . bankruptcy for the ultra high end user.

If there is going to be an Ultra next-gen card, then I am going to buy it. I've been holding out on an NF4 mobo (PCI-e, compared to my AGP mobo) for some time now, and the only reason I would upgrade the mobo is for an updated GFX card (it would have to be a worthwhile performance increase, however).

I've now pretty much missed the G70/R520 frenzy in the hope of something much more powerful than my current 6800 Ultra. I presume I will need some serious monies to purchase a BFG G71 Ultra, but owning one would be SHWEET. Hopefully it will not be a 3-slot cooling solution LOL
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: keysplayr2003
Originally posted by: 5150Joker
Originally posted by: Acanthus
Originally posted by: 5150Joker
Originally posted by: Acanthus
Originally posted by: Ackmed
Originally posted by: munky


Anyways, did everyone miss the part in the OP where it says Nv might paper launch the card before you can buy it? Can I expect the Nv trolls to continue their anti-paper launch campaign if that happens?

Don't forget the certain people who put down ATi cards because they were "old tech". Now NV has "old tech", but it's ok.

really? what features are Nv missing again?


non-angle dependent AF, HDR + AA to name two.

ATi's AF implementation sucks compared to NV's... and HDR+AA works on 6800 series and higher.


Wow you're dumb.

I don't know if HDR+AA works on 6800's, but it seems to work on 7800's if a site is able to test it no? Bottom graph on page shows percentage increase with 512MB memory using HDR+AA for 7800 and 1800XT

I just looked at the graph, and what does it show? HDR+AA works in HL2 because the devs put a lot of hard work into creating HDR using pixel shaders, so it works on any dx9 hardware and lets you have AA as well. It even works on x800 and 9800 cards. Farcry uses FP blending HDR, and the graph only shows the score for the x1800, which further confirms that no NV card can do FP blended HDR with AA.
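
[For the curious, this is roughly how an engine finds out whether that combination is even possible. Below is a minimal Direct3D 9 sketch (assuming the DX9 SDK headers and d3d9.lib; untested here) that asks whether 4x multisampling is supported on the FP16 surface format that FP-blended HDR renders into. On GeForce 6/7 hardware the check is expected to fail, while the X1000 series reports support, which matches what munky describes.]

// Sketch: ask Direct3D 9 whether 4x MSAA is available on an FP16 render target.
// Assumes the DirectX 9 SDK (windows.h, d3d9.h, d3d9.lib); error handling trimmed.
#include <windows.h>
#include <d3d9.h>
#include <cstdio>

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    DWORD quality = 0;
    HRESULT hr = d3d->CheckDeviceMultiSampleType(
        D3DADAPTER_DEFAULT,
        D3DDEVTYPE_HAL,
        D3DFMT_A16B16G16R16F,      // FP16 surface, the format FP-blended HDR renders into
        TRUE,                      // windowed
        D3DMULTISAMPLE_4_SAMPLES,  // 4x MSAA
        &quality);

    std::printf("4x MSAA on an FP16 render target: %s\n",
                SUCCEEDED(hr) ? "supported" : "not supported");

    d3d->Release();
    return 0;
}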
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Originally posted by: munky
Originally posted by: keysplayr2003
Originally posted by: 5150Joker
Originally posted by: Acanthus
Originally posted by: 5150Joker
Originally posted by: Acanthus
Originally posted by: Ackmed
Originally posted by: munky


Anyways, did everyone miss the part in the OP where it says Nv might paper launch the card before you can buy it? Can I expect the Nv trolls to continue their anti-paper launch campaign if that happens?

Don't forget the certain people who put down ATi cards because they were "old tech". Now NV has "old tech", but it's ok.

really? what features are Nv missing again?


non-angle dependent AF, HDR + AA to name two.

ATi's AF implementation sucks compared to NV's... and HDR+AA works on 6800 series and higher.


Wow you're dumb.

I don't know if HDR+AA works on 6800's, but it seems to work on 7800's if a site is able to test it no? Bottom graph on page shows percentage increase with 512MB memory using HDR+AA for 7800 and 1800XT

I just looked at the graph, and what does it show? HDR+AA works in HL2 because the devs put a lot of hard work into creating HDR using pixel shaders, so it works on any dx9 hardware and lets you have AA as well. It even works on x800 and 9800 cards. Farcry uses FP blending HDR, and the graph only shows the score for the x1800, which further confirms that no NV card can do FP blended HDR with AA.
Thanks for posting and saving me the time. You would think he would have at least read the damn chart before he posted that link.

 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
AFAIK, the technical difference is that Far Cry HDR uses FP16 blends and buffers. ATI's X1000 series is the only one that can MSAA FP buffers, so it's the only one that can apply AA to Far Cry's HDR implementation (the "OpenEXR HDR" so often touted). I guess AoE3 uses the same method, as HW.fr doesn't show any NV #s. HL2 uses a different method that allows for AA on all (DX9+) cards. HL2's method was described in articles at Bit-Tech, Ars, and other sites--search back to Lost Coast's launch.

Also AFAIK, "HDR" has become an umbrella term for bloom, tone mapping, and other stuff. HDR formats, like OpenEXR's FP16 and beyond, simply allow for a larger color range, which translates into less detail lost when outputting to your computer monitor. Tone-mapping is the last step that downsamples/translates HDR formats (e.g., FP16, or 16-bit floating point per channel) to typical framebuffers (FX8, or 8-bit integer per channel). Bloom isn't necessarily HDR, it just means an effect where very bright light temporarily "blooms" or saturates the area adjacent to its source.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
I don't care what you pimp BFG I was just noting there aren't any HDR+AA games officially yet.
Neither were there any official SM 3.0 games when you ran those Far Cry 1.2 benchmarks.

BTW- have you bought that Asus fanless 7800GT yet?
I put in the order but they ran out of stock just a day before, so there may be a bit of a delay.

In any case I've got a large list of benchmarks lined up and I'll post them when I get the card.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
So is FP HDR floating point HDR?
Yes, it uses floating point blending in a floating point framebuffer. To date only the R5xx can handle AA through an FP buffer.

And what makes one better than the other?
Supposedly the blending accuracy is better with FP because there are fewer rounding errors than with using pixel shaders. Of course a lot of it boils down to artistic talent, so things aren't actually that cut and dried.

I actually don't like FP HDR because implementations of it so far seem to run really slow compared to pixel shader implementations like HL2.
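
[To see why floating-point blending loses less than 8-bit integer blending, here is a toy comparison. The numbers and the clamping model are simplified assumptions for illustration, not how any real ROP or shader actually works.]

// Toy comparison of the blending point above: accumulating several light contributions
// in an 8-bit integer buffer versus a floating-point buffer.
#include <algorithm>
#include <cstdio>

int main()
{
    const float lights[] = { 0.6f, 0.8f, 1.5f };  // made-up contributions in "scene" units

    int   int8Accum = 0;     // 8-bit integer target: every add is rounded and clamped to [0,255]
    float fpAccum   = 0.0f;  // FP target: the sum keeps full precision until tone mapping

    for (float l : lights) {
        int8Accum = std::min(255, int8Accum + static_cast<int>(l * 255.0f + 0.5f));
        fpAccum  += l;
    }

    std::printf("int8 buffer: %d/255 (clamped, over-bright detail gone)\n", int8Accum);
    std::printf("fp buffer  : %.2f  (still distinguishes bright from very bright)\n", fpAccum);
    return 0;
}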

The filtering algorithms still aren't identical because if you use Quality AF on both ATi and nV cards, there is noticeably more shimmering with nVidia cards.
I think this is up for debate because some people report shimmering with ATi cards. It could be more of a game issue than the cards themselves.

don't know if HDR+AA works on 6800's, but it seems to work on 7800's if a site is able to test it no?
Repeat after me: FP HDR does not work with AA on any current nVidia card; this is a hardware limitation. Other forms of non-FP HDR (including the looser definitions like bloom) will work.
 

Lord Banshee

Golden Member
Sep 8, 2004
1,495
0
0
Thanks for cleaning that up.

I was searching Google and found a doc with info for NVIDIA developers, written by NVIDIA, and it said something about FP HDR. It stated something like it causes some sort of infinite-loop error in the hardware.
 

Vesper8

Senior member
Apr 29, 2005
253
0
0
So with the G71 coming out this soon... any chance the G80 will come out sooner than expected?

Like... when can I expect the G80 to ship? I'm really torn between buying the G71 now and then selling it back so I can get a DX10 G80 board... or just waiting for the G80.

Maybe my semester would do better if I waited... hehe... it's a hard choice to make... I'm due for a GPU upgrade bigtime too (6600GT atm).
 

A5

Diamond Member
Jun 9, 2000
4,902
5
81
Pure speculation follows:
Based on their normal product cycles, I'd expect G80 in the early Fall (late September/early October) at the earliest, but certainly in time for next holiday season.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Rumors are for a summer G80 launch, surprisingly enough. It's probably safe to expect both R600 and G80 slightly before Vista, so around 3Q06.
 

xtknight

Elite Member
Oct 15, 2004
12,974
0
71
Originally posted by: Pete
You could start by reading the introductory G70 and R520 p/reviews, the ones which go into the marketing material and describe the various features using ATI's and NV's respective PR pics. The more articles you read, the more likely one of those poor, overworked and underpaid reviewers will spell it out to you intelligibly. My first stops would be Beyond3D, BeHardware, TechReport, Anandtech, XbitLabs, Digit-Life (all .com), and Hexus.net. 3DCenter.org is also one of the more detail-oriented sites, but it's German; English translations only follow later, and only sometimes. TR and AT may explain things at a higher level and therefore be easier to understand (TR's Scott [aka Damage] in particular is a wicked good writer, eh); the rest may not be written as approachably or edited as well, but they really get into the nitty gritty, especially with custom programs to explore a GPU's details. These programs are often community-supplied, in which case the related forum threads may be equally enlightening.

Basically, the 3D pipeline is like this: CPU ---> VS (vertex shaders) ---> PS (pixel [actually fragment] shaders) + TMU (texture units) --> ROPs (render output processors: among other things, apply AA and write scene fragments as screen pixels to the framebuffer which in turn gets piped to your monitor). Obviously I'm glossing over a lot of detail, much of which you can glean from carefully reading the above reviews.

ALU = arithmetic logic unit; basically it does stuff to your data like ADD, MULtiply, MultiplyandADD, MultiplyandSUBtract, etc, etc. A pixel shader can be comprised of more than one ALU, and each so-called ALU can do more than one operation (though usually not at once, thus "ADD/MUL/MADD"). Googling can get you some nice diagrams of R520 and G70 capabilities, with all that crazy detail. I'm not even sure if those diagrams are entirely accurate, but they jibe with what I've read. You can see how R520 has separated its TMUs (texture units), while G70 has its TMU integrated with--or at least sharing transistors with--its first pixel shader unit.

Don't ask me to explain more, I'll just get you more mixed up. I'm not entirely clear on some distinctions, either (like if an ALU can technically only perform one or two functions, or if the term applies to a group of execution units, as those diagrams imply of G70's "FP32 Shader Unit 1"). B3D also has some past articles that go into detail with AA and AF, which you might be interested in.

Now, enough of this 3D voodoo--I've got a toaster oven to research. Why can't I find one that consistently toasts without burning and doesn't melt a month after the warranty expires? WHY, DAMMIT, WHY?!


*brain explodes into /dev/null* :shocked:

Ahh... thanks, I understand it better now, I think. So, in one sentence: by itself, the number of pipelines on a graphics card now says nothing?

I heard Intel made some good toaster ovens.
[insert fake audience laugh here]

Oh, in terms of ASIC design then NVIDIA and ATI are left to their own devices while AMD and Intel must follow 'x86' or what?
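
[As a side note on the quoted CPU -> VS -> PS+TMU -> ROP description above, here is a deliberately tiny, software-only sketch of that flow. Every name in it is made up for illustration, the rasterization step is skipped, and nothing maps to real hardware units.]

// Minimal sketch: "CPU" submits vertices -> vertex shader -> pixel (fragment) shader
// (texture fetch omitted) -> ROP writes the result to a framebuffer.
#include <cstdio>
#include <vector>

struct Vertex   { float x, y; };
struct Fragment { float x, y; float r, g, b; };

Vertex vertexShader(const Vertex& v)            // VS: transform positions
{
    return { v.x * 0.5f, v.y * 0.5f };
}

Fragment pixelShader(const Vertex& v)           // PS + TMU: compute a color
{
    return { v.x, v.y, 1.0f, 0.5f, 0.0f };
}

void rop(std::vector<Fragment>& framebuffer, const Fragment& f)   // ROP: blend/write (AA omitted)
{
    framebuffer.push_back(f);
}

int main()
{
    std::vector<Vertex> draw = { {0, 0}, {1, 0}, {0, 1} };   // the "CPU" submits a triangle
    std::vector<Fragment> framebuffer;
    for (const Vertex& v : draw)
        rop(framebuffer, pixelShader(vertexShader(v)));      // the pipeline, one stage at a time
    std::printf("wrote %zu fragments\n", framebuffer.size());
    return 0;
}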
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Short answer, yeah, "X pipes" may be too simplistic to describe current GPUs, given that both are expanding the capabilities of each pipe in different ways. R580 vs. G71 performance should be very revealing in terms of who blew their transistor budget more wisely.

I just found Hiroshige Goto's home base. He has some sweet diagrams of R520 and G70 (search for those terms) that might help you more than my typed assault. Plus, the colors are soothing.

Now you jumped over my head with "ASIC design." Can you assemble me a clue from the vast vastness of /dev/null? Are you asking me if ATI and NV are compatible in a way comparable to how AMD and Intel are x86 compatible? If so, no clue. I guess they both follow D3D specs, but can expose unusual bits via OGL?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Are you asking me if ATI and NV are compatible in a way comparable to how AMD and Intel are x86 compatible?

I can answer that one: not even remotely close. Neither board would know anything about the other's instructions at all; they HAVE to have an abstraction layer for the driver to translate calls. In essence it is akin to how current x86 parts translate raw x86 code into uOps to work with, but the driver does this for them.
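
[A conceptual sketch of that abstraction layer, with every name invented for illustration: the game calls one hardware-neutral interface, and each vendor's driver translates the same call into its own command format.]

// The application speaks a common API; each vendor's driver turns those calls
// into its own hardware commands. Names here are hypothetical.
#include <cstdio>

struct GpuDriver {                                   // what the API exposes to the game
    virtual void drawTriangles(int count) = 0;
    virtual ~GpuDriver() {}
};

struct VendorADriver : GpuDriver {
    void drawTriangles(int count) override
    { std::printf("[vendor A] emitting A-style commands for %d triangles\n", count); }
};

struct VendorBDriver : GpuDriver {
    void drawTriangles(int count) override
    { std::printf("[vendor B] emitting B-style commands for %d triangles\n", count); }
};

void renderFrame(GpuDriver& driver) { driver.drawTriangles(1000); }  // the game never knows which GPU is underneath

int main()
{
    VendorADriver a;
    VendorBDriver b;
    renderFrame(a);   // same API call...
    renderFrame(b);   // ...translated differently by each driver
    return 0;
}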

On a general note: if I'm nV and I have parts ready to ship, I'm waiting right now for ATi to launch. Let's see what they bring to the table and then target our clock rates accordingly. If we get smoked, we can afford to take a reasonable amount of time to adjust for it; mindshare has been overwhelmingly in our court for some time now. If we are pretty much even, then we can push out the door what we have and call it good. If we smoke them, then we can reduce the clock speeds on our parts to increase yields, leaving us headroom for anything they might throw our way. Right now, nVidia can afford to do this. ATi, OTOH, needs to come out strong with the R580. In terms of end effect, the R520 was only slightly better than the NV30.
 

xtknight

Elite Member
Oct 15, 2004
12,974
0
71
OK. What does 'x86' specify in the first place? Does that mean it must at least have all the capabilities of Intel's 8086? Just like a list of instructions like 'mov', 'jmp', 'imul', etc. Being 'D3D9' compliant just means having the GPU-equivalent list for graphics instructions? Or must the graphics card also have a pipeline with these specified stages? Does the driver have a big job in terms of instruction translations to do or is there fairly little overhead at all? And what about the gains from doing multi-threaded drivers? Does the driver still actually consume like 10-20% of the CPU or was it because the vertex shaders were offloaded to the second CPU??

Originally posted by: BenSkywalker
In terms of end effect, the R520 was only slightly better than the NV30.

You mean NV40?
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: toyota
Originally posted by: munky
Originally posted by: keysplayr2003


I don't know if HDR+AA works on 6800's, but it seems to work on 7800's if a site is able to test it no? Bottom graph on page shows percentage increase with 512MB memory using HDR+AA for 7800 and 1800XT

I just looked at the graph, and what does it show? HDR+AA works in HL2 because the devs put a lot of hard work into creating HDR using pixel shaders, so it works on any dx9 hardware and lets you have AA as well. It even works on x800 and 9800 cards. Farcry uses FP blending HDR, and the graph only shows the score for the x1800, which further confirms that no NV card can do FP blended HDR with AA.
Thanks for posting and saving me the time. You would think he would have at least read the damn chart before he posted that link.

Umm, toyota? I was actually asking the very question that munky gave the answer for. I didn't say it works, I asked if it did. So don't be so f'ing nasty next time.

 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Thanks Ben, makes sense (both parts of your post).

xt, Ben probably means NV30 in that ATI really needs a kick-ass, no-excuses, on-time part to restore their rep to sterling.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Originally posted by: xtknight
OK. What does 'x86' specify in the first place? Does that mean it must at least have all the capabilities of Intel's 8086? Just like a list of instructions like 'mov', 'jmp', 'imul', etc. Being 'D3D9' compliant just means having the GPU-equivalent list for graphics instructions? Or must the graphics card also have a pipeline with these specified stages? Does the driver have a big job in terms of instruction translations to do or is there fairly little overhead at all? And what about the gains from doing multi-threaded drivers? Does the driver still actually consume like 10-20% of the CPU or was it because the vertex shaders were offloaded to the second CPU??

Originally posted by: BenSkywalker
In terms of end effect, the R520 was only slightly better than the NV30.

You mean NV40?

x86 denotes the instruction set/instruction architecture that the CPU runs (as seen from the outside world). Internally, the CPU breaks the x86 instructions up into micro instructions that are localised for different CPUs (i.e. K5, K6, K7, P2, P4 all break x86 instructions down differently from each other). The x86 instruction set is generally taken to mean the instruction set that original Pentiums could run (things like MMX + SSE etc can be viewed as "optional" extras), but this changes with time (for example, in the pre-Pentium era the x86 architecture described the capabilities of a 486, prior to that a 386, etc).

How a particular CPU goes about implementing the x86 architecture physically (FSB etc) is irrelevant so long as the x86 architecture itself works correctly on the chip.

The same is true of DirectX and GPUs: internally they are all very different from each other, but externally they must run the DirectX architecture they were designed to support.
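
[Picking up the x86 half of the analogy, here is a trivial function with a comment showing one set of x86 instructions a compiler might emit for it; the exact assembly is only a plausible example. Different CPUs accept exactly these same instructions and then decode them into their own, different micro-ops, just as described above.]

// The contract is the instruction set, not the internals: a compiler targets x86
// instructions, and how a K7 or a P4 breaks them into micro-ops is invisible to the code.
int scaleAndOffset(int x)
{
    return x * 5 + 3;
    // One x86 encoding a compiler might choose (32-bit cdecl, result in eax):
    //   mov eax, [esp+4]        ; load x
    //   lea eax, [eax + eax*4]  ; x * 5
    //   add eax, 3              ; + 3
    //   ret
}

int main() { return scaleAndOffset(4) == 23 ? 0 : 1; }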
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Being 'D3D9' compliant just means having the GPU-equivalent list for graphics instructions?

No - feature support. As long as D3D hands you a certain equation and you come back with a certain result (with some wiggle room, depending), then you are all set. Making a part D3D9 compliant means that it must be able to compute the result if a D3D9 call is made for, say, a 2.0-level pixel shader. At the instruction level the part can do whatever it wants.

Or must the graphics card also have a pipeline with these specified stages?

You can actually get pretty wild in how your chip is laid out; if you look at something like the Kyro part from PVR, it isn't remotely close to how nVidia or ATi handle things (the latter two are actually very close to each other). As long as they output close to refrast, they are good.

Does the driver have a big job in terms of instruction translations to do or is there fairly little overhead at all?

The answer is: it depends. With processors having as much power as they do today, the relative overhead is quite small most of the time, but you can still see it rear its head on occasion. When you see bench charts and, say, all the nVidia cards are hitting ~79FPS from 800x600 up to 1280x1024 w/4xAA, you can tell it is processor limited (overwhelmingly). It may be the case that ATi parts are all hitting 91FPS through the same settings; they are also processor limited (most likely), but driver overhead is lower for ATi, leaving more processor time to handle game code.
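
[A rough sketch of that reasoning, with invented numbers rather than real benchmark results: if the frame rate barely moves as resolution and AA go up, the result is processor/driver limited rather than GPU limited.]

// If FPS is essentially flat across increasingly heavy GPU settings, the GPU isn't the bottleneck.
#include <cstdio>

int main()
{
    // Hypothetical FPS at 800x600, 1024x768, 1280x1024 4xAA for one card.
    const double fps[] = { 79.0, 79.2, 78.6 };
    const int    n     = 3;

    double lo = fps[0], hi = fps[0];
    for (int i = 1; i < n; ++i) {
        if (fps[i] < lo) lo = fps[i];
        if (fps[i] > hi) hi = fps[i];
    }

    // A spread within a few percent suggests heavier GPU settings aren't what limits the score.
    const double spread = (hi - lo) / hi;
    std::printf("spread %.1f%% -> %s\n", spread * 100.0,
                spread < 0.05 ? "CPU/driver limited" : "GPU limited somewhere");
    return 0;
}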

And what about the gains from doing multi-threaded drivers?

There are many different things to consider when looking at this: is the game SMP aware, and if so, how well was it done (Quake4 good, CoD2 bad); how many threads is the game pushing; etc. All of these will affect how much of a benefit you can get from multi-threaded drivers (of course you also have to consider how well the drivers are written to use multiple threads).

Does the driver still actually consume like 10-20% of the CPU or was it because the vertex shaders were offloaded to the second CPU??

It may be that the driver is consuming 10%-20% of the CPU time. Whether that is due to the computational intensity of translating the calls or simply to chewing time would take some effort to figure out, and even if you figured it out you would need to do it per application. Offloading vertex shaders wouldn't make a lot of sense to me off the top of my head, as the current parts all have dedicated hardware; having them sit idle while loading the processor wouldn't be the best idea. Now, if the game is calling for functionality that exceeds what the GPU is capable of, it is possible that vertex shader ops would be offloaded to the processor, as that is something x86 can actually handle at decent speeds (as opposed to pixel shaders or base filtering, which they choke horribly on).
 

tjpark1111

Senior member
Oct 5, 2005
287
0
0
Kinda getting off topic, but I got a question. I just learned of the eVGA step-up program, so does that mean I could just get any card from eVGA and trade it in for the G71 when it comes out? Seems too good to be true? Like a 6600GT at least, to run Windows for now, and then trade it in later? That is pretty sweet!! (Correct me if I'm wrong, which I probably am.)
 

tjpark1111

Senior member
Oct 5, 2005
287
0
0
Originally posted by: CP5670
This will probably be my next card. As long as it's released before March 6 I can use EVGA's step-up program to trade up for one.
Is that date the end of your warranty, or is there a specific length of time in which you can trade in your card for 'the latest and greatest'?
 

olternat

Member
Aug 28, 2004
114
0
0
Wait, I want to understand this thing with the eVGA step-up program too.
If I were to buy the G71 card coming soon (February maybe?) and then wait until late summer or fall for the G80, would I be able to "step up" and trade in the G71 card for the G80 card? It doesn't matter if I have to pay a difference, but would I be able to do this?
 

Lord Banshee

Golden Member
Sep 8, 2004
1,495
0
0
You can only do that within three months of buying the card.

And don't forget that you have to buy the step-up card at FULL MSRP (something like 100-200 higher than Newegg), and you only get to subtract what you paid for your previous card. I think the deal would be better if it allowed you to buy your step-up card somewhere else and/or gave you MSRP for your previous card.
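
[A worked example of that step-up math, using made-up prices: you pay the new card's full MSRP minus what you originally paid for the old card.]

// Step-up cost = new card's full MSRP - price originally paid for the old card.
#include <cstdio>

int main()
{
    const double newCardMsrp = 549.0;  // hypothetical G71 MSRP
    const double paidForOld  = 199.0;  // hypothetical price paid for the original card
    const double stepUpCost  = newCardMsrp - paidForOld;

    std::printf("Step-up cost: $%.2f\n", stepUpCost);
    return 0;
}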
 