POLL: nVidia's Silence on GT300


RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
If history repeats itself, NV will obviously try to downplay all of ATI's advantages until it has the same performance/features or better:

- DX11 is not very important since there is little support for it right now, and it is not the main selling point of new hardware
- When the GT200 series (especially in SLI) can already give you 120+ frames in games, what is the use of 125-150-160 frames?
- ATI cards won't provide a better gaming experience since they still don't support PhysX or Stereo 3D Vision
- Gamers don't just want a $300 graphics card for graphics, but also for computational tasks such as video work

And all this can be found here:
http://www.xbitlabs.com/news/v...of_Graphics_Cards.html
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: zagood
Other: I pick the last answer to see the results.

I'm sick of speculation. No offense - I'm probably just bitter that I'm going to be priced out of the next gen.

There is a button called "see results" that lets you see them without ruining the poll itself.

I am at once surprised, unsurprised, and saddened to see that 11 people voted "Im an idiot and pick the first answer to see the results"
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
Originally posted by: OCguy
Originally posted by: ronnn
I voted other, as I think they may launch before Christmas, but with poor availability. But if Charlie is right, ouch.

If Charlie is right, they aren't launching before Christmas, even with poor availability.

Yes, ouch!!
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
Originally posted by: Idontcare
Really need a third option there that involves efforts to move existing inventory at current ASPs before cannibalizing sales while hyping future product, regardless of the release timeline or the performance of that future product.

Hard to find a GTX 285 here
 

Obsoleet

Platinum Member
Oct 2, 2007
2,181
1
0
Nvidia's silence?

http://www.xbitlabs.com/news/v...of_Graphics_Cards.html

Nvidia has not yet disclosed any plans regarding its DX11 GPUs, which means that in the eyes of computer enthusiasts the company is not a technology leader any more.

More often than not, they've been technically inferior but good at brute force. They lead in marketing propaganda and most of the time in sales.

It is not completely clear why Nvidia is trying to downplay the importance of DirectX 11 and DirectCompute 11, technologies that enable next-generation software.

Because they don't have it, and when they do, it'll probably be a slow-performing implementation (and they know it). ATI's been implementing DX11 features in their hardware far in advance of DX11 just by being proactive and innovative, even if at the time those "DX11" features ended up being worthless.

Not only does ATI have better engineering now and a better approach to cards that fits the market, but they also have an inherent leg up from finally reaping their investment in advanced technologies from years ago.

What's the most likely scenario to play out? G300 would have been an OK product in 2010, if ATI didn't exist. Similar to NV30, not a terrible product by itself.

My vote was for door #2. It's pretty clear this time what's going on.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
nVIDIA's architecture will be a poor implementation of DirectX 11 and DirectCompute, simply because AMD had a dormant tessellator (one that had to be scrapped and redesigned anyway) that went unused for years? Can you explain that in greater detail, please?

nVIDIA's chip is going to be a large-die manufacturing nightmare that will come at a premium price. Newbs and OEMs will pay this premium and pass the rip-off on to their clients, because it's probably going to be 5-10% faster than the corresponding Radeon and that's all they need to hear. That is the history, and that's what seems most likely to be repeated. It will be a fine GPU, though probably not as power efficient as the Radeon at idle. The issue at hand is that most hardware guys are willing to trade a little performance and a little manufacturability for a substantial increase in value and efficiency. You'll have a much faster GPU on average if you spend $200 every two years rather than $400 every three years, and that's why people buy Radeons.
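The cadence argument in that last sentence is just per-year arithmetic; a quick sketch (the $200/$400 figures are the poster's hypotheticals, not real SKU prices):

```python
# Annualized cost of the two upgrade cadences described above.
# The $200/2yr and $400/3yr figures are hypotheticals from the post.
def cost_per_year(price, years):
    return price / years

midrange = cost_per_year(200, 2)  # buy a $200 card every 2 years
highend = cost_per_year(400, 3)   # buy a $400 card every 3 years

# The mid-range cadence costs less per year ($100.00 vs ~$133.33)
# and puts a newer architecture in the box more often.
print(f"mid-range: ${midrange:.2f}/yr, high-end: ${highend:.2f}/yr")
```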

However, nVIDIA has never poorly implemented an instruction set. They were justifiably too stubborn to move to 10.1 because they didn't want to go back on months and months of satisfactory work. You can't comply with 10.1 with a band-aid patch, and they weren't going to waste previously invested time and money supporting it, because they knew it wouldn't affect sales and it wouldn't make a bit of difference since DX10 itself was so poorly implemented by developers. There is no supporting evidence for the claim that they will poorly implement DX11 and DirectCompute in their upcoming microarchitecture. With the exception of memory controllers for Intel systems, they do a great job implementing a wide range of technologies, and you cannot say 10.1 was a poor implementation since it was never attempted. Perhaps with the 40nm derivatives of GT218, we can see how big a screwup 10.1 will be on nVIDIA architectures... 40nm is indeed the dramatic node.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: taltamir
I am at once surprised, unsurprised, and saddened to see that 11 people voted "Im an idiot and pick the first answer to see the results"

Allow me to introduce you to a little friend of mine who helps lower my expectations and improve my happiness levels :laugh: Might work for you as well
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Poor taltamir. He must be of the Xbox 360 generation.


No, I don't know what that means.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Originally posted by: Obsoleet
Nvidia's silence?

http://www.xbitlabs.com/news/v...of_Graphics_Cards.html

Nvidia has not yet disclosed any plans regarding its DX11 GPUs, which means that in the eyes of computer enthusiasts the company is not a technology leader any more.

More often than not, they've been technically inferior but good at brute force. They lead in marketing propaganda and most of the time in sales.

It is not completely clear why Nvidia is trying to downplay the importance of DirectX 11 and DirectCompute 11, technologies that enable next-generation software.

Because they don't have it, and when they do, it'll probably be a slow-performing implementation (and they know it). ATI's been implementing DX11 features in their hardware far in advance of DX11 just by being proactive and innovative, even if at the time those "DX11" features ended up being worthless.

Not only does ATI have better engineering now and a better approach to cards that fits the market, but they also have an inherent leg up from finally reaping their investment in advanced technologies from years ago.

What's the most likely scenario to play out? G300 would have been an OK product in 2010, if ATI didn't exist. Similar to NV30, not a terrible product by itself.

My vote was for door #2. It's pretty clear this time what's going on.

It's amazing that you can predict the poor performance of a whole new chip only months after it tapes out and before anyone has seen a demo.

I guess we will just have to come back to this thread when the benches come out.

 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Originally posted by: Obsoleet
More often than not, they've been technically inferior but good at brute force.

Actually, AMD's 5D shader is far more ambitious than nVIDIA's, and far more difficult to keep busy. AMD was the one brute-forcing the architecture when they went from 320 "shaders" to 800. It was easier to throw a lot more underutilized SIMD cores at the problem than to ensure that each 5D unit was always fully occupied (now this works to their advantage, as they have a vast SIMD architecture with lots of room for improvements in threading without the added expense of die size). The consolidation of larger execution units means the architecture has the potential to be both faster and slower given a varying workload, because the chip just isn't as flexible as nVIDIA's simpler, more independent SPs with better thread scheduling and superior distribution of resources due to far greater encapsulation. This is one reason why the "1.4 TFLOP" of the Radeon 4890 is never achieved in a real GPGPU scenario, while the 1.0 TFLOP of the GTX 285 is a more representative figure. Hopefully applications like Linpack and Sandra will shed more light on this as a numerical OpenCL benchmark becomes available.
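For reference, the peak figures quoted above are just ALUs x clock x flops-per-ALU-per-clock. The sketch below uses the commonly quoted clocks for these parts; the VLIW slot-utilisation factor is purely illustrative, not a measured number:

```python
# Peak single-precision throughput = ALUs x clock x flops per ALU per clock.
def peak_gflops(alus, clock_mhz, flops_per_alu):
    return alus * clock_mhz * flops_per_alu / 1000.0

# HD 4890: 800 VLIW lanes at 850 MHz, MAD = 2 flops/clock -> 1360 GFLOPS
hd4890 = peak_gflops(800, 850, 2)

# GTX 285: 240 SPs at a 1476 MHz shader clock, MAD+MUL = 3 flops -> ~1063 GFLOPS
gtx285 = peak_gflops(240, 1476, 3)

# The VLIW5 peak assumes all 5 slots of every unit issue on every clock.
# If the compiler fills, say, 3.5 of 5 slots on average (an assumed figure),
# the effective rate drops below the scalar design's easier-to-reach peak:
hd4890_effective = hd4890 * (3.5 / 5)  # ~952 GFLOPS

print(hd4890, gtx285, round(hd4890_effective))
```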

Both DX11-generation architectures are sure to have brand-new thread setup implementations, so this aspect of the efficiency argument is entering a new round.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,792
1,512
136
Silence can be a good or a bad thing. In the case of G80 it was a very good thing; in the case of the G200 40nm derivatives, it wasn't. Probably more often a good thing than not.

In terms of graphics though, being significantly late to the party with a new architecture has never been a good thing, has it?

I think G300 will probably be launched in late Q4 at the earliest, and maybe sometime in Q2 if it becomes another GeforceFX or R600 fiasco.

In terms of performance, 10-20% faster than HD 5870 is probably a good starting point, but since it is a significantly new design it could really be anywhere from -25% to +50%. There's really just too much unknown right now.
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
Originally posted by: OCguy


It's amazing that you can predict the poor performance of a whole new chip only months after it tapes out and before anyone has seen a demo.

I guess we will just have to come back to this thread when the benches come out.


The majority here have been predicting the GT300 will be faster, which is quite amazing.

edit: I don't remember the "silence" around the G80. They did do a little viral expectation-lowering phase - but that is normal.

Exactly what ATI has been doing with the 5870 - releasing the less pleasing stuff first - so expectations can be at least met if not exceeded. It has actually been very well managed by ATI's viral PR. First a glitzy demo party with raving reviews - now the bad news sneaking in. Then, hopefully, a nice launch with a DX11 game.........
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: mwmorph
Originally posted by: Idontcare
Originally posted by: thilan29
Originally posted by: RobertR1
They're working on it:

http://www.youtube.com/watch?v=FR45ja_fNzU

Lol every bad situation has been dubbed onto that piece of footage.

I think you just came up with the next one: Hitler getting pissed off at hearing people are reusing his footage and adding subtitles that make it seem like he's pissed off about everything that fails :laugh:

you mean
http://www.youtube.com/watch?v...F_lEDE&feature=related
or
http://www.youtube.com/watch?v=OL3L1wnpVb8&NR=1

edit: damn slowspyder, I shouldn't have spent all that time watching the videos.

Thanks, I just spent 30 minutes learning about why Hitler likes waffles and doesn't like Vista...
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: Idontcare
Originally posted by: taltamir
I am at once surprised, unsurprised, and saddened to see that 11 people voted "Im an idiot and pick the first answer to see the results"

Allow me to introduce you to a little friend of mine who helps lower my expectations and improve my happiness levels :laugh: Might work for you as well

and it really works too... ;p
 

Atechie

Member
Oct 15, 2008
60
0
0
Originally posted by: kreacher
With their marketing department's past record, being silent means the situation is so bad right now that they can't even think of a way to spin it into anything positive.

You mean like before the G80 launch?
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: HurleyBird
Silence can be a good or a bad thing. In the case of G80 it was a very good thing; in the case of the G200 40nm derivatives, it wasn't. Probably more often a good thing than not.

In terms of graphics though, being significantly late to the party with a new architecture has never been a good thing, has it?

I think G300 will probably be launched in late Q4 at the earliest, and maybe sometime in Q2 if it becomes another GeforceFX or R600 fiasco.

In terms of performance, 10-20% faster than HD 5870 is probably a good starting point, but since it is a significantly new design it could really be anywhere from -25% to +50%. There's really just too much unknown right now.

Can you guys define "late to the party" for me, please? I could understand it if Nvidia had announced a GT300 launch in June, July or August and it still wasn't here, but there have been no such announcements by them at all. You are only considering GT300 late because ATI is launching (pre-launching) R8xx soon. And last time I checked, there is no rule saying both companies have to launch at the same time - or that if one does and the other doesn't, the other is late.

The only thing you can accurately construe from current events is that ATI is launching R8xx before Nvidia is launching GT300. This doesn't make GT300 late. When we get an official launch date for GT300, and that time comes and goes, THEN you can call GT300 late.

One of the reasons (I think) AMD is able to release R8xx so quickly is that R8xx (I think) is basically two RV770s in one chip. Anyone who has seen the schematic photo for Cypress:

Cypress Architecture Schematic

You can see that there isn't one big core. It looks to me like they took two 40nm 4770s (with the exception of 800 SPs instead of 640) and "glued" them together.
"Glued" being a crude description - I'm sure there is more elegance to the design than that.

Anyway, a 4770 had a 128-bit memory interface. You can see each "core" in the schematic has two 64-bit memory controllers, for the 256-bit total between both "cores". That also explains the doubling of everything: 800 to 1600 SPs, 16 to 32 ROPs, 40 to 80 TMUs. This could also have been a contributing factor to the 4770 shortage for a while - dedicating most of its 40nm capacity to Cypress - but I don't have any data to back this up. Grain of salt.
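If the schematic really does show two mirrored halves, the doubling reads as simple arithmetic. In the sketch below, the 800-SP half is the hypothetical 4770 relative described above (not the shipping RV740, which has 640 SPs), and the 4.8 GT/s GDDR5 rate is an assumed example figure:

```python
# The "two glued halves" reading of the Cypress schematic, as numbers.
# Baseline is the hypothetical 800-SP 4770 relative, not the shipping RV740.
half = {"SPs": 800, "TMUs": 40, "ROPs": 16, "bus_bits": 128}
cypress = {k: 2 * v for k, v in half.items()}
# -> {'SPs': 1600, 'TMUs': 80, 'ROPs': 32, 'bus_bits': 256}

def bandwidth_gbps(bus_bits, effective_gtps):
    """Bandwidth in GB/s = bus width in bytes x effective transfer rate."""
    return bus_bits / 8 * effective_gtps

# e.g. a 256-bit bus with 4.8 GT/s effective GDDR5 gives 153.6 GB/s
print(cypress, bandwidth_gbps(cypress["bus_bits"], 4.8))
```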

So, at first glance, and if this schematic pic is accurate (unknown), it would seem that is exactly what AMD has done: joined two 800-SP 4770s together, with other enhancements such as DX11 compliance, improved power circuitry, and the Eyefinity hardware. Sure, it can be argued that this isn't the case, but that's sure what it looks like to me. It could turn out that this schematic is not truly representative of the actual x-ray shot of the die. Anyone have a link to an actual die shot?

I'm going to be open minded about this and consider all data, but that's all I see right now.

Conversely, Nvidia is releasing an entirely new architecture from the ground up, according to leaks and rumors (we will have to see).
It isn't just a die shrink and doubling of GT200 tech, AFAIK.

So, late to the party might just be a party Nvidia never meant to attend. They have their own schedule as AMD has theirs.

 

WelshBloke

Lifer
Jan 12, 2005
32,541
10,713
136
Originally posted by: Atechie
Originally posted by: kreacher
With their marketing department's past record, being silent means the situation is so bad right now that they can't even think of a way to spin it into anything positive.

You mean like before the G80 launch?

Wasn't NV waiting on Microsoft to change the DX10 spec, and that's why they were quiet about it?
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
RV740 (HD 47xx) has 640 ALUs and 32 TMUs.

On paper it may look like dual RV740s, but that's just being short-sighted. You shouldn't forget that RV7x0 was already close to being DX11 hardware, so adding the other necessities to be DX11 compliant wouldn't have required a completely new GPU design. Your theory of two RV740s being glued makes no sense: if the two are physically integrated into a single die (which would be a lot more expensive than designing it as a single chip in the first place - even MCM would be a cheaper solution), then one would be wasting a lot of transistors on redundant duplicates, e.g. UVD, the I/O controller, etc. Not to mention the communication nightmares. RV740 looks more like a test vehicle for the 40nm process technology, preparing the road for the next-generation 40nm solution, i.e. RV870. The RV740 shortage has more to do with manufacturing problems (I'm sure this was even confirmed by TSMC).

Cypress has already been shown to be a single-GPU solution. What they have done is simple: take the RV7x0-generation architecture as the basis and add in the hardware required to be DX11 compliant (which would have left time and resources to beef up/polish other aspects of the hardware). Since they were able to get away with packing so many SIMD cores on the 55nm process, 40nm has enabled them to double the count.

Btw, the IHV in this market that launched its product first has tended to be the winner of that generation.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: Cookie Monster
RV740 (HD 47xx) has 640 ALUs and 32 TMUs.

On paper it may look like dual RV740s, but that's just being short-sighted. You shouldn't forget that RV7x0 was already close to being DX11 hardware, so adding the other necessities to be DX11 compliant wouldn't have required a completely new GPU design. Your theory of two RV740s being glued makes no sense: if the two are physically integrated into a single die (which would be a lot more expensive than designing it as a single chip in the first place - even MCM would be a cheaper solution), then one would be wasting a lot of transistors on redundant duplicates, e.g. UVD, the I/O controller, etc. Not to mention the communication nightmares. RV740 looks more like a test vehicle for the 40nm process technology, preparing the road for the next-generation 40nm solution, i.e. RV870. The RV740 shortage has more to do with manufacturing problems (I'm sure this was even confirmed by TSMC).

Cypress has already been shown to be a single-GPU solution. What they have done is simple: take the RV7x0-generation architecture as the basis and add in the hardware required to be DX11 compliant (which would have left time and resources to beef up/polish other aspects of the hardware). Since they were able to get away with packing so many SIMD cores on the 55nm process, 40nm has enabled them to double the count.

Btw, the IHV in this market that launched its product first has tended to be the winner of that generation.

Yes Cookie, a 4770 relative with 800 SPs, 40 TMUs and a 128-bit bus. The two reasons the 4770 comes to mind are 40nm, and the 128-bit bus for one half of the R8xx core. Is this short-sighted? I thought I was actually describing what it appears to be, not being short-sighted. I don't know why you would call it that in the first place, and I'm not going out of my way to look for something that isn't there. :::shrugs:::
If short-sighted means that I am only going off of what I can physically see, then yes, I am short-sighted and not guessing.

If it looks like two RV740s on paper, then why is it short-sighted to think it is? Otherwise, it looks like you're just blowing off the suggestion.

I've acknowledged the DX11 compliance enhancements, did I not? Not a lot of work required to move from DX10.1 to DX11, I agree.

It might be a single die, Cookie, but I explicitly see two separate cores within that die - not one big core with 1600 SPs. Do you? Again, we are assuming here that the schematic is accurate and not just a simplified layout of what R8xx is.

So, in short, we agree in some ways, and do not in others.

 

HurleyBird

Platinum Member
Apr 22, 2003
2,792
1,512
136
Originally posted by: Keysplayr
Can you guys define, "late to the party" for me please? I can understand if Nvidia announced a GT300 launch in June, July or August and it's still not here yet. But there have been no such announcements by them at all. You are only considering GT300 late because ATI is launching (pre-launching) R8xx soon.

I have no idea if Nvidia is "late" according to their own internal schedule or not - no non-insider does. That's different from being "late to the party," which just means that your product is out significantly later than your competitor's, and that's what counts. Keeping your schedule doesn't mean much when your schedule is way behind the competition anyway.

I'm merely making the observation that being significantly behind your competitor's schedule when you're releasing a new architecture has *never* been a good thing in the graphics world. It's not a prediction, just an observation, and for all we know G300 will be the one to buck the trend, but it still gives AMD a window of opportunity to be the top dog.

Originally posted by: Keysplayr
One of the reasons (I think) AMD is able to release R8xx so quickly is that R8xx (I think) is basically two RV770s in one chip.

Anyway, a 4770 had a 128-bit memory interface. You can see each "core" in the schematic has two 64-bit memory controllers, for the 256-bit total between both "cores". That also explains the doubling of everything: 800 to 1600 SPs, 16 to 32 ROPs, 40 to 80 TMUs. This could also have been a contributing factor to the 4770 shortage for a while - dedicating most of its 40nm capacity to Cypress - but I don't have any data to back this up. Grain of salt.

It sounds like you're saying the RV740 shortage may have been due to AMD picking out the good dies to build Cypress dual-core products from? I'll attribute that to how late you're up posting :laugh:

Originally posted by: Keysplayr
So, at first glance, and if this schematic pic is accurate (unknown), it would seem that is exactly what AMD has done. Joined 2 800sp 4770's together, with other enhancements such as DX11 compliance, improved power circuitry, Eyefinity hardware.

All inter-cache bandwidth was effectively doubled, which is no small change, and I believe some caches were also doubled in size. There are a bunch of slides on the XS forums about this stuff. A new scheduler too, and the new aniso may or may not be hardware related. Just because something is structurally similar and looks the same on a simplified chart doesn't mean that things are unchanged deeper down. Sure, G300 is almost certainly a bigger undertaking, but they've had enough time to work on it, considering that all they've done since G80 is make it bigger with the slightest of architectural enhancements here and there.
 