nVidia's next gen cards...

Quiksilver

Diamond Member
Jul 3, 2005
4,725
0
71
TG Daily.

Q4 2008/Q1 2009...

GDDR5.
DirectX 10.1

So... is GT200/GT200b a huge stopgap, kind of like the 9 series?....
That's pretty annoying, having your $650 card phased out in 6-8 months...
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
how is that annoying? you paid for a top-end hardware part, not for technological innovation to slow down

if you find it annoying, then you're buying the part for status when you can't really afford to do so...gj
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Actually, I have a hard time believing Nv will have either DX10.1 or GDDR5 in 2008. This seems like some wishful thinking, adding all the features NV currently lacks compared to ATI. The biggest news I'd expect from NV this year is a die shrink and integration of NVIO; in other words - gt200b.
 

mkln

Member
Oct 31, 2006
97
0
0
Originally posted by: munky
Actually, I have a hard time believing Nv will have either DX10.1 or GDDR5 in 2008. This seems like some wishful thinking, adding all the features NV currently lacks compared to ATI. The biggest news I'd expect from NV this year is a die shrink and integration of NVIO; in other words - gt200b.

im not sure it is necessarily wishful thinking. i think NV probably had something in the works, but didn't really see the need to release anything that would blow their own products out of the water. now that they are on the defensive, it seems they have no choice.

edit: spelling
 

Piuc2020

Golden Member
Nov 4, 2005
1,716
0
0
While that's possible, it does seem highly unlikely NV will kill its own 8-9 series sales even further with a myriad of cards and yet another gen.
 

nemesismk2

Diamond Member
Sep 29, 2001
4,810
5
76
www.ultimatehardware.net
Originally posted by: munky
Actually, I have a hard time believing Nv will have either DX10.1 or GDDR5 in 2008. This seems like some wishful thinking, adding all the features NV currently lacks compared to ATI. The biggest news I'd expect from NV this year is a die shrink and integration of NVIO; in other words - gt200b.

I agree 100%, it's a great shame that Nvidia has been caught with their pants down. Nvidia's lack of DX10.1 support and GDDR5 memory is very disappointing, and I expected better from Nvidia! :|
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: munky
Actually, I have a hard time believing Nv will have either DX10.1 or GDDR5 in 2008. This seems like some wishful thinking, adding all the features NV currently lacks compared to ATI. The biggest news I'd expect from NV this year is a die shrink and integration of NVIO; in other words - gt200b.

The die shrink is probably the priority, but Nvidia has been known to tweak/change cores during a die shrink. I don't know what would be involved in making, say, a GT200b DX10.1 compliant. Could be hard to do. Could be simple to do. I'm just thinking of the 7800 to 7900 at the moment. Die shrink and reduction in transistor count. They may not be able to reduce transistor count in this architecture, but who knows. And integration of the NVIO would seem par for the course, as they did this going from G80 to G92, as well as reducing the number of ROPs and increasing texture units. So, I feel it is a strong possibility that things could be changed during this die shrink.

Remember, ATI and Nvidia each have their features over the other. It is not one-sided here. ATI has DX10.1 and Nvidia has onboard PhysX. The 4870 has GDDR5; the GT200s have wider buses. Back and forth all day long. Exactly how valuable each of these features is will become known over the following year in the form of released titles that support one, the other, or both.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: nemesismk2
Originally posted by: munky
Actually, I have a hard time believing Nv will have either DX10.1 or GDDR5 in 2008. This seems like some wishful thinking, adding all the features NV currently lacks compared to ATI. The biggest news I'd expect from NV this year is a die shrink and integration of NVIO; in other words - gt200b.

I agree 100%, it's a great shame that Nvidia has been caught with their pants down. Nvidia's lack of DX10.1 support and GDDR5 memory is very disappointing, and I expected better from Nvidia! :|

Just like I said. Each company has respective features over the other right now. Exactly how beneficial these features are, will become known soon enough.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: Piuc2020
While that's possible, it does seem highly unlikely NV will kill its own 8-9 series sales even further with a myriad of cards and yet another gen.

Sorry for the three posts in a row here. Just replying as I go along the thread.

Maybe Nvidia might offer GT200 based mid ranged cards and EOL G92 by that time.
Could be GTS240, GT220, who knows. All on 55nm obviously. This is just speculation on my part of course, but it's not too hard to picture that happening.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: BFG10K
I think we'll see a GTX280 shrunk to 55 nm with 256 bit GDDR5.

I'm wondering if the last bit is true. nVIDIA's current architecture has memory channels tied to ROPs: each 32-bit memory channel is tied to 2 ROPs. So in order for them to go 256-bit + GDDR5, it would require a lot of reshuffling of the current architecture to maintain the number of ROPs, unless they want it cut back to 16.
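That coupling makes the trade-off easy to put in numbers. A minimal sketch, assuming the 2-ROPs-per-32-bit-channel ratio described in the post above (the function name is just for illustration):

```python
# ROP count implied by tying 2 ROPs to each 32-bit memory channel,
# as described for nVIDIA's G8x/GT200-era designs (assumption from the post above).
def implied_rops(bus_width_bits, channel_bits=32, rops_per_channel=2):
    channels = bus_width_bits // channel_bits
    return channels * rops_per_channel

print(implied_rops(512))  # GTX 280's 512-bit bus -> 32 ROPs
print(implied_rops(448))  # GTX 260's 448-bit bus -> 28 ROPs
print(implied_rops(256))  # a 256-bit part -> 16 ROPs
```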
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: Cookie Monster
Originally posted by: BFG10K
I think we'll see a GTX280 shrunk to 55 nm with 256 bit GDDR5.

I'm wondering if the last bit is true. nVIDIA's current architecture has memory channels tied to ROPs: each 32-bit memory channel is tied to 2 ROPs. So in order for them to go 256-bit + GDDR5, it would require a lot of reshuffling of the current architecture to maintain the number of ROPs, unless they want it cut back to 16.

Did G80 have its memory channels tied to ROPs?
 

lopri

Elite Member
Jul 27, 2002
13,310
687
126
Originally posted by: keysplayr2003
Originally posted by: Cookie Monster
Originally posted by: BFG10K
I think we'll see a GTX280 shrunk to 55 nm with 256 bit GDDR5.

I'm wondering if the last bit is true. nVIDIA's current architecture has memory channels tied to ROPs: each 32-bit memory channel is tied to 2 ROPs. So in order for them to go 256-bit + GDDR5, it would require a lot of reshuffling of the current architecture to maintain the number of ROPs, unless they want it cut back to 16.

Did G80 have its memory channels tied to ROPs?
I believe so. That's why they (be it intentionally or unintentionally) cut the memory width for the smaller brother (384-bit to 320-bit). Same for the GT200 (512-bit to 448-bit). But then it's entirely possible we see a transition like G80 -> G92, IMO.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Originally posted by: keysplayr2003
Originally posted by: munky
Actually, I have a hard time believing Nv will have either DX10.1 or GDDR5 in 2008. This seems like some wishful thinking, adding all the features NV currently lacks compared to ATI. The biggest news I'd expect from NV this year is a die shrink and integration of NVIO; in other words - gt200b.

The die shrink is probably the priority, but Nvidia has been known to tweak/change cores during a die shrink. I don't know what would be involved in making, say, a GT200b DX10.1 compliant. Could be hard to do. Could be simple to do. I'm just thinking of the 7800 to 7900 at the moment. Die shrink and reduction in transistor count. They may not be able to reduce transistor count in this architecture, but who knows. And integration of the NVIO would seem par for the course, as they did this going from G80 to G92, as well as reducing the number of ROPs and increasing texture units. So, I feel it is a strong possibility that things could be changed during this die shrink.

Remember, ATI and Nvidia each have their features over the other. It is not one-sided here. ATI has DX10.1 and Nvidia has onboard PhysX. The 4870 has GDDR5; the GT200s have wider buses. Back and forth all day long. Exactly how valuable each of these features is will become known over the following year in the form of released titles that support one, the other, or both.

But with the 7800 -> 7900 shrink they had stuff they could remove (e.g. VIVO), not sure they do with the G200.
I would expect a G80 -> G92 style tweak, mainly because the architecture is similar, plus they might need it to make sure they can keep up with anything ATI might do.

Also by then we might see how the PhysX/Havok things have played out, and NV might need to add more value to their products if we do somehow see physics support on ATI cards.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
It seems to be a really big deal, but is anyone capable of saying why nVidia should care about supporting DX 10.1? Can anyone list a feature the 2x0 parts are incapable of doing that matters, at all?

Honestly it seems rather retarded that people are talking so much about this without anyone being capable of saying why it is needed. This isn't a DX9-DX10 style transition, or DX8-DX9 for that matter; it is, at best, an extremely small step with very limited uses. The most popular feature to bring up is AA using shader hardware, which the 2x0 parts can do.

I think we'll see a GTX280 shrunk to 55 nm with 256 bit GDDR5.

Not sure about the RAM swap. You are introducing yourself to more volatility in pricing going that route. Granted, using a higher bit width you assure yourself of a higher PCB cost, but with the complexity of the 2x0 parts I am not so sure they would really be able to reduce the layers on the PCB too much anyway, at which point the potential savings are truly marginalized. Not saying that it wouldn't end up working out for them, just that I can see why they may be a little bit hesitant to go that route.

Obviously 55nm is going to be the big factor in reducing costs for them; it also will likely allow considerable headroom in clock rates for them to utilize to deal with potential X2 parts when they arrive.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: BenSkywalker
It seems to be a really big deal, but is anyone capable of saying why nVidia should care about supporting DX 10.1? Can anyone list a feature the 2x0 parts are incapable of doing that matters, at all?

Keys: For the same reason that ATI cared about SM3.0 with their X8xx lineup. They had 2.0b, NV had 3.0. Did it cause a big stink? Sure did. Did it matter much in the end? Sure didn't. As of right now, it matters on paper. That is all. Until at least a small range of DX10.1 titles emerges, it's still just on paper. We will have to see. We can't really tell with Assassin's Creed if it mattered or not. It seemed to boost performance on 10.1 hardware, but may not have been rendering correctly, and there were some graphical anomalies. So, 10.1 hardware may have been running it faster, or it might not have. It all depends on whether the code they were running was done correctly, which according to the dev it wasn't.

Honestly it seems rather retarded that people are talking so much about this without anyone being capable of saying why it is needed. This isn't a DX9-DX10 style transition, or DX8-DX9 for that matter; it is, at best, an extremely small step with very limited uses. The most popular feature to bring up is AA using shader hardware, which the 2x0 parts can do.

It's "supposed" to improve performance by eliminating the need for an extra render pass because of the way things are done. People are talking about it so much because it "could" offer an advantage over a competitor. But they don't really know yet. So it's all based on the paperwork.

I think we'll see a GTX280 shrunk to 55 nm with 256 bit GDDR5.

Not sure about the RAM swap. You are introducing yourself to more volatility in pricing going that route. Granted, using a higher bit width you assure yourself of a higher PCB cost, but with the complexity of the 2x0 parts I am not so sure they would really be able to reduce the layers on the PCB too much anyway, at which point the potential savings are truly marginalized. Not saying that it wouldn't end up working out for them, just that I can see why they may be a little bit hesitant to go that route.

Not sure of anything really, let alone the RAM swap. And that all depends on pricing and availability of GDDR5. RV770 is a fairly complex GPU in itself, but they managed to utilize a 256-bit bus. G80 was pretty complex as well, but they managed to get it to 256 down from 384. But who knows. Maybe high-frequency GDDR5 is just what the doctor ordered for GT200s on their current buses. Bandwidth would be in orbit. The difference in price for GDDR5 (assuming it's much more expensive) might be offset by the core die shrink saving them money there. All guesswork at best.

Obviously 55nm is going to be the big factor in reducing costs for them; it also will likely allow considerable headroom in clock rates for them to utilize to deal with potential X2 parts when they arrive.

Die shrinks don't always guarantee higher clock frequencies, but that is usually how it seems to work out for the most part.

 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: keysplayr2003
Originally posted by: Piuc2020
While that's possible, it does seem highly unlikely NV will kill its own 8-9 series sales even further with a myriad of cards and yet another gen.

Sorry for the three posts in a row here. Just replying as I go along the thread.

Maybe Nvidia might offer GT200 based mid ranged cards and EOL G92 by that time.
Could be GTS240, GT220, who knows. All on 55nm obviously. This is just speculation on my part of course, but it's not too hard to picture that happening.

I fully expect that to happen. q4 08 will be tough, but certainly by q1 09 those parts should be out. if they really introduce gddr5 with a 256-bit bus then they could pull a trick out of amd's bag and go to a gtx 360/gtx 380 naming scheme, however. AMD had better be ready because I have a sneaking suspicion that nvidia will come back stronger and more focused next year.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
Cookie Monster:

I'm wondering if the last bit is true. nVIDIA's current architecture has memory channels tied to ROPs: each 32-bit memory channel is tied to 2 ROPs.
Instead of two they could tie four to each.

BenSkywalker:

Not sure about the RAM swap. You are introducing yourself to more volatility in pricing going that route.
Going to 256 bit GDDR5 would reduce chip complexity and cost without sacrificing memory bandwidth. With the RV6xx, ATi's two main changes were going to 55nm and reducing memory width to 256 bit, which allowed them to bring thermals, yields and costs under control. This chip then paved the way for the RV7xx.

I think the move to 256 bit GDDR5 was a smart one and I expect nVidia will follow suit as soon as possible.
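The bandwidth-parity claim checks out arithmetically: GDDR5 transfers data at roughly twice the effective rate of GDDR3, so halving the bus width roughly cancels out. A rough sketch; the GDDR5 clock figure is illustrative, not a confirmed spec:

```python
# Peak memory bandwidth = (bus width in bytes) * effective transfer rate.
def bandwidth_gb_s(bus_width_bits, effective_mt_s):
    return (bus_width_bits / 8) * effective_mt_s / 1000  # GB/s

# GTX 280-style 512-bit GDDR3 at 2214 MT/s effective
print(bandwidth_gb_s(512, 2214))  # ~141.7 GB/s
# Hypothetical 256-bit GDDR5 at 4400 MT/s effective
print(bandwidth_gb_s(256, 4400))  # ~140.8 GB/s
```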
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
It's "supposed" to improve performance by eliminating the need for an extra render pass because of the way things are done. People are talking about it so much because it "could" offer an advantage over a competitor. But they don't really know yet. So it's all based on the paperwork.

Frame buffer read/write ops spelled out in DX10.1 are supported by the 2x0 parts (which is where the reduced passes came in). This is kind of what I was talking about; no one can even figure out what it is that nV doesn't support in the DX10.1 spec (I'm sure there is something, but nV isn't saying, and all of the big points they seem to have full support for).

Die shrinks don't always guarantee higher clock frequencies, but that is usually how it seems to work out for the most part.

Tesla is already running higher clock rates, but that is in an environment that has more room for thermal tuning (they can output a ton of heat since nV controls the entire box and can make sure it is vented properly). A die shrink is pretty much going to assure us a heat reduction.

Going to 256 bit GDDR5 would reduce chip complexity and cost without sacrificing memory bandwidth.

Unless we see a sharp spike in the cost of GDDR5, it would end up being a costly mistake. On a technical basis I see no big benefit going one way or the other; what I see is nV limiting potential outside pricing pressures while at the same time reducing potential savings. Is it the right long-term choice? Not a clue. Entirely depends on where GDDR5 goes from here.

Also, another thing to consider: validation of GDDR5 for use in HPC computing is nonexistent at the moment. Due to this, Tesla would require a 512-bit interface with older memory tech in order to hit its target goals. This would add a bit of complexity to the product lineup.

 

nyker96

Diamond Member
Apr 19, 2005
5,630
2
81
Originally posted by: BFG10K
I think we'll see a GTX280 shrunk to 55 nm with 256 bit GDDR5.

I think that's a prudent suggestion; a 256-bit bus can save a lot of cost. And on top of this, they need to move away from 1.4 billion transistor monsters with very low yields. Basically, do it the AMD way. I do think NV can beat ATI at this game if they switch their design philosophy a bit; they have the resources to do R&D. It seems the monolithic design is not yielding as much benefit anymore.
 

toslat

Senior member
Jul 26, 2007
216
0
76
Originally posted by: keysplayr2003
Remember, ATI and Nvidia each have their features over the other. It is not one-sided here. ATI has DX10.1 and Nvidia has onboard PhysX. The 4870 has GDDR5; the GT200s have wider buses. Back and forth all day long. Exactly how valuable each of these features is will become known over the following year in the form of released titles that support one, the other, or both.

How are wider buses a feature? AFAIK it's more of a cost vs memory bandwidth issue.

 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I do think NV can beat ATI at this game if they switch their design philosophy a bit; they have the resources to do R&D. It seems the monolithic design is not yielding as much benefit anymore.

They are beating ATi atm, but ignoring that, they are far more focused on Intel atm than ATi.
 