Originally posted by: munky
Actually, I have a hard time believing NV will have either DX10.1 or GDDR5 in 2008. This seems like wishful thinking: adding all the features NV currently lacks compared to ATI. The biggest news I'd expect from NV this year is a die shrink and integration of NVIO; in other words, GT200b.
Originally posted by: nemesismk2
Originally posted by: munky
Actually, I have a hard time believing NV will have either DX10.1 or GDDR5 in 2008. This seems like wishful thinking: adding all the features NV currently lacks compared to ATI. The biggest news I'd expect from NV this year is a die shrink and integration of NVIO; in other words, GT200b.
I agree 100%, it's a great shame that Nvidia has been caught with their pants down. Nvidia's lack of DX10.1 support and of GDDR5 memory on their video cards is very disappointing, and I expected better from Nvidia! :|
Originally posted by: Piuc2020
While that's possible, it does seem highly unlikely NV will kill its own 8-9 series sales even further with a myriad of cards and yet another gen.
Originally posted by: BFG10K
I think we'll see a GTX280 shrunk to 55 nm with 256 bit GDDR5.
Originally posted by: Cookie Monster
Originally posted by: BFG10K
I think we'll see a GTX280 shrunk to 55 nm with 256 bit GDDR5.
I'm wondering if the last bit is true. nVIDIA's current architecture ties memory channels to ROPs: each 32-bit memory channel is tied to 2 ROPs. So in order for them to go 256-bit + GDDR5, it would require a lot of reshuffling of the current architecture to maintain the number of ROPs, unless they're willing to cut it back to 16.
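That channel-to-ROP arithmetic can be sketched quickly. This is just a toy calculation assuming the ratio above (one 32-bit channel per 2 ROPs) holds across the whole lineup:

```python
# ROP count implied by bus width, assuming nVIDIA keeps tying
# one 32-bit memory channel to 2 ROPs (the ratio described above).
def rop_count(bus_width_bits, channel_bits=32, rops_per_channel=2):
    channels = bus_width_bits // channel_bits
    return channels * rops_per_channel

for bus in (512, 448, 384, 320, 256):
    print(f"{bus}-bit bus -> {rop_count(bus)} ROPs")
```

Under that assumption a 512-bit part works out to 32 ROPs, 448-bit to 28, and a 256-bit part to 16, which is exactly the cut-back being discussed.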
I believe so. That's why they (be it intentionally or unintentionally) cut the memory width for the smaller brother (384 to 320-bit). Same for the GT200 (512 to 448-bit). But then it's entirely possible we see a transition like G80->G92, IMO.
Originally posted by: keysplayr2003
Originally posted by: Cookie Monster
Originally posted by: BFG10K
I think we'll see a GTX280 shrunk to 55 nm with 256 bit GDDR5.
I'm wondering if the last bit is true. nVIDIA's current architecture ties memory channels to ROPs: each 32-bit memory channel is tied to 2 ROPs. So in order for them to go 256-bit + GDDR5, it would require a lot of reshuffling of the current architecture to maintain the number of ROPs, unless they're willing to cut it back to 16.
Did G80 have its memory channels tied to ROPs?
Originally posted by: keysplayr2003
Originally posted by: munky
Actually, I have a hard time believing NV will have either DX10.1 or GDDR5 in 2008. This seems like wishful thinking: adding all the features NV currently lacks compared to ATI. The biggest news I'd expect from NV this year is a die shrink and integration of NVIO; in other words, GT200b.
The die shrink is probably the priority, but Nvidia has been known to tweak/change cores during a die shrink. I don't know what would be involved in making, say, a GT200b DX10.1 compliant. Could be hard to do. Could be simple to do. I'm just thinking of the 7800 to 7900 at the moment: die shrink and reduction in transistor count. They may not be able to reduce transistor count in this architecture, but who knows. And integration of the NVIO would seem par for the course, as they did this going from G80 to G92, as well as reducing the number of ROPs and increasing texture units. So I feel it is a strong possibility that things could be changed during this die shrink.
Remember, ATI and Nvidia each have their features over the other. It is not one-sided here. ATI has DX10.1 and Nvidia has onboard PhysX. The 4870 has GDDR5; the GT200s have wider buses. Back and forth all day long. Exactly how valuable each of these features is will become known over the following year in the form of released titles that support one, the other, or both.
Originally posted by: BenSkywalker
It seems to be a really big deal, but is anyone capable of saying why nVidia should care about supporting DX 10.1? Can anyone list a feature that matters, at all, that the 2x0 parts are incapable of?
Keys: For the same reason that ATI cared about SM3.0 with their X8xx lineup. They had 2.0b, NV had 3.0. Did it cause a big stink? Sure did. Did it matter much in the end? Sure didn't. As of right now, it matters on paper. That is all. Until at least a small range of DX10.1 titles emerges, it's still just on paper. We will have to see. We can't really tell with Assassin's Creed if it mattered or not. It seemed to boost performance on 10.1 hardware, but it may not have been rendering correctly, and there were some graphical anomalies. So 10.1 hardware may have been running it faster, or it might not have. It all depends on whether the code they were running was done correctly, which according to the dev it wasn't.
Honestly it seems rather absurd that people are talking so much about this without anyone being capable of saying why it is needed. This isn't a DX9-DX10 style transition, or DX8-DX9 for that matter; it is, at best, an extremely small step with very limited uses. The most popular feature to bring up is AA using shader hardware, which the 2x0 parts can do.
It's "supposed" to improve performance by eliminating the need for an extra render pass because of the way things are done. People are talking about it so much because it "could" offer an advantage over a competitor. But they don't really know yet. So it's all based on the paperwork.
I think we'll see a GTX280 shrunk to 55 nm with 256 bit GDDR5.
Not sure about the RAM swap. You are introducing yourself to more volatility in pricing going that route. Granted, using a higher bit width you assure yourself of a higher PCB cost, but with the complexity of the 2x0 parts I am not so sure they would really be able to reduce the layers on the PCB too much anyway, at which point the potential savings are truly marginalized. Not saying that it wouldn't end up working out for them, just that I can see why they may be a little bit hesitant to go that route.
Not sure of anything really, let alone the RAM swap. And that all depends on pricing and availability of GDDR5. RV770 is a fairly complex GPU in itself, but they managed to utilize a 256-bit bus. G80 was pretty complex as well, but they managed to get it down from 384-bit to 256 with G92. But who knows. Maybe high-frequency GDDR5 is just what the doctor ordered for GT200s on their current buses. Bandwidth would be in orbit. The difference in price for GDDR5 (assuming it's much more expensive) might be offset by the core die shrink saving them money there. All guesswork at best.
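For a rough sense of "in orbit," peak bandwidth is just bus width times transfer rate. The clock figures below are assumptions based on shipping parts of the era (GTX 280: 512-bit GDDR3 at ~2214 MT/s; HD 4870: 256-bit GDDR5 at ~3600 MT/s), not anything confirmed for future parts:

```python
# Peak memory bandwidth: (bus width in bytes) * (transfers per second).
def bandwidth_gb_s(bus_width_bits, transfer_rate_mts):
    """Bandwidth in GB/s for a given bus width and per-pin rate (MT/s)."""
    return bus_width_bits / 8 * transfer_rate_mts * 1e6 / 1e9

gtx280 = bandwidth_gb_s(512, 2214)  # assumed GTX 280 config, ~141.7 GB/s
hd4870 = bandwidth_gb_s(256, 3600)  # assumed HD 4870 config, ~115.2 GB/s
print(f"GTX 280: {gtx280:.1f} GB/s, HD 4870: {hd4870:.1f} GB/s")
```

Pairing GDDR5-class rates with GT200's 448- or 512-bit buses would roughly double those figures, which is the "in orbit" scenario.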
Obviously 55nm is going to be the big factor in reducing costs for them, it also will likely allow considerable headroom for clock rates for them to utilize to deal with potential x2 parts when they arrive.
Die shrinks don't always guarantee higher clock frequencies, but that is usually how it seems to work out for the most part.
Originally posted by: keysplayr2003
Originally posted by: Piuc2020
While that's possible, it does seem highly unlikely NV will kill its own 8-9 series sales even further with a myriad of cards and yet another gen.
Sorry for the three posts in a row here. Just replying as I go along the thread.
Maybe Nvidia will offer GT200-based mid-range cards and EOL G92 by that time.
Could be GTS240, GT220, who knows. All on 55nm obviously. This is just speculation on my part of course, but it's not too hard to picture that happening.
Instead of two they could tie four to each.
Cookie Monster:
Im wondering if the last bit is true. nVIDIA's current architecture has memory channels tied with ROPs. One 32bit memory channel are tied to 2 ROPs.
Going to 256-bit GDDR5 would reduce chip complexity and cost without sacrificing memory bandwidth. With the RV6xx, ATi's two main changes were going to 55nm and reducing memory width to 256-bit, which allowed them to bring thermals, yields and costs under control. This chip then paved the way for the RV7xx.
BenSkywalker:
Not sure about the RAM swap. You are introducing yourself to more volatility in pricing going that route.
It's "supposed" to improve performance by eliminating the need for an extra render pass because of the way things are done. People are talking about it so much because it "could" offer an advantage over a competitor. But they don't really know yet. So it's all based on the paperwork.
Die shrinks don't always guarantee higher clock frequencies, but that is usually how it seems to work out for the most part.
Going to 256 bit GDDR5 would reduce chip complexity and cost without sacrificing memory bandwidth.
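As a sanity check on that claim, here is the per-pin rate a 256-bit GDDR5 bus would need to match a 512-bit GDDR3 part. The 141.7 GB/s target is an assumed GTX 280 figure (512-bit at ~2214 MT/s), not a quoted spec:

```python
# Per-pin data rate (Gbps) needed to hit a bandwidth target on a given bus.
def required_rate_gbps(target_gb_s, bus_width_bits):
    return target_gb_s * 8 / bus_width_bits

print(f"{required_rate_gbps(141.7, 256):.2f} Gbps per pin")  # ~4.43
```

First-generation GDDR5 shipped at roughly 3.6-4.0 Gbps, so ~4.4 Gbps per pin is only a modest step up; halving the bus width without losing bandwidth looks plausible on paper.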
Originally posted by: keysplayr2003
Remember, ATI and Nvidia each have their features over the other. It is not one-sided here. ATI has DX10.1 and Nvidia has onboard PhysX. The 4870 has GDDR5; the GT200s have wider buses. Back and forth all day long. Exactly how valuable each of these features is will become known over the following year in the form of released titles that support one, the other, or both.
I do think NV can beat ATI at this game if they shift their design philosophy a bit; they have the resources to do R&D. It seems the monolithic design is not yielding as much benefit anymore.