Originally posted by: nosfe
what do i care about the 4870x2? can't you stay on the subject? i'm talking about the g200 architecture. you started the "one card" thing; you said that nvidia can place two g200s on a card, but it's not one card and it's not a g200 either.
It's difficult, and somewhat useless, to talk about a card without the context of how it performs against competing products in the same price range, or against the corresponding tier in a competitor's lineup. What does it matter that the GTX295 can run a game at 60fps without the context of a competing product doing it at 50fps? Or perhaps 70fps? Knowledge without comparison is of very little use.
Originally posted by: nosfe
Why is the g200 architecture suddenly moot? because it's pointless to talk about a "past" architecture? well, it's moot to talk about anything in this forum; it's not like we're changing the world here. so my point about the inefficiency of the g200 still stands.
To the rest of us, though, the performance-per-transistor line of debate means about as much as the day-to-day life of an amoeba on Venus. I don't count my transistors; I set the resolution, AA, and AF, then see how fast it runs and what the IQ is. If I paid for my cards, the cost of each would factor in as well.
I can honestly say that if NVIDIA figured out how to cram four GTX280s into one double-slot package and it performed 20% better than a 4870X2 with similar thermals and power, I'd pay 20% more for that card.
I'd never once think about efficiency, what it cost NVIDIA, or how many PCBs there are, because I'm a gamer, not a guy making design decisions for NVIDIA.
Originally posted by: nosfe
i don't think that nvidia was price gouging us (not me, i don't have that kind of money) when the gtx280 came on the scene; i think it has more to do with the fact that the die was too big, so they had to ask so much for it.
Obviously they didn't have to, as they lowered the price substantially and kept churning out sales.
Companies will make whatever margin they can, and those margins pay for things people value about NVIDIA cards: TWIMTBP compatibility out of the box, better warranties, step-up programs, PhysX, CUDA, etc.
Originally posted by: nosfe
i hope that nvidia will learn from this mistake and focus in the future on smaller dies, for those of us who can't afford $500 video cards.
That's sort of selfish, don't you think? I can afford $1000 video cards, and I want companies to give me the performance I can afford, not cater to the mainstream. Why don't people who can afford high-end tech deserve to be developed for?
Originally posted by: nosfe
Going with smaller dies and multi gpu solutions ensures that the failure rate is kept small and that helps the prices stay low.
I'd rather pay for the lower yields and get higher-end parts. I don't want my options limited by others' budgets. I bought an 18-foot, 150hp boat last year; I didn't post on the boat forums, "I wish boat manufacturers would concentrate more on what I can afford and stop wasting their time on those inefficient 300hp monstrosities."
Originally posted by: nosfe
sure, multi gpu solutions aren't the best right now, but they'll evolve, because two of ati's next-gen cards will probably (i don't know, just saying) beat one big chip from nvidia, which will "force" them to make their own multi gpu "single card" solution. so whether nvidia likes it or not, multi gpu is the future (whether it's sli, crossfire, or lucid hydra).
Lots and lots of people will never care about multi GPU until it's as seamless as a single GPU. (I'm not one of them.)
Originally posted by: nosfe
@nRollo
i wasn't talking about single cards vs. sandwich cards and which is better; it's just that when someone says "single card" in a discussion about architectures, you'd think they're talking about just that, a single card, not multiple cards. from an architecture point of view, making a single card is better than a sandwich of two (yes, it's useless for consumers, but most of the conversations on this forum are useless for them, and without them we'd be bored out of our skulls).
How is single card "better" if it doesn't offer better thermals, sound, power, performance, or image quality?
Originally posted by: nosfe
the reason why nvidia's solutions are higher performing is that a) they came second, so they knew what performance they had to achieve in order to beat ati's cards, and b) each of the individual chips is stronger.
OK
Originally posted by: nosfe
if given the choice between a 9800gx2 made of two cards and one made of only one card, i'd say nvidia would choose the one-card approach, as it's usually cheaper to produce. the problem is that it takes more research, and it's impossible to do with a chip like the g200 because of the 512-bit memory interface; it's too complex for the pcb.
G200 isn't defined by its 512-bit interface.