splicer707
Member
- Aug 27, 2008
Originally posted by: evolucion8
Originally posted by: taltamir
Originally posted by: evolucion8
Keys, you should understand that since you belong to a "focus group", you automatically lose credibility as an unbiased person; that's understandable, and you should understand it. That's how it looks to me and to many unbiased people here.
Just because you are NOT in a focus group doesn't mean you are unbiased... you seem far more biased than keys.
1. For me, you are even more nVidia-biased than keys, and I was talking about him, not about you or myself, so I don't know where you want to go with this.
2. I admitted that ATi currently offers the best bang for the buck and has the fastest video card on the market because it's the truth, not some lie.
3. DX10.1 clearly benefits gamers with its image quality improvements and performance,
4. PhysX will not impact your gameplay, only your performance when GPU limited,
5. having 2 GPUs in a system is not what I like,
6. of course PhysX is great, but not a selling point to buy inferior hardware,
7. nor is the almost-two-year-old technology on which the GTX 280 is based.
8. I own this ATi card because nVidia never launched an 8800GT on AGP; I just buy the best card in its category. The 9700PRO was my first card and was clearly faster than any GeForce FX, then my X800XT PE, which was clearly faster than any GeForce 6800 Ultra, then my X1950XT, which was clearly faster than any 79XX-series card, and now the HD 3850, which is not faster than the GeForce 8. Since DX10.1 is a standard and nVidia PhysX is not, it's a gamble to use it as an excuse to acquire an nVidia card or to justify its purchase.
9. So it seems that you need to learn a bit about the definition of bias before pointing your finger pointlessly.
Originally posted by: keysplayr2003
About biased people who try to hide the truth with a smokescreen.
1. Why are you talking about me and not about the topic? This thread will most likely be locked by a mod if that continues, and I'm not sure you want that to happen. Stick to the topic and all should be fine.
2. Bang for the buck is disputable, and ATI has the fastest video card when Xfire scales, and sometimes not even then against a GTX280. And talking about bang per buck, do you think a 4870X2 is $150.00 faster than a GTX280? I'm thinking not. Your bang-per-buck comment loses some mileage right there.
The HD 4870 benefits greatly from a 1GB framebuffer; even when it doesn't scale with the X2, it is so close in performance to the GTX 280 that paying for such a huge GPU is just a waste of money. We shall see when the HD 4870 1GB comes.
3. Link?
Link about what?
4. As would any game? PhysX or not? Your point?
PhysX will not change your gameplay experience; I don't see how smoke and dust flying around would change it. Crysis, with its great physics, didn't do anything to improve its gameplay experience.
5. Are you talking about straight crossfire and SLI? Or 9800GX2 and 3870X2/4870X2? Because all these have the limitations of Xfire/SLI. You're only saving a PCI-e slot in the end.
I meant two cards, using one for PhysX and one for graphics: that will increase power consumption and heat dissipation for something that isn't worth it.
6. What is inferior?
The fact that the GTX280 has the same old features as the almost-two-year-old 8800GTX proves that nVidia only loves money and doesn't love innovation.
7. The 2900XT released when? What do you think 3xxx/4xxx series are based on? Peanut butter?
That question is not even related to what you quoted from me, and I don't like peanut butter. The HD 3800 series has more features and much lower power consumption than the HD 2900XT; the HD 4870 has the same features as the HD 3800 series with some improvements, like in UVD and anti-aliasing performance, and is at least twice as fast.
8. How many games need to be announced before you consider PhysX a viable contender? 40? 50? Without counting the currently benchable games, there are about 10 games due out before the holiday season. I'd say that is a pretty good number considering Nvidia only acquired Ageia less than a year ago. Don't you think?
nVidia is the best in marketing and technology promotion. Yeah, it's a pretty good number, but still a ghost number until the games really come out.
9. We'd have to redefine the definition of "understatement" when using the phrase, "follow your own advice".
People will follow their own advice and what they like; it's never a bad idea to learn a bit more about both worlds to help choose which one is more suitable.
Originally posted by: jaredpace
PhysX is cool and neat. However, Ageia & Nvidia cannot compete against M$, Intel & AMD/ATi. If you read into PhysX vs. Havok, you can see the way this is panning out.
Originally posted by: splicer707
Just my 2c worth, but PhysX is the reason I purchased a GTX260.
Originally posted by: keysplayr2003
How many games need to be announced before you consider PhysX a viable contender? 40? 50? Without counting the currently benchable games, there are about 10 games due out before the holiday season. I'd say that is a pretty good number considering Nvidia only acquired Ageia less than a year ago. Don't you think?
It's not about how many titles are announced using PhysX, it's about AAA titles using PhysX and their release dates. You say 10 games before 2009; is that a fact? Here's what I was able to come up with.
Originally posted by: chizow
Actually I have been reading quite a bit on it, and I don't see how NV is even competing with MS/Intel/AMD. Even if they were, the fate of PhysX or any other API will be in the hands of game/software developers, not the hardware vendors, and right now NV clearly has a huge advantage and head start.
Here are the facts:
1) Nvidia can run hardware accelerated physics on their GPUs TODAY. There is no proof of concept or any evidence of Havok running on anything but a CPU.
2) The PhysX SDK is available TODAY. It includes both hardware and software back-end solvers for CPU, GPU, PPU, PS3, Xbox360, and even Wii and is clearly the most full-featured and compatible physics API available today.
3) The hardware requirements for multi-GPU PhysX are extremely flexible, allowing mixed and matched configurations with different GPU families along with any available chipset, unlike traditional SLI.
4) There are games and demos available right now that enable GPU-accelerated physics TODAY.
There are numerous other factors that may come into play in the future, like DX11 support of hardware physics or Havok running on Larrabee, but those are so far into the future that they simply do not matter, and they will obviously have a similar adoption and ramp-up time as PhysX. For now, those who do not have a hardware physics solution (Intel, AMD) can only blow smoke in an attempt to marginalize PhysX in order to buy time and slow the adoption rate.
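For the curious, point 2 above is literally a one-field switch in the PhysX 2.x SDK: the same scene description can request either a hardware or a software back-end solver. A rough sketch of what that looks like, assuming the 2.8-era API (exact identifiers may vary between SDK releases):

// PhysX 2.x-style initialization sketch (assumed 2.8-era API; names
// may differ slightly between SDK releases).
#include <NxPhysics.h>

NxPhysicsSDK* gSDK   = NULL;
NxScene*      gScene = NULL;

bool initPhysics(bool preferHardware)
{
    // Common entry point for the CPU (software) and PPU/GPU (hardware)
    // back-ends alike.
    gSDK = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
    if (!gSDK) return false;

    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    // The whole hardware/software choice is this one field: request a
    // hardware-simulated scene if an accelerator is present, otherwise
    // fall back to the CPU solver.
    bool hwPresent = (gSDK->getHWVersion() != NX_HW_VERSION_NONE);
    sceneDesc.simType = (preferHardware && hwPresent) ? NX_SIMULATION_HW
                                                      : NX_SIMULATION_SW;
    gScene = gSDK->createScene(sceneDesc);
    return gScene != NULL;
}

Everything above the solver (actors, joints, queries) stays the same either way, which is how the one SDK can target CPU, PPU, GPU, and the consoles.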
Originally posted by: SlowSpyder
You are technically right, PhysX is available "TODAY" for Nvidia 8-series and up GPUs. You are 100% correct, but that's the smokescreen that everyone is talking about being used by the Nvidia fanboys.
As opposed to all the ATI fanboys insisting hardware physics don't matter? Or don't matter until Intel/AMD/MS/Havok can run on the GPU?
Originally posted by: SlowSpyder
While it is true, the bottom line is that it is available, it's also just as true that it simply doesn't matter yet and doesn't look like it will in the near future.
And what are you basing this on? From your sig you obviously haven't tested it yourself... it seems plenty are impressed by the impact of PhysX in GRAW2 and UT3, as well as some of the tech demos that show what is possible.
Originally posted by: SlowSpyder
It does nothing to affect gameplay. Whoopee... I can shoot a hole in a flag, or there is more dust and debris when there is an explosion. It is nothing more than a checkbox feature at this point really.
Does nothing for you, maybe, but it doesn't need to affect gameplay to improve the overall game experience. Like any other eye candy, better physics effects can certainly improve immersiveness even if they aren't interactive.
Originally posted by: SlowSpyder
But, yes, it is available and I'm happy to see that it has its start, so it can hopefully grow and become meaningful. But at this point it means next to nothing, and who knows if PhysX, Havok, or something else altogether will become the standard for all companies.
And I'm sure people were saying similar things this time last year about DX10 games... and 64-bit OSes and 4GB+... and just about every other "new" tech that doesn't immediately translate into benefits in every application. Of course it takes time for adoption, but every day that passes favors Nvidia and PhysX.
Originally posted by: SSChevy2001
1) Havok FX is proof enough that GPU physics can be done on ATI. Even Nvidia PhysX can be done on an ATI card.
Yes, of course ATI cards can theoretically accelerate physics, but they do not have any such capability TODAY.
Originally posted by: SSChevy2001
2) Doesn't matter how many platforms support PhysX; only PCs with Nvidia GPUs 8-series or higher can currently use GPU PhysX. Why should developers spend more time just to add some effects for Nvidia card owners? More than likely you'll only see more smoke and debris from GPU PhysX.
Why would devs spend more time just to add some effects for Creative card owners? Why would devs spend more time just to add some effects for Vista/DX10 card owners? Game devs want to make their games the best they can, it's really that simple. Nvidia claims 70 million GPGPU/PhysX-capable parts, which is compelling reason enough. To put this into perspective, the PS3 and Xbox 360 reported 12 and 18 million units sold, respectively, a few months ago.
Originally posted by: SSChevy2001
3) Why should an Intel MB owner like myself waste more power on having 2 cards sitting in my PC? Nvidia needs to go the whole route and unlock SLI support, otherwise don't bother.
The answer to this is obvious... of course, it assumes you are interested in PhysX acceleration and have a game that is capable of running it.
Originally posted by: SSChevy2001
4) GRAW2 is about all you got. Nvidia only rehashed one map on UT3, way to go Nvidia.
It's more than 1 map, and UT3 shows how flexible the PhysX SDK really is. All it takes is downloadable content with PhysX extensions and changing the pointer to a hardware back-end solver to enable hardware PhysX in games that use software PhysX. The #1 and #2 game engines, UE3 and Gamebryo, both use PhysX, which certainly makes it easier for PhysX adoption going forward.
Originally posted by: chizow
Originally posted by: SSChevy2001
1) What purpose would having accelerated physics serve when there are no games?
Rofl? There are games; whether you care about them or not doesn't matter. The big difference, however, is that there aren't even tools available to enable GPU-accelerated physics on ATI cards, so there is nothing in that aspect for ATI card owners to look forward to. Maybe DX11 at the earliest, but that's over a year away before you'll even start seeing development for ATI parts.
Originally posted by: SSChevy2001
2) That's why most games are still made for DX9 and have DX10 pathways. Also, again, game consoles don't add to your point, as they don't support GPU acceleration.
Yep, and just like DX9 and DX10, those who can't run hardware physics will get the same old software/CPU physics you got before. You missed the point, though, as it's obvious why game devs would take the time to develop for 70 million parts for Nvidia, who dominates discrete GPU market share on the PC 2:1 over ATI. The console comparison was to illustrate how massive that 70 million figure is, as it's probably still more than all current-gen console sales combined. The EAX comparison is even more telling, with maybe 10-20% (based on the Valve survey) Creative parts but a much higher adoption rate for EAX. I'd say 40-50% of my games have EAX.
Originally posted by: SSChevy2001
3) Wrong, Intel owners aren't going to run 2 Nvidia cards in their PCs just for PhysX, as it's not worth the power consumption. Without SLI support it's just half-assed.
There are plenty of reviews that show mixed-GPU PhysX with 2 slower cards (8800GT+9600GT) provides the same FPS and gameplay experience as 2 faster cards in traditional SLI (9800GTX SLI) or a single fast GPU (GTX 280). When compared to traditional SLI the benefit is obvious, as you are on a chipset that does not support SLI but get the same gameplay experience without any multi-GPU hassles. When compared to the GTX 280, you may have an older card and not enough money to upgrade to the flagship, but you'd still see similar performance in PhysX titles.
SLI PhysX @ FiringSquad
Honestly, the benefits of PhysX flexibility are plainly obvious. The only thing missing right now is allowance of mixed-vendor PhysX, but that's more a WDDM driver issue than hardware compatibility.
Originally posted by: SSChevy2001
4) You're right, UE3 is helping a lot to support PhysX, but my point was Nvidia didn't really try to make a map pack for UT3. All they did was rehash 1 map.
And this is all based on support for Ageia's PPU, so of course little work was done to an existing title. Development budgets simply do not allow for much retroactive content, but games developed from the ground up will certainly implement as many features as their budget will allow, and Nvidia/PhysX clearly have the advantage in that regard.
Originally posted by: SSChevy2001
To me CUDA is more important than PhysX right now. For current video card buyers, PhysX support should be at the bottom of the list until real games come out.
Not really sure what that means, since GPU PhysX relies on CUDA. For other parallel computing tasks, I don't see anything more important to games than PhysX.
Originally posted by: Wreckage
I assume you are talking about the 4870X2, but then you say...
Originally posted by: evolucion8
I admitted that ATi currently offers the best bang for the buck and has the fastest video card on the market because it's the truth, not some lie.
having 2 GPUs in a system is not what I like,
:roll:
Originally posted by: evolucion8
The fact that the GTX280 has the same old features as the almost-two-year-old 8800GTX proves that nVidia only loves money and doesn't love innovation.
Originally posted by: taltamir
why do all the anti-PhysX people here keep on saying "I can shoot a hole in a flag, or there is more dust / smoke / debris when there is an explosion, who cares!" and the like when they try to belittle PhysX?
Extra smoke and debris are what is called "2nd order physics": they have no effect on gameplay, and yes, nobody here is really impressed by them.
But you keep on claiming that "1st order physics", where an object realistically interacts with other objects, is impossible with PhysX.
This is simply not true. First-order physics IS possible with PhysX, and it exists in several games already. It is not complete yet (aka, not every single object fully interacts with every other object in a realistic manner), and it will not be for years to come. But it is a significant leap in such calculations.
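To make the 1st-order/2nd-order distinction concrete: in PhysX terms, a 1st-order object is just a dynamic rigid body whose simulated result is read back by the game logic, as opposed to fire-and-forget particle debris. A rough 2.x-style sketch (assumed API, same caveat as above about exact names):

// A gameplay-relevant ("1st order") rigid body: the solver moves it and
// the game reads the result back, unlike cosmetic particle debris.
NxActor* createCrate(NxScene& scene)
{
    NxBoxShapeDesc box;
    box.dimensions = NxVec3(0.5f, 0.5f, 0.5f);   // half-extents: a 1m cube

    NxBodyDesc body;
    body.mass = 10.0f;                           // dynamic, not static scenery

    NxActorDesc actor;
    actor.shapes.pushBack(&box);
    actor.body = &body;
    actor.globalPose.t = NxVec3(0.0f, 5.0f, 0.0f);
    return scene.createActor(actor);
}

// Per frame: step the simulation, then let gameplay react to where the
// crate actually ended up (blocking a door, crushing a player, etc.).
void stepAndReact(NxScene& scene, NxActor& crate, float dt)
{
    scene.simulate(dt);
    scene.flushStream();
    scene.fetchResults(NX_RIGID_BODY_FINISHED, true);
    NxVec3 pos = crate.getGlobalPosition();      // feed back into game logic
    (void)pos;
}

The 2nd-order stuff people complain about is the same pipeline minus the read-back: the debris is simulated and rendered, then never consulted by the game again.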
Originally posted by: Wreckage
Originally posted by: taltamir
Extra smoke and debris are what is called "2nd order physics": they have no effect on gameplay, and yes, nobody here is really impressed by them.
http://www.youtube.com/watch?v...JiXaYs&feature=related
It shows a lot of what PhysX can add to gameplay.
Originally posted by: SSChevy2001
1) What games? You're not talking about Mass Effect or Gears of War? Those games don't benefit even the slightest from GPU PhysX acceleration, so why are we still adding them to the list? ATI has no problem running CUDA or PhysX, they just don't want to work with Nvidia.
GRAW2 and UT3, two games developed with the AGEIA PPU in mind, with multiple titles announced under development for native GPU PhysX. But you're right, ATI doesn't want to support Nvidia's IP; they'd much rather blow smoke up your ass about Havok.
Originally posted by: SSChevy2001
2) It's like how Crysis DX10 very high isn't that much different than very high for DX9. Again, I don't expect to see much of a difference in GPU PhysX because the games still need to support the majority of hardware. Between the Wii, 360, and PS3, there are about 67 million and growing, not including ATI GPUs, which can't support GPU PhysX. Then there's also the fact that ATI's current generation of GPUs is selling hot right now.
There's enough difference even in Crysis to make the feature worthwhile, but like any game, there are going to be different levels of implementation, just like there are in the PhysX games and demos we've seen so far.
Originally posted by: SSChevy2001
3) It's about the power and heat consumption when the cards aren't in use, and the fact that there aren't enough GPU PhysX based games. At least with SLI you know there's a big list of games that you can play and see the benefit from having 2 cards. Sorry, but I seriously doubt anyone here would currently think of running 2 Nvidia cards on an Intel MB just to increase some FPS in demos. Of course this is not a problem for an Nvidia MB, but it's a limiting factor for the others.
How is it about power consumption and heat if you're willing to run 2 cards in SLI anyway? The point is it offers options, so that you have upgrade paths and a use for the card you are upgrading from, beyond throwing it in a 2nd rig or salvaging it for a grande latte at Starbucks after selling it on eBay.
Originally posted by: SSChevy2001
While you brought up the FiringSquad review, it looks like PhysX still requires lots of CPU power. It's very clear from the results; also let me add some Fudzilla.
http://www.fudzilla.com/index....=view&id=8859&Itemid=1
4) Just proves the point that older titles aren't going to get any advantage from now having GPU PhysX. It also questions the amount of time developers will spend on current titles.
And support of older titles was never a question, but there are games out there that are constantly supported and updated that could easily install a hardware PhysX pack. Will they ever be as impressive as a title developed with hardware PhysX in mind from the ground up? Of course not, but they still have the potential to be better than what they are now... As to current games under development, I think it's quite obvious the implementation can be simple and seamless for titles that already use the PhysX SDK for software PhysX.
Originally posted by: SSChevy2001
5) CUDA can be used in applications like video encoding, Photoshop, and other applications, and yes, PhysX. I simply find CUDA-based applications to be more important than PhysX right now.
Nice and all, but completely unrelated to games.
Originally posted by: SSChevy2001
@chizow
1) UT3 will more than likely never see another PhysX map other than the demo maps that are already out. GRAW2 isn't much of a difference with GPU PhysX. Announced titles are great, but they're far from coming out. Out of the small list I posted, most aren't going to be out till next year, and even then you're getting real close to DX11 emerging.
Just like DX10 titles were far from coming out last year when a similar discussion was taking place? 25 PhysX games before Xmas and 25 more in Q1 09. Perhaps not all of them will be GPU-accelerated, but it's clear PhysX is thriving as developers look to improve their games.
Originally posted by: SSChevy2001
2) I beg to differ about Crysis DX10 vs DX9: very high isn't that noticeable. Check it out for yourself.
http://www.gamespot.com/featur...140/index.html?cpage=4
Going by yours and my numbers, there are 70 million that support GPU PhysX and 102 million that don't. That, and also I bet more games are sold on consoles than on the PC currently. EAX more than likely isn't as hard to implement as adding PhysX.
Disagree about what? I agree it's not the best example of DX9 vs DX10, but the differences are still there and make DX10 worthwhile. I'm quite familiar with that Crysis comparison, as I've linked to it many times, and if you noticed their conclusion:
"The hacked very high quality settings under Windows XP were almost 20 percent faster than the Vista frame rates, but comparing frame rates between the two is pointless because it isn't an apples-to-apples comparison. The image differences between the two versions indicate that they don't have an identical workload."
Originally posted by: SSChevy2001
3) If I can run 100 games on SLI vs only 2 games with PhysX on a multi-GPU setup, then it does make a difference. Point is, it's a half-assed offering by Nvidia. What don't you get? It's like giving me a fuel cell car when there are no gas stations for it, with 10 stations coming next year.
Then you've got the fact that PhysX is eating more CPU power when the work is supposed to be done on the GPU. It's a CPU bottleneck nightmare for people with 2 GTX280s.
But you can't run SLI on your Intel chipset, and in that case you would gain an advantage by running a 2nd card for PhysX in those 2 games and all future games. Your fuel cell car analogy fails because there are games now, and NV's solution is a hybrid that allows you to run on both gasoline and hydrogen.
Originally posted by: SSChevy2001
4) Still, developers are going to have to be willing to add new stuff to games when they know that more than half the users might not be able to benefit from it. Remember, most of the PhysX games list is multi-platform. Also, if there are multiplayer levels, things can't change much for those levels.
And again, they've shown the willingness to add features for the bleeding-edge minority time and time again. I've already given concrete examples of specific feature sets added for hardware with a much smaller install base than Nvidia's 70 million CUDA-capable GPUs.
Originally posted by: SSChevy2001
5) It's related to PhysX being at the bottom of the list when it comes to buying a video card today. Something Nvidia should be marketing more of right now.
I guess it just emphasizes the irony of buying a GPU for non-game-related applications while trying to de-emphasize the impact of GPU PhysX.
Originally posted by: SSChevy2001
How is that any better than Crysis physics?
http://www.youtube.com/watch?v=YG5qDeWHNmk
Originally posted by: chizow
Just like DX10 titles were far from coming out last year when a similar discussion was taking place? 25 PhysX games before Xmas and 25 more in Q1 09. Perhaps not all of them will be GPU-accelerated, but it's clear PhysX is thriving as developers look to improve their games.
25 games before the end of the year is BS. List them like I did in the previous post. You can't, because there aren't 25 games that will use GPU PhysX. We're talking games that are developed with GPU PhysX in mind.
Originally posted by: chizow
Also, it doesn't matter what you think will happen with existing PhysX games or whether they add much to the game itself; the games do exist. Stop acting as if they don't, because it's obvious you're just trolling.
It does matter if they don't add anything new, not even added FPS. How many more FPS do you get in a regular UT3 map by using PhysX? Or Gears of War or Mass Effect or...? Why list games for the hell of it, just because they had software PhysX and offer GPU acceleration that isn't needed?
Originally posted by: chizow
It's also very obvious DX10 looks better when comparing shadows and water, particularly on the 2nd page, which is where DX10 typically provides the biggest advantages. Like I said, it depends on the game and how well DX10 is implemented, just as it will with PhysX.
As for the adoption of PhysX, it's obvious you're grasping at straws at this point. It's clear devs will implement features without an overwhelming install base, as with DX10 and EAX, but in this case Nvidia does have an overwhelming install base, making the decision even easier.
Originally posted by: SSChevy2001
The point is CUDA still does a lot of work on the CPU. Currently Badaboom eats about 30% of my quad core while encoding, which clearly shows CUDA still requires a good amount of CPU usage. PhysX will also eat more CPU because of CUDA, and it clearly shows in-game. The point is that the PPU handled PhysX with less of a CPU hit.
The additional physics processing is being done on the GPU, but like anything else, increasing settings/details will also increase load on the CPU. This isn't any different from current software/CPU physics, but the difference is that the GPU is still clearly much faster than the CPU at running hardware physics.
Originally posted by: SSChevy2001
I guess it's just the fact that the GPU being used for something more than games is more attractive right now vs some extra debris and smoke.
Again, they've shown the willingness to add features for the bleeding-edge minority time and time again. I've already given concrete examples of specific feature sets added for hardware with a much smaller install base than Nvidia's 70 million CUDA-capable GPUs.
I guess it just emphasizes the irony of buying a GPU for non-game-related applications while trying to de-emphasize the impact of GPU PhysX.
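On the Badaboom CPU-usage point, at least part of that is simply how the CUDA host thread waits for the GPU: by default the runtime may spin-poll, which shows up as a fully busy core even though the real work is on the GPU. A small sketch; the blocking-sync flag spelling is an assumption about the toolkit version (older CUDA releases call it cudaDeviceBlockingSync):

// Sketch: reduce host-CPU burn while waiting on the GPU.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void busyKernel(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;   // stand-in GPU work
}

int main()
{
    // Ask the runtime to yield/block instead of spin-polling while it
    // waits on the GPU; must be set before the context is created.
    cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);

    const int n = 1 << 20;
    float* d = NULL;
    cudaMalloc((void**)&d, n * sizeof(float));

    busyKernel<<<(n + 255) / 256, 256>>>(d, n);
    // Without the flag above, this wait alone can peg a CPU core.
    cudaDeviceSynchronize();

    cudaFree(d);
    printf("done\n");
    return 0;
}

That doesn't make the CPU cost zero (batching, memory copies, and driver work are real), but it suggests "CUDA eats 30% of my quad core" is partly a scheduling choice, not all physics math.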
Originally posted by: Wreckage
Originally posted by: SSChevy2001
How is that any better than Crysis physics?
http://www.youtube.com/watch?v=YG5qDeWHNmk
"This Video is rendered frame by frame, for frame by frame rendering follow this tutorial:"
http://forums.facepunchstudios...owthread.php?p=8179898
That's how.
Originally posted by: SSChevy2001
Well, it's Crysis on very high with a stock Q6600, which we know Crysis isn't optimized for. Also, he's only using one 8800 Ultra. Most of the Crysis physics movies were taken 8 months ago, when there were no 9-series or 200-series cards.