Question

I recently compared some of the physics engines out there for simulation and game development. Some are free, some are open source, some are commercial (one is even very commercial, $$$$): Havok, ODE, Newton (a.k.a. oxNewton), Bullet, PhysX, and the "raw" built-in physics in some 3D engines.

At some stage I came to a conclusion, or rather a question: why should I use anything but NVidia PhysX if I can make use of its amazing performance (when I need it) thanks to GPU processing? With future NVidia cards I can expect further improvements independent of the regular CPU generation steps. The SDK is free and it is available for Linux as well. Of course it is a bit of vendor lock-in, and it is not open source.

What's your view or experience? If you were starting development right now, would you agree with the above?

cheers

Solution

Disclaimer: I've never used PhysX, my professional experience is restricted to Bullet, Newton, and ODE. Of those three, ODE is far and away my favorite; it's the most numerically stable and the other two have maturity issues (useful joints not implemented, legal joint/motor combinations behaving in undefined ways, &c).

You alluded to the vendor lock-in issue in your question, but it's worth repeating: if you use PhysX as your sole physics solution, people using AMD cards will not be able to run your game (yes, I know it can be made to work, but it's not official or supported by NVIDIA). One way around this is to define a failover engine, using ODE or something similar on systems with AMD cards. This works, but it doubles your workload. It's seductive to think that you'll be able to hide the differences between the two engines behind a common interface and write the bulk of your game physics code once, but most of your difficulties with game physics will be in dealing with the idiosyncrasies of your particular physics engine: deciding on values for things like contact friction and restitution. Those values don't have consistent meanings across physics engines and (mostly) can't be formally derived, so you're stuck finding good-looking, playable values by experiment. With PhysX plus a failover you're doing all that scut work twice.
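To make the "doing it twice" point concrete, here is a purely illustrative C++ sketch. Every name in it (PhysicsBackend, MaterialTuning, tuningFor) is hypothetical, made up for this example and not taken from PhysX, ODE, or any other SDK; the point is only that the friction and restitution numbers end up in per-engine tables you fill in by experiment, once per backend:

```cpp
// Hypothetical illustration of per-engine tuning tables behind one interface.
// None of these names come from a real SDK.
#include <stdexcept>

enum class PhysicsBackend { GpuEngine, CpuFallback };

struct MaterialTuning {
    float contactFriction;  // found by experiment, separately for each engine
    float restitution;      // the same conceptual "bounciness", different numbers
};

// The same in-game material ("crate on concrete") needs different values
// depending on which engine actually resolves the contacts.
MaterialTuning tuningFor(PhysicsBackend backend) {
    switch (backend) {
        case PhysicsBackend::GpuEngine:   return {0.62f, 0.18f};  // tuned against engine A
        case PhysicsBackend::CpuFallback: return {0.85f, 0.05f};  // re-tuned against engine B
    }
    throw std::logic_error("unknown backend");
}
```

Every material, joint limit, and solver setting ends up with a row in a table like this, once per engine, and each row is found by play-testing; that is exactly the doubled scut work described above.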

At a higher level, I don't think any of the stream processing APIs are fully baked yet, and I'd be reluctant to commit to one until, at the very least, we've seen how the customer reaction to Intel's Larrabee shapes people's designs.

So, far from seeing PhysX as the obvious choice for high-end game development, I'd say it should be avoided unless either you don't think people with AMD cards make up a significant fraction of your player base (highly unlikely) or you have enough coding and QA manpower to test two physics engines (more plausible, though if your company is that wealthy, I've heard good things about Havok). Or, I guess, if you've designed a physics game with performance demands so intense that only streaming physics can satisfy you - but in that case, I'd advise you to start a band and let Moore's Law do its thing for a year or two.

OTHER TIPS

An early 2013 update answer: I develop for what I consider the big three operating systems: Linux, OS X, and Windows. I also develop with the big three physics libraries: PhysX, Havok, and Bullet.

Concerning PhysX, I recently did some tests with the newest incarnation, 3.2.2 as of the time of this writing. In my opinion nVidia has really reduced the effectiveness of the library. The biggest issue is the lack of GPU acceleration for rigid bodies: the lib only accelerates particles and cloth, and even those do not interface with general rigid bodies. I am completely puzzled by nVidia doing this, since they have a huge marketing drive pushing GPU-accelerated apps, with a focus on scientific computation where physics simulation is a large driving force.

So while I expected the kings of physics simulation to be PhysX, Havok, and Bullet, in that order, in reality I see the reverse. Bullet has released version 2.8.1 with a sampling of OpenCL support. Bullet is a relatively small lib with generous licensing. Their goal is for release 3 to have fully integrated OpenCL rigid-body acceleration.

Part of the comments talk about multiple code paths. My opinion is that this is not too big a deal. I already support three OSes with minimal hard-coded platform support (threading for the most part); I avoid OS-specific code and use C++ and the standard library templates. It is similar for the physics libraries: I use a shared library and abstract a common interface. This is fine because physics doesn't change much ;) You will still need to set up a simulation environment, manage objects, run the simulation iterations, and clean up when finished. The rest is flash, implemented at leisure.
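As a rough sketch of the kind of common interface described above (the class and method names here are invented for illustration and do not correspond to any particular engine's API):

```cpp
// Hypothetical common interface; each physics library gets its own implementation,
// typically compiled into its own shared library and selected at startup.
#include <memory>

struct Vec3 { float x, y, z; };

class PhysicsWorld {
public:
    virtual ~PhysicsWorld() = default;                     // clean up when finished
    virtual void setGravity(const Vec3& g) = 0;            // set up the simulation environment
    virtual int  addRigidBox(const Vec3& halfExtents,
                             const Vec3& position,
                             float mass) = 0;              // manage objects; returns a handle
    virtual void step(float dtSeconds) = 0;                // advance the simulation one iteration
    virtual Vec3 positionOf(int bodyHandle) const = 0;     // read results back for rendering
};

// Factory implemented once per backend (e.g. a Bullet build and a PhysX build).
std::unique_ptr<PhysicsWorld> createPhysicsWorld();
```

The game code only ever talks to PhysicsWorld; swapping engines then comes down to linking a different implementation and redoing the tuning pass.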

With the advent of OpenCL in mainstream libraries (nVidia CUDA is very close - see the Bullet OpenCL demos), the physics plugin work will shrink.

So, starting from scratch and only concerned with physics modeling? You can't go wrong with Bullet: it's small, has a flexible (free) license, and is very close to production-ready OpenCL support that will be cross-platform across the big three operating systems and GPU vendors.
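For a sense of how small Bullet is to get running, here is roughly what a minimal CPU-only world looks like with the Bullet 2.8x C++ API. Treat it as a sketch: header paths and minor details can differ between versions and installs.

```cpp
#include <btBulletDynamicsCommon.h>
#include <cstdio>

int main() {
    // Standard Bullet 2.x setup: collision config, dispatcher, broadphase, solver, world.
    btDefaultCollisionConfiguration collisionConfig;
    btCollisionDispatcher dispatcher(&collisionConfig);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &collisionConfig);
    world.setGravity(btVector3(0, -9.81f, 0));

    // One dynamic sphere dropped from 10 m.
    btSphereShape sphere(0.5f);
    btVector3 inertia(0, 0, 0);
    btScalar mass = 1.0f;
    sphere.calculateLocalInertia(mass, inertia);
    btDefaultMotionState motionState(
        btTransform(btQuaternion(0, 0, 0, 1), btVector3(0, 10, 0)));
    btRigidBody body(btRigidBody::btRigidBodyConstructionInfo(
        mass, &motionState, &sphere, inertia));
    world.addRigidBody(&body);

    // Step at 60 Hz for two simulated seconds.
    for (int i = 0; i < 120; ++i)
        world.stepSimulation(1.0f / 60.0f);

    btTransform t;
    motionState.getWorldTransform(t);
    std::printf("sphere y after 2 s: %f\n", t.getOrigin().getY());

    world.removeRigidBody(&body);
    return 0;
}
```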

Good luck!

You may find this interesting:

http://www.xbitlabs.com/news/video/display/20091001171332_AMD_Nvidia_PhysX_Will_Be_Irrelevant.html

It is biased ... it's basically an interview with AMD ... but it makes some points which I think are worth considering in your case.

Because of the issues David Seiler pointed out, switching physics engines some time in the future may be a huge/insurmountable problem... particularly if the gameplay is tightly bound to the physics.

So, if you really want hardware-accelerated physics in your engine NOW, go for PhysX, but be aware that when solutions such as those postulated by AMD in this article become available (they absolutely will, but they're not here yet), you will be faced with unpleasant choices:

1) rewrite your engine to use (insert name of new cross-platform hardware accelerated physics engine), potentially changing the dynamics of your game in a Bad Way

2) continue using PhysX only, entirely neglecting AMD users

3) try to get PhysX to work on AMD GPUs (blech...)

Aside from David's idea of using a CPU physics engine as a fallback (doing twice the work and producing two engines which do not behave identically), your only other option is to use pure CPU physics.

However, as stuff like OpenCL becomes mainstream we may see ODE/Bullet/kin starting to incorporate that ... IOW if you code it now with ODE/Bullet/kin you might (probably will eventually) get the GPU acceleration for "free" later on (no changes to your code). It'll still behave slightly differently with the GPU version (an unavoidable problem because of the butterfly effect and differences in floating-point implementation), but at least you'll have the ODE/Bullet/kin community working with you to reduce that gap.
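The butterfly-effect point can be demonstrated without any physics library at all. The sketch below (plain C++, no GPU involved) just iterates a sensitive nonlinear map in float and double precision; the two trajectories start from the same value and drift apart, which is the same mechanism that makes a CPU and a GPU solver diverge over time even from identical inputs:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Logistic map in its chaotic regime: tiny representation/rounding differences
    // between float and double grow with every iteration.
    float  xf = 0.4f;
    double xd = 0.4;

    for (int step = 1; step <= 60; ++step) {
        xf = 3.9f * xf * (1.0f - xf);
        xd = 3.9  * xd * (1.0  - xd);
        if (step % 10 == 0)
            std::printf("step %2d  float=%.6f  double=%.6f  |diff|=%.6f\n",
                        step, xf, xd, std::fabs(static_cast<double>(xf) - xd));
    }
    return 0;
}
```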

That's my recommendation: use an open source physics library which currently only uses the CPU, and wait for it to make use of GPUs via OpenCL, CUDA, ATI's stream language, etc. Performance will be screaming fast when that happens, and you'll save yourself headaches.

The hypothetical benefit of future gfx cards is all well and good, but there will be future benefits from extra CPU cores too. Can you be sure that future gfx cards will always have spare capacity for your physics?

But probably the best reason, albeit a little vague in this case, is that performance isn't everything. As with any 3rd party library, you may need to support and upgrade that code for years to come, and you're going to want to make sure that the interfaces are reasonable, the documentation is good, and that it has the capabilities that you require.

There may also be more mathematical concerns such as some APIs offering more stable equation solving and the like, but I'll leave comment on that to an expert.

I have used ODE and am now using PhysX. PhysX makes building scenes easier and (in my personal opinion) seems more realistic; however, there is no adequate documentation for PhysX, in fact hardly any documentation at all. On the other hand, ODE is open source and there are plenty of documents, tutorials, etc. PS: Using GPU acceleration is helping me and my colleagues significantly; we are using APEX destruction and PhysX particles.

PhysX works with non-nVidia cards; it just doesn't get GPU acceleration, which leaves it in the same position the other engines start from. The problem is if you have a physics simulation which is only workable with hardware acceleration.

If all your code is massively parallelizable, then go for it!

For everything else, GPUs are woefully inadequate.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow