Question

Why would LWJGL be so much slower than, say, Unity's OpenGL implementation or even Ogre3D? I'll begin with some rough "benchmarks" (if you would even call them that) from my own tests.

Hardware:

  • i5-3570K @ 4.3 GHz

  • GTX 780 @ 1150 MHz

First Test: Place 350,000 triangles on screen (modified Stanford Dragon)

Results:

  • GTX 780 Renders at 37 FPS (USING LWJGL)
  • GTX 780 Renders at ~300 FPS (USING UNITY3D)
  • GTX 780 Renders at ~280 FPS (USING OGRE3D)

Second Test: Render Crytek Sponza with textures (around 200,000 vertices, I believe)

Results:

  • GTX 780 Renders at 2 FPS (USING LWJGL)
  • GTX 780 Renders at ~150 FPS (USING UNITY3D)
  • GTX 780 Renders at ~130 FPS (USING OGRE3D)

Normally I use Ogre3D, Unity3D, or Panda3D to render my game projects, but the difference in frame rates here is staggering. I know Unity has features like occlusion culling, so it's generally the quickest, but even when using similar calls in Ogre3D I would expect results comparable to LWJGL's. Ogre3D and LWJGL are both doing front-face-only culling, yet LWJGL gets no performance increase versus rendering everything. One last thing: LWJGL tends to exceed 2.5 GB of RAM usage while rendering Sponza, but that doesn't explain the other results.
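For reference, the "similar calls" for culling amount to standard OpenGL state changes; a minimal sketch of what that looks like through LWJGL's GL11 bindings (the GL_FRONT mode matches the front-face-only culling described above):

```java
import static org.lwjgl.opengl.GL11.*;

// Enable face culling and discard front-facing triangles,
// matching the front-face-only culling described above.
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
```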


Solution

If anyone is having the same issue: the problem is NOT Java, I've realized. Recording immediate-mode draw calls into display lists is deprecated and yields poor performance. You MUST use VBOs, not display lists. In my case, performance increased by up to 600x on my laptop.
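As a rough illustration, here is a minimal sketch of the VBO path using LWJGL's GL15 bindings. The class name and mesh format (tightly packed x, y, z floats) are assumptions for the example, not the original code:

```java
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;

/** Hypothetical static-mesh renderer using a VBO instead of a display list. */
public class VboMesh {
    private final int vboId;
    private final int vertexCount;

    /** vertices: tightly packed x,y,z floats (assumed input format). */
    public VboMesh(float[] vertices) {
        vertexCount = vertices.length / 3;

        // Copy the vertex data into a direct buffer and upload it to the GPU once.
        FloatBuffer data = BufferUtils.createFloatBuffer(vertices.length);
        data.put(vertices).flip();

        vboId = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glBufferData(GL_ARRAY_BUFFER, data, GL_STATIC_DRAW); // static geometry
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }

    /** Per-frame draw: a handful of GL calls, no per-vertex work from Java. */
    public void draw() {
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, 0L);
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }

    public void delete() {
        glDeleteBuffers(vboId);
    }
}
```

The key difference is that the vertex data crosses into the driver once at upload time and stays in GPU memory; each frame is then just a few GL calls. Display lists also compile the immediate-mode calls once, but that path is deprecated and often poorly optimized on modern drivers, which fits the results above.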
