Question

I'm observing a strange phenomenon in my OpenGL program, written in C# with OpenTK using the core profile. When displaying Mandelbrot data from a heightmap with ~1M vertices, performance differs depending on the scale value of my view matrices (the projection is orthographic, so yes, I need scale). The data is rendered using VBOs, and the render process includes lighting and shadow maps.

My only guess is that something in the shader "errors out" at low scale values and some error-handling path kicks in. Any hints?

Examples: Example 1, Example 2


Solution

There is nothing unusual about this at all. At lower scale values, your mesh does not cover much of the screen, so it does not produce very many fragments. At larger scales, the entire screen is covered by your mesh and, worse still, overdraw becomes a huge factor.
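A quick way to confirm the fragment-bound hypothesis is to render into a smaller viewport: if the frame rate rises roughly in proportion to the drop in pixel count, the fragment stage is the bottleneck. A minimal sketch using the OpenTK 4.x GL4 bindings; `windowWidth` and `windowHeight` are placeholders for your client-area size:

```csharp
using OpenTK.Graphics.OpenGL4;

// Diagnostic only: draw at half the window dimensions, i.e. a quarter of the
// pixels. A large FPS improvement here means fragment work (shading and
// overdraw), not vertex work, is what's limiting you.
GL.Viewport(0, 0, windowWidth / 2, windowHeight / 2);
// ... draw the scene as usual ...

// Restore the full-size viewport for normal rendering.
GL.Viewport(0, 0, windowWidth, windowHeight);
```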

You are fragment bound in this scenario. Reducing the complexity of your fragment shader should help, and a Z pre-pass to reduce overdraw will also help; a sketch of such a pre-pass follows.
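A minimal sketch of a depth (Z) pre-pass with the OpenTK 4.x GL4 bindings. `depthPrepassShader`, `mainShader`, and `DrawScene()` are hypothetical placeholders for your own shader wrapper and draw call; the GL state calls are the actual technique:

```csharp
using OpenTK.Graphics.OpenGL4;

// Clear both buffers once, while color writes are still enabled.
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

// Pass 1: depth only. Disable color writes and lay down the nearest depth
// per pixel using a trivial (or empty) fragment shader.
GL.ColorMask(false, false, false, false);
GL.DepthMask(true);
GL.DepthFunc(DepthFunction.Less);
depthPrepassShader.Use(); // placeholder: depth-only program
DrawScene();              // placeholder: your VBO draw calls

// Pass 2: full shading against the pre-filled depth buffer. With LEQUAL and
// depth writes off, only the front-most fragment of each pixel runs the
// expensive lighting/shadow shader, so shaded overdraw is eliminated.
GL.ColorMask(true, true, true, true);
GL.DepthMask(false);
GL.DepthFunc(DepthFunction.Lequal);
mainShader.Use();         // placeholder: full lighting/shadow program
DrawScene();

// Restore default depth state for anything rendered afterwards.
GL.DepthMask(true);
GL.DepthFunc(DepthFunction.Less);
```

The trade-off is that the geometry is submitted twice, but with ~1M vertices and a heavy fragment shader the second vertex pass is usually far cheaper than the overdraw it avoids.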
