Question

I've been working on my game's core for the past week, and I hit a wall because rendering was simply not good enough: movement was jerky, I was getting tearing, and there was a lot of lag in general. I suspected it might not be my game engine's fault, so I tested rendering with a very simple game loop:

        sf::RenderWindow window(sf::VideoMode(1024, 768), "Testing");
        window.setVerticalSyncEnabled(true);
        sf::Clock clock;
        sf::Event event;
        float elapsed = 0.f;
        while (window.isOpen())
        {
                elapsed += clock.restart().asSeconds();
                std::cout << 1.f / elapsed << std::endl;
                while (elapsed > 1.f / 60.f)
                {
                        while (window.pollEvent(event))
                        {
                                if (event.type == sf::Event::Closed || event.key.code == sf::Keyboard::Escape)
                                {
                                        window.close();
                                }
                        }
                        elapsed -= 1.f / 60.f;
                }
                window.clear();
                window.display();
        }

The fps starts at 40, goes up to 60, then falls back to 30, climbs again, and repeats. If I set VSync to false, I get anywhere between 30-500 fps. Either I am not testing the frame rate correctly or there is something wrong with my nvidia drivers (I reinstalled them twice with no change). Any help is appreciated!


Solution

You pointed me to an article with code similar to yours, but you wrote yours differently.

From: gameprogrammingpatterns.com/game-loop.html

double previous = getCurrentTime();
double lag = 0.0;
while (true)
{
  double current = getCurrentTime();
  double elapsed = current - previous;
  previous = current;
  lag += elapsed;

  processInput();

  while (lag >= MS_PER_UPDATE)
  {
    update();
    lag -= MS_PER_UPDATE;
  }

  render();
}

You seem to be using one variable, elapsed, for both elapsed and lag. That is what was baffling me. Your mangling of elapsed makes it unusable for measuring time. I think your code should look more like:

    sf::RenderWindow window(sf::VideoMode(1024, 768), "Testing");
    window.setVerticalSyncEnabled(true);
    sf::Clock clock;
    sf::Event event;
    float lag = 0.f;
    float elapsed = 0.f;

    while (window.isOpen())
    {
            elapsed = clock.restart().asSeconds();
            lag += elapsed; // accumulate lag; don't overwrite it
            std::cout << 1.f / elapsed << std::endl;
            while (lag > 1.f / 60.f)
            {
                    while (window.pollEvent(event))
                    {
                            // check the event type before reading event.key.code
                            if (event.type == sf::Event::Closed ||
                                (event.type == sf::Event::KeyPressed &&
                                 event.key.code == sf::Keyboard::Escape))
                            {
                                    window.close();
                            }
                    }
                    lag -= 1.f / 60.f;
            }
            window.clear();
            window.display();
    }

I am still not sure if this will be correct. I don't know what clock.restart().asSeconds() does exactly. Personally, I would implement it line by line like the example. Why redesign working code?
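For what it's worth, the accumulator logic of that pattern can be checked in isolation by injecting per-frame times instead of reading a real clock. This is only a sketch: countUpdates and its parameters are illustrative names, not part of SFML or the quoted article.

```cpp
#include <vector>

// The pattern's accumulator logic, isolated so it can be exercised without a
// real clock: per-frame elapsed times are injected and the number of fixed
// updates is counted.
int countUpdates(const std::vector<double>& frameTimes, double stepSeconds)
{
    double lag = 0.0;
    int updates = 0;
    for (double elapsed : frameTimes) // one iteration per rendered frame
    {
        lag += elapsed;
        // processInput() would go here
        while (lag >= stepSeconds)
        {
            ++updates;        // update() runs on a fixed timestep
            lag -= stepSeconds;
        }
        // render() would go here
    }
    return updates;
}
```

Note that leftover lag carries over between frames, which is exactly what overwriting the variable each frame throws away.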

Edit: OP confirmed that using elapsed for "throttling" was breaking its purpose as a time-measurement variable.

OTHER TIPS

Here's what I think is happening. If I understand your code correctly, you want to keep the timestep of the simulation constant, at 60 updates a second. You're using a sort of accumulator to keep track of how many times you should be running your loop. The problem with an accumulator like this is that you must remember that your simulation loop itself takes time. To illustrate why that might be a problem, let's examine some cases. For clarity, I'm going to use a timestep of 20 milliseconds instead of 16.6666 (1/60th of a second). Let's assume:

  1. Your simulation loop takes 10 milliseconds to run. The first time elapsed >= 20, it's reset and your simulation loop is run. The next time you get back to the top, elapsed == 10 + c where c is something small, and it skips. Eventually, elapsed >= 20 again and it repeats. Everything is fine.
  2. Your simulation loop takes exactly 20 milliseconds to run. elapsed gets to 20, and the inner loop takes 20 milliseconds to run. The next time you run your loop, elapsed == 20. It repeats. Everything is basically fine.
  3. Your simulation loop takes 40 milliseconds to run. When elapsed gets to 20, the inner loop runs, taking 40 milliseconds. Then, the outer loop runs again, this time with elapsed == 40. Then, the inner loop runs twice, taking 80 milliseconds. elapsed is now 80. The inner loop runs 4 times, taking 160 milliseconds. elapsed is now 160. The inner loop runs 8 times, taking 320 milliseconds... And so on.

Of course, the time the simulation loop takes will never be constant like in these examples, but the idea is the same. If you can assume that the loop will never take much more time than your simulation step to run, it's fine. As soon as it takes longer than 1/60th of a second to run your loop, you start to get a kind of feedback loop where the outer loop just takes longer and longer to finish.
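One common way to break that feedback loop is to cap how many simulation steps a single frame is allowed to run, and drop leftover lag once the cap is hit. A minimal sketch (runSteps and maxSteps are made-up names, and the cap value is a tuning choice, not something from the code above):

```cpp
#include <algorithm>

// Run at most maxSteps fixed updates for one frame's worth of elapsed time.
// Returns how many simulation steps were actually executed.
int runSteps(double& lag, double frameTime, double step, int maxSteps)
{
    lag += frameTime;
    int steps = 0;
    while (lag >= step && steps < maxSteps)
    {
        lag -= step; // one fixed update()
        ++steps;
    }
    if (steps == maxSteps)
        lag = std::min(lag, step); // give up on catching all the way up
    return steps;
}
```

The trade-off is that the simulation slows down under heavy load instead of spiraling, which is usually the lesser evil.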

The usual way I fix the "framerate-dependent movement" sort of problem is to pass a delta time to all the update()-type functions I have, wherever those may be. Then, all movements/attacks/whatever depend on that delta. For example, for movement, I'd have something like this:

const static float pixelsPerSecond = 100.f;

void update(float delta) // delta is in seconds
{
    x += moveX * pixelsPerSecond * delta;
    y += moveY * pixelsPerSecond * delta;
}

EDIT: Just saw the link you posted above, which mentions the downsides of the variable time step I recommended. In general, though, if you're not doing accurate physics simulation or multiplayer, it should work fine. (Funnily enough, your link contains a link to Glenn Fiedler's article, which also mentions the "spiral of death" I describe above.)

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow