Question

I'm making an XNA game and I wonder if there is a way to optimize some loops. For example:

I have a Map class that contains a collection of tiles, so in Map's Update() I just call every tile's Update():

    // Update method in Map Class
    public void Update()
    {
        for (int index = 0; index < tiles.Count; index++)
        {
            tiles[index].Update();
        }
    }

This works fine, but it can get worse with more heavily populated objects, like the Particle class, where every particle is managed by a ParticleManager class (which contains a collection of particles):

    // Update method in ParticleManager class
    public void Update()
    {
        for (int index = 0; index < particle.Count; index++)
        {
            particle[index].Update();
        }
    }

    //Update Method in Particle class
    public void Update()
    {
        for (int index = 0; index < Map.tiles.Count; index++)
        {
            CheckCollitions(Map.tiles[index], this);
        }
    }

The ParticleManager loops over every particle, and each particle checks collisions against every tile. So if you have 20 particles and 100 tiles, that's a lot of computation per frame:

20 loop cycles * 100 loop cycles = 2,000 collision checks

That's why I was thinking about optimizations like loop unrolling, but I don't know whether it works (I think not) with loops of undefined length, since the compiler doesn't know those loop lengths at compile time.

To sum up:

  1. Is it possible to optimize those loops using loop unrolling?
  2. Can you advise me on any other kind of optimization?

Thanks

Solution

First of all, loop unrolling is a micro-optimization and won't gain you much. Don't bother unless it's absolutely needed.

More importantly, optimizing code is mostly about the data structures and algorithms used, not about how fast you can iterate through a collection.

In your particular example, you are effectively doing this:

    for (int p = 0; p < particle.Count; p++)
    {
        for (int t = 0; t < Map.tiles.Count; t++)
        {
            CheckCollitions(Map.tiles[t], particle[p]);
        }
    }

Nested loops like this give you O(n * m) work, which grows quadratically, and they are a sure sign of potential performance issues.

Typically, when you are working with collision detection, you can optimize the code by reducing the number of objects that could possibly collide, based on things you already know.

For example, I assume tiles don't move around and are laid out in a uniform grid. I can also assume particles are very small.

Let's say you store tile data in a 2 dimensional array.

    var tiles = new int[32, 32];

A value of 0 means no tile, and a value of 1 (or > 0) means the tile is solid. You also know that when tiles are rendered on the screen they are 64x64 pixels.

This means you can do a basic collision test against any tile very quickly with simple math:

    for (int p = 0; p < particle.Count; p++)
    {
        var tileWidth = 64;
        var tileHeight = 64;
        var particlePosition = particle[p].Position;
        var tx = (int)(particlePosition.X / tileWidth);  // cast: Position is a float vector
        var ty = (int)(particlePosition.Y / tileHeight);

        var tile = tiles[tx, ty];

        if(tile == 0)
        {
            // no collision
        }
        else
        {
            // collision detected
        }
    }

At this point, you know exactly which tile in the array the particle's position falls into, and you have removed the inner loop (reducing the work to O(n)). Obviously, you'll also need to be careful not to index outside the bounds of the array, and there may be other details to deal with if your particles are larger than a single pixel, but you get the idea.
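One way to handle the bounds concern is a small lookup helper that reports when a position is off the map entirely, so callers never index out of range. The following is a sketch under stated assumptions: `TileMap`, `TryGetTile`, the 32x32 map size, and the single solid tile at (1, 1) are all illustrative names and values, not part of the original code.

```csharp
using System;

static class TileMap
{
    // Assumed constants matching the answer's example: 64x64-pixel tiles
    // on a 32x32 grid, where 0 = empty and > 0 = solid.
    const int TileWidth = 64;
    const int TileHeight = 64;
    static readonly int[,] tiles = new int[32, 32];

    static TileMap()
    {
        tiles[1, 1] = 1; // one solid tile, purely for demonstration
    }

    // Returns false when the position lies outside the map. Note the
    // negative check happens *before* the cast: (int)(-5f / 64) is 0,
    // so casting first would wrongly map slightly-negative positions
    // onto column 0 instead of rejecting them.
    public static bool TryGetTile(float x, float y, out int tile)
    {
        tile = 0;
        if (x < 0 || y < 0)
            return false;

        int tx = (int)(x / TileWidth);
        int ty = (int)(y / TileHeight);
        if (tx >= tiles.GetLength(0) || ty >= tiles.GetLength(1))
            return false;

        tile = tiles[tx, ty];
        return true;
    }
}

class Demo
{
    static void Main()
    {
        int tile;
        Console.WriteLine(TileMap.TryGetTile(70f, 70f, out tile) && tile == 1);  // inside the map, solid tile
        Console.WriteLine(TileMap.TryGetTile(10f, 10f, out tile) && tile == 0);  // inside the map, empty tile
        Console.WriteLine(TileMap.TryGetTile(-5f, 10f, out tile));               // off the map: no collision
    }
}
```

A `TryGet`-style method keeps the per-particle loop branch-free of index math mistakes: the caller just treats "off the map" as "no collision".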

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow