Problem

I need to implement an LRU algorithm in a 3D renderer for texture caching. I'm writing the code in C++ on Linux.

  • In my case I will use texture caching to store "tiles" of image data (16x16 pixel blocks). Now imagine that I do a lookup in the cache and get a hit (the tile is in the cache). How do I return the content of the cache for that entry to the function caller? Let me explain. When I load a tile into the cache, I allocate the memory to store 16x16 pixels, for example, then load the image data for that tile. Now there are two ways to pass the content of the cache entry to the function caller:
    1) either as a pointer to the tile data (fast, memory efficient),

    TileData *tileData = cache->lookup(tileId); // not safe?

    2) or I copy the tile data from the cache into a memory space allocated by the function caller (the copy can be slow).

    void Cache::lookup(int tileId, float *tileData)
    {
       // find the tile in the cache; if it isn't in the cache, load it from disk and add it, ...
       ...
       // now copy the tile data into the caller's buffer -- safe, but isn't that slow?
       memcpy(tileData, tileDataFromCache, sizeof(float) * 3 * 16 * 16);
    }
    float *tileData = new float[3 * 16 * 16]; // need to allocate the memory for that tile
    // get tile data from cache, requires a copy
    cache->lookup(tileId, tileData);
    

    I would go with 1), but the problem is: what happens if the tile gets deleted from the cache just after the lookup, and the caller then tries to access the data through the returned pointer? The only solution I see is to use some form of reference counting (shared_ptr rather than auto_ptr, which isn't reference counted), where the data is actually only deleted when it is no longer used anywhere.

  • The application might access more than one texture. I can't seem to find a way of creating a key which is unique to each texture and each tile of a texture. For example I may have tile 1 from file1 and tile 1 from file2 in the cache, so searching on tileId=1 is not enough... but I can't find a way of creating a key that accounts for both the file name and the tile ID. I can build a string that contains the file name and the tile ID (FILENAME_TILEID), but wouldn't a string used as a key be much slower than an integer?

  • Finally I have a question regarding time stamps. Many papers suggest using a time stamp for ordering the entries in the cache. What is a good function to use for a time stamp: the time() function, clock()? Is there a better way than using time stamps?

Sorry, I realise it's a very long message, but LRU doesn't seem as simple to implement as it sounds.


Solution

Answers to your questions:

1) Return a shared_ptr (or something logically equivalent to it). Then all of the "when-is-it-safe-to-delete-this-object" issues pretty much go away.
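For example, a minimal sketch of that idea, assuming a 16x16 RGB float tile and a Cache class of your own (the names here are placeholders, and LRU eviction is left out):

#include <memory>
#include <unordered_map>

// Sketch only: TileData and Cache stand in for your own types.
struct TileData { float pixels[3 * 16 * 16]; };

class Cache
{
public:
    // The cache holds one reference and the caller gets another, so the tile
    // is only destroyed once neither side uses it any more -- the returned
    // pointer can never dangle, even if the entry is evicted later.
    std::shared_ptr<TileData> lookup(int tileId)
    {
        std::shared_ptr<TileData> &entry = tiles[tileId];
        if (!entry)
            entry = std::make_shared<TileData>(); // placeholder for the real "load from disk"
        return entry;
    }

private:
    std::unordered_map<int, std::shared_ptr<TileData> > tiles;
};

No copy is made on a hit, and the caller never has to delete anything by hand.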

2) I'd start by using a string as a key and see whether it actually is too slow or not. If the strings aren't too long (e.g. your filenames aren't too long), you may find it's faster than you expect. If you do find that string keys aren't efficient enough, you could try something like computing a hash code for the string and adding the tile ID to it... that would probably work in practice, although there would always be the possibility of a hash collision. But you could have a collision-check routine run at startup that generates all of the possible filename+tileID combinations and alerts you if any two map to the same key value, so that at least you'd know immediately during your testing when there is a problem and could do something about it (e.g. by adjusting your filenames and/or your hash algorithm). This assumes that all the filenames and tile IDs are going to be known in advance, of course.
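Purely as an illustration of that idea (makeTileKey and checkForKeyCollisions are made-up names, and the mixing step is the hash_combine-style one used by Boost), a sketch could look like:

#include <cstddef>
#include <cstdint>
#include <functional>
#include <set>
#include <stdexcept>
#include <string>
#include <vector>

// Pack the filename hash and the tile ID into one 64-bit integer key.
uint64_t makeTileKey(const std::string &filename, uint32_t tileId)
{
    uint64_t h = std::hash<std::string>()(filename);
    h ^= tileId + 0x9e3779b97f4a7c15ULL + (h << 6) + (h >> 2);
    return h;
}

// Start-up check: walk every filename/tileID combination you expect to use
// and fail loudly if two of them collapse to the same key.
void checkForKeyCollisions(const std::vector<std::string> &filenames,
                           uint32_t tilesPerFile)
{
    std::set<uint64_t> seen;
    for (std::size_t f = 0; f < filenames.size(); ++f)
        for (uint32_t t = 0; t < tilesPerFile; ++t)
            if (!seen.insert(makeTileKey(filenames[f], t)).second)
                throw std::runtime_error("key collision for " + filenames[f]);
}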

3) I wouldn't recommend using a timestamp; it's unnecessary and fragile. Instead, try something like this (pseudocode):

typedef shared_ptr<TileData> TileDataPtr;   // automatic memory management!

linked_list<TileDataPtr> linkedList;
hash_map<data_key_t, TileDataPtr> hashMap;

// This is the method the calling code would call to get its tile data for a given key
TileDataPtr GetData(data_key_t theKey)
{
   if (hashMap.contains_key(theKey))
   {
      // The desired data is already in the cache, great!  Just move it to the head
      // of the LRU list (to reflect its popularity) and then return it.
      TileDataPtr ret = hashMap.get(theKey);
      linkedList.remove(ret);     // move this item to the head of the linked list
      linkedList.push_front(ret); // (keep a node handle/iterator to make the removal O(1) as well)
      return ret;
   }
   else
   {
      // Oops, the requested object was not in our cache, load it from disk or whatever
      TileDataPtr ret = LoadDataFromDisk(theKey);
      linkedList.push_front(ret);
      hashMap.put(theKey, ret);

      // Don't let our cache get too large -- delete
      // the least-recently-used item if necessary
      if (linkedList.size() > MAX_LRU_CACHE_SIZE)
      {
         TileDataPtr dropMe = linkedList.tail();
         hashMap.remove(dropMe->GetKey());
         linkedList.remove(dropMe);
      }
      return ret;
   }
}

Other tips

In the same order as your questions:

  • Copying the texture data over does not seem reasonable from a performance standpoint. Reference counting sounds far better, as long as you can actually code it safely. The data memory would then be freed as soon as it is neither used by the renderer nor referenced by the cache.

  • I assume that you are going to use some sort of hash table for the look-up part of what you are describing. The common solution to your problem has two parts:

    • Using a suitable hashing function that combines multiple values, e.g. the texture file name and the tile ID. Essentially you create a composite key that is treated as one entity. The hashing function could be an XOR operation of the hashes of all elementary components, or something more complex.

      Selecting a suitable hash function is critical for performance reasons - if the said function is not random enough, you will have a lot of hash collisions.

    • Using a suitable composite equality check to handle the case of hash collisions.

    This way you can look-up the combination of all attributes of interest in a single hash table look-up.

  • Using timestamps for this is not going to work - period. Most sources regarding caching usually describe the algorithms in question with network resource caching in mind (e.g. HTTP caches). That is not going to work here for three reasons:

    1. Using natural time only makes sense if you intend to implement caching policies that take it into account, e.g. dropping a cache entry after 10 minutes. Unless you are doing something very unusual, a policy like this makes no sense within a 3D renderer.

    2. Timestamps have a relatively low actual resolution, even if you use high precision timers. Most timer sources have a precision of about 1ms, which is a very long time for a processor - in that time your renderer would have worked through several texture entries.

    3. Do you have any idea how expensive timer calls are? Abusing them like this could even make your system perform worse than not having any cache at all...

    The usual solution to this problem is to not use a timer at all. The LRU algorithm only needs to know two things:

    1. The maximum number of entries allowed.

    2. The order of the existing entries w.r.t. their last access.

    Item (1) comes from the configuration of the system and typically depends on the available storage space. Item (2) generally implies the use of a combined linked list/hash table data structure, where the hash table part provides fast access and the linked list retains the access order. Each time an entry is accessed, it is placed at the end of the list, while old entries are removed from its start.

    Using a combined data structure, rather than two separate ones, allows entries to be removed from the hash table without having to go through a separate look-up operation. This improves the overall performance, but it is not absolutely necessary (a minimal sketch of this idea is shown right after this list).
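A minimal sketch of that combined structure, assuming C++11 (the LruIndex name and its interface are made up for illustration): the map stores list iterators, so promoting an entry on a hit and evicting the oldest entry are both O(1).

#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>

// Sketch only.  Assumes maxSize >= 1 and that Key is hashable.
template <typename Key, typename Value>
class LruIndex
{
public:
    explicit LruIndex(std::size_t maxSize) : maxSize(maxSize) {}

    // Return the cached value for 'key', calling 'load(key)' on a miss.
    template <typename Loader>
    Value get(const Key &key, Loader load)
    {
        typename MapType::iterator it = map.find(key);
        if (it != map.end()) {
            // Hit: splice the node to the front of the list in O(1).
            order.splice(order.begin(), order, it->second);
            return it->second->second;
        }
        // Miss: insert at the front, then evict from the back if too large.
        order.push_front(std::make_pair(key, load(key)));
        map[key] = order.begin();
        if (order.size() > maxSize) {
            map.erase(order.back().first);
            order.pop_back();
        }
        return order.front().second;
    }

private:
    typedef std::list<std::pair<Key, Value> > ListType;
    typedef std::unordered_map<Key, typename ListType::iterator> MapType;
    ListType order;
    MapType map;
    std::size_t maxSize;
};

std::list::splice only relinks the node, so the iterator stored in the map stays valid after the move; that is what makes the hit path constant-time.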

As promised, I am posting my code. Please let me know if I have made mistakes or if I could improve it further. I am now going to look into making it work in a multi-threaded environment (a rough starting point for that is sketched after the code below). Again, thanks to Jeremy and Thkala for their help (sorry, the code doesn't fit in the comment block).

#include <cstdlib>
#include <cstdio>
#include <memory>
#include <list>
#include <unordered_map> 

#include <cstdint>
#include <iostream>

typedef uint32_t data_key_t;

class TileData
{
public:
    TileData(const data_key_t &key) : theKey(key) {}
    data_key_t theKey;
    ~TileData() { std::cerr << "delete " << theKey << std::endl; }
};

typedef std::shared_ptr<TileData> TileDataPtr;   // automatic memory management!

TileDataPtr loadDataFromDisk(const data_key_t &theKey)
{
    return std::shared_ptr<TileData>(new TileData(theKey));
}

class CacheLRU
{
public:
    // the linked list keeps track of the order in which the data was accessed
    std::list<TileDataPtr> linkedList;
    // the hash map (std::unordered_map is standard C++11; hash_map never was standard) gives quick access to the data
    std::unordered_map<data_key_t, TileDataPtr> hashMap; 
    CacheLRU() : cacheMiss(0), cacheHit(0) {} // initialise in declaration order
    TileDataPtr getData(data_key_t theKey)
    {
        std::unordered_map<data_key_t, TileDataPtr>::const_iterator iter = hashMap.find(theKey);
        if (iter != hashMap.end()) {
            TileDataPtr ret = iter->second;
            // note: std::list::remove is O(n); storing the list iterator in the
            // hash map (as described above) would make this O(1)
            linkedList.remove(ret);
            linkedList.push_front(ret);
            ++cacheHit;
            return ret;
        }
        else {
            ++cacheMiss;
            TileDataPtr ret = loadDataFromDisk(theKey);
            linkedList.push_front(ret);
            hashMap.insert(std::make_pair(theKey, ret));
            if (linkedList.size() > MAX_LRU_CACHE_SIZE) {
                const TileDataPtr dropMe = linkedList.back();
                hashMap.erase(dropMe->theKey);
                linkedList.remove(dropMe);
            }
            return ret;
        }
    }
    static const uint32_t MAX_LRU_CACHE_SIZE = 8;
    uint32_t cacheMiss, cacheHit;
};

int main(int argc, char **argv)
{
    CacheLRU cache;
    for (uint32_t i = 0; i < 238; ++i) {
        int key = random() % 32;
        TileDataPtr tileDataPtr = cache.getData(key);
    }
    std::cerr << "Cache hit: " << cache.cacheHit << ", cache miss: " << cache.cacheMiss << std::endl;
    return 0;
}
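Since the next step mentioned above is multi-threading, one deliberately simple option (a sketch building on the CacheLRU class above, not part of the original code) is to serialise every cache access with a single std::mutex; whether that is fast enough depends on how contended the cache is.

#include <mutex>

// Hypothetical wrapper: a single coarse lock around getData.  Simple to
// reason about; finer-grained locking is possible but much harder to get right.
class ThreadSafeCacheLRU : public CacheLRU
{
public:
    TileDataPtr getDataLocked(data_key_t theKey)
    {
        std::lock_guard<std::mutex> lock(mutex);
        return getData(theKey);
    }

private:
    std::mutex mutex;
};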