Question

Because of my noob reputation, I cannot reply to this thread, specifically to the accepted answer:

I have never used boost::intrusive smart pointers, but if you were to use shared_ptr smart pointers, you could use weak_ptr objects for your cache.

Those weak_ptr pointers do not count as references when the system decides whether to free the memory, but they can be used to retrieve a shared_ptr as long as the object has not been deleted yet.
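In other words (if I understand it correctly), something along these lines, shown here with std::weak_ptr since I am on VC100 without Boost:

#include <cassert>
#include <memory>

int main()
{
    std::shared_ptr<int> sp(new int(42));
    std::weak_ptr<int> wp = sp;      // does not keep the int alive

    assert(!wp.expired());           // still alive: lock() yields a usable shared_ptr
    std::shared_ptr<int> again = wp.lock();
    assert(again.get() == sp.get());

    again.reset();
    sp.reset();                      // last owning reference gone, the int is freed
    assert(wp.expired());            // lock() would now return an empty shared_ptr
    return 0;
}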

This is certainly an intuitive idea; however, the C++ standard does not support comparison of weak_ptrs, so they cannot be used as keys for associative containers. This could be circumvented by implementing a comparison operator for weak_ptrs:

template<class Ty1, class Ty2>
    bool operator<(
        const weak_ptr<Ty1>& _Left,
        const weak_ptr<Ty2>& _Right
    );
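For illustration, such an operator could be implemented roughly like this (a sketch only, nothing I have in production):

#include <functional>
#include <memory>

template <class Ty1, class Ty2>
bool operator<(const std::weak_ptr<Ty1>& left, const std::weak_ptr<Ty2>& right)
{
    // Every comparison has to lock() both arguments, i.e. temporarily create
    // shared_ptrs; expired pointers simply compare as null.
    return std::less<const void*>()(left.lock().get(), right.lock().get());
}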

The problem with this solution is that

(1) the comparison operator has to obtain ownership for every comparison (i.e., it creates shared_ptrs from the weak_ptr references), and

(2) the weak_ptr is not erased from the cache when the last shared_ptr that manages the resource is destroyed; instead, an expired weak_ptr is kept in the cache.

For (2), we could provide a custom deleter (DeleteThread); however, this would again require creating a weak_ptr from the T* that is to be deleted, which could then be used to erase the weak_ptr from the cache.

My question is whether there is a better approach to a cache using smart pointers (I am using the VC100 compiler, no Boost), or do I simply not get it?

Cheers, Daniel


Solution

The thing is, your cache is not addressed by the cached object itself; otherwise it would be pretty much useless.

The idea of a cache is to avoid some computation, so the index is the set of parameters of that computation, which maps directly to the result if it is already present.

Now, you might actually need a second index to remove objects from the cache, but it is not mandatory; there are certainly other strategies available.

If you really want to remove objects from the cache as soon as they are no longer used anywhere else in the application, then you can indeed use a secondary index. The idea is to index by T* rather than by weak_ptr<T>, while still keeping a weak_ptr<T> around, because without it you cannot create a new shared_ptr on the same reference count.

The exact structure depends on whether the parameters of the computation are hard to recompute after the fact. If they are, a simple solution is:

// Needs <map> plus Boost's shared_ptr, weak_ptr and enable_shared_from_this
// (the inheritance must be public for shared_from_this to work).
#include <map>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>

template <typename K, typename V>
class Cache: public boost::enable_shared_from_this< Cache<K, V> >
{
  typedef std::map<K, boost::weak_ptr<V> > KeyValueMap;
  typedef std::map<V*, typename KeyValueMap::iterator> DeleterMap;

  struct Deleter {
    Deleter(boost::weak_ptr<Cache> c): _cache(c) {}

    void operator()(V* v) {
      // The pointer may outlive the Cache; if the Cache is gone, just delete.
      boost::shared_ptr<Cache> cache = _cache.lock();
      if (cache.get() == 0) { delete v; return; }

      typename DeleterMap::iterator it = cache->_delmap.find(v);
      if (it != cache->_delmap.end()) {
        cache->_key2val.erase(it->second);
        cache->_delmap.erase(it);
      }
      delete v;
    }

    boost::weak_ptr<Cache> _cache;
  }; // Deleter

public:
  size_t size() const { return _key2val.size(); }

  boost::shared_ptr<V> get(K const& k) const {
    typename KeyValueMap::const_iterator it = _key2val.find(k);
    if (it != _key2val.end()) { return boost::shared_ptr<V>(it->second); }

    // Need to create it: new_value(k) stands for the actual computation.
    // get() is const, so shared_from_this() yields a pointer-to-const that
    // has to be cast back before it can be handed to the Deleter.
    boost::shared_ptr<V> ptr(
        new_value(k),
        Deleter(boost::const_pointer_cast<Cache>(this->shared_from_this())));

    typename KeyValueMap::iterator kv =
        _key2val.insert(std::make_pair(k, boost::weak_ptr<V>(ptr))).first;
    _delmap.insert(std::make_pair(ptr.get(), kv));

    return ptr;
  }

private:
  mutable KeyValueMap _key2val;
  mutable DeleterMap _delmap;
};

Note the special difficulty: the pointers might outlive the Cache itself, so the Deleter holds a weak_ptr<Cache> and falls back to a plain delete when the cache is already gone.

And for your information, while it seems feasible, I am not at all confident in this code: untested, unproven, bla, bla ;)
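To give an idea of how it would be used (equally untested): the Cache has to be owned by a shared_ptr, because the Deleter reaches it through shared_from_this(). Key, Widget and new_value below are placeholder names for your actual parameters, result type and computation:

#include <cassert>

namespace demo {
  struct Key {
    explicit Key(int i): id(i) {}
    bool operator<(Key const& other) const { return id < other.id; }
    int id;
  };
  struct Widget {};

  // The expensive computation the cache is fronting.
  Widget* new_value(Key const& /*k*/) { return new Widget(); }
}

int main()
{
  boost::shared_ptr< Cache<demo::Key, demo::Widget> > cache(
      new Cache<demo::Key, demo::Widget>());

  boost::shared_ptr<demo::Widget> a = cache->get(demo::Key(1)); // computed and inserted
  boost::shared_ptr<demo::Widget> b = cache->get(demo::Key(1)); // same object, no recomputation
  assert(a == b);
  assert(cache->size() == 1);

  a.reset();
  b.reset();                     // last owner gone: the Deleter removes the entry
  assert(cache->size() == 0);
  return 0;
}

Putting new_value in the same namespace as Key is what lets the unqualified call new_value(k) inside get() be found through argument-dependent lookup.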

OTHER TIPS

A possible solution for what you want to achieve might be the following.

Let's say T is your object type and shared_ptr<T> is your smart pointer:

  1. Only have regular T* in your cache.
  2. Have a custom deleter for your shared_ptr<T>
  3. Have your custom deleter erase your T* from the cache upon delete.

This way the cache doesn't increase the ref count of your shared_ptr<T> but is notified when the ref count reaches 0.

#include <memory>
#include <set>

struct Obj {};

struct Deleter
{
    std::set<Obj*>& mSet;
    Deleter( std::set<Obj*>& setIn )
        : mSet(setIn) {}

    void operator()( Obj* pToDelete )
    {
        // Remove the raw pointer from the cache, then free the object.
        mSet.erase( pToDelete );
        delete pToDelete;
    }
};

int main ()
{
    std::set< Obj* > mySet;
    Deleter d(mySet);
    std::shared_ptr<Obj> obj1 = std::shared_ptr<Obj>( new Obj() , d );
    mySet.insert( obj1.get() );
    std::shared_ptr<Obj> obj2 = std::shared_ptr<Obj>( new Obj() , d );
    mySet.insert( obj2.get() );

    // Here the set holds two elements.
    obj1.reset(); // drop the last reference: the deleter fires and erases obj1's pointer
    // Here the set holds only one element.

    return 42;
}
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow