Question

I'm trying to use a Guava Cache as a replacement for ConcurrentLinkedHashMap. I found that while ConcurrentLinkedHashMap let me iterate over the map in insertion order, the view returned by Guava's asMap() method doesn't return elements in any particular order. Am I missing something, or is this functionality simply not available?

Example (trying to print the keys, the values, and the entries):

Cache<Integer, Integer> cache = CacheBuilder.newBuilder().maximumSize(10).initialCapacity(10)
        .expireAfterAccess(10000, TimeUnit.SECONDS).build();

cache.put(1, 1);
cache.put(2, 2);
cache.put(3, 3);
cache.put(4, 4);
cache.put(5, 5);
cache.put(6, 6);

Iterator<Integer> iter1 = cache.asMap().keySet().iterator();

System.out.println("Keys");
while (iter1.hasNext())
    System.out.println(iter1.next());

System.out.println("Values");
Iterator<Integer> iter2 = cache.asMap().values().iterator();

while (iter2.hasNext())
    System.out.println(iter2.next());

System.out.println("Entries");
Iterator<Entry<Integer, Integer>> iter3 = cache.asMap().entrySet().iterator();

while (iter3.hasNext()) {
    Entry<Integer,Integer> entry = iter3.next();
    System.out.println(entry.getKey() + " " + entry.getValue());
}

Prints:

Keys
2
6
1
4
3
5
Values
2
6
1
4
3
5
Entries
2 2
6 6
1 1
4 4
3 3
5 5

Solution

Caffeine's CacheWriter will allow your code to be invoked during an explicit write or on removal. For a loading cache, you would have to perform the same work within the loader; that too is performed under the entry's lock, so you can assume atomicity. This should let you maintain the ordering without relying on the cache's internal data structures. Note that if the work performed during the ordered iteration is expensive, you might want to copy the contents inside the lock and then do the work outside it, so as not to block cache writes.

LinkedHashMap<K, V> orderedMap = new LinkedHashMap<>();
LoadingCache<K, V> cache = Caffeine.newBuilder()
    .writer(new CacheWriter<K, V>() {
      public void write(K key, V value) {
        synchronized (orderedMap) {
          orderedMap.put(key, value);
        }
      }
      public void delete(K key, V value, RemovalCause cause) {
        if (cause == RemovalCause.REPLACED) {
          return;
        }
        synchronized (orderedMap) {
          orderedMap.remove(key);
        }
      }
    })
    .maximumSize(1_000)
    .build(key -> {
        V value = ...
        synchronized (orderedMap) {
          orderedMap.put(key, value);
        }
        return value;
    });

cache.put(key1, value); // calls writer under lock
cache.get(key2); // calls loader under lock; not writer
cache.invalidate(key1); // calls writer under lock
cache.policy().eviction().get().setMaximum(0); // calls writer under lock

synchronized (orderedMap) {
  for (K key : orderedMap.keySet()) {
    // do work, but blocks writes!
  }
}
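The copy-then-iterate idea mentioned above can be sketched like this (a hypothetical helper, not part of the original answer): snapshot the keys while holding the lock, then run the expensive per-key work on the copy so writers are blocked only for the duration of the copy.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class OrderedSnapshot {
    static final Map<Integer, Integer> orderedMap = new LinkedHashMap<>();

    // Copy the keys under the lock; copying is cheap compared to the
    // per-entry work, so writers are blocked only briefly.
    static List<Integer> snapshotKeys() {
        synchronized (orderedMap) {
            return new ArrayList<>(orderedMap.keySet());
        }
    }

    public static void main(String[] args) {
        synchronized (orderedMap) {
            orderedMap.put(1, 1);
            orderedMap.put(2, 2);
            orderedMap.put(3, 3);
        }
        for (Integer key : snapshotKeys()) {
            // expensive per-key work runs here without holding the lock
            System.out.println(key); // keys print in insertion order: 1, 2, 3
        }
    }
}
```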

Other tips

(Answering my own question)

It seems fge's answer is correct: the Guava Cache cannot be iterated in insertion order. As a workaround, I used the previously mentioned ConcurrentLinkedHashMap, which is less feature-rich but allows ordered iteration.
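For comparison, if the full concurrency of ConcurrentLinkedHashMap is not needed, a size-bounded LinkedHashMap wrapped with Collections.synchronizedMap is a minimal stand-in that also iterates in insertion order. This is a sketch under that assumption; it serializes all access on a single lock, so it will not scale like the real ConcurrentLinkedHashMap.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

class InsertionOrderCache {
    // accessOrder = false keeps insertion order; removeEldestEntry caps the size
    // by evicting the oldest entry once the map grows past maxSize.
    static <K, V> Map<K, V> create(int maxSize) {
        return Collections.synchronizedMap(new LinkedHashMap<K, V>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        });
    }

    public static void main(String[] args) {
        Map<Integer, Integer> cache = create(10);
        for (int i : new int[] {1, 2, 3, 4, 5, 6}) {
            cache.put(i, i);
        }
        // Unlike Guava's asMap() view, iteration follows insertion order.
        // Per the synchronizedMap contract, hold the map's lock while iterating.
        synchronized (cache) {
            System.out.println(cache.keySet()); // prints [1, 2, 3, 4, 5, 6]
        }
    }
}
```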

I'd still appreciate an official answer from someone on the Guava team, since this seems to indicate that ConcurrentLinkedHashMap is not fully integrated into Guava (contrary to what the ConcurrentLinkedHashMap documentation suggests).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow