Question

I am running into some memory issues with my project, so I decided to stress test some portions of it to gather performance measurements. I am using Google's ConcurrentLinkedHashMap library as an LRU memory cache. The relevant parts of my testing code are shown below:

final ConcurrentLinkedHashMap<Long, Long> cache = new ConcurrentLinkedHashMap.Builder<Long, Long>()
            .maximumWeightedCapacity(200000)
            .build();

for (int i = 0; i < 10; i++) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            int count = 0;
            while (count < 1000000) {
                if (throttle) {
                    try {
                        Thread.sleep(1000); // back off while memory usage is high
                    } catch (InterruptedException e) {
                        return;
                    }
                    continue;
                }
                cache.put(random.nextLong(), random.nextLong());
                count++;
            }
        }
    }).start();
}

synchronized (this) { this.wait(); } // wait() must be called while holding the monitor

I set the throttle flag to true once memory usage goes above 50%. I have a monitoring thread that takes some measurements every 2 seconds. Here are the numbers I got:

Size: 423902    Weighted Size: 200001   Memory: 0.11229571913729916
Size: 910783    Weighted Size: 200001   Memory: 0.25812696264655144
Size: 1373394   Weighted Size: 200001   Memory: 0.38996117352719034
Size: 2120239   Weighted Size: 200001   Memory: 0.6203725762957892
Size: 2114424   Weighted Size: 200000   Memory: 0.6233790564491212
Size: 2114424   Weighted Size: 200000   Memory: 0.6233790564491212
Size: 2114424   Weighted Size: 200000   Memory: 0.6233790564491212
Size: 2114424   Weighted Size: 200000   Memory: 0.6233790564491212
Size: 2114424   Weighted Size: 200000   Memory: 0.6233790564491212
Size: 2114424   Weighted Size: 200000   Memory: 0.6233790564491212

I am not seeing the evicted entries of the LRU cache being cleaned up for some reason. I've heard that manually calling System.gc() is a bad idea. If so, what's a good way to efficiently clean up the memory?

ON A SIDE NOTE: does anyone know what size() returns for ConcurrentLinkedHashMap? weightedSize() returns the correct value, but I'm worried that size() returns something significantly bigger.


Solution

As your numbers show, the cache does not strictly guarantee a maximum, but it does attempt to maintain that high watermark. If it made a strong guarantee, it would limit concurrency by requiring a blocking call for every write operation and would not be able to efficiently maintain a top-level LRU policy. In Guava's Cache we do make that restriction by using lock striping, but that trade-off comes with its own limitations.
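For reference, a minimal sketch of what that looks like with Guava; CacheBuilder, maximumSize, and concurrencyLevel are standard Guava API, while the capacity value simply mirrors your test and is otherwise arbitrary:

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;

    // Guava enforces the bound with lock-striped segments, so size() stays close
    // to the configured maximum at the cost of some write concurrency.
    Cache<Long, Long> guavaCache = CacheBuilder.newBuilder()
            .maximumSize(200000)   // upper bound on entries (may evict slightly early)
            .concurrencyLevel(16)  // number of internal segments to stripe locks over
            .build();

    guavaCache.put(1L, 2L);
    Long value = guavaCache.getIfPresent(1L);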

As CLHM asynchronously connects the hash table and the LRU data structures, the two numbers can deviate. The size is measured by the decorated ConcurrentHashMap and is the most accurate estimate of the number of key-value pairs at any given instant. The weighted size is calculated by the LRU policy when it replays the map operations against its own data structures to determine the recency ordering and perform evictions. The weighted size stays at the expected maximum because the policy evicts once the threshold has been exceeded.
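To make that difference visible, a monitoring thread along these lines could print both views; size() and weightedSize() are the actual accessors, while the surrounding thread is only illustrative and assumes the cache from your test:

    // Sample both views every 2 seconds: size() reflects the decorated
    // ConcurrentHashMap, weightedSize() reflects the LRU policy, so size() can
    // temporarily run ahead until the pending operations are replayed.
    Thread monitor = new Thread(new Runnable() {
        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                System.out.println("Size: " + cache.size()
                        + "  Weighted Size: " + cache.weightedSize());
                try {
                    Thread.sleep(2000);
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    });
    monitor.setDaemon(true);
    monitor.start();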

It is expected that the size differs in bursts, but the cache will always try to self-correct quickly. In your example the behavior is exacerbated because the operating system schedules each thread for a large time slice, which lets it perform many more insertions than would be possible in a real-world scenario. The amortized LRU management means that eviction cannot keep up, since it's only stealing a little time from one thread at any given moment. To better simulate actual behavior, the MemoryLeakTest forces a context switch between operations.
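A rough way to approximate that in your test is to yield after each write; Thread.yield() is standard, and this is just your inner loop with one extra line (throttle branch omitted):

    // Yielding after each write forces a context switch, giving the amortized
    // eviction work a chance to run so size() tracks weightedSize() more closely.
    while (count < 1000000) {
        cache.put(random.nextLong(), random.nextLong());
        Thread.yield(); // hand back the time slice instead of monopolizing it
        count++;
    }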

If you implement an eviction listener, you should see the count increase and confirm that you have reached a steady state. If you force a context switch, you should see the size kept at a lower bound. If you want a stricter limit on the total possible number of entries ever in the map, then prefer the Guava implementation.
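A minimal sketch of the listener approach, assuming the same builder settings as in your test; Builder.listener() and EvictionListener are part of the CLHM API, and the counter is only for illustration:

    import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
    import com.googlecode.concurrentlinkedhashmap.EvictionListener;
    import java.util.concurrent.atomic.AtomicLong;

    // Count evictions as the policy discards entries; a steadily growing counter
    // shows the cache is self-correcting rather than leaking.
    final AtomicLong evictions = new AtomicLong();

    final ConcurrentLinkedHashMap<Long, Long> cache =
            new ConcurrentLinkedHashMap.Builder<Long, Long>()
                    .maximumWeightedCapacity(200000)
                    .listener(new EvictionListener<Long, Long>() {
                        @Override
                        public void onEviction(Long key, Long value) {
                            evictions.incrementAndGet();
                        }
                    })
                    .build();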

Other tips

Isn't the default eviction policy first-in, first-out? If so, are you looking at the first insertions to see whether they are being evicted? If you are only looking at the last ones, you won't see it.

This is from the documentation:

    The default weigher assigns each value a weight of 1 to bound the map by the
    total number of key-value pairs. A map that holds collections may choose to
    weigh values by the number of elements in the collection and bound the map by
    the total number of elements that it contains. A change to a value that
    modifies its weight requires that an update operation is performed on the map.

So, based on my reading of that, the weighted size will not change by default.
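For illustration, a rough sketch of the default per-entry weighing versus a per-element weigher; Builder.weigher() is part of the CLHM API and I believe the Weighers factory provides a collection() weigher, but treat the details (and the capacity figures) as assumptions:

    import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
    import com.googlecode.concurrentlinkedhashmap.Weighers;
    import java.util.Collection;

    // Default: every entry weighs 1, so the map is bounded by the number of entries.
    ConcurrentLinkedHashMap<Long, Long> byEntries =
            new ConcurrentLinkedHashMap.Builder<Long, Long>()
                    .maximumWeightedCapacity(200000) // roughly 200000 entries
                    .build();

    // Weighing each value by its collection size bounds the map by total elements,
    // so weightedSize() then also moves as the contents of the values change.
    ConcurrentLinkedHashMap<Long, Collection<Long>> byElements =
            new ConcurrentLinkedHashMap.Builder<Long, Collection<Long>>()
                    .maximumWeightedCapacity(200000) // roughly 200000 elements in total
                    .weigher(Weighers.<Long>collection())
                    .build();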
