Question

What is the difference between LRU and LFU cache implementations?

I know that LRU can be implemented using LinkedHashMap. But how do you implement an LFU cache?

Solution

Let's consider a constant stream of cache requests with a cache capacity of 3, see below:

A, B, C, A, A, A, A, A, A, A, A, A, A, A, B, C, D

Consider a Least Recently Used (LRU) cache backed by a HashMap plus a doubly linked list, giving O(1) eviction and O(1) lookup. Processing the request stream above leaves the following elements cached at each step:

[A]
[A, B]
[A, B, C]
[B, C, A] <- the stream of As keeps A at the most-recently-used end of the list.
[C, A, B]
[A, B, C]
[B, C, D] <- here we evict A; we can do better!
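
For reference, a LinkedHashMap in access-order mode gives exactly this LRU behavior, as the question suggests. A minimal sketch (the class name LruCache is hypothetical) that replays the request stream above:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal LRU cache: LinkedHashMap keeps entries in access order when the
    // third constructor argument is true, and removeEldestEntry evicts the
    // least recently used entry once the capacity is exceeded.
    class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        LruCache(int capacity) {
            super(capacity, 0.75f, true); // true = iterate in access order
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > capacity;
        }

        public static void main(String[] args) {
            LruCache<String, Integer> cache = new LruCache<>(3);
            for (String key : "A B C A A A A A A A A A A A B C D".split(" ")) {
                cache.put(key, 1);
            }
            System.out.println(cache.keySet()); // [B, C, D] -- A was evicted
        }
    }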

Looking at this example, it is easy to see that we can do better: given the much higher expected chance of requesting A again, we should not evict it even though it was the least recently used. The request counts make this obvious:

A - 12
B - 2
C - 2
D - 1

A Least Frequently Used (LFU) cache takes advantage of this information by tracking how many times each entry has been requested and evicting the entry with the lowest count.
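
For instance, counting the requests in the stream above reproduces the table (a throwaway snippet; the class name is arbitrary):

    import java.util.HashMap;
    import java.util.Map;

    class FrequencyCount {
        public static void main(String[] args) {
            Map<String, Integer> freq = new HashMap<>();
            for (String key : "A B C A A A A A A A A A A A B C D".split(" ")) {
                freq.merge(key, 1, Integer::sum); // increment the request count
            }
            System.out.println(freq); // {A=12, B=2, C=2, D=1} (order may vary)
        }
    }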

Other tips

LRU is a cache eviction algorithm called least recently used cache: it evicts the entry that was accessed longest ago.

Look at this resource

LFU is a cache eviction algorithm called least frequently used cache: it evicts the entry with the fewest recorded accesses.

It requires three data structures:

1- A hash table used to cache the key/value pairs, so that given a key we can retrieve its cache entry in O(1).

2- A doubly linked list for each access frequency. The maximum frequency is capped at the cache size to avoid creating ever more frequency lists: a cache of maximum size 4 ends up with at most 4 distinct frequencies. Each frequency has a doubly linked list tracking the cache entries that currently sit at that frequency.

3- A structure linking these frequency lists together, either an array or another linked list, so that on access a cache entry can be promoted to the next frequency list in O(1).
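
A minimal Java sketch of this design (the class name LfuCache is hypothetical; LinkedHashSet serves as a ready-made hash-backed doubly linked list for each frequency, a minFreq counter stands in for the structure linking the frequency lists, and the frequency cap described above is omitted for brevity):

    import java.util.HashMap;
    import java.util.LinkedHashSet;
    import java.util.Map;

    class LfuCache<K, V> {
        private final int capacity;
        private final Map<K, V> values = new HashMap<>();       // key -> value
        private final Map<K, Integer> counts = new HashMap<>(); // key -> frequency
        private final Map<Integer, LinkedHashSet<K>> buckets = new HashMap<>(); // frequency -> keys
        private int minFreq = 0; // lowest frequency currently present

        LfuCache(int capacity) { this.capacity = capacity; }

        V get(K key) {
            if (!values.containsKey(key)) return null;
            promote(key);
            return values.get(key);
        }

        void put(K key, V value) {
            if (capacity <= 0) return;
            if (values.containsKey(key)) {
                values.put(key, value);
                promote(key);
                return;
            }
            if (values.size() == capacity) {
                // evict the least recently used key among the least frequent ones
                K victim = buckets.get(minFreq).iterator().next();
                buckets.get(minFreq).remove(victim);
                values.remove(victim);
                counts.remove(victim);
            }
            values.put(key, value);
            counts.put(key, 1);
            buckets.computeIfAbsent(1, f -> new LinkedHashSet<>()).add(key);
            minFreq = 1;
        }

        // Move a key from its current frequency list to the next one in O(1).
        private void promote(K key) {
            int f = counts.get(key);
            counts.put(key, f + 1);
            buckets.get(f).remove(key);
            if (f == minFreq && buckets.get(f).isEmpty()) minFreq = f + 1;
            buckets.computeIfAbsent(f + 1, x -> new LinkedHashSet<>()).add(key);
        }
    }

Within a frequency list the insertion order gives an LRU tie-break, so the oldest entry at the lowest frequency is evicted first.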

The main difference is that LRU looks only at recency: it evicts whichever page was used longest ago. LFU looks at frequency as well: the oldest page is evicted only if its access count is not higher than the others, and when all the old pages have the same frequency, the tie is broken FIFO-style, removing the page that entered the cache first.

  • Constant-time insertion only works for LRU. With LFU we can at least maintain constant-time lookup, which is better than nothing.

  • In LFU, the natural way to keep entries ordered by their dynamically changing priority is, of course, a priority queue. It might be a binary heap, a Fibonacci heap, or any other implementation; users can choose whichever works best for them, depending on the programming language they use and the context in which the cache will be used. A sketch using Java's built-in PriorityQueue follows.
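
A hedged sketch of this priority-queue approach (all names hypothetical; note that java.util.PriorityQueue has no decrease-key operation, so each access pays an O(n) remove plus an O(log n) add, which is the trade-off the first bullet alludes to):

    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.PriorityQueue;

    class PqLfuCache<K, V> {
        // One heap entry per cached key; stamp breaks frequency ties FIFO-style.
        private static final class Entry<K, V> {
            final K key; V value; int freq = 1; final long stamp;
            Entry(K key, V value, long stamp) { this.key = key; this.value = value; this.stamp = stamp; }
        }

        private final int capacity;
        private long tick = 0;
        private final Map<K, Entry<K, V>> index = new HashMap<>();
        private final PriorityQueue<Entry<K, V>> heap =
            new PriorityQueue<>(Comparator.<Entry<K, V>>comparingInt(e -> e.freq)
                                          .thenComparingLong(e -> e.stamp));

        PqLfuCache(int capacity) { this.capacity = capacity; }

        V get(K key) {
            Entry<K, V> e = index.get(key);
            if (e == null) return null;
            heap.remove(e); // O(n): no decrease-key on java.util.PriorityQueue
            e.freq++;
            heap.add(e);    // O(log n)
            return e.value;
        }

        void put(K key, V value) {
            if (capacity <= 0) return;
            Entry<K, V> e = index.get(key);
            if (e != null) { e.value = value; get(key); return; }
            if (index.size() == capacity) {
                Entry<K, V> victim = heap.poll(); // least frequent, oldest first
                index.remove(victim.key);
            }
            Entry<K, V> fresh = new Entry<>(key, value, tick++);
            index.put(key, fresh);
            heap.add(fresh);
        }
    }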

  • We have only two things to change in our caching implementation to switch from an LRU to an LFU (see the sketch after this list):

    1- The eviction policy (all the logic about choosing which element to remove)

    2- The data structure used to store elements
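
One way to picture that split is a small policy interface; everything here is a hypothetical sketch, not a standard API:

    import java.util.HashMap;
    import java.util.Map;

    // The policy object owns both the eviction logic and its bookkeeping
    // structure, so swapping LRU for LFU means swapping the policy alone.
    interface EvictionPolicy<K> {
        void recordInsert(K key);  // a new key was stored
        void recordAccess(K key);  // an existing key was read or overwritten
        K evict();                 // choose a victim key and forget it
    }

    class PolicyCache<K, V> {
        private final int capacity;
        private final Map<K, V> store = new HashMap<>();
        private final EvictionPolicy<K> policy; // LRU or LFU plugs in here

        PolicyCache(int capacity, EvictionPolicy<K> policy) {
            this.capacity = capacity;
            this.policy = policy;
        }

        V get(K key) {
            V v = store.get(key);
            if (v != null) policy.recordAccess(key);
            return v;
        }

        void put(K key, V value) {
            if (store.containsKey(key)) {
                store.put(key, value);
                policy.recordAccess(key);
                return;
            }
            if (store.size() == capacity) store.remove(policy.evict());
            store.put(key, value);
            policy.recordInsert(key);
        }
    }

An LRU policy would back this interface with a recency list, an LFU policy with frequency buckets, leaving the cache class itself untouched.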

  • To create an LFU cache we can use a hashmap and a treap. A treap is a combination of a binary search tree and a priority queue: you have two constraints, and you structure the nodes as a binary tree on one of them and as a priority queue on the other. Have a look at this graph:

[image: hash map entries pointing into a treap; each node is annotated with its access count]

The values of the hashmap point to treap nodes. In the treap nodes, the numbers in the small blue rectangles are the access counts; this is the priority-queue part of the treap. NOTE that child nodes always have a higher priority than their parents, but there is no ordering between siblings.
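
A hedged sketch of this hashmap + treap combination (all names hypothetical; it assumes the binary-search-tree ordering is on the cache key, so keys must be Comparable, while the min-heap ordering is on the access count, keeping a least frequently used entry at the root):

    import java.util.HashMap;
    import java.util.Map;

    class TreapLfuCache<K extends Comparable<K>, V> {
        private static final class Node<K, V> {
            K key; V value; int count = 1;
            Node<K, V> left, right, parent;
            Node(K key, V value) { this.key = key; this.value = value; }
        }

        private final int capacity;
        private final Map<K, Node<K, V>> index = new HashMap<>(); // O(1) lookup
        private Node<K, V> root; // minimal count lives here, by the heap property

        TreapLfuCache(int capacity) { this.capacity = capacity; }

        V get(K key) {
            Node<K, V> n = index.get(key);
            if (n == null) return null;
            n.count++;
            // The count grew, so rotate the node down until the heap holds again.
            for (Node<K, V> c = smallerChild(n); c != null && c.count < n.count; c = smallerChild(n)) {
                rotateUp(c);
            }
            return n.value;
        }

        void put(K key, V value) {
            if (capacity <= 0) return;
            Node<K, V> existing = index.get(key);
            if (existing != null) { existing.value = value; get(key); return; }
            if (index.size() == capacity) evictRoot();
            Node<K, V> n = new Node<>(key, value);
            bstInsert(n);
            index.put(key, n);
            // A fresh count of 1 is minimal: rotate up while it beats the parent.
            while (n.parent != null && n.count < n.parent.count) rotateUp(n);
        }

        private void evictRoot() {
            Node<K, V> victim = root; // a least frequently used entry
            while (victim.left != null || victim.right != null) {
                rotateUp(smallerChild(victim)); // push the victim down to a leaf
            }
            if (victim.parent == null) root = null;
            else if (victim.parent.left == victim) victim.parent.left = null;
            else victim.parent.right = null;
            index.remove(victim.key);
        }

        private Node<K, V> smallerChild(Node<K, V> n) {
            if (n.left == null) return n.right;
            if (n.right == null) return n.left;
            return n.left.count <= n.right.count ? n.left : n.right;
        }

        // Rotate c above its parent, preserving the BST order on keys.
        private void rotateUp(Node<K, V> c) {
            Node<K, V> p = c.parent, g = p.parent;
            if (p.left == c) {
                p.left = c.right;
                if (c.right != null) c.right.parent = p;
                c.right = p;
            } else {
                p.right = c.left;
                if (c.left != null) c.left.parent = p;
                c.left = p;
            }
            p.parent = c;
            c.parent = g;
            if (g == null) root = c;
            else if (g.left == p) g.left = c;
            else g.right = c;
        }

        private void bstInsert(Node<K, V> n) {
            if (root == null) { root = n; return; }
            Node<K, V> cur = root;
            while (true) {
                if (n.key.compareTo(cur.key) < 0) {
                    if (cur.left == null) { cur.left = n; n.parent = cur; return; }
                    cur = cur.left;
                } else {
                    if (cur.right == null) { cur.right = n; n.parent = cur; return; }
                    cur = cur.right;
                }
            }
        }
    }

Unlike the frequency-list design, updates here cost O(height) rather than O(1), and because real access counts rather than random priorities drive the heap, the tree's balance depends on the access pattern.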

  • If the LFU cache runs for a long time, the turnaround time for older entries that are no longer popular can be very long, because an entry that accumulated a high count in the past keeps outranking newer entries well after it stops being requested.

Here is the performance comparison:

[image: performance comparison chart]

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow