LRU Cache in Java

When you reference a page that is not in the cache, it is brought into the cache from main memory (or virtual memory). The cache may already be full, in which case an existing page must be evicted to make room. An LRU cache evicts the Least Recently Used page.

You can implement such a cache using a linked list and a HashMap.

  1. Linked list (LL): Maintain a linked list whose maximum number of elements equals the cache size.
  2. HashMap: Maintain a map from page address to a reference to the corresponding list node. (In practice this calls for a hand-rolled doubly linked list, since java.util.LinkedList does not expose its internal nodes.)
  3. In the LL, the most recently accessed element is at the head and the least recently used element is at the tail.
  4. When a referenced page already exists in the cache, we find its LL node via the HashMap, remove that node from the LL, and add it back at the front.
  5. If the LL is already full, we remove its last element and add the new element to the front.
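The steps above can be sketched as follows. The class and method names (`DoublyLinkedLRU`, `Node`, `unlink`, `addToFront`) are illustrative, not from any standard library; a hand-rolled doubly linked list is used because `java.util.LinkedList` does not expose its internal nodes:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the list-plus-map design: O(1) get and put.
class DoublyLinkedLRU {
    private static class Node {
        int key, value;
        Node prev, next;
        Node(int key, int value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<Integer, Node> map = new HashMap<>();
    // Sentinel head/tail nodes avoid null checks; head.next is most recent.
    private final Node head = new Node(0, 0);
    private final Node tail = new Node(0, 0);

    DoublyLinkedLRU(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    Integer get(int key) {
        Node node = map.get(key);
        if (node == null) return null;
        unlink(node);        // step 4: remove from its current position...
        addToFront(node);    // ...and move it to the front
        return node.value;
    }

    void put(int key, int value) {
        Node node = map.get(key);
        if (node != null) {            // page already cached: update and touch
            node.value = value;
            unlink(node);
            addToFront(node);
            return;
        }
        if (map.size() == capacity) {  // step 5: evict the least recently used
            Node eldest = tail.prev;
            unlink(eldest);
            map.remove(eldest.key);
        }
        node = new Node(key, value);
        addToFront(node);
        map.put(key, node);
    }

    private void unlink(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void addToFront(Node node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }
}
```

Because the HashMap hands us the node directly, both the lookup (step 4) and the eviction (step 5) touch only a constant number of pointers.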

We can also implement it using LinkedHashMap, which gives constant-time O(1) get, put, and remove operations.

public class LRUCache extends LinkedHashMap<Object, Object> { 
    /*
     * serialVersionUID is a unique identifier for a Serializable class.
     * The JVM uses it to compare class versions, ensuring that the same
     * class was used during serialization and deserialization.
     */
    private static final long serialVersionUID = 1L;

    private int cacheSize;

    public LRUCache(int capacity) {
        
        // LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder)
        // AccessOrder: 
        //      the ordering mode - true for access-order, 
        //      false for insertion-order
        super(2 * capacity, 0.75f, true);

        this.cacheSize = capacity;
    }

    @Override
    protected boolean removeEldestEntry(java.util.Map.Entry<Object, Object> eldest) {
        // Returns true if this map should remove its eldest entry.
        // Called by put (and putAll) after inserting a new entry.
        return size() > cacheSize;
    }
}
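The LinkedHashMap-based cache can be exercised like this. The `LRUCacheDemo` wrapper is illustrative only, and the class body is repeated so the snippet compiles on its own:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCacheDemo {
    // Same design as the article's LRUCache, repeated for a self-contained demo.
    static class LRUCache extends LinkedHashMap<Object, Object> {
        private static final long serialVersionUID = 1L;
        private final int cacheSize;

        LRUCache(int capacity) {
            super(2 * capacity, 0.75f, true); // true = access-order iteration
            this.cacheSize = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
            return size() > cacheSize;
        }
    }

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(3);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        cache.get("a");     // touching "a" makes it the most recently used
        cache.put("d", 4);  // over capacity: "b", the least recently used, is evicted
        System.out.println(cache.keySet()); // prints [c, a, d]
    }
}
```

Because the map was constructed with accessOrder = true, every get and put moves the touched entry to the end of the iteration order, so the entry returned as eldest is always the least recently used one.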
