Design and implement a data structure for a Least Recently Used (LRU) cache that supports get and put.
Analysis
The key to solving this problem is to use a doubly linked list, which enables us to move nodes quickly.
The LRU cache is a hash table of keys to doubly linked list nodes. The hash table makes get() run in O(1), and the doubly linked list makes adding and removing nodes O(1).
Java Solution
Define a doubly linked list node:
class Node {
    int key;
    int value;
    Node prev;
    Node next;

    public Node(int key, int value) {
        this.key = key;
        this.value = value;
    }
}
By analyzing get and put, we can see that they boil down to 2 basic operations: 1) removeNode(Node t) and 2) offerNode(Node t).
import java.util.HashMap;

class LRUCache {
    Node head;
    Node tail;
    HashMap<Integer, Node> map = null;
    int cap = 0;

    public LRUCache(int capacity) {
        this.cap = capacity;
        this.map = new HashMap<>();
    }

    public int get(int key) {
        if (map.get(key) == null) {
            return -1;
        }

        // move the accessed node to the tail (most recently used position)
        Node t = map.get(key);
        removeNode(t);
        offerNode(t);

        return t.value;
    }

    public void put(int key, int value) {
        if (map.containsKey(key)) {
            Node t = map.get(key);
            t.value = value;

            // move to tail
            removeNode(t);
            offerNode(t);
        } else {
            if (map.size() >= cap) {
                // delete the head (least recently used entry)
                map.remove(head.key);
                removeNode(head);
            }

            // add to tail
            Node node = new Node(key, value);
            offerNode(node);
            map.put(key, node);
        }
    }

    private void removeNode(Node n) {
        if (n.prev != null) {
            n.prev.next = n.next;
        } else {
            head = n.next;
        }

        if (n.next != null) {
            n.next.prev = n.prev;
        } else {
            tail = n.prev;
        }
    }

    private void offerNode(Node n) {
        if (tail != null) {
            tail.next = n;
        }

        n.prev = tail;
        n.next = null;
        tail = n;

        if (head == null) {
            head = tail;
        }
    }
}
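For illustration, here is a small usage sketch of the class above (the LRUCacheDemo class name is just for this example, and it assumes the Node and LRUCache classes are in the same file). It shows how a get() refreshes an entry so that a different key gets evicted:

public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);   // capacity of 2

        cache.put(1, 1);
        cache.put(2, 2);
        cache.get(1);                       // key 1 is now most recently used, key 2 is least recently used
        cache.put(3, 3);                    // cache is full, so key 2 is evicted

        System.out.println(cache.get(2));   // -1 (evicted)
        System.out.println(cache.get(1));   // 1
        System.out.println(cache.get(3));   // 3
    }
}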
Perfect. Thanks.
Hey, I made a video explaining the solution using a doubly linked list and a hashmap; you can check it out here!
Please do subscribe if you find it helpful:
https://www.youtube.com/watch?v=uIojskB-xAo
How about the poll() method?
Correct answer, you are right: the key is required in the node.
The key is needed when the tail element is removed from the map:
map.remove(tail.key);
What is the significance of the data member 'key' in class 'Node'? I think it is not required, because we already store the key in the HashMap as a key. So the Node class should have only three data members: value, prev, and next.
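For context, this is the eviction path in the solution above; the key stored in the node is what lets the map entry be removed in O(1) when the oldest node is evicted (the map alone cannot tell us which key a given node belongs to):

// inside put(), when the cache is full (taken from the solution above):
if (map.size() >= cap) {
    map.remove(head.key);   // head.key is what lets us find and remove the map entry in O(1)
    removeNode(head);       // unlink the oldest node from the list
}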
if(head!=null)
head.pre = n;
What is the significance of this when you are then doing
head = n;
in the next line? Please reply. Thanks in advance.
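For anyone else wondering: those lines come from a head-insertion helper in an older version of the post. A hedged sketch of what that helper presumably looks like, written with the field names of the current solution (setHead is an assumed name; the comment's pre corresponds to prev here), is below. The head.prev = n line makes the old head point back at the new node; without it the backward links would be broken even though head = n is executed afterwards:

private void setHead(Node n) {
    n.next = head;          // the new node goes in front of the current head
    n.prev = null;          // it no longer has a predecessor
    if (head != null) {
        head.prev = n;      // the old head must point back to the new node; without this,
                            // removing the old head later via its neighbours would corrupt the list
    }
    head = n;               // only now does the new node become the head
    if (tail == null) {
        tail = head;        // the first node in an empty list is also the tail
    }
}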
Does your solution work with InterviewBit – https://www.interviewbit.com/problems/lru-cache/ ?
It would be easier to read and understand if we:
1. Maintain head as a dummy node
2. Use a circular doubly linked list
A sketch of that approach is below.
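A minimal, hedged sketch of that suggestion (the class and method names are assumed, and it reuses the Node class from the solution above). With a single dummy node whose next/prev point at the most and least recently used entries, remove and insert need no null checks:

class LRUCacheSentinel {
    private final java.util.HashMap<Integer, Node> map = new java.util.HashMap<>();
    private final Node sentinel = new Node(0, 0);  // dummy node: sentinel.next = most recent, sentinel.prev = least recent
    private final int cap;

    LRUCacheSentinel(int capacity) {
        this.cap = capacity;
        sentinel.next = sentinel;   // circular: the dummy node points at itself when the list is empty
        sentinel.prev = sentinel;
    }

    private void remove(Node n) {   // no null checks needed thanks to the dummy node
        n.prev.next = n.next;
        n.next.prev = n.prev;
    }

    private void addFirst(Node n) { // insert right after the dummy node (most recently used position)
        n.next = sentinel.next;
        n.prev = sentinel;
        sentinel.next.prev = n;
        sentinel.next = n;
    }

    public int get(int key) {
        Node n = map.get(key);
        if (n == null) return -1;
        remove(n);
        addFirst(n);
        return n.value;
    }

    public void put(int key, int value) {
        Node n = map.get(key);
        if (n != null) {
            n.value = value;
            remove(n);
            addFirst(n);
            return;
        }
        if (map.size() >= cap) {
            Node lru = sentinel.prev;   // the least recently used node sits just before the dummy node
            remove(lru);
            map.remove(lru.key);
        }
        Node fresh = new Node(key, value);
        addFirst(fresh);
        map.put(key, fresh);
    }
}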
Capacity is the total capacity; we need not update it with every push and remove operation.
We already check the HashMap size against the capacity, and that should be enough.
Correct me if I am missing something here.
Your ArrayList remove takes O(n), instead of the O(1) removal you get with a linked list node reference; that is why you would be penalized.
Quickest and easiest way, but not sure if you should use this in interviews.
You certainly can do this with a LinkedList, as you rightly suggested. However, it ends up being super slow. The problem lies in the fact that there is no remove(Node n) method in the API. There are only removeFirstOccurrence(Object o) (which entails a linear search) and remove(int index), which is useful only if you know the index of the node in question. The first option costs O(n) time, and the second sounds nightmarish to implement.
The benefit of using your own home-grown nodes is that you can let them keep references to their neighbors and thus get O(1) removal time. By way of example, the following will work, but it will time out on LeetCode when you get to the huge test cases:
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

public class LRUCache {
    private int capacity;
    private Map<Integer, Integer> lookUp;
    private LinkedList<Integer> nodes;

    public LRUCache(int capacity) {
        this.capacity = capacity;
        lookUp = new HashMap<>();
        nodes = new LinkedList<>();
    }

    public int get(int key) {
        Integer val = lookUp.get(key);
        if (val == null) {
            return -1;
        } else {
            nodes.removeFirstOccurrence(key);  // O(n) linear scan
            nodes.addFirst(key);
            return val;
        }
    }

    public void set(int key, int value) {
        if (lookUp.get(key) == null) {
            if (nodes.size() == capacity) {
                int last = nodes.removeLast();
                lookUp.remove(last);
            }
            nodes.addFirst(key);
        } else {
            get(key);  // moves the key to the front
        }
        lookUp.put(key, value);
    }
}
Hi,
Your algorithm is actually quite good.
There are only a few details to make it better.
1) When the cache reaches its max capacity, you only remove the end of the list. You should also remove the entry from the hash table:
if (map.size() >= capacity) {
    map.remove(end.key);
    remove(end);
    setHead(newnode);
}
2) Don’t forget to update capacity every time you add a new node.
Cheers,
RC
Would I be penalized if I just used an ArrayList instead of creating a doubly linked list? Is there a reason why a doubly linked list is used in terms of efficiency? Is it because the ArrayList remove function takes O(n) time?
import java.util.ArrayList;
import java.util.HashMap;

public class LRUCache {
    HashMap<Integer, Integer> LRU = new HashMap<>();
    private int Capacity;
    private ArrayList<Integer> leastRecent = new ArrayList<>();

    public LRUCache(int capacity) {
        this.Capacity = capacity;
    }

    public int get(int key) {
        if (LRU.get(key) == null) {
            return -1;
        } else {
            leastRecent.remove((Integer) key);  // O(n) removal by value
            leastRecent.add(key);
            return LRU.get(key);
        }
    }

    public void set(int key, int value) {
        if (LRU.get(key) == null) {
            if (LRU.size() < Capacity) {
                LRU.put(key, value);
                leastRecent.add(key);
            } else {
                int remove = leastRecent.remove(0);  // evict the least recently used key
                LRU.remove(remove);
                LRU.put(key, value);
                leastRecent.add(key);
            }
        } else {
            if (LRU.size() < Capacity) {
                leastRecent.remove((Integer) key);
                LRU.put(key, value);
                leastRecent.add(key);
            } else {
                leastRecent.remove((Integer) key);
                LRU.put(key, value);
                leastRecent.add(key);
            }
        }
    }
}
LinkedHashMap keeps track of the order in which entries are added (or accessed).
Combined with removeEldestEntry, it removes the oldest entry once a threshold is reached.
1. In the constructor, the true flag says that we want the ordering to be based on access, so the least recently accessed entry is the one that gets removed.
2. In the overridden method, we say: remove the eldest entry only once we have exceeded cacheSize.
Hope this helps.
Hi, can you please explain how the above code works?
Hi, can you please explain how the above code works exactly?
Because it is not first-in-first-out but least-recently-used, a node that is accessed by get needs to be unlinked and re-inserted at the head: the list grows from the head, so that node becomes the last element to be evicted.
Hi, how can I decide the size of the cache?
Why not use a LinkedList to implement the queue?
Instead of doing:
end = end.pre;
if (end != null) {
    end.next = null;
}
Can’t we just make a call to:
removeNode(end);
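For reference, the removeNode in the current version of the solution already covers the tail case internally, which is why a single call is enough:

// from removeNode(Node n) in the solution above:
if (n.next != null) {
    n.next.prev = n.prev;
} else {
    tail = n.prev;   // removing the tail: its predecessor becomes the new tail
}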
public void printKeyPriority() {
    DoubleLinkedListNode node = head.next;
    System.out.print(head.toString());
    while (node != null) {
        System.out.print(node.toString());
        node = node.next;
    }
    System.out.println("\n");
}
Call this after a get() / set().
public static void main(String[] args) {
    LRUCache lr = new LRUCache(5);
    lr.set(1, 1);
    lr.set(2, 2);
    lr.set(3, 3);
    lr.set(4, 4);
    lr.set(5, 5);

    int val = lr.get(1);
    System.out.println("" + val);

    lr.set(6, 6);
    int val2 = lr.get(2);
    System.out.println("" + val2);
}
How do you implement a test for this?
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache extends LinkedHashMap<Integer, Integer> {
    private int cacheSize;

    public LRUCache(int size) {
        super(size, 0.75f, true);  // true = access order, so iteration goes from least to most recently used
        this.cacheSize = size;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        // remove the oldest element when the size limit is exceeded
        return size() > cacheSize;
    }
}
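A quick usage sketch of this LinkedHashMap-based version (the demo class name is just for this example; it assumes the LRUCache class above is on the classpath):

public class LRUCacheLinkedHashMapDemo {
    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);   // cacheSize of 2
        cache.put(1, 1);
        cache.put(2, 2);
        cache.get(1);                       // touch key 1 so that key 2 becomes the eldest
        cache.put(3, 3);                    // size() > cacheSize, so key 2 is evicted

        System.out.println(cache.get(2));   // null (evicted)
        System.out.println(cache.get(1));   // 1
        System.out.println(cache.get(3));   // 3
    }
}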
Isn’t this a better way to do it?
http://www.codewalk.com/2012/04/least-recently-used-lru-cache-implementation-java.html