Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.

Implement the `LRUCache` class:

- `LRUCache(int capacity)` Initialize the LRU cache with positive size `capacity`.
- `int get(int key)` Return the value of the `key` if the key exists, otherwise return `-1`.
- `void put(int key, int value)` Update the value of the `key` if the `key` exists. Otherwise, add the `key-value` pair to the cache. If the number of keys exceeds the `capacity` from this operation, evict the least recently used key.

The functions `get` and `put` must each run in O(1) average time complexity.
Example 1:

```
Input
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, null, -1, 3, 4]
```

Explanation:

```
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1);    // return 1
lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3}
lRUCache.get(2);    // returns -1 (not found)
lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3}
lRUCache.get(1);    // return -1 (not found)
lRUCache.get(3);    // return 3
lRUCache.get(4);    // return 4
```
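The trace above can be checked against a minimal reference model built on Python's `collections.OrderedDict`. This is a sketch for verification only, not the interview solution, since it leans on a built-in ordered map (`LRUCacheReference` is a name chosen here for illustration):

```python
from collections import OrderedDict

class LRUCacheReference:
    """Reference model: OrderedDict's move_to_end and popitem are O(1)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        # Mark the key as most recently used by moving it to the end.
        self.data.move_to_end(key)
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            # popitem(last=False) removes the oldest (least recently used) entry.
            self.data.popitem(last=False)

cache = LRUCacheReference(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1))   # 1
cache.put(3, 3)       # evicts key 2
print(cache.get(2))   # -1
```

Replaying the full Example 1 sequence against this model reproduces the expected output `[null, null, null, 1, null, -1, null, -1, 3, 4]`.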
Constraints:

- `1 <= capacity <= 3000`
- `0 <= key <= 10^4`
- `0 <= value <= 10^5`
- At most `2 * 10^5` calls will be made to `get` and `put`.

When you get asked this question in a real-life environment, it will often be ambiguous (especially at FAANG). Make sure to ask clarifying questions in that case.
Imagine we have a collection of items with a strict limit on how many we can hold. To solve this, we'll keep a simple list of all our items and manually track how recently each one was used every single time we add or access an item.
Here's how the algorithm works in code:
```python
class LRUCacheBruteForce:
    def __init__(self, capacity):
        self.capacity = capacity
        # Use a list of tuples to store key-value pairs as described.
        self.item_collection = []
        # A separate list tracks the order of key usage.
        self.usage_history = []

    def get(self, key):
        found_index = -1
        for index, (item_key, item_value) in enumerate(self.item_collection):
            if item_key == key:
                found_index = index
                break
        if found_index != -1:
            # The key must be marked as most recently used by moving it
            # in the history list.
            self.usage_history.remove(key)
            self.usage_history.append(key)
            return self.item_collection[found_index][1]
        return -1

    def put(self, key, value):
        found_index = -1
        for index, (item_key, item_value) in enumerate(self.item_collection):
            if item_key == key:
                found_index = index
                break
        if found_index != -1:
            self.item_collection[found_index] = (key, value)
            # An existing item's access also updates its position
            # in the usage history.
            self.usage_history.remove(key)
            self.usage_history.append(key)
        else:
            if len(self.item_collection) >= self.capacity:
                # To make space, we must identify and remove the
                # least recently used item.
                key_to_evict = self.usage_history[0]
                self.usage_history.pop(0)
                eviction_index = -1
                for index, (item_key, item_value) in enumerate(self.item_collection):
                    if item_key == key_to_evict:
                        eviction_index = index
                        break
                self.item_collection.pop(eviction_index)
            self.item_collection.append((key, value))
            self.usage_history.append(key)
```
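Every operation in this version scans `item_collection` and mutates `usage_history` via `remove` or `pop(0)`, all of which are O(n) in the list length, so the cache slows down as it fills. A small self-contained timing sketch (exact numbers are machine-dependent; the function names are illustrative) shows the gap between `list.pop(0)` and an O(1) structure like `collections.deque`:

```python
import timeit
from collections import deque

n = 20_000

def drain_list():
    items = list(range(n))
    while items:
        items.pop(0)       # O(n) per call: shifts every remaining element left

def drain_deque():
    items = deque(range(n))
    while items:
        items.popleft()    # O(1) per call

list_time = timeit.timeit(drain_list, number=1)
deque_time = timeit.timeit(drain_deque, number=1)
print(f"list.pop(0):     {list_time:.4f}s")
print(f"deque.popleft(): {deque_time:.4f}s")
```

The same linear-shift cost is what makes the brute-force `put` degrade to O(n) per call, motivating the hash map plus linked list design below.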
The core idea is to use two structures working together: a quick lookup table to find items instantly, and an ordered list to keep track of how recently items were used. This combination allows us to find, use, and remove items very quickly without searching through the entire collection each time.
Here's how the algorithm works in code:
```python
class Node:
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.previous_node = None
        self.next_node = None


class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache_directory = {}
        self.head_sentinel = Node(0, 0)
        self.tail_sentinel = Node(0, 0)
        self.head_sentinel.next_node = self.tail_sentinel
        self.tail_sentinel.previous_node = self.head_sentinel

    def _remove_node(self, node_to_remove):
        previous_node = node_to_remove.previous_node
        next_node = node_to_remove.next_node
        previous_node.next_node = next_node
        next_node.previous_node = previous_node

    def _add_to_front(self, node_to_add):
        node_to_add.previous_node = self.head_sentinel
        node_to_add.next_node = self.head_sentinel.next_node
        self.head_sentinel.next_node.previous_node = node_to_add
        self.head_sentinel.next_node = node_to_add

    def get(self, key):
        if key in self.cache_directory:
            found_node = self.cache_directory[key]
            # Move the accessed node to the front to mark it as most recently used.
            self._remove_node(found_node)
            self._add_to_front(found_node)
            return found_node.value
        return -1

    def put(self, key, value):
        if key in self.cache_directory:
            node_to_update = self.cache_directory[key]
            node_to_update.value = value
            self._remove_node(node_to_update)
            self._add_to_front(node_to_update)
            return
        # If the cache is full, we must evict the least recently used item
        # before adding a new one.
        if len(self.cache_directory) == self.capacity:
            least_recently_used_node = self.tail_sentinel.previous_node
            self._remove_node(least_recently_used_node)
            # Remove the evicted item's entry from the directory
            # to maintain consistency.
            del self.cache_directory[least_recently_used_node.key]
        new_node = Node(key, value)
        self.cache_directory[key] = new_node
        # Place the new item at the front, as it is now the most recently used item.
        self._add_to_front(new_node)
```
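The sentinel-node pointer surgery is the part that trips most candidates up, so here is an isolated, self-contained sketch of just the list operations, with an `order` helper (added here for inspection only) that walks from the head sentinel to show the most-recent-to-least-recent sequence:

```python
class Node:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.previous_node = self.next_node = None

head = Node(0, 0)  # head sentinel: front == most recently used
tail = Node(0, 0)  # tail sentinel: back == least recently used
head.next_node, tail.previous_node = tail, head

def add_to_front(node):
    node.previous_node, node.next_node = head, head.next_node
    head.next_node.previous_node = node
    head.next_node = node

def remove_node(node):
    node.previous_node.next_node = node.next_node
    node.next_node.previous_node = node.previous_node

def order():
    keys, node = [], head.next_node
    while node is not tail:
        keys.append(node.key)
        node = node.next_node
    return keys

a, b, c = Node(1, "a"), Node(2, "b"), Node(3, "c")
for node in (a, b, c):
    add_to_front(node)
print(order())     # [3, 2, 1]: 3 was added last, so it is most recent

remove_node(b)
add_to_front(b)    # "touch" key 2, as get/put would
print(order())     # [2, 3, 1]
```

Because the sentinels always exist, `_remove_node` and `_add_to_front` never need to special-case an empty list or an end-of-list node.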
| Case | How to Handle |
|---|---|
| Initializing the cache with a capacity of zero | The cache should never store any items, so any put operation should immediately evict the new item and get should always return -1. |
| Initializing the cache with a capacity of one | The cache can only hold a single item, so every put operation either updates the existing item or replaces it with the new one. |
| Performing a get operation on an empty cache | The get operation must return -1 as the key cannot exist in an empty cache. |
| Performing a put operation on a key that already exists | The existing value should be updated, and the key should be marked as the most recently used item without changing the cache size. |
| Performing a put operation when the cache is full with a new key | The least recently used item must be evicted from both the hash map and the doubly linked list before the new item is inserted. |
| Performing a get operation on a key that exists | This operation should mark the accessed key as the most recently used item by moving its node to the head of the list. |
| A rapid sequence of put operations that exceeds the capacity | The solution must efficiently evict the LRU item for each new insertion after the cache reaches its capacity limit. |
| Getting the least recently used item and then immediately putting a new item | Getting the LRU item makes it the MRU item, so the subsequent put operation should evict the new LRU item, which was previously the second least recently used. |
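These edge cases translate directly into quick unit checks. The sketch below runs them against a minimal implementation built on Python 3.7+ dict insertion ordering (`TinyLRU` is a stand-in name; any correct LRU cache, including the linked-list solution above, should pass the same assertions):

```python
class TinyLRU:
    """Minimal LRU sketch using Python 3.7+ dict insertion order (oldest first)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}

    def get(self, key):
        if key not in self.data:
            return -1
        self.data[key] = self.data.pop(key)  # re-insert to mark as most recent
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.pop(key)
        elif len(self.data) == self.capacity:
            del self.data[next(iter(self.data))]  # evict oldest (LRU) key
        self.data[key] = value

# get on an empty cache returns -1
cache = TinyLRU(2)
assert cache.get(42) == -1

# capacity one: every new key replaces the old one
cache = TinyLRU(1)
cache.put(1, 1)
cache.put(2, 2)
assert cache.get(1) == -1 and cache.get(2) == 2

# put on an existing key updates in place without evicting anything
cache = TinyLRU(2)
cache.put(1, 1)
cache.put(2, 2)
cache.put(1, 10)
assert cache.get(1) == 10 and cache.get(2) == 2

# touching the LRU item shifts eviction onto the next-oldest key
cache = TinyLRU(2)
cache.put(1, 1)
cache.put(2, 2)
cache.get(1)     # key 1 becomes MRU, so key 2 is now LRU
cache.put(3, 3)  # evicts key 2
assert cache.get(2) == -1 and cache.get(1) == 1 and cache.get(3) == 3

print("all edge-case checks passed")
```

Running these same assertions against the `LRUCache` class above is a fast sanity check before walking an interviewer through the complexity analysis.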