Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.
Implement the LRUCache class:
- `LRUCache(int capacity)` Initializes the LRU cache with positive size `capacity`.
- `int get(int key)` Returns the value of the `key` if the `key` exists, otherwise returns `-1`.
- `void put(int key, int value)` Updates the value of the `key` if the `key` exists. Otherwise, adds the key-value pair to the cache. If the number of keys exceeds the `capacity` from this operation, evicts the least recently used key.

The functions `get` and `put` must each run in O(1) average time complexity.
Example 1:
Input
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, null, -1, 3, 4]
Explanation
```java
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1);    // returns 1
lRUCache.put(3, 3); // LRU key was 2; evicts key 2, cache is {1=1, 3=3}
lRUCache.get(2);    // returns -1 (not found)
lRUCache.put(4, 4); // LRU key was 1; evicts key 1, cache is {4=4, 3=3}
lRUCache.get(1);    // returns -1 (not found)
lRUCache.get(3);    // returns 3
lRUCache.get(4);    // returns 4
```
Constraints:
- `1 <= capacity <= 3000`
- `0 <= key <= 10^4`
- `0 <= value <= 10^5`
- At most `2 * 10^5` calls will be made to `get` and `put`.

When you get asked this question in a real-life environment, it will often be ambiguous (especially at FAANG). Make sure to ask clarifying questions in that case.
To manage a collection of items with a fixed size limit, the brute force approach keeps a simple list of all the items. Every time an action happens, like adding or retrieving an item, we go through the entire list to update it and find the oldest item to remove when needed.
Here's how the brute-force approach looks in code:
```python
class LRUCacheBruteForce:
    def __init__(self, capacity_limit: int):
        self.capacity_limit = capacity_limit
        self.cache_list = []  # Each entry is a key-value pair stored as a list [key, value]

    def get(self, key_to_find: int) -> int:
        item_to_move = None
        item_index = -1
        for current_index, cache_item in enumerate(self.cache_list):
            if cache_item[0] == key_to_find:
                item_to_move = cache_item
                item_index = current_index
                break
        # If the item is found, it must be moved to the front to mark it as recently used.
        if item_to_move is not None:
            self.cache_list.pop(item_index)
            self.cache_list.insert(0, item_to_move)
            return item_to_move[1]
        return -1

    def put(self, key_to_add: int, value_to_add: int) -> None:
        existing_item_index = -1
        for current_index, cache_item in enumerate(self.cache_list):
            if cache_item[0] == key_to_add:
                existing_item_index = current_index
                break
        if existing_item_index != -1:
            # If the key already exists, we remove it to re-insert it at the front later.
            self.cache_list.pop(existing_item_index)
        elif len(self.cache_list) >= self.capacity_limit:
            # If the cache is full, the least recently used item (at the end) must be removed.
            self.cache_list.pop()
        # The new or updated item is placed at the front, marking it as most recently used.
        self.cache_list.insert(0, [key_to_add, value_to_add])
```

To build an efficient LRU cache, we combine two simple ideas: a quick way to look things up (a hash map) and a way to keep track of the most recently used items (a doubly linked list). This lets us find items instantly and also know which one to remove when the cache is full.
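As a side note, Python's standard library already packages these two ideas in one type: `collections.OrderedDict` is a hash map that remembers insertion order and can move a key to the end, or pop the oldest key, in O(1). A minimal sketch under that assumption (the class name `LRUCacheOrdered` is just for illustration):

```python
from collections import OrderedDict

class LRUCacheOrdered:
    """LRU cache built on OrderedDict; the dict's ordering doubles as the usage list."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.cache.move_to_end(key)  # refreshing an existing key also updates recency
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used key
```

In an interview you would usually still be expected to hand-roll the hash map plus doubly linked list, which is exactly what the next implementation does.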
Here's how the efficient approach looks in code:
```python
class Node:
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.previous_node = None
        self.next_node = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache_directory = {}
        self.head_node = Node(0, 0)
        self.tail_node = Node(0, 0)
        self.head_node.next_node = self.tail_node
        self.tail_node.previous_node = self.head_node

    def _remove_node(self, node_to_remove):
        previous_one = node_to_remove.previous_node
        next_one = node_to_remove.next_node
        previous_one.next_node = next_one
        next_one.previous_node = previous_one

    def _add_to_front(self, node_to_add):
        node_to_add.previous_node = self.head_node
        node_to_add.next_node = self.head_node.next_node
        self.head_node.next_node.previous_node = node_to_add
        self.head_node.next_node = node_to_add

    def get(self, key_to_find):
        if key_to_find in self.cache_directory:
            node_to_move = self.cache_directory[key_to_find]
            # This item was just used, so it must be moved to the front to mark it as most recent.
            self._remove_node(node_to_move)
            self._add_to_front(node_to_move)
            return node_to_move.value
        return -1

    def put(self, key_to_add, value_to_add):
        if key_to_add in self.cache_directory:
            node_to_update = self.cache_directory[key_to_add]
            self._remove_node(node_to_update)
            node_to_update.value = value_to_add
            self._add_to_front(node_to_update)
            return
        # If the cache is full, we must remove the least recently used item to make space.
        if len(self.cache_directory) == self.capacity:
            least_recent_node = self.tail_node.previous_node
            self._remove_node(least_recent_node)
            # The evicted item's key must also be removed from the directory to maintain consistency.
            del self.cache_directory[least_recent_node.key]
        new_node = Node(key_to_add, value_to_add)
        self.cache_directory[key_to_add] = new_node
        self._add_to_front(new_node)
```

| Case | How to Handle |
|---|---|
| Capacity of 0 or negative | The cache should ideally reject or handle such capacities gracefully, perhaps by throwing an error or initializing with a capacity of 1. |
| Capacity of 1 | Every `put` operation will evict the single existing item, and every `get` will return the item if it exists or -1 otherwise. |
| Accessing a key that does not exist in the cache | The `get` operation should return a specific indicator, typically -1, to signify that the key is not present. |
| Inserting a key that already exists | The existing value should be updated, and the key should be moved to the most recently used position. |
| Cache is full and a new item is inserted | The least recently used item must be identified and evicted to make space for the new item. |
| Repeatedly getting the same item without new insertions | Accessing an item multiple times without other insertions should not change its 'recently used' status relative to other items that haven't been accessed. |
| Sequence of gets and puts that continuously accesses the same items | The LRU logic must correctly track the usage order even under heavy, repetitive access patterns. |
| Maximum capacity reached and then all items are accessed in order of recency | The eviction policy must correctly identify and remove the true least recently used item when new insertions occur. |
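Several rows of this table can be turned into quick checks. The snippet below uses a compact `OrderedDict`-based stand-in (named `TinyLRU` purely for illustration, not the article's implementation) so that it runs on its own:

```python
from collections import OrderedDict

class TinyLRU:
    """Compact LRU stand-in used only to exercise the edge cases above."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1  # missing key: return the -1 sentinel
        self.data.move_to_end(key)
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)  # overwriting also refreshes recency
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used key

# Capacity of 1: every put evicts the previous item.
tiny = TinyLRU(1)
tiny.put(1, 10)
tiny.put(2, 20)
assert tiny.get(1) == -1 and tiny.get(2) == 20

# Inserting an existing key updates its value and moves it to most recently used.
pair = TinyLRU(2)
pair.put(1, 10)
pair.put(2, 20)
pair.put(1, 11)  # key 1 becomes most recently used
pair.put(3, 30)  # evicts key 2, the true LRU
assert pair.get(2) == -1 and pair.get(1) == 11 and pair.get(3) == 30

# Repeated gets of the same item leave the relative order of others intact.
rep = TinyLRU(2)
rep.put(1, 10)
rep.put(2, 20)
rep.get(1)
rep.get(1)  # key 2 is now the LRU
rep.put(3, 30)
assert rep.get(2) == -1 and rep.get(1) == 10
```

The capacity-0-or-negative row is deliberately not exercised here, since the problem's constraints guarantee a positive capacity and the right behavior for invalid capacities is a design decision.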