How I Designed an LRU Cache in a Coding Interview (and What I Learned Along the Way)
Preparing for System Design Interviews? Join ByteByteGo now for a more structured preparation. They're also currently offering a rare 50% discount on their lifetime plan.
A few months ago, I was interviewing for a low-latency trading role at one of those firms you know, and the interviewer dropped a classic question:
“Can you implement an LRU Cache from scratch — without using any built-in library like LinkedHashMap?”
It sounds simple on paper, but under interview pressure, it tests everything — your data structure fundamentals, your ability to clarify requirements, and how cleanly you can reason about performance.
I’ll walk you through exactly how I approached it — my thought process, design choices, mistakes, and how I finally coded a working solution using a HashMap and a Doubly Linked List.
Along the way, I’ll also share some fantastic learning platforms that helped me prepare — ByteByteGo, Exponent, AlgoMonster, and Codemia — each of which focuses on a slightly different (but equally vital) part of technical interviews.
Step 1: Clarify before you code
Even though I had seen this question before over my long experience, the first thing I did was slow down and ask clarifying questions:
- What should get() return if the key doesn't exist? (-1)
- Should updating an existing key move it to most-recently-used? (Yes)
- What happens if capacity is zero or negative? (No-op or -1)
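Before coding, it can help to pin down these semantics with a reference model. Java's LinkedHashMap in access-order mode behaves exactly like the cache we are about to build, so a quick sketch of the agreed behavior (fine for checking expectations, not as the interview answer — the interviewer ruled this class out) might look like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruSemantics {
    public static void main(String[] args) {
        int capacity = 2;
        // Third constructor argument "true" = access order, i.e. LRU order.
        Map<Integer, Integer> cache = new LinkedHashMap<Integer, Integer>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > capacity; // evict once we exceed capacity
            }
        };
        cache.put(1, 10);
        cache.put(2, 20);
        cache.get(1);        // touch key 1 -> key 2 becomes least recently used
        cache.put(3, 30);    // exceeds capacity, so key 2 is evicted
        System.out.println(cache.containsKey(2));          // false
        System.out.println(cache.getOrDefault(2, -1));     // -1, the agreed miss value
    }
}
```

This is exactly the behavior our hand-rolled version has to reproduce.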
Taking 30 seconds to confirm those details immediately calmed me down.
I learned that confidence in interviews doesn’t come from coding speed — it comes from clarity.
This mindset shift came from the repeated mock interviews I practiced on Exponent — their peer and AI mock sessions simulate exactly this kind of question, forcing you to explain your reasoning out loud. (They also run up to 70% off annual memberships, so it's worth checking if you're preparing for FAANG-level interviews.)
Step 2: Picking the right data structures
The interviewer emphasized O(1) time for both get() and put().
That constraint alone points to a HashMap — but we also need to track usage order efficiently, so a Doubly Linked List becomes the natural companion.
- HashMap → for O(1) lookups (key -> node)
- Doubly Linked List → to maintain usage order
  - Head = most recently used
  - Tail = least recently used
When we call get(key), we move the node to the head (marking it as recently used).
When we call put(key, value):
If the key exists, update its value and move to head.
If not, insert a new node at the head.
If capacity is exceeded, remove the tail node (least recently used).
Here is how a sample LRU Cache implementation using a HashMap and a Doubly Linked List looks.
Step 3: Java implementation (no libraries, clean logic)
Here’s the exact version I wrote in the interview:
import java.util.HashMap;
import java.util.Map;

public class LRUCache {

    // Doubly linked list node; each node stores its key as well, so we
    // can remove the entry from the map when the node is evicted.
    private class Node {
        int key, value;
        Node prev, next;
        Node(int k, int v) { key = k; value = v; }
    }

    private final int capacity;
    private final Map<Integer, Node> map;
    private final Node head, tail; // dummy sentinels: head.next = MRU, tail.prev = LRU

    public LRUCache(int capacity) {
        this.capacity = capacity;
        this.map = new HashMap<>();
        head = new Node(-1, -1);
        tail = new Node(-1, -1);
        head.next = tail;
        tail.prev = head;
    }

    public int get(int key) {
        Node node = map.get(key);
        if (node == null) return -1;
        moveToHead(node); // accessed -> most recently used
        return node.value;
    }

    public void put(int key, int value) {
        if (capacity == 0) return; // handle the degenerate case early
        Node node = map.get(key);
        if (node != null) {
            node.value = value;
            moveToHead(node);
        } else {
            Node newNode = new Node(key, value);
            map.put(key, newNode);
            addToHead(newNode);
            if (map.size() > capacity) {
                Node lru = popTail();
                map.remove(lru.key); // evict from the map too (see Step 4)
            }
        }
    }

    private void addToHead(Node node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    private void removeNode(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void moveToHead(Node node) {
        removeNode(node);
        addToHead(node);
    }

    private Node popTail() {
        Node node = tail.prev; // the real LRU node sits just before the dummy tail
        removeNode(node);
        return node;
    }
}
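To sanity-check the class, I like running the standard LeetCode-style sequence against it. Here is a self-contained harness (the same class as above, condensed into a nested class so this file compiles on its own):

```java
import java.util.HashMap;
import java.util.Map;

public class LRUCacheDemo {
    // Condensed copy of the LRUCache class above, unchanged in logic.
    static class LRUCache {
        private class Node { int key, value; Node prev, next;
            Node(int k, int v) { key = k; value = v; } }
        private final int capacity;
        private final Map<Integer, Node> map = new HashMap<>();
        private final Node head = new Node(-1, -1), tail = new Node(-1, -1);
        LRUCache(int capacity) { this.capacity = capacity; head.next = tail; tail.prev = head; }
        int get(int key) { Node n = map.get(key); if (n == null) return -1; moveToHead(n); return n.value; }
        void put(int key, int value) {
            if (capacity == 0) return;
            Node n = map.get(key);
            if (n != null) { n.value = value; moveToHead(n); return; }
            Node nn = new Node(key, value); map.put(key, nn); addToHead(nn);
            if (map.size() > capacity) { Node lru = tail.prev; removeNode(lru); map.remove(lru.key); }
        }
        private void addToHead(Node n) { n.next = head.next; n.prev = head; head.next.prev = n; head.next = n; }
        private void removeNode(Node n) { n.prev.next = n.next; n.next.prev = n.prev; }
        private void moveToHead(Node n) { removeNode(n); addToHead(n); }
    }

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);
        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println(cache.get(1)); // 1 (key 1 is now most recently used)
        cache.put(3, 3);                  // evicts key 2, the LRU entry
        System.out.println(cache.get(2)); // -1 (evicted)
        cache.put(4, 4);                  // evicts key 1
        System.out.println(cache.get(1)); // -1
        System.out.println(cache.get(3)); // 3
        System.out.println(cache.get(4)); // 4
    }
}
```

Walking through a sequence like this out loud is also a great way to show the interviewer your eviction logic works before they ask.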
This design ensures:
- Time complexity: O(1) for both get and put
- Space complexity: O(capacity)
Step 4: Edge cases and debugging
I remember hitting one small bug — I forgot to remove the evicted key from the HashMap after popping the tail.
The interviewer smiled and said, “Walk me through what happens when the cache overflows.” That's when I spotted the missing map.remove(lru.key) line.
A few lessons I noted for myself:
Use dummy head and tail nodes to simplify pointer logic.
Handle capacity = 0 early.
Separate helper functions (addToHead, removeNode, etc.) to reduce mental load.
Step 5: Scaling this design (system design perspective)
A great follow-up in such interviews is:
“How would you scale this to a distributed system?”
That’s where concepts from system design interviews come into play — cache coherence, consistency, eviction strategies, and distributed coordination (Redis, Memcached, consistent hashing, etc.).
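To make the follow-up concrete, here is a toy consistent-hashing ring in Java. Everything in it (the node names, the FNV-style hash, 100 virtual nodes) is illustrative only; it just shows the core idea that adding or removing a cache server remaps only a fraction of keys:

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHashDemo {
    // Ring of hash -> node name; each server appears many times as "virtual nodes"
    // so keys spread evenly across servers.
    private final SortedMap<Integer, String> ring = new TreeMap<>();
    private static final int VNODES = 100;

    // Simple FNV-1a style hash for illustration; real systems use stronger hashes.
    private static int hash(String s) {
        int h = 0x811c9dc5;
        for (char c : s.toCharArray()) { h ^= c; h *= 0x01000193; }
        return h & 0x7fffffff; // keep non-negative
    }

    void addNode(String node) {
        for (int i = 0; i < VNODES; i++) ring.put(hash(node + "#" + i), node);
    }

    // A key maps to the first virtual node clockwise from its hash.
    String nodeFor(String key) {
        if (ring.isEmpty()) return null;
        SortedMap<Integer, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }

    public static void main(String[] args) {
        ConsistentHashDemo ch = new ConsistentHashDemo();
        ch.addNode("cache-a");
        ch.addNode("cache-b");
        ch.addNode("cache-c");
        // Same key always routes to the same server until the ring changes.
        System.out.println(ch.nodeFor("user:42"));
    }
}
```

In an interview, even sketching this much signals that you understand why naive modulo-based sharding reshuffles almost every key when a node joins or leaves.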
To get better at this layer, I studied from ByteByteGo, which explains caching, load balancing, and distributed design patterns visually. Their diagrams make even tough topics like “Design YouTube” or “Scale from zero to millions” intuitive.
If you’re preparing for interviews, they currently have a 50% lifetime discount — an insane value considering their catalog also covers Coding Interview Patterns, Machine Learning System Design, Object-Oriented Design, and even Generative AI System Design courses.
I got their lifetime plan and recommend the same, because this is evergreen material you will find yourself referring back to every now and then. With the lifetime plan, you are also immune to any future price increases while still getting all content updates for free.
Here is the link to get the plan - 50% OFF on ByteByteGo
Step 6: Keep practicing patterns
One interview doesn’t make you fluent — it just shows you where your weak spots are.
That’s why I kept revisiting platforms like:
AlgoMonster — for bite-sized algorithm pattern practice.
Codemia.io — structured, project-driven learning paths for mastering System Design and DSA.
Exponent — for peer mock interviews and realistic simulations.
ByteByteGo — for visually learning how these data structures come to life in real systems.
All four combined cover everything you need — from coding patterns to system design to behavioral prep.
Final thoughts
That LRU Cache question ended up being one of my favorite interviews.
Not because I nailed it perfectly, but because I learned how to stay calm, think in layers, and build from first principles.
If you’re preparing for similar interviews — whether it’s FAANG, a startup, or a system design role — keep this in mind:
you don’t need to memorize 500 problems. You just need to understand 50 deeply and tell a story around each.
That’s how you stand out.
If you found this helpful:
Learn System Design visually with ByteByteGo (currently 50% lifetime offer)
Practice algorithmic thinking with AlgoMonster (currently 50% lifetime offer)
Get structured interview prep with Codemia.io (currently 60% lifetime offer)
Do realistic mock interviews on Exponent (currently 70% lifetime offer)
I had seen this question a couple of times before, and I got it almost right, except for the doubly linked list part.