The most surprising thing about Valkey eviction policies is that they don’t evict in the background the way you might think; Valkey frees memory on demand, removing keys chosen by an approximate, sampling-based algorithm at the moment a write would push it past its limit.
Let’s see this in action. Imagine a Valkey instance with a maxmemory of 100MB and a maxmemory-policy set to allkeys-lru. We’ll start populating it with some data.
# Start a Valkey server (for demonstration)
valkey-server --port 6379 --maxmemory 100mb --maxmemory-policy allkeys-lru
# Connect to Valkey
valkey-cli -p 6379
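If you want to confirm the limits took effect before continuing, you can query the running server (output shown assuming the startup flags above; 100mb is reported in bytes):

```shell
# Verify the memory limit and policy on the running server
valkey-cli -p 6379 CONFIG GET maxmemory
# 1) "maxmemory"
# 2) "104857600"
valkey-cli -p 6379 CONFIG GET maxmemory-policy
# 1) "maxmemory-policy"
# 2) "allkeys-lru"
```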
Now, let’s add some keys. We’ll use SET with EX to set keys with an expiry, and also just regular SET commands.
127.0.0.1:6379> SET mykey1 "value1"
OK
127.0.0.1:6379> SET mykey2 "value2" EX 60
OK
127.0.0.1:6379> SET mykey3 "value3"
OK
127.0.0.1:6379> SET mykey4 "value4" EX 120
OK
127.0.0.1:6379> SET mykey5 "value5"
OK
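Note that only some of these keys carry a TTL, which is exactly the distinction the volatile-* policies care about later. You can check with TTL; -1 means no expiry, and the positive values will vary with timing (an illustrative transcript):

```shell
127.0.0.1:6379> TTL mykey1
(integer) -1
127.0.0.1:6379> TTL mykey2
(integer) 57
```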
As we add more and more keys, Valkey records an approximate last-access time for each one. If we were to exceed the 100MB limit, it would use that information to choose victims. Let’s say mykey1 was accessed most recently.
# Simulate accessing mykey1
127.0.0.1:6379> GET mykey1
"value1"
If memory is now full and Valkey needs to make space, and allkeys-lru is configured, it will consider all keys (not just those with a TTL) and remove the one that was least recently used. In this scenario, if mykey2 (which has an expiry, but has not yet expired) was least recently used, it could be removed, even though it has a TTL. Under the volatile-* policies, a TTL only marks a key as eligible for eviction; the remaining time itself is consulted only by volatile-ttl, and keys that have actually expired are reclaimed by a separate expiry mechanism regardless of the eviction policy.
This is the core problem Valkey eviction policies solve: what to do when your cache is full. You have a finite amount of RAM, and Valkey needs a strategy to decide which data to keep and which to discard to make room for new data. The policies dictate this decision-making process.
Here’s how they work internally:
noeviction: This is the default. When memory is full, writes that require more memory fail with an error; reads continue to work. Valkey simply refuses to accept more data. It’s the safest choice if you never want to lose data, but useless for a cache.

allkeys-lru: Removes the least recently used (LRU) keys, considering all keys in the database. A good general-purpose policy if you expect most keys to be accessed with similar frequency.

volatile-lru: Removes the least recently used keys, but only among keys that have an expiry set (i.e., via EXPIRE or SET with EX/PX). Useful if you have a mix of volatile cache data and permanent data, and you only want to evict the cache data.

allkeys-random: Removes random keys from all keys in the database. Simpler than LRU, but with less predictable hit rates. Reasonable if key access patterns are highly unpredictable.

volatile-random: Removes random keys, but only among keys that have an expiry set. Similar to volatile-lru but uses random selection.

volatile-ttl: Among keys with an expiry set, removes those with the shortest remaining time-to-live first. The reasoning is that keys about to expire anyway are the cheapest to lose, so longer-lived keys survive.

allkeys-lfu: Removes the least frequently used (LFU) keys, considering all keys in the database. An improvement over LRU when some keys are accessed very frequently and should be kept, while others are accessed rarely and can be discarded. LFU tracks approximate access counts.

volatile-lfu: Removes the least frequently used keys, but only among keys that have an expiry set. Combines the benefits of LFU with the selective eviction of the volatile-* policies.
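The policy is not fixed at startup, either; you can change it on a live server with CONFIG SET, which makes it easy to experiment:

```shell
127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lfu
OK
127.0.0.1:6379> CONFIG GET maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lfu"
```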
The choice of policy significantly impacts cache hit rates and performance. For instance, allkeys-lfu often outperforms allkeys-lru because it identifies and keeps frequently accessed "hot" keys while discarding rarely accessed "cold" ones, rather than judging keys purely by their last access. Internally, LFU stores each key’s frequency in a small (8-bit) counter that is incremented probabilistically and decayed over time, avoiding the memory overhead of tracking exact access counts.
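To get a feel for that probabilistic counter, here is a minimal Python sketch of a logarithmic counter in the style Valkey uses for LFU. The LFU_LOG_FACTOR constant mirrors the default lfu-log-factor setting; the time-based decay half of the mechanism is omitted, and the formula is simplified relative to the real implementation:

```python
import random

LFU_LOG_FACTOR = 10  # mirrors the default lfu-log-factor setting

def lfu_increment(counter: int) -> int:
    """Probabilistically bump an 8-bit LFU counter.

    The higher the counter already is, the less likely it is to
    increase, so the 0-255 range can represent huge access counts.
    """
    if counter >= 255:
        return 255
    # Probability of incrementing shrinks as the counter grows.
    p = 1.0 / ((counter * LFU_LOG_FACTOR) + 1)
    if random.random() < p:
        counter += 1
    return counter

# Simulate a "hot" key accessed a hundred thousand times.
random.seed(42)  # determinism for the demonstration
c = 0
for _ in range(100_000):
    c = lfu_increment(c)
print(c)  # far below 100000: the counter grows roughly logarithmically
```

Because increments get rarer as the counter rises, a key accessed a hundred thousand times ends up with a counter around 140, well inside 8 bits.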
When you configure maxmemory-policy, you’re telling Valkey how to behave under memory pressure. It’s not just about when to evict (which is when maxmemory is reached and a new write occurs), but how to choose which key to sacrifice. The allkeys- policies look at everything in your Valkey instance, while volatile- policies restrict their view to only keys with an associated TTL.
The actual mechanism for eviction is approximate: rather than maintaining a perfectly ordered list of all keys, Valkey samples a handful of keys (the sample size is controlled by maxmemory-samples) and evicts the best candidate among them according to the chosen policy. When a write operation triggers the need for eviction, this sample-and-evict step repeats until enough memory is freed to accommodate the new data. This design keeps eviction fast and avoids blocking incoming requests for too long.
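A toy model of that sample-and-evict loop can make it concrete. This is a simplified Python sketch with invented sizes and an in-process dict standing in for the keyspace; real Valkey samples from its own hash table and tracks memory precisely:

```python
import random
import time

MAXMEMORY = 100   # pretend memory budget, in arbitrary units
SAMPLE_SIZE = 5   # mirrors the maxmemory-samples default

store = {}        # key -> (size, last_access_time)

def used_memory():
    return sum(size for size, _ in store.values())

def evict_until_fits(incoming_size):
    """Repeat sample-and-evict until the new value fits in the budget."""
    while store and used_memory() + incoming_size > MAXMEMORY:
        sample = random.sample(list(store), min(SAMPLE_SIZE, len(store)))
        # allkeys-lru style: evict the least recently used key in the sample.
        victim = min(sample, key=lambda k: store[k][1])
        del store[victim]

def set_key(key, size):
    evict_until_fits(size)
    store[key] = (size, time.monotonic())

# Fill well past the budget; older keys get sampled out as we go.
for i in range(20):
    set_key(f"key:{i}", size=10)
print(len(store), used_memory())  # → 10 100
```

The store never exceeds the budget, and eviction work happens only on the writes that need it, which is the behavior the prose above describes.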
A common misconception is that volatile-lru will always evict an expired key before a non-expired one. This is incorrect. If a key has an expiry set, it is eligible for eviction under volatile-lru, volatile-random, and volatile-ttl, regardless of whether its TTL has elapsed. (Keys that have actually expired are reclaimed by the separate expiry mechanism anyway.) If the least recently used key among those with TTLs has not yet expired, it will still be evicted when memory is full. The TTL is just a filter for which keys to consider, not a guarantee of survival until expiry.
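The point is easy to see in a toy model: given per-key idle times and remaining TTLs, a volatile-lru pick depends only on idleness among TTL-bearing keys. All names and numbers here are invented for illustration:

```python
# key -> (idle_seconds, ttl_remaining_seconds or None for no expiry)
keys = {
    "session:1": (300, 3600),   # idle 5 min, expires in an hour
    "session:2": (10, 5),       # hot, but about to expire
    "config":    (9999, None),  # very idle, but no TTL at all
}

# volatile-lru: only TTL-bearing keys are candidates...
candidates = {k: v for k, v in keys.items() if v[1] is not None}
# ...and the victim is the most idle one, ignoring remaining TTL.
victim = max(candidates, key=lambda k: candidates[k][0])
print(victim)  # → session:1, evicted with a full hour left on its TTL
```

Note that "config" survives despite being the most idle key in the whole keyspace, because it has no expiry and is therefore invisible to the volatile-* policies.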
The next step after understanding eviction policies is to consider how to tune maxmemory itself and understand the implications of different data structures on memory consumption.