LFU evicts items based on how frequently they’ve been accessed, while LRU evicts the least recently used items.
Let’s see this in action. Imagine a cache with a few items:
GET itemA // itemA accessed
GET itemB // itemB accessed
GET itemA // itemA accessed again
GET itemC // itemC accessed
If we were using LRU and the cache was full, itemB would be the next to go because it was accessed least recently among itemA, itemB, and itemC.
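The LRU behavior above can be sketched in a few lines. This is a minimal illustration built on Python's `OrderedDict`, not Valkey's actual implementation (which uses an approximate, sampled LRU):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key in self.items:
            self.items.move_to_end(key)  # mark as most recently used
            return self.items[key]
        return None

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        elif len(self.items) >= self.capacity:
            self.items.popitem(last=False)  # evict least recently used
        self.items[key] = value

cache = LRUCache(capacity=3)
for k in ("itemA", "itemB", "itemC"):
    cache.put(k, k.upper())

# Replay the access pattern from the example
cache.get("itemA")
cache.get("itemB")
cache.get("itemA")
cache.get("itemC")

cache.put("itemD", "D")        # cache full: itemB is the LRU victim
print(cache.get("itemB"))      # → None
```

After replaying the accesses, itemB is the least recently used of the three, so adding itemD pushes it out.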
Now, let’s consider LFU with the same access pattern:
GET itemA // itemA: 1 access
GET itemB // itemB: 1 access
GET itemA // itemA: 2 accesses
GET itemC // itemC: 1 access
If the cache is full and we need to evict, itemA is safe: it has been accessed twice, making it more "valuable" under LFU. itemB and itemC each have only one access, so either is a candidate. In a tie on access count, LFU implementations often fall back to recency to break it, and since itemB was used less recently than itemC, itemB would likely be evicted.
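The tie-breaking logic can be sketched as follows. This is an illustrative LFU with an LRU tie-break, not Valkey's actual eviction code:

```python
import itertools

class LFUCache:
    """Minimal LFU sketch with LRU tie-breaking (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.values = {}
        self.freq = {}       # key -> access count
        self.last_used = {}  # key -> logical timestamp, for tie-breaking
        self.clock = itertools.count()

    def get(self, key):
        if key not in self.values:
            return None
        self.freq[key] += 1
        self.last_used[key] = next(self.clock)
        return self.values[key]

    def put(self, key, value):
        if key not in self.values and len(self.values) >= self.capacity:
            # Evict the lowest-frequency key; break ties by least recent use
            victim = min(self.values, key=lambda k: (self.freq[k], self.last_used[k]))
            for d in (self.values, self.freq, self.last_used):
                del d[victim]
        self.values[key] = value
        self.freq.setdefault(key, 0)
        self.last_used[key] = next(self.clock)

cache = LFUCache(capacity=3)
cache.put("itemA", 1); cache.put("itemB", 2); cache.put("itemC", 3)

# Replay the access pattern: A, B, A, C
cache.get("itemA"); cache.get("itemB"); cache.get("itemA"); cache.get("itemC")

cache.put("itemD", 4)  # itemB and itemC tie at 1 access; itemB is older → evicted
print(sorted(cache.values))  # → ['itemA', 'itemC', 'itemD']
```

itemA survives on frequency alone; between the tied itemB and itemC, recency decides.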
The core problem both LFU and LRU aim to solve is cache thrashing: when the cache is too small to hold the working set of data, leading to constant evictions and re-fetches, degrading performance. By intelligently choosing which items to keep, these policies try to maximize cache hit rates.
LRU is simpler and often performs well when access patterns are sequential or predictable. If you’re accessing a range of data and then moving on, LRU is usually effective. It assumes that if you accessed something recently, you’re likely to access it again soon.
LFU, on the other hand, is better for caches where certain items are disproportionately popular over longer periods, regardless of when they were last accessed. Think of a popular product page on an e-commerce site or a frequently queried configuration setting. LFU tries to keep these "hot" items in the cache for as long as possible.
To configure these in Valkey (or Redis), you’d use the maxmemory-policy configuration directive.
For LRU, you’d set:
maxmemory-policy allkeys-lru
This means that when maxmemory is reached, Valkey will evict the least recently used key across all keys in the database.
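Both settings can also be changed at runtime without a restart. A quick sketch, assuming a local server reachable via valkey-cli (the syntax mirrors redis-cli):

```
valkey-cli CONFIG SET maxmemory 256mb
valkey-cli CONFIG SET maxmemory-policy allkeys-lru
valkey-cli CONFIG GET maxmemory-policy
```

Runtime changes apply immediately but do not persist across restarts unless written back to the config file.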
For LFU, you have a few options:
maxmemory-policy allkeys-lfu
This evicts keys based on frequency, and when frequencies are tied, it evicts the least recently used among the tied keys.
There’s also volatile-lfu, which restricts eviction to keys with an expire set, whereas allkeys-lfu considers every key in the database.
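In a config file, an LFU setup might look like the following snippet (the memory limit is an illustrative value, not a recommendation):

```
# valkey.conf
maxmemory 512mb
maxmemory-policy allkeys-lfu
```

Without a maxmemory limit, no eviction policy ever triggers, so the two directives go together.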
A common misconception is that LFU simply keeps the N most frequently accessed items. In reality, LFU policies often decay the access counts over time. This prevents items that were popular long ago but are no longer accessed from perpetually occupying cache space. Valkey’s LFU implementation uses a probabilistic approach to approximate frequency and a decay mechanism. When an item is accessed, its frequency counter is incremented. If an item hasn’t been accessed for a while, its counter is reduced. This decay is crucial for adapting to changing access patterns.
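The probabilistic counter and decay can be sketched roughly like this. It is modeled on the behavior described for Valkey/Redis (a logarithmic 8-bit counter tuned by the lfu-log-factor and lfu-decay-time directives), but simplified; the constants here are illustrative:

```python
import random

LFU_INIT_VAL = 5   # new keys start at a small baseline, not zero
LOG_FACTOR = 10    # plays the role of the lfu-log-factor directive

def log_incr(counter):
    """Probabilistically increment an 8-bit frequency counter.

    The higher the count, the less likely another access is to bump it,
    so a 0-255 range can represent a huge number of accesses.
    """
    if counter >= 255:
        return 255
    base = max(counter - LFU_INIT_VAL, 0)
    p = 1.0 / (base * LOG_FACTOR + 1)
    return counter + 1 if random.random() < p else counter

def decay(counter, minutes_idle, decay_time=1):
    """Reduce the counter for idle keys (cf. the lfu-decay-time directive)."""
    periods = minutes_idle // decay_time if decay_time else 0
    return max(counter - periods, 0)

# A key hammered 10,000 times still ends up with a modest counter...
c = LFU_INIT_VAL
for _ in range(10_000):
    c = log_incr(c)
print(c)  # grows slowly despite 10k accesses

# ...and loses standing once it sits idle.
print(decay(c, minutes_idle=30))
```

The key property is that a formerly hot key cannot camp in the cache forever: decay steadily erodes its counter until newly popular keys outrank it.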
The main trade-off is complexity and accuracy. LRU is computationally cheaper to implement and maintain. LFU, especially with decay, requires more complex logic and potentially more CPU to track access counts accurately. The choice depends on your specific workload and whether your data exhibits long-term popularity or more transient access patterns.
The next step is understanding how maxmemory itself interacts with these eviction policies.