Valkey’s LATENCY HISTORY command doesn’t just show you latency spikes; it reveals the hidden temporal footprint of your slowest operations.

Let’s see it in action. Imagine a Valkey instance handling a mix of commands. We’ll hit it with a few slow ones and then inspect the LATENCY HISTORY.

First, let’s enable the latency monitor and connect. Valkey ships valkey-cli (redis-cli works as well).

# Connect to Valkey
valkey-cli

# Enable latency monitoring: record any event taking 100 ms or longer.
# The default threshold of 0 disables monitoring entirely.
CONFIG SET latency-monitor-threshold 100

# Set a key
SET mykey "myvalue"

# Now, let's simulate a slow command.
# We'll use a command that takes a bit longer, like ZADD with many elements.
# For demonstration, let's add 10,000 elements to a sorted set.
ZADD myzset 1 "member1" 2 "member2" ... (imagine 10,000 pairs)
# This command will take some time to execute.

# Let's do another slow one, perhaps a large HSET.
HSET myhash field1 "value1" field2 "value2" ... (imagine 10,000 pairs)
# This will also introduce latency.

# A guaranteed way to register a latency event is DEBUG SLEEP,
# which blocks the server for the given number of seconds.
DEBUG SLEEP 0.25

# Now, let's query the latency history for the "command" event,
# which covers slow command executions.
LATENCY HISTORY command

LATENCY HISTORY takes an event name as its argument (here, "command") and returns that event’s samples, something like this:

1) 1) (integer) 1678886400  # Unix timestamp of the sample
   2) (integer) 251         # Latency in milliseconds
2) 1) (integer) 1678886401
   2) (integer) 150
3) 1) (integer) 1678886460
   2) (integer) 98
... (more samples, up to 160 per event)

To see which events have been recorded at all — "command", "fast-command", "fork", "expire-cycle", and so on — use LATENCY LATEST.

This output is a time series for a single latency event, not a per-command breakdown. Each sample is a pair: a Unix timestamp and the latency measured at that moment, in milliseconds. Valkey records a sample whenever an event takes longer than latency-monitor-threshold (set in milliseconds in valkey.conf or via CONFIG SET; the default of 0 disables monitoring). Multiple slow occurrences within the same second are collapsed into one sample holding the maximum, and each event keeps at most 160 samples, discarding the oldest first. LATENCY HISTORY simply returns these samples in chronological order.
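To make the reply shape concrete, here is a minimal sketch in Python that summarizes one event’s history, assuming the flat list of (timestamp, latency-in-ms) pairs that LATENCY HISTORY returns. The sample values are illustrative.

```python
def summarize_history(samples):
    """Return (worst_ms, mean_ms, span_seconds) for one event's samples.

    `samples` is a list of (unix_timestamp, latency_ms) pairs, as
    returned by LATENCY HISTORY for a single event.
    """
    if not samples:
        return (0, 0.0, 0)
    latencies = [ms for _, ms in samples]
    timestamps = [ts for ts, _ in samples]
    worst = max(latencies)                      # worst spike seen
    mean = sum(latencies) / len(latencies)      # average spike severity
    span = max(timestamps) - min(timestamps)    # how long spikes persisted
    return (worst, mean, span)

samples = [(1678886400, 251), (1678886401, 150), (1678886460, 98)]
print(summarize_history(samples))
```

The span is often the most telling number: a large worst-case latency confined to one second is a different problem from moderate latency spread over minutes.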

The core problem LATENCY HISTORY solves is the "it was slow sometime" mystery. Without it, you’d see a general increase in response times but no clear indicator of which kind of event was the culprit or when it was happening. This command lets you pinpoint the event classes behind a performance degradation and, crucially, the temporal distribution of those slow episodes.

Internally, Valkey keeps a small per-event time series of latency samples. When an event exceeds the threshold, its duration is recorded along with the current timestamp and appended to that event’s series; LATENCY HISTORY queries this internal store. The latency-monitor-threshold configuration parameter (in valkey.conf, in milliseconds) determines the minimum latency for an event to be considered slow and thus recorded. There is no separate sampling interval — events are recorded as they occur.
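The bookkeeping described above can be modeled in a few lines. This is an illustrative Python model, not Valkey source: one-second buckets keep the maximum latency, and each event retains at most 160 samples, dropping the oldest first.

```python
from collections import deque

MAX_SAMPLES = 160  # per-event history length kept by the latency monitor

class EventHistory:
    """Toy model of one event's latency time series."""

    def __init__(self):
        self.samples = deque()  # (unix_second, latency_ms) pairs

    def record(self, now, latency_ms, threshold_ms=100):
        if latency_ms < threshold_ms:
            return  # below latency-monitor-threshold: not recorded
        if self.samples and self.samples[-1][0] == now:
            # Same second as the last sample: keep only the worst latency.
            ts, prev = self.samples[-1]
            self.samples[-1] = (ts, max(prev, latency_ms))
        else:
            self.samples.append((now, latency_ms))
            if len(self.samples) > MAX_SAMPLES:
                self.samples.popleft()  # discard the oldest sample

h = EventHistory()
h.record(1678886400, 120)
h.record(1678886400, 250)  # same second: the 120 ms sample becomes 250 ms
h.record(1678886401, 50)   # below threshold: ignored
h.record(1678886402, 130)
print(list(h.samples))     # -> [(1678886400, 250), (1678886402, 130)]
```

The same-second max rule is why a burst of slow commands shows up as one tall sample rather than many: the history answers "how bad was the worst moment in that second", not "how many slow calls happened".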

Valkey’s latency monitoring is thresholded event recording rather than exhaustive logging. It doesn’t record every single command’s latency; it logs only events that exceed the defined threshold, and it collapses multiple hits within the same second into a single maximum. This prevents overwhelming the Valkey instance with logging overhead while still providing valuable insight into performance bottlenecks. Because the data lands in one-second buckets, you can see trends and patterns of slow events over time.

What’s often missed is how LATENCY HISTORY can reveal correlated latency. If the command event spikes at the same timestamps as, say, the fork or aof-write events, it strongly suggests a system-wide issue or contention affecting all operations, rather than any single command being inherently slow in isolation. Multiple event types showing synchronized latency increases point toward resource exhaustion (CPU, memory, network, disk) or a blocking operation elsewhere that’s starving Valkey’s event loop.
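Checking for that correlation is mechanical once you have two histories. A hedged sketch: given two per-event sample lists from LATENCY HISTORY, report the timestamps where both events spiked within a small window of each other (the window size and sample values here are illustrative).

```python
def correlated_spikes(history_a, history_b, window=1):
    """Return timestamps from history_a that fall within `window`
    seconds of some sample in history_b.

    Both histories are lists of (unix_timestamp, latency_ms) pairs.
    """
    times_b = sorted(ts for ts, _ in history_b)
    hits = []
    for ts, _ in history_a:
        if any(abs(ts - tb) <= window for tb in times_b):
            hits.append(ts)
    return hits

command_ev = [(1678886400, 210), (1678886450, 180)]
fork_ev    = [(1678886401, 300), (1678886500, 250)]
print(correlated_spikes(command_ev, fork_ev))  # -> [1678886400]
```

Here the command spike at 1678886400 lines up with a fork spike one second later — classic fork-induced stalls — while the spike at 1678886450 stands alone and deserves its own investigation.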

The next logical step after identifying a slow period with LATENCY HISTORY is to inspect the slow log (SLOWLOG GET) around the same timestamps: it records the actual command and its arguments for any command exceeding slowlog-log-slower-than (in microseconds, defaulting to 10000, i.e. 10 ms).
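That cross-referencing step can also be scripted. A hedged sketch, assuming the usual slow log entry layout of (id, unix_timestamp, duration_microseconds, argument_list, ...); the entries below are illustrative:

```python
def commands_near_spike(slowlog_entries, spike_ts, window=2):
    """Return the argument vectors of slow log entries whose timestamp
    falls within `window` seconds of a latency spike timestamp."""
    return [entry[3] for entry in slowlog_entries
            if abs(entry[1] - spike_ts) <= window]

# Illustrative SLOWLOG GET entries: (id, timestamp, duration_us, args)
slowlog = [
    (12, 1678886400, 251000, ["ZADD", "myzset", "..."]),
    (13, 1678886455, 180000, ["HSET", "myhash", "..."]),
]
print(commands_near_spike(slowlog, 1678886401))  # -> [['ZADD', 'myzset', '...']]
```

This closes the loop: LATENCY HISTORY tells you when it hurt, and the slow log tells you exactly what was running at that moment.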

Want structured learning?

Take the full Valkey course →