Valkey’s data structures aren’t just buckets for data; they’re active participants in how your application logic runs, often executing operations atomically and faster than you could in your application code.

Let’s watch a Valkey List in action. Imagine you’re building a real-time feed for a social media app. New posts arrive constantly, and users need to see them in chronological order.

```python
import valkey

# Connect to Valkey (the valkey-py client exposes the class as `Valkey`)
r = valkey.Valkey(host='localhost', port=6379, db=0)

# Simulate a new post arriving
new_post_id = "post:12345"
r.lpush("feed:user1", new_post_id)  # LPUSH adds to the head (left) of the list

# Simulate another post
another_post_id = "post:12346"
r.lpush("feed:user1", another_post_id)

# User requests their feed
user_feed = r.lrange("feed:user1", 0, 9)  # LRANGE gets a range of elements

print(f"User's feed (most recent first): {user_feed}")

# Output: User's feed (most recent first): [b'post:12346', b'post:12345']
```

Here, lpush and lrange are the Python client's wrappers around the Valkey commands LPUSH and LRANGE. LPUSH prepends an element to the head (left) of the list stored at the key feed:user1; LRANGE retrieves a slice of that list. Because each command executes atomically on the server, readers never observe a half-updated list, even when thousands of posts arrive simultaneously and multiple clients are fetching the feed. This sidesteps race conditions that would be tricky to handle in application code.

The core problem Valkey’s data structures solve is providing efficient, in-memory storage and retrieval for common data patterns, while also offering atomic operations that simplify complex concurrent logic.

Strings: The most basic type. They can store any kind of data, up to 512MB. Think of them for caching simple values, session IDs, or even as counters.

  • Example: SET user:1:name "Alice" stores a simple string, and GET user:1:name retrieves it.
  • Incrementing: INCR user:1:login_count atomically increments a string value, treating it as an integer. This is far faster and safer than fetching, converting to int, incrementing, and setting back.
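That fetch-convert-increment-set round trip collapses into a single call. A minimal sketch (the helper name `record_login` is mine); it works with valkey-py's `Valkey` client, or any redis-style client exposing `incr`:

```python
def record_login(client, user_id):
    """Atomically bump a user's login counter and return the new value.

    INCR runs entirely on the server in one uninterruptible step, so
    concurrent clients can never lose an update the way a separate
    GET / modify / SET sequence could.
    """
    return client.incr(f"user:{user_id}:login_count")
```

With a live server this would be `record_login(valkey.Valkey(host='localhost', port=6379), 1)`; a missing key is treated as 0, so the first call returns 1.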

Hashes: Perfect for representing objects. Instead of storing a JSON string for a user object, you can store it as a hash where field names are keys.

  • Example: HSET user:1 name "Alice" email "alice@example.com" age 30 creates a hash. HGETALL user:1 retrieves all fields and values.
  • Benefit: You can update individual fields atomically, like HSET user:1 age 31, without touching the rest of the hash.
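In Python, the same pattern looks like the sketch below (helper names are mine; the `mapping=` keyword matches the valkey-py/redis-py `hset` signature):

```python
def save_user(client, user_id, fields):
    """Store a dict as a Valkey hash: one hash field per attribute."""
    client.hset(f"user:{user_id}", mapping=fields)

def update_age(client, user_id, age):
    """HSET on a single field is atomic and leaves the other fields alone."""
    client.hset(f"user:{user_id}", "age", age)

def load_user(client, user_id):
    """HGETALL fetches every field/value pair in one round trip."""
    return client.hgetall(f"user:{user_id}")
```

Compared with storing a JSON blob in a string, this avoids the read-parse-modify-serialize-write cycle for single-field updates.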

Lists: Ordered collections, like queues or stacks. LPUSH and RPUSH add to the left and right, respectively. LPOP and RPOP remove from the left and right.

  • Example: A task queue. RPUSH tasks "process_image_123" adds a task. LPOP tasks pulls the oldest task off the queue.
  • Blocking Operations: BLPOP tasks 5 will wait up to 5 seconds for a task to appear if the list is empty, which is a powerful primitive for worker processes.
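A worker built on these two commands might look like this sketch (helper names are mine; BLPOP returns a (queue_name, value) pair on success, or None on timeout in the Python client):

```python
def enqueue(client, queue, task):
    """RPUSH appends to the tail, so the oldest task stays at the head."""
    client.rpush(queue, task)

def next_task(client, queue, timeout=5):
    """Wait up to `timeout` seconds for a task; return it, or None.

    BLPOP lets a worker sleep inside the server instead of busy-polling
    an empty queue.
    """
    item = client.blpop(queue, timeout=timeout)
    return item[1] if item else None
```

A worker loop is then just `while (task := next_task(r, "tasks")) is not None: handle(task)`.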

Sets: Unordered collections of unique strings. Great for tags, unique visitors, or membership.

  • Example: SADD post:1:tags "tech" "valkey" "database" adds tags. SMEMBERS post:1:tags lists them.
  • Set Operations: SINTER post:1:tags post:2:tags finds common tags between two posts.
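The tagging example translates directly (helper names are mine; the intersection is computed server-side, so only the result crosses the network):

```python
def tag_post(client, post_id, *tags):
    """SADD ignores duplicates, so tagging a post twice is harmless."""
    client.sadd(f"post:{post_id}:tags", *tags)

def common_tags(client, post_a, post_b):
    """SINTER returns the set of tags the two posts share."""
    return client.sinter(f"post:{post_a}:tags", f"post:{post_b}:tags")
```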

Sorted Sets: Like Sets, but each member has an associated score. Members are ordered by score. Useful for leaderboards, rate limiting, or time-series data.

  • Example: ZADD leaderboard 1500 "player1" 1200 "player2" adds players with scores. ZRANGE leaderboard 0 9 REV WITHSCORES (or the older ZREVRANGE leaderboard 0 9 WITHSCORES) gets the top 10 players, highest score first; note that plain ZRANGE orders by ascending score, so without REV you'd get the bottom 10.
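A leaderboard sketch in Python (helper names are mine; I use the client's `zrevrange`, which walks the set highest-score-first and, with `withscores=True`, returns (member, score) pairs):

```python
def record_score(client, board, player, score):
    """ZADD sets (or overwrites) a member's score in the sorted set."""
    client.zadd(board, {player: score})

def top_players(client, board, n=10):
    """Return the top-n (player, score) pairs, highest score first."""
    return client.zrevrange(board, 0, n - 1, withscores=True)
```

Because the set stays sorted on every insert, fetching the top 10 is cheap no matter how many players there are.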

The magic of Valkey’s data structures often lies in their atomic commands. When you execute LPUSH or INCR, the entire operation happens on the server, in a single, uninterruptible step. This means you don’t have to worry about network latency introducing race conditions between reading a value and writing a modified one, or between multiple clients trying to update the same piece of data. Valkey handles the concurrency for you, at the data structure level.

What most people don’t realize is that the LRANGE command, while seemingly simple, can have performance implications if you’re retrieving a huge range from a massive list. Valkey has to walk the list’s internal structure (a quicklist: a linked list of packed nodes) to reach the requested range, which can become a bottleneck. For very large ordered collections where you only need recent items, using LTRIM to cap the list size (e.g., LTRIM feed:user1 0 99) in conjunction with LPUSH is a common pattern to keep the list manageable and LRANGE operations fast.
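The capped-feed pattern is small enough to sketch (the helper name and the 100-item cap are mine):

```python
FEED_MAX = 100  # keep only the 100 newest posts per feed

def push_to_feed(client, feed_key, post_id, max_len=FEED_MAX):
    """LPUSH the new post, then LTRIM everything past max_len.

    The list can never grow beyond max_len entries, so LRANGE over
    the feed stays cheap no matter how many posts arrive.
    """
    client.lpush(feed_key, post_id)
    client.ltrim(feed_key, 0, max_len - 1)
```

In production you might wrap the two calls in a MULTI/EXEC transaction, or at least a pipeline, so they travel to the server in a single round trip.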

Once you’re comfortable with these fundamental data structures, you’ll want to explore Valkey’s Pub/Sub capabilities for real-time messaging.

Want structured learning?

Take the full Valkey course →