Valkey, the community-driven fork of Redis, can be deployed on Kubernetes using a StatefulSet to manage its stateful nature.
Here’s how you can set up Valkey on Kubernetes, focusing on the StatefulSet configuration.
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: valkey-cluster
  labels:
    app: valkey
spec:
  serviceName: "valkey-cluster-headless"  # must match the headless Service defined below
  replicas: 3
  selector:
    matchLabels:
      app: valkey
  template:
    metadata:
      labels:
        app: valkey
    spec:
      containers:
      - name: valkey
        image: valkey/valkey:latest  # pin a specific version in production
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379  # cluster bus
          name: cluster
        command: ["valkey-server"]
        args: ["/etc/valkey/valkey.conf"]
        volumeMounts:
        - name: valkey-config-volume
          mountPath: /etc/valkey/valkey.conf
          subPath: valkey.conf
        - name: valkey-data
          mountPath: /data
      volumes:
      - name: valkey-config-volume
        configMap:
          name: valkey-config
  volumeClaimTemplates:
  - metadata:
      name: valkey-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi  # adjust storage size as needed
```
---
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: valkey-config
data:
  valkey.conf: |
    port 6379
    cluster-enabled yes
    cluster-config-file nodes.conf
    cluster-replica-no-failover no
    cluster-node-timeout 5000
    appendonly yes
    appendfilename "appendonly.aof"
    dbfilename "dump.rdb"
    dir /data
    # An empty logfile value sends logs to stdout, which suits Kubernetes logging
    logfile ""
    bind 0.0.0.0
```
---
```yaml
apiVersion: v1
kind: Service
metadata:
  name: valkey-cluster-headless
spec:
  selector:
    app: valkey
  clusterIP: None  # headless Service for StatefulSet pod DNS
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: cluster
```
This setup defines a StatefulSet for Valkey nodes, a ConfigMap for the Valkey configuration, and a headless Service to facilitate discovery within the cluster.
The StatefulSet is crucial here because Valkey, especially in cluster mode, relies on stable network identities and persistent storage for each replica. Each pod managed by the StatefulSet will have a predictable, unique hostname (e.g., valkey-cluster-0, valkey-cluster-1) and will be automatically assigned persistent storage via volumeClaimTemplates.
The ConfigMap provides the valkey.conf file, enabling cluster mode (cluster-enabled yes) and directing data persistence to the /data directory, which is mounted from the PersistentVolumeClaim.
The headless Service (clusterIP: None) doesn’t provide a single virtual IP. Instead, it allows pods to discover each other directly via DNS using their stable hostnames, which is essential for the Valkey cluster to form and maintain its topology.
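You can see these per-pod DNS records for yourself once the pods are up. A quick check, assuming the default namespace and that getent is available in the pod image (it typically is in the Debian-based official image):

```shell
# Resolve a peer pod's stable DNS name from inside valkey-cluster-0;
# the record points at that pod's current IP.
kubectl exec valkey-cluster-0 -- getent hosts \
  valkey-cluster-1.valkey-cluster-headless.default.svc.cluster.local
```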
When you apply this configuration, Kubernetes will create three Valkey pods. Each pod will have a stable identity and its own dedicated persistent volume. You can then use a tool like valkey-cli to connect to one of the pods and initiate the cluster creation process.
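Concretely, assuming the three manifests above are saved together in one file (the filename here is illustrative):

```shell
# Create the StatefulSet, ConfigMap, and headless Service
kubectl apply -f valkey-cluster.yaml

# Watch the pods come up in order: valkey-cluster-0, then -1, then -2
kubectl get pods -l app=valkey -w
```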
For example, after the pods are running, you might exec into the first pod:
```shell
kubectl exec -it valkey-cluster-0 -- valkey-cli -c -p 6379
```
And then, from within the valkey-cli prompt, start the cluster creation:
```
CLUSTER MEET <ip_of_valkey-cluster-1> 6379
CLUSTER MEET <ip_of_valkey-cluster-2> 6379
```

Note that CLUSTER MEET takes the IP and port as separate arguments, and that the node you are connected to (valkey-cluster-0) is already part of the nascent cluster, so with three replicas only two MEET commands are needed.
You’ll need the internal IPs of the other pods, because CLUSTER MEET generally accepts only literal IP addresses, not hostnames. You can look them up with kubectl get pods -l app=valkey -o wide, or resolve the stable per-pod DNS names provided by the headless service (for example valkey-cluster-1.valkey-cluster-headless.default.svc.cluster.local) with a DNS lookup from inside the cluster.
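Note that CLUSTER MEET only introduces nodes to one another; the cluster does not become operational until all 16384 hash slots have been assigned to masters. A simpler way to do both at once is valkey-cli's --cluster create helper, sketched here (it accepts host:port pairs and resolves the DNS names to pod IPs itself):

```shell
# Bootstrap a three-master cluster (no replicas) in one step.
# --cluster-yes skips the interactive confirmation prompt.
kubectl exec -it valkey-cluster-0 -- valkey-cli --cluster create \
  valkey-cluster-0.valkey-cluster-headless.default.svc.cluster.local:6379 \
  valkey-cluster-1.valkey-cluster-headless.default.svc.cluster.local:6379 \
  valkey-cluster-2.valkey-cluster-headless.default.svc.cluster.local:6379 \
  --cluster-replicas 0 --cluster-yes
```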
Once the nodes have met and all 16384 hash slots have been assigned, you can check the cluster's status from the valkey-cli prompt:

```
CLUSTER INFO
CLUSTER NODES
```
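The same checks can be run without an interactive shell; in a healthy three-node cluster, the CLUSTER INFO output should include cluster_state:ok and cluster_known_nodes:3:

```shell
kubectl exec valkey-cluster-0 -- valkey-cli -p 6379 cluster info
kubectl exec valkey-cluster-0 -- valkey-cli -p 6379 cluster nodes
```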
The most subtle aspect of this setup is the subPath in the volumeMounts for valkey-config-volume. It mounts only the valkey.conf key from the ConfigMap as a single file at /etc/valkey/valkey.conf; without subPath, the mount point would become a directory containing one file per ConfigMap key, shadowing anything else at that path. One caveat worth knowing: files mounted via subPath are not updated automatically when the ConfigMap changes, so the pods must be restarted to pick up new configuration.
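For contrast, here is what a directory-style mount of the same ConfigMap would look like (a sketch, not part of the manifests above):

```yaml
# Without subPath, the ConfigMap is mounted as a directory: each key becomes
# a file inside it, and anything previously at the mount path is shadowed.
volumeMounts:
- name: valkey-config-volume
  mountPath: /etc/valkey  # valkey.conf appears as /etc/valkey/valkey.conf
```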
The next logical step is to explore how to manage Valkey cluster upgrades and handle node failures gracefully within this Kubernetes deployment.