Valkey Search isn’t just a faster way to search your data; it fundamentally changes what you can search for, allowing you to find not just exact matches but also semantically related items and even items based on their visual or auditory characteristics.

Let’s see it in action. Imagine you have a collection of product descriptions and their associated vector embeddings, representing their meaning.

// Example Valkey data
HSET products:1 "name" "Vintage Leather Satchel" "description" "A classic distressed leather messenger bag, perfect for everyday use." "embedding" "[0.1, 0.5, -0.2, ...]"
HSET products:2 "name" "Modern Canvas Backpack" "description" "Lightweight and durable, this backpack is ideal for travel and work." "embedding" "[-0.3, 0.1, 0.7, ...]"
HSET products:3 "name" "Elegant Suede Handbag" "description" "A sophisticated accessory crafted from premium suede, adding a touch of luxury." "embedding" "[0.4, -0.1, 0.3, ...]"
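The bracketed strings above are placeholders: in practice, a VECTOR field expects the embedding as a binary blob of little-endian FLOAT32 values, not printable text. A minimal Python sketch of the conversion (the 4-dimensional vector is illustrative only; a real embedding would match the index dimension):

```python
import struct

def to_float32_blob(vector):
    """Pack a list of floats into the little-endian FLOAT32 blob a VECTOR field expects."""
    return struct.pack(f"<{len(vector)}f", *vector)

def from_float32_blob(blob):
    """Unpack a FLOAT32 blob back into a list of Python floats."""
    return list(struct.unpack(f"<{len(blob) // 4}f", blob))

embedding = [0.1, 0.5, -0.2, 0.7]
blob = to_float32_blob(embedding)        # 4 bytes per component
# This blob is what you would store in the "embedding" field with HSET,
# typically via a client library rather than the CLI.
round_trip = from_float32_blob(blob)
```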

Now, using Valkey Search, we can index this data and perform a vector similarity search.

// Index creation (simplified)
FT.CREATE products:idx ON HASH PREFIX 1 products: SCHEMA description TEXT name TAG embedding VECTOR HNSW 6 TYPE FLOAT32 DIM 1500 DISTANCE_METRIC L2

This command sets up an index named products:idx for Hash keys starting with products:. It defines description as a TEXT field for full-text search and name as a TAG field. Crucially, it defines embedding as a VECTOR field indexed with HNSW, holding FLOAT32 components with a dimension of 1500, and using the L2 distance metric for similarity comparisons.

With the index created, we can now search for products semantically similar to a query vector.

// Vector Search Query
FT.SEARCH products:idx "@embedding:[VECTOR_RANGE 0.5 $query_vec]" PARAMS 2 query_vec "<binary FLOAT32 blob>" DIALECT 2 RETURN 2 name description

// Full-Text Search Query
FT.SEARCH products:idx "distressed leather bag" RETURN 2 name description

The first query uses VECTOR_RANGE to find items whose embeddings are within a certain distance (0.5 in this example) of the supplied query vector. This is how you find items that mean something similar to your query, even if the words aren’t identical. The second query is a standard full-text search, matching descriptions that contain the terms distressed, leather, and bag; to require the exact phrase, you would quote it inside the query as "distressed leather bag".
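To make the 0.5 radius concrete, here is the L2 check that a VECTOR_RANGE filter effectively performs, sketched in Python with made-up 3-dimensional vectors (real embeddings would have the index's full dimension):

```python
import math

def l2_distance(a, b):
    """Euclidean (L2) distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

query = [0.2, 0.6, -0.1]
product_1 = [0.1, 0.5, -0.2]   # close to the query in embedding space
product_2 = [-0.3, 0.1, 0.7]   # far from the query

# VECTOR_RANGE 0.5 keeps only items whose distance is within the radius.
within = [v for v in (product_1, product_2) if l2_distance(query, v) <= 0.5]
# → only product_1 survives (distance ≈ 0.17 vs ≈ 1.07)
```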

Valkey Search combines these capabilities. You can do hybrid searches, like finding "distressed leather bags" that are also semantically similar to a given vector.

The real power comes from understanding the different indexing strategies. For full-text search, you have options like WITHSUFFIXTRIE, which speeds up suffix and wildcard matching, and STOPWORDS 0, which disables stop-word removal so common words remain searchable. For vector search, the choice of distance metric (L2, IP, or COSINE) and the underlying index algorithm (FLAT for exact search, or HNSW for Approximate Nearest Neighbor search) dramatically impacts performance and recall. HNSW, for instance, builds a layered graph where nodes are vector embeddings and edges connect similar vectors, allowing for very fast, though sometimes approximate, nearest-neighbor searches.
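The graph traversal at the heart of HNSW can be sketched in miniature: start at an entry node and greedily hop to whichever neighbor is closer to the query, stopping when no neighbor improves. This toy single-layer version (the tiny graph and all names are invented for illustration; real HNSW stacks several layers, coarse to fine) shows why lookups are fast but only approximately exact:

```python
import math

def l2(a, b):
    """Euclidean (L2) distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# A tiny single-layer proximity graph: node -> neighbors.
vectors = {
    "a": [0.0, 0.0],
    "b": [1.0, 0.0],
    "c": [1.0, 1.0],
    "d": [2.0, 1.0],
}
graph = {
    "a": ["b"],
    "b": ["a", "c"],
    "c": ["b", "d"],
    "d": ["c"],
}

def greedy_search(query, entry="a"):
    """Greedily walk toward the query; stop at a local minimum of distance."""
    current = entry
    while True:
        best = min(graph[current] + [current],
                   key=lambda n: l2(query, vectors[n]))
        if best == current:
            return current
        current = best

nearest = greedy_search([2.0, 2.0])  # → "d", reached in three hops from "a"
```

Because the walk only ever follows local edges, it can terminate in a local minimum on an unlucky graph; HNSW's extra layers and beam-width parameter exist precisely to make that rare.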

What most people miss is how to effectively combine full-text and vector search for nuanced results. The usual pattern is a hybrid query: a filter expression narrows the candidate set, then a KNN clause ranks the survivors by vector similarity. For example, (@description:distressed)=>[KNN 10 @embedding $query_vec AS score] keeps only documents whose description matches the keyword, then returns the ten nearest by embedding distance. You can also attach query attributes such as => { $weight: 2.0 } to individual text clauses to boost their contribution to the full-text score. This allows for fine-grained control over search relevance.
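If you want an explicit weighted blend of keyword relevance and semantic similarity, a common approach is to retrieve both scores and combine them client-side. A sketch of that idea (the 0.7/0.3 weights and the score values are illustrative; the vector distance is converted to a similarity so that higher is always better):

```python
def hybrid_score(text_score, vector_distance, w_text=0.3, w_vec=0.7):
    """Blend a full-text score with a similarity derived from vector distance."""
    vector_similarity = 1.0 / (1.0 + vector_distance)  # maps [0, inf) -> (0, 1]
    return w_text * text_score + w_vec * vector_similarity

# Two hypothetical candidates: one a strong keyword match,
# one semantically much closer to the query vector.
candidates = {
    "products:1": {"text_score": 0.9, "vector_distance": 0.8},
    "products:3": {"text_score": 0.2, "vector_distance": 0.1},
}
ranked = sorted(candidates,
                key=lambda k: hybrid_score(**candidates[k]),
                reverse=True)
# → products:3 outranks products:1: with 70% weight on similarity,
#   semantic closeness beats the stronger keyword match
```

Shifting the weights toward w_text flips the ranking, which is exactly the lever you want when keyword precision matters more than semantic recall.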

The next step is exploring advanced vector indexing algorithms and understanding how to tune HNSW parameters for optimal performance based on your dataset size and query latency requirements.

Want structured learning?

Take the full Valkey course →