Trivy server mode doesn’t just scan images; it changes how security scanning integrates into your CI/CD pipeline by acting as a persistent, centralized vulnerability database and scanning service.
Let’s see it in action. Imagine you have a fleet of Kubernetes clusters and a CI/CD pipeline that’s constantly building and deploying new container images. Instead of each CI job spinning up its own Trivy scanner and potentially re-scanning the same image multiple times, or worse, relying on outdated local caches, Trivy server mode provides a single source of truth.
Here’s a simplified setup. First, you’d run the Trivy server. It downloads the public vulnerability database into a local cache directory and keeps it up to date on its own; for a shared, multi-client setup, you can back the scan cache with Redis.
trivy server --listen 0.0.0.0:8080 --cache-dir /path/to/trivy/cache --db-repository ghcr.io/aquasecurity/trivy-db --cache-backend redis://localhost:6379
This command starts Trivy in server mode, listening on port 8080. It’s configured to use a local cache directory for downloaded vulnerability data, to pull database updates from the ghcr.io/aquasecurity/trivy-db repository (the server refreshes the database automatically as new versions are published), and, importantly, to use Redis as the backend for its scan cache. This Redis integration is key for performance in a multi-client environment.
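The same setup can be sketched with containers. This is a minimal illustration, not a production deployment: the image tags, network name, and host cache path are placeholders you would adapt.

```shell
# Sketch: run Redis and the Trivy server as containers on a shared network.
# Image tags, the network name, and the cache path are illustrative.
docker network create trivy-net
docker run -d --name trivy-redis --network trivy-net redis:7-alpine
docker run -d --name trivy-server --network trivy-net -p 8080:8080 \
  -v /path/to/trivy/cache:/root/.cache/trivy \
  aquasec/trivy:latest server \
  --listen 0.0.0.0:8080 \
  --cache-backend redis://trivy-redis:6379
```

Because both containers share `trivy-net`, the server reaches Redis by container name rather than `localhost`.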
Now, from your CI/CD pipeline, you can point Trivy clients to this server.
trivy image --server http://trivy-server.your-domain.com:8080 --severity HIGH,CRITICAL your-docker-image:latest
When this command runs, the Trivy client doesn’t download any vulnerability database. It pulls and analyzes the image locally, then sends the analysis results to the Trivy server, which matches them against its up-to-date vulnerability data. If the image’s layers have been scanned recently and the cached entries are still valid, the server can skip re-analysis and respond almost immediately, making subsequent scans of the same image very fast. The client receives only the scan results, never the vulnerability database itself.
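The same client call can also emit machine-readable output for downstream tooling (dashboards, report archives, deployment gates). The server URL and image name below are placeholders from the example above:

```shell
# Same scan against the central server, but write JSON for downstream tooling.
# Server URL and image name are placeholders.
trivy image \
  --server http://trivy-server.your-domain.com:8080 \
  --severity HIGH,CRITICAL \
  --format json \
  --output scan-result.json \
  your-docker-image:latest
```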
This architectural shift solves several critical problems in typical CI/CD security scanning. First, it drastically reduces the load on your CI runners: instead of each runner downloading hundreds of megabytes of vulnerability data, it makes a lightweight API call. Second, it ensures consistency: every scan, regardless of which CI job or runner executes it, uses the same up-to-date vulnerability information from a single, managed source, eliminating the "it worked on my machine" problem for scan-data freshness. Third, it centralizes control of the vulnerability data itself: one server decides when and from where the database is updated, which is the foundation for enforcing uniform security policy across all your scans.
The core problem Trivy server mode addresses is the inefficiency and inconsistency of distributed vulnerability data and scanning logic. Traditional scanning often involves downloading large vulnerability databases to each CI runner, leading to:
- High CI runner resource consumption: Downloading and storing massive DBs consumes disk space and network bandwidth.
- Stale scan data: Runners might not update their DBs frequently enough, leading to missed vulnerabilities.
- Inconsistent results: Different runners might have slightly different DB versions.
- Re-scanning overhead: Multiple CI jobs might scan the same image, duplicating effort.
By centralizing the vulnerability database and the scanning engine, Trivy server mode transforms scanning from a resource-intensive, per-job task into a lean, efficient, and consistent service. The server manages the database updates and the scanning logic, while clients simply query for results. This is particularly impactful in large organizations with many microservices and frequent deployments.
A less obvious benefit of this architecture is how it handles custom vulnerability data. Because every client points at one server, swapping in a self-built Trivy database (for example, one that layers internal advisories on top of the public feeds) is a server-side change only: point the server’s --db-repository at your own registry, and every scan in the organization picks it up. The server then acts as the single point of truth for all your security findings, regardless of their origin.
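Concretely, if you publish a self-built database to an internal OCI registry (the registry path below is hypothetical), only the server command changes; clients keep pointing at the same server URL:

```shell
# Hypothetical internal registry hosting a self-built trivy-db.
# Clients need no change; they keep querying the same server.
trivy server --listen 0.0.0.0:8080 \
  --db-repository registry.internal.example.com/security/trivy-db
```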
The next logical step after implementing centralized scanning is to integrate these scan results directly into your deployment gates, preventing vulnerable images from reaching production.
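As a sketch of such a gate, the scan’s exit code can decide whether the deploy step runs at all. The server URL, image name, and deploy command here are placeholders:

```shell
#!/bin/sh
# Deployment gate sketch: deploy only if the central scan passes.
# Server URL, image name, and the deploy step are placeholders.
if trivy image --server http://trivy-server.your-domain.com:8080 \
     --severity HIGH,CRITICAL --exit-code 1 your-docker-image:latest; then
  echo "No HIGH/CRITICAL vulnerabilities; proceeding with deploy."
  # ./deploy.sh your-docker-image:latest   # placeholder deploy step
else
  echo "Blocking deploy: HIGH/CRITICAL vulnerabilities found." >&2
  exit 1
fi
```

The `--exit-code 1` flag makes Trivy return non-zero when vulnerabilities at the selected severities are found, which is what lets a plain shell conditional (or a CI job’s failure status) act as the gate.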