Vitess’s VTAdmin UI is more than just a pretty dashboard; it’s your real-time control panel for a distributed MySQL cluster, offering insights into query performance, schema changes, and overall cluster health that would be hard to glean from individual MySQL instances alone.

Let’s get this thing running. We’ll use a basic setup with a single vtgate and a single vtctld for simplicity, but the principles extend to larger deployments.

First, you need to have Vitess itself built and running. If you haven’t done that, head over to the Vitess documentation for build instructions. We’ll assume you have a Vitess environment where vtgate and vtctld are already accessible.

The VTAdmin UI is served by a dedicated vtadmin process, which acts as a frontend for both vtgate and vtctld and therefore needs to know where to find each of them.

Here’s a typical command to start vtadmin in a development-like setup, assuming your vtgate is listening on localhost:15991 and your vtctld is listening on localhost:15999:

vtadmin \
  --vtgate_grpc_endpoints=localhost:15991 \
  --vtctld_grpc_endpoint=localhost:15999 \
  --port=15992

Let’s break this down:

  • vtadmin: This is the executable for the VTAdmin server.
  • --vtgate_grpc_endpoints=localhost:15991: This tells vtadmin where to find your vtgate instances. You can provide multiple comma-separated endpoints if you have more than one vtgate. VTAdmin will talk to these vtgate instances to get query and tablet information.
  • --vtctld_grpc_endpoint=localhost:15999: This tells vtadmin where to find the vtctld (Vitess Cluster Control Daemon) instance. vtctld is the control plane for Vitess, managing topology, schema, and other cluster-wide operations. VTAdmin uses this to display topology, trigger schema changes, and manage tablets.
  • --port=15992: This is the port on which the VTAdmin web UI itself will be served. You can choose any available port.
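
Since --vtgate_grpc_endpoints accepts a comma-separated list, a small pre-flight check before launching vtadmin can catch typos early. This helper is not part of Vitess, just an illustrative sketch, and the endpoint values are examples:

```shell
# Validate that every comma-separated entry looks like host:port
# before passing the list to --vtgate_grpc_endpoints (POSIX sh).
VTGATE_ENDPOINTS="localhost:15991,localhost:15993"

for ep in $(printf '%s' "$VTGATE_ENDPOINTS" | tr ',' ' '); do
  case "$ep" in
    *:[0-9]*) echo "ok: $ep" ;;
    *) echo "malformed endpoint: $ep" >&2; exit 1 ;;
  esac
done
```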

Once vtadmin is running, you can access the UI by navigating your web browser to http://localhost:15992. You should see the VTAdmin dashboard.

You’ll notice several sections:

  • Workflows: Here you can see and manage resharding, MoveTables, and other long-running VReplication-based data migration operations.
  • Keyspaces: Lists all your keyspaces, showing their current schema, tablet distribution, and health.
  • Tablets: A detailed view of all your MySQL instances (tablets) managed by Vitess, including their role (primary, replica), health, and current replication lag.
  • Queries: A real-time view of queries being executed through vtgate. This is invaluable for performance tuning, allowing you to see slow queries, query patterns, and query throughput.
  • Schema: Allows you to view and initiate schema changes across your keyspaces.
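
The data behind these panels is also exposed over VTAdmin’s HTTP API, which can be handy for scripting. The /api/... paths below are assumptions based on vtadmin-api conventions; verify them against your Vitess version:

```shell
# Query VTAdmin's HTTP API directly on the port from the example above.
# Paths are assumed from vtadmin-api conventions, not guaranteed.
curl -s http://localhost:15992/api/keyspaces
curl -s http://localhost:15992/api/tablets
```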

Consider a scenario where you’re investigating a performance bottleneck. You’d go to the "Queries" tab in VTAdmin, where you can filter by keyspace or tablet, or search for specific SQL statements. You might spot a particular query with a high average execution time or unusually high QPS. Clicking on a query typically reveals more detail, such as the tablets it ran on and the vtgate instance that served it.

From there, you might pivot to the "Keyspaces" or "Tablets" view to understand the topology and health of the relevant shards or tablets. If a specific tablet is showing high replication lag, that could be the root cause of slow reads for queries hitting that tablet. You can then drill down into that tablet’s details to see its specific MySQL process status and replication configuration.
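
To confirm a lag hypothesis outside the UI, you can also inspect the suspect tablet’s MySQL replication status directly. The helper below is an illustrative sketch that just extracts the lag value from `SHOW REPLICA STATUS\G` output; the sample input line is fabricated:

```shell
# Extract replication lag from `SHOW REPLICA STATUS\G` output on stdin.
# (Older MySQL versions report Seconds_Behind_Master instead.)
# In practice: mysql -h <tablet-host> -e "SHOW REPLICA STATUS\G" | lag_seconds
lag_seconds() {
  awk -F': ' '/Seconds_Behind_Source/ {print $2}'
}

# Demo on a fabricated sample line:
printf '        Seconds_Behind_Source: 42\n' | lag_seconds   # prints 42
```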

The vtadmin process itself is relatively lightweight. It primarily acts as a proxy, forwarding requests to vtgate and vtctld and aggregating the results. The heavy lifting of query execution and data storage remains with your MySQL instances, and cluster management remains with vtctld.

One of the most powerful, yet often overlooked, aspects of VTAdmin is its ability to trigger and monitor schema changes. When you initiate a schema change through VTAdmin, it doesn’t just send a command to one MySQL instance. Instead, it orchestrates a controlled rollout across all relevant primary tablets for a given keyspace, often in conjunction with gh-ost or pt-online-schema-change tools. VTAdmin tracks the progress of this rollout, showing you which shards have completed the change and which are still in progress. This visibility is crucial for minimizing downtime and risk during production schema updates.
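
For context, here is roughly what kicking off such a managed migration looks like from the command line, using vtctldclient’s ApplySchema with a gh-ost strategy. The keyspace name and ALTER statement are hypothetical examples, and you should verify the flags against your Vitess version:

```shell
# Hedged sketch: request an online schema change managed by Vitess,
# using gh-ost as the migration strategy. The keyspace "commerce" and
# the customer table are made-up examples.
vtctldclient --server localhost:15999 ApplySchema \
  --ddl-strategy "gh-ost" \
  --sql "ALTER TABLE customer ADD COLUMN email VARCHAR(255)" \
  commerce
```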

If you’re setting up VTAdmin and it doesn’t seem to be connecting, the most common culprit is that --vtgate_grpc_endpoints and --vtctld_grpc_endpoint don’t point at the actual gRPC addresses where those services are listening. People often use localhost while the services are running in separate containers or on different nodes, which instead requires their respective IP addresses or Kubernetes service names.
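
A quick TCP probe tells you whether anything is actually listening at the addresses you passed to vtadmin. This sketch uses bash’s /dev/tcp feature, and the ports match the example setup above:

```shell
# Probe a host:port pair and report whether a TCP connection succeeds.
# Relies on bash's /dev/tcp pseudo-device (not available in plain sh).
probe() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "reachable: $1:$2"
  else
    echo "unreachable: $1:$2"
  fi
}

probe localhost 15991   # vtgate gRPC
probe localhost 15999   # vtctld gRPC
```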

The next step after getting VTAdmin running is usually integrating it with a more robust deployment strategy, such as configuring it within Kubernetes or ensuring it can connect to a distributed vtgate/vtctld topology.
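
In a Kubernetes deployment, the same command usually swaps localhost for cluster-internal Service DNS names. The names below are hypothetical placeholders for whatever your vtgate and vtctld Services are actually called:

```shell
# Same flags as the earlier example, pointed at in-cluster service
# DNS names (hypothetical; adjust to your Services and namespace).
vtadmin \
  --vtgate_grpc_endpoints=vtgate.vitess.svc.cluster.local:15991 \
  --vtctld_grpc_endpoint=vtctld.vitess.svc.cluster.local:15999 \
  --port=15992
```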
