Log drains are Vercel’s way of letting you export your application logs to external services.
Here’s an example of the log output a Vercel Function might produce during a request:
{
  "timestamp": "2023-10-27T10:00:00.123Z",
  "level": "INFO",
  "message": "Processing request for user 123",
  "meta": {
    "requestId": "req_abc123xyz",
    "userId": "123"
  }
}
Vercel Functions execute in isolated environments. When a request comes in, Vercel spins up an instance, runs your code, and then tears it down. Log drains intercept the stdout and stderr streams from these ephemeral instances and forward them. The core problem they solve is the transient nature of serverless: you don’t have persistent servers to ssh into and tail -f logs from. Instead, logs are pushed to you.
This is achieved through Vercel’s integration with various log aggregation platforms. You configure a log drain endpoint in your Vercel project settings. Vercel then creates a webhook that sends log events as they occur. These events are typically delivered as JSON payloads over HTTP POST requests to the endpoint you specify.
The configuration itself is straightforward. Within your Vercel project settings, navigate to "Logs" and then "Log Drains." You’ll see options to add a new drain. For a service like Datadog, you’d select "Datadog" from a dropdown or choose "HTTP/S" and provide your Datadog intake URL and API key. The payload format is standardized:
{
  "source": "vercel",
  "project": "my-app",
  "environment": "production",
  "version": "1.2.3",
  "logs": [
    {
      "message": "User logged in successfully",
      "level": "INFO",
      "timestamp": "2023-10-27T10:05:15.456Z",
      "meta": {
        "userId": "user456",
        "ip": "192.168.1.1"
      }
    },
    // ... more log entries
  ]
}
The logs array contains individual log entries, each with a message, level (e.g., INFO, WARN, ERROR), timestamp, and optional meta for contextual data. Vercel batches these logs to reduce the number of HTTP requests, but the timestamp on each individual log entry is what matters for ordering.
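Because batching means the order of entries in the array reflects delivery, not occurrence, a receiver that needs strict ordering should sort by each entry's own timestamp. A minimal sketch:

```javascript
// Sort a batch of drained log entries by their individual timestamps.
// Batches arrive in delivery order, so this restores event order.
function sortByTimestamp(logs) {
  return [...logs].sort(
    (a, b) => new Date(a.timestamp) - new Date(b.timestamp)
  );
}
```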
When setting up a custom HTTP/S drain, you’re essentially creating your own webhook receiver. The endpoint needs to be able to accept POST requests and process the JSON payload. For example, a simple Node.js Express server could look like this:
const express = require('express');
const app = express();

app.use(express.json());

app.post('/vercel-logs', (req, res) => {
  console.log('Received Vercel logs:', JSON.stringify(req.body, null, 2));
  // Process the req.body.logs array here, e.g., forward to Splunk
  res.status(200).send('Logs received');
});

app.listen(3000, () => console.log('Log receiver listening on port 3000'));
The req.body would contain the JSON structure described above. The key is that Vercel doesn’t manage the lifecycle of your log destination; it just reliably pushes data to it. This allows for immense flexibility, from sending logs to a managed Splunk instance to a custom-built logging pipeline.
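As a sketch of that flexibility, the snippet below reshapes each drained entry and forwards the batch to a downstream HTTP endpoint. The endpoint URL and the target field names (`msg`, `severity`, `time`) are hypothetical; it also assumes Node 18+ for the global fetch.

```javascript
// Hypothetical mapping from a Vercel log entry to a downstream schema.
// The target field names here are illustrative, not a real vendor's API.
function mapEntry(entry) {
  return {
    msg: entry.message,
    severity: entry.level,
    time: entry.timestamp,
    ...entry.meta, // flatten contextual fields into the top level
  };
}

// Forward a batch to a downstream pipeline (assumes Node 18+ global fetch;
// the endpoint is a placeholder you would replace with your own).
async function forwardLogs(logs, endpoint) {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(logs.map(mapEntry)),
  });
  if (!response.ok) {
    throw new Error(`Forwarding failed with status ${response.status}`);
  }
}
```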
A detail that often surprises newcomers is that log drains don’t just capture logs from your Vercel Functions. They also capture logs from your Edge Functions, Next.js Server Components, and even deployment logs themselves, providing a unified stream of telemetry for your entire Vercel deployment.
If you configure a log drain with a misconfigured endpoint or an endpoint that fails to respond with a 2xx status code, you’ll start seeing "Log drain delivery failed" errors in your Vercel deployment notifications.