Vercel’s Edge Functions aren’t just "closer" to the user; they fundamentally alter the request lifecycle by executing code before any traditional origin server or even Vercel’s own Serverless Functions are invoked.

Let’s watch an HTTP request navigate the Vercel ecosystem. Imagine a user in Tokyo requesting your Vercel-hosted site.

  1. User Request (Tokyo): The browser sends an HTTP GET request for /api/data.
  2. Vercel Edge Network: The request hits the nearest Vercel Edge Network PoP (Point of Presence) in Tokyo.
  3. Edge Function Execution: If an Edge Function is configured for /api/*, it runs immediately in that PoP. This function can inspect the request, modify headers, perform authentication, or even serve a response directly without hitting any origin.
  4. Origin Server/Serverless Function (if Edge Function doesn’t respond): If the Edge Function doesn’t return a response (e.g., it just modifies headers and passes the request along), the request then proceeds to Vercel’s Serverless Functions platform, which might be geographically closer to your deployment’s origin, or to your actual origin server (like a self-hosted API).
  5. Response: The response from either the Edge Function, Serverless Function, or origin server travels back through the Vercel Edge Network to the user in Tokyo.
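The branch in steps 3 and 4 can be sketched with standard Fetch API types. This is a simplified model, not real middleware: the `/api/health` path is a made-up example, and in actual Vercel middleware the "pass it along" case would be `NextResponse.next()` rather than a sentinel value.

```typescript
// Sketch of the edge decision point: answer directly from the PoP,
// or let the request continue to a Serverless Function / origin.
// The "/api/health" path is a hypothetical example.
function edgeHandler(req: Request): Response | 'forward' {
  const url = new URL(req.url);
  if (url.pathname === '/api/health') {
    // Responding here means the origin is never contacted.
    return new Response('ok', { status: 200 });
  }
  // Anything else continues toward the origin (NextResponse.next()
  // plays this role in real Vercel middleware).
  return 'forward';
}
```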

This flow means that Edge Functions act as a programmable, global network layer. They don’t replace Serverless Functions but complement them by providing a way to intercept and act on requests at the absolute earliest point in the Vercel infrastructure.

The Problem Edge Functions Solve: Latency and Control at the Edge

Traditional Serverless Functions, while powerful, still require a network hop to compute infrastructure in a single region. For tasks that must run close to every user, or that demand minimal latency, that round trip can be too slow. Edge Functions address this by running code in hundreds of Vercel PoPs worldwide.

Consider these use cases:

  • A/B Testing: Serve different content variations based on a cookie or header, executed at the edge.
  • Authentication & Authorization: Check JWTs or API keys before a request even reaches your core API, saving compute and protecting your origin.
  • Geo-Targeted Content: Redirect users or modify responses based on their IP address’s geographic location.
  • Header Manipulation: Add security headers (like Strict-Transport-Security) or cache-control directives globally.
  • Edge-Side Includes (ESI): Dynamically assemble HTML pages from different fragments, with fragments fetched and stitched together at the edge.
  • Request Rewrites/Redirects: Implement complex routing logic that’s faster than DNS-based solutions or origin-level rewrites.
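As a sketch of the A/B-testing case, here is sticky bucket assignment written against standard Fetch API types; the `ab-bucket` cookie name and the 50/50 split are made-up examples, not a Vercel convention.

```typescript
// Sketch: sticky A/B bucket assignment at the edge, using only standard
// Fetch API types. The "ab-bucket" cookie name is hypothetical.
function pickBucket(req: Request): 'a' | 'b' {
  // Honor an existing assignment from the Cookie header, if any.
  const cookies = req.headers.get('cookie') ?? '';
  const match = cookies.match(/(?:^|;\s*)ab-bucket=(a|b)/);
  if (match) return match[1] as 'a' | 'b';
  // Otherwise assign randomly, 50/50.
  return Math.random() < 0.5 ? 'a' : 'b';
}

function abResponse(req: Request): Response {
  const bucket = pickBucket(req);
  // Persist the assignment so the user keeps seeing the same variant.
  return new Response(`variant ${bucket}`, {
    headers: {
      'Content-Type': 'text/plain',
      'Set-Cookie': `ab-bucket=${bucket}; Path=/; Max-Age=86400`,
    },
  });
}
```

Returning the `Set-Cookie` header from the edge keeps the assignment consistent across PoPs without any shared state.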

How Edge Functions Work Internally (The Vercel Edge Runtime)

Edge Functions run on Vercel’s custom Edge Runtime, which is built on the V8 JavaScript engine (the same engine behind Chrome and Node.js) and executes code in lightweight, sandboxed isolates rather than full processes or containers. This allows Vercel to run JavaScript (and TypeScript) code in a highly performant, sandboxed environment across their global network without needing a full Node.js or browser environment. The runtime exposes a constrained, Web-standard API surface (fetch, Request, Response, and friends), meaning not all Node.js modules are available, but it’s optimized for speed and network operations.

You define Edge Functions in a middleware.ts (or .js) file at the root of your project, or inside the src directory if your project uses one.

Here’s a middleware.ts example for authentication:

// src/middleware.ts
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(req: NextRequest) {
  const authHeader = req.headers.get('authorization');

  if (!authHeader || !authHeader.startsWith('Bearer ')) {
    return new NextResponse(JSON.stringify({ error: 'Missing or invalid Authorization header' }), {
      status: 401,
      headers: { 'Content-Type': 'application/json' },
    });
  }

  const token = authHeader.split(' ')[1];
  // In a real app, you'd verify the token here.
  // For this example, we'll assume any Bearer token is valid.

  // If authentication passes, allow the request to proceed
  return NextResponse.next();
}

// Configure which paths this middleware applies to
export const config = {
  matcher: '/api/:path*', // Apply to all routes under /api/
};

In this example:

  • middleware(req: NextRequest): This is the entry point for your Edge Function. It receives the incoming NextRequest object.
  • req.headers.get('authorization'): We’re accessing request headers directly.
  • new NextResponse(...): If authentication fails, we construct and return a NextResponse immediately from the edge. This stops the request from going any further.
  • NextResponse.next(): If authentication succeeds, we call NextResponse.next(), which tells Vercel to continue processing the request by passing it to the next middleware or to the appropriate Serverless Function/origin.
  • config.matcher: This is crucial. It tells Vercel when to run this middleware. Here, it’s set to run for any request path starting with /api/.
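The verification step elided in the example above (the "you'd verify the token here" comment) usually means cryptographic signature checking with an edge-compatible library such as jose. As a minimal placeholder under that assumption, here is only a structural check that a token is JWT-shaped:

```typescript
// Sketch: a structural check that a token looks like a JWT (three
// base64url segments). This is NOT signature verification — in a real
// Edge Function you would verify the signature, e.g. with the `jose` library.
function looksLikeJwt(token: string): boolean {
  const parts = token.split('.');
  return parts.length === 3 && parts.every((p) => /^[A-Za-z0-9_-]+$/.test(p));
}
```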

The matcher configuration is a powerful lever. You can use glob patterns to include or exclude specific paths, file types, or even use regular expressions for very fine-grained control. For instance, matcher: ['/((?!api|_next/static|_next/image|favicon.ico).*)'] is a common pattern in Next.js applications to apply middleware to all pages except for static assets and API routes.
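A multi-pattern matcher combining both styles might look like the following config fragment; the /dashboard path is a made-up example:

```typescript
// middleware.ts — config fragment (path names are illustrative)
export const config = {
  matcher: [
    // Run for everything under /dashboard/
    '/dashboard/:path*',
    // Run for all pages except static assets and Next.js internals
    '/((?!_next/static|_next/image|favicon.ico).*)',
  ],
};
```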

The One Thing Most People Don’t Realize: Immutable Request/Response Objects

When you’re writing an Edge Function, you’re working with Request and Response objects that are very similar to the standard Web Fetch API. However, there’s a critical difference: the Request object is largely immutable. You cannot directly modify properties like req.url or req.method after the request has been received. If you need to alter the URL or method, you must create a new Request object. Similarly, while you can add headers to a Response you’re creating, you can’t directly mutate headers on an incoming Request object. This immutability is key to the predictable and performant nature of the Edge Runtime, preventing unintended side effects across distributed execution environments.
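A minimal sketch of that rewrite pattern with standard Fetch API objects follows; the x-trace-id header and paths are made-up examples:

```typescript
// Sketch: Request properties like `url` are read-only, so a rewrite
// means constructing a new Request. The "x-trace-id" header is hypothetical.
const original = new Request('https://example.com/old-path', {
  method: 'GET',
  headers: { 'x-trace-id': 'abc123' },
});

// original.url = 'https://example.com/new-path'; // ✗ read-only, not allowed

// Copy what you need into a fresh, mutable Headers object, adjust it,
// then build a new Request around it:
const headers = new Headers(original.headers);
headers.set('x-rewritten', '1');
const rewritten = new Request('https://example.com/new-path', {
  method: original.method,
  headers,
});
```

Note that the copy leaves the original request untouched, which is exactly the side-effect isolation the paragraph above describes.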

Choosing between Edge Functions and Serverless Functions isn’t about which is "better," but about where in the request lifecycle you need your code to run. For tasks requiring minimal added latency, global distribution, or pre-origin request interception, Edge Functions are the way to go. For heavier business logic, data processing, or tasks that don’t need to execute at every PoP, Serverless Functions remain the ideal choice.

The next challenge you’ll encounter is understanding how to manage state and shared resources when your logic is distributed across hundreds of Edge Function instances.
