This multi-stage Dockerfile is a clever way to build your Webpack-powered application, but it can trip you up if you’re not careful about how the stages interact.
Let’s see it in action. Imagine you have a Dockerfile like this:
# Stage 1: Builder
FROM node:18-alpine as builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci  # install all dependencies; Webpack is typically a devDependency needed for the build
COPY . .
RUN npm run build
# Stage 2: Production
FROM nginx:stable-alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
When you run docker build -t my-webpack-app ., Docker first executes the FROM node:18-alpine as builder stage. It pulls the Node.js image, sets up a working directory, installs your dependencies with npm ci, copies your application code, and then runs your npm run build script. This script, presumably, uses Webpack to bundle your frontend assets into a dist directory. One caveat: Webpack and its loaders usually live in devDependencies, so installing with --only=production in the builder stage would break npm run build. The builder stage needs the full dependency tree; it's the final stage that stays dependency-free.
Once the first stage is complete, its filesystem is left behind: nothing from the builder stage ends up in the final image unless you explicitly copy it (the builder's layers remain in the local build cache, but they are never shipped). Docker then starts the second stage FROM nginx:stable-alpine, a minimal Nginx image. The crucial part is COPY --from=builder /app/dist /usr/share/nginx/html. This command copies only the contents of the /app/dist directory from the builder stage into the Nginx webroot. Everything else from the builder stage (the node_modules, the source code, the build tools) is gone. Finally, Nginx is configured to serve files from that directory, and the container is ready to run.
The core problem this solves is image size. Without multi-stage builds, your final Docker image would contain all the Node.js dependencies, source code, and build tools necessary for development and building, even though the production server only needs the static assets. This can lead to massive image sizes, increasing build times, storage costs, and deployment times. By using two stages, you separate the build environment from the runtime environment, resulting in a lean, production-ready image.
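For contrast, here is what a single-stage version of the same build might look like. Everything needed only at build time, node_modules, source code, and Webpack itself, ships in the final image; the static server shown (serve) is just one illustrative option:

```dockerfile
# Single-stage sketch: the entire build environment ends up in the image
FROM node:18-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
# Serve the bundle with a Node-based static server (one option among many)
RUN npm install -g serve
EXPOSE 80
CMD ["serve", "-s", "dist", "-l", "80"]
```

The image built this way carries the full Node.js runtime and dependency tree just to serve static files, which is exactly the waste the two-stage version eliminates.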
The as builder clause is key here. It names the first stage, allowing subsequent stages to reference it using --from=builder. This is how you select which artifacts to carry over. Without a name, you’d refer to stages by their index (e.g., --from=0), which is less readable and more brittle if you add or remove stages.
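Both forms below copy the same artifact; the named form survives stage reordering, while the index form silently points at whatever stage happens to be first:

```dockerfile
# Fragile: refers to the first stage by position
COPY --from=0 /app/dist /usr/share/nginx/html

# Robust: refers to the stage by name
COPY --from=builder /app/dist /usr/share/nginx/html
```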
The Webpack build itself is abstracted away. The Dockerfile doesn’t care how npm run build works, only that it produces output in /app/dist. This makes the Dockerfile independent of your specific Webpack configuration, as long as the output path remains consistent.
You control the final image by carefully selecting what gets copied from the builder stage to the final stage. If your Webpack build produces assets in a different directory, say /app/build, you would change the COPY command to COPY --from=builder /app/build /usr/share/nginx/html. Similarly, if your application requires a different web server or a more complex runtime setup, you’d adjust the second stage accordingly.
What most people miss is that the COPY --from instruction can copy individual files or entire directories. You’re not limited to copying the entire output of a build. If, for example, you needed a specific configuration file generated during the build process, you could copy just that: COPY --from=builder /app/dist/config.json /etc/app/config.json. This granular control is powerful for optimizing the final image even further.
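Extending the example, the final stage can cherry-pick several artifacts in one place; the nginx.conf path below is an assumption about what your build emits, shown only to illustrate the pattern:

```dockerfile
# Copy the bundled assets plus a config file produced during the build
COPY --from=builder /app/dist /usr/share/nginx/html
COPY --from=builder /app/nginx.conf /etc/nginx/conf.d/default.conf
```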
The next hurdle you’ll likely face is managing environment-specific configurations for your frontend application, especially when dealing with API endpoints or feature flags.