Webpack’s optimization.splitChunks is a powerful tool, but its default configuration only considers dynamically imported (async) chunks, so code that is statically shared between entry points gets silently duplicated — usually the opposite of what you want.
Let’s see it in action. Imagine a simple project with two entry points: app.js and admin.js.
webpack.config.js:

```js
const path = require('path');

module.exports = {
  mode: 'development', // Use 'production' for actual optimization
  entry: {
    app: './src/app.js',
    admin: './src/admin.js',
  },
  output: {
    filename: '[name].bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
};
```
src/app.js:

```js
import _ from 'lodash';

console.log('App module loaded');
console.log('Lodash version:', _.VERSION);
```
src/admin.js:

```js
import _ from 'lodash';

console.log('Admin module loaded');
console.log('Lodash version:', _.VERSION);
```
If you run webpack with this configuration, you’ll get app.bundle.js and admin.bundle.js. Both will contain a full copy of lodash. This is inefficient.
Webpack’s optimization.splitChunks is designed to extract common modules into separate chunks that can be shared between your entry points. This removes duplication and lets the browser cache the shared chunks independently of your application code.
Here’s how the system works internally. Webpack analyzes your entire dependency graph and identifies modules that meet certain criteria for splitting. With the webpack 4 defaults, a module is split out only if it is:
- In an async chunk: the default is chunks: 'async', so only code reached via a dynamic import() is considered — statically imported modules in your entry points are not, which is exactly why lodash was duplicated above.
- From node_modules, or shared: the built-in vendors cache group matches anything under node_modules, while the default cache group requires a module to be used by at least two chunks (minChunks: 2).
- Large enough: the resulting chunk must meet a minimum size threshold (minSize, 30KB uncompressed by default in webpack 4; webpack 5 lowers this to 20KB).
- Within request limits: splitting must not push a page past maxInitialRequests or maxAsyncRequests parallel downloads (3 and 5 in webpack 4; both 30 in webpack 5).
When these conditions are met, Webpack creates a new chunk containing that module and references the new chunk from all the original chunks that needed it.
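As a mental model (not webpack’s actual source — the names and object shapes below are invented for the sketch; webpack really works on its internal chunk graph), the default decision can be expressed as a predicate over a module’s size and the chunks that contain it:

```js
// Illustrative model of webpack 4's default splitChunks criteria.
const defaults = {
  chunks: 'async',
  minSize: 30000, // bytes, before compression
  vendorTest: /[\\/]node_modules[\\/]/,
  defaultMinChunks: 2,
};

function shouldSplit(mod, opts = defaults) {
  // Only chunks of the configured kind are considered ('async' by default).
  const eligibleChunks = mod.chunks.filter(
    (c) => opts.chunks === 'all' || c.kind === opts.chunks
  );
  if (eligibleChunks.length === 0) return false;

  // A module qualifies via the vendors group (anything under node_modules),
  // or via the default group when shared by at least two chunks.
  const isVendor = opts.vendorTest.test(mod.path);
  const isShared = eligibleChunks.length >= opts.defaultMinChunks;
  if (!isVendor && !isShared) return false;

  // The split-out chunk must still meet the minimum size threshold.
  return mod.size >= opts.minSize;
}

// lodash, statically imported by two initial (entry) chunks:
const lodash = {
  path: 'node_modules/lodash/lodash.js',
  size: 70000,
  chunks: [{ kind: 'initial' }, { kind: 'initial' }],
};
console.log(shouldSplit(lodash)); // false: initial chunks are ignored by default
console.log(shouldSplit(lodash, { ...defaults, chunks: 'all' })); // true
```

Note how the model reproduces the problem from our example: with the defaults, lodash is never split, because both chunks containing it are initial entry chunks.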
Let’s refine our webpack.config.js to actually achieve optimization:

```js
const path = require('path');

module.exports = {
  mode: 'production', // Essential for splitChunks to work effectively
  entry: {
    app: './src/app.js',
    admin: './src/admin.js',
  },
  output: {
    filename: '[name].bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
  optimization: {
    splitChunks: {
      chunks: 'all', // Consider all chunks (initial and async), not just async
      minSize: 10000, // Minimum size in bytes for a chunk to be split (10KB)
      maxSize: 200000, // Try to keep generated chunks under this size (200KB)
      minChunks: 1, // Minimum number of chunks that must share a module before splitting
      name: true, // Derive chunk names automatically (webpack 4 only; removed in webpack 5)
      automaticNameDelimiter: '-', // Delimiter for generated chunk names
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/, // Match modules under node_modules
          name: 'vendors', // Fixed name for the vendor chunk
          chunks: 'all', // Consider all chunks for vendors
        },
      },
    },
  },
};
```
After running webpack with this configuration, you’ll see app.bundle.js, admin.bundle.js, and a new vendors.bundle.js — the fixed name set by the vendor cache group (without an explicit name, webpack 4 would generate something like vendors-app-admin.bundle.js using the configured delimiter). The lodash module ends up in vendors.bundle.js, and app.bundle.js and admin.bundle.js become significantly smaller, containing only their own code.
The chunks: 'all' setting tells Webpack to consider modules for splitting regardless of whether they live in initial entry points or in dynamically imported chunks — this is the single most important change from the default 'async'. minSize and maxSize give Webpack guidance on what constitutes a "worthwhile" chunk, preventing tiny or excessively large splits (maxSize is a hint, not a hard limit). minChunks is crucial: minChunks: 2 would mean a module must be used by at least two separate chunks before it is considered for splitting. name: true combined with automaticNameDelimiter: '-' lets webpack 4 name generated chunks after the cache group and the chunks they were split from, like vendors-app-admin; in webpack 5, name: true was removed, so omit it there and rely on the defaults.
The cacheGroups section is where the real magic for vendor libraries happens. By defining a vendor cache group that specifically targets modules within node_modules, we instruct Webpack to bundle all third-party libraries into a single, well-named chunk (vendors). This is highly effective because vendor libraries are often large and stable, making them prime candidates for long-term browser caching.
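The `[\\/]` character class in the test regex is deliberate: it matches both the POSIX and the Windows path separator, so the cache group behaves the same on any OS. A quick check:

```js
// The vendor cache group's test regex from the config above.
const vendorTest = /[\\/]node_modules[\\/]/;

// Matches third-party modules on POSIX and Windows paths alike...
console.log(vendorTest.test('/project/node_modules/lodash/lodash.js')); // true
console.log(vendorTest.test('C:\\project\\node_modules\\lodash\\lodash.js')); // true

// ...but not your own source, even if a file name merely contains the word.
console.log(vendorTest.test('/project/src/app.js')); // false
console.log(vendorTest.test('/project/src/node_modules_shim.js')); // false
```

Requiring a separator on both sides of node_modules is what prevents false positives on files or folders that only contain the substring.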
A common misconception is that splitChunks is only about reducing the number of files. While that’s often a side effect, its primary goal is to optimize caching. By creating separate chunks for stable vendor code versus frequently changing application code, you ensure that users only re-download the parts of your application that have actually changed, leading to faster load times on subsequent visits.
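To actually reap that caching benefit, output filenames need to change only when their contents change. A common companion configuration (a sketch, not our full config — the moduleIds option shown is the webpack 5 spelling) is content-based hashing:

```js
// Content-hashed filenames: the vendors chunk keeps the same hash (and
// stays cached in the browser) across deploys as long as no dependency changed.
const config = {
  output: {
    // e.g. vendors.3a7f9c1b2d4e5f60.js — the hash is derived from the
    // chunk's contents, so an unchanged chunk keeps its cached URL.
    filename: '[name].[contenthash].js',
  },
  optimization: {
    // Keep module ids stable so an app-code edit doesn't reshuffle ids
    // inside the vendor chunk and needlessly invalidate its hash
    // (webpack 5; webpack 4 used HashedModuleIdsPlugin for this).
    moduleIds: 'deterministic',
    splitChunks: { chunks: 'all' },
  },
};

module.exports = config;
```

With this in place, editing app.js changes only app’s hash; vendors.&lt;hash&gt;.js is served from the browser cache.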
The next thing you’ll likely want to tackle is controlling the size and number of dynamically imported chunks, which often leads to exploring import() and its interaction with splitChunks.