Vite’s code splitting with dynamic imports is more about intelligently breaking up your application into smaller, loadable pieces than about optimizing the size of those pieces.

Let’s see it in action. Imagine a simple app with a button that loads a heavy component only when clicked.

// src/App.vue
<template>
  <div>
    <h1>My App</h1>
    <button @click="loadHeavyComponent">Load Component</button>
    <component :is="heavyComponent" v-if="heavyComponent" />
  </div>
</template>

<script setup>
import { shallowRef } from 'vue';

const heavyComponent = shallowRef(null);

async function loadHeavyComponent() {
  // Vite sees this dynamic import and splits HeavyComponent.vue
  // into its own chunk, fetched only when the button is clicked.
  const module = await import('./HeavyComponent.vue');
  heavyComponent.value = module.default;
}
</script>

// src/HeavyComponent.vue
<template>
  <div>
    <h2>This is a heavy component!</h2>
    <p>It was loaded dynamically.</p>
  </div>
</template>

<style scoped>
div {
  border: 1px solid red;
  padding: 20px;
  margin-top: 20px;
}
</style>

When you build this with Vite and run it, watch the network requests: initially you’ll only see index.html and your main JavaScript bundle. After clicking the button, a new chunk, something like src_HeavyComponent_vue.js, is requested and loaded on demand. Vite automatically identifies the import() statement and treats that module as a candidate for a separate, dynamically loaded chunk.

Vite’s core philosophy here is leveraging native ES Modules. When it encounters a dynamic import(), it understands that this is an explicit instruction to treat the imported module as a separate unit. During the build process, Vite analyzes these dynamic imports and creates separate JavaScript files (chunks) for them. This means that the code for HeavyComponent.vue isn’t included in your initial main.js bundle. It’s only fetched and executed when the loadHeavyComponent function is called.
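That deferral is a property of import() itself, not something Vite invents. A minimal sketch in plain Node ESM (no Vite involved; the data: URL module here is just a stand-in for a separately fetched chunk) shows that a dynamically imported module isn’t evaluated until the import() call actually runs:

```javascript
// A data: URL module stands in for a separately fetched chunk.
// Its body does not run until import() is actually called.
const source = 'console.log("chunk evaluated"); export const value = 42;';
const url = 'data:text/javascript,' + encodeURIComponent(source);

console.log('before import');  // the module has not run yet
const mod = await import(url); // fetch + evaluation happen here
console.log(mod.value);        // 42
```

Running this prints "before import" first, then "chunk evaluated" — the same ordering you observe in the network tab when Vite defers a chunk.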

The "strategy" part comes into play with how Vite decides what constitutes a chunk and how it names them. By default, Vite’s Rollup-based build backend is quite good at automatically splitting code based on dynamic imports and vendor dependencies. It aims to create chunks that are reasonably sized and have good cacheability. For instance, if multiple dynamic imports reference the same library (like vue itself, or lodash), Vite will often try to create a shared vendor chunk for those common dependencies.
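In a typical Vue app, the place where many dynamic imports accumulate is the router. A sketch (assuming vue-router 4 and hypothetical view files under src/views/) shows the pattern: each route component becomes its own chunk, while shared dependencies such as vue itself end up in a common vendor chunk:

```javascript
// Hypothetical router setup — the view file paths are placeholders.
import { createRouter, createWebHistory } from 'vue-router';

const router = createRouter({
  history: createWebHistory(),
  routes: [
    // Each () => import(...) becomes a separate chunk in the build,
    // loaded only when the user navigates to that route.
    { path: '/', component: () => import('./views/Home.vue') },
    { path: '/about', component: () => import('./views/About.vue') },
  ],
});

export default router;
```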

You can influence this behavior through the build.rollupOptions.output.manualChunks configuration in your vite.config.js. This is where you can define custom splitting logic.

// vite.config.js
import { defineConfig } from 'vite';
import vue from '@vitejs/plugin-vue';

export default defineConfig({
  plugins: [vue()],
  build: {
    rollupOptions: {
      output: {
        manualChunks(id) {
          if (id.includes('node_modules/lodash')) {
            return 'lodash'; // Create a separate chunk for lodash
          }
          if (id.includes('src/components/')) {
            return 'components'; // Group all components into one chunk
          }
        }
      }
    }
  }
});

In this example, we’re telling Rollup (which Vite uses under the hood for bundling) to create a separate chunk named lodash if a module ID contains node_modules/lodash. Similarly, any module paths including src/components/ will be grouped into a chunk named components. This gives you fine-grained control over how your code is split beyond just what dynamic imports naturally suggest.
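manualChunks also accepts an object form, which is simpler when you just want to pin specific packages to named chunks rather than write matching logic (the chunk names here are arbitrary):

```javascript
// vite.config.js — object form of manualChunks: keys are chunk
// names, values list the modules each chunk should contain.
import { defineConfig } from 'vite';
import vue from '@vitejs/plugin-vue';

export default defineConfig({
  plugins: [vue()],
  build: {
    rollupOptions: {
      output: {
        manualChunks: {
          lodash: ['lodash'],
          vendor: ['vue'],
        },
      },
    },
  },
});
```

The function form is more flexible; the object form is easier to read when the mapping is static.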

The most surprising thing about Vite’s code splitting is how little you often need to do to get significant benefits, and how it prioritizes developer experience by treating native ES modules as first-class citizens. It doesn’t try to reimagine code splitting; it simply makes the standard mechanisms work incredibly well out of the box, deferring code loading until a dynamic import actually needs it.

However, relying solely on dynamic imports can sometimes lead to an explosion of very small chunks, which can negatively impact network performance due to the overhead of multiple HTTP requests. This is where manualChunks becomes crucial for consolidating related code that might otherwise be split too granularly.
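A blunt but common consolidation strategy is to funnel every dependency into a single chunk (a sketch; whether one big vendor chunk helps or hurts depends on how often your dependencies change relative to your app code):

```javascript
// vite.config.js — collapse all node_modules code into one
// long-lived, cacheable 'vendor' chunk instead of many tiny ones.
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks(id) {
          if (id.includes('node_modules')) {
            return 'vendor';
          }
        },
      },
    },
  },
});
```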

The next step is understanding how to optimize these chunks for caching and network efficiency.
