Once you know how to do code splitting with Webpack well, you have a world of performance optimizations at your fingertips. To understand why this matters so much for JavaScript today, check out this introduction to JavaScript and how it is interpreted and compiled at runtime. And to get the best bang for the buck from code splitting, refer here to learn which tools will help you analyze your bundles.
Now you are armed and ready to dig into any project and start splitting the code. For the examples here, I will be using a hacked-up version of my front-end Vue performance app, a template for building performant web front ends.
Note that at the time of writing, I used Webpack v4 for the examples below. I will update this over time as future versions of Webpack require.
Finding out where to start code splitting with Webpack
In the basic Webpack configuration, you need to define your entry points. You may have just one entry point, or several organized by views. Either way is fine, since there are several ways to tell Webpack where you want to split the code.
Entry points can be thought of as traditional web pages: if you have a server responding to various routes and returning pages, you can bundle by entry point (a multi-entry sketch follows the example below). If you have a SPA, you will generally want just one entry point; exceptions might be workers (which have no UI) and may warrant their own entry points. An example SPA entry point looks like this:
const path = require("path");

// Resolve paths relative to the project root
const ROOT = path.resolve(__dirname, "..");
function root(...args) {
  return path.join(ROOT, ...args);
}

{
  entry: {
    // Path to the source file where your app starts ("main")
    main: root("/src/rendering/main.ts")
  },
  ...
}
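For comparison, a server-rendered, multi-page setup might define one entry per page. This is only a sketch; the page names and paths here are hypothetical, not from my project:

{
  entry: {
    // One bundle per server-rendered page (hypothetical paths)
    home: root("/src/pages/home/index.ts"),
    account: root("/src/pages/account/index.ts"),
    checkout: root("/src/pages/checkout/index.ts")
  },
  ...
}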
Let’s say I run Webpack Bundle Analyzer and see this output from my project:
My main bundle here is 179 KB. There are also two other bundles that we won't worry about here; they are trivially small. 179 KB is not the worst, but every time my app reloads, those 179 KB must be downloaded (unless they're cached) and then parsed. We can do better here with our code-splitting skills.
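If you haven't wired the analyzer up yet, a minimal sketch looks like this (it assumes the webpack-bundle-analyzer package is installed as a dev dependency):

// In your Webpack config - minimal sketch
const { BundleAnalyzerPlugin } = require("webpack-bundle-analyzer");

{
  plugins: [
    // Opens an interactive treemap of every emitted bundle after the build
    new BundleAnalyzerPlugin()
  ],
  ...
}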
Splitting out your dependencies
An easy win to start with is splitting our third-party dependencies out of the main bundle. We will likely be updating our app's source all the time, but we won't be updating our dependencies anywhere near as often. We don't want users to redownload the dependency bundles every time we update our own code. By splitting them out of the main bundle, we allow them to be cached separately and still be updated when and if we upgrade them.
Use Webpack's optimization configuration options by adding the following to your Webpack production config:
{
  optimization: {
    splitChunks: {
      chunks: "all",
      maxInitialRequests: Infinity,
      minSize: 0,
      cacheGroups: {
        vendor: {
          // Match anything pulled in from node_modules
          test: /[\\/]node_modules[\\/]/,
          // Name each chunk after the npm package it contains
          name(module) {
            const packageName = module.context.match(
              /[\\/]node_modules[\\/](.*?)([\\/]|$)/
            )[1];
            return `npm.${packageName}`;
          }
        }
      }
    }
  },
  ...
}
These config options tell Webpack to use its built-in SplitChunksPlugin, which does this automated work for you. Setting chunks to "all" tells the plugin to split chunks regardless of whether dependencies are loaded asynchronously or synchronously. Setting minSize to 0 tells Webpack to split code out regardless of its size (otherwise it only splits out chunks above a default size threshold). Finally, I find maxInitialRequests is needed to coax all of your third-party modules into separate bundles: Webpack uses it as a cap on how many bundles can be requested in parallel when an entry point loads, and it won't split further once splitting would exceed that cap (the default is 3), since more bundles mean more network fetches. That cap isn't much of an issue on newer versions of HTTP.
HTTP 1.1 versus HTTP 2.0 and effects on bundle splitting decisions
To explain this briefly: over HTTP/1.1, browsers typically download only about six resources from one origin in parallel, which can bottleneck browsers that don't support HTTP/2. HTTP/2 allows far more parallel downloading and eases that bottleneck (it can even perform slightly better with many smaller bundles versus a few larger ones). HTTP/2 is the future and already has good adoption. You can consider leaving maxInitialRequests at the default value if you want to ensure better legacy support, in exchange for losing some performance gains in more modern browsers.
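As a sketch, that trade-off is a one-line difference in the splitChunks config above:

splitChunks: {
  chunks: "all",
  // Infinity: let every npm package become its own bundle (best with HTTP/2)
  // 3 (the plugin's default): cap parallel requests for HTTP/1.1-heavy audiences
  maxInitialRequests: Infinity,
  ...
}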
Further, I use cacheGroups to tell Webpack that the modules located in node_modules should go into their own bundles. After these changes, this is what my bundles look like:
The changes above already look like a substantial return on investment. All we did was tweak some configuration options in our Webpack config file, and Webpack has optimized the splitting for us. Our main bundle went from 179 KB down to 15.4 KB, a roughly 90% reduction. Do note that the logic in my app is quite small; thanks to this optimization, I've cleared the way to grow it optimally.
Caching your Webpack bundles for more gains
You'll notice above that my bundles have a hash in the file name. This comes from the Webpack output configuration options. I have the output configuration for this project set up like so:
{
output: {
path: root("/dist"),
filename: "js/[name].[hash].js",
chunkFilename: "js/[name].[hash].js",
publicPath: "/"
},
...
}
This config tells Webpack where and how to output the files it processes: filename controls the entry bundles, and chunkFilename controls the files produced by the SplitChunksPlugin (the cacheGroup supplies the name). The [name] tag is the human-readable name for the module. The [hash] is a hash generated from the current build of your code, which makes the bundle name unique.
Utilizing the filename hash for cache busting
The hash is essential here: as you continue to make changes to your code, a new, unique bundle name is generated, which lets end users' browsers avoid redownloading bundles they already have. The decision of when to redownload is where caching comes into play.
I won't dive into the details of caching here. (Let me know in the comments if you'd like to see some articles on this.) The best solution today for caching is service workers; they are powerful caching tools that give you a great deal of control. Other alternatives are AppCache and the browser's Cache-Control headers.
Without the hash, caches that decide whether to fetch the latest changes based on the file name will not update as you expect. The hash gives you control over bundle versioning: if the name of the client's current bundle doesn't match the one on the server, update it. Otherwise, you will need to explore alternative "cache-busting" techniques.
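Because the hash changes with every build, you don't want to hand-edit your HTML to point at the new file names. One common approach, sketched here under the assumption that html-webpack-plugin is installed and that the template path matches your project, is to have a plugin inject the hashed bundle names for you:

// In your Webpack config - minimal sketch
const HtmlWebpackPlugin = require("html-webpack-plugin");

{
  plugins: [
    // Injects <script> tags for the freshly hashed bundles into this template
    new HtmlWebpackPlugin({ template: "src/index.html" })
  ],
  ...
}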
Further, with control over your cache, you can delay loading bundles your site will need later by prefetching or preloading them. This defers network usage until after the critical initialization is done, then pulls those bundles down just afterward, so that when the user does something that needs that code, it is already available.
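Webpack exposes this through "magic comments" on dynamic imports. A small sketch (the settings page module here is hypothetical, not part of my project):

// webpackPrefetch hints the browser to fetch this chunk during idle time
// after the critical bundles finish, so it is already cached when needed.
const loadSettingsPage = () =>
  import(/* webpackChunkName: "settings-page", webpackPrefetch: true */ "../pages/settings");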
Lazy loading with Webpack
To wrap up, another tool that has promising gains with Webpack is lazy loading. Lazy loading isn’t a simple configuration change, though. It will take a more in-depth analysis of your code, architecture, and design. Building an app with lazy loading in mind from the start is a great mindset. It will give you control over when bundles are downloaded, parsed, and used.
Let's look at the following code. I am using Vue Router here to control the "pages" that are loaded; the router loads pages when the user navigates via a URL or the UI navbar. The problem is that this bloats the main bundle and requires downloading and parsing all of your pages, even though the user will only see one of them on the first download, parse, and render. We want to optimize that first page and lazy load all of the rest afterward.
import Vue from "vue";
import VueRouter, { RouteConfig } from "vue-router";
import { HomePage } from "../pages/home";
import { RemotePage } from "../pages/remote";
import { RenderPage } from "../pages/render";
export const createRouter = (): VueRouter => {
const createRoutes: () => RouteConfig[] = () => [
{
path: "/",
component: HomePage
},
{
path: "/render",
component: RenderPage
},
{
path: "/remote",
component: RemotePage
}
];
Vue.use(VueRouter);
return new VueRouter({ mode: "history", routes: createRoutes() });
};
Using dynamic imports
Webpack will look at the file above, find that all of the imports are static, and include all of the code paths from those imports in the main bundle. Webpack has a built-in API for dynamic imports that loads code on demand instead of through the traditional synchronous import. It will then chunk these out into separate bundles and load them as they are needed. Here is the code above transformed to use dynamic imports, with named chunks:
import Vue from "vue";
import VueRouter, { RouteConfig } from "vue-router";
export const createRouter = (): VueRouter => {
const homePage = async (): Promise<any> =>
import(/* webpackChunkName: 'home-page' */ "../pages/home").then(
({ HomePage }) => HomePage
);
const renderPage = async (): Promise<any> =>
import(/* webpackChunkName: 'render-page' */ "../pages/render").then(
({ RenderPage }) => RenderPage
);
const remotePage = async (): Promise<any> =>
import(/* webpackChunkName: 'remote-page' */ "../pages/remote").then(
({ RemotePage }) => RemotePage
);
const createRoutes: () => RouteConfig[] = () => [
{
path: "/",
component: homePage
},
{
path: "/render",
component: renderPage
},
{
path: "/remote",
component: remotePage
}
];
Vue.use(VueRouter);
return new VueRouter({ mode: "history", routes: createRoutes() });
};
That's it! I'm making an asynchronous function that uses Webpack's dynamic imports to load the code on demand when the function is called, and I use webpackChunkName to tell Webpack what to name each chunk. Now my routes are properly lazy-loaded. I can take this pattern and use it for anything that isn't needed for the initial critical render (a sketch of that follows below). My Webpack output now looks like so:
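Beyond routing, the same pattern defers any heavyweight feature until the user asks for it. Here is a sketch with a hypothetical charting module that is not part of my project:

// Only download and parse the charting code when the user opens the report
async function showReport(): Promise<void> {
  const { renderChart } = await import(
    /* webpackChunkName: 'report-chart' */ "../features/report-chart"
  );
  renderChart(document.getElementById("report")!);
}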
Wrapping up
My main bundle is now 8.96 KB. That's not a huge drop, but mainly because my app's per-page logic is still relatively small. For real production apps, the savings will be much more significant.
Further, by doing this early, your app will scale well over time as new features are added. There is undoubtedly more we could trim out of that main bundle here, too; trimming the fat to get the initial render as fast as possible can become somewhat of an obsession.
It's also worth noting that building your design around asynchronous loading forces a stronger overall design. Doing this from the start of a project forces components to be loosely coupled, and that loose coupling lets you play around with what is pulled out of the main bundle and what is left in.
Everything needed for the initial render should land in that main bundle in one pass; otherwise, there is overhead when execution hits lazy-loaded code and must wait for the dependency to download before continuing. Most importantly, designing with the ability to lazy load gives you control to make these decisions without expensive refactors.
Code splitting can be a bit daunting to explore, but once you master these concepts, you will find it is one of the most significant performance gains available today. Let me know in the comments below whether I explained these concepts well, and whether you'd like me to dig deeper into any topics I brushed over. Check out my other Webpack articles here.