Code caching (also known as bytecode caching) is an important optimization in browsers. It reduces the start-up time of commonly visited websites by caching the result of parsing + compilation. Chrome only code-caches scripts above a minimum size, currently 1 KiB of source code. This means that smaller scripts are not cached at all, since we consider the overheads to be greater than the benefits.
If your website has many such small scripts, the overhead calculation may not apply in the same way anymore. You may want to consider merging them together so that they exceed the minimum code size, as well as benefiting from generally reducing script overheads.
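As a rough illustration, a minimal merge step might look like the sketch below. The helper names, the joining strategy, and the threshold check are our own assumptions for illustration, not part of any Chrome API:

```javascript
// Illustrative sketch only: merging small script sources so the combined
// file crosses the ~1 KiB code-caching minimum mentioned above.
const CODE_CACHE_MIN_BYTES = 1024;

function mergeScripts(sources) {
  // Separate with ';' so concatenation can't accidentally fuse statements.
  return sources.join('\n;\n');
}

function exceedsCacheThreshold(source) {
  return Buffer.byteLength(source, 'utf8') >= CODE_CACHE_MIN_BYTES;
}
```

In a real project you would more likely let a bundler (for example Rollup or webpack) do this merging as part of your build, rather than hand-rolling it.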
Avoid inline scripts #
Script tags whose source is inline in the HTML do not have an external source file that they are associated with, and therefore can’t be cached with the above mechanism. Chrome does try to cache inline scripts, by attaching their cache to the HTML document’s resource, but these caches then become dependent on the entire HTML document not changing, and are not shared between pages.
So, for non-trivial scripts which could benefit from code caching, avoid inlining them into the HTML, and prefer to include them as external files.
Use service worker caches #
Service workers are a mechanism for your code to intercept network requests for resources in your page. In particular, they let you build a local cache of some of your resources, and serve the resource from cache whenever they are requested. This is particularly useful for pages that want to continue to work offline, such as PWAs.
A typical example of a site using a service worker registers the service worker in some main script file:
navigator.serviceWorker.register('/sw.js');
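A slightly more defensive version of that registration is sketched below. The function name is ours, and the navigator object is passed in as a parameter purely so the logic can be exercised outside a browser; in a real page you would call `registerServiceWorker(navigator)`:

```javascript
// Sketch: feature-detected service worker registration with basic error
// handling. '/sw.js' matches the snippet above.
async function registerServiceWorker(nav) {
  if (!('serviceWorker' in nav)) return null; // older browsers
  try {
    return await nav.serviceWorker.register('/sw.js');
  } catch (err) {
    console.error('Service worker registration failed:', err);
    return null;
  }
}
```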
And the service worker adds event handlers for installation (creating a cache) and fetching (serving resources, potentially from cache).
self.addEventListener('install', (event) => {
  async function buildCache() {
    const cache = await caches.open(cacheName);
    return cache.addAll([
      '/main.css',
      '/main.mjs',
      '/offline.html',
    ]);
  }
  event.waitUntil(buildCache());
});
self.addEventListener('fetch', (event) => {
  async function cachedFetch(event) {
    const cache = await caches.open(cacheName);
    let response = await cache.match(event.request);
    if (response) return response;
    response = await fetch(event.request);
    cache.put(event.request, response.clone());
    return response;
  }
  event.respondWith(cachedFetch(event));
});
These caches can include cached JS resources. However, we have slightly different heuristics for them since we can make different assumptions. Since the service worker cache follows quota-managed storage rules, it is more likely to persist for longer, and the benefit of caching will be greater. In addition, we can infer extra importance of resources when they are pre-cached before the load.
The largest heuristic differences take place when the resource is added to the service worker cache during the service worker install event. The above example demonstrates such a use. In this case the code cache is immediately created when the resource is put into the service worker cache. In addition, we generate a "full" code cache for these scripts - we no longer compile functions lazily, but instead compile everything and place it in the cache. This has the advantage of having fast and predictable performance, with no execution order dependencies, though at the cost of increased memory use.
If a JS resource is stored via the Cache API outside of the service worker install event, then a code cache is not immediately generated. Instead, if a service worker responds with that response from the cache, then the "normal" code cache will be generated on first load. This code cache will then be available for consumption on the second load; one load earlier than in the typical code caching scenario. Resources may be stored in the Cache API outside the install event when "progressively" caching resources in the fetch event, or if the Cache API is updated from the main window instead of the service worker.
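For instance, progressively caching a script from the main window might look like the sketch below. The cache name and script URL are made up, and the CacheStorage object is passed in as a parameter so the function is not tied to a browser global:

```javascript
// Sketch: storing a script via the Cache API outside the install event.
// Per the text above, a script cached this way gets a "normal" code cache
// on its first cached load rather than an eager "full" one.
async function precacheScript(cacheStorage, url) {
  const cache = await cacheStorage.open('v1'); // hypothetical cache name
  await cache.add(url); // fetches the resource and stores the response
  return cache;
}
```

From the main window this would be called as `precacheScript(caches, '/lazy-feature.mjs')`.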
Note, the pre-cached "full" code cache assumes the page where the script will be run will use UTF-8 encoding. If the page ends up using a different encoding then the code cache will be discarded and replaced with a "normal" code cache.
In addition, the pre-cached "full" code cache assumes the page will load the script as a classic JS script. If the page ends up loading it as an ES module instead then the code cache will be discarded and replaced with a "normal" code cache.
Tracing #
None of the above suggestions is guaranteed to speed up your web app. Unfortunately, code caching information is not currently exposed in DevTools, so the most robust way to find out which of your web app’s scripts are code-cached is to use the slightly lower-level chrome://tracing.
chrome://tracing records instrumented traces of Chrome during some period of time, where the resulting trace visualization looks something like this:
The chrome://tracing UI with a recording of a warm cache run
Tracing records the behavior of the entire browser, including other tabs, windows, and extensions, so it works best when done in a clean user profile, with extensions disabled, and with no other browser tabs open:
google-chrome --user-data-dir="$(mktemp -d)" --disable-extensions
When collecting a trace, you have to select what categories to trace. In most cases you can simply select the “Web developer” set of categories, but you can also pick categories manually. The important category for code caching is v8.

After recording a trace with the v8 category, look for v8.compile slices in the trace. (Alternatively, you could enter v8.compile in the tracing UI’s search box.) These list the file being compiled, and some metadata about the compilation.
On a cold run of a script, there is no information about code caching — this means that the script was not involved in producing or consuming cache data.
On a warm run, there are two v8.compile entries per script: one for the actual compilation (as above), and one (after execution) for producing the cache. You can recognize the latter as it has cacheProduceOptions and producedCacheSize metadata fields.
On a hot run, you’ll see a v8.compile entry for consuming the cache, with metadata fields cacheConsumeOptions and consumedCacheSize. All sizes are expressed in bytes.
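If you export the trace as JSON, you can also filter for these entries programmatically. The sketch below assumes the metadata fields named above sit directly on each event’s args object; the exact layout can differ between Chrome versions, so inspect your own trace first and adjust the field access accordingly:

```javascript
// Sketch: listing v8.compile events that produced or consumed code cache
// from an exported chrome://tracing JSON. The args layout is an assumption.
function codeCacheEvents(traceEvents) {
  return traceEvents
    .filter((e) => e.name === 'v8.compile' && e.args)
    .map((e) => ({
      produced: e.args.producedCacheSize,
      consumed: e.args.consumedCacheSize,
    }))
    .filter((e) => e.produced !== undefined || e.consumed !== undefined);
}
```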
Conclusion #
For most developers, code caching should “just work”. It works best, like any cache, when things stay unchanged, and works on heuristics which can change between versions. Nevertheless, code caching does have behaviors that can be used, and limitations which can be avoided, and careful analysis using chrome://tracing can help you tweak and optimize the use of caches by your web app.