Moving beyond window.onload()
[Originally posted in the 2012 Performance Calendar. Reposting here for folks who missed it.]
There’s an elephant in the room that we’ve been ignoring for years:
window.onload is not the best metric for measuring website speed
We haven’t actually been “ignoring” this issue. We’ve acknowledged it, but we haven’t coordinated our efforts to come up with a better replacement. Let’s do that now.
window.onload is so Web 1.0
What we’re after is a metric that captures the user’s perception of when the page is ready. Unfortunately, perception.ready() isn’t on any browser’s roadmap. So we need to find a metric that is a good proxy.
Ten years ago, window.onload was a good proxy for the user’s perception of when the page was ready. Back then, pages were mostly HTML and images. JavaScript, CSS, DHTML, and Ajax were less common, as were the delays and blocked rendering they introduce. It wasn’t perfect, but window.onload was close enough. Plus it had other desirable attributes:
- standard across browsers – window.onload means the same thing across all browsers. (The only exception I’m aware of is that IE 6-9 don’t wait for async scripts before firing window.onload, while most other browsers do.)
- measurable by 3rd parties – window.onload is a page milestone that can be measured by someone other than the website owner, e.g., metrics services like Keynote Systems and tools like Boomerang. It doesn’t require website owners to add custom code to their pages.
- measurable for real users – Measuring window.onload is a lightweight operation, so it can be performed on real user traffic without harming the user experience (a sketch of such a measurement follows this list).
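As an illustration (a minimal sketch, not from the original post), here is roughly what such a lightweight real-user measurement might look like. It assumes the Navigation Timing API where available, an inline timestamp fallback, and a placeholder beacon URL:

```js
// Minimal sketch of a real-user load-time beacon. Assumptions: the
// Navigation Timing API (falling back to window._startTime, a timestamp
// an inline script would set at the top of the page), and "/beacon" as
// a placeholder reporting URL.
window.addEventListener('load', function () {
  // defer one tick so loadEventEnd has been set
  setTimeout(function () {
    var t = window.performance && window.performance.timing;
    var loadTime = t
      ? t.loadEventEnd - t.navigationStart
      : new Date().getTime() - window._startTime;
    new Image().src = '/beacon?load=' + loadTime;
  }, 0);
});
```

This is essentially what RUM libraries like Boomerang do, with far more care around edge cases.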
Web 2.0 is more dynamic
Fast forward to today and we see that window.onload doesn’t reflect the user perception as well as it once did.
There are some cases where a website renders quickly but window.onload fires much later. In these situations the user perception of the page is fast, but window.onload says the page is slow. A good example of this is Amazon. In the Amazon WebPagetest results we see that the above-the-fold content is almost completely rendered at 2.0 seconds, but window.onload doesn’t happen until 5.2 seconds. (The relative sizes of the scrollbar thumbs show that a lot of content was added below-the-fold.)
[Screenshots: Amazon at 2.0 seconds (~90% rendered) vs. at 5.2 seconds (onload)]
But the opposite is also true. Heavily dynamic websites load much of the visible page after window.onload. For these websites, window.onload reports a value that is faster than the user’s perception. A good example of this kind of dynamic web app is Gmail. Looking at the WebPagetest results for Gmail we see that window.onload is 3.3 seconds, but at that point only the progress bar is visible. The above-the-fold content snaps into place at 4.8 seconds. It’s clear that in this example window.onload is not a good approximation for the user’s perception of when the page is ready.
[Screenshots: Gmail at 3.3 seconds (onload) vs. at 4.8 seconds (~90% rendered)]
it’s about rendering, not downloads
The examples above aren’t meant to show that Amazon is fast and Gmail is slow. Nor are they intended to say whether all the content should be loaded before window.onload vs. after. The point is that today’s websites are too dynamic to have their perceived speed reflected accurately by window.onload.
The reason is that window.onload is based on when the page’s resources are downloaded. In the old days of only text and images, the readiness of the page’s content was closely tied to its resource downloads. But with the growing reliance on JavaScript, CSS, and Ajax, the perceived speed of today’s websites is better reflected by when the page’s content is rendered. As the adoption of these dynamic techniques increases, so does the gap between window.onload and the user’s perception of website speed. In other words, this problem is just going to get worse.
The conclusion is clear: the replacement for window.onload must focus on rendering.
what “it” feels like
This new performance metric should take rendering into consideration. It should be more than “first paint”. Instead, it should capture when the above-the-fold content is (mostly) rendered.
I’m aware of two performance metrics that exist today that are focused on rendering. Both are available in WebPagetest. One of them, Pat Meenan’s Speed Index, gives the “average time at which visible parts of the page are displayed”. Both of these techniques use a series of screenshots to do their analysis and carry the computational complexity that comes with image analysis.
In other words, it’s not feasible to perform these rendering metrics on real user traffic in their current form. That’s important because, in addition to incorporating rendering, this new metric must maintain the attributes mentioned previously that make window.onload so appealing: standard across browsers, measurable by 3rd parties, and measurable for real users.
Another major drawback to window.onload is that it doesn’t work for single page web apps (like Gmail). These web apps only have one window.onload, but typically have several other Ajax-based “page loads” during the user session where some or most of the page content is rewritten. It’s important that this new metric works for Ajax apps.
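To make that concrete (a hypothetical sketch, not from the original post), an Ajax app could hand-instrument each of these “page loads” with the User Timing API; fetchMessages() and renderMessageList() are placeholder app functions:

```js
// Hypothetical: instrumenting an Ajax "page load" in a single page app,
// since window.onload fires only once per session.
function loadInboxView() {
  performance.mark('inbox-start');          // Ajax navigation begins
  return fetchMessages()                    // placeholder: fetch new content
    .then(renderMessageList)                // placeholder: rewrite the page
    .then(function () {
      performance.mark('inbox-rendered');   // content is on screen
      performance.measure('inbox-load', 'inbox-start', 'inbox-rendered');
    });
}
```

But as with window.onload, this measures when the app finished its work, not necessarily when the user perceived the page as ready.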
ball rolling
I completely understand if you’re frustrated by my lack of implementation specifics. Measuring rendering is complex. The point at which the page is (mostly) rendered is so obvious when flipping through the screenshots in WebPagetest. Writing code that measures that in a consistent, non-impacting way is really hard. My officemate pointed me to this thread from the W3C Web Performance Working Group talking about measuring first paint that highlights some of the challenges.
To make matters worse, the new metric that I’m discussing is likely much more complex than measuring first paint. I believe we need to measure when the above-the-fold content is (mostly) rendered. What exactly is “above-the-fold”? What is “mostly”?
Another challenge is moving the community away from window.onload. It’s the primary performance metric in popular tools such as WebPagetest and Gomez.
It’s going to take time to define, implement, and transition to a better performance metric. But we have to get the ball rolling. Relying on window.onload as the primary performance metric doesn’t necessarily produce a faster user experience. And yet making our websites faster for users is what we’re really after. We need a metric that more accurately tracks our progress toward this ultimate goal.
I now think what we need to do is have a way (or multiple ways) for a site to signal to the browser (and other 3rd party libs), “hey, I’m ‘meaningfully interactive’ now.”
This event would ideally be fired by the page’s code (or markup) shortly after DOMContentLoaded fires, but a complex app could delay it longer if there was more stuff it felt it needed to load before a user could have a meaningful experience with the site.
By contrast, a simple text blog post sort of site could fire it immediately after the text is painted, because for that site, “meaningful interaction” is passive visual reading of the text.
I kind of envision an optional HTTP header and/or meta tag that could specify one of a set of preset triggers, like “first-paint”, “DOMContentLoaded”, “onload”, or several other possible choices. If not present, it might default to “onload” for legacy’s sake.
Many sites would be able to make good use of those preset values to slightly customize how “ready” is defined for them. If such a tag/header was found, the browser could use that, and then fire an “InteractionReady” event on the page so that a third-party lib could detect it.
Or, a more complex app could set the header/value to “none”, and then manually fire the “InteractionReady” event itself at whatever point it felt was appropriate.
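For instance (a hypothetical sketch of the proposal above; the meta tag, event name, and renderCriticalContent() are all illustrative):

```js
// Hypothetical: with <meta name="interaction-ready" content="none"> in
// the page, the app fires the event itself once it decides it is ready.
// renderCriticalContent() is a placeholder for the app's own logic.
renderCriticalContent().then(function () {
  document.dispatchEvent(new CustomEvent('InteractionReady'));
});
```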
The polyfill for this for older browsers is straightforward: feature-test (FT) for the event name in the window object, and if not present (older browser), add a window.onload handler that artificially fires a custom DOM event called “InteractionReady”.
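Something like this, perhaps (a rough sketch under the assumptions above; the native event name is hypothetical):

```js
// Rough polyfill sketch: feature-test for the (hypothetical) native
// event on the window object; if absent, fire a custom
// "InteractionReady" event from a window.onload handler.
(function () {
  if ('oninteractionready' in window) return; // native support (hypothetical)

  window.addEventListener('load', function () {
    var evt;
    if (typeof CustomEvent === 'function') {
      evt = new CustomEvent('InteractionReady');
    } else {
      // older browsers lack the CustomEvent constructor
      evt = document.createEvent('CustomEvent');
      evt.initCustomEvent('InteractionReady', true, true, null);
    }
    document.dispatchEvent(evt);
  });
})();
```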
Anyway, this is just an idea of how I’d like to see this kind of thing proceed. I’m sure there’s plenty of details to work out. Hopefully your post gets the ball rolling. :)
Well said!
This is the problem with virtually every metric. We want “making our websites faster for users”. But it’s hard to define, and hard to measure. So we settle for something that is easier to measure, but isn’t quite what we really want. And then we cross our fingers and hope there is some correlation between what we measure and what we actually want. Sadly, many metrics degrade into well-intentioned folklore, best-practices, and a cargo cult mentality.
I applaud your efforts! I look forward to some progress toward anything which can accurately indicate “faster for users”, yet works across browsers and is external to the site under test.
Charlie | 14-May-13 at 6:15 am
It’s worth noting that vendors are avid followers of your work. Keynote has added additional metrics to their product to give approximate timings for user experience:
http://www.keynote.com/mykeynote/help/components.asp#user_ex
I haven’t looked at it in any detail but I believe that it is no longer exclusive to Internet Explorer.
Leo Vasiliou | 14-May-13 at 9:17 am
Steve,
Nothing is perfect (therefore everything comes with some type of imperfection). Asking which metric to use is like asking which statistical calculation to use (arithmetic mean versus geometric mean, etc.). If the answer is contextual, then I’d suggest readyState or onload are still two of the least imperfect metrics available.
Leo
Hi
The way I look at this: the best way to overcome such a complex task as determining when a site is ready to use, as far as real humans are concerned, is to use those users and their brains to do all the heavy lifting.
A metric that determines when users begin interacting with content, relative to the start of the browser’s work (the HTTP GET, Ajax requests firing, etc.), would be the most useful, and probably fairly easy to implement: JS at the top of the page (yes, I know that is a sin) that watches the mouse/keyboard for user behaviour in the region of the browser display, or in more specific areas of the DOM for Ajax events. (Clearly you’d want to ignore users reading their emails whilst waiting for a page to load.)
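A rough sketch of the idea (illustrative only; “/beacon” is a placeholder reporting URL):

```js
// Sketch: record the time of the first real user interaction relative
// to navigation start. Must run early (top of the page) so the first
// event isn't missed.
(function () {
  var start = (window.performance && window.performance.timing)
    ? window.performance.timing.navigationStart
    : new Date().getTime();

  function onFirstInteraction() {
    var delay = new Date().getTime() - start;
    ['mousedown', 'keydown', 'scroll'].forEach(function (type) {
      document.removeEventListener(type, onFirstInteraction, true);
    });
    new Image().src = '/beacon?firstInteraction=' + delay; // placeholder URL
  }

  ['mousedown', 'keydown', 'scroll'].forEach(function (type) {
    document.addEventListener(type, onFirstInteraction, true);
  });
})();
```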
Clearly this is only of use in systems where RUM is possible (not dev, test, CI etc).
Paul
Hmm, no way to edit my comment above. The PS should be at the very end for the whole comment to read properly.
pd | 24-Jul-13 at 9:05 pm
Surely there is a whole variety of browser-internal events, never hitherto exposed to page content, that we can get browser companies to allow us to use. It shouldn’t be too hard to keep the finite certainty of using non-qualitative metrics. We need events such as CSS rendering completed (pre-animations) and images rendered (as opposed to just loaded, thus factoring in progressive loading). Browsers already have scroll methods, so they know when an element is outside the viewport. That information just needs to be refined so that we can start looking at above-the-fold timing.
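For the viewport part, browsers already expose enough to sketch an above-the-fold test (illustrative only):

```js
// Sketch: is an element (at least partially) above the fold right now?
// Uses getBoundingClientRect, which browsers already expose.
function isAboveTheFold(el) {
  var rect = el.getBoundingClientRect();
  var viewportHeight = window.innerHeight || document.documentElement.clientHeight;
  return rect.top < viewportHeight && rect.bottom > 0;
}
```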
As usual, the technicalities may not be that hard; it’s the human factor of getting all the browser vendors, standards committees and developer communities to agree! That process continues to take too long, even if it is getting faster.