Let’s talk about performance bounce.

tl;dr: performance bounce is about users; single-number summaries (such as Core Web Vitals or Speed Index) are about SEO ranking and benchmarking. The two concepts overlap, but they need to be kept separate.

Beyond the 3 second rule

The axiom that the likelihood of bounce increases beyond the 3 second mark has driven web performance into the business domain. Like every good movement, performance needed a starting point, and that was it. Of course there was effort before the genesis of this axiom, but really it was the 3 second rule that mobilised an industry, and correctly so.

It spawned research, companies, jobs, and indeed, protocol and browser improvements. We have collectively garnered huge amounts of knowledge around web performance.

But there is a very practical obstacle currently in the way. The industry is, at this stage, aware of the importance of web performance. This awareness, however, does not always translate to a solid business case for improving it. We use creative phrases like “performance as a feature” to try (with some modicum of success) to push performance improvements up the priority ladder. Businesses like to act on evidence, and tend to move more swiftly when presented with compelling evidence. In order to find that evidence we need to move beyond the 3 second rule.

Finding clarity by separating layers

The performance narrative is a bit confused, as is the narrative of any burgeoning discipline. It is easiest to illustrate with a very familiar conversation:

me: we need to fix performance.

business: why?

me: because of the 3 second rule (and research)!

business: ah, great. How much is this hurting us? How much will we benefit?

me: erm… Core Web Vitals suggest we’re losing users, and also our competitors have a better Speed Index.

The problem with the 3 second rule and other single-number summaries is that they lack the context of an individual site. They work at a macro level but do not always translate to individual sites, or at least not precisely.

The view from this macro level does not take into account any context:

  1. site type (blog, e-commerce, making travel plans, tax research etc),
  2. market share and familiarity,
  3. user intent,
  4. lock-in or lack of alternatives (e.g. a government website about parking restrictions in a city), and likely many more variables.

Why does that matter? Well, in short, user sensitivity to performance differs across sites. For example, the Core Web Vitals guideline puts the upper bound for an acceptable LCP at 4 seconds:

  • Scenario 1: users start to bounce at a lower threshold, meaning more effort than the guideline suggests is required to satisfy users.

  • Scenario 2: users start to bounce at a higher threshold, say 4.5 seconds. A 500ms LCP improvement is not always easy to achieve, and if it is essentially wasted effort then that effort may be better spent elsewhere.

Both scenarios can also present their own subtleties, as we’ll see later.

To summarise the last few paragraphs: single-number metrics can be a good guide. They are grounded in a large body of research, but ultimately they cannot reflect your website and your users precisely.

So now we have two very clear and separate concerns:

  1. benchmarking / ranking
  2. users / performance bounce (see below)

(It is worth noting that in a good outcome, fixing one helps the other, but this will not always be the case. So, while they are not mutually exclusive, neither are they mutually inclusive; they should at least be investigated separately. Let’s take a look.)

Business case: ranking / benchmarking

If the concern is that, for example, Google will penalise you because your Core Web Vitals metrics are not within acceptable levels, then you already have both a clear target (adjust loading to be within acceptable Core Web Vitals levels) and a clear benefit (ranking will not be affected).

Business case: performance bounce

When it comes to retaining users, we can introduce the concept of performance bounce.

To put it simply, we can define performance bounce:

Content being equal, a performance bounce is a user who leaves a site purely because of poor performance.

And as an extension:

Content being equal, the performance bounce rate is the rate at which users leave your site because of a poorly performing page.

Now, like any metric, this can be sliced many ways: by mobile vs desktop, by connection, or by location, for example. The core, however, remains the same: if we are serving the same content to users, and the only practical difference is the loading time (by various metrics) of the page, then we can infer that the users who bounced were influenced by the speed of the load. Hence, performance bounce.
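To make the definition concrete, here is a minimal sketch of computing a performance bounce rate, overall and sliced by device. The session records, field names, and numbers are entirely hypothetical:

```python
# Sketch: performance bounce rate from hypothetical session records.
# The schema (device, lcp_ms, bounced) is illustrative, not from any
# real analytics platform.
from collections import defaultdict

sessions = [
    {"device": "mobile",  "lcp_ms": 5200, "bounced": True},
    {"device": "mobile",  "lcp_ms": 1800, "bounced": False},
    {"device": "desktop", "lcp_ms": 900,  "bounced": False},
    {"device": "desktop", "lcp_ms": 6100, "bounced": True},
    {"device": "mobile",  "lcp_ms": 2400, "bounced": False},
]

def bounce_rate(records):
    """Performance bounce rate: bounced sessions / all sessions."""
    if not records:
        return 0.0
    return sum(r["bounced"] for r in records) / len(records)

# Overall rate, then the same calculation sliced by device.
overall = bounce_rate(sessions)

by_device = defaultdict(list)
for s in sessions:
    by_device[s["device"]].append(s)

rates = {device: bounce_rate(rs) for device, rs in by_device.items()}
print(overall)  # 0.4
print(rates)
```

The same slicing works for connection type or location: group the sessions by the dimension, then apply the identical rate calculation per group.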

It now becomes pretty obvious that if the performance bounce rate is high, we are losing users purely because of performance.

What does performance bounce look like?

Here’s a real-world landing page. It shows the percentage of bouncing and successful (non-bouncing) users, bucketed by LCP time. Note that the percentage here is the user count in a bucket as a percentage of all users, and bucket limits are lower bounds (inclusive). So, for example, a bounce of 2% at 500ms means that 2% of all users landing on this page have an LCP time within 500-999ms and bounce.
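That bucketing can be sketched as follows; the bucket width matches the chart, but the user data here is made up for illustration:

```python
# Sketch of the bucketing above: each user's LCP time is assigned to a
# 500ms bucket (lower bound, inclusive), and bounce / non-bounce counts
# per bucket are expressed as a percentage of ALL users. Data is invented.
from collections import Counter

BUCKET_MS = 500

def bucket(lcp_ms):
    """Lower bound (inclusive) of the 500ms bucket containing lcp_ms."""
    return (lcp_ms // BUCKET_MS) * BUCKET_MS

# (lcp_ms, bounced) pairs for illustration.
users = [(620, True), (700, False), (980, False), (1200, False),
         (1450, True), (6100, True), (6300, True), (6400, False)]

total = len(users)
bounce = Counter(bucket(ms) for ms, bounced in users if bounced)
ok = Counter(bucket(ms) for ms, bounced in users if not bounced)

for lo in sorted(set(bounce) | set(ok)):
    print(f"{lo}ms: bounce {100 * bounce[lo] / total:.1f}%, "
          f"ok {100 * ok[lo] / total:.1f}%")
```

Because every percentage shares the same denominator (all users), the bars across all buckets sum to 100%.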

Performance Bounce - No Trend Line

On its own this information is already powerful! An immediate point of focus is the 6000+ms mark, where bounce overtakes non-bounce. While those numbers sit towards the tail, totalled up they can mean a lot of users being lost purely for performance reasons.

We can magnify problem areas if we include a bounce trend line. This is a slightly different calculation: instead of looking at the sum of all users, the bounce percentage is calculated per bucket. Using the same 500-999ms bucket from before, we now see that of the users who experience an LCP in that bucket during a page load, 24% bounce.
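The per-bucket calculation can be sketched like so; the counts are illustrative, chosen only to reproduce the 24% figure:

```python
# Sketch of the trend-line calculation: bounced users in a bucket divided
# by ALL users in that same bucket (not by all users overall, as in the
# first chart). The counts below are made up for illustration.
def per_bucket_bounce(bounced, not_bounced):
    """Bounce rate within one bucket, as a percentage."""
    total = bounced + not_bounced
    return 100 * bounced / total if total else 0.0

# e.g. the 500-999ms bucket: 6 bounced out of 25 users in that bucket.
rate = per_bucket_bounce(bounced=6, not_bounced=19)
print(f"{rate:.0f}%")  # 24%
```

The key difference from the first chart is the denominator: per-bucket rates make sparse tail buckets visible, which is exactly why the trend line magnifies problem areas.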

Performance Bounce - With Trend Line

To interpret the bounce trend line:

  1. flat sections mean bounce and non-bounce are moving at the same gradient
  2. rising sections mean bounce is increasing relative to non-bounce
  3. falling sections mean bounce is decreasing relative to non-bounce

The immediate point of interest: looking only at the previous chart, we might have set our target at 6000ms, but this chart shows bounce starting to increase from around 4500ms. We might need a rethink!

There is also an earlier rise from 0-2000ms. At that end of the scale the fixes likely become tougher to implement, meaning we should analyse cost vs. benefit (there are a lot more users concentrated there).

More is better

The two charts are best used together. The bounce trend line highlights where effort can be focused, which can then be cross-referenced with the actual numbers to see potential gain.

For example, the bounce trend line is rising at 0-1999ms. It is then flat(ish) until 4500ms, where it steadily rises again before eventually flattening out (with some deviation) at around 9000ms.

Even more is even better

When looking to implement performance fixes, solid goals are key. For the single-number metrics mentioned above, that is easy. For real users we need to think a bit more. It is important to realise that while those single-number metrics can be gamed, users cannot. In other words, it is possible to achieve acceptable loads according to, e.g., Core Web Vitals at the expense of other metrics. Your users don’t care, so we have to draw some boundary lines from the information we have.

To illustrate, let’s bring in another metric: good old DOMContentLoaded in this case. It doesn’t really matter which metric you choose, and you’ll see why. The more information you have, the more flexibility you have in the choices you can make.

Performance Bounce - No Trend Line

With the DOMContentLoaded bounce trend line we can see an early rise (which makes sense, as it is a fairly early milestone) up to around 1500ms.

Now, as part of our performance improvement efforts, we have some very neat and solid target options:

  1. DOMContentLoaded: < 1500ms
  2. LCP: either < 4500ms or < 6000ms, depending on your cost-benefit analysis.
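As a minimal sketch, assuming those thresholds, a check like this could flag page loads that miss the targets (the function name, metric keys, and input shape are all illustrative):

```python
# Sketch: checking a page load's timings against the targets derived
# from the trend-line analysis. Thresholds are the ones discussed above;
# everything else here is a hypothetical naming choice.
TARGETS_MS = {"dom_content_loaded": 1500, "lcp": 4500}

def missed_targets(timings, targets=TARGETS_MS):
    """Return the metrics (and their values) that missed their targets."""
    return {metric: value for metric, value in timings.items()
            if metric in targets and value >= targets[metric]}

misses = missed_targets({"dom_content_loaded": 1320, "lcp": 5100})
print(misses)  # {'lcp': 5100}
```

Because users (not single-number summaries) set these thresholds, the same check naturally extends to whatever additional metrics your data shows users are sensitive to.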

Takeaways

  1. Single-number metrics need to be treated separately from user behaviour
  2. Looking at performance bounce means we can justify a solid business case for performance improvements (or perhaps for not doing the work, and that is not a bad outcome either)
  3. Users don’t care about metrics; use multiple metrics as a guide
  4. This is awesome!
  5. We can do the legwork for you. These charts are from our BytesMatter real user monitoring platform. Give us a try: add our beacon to your site, starting free.
  6. Take a look at our sandbox demo site and have a play (menu item: Bounce analysis)

Thanks for reading!

Tags: #frontend