This is a quick summary of a caveat of cross-origin resource sharing (CORS) when measuring the performance of ajax requests using the browser’s performance resource timings.

To be clear, in this article we will unpack slightly some of the information from Mozilla’s documentation on the subject. Why? While it contains the root cause, it does not spell out the consequence, which hopefully we’ll do here.

Resource Timing API… wait, what?!

The rise and influence of performance measuring and monitoring across the discipline of web engineering has meant the respective tooling has come a long way too. I mean, do you remember this:

  var start = new Date();
  window.onload = function() {
    var end = new Date();
    var pageLoadKindaSortaMaybeWhatever = end.getTime() - start.getTime();
    console.log('Page load time was ', pageLoadKindaSortaMaybeWhatever);
  };

*the keyword var is used lovingly here for historical accuracy

One of the go-to (see what I did there) tools is the Resource Timing API. Very briefly, it allows us to extract information about the load times of resources (documents, scripts, images, fetch/xmlhttp/ajax requests etc.). In turn we can then do some very simple calculations to derive more recognisable information such as page load time, DNS lookup time, time to first byte (TTFB) and more. For example, to collect ajax request resource timings (we’ll assume ‘modern’ browsers for the purpose of this and further examples):

    const ajaxRequestTimings = window.performance.getEntriesByType('resource')
                                  .filter(entry => ['fetch', 'xmlhttprequest', 'beacon'].includes(entry.initiatorType));
    ajaxRequestTimings.forEach(timing => {
      console.log('Total request time', timing.responseEnd - timing.startTime);
      console.log('DNS lookup time', timing.domainLookupEnd - timing.domainLookupStart);
    });

As you can see, the first, very simple, step was to ask the browser for all the resource entries and then filter them down to the known ‘ajaxy’ types. After that we are free to calculate as much as we want. Brilliant!

Well done, you’ve copy-pasted the documentation. Now tell me something useful!

Gladly! The two metrics derived above, DNS lookup time and total request time, demonstrate the issue perfectly when CORS comes into play (so good it’s almost as if it was planned). In short, one works and the other “doesn’t”!

For requests going to the same origin (and that includes the protocol!) as the document, everything works as expected. So given a same-origin request:


Our timing code above would helpfully yield something along the lines of:

  DNS lookup time 52.70500000938773
  Total request time 174.40499999839813

Great! Now, on the other hand, given a cross-origin request:


We now see (something like):

  DNS lookup time 0
  Total request time 200.0150000094436

What’s happening here?

So this brings us back to the little unpacking of the documentation linked above. It clearly states:

The properties which are returned as 0 by default when loading a resource from a domain other than the one of the web page itself: redirectStart, redirectEnd, domainLookupStart, domainLookupEnd, connectStart, connectEnd, secureConnectionStart, requestStart, and responseStart.

Now, given Mozilla’s timestamp diagram from their documentation: Mozilla’s Resource Timing Timestamps

That means we’re missing out on really helpful measurements. To highlight a few:

  - Time to first byte (TTFB): responseStart - startTime
  - DNS: domainLookupEnd - domainLookupStart
  - Redirect Time: redirectEnd - redirectStart
  - Download Time: responseEnd - responseStart
  - SSL (TLS) Time: connectEnd - secureConnectionStart

Those are big hitters! We can see now that when CORS comes into play we become a little poor on the detail.
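Putting the list above into practice, here is a sketch (deriveMetrics is a name of our own invention) that returns null for the restricted metrics instead of misleading zero-length durations:

```javascript
// Sketch: derive the metrics listed above from a PerformanceResourceTiming
// entry. A responseStart of 0 signals a cross-origin response without a
// Timing-Allow-Origin header, so the detailed timings are unavailable.
function deriveMetrics(entry) {
  const restricted = entry.responseStart === 0;
  return {
    totalRequestTime: entry.responseEnd - entry.startTime, // always available
    ttfb: restricted ? null : entry.responseStart - entry.startTime,
    dnsLookupTime: restricted ? null : entry.domainLookupEnd - entry.domainLookupStart,
    redirectTime: restricted ? null : entry.redirectEnd - entry.redirectStart,
    downloadTime: restricted ? null : entry.responseEnd - entry.responseStart,
    // Note: for plain-HTTP requests secureConnectionStart is also 0,
    // so real code should guard for that case separately.
    tlsTime: restricted ? null : entry.connectEnd - entry.secureConnectionStart,
  };
}
```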

What does that mean for me?

Back in practical-land, you’re probably thinking “Dave, I don’t care if requests to third parties take long because of DNS or download time, I just care that they take long”. Yes and no.

For third party requests there is not always much you can do about it. That is unfortunate. Though if you could see that it was DNS taking the time, that may prompt you to add a DNS prefetch. Or, if SSL time was slow, you may go as far as a preconnect.
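For context, those are the standard dns-prefetch and preconnect resource hints, added in the document head (third-party.example is a placeholder host):

```html
<!-- Resolve DNS for the third-party host ahead of time -->
<link rel="dns-prefetch" href="https://third-party.example">
<!-- Go further: DNS + TCP + TLS handshake up front -->
<link rel="preconnect" href="https://third-party.example">
```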

For requests to your own subdomains you can easily mitigate by adding the Timing-Allow-Origin header to responses.

I think, however, the key takeaway here is to ensure your monitoring doesn’t go for the lowest common denominator. The easy path is to only monitor that which gives results for all requests (from our example above, Total Request Time). The problem with that is that diagnosing website issues is hard. We need all the help we can get in order to quickly mitigate frontend issues for our users. All information is good information in that context, because the faster we can react, the fewer people we drive away from our website.

When you can, track the information that is available, and allow in your dashboards for the caveat that this information is not always present. The power of a chart in front of you with a massive spike pointing directly at a culprit cannot be emphasized enough; it is a huge helper in stressful situations such as diagnosing production website issues <– plug for the first post in a blog series.

tl;dr all information is good information. Don’t write it off.

Tags: #frontend #tools