Below are the pages tagged with the taxonomy term “web performance”
Waterfalls made of RUM
Debugging frontend problems should be simple. In reality, it rarely is. It’s a potentially stressful exercise, since diagnosis usually happens because of an outage of some sort. Outages mean users aren’t happy, or we’re losing money or users, or both. Okay, so we know we’re stressed. Now we have to combine familiarity with the system with a logical, often robotic, exercise of looking for clues and excluding suspects until a root cause is found.
Why did Nock not record all the API requests?
tl;dr: Nock works by overriding the http(s).request functions, which must happen before other code stores a reference to them. We ran into an issue where Nock wasn’t recording API calls to a new service dependency we had introduced into the BytesMatter codebase. The solution eventually turned out to be quite straightforward, but investigating it taught us that code subtleties sometimes make tools less simple to use than the “copy-paste” tutorials suggest.
Performance Bounce - Beyond the 3 second rule
Let’s talk about performance bounce. tl;dr: performance bounce is about users; single-number summaries (such as Core Web Vitals or Speed Index) are about SEO ranking and benchmarking. These two concepts overlap, but they need to be kept separate. Beyond the 3 second rule: the axiom that the likelihood of bounce increases beyond the 3-second mark has driven web performance into the business domain. Like every good movement, performance needed a starting point, and that was it.
Performance Resource Timing, Cors, and AJAX requests
This is a quick summary of a caveat of cross-origin resource sharing (CORS) when measuring the performance of AJAX requests using the Resource Timing API in the browser. To be clear, this article unpacks some of the information from Mozilla’s documentation on the subject. Why? While that documentation contains the root cause, it does not spell out the consequence, which hopefully we’ll do here. Resource Timing API - wait, what?! The rise and influence of performance measuring and monitoring across the discipline of web engineering has meant the respective tooling has come a long way too.
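The CORS caveat in question: without a `Timing-Allow-Origin` response header, cross-origin entries from `performance.getEntriesByType('resource')` report zero for the detailed timestamps (`requestStart`, `responseStart`, and friends), even though `startTime` and `duration` are still populated. The sketch below uses hand-built stand-in objects rather than real `PerformanceResourceTiming` entries, since those only exist in a browser; the detection logic is the part being illustrated.

```javascript
// Hedged sketch: spotting CORS-restricted resource timing entries.
// A cross-origin resource served without Timing-Allow-Origin still
// yields an entry, but its detailed timestamps come back as 0.
function isTimingRestricted(entry) {
  // Zeroed detailed timestamps alongside a non-zero duration are the
  // signature of a cross-origin resource lacking Timing-Allow-Origin.
  return entry.duration > 0 &&
    entry.requestStart === 0 &&
    entry.responseStart === 0;
}

// Stand-ins for entries from performance.getEntriesByType('resource'):
const sameOrigin = {
  name: '/app.js',
  duration: 120.4, requestStart: 35.2, responseStart: 80.1,
};
const crossOriginNoTAO = {
  name: 'https://cdn.example.com/lib.js',
  duration: 95.0, requestStart: 0, responseStart: 0,
};

console.log(isTimingRestricted(sameOrigin));       // false
console.log(isTimingRestricted(crossOriginNoTAO)); // true
```

The practical consequence: any AJAX latency breakdown (DNS, connect, time-to-first-byte) silently collapses to zeros for third-party origins unless those origins opt in with the header.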
Diagnosing Website Issues - Part 2 - Let your systems tell you what is wrong
This is part 2 in a series about diagnosing website issues. Hopefully it gives us a framework to constructively and efficiently work through website outages or issues instead of working in a state of stress or panic. Thanks to Part 1, we know what it means to be in an outage. Everyone knows their place and we’re ready to figure out what on earth is going on. Now, obviously you’re not ACTUALLY reading this in the middle of an outage.
Real User Monitoring vs Synthetic Web Performance Tools
Let’s start with an assertion: web performance is important. You agree with that, and that’s why you’re here. Both real user monitoring and synthetic web performance tooling offer ways to shed light on how your website is performing for your users. Which to use? Easy: they offer different views, and both are important. A simple way to differentiate them: tools measure and evaluate, while monitoring tracks trends and deviations. If I may insert a pedantic opinion here, the former is best suited to build time and the latter to your production website.
Diagnosing Website Issues - Part 1 - Before we begin
This is part 1 of a series. As engineers we love building software. We love releasing software. We love seeing our software helping people. Diagnosing website issues is not always the first thing that comes to mind when we think about what we do day to day, but it is a real and critical part of our jobs. Unfortunately, sometimes things go wrong. Sometimes wrong is “just a bit wrong” - the cases that can generally be filed under “bug: fix later”.