Steve Workman's Blog

Data-Driven Performance Breakout at Edge Conference

Posted by Steve Workman About 3 min reading time

I was lucky enough to attend Edge Conf in London this year, a day that I always truly enjoy. The main sessions of the conference were streamed live and videos will be available later, but the break-outs weren't recorded. These were the sessions I enjoyed the most, and it's a shame that people won't see them without having been there - so here are my notes on what was said, to the best of my ability (and with a big hat tip to George Crawford for his notes). Patrick Kettner was the moderator.

Q: How can we use the masses of data that RUM collects to get businesses to care about performance?

Business leaders like metrics from companies that they can relate to (e.g. Amazon, eBay), but these aren't very useful metrics as the scale is completely different. Finding stats from competing or relevant companies is hard, so how do you make them care?

Introducing artificial slowness is one way to convince people, but it's not good for business. There's also the risk that you may not see an increase in conversions from speed improvements! Filmstrips are incredibly useful at this point to see what's going on, and these are available in Chrome DevTools in the super-secret area.

Showing videos to business people makes it really hit home - people hate it when they can visibly see their site suck. It's like making people watch a user test of their site. Shout out to Lara Hogan at Etsy (their engineering blog is awesome) for her great work on this, something that Yell has copied.

The most useful metrics, first render and SpeedIndex, aren't available in the browser. Using SpeedCurve can really make business people sit up and take notice of performance because it's a pretty interface to those numbers.

All in all, the standard metrics are unlikely to be the best for you, so add user timing marks (with a very simple polyfill for older browsers) and graph those, including sending them to WebPageTest so you can measure the things that are important to you over time. This was done very successfully by The Guardian (hat tip Patrick Hamann).
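To make the user timing idea concrete, here's a minimal sketch (not The Guardian's actual code) of marking two moments with the User Timing API and measuring the span between them; the mark names are illustrative, and a real polyfill would shim `mark`/`measure` where they're missing:

```javascript
// Use the User Timing API where available; a real polyfill would
// fall back to Date.now() bookkeeping here.
const perf = (typeof performance !== 'undefined' && performance.mark)
  ? performance
  : null;

function mark(name) {
  if (perf) perf.mark(name);
}

mark('articleStart');
// ... render the article here ...
mark('articleEnd');

if (perf) {
  // Creates a named measure between the two marks
  perf.measure('articleRender', 'articleStart', 'articleEnd');
  const [entry] = perf.getEntriesByName('articleRender');
  console.log('articleRender took', entry.duration, 'ms');
}
```

These named measures are exactly the sort of custom metric you can beacon to your analytics or feed into WebPageTest over time.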

Q from Ilya Grigorik: The browser loading bar is a lie, yet users wait for it. What metric should it use?

Basically, developers can defer their loading until after the onload event to hack around the loading spinner. If we stop the spinner at first render, the page isn't usable yet. If we stop it when the page can be interacted with, when would that be? The browser runs the risk of "feeling slower" or "feeling faster" just by changing the progress bar. Apparently there's one browser that just shows the bar for three seconds, nothing more.
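The "load after onload" trick can be sketched roughly like this; `deferUntilLoad` and `loadComments` are hypothetical names, and the window object is passed in rather than assumed global so the idea is easy to exercise outside a browser:

```javascript
// Defer non-critical work until the window 'load' event has fired,
// so the browser's spinner stops before the extra work begins.
function deferUntilLoad(fn, win) {
  // readyState === 'complete' means the 'load' event already fired
  if (win.document.readyState === 'complete') {
    fn();
  } else {
    win.addEventListener('load', fn, { once: true });
  }
}

// In a page, something like:
//   deferUntilLoad(() => loadComments(), window);
```

The spinner stops at onload, and the deferred work (comments, ads, tracking) never held it up - which is exactly why the bar is a lie.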

No real consensus was reached here, but it was a very interesting discussion.

Q: Flaky or dropped connections are important to know about for performance metrics - what can the room say about their experiences gathering offline metrics?

When the FT tried this with their web app, they often exceeded localStorage size limits and sometimes POST size limits (25MB), as users could be offline for a week or more. The Guardian had good success bundling beacons into one big POST to save money with Adobe Omniture/SiteCatalyst.
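A rough sketch of that bundling approach (not the FT's or The Guardian's actual code) might look like the following; the queue key, endpoint, and injected `storage`/`post` functions are all assumptions made so the idea stays self-contained:

```javascript
const QUEUE_KEY = 'metricsQueue';    // assumed storage key
const MAX_BYTES = 25 * 1024 * 1024;  // stay under the ~25MB POST limit

// Queue an event in a localStorage-like store while offline.
function enqueue(storage, event) {
  const queue = JSON.parse(storage.getItem(QUEUE_KEY) || '[]');
  queue.push(event);
  const serialized = JSON.stringify(queue);
  if (serialized.length > MAX_BYTES) return false; // drop rather than overflow
  storage.setItem(QUEUE_KEY, serialized);
  return true;
}

// Flush every queued event as one bundled POST, then clear the queue.
function flush(storage, post) {
  const queue = JSON.parse(storage.getItem(QUEUE_KEY) || '[]');
  if (queue.length === 0) return 0;
  post('/beacon', JSON.stringify(queue)); // one request instead of many
  storage.setItem(QUEUE_KEY, '[]');
  return queue.length;
}
```

Bundling like this is what saves money with per-request analytics pricing: a week of offline events becomes a single server call when connectivity returns.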

The best solution is the Beacon API (sendBeacon), which promises to deliver the payload at some point (something images and XHR don't guarantee right now). It's implemented in Google Analytics (you just have to enable it in the config), but other tracking providers don't support it yet.
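A small sketch of using sendBeacon with a fallback for browsers that lack it; the URL and `fallback` sender are illustrative, and the navigator object is passed in so the fallback path can be exercised anywhere:

```javascript
// Try the Beacon API first; if it's unavailable or refuses the payload,
// hand the data to a caller-supplied fallback (e.g. an XHR or image pixel).
function reportMetrics(url, data, nav, fallback) {
  const body = JSON.stringify(data);
  if (nav && typeof nav.sendBeacon === 'function' && nav.sendBeacon(url, body)) {
    return 'beacon'; // queued by the browser; survives page unload
  }
  fallback(url, body);
  return 'fallback';
}
```

In Google's analytics.js, the equivalent switch is `ga('set', 'transport', 'beacon');` - the API falls back automatically where sendBeacon isn't supported.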

Q: What metrics APIs are missing in browsers?

A unique opportunity to ask Ilya to add APIs into Chrome - not to be passed up!

Wrap-up

I'd have loved to stay and chat more (nice to meet Tim Kadlec in person, and shout out to the Path to Performance podcast as well). It's rare to have so much of the web performance community in the same room at the same time, and it should definitely happen more often.

If there are things I've missed, let me know in the comments or on Twitter (@steveworkman).