Articles in the ‘Performance’ Category

7 Key take-aways from Chrome Dev Summit 2019

I was lucky enough to go to the Chrome Dev Summit in San Francisco this year. There were a thousand people in the venue, and seven thousand on the wait list, so I’m glad that I could attend in person.

You can see all the videos from the talks at CDS 2019 on this playlist.

Here are my 7 key take-aways from the event:

The web must get to parity with native

Without saying so explicitly, the message was that the web must win, and the Chrome team are working towards a future where the web is the go-to development platform for everything. It’s going gangbusters on desktop, but on mobile, which is where users are moving, it’s not the same story.

So, the team is investing heavily in things like Project Fugu, to create APIs for contacts, native file system interaction, Bluetooth and NFC. This all has to be done through the standards process, which can be a long road, but it’s necessary to ensure interoperability across the web.

However, there’s still a long way to go to get the web into the Play/App store

The Chrome leadership panel session raised the point that both Samsung and Microsoft treat PWAs as first-class citizens in their stores. Google has come out with TWAs – Trusted Web Activities. There is a build and submit step for the store as well, and the tooling provided only works on macOS right now.


Take a step back from the specific problem and you can see that no one on the stage was happy with the outcome (note – the video of that session is not available online and the full live streams have been taken down). They all want the web to be available and discoverable. There is a lot more to be done and it’s not perfect, but with TWAs they have taken a big step forward, from having no PWAs in the store to a pretty simple way to create and deploy them.
Chrome Leadership Panel at CDS 2019
It is going to be OK everyone, it’s just going to take some more time.

There’s a big focus on making the web faster

The speakers all talked about making the web faster, including how they measure it: Lighthouse is going to incorporate new metrics such as Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS). All of these metrics are explained over at https://web.dev/metrics/, and they are now being labelled in the Performance panel of Chrome’s DevTools.
New metrics coming to Lighthouse in V6
They are also experimenting with slow and fast loading indicators and interstitials, though they didn’t make that much of this during the talk – expect this to change a lot.

Lighthouse is getting a CI mode and a server

https://github.com/GoogleChrome/lighthouse-ci – and I couldn’t be happier.

The focus on a faster web extended to making React faster

I was surprised at how much focus there was on React, given that the Web Almanac also launched at this event and it showed that React has just a 4.6% share of the web – though it clearly has a much larger mindshare amongst developers.

During one talk the speakers seemed to recommend that you focus on “UI frameworks” rather than “view libraries”, so Next.js over React. I don’t really understand why Google would go all-out in recommending one over the other, and especially the one that needs the most effort to configure to get high performance.

Google told us to focus on those less privileged than ourselves

A lot of talks covered performance, security, and access to data and services for those who can’t rely on them – for example, users in the developing world. They encouraged us to test on devices worse than our own, to look at our data, and to find out what the world is really like outside of our bubbles.

Chrome is secure by default

We’ve moved on from the pre-HTTPS era, where users had to be rewarded for being safe with green labels and checkboxes – we are now into the phase where that is taken for granted and you are warned when things might not be as they seem. The security team are gathering data and doing in-depth research to find out which signals users react to and take action on when something is insecure.

Also in security, there was a great talk on WebAuthn, a way to authenticate your users with minimal or no passwords, using the capabilities of their devices or hardware like Titan security keys.

The open standards process is important, and bringing Microsoft into Chromium is good for the web

It was fantastic to see the Edge team present and show off all the things they were working on in collaboration with the Chrome team. Microsoft are bringing the best of what they know about accessibility, security and the Windows platform to the table, which will enhance the whole of the web platform.
Open source contributors to Chromium
I’m also quite confident that we can stop developing for IE very soon. Once Edge launches, businesses adopt it, and the IE icon goes away, we’ll see big drop-offs in IE usage.

And that’s it – the event was great and excellently organised. Well done to the MCs and everyone behind the scenes. Roll on 2020!

A Primer on Preconnect

The best thing about technology is that it’s always changing. I learn new things every day, even in areas that I think I know a lot about, such as web performance.

See, when I was looking at performance optimisations for a project, I found a way to reduce connection times by getting the browser to speculatively perform DNS lookups for a list of third-party domains. This takes about 10-30ms off the initial connection to a new domain. It may not sound like much, but it can make a big difference to how fast an image loads.

The last time I looked at this was before the internet went all-in on HTTPS, and so when I looked at the network timing of a new third-party HTTPS service, I was wondering if there was anything I could do to help the browser with the additional TLS negotiation and connection.

Waterfall showing DNS prefetch on an HTTPS resource

The effect shown here is achieved by adding a <link rel="dns-prefetch" href="//example.com"> tag to the <head> of the HTML. You can also do this by adding a Link header to the response of the HTML document, for example `Link: <https://example.com>; rel=dns-prefetch`.

That resolves the DNS, but doesn’t do the TLS negotiation or the connection setup. There is a different hint that you can give to the browser that will do all of those things, called preconnect. It works in much the same way as dns-prefetch but sets up the rest of the connection as well. When working, it looks like this:

The code here looks like this: <link rel="preconnect" href="https://example.com">. That looks nice and easy, and you’ve just saved another fifty milliseconds on the first connection to that domain over HTTPS.

However, this is a web standard, so it’s not all that simple. Firstly, the browser support for this hint is not brilliant, with no support in IE or Safari. The best thing to do for those browsers is to keep the dns-prefetch hint as well as the preconnect hint.
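Put together, the hints for a third-party domain might look something like this in the <head> (example.com standing in for the third-party host):

```html
<head>
  <!-- Warm up the full connection (DNS + TCP + TLS) where preconnect is supported -->
  <link rel="preconnect" href="https://example.com">
  <!-- Fallback for browsers that only understand dns-prefetch -->
  <link rel="dns-prefetch" href="//example.com">
</head>
```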

Secondly, there is a limit to how many you can use effectively, and that limit is 6 connections. The reasons for this go back into the aeons of ancient internet history (read: the 90s), when browsers and internet routers were most efficient with 6 open connections at any one time. With modern routers and browsers this isn’t true, but these limits aren’t likely to be changed any time soon due to this enormous internet legacy.

Finally, there is an extra attribute that you can add to this <link> tag called crossorigin, which changes how preconnect makes its connections. As Ilya Grigorik explains in his post:

The font-face specification requires that fonts are loaded in “anonymous mode”, which is why we must provide the crossorigin attribute on the preconnect hint: the browser maintains a separate pool of sockets for this mode.

What that means is that some resources, such as fonts, ES6 modules or XHR, need to be fetched in a “non-credentialed” mode, or crossorigin="anonymous". Otherwise, these resources won’t load and you’ll see a cancelled request in the network panel. The “anonymous” value is the default if just crossorigin is provided, so if you like shorthand, you don’t need to add the ="anonymous" part to your code.
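For example, if the third-party host serves web fonts, the preconnect hint needs the anonymous form too (the font host here is purely illustrative):

```html
<!-- Fonts are fetched in anonymous (non-credentialed) mode, so the preconnect must match;
     "crossorigin" on its own implies crossorigin="anonymous" -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
```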

That’s it. Preconnect is a really useful hint that can save milliseconds on those third-party requests. Give it a go.

For reference, my conversation started on Twitter with this tweet and ended with this tweet from Yoav Weiss.

HSTS – a no-nonsense guide

I’ve been playing with HTTP Strict Transport Security (HSTS – I’m late to the party as usual) and there were some misconceptions I had going in that threw me a bit. So, here’s a no-nonsense guide to HSTS.

The HSTS Header is pretty simple to implement

I thought that this would be the hard bit, but actually putting the header in is very simple. As it’s domain-specific, you just need to set it at the web server or load balancer level. In Apache, it’s pretty simple:

Header always set Strict-Transport-Security "max-age=10886400"

You can also upgrade all subdomains using this header

A small addition to the header auto-upgrades all subdomains to HTTPS, making it really simple to upgrade long-outdated content deep within databases or on static content domains without doing large-scale migrations.

Header always set Strict-Transport-Security "max-age=10886400; includeSubDomains"

Having a short max-age is good when you’re starting out with subdomains

Having a short max-age is bad in the long run

If you have a max-age shorter than 18 weeks (10886400 seconds), you are ineligible for the preload list.

Wait, what?

There’s a preload list – browsers know about HSTS-supported sites

It turns out that browsers ship with a “preload” list of websites that support HSTS, and will therefore always send the user to the HTTPS version of those websites, no matter what link they followed to get there.

So, how does it work?

Well, you go to https://hstspreload.appspot.com and submit your website to the list. Chrome, Firefox, Opera, IE 11 (assuming you got a patch after June 2015), Edge and Safari pick the list up and will always use HTTPS for your site, which takes away a redirect for you. There are a few other requirements to meet – have a valid cert (check), include subdomains, have a max-age of at least 18 weeks, add a preload directive, and follow some redirection rules.

Header always set Strict-Transport-Security "max-age=10886400; includeSubDomains; preload"

Here are the standard redirection scenarios for a non-HSTS site that uses a www subdomain (like most sites):

  1. User enters https://www.example.com – no HSTS. There are 0 redirects in this scenario as the user has gone straight to the secure www domain.
  2. User enters https://example.com – no HSTS. There is 1 redirect as the web server adds the www subdomain.
  3. User enters example.com – no HSTS. There is 1 redirect here as the web server redirects you to https://www.example.com in 1 hop, adding both HTTPS and the www subdomain.
How to Redirect HTTP to HTTPS as described by Ilya Grigorik at Google I/O in 2014

This is best practice for standard HTTPS migrations, as set out in Ilya Grigorik’s HTTPS Everywhere talk, which shows that scenario 3 should only have 1 redirect; otherwise you get a performance penalty.

HSTS goes against this redirection policy… for good reason

To be included on the preload list you must first redirect to HTTPS, then to the www subdomain:

`http://yell.com` (HTTP) should immediately redirect to `https://yell.com` (HTTPS) before adding the www subdomain. Right now, the first redirect is to `https://www.yell.com/`.

This felt incredibly alien to me, so I started asking some questions on Twitter, and Ilya pointed me in Lucas Garron’s direction.

Following that link, I got a full explanation:

This order makes sure that the client receives a dynamic HSTS header from example.com, not just www.example.com

http -> https -> https://www is good enough to protect sites for the common use case (visiting links to the parent domain or typing them into the URL bar), and it is easy to understand and implement consistently. It’s also simple for us and other folks to verify when scanning a site for HSTS.

This does impact the first page load, but will not affect subsequent visits.
And once a site is actually preloaded, there will still be exactly one redirect for users.

If I understand correctly, using HTTP/2 you can also reuse the https://example.com connection for https://www.example.com (if both domains are on the same cert, which is usually the case).

Given the growth of the preload list, I think it’s reasonable to expect sites to use strong HSTS practices if they want to take up space in the Chrome binary. This requirement is the safe choice for most sites.

Let me try to visualise that in the scenarios:

  1. First visit, user types example.com into their browser. They get a 301 redirect to https://example.com and receive the HSTS header. They are then 301 redirected to https://www.example.com. 2 redirects
  2. Second visit, the browser knows you’re on HSTS and automatically upgrades you to HTTPS before the first redirect, so typing example.com into the browser performs 1 redirect from https://example.com to https://www.example.com.
  3. If you’re in the preload list, the second-visit scenario happens on the first visit. 1 redirect.

So, that makes sense to me. In order to set the HSTS upgrade header for all subdomains, the browser needs to hit the naked domain, not just the www subdomain. This appears to be a new requirement for being added to the preload list, as the GitHub issue was raised on May 19th this year, and Lucas has said that this new rule will not be applied to websites that are already on the list (like Google, Twitter etc).
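In Apache terms, that ordering might look roughly like this sketch (assuming mod_rewrite and mod_headers are enabled, that example.com and www.example.com share a certificate, and with the SSL configuration omitted):

```apache
# Step 1: upgrade HTTP to HTTPS without changing the host,
# so http://example.com lands on https://example.com first
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    RewriteEngine On
    RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [L,R=301]
</VirtualHost>

# Step 2: the naked HTTPS domain sets the HSTS header, then adds the www subdomain
<VirtualHost *:443>
    ServerName example.com
    Header always set Strict-Transport-Security "max-age=10886400; includeSubDomains; preload"
    RewriteEngine On
    RewriteRule ^(.*)$ https://www.example.com$1 [L,R=301]
</VirtualHost>

# Step 3: the www domain serves the site with the same header
<VirtualHost *:443>
    ServerName www.example.com
    Header always set Strict-Transport-Security "max-age=10886400; includeSubDomains; preload"
    # ... DocumentRoot, SSL directives, etc.
</VirtualHost>
```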

For me, this takes away much of the usefulness of HSTS, which is meant to save redirects to HTTPS by auto-upgrading connections. If I have to add another redirect in to get the header set on all subdomains, I’m not sure if it’s really worth it.

So, I asked another question:

And this is the response I got from Lucas:

So it helps when people type in the URL, sending them to HTTPS first, which takes out the potential for any insecure traffic being sent. Thinking of the rest of the links on the internet, the vast majority of yell.com links will include the www subdomain, so HSTS and the preload list will take out that redirect, leaving zero redirects. That’s a really good win, as Lucas confirmed.

Summary – HSTS will likely change how you perform redirects

So, whilst this all feels very strange to me, and goes against the HTTPS Everywhere principles, it will eventually make things better in the long run. Getting subdomains for free is a great boost, though the preload list feels like a very exclusive club that you have to know about in order to make the best of HSTS. It’s also quite difficult to get off the list, should you ever decide that HTTPS is not for you, as you’ll have the HSTS header for 18 weeks and there is no guarantee that the preload list will be updated regularly. It’s an experiment, but one that changes how you need to implement HSTS.

So, that’s my guide. Comments, queries, things I’ve gotten wrong – leave a comment below or find me on Twitter: @steveworkman.

AMP – Long-term Harmful

You may have heard about a project that Google has just announced called AMP (Accelerated Mobile Pages), which aims to speed up the mobile web.

Publishers around the world use the mobile web to reach these readers, but the experience can often leave a lot to be desired. Every time a webpage takes too long to load, they lose a reader—and the opportunity to earn revenue through advertising or subscriptions. That’s because advertisers on these websites have a hard time getting consumers to pay attention to their ads when the pages load so slowly that people abandon them entirely.

I completely understand the need for this project – the web is slow, and advertising is such a key revenue source for publishers that these adverts are ruining the experience. Facebook’s Instant Articles project is designed to do much the same thing. AMP is not a new idea.

Technically, AMP builds on open standards and allows you to simply replace some parts of your build and some components of your system with AMP versions. What happens to these pages is the secret sauce – Google knows about them, and if they pass the AMP validation tests it caches versions of those pages on its own servers.

This isn’t too scary – Google has committed to allowing the major advertising networks and pixel tracking so that businesses can maintain their precious statistics. AMP, in general, is a good idea.

Except for one thing.

If you’re not on it, you’re screwed.

Have a look at these two search pages. Both are searches for “Obama” on a Nexus 5 (emulated). The one on the right is the current search page; the one on the left is with AMP articles.

AMP search page comparison

Notice where the first organic listing is on each page. Currently, it’s just below the first screen, but with AMP it’s over two screens of content away! That’s absolutely massive, and anyone not in the top 3 results may as well not be there.

Basically, if you’re not an AMP publisher, expect your traffic to drop by double-digit percentages.

Short-term benefits, Long-term harm

Short-term, AMP will make publishers sit up and take notice of web performance, make websites faster across the board, and bring more customers to Google. For users, this is a win/win situation.

I’ve come to the conclusion that this project is harmful to the web in the long term. If a publisher isn’t getting traffic because it’s all going to AMP publishers, then the amount of content on Google drops, quality drops, and what remains is concentrated in a few large content producers. You end up with less diversity on the web, because small producers can’t make money without using AMP. Even at the end, when only a few publishers have survived, they will be encouraged to reduce adverts, or be limited to the subset of advertisers that Google’s DoubleClick deems performant enough to be in the AMP-friendly set. That may block the big-money adverts which keep these sites going.

It also creates dependencies on Google: whilst Google is unlikely to go down any time soon, your pages are cached on its servers, only the most basic tracking mechanisms are allowed, and only the browsers that Google supports are catered for. It creates a two-tier system that the web is firmly against.

This project seems to polyfill a developer’s or organisation’s lack of time to dedicate to performance. Many very clever people are working on this education problem (hello to Path to Perf, Designing for Performance and PageSpeed), though it will take time. Short-term fixes like AMP and Facebook’s Instant Articles are encouraging developers to take shortcuts to performance, handing their problems off to Google. This does nothing for the underlying issues, but with Google giving AMP such prominence in its search results, how can publishers resist?

Or is it a temporary solution?

I hope that, from all of this, developers will sit up and take notice of web performance, improve their sites, and provide a better experience for all. If we don’t, Google will keep solutions like this around, and the only content that gets any interaction will be the content that Google approves of – and no one wants that.