I’ve been playing with HTTP Strict Transport Security (HSTS; I’m late to the party as usual) and there were some misconceptions I had going in that threw me a bit. So, here’s a no-nonsense guide to HSTS.
I thought this would be the hard bit, but actually putting the header in is very simple. As it’s domain-specific, you just set it at the web server or load balancer level. In Apache, it looks like this:
Header always set Strict-Transport-Security "max-age=10886400;"
A small addition to the header auto-upgrades all subdomains to HTTPS as well, making it really simple to secure long-forgotten http:// links deep within databases or on static content domains without doing large-scale migrations:

Header always set Strict-Transport-Security "max-age=10886400; includeSubDomains"
@steveworkman unless your env is so simple you can be *absolutely* sure there aren’t HTTP only subdomains you forgot about.
— Eric Lawrence (@ericlaw) July 6, 2016
If you have a max-age shorter than 18 weeks (10,886,400 seconds – the value used above) then you are ineligible for the preload list.
It turns out that all of the browsers ship with a “preload” list of websites that support HSTS, and will therefore always point the user to the HTTPS version of those websites, regardless of the link they followed to get there.
So, how does it work?
Well, you go to https://hstspreload.appspot.com and submit your website to the list. Chrome, Firefox, Opera, IE 11 (assuming you got a patch after June 2015), Edge and Safari pick it up and will always use HTTPS for your site, which takes away a redirect for you. There are five other requirements to meet – have a valid cert (check), include subdomains, have a max-age of at least 18 weeks, add a preload directive, and follow some redirection rules.
Header always set Strict-Transport-Security "max-age=10886400; includeSubDomains; preload"
Here are the standard redirection scenarios for a non-HSTS site that uses a www subdomain (like most sites):
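Using example.com as a stand-in, they go something like this:

1. http://www.example.com -> https://www.example.com (one redirect)
2. https://example.com -> https://www.example.com (one redirect)
3. http://example.com -> https://www.example.com (one redirect, straight to the canonical origin)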
This is the best practice for standard HTTPS migrations as set out in HTTPS Everywhere, where Ilya Grigorik shows that scenario 3 should involve only one redirect; any more and you pay a performance penalty.
To be included on the preload list you must first redirect to HTTPS, then to the www subdomain:
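http://example.com -> https://example.com -> https://www.example.com

As a sketch, that could look like this in Apache (example.com is illustrative; SSL directives omitted, and mod_rewrite and mod_headers assumed):

# Port 80: the first hop upgrades to HTTPS on the *same* host
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    RewriteEngine On
    RewriteRule ^/?(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
</VirtualHost>

# Port 443, naked domain: serve the HSTS header here, then hop to www
<VirtualHost *:443>
    ServerName example.com
    Header always set Strict-Transport-Security "max-age=10886400; includeSubDomains; preload"
    RewriteEngine On
    RewriteRule ^/?(.*)$ https://www.example.com/$1 [R=301,L]
</VirtualHost>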
This felt incredibly alien to me, so I started asking some questions on Twitter, and Ilya pointed me in Lucas Garron’s direction.
— Lucas Garron (@lgarron) July 4, 2016
Following that link I get a full explanation:
This order makes sure that the client receives a dynamic HSTS header from example.com, not just www.example.com.

http -> https -> https://www is good enough to protect sites for the common use case (visiting links to the parent domain or typing them into the URL bar), and it is easy to understand and implement consistently. It’s also simple for us and other folks to verify when scanning a site for HSTS.
This does impact the first page load, but will not affect subsequent visits.
And once a site is actually preloaded, there will still be exactly one redirect for users.
If I understand correctly, using HTTP/2 you can also reuse the connection to https://example.com for https://www.example.com (if both domains are on the same cert, which is usually the case).
Given the growth of the preload list, I think it’s reasonable to expect sites to use strong HSTS practices if they want to take up space in the Chrome binary. This requirement is the safe choice for most sites.
Let me try to visualise that in the scenarios:
— Eric Lawrence (@ericlaw) July 4, 2016
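Roughly, the before and after look like this:

First-ever visit (no HSTS knowledge): http://example.com -> https://example.com -> https://www.example.com (two redirects)
Preloaded, or header already cached: the http -> https hop happens inside the browser, leaving just https://example.com -> https://www.example.com (one real redirect)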
So, that makes sense to me. In order to set the HSTS upgrade header for all subdomains, the request needs to hit the naked domain, not the www subdomain. This appears to be a new requirement for the preload list, as the GitHub issue was only raised on May 19th this year, and Lucas has said that the new rule will not be applied to websites that are already on the list (like Google, Twitter, etc.).
For me, this takes away much of the usefulness of HSTS, which is meant to save redirects to HTTPS by auto-upgrading connections. If I have to add in another redirect to get the header set on all subdomains, I’m not sure it’s really worth it.
So, I asked another question:
@lgarron but we have 20 years worth of back links around the internet. Are there other benefits of the HSTS header + preload list?
— Steve Workman (@steveworkman) July 4, 2016
And this is the response I got from Lucas:
@steveworkman The banner use case is for people typing domains into the URL bar.
— Lucas Garron (@lgarron) July 4, 2016
So it helps when people type in the URL, sending them to HTTPS first, which takes out the potential for any insecure traffic being sent. As for the rest of the links on the internet, the vast majority of yell.com links include the www subdomain, so HSTS and the preload list will take out that redirect, leaving zero redirects. That’s a really good win, which Lucas confirmed.
@lgarron yes, and our sitemaps are correct with that link. What will happen to http versions of those (most common use case)?
— Steve Workman (@steveworkman) July 4, 2016
@steveworkman Yeah, 1 redirect the first time if not preloaded, and “0” (HTTP->HTTPS on the client side) after that if the browser has HSTS.
— Lucas Garron (@lgarron) July 4, 2016
So, whilst this all feels very strange to me, and goes against the HTTPS Everywhere principles, it will make things better in the long run. Getting subdomains for free is a great boost, though the preload list feels like a very exclusive club that you have to know about in order to make the best of HSTS. It’s also quite difficult to get off the list, should you ever decide that HTTPS is not for you, as you’ll have the HSTS header active for 18 weeks, and there is no guarantee that the preload list will be updated regularly. It’s an experiment, but one that changes how you need to implement HSTS.
So, that’s my guide. Comments, queries, things I’ve gotten wrong – leave a comment below or ping me on Twitter: @steveworkman
Over the last few months I’ve been putting together my talk for the year, based on a blog post titled “HTTPS is Hard”. You can read the full article on the Yell blog, where it is published, and there’s also an abridged version on Medium. It’s been a very long time coming, and has changed over the time I’ve been writing it, so I thought I’d get down a few reflections on the article.
This is, firstly, the longest article I’ve written (at over four thousand words, it’s a quarter of the length of my dissertation) and the one that’s taken the longest to be published. I had a 95%-complete draft ready back in September, when I was supposed to be working on my Velocity talk for October but found myself much more interested in this article. Dan Appelquist has repeatedly asked me to “put it in a blog post, the TAG would be very interested” – so finally, it’s here.
The truth is that I’m constantly tweaking the post. Even the day before it goes live, I’m still making modifications as final comments and notes come in from friends I’ve been working with on this. It also seems like every week the technology moves on and the landscape shifts: Adobe offers certs for free, Dreamhost gives away LetsEncrypt HTTPS certs through a one-click button, Netscaler supports HTTP/2, the Washington Post writes an article, Google updates advice and documentation, and on and on and on… All through this evolution, new problems emerge, the situation morphs, and I come up with new ways to fix things; as I do, they get put into the blog post. Hence, it’s almost a 20-minute read.
A special thank you to Andy Davies, Pete Gasston, Patrick Hamann and the good people at Yell; Jurga, Claire and the UI team (Andrzej, Lee and Stevie) for their feedback throughout this whole process. I’m sure they skipped to the new bits each time.
Every day something silly happens. Today’s came from the generally awesome, tech-friendly company Mailchimp. They originally claimed that “Hosted forms are secure on our end, so we don’t need to offer HTTPS. We get that some of our users would like this, though” (the tweet has since been deleted). Thankfully, they owned up and showed CalEvans how to do secure forms.
@CalEvans We apologize for the inaccurate info from earlier, Cal. It is actually possible to use HTTPS with our hosted forms by grabbing the
— MailChimp (@MailChimp) March 29, 2016
Still, it’s this kind of naivety that puts everyone’s security at risk. A big thumbs up to Mailchimp for rectifying the situation.
Yes, though nowhere near as hard. We’d still have gone through the whole process, but it wouldn’t have taken as long (the Adobe and Netscaler bits were quite time-consuming), and the aftermath wouldn’t have dragged on anywhere near as long if I’d realised in advance about the referrer problem.
Honestly, I’m not sure I would have pushed so hard for it. We don’t have any solid evidence to say it’s affecting any business metrics, but I personally wouldn’t like the look of traffic apparently dropping off a cliff, and it wouldn’t make me sign up as an advertiser. Is this why Yelp, TripAdvisor and others haven’t migrated over? Who can say…
This is why the education piece of HTTPS is so important: developers can easily miss little details like referrers, see only the goals of ranking and HTTP/2, and just go for it.
The point of the whole article is that there just isn’t the huge incentive to move to HTTPS. Having a padlock doesn’t make a difference to users unless they sign in or buy something. There needs to be something far more aggressive to convince your average developer to move their web site to HTTPS. I am fully in support of Chrome and Firefox’s efforts to mark HTTP as insecure to the user. The only comments I get around the office about HTTPS happen when a Chrome extension causes a red line to go through the protocol in the address bar – setting a negative connotation around HTTP seems to be the only thing that gets people interested.
I am really pleased to see the Google Transparency Report include a section on HTTPS (blog post). An organisation with the might and engineering power of Google is still working towards HTTPS, overcoming technical boundaries that make HTTPS really quite hard. It’s nice to know that it’s not just you fighting against the technology.
The “Privileged Contexts” spec (AKA “Powerful Features”) and how to manage them is still a working draft, and there’s a lot of debate to be had before it goes near a browser. I like how the proposals work and how they’ve been implemented for Service Worker. I also appreciate why they’re necessary, especially for Service Worker (the whole thread of “why” can be read on GitHub). I hope that Service Worker has an effect on HTTPS uptake, though this will only truly happen if Safari adopts the technology.
It looks like Chrome is going to turn off Geolocation for insecure origins very soon, as that part of the powerful features task list was marked as “fixed” on March 3rd. Give it a few months and geolocation will be the proving ground for the whole concept of powerful features – something I’ll be watching very closely.
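If you still serve pages over HTTP and call the API, it’s worth handling the failure path now. A minimal sketch, assuming nothing beyond the standard geolocation API (the fallback behaviour is the part that matters):

function whereAmI(onSuccess) {
    if (!navigator.geolocation) {
        return; // ancient browser with no geolocation support at all
    }
    navigator.geolocation.getCurrentPosition(
        function (position) {
            onSuccess(position.coords.latitude, position.coords.longitude);
        },
        function (error) {
            // On an insecure origin, Chrome will start failing the call
            // (PERMISSION_DENIED) instead of prompting the user, so treat
            // it like a refusal and degrade gracefully, e.g. ask for a postcode
            console.log('Geolocation unavailable: ' + error.message);
        }
    );
}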
If you have to support IE8, I do not recommend that you ditch jQuery. IE8 lacks too many of the fundamentals to be considered “modern” and to work without significant polyfilling in the way that modern WebKit browsers do. The Guardian uses a simple script to detect whether your browser is modern:
isModernBrowser: (
    'querySelector' in document &&
    'addEventListener' in window &&
    'localStorage' in window &&
    'sessionStorage' in window &&
    'bind' in Function &&
    (
        ('XMLHttpRequest' in window && 'withCredentials' in new XMLHttpRequest()) ||
        'XDomainRequest' in window
    )
)
This checks for the following: querySelector, addEventListener, localStorage, sessionStorage, Function.prototype.bind, and cross-origin request support (withCredentials on XMLHttpRequest, with XDomainRequest as the IE fallback).
If you’re modern, The Guardian will give you everything. If not, you get a gracefully degraded experience. IE8 will fail all but the local storage part of that test, and polyfilling that much code will negate all the benefits of removing jQuery.
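For context, a flag like that is usually used to gate the enhanced bundle, “cutting the mustard” style. A quick sketch (the bundle path is made up):

if (isModernBrowser) {
    // Modern browser: load the full, enhanced JavaScript experience
    var enhanced = document.createElement('script');
    enhanced.async = true;
    enhanced.src = '/assets/enhanced.js'; // hypothetical bundle
    document.body.appendChild(enhanced);
}
// Everyone else keeps the server-rendered, gracefully degraded page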
With some browsers, it is possible to polyfill small parts of this functionality and still remove jQuery. For example, Yell supports iOS 4 and above, which doesn’t have Function.prototype.bind; Android 2.3 doesn’t support Element.classList or SVG; and IE10 doesn’t support Element.dataset. We chose to polyfill these functions for the older browsers, but not SVG or Element.dataset, as those can be resolved by other techniques or just by coding differently.
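To show the scale of that choice, a Function.prototype.bind polyfill is only a few lines. Here’s a minimal sketch along the lines of the MDN one (it ignores the edge cases around the new operator that the full shim handles):

if (!Function.prototype.bind) {
    Function.prototype.bind = function (thisArg) {
        var fn = this; // the function being bound
        var boundArgs = Array.prototype.slice.call(arguments, 1);
        return function () {
            // Combine pre-bound arguments with call-time arguments
            var args = boundArgs.concat(Array.prototype.slice.call(arguments));
            return fn.apply(thisArg, args);
        };
    };
}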
In the end, it’s your choice which browsers you support, but you have to be careful. There is a complete list of ECMAScript 5 functions and the browsers that support them on Kangax’s GitHub page (as well as pages for ES6 and ES7 if you’re interested), which is invaluable in helping you make this decision.
There will be plenty of times when you’ll be able to find alternatives to plugins that work without jQuery. The simple way to find them is to use Google, GitHub, StackOverflow and Twitter to search for alternatives. I wish there were a repository that told you the good alternatives for common jQuery plugins – but there isn’t one (note to self: do this). This can be laborious, involving a lot of trial and error to find alternatives that match the feature set you’re looking for.
I went through this same process with my team for yell.com’s mobile site. Luckily, there was only one plugin that we needed to keep: Photoswipe, a cool plugin that creates a touch-friendly lightbox from a list of images. We looked high and low for vanilla JS alternatives that 1. were mobile-friendly, and 2. worked on Windows Phone 8, Firefox OS and Firefox for mobile. That last part was the hard bit, and I’m sad to say that we didn’t find an answer. So, we had to build it ourselves – you can see it on any business profile page on your smartphone, like this one (though you’ll have to change your browser user agent to a mobile phone to see it).
TL;DR: you’ll need to find replacements for all of your plugins, and where you can’t, your options are to write the functionality yourself or drop it.
Once you’ve found solutions for third-party code, you need to focus on your own custom code. To find out what could break, you’ll need to look over your JS and see which jQuery methods and properties are in use. You could do this manually, but I don’t fancy looking over 10,000 lines of code by hand. I asked “how can I do this?” on Stack Overflow and got a great answer from Elias Dorneles:
Using the -o option for grasp and adding a sed filter to get only the function names:

grasp '$(__).__' -e -o *.js | sed 's/.*[.]//' | uniq -c. This fails for some code for the same reasons that grep does, but maybe it can help you get an estimate.
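One tweak worth making: uniq -c only collapses adjacent duplicates, so sort the names first (and then again by count) to get accurate totals:

grasp '$(__).__' -e -o *.js | sed 's/.*[.]//' | sort | uniq -c | sort -rn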
You can run this on one file, or an entire directory. Here’s what Bootstrap.js looks like (after I’ve tabulated it in Excel):
This is the list of functions that you will have to find alternatives to in order for your JS to function correctly, along with an approximate count of the number of times a function is used. I say approximate, because in my experience, the grasp script doesn’t get everything, especially where chained functions are concerned. The good news with this set is that there aren’t many complex functions in use – the vast majority can be replaced with a line or two.
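To give a flavour of those one-or-two-line swaps, here’s a sketch with the jQuery versions in the comments (the selectors and endpoint are made up):

// $('.listing .title') becomes:
var titles = document.querySelectorAll('.listing .title');

// $(el).addClass('open') and $(el).attr('href') become:
var el = document.querySelector('.listing'); // hypothetical element, assumed to exist
el.classList.add('open');
var href = el.getAttribute('href');

// $(el).on('click', handler) becomes:
el.addEventListener('click', function (event) {
    event.preventDefault(); // whatever the handler needs to do
}, false);

// $.get('/search.json', callback) becomes:
var xhr = new XMLHttpRequest();
xhr.open('GET', '/search.json'); // hypothetical endpoint
xhr.onload = function () { console.log(xhr.responseText); };
xhr.send();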
The results of this query can bring back all sorts of jQuery functions: things like .live, .die, .delegate, .browser, .size, .toggle, and other functions deprecated over the years. These are the warning signs that the rest of your code may not be ready for a move away from jQuery, and if you see them, you should seriously consider why you’re doing this. I listed my reasons in the introduction post, and there are more besides, like a minimal memory footprint whilst adding Windows Phone 8 and Firefox OS support. You may end up spending a lot more effort on your code than you originally intended, just to bring it up to par with the current state of web standards. Clearly, this isn’t a bad thing, but your boss may wonder why it’s taking so much time. For a great article on technical debt, try Paying Down Your Technical Debt on Jeff Atwood’s Coding Horror site.
That’s it for this part. In the next one, I’ll cover replacing the functions identified above with standards-compliant code to create your own min.js.