Over the last few months I’ve been putting together my talk for the year, based on a blog post that is titled “HTTPS is Hard”. You can read the full article on the Yell blog on which it is published. There’s also an abridged version on Medium. It’s been a very long time coming, and has changed over the time I’ve been writing it, so I thought I’d get down a few reflections on the article.
It’s really long, and took a long time to write
This is, firstly, the longest article I’ve written (at over four thousand words, it’s a quarter of the length of my dissertation), and it’s taken the longest time to be published. I had a 95%-complete draft ready back in September, when I was supposed to be working on my Velocity talk for October but found myself much more interested in this article. Dan Applequist has repeatedly asked me to “put it in a blog post, the TAG would be very interested” – so finally, it’s here.
The truth is that I’m constantly tweaking the post. Even the day before it goes live, I’m still making modifications as final comments and notes come in from friends that I’ve been working with on this. Also, it seems like every week the technology moves on and the landscape shifts: Adobe offers certs for free, Dreamhost gives away LetsEncrypt HTTPS certs through a one-click button, Netscaler supports HTTP/2, the Washington Post writes an article, Google updates advice and documentation, and on and on and on… All through this evolution, new problems emerge, the situation morphs, I come up with new ways to fix things, and as I do, they get put into the blog post. Hence, it’s almost a 20 minute read.
A special thank you to Andy Davies, Pete Gasston, Patrick Hamann and the good people at Yell; Jurga, Claire and the UI team (Andrzej, Lee and Stevie) for their feedback throughout this whole process. I’m sure they skipped to the new bits each time.
Is HTTPS really necessary, for everyone?
Every day something silly happens. Today’s was from generally-awesome tech-friendly company Mailchimp. They originally claimed that “Hosted forms are secure on our end, so we don’t need to offer HTTPS. We get that some of our users would like this, though” (tweet has since been deleted). Thankfully, they owned up and showed CalEvans how to do secure forms.
Still, it’s this kind of naivety that puts everyone’s security at risk. A big thumbs up to Mailchimp for rectifying the situation.
If I were to have started today, would HTTPS still be hard?
Yes, though nowhere near as hard. We’d still have gone through the whole process, but it wouldn’t have taken as long (the Adobe and Netscaler bits were quite time-consuming), and the aftermath wouldn’t have dragged on anywhere near as long if I’d realised in advance about the referrer problem.
If you’d known about the referrer issue, would you have made the switch to HTTPS?
Honestly, I’m not sure I would have pushed so hard for it. We don’t have any solid evidence to say it’s affecting any business metrics, but I personally wouldn’t like the impression that traffic just dropped off a cliff, and it wouldn’t make me sign up as an advertiser. Is this why Yelp, TripAdvisor and others haven’t migrated over? Who can say…
This is why the education piece of HTTPS is so important: developers can easily miss little details like referrers, see only the goals of ranking and HTTP/2, and just go for it.
The point of the whole article is that there just isn’t the huge incentive to move to HTTPS. Having a padlock doesn’t make a difference to users unless they sign in or buy something. There needs to be something far more aggressive to convince your average developer to move their web site to HTTPS. I am fully in support of Chrome and Firefox’s efforts to mark HTTP as insecure to the user. The only comments I get around the office about HTTPS happen when a Chrome extension causes a red line to go through the protocol in the address bar – setting a negative connotation around HTTP seems to be the only thing that gets people interested.
What’s changed since you wrote the article?
I am really pleased to see the Google Transparency Report include a section on HTTPS (blog post). An organisation with the might and engineering power of Google is still working towards HTTPS, overcoming technical boundaries that make HTTPS really quite hard. It’s nice to know that it’s not just you fighting against the technology.
What about “privileged apps” – you don’t talk about that
The “Privileged Contexts” spec AKA “Powerful Features” and how to manage them is a working draft and there’s a lot of debate still to be had before they go near a browser. I like how the proposals work and how they’ve been implemented for Service Worker. I also appreciate why they’re necessary, especially for Service Worker (the whole thread of “why” can be read on github). I hope that Service Worker has an effect on HTTPS uptake, though this will only truly happen should Safari adopt the technology.
It looks like Chrome is going to turn off Geolocation from insecure origins very soon, as that part of the powerful features task list has been marked as “fixed” as of March 3rd. Give it a few months and geolocation will be the proving ground for the whole concept of powerful features – something that I’ll be watching very closely.
What browsers do you need to support?
If you have to support IE8, I do not recommend ditching jQuery. IE8 lacks many of the fundamentals needed to be considered “modern”, and can’t work without significant polyfilling in the way that modern WebKit browsers can. The Guardian uses a simple script to detect whether your browser is modern:
isModernBrowser: (
    'querySelector' in document &&
    'addEventListener' in window &&
    'localStorage' in window &&
    'sessionStorage' in window &&
    'bind' in Function &&
    (('XMLHttpRequest' in window && 'withCredentials' in new XMLHttpRequest()) ||
     'XDomainRequest' in window)
)
This checks for the following features:
- Query selectors and standard event listeners
- Local Storage and Session Storage
- Function.prototype.bind
- Standards-based AJAX
- CORS (Cross-Origin Resource Sharing)
If you’re modern, The Guardian will give you everything. If not, you get a gracefully degraded experience. IE8 will fail the addEventListener and Function.bind parts of that test, and polyfilling that much code will negate all the benefits of removing jQuery.
With some browsers, it is possible to polyfill small parts of this functionality and still remove jQuery. For example, Yell supports iOS 4 and above, which doesn’t have Function.prototype.bind; Android 2.3 doesn’t support Element.classList or SVG; and IE10 doesn’t support Element.dataset. We chose to polyfill these functions for the older browsers, but not SVG or Element.dataset, as those can be resolved by other techniques or simply by coding differently.
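To give an idea of what this polyfilling looks like, here’s a minimal sketch of a Function.prototype.bind shim, in the spirit of the well-known MDN-style polyfill (simplified: it handles `this` and partial application, but not the `new`-operator edge case):

```javascript
// Minimal Function.prototype.bind shim for older browsers such as
// iOS 4 Safari. Only defined when the native method is missing.
if (!Function.prototype.bind) {
  Function.prototype.bind = function (context) {
    var fn = this;
    var boundArgs = Array.prototype.slice.call(arguments, 1);
    return function () {
      // Combine the pre-filled arguments with the call-time ones
      var args = boundArgs.concat(Array.prototype.slice.call(arguments));
      return fn.apply(context, args);
    };
  };
}

// Usage: fix `this` and pre-fill the first argument
function greet(greeting, name) { return greeting + ', ' + name; }
var hello = greet.bind(null, 'Hello');
hello('world'); // 'Hello, world'
```

A handful of tiny shims like this is a much smaller price than shipping the whole of jQuery to every browser.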
In the end, it’s your choice what browsers you support, but you have to be careful. There is a complete list of ECMAScript 5 features and the browsers that support them on Kangax’s GitHub page (as well as pages for ES6 and ES7 if you’re interested), which is invaluable in helping you make this decision.
Plugins & Third-party code
There will be plenty of times when you’ll be able to find alternatives to plugins that work without jQuery. The simplest way to find them is to search Google, GitHub, Stack Overflow and Twitter. I wish there were a repository that told you the good alternatives for common jQuery plugins – but there isn’t one (note to self: do this). So this can be laborious, and involve a lot of trial and error to find alternatives that match the feature set you’re looking for.
I went through this same process with my team for yell.com’s mobile site. Luckily, there was only one plugin that we needed to keep: Photoswipe, a cool plugin that creates a touch-friendly lightbox from a list of images. We looked high and low for vanilla JS alternatives that 1. were mobile-friendly, and 2. worked on Windows Phone 8, Firefox OS and Firefox mobile. That last part was the hard bit, and I’m sad to say that we didn’t find an answer. So we had to build it ourselves – you can see it on any business profile page on your smartphone, like this one (though you’ll have to change your browser user agent to a mobile phone to see it).
TL;DR: you’ll need to find replacements for all your plugins; if you can’t, your options are to write the functionality yourself or drop it.
Once you’ve found solutions for third party code, you need to focus on your own custom code. To find out what could break, you’ll need to look over your JS and see what jQuery methods and properties are in use. You can do this manually, but I don’t fancy looking over 10,000 lines of code by hand. I asked “how can I do this” on Stack Overflow and got a great answer from Elias Dorneles:
Elias suggests using the -o option for grasp and adding a sed filter to get only the function names:

grasp '$(__).__' -e -o *.js | sed 's/.*[.]//' | uniq -c

This fails for some code for the same reasons that grep does, but it can help you get an estimate.
You can run this on one file, or an entire directory. Here’s what Bootstrap.js looks like (after I’ve tabulated it in Excel):
This is the list of functions that you will have to find alternatives to in order for your JS to function correctly, along with an approximate count of the number of times a function is used. I say approximate, because in my experience, the grasp script doesn’t get everything, especially where chained functions are concerned. The good news with this set is that there aren’t many complex functions in use – the vast majority can be replaced with a line or two.
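If you don’t fancy installing grasp, the same rough audit is easy to sketch in plain Node. This is my own approximation, not the Stack Overflow answer, and like grasp it will under-count chained calls such as `$('.x').show().hide()`:

```javascript
// Rough count of jQuery methods used in a source string. The regex
// only catches the first method after a $(...) call, so treat the
// numbers as an estimate, just like the grasp/sed pipeline.
function countJQueryMethods(source) {
  var counts = {};
  var pattern = /\$\([^)]*\)\.(\w+)/g;
  var match;
  while ((match = pattern.exec(source)) !== null) {
    counts[match[1]] = (counts[match[1]] || 0) + 1;
  }
  return counts;
}

// Example (in practice you'd read your .js files in with fs.readFileSync):
countJQueryMethods("$('.btn').on('click', fn); $(this).addClass('active');");
// → { on: 1, addClass: 1 }
```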
The results of this query can bring back all sorts of jQuery functions, things like .live, .die, .delegate, .browser, .size, .toggle, and other functions deprecated over the years. These are warning signs that the rest of your code may not be ready for a move away from jQuery, and if you get these, you should seriously consider why you’re doing this. I listed my reasons in the introduction post and there are more besides, like a minimal memory footprint whilst adding Windows Phone 8 and Firefox OS support. You may end up spending a lot more effort on your code than you originally intended, just to bring it up to par with the current state of web standards. Clearly, this isn’t a bad thing, but your boss may be wondering why it’s taking so much time. For a great article on technical debt, try Paying Down Your Technical Debt from Jeff Atwood’s Coding Horror site.
Up next, replacing individual functions with standards-based code
That’s it for this part. In the next one, I’ll cover replacing the functions identified above with standards-compliant code to create your own min.js.
I was lucky enough to be invited to attend and speak at Edge Conf London 2014, an assembly of web development superheroes charged with discussing the future of web technology in front of a live audience. I’ve written up my main take-aways from the event.
Web Components Panel at EdgeConf 3
- Web Components are the custom elements that you’ve always wanted. If you’re after a <google-map> tag, web components can give it to you.
- The basic step is registering a new element with the DOM; after that, you can do anything
- Web components are imported with an HTML link element: <link rel="import" href="component.html">
- To make these components useful, you need to use the Shadow DOM – the DOM inside an element, which is already in use on the web: take a look in Chrome’s dev tools at an <input type="range"> element – the tickers are <button> elements inside the <input>
- There are no browsers that support this out of the box yet, so there are two polyfills that you can use: Polymer (Google) and X-Tags (Mozilla)
- The Server/Client rendering trade-off is the concern at the moment. Any JS downloading in a web component will block rendering unless specified as async. You can also compress and minify web components and their resources, which is a necessary step to get anywhere near good performance. We’re adding new tools, but the old techniques still apply.
- Responsive components, that have media queries related to their own size, aren’t possible because they could be influenced by the parent too, which will get the rendering engine into infinite loops.
- On semantics and accessibility: they are still important, but with ARIA roles not caring what element they are on, you can make anything appear like anything, so the argument that web components are bad for semantics is kinda moot.
- On SEO, the usual rules still apply, you’ve got to make your content accessible and not hide it, but the search bots will read web components
- On styling, using scoped styles (a level 4 spec) works very well, as these will override at scope. However, using an object-oriented CSS approach makes this easier. It is, however, generally harder to make all of your CSS into OOCSS, which is more of a team/rigour problem.
- In the end, you’re responsible for packaging and de-duplicating your resources. Web components will remove any duplicate files from the same origin, but it’s still very easy to import two versions of jQuery. You are responsible for that.
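To make the bullet points above concrete, here’s a minimal sketch of what a component and its import could look like under the current draft. The element and file names are made up, and `document.registerElement` is the draft API that the Polymer and X-Tags polyfills build on:

```html
<!-- component.html: defines a trivial custom element -->
<script>
  // Register the element with the DOM; the hyphen in the name is required
  var proto = Object.create(HTMLElement.prototype);
  proto.createdCallback = function () {
    this.textContent = 'Hello from a web component';
  };
  document.registerElement('hello-widget', { prototype: proto });
</script>

<!-- index.html: import the component, then use it like any other tag -->
<link rel="import" href="component.html">
<hello-widget></hello-widget>
```

Today you would run this through one of the polyfills; nothing ships natively yet, and the spec may still change.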
Bruce Lawson has another very good write-up on this session, and web components in general on his blog.
Developer Tooling Panel
- Firefox’s in-built developer tooling has come a long way from Firebug, with new features like deep scoping of function variables, great memory profiling and visual descriptions of CSS transforms
- Chrome’s dev tools will have a better call stack flame graph
- Brackets does great in-line CSS editing and rendering
- Remote Debug aims to solve the complex workflow issue, where a developer knows how to use Chrome’s dev tools, but not Firefox’s, and it will let you use Chrome’s tools with Firefox.
- JS Promises aren’t in the dev tools yet. We need to experiment with the technology and then make tools – we can’t make tools for something that doesn’t exist yet
- Use debugger; rather than console.log – if you’re logging to the console, you may as well use alert()
- It would be great if we could inspect headless browsers with dev tools so that we can see what went on with a test
- It was also noted that contributing to dev tools is harder than it could be. Remy Sharp suggested creating some kind of Greasemonkey scripting capability for dev tools
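The debugger point is worth illustrating – a trivial sketch (totalPrice is a made-up function):

```javascript
// Rather than scattering console.log calls, drop in a debugger
// statement: with dev tools open, execution pauses there with full
// access to the local scope. Without dev tools attached, it's a no-op.
function totalPrice(items) {
  var total = 0;
  for (var i = 0; i < items.length; i++) {
    debugger; // pauses here only when dev tools are attached
    total += items[i].price;
  }
  return total;
}

totalPrice([{ price: 2 }, { price: 3 }]); // 5
```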
Build Process panel
- Building software is not new. The Unix make command is 37 years old.
- We are trying to avoid a plugin apocalypse, with Grunt, Gulp, Brunch etc. re-inventing the wheel
- A big mention in this session of the Extensible Web Manifesto as a basis for how we should be developing these tools
- There are things that belong in our build process, and things that belong in our browser. With Sass, variables should be in the browser, but minification and compilation should not.
- As a community, we need to be responsible with what we put into the standards
- Use git, not github. Use the npm protocol, not npm. The basics are great, but we can’t rely on services like this
- More tools fuel innovation. A single task spec as a way to describe tasks between the task runners would be great, but this is on hold as they currently can’t agree.
- On Grunt/Gulp – use the tool you’re comfortable with, and make the most of it.
- We are still in the early days of build tools. Being locked in to a certain tool for 5 years probably won’t hurt, because at least you’re using a build tool!
Page Load Performance
- “Put it in the first 15KB for ultimate performance”. Load content first, then enhancements, then the leftovers, and you’ll get great performance
- Use sitespeed.io and webpagetest for your synthetic testing
- Looking at The Guardian – their branched loading model saves them 42% with their responsive site
- There will still be times when an adaptive site, with proper redirects, will give you better performance. I can vouch for this: Yell.com on the desktop is around 700KB, the homepage on mobile is around 60KB.
- HTTP2 will make spriting an anti-pattern, as it makes it easier to only download what you need. Remember, it’s only the network that needs the data in one file
- If you own your site, instrument it. Target StartRenderTime, and use Window.performance for better timing. Look to improve the Page OnLoad Event.
- Resource Priorities and timing APIs will arrive soon. You’re encouraged to use these in your Real User Monitoring (RUM) stats. Not many companies do this at the moment.
- Finding out if a user is experiencing page jank is a hard problem, as you’d have to hook into requestAnimationFrame
Pointers and Interactions
I was on this panel, so I’ve not got any notes! Thankfully, Google were there to video it for everyone
- For accessibility, complying to geo-specific regulations is important, but, complying with the law doesn’t make your website accessible
- Are WCAG guidelines outdated? No, their values are still good, but there are more complex use cases now than when they were written. For example, gaming accessibility is about making visual cues auditory
- Mechanical audits of a website don’t give you the full accessibility picture. They can check ARIA roles, colour contrast, click regions, alt text and the like
- Try the Chrome Accessibility Tools extension, and the SEE extension, to see your page though different eyes
- If you want to know what it feels like to have a muscular problem, try using your mouse with your non-writing hand
- Accessibility needs to be considered at the design stage – if done at the QA stage, you’ve missed the point
Sadly, I wasn’t at the event for this, but here’s the video for you all to enjoy
I loved the conference, I’ve not learnt so much in a day in years! Here are some other write-ups from around the web:
It’s 2014 and I’m feeling inspired to change my ways. In 2014, I want to go jQuery-free!
Before I get started, I want to clear the air and put in a big fat disclaimer about my opinions on jQuery. Here we go:
Lovely. Now that’s done, here’s why I want to do it. Firstly, as lots of people know, jQuery is quite a weighty library considering what it does. Coming in at 32KB for version 2.x and around 40KB for the IE-compatible 1.x branch (gzipped and minified), it’s a significant chunk of page weight before you’ve even started using it. There are alternatives that support the majority of its functions with the same API, such as Zepto, but even that comes in at around 15KB for the most recent version, and can grow larger. The worst thing for me is that I don’t use half of the library: all I really do is select elements, use event handlers and delegation, show and hide things, and change CSS classes. So, I want a library of utility functions that does only those things.
Word to the wise, this is not a new notion, and follows on very nicely from the work that Remy Sharp has done in this area in his min.js library.
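To give a flavour of how small these utilities can be, here’s a sketch of the class-handling part. The function names are mine, not min.js’s, and they work on plain className strings, so the same logic can back a classList fallback:

```javascript
// Tiny class helpers operating on a className string, padded with
// spaces so whole-word matches are cheap. Each returns the new string.
function hasClass(className, name) {
  return (' ' + className + ' ').indexOf(' ' + name + ' ') !== -1;
}
function addClass(className, name) {
  return hasClass(className, name) ? className : (className + ' ' + name).trim();
}
function removeClass(className, name) {
  return (' ' + className + ' ').replace(' ' + name + ' ', ' ').trim();
}

addClass('btn active', 'hidden');    // 'btn active hidden'
removeClass('btn active', 'active'); // 'btn'
```

In the browser you’d wire these up to el.className (or just use el.classList where it exists).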
I’m going to write a series of posts as I attempt to separate myself from jQuery and make my websites leaner and faster. The first will be on “what you think you need, and what you actually need”, and will give you ways to work out whether this approach is for you, or whether you should stick with jQuery. Next, I’ll cover the basics of what a minimalist jQuery replacement looks like; and finally, I’ll cover strategies for dealing with unsupported browsers.
Let me know if there’s anything in particular you want me to cover, and I’ll do my best to go over it for you.