Over the last few months I’ve been putting together my talk for the year, based on a blog post that is titled “HTTPS is Hard”. You can read the full article on the Yell blog on which it is published. There’s also an abridged version on Medium. It’s been a very long time coming, and has changed over the time I’ve been writing it, so I thought I’d get down a few reflections on the article.
It’s really long, and took a long time to write
This is, firstly, the longest article I’ve written (at over four thousand words, it’s a quarter of the length of my dissertation) and it’s taken the longest time to be published. I had a 95% complete draft ready back in September, when I was supposed to be working on my Velocity talk for October but found myself much more interested in this article. Dan Applequist has repeatedly asked me to “put it in a blog post, the TAG would be very interested” – so finally, it’s here.
The truth is that I’m constantly tweaking the post. Even the day before it goes live, I’m still making modifications as final comments and notes come in from friends that I’ve been working with on this. Also, it seems like every week the technology moves on and the landscape shifts: Adobe offers certs for free, Dreamhost gives away LetsEncrypt HTTPS certs through a one-click button, Netscaler supports HTTP/2, the Washington Post writes an article, Google updates advice and documentation, and on and on and on… All through this evolution, new problems emerge, the situation morphs and I come up with new ways to fix things, and as I do, they get put into the blog post. Hence, it’s almost a 20-minute read.
A special thank you to Andy Davies, Pete Gasston, Patrick Hamann and the good people at Yell; Jurga, Claire and the UI team (Andrzej, Lee and Stevie) for their feedback throughout this whole process. I’m sure they skipped to the new bits each time.
Is HTTPS really necessary, for everyone?
Every day something silly happens. Today’s was from generally-awesome tech-friendly company Mailchimp. They originally claimed that “Hosted forms are secure on our end, so we don’t need to offer HTTPS. We get that some of our users would like this, though” (tweet has since been deleted). Thankfully, they owned up and showed CalEvans how to do secure forms.
Still, it’s this kind of naivety that puts everyone’s security at risk. A big thumbs up to Mailchimp for rectifying the situation.
If I were to have started today, would HTTPS still be hard?
Yes, though nowhere near as hard. We’d still have gone through the whole process, but it wouldn’t have taken as long (the Adobe and Netscaler bits were quite time-consuming), and the aftermath wouldn’t have dragged on for anywhere near as long if I’d known about the referrer problem in advance.
If you’d known about the referrer issue, would you have made the switch to HTTPS?
Honestly, I’m not sure I would have pushed so hard for it. We don’t have any solid evidence to say it’s affecting any business metrics, but I personally wouldn’t like the impression that traffic just dropped off a cliff, and it wouldn’t make me sign up as an advertiser. Is this why Yelp, TripAdvisor and others haven’t migrated over? Who can say…
This is why the education piece of HTTPS is so important: developers can easily miss little details like referrers, see only the goals of better ranking and HTTP/2, and just go for it.
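To make the referrer problem concrete: when a browser navigates from an HTTPS page to a plain HTTP page, it strips the Referer header, so the HTTP site sees the visit as direct traffic. A hedged sketch of one mitigation available to the HTTPS site – a meta referrer policy (browser support for the values varied at the time of writing):

```html
<!-- On the HTTPS site: ask the browser to send the page's origin as the
     referrer, even on HTTPS -> HTTP navigations, instead of nothing -->
<meta name="referrer" content="origin">
```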
The point of the whole article is that there just isn’t the huge incentive to move to HTTPS. Having a padlock doesn’t make a difference to users unless they sign in or buy something. There needs to be something far more aggressive to convince your average developer to move their web site to HTTPS. I am fully in support of Chrome and Firefox’s efforts to mark HTTP as insecure to the user. The only comments I get around the office about HTTPS happen when a Chrome extension causes a red line to go through the protocol in the address bar – setting a negative connotation around HTTP seems to be the only thing that gets people interested.
What’s changed since you wrote the article?
I am really pleased to see the Google Transparency Report include a section on HTTPS (blog post). An organisation with the might and engineering power of Google is still working towards HTTPS, overcoming technical boundaries that make HTTPS really quite hard. It’s nice to know that it’s not just you fighting against the technology.
What about “privileged apps” – you don’t talk about that
The “Privileged Contexts” spec, AKA “Powerful Features”, and how to manage them is a working draft, and there’s a lot of debate still to be had before they go near a browser. I like how the proposals work and how they’ve been implemented for Service Worker. I also appreciate why they’re necessary, especially for Service Worker (the whole thread of “why” can be read on GitHub). I hope that Service Worker has an effect on HTTPS uptake, though this will only truly happen should Safari adopt the technology.
It looks like Chrome is going to turn off Geolocation from insecure origins very soon, as that part of the powerful features task list has been marked as “fixed” as of March 3rd. Give it a few months and geolocation will be the proving ground for the whole concept of powerful features – something that I’ll be watching very closely.
I was lucky enough to be invited to attend and speak at Edge Conf London 2014, an assembly of web development superheroes charged with discussing the future of web technology in front of a live audience. I’ve written up my main take-aways from the event.
Web Components Panel at EdgeConf 3
- Web Components are the custom elements that you’ve always wanted. If you’re after a <google-map> tag, web components can give it to you.
- The basics: register a new element with the DOM, and then you can do anything with it
- Web components are imported with an HTML link element: <link rel="import" href="component.html">
- To make these components useful, you need to use the Shadow DOM – this is the DOM inside an element, which is already being used on the web: take a look inside Chrome’s dev tools at an <input type="range"> element – the tickers are <button> elements inside the <input>
- There are no browsers that support this out of the box yet, so there are two polyfills that you can use: Polymer (Google) and X-Tags (Mozilla)
- The Server/Client rendering trade-off is the concern at the moment. Any JS downloading in a web component will block rendering unless specified as async. You can also compress and minify web components and their resources, which is a necessary step to get anywhere near good performance. We’re adding new tools, but the old techniques still apply.
- Responsive components, that have media queries related to their own size, aren’t possible because they could be influenced by the parent too, which will get the rendering engine into infinite loops.
- On semantics and accessibility: they are still important, but with ARIA roles not caring what element they are on, you can make anything appear like anything, so the argument that web components are bad for semantics is kinda moot.
- On SEO, the usual rules still apply: you’ve got to make your content accessible and not hide it, but the search bots will read web components
- On styling, using scoped styles (a level 4 spec) works very well, as these will override at scope. However, using an object-oriented CSS approach makes this easier. It is, however, generally harder to make all of your CSS into OOCSS, which is more of a team/rigour problem.
- In the end, you’re responsible for packaging and de-duplicating your resources. Web components will remove any duplicate files from the same origin, but it’s still very easy to import two versions of jQuery. You are responsible for that.
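Pulling the bullets above together, a rough markup sketch – the component file, tag name and attributes are entirely hypothetical, and the import syntax is the HTML Imports draft that Polymer and X-Tags polyfill:

```html
<!-- Hypothetical component file: google-map.html registers the element
     and builds its internals inside a Shadow DOM -->
<link rel="import" href="google-map.html">

<!-- Once registered, the custom element is used like any built-in tag -->
<google-map latitude="51.5" longitude="-0.12"></google-map>
```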
Bruce Lawson has another very good write-up on this session, and web components in general on his blog.
Developer Tooling Panel
- Firefox’s in-built developer tooling has come a long way from Firebug, with new features like deep scoping of function variables, great memory profiling and visual descriptions of CSS transforms
- Chrome’s dev tools will have a better call stack flame graph
- Brackets does great in-line CSS editing and rendering
- Remote Debug aims to solve the complex workflow issue, where a developer knows how to use Chrome’s dev tools, but not Firefox’s, and it will let you use Chrome’s tools with Firefox.
- JS Promises aren’t in the dev tool sets yet. We need to experiment with the technology and then make tools – we can’t make tools for something that doesn’t exist yet
- Use debugger; rather than console.log – if you’re logging, you may as well use alert()
- It would be great if we could inspect headless browsers with dev tools so that we can see what went on with a test
- It was also noted that contributing to dev tools is harder than it could be. Remy Sharp suggested creating some kind of Greasemonkey scripting capability for dev tools
Build Process panel
- Building software is not new. The Unix make command is 37 years old.
- We are trying to avoid a plugin apocalypse, with Grunt, Gulp, Brunch etc. re-inventing the wheel
- A big mention in this session of the Extensible Web Manifesto as a basis for how we should be developing these tools
- There are things that belong in our build process, and things that belong in our browser. With Sass, variables should be in the browser, but minification and compilation should not.
- As a community, we need to be responsible with what we put into the standards
- Use Git, not GitHub. Use the npm protocol, not npm. The basics are great, but we can’t rely on services like this
- More tools fuel innovation. A single task spec as a way to describe tasks between the task runners would be great, but this is on hold as they currently can’t agree.
- On Grunt/Gulp – use the tool you’re comfortable with, and make the most of it.
- We are still in the early days of build tools. Being locked in to a certain tool for 5 years probably won’t hurt, because at least you’re using a build tool!
Page Load Performance
- “Put it in the first 15KB for ultimate performance”. Load content first, then enhancements, then the leftovers, and you’ll get great performance
- Use sitespeed.io and webpagetest for your synthetic testing
- Looking at The Guardian – their branched loading model saves them 42% with their responsive site
- There will still be times when an adaptive site, with proper redirects, will give you better performance. I can vouch for this: Yell.com on the desktop is around 700KB, the homepage on mobile is around 60KB.
- HTTP/2 will make spriting an anti-pattern, as it makes it easier to only download what you need. Remember, it’s only the network that needs the data in one file
- If you own your site, instrument it. Target start render time, and use window.performance for better timing. Look to improve the page onload event.
- Resource Priorities and timing APIs will arrive soon. You’re encouraged to use these in your Real User Monitoring (RUM) stats. Not many companies do this at the moment.
- Finding out if a user is experiencing page jank is a hard problem, as you’d have to hook into requestAnimationFrame
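On instrumenting your own site, a hedged sketch using the User Timing API (performance.mark/measure) – the mark names are made up, and in a real page you’d beacon the measurement back for your RUM stats rather than log it:

```javascript
// Mark the start of the work you care about...
performance.mark('render-start');

// ...do the work (simulated here with a busy loop)...
let total = 0;
for (let i = 0; i < 1e6; i++) total += i;

// ...mark the end, then measure the span between the two marks
performance.mark('render-end');
performance.measure('render', 'render-start', 'render-end');

const render = performance.getEntriesByName('render')[0];
console.log('render took ' + render.duration.toFixed(2) + 'ms');
```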
Pointers and Interactions
I was on this panel, so I’ve not got any notes! Thankfully, Google were there to video it for everyone
- For accessibility, complying with geo-specific regulations is important, but complying with the law doesn’t make your website accessible
- Are the WCAG guidelines outdated? No, their values are still good, but there are more complex use cases now than when they were written. For example, gaming accessibility is about making visual cues auditory
- Mechanical audits of a website don’t give you the full accessibility brief. They can check ARIA roles, colour contrast, click regions, alt text and the like
- Try the Chrome Accessibility Tools extension, and the SEE extension, to see your page through different eyes
- If you want to know what it feels like to have a muscular problem, try using your mouse with your non-writing hand
- Accessibility needs to be considered at the design stage – if done at the QA stage, you’ve missed the point
Sadly, I wasn’t at the event for this, but here’s the video for you all to enjoy
I loved the conference, I’ve not learnt so much in a day in years! Here are some other write-ups from around the web:
“Our work here is done” – the immortal final words of the web standards project. Please, read the post, for me it’s a tearjerker, and I’ll tell you why.
Before I left uni, when I still didn’t know what I wanted to do, I found the WaSP group online and thought, “that’s amazing – people from all walks of life and competing companies no less, all getting together to make possibly the world’s most important invention a better place. I want to do what they do”
Many years later, as the web standards project closes its doors, I help to run a web standards meetup group, speak at conferences on web standards and evangelise their use every day. Thank you WaSP members for inspiring me to be where I am today.
Thank you so much
This month was a very special meetup for London Web Standards – its 5th birthday celebrations! Yes, it’s hard to believe that 5 years ago in October, three guys met up in a North London pub to talk about the web. To celebrate this momentous occasion, Imogen Levy baked us a massive 7-layer London Web Standards Cake (a Great British Bake Off contender for 2013, for sure). Imogen, thank you so much (from all of the LWS Organisers)!
It was also a big LWS for me personally, as I took the stage to talk about a pet topic of mine: Less, Sass and CSS Pre-processors. Gotta say, I had a lot of fun and got some really great questions and comments from the audience. I’ll definitely do it again.
So, the sketchnotes service is at half capacity today, it being quite hard to do sketchnotes of my own talk. The notes this month are of Peter Gasston’s talk on The CSS of Tomorrow, covering future specs that will bring some of the features from Less/Sass to CSS, and hugely improve the way we lay out websites (finally!).
My Sketchnotes from Peter’s Talk
The CSS of Tomorrow – Peter Gasston
Thanks again to everyone involved, we truly celebrated LWS’s birthday in style.