Articles Tagged ‘chrome’

HSTS – a no-nonsense guide

I’ve been playing with HTTP Strict Transport Security (HSTS – I’m late to the party as usual) and I went in with a few misconceptions that threw me a bit. So, here’s a no-nonsense guide to HSTS.

The HSTS Header is pretty simple to implement

I actually thought that this would be the hard bit, but putting the header in is very simple. As it’s domain-specific, you just need to set it at the web server or load balancer level. In Apache, it’s pretty simple:

Header always set Strict-Transport-Security "max-age=10886400;"

You can also upgrade all subdomains using this header

A small addition to the header auto-upgrades all subdomains to HTTPS, making it really simple to upgrade long-outdated content deep within databases or on static content domains without doing large-scale migrations.

Header always set Strict-Transport-Security "max-age=10886400; includeSubDomains"

Having a short max-age is good when you’re starting out with subdomains

Having a short max-age is bad in the long-run

If you have a max-age length shorter than 18 weeks then you are ineligible for the preload list.

Wait, what?

There’s a preload list – browsers know about HSTS-supported sites

It turns out that all of the browsers include a “preload” list of websites that support HSTS, and will therefore always point the user to the HTTPS version of the website no matter what link they have come from to get there.

So, how does it work?

Well, you go to https://hstspreload.appspot.com and submit your website to the list. Chrome, Firefox, Opera, IE 11 (assuming you got a patch after June 2015), Edge and Safari pick it up and will add you to a list to always use HTTPS, which takes away a redirect for you. There are a few other requirements to meet – have a valid cert (check), include subdomains, have a max-age of at least 18 weeks, add a preload directive, and follow some redirection rules.

Header always set Strict-Transport-Security "max-age=10886400; includeSubDomains; preload"

Here are the standard redirection scenarios for a non-HSTS site that uses a www subdomain (like most sites):

  1. User enters https://www.example.com – no HSTS. There are 0 redirects in this scenario as the user has gone straight to the secure www domain.
  2. User enters https://example.com – no HSTS. There is 1 redirect as the web server adds the www subdomain.
  3. User enters example.com – no HSTS. There is 1 redirect here as the web server redirects you to https://www.example.com in 1 hop, adding both HTTPS and the www subdomain.
How to Redirect HTTP to HTTPS as described by Ilya Grigorik at Google I/O in 2014

This is the best practice for standard HTTPS migrations as set out in HTTPS Everywhere, where Ilya Grigorik shows that scenario 3 should have only 1 redirect; any more and you get a performance penalty.

HSTS goes against this redirection policy… for good reason

To be included on the preload list you must first redirect to HTTPS, then to the www subdomain:

`http://yell.com` (HTTP) should immediately redirect to `https://yell.com` (HTTPS) before adding the www subdomain. Right now, the first redirect is to `https://www.yell.com/`.

This felt incredibly alien to me, so I started asking some questions on Twitter, and Ilya pointed me in Lucas Garron’s direction.

Following that link I get a full explanation:

This order makes sure that the client receives a dynamic HSTS header from example.com, not just www.example.com

http -> https -> https://www is good enough to protect sites for the common use case (visiting links to the parent domain or typing them into the URL bar), and it is easy to understand and implement consistently. It’s also simple for us and other folks to verify when scanning a site for HSTS.

This does impact the first page load, but will not affect subsequent visits.
And once a site is actually preloaded, there will still be exactly one redirect for users.

If I understand correctly, using HTTP/2 you can also reuse the https://example.com connection for https://www.example.com (if both domains are on the same cert, which is usually the case).

Given the growth of the preload list, I think it’s reasonable to expect sites to use strong HSTS practices if they want to take up space in the Chrome binary. This requirement is the safe choice for most sites.

Let me try to visualise that in the scenarios:

  1. First visit, user types example.com into their browser. They get a 301 redirect to https://example.com and receive the HSTS header. They are then 301 redirected to https://www.example.com. 2 redirects
  2. Second visit, the browser knows you’re on HSTS and automatically upgrades you to HTTPS before the first redirect, so typing example.com into the browser performs 1 redirect, from https://example.com to https://www.example.com.
  3. If you’re in the preload list, the second visit scenario happens on the first visit. 1 redirect
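
To make that ordering concrete, here’s a minimal Apache sketch of the preload-friendly setup (an illustration only – it assumes mod_rewrite and mod_headers are enabled, and the SSL certificate directives are omitted for brevity):

# Port 80: upgrade to HTTPS on the same hostname first
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    RewriteEngine On
    RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
</VirtualHost>

# Port 443, naked domain: send the HSTS header, then add the www subdomain
<VirtualHost *:443>
    ServerName example.com
    Header always set Strict-Transport-Security "max-age=10886400; includeSubDomains; preload"
    RewriteEngine On
    RewriteRule ^(.*)$ https://www.example.com$1 [R=301,L]
</VirtualHost>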

So, that makes sense to me. In order to set the HSTS upgrade header for all subdomains, it needs to hit the naked domain, not the www subdomain. This appears to be a new requirement to be added to the preload list, as the Github issue was raised on May 19th this year, and Lucas has said that this new rule will not be applied to websites that are already on the list (like Google, Twitter etc).

For me, this takes away much of the usefulness of HSTS, which is meant to save redirects to HTTPS by auto-upgrading connections. If I have to add another redirect in to get the header set on all subdomains, I’m not sure if it’s really worth it.

So, I asked another question:

And this is the response I got from Lucas

So it helps when people type in the URL, sending them to HTTPS first. This takes out the potential for any insecure traffic being sent. Thinking of the rest of the links on the internet, the vast majority of yell.com links will include the www subdomain, so HSTS and the preload list will take out that redirect, leaving zero redirects. That’s a really good win, as Lucas confirmed.

Summary – HSTS will likely change how you perform redirects

So, whilst this all feels very strange to me, and goes against the HTTPS Everywhere principles, it will make things better in the long run. Getting subdomains for free is a great boost, though the preload list feels like a very exclusive club that you have to know about in order to make the best of HSTS. It’s also quite difficult to get off the list, should you ever decide that HTTPS is not for you: you’ll have the HSTS header for 18 weeks, and there is no guarantee that the preload list will be updated regularly. It’s an experiment, but one that changes how you need to implement HSTS.

So, that’s my guide. Comments, queries, things I’ve gotten wrong, leave a comment below or on Twitter: @steveworkman

Improving Javascript XML Node Finding Performance by 2000%

In my work, I’m parsing web services all of the time. Most of the time they’re XML, which does not make the best use of bandwidth or CPU time (compared to JSON); however, if it’s all that you’re given then you can certainly get by. I’ve been looking into ways to speed up XML document traversal with jQuery after the current best-practice method was removed.

The basic way to find certain nodes in an XML web service is to use the .find() method. This is used heavily by the SPServices jQuery helper (which is, in general, a great library).

$(xData.responseXML).find("[nodeName='z:row']").each(function() {
// Do stuff
});

That’s absolutely fine – it’s going to find the attribute nodeName with a value of z:row. However, since jQuery 1.7, this method does not work. I raised this regression in the jQuery bug tracker and was encouraged to find a solution; another selector that worked in all browsers. Unfortunately, at the time I couldn’t come up with anything better than this:

$(xData.responseXML).find("z\\:row, row").each(function() {
// Do stuff
});

The “z\\:row” selector works in IE and Firefox, and the “row” selector works in Chrome and Safari (I’m unable to test in Opera here, sorry). This was flagged as the solution to the problem and they wouldn’t be making any fixes to the jQuery core.

After a few weeks of using this method, I noticed that the site had been slowing down, especially in IE, and I thought this new selector was the cause. So, I looked into the performance numbers using jsPerf and I raised a bug too. My first test was to see what the current solution was doing, and whether jQuery 1.7 had made things worse.
Test case: http://jsperf.com/node-vs-double-select/4

So, performance in Chrome is identical for each of the selectors (and it’s the same in Firefox and Safari) but IE drops nearly half of its operations because it has to perform that second selector.

It’s still not very high performance though, and so I looked for other solutions.

Dmethvin suggested:

Did you try the custom plugin in the ticket? If you’re having performance issues that should be much faster.

The plugin he’s referring to is this:

jQuery.fn.filterNode = function(name){
   return this.filter(function(){
      return this.nodeName === name;
   });
};

This filters elements by their nodeName, comparing it against the name that you pass in. The issue with this is that .filter() does not traverse down the tree; it stays at the level of the set of elements that it was given. Therefore, a quick solution was this:

$(xData.responseXML).children().children().children().children().children().children().children().filterNode('z:row').each(function() {
// Do stuff
});

jsPerf Test: http://jsperf.com/node-vs-double-select/1

Wow, that’s about 50 times faster. Even IE beats Chrome when doing this operation. The simple reason is that it’s got a smaller set of objects to go through and it’s comparing a single attribute rather than parsing the text of the XML to try and find the namespaced element.

Still, I wasn’t satisfied as in order to achieve that performance, I had to know how deep I was going to be going in order to retrieve the set. So, back to the bug and another suggestion by dmethvin:

If you’re going that deep, use a filter function passed to .find(). How does that fare?

After a few attempts, a colleague of mine came up with this beauty:

$.fn.filterNode = function(name) {
   return this.find('*').filter(function() {
      return this.nodeName === name;
   });
};
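
For reference, the new plugin is then called directly on the response, replacing the old selector entirely (a sketch reusing the xData.responseXML object from the earlier examples):

$(xData.responseXML).filterNode('z:row').each(function() {
// Do stuff with each z:row node
});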

jsPerf test: http://jsperf.com/node-vs-double-select/3

Using .find(‘*’).filter() increased performance to 200x faster than the original .find(‘z:row’) selector

I mean, wow, that’s incredible. On the graph, those tiny little bits of colour are the original selectors, and the bars reaching only 20% of the way up are the previous massive performance increase from using filter. It should also be noted that IE8 performance using this selector is better in jQuery 1.7 than it was in jQuery 1.6.

Side-note: IE10’s JavaScript performance is almost equal to that of Google Chrome. In comparison, IE9 (not shown) is about half of that.

The reason for this massive increase is that it’s backed by native selectors. A .find(‘*’) will translate into element.querySelectorAll(‘*’) which is very fast when compared to doing 8 .children() calls.
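
Roughly speaking, the work the plugin ends up doing looks like this in plain DOM terms (a sketch of the idea, not jQuery’s actual internals, and it assumes the browser supports querySelectorAll on the XML response):

var nodes = xData.responseXML.querySelectorAll('*'); // one fast native call for every descendant
var rows = [];
for (var i = 0; i < nodes.length; i++) {
    // the same nodeName comparison that the filterNode plugin makes
    if (nodes[i].nodeName === 'z:row') {
        rows.push(nodes[i]);
    }
}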

Summary
Dealing with large amounts of data from web services needs to be fast. Using a simple .find() on the node name no longer works, and alternatives have been investigated. The fastest method, using a short one-line plug-in, improves performance by up to 2000% compared to the old methodology.

I’ll be notifying the SPServices group of this post, and hopefully they can improve the performance of their library.

Problems loading local fonts with font-family

I’m investigating a problem with loading locally installed fonts in Windows 7. It’s a weird one, this, and it only seems to affect Firefox and IE9/10 – that is, the browsers that use DirectWrite.

The problem

Font rendering comparison

Font rendering comparison: clockwise Chrome 16, IE10pp2, IE8, Firefox Aurora 9

This image shows the beautiful Univers font in four different browsers. Each of them is rendering the following CSS:

#univers55, #univers55bold, #univers45, #univers45bold { font-size:3em;}
#univers55 { font-family:"Univers 55", serif; }
#univers55bold { font-family: "Univers 55", serif; font-weight:bold; }
#univers45 { font-family: "Univers 45 Light", serif; }
#univers45bold { font-family: "Univers 45 Light", serif; font-weight:bold; }

Now, Univers 45 is a light-weight font, and should be a good 10 points lighter than Univers 55; you can see this in Chrome and IE8 (top-left and bottom-right). In Firefox and IE10, Univers 45 looks heavier than Univers 55, and the bold version actually appears to be the same size. That just isn’t right. So I called for reinforcements: step up Paul Rouget (Mozilla) and Martin Beeby (Microsoft).

Further investigation

I raised a bug against Firefox core with a test case on jsfiddle and got some good responses. Martin and Paul both asked:

@steveworkman @paulrouget could you include the font-face code in the fiddle for Univers 55 and Univers 45 Light

So, they thought it was something to do with the way I was including the font in the CSS. So, let’s put it in and see what happens:

Font-face rule comparison

Font rendering comparison with the font-face rule added

The rendering appears to have corrected itself. Univers 45 and 55 are now properly weighted in Chrome, IE10 and Firefox. So, what happened? Well, Jonathan Kew sheds a little light on the problem:

Note that the @font-face rules shown in your second image are incorrect (e.g. ‘src: “Univers 55”’ is invalid; did you mean “src: local(‘Univers 55’)”?), and as a result you’re not getting Univers at all in some of those examples, you’re getting fallback to the default sans-serif font, probably Arial. Look at the shape of the “5”, for example.

Also note that under the DirectWrite model, font families are organized differently from the old GDI model. Family names like “Univers 45 Light” are not generally used; instead, all the faces belong to a single family, with distinct font-weight values. (But I haven’t examined the Univers family to see exactly how they’re organized.)
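
In other words, the rule Jonathan is suggesting would look something like this (a sketch only, assuming the family really is installed under that name):

@font-face {
	font-family: "Univers 55";
	src: local('Univers 55'); /* local() points at an installed font rather than a URL */
}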

So, yes, a little stumble over the “local(‘font-name’);” issue, but otherwise it appears to be fine. The second part, where he mentions that in Windows 7 font families are organised differently and so may be under a different name, intrigued me. So, I took a look:

Univers Font Families

Univers 45 Fonts

Univers 55 Fonts

So, this is where it goes a bit fuzzy. I can address the “Univers 45 Light” font directly through font-face, but I can’t access the “Univers 55 Normal” font, though I can access the whole font family by looking for “Univers 55”. That doesn’t feel right to me, and needs more investigation. Still, I can at least get it to render the font when using font-face. So, is that it? Bug closed?

Why can’t I just use font-family?

This is the question that I want answering, and Martin has kindly volunteered to look into it. Loading a local font through font-family should work in exactly the same way as @font-face with src: local(); – but in Firefox and IE 9/10 it doesn’t.

So, web community, do you know why this doesn’t work? Can you make it work with the test case? Please help! Leave a comment, send me a tweet or update the test case and make it work!

UPDATE 1st November 2011:

Jonathan Kew has responded again with some more words of wisdom for me:

The @font-face and non-@font-face cases there aren’t comparable. When you say

font-family: "Univers 55";

without the use of @font-face, you’re requesting that _font family_, which may have multiple faces with different weights. What you’ll get when you use font-weight:bold along with this depends which faces have been grouped under that family name – which may well differ between GDI and DirectWrite environments. And if there’s no “bold” face in the family, then the browser will artificially “embolden” the text. (You can tell exactly which font is really being displayed using the “font-info” add-on, btw.) I suspect you might get a true bolder face in one case and synthetic bold in the other, but that depends very much on the structure of the font families you’re using.

In the @font-face case, you’re defining your own font families (independently of how the OS or the font designer organized things), and your CSS only assigns them a single (normal-weight) face each, which means that when your styles ask for font-weight:bold, you should be getting synthetic emboldening rather than a real face with a heavier weight.

To see what faces are supposed to be available within each family, look in the Windows Fonts folder – IIRC, this should reflect the DirectWrite organization of the fonts. Do you see separate “Univers 45” and “Univers 55” families? What faces exist in each?

So, I looked back at the spec and re-engineered the @font-face CSS, and the non-font-face CSS. The non-font-face CSS was definitely wrong, so I updated that, but the @font-face CSS still wasn’t working. I looked at the font libraries as they are in Windows, and identified the following named structure:

  • Univers 45
    • Univers 45 Light
    • Univers 45 Light Oblique
  • Univers 55
    • Univers 55 Black
    • Univers 55 Normal
    • Univers 55 Oblique

Given the advice Jonathan gave me, the correct font-family should be “Univers 45” and “Univers 55”, and the correct fonts for @font-face should be “Univers 45 Light” and “Univers 55 Normal”. That gave the following results in Firefox and IE10:

Font-face comparison IE10 and Firefox 9

Firefox Aurora 9 (left) fails to find the font, IE10 pp2 (right) finds the font

IE is finding the correct font, but Firefox isn’t and is falling back to the serif font. I used the font finder plugin to look at which fonts were being used on the page, and the answer surprised me. When loading the Univers 55 font-family, the normal weight font is titled “Univers 55 Roman”, instead of “Univers 55 Normal”, and the bold version is “Univers 75 Black”, instead of “Univers 55 Black”.

So, I took a look at the system file properties for the font, and lo-and-behold, the detailed properties were different to the name of the font:

Univers 55 detail

The file name says "Normal", but the properties say "Roman"

I quickly looked to see if Firefox was looking for the title property instead of the file name, and I was right. If you’ve got the Univers font you can see the jsFiddle test case for this, otherwise, the final result is below:

Corrected Font-Face

With the extra @font-face rule, Aurora renders correctly like IE10

The @font-face code goes like this:

@font-face {
	font-family: MyUnivers55;
	src: local('Univers 55 Roman'), local('Univers 55 Normal');
	font-weight:normal;
}
@font-face {
	font-family: MyUnivers55;
	src: local('Univers 75 Black'), local('Univers 55 Black');
	font-weight:bold;
}
@font-face {
	font-family: MyUnivers45;
	src: local('Univers 45 Light');
	font-weight:normal;
}
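
The page styles can then point at those new family names instead of the system ones – a sketch reusing the selectors from the start of this post:

#univers55 { font-family: MyUnivers55, serif; }
#univers45 { font-family: MyUnivers45, serif; }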

It looks to me like IE and Firefox use different properties to identify fonts under Windows 7: IE looks at the name in the file system and Firefox looks at the Title attribute of the file.

Amazingly, it looks like the spec doesn’t care which one is right, and advises you to include both as different platforms use different naming conventions. What surprises me is that two browsers look for a different name on the same platform, as this could cause naming clashes; the Univers 55 Black is titled Univers 75 Black – so if there was also a Univers 75 Black titled Univers 85 Black, browsers are going to retrieve the wrong font.

So, Microsoft, Mozilla, please sort it out 🙂 Who is right?

Tips and Problems when Enhancing SharePoint with JavaScript

If you’ve developed for Microsoft’s SharePoint before (I’m talking about 2007 here, but this applies to WSS2 and 2010 as well), then you’ll know that you can reach the limits of its functionality very quickly. This is a big problem if you’re making a zero-code solution, i.e. you have no access to Visual Studio and can’t create web parts. This is more common than you’d think, especially in large organisations that use SharePoint extensively. For this, the only choice is to use SharePoint Designer 2007 (SPD), but it’s not pleasant because, frankly, SPD sucks. I’ve not found a program that crashes as much as SPD, or that performs so poorly when presented with the most basic tasks. If you make a page that is too complex, has too many web parts, large data sources or lots of conditionals, connections and filters, it can take anywhere up to 20 minutes to perform a single action.

SharePoint crash

Very quickly, you have to start looking at alternatives to complex data views. These days, the go-to technology is JavaScript, which is very powerful and can allow developers to access almost every SharePoint function through web services. However, this functionality comes at the cost of accessibility. So, the first piece of advice: if you can avoid using JavaScript, do so, because otherwise the site won’t be accessible. See these links for why accessibility is a good thing.

Unfortunately, SharePoint is so limited that often JavaScript is the only way to add functionality or to correct formatting. In this case, use of simple SPD functions and <noscript> tags can keep your content accessible, allowing you to progressively enhance the user’s experience on top, as sketched below.
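
As a very rough sketch of that pattern (hypothetical markup, not an actual SharePoint web part), the plain content stays in the page and JavaScript only enhances it when it’s available:

<!-- Content rendered by a simple SPD data view: readable with JavaScript disabled -->
<ul id="task-list">
	<li>Review the weekly report</li>
	<li>Update the team calendar</li>
</ul>
<script type="text/javascript">
// Progressive enhancement: this only runs when JavaScript is available
document.getElementById('task-list').className = 'enhanced'; // hook for richer styling and behaviour
</script>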

The final hurdle to cross before you can create great JavaScript-based interfaces in SharePoint is IE.

Internet Explorer, especially IE6, has appalling developer tools for JavaScript debugging. There’s no console, no inspector, no breakpoint facility, no nothing. It’s almost impossible to debug your problems because they all manifest themselves as runtime errors on some arbitrary line on the page.

The best way that you can debug JavaScript in IE is with Google Chrome. It doesn’t sound right, but I promise it’s the easiest way to make your code work. Both Chrome’s Web Inspector and Firefox’s Firebug work very well with SharePoint, though my personal preference is for Chrome as it works better with Windows’ NTLM authentication system (it doesn’t ask you for your login details, it just takes them from Windows). They allow you to check and validate your code so that it works well and runs as expected. You should be able to achieve this in half the time that you would if you were just developing for IE, using alerts to work out what’s going wrong.

There’s another benefit to working this way around: your code will work on standards-compliant browsers, and any that come along in the future. This is always a good thing, as you don’t know when the organisation will roll out IE8/9 to its users, nor can you always guarantee that a user will be using IE. It’s important that sites are ready for these changes and that best-practice development is maintained.

In summary, if you have to use JavaScript, ensure the page content will work without it. If you are doing any major development work, do it in Chrome and reap the benefits of its debugger, then make it work in IE.