Articles Tagged ‘jQuery’

Going jQuery-free

It’s 2014 and I’m feeling inspired to change my ways. In 2014, I want to go jQuery-free!

Before I get started, I want to clear the air and put in a big fat disclaimer about my opinions on jQuery. Here we go:

jQuery is an excellent library. It is the right answer for the vast majority of websites and developers and is still the best way to do cross-browser JavaScript. What I am against is the notion that jQuery is the answer to all JavaScript problems.

Lovely, now that’s done, this is why I want to do it. Firstly, as lots of people know, jQuery is quite a weighty library considering what it does. Coming in at 32KB for version 2.x and around 40KB for the IE-compatible 1.x branch (gzipped and minified), it’s a significant chunk of page weight before you’ve even started using it. There are alternatives that support the majority of its functions with the same API, such as Zepto, but even that comes in at around 15KB for the most recent version, and can grow larger. The worst thing for me is that I don’t use half of the library: all I really do is select elements, use event handlers and delegation, show/hide things, and change CSS classes. So, I want a library of utility functions that does only these things.
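
To make that concrete, here’s a rough sketch of the sort of utility layer I have in mind. The helper names (`$$`, `delegate`, `toggleClass`) are my own invention, not from any existing library, and it assumes a browser with `querySelectorAll`, `matches` and `classList` support:

```javascript
// Select elements as a real array
function $$(selector, scope) {
  return Array.prototype.slice.call(
    (scope || document).querySelectorAll(selector)
  );
}

// Event delegation: call handler when the event target (or an
// ancestor inside root) matches the selector
function delegate(root, type, selector, handler) {
  root.addEventListener(type, function (event) {
    var el = event.target;
    while (el && el !== root) {
      // matches() may need a vendor prefix in older browsers
      if (el.matches && el.matches(selector)) {
        return handler.call(el, event);
      }
      el = el.parentNode;
    }
  });
}

// Class changes (and show/hide via a CSS class) using classList
function toggleClass(el, name) {
  return el.classList.toggle(name);
}
```

That’s the whole wish-list from above — selection, delegation, and class toggling — in under thirty lines, which is the kind of trade-off that makes going jQuery-free tempting.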

Word to the wise, this is not a new notion; it follows on very nicely from the work Remy Sharp has done in this area with his min.js library.

I’m going to write a series of posts as I attempt to separate myself from jQuery and make my websites leaner and faster. The first will be on “what you think you need, and what you actually need”, and will give you ways to work out whether this approach is for you or whether you should stick with jQuery. Next, I’ll cover the basics of what a minimalist jQuery replacement looks like; finally, I’ll cover strategies for dealing with unsupported browsers.

Let me know if there’s anything in particular you want me to cover, and I’ll do my best to go over it for you.

Installing JSDom on Windows

If you’ve ever wanted to scrape a web page and extract some information using Node.js, there’s a really useful module called JSDom that parses a document and gives you a DOM that you can then manipulate with YUI or jQuery.

This all works really well… on Linux and OS X. On Windows, Node.js can’t use pre-built native modules, so they have to be built on your machine by npm. This is all documented in the JSDom GitHub issue, but for brevity, this is what you have to do to make it work:

  1. Node.js 0.6.12 is required; apparently 0.6.13 will make this easier
  2. npm 1.1.8 is required – Node.js is bundled with 1.1.4, so you’ll need to run npm install -g npm
  3. Python 2.7 is required – the Python runtime needs to be on the PATH
  4. Microsoft Visual C++ 2010 is also required – I’ve got the whole of Visual Studio installed on my machine, but I think you’ll be able to get away with the redistributable package
  5. I believe you’ll also need the node-gyp module installed globally (update: node-gyp is now bundled with npm)
Update: Node.js 0.6.13 is now out with npm 1.1.9 and an updated node-gyp, which should make this a lot easier. However, you’ll still need Python 2.7 and Visual C++ 2010.
Update 2: Having the Visual C++ redist package isn’t enough; you have to have Visual Studio installed too. You can get the Express edition for free from Microsoft.
Update 3: The express edition of Visual C++ 2010 doesn’t come with the latest SDKs so it won’t compile out of the box. You’ll need to run Windows Update and download a special set of compiler updates from Microsoft. Thanks to Pavel Kuchera for finding this one out (the hard way).

That’s a lot of dependencies, but it should all work. Once Python and Visual C++ are installed, the commands you’ll need to run are:

npm install -g npm
npm install -g node-gyp
npm install jsdom

And that’s it. If there are build errors, let me or the JSDom team know.

Improving Javascript XML Node Finding Performance by 2000%

In my work, I’m parsing web services all of the time. Most of the time they’re XML, which doesn’t make the best use of bandwidth or CPU time compared to JSON; however, if that’s all you’re given, you can certainly get by. I’ve been looking into ways to speed up XML document traversal with jQuery after the current best-practice method was removed.

The basic way to find certain nodes in an XML web service is to use the .find() method. This is used heavily by the SPServices jQuery helper (which is, in general, a great library).

$(xData.responseXML).find("[nodeName='z:row']").each(function() {
  // Do stuff
});

That’s absolutely fine – it’s going to match elements whose nodeName is z:row (jQuery treats nodeName as an attribute here). However, since jQuery 1.7, this method no longer works. I raised this regression in the jQuery bug tracker and was encouraged to find a solution: another selector that worked in all browsers. Unfortunately, at the time I couldn’t come up with anything better than this:

$(xData.responseXML).find("z\\:row, row").each(function() {
  // Do stuff
});

The “z\\:row” selector works in IE and Firefox, and the “row” selector works in Chrome and Safari (I’m unable to test in Opera here, sorry). This was flagged as the solution to the problem, and the team wouldn’t be making any fixes to the jQuery core.

After a few weeks of using this method, I noticed that the site had been slowing down, especially in IE, and I thought this new selector was the cause. So, I looked into the performance numbers using jsPerf and I raised a bug too. My first test was to see what the current solution was doing, and whether jQuery 1.7 had made things worse.
Test case: http://jsperf.com/node-vs-double-select/4

So, performance in Chrome is identical for each of the selectors (and it’s the same in Firefox and Safari) but IE drops nearly half of its operations because it has to perform that second selector.

It’s still not very high performance though, and so I looked for other solutions.

dmethvin suggested:

Did you try the custom plugin in the ticket? If you’re having performance issues that should be much faster.

The plugin he’s referring to is this:

jQuery.fn.filterNode = function(name) {
  return this.filter(function() {
    return this.nodeName === name;
  });
};

This filters the matched elements by nodeName, comparing each against the name you gave it. The issue is that .filter() does not traverse down the tree; it stays at the level of the set of elements it was given. Therefore, a quick solution was this:

$(xData.responseXML).children().children().children().children().children().children().children().filterNode('z:row').each(function() {
  // Do stuff
});

jsPerf Test: http://jsperf.com/node-vs-double-select/1

Wow, that’s about 50 times faster. Even IE beats Chrome when doing this operation. The simple reason is that it’s got a smaller set of objects to go through and it’s comparing a single attribute rather than parsing the text of the XML to try and find the namespaced element.

Still, I wasn’t satisfied, as in order to achieve that performance I had to know how deep in the tree the nodes would be. So, back to the bug and another suggestion from dmethvin:

If you’re going that deep, use a filter function passed to .find(). How does that fare?

After a few attempts, a colleague of mine came up with this beauty:

$.fn.filterNode = function(name) {
  return this.find('*').filter(function() {
    return this.nodeName === name;
  });
};

jsPerf test: http://jsperf.com/node-vs-double-select/3

Using .find(‘*’).filter() increased performance to 200x faster than the original .find(‘z:row’) selector

I mean, wow, that’s incredible. On the graph, those tiny little bits of colour are the original selectors, and the bars only 20% of the way up are the previous massive performance increase from using filter. It should also be noted that IE8 performance with this selector improved in jQuery 1.7 compared to jQuery 1.6.

Side-note: IE10’s JavaScript performance is almost equal to that of Google Chrome. In comparison, IE9 (not shown) is about half of that.

The reason for this massive increase is that it’s backed by native selectors. A .find('*') translates into element.querySelectorAll('*'), which is very fast compared to doing eight chained .children() calls.
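
To illustrate the difference outside of jQuery, here’s a sketch using plain objects rather than a real DOM (the mock tree and helper names are purely illustrative). It models why filtering a single level misses the rows, while collecting all descendants first, as .find('*') does, catches them wherever they are:

```javascript
// Build a minimal mock node: just a nodeName and a children array
function makeNode(name, children) {
  return { nodeName: name, children: children || [] };
}

// One level down only – what .filter() alone gets to inspect
function childNodes(node) {
  return node.children;
}

// The whole subtree – what .find('*') collects before filtering
function descendants(node) {
  var out = [];
  node.children.forEach(function (child) {
    out.push(child);
    out = out.concat(descendants(child));
  });
  return out;
}

// The same comparison the filterNode plug-in performs
function filterNode(nodes, name) {
  return nodes.filter(function (n) { return n.nodeName === name; });
}

// z:row elements sit two levels down, as in a SharePoint response
var tree = makeNode('root', [
  makeNode('rs:data', [
    makeNode('z:row'),
    makeNode('z:row')
  ])
]);

filterNode(childNodes(tree), 'z:row').length;  // 0 – rows are too deep
filterNode(descendants(tree), 'z:row').length; // 2 – found at any depth
```

The cheap nodeName comparison over a flat descendant list is exactly the shape of work querySelectorAll hands back to jQuery, which is why the one-liner above performs so well.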

Summary
Dealing with large amounts of data from web services needs to be fast. Using a simple .find() on the node name no longer works, so alternatives have been investigated. The fastest method, using a short one-line plug-in, improves performance by up to 2000% compared to the old approach.

I’ll be notifying the SPServices group of this post, and hopefully they can improve the performance of their library.

jQuery to be Integrated with ASP .NET

Fantastic news! Long-time golden boy of the javascript world, jQuery, is to be integrated with Microsoft’s ASP .NET framework.

In an announcement today on the jQuery blog, Scott Guthrie’s blog and Scott Hanselman’s blog, it was confirmed that the jQuery library will be distributed as-is with Visual Studio 2008 SP1 and the free Express editions. The aim is to extend ASP .NET AJAX support and generally make life easier for MS developers. Microsoft will also be contributing tests and patches to the jQuery core, but will not be submitting its own features.

This is a massive boost for a framework that won me and my company over a long time ago. jQuery now has backing from the biggest names in IT and will benefit immensely from the additional support.

The benefits are not only for the jQuery team, they’re also for any web standardistas. This support for jQuery signals Microsoft’s intentions for the ASP framework. I can only hope that from this, Microsoft adopts jQuery in its entirety, having ASP output unobtrusive, cross-browser javascript, not the ‘impossible to debug or follow’ mess that it currently uses.