Articles in the ‘JavaScript’ Category

Improving JavaScript XML Node Finding Performance by 2000%

In my work, I’m parsing web services all of the time. Most of the time they’re XML, which doesn’t make the best use of bandwidth or CPU time compared to JSON; however, if it’s all that you’re given then you can certainly get by. I’ve been looking into ways to speed up XML document traversal with jQuery after the current best-practice method was removed.

The basic way to find certain nodes in an XML web service is to use the .find() method. This is used heavily by the SPServices jQuery helper (which is, in general, a great library).

$(xData.responseXML).find("[nodeName='z:row']").each(function() {
// Do stuff
});

That’s absolutely fine – it finds every element whose nodeName is z:row. However, since jQuery 1.7, this selector no longer works. I raised this regression in the jQuery bug tracker and was encouraged to find a solution: another selector that worked in all browsers. Unfortunately, at the time I couldn’t come up with anything better than this:

$(xData.responseXML).find("z\\:row, row").each(function() {
// Do stuff
});

The “z\\:row” selector works in IE and Firefox, and the “row” selector works in Chrome and Safari (I’m unable to test in Opera here, sorry). This was flagged as the solution to the problem, and the jQuery team wouldn’t be making any fixes to the core.

After a few weeks of using this method, I noticed that the site had been slowing down, especially in IE, and I thought this new selector was the cause. So, I looked into the performance numbers using jsPerf and I raised a bug too. My first test was to see what the current solution was doing, and whether jQuery 1.7 had made things worse.
Test case: http://jsperf.com/node-vs-double-select/4

So, performance in Chrome is identical for each of the selectors (and the same goes for Firefox and Safari), but IE drops nearly half of its operations because it has to perform that second selector.

It’s still not very high performance though, and so I looked for other solutions.

Dmethvin suggested:

Did you try the custom plugin in the ticket? If you’re having performance issues that should be much faster.

The plugin he’s referring to is this:

jQuery.fn.filterNode = function(name) {
   return this.filter(function() {
      return this.nodeName === name;
   });
};

This filters elements by their nodeName, comparing it against the name that you gave it. The issue is that .filter() does not traverse down the tree; it stays at the level of the set of elements that it was given. Therefore, a quick solution was this:

$(xData.responseXML).children().children().children().children().children().children().children().filterNode('z:row').each(function() {
// Do stuff
});

jsPerf Test: http://jsperf.com/node-vs-double-select/1

Wow, that’s about 50 times faster. Even IE beats Chrome when doing this operation. The simple reason is that it’s got a smaller set of objects to go through and it’s comparing a single attribute rather than parsing the text of the XML to try and find the namespaced element.

Still, I wasn’t satisfied as in order to achieve that performance, I had to know how deep I was going to be going in order to retrieve the set. So, back to the bug and another suggestion by dmethvin:

If you’re going that deep, use a filter function passed to .find(). How does that fare?

After a few attempts, a colleague of mine came up with this beauty:

$.fn.filterNode = function(name) {
   return this.find('*').filter(function() {
      return this.nodeName === name;
   });
};

jsPerf test: http://jsperf.com/node-vs-double-select/3

Using .find('*').filterNode('z:row') increased performance to 200x faster than the original .find('z:row') selector.

I mean, wow, that’s incredible. On the graph, the tiny little bits of colour are the original selectors, and the bars only 20% of the way up are the previous massive performance increase from using .filter(). It should also be noted that IE8 performance using this selector increased in jQuery 1.7 compared to jQuery 1.6.

Side-note: IE10’s JavaScript performance is almost equal to that of Google Chrome. In comparison, IE9 (not shown) is about half of that.

The reason for this massive increase is that it’s backed by native selectors. A .find('*') translates into element.querySelectorAll('*'), which is very fast compared to chaining 8 .children() calls.
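Stripped of jQuery and the DOM, the core of the winning approach is just a linear scan comparing each element’s nodeName. A minimal DOM-free sketch (the objects below are hypothetical stand-ins for parsed XML nodes, shaped like the z:row elements above):

```javascript
// Filter a flat list of node-like objects by nodeName.
// This is the same comparison the filterNode plugin performs
// on the result of .find('*').
function filterByNodeName(nodes, name) {
  return nodes.filter(function (node) {
    return node.nodeName === name;
  });
}

// Hypothetical sample data standing in for a parsed response.
var nodes = [
  { nodeName: 'z:row', ows_Title: 'First item' },
  { nodeName: 'rs:data' },
  { nodeName: 'z:row', ows_Title: 'Second item' }
];

var rows = filterByNodeName(nodes, 'z:row');
// rows.length === 2
```

A single string comparison per element is cheap, which is why this beats asking the selector engine to parse and match a namespaced selector.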

Summary
Dealing with large amounts of data from web services needs to be fast. Using a simple .find() on the node name no longer works, so alternatives have been investigated. The fastest method, using a short one-line plug-in, improves performance by up to 2000% compared to the old method.

I’ll be notifying the SPServices group of this post, and hopefully they can improve the performance of their library.

Sketchnotes from #LWS3D – A 50-line WebGL app

After a summer break, London Web Standards was back with an evening of WebGL with Ilmari Heikkenen from Google and a short demo from Carlos Ulloa of HelloEnjoy. Sketchnotes are below.

Carlos Ulloa of Brighton-based HelloEnjoy showed off two demos that he made using Three.js and WebGL. The first was HelloRacer, an interactive look at the 2010 Ferrari F1 car that you can even drive and do handbrake turns in. The second demo got its premiere at LWS: an interactive music video for Ellie Goulding’s “Lights”. Honestly, it was extremely cool, on a Ro.me “3 Dreams of Black” scale. It’ll appear at the linked URL in the next week or so. There’s a great Q&A session from the event on the London Web Standards blog, with more detail on how they did it.

Ilmari Heikkenen showed the gathered crowd how to make a basic WebGL app using Three.js in about 50 lines. He showed off all of the components that you need: a renderer, scene, camera, mesh and lights (and shadows). He went into more depth about vertex shaders and fragment shaders, the GPU effects that make everything look a lot more real.

Ilmari gave examples of a few uses, including games, 3D bar charts and scatter graphs. He then started animating all of these, including a 10,000-point scatter graph that moved in real time. Finally, he demonstrated a loader for Collada meshes (supported by Maya) and brought in a monster that, with a few lines of code, started walking around the screen.

Overall, it was a great introduction to the subject, one worth a lot more of your time.

Ilmari’s slides can be found on his blog.

Tips and Problems when Enhancing SharePoint with JavaScript

If you’ve developed for Microsoft’s SharePoint before (I’m talking about 2007 here, but this applies to WSS2 and 2010 as well), then you’ll know that you can reach the limits of its functionality very quickly. This is a big problem if you’re making a zero-code solution, i.e. you have no access to Visual Studio and can’t create web parts. This is more common than you’d think, especially in large organisations that use SharePoint extensively. For this, the only choice is to use SharePoint Designer 2007 (SPD), but it’s not pleasant because, frankly, SPD sucks. I’ve not found a program that crashes as much as SPD, or that performs so poorly when presented with the most basic tasks. If you make a page that is too complex – too many web parts, large data sources, or lots of conditionals, connections and filters – it can take anywhere up to 20 minutes to perform a single action.

SharePoint crash

Very quickly, you have to start looking at alternatives to complex data views. These days, the go-to technology is JavaScript, which is very powerful and can allow developers to access almost every SharePoint function through web services. However, this functionality comes at the cost of accessibility. So, the first piece of advice: if you can avoid using JavaScript, do so because otherwise the site won’t be accessible. See these links for why accessibility is a good thing.

Unfortunately, SharePoint is so limited that often JavaScript is the only way to add functionality or to correct formatting. In this case, use of simple SPD functions and <noscript> tags can keep your content accessible, allowing you to progressively enhance the user’s experience on top.

The final hurdle to cross before you can create great JavaScript-based interfaces in SharePoint is IE.

Internet Explorer, especially IE6, has appalling developer tools for JavaScript debugging. There’s no console, no inspector, no breakpoint facility: nothing. It’s almost impossible to debug your problems because they all manifest themselves as runtime errors on some arbitrary line of the page.

The best way to debug JavaScript destined for IE is with Google Chrome. It doesn’t sound right, but I promise it’s the easiest way to make your code work. Both Chrome’s Web Inspector and Firefox’s Firebug work very well with SharePoint, though my personal preference is Chrome as it works better with Windows’ NTLM authentication system (it doesn’t ask you for your login details, just takes them from Windows). They allow you to check and validate your code so that it runs as expected. You should be able to achieve this in half the time it would take if you were developing for IE alone, using alerts to work out what’s going wrong.

There’s another benefit to working this way around: your code will work on standards-compliant browsers, and any that come along in the future. This is always a good thing, as you don’t know when the organisation will roll out IE8/9 to its users, nor can you always guarantee that a user will be using IE. It’s important that sites are ready for these changes and that best-practice development is maintained.

In summary, if you have to use JavaScript, ensure the page content will work without it. If you are doing any major development work, do it in Chrome and reap the benefits of its debugger, then make it work in IE.

The Limitations of WebSQL and Offline apps

Web applications are the next big thing on the web. Being able to take web sites and make them run alongside native apps, having them work offline and perform just as well as their native counterparts, is the next step along the road. As usual with all new technology, there are some limitations.

There are three pieces of technology that are combined to make a web app: app caching, local storage and a web database technology.
App caching tells browsers what files to store offline and retrieve from disk. These are permanently stored until new instructions are received, unlike traditional caching which works until the cache is full, then starts removing files.
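As a sketch of how those instructions look in practice (the file names here are hypothetical): a cache manifest lists the files to store offline, is served with the text/cache-manifest MIME type, and is referenced from the page via <html manifest="app.appcache">. Browsers only re-fetch the listed files when the manifest’s bytes change, hence the version comment.

```
CACHE MANIFEST
# v1 2012-09-01 -- bump this comment to force a re-download

CACHE:
index.html
app.js
style.css

NETWORK:
*
```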

Limitation 1
In iOS 4.3, app caching for offline web apps is broken and does not work.

Local storage is for key/value pairs of information. It’s for simple things like settings and values that need to be retrieved quickly. It’s been called “cookies on crack” before, but it’s really just a very fast dictionary for simple data.

Limitation 2
Depending on your browser, localStorage will keep 5-10MB of data – roughly 2.5-5 million characters, since strings are stored as two-byte UTF-16. If all you want to do is store small serialised JSON objects that aren’t related, use this.
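Because localStorage only stores strings, objects go through JSON on the way in and out. The sketch below uses a plain object with the same setItem/getItem shape as localStorage, so it also runs outside a browser; in a real page you’d call window.localStorage directly:

```javascript
// Stand-in for window.localStorage: same API shape, in-memory only.
var store = {
  data: {},
  setItem: function (key, value) { this.data[key] = String(value); },
  getItem: function (key) {
    return this.data.hasOwnProperty(key) ? this.data[key] : null;
  }
};

// Save a small settings object as a serialised JSON string.
var settings = { theme: 'dark', fontSize: 14 };
store.setItem('settings', JSON.stringify(settings));

// getItem always returns a string (or null), so parse it back.
var loaded = JSON.parse(store.getItem('settings'));
// loaded.theme === 'dark'
```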

Web databases are client-side data storage for larger amounts of more complex data. Whilst you can make web apps with just app caching and local storage, it’s not going to be very interactive, or the data may be unstructured, or there will be lots of Ajax calls to fetch data. Web databases are where this technology gets a bit dodgy.

Originally, there was Google Gears, a plug-in which brought a SQLite database to help web apps run offline. This was then standardised into the WebSQL module and developed as a SQL database available through JavaScript. Google, Apple and Opera all implemented it and it can be found in iOS and Android devices today.

Limitation 3
Chrome has a hard 5MB database size limit – you will need to use a chrome web app to remove this limit.
Limitation 4
Chrome doesn’t support OUTER JOIN or RIGHT JOIN statements.
Limitation 5
Debugging is very difficult with large amounts of data as the web inspector isn’t efficient at displaying a thousand rows (and will crash with around 20,000 rows, around 2MB of data).
Limitation 6
Version numbers are not taken into account. Don’t bother with them.
Limitation 7
All calls are asynchronous – if you rely on results at a certain time, be prepared to write a lot of callback functions. Your code can get messy very quickly.
Limitation 8
Performance is sluggish if you don’t batch up statements into transactions.
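To illustrate limitation 8 (using a hypothetical stand-in for openDatabase()’s result, since WebSQL only exists in the browser): each transaction carries fixed overhead, so one transaction per statement is the slow pattern, and batching every statement into a single transaction is the fast one.

```javascript
// Fake database that counts transactions and statements, mimicking
// the transaction/executeSql shape of a WebSQL database object.
function makeFakeDb() {
  var db = { transactions: 0, statements: 0 };
  db.transaction = function (fn) {
    db.transactions += 1;
    fn({ executeSql: function (sql, args) { db.statements += 1; } });
  };
  return db;
}

var rows = ['a', 'b', 'c'];

// Slow pattern: one transaction per INSERT.
var slow = makeFakeDb();
rows.forEach(function (row) {
  slow.transaction(function (tx) {
    tx.executeSql('INSERT INTO items (name) VALUES (?)', [row]);
  });
});

// Fast pattern: every INSERT batched into a single transaction.
var fast = makeFakeDb();
fast.transaction(function (tx) {
  rows.forEach(function (row) {
    tx.executeSql('INSERT INTO items (name) VALUES (?)', [row]);
  });
});
// slow.transactions === 3, fast.transactions === 1
```

Both patterns run the same three statements; only the transaction count differs, and with real WebSQL that difference is where the time goes.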

Even better still, WebSQL is no longer in development, so these problems will remain. Microsoft and Mozilla said they didn’t like it and wanted to use a different technology: IndexedDB.

IndexedDB is on its way, but it’s not yet mature enough to be used, nor is it implemented in any of the mobile browsers.

Advice
For offline apps, you’re better off sticking with WebSQL until IndexedDB matures.

Hopefully, some kind developer will come along and write a technology-agnostic wrapper, maybe that person will be you, the reader of this article. If you’re thinking about it, let me know :-)