Question

I've been looking at the source code of my.class.js to find out what makes it so fast on Firefox. Here's the snippet of code used to create a class:

my.Class = function () {
    var len = arguments.length;
    var body = arguments[len - 1];
    var SuperClass = len > 1 ? arguments[0] : null;
    var hasImplementClasses = len > 2;
    var Class, SuperClassEmpty;

    if (body.constructor === Object) {
        Class = function () {};
    } else {
        Class = body.constructor;
        delete body.constructor;
    }

    if (SuperClass) {
        SuperClassEmpty = function() {};
        SuperClassEmpty.prototype = SuperClass.prototype;
        Class.prototype = new SuperClassEmpty();
        Class.prototype.constructor = Class;
        Class.Super = SuperClass;
        extend(Class, SuperClass, false);
    }

    if (hasImplementClasses)
        for (var i = 1; i < len - 1; i++)
            extend(Class.prototype, arguments[i].prototype, false);    

    extendClass(Class, body);

    return Class;
};

The extend function is simply used to copy the properties of the second object onto the first (optionally overriding existing properties):

var extend = function (obj, extension, override) {
    var prop;
    if (override === false) {
        for (prop in extension)
            if (!(prop in obj))
                obj[prop] = extension[prop];
    } else {
        for (prop in extension)
            obj[prop] = extension[prop];
        if (extension.toString !== Object.prototype.toString)
            obj.toString = extension.toString;
    }
};

The extendClass function copies all the static properties onto the class, as well as all the public properties onto the prototype of the class:

var extendClass = my.extendClass = function (Class, extension, override) {
    if (extension.STATIC) {
        extend(Class, extension.STATIC, override);
        delete extension.STATIC;
    }
    extend(Class.prototype, extension, override);
};

This is all pretty straightforward. When you create a class, it simply returns the constructor function you provide it.
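To make the pieces concrete, here is a self-contained usage sketch: extend, extendClass and my.Class are copied verbatim from the snippets above, while the Person and Dreamer classes are invented for illustration (the real my.class.js API may differ in details):

```javascript
var my = {};

// extend and extendClass, copied from the question's snippets
var extend = function (obj, extension, override) {
    var prop;
    if (override === false) {
        for (prop in extension)
            if (!(prop in obj))
                obj[prop] = extension[prop];
    } else {
        for (prop in extension)
            obj[prop] = extension[prop];
        if (extension.toString !== Object.prototype.toString)
            obj.toString = extension.toString;
    }
};

var extendClass = my.extendClass = function (Class, extension, override) {
    if (extension.STATIC) {
        extend(Class, extension.STATIC, override);
        delete extension.STATIC;
    }
    extend(Class.prototype, extension, override);
};

my.Class = function () {
    var len = arguments.length;
    var body = arguments[len - 1];
    var SuperClass = len > 1 ? arguments[0] : null;
    var hasImplementClasses = len > 2;
    var Class, SuperClassEmpty;

    if (body.constructor === Object) {
        Class = function () {};
    } else {
        Class = body.constructor;
        delete body.constructor;
    }

    if (SuperClass) {
        SuperClassEmpty = function () {};
        SuperClassEmpty.prototype = SuperClass.prototype;
        Class.prototype = new SuperClassEmpty();
        Class.prototype.constructor = Class;
        Class.Super = SuperClass;
        extend(Class, SuperClass, false);
    }

    if (hasImplementClasses)
        for (var i = 1; i < len - 1; i++)
            extend(Class.prototype, arguments[i].prototype, false);

    extendClass(Class, body);

    return Class;
};

// Usage (class names are made up for illustration):
var Person = my.Class({
    STATIC: { AGE_OF_MAJORITY: 18 },
    constructor: function (name) { this.name = name; },
    greet: function () { return 'Hi, ' + this.name; }
});

var Dreamer = my.Class(Person, {
    constructor: function (name, dream) {
        Dreamer.Super.call(this, name);
        this.dream = dream;
    },
    describe: function () { return this.greet() + ', I dream of ' + this.dream; }
});

var ada = new Dreamer('Ada', 'engines');
// ada.describe() === 'Hi, Ada, I dream of engines'
// Dreamer.AGE_OF_MAJORITY === 18 (statics are copied down via extend)
```

Note how the body's own constructor property becomes the returned class, and how Class.Super lets the subclass constructor delegate to its parent.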

What I don't understand, however, is how creating an instance of this constructor can execute faster than creating an instance of the same constructor written in Vapor.js.

This is what I'm trying to understand:

  1. How do constructors of libraries like my.class.js create so many instances so quickly on Firefox? The constructors of the libraries are all very similar. Shouldn't the execution time also be similar?
  2. Why does the way the class is created affect the execution speed of instantiation? Aren't definition and instantiation separate processes?
  3. Where is my.class.js gaining this speed boost from? I don't see any part of the constructor code which should make it execute any faster. In fact traversing a long prototype chain like MyFrenchGuy.Super.prototype.setAddress.call should slow it down significantly.
  4. Is the constructor function being JIT compiled? If so then why aren't the constructor functions of other libraries also being JIT compiled?

Solution

I don't mean to offend anyone, but this sort of thing really isn't worth the attention, IMHO. Almost any speed-difference between browsers is down to the JS engine. The V8 engine is very good at memory management, for example; especially when you compare it to IE's JScript engines of old.

Consider the following:

var closure = (function()
{
    var closureVar = 'foo',
    someVar = 'bar',
    returnObject = {publicProp: 'foobar'};
    returnObject.getClosureVar = function()
    {
        return closureVar;
    };
    return returnObject;
}());

Last time I checked, Chrome actually GC'ed someVar, because it wasn't being referenced by the return value of the IIFE (which closure references), whereas both FF and Opera kept the entire function scope in memory.
In this snippet, it doesn't really matter, but for libs that are written using the module-pattern (AFAIK, that's pretty much all of them) that consist of thousands of lines of code, it can make a difference.

Anyway, modern JS-engines are more than just "dumb" parse-and-execute things. As you said: there's JIT compilation going on, but there's also a lot of trickery involved to optimize your code as much as possible. It could very well be that the snippet you posted is written in a way that FF's engine just loves.
It's also quite important to remember that there is something of a speed battle going on between Chrome and FF over who has the fastest engine. Last time I checked, Mozilla's SpiderMonkey engine was said to outperform Google's V8; whether that still holds true today, I can't say... Since then, both Google and Mozilla have been working on their engines...

Bottom line: speed differences between various browsers exist - nobody can deny that, but a single point of difference is insignificant: you'll never write a script that does just one thing over and over again. It's the overall performance that matters.
You have to keep in mind that JS is a tricky bugger to benchmark, too: just open your console, write some recursive function, and run it 100 times in FF and Chrome. Compare the time each recursion takes, and the overall run. Then wait a couple of hours and try again... Sometimes FF might come out on top, whereas other times Chrome might be faster, still. I've tried it with this function:

var bench = (function()
{
    var mark = {start: [new Date()],
                end: [undefined]},
    i = 0,
    rec = function(n)
    {
        return +(n === 1) || rec(n%2 ? n*3+1 : n/2);
        //^^ Unmaintainable, but fun code ^^\\
    };
    while(i++ < 100)
    {//new date at start, call recursive function, new date at end of recursion
        mark.start[i] = new Date();
        rec(1000);
        mark.end[i] = new Date();
    }
    mark.end[0] = new Date();//after 100 rec calls, first element of start array vs first of end array
    return mark;
}());

But now, to get back to your initial question(s):

First off: the snippet you provided doesn't quite compare to, say, jQuery's $.extend method: there's no real cloning going on, let alone deep cloning. It doesn't check for circular references at all, which most other libs I've looked into do. Checking for circular references slows the entire process down, but it can come in handy from time to time (example 1 below). Part of the performance difference could be explained by the fact that this code simply does less, so it needs less time.

Secondly: declaring a constructor (classes don't exist in JS) and creating an instance are, indeed, two different things (though declaring a constructor does itself create an object: a Function instance, to be exact). The way you write your constructor can make a huge difference, as shown in example 2 below. Again, this is a generalization and might not apply to certain use-cases on certain engines: V8, for example, tends to create a single function object shared by all instances, even if that function is defined inside the constructor - or so I'm told.

Thirdly: traversing a long prototype chain, as you mention, is not as unusual as you might think; far from it, actually. You're constantly traversing chains of two or three prototypes, as shown in example 3. This shouldn't slow you down: it's simply inherent to the way JS resolves function calls and expressions.

Lastly: it's probably being JIT-compiled, but saying that other libs aren't JIT-compiled just doesn't stack up. They might be; then again, they might not. As I said before: different engines perform better at some tasks than others... It might well be that FF JIT-compiles this code and other engines don't.
The main reasons I can see why other libs wouldn't be JIT-compiled are: checking for circular references, deep-cloning capabilities, and dependencies (i.e. an extend method that is used all over the place, for various reasons).

example 1:

var shallowCloneCircular = function(obj)
{//clone object, check for circular references
    function F(){};
    var clone, prop;
    F.prototype = obj;
    clone = new F();
    for (prop in obj)
    {//only copy properties, inherent to instance, rely on prototype-chain for all others
        if (obj.hasOwnProperty(prop))
        {//the ternary deals with circular references
            clone[prop] = obj[prop] === obj ? clone : obj[prop];//if property is reference to self, make clone reference clone, not the original object!
        }
    }
    return clone;
};

This function clones an object's first level; any object referenced by a property of the original will still be shared between original and clone. A simple fix would be to call the function above recursively, but then you'll have to deal with the nasty business of circular references at all levels:

var circulars = {foo: 'bar'};
circulars.circ1 = circulars;//simple circular reference, we can deal with this
circulars.mess = {gotcha: circulars};//circulars.mess.gotcha ==> circular reference, too
circulars.messier = {messiest: circulars.mess};//oh dear, this is hell

Of course, this isn't the most common of situations, but if you want to write your code defensively, you have to acknowledge the fact that many people write mad code all the time...
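The recursive fix mentioned above can be sketched like this (my own sketch, not part of my.class.js): keep a list of every object already cloned, and whenever the same object reappears, at whatever depth, reuse its clone instead of recursing again:

```javascript
// Deep clone that survives circular references at any depth.
var deepCloneCircular = function (obj, seen) {
    var clone, prop, i;
    if (obj === null || typeof obj !== 'object') {
        return obj; // primitives (and functions) are returned as-is
    }
    seen = seen || []; // list of [original, clone] pairs met so far
    for (i = 0; i < seen.length; i++) {
        if (seen[i][0] === obj) {
            return seen[i][1]; // already cloned: reuse it, breaking the cycle
        }
    }
    clone = Array.isArray(obj) ? [] : {};
    seen.push([obj, clone]); // register BEFORE recursing, or cycles never terminate
    for (prop in obj) {
        if (obj.hasOwnProperty(prop)) {
            clone[prop] = deepCloneCircular(obj[prop], seen);
        }
    }
    return clone;
};
```

In modern engines a WeakMap would make the "already cloned?" lookup O(1) instead of the linear scan used here, but the pair list keeps the sketch ES5-compatible, matching the rest of this answer.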

Example 2:

function CleanConstructor()
{};
CleanConstructor.prototype.method1 = function()
{
     //do stuff...
};
var foo = new CleanConstructor(),
    bar = new CleanConstructor();
console.log(foo === bar);//false, we have two separate instances
console.log(foo.method1 === bar.method1);//true: the function-object, referenced by method1 has only been created once.
//as opposed to:
function MessyConstructor()
{
    this.method1 = function()
    {//do stuff
    };
}
var foo = new MessyConstructor(),
bar = new MessyConstructor();
console.log(foo === bar);//false, as before
console.log(foo.method1 === bar.method1);//false! for each instance, a new function object is constructed, too: bad performance!

In theory, declaring the first constructor is slower than the messy way: the function object referenced by method1 is created before a single instance exists, whereas the second example doesn't create method1 until the constructor is actually called. But the downsides are huge: forget the new keyword in the first example, and all you get is a return value of undefined. When you omit new with the second constructor, it attaches method1 to the global object, and of course it still creates a new function object on every call. You have a constructor (and a prototype) that is, in fact, idling... Which brings us to example 3.

example 3:

var foo = [];//create an array - empty
console.log(foo[123]);//logs undefined.

Ok, so what happens behind the scenes: foo references an object, an instance of Array, which in turn inherits from the Object prototype (just try Object.getPrototypeOf(Array.prototype)). It stands to reason, therefore, that an Array instance works in pretty much the same way as any object, so:

foo[123] ===> JS checks the instance for a property "123" (the number is coerced to a string)
    || --> property not found on the instance, check its prototype (Array.prototype)
    ==========> "123" not found on Array.prototype, check its prototype
         ||
         ==========> "123" not found on Object.prototype, check its prototype?
             ||
             =======> the prototype is null: return undefined

In other words, a chain like the one you describe isn't far-fetched or uncommon at all. It's how JS works, so expecting it to slow things down is like expecting your brain to fry because you're thinking: yes, you can get worn out by thinking too much, but just know when to take a break. The same goes for prototype chains: they're great; just know that each extra link makes lookups a tad slower, yes...

OTHER TIPS

I'm not entirely sure, but I do know that when programming, it is good practice to make the code as small as possible without sacrificing functionality. I like to call it minimalist code.

This can be a good reason to obfuscate code. Obfuscation shrinks the file by using shorter method and variable names, which makes it harder to reverse-engineer, makes it faster to download, and can even yield a small performance boost. Google's JavaScript code is heavily obfuscated, and that contributes to its speed.

So in JavaScript, bigger isn't always better. When I find a way I can shrink my code, I implement it immediately, because I know it will benefit performance, even if by the smallest amount.

For example, using the var keyword in a function where the variable isn't needed outside the function helps garbage collection, which provides a very small speed boost versus keeping the variable in memory.

With a library like this that produces "millions of operations per second" (Blaise's words), small performance boosts can add up to a noticeable/measurable difference.

So it is possible that my.class.js is "minimalist coded" or optimized in some manner. It could even be the var keywords.

I hope this helped somewhat. If it didn't help, then I wish you luck in getting a good answer.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow