Why does JavaScript use prototyping?

Solution 1:

So, on to my problem: Why the hell would you want to do this? Why would you not just put the play function in Guitar to begin with? Why declare an instance and then start adding methods later?

JavaScript is not a 'classical' inheritance language. It uses prototypal inheritance. It's just the way it is. That being the case, the proper way to create a method on a 'class' is to put the method on the prototype. Note that I put 'class' in quotes since, strictly speaking, JS has no notion of a 'class'. In JS you work with objects, and the things that look like classes are just constructor functions.

You can declare the method in the function that defines Guitar; however, when you do that, every new Guitar gets its own copy of the play method. Putting it on the prototype is more efficient at runtime when you start creating Guitars: every instance shares the same play method, but the context (this) is set when the method is invoked, so it acts like the instance methods you are used to in a classical inheritance language.

Note the difference. In the 'why not this way' example you posted, every time you create a new Guitar, you need to create a new play method that is the same as every other play method. If play is on the prototype, however, all Guitars borrow from the same prototype, so they all share the same code for play. It's the difference between x guitars, each with an identical copy of play (so you have x copies of play), and x guitars sharing one copy of play no matter how many Guitars you create. The trade-off, of course, is that at runtime play has to be associated with the object on which it is called, but JavaScript lets you do that very efficiently and easily (namely with the call and apply methods).
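To make the trade-off concrete, here is a minimal sketch, assuming a Guitar constructor like the one in your question (the color field and chord argument are just illustrative):

// Every instance gets its own copy of play -- x guitars, x copies of the function.
function GuitarPerInstance(color) {
    this.color = color;
    this.play = function (chord) {
        console.log('Playing ' + chord + ' on a ' + this.color + ' guitar');
    };
}

// All instances share the single play function on the prototype.
function Guitar(color) {
    this.color = color;
}
Guitar.prototype.play = function (chord) {
    console.log('Playing ' + chord + ' on a ' + this.color + ' guitar');
};

console.log(new Guitar('black').play === new Guitar('red').play);   // true -- one shared function

console.log(new GuitarPerInstance('black').play ===
            new GuitarPerInstance('red').play);                     // false -- a copy per instance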

Many JavaScript frameworks define their own utilities for creating 'classes'. Typically they allow you to write code like the example you said you would like to see; behind the scenes, they are putting the functions on the prototype for you.


EDIT -- in answer to your updated question, why can't one do

function Guitar() {
    this.prototype.play = function () { /* ... */ };
}

It has to do with how JavaScript creates objects with the 'new' keyword. See the second answer here -- basically, when you create an instance, JavaScript creates the object and then assigns the prototype properties. Inside the constructor, 'this' is the freshly created instance, which has no 'prototype' property of its own, so this.prototype.play doesn't really make sense; in fact, if you try it you get an error because this.prototype is undefined.
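For the record, a quick sketch of both the failure and the usual fix (the play body is just a placeholder):

function Guitar() {
    console.log(this.prototype);   // undefined -- instances have no 'prototype' property
    // this.prototype.play = ...   // would therefore throw a TypeError
}
new Guitar();

// The prototype lives on the constructor function itself, not on 'this':
Guitar.prototype.play = function () { console.log('strum'); };
new Guitar().play();   // works: play is found via the prototype chain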

Solution 2:

As a note before beginning -- I am saying ECMAScript here rather than JavaScript, as ActionScript 1 and 2 exhibit exactly the same behavior at runtime.

Those of us who work in a more "traditional" object-oriented world (read: Java/C#/PHP) find the idea of extending a class at runtime almost entirely foreign. I mean, seriously, this is supposed to be my OBJECT. My OBJECT will go forth and DO THINGS which have been SET FORTH. Child classes EXTEND other CLASSES. It has a very structured, solid, set-in-stone feel to it. And, for the most part, this works, and it works reasonably well. (This is also one of the reasons Gosling has argued -- fairly effectively, I think most of us would agree -- that Java is so well suited to massive systems.)

ECMAScript, on the other hand, follows a much more primitive concept of OOP. In ECMAScript, class inheritance is, believe it or not, essentially one gigantic decorator pattern. And this isn't just the kind of decoration you might say is present in C++ and Python (and you can easily say that those are decorators). ECMAScript lets you take a live instance and use it as a class's prototype.

Imagine this happening in Java:

class Foo {
    Foo(){}
}

class Bar extends new Foo() {
    // AAAHHHG!!!! THE INSANITY!
}

But, that is exactly what is available in ECMAScript (I believe Io also allows for something like this, but don't quote me).
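In ECMAScript the "insane" Java fantasy above is just an ordinary assignment; a minimal sketch using the same Foo/Bar names:

function Foo() {}
function Bar() {}

// Bar 'extends' a live instance: that instance becomes Bar's prototype.
Bar.prototype = new Foo();

var b = new Bar();
console.log(b instanceof Foo);   // true -- b delegates to an actual Foo instance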

The reason I said this is primitive is that this design philosophy is very much bound up with the way McCarthy used the lambda calculus to implement Lisp. It has more to do with the idea of closures than, say, Java's brand of OOP does.

So, back in the day, Alonzo Church wrote The Calculi of Lambda-Conversion, the seminal work on the lambda calculus. In it he proposes two ways of looking at multi-argument functions. First, they can be considered functions which accept singletons, tuples, triples, etc. Basically, f(x, y, z) would be understood as f accepting the single parameter (x, y, z). (By the way, it is my humble opinion that this is a primary impetus for the structure of Python's argument lists, but that is conjecture.)

The other (and, for our purposes and honestly for Church's purposes, more important) view is the one McCarthy picked up: f(x, y, z) is instead translated into a chain of single-argument functions, f(x)(y)(z), where each call returns the function that consumes the next argument. Resolution of the outermost call can then rely on state built up by the inner calls. That stored, internal state is the very basis of the closure, which, in turn, is one of the bases of modern OOP. Closures allow enclosed, executable state to be passed between different points in a program.

A diversion courtesy of the book Land Of Lisp:

; Can you tell what this does? It is just like your favorite
; DB's sequence!
; (getx) returns the current value of x. (increment) adds 1 to x.
; The beauty? Once the let parens close, x only exists in the
; scope of the two functions! Passable enclosed executable state!
; It is amazingly exciting!
(let ((x 0))
  (defun increment () (setf x (+ 1 x)))
  (defun getx () x))

Now, what does this have to do with ECMAScript vs. Java? Well, when an object is created in ECMAScript it can follow that pattern almost exactly:

function getSequence() {
    var x = 0;
    function getX() { return x; }
    function increment() { x++; }
    // once again, passable, enclosed, executable state
    return { getX: getX, increment: increment };
}

And here is where the prototype starts coming in. Inheritance in ECMAScript means, "start with object A and add to it." It does not copy A; it takes that magical state and appends to it. And that is the very source and summit of why it must allow for MyClass.prototype.foo = 1.
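A minimal sketch of that append-rather-than-copy behavior, using the MyClass placeholder from the line above:

function MyClass() {}

var early = new MyClass();

// Append to the prototype after the fact...
MyClass.prototype.foo = 1;

// ...and even instances created earlier see it, because nothing was ever copied:
console.log(early.foo);           // 1
console.log(new MyClass().foo);   // 1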

As to why you would append methods "after the fact": for the most part it boils down to style preferences, really. Everything which happens inside of the original definition is doing no more than the same kind of decoration that happens outside of it.

For the most part it is stylistically beneficial to put all of your definitions in the same place, but sometimes that is not possible. jQuery extensions, for example, work by appending to the jQuery object's prototype directly. The Prototype library actually has a specialized way of expanding class definitions, which it uses consistently.
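For instance, a typical jQuery extension is little more than an assignment onto jQuery.fn, which is simply an alias for jQuery.prototype (the highlight plugin below is made up for illustration):

// jQuery.fn is jQuery.prototype, so every jQuery object picks this method up.
jQuery.fn.highlight = function () {
    return this.css('background-color', 'yellow');
};

// Usable on any jQuery object from then on:
jQuery('p').highlight();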

If I remember Prototype.js correctly, it is something like this:

var Sequence = function () {};

// Object.extend takes all keys & values from the right-hand object and
// adds them to the one on the left.
Object.extend(Sequence.prototype, (function () {
    var x = 0;
    function getX() { return x; }
    function increment() { x++; }
    return { getX: getX, increment: increment };
})());

As to using the prototype property inside of the original definition: that won't work in most cases, because "this" refers to the instance of the object being constructed. Unless the instance also had a "prototype" property of its own, this.prototype would necessarily be undefined!

Since every "this" inside of the original definition is an instance of that object, modifying "this" would be sufficient. But (and I smile as I say this, because it goes right along with prototypes) each "this" also has a constructor property.

// inside the original definition (the constructor), for example:
function Guitar() {
    // set the id of all instances of this "class" -- even those already
    // instantiated...
    this.constructor.prototype.id = 2;
    console.log(this.id);   // 2
}

Solution 3:

If you don't use the prototype, every time you call the constructor of Guitar, you will create a new function. If you are creating a lot of Guitar objects, you will notice a difference in performance.

Another reason to use prototypes is to emulate classical inheritance.

var Instrument = {
    play: function (chord) {
      alert('Playing chord: ' + chord);
    }
};

var Guitar = (function() {
    var constructor = function(color, strings) {
        this.color = color;
        this.strings = strings;
    };
    constructor.prototype = Instrument;
    return constructor;
}());

var myGuitar = new Guitar('Black', ['D', 'A', 'D', 'F', 'A', 'E']);
myGuitar.play('D5');

In this example, Guitar extends Instrument, and therefore has a 'play' function. You can also override the Instrument's 'play' function in Guitar if you like.
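One caveat: in the snippet above, Guitar.prototype is Instrument itself, so assigning a new play onto Guitar.prototype would change Instrument.play for everyone. A sketch of one way to override safely -- giving Guitar its own prototype that delegates to Instrument via Object.create; this variation (and the altered message) is my addition, not part of the answer above:

var Guitar = (function () {
    var constructor = function (color, strings) {
        this.color = color;
        this.strings = strings;
    };
    // An intermediate object that delegates to Instrument, so overriding
    // play here leaves Instrument.play untouched.
    constructor.prototype = Object.create(Instrument);
    constructor.prototype.play = function (chord) {
        alert('Strumming chord: ' + chord);
    };
    return constructor;
}());

new Guitar('Black', ['D', 'A', 'D', 'F', 'A', 'E']).play('D5');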

Solution 4:

JavaScript is a prototypical language, a rather rare breed. This is not arbitrary at all: it is a requirement for a language that is evaluated live and must support "eval", dynamic modification, and a REPL.

Prototypical inheritance can be understood, by comparison with class-based Object Oriented Programming, as OOP built on runtime "live" definitions instead of static, predefined ones.

Edit: another explanation, borrowed from the link below, is also useful. In a class-based Object Oriented language (Class -> Object/Instance), all the possible properties of any given X are enumerated in Class X, and an instance fills in its own specific values for each of them. In prototypical inheritance you only describe the differences between an existing, live X and a similar-but-different live Y, and there is no Master Copy.

http://web.media.mit.edu/~lieber/Lieberary/OOP/Delegation/Delegation.html
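A minimal sketch of that "describe only the differences" idea (the instrument/guitar objects are illustrative, not taken from the link):

// A live object -- no class definition anywhere.
var instrument = {
    play: function (chord) { console.log('Playing ' + chord); }
};

// guitar records only how it differs from instrument;
// everything else is delegated to the live instrument object.
var guitar = Object.create(instrument);
guitar.strings = 6;

guitar.play('D5');            // found on instrument via delegation
console.log(guitar.strings);  // 6 -- the only property guitar itself carries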

First off, you need to understand the context. JavaScript is an interpreted language that is executed, and can be modified, in a live environment; the program's internal structure itself can be modified at runtime. This places different constraints on it, and gives it different advantages, compared to any compiled language, or even a CLR-hosted language such as the .NET stack.

The concept of "eval"/REPL requires dynamic variable typing. You can't effectively live-edit an environment where you have to have predefined, monolithic class-based inheritance structures; it's pointless, you might as well just precompile to assembly or bytecode.

Instead of that we have prototypical inheritance, where you link to the properties of an INSTANCE of an object. The idea is that if you're in an all-live environment, classes (static, predefined constructs) are unnecessarily limiting: classes are built on constraints that don't exist in JavaScript.

With this strategy JavaScript basically bets on everything being "live". Nothing is off-limits; there are no "defined and done" classes you can never touch. There are no "One True Scotsman" variables that are holier than your code, because everything obeys the same rules as the code you decide to write today.

The consequences of this are pronounced, and also very much human-based. It pushes language implementers to use a light, efficient touch when providing native objects; if they do a poor job, the mob will simply usurp the platform and rebuild their own (read the source of MooTools: it literally redefines/reimplements everything, starting from Function and Object). This is how compatibility is brought to platforms like old Internet Explorer versions. It promotes libraries that are shallow and narrow, yet densely functional. Deep inheritance results in the most-used parts being (easily) cherry-picked out and becoming the ultimate go-to library. Wide libraries result in fracturing as people pick and choose which pieces they need, because taking a bite out is easy, instead of impossible as in most other environments.

The concept of micro-libraries flourishes in JavaScript in a way that is unique, and it can absolutely be traced back to the fundamentals of the language. It encourages efficiency and brevity, in terms of human consumption, in ways no other language (that I know of) promotes.