If there are few objects it makes little difference, but when their number is large, putting the method on the prototype has the advantage that only one copy of the function exists, rather than one per instance (functions are first-class objects in JavaScript). The representation below illustrates this:
{x,y,z,a,b,c} {x,y,z,a,b,c} {x,y,z,a,b,c} {x,y,z,a,b,c} {x,y,z,a,b,c} {x,y,z,a,b,c}
Vs.:
{a,b,c}
^
|
+-------+-------+-------+-------+-------+
| | | | | |
{x,y,z} {x,y,z} {x,y,z} {x,y,z} {x,y,z} {x,y,z}
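The two layouts above can be made concrete with a minimal sketch (the `Ponto`/`norma` names here are just illustrative, not from the original question):

```javascript
// Per-instance method: each object carries its own copy of the function.
function PontoA(x, y, z) {
  this.x = x; this.y = y; this.z = z;
  this.norma = function () {
    return Math.sqrt(this.x * this.x + this.y * this.y + this.z * this.z);
  };
}

// Prototype method: a single function shared by all instances.
function PontoB(x, y, z) {
  this.x = x; this.y = y; this.z = z;
}
PontoB.prototype.norma = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y + this.z * this.z);
};

var a1 = new PontoA(3, 4, 0), a2 = new PontoA(3, 4, 0);
var b1 = new PontoB(3, 4, 0), b2 = new PontoB(3, 4, 0);
console.log(a1.norma === a2.norma); // false: two distinct function objects
console.log(b1.norma === b2.norma); // true: both share the prototype's function
```

Both versions compute the same result; the difference is only in how many function objects exist in memory.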
This does not necessarily mean that time performance will be worse; it only means that copying the function into each instance consumes more memory. Often these two factors are just a tradeoff (i.e. space is spent to reduce time, or vice versa). But in this particular case, I believe the solution with the prototype works out better on both counts, since:
- If an object takes up more memory, fewer objects fit in a cache page, so the number of misses is greater;
- If the function being called repeatedly lives in the prototype, and the prototype is in the cache, access to it is as fast as it can be (the extra indirection overhead should be negligible).
Again, this is just my personal interpretation; the only way to know for certain is to test. This jsperf example, for instance, gave results consistent with my interpretation (in Chrome, at least).
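For a rough idea of what such a comparison measures (this is not a rigorous benchmark; a tool like jsperf controls for warm-up and JIT effects that this sketch ignores, and all names here are hypothetical):

```javascript
// Version with the method on the prototype.
function ComPrototipo() {}
ComPrototipo.prototype.f = function () { return 1; };

// Version with the method recreated in every instance.
function SemPrototipo() {
  this.f = function () { return 1; };
}

// Time n rounds of "construct an object and call its method".
function medir(Ctor, n) {
  var inicio = Date.now();
  for (var i = 0; i < n; i++) new Ctor().f();
  return Date.now() - inicio;
}

console.log("prototype:", medir(ComPrototipo, 1e6), "ms");
console.log("instance: ", medir(SemPrototipo, 1e6), "ms");
```

Absolute numbers will vary by engine and version, which is exactly why measuring in the target environment matters.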
P.S. Depending on how it is done, there may be a single function object and only several references to it. Example:
function foo() { ... }
function MinhaClasse(...) {
    ...
    this.foo = foo;
}
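Filled in with a trivial body (purely illustrative) so it can be run, this pattern can be verified directly:

```javascript
function foo() { return "ok"; }

function MinhaClasse() {
  this.foo = foo; // each instance stores only a reference to the same function
}

var a = new MinhaClasse();
var b = new MinhaClasse();
console.log(a.foo === b.foo); // true: one function object, two references
```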
In this case there is still the memory consumed by the reference itself, but the impact is not as great. On the other hand, if the function is defined inline, especially if it captures variables from the enclosing function (see closure), then the space requirement becomes even greater, since there is in fact an extra function object for each instance:
function MinhaClasse(...) {
    ...
    this.foo = function() { ... }
}
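A runnable sketch of this last case (the `Contador` name and body are just illustrative) shows that each instance really does get its own function object, precisely because each closure captures a different variable:

```javascript
function Contador(inicio) {
  var atual = inicio; // captured by the closure below

  // A new function object is created on every call to the constructor.
  this.incrementa = function () { return ++atual; };
}

var c1 = new Contador(0);
var c2 = new Contador(10);
console.log(c1.incrementa()); // 1
console.log(c2.incrementa()); // 11
console.log(c1.incrementa === c2.incrementa); // false: one closure per instance
```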