Why is extending native objects a bad practice?

Tags: Javascript, Prototype, Prototypal Inheritance

Javascript Problem Overview


Every JS opinion leader says that extending the native objects is a bad practice. But why? Do we get a performance hit? Do they fear that somebody does it "the wrong way" and adds enumerable properties to Object.prototype, practically destroying all for-in loops on any object?

Take TJ Holowaychuk's should.js for example. He adds a simple getter to Object and everything works fine (source).

Object.defineProperty(Object.prototype, 'should', {
  set: function(){},
  get: function(){
    return new Assertion(Object(this).valueOf());
  },
  configurable: true
});

This really makes sense. For instance one could extend Array.

Object.defineProperty(Array.prototype, "remove", {
  set: function(){},
  get: function(){
    return removeArrayElement.bind(this);
  }
});
var arr = [0, 1, 2, 3, 4];
arr.remove(3);
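For reference, removeArrayElement isn't defined in the snippet above; a minimal sketch of what such a helper might look like (the name and behaviour are assumptions, removing the first occurrence of a value in place):

```javascript
// Hypothetical helper assumed by the getter above: removes the first
// occurrence of `value` from the array it is bound to, in place.
function removeArrayElement(value) {
  var i = this.indexOf(value);
  if (i !== -1) this.splice(i, 1);
  return this;
}

var arr = [0, 1, 2, 3, 4];
removeArrayElement.call(arr, 3); // same effect as arr.remove(3) above
// arr is now [0, 1, 2, 4]
```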

Are there any arguments against extending native types?

Javascript Solutions


Solution 1 - Javascript

When you extend an object, you change its behaviour.

Changing the behaviour of an object that will only be used by your own code is fine. But when you change the behaviour of something that is also used by other code there is a risk you will break that other code.

When it comes to adding methods to the Object and Array classes in JavaScript, the risk of breaking something is very high, due to how JavaScript works. Years of experience have taught me that this kind of thing causes all kinds of terrible bugs in JavaScript.

If you need custom behaviour, it is far better to define your own class (perhaps a subclass) instead of changing a native one. That way you will not break anything at all.
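For example, with ES2015 classes you can subclass Array instead of patching Array.prototype (a sketch; the remove method here is illustrative):

```javascript
// The custom behaviour lives on the subclass, so code that uses
// plain arrays is completely unaffected.
class MyArray extends Array {
  remove(value) {
    const i = this.indexOf(value);
    if (i !== -1) this.splice(i, 1);
    return this;
  }
}

const xs = MyArray.of(0, 1, 2, 3, 4);
xs.remove(3);            // xs is now [0, 1, 2, 4]
Array.prototype.remove;  // still undefined: the native type is untouched
```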

The ability to change how a class works without subclassing it is an important feature of any good programming language, but it is one that must be used rarely and with caution.

Solution 2 - Javascript

There's no measurable drawback like a performance hit; at least, nobody has mentioned any. So this is a question of personal preference and experience.

The main pro argument: it looks better and is more intuitive (syntactic sugar). A type/instance-specific function should be specifically bound to that type/instance.

The main con argument: code can interfere. If lib A adds a function, it could overwrite lib B's function. This can break code very easily.

Both have a point. When you rely on two libraries that directly change your types, you will most likely end up with broken code as the expected functionality is probably not the same. I totally agree on that. Macro-libraries must not manipulate the native types. Otherwise you as a developer won't ever know what is really going on behind the scenes.

And that is the reason I dislike libs like jQuery, underscore, etc. Don't get me wrong; they are absolutely well-programmed and they work like a charm, but they are big. You use only 10% of them, and understand about 1%.

That's why I prefer an atomistic approach, where you only require what you really need. This way, you always know what happens. The micro-libraries only do what you want them to do, so they won't interfere. In the context of having the end user knowing which features are added, extending native types can be considered safe.

TL;DR: When in doubt, don't extend native types. Only extend a native type if you're 100% sure the end user will know about and want that behavior. Never manipulate a native type's existing functions, as doing so breaks the existing interface.

If you decide to extend the type, use Object.defineProperty(obj, prop, desc); if that isn't available, fall back to the type's prototype.


I originally came up with this question because I wanted Errors to be sendable via JSON, so I needed a way to stringify them. error.stringify() felt way better than errorlib.stringify(error), as the second construct suggests I'm operating on errorlib and not on error itself.
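As an aside, plain JSON.stringify(new Error('boom')) yields "{}", because an Error's message and stack are non-enumerable own properties. A sketch of what such an extension could look like using the Object.defineProperty approach described above (the stringify name comes from the paragraph above; the output shape is my assumption):

```javascript
// Sketch: add a non-enumerable stringify method to Error.prototype,
// so it never shows up in for-in loops over error objects.
Object.defineProperty(Error.prototype, 'stringify', {
  value: function () {
    return JSON.stringify({
      name: this.name,
      message: this.message,
      stack: this.stack
    });
  },
  writable: true,
  configurable: true,
  enumerable: false
});

const err = new Error('boom');
const json = err.stringify();
JSON.parse(json).message; // 'boom'
```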

Solution 3 - Javascript

In my opinion, it's a bad practice. The major reason is integration. Quoting should.js docs:

> OMG IT EXTENDS OBJECT???!?!@ Yes, yes it does, with a single getter should, and no it won't break your code

Well, how can the author know? What if my mocking framework does the same? What if my promises lib does the same?

If you're doing it in your own project, that's fine. But for a library, it's bad design. Underscore.js is an example of doing it the right way:

var arr = [];
_(arr).flatten()
// or: _.flatten(arr)
// NOT: arr.flatten()

Solution 4 - Javascript

> If you look at it on a case by case basis, perhaps some implementations are acceptable.

String.prototype.slice = function slice( me ){
  return me;
}; // Definite risk.

Overwriting already existing methods creates more issues than it solves, which is why it is common advice, in many programming languages, to avoid this practice. How are devs to know the function has been changed?

String.prototype.capitalize = function capitalize(){
  return this.charAt(0).toUpperCase() + this.slice(1);
}; // A little less risk.

In this case we are not overwriting any known core JS method, but we are extending String. One argument in this post asked: how is a new dev to know whether this method is part of core JS, and where to find the docs? What would happen if the core JS String object were later to get a method named capitalize?

What if instead of adding names that may collide with other libraries, you used a company/app specific modifier that all the devs could understand?

String.prototype.weCapitalize = function weCapitalize(){
  return this.charAt(0).toUpperCase() + this.slice(1);
}; // Marginal risk.

var myString = "hello to you.";
myString.weCapitalize();
// => Hello to you.

If you continued to extend other objects, all devs would encounter them in the wild with (in this case) the we prefix, which would tell them it was a company/app-specific extension.

This does not eliminate name collisions, but it does reduce the possibility. If you determine that extending core JS objects is right for you and/or your team, perhaps this approach is for you.

Solution 5 - Javascript

Extending prototypes of built-ins is indeed a bad idea. However, ES2015 introduced a new technique that can be utilized to obtain the desired behavior:

Utilizing WeakMaps to associate types with built-in prototypes

The following implementation extends the Number and Array prototypes without touching them at all:

// new types

const AddMonoid = {
  empty: () => 0,
  concat: (x, y) => x + y,
};

const ArrayMonoid = {
  empty: () => [],
  concat: (acc, x) => acc.concat(x),
};

const ArrayFold = {
  reduce: xs => xs.reduce(
    type(xs[0]).monoid.concat,
    type(xs[0]).monoid.empty()
  )
};


// the WeakMap that associates types to prototypes

const types = new WeakMap();

types.set(Number.prototype, {
  monoid: AddMonoid
});

types.set(Array.prototype, {
  monoid: ArrayMonoid,
  fold: ArrayFold
});


// auxiliary helpers to apply functions of the extended prototypes

const genericType = map => o => map.get(o.constructor.prototype);
const type = genericType(types);


// mock data

const xs = [1,2,3,4,5];
const ys = [[1],[2],[3],[4],[5]];


// and run

console.log("reducing an Array of Numbers:", ArrayFold.reduce(xs) );
console.log("reducing an Array of Arrays:", ArrayFold.reduce(ys) );
console.log("built-ins are unmodified:", Array.prototype.empty);

As you can see, even primitive prototypes can be extended by this technique. It uses a map structure and object identity to associate types with built-in prototypes.

My example enables a reduce function that expects only an Array as its single argument, because it can extract from the Array's elements the information about how to create an empty accumulator and how to concatenate elements with this accumulator.

Please note that I could have used the normal Map type, since weak references don't make sense when they merely represent built-in prototypes, which are never garbage collected. However, a WeakMap isn't iterable and can't be inspected unless you have the right key. This is a desired feature, since I want to avoid any form of type reflection.

Solution 6 - Javascript

One more reason why you should not extend native Objects:

We use Magento, which uses prototype.js and extends a lot of stuff on native objects. This works fine until you decide to bring in new features, and that's where big troubles start.

We introduced Web Components on one of our pages, and webcomponents-lite.js decided to replace the whole (native) Event object in IE (why?). This of course breaks prototype.js, which in turn breaks Magento. (Until you find the problem, you may invest a lot of hours tracing it back.)

If you like trouble, keep doing it!

Solution 7 - Javascript

I can see three reasons not to do this (from within an application, at least), only two of which are addressed in existing answers here:

  1. If you do it wrong, you'll accidentally add an enumerable property to all objects of the extended type. Easily worked around using Object.defineProperty, which creates non-enumerable properties by default.
  2. You might cause a conflict with a library that you're using. Can be avoided with diligence; just check what methods the libraries you use define before adding something to a prototype, check release notes when upgrading, and test your application.
  3. You might cause a conflict with a future version of the native JavaScript environment.

Point 3 is arguably the most important one. You can make sure, through testing, that your prototype extensions don't cause any conflicts with the libraries you use, because you decide what libraries you use. The same is not true of native objects, assuming that your code runs in a browser. If you define Array.prototype.swizzle(foo, bar) today, and tomorrow Google adds Array.prototype.swizzle(bar, foo) to Chrome, you're liable to end up with some confused colleagues who wonder why .swizzle's behaviour doesn't seem to match what's documented on MDN.

(See also the story of how mootools' fiddling with prototypes they didn't own forced an ES6 method to be renamed to avoid breaking the web.)

This is avoidable by using an application-specific prefix for methods added to native objects (e.g. define Array.prototype.myappSwizzle instead of Array.prototype.swizzle), but that's kind of ugly; the problem is just as well solved by using standalone utility functions instead of augmenting prototypes.
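A standalone utility along those lines might look like this (a sketch; swizzle is the hypothetical method from the example above, implemented here as a non-destructive index swap):

```javascript
// A plain function owned by the application: no prototype is touched,
// so a future native Array.prototype.swizzle cannot conflict with it.
function swizzle(arr, foo, bar) {
  const copy = arr.slice();
  [copy[foo], copy[bar]] = [copy[bar], copy[foo]];
  return copy;
}

swizzle([1, 2, 3], 0, 2); // [3, 2, 1]
```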

Solution 8 - Javascript

Perf is also a reason. Sometimes you might need to loop over keys. There are several ways to do this:

for (let key in object) { ... }
for (let key in object) { if (object.hasOwnProperty(key)) { ... } }
for (let key of Object.keys(object)) { ... }

I usually use for...of with Object.keys(), as it does the right thing and is relatively terse; there's no need to add the check.

But, it's much slower.

for-of vs for-in perf results

The reason Object.keys is slow seems obvious: Object.keys() has to make an allocation. In fact, AFAIK it has to allocate a copy of all the keys, since:

  const before = Object.keys(object);
  object.newProp = true;
  const after = Object.keys(object);

  before.join('') !== after.join('')

It's possible the JS engine could use some kind of immutable key structure, so that Object.keys(object) returns a reference to something that iterates over immutable keys and object.newProp creates an entirely new immutable-keys object. Either way, it's clearly slower, by up to 15x.

Even checking hasOwnProperty is up to 2x slower.

The point of all of that is that if you have perf-sensitive code and need to loop over keys, then you want to be able to use for-in without having to call hasOwnProperty. You can only do this if you haven't modified Object.prototype.
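A quick sketch of why: as soon as Object.prototype gains an enumerable property, every unguarded for-in loop in the program sees it.

```javascript
const obj = { a: 1, b: 2 };

// Naive extension: plain assignment creates an enumerable property.
Object.prototype.polluted = true;

const seen = [];
for (const key in obj) seen.push(key);
// seen is ['a', 'b', 'polluted'] - every for-in loop is now affected

// Clean up, then use the non-enumerable alternative instead.
delete Object.prototype.polluted;
Object.defineProperty(Object.prototype, 'hidden', {
  value: true,
  configurable: true  // non-enumerable by default
});

const seen2 = [];
for (const key in obj) seen2.push(key);
// seen2 is ['a', 'b'] - for-in behavior is preserved
```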

Note that if you use Object.defineProperty to modify the prototype, and the things you add are not enumerable, then they won't affect the behavior of JavaScript in the above cases. Unfortunately, at least in Chrome 83, they do affect performance.


I added 3000 non-enumerable properties just to try to force any perf issues to appear. With only 30 properties the tests were too close to tell if there was any perf impact.

https://jsperf.com/does-adding-non-enumerable-properties-affect-perf

Firefox 77 and Safari 13.1 showed no difference in perf between the augmented and unaugmented classes. Maybe V8 will get fixed in this area and you'll be able to ignore the perf issues.

But let me also add the story of Array.prototype.smoosh. The short version is that MooTools, a popular library, made its own Array.prototype.flatten. When the standards committee tried to add a native Array.prototype.flatten, they found they couldn't without breaking lots of sites. The devs who found out about the break suggested naming the new method smoosh as a joke, but people freaked out, not understanding it was a joke. The committee settled on flat instead of flatten.

The moral of the story is that you shouldn't extend native objects. If you do, you could run into the same issue of your stuff breaking, and unless your particular library happens to be as popular as MooTools, the browser vendors are unlikely to work around the issue you caused. If your library does get that popular, it would be kind of mean to force everyone else to work around the issue you caused. So, please, don't extend native objects.

Solution 9 - Javascript

Edited:

After some time I've changed my mind: prototype pollution is bad (however, I've left the example at the end of the post).

It may cause a lot more trouble than mentioned in the posts above and below.

It's important to have a single standard across the whole JS/TS universe (it would be nice to have a consistent npmjs).

Previously I wrote bull**it and encouraged people to do it in their libraries; sorry for this:

> Jeff Clayton's proposal seems to be also good: the method name prefix can be your package name followed by an underscore, e.g. Array.prototype.<custom>_Flatten (a not-yet-existing package prefix may become an existing package in the future)


Part of original answer:

I was personally extending native prototypes; I just use an x prefix in my libraries (also when extending 3rd-party libraries).

TS only:

declare global {
  interface Array<T> {
    xFlatten(): ExtractArrayItemsRecursive<T>;
  }
}

JS+TS:

Array.prototype.xFlatten = function() { /*...*/ }

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | buschtoens | View Question on Stackoverflow |
| Solution 1 - Javascript | Abhi Beckert | View Answer on Stackoverflow |
| Solution 2 - Javascript | buschtoens | View Answer on Stackoverflow |
| Solution 3 - Javascript | Stefan | View Answer on Stackoverflow |
| Solution 4 - Javascript | SoEzPz | View Answer on Stackoverflow |
| Solution 5 - Javascript | user6445533 | View Answer on Stackoverflow |
| Solution 6 - Javascript | Eugen Wesseloh | View Answer on Stackoverflow |
| Solution 7 - Javascript | Mark Amery | View Answer on Stackoverflow |
| Solution 8 - Javascript | gman | View Answer on Stackoverflow |
| Solution 9 - Javascript | fider | View Answer on Stackoverflow |