Is reading the `length` property of an array really that expensive an operation in JavaScript?

Tags: Javascript, Arrays, Micro-Optimization

Javascript Problem Overview


I always assumed caching the length of an array in JavaScript is a good idea (especially in the condition of a for loop) because of the expensiveness of calculating the length of an array.

Example
for (var i = 0; i < arr.length; i++) { }

// vs

for (var i = 0, arrLength = arr.length; i < arrLength; i++) { }

However, I thought the length property might only be updated on creation and alteration of the array. In that case, reading it shouldn't be much more expensive than reading it from a variable, unlike languages where finding the length means seeking through memory for the end of the data (e.g. strlen() in C).
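Whatever the engines do internally, the *observable* behaviour is that `length` acts like an eagerly maintained property rather than a computed one. A quick sketch (behaviour only, not engine internals):

```javascript
var arr = [1, 2, 3];

arr.push(4);             // mutating methods keep length in sync
console.log(arr.length); // 4

arr.pop();
console.log(arr.length); // 3

// length is even writable: assigning a smaller value truncates the array
arr.length = 1;
console.log(arr);        // [ 1 ]
```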

I have two questions. I am also interested in how this works, so please don't hit me with the premature optimisation stick.

Assume the JavaScript engines in browsers.

  1. Is there any advantage to caching the length property of an array in JavaScript? Is reading a local variable much cheaper than reading an object's property?
  2. Is the length property simply altered on creation and by shift()- and pop()-type methods that don't return a new array, and otherwise simply stored as an integer?

Javascript Solutions


Solution 1 - Javascript

Well, I would have said it was expensive, but then I wrote a little test at jsperf.com and to my surprise, using `i < array.length` was actually faster in Chrome, while in Firefox 4 it made no difference.

My suspicion is that length is stored as an integer (Uint32). From the ECMA spec (ECMA-262, 5th edition, page 121):

> Every Array object has a length property whose value is always a nonnegative integer less than 2³². The value of the length property is numerically greater than the name of every property whose name is an array index; whenever a property of an Array object is created or changed, other properties are adjusted as necessary to maintain this invariant. Specifically, whenever a property is added whose name is an array index, the length property is changed, if necessary, to be one more than the numeric value of that array index; and whenever the length property is changed, every property whose name is an array index whose value is not smaller than the new length is automatically deleted. This constraint applies only to own properties of an Array object and is unaffected by length or array index properties that may be inherited from its prototypes.

Phew! I don't know if I'll ever get used to such language...
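Dense as the spec language is, the invariant it describes is easy to observe from plain JavaScript (a small sketch, not specific to any engine):

```javascript
var a = [];

a[9] = 'x';            // creating an index property bumps length to index + 1
console.log(a.length); // 10

a.length = 5;          // shrinking length deletes every index >= 5
console.log(9 in a);   // false
console.log(a.length); // 5
```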

Finally, there is always our good old lagging-behind browser: in IE (9, 8, 7), caching the length really is faster. One more reason not to use IE, I say.

Solution 2 - Javascript

TL;DR:

From what I can gather, the length of the array seems to be cached internally (at least in V8).

(Details? Read on :))

So, this question has dinged around in my head a few times and I've decided to get to the root of the problem (at least, in one implementation).

Digging around in the V8 source yielded the JSArray class:

// The JSArray describes JavaScript Arrays
//  Such an array can be in one of two modes:
//    - fast, backing storage is a FixedArray and length <= elements.length();
//       Please note: push and pop can be used to grow and shrink the array.
//    - slow, backing storage is a HashTable with numbers as keys.

I'm assuming that the type of the array's elements dictates whether it's fast or slow. I got as far as a bit flag being set in set_has_fast_elements (`set_bit_field2(bit_field2() | (1 << kHasFastElements))`), which is where I decided to stop digging, since I was browsing the source on Google Code and don't have it locally.

Now, it seems that any time an operation is performed on the array (which is a subclass of JSObject), a call is made to NormalizeElements(), which executes the following:

// Compute the effective length.
int length = IsJSArray() ?
    Smi::cast(JSArray::cast(this)->length())->value() :
    array->length();

So, in answering your questions:

  1. There doesn't seem to be any advantage in Chrome (or other browsers that use V8) to caching the length property of an array, unless you're doing something odd that forces the array into slow mode (I'm not sure what those conditions are). Having said that, I'll most likely continue to cache length until I get a chance to go through all the open-source browser implementations ;)
  2. The length property seems to be altered after any operation on the object.

Edit:

On a side note, it seems that an "empty" array is actually allocated to have 4 elements:

// Number of element slots to pre-allocate for an empty array.
static const int kPreallocatedArrayElements = 4;

I'm not sure how many elements the array grows by once the bounds have been exceeded - I didn't dig that deep :)

Solution 3 - Javascript

Another set of performance tests. The loop runs over an array of a million random numbers with an empty loop body.

In Chrome, the loops with cached and non-cached lengths clock pretty much the same times, so I am guessing it's a V8 optimization to cache the length.

In Safari and Firefox, the cached length was consistently about 2x faster than the non-cached version.

Solution 4 - Javascript

This article investigates automatic caching in V8 and Chrome, by asking IRHydra for the generated code:

> How the Grinch stole array.length access by Vyacheslav Egorov

He found that under certain conditions manually caching the .length actually added overhead rather than improving performance!

But anyway, this kind of micro-optimization is unlikely to achieve any noticeable gain for your users. For their benefit, and for yours, focus instead on code that is clear to read, and on using good data structures and algorithms!

Avoid premature optimisation: Focus on elegant code until a performance issue arises. Only then, seek out the bottleneck through profiling, and then optimise just that part of the code.

Solution 5 - Javascript

Just a note:

On some browsers (I've noticed it in Safari, IE, and Opera), you can get a speed boost by caching the length inside the for loop declaration:

var j;
for (var i = 0, len = arr.length; i < len; i++) {
  j = arr[i];
}

I edited @KooiInc's jsperf test above to add this case.

Solution 6 - Javascript

Take care not to assume that this holds for all iterable collections. For example, caching the length of an HTMLCollection is 65% faster in Chrome (version 41) and 35% faster in Firefox (version 36).

http://jsperf.com/array-length-in-loop-dom

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | alex | View Question on Stackoverflow |
| Solution 1 - Javascript | KooiInc | View Answer on Stackoverflow |
| Solution 2 - Javascript | Demian Brecht | View Answer on Stackoverflow |
| Solution 3 - Javascript | Anurag | View Answer on Stackoverflow |
| Solution 4 - Javascript | joeytwiddle | View Answer on Stackoverflow |
| Solution 5 - Javascript | musicnothing | View Answer on Stackoverflow |
| Solution 6 - Javascript | Dustin Hayes | View Answer on Stackoverflow |