Time complexity of unshift() vs. push() in JavaScript

Tags: JavaScript, Arrays, Time, Push, Complexity Theory

Javascript Problem Overview


I know the difference between the unshift() and push() methods in JavaScript, but I'm wondering about the difference in time complexity.

I suppose push() is O(1), because you're just adding an item to the end of the array, but I'm not sure about unshift(), because, I suppose, you must "move" all the other existing elements forward. Would that be O(log n) or O(n)?

Javascript Solutions


Solution 1 - Javascript

push() is faster.

js>function foo() {a=[]; start = new Date; for (var i=0;i<100000;i++) a.unshift(1); return((new Date)-start)}
js>foo()
2190
js>function bar() {a=[]; start = new Date; for (var i=0;i<100000;i++) a.push(1); return((new Date)-start)}
js>bar()
10



Update

The above does not take the resulting order of the arrays into account: unshift builds the array in reverse. To compare them properly, you must reverse the pushed array. However, push followed by reverse is still faster by ~10ms for me in Chrome with this snippet:

var a=[]; 
var start = new Date; 
for (var i=0;i<100000;i++) {
  a.unshift(1);
}
var end = (new Date)-start;
console.log(`Unshift time: ${end}`);

var a=[];
var start = new Date;
for (var i=0;i<100000;i++) {
  a.push(1);
}

a.reverse();
var end = (new Date)-start;
console.log(`Push and reverse time: ${end}`);

Solution 2 - Javascript

The JavaScript language spec does not mandate the time complexity of these functions, as far as I know.

It is certainly possible to implement an array-like data structure (O(1) random access) with amortized O(1) push and unshift operations. The C++ std::deque is an example. A JavaScript implementation that used C++ deques to represent JavaScript arrays internally would therefore have O(1) push and unshift operations.

But if you need to guarantee such time bounds, you will have to roll your own, like this:

http://code.stephenmorley.org/javascript/queues/
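The core trick behind queues like the one linked above is to avoid calling shift() on every dequeue: track an offset to the front instead, and compact the backing array only occasionally. A minimal sketch of that idea (my own simplified version, not the linked code):

```javascript
// Queue with amortized O(1) enqueue and dequeue. Instead of shifting
// all elements on every dequeue, we advance an offset and compact the
// backing array only when half of it is dead space.
class Queue {
  constructor() {
    this.items = [];
    this.offset = 0; // index of the current front element
  }
  enqueue(item) {
    this.items.push(item); // amortized O(1)
  }
  dequeue() {
    if (this.offset >= this.items.length) return undefined;
    const item = this.items[this.offset++];
    // Compact once the dead space reaches half the array; this O(n)
    // copy happens rarely enough to keep dequeue amortized O(1).
    if (this.offset * 2 >= this.items.length) {
      this.items = this.items.slice(this.offset);
      this.offset = 0;
    }
    return item;
  }
  get length() {
    return this.items.length - this.offset;
  }
}
```

Each element is copied at most a constant number of times over its lifetime, which is how the amortized bound holds even though individual compactions are O(n).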

Solution 3 - Javascript

For people curious about the V8 implementation, here is the source. Because unshift takes an arbitrary number of arguments, the existing elements must be shifted to make room for all of them.

UnshiftImpl ends up calling AddArguments with a start_position of AT_START, which kicks it into this else branch:

  // If the backing store has enough capacity and we add elements to the
  // start we have to shift the existing objects.
  Isolate* isolate = receiver->GetIsolate();
  Subclass::MoveElements(isolate, receiver, backing_store, add_size, 0,
                         length, 0, 0);

and from there into MoveElements:

  static void MoveElements(Isolate* isolate, Handle<JSArray> receiver,
                           Handle<FixedArrayBase> backing_store, int dst_index,
                           int src_index, int len, int hole_start,
                           int hole_end) {
    Heap* heap = isolate->heap();
    Handle<BackingStore> dst_elms = Handle<BackingStore>::cast(backing_store);
    if (len > JSArray::kMaxCopyElements && dst_index == 0 &&
        heap->CanMoveObjectStart(*dst_elms)) {
      // Update all the copies of this backing_store handle.
      *dst_elms.location() =
          BackingStore::cast(heap->LeftTrimFixedArray(*dst_elms, src_index))
              ->ptr();
      receiver->set_elements(*dst_elms);
      // Adjust the hole offset as the array has been shrunk.
      hole_end -= src_index;
      DCHECK_LE(hole_start, backing_store->length());
      DCHECK_LE(hole_end, backing_store->length());
    } else if (len != 0) {
      WriteBarrierMode mode = GetWriteBarrierMode(KindTraits::Kind);
      dst_elms->MoveElements(heap, dst_index, src_index, len, mode);
    }
    if (hole_start != hole_end) {
      dst_elms->FillWithHoles(hole_start, hole_end);
    }
  }

I also want to call out that V8 has a concept of different element kinds depending on what the array contains, which can also affect performance.

It's hard to say definitively what the performance is, because in truth it depends on what types of elements the array holds, how many holes are in it, and so on. If I dig through this more, maybe I can give a definitive answer, but in general, since unshift needs to move the existing elements to make room, you can assume it's O(n) (it will scale linearly with the number of elements). Someone please correct me if I'm wrong.
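Conceptually, the MoveElements call above does something like this sketch in JavaScript terms. This is a simplified model to show where the O(n) cost comes from, not V8's actual code:

```javascript
// Simplified model of what a single-argument unshift costs: every
// existing element must move one slot to the right before the new
// value lands at index 0, so the work is proportional to the length.
function modelUnshift(arr, value) {
  for (let i = arr.length; i > 0; i--) {
    arr[i] = arr[i - 1]; // n element moves
  }
  arr[0] = value;
  return arr.length; // like Array.prototype.unshift, return the new length
}
```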

Solution 4 - Javascript

IMHO it depends on the JavaScript engine... if it used a linked list internally, unshift would be quite cheap (at the cost of O(n) random access)...

Solution 5 - Javascript

One way of implementing arrays with both fast unshift and push is to simply put your data into the middle of your C-level array. That's how Perl does it, IIRC.

Another way to do it is have two separate C-level arrays, so that push appends to one of them, and unshift appends to the other. There's no real benefit to this approach over the previous one, that I know of.

Regardless of how it's implemented, a push or an unshift will take O(1) time when the internal C-level array has enough spare memory; otherwise, when reallocation must be done, it takes at least O(N) time to copy the old data to the new block of memory.
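The "data in the middle" idea can be sketched as follows. This is a hypothetical illustration (the class name, initial capacity, and growth factor are my own choices, not Perl's actual internals): keep head and tail indices into a larger backing buffer, so both ends usually have spare slots.

```javascript
// Deque that keeps its elements in the middle of a backing buffer,
// so both push and unshift are O(1) until either end runs out of room.
class MiddleDeque {
  constructor(capacity = 16) {
    this.buf = new Array(capacity);
    this.head = capacity >> 1; // first used slot
    this.tail = capacity >> 1; // one past the last used slot
  }
  grow() {
    // Reallocate with the data re-centered: O(n), but infrequent.
    // The growth policy here is an arbitrary assumption.
    const len = this.tail - this.head;
    const buf = new Array(Math.max(2 * len, 16) * 2);
    const head = (buf.length - len) >> 1;
    for (let i = 0; i < len; i++) buf[head + i] = this.buf[this.head + i];
    this.buf = buf;
    this.head = head;
    this.tail = head + len;
  }
  push(v) {
    if (this.tail === this.buf.length) this.grow();
    this.buf[this.tail++] = v; // O(1) in the common case
  }
  unshift(v) {
    if (this.head === 0) this.grow();
    this.buf[--this.head] = v; // O(1) in the common case
  }
  get(i) { return this.buf[this.head + i]; } // O(1) random access
  get length() { return this.tail - this.head; }
}
```

Note that random access stays O(1), which is the property a plain linked list would sacrifice.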

Solution 6 - Javascript

Yes, you are right. The usual complexity of push() is O(1) and of unshift() is O(n), because unshift() has to move every element already in the array to a higher index, while push() inserts an element at the end, so no existing element's index has to change.

That said, push() can also occasionally cost O(n) because of dynamic memory allocation. When you create a new array without specifying a size, the engine allocates a backing store with some default capacity. Until that capacity is filled, each push takes O(1). Once it is full, the engine has to allocate a new contiguous block of memory (typically about twice the size) and copy the existing elements into it, which takes O(n). Averaged over many pushes, though, this still works out to amortized O(1).
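The doubling behaviour can be modelled like this. It is only a sketch: the initial capacity and growth factor are assumptions, and real engines choose their own policies. The `copies` counter makes the amortized cost visible.

```javascript
// Model of amortized-O(1) push via capacity doubling. `copies` counts
// how many element moves the occasional reallocations cost in total.
class DynamicArray {
  constructor() {
    this.buf = new Array(4); // assumed initial capacity
    this.length = 0;
    this.copies = 0;
  }
  push(v) {
    if (this.length === this.buf.length) {
      // Full: allocate a block twice as large and copy everything over.
      const bigger = new Array(this.buf.length * 2); // O(n) reallocation
      for (let i = 0; i < this.length; i++) bigger[i] = this.buf[i];
      this.copies += this.length;
      this.buf = bigger;
    }
    this.buf[this.length++] = v; // the common O(1) case
  }
}
```

Pushing n elements costs n writes plus fewer than 2n copies in total (4 + 8 + 16 + ... < 2n), which is why each push is O(1) amortized even though individual reallocations are O(n).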

If you know how many elements you are going to put in the array, you can avoid the occasional O(n) reallocation:

  1. Initialize the array with the required size, filled with a dummy value: let array = new Array(size).fill(0)
  2. Iterate over the elements you want to add and assign each one by its index:

for (let i = 0; i < size; i++) {
  array[i] = i
}

So, instead of push(), we assigned elements directly by index. This is more memory efficient and less costly than creating an array with the default capacity and pushing elements into it: since only the required amount of memory is allocated up front, no extra memory is wasted and no reallocations occur.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type            | Original Author | Original Content on Stackoverflow
Question                | dperitch        | View Question on Stackoverflow
Solution 1 - Javascript | Shanti          | View Answer on Stackoverflow
Solution 2 - Javascript | Nemo            | View Answer on Stackoverflow
Solution 3 - Javascript | aug             | View Answer on Stackoverflow
Solution 4 - Javascript | TheHe           | View Answer on Stackoverflow
Solution 5 - Javascript | BenGoldberg     | View Answer on Stackoverflow
Solution 6 - Javascript | Hema Chandran   | View Answer on Stackoverflow