Is list::size() really O(n)?

Tags: C++, List, STL, Complexity Theory, Big O

C++ Problem Overview


Recently, I noticed some people mentioning that std::list::size() has a linear complexity.
According to some sources, this is in fact implementation dependent as the standard doesn't say what the complexity has to be.
The comment in this blog entry says:

> Actually, it depends on which STL you are using. Microsoft Visual Studio V6 implements size() as { return (_Size); } whereas gcc (at least in versions 3.3.2 and 4.1.0) does it as { return std::distance(begin(), end()); } The first has constant speed, the second has O(N) speed.

  1. So my guess is that for the VC++ crowd size() has constant complexity as Dinkumware probably won't have changed that fact since VC6. Am I right there?
  2. What does it look like currently in gcc? If it is really O(n), why did the developers choose to do so?

C++ Solutions


Solution 1 - C++

In C++11 it is required that for any standard container the .size() operation complete in constant time (O(1)) (Table 96 — Container requirements). In C++03, .size() "should have constant complexity", but this was only a recommendation, not a requirement (see https://stackoverflow.com/questions/256033/is-stdstring-size-a-o1-operation).

The change in standard is introduced by n2923: Specifying the complexity of size() (Revision 1).
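As a quick sanity check of what the C++11 guarantee means in practice, here is a minimal sketch (the helper names are mine, not from the standard): both functions return the element count, but the first is required to be O(1) since C++11, while the second always walks the list in O(N).

```cpp
#include <cassert>
#include <cstddef>
#include <iterator>
#include <list>

// Both return the element count; since C++11 the member call is
// guaranteed O(1), while the explicit traversal is always O(N).
std::size_t via_size(const std::list<int>& l) {
    return l.size();
}

std::size_t via_distance(const std::list<int>& l) {
    return static_cast<std::size_t>(std::distance(l.begin(), l.end()));
}
```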

However, the implementation of .size() in libstdc++ still uses an O(N) algorithm in gcc up to 4.8:

  /**  Returns the number of elements in the %list.  */
  size_type
  size() const _GLIBCXX_NOEXCEPT
  { return std::distance(begin(), end()); }

See also https://stackoverflow.com/questions/10065055/why-is-stdlist-bigger-on-c11 for details on why it was kept this way.

Update: std::list::size() is properly O(1) when using gcc 5.0 in C++11 mode (or above).


By the way, the .size() in libc++ is correctly O(1):

_LIBCPP_INLINE_VISIBILITY
size_type size() const _NOEXCEPT     {return base::__sz();}

...

__compressed_pair<size_type, __node_allocator> __size_alloc_;

_LIBCPP_INLINE_VISIBILITY
const size_type& __sz() const _NOEXCEPT
    {return __size_alloc_.first();}
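A hypothetical wrapper sketching the bookkeeping behind this design: every mutation adjusts a cached count, so size() becomes a plain member read rather than a traversal (counted_list and its members are illustrative names, not libc++ internals).

```cpp
#include <cassert>
#include <cstddef>
#include <list>

// Illustrative sketch of the cached-count approach: each mutation
// adjusts count_, so size() never has to walk the nodes.
template <typename T>
class counted_list {
    std::list<T> data_;
    std::size_t  count_ = 0;  // plays the role of __size_alloc_.first()
public:
    void push_back(const T& v) { data_.push_back(v); ++count_; }
    void pop_back()            { data_.pop_back();   --count_; }
    std::size_t size() const   { return count_; }   // O(1): a member read
};
```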

Solution 2 - C++

Pre-C++11 answer

You are correct that the standard does not state what the complexity of list::size() must be - however, it does recommend that it "should have constant complexity" (Note A in Table 65).

Here's an interesting article by Howard Hinnant that explains why some people think list::size() should have O(N) complexity (basically because they believe that O(1) list::size() forces list::splice() to have O(N) complexity) and why an O(1) list::size() is a good idea (in the author's opinion):

I think the main points in the paper are:

  • there are few situations where maintaining an internal count so list::size() can be O(1) causes the splice operation to become linear
  • there are probably many more situations where someone might be unaware of the negative effects that might happen because they call an O(N) size() (such as his one example where list::size() is called while holding a lock).
  • that instead of permitting size() to be O(N), in the interest of 'least surprise', the standard should require any container that implements size() to implement it in an O(1) fashion. If a container cannot do this, it should not implement size() at all. In this case, the user of the container will be made aware that size() is unavailable, and if they still want or need the number of elements in the container they can use std::distance(begin(), end()) to get that value - but they will be completely aware that it's an O(N) operation.
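For context on the trade-off the first bullet describes, a small illustrative sketch (not from the paper): under the C++11 resolution, splicing a range between two different lists is the one operation that must count the moved nodes, precisely so that both containers can keep size() at O(1).

```cpp
#include <cassert>
#include <cstddef>
#include <iterator>
#include <list>
#include <utility>

// With the C++11 O(1) size() guarantee, splicing a range between
// *different* lists must count the moved nodes to keep both sizes current.
std::pair<std::size_t, std::size_t> splice_range() {
    std::list<int> a{1, 2, 3, 4, 5};
    std::list<int> b;
    auto first = std::next(a.begin());    // points at 2
    auto last  = std::prev(a.end());      // points at 5
    b.splice(b.begin(), a, first, last);  // moves 2, 3, 4 (linear in range length)
    return {a.size(), b.size()};          // both lookups are O(1)
}
```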

I think I tend to agree with most of his reasoning. However, I do not like his proposed addition to the splice() overloads. Having to pass in an n that must be equal to distance(first, last) to get correct behavior seems like a recipe for hard-to-diagnose bugs.

I'm not sure what should or could be done moving forward, as any change would have a significant impact on existing code. But as it stands, I think that existing code is already impacted - behavior might be rather significantly different from one implementation to another for something that should have been well-defined. Maybe onebyone's comment about having the size 'cached' and marked known/unknown might work well - you get amortized O(1) behavior - the only time you get O(N) behavior is when the list is modified by some splice() operations. The nice thing about this is that it can be done by implementors today without a change to the standard (unless I'm missing something).

As far as I know, C++0x is not changing anything in this area.

Solution 3 - C++

I've had to look into gcc 3.4's list::size before, so I can say this:

  1. It uses std::distance(head, tail).
  2. std::distance has two implementations: for types that satisfy RandomAccessIterator, it uses "tail-head", and for types that merely satisfy InputIterator, it uses an O(n) algorithm relying on "iterator++", counting until it hits the given tail.
  3. std::list does not satisfy RandomAccessIterator, so size is O(n).
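The dispatch described above can be sketched as follows (my_distance is an illustrative stand-in, not the libstdc++ source): list iterators are bidirectional, and bidirectional_iterator_tag derives from input_iterator_tag, so the counting overload is selected for them.

```cpp
#include <cassert>
#include <cstddef>
#include <iterator>
#include <list>
#include <vector>

// Simplified sketch of the tag dispatch performed by std::distance.
template <typename It>
std::ptrdiff_t my_distance_impl(It first, It last,
                                std::random_access_iterator_tag) {
    return last - first;                 // O(1): vector, deque, pointers
}

template <typename It>
std::ptrdiff_t my_distance_impl(It first, It last,
                                std::input_iterator_tag) {
    std::ptrdiff_t n = 0;
    for (; first != last; ++first) ++n;  // O(N) walk: list, forward_list
    return n;
}

template <typename It>
std::ptrdiff_t my_distance(It first, It last) {
    return my_distance_impl(first, last,
        typename std::iterator_traits<It>::iterator_category{});
}
```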

As to the "why", I can only say that std::list is appropriate for problems that require sequential access. Storing the size as a class variable would introduce overhead on every insert, delete, etc., and that waste is a big no-no per the intent of the STL. If you really need a constant-time size(), use std::deque.

Solution 4 - C++

I personally don't see the issue with splice being O(N) as the only reason why size is permitted to be O(N). "You don't pay for what you don't use" is an important C++ motto. In this case, maintaining the list size requires an extra increment/decrement on every insert/erase whether you ever check the list's size or not. This is a small fixed overhead, but it's still important to consider.

Checking the size of a list is rarely needed. Iterating from begin to end without caring about the total size is infinitely more common.

Solution 5 - C++

I would go to the source (archive). SGI's STL page says that linear complexity is permitted. I believe the design guideline they followed was to keep the list implementation as general as possible, and thus to allow more flexibility in using lists.

Solution 6 - C++

This bug report: [C++0x] std::list::size complexity, captures in excruciating detail the fact that the implementation in GCC 4.x is linear time and how the transition to constant time for C++11 was slow in coming (available in 5.0) due to ABI compatibility concerns.
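One way to see which behavior you are getting on libstdc++ is the _GLIBCXX_USE_CXX11_ABI macro (the helper below is my own sketch; the macro itself is real, defined by libstdc++ from gcc 5 onward to select between the old and the new, C++11-conforming ABI):

```cpp
#include <cassert>
#include <list>  // any libstdc++ header pulls in the ABI config macros

// Sketch: report which std::list ABI is in effect. On non-libstdc++
// implementations the macro is simply not defined.
const char* list_size_abi_note() {
#if defined(_GLIBCXX_USE_CXX11_ABI) && _GLIBCXX_USE_CXX11_ABI
    return "new libstdc++ ABI: std::list::size() is O(1)";
#elif defined(_GLIBCXX_USE_CXX11_ABI)
    return "old libstdc++ ABI: std::list::size() may be O(N)";
#else
    return "not libstdc++ (or pre-5.0): consult your library's documentation";
#endif
}
```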

The manpage for the GCC 4.9 series still includes the following disclaimer:

> Support for C++11 is still experimental, and may change in incompatible ways in future releases.


The same bug report is referenced here: https://stackoverflow.com/q/19154205/86967

Solution 7 - C++

If you are using lists correctly, you probably won't notice any difference.

Lists are good for big data structures that you want to rearrange without copying, or for data where pointers must remain valid after insertion.

In the first case it makes no difference; in the second I would prefer the old (smaller) size() implementation.

Anyway, std is more about correctness, standard behaviour, and user-friendliness than raw speed.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | foraidt | View Question on Stackoverflow |
| Solution 1 - C++ | kennytm | View Answer on Stackoverflow |
| Solution 2 - C++ | Michael Burr | View Answer on Stackoverflow |
| Solution 3 - C++ | introp | View Answer on Stackoverflow |
| Solution 4 - C++ | Greg Rogers | View Answer on Stackoverflow |
| Solution 5 - C++ | Yuval F | View Answer on Stackoverflow |
| Solution 6 - C++ | Brent Bradburn | View Answer on Stackoverflow |
| Solution 7 - C++ | Luke Givens | View Answer on Stackoverflow |