Is x += a quicker than x = x + a?

Tags: C++, Performance, Operators

C++ Problem Overview


I was reading Stroustrup's "The C++ Programming Language", where he says that of the two ways to add something to a variable

x = x + a;

and

x += a;

He prefers += because it is most likely better implemented. I think he means that it works faster too.
But does it really? If it depends on the compiler and other things, how do I check?

C++ Solutions


Solution 1 - C++

Any compiler worth its salt will generate exactly the same machine-language sequence for both constructs for any built-in type (int, float, etc) as long as the statement really is as simple as x = x + a; and optimization is enabled. (Notably, GCC's -O0, which is the default mode, performs anti-optimizations, such as inserting completely unnecessary stores to memory, in order to ensure that debuggers can always find variable values.)

If the statement is more complicated, though, they might be different. Suppose f is a function that returns a pointer, then

*f() += a;

calls f only once, whereas

*f() = *f() + a;

calls it twice. If f has side effects, one of the two will be wrong (probably the latter). Even if f doesn't have side effects, the compiler may not be able to eliminate the second call, so the latter may indeed be slower.

And since we're talking about C++ here, the situation is entirely different for class types that overload operator+ and operator+=. If x is such a type, then -- before optimization -- x += a translates to

x.operator+=(a);

whereas x = x + a translates to

auto TEMP(x.operator+(a));
x.operator=(TEMP);

Now, if the class is properly written and the compiler's optimizer is good enough, both will wind up generating the same machine language, but it's not a sure thing like it is for built-in types. This is probably what Stroustrup is thinking of when he encourages use of +=.

Solution 2 - C++

You can check by looking at the disassembly, which in this case is the same for both.

For basic types, both are equally fast.

This is output generated by a debug build (i.e. no optimizations):

	a += x;
010813BC  mov         eax,dword ptr [a]  
010813BF  add         eax,dword ptr [x]  
010813C2  mov         dword ptr [a],eax  
	a = a + x;
010813C5  mov         eax,dword ptr [a]  
010813C8  add         eax,dword ptr [x]  
010813CB  mov         dword ptr [a],eax  

For user-defined types, where you can overload operator + and operator +=, it depends on their respective implementations.

Solution 3 - C++

Yes! It's quicker to write, quicker to read, and quicker to figure out, the last of these especially when x might have side effects. So overall it's quicker for the humans. Human time generally costs much more than computer time, so that must be what you were asking about. Right?

Solution 4 - C++

The difference between x = x + a and x += a is the amount of work the machine has to do. Some compilers may (and usually do) optimize the difference away, but ignoring optimization for a moment: in the former snippet the machine has to look up the value of x twice, while in the latter that lookup needs to occur only once.

However, as I mentioned, most compilers today are intelligent enough to analyse the statement and reduce the number of machine instructions required.

PS: First answer on Stack Overflow!

Solution 5 - C++

It really depends on the type of x and a and the implementation of +. For

   T x, a;
   ....
   x = x + a;

the compiler has to create a temporary T to contain the value of x + a whilst it evaluates it, which it can then assign to x. (It can't use x or a as workspace during this operation).

For x += a, it doesn't need a temporary.

For trivial types, there is no difference.

Solution 6 - C++

If you say += you're making life a lot easier for the compiler. In order for the compiler to recognize that x = x+a is the same as x += a, the compiler has to

  • analyze the left hand side (x) to make sure it has no side effects and always refers to the same l-value. For example, it could be z[i], and it has to make sure that both z and i don't change.

  • analyze the right hand side (x+a) and make sure it is a summation, and that the left hand side occurs once and only once on the right hand side, even though it could be transformed, as in z[i] = a + *(z+2*0+i).

If what you mean is to add a to x, the compiler writer appreciates it when you just say what you mean. That way, you're not exercising the part of the compiler that its writer hopes he/she got all the bugs out of, and that doesn't actually make life any easier for you, unless you honestly can't get your head out of Fortran mode.

Solution 7 - C++

As you've labelled this C++, there is no way to know from the two statements you've posted. You need to know what 'x' is (it's a bit like the answer '42'). If x is a POD, then it's not really going to make much difference. However, if x is a class, there may be overloads for the operator + and operator += methods which could have different behaviours that lead to very different execution times.

Solution 8 - C++

You're asking the wrong question.

This is unlikely to drive the performance of an app or feature. Even if it were, the way to find out is to profile the code and know how it affects you for certain. Instead of worrying at this level about which is faster, it's far more important to think in terms of clarity, correctness, and readability.

This is especially true when you consider that, even if this is a significant performance factor, compilers evolve over time. Someone may figure out a new optimization, and the right answer today can become wrong tomorrow. It's a classic case of premature optimization.

This isn't to say that performance doesn't matter at all... Just that it's the wrong approach to achieve your perf goals. The right approach is to use profiling tools to learn where your code is actually spending its time, and thus where to focus your efforts.

Solution 9 - C++

For a concrete example, imagine a simple complex number type:

struct complex {
    double x, y;
    complex(double _x, double _y) : x(_x), y(_y) { }
    complex& operator +=(const complex& b) {
        x += b.x;
        y += b.y;
        return *this;
    }
    complex operator +(const complex& b) const {
        complex result(x + b.x, y + b.y);
        return result;
    }
    /* trivial assignment operator */
};

For the a = a + b case, it has to make an extra temporary variable and then copy it.

Solution 10 - C++

I think it should depend on the machine and its architecture. If the architecture allows indirect memory addressing, the compiler writer might just use this code instead (for optimization):

mov  $[y], $ACC
iadd $ACC, $[i]   ; i += y, which might also store the result back into i

Whereas i = i + y might get translated to (without optimization):

mov  $[i], $ACC
mov  $[y], $B
iadd $ACC, $B
mov  $ACC, $[i]


That said, other complications, such as when i comes from a function returning a pointer, should also be considered. Most production-level compilers, including GCC, produce the same code for both statements (when the operands are integers).

Solution 11 - C++

No; both forms are handled the same way.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: Chiffa (View Question on Stackoverflow)
Solution 1 - C++: zwol (View Answer on Stackoverflow)
Solution 2 - C++: Luchian Grigore (View Answer on Stackoverflow)
Solution 3 - C++: Mark Adler (View Answer on Stackoverflow)
Solution 4 - C++: Sagar Ahire (View Answer on Stackoverflow)
Solution 5 - C++: Tom Tanner (View Answer on Stackoverflow)
Solution 6 - C++: Mike Dunlavey (View Answer on Stackoverflow)
Solution 7 - C++: Skizz (View Answer on Stackoverflow)
Solution 8 - C++: Joel Coehoorn (View Answer on Stackoverflow)
Solution 9 - C++: Random832 (View Answer on Stackoverflow)
Solution 10 - C++: Aniket Inge (View Answer on Stackoverflow)
Solution 11 - C++: CloudyMarble (View Answer on Stackoverflow)