Enforcing statement order in C++

C++ | C++11 | Operator Precedence

C++ Problem Overview


Suppose I have a number of statements that I want to execute in a fixed order. I want to use g++ with optimization level 2, so some statements could be reordered. What tools does one have to enforce a certain ordering of statements?

Consider the following example.

using Clock = std::chrono::high_resolution_clock;

auto t1 = Clock::now(); // Statement 1
foo();                  // Statement 2
auto t2 = Clock::now(); // Statement 3

auto elapsedTime = t2 - t1;

In this example it is important that statements 1-3 are executed in the given order. However, couldn't the compiler decide that statement 2 is independent of statements 1 and 3 and execute the code as follows?

using Clock = std::chrono::high_resolution_clock;

foo();                  // Statement 2
auto t1 = Clock::now(); // Statement 1
auto t2 = Clock::now(); // Statement 3

auto elapsedTime = t2 - t1;

C++ Solutions


Solution 1 - C++

I'd like to try to provide a somewhat more comprehensive answer after this was discussed with the C++ standards committee. In addition to being a member of the C++ committee, I'm also a developer on the LLVM and Clang compilers.

Fundamentally, there is no way to use a barrier or some operation in the sequence to prevent these reorderings. The fundamental problem is that the operational semantics of something like an integer addition are totally known to the implementation. It can simulate them, it knows they cannot be observed by correct programs, and it is always free to move them around.

We could try to prevent this, but it would have extremely negative results and would ultimately fail.

First, the only way to prevent this in the compiler is to tell it that all of these basic operations are observable. The problem is that this then would preclude the overwhelming majority of compiler optimizations. Inside the compiler, we have essentially no good mechanisms to model that the timing is observable but nothing else. We don't even have a good model of what operations take time. As an example, does converting a 32-bit unsigned integer to a 64-bit unsigned integer take time? It takes zero time on x86-64, but on other architectures it takes non-zero time. There is no generically correct answer here.

But even if we succeed through some heroics at preventing the compiler from reordering these operations, there is no guarantee this will be enough. Consider a valid and conforming way to execute your C++ program on an x86 machine: DynamoRIO. This is a system that dynamically evaluates the machine code of the program. One thing it can do is apply online optimizations, and it is even capable of speculatively executing the entire range of basic arithmetic instructions outside of the timing. And this behavior isn't unique to dynamic evaluators; the actual x86 CPU will also speculate (a much smaller number of) instructions and reorder them dynamically.

The essential realization is that the fact that arithmetic isn't observable (even at the timing level) permeates the layers of the computer. It is true for the compiler, the runtime, and often even the hardware. Forcing it to be observable would not only dramatically constrain the compiler, it would also dramatically constrain the hardware.

But all of this should not cause you to lose hope. When you want to time the execution of basic mathematical operations, we have well-studied techniques that work reliably. Typically these are used when doing micro-benchmarking. I gave a talk about this at CppCon 2015: https://youtu.be/nXaxk27zwlk

The techniques shown there are also provided by various micro-benchmark libraries such as Google's: https://github.com/google/benchmark#preventing-optimization
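
For instance, with the Google Benchmark library the same input/output "pincer" looks roughly like this (a minimal sketch; foo and the benchmark name are illustrative):

#include <benchmark/benchmark.h>

static int foo(int x) { return x * 2; }

static void BM_foo(benchmark::State& state) {
  int input = 42;
  for (auto _ : state) {
    benchmark::DoNotOptimize(input);    // make the input opaque to the optimizer
    int output = foo(input);            // the computation being timed
    benchmark::DoNotOptimize(output);   // make the result opaque to the optimizer
  }
}
BENCHMARK(BM_foo);
BENCHMARK_MAIN();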

The key to these techniques is to focus on the data. You make the input to the computation opaque to the optimizer and the result of the computation opaque to the optimizer. Once you've done that, you can time it reliably. Let's look at a realistic version of the example in the original question, but with the definition of foo fully visible to the implementation. I've also extracted a (non-portable) version of DoNotOptimize from the Google Benchmark library which you can find here: https://github.com/google/benchmark/blob/v1.0.0/include/benchmark/benchmark_api.h#L208

#include <chrono>

// Force `value` to be materialized in memory and treated as read and written
// by the empty asm statement, so the optimizer can neither remove the value
// nor move work that produces or consumes it across this point.
template <class T>
__attribute__((always_inline)) inline void DoNotOptimize(const T &value) {
  asm volatile("" : "+m"(const_cast<T &>(value)));
}

// The compiler has full knowledge of the implementation.
static int foo(int x) { return x * 2; }

auto time_foo() {
  using Clock = std::chrono::high_resolution_clock;

  auto input = 42;

  auto t1 = Clock::now();         // Statement 1
  DoNotOptimize(input);
  auto output = foo(input);       // Statement 2
  DoNotOptimize(output);
  auto t2 = Clock::now();         // Statement 3

  return t2 - t1;
}

Here we ensure that the input data and the output data are marked as un-optimizable around the computation foo, and only around those markers are the timings computed. Because you are using data to pincer the computation, it is guaranteed to stay between the two timings and yet the computation itself is allowed to be optimized. The resulting x86-64 assembly generated by a recent build of Clang/LLVM is:

% ./bin/clang++ -std=c++14 -c -S -o - so.cpp -O3
        .text
        .file   "so.cpp"
        .globl  _Z8time_foov
        .p2align        4, 0x90
        .type   _Z8time_foov,@function
_Z8time_foov:                           # @_Z8time_foov
        .cfi_startproc
# BB#0:                                 # %entry
        pushq   %rbx
.Ltmp0:
        .cfi_def_cfa_offset 16
        subq    $16, %rsp
.Ltmp1:
        .cfi_def_cfa_offset 32
.Ltmp2:
        .cfi_offset %rbx, -16
        movl    $42, 8(%rsp)
        callq   _ZNSt6chrono3_V212system_clock3nowEv
        movq    %rax, %rbx
        #APP
        #NO_APP
        movl    8(%rsp), %eax
        addl    %eax, %eax              # This is "foo"!
        movl    %eax, 12(%rsp)
        #APP
        #NO_APP
        callq   _ZNSt6chrono3_V212system_clock3nowEv
        subq    %rbx, %rax
        addq    $16, %rsp
        popq    %rbx
        retq
.Lfunc_end0:
        .size   _Z8time_foov, .Lfunc_end0-_Z8time_foov
        .cfi_endproc


        .ident  "clang version 3.9.0 (trunk 273389) (llvm/trunk 273380)"
        .section        ".note.GNU-stack","",@progbits

Here you can see the compiler optimizing the call to foo(input) down to a single instruction, addl %eax, %eax, but without moving it outside of the timing or eliminating it entirely despite the constant input.

Hope this helps, and the C++ standards committee is looking at the possibility of standardizing APIs similar to DoNotOptimize here.

Solution 2 - C++

Summary:

There seems to be no guaranteed way to prevent reordering, but as long as link-time/full-program optimisation is not enabled, locating the called function in a separate compilation unit seems a fairly good bet. (At least with GCC, although logic would suggest that this is likely with other compilers too.) This comes at the cost of the function call - inlined code is by definition in the same compilation unit and open to reordering.

Original answer:

GCC reorders the calls under -O2 optimisation:

#include <chrono>
static int foo(int x)    // 'static' or not here doesn't affect ordering.
{
	return x*2;
}
int fred(int x)
{
	auto t1 = std::chrono::high_resolution_clock::now();
	int y = foo(x);
	auto t2 = std::chrono::high_resolution_clock::now();
	return y;
}

GCC 5.3.0:

g++ -S --std=c++11 -O0 fred.cpp :

_ZL3fooi:
        pushq   %rbp
        movq    %rsp, %rbp
        movl    %ecx, 16(%rbp)
        movl    16(%rbp), %eax
        addl    %eax, %eax
        popq    %rbp
        ret
_Z4fredi:
        pushq   %rbp
        movq    %rsp, %rbp
        subq    $64, %rsp
        movl    %ecx, 16(%rbp)
        call    _ZNSt6chrono3_V212system_clock3nowEv
        movq    %rax, -16(%rbp)
        movl    16(%rbp), %ecx
        call    _ZL3fooi
        movl    %eax, -4(%rbp)
        call    _ZNSt6chrono3_V212system_clock3nowEv
        movq    %rax, -32(%rbp)
        movl    -4(%rbp), %eax
        addq    $64, %rsp
        popq    %rbp
        ret

But:

g++ -S --std=c++11 -O2 fred.cpp :

_Z4fredi:
        pushq   %rbx
        subq    $32, %rsp
        movl    %ecx, %ebx
        call    _ZNSt6chrono3_V212system_clock3nowEv
        call    _ZNSt6chrono3_V212system_clock3nowEv
        leal    (%rbx,%rbx), %eax
        addq    $32, %rsp
        popq    %rbx
        ret

Now, with foo() as an extern function:

#include <chrono>
int foo(int x);
int fred(int x)
{
	auto t1 = std::chrono::high_resolution_clock::now();
	int y = foo(x);
	auto t2 = std::chrono::high_resolution_clock::now();
	return y;
}

g++ -S --std=c++11 -O2 fred.cpp :

_Z4fredi:
        pushq   %rbx
        subq    $32, %rsp
        movl    %ecx, %ebx
        call    _ZNSt6chrono3_V212system_clock3nowEv
        movl    %ebx, %ecx
        call    _Z3fooi
        movl    %eax, %ebx
        call    _ZNSt6chrono3_V212system_clock3nowEv
        movl    %ebx, %eax
        addq    $32, %rsp
        popq    %rbx
        ret

BUT, if this is linked with -flto (link-time optimisation):

0000000100401710 <main>:
   100401710:   53                      push   %rbx
   100401711:   48 83 ec 20             sub    $0x20,%rsp
   100401715:   89 cb                   mov    %ecx,%ebx
   100401717:   e8 e4 ff ff ff          callq  100401700 <__main>
   10040171c:   e8 bf f9 ff ff          callq  1004010e0 <_ZNSt6chrono3_V212system_clock3nowEv>
   100401721:   e8 ba f9 ff ff          callq  1004010e0 <_ZNSt6chrono3_V212system_clock3nowEv>
   100401726:   8d 04 1b                lea    (%rbx,%rbx,1),%eax
   100401729:   48 83 c4 20             add    $0x20,%rsp
   10040172d:   5b                      pop    %rbx
   10040172e:   c3                      retq
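
For reference, a rough sketch of the separate-compilation-unit setup used above (file names and build commands are illustrative):

// foo.cpp -- the function under test lives in its own translation unit
int foo(int x)
{
	return x*2;
}

// Build and link without link-time optimisation, e.g.:
//   g++ --std=c++11 -O2 -c fred.cpp
//   g++ --std=c++11 -O2 -c foo.cpp
//   g++ fred.o foo.o -o prog
// Adding -flto to both the compile and link steps re-enables the reordering shown above.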

Solution 3 - C++

Reordering may be done by the compiler, or by the processor.

Most compilers offer a platform-specific method to prevent reordering of read-write instructions. On gcc, this is

asm volatile("" ::: "memory");

(More information here)

Note that this prevents reordering only indirectly: operations are kept in order only insofar as they depend on the reads / writes that the barrier orders.

In practice I haven't yet seen a system where the system call in Clock::now() does have the same effect as such a barrier. You could inspect the resulting assembly to be sure.

It is not uncommon, however, that the function under test gets evaluated during compile time. To enforce "realistic" execution, you may need to derive input for foo() from I/O or a volatile read.


Another option would be to disable inlining for foo() - again, this is compiler specific and usually not portable, but would have the same effect.

On gcc, this would be __attribute__ ((noinline))
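
Putting these pieces together, a rough sketch of how the barrier, a volatile-derived input, and noinline could be combined (GCC/Clang specific; foo and the variable names are illustrative):

#include <chrono>
#include <cstdio>

__attribute__((noinline)) static int foo(int x) { return x * 2; }

int main() {
    volatile int source = 42;   // volatile read: input cannot be constant-folded
    int input = source;

    auto t1 = std::chrono::high_resolution_clock::now();
    asm volatile("" ::: "memory");                   // compiler-level barrier
    int output = foo(input);                         // the call being timed
    asm volatile("" : "+r"(output) : : "memory");    // result stays "used"
    auto t2 = std::chrono::high_resolution_clock::now();

    std::printf("output %d, time (ns) %lld\n", output,
                static_cast<long long>(
                    std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count()));
}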


@Ruslan brings up a fundamental issue: How realistic is this measurement?

Execution time is affected by many factors: one is the actual hardware we are running on, the other is concurrent access to shared resources like cache, memory, disk and CPU cores.

So what we usually do to get comparable timings: make sure they are reproducible with a low error margin. This makes them somewhat artificial.

"hot cache" vs. "cold cache" execution performance can easily differ by an order of magnitude - but in reality, it will be something inbetween ("lukewarm"?)

Solution 4 - C++

The C++ language defines what is observable in a number of ways.

If foo() does nothing observable, then it can be eliminated completely. If foo() only does a computation that stores values in "local" state (be it on the stack or in an object somewhere), and the compiler can prove that no safely-derived pointer can get into the Clock::now() code, then there are no observable consequences to moving the Clock::now() calls.

If foo() interacted with a file or the display, and the compiler cannot prove that Clock::now() does not interact with the file or the display, then reordering cannot be done, because interaction with a file or display is observable behavior.
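
For instance, a hypothetical foo with observable behaviour cannot be elided, and by the reasoning above it stays pinned between the clock reads:

#include <cstdio>

int foo(int x) {
    // I/O is observable behaviour: this call cannot be removed, and it cannot
    // be moved across Clock::now() unless the compiler can prove that now()
    // does not interact with the same stream.
    std::printf("%d\n", x);
    return x * 2;
}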

While you can use compiler-specific hacks to force code not to move around (like inline assembly), another approach is to attempt to outsmart your compiler.

Create a dynamically loaded library. Load it prior to the code in question.

That library exposes one thing:

namespace details {
  void execute( void(*)(void*), void *);
}

and wraps it like this:

#include <utility>  // std::forward

template<class F>
void execute( F f ) {
  // Bundle the callable so it can travel through the void* interface.
  struct bundle_t {
    F f;
  } bundle = {std::forward<F>(f)};

  // A capture-less lambda converts to a plain function pointer; it unpacks
  // the bundle and invokes the wrapped callable.
  auto tmp_f = [](void* ptr)->void {
    auto* pb = static_cast<bundle_t*>(ptr);
    (pb->f)();
  };
  details::execute( tmp_f, &bundle );
}

which packs the callable into a bundle and hands a capture-less lambda (as a plain function pointer) plus a pointer to that bundle to the dynamic library, so the call runs in a context that the compiler cannot see through.

Inside the dynamic library, we do:

void details::execute( void(*f)(void*), void *p) {
  f(p);
}

which is pretty simple.

Now, for the compiler to reorder code across the calls to execute, it would have to understand what the dynamic library does, which it cannot do while compiling your test code.

It can still eliminate foo()s with zero side effects, but you win some, you lose some.
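
Used on the timing problem from the question, this might look roughly as follows (a sketch; input, output and foo as in the question):

int input = 42;
int output = 0;

auto t1 = std::chrono::high_resolution_clock::now();
execute([&]{ output = foo(input); });  // runs behind the opaque library call
auto t2 = std::chrono::high_resolution_clock::now();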

Solution 5 - C++

No it can't. According to the C++ standard [intro.execution]:

> 14. Every value computation and side effect associated with a full-expression is sequenced before every value computation and side effect associated with the next full-expression to be evaluated.

A full-expression is basically a statement terminated by a semicolon. As you can see, the above rule stipulates that statements must be executed in order. It is within statements that the compiler is allowed more free rein (i.e. it is under some circumstances allowed to evaluate the expressions that make up a statement in orders other than left-to-right, or anything else specific).
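
Within a single statement, for example, the order in which subexpressions are evaluated is unspecified (f and g here are arbitrary functions):

int x = f() + g();  // f() and g() may be called in either order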

Note that the conditions for the as-if rule to apply are not met here. It is unreasonable to think that any compiler would be able to prove that reordering calls to get the system time would not affect observable program behaviour. If there were a circumstance in which two calls to get the time could be reordered without changing observed behaviour, it would be extremely inefficient to actually produce a compiler that analyses a program with enough understanding to be able to infer this with certainty.

Solution 6 - C++

No.

Sometimes, by the "as-if" rule, statements may be re-ordered. This is not because they are logically independent of each other, but because that independence allows such a re-ordering to occur without changing the semantics of the program.

Moving a system call that obtains the current time obviously does not satisfy that condition. A compiler that knowingly or unknowingly does so is non-compliant and really silly.

In general, I wouldn't expect any expression that results in a system call to be "second-guessed" by even an aggressively optimizing compiler. It just doesn't know enough about what that system call does.

Solution 7 - C++

noinline function + inline assembly black box + full data dependencies

This is based on https://stackoverflow.com/a/38025837/895245 but because I didn't see any clear justification of why the ::now() cannot be reordered there, I would rather be paranoid and put it inside a noinline function together with the asm.

This way I'm pretty sure the reordering cannot happen, since the noinline function "ties" the ::now() call to the data dependency.

main.cpp

#include <chrono>
#include <iostream>
#include <string>

// noinline ensures that the ::now() cannot be split from the __asm__
template <class T>
__attribute__((noinline)) auto get_clock(T& value) {
    // Make the compiler think we actually use / modify the value.
    // It can't "see" what is going on inside the assembly string.
    __asm__ __volatile__ ("" : "+g" (value));
    return std::chrono::high_resolution_clock::now();
}

template <class T>
static T foo(T niters) {
    T result = 42;
    for (T i = 0; i < niters; ++i) {
        result = (result * result) - (3 * result) + 1;
    }
    return result;
}

int main(int argc, char **argv) {
    unsigned long long input;
    if (argc > 1) {
        input = std::stoull(argv[1], NULL, 0);
    } else {
        input = 1;
    }

    // Must come before because it could modify input
    // which is passed as a reference.
    auto t1 = get_clock(input);
    auto output = foo(input);
    // Must come after as it could use the output.
    auto t2 = get_clock(output);
    std::cout << "output " << output << std::endl;
    std::cout << "time (ns) "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count()
              << std::endl;
}

GitHub upstream.

Compile and run:

g++ -ggdb3 -O3 -std=c++14 -Wall -Wextra -pedantic -o main.out main.cpp
./main.out 1000
./main.out 10000
./main.out 100000

The only minor downside of this method is that we add one extra callq instruction over an inline method. objdump -CD shows that main contains:

    11b5:       e8 26 03 00 00          callq  14e0 <auto get_clock<unsigned long long>(unsigned long long&)>
    11ba:       48 8b 34 24             mov    (%rsp),%rsi
    11be:       48 89 c5                mov    %rax,%rbp
    11c1:       b8 2a 00 00 00          mov    $0x2a,%eax
    11c6:       48 85 f6                test   %rsi,%rsi
    11c9:       74 1a                   je     11e5 <main+0x65>
    11cb:       31 d2                   xor    %edx,%edx
    11cd:       0f 1f 00                nopl   (%rax)
    11d0:       48 8d 48 fd             lea    -0x3(%rax),%rcx
    11d4:       48 83 c2 01             add    $0x1,%rdx
    11d8:       48 0f af c1             imul   %rcx,%rax
    11dc:       48 83 c0 01             add    $0x1,%rax
    11e0:       48 39 d6                cmp    %rdx,%rsi
    11e3:       75 eb                   jne    11d0 <main+0x50>
    11e5:       48 89 df                mov    %rbx,%rdi
    11e8:       48 89 44 24 08          mov    %rax,0x8(%rsp)
    11ed:       e8 ee 02 00 00          callq  14e0 <auto get_clock<unsigned long long>(unsigned long long&)>

so we see that foo was inlined, but the two get_clock calls were not, and they surround it.

get_clock itself, however, is extremely efficient, consisting of a single tail-call-optimized jump instruction that doesn't even touch the stack:

00000000000014e0 <auto get_clock<unsigned long long>(unsigned long long&)>:
    14e0:       e9 5b fb ff ff          jmpq   1040 <std::chrono::_V2::system_clock::now()@plt>

Since the clock precision is itself limited, I think it is unlikely that you will be able to notice the timing effect of one extra jmpq. Note that one call is required regardless, since ::now() is in a shared library.

Call ::now() from inline assembly with a data dependency

This would be the most efficient solution possible, overcoming even the extra jmpq mentioned above.

This is unfortunately extremely hard to do correctly as shown at: https://stackoverflow.com/questions/37502841/calling-printf-in-extended-inline-asm/37503773#37503773

If your time measurement can be done directly in inline assembly without a call however, then this technique can be used. This is the case for example for gem5 magic instrumentation instructions, x86 RDTSC (not sure if this is representative anymore) and possibly other performance counters.
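
As a rough illustration of the idea for x86 RDTSC (a sketch only: rdtsc is not a serializing instruction, and the helper name is made up):

#include <cstdint>

// Read the TSC while forcing a data dependency on `value`, so the compiler
// must have computed `value` before the asm statement and cannot sink it past it.
static inline uint64_t rdtsc_with_dependency(uint64_t& value) {
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi), "+r"(value));
    return (static_cast<uint64_t>(hi) << 32) | lo;
}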

Tested with GCC 8.3.0, Ubuntu 19.04.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type      | Original Author                   | Original Content on Stackoverflow
Question          | S2108887                          | View Question on Stackoverflow
Solution 1 - C++  | Chandler Carruth                  | View Answer on Stackoverflow
Solution 2 - C++  | Jeremy                            | View Answer on Stackoverflow
Solution 3 - C++  | peterchen                         | View Answer on Stackoverflow
Solution 4 - C++  | Yakk - Adam Nevraumont            | View Answer on Stackoverflow
Solution 5 - C++  | Smeehey                           | View Answer on Stackoverflow
Solution 6 - C++  | Lightness Races in Orbit          | View Answer on Stackoverflow
Solution 7 - C++  | Ciro Santilli Путлер Капут 六四事 | View Answer on Stackoverflow