Difference between std::system_clock and std::steady_clock?
Problem Overview
What is the difference between `std::system_clock` and `std::steady_clock`? (An example case that illustrates different results/behaviours would be great.)
If my goal is to precisely measure the execution time of functions (like a benchmark), what would be the best choice between `std::system_clock`, `std::steady_clock` and `std::high_resolution_clock`?
C++ Solutions
Solution 1 - C++
From N3376:
20.11.7.1 [time.clock.system]/1:
> Objects of class `system_clock` represent wall clock time from the system-wide realtime clock.
20.11.7.2 [time.clock.steady]/1:
> Objects of class `steady_clock` represent clocks for which values of `time_point` never decrease as physical time advances and for which values of `time_point` advance at a steady rate relative to real time. That is, the clock may not be adjusted.
20.11.7.3 [time.clock.hires]/1:
> Objects of class `high_resolution_clock` represent clocks with the shortest tick period. `high_resolution_clock` may be a synonym for `system_clock` or `steady_clock`.
For instance, the system-wide clock might be adjusted while the program runs (for example by NTP, or by a user setting the system time), at which point the actual time listed at some point in the future can actually be a time in the past. (E.g. in the US, wall-clock time moves back one hour in the fall, so the same hour is experienced "twice".) However, `steady_clock` is not allowed to be affected by such things.
Another way of thinking about "steady" in this case is in the requirements defined in the table of 20.11.3 [time.clock.req]/2:
> In Table 59, `C1` and `C2` denote clock types. `t1` and `t2` are values returned by `C1::now()` where the call returning `t1` happens before the call returning `t2` and both of these calls occur before `C1::time_point::max()`. [ Note: this means `C1` did not wrap around between `t1` and `t2`. -- end note ]
>
> Expression: `C1::is_steady`
> Returns: `const bool`
> Operational Semantics: `true` if `t1 <= t2` is always true and the time between clock ticks is constant, otherwise `false`.
That's all the standard has on their differences.
If you want to do benchmarking, your best bet is probably going to be `std::high_resolution_clock`, because it is likely that your platform uses a high-resolution timer (e.g. `QueryPerformanceCounter` on Windows) for this clock. However, if you're benchmarking, you should really consider using platform-specific timers, because different platforms handle this differently. For instance, some platforms might give you some means of determining the actual number of clock ticks the program required (independent of other processes running on the same CPU). Better yet, get your hands on a real profiler and use that.
Solution 2 - C++
Billy provided a great answer based on the ISO C++ standard that I fully agree with. However, there is another side to the story: real life. It seems that right now there is really no difference between those clocks in the implementations of popular compilers:
gcc 4.8:

```cpp
#ifdef _GLIBCXX_USE_CLOCK_MONOTONIC
  ...
#else
  typedef system_clock steady_clock;
#endif

typedef system_clock high_resolution_clock;
```
Visual Studio 2012:

```cpp
class steady_clock : public system_clock
{   // wraps monotonic clock
public:
    static const bool is_monotonic = true;  // retained
    static const bool is_steady = true;
};

typedef system_clock high_resolution_clock;
```
In the case of gcc you can check whether you are dealing with a steady clock simply by checking `is_steady` and behaving accordingly. However, VS2012 seems to cheat a bit here :-)
If you need a high-precision clock, I recommend for now writing your own clock that conforms to the official C++11 clock interface and waiting for implementations to catch up. It will be a much better approach than using an OS-specific API directly in your code. For Windows you can do it like this:
```cpp
// Self-made Windows QueryPerformanceCounter-based C++11-API-compatible clock
#include <chrono>
#include <stdexcept>
#include <string>
#include <windows.h>

struct qpc_clock {
    typedef std::chrono::nanoseconds duration;  // nanoseconds resolution
    typedef duration::rep rep;
    typedef duration::period period;
    typedef std::chrono::time_point<qpc_clock, duration> time_point;
    static bool is_steady;  // = true

    static time_point now()
    {
        if (!is_inited) {
            init();
            is_inited = true;
        }
        LARGE_INTEGER counter;
        QueryPerformanceCounter(&counter);
        return time_point(duration(static_cast<rep>(
            (double)counter.QuadPart / frequency.QuadPart * period::den / period::num)));
    }

private:
    static bool is_inited;  // = false
    static LARGE_INTEGER frequency;

    static void init()
    {
        if (QueryPerformanceFrequency(&frequency) == 0)
            throw std::logic_error("QueryPerformanceCounter not supported: " +
                                   std::to_string(GetLastError()));
    }
};
```
For Linux it is even easier: just read the man page of `clock_gettime` and modify the code above.
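A minimal sketch of what that Linux counterpart might look like, using `clock_gettime(CLOCK_MONOTONIC, ...)` (the `monotonic_clock` name is illustrative, not part of the standard library):

```cpp
// Sketch of a clock_gettime(CLOCK_MONOTONIC)-based clock for Linux,
// analogous to the qpc_clock above.
#include <chrono>
#include <ctime>

struct monotonic_clock {
    typedef std::chrono::nanoseconds duration;
    typedef duration::rep rep;
    typedef duration::period period;
    typedef std::chrono::time_point<monotonic_clock, duration> time_point;
    static constexpr bool is_steady = true;  // CLOCK_MONOTONIC never jumps back

    static time_point now() {
        timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        // Convert seconds + nanoseconds into a single nanosecond count.
        return time_point(duration(
            static_cast<rep>(ts.tv_sec) * 1000000000 + ts.tv_nsec));
    }
};
```

No lazy initialization is needed here, since `clock_gettime` requires no setup call equivalent to `QueryPerformanceFrequency`.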
Solution 3 - C++
GCC 5.3.0 implementation
The C++ stdlib is inside the GCC source:

- `high_resolution_clock` is an alias for `system_clock`
- `system_clock` forwards to the first of the following that is available:
  - `clock_gettime(CLOCK_REALTIME, ...)`
  - `gettimeofday`
  - `time`
- `steady_clock` forwards to the first of the following that is available:
  - `clock_gettime(CLOCK_MONOTONIC, ...)`
  - `system_clock`

`CLOCK_REALTIME` vs `CLOCK_MONOTONIC` is explained at: https://stackoverflow.com/questions/3523442/difference-between-clock-realtime-and-clock-monotonic
Solution 4 - C++
Maybe the most significant difference is the fact that the starting point of `std::chrono::system_clock` is 1 January 1970, the so-called Unix epoch. On the other hand, the epoch of `std::chrono::steady_clock` is typically the boot time of your PC, which makes it most suitable for measuring intervals.
Solution 5 - C++
Relevant talk about chrono by Howard Hinnant, author of chrono:

Don't use `high_resolution_clock`, as it's an alias for one of these:

- `system_clock`: it's like a regular clock; use it for time/date related stuff
- `steady_clock`: it's like a stopwatch; use it for timing things