Benchmarking small code samples in C#, can this implementation be improved?

C#, .Net, Performance, Profiling

C# Problem Overview


Quite often on SO I find myself benchmarking small chunks of code to see which implementation is fastest.

Quite often I see comments that benchmarking code does not take into account jitting or the garbage collector.

I have the following simple benchmarking function which I have slowly evolved:

  static void Profile(string description, int iterations, Action func) {
        // warm up 
        func();
        // clean up
        GC.Collect();

        var watch = new Stopwatch();
        watch.Start();
        for (int i = 0; i < iterations; i++) {
            func();
        }
        watch.Stop();
        Console.Write(description);
        Console.WriteLine(" Time Elapsed {0} ms", watch.ElapsedMilliseconds);
    }

Usage:

Profile("a descriptions", how_many_iterations_to_run, () =>
{
   // ... code being profiled
});

Does this implementation have any flaws? Is it good enough to show that implementation X is faster than implementation Y over Z iterations? Can you think of any ways you would improve this?

EDIT: It's pretty clear that a time-based approach (as opposed to a fixed number of iterations) is preferred. Does anyone have any implementations where the time checks do not impact performance?
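
For illustration, one pattern that keeps the time checks cheap is to consult the Stopwatch only once per batch of iterations, so the cost of reading the clock is amortized. This is a minimal sketch (the method name and batch size are illustrative, and the usual System and System.Diagnostics usings are assumed):

static void ProfileFor(string description, TimeSpan duration, Action func) {
    const int batchSize = 1000; // read the clock once per batch, not per call
    long iterations = 0;
    var watch = Stopwatch.StartNew();
    while (watch.Elapsed < duration) {
        for (int i = 0; i < batchSize; i++) {
            func();
        }
        iterations += batchSize;
    }
    watch.Stop();
    Console.WriteLine("{0}: {1} iterations in {2} ms ({3:F1} ns/op)",
        description, iterations, watch.ElapsedMilliseconds,
        watch.Elapsed.TotalMilliseconds * 1000000.0 / iterations);
}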

C# Solutions


Solution 1 - C#

Here is the modified function, as recommended by the community. Feel free to amend it; it's a community wiki.

static double Profile(string description, int iterations, Action func) {
    //Run at highest priority to minimize fluctuations caused by other processes/threads
    Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
    Thread.CurrentThread.Priority = ThreadPriority.Highest;

    // warm up 
    func();

    var watch = new Stopwatch(); 

    // clean up
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    watch.Start();
    for (int i = 0; i < iterations; i++) {
        func();
    }
    watch.Stop();
    Console.Write(description);
    Console.WriteLine(" Time Elapsed {0} ms", watch.Elapsed.TotalMilliseconds);
    return watch.Elapsed.TotalMilliseconds;
}

Make sure you compile in Release with optimizations enabled, and run the tests outside of Visual Studio. This last part is important because the JIT holds back its optimizations when a debugger is attached, even in Release mode.

Solution 2 - C#

Finalisation won't necessarily be completed before GC.Collect returns. The finalisation is queued and then run on a separate thread. This thread could still be active during your tests, affecting the results.

If you want to ensure that finalisation has completed before starting your tests then you might want to call GC.WaitForPendingFinalizers, which will block until the finalisation queue is cleared:

GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();

Solution 3 - C#

If you want to take GC interactions out of the equation, you may want to run your 'warm up' call after the GC.Collect call, not before. That way you know .NET will already have enough memory allocated from the OS for the working set of your function.

Keep in mind that you're making a non-inlined method call for each iteration, so make sure you compare the things you're testing to an empty body. You'll also have to accept that you can only reliably time things that are several times longer than a method call.
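
As a sketch of the empty-body comparison (the helper names are illustrative, not from this answer): time an empty delegate over the same number of iterations and subtract that overhead from the measured result.

static double Time(int iterations, Action func) {
    func(); // warm up
    var watch = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++) {
        func();
    }
    watch.Stop();
    return watch.Elapsed.TotalMilliseconds;
}

static double ProfileNet(string description, int iterations, Action func) {
    // Time an empty body to estimate the cost of the call mechanism itself.
    double overhead = Time(iterations, () => { });
    double total = Time(iterations, func);
    double net = total - overhead;
    Console.WriteLine("{0}: {1:F3} ms (raw {2:F3} ms, call overhead {3:F3} ms)",
        description, net, total, overhead);
    return net;
}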

Also, depending on what kind of stuff you're profiling, you may want to base your timing on running for a certain amount of time rather than for a certain number of iterations -- it tends to lead to more easily comparable numbers without having a very short run for the best implementation and/or a very long one for the worst.

Solution 4 - C#

I'd avoid passing the delegate at all:

  1. A delegate call is roughly a virtual method call. It's not cheap: about 25% of the cost of the smallest memory allocation in .NET. If you're interested in the details, see e.g. this link.
  2. Anonymous delegates may lead to the use of closures without you even noticing. Again, accessing closure fields is noticeably slower than, e.g., accessing a variable on the stack.

An example code leading to closure usage:

public void Test()
{
  int someNumber = 1;
  // someNumber is captured, so the compiler hoists it into a
  // compiler-generated closure class; the lambda then reads a field
  // on that class instead of a local on the stack.
  Profiler.Profile("Closure access", 1000000,
    () => { int sum = someNumber + someNumber; });
}

If you're not aware about closures, take a look at this method in .NET Reflector.
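
One way to avoid the delegate (and any closure) entirely is to pass the body as a struct constrained to an interface; because the JIT specializes generic code for each value type, the call to Run() is direct and can often be inlined. This is a sketch of the general technique -- the interface and type names are illustrative:

public interface IBenchmarkBody {
    void Run();
}

public struct ClosureFreeBody : IBenchmarkBody {
    public int SomeNumber;
    public void Run() { int sum = SomeNumber + SomeNumber; }
}

public static class StructProfiler {
    public static double Profile<TBody>(string description, int iterations, TBody body)
        where TBody : struct, IBenchmarkBody {
        body.Run(); // warm up
        var watch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) {
            body.Run(); // direct, potentially inlined call; no delegate, no closure
        }
        watch.Stop();
        Console.WriteLine("{0} Time Elapsed {1} ms", description, watch.Elapsed.TotalMilliseconds);
        return watch.Elapsed.TotalMilliseconds;
    }
}

Usage: StructProfiler.Profile("No closure", 1000000, new ClosureFreeBody { SomeNumber = 1 });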

Solution 5 - C#

I think the most difficult problem to overcome with benchmarking methods like this is accounting for edge cases and the unexpected. For example - "How do the two code snippets work under high CPU load/network usage/disk thrashing/etc." They're great for basic logic checks to see if a particular algorithm works significantly faster than another. But to properly test most code performance you'd have to create a test that measures the specific bottlenecks of that particular code.

I'd still say that testing small blocks of code often has little return on investment and can encourage using overly complex code instead of simple maintainable code. Writing clear code that other developers, or myself 6 months down the line, can understand quickly will have more performance benefits than highly optimized code.

Solution 6 - C#

I'd call func() several times for the warm-up, not just once.
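
For example, a trivial variation on the warm-up in the question's code (the count of 10 is arbitrary):

// run the body several times so the JIT, branch predictors and caches settle
for (int i = 0; i < 10; i++) {
    func();
}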

Solution 7 - C#

Suggestions for improvement

  1. Detecting if the execution environment is good for benchmarking (such as detecting if a debugger is attached or if JIT optimization is disabled, either of which would result in incorrect measurements).

  2. Measuring parts of the code independently (to see exactly where the bottleneck is).

  3. Comparing different versions/components/chunks of code (In your first sentence you say '... benchmarking small chunks of code to see which implementation is fastest.').

Regarding #1:

  • To detect if a debugger is attached, read the property System.Diagnostics.Debugger.IsAttached (remember to also handle the case where the debugger is initially not attached, but attaches after some time); a combined check is sketched after the snippet below.

  • To detect if jit optimization is disabled, read property DebuggableAttribute.IsJITOptimizerDisabled of the relevant assemblies:

      private bool IsJitOptimizerDisabled(Assembly assembly)
      {
          return assembly.GetCustomAttributes(typeof(DebuggableAttribute), false)
              .Select(customAttribute => (DebuggableAttribute) customAttribute)
              .Any(attribute => attribute.IsJITOptimizerDisabled);
      }
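
Putting the two checks together might look like this (a sketch; ValidateEnvironment and the use of typeof(Program) are illustrative):

private void ValidateEnvironment() {
    // A debugger can attach at any time, so re-check before each run.
    if (System.Diagnostics.Debugger.IsAttached)
        Console.WriteLine("Warning: a debugger is attached; measurements will not be representative.");

    if (IsJitOptimizerDisabled(typeof(Program).Assembly))
        Console.WriteLine("Warning: JIT optimization is disabled; compile in Release mode.");
}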
    

Regarding #2:

This can be done in many ways. One way is to allow several delegates to be supplied and then measure those delegates individually.

Regarding #3:

This could also be done in many ways, and different use-cases would demand very different solutions. If the benchmark is invoked manually, then writing to the console might be fine. However if the benchmark is performed automatically by the build system, then writing to the console is probably not so fine.

One way to do this is to return the benchmark result as a strongly typed object that can easily be consumed in different contexts.
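
A minimal sketch of such a result type (the shape is illustrative, not taken from any particular library):

public sealed class BenchmarkResult {
    public string Description { get; set; }
    public int Iterations { get; set; }
    public TimeSpan Elapsed { get; set; }

    // Derived per-iteration cost, convenient for comparisons.
    public double MillisecondsPerIteration {
        get { return Elapsed.TotalMilliseconds / Iterations; }
    }
}

A console runner can format this for humans, while a build server can compare it against stored baselines and fail the build on regressions.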


Etimo.Benchmarks

Another approach is to use an existing component to perform the benchmarks. At my company, we decided to release our benchmark tool into the public domain. At its core, it manages the garbage collector, the jitter, warm-ups, etc., just like some of the other answers here suggest. It also has the three features I suggested above, and it handles several of the issues discussed in Eric Lippert's blog.

This is an example output where two components are compared and the results are written to the console. In this case the two components compared are called 'KeyedCollection' and 'MultiplyIndexedKeyedCollection':

[Image: Etimo.Benchmarks sample console output]

There is a NuGet package, a sample NuGet package, and the source code is available on GitHub. There is also a blog post.

If you're in a hurry, I suggest you get the sample package and simply modify the sample delegates as needed. If you're not in a hurry, it might be a good idea to read the blog post to understand the details.

Solution 8 - C#

You must also run a "warm up" pass prior to the actual measurement to exclude the time the JIT compiler spends jitting your code.

Solution 9 - C#

Depending on the code you are benchmarking and the platform it runs on, you may need to account for how code alignment affects performance. To do so would probably require an outer wrapper that runs the test multiple times (in separate app domains or processes?), sometimes first calling "padding code" to force it to be JIT compiled, so as to cause the code being benchmarked to be aligned differently. A complete test result would give the best-case and worst-case timings for the various code alignments.

Solution 10 - C#

If you're trying to eliminate the impact of garbage collection from the benchmark completely, is it worth setting GCSettings.LatencyMode?

If not, and you want the impact of garbage created in func to be part of the benchmark, then shouldn't you also force collection at the end of the test (inside the timer)?
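
A sketch of the first idea, wrapped around the timed loop from the question (GCSettings lives in System.Runtime; GCLatencyMode.SustainedLowLatency requires .NET 4.5, older frameworks only offer LowLatency):

var oldMode = GCSettings.LatencyMode;
try {
    // Discourage blocking generation-2 collections while timing.
    GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
    watch.Start();
    for (int i = 0; i < iterations; i++) {
        func();
    }
    watch.Stop();
}
finally {
    GCSettings.LatencyMode = oldMode;
}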

Solution 11 - C#

The basic problem with your question is the assumption that a single measurement can answer all your questions. You need to measure multiple times to get an effective picture of the situation, especially in a garbage-collected language like C#.

Another answer gives an okay way of measuring the basic performance.

static void Profile(string description, int iterations, Action func) {
    // warm up 
    func();

    var watch = new Stopwatch(); 

    // clean up
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    watch.Start();
    for (int i = 0; i < iterations; i++) {
        func();
    }
    watch.Stop();
    Console.Write(description);
    Console.WriteLine(" Time Elapsed {0} ms", watch.Elapsed.TotalMilliseconds);
}

However, this single measurement does not account for garbage collection. A proper profile additionally accounts for the worst-case performance of garbage collection spread out over many calls. (This number is somewhat artificial, since the VM can terminate without ever collecting leftover garbage, but it is still useful for comparing two different implementations of func.)

static void ProfileGarbageMany(string description, int iterations, Action func) {
    // warm up 
    func();

    var watch = new Stopwatch(); 

    // clean up
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    watch.Start();
    for (int i = 0; i < iterations; i++) {
        func();
    }
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    watch.Stop();
    Console.Write(description);
    Console.WriteLine(" Time Elapsed {0} ms", watch.Elapsed.TotalMilliseconds);
}

And one might also want to measure the worst case performance of garbage collection for a method that is only called once.

static void ProfileGarbage(string description, int iterations, Action func) {
    // warm up 
    func();

    var watch = new Stopwatch(); 

    // clean up
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    watch.Start();
    for (int i = 0; i < iterations; i++) {
        func();

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
    }
    watch.Stop();
    Console.Write(description);
    Console.WriteLine(" Time Elapsed {0} ms", watch.Elapsed.TotalMilliseconds);
}

More important than any specific additional measurement, though, is the idea that one should measure multiple different statistics, not just one kind of statistic.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Sam Saffron | View Question on Stackoverflow
Solution 1 - C# | Sam Saffron | View Answer on Stackoverflow
Solution 2 - C# | LukeH | View Answer on Stackoverflow
Solution 3 - C# | Jonathan Rupp | View Answer on Stackoverflow
Solution 4 - C# | Alex Yakunin | View Answer on Stackoverflow
Solution 5 - C# | Paul Alexander | View Answer on Stackoverflow
Solution 6 - C# | Alexey Romanov | View Answer on Stackoverflow
Solution 7 - C# | Joakim | View Answer on Stackoverflow
Solution 8 - C# | Alex Yakunin | View Answer on Stackoverflow
Solution 9 - C# | Edward Brey | View Answer on Stackoverflow
Solution 10 - C# | Danny Tuppeny | View Answer on Stackoverflow
Solution 11 - C# | Molossus Spondee | View Answer on Stackoverflow