When is ReaderWriterLockSlim better than a simple lock?

C# | .Net | Multithreading | Locking

C# Problem Overview


I'm doing a very silly benchmark on the ReaderWriterLockSlim with this code, where reading happens 4x more often than writing:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

class Program
{
    static void Main()
    {
        ISynchro[] test = { new Locked(), new RWLocked() };

        Stopwatch sw = new Stopwatch();

        foreach ( var isynchro in test )
        {
            sw.Reset();
            sw.Start();
            Thread w1 = new Thread( new ParameterizedThreadStart( WriteThread ) );
            w1.Start( isynchro );

            Thread w2 = new Thread( new ParameterizedThreadStart( WriteThread ) );
            w2.Start( isynchro );

            Thread r1 = new Thread( new ParameterizedThreadStart( ReadThread ) );
            r1.Start( isynchro );

            Thread r2 = new Thread( new ParameterizedThreadStart( ReadThread ) );
            r2.Start( isynchro );

            w1.Join();
            w2.Join();
            r1.Join();
            r2.Join();
            sw.Stop();

            Console.WriteLine( isynchro.ToString() + ": " + sw.ElapsedMilliseconds.ToString() + "ms." );
        }

        Console.WriteLine( "End" );
        Console.ReadKey( true );
    }

    static void ReadThread(Object o)
    {
        ISynchro synchro = (ISynchro)o;

        for ( int i = 0; i < 500; i++ )
        {
            Int32? value = synchro.Get( i );
            Thread.Sleep( 50 );
        }
    }

    static void WriteThread( Object o )
    {
        ISynchro synchro = (ISynchro)o;

        for ( int i = 0; i < 125; i++ )
        {
            synchro.Add( i );
            Thread.Sleep( 200 );
        }
    }

}

interface ISynchro
{
    void Add( Int32 value );
    Int32? Get( Int32 index );
}

class Locked : List<Int32>, ISynchro
{
    readonly Object locker = new object();

    #region ISynchro Members

    public new void Add( int value )
    {
        lock ( locker ) 
            base.Add( value );
    }

    public int? Get( int index )
    {
        lock ( locker )
        {
            if ( this.Count <= index )
                return null;
            return this[ index ];
        }
    }

    #endregion
    public override string ToString()
    {
        return "Locked";
    }
}

class RWLocked : List<Int32>, ISynchro
{
    ReaderWriterLockSlim locker = new ReaderWriterLockSlim();

    #region ISynchro Members

    public new void Add( int value )
    {
        try
        {
            locker.EnterWriteLock();
            base.Add( value );
        }
        finally
        {
            locker.ExitWriteLock();
        }
    }

    public int? Get( int index )
    {
        try
        {
            locker.EnterReadLock();
            if ( this.Count <= index )
                return null;
            return this[ index ];
        }
        finally
        {
            locker.ExitReadLock();
        }
    }

    #endregion

    public override string ToString()
    {
        return "RW Locked";
    }
}

But both perform in more or less the same way:

Locked: 25003ms.
RW Locked: 25002ms.
End

Even when making reads 20 times more frequent than writes, the performance is still (almost) the same.

Am I doing something wrong here?

Kind regards.

C# Solutions


Solution 1 - C#

In your example, the sleeps mean that generally there is no contention. An uncontended lock is very fast. For this to matter, you would need a contended lock; if there are writes in that contention, they should be about the same (lock may even be quicker) - but if it is mostly reads (with a write contention rarely), I would expect the ReaderWriterLockSlim lock to out-perform the lock.

Personally, I prefer another strategy here, using reference-swapping - so reads can always read without ever checking / locking / etc. Writes make their change to a cloned copy, then use Interlocked.CompareExchange to swap the reference (re-applying their change if another thread mutated the reference in the interim).
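
A minimal sketch of that copy-and-swap idea (the CopyOnWriteList name, the List<Int32> payload and the use of Volatile.Read are illustrative assumptions, not code from the answer):

class CopyOnWriteList
{
    // Readers only ever see a fully built list, so they never need a lock.
    List<Int32> items = new List<Int32>();

    public Int32? Get( Int32 index )
    {
        var snapshot = Volatile.Read( ref items );    // grab the current reference once
        if ( snapshot.Count <= index )
            return null;
        return snapshot[ index ];
    }

    public void Add( Int32 value )
    {
        while ( true )
        {
            var original = Volatile.Read( ref items );
            var copy = new List<Int32>( original ) { value };    // apply the change to a clone

            // Publish the clone only if no other thread swapped the reference in the interim;
            // otherwise loop and re-apply the change to the newer list.
            if ( Interlocked.CompareExchange( ref items, copy, original ) == original )
                return;
        }
    }
}

The trade-off is that every write copies the whole list, which only pays off when reads heavily dominate writes.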

Solution 2 - C#

My own tests indicate that ReaderWriterLockSlim has about 5x the overhead of a normal lock. That means that for the RWLS to outperform a plain old lock, the following conditions would generally have to hold.

  • The number of readers significantly outnumbers the writers.
  • The lock would have to be held long enough to overcome the additional overhead.

In most real applications these two conditions are not enough to overcome that additional overhead. In your code specifically, the locks are held for such a short period of time that the lock overhead will probably be the dominating factor. If you were to move those Thread.Sleep calls inside the lock then you would probably get a different result.

Solution 3 - C#

There's no contention in this program. The Get and Add methods execute in a few nanoseconds. The odds that multiple threads hit those methods at exactly the same time are vanishingly small.

Put a Thread.Sleep(1) call in them and remove the sleep from the threads to see the difference.
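
A sketch of that change applied to the question's Locked.Get and ReadThread (the RWLocked methods and the write side would get the same treatment; this is an adaptation of the suggestion, not code from the answer):

public int? Get( int index )
{
    lock ( locker )
    {
        Thread.Sleep( 1 );          // simulate real work while the lock is held
        if ( this.Count <= index )
            return null;
        return this[ index ];
    }
}

static void ReadThread( Object o )
{
    ISynchro synchro = (ISynchro)o;

    // No sleep between iterations, so calls arrive back to back and actually contend.
    for ( int i = 0; i < 500; i++ )
        synchro.Get( i );
}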

Solution 4 - C#

Edit 2: Simply removing the Thread.Sleep calls from ReadThread and WriteThread, I saw Locked outperform RWLocked. I believe Hans hit the nail on the head here; your methods are too fast and create no contention. When I added Thread.Sleep(1) to the Get and Add methods of Locked and RWLocked (and used 4 read threads against 1 write thread), RWLocked beat the pants off of Locked.


Edit: OK, if I were actually thinking when I first posted this answer, I would've realized at least why you put the Thread.Sleep calls in there: you were trying to reproduce the scenario of reads happening more frequently than writes. This is just not the right way to do that. Instead, I would introduce extra overhead to your Add and Get methods to create a greater chance of contention (as Hans suggested), create more read threads than write threads (to ensure more frequent reads than writes), and remove the Thread.Sleep calls from ReadThread and WriteThread (which actually reduce contention, achieving the opposite of what you want).


I like what you've done so far. But here are a few issues I see right off the bat:

  1. Why the Thread.Sleep calls? These are just inflating your execution times by a constant amount, which is going to artificially make performance results converge.
  2. I also wouldn't include the creation of new Thread objects in the code that's measured by your Stopwatch; a Thread is not a trivial object to create (see the sketch below).

Whether you will see a significant difference once you address the two issues above, I don't know. But I believe they should be addressed before the discussion continues.
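
For the second point, one possible rearrangement of the benchmark loop is to build the Thread objects first and only start the Stopwatch once they are ready to run (a sketch, not the author's code):

// Thread construction happens outside the measured region.
Thread w1 = new Thread( new ParameterizedThreadStart( WriteThread ) );
Thread w2 = new Thread( new ParameterizedThreadStart( WriteThread ) );
Thread r1 = new Thread( new ParameterizedThreadStart( ReadThread ) );
Thread r2 = new Thread( new ParameterizedThreadStart( ReadThread ) );

sw.Reset();
sw.Start();

w1.Start( isynchro );
w2.Start( isynchro );
r1.Start( isynchro );
r2.Start( isynchro );

w1.Join();
w2.Join();
r1.Join();
r2.Join();

sw.Stop();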

Solution 5 - C#

You will get better performance with ReaderWriterLockSlim than with a simple lock if the locked section of code takes longer to execute; in that case readers can work in parallel. Acquiring a ReaderWriterLockSlim takes more time than entering a simple Monitor. Check my ReaderWriterLockTiny implementation for a readers-writer lock that is even faster than the simple lock statement and still offers readers-writer functionality: http://i255.wordpress.com/2013/10/05/fast-readerwriterlock-for-net/

Solution 6 - C#

Check out this article: Link

Your sleeps are probably long enough that they make your locking/unlocking statistically insignificant.

Solution 7 - C#

Uncontested locks take on the order of microseconds to acquire, so your execution time will be dwarfed by your calls to Sleep.

Solution 8 - C#

Unless you have multicore hardware (or at least the same as your planned production environment) you won't get a realistic test here.

A more sensible test would be to extend the lifetime of your locked operations by putting a brief delay inside the lock. That way you should really be able to contrast the parallelism added using ReaderWriterLockSlim versus the serialization implied by basic lock().

Currently, the time taken by your locked operations is lost in the noise generated by the Sleep calls that happen outside the locks. The total time in either case is mostly Sleep-related.

Are you sure your real-world app will have equal numbers of reads and writes? ReaderWriterLockSlim is really better for the case where you have many readers and relatively infrequent writers. 1 writer thread versus 3 reader threads should demonstrate ReaderWriterLockSlim benefits better, but in any case your test should match your expected real-world access pattern.

Solution 9 - C#

I guess this is because of the sleeps you have in your reader and writer threads.
Your read thread sleeps 500 times for 50 ms, which is 25,000 ms in total, so most of the time it is just sleeping.

Solution 10 - C#

> When is ReaderWriterLockSlim better than a simple lock?

When you have significantly more reads than writes.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: vtortola (View Question on Stackoverflow)
Solution 1 - C#: Marc Gravell (View Answer on Stackoverflow)
Solution 2 - C#: Brian Gideon (View Answer on Stackoverflow)
Solution 3 - C#: Hans Passant (View Answer on Stackoverflow)
Solution 4 - C#: Dan Tao (View Answer on Stackoverflow)
Solution 5 - C#: i255 (View Answer on Stackoverflow)
Solution 6 - C#: Nelson Rothermel (View Answer on Stackoverflow)
Solution 7 - C#: MSN (View Answer on Stackoverflow)
Solution 8 - C#: Steve Townsend (View Answer on Stackoverflow)
Solution 9 - C#: Itay Karo (View Answer on Stackoverflow)
Solution 10 - C#: Illidan (View Answer on Stackoverflow)