Destructor vs IDisposable?

C#.Net

C# Problem Overview


I've read about disposing of objects, the IDisposable interface, and destructors in C#, but to me they seem to do the same thing.

What is the difference between the two? Why would I use one over the other? In fact, the example linked below uses both the IDisposable interface and a destructor:

http://msdn.microsoft.com/en-us/library/system.idisposable.aspx

The comments say the destructor runs only if the Dispose method is not called, but how do I decide when to use one over the other?

C# Solutions


Solution 1 - C#

I wrote a fairly in-depth post which should help explain finalizers, IDisposable, and when you should use one or the other: http://gregbee.ch/blog/implementing-and-using-the-idisposable-interface

Probably the most relevant part is quoted below:

> When you are using unmanaged resources such as handles and database connections, you should ensure that they are held for the minimum amount of time, using the principle of acquire late and release early. In C++, releasing the resources is typically done in the destructor, which is deterministically run at the point where the object is deleted. The .NET runtime, however, uses a garbage collector (GC) to clean up and reclaim the memory used by objects that are no longer reachable; as this runs on a periodic basis, it means that the point at which your object is cleaned up is nondeterministic. The consequence of this is that destructors do not exist for managed objects, as there is no deterministic place to run them.
>
> Instead of destructors, C# has finalizers, which are implemented by overriding the Finalize method defined on the base Object class (though C# somewhat confusingly uses the C++ destructor syntax ~Object for this). If an object overrides the Finalize method, then rather than being collected by the GC when it is out of scope, the GC places it on a finalizer queue. In the next GC cycle all finalizers on the queue are run (on a single thread in the current implementation) and the memory from the finalized objects is reclaimed. It's fairly obvious from this why you don't want to do cleanup in a finalizer: it takes two GC cycles to collect the object instead of one, and there is a single thread where all finalizers are run while every other thread is suspended, so it's going to hurt performance.
>
> So if you don't have destructors, and you don't want to leave the cleanup to the finalizer, then the only option is to manually, deterministically, clean up the object. Enter the IDisposable interface, which provides a standard for supporting this functionality and defines a single method, Dispose, where you put the cleanup logic for the object. When used within a finally block, this interface provides equivalent functionality to destructors. The reason finally blocks exist in the language is primarily to support the IDisposable interface; this is why C++ has simply try/catch, as there is no need for a finally block with destructors.
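To make the finally-block point concrete, here is a minimal sketch of deterministic cleanup through IDisposable (FileStream and the file name are just stand-ins for whatever disposable resource you hold); the using statement below it is compiler shorthand for the same try/finally shape.

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        // Manual pattern: acquire the resource, then guarantee cleanup in finally.
        FileStream stream = null;
        try
        {
            stream = new FileStream("data.txt", FileMode.OpenOrCreate);
            // ... work with the stream ...
        }
        finally
        {
            if (stream != null)
            {
                stream.Dispose();   // deterministic cleanup, regardless of exceptions
            }
        }

        // The using statement compiles down to the same try/finally shape.
        using (var stream2 = new FileStream("data.txt", FileMode.OpenOrCreate))
        {
            // ... work with the stream ...
        }   // stream2.Dispose() runs here, even if an exception is thrown
    }
}
```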

Solution 2 - C#

Short version

A finalizer gives you an opportunity to dispose of unmanaged resources in case the user of your object forgot to call IDisposable.Dispose.

If your object implements IDisposable, the user of your object must call .Dispose. You don't have to clean up the user's mess, but it's a nice thing to do.
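For reference, a minimal sketch of that safety-net arrangement, using a hypothetical class that wraps an unmanaged allocation: Dispose does the real cleanup and suppresses finalization, while the finalizer only runs if the caller forgot to call Dispose.

```csharp
using System;
using System.Runtime.InteropServices;

// Sketch of the standard dispose pattern: Dispose is the normal path,
// and the finalizer is only a backstop for callers who never call it.
public class UnmanagedResourceHolder : IDisposable
{
    private IntPtr _handle = Marshal.AllocHGlobal(100); // stand-in for an unmanaged resource
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);  // cleanup already done; skip the finalizer queue
    }

    ~UnmanagedResourceHolder()      // safety net: runs only if Dispose was never called
    {
        Dispose(false);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;

        if (disposing)
        {
            // dispose other managed IDisposable fields here
        }

        Marshal.FreeHGlobal(_handle);   // release the unmanaged resource
        _handle = IntPtr.Zero;
        _disposed = true;
    }
}
```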


My most popular answer on Stackoverflow walks you through, from the beginning, why you have IDisposable, what it should do, what your finalizer can do, and what it shouldn't do.

> This answer melts faces

has been used to describe it :P

Solution 3 - C#

Having a destructor (~Object()) in a managed programming language is a really dumb idea. It makes perfect sense for unmanaged languages like C and C++ to have destructors, since they use the RAII idiom, but for managed languages like Java and C# it is simply absurd.

Joshua Bloch, a former project lead on the Java Collections Framework, has pointed out that the finalize() method in Java (the equivalent of C#'s C++-style destructor) was one of the biggest mistakes ever made in the language. As in C#, finalize() in Java adds overhead to "new", because the object must be added to the finalizer queue during allocation. Moreover, the garbage collector must pop the queue and run finalize(), so the overhead is paid twice during GC.

C# added many improved features, like "using (IDisposable) {}", which not only confines the IDisposable variable to the scope of the "using" block but also guarantees its cleanup. My question is: why did C# follow the same trail as Java, which led to this great mistake? Maybe if the development of .NET had started around 2003–2005, when many Java architects had recognised the fallacy of finalize(), the mistake would have been prevented.
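As an illustration of that guarantee, here is a short sketch with a database connection (SqlConnection from System.Data.SqlClient and the connection string are placeholders; any IDisposable behaves the same way): Dispose runs when the block exits, even on an exception, and the variable is not visible outside the block.

```csharp
using System.Data.SqlClient;

class Example
{
    static void Main()
    {
        // Placeholder connection string, for illustration only.
        using (var connection = new SqlConnection("Server=.;Database=Example;Integrated Security=true"))
        {
            connection.Open();
            // ... execute commands against the connection ...
        }   // connection.Dispose() is guaranteed to run here, even if an exception was thrown

        // 'connection' is out of scope here; the compiler rejects any further use of it.
    }
}
```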

Good ideas from one language are often transferred to another, like the "IDisposable/using combo" in C#, which was carried over to Java 1.7 as its "try (object-to-dispose) {}" statement. But it's too bad that language architects fail to spot the bad ideas disguised as good ideas during that transfer.

My advice is to never use a ~Destructor() and to stick with the IDisposable/using combo whenever you need to manually clean up unmanaged resources such as database connections.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type    | Original Author | Original Content on Stackoverflow
Question        | GurdeepS        | View Question on Stackoverflow
Solution 1 - C# | Greg Beech      | View Answer on Stackoverflow
Solution 2 - C# | Ian Boyd        | View Answer on Stackoverflow
Solution 3 - C# | David Lee       | View Answer on Stackoverflow