Default implementation for Object.GetHashCode()

Tags: .net, hash, gethashcode

.Net Problem Overview


How does the default implementation for GetHashCode() work? And does it handle structures, classes, arrays, etc. efficiently and well enough?

I am trying to decide in what cases I should pack my own and in what cases I can safely rely on the default implementation to do well. I don't want to reinvent the wheel, if at all possible.

.Net Solutions


Solution 1 - .Net

For a class, the defaults are essentially reference equality, and that is usually fine. If writing a struct, it is more common to override equality (not least to avoid boxing), but it is very rare you write a struct anyway!

When overriding equality, you should always have a matching Equals() and GetHashCode() (i.e. for two values, if Equals() returns true they must return the same hash-code, but the converse is not required) - and it is common to also provide ==/!= operators, and often to implement IEquatable<T> too.

For generating the hash code, it is common to use a factored sum, as this avoids collisions on paired values - for example, for a basic 2 field hash:

public override int GetHashCode()
{
    unchecked // disable overflow checking, for the unlikely possibility that you
    {         // are compiling with overflow-checking enabled
        int hash = 27;
        hash = (13 * hash) + field1.GetHashCode();
        hash = (13 * hash) + field2.GetHashCode();
        return hash;
    }
}

This has the advantage that:

  • the hash of {1,2} is not the same as the hash of {2,1}
  • the hash of {1,1} is not the same as the hash of {2,2}

and so on - collisions that are common if you just combine the fields with an unweighted sum, or xor (^), etc.
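Putting that together, here is a minimal sketch of the whole pattern (matching Equals/GetHashCode, ==/!= operators and IEquatable<T>) for a hypothetical two-field struct; the type and field names are only illustrative, not from the original answer:

public struct Point2 : IEquatable<Point2>
{
    public readonly int X;
    public readonly int Y;

    public Point2(int x, int y)
    {
        X = x;
        Y = y;
    }

    public bool Equals(Point2 other)
    {
        return X == other.X && Y == other.Y;
    }

    public override bool Equals(object obj)
    {
        return obj is Point2 && Equals((Point2)obj);
    }

    public override int GetHashCode()
    {
        unchecked // factored sum, as described above
        {
            int hash = 27;
            hash = (13 * hash) + X.GetHashCode();
            hash = (13 * hash) + Y.GetHashCode();
            return hash;
        }
    }

    public static bool operator ==(Point2 left, Point2 right) { return left.Equals(right); }
    public static bool operator !=(Point2 left, Point2 right) { return !left.Equals(right); }
}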

Solution 2 - .Net

namespace System {
    public class Object {
        [MethodImpl(MethodImplOptions.InternalCall)]
        internal static extern int InternalGetHashCode(object obj);

        public virtual int GetHashCode() {
            return InternalGetHashCode(this);
        }
    }
}

InternalGetHashCode is mapped to an ObjectNative::GetHashCode function in the CLR, which looks like this:

FCIMPL1(INT32, ObjectNative::GetHashCode, Object* obj) {  
    CONTRACTL  
    {  
        THROWS;  
        DISABLED(GC_NOTRIGGER);  
        INJECT_FAULT(FCThrow(kOutOfMemoryException););  
        MODE_COOPERATIVE;  
        SO_TOLERANT;  
    }  
    CONTRACTL_END;  

    VALIDATEOBJECTREF(obj);  
  
    DWORD idx = 0;  
  
    if (obj == 0)  
        return 0;  
  
    OBJECTREF objRef(obj);  

    HELPER_METHOD_FRAME_BEGIN_RET_1(objRef);        // Set up a frame  

    idx = GetHashCodeEx(OBJECTREFToObject(objRef));  

    HELPER_METHOD_FRAME_END();  

    return idx;  
}  
FCIMPLEND

The full implementation of GetHashCodeEx is fairly large, so it's easier to just link to the C++ source code.
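As a quick illustration (not part of the original answer): the default implementation hashes the object's identity, not its contents, so two instances with identical field values will normally report different hash codes, and mutating an instance does not change its hash:

class Box
{
    public int Value;
}

var a = new Box { Value = 1 };
var b = new Box { Value = 1 };

Console.WriteLine(a.GetHashCode() == b.GetHashCode()); // almost always False: same contents, different identities
Console.WriteLine(a.GetHashCode() == a.GetHashCode()); // True: stable for the object's lifetime

a.Value = 42;
Console.WriteLine(a.GetHashCode()); // unchanged by the mutation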

Solution 3 - .Net

Since I couldn't find an answer that explains why we should override GetHashCode and Equals for custom structs, and why the default implementation "is not likely to be suitable for use as a key in a hash table", I'll leave a link to this blog post, which explains it with a real-world example of a problem that actually happened.

I recommend reading the whole post, but here is a summary (emphasis and clarifications added).

Why the default hash for structs is slow and not very good:

> The way the CLR is designed, every call to a member defined in System.ValueType or System.Enum types [may] cause a boxing allocation [...]
>
> An implementer of a hash function faces a dilemma: make a good distribution of the hash function or to make it fast. In some cases, it's possible to achieve them both, but it is hard to do this generically in ValueType.GetHashCode.
>
> The canonical hash function of a struct "combines" hash codes of all the fields. But the only way to get a hash code of a field in a ValueType method is to use reflection. So, the CLR authors decided to trade speed over the distribution and the default GetHashCode version just returns a hash code of a first non-null field and "munges" it with a type id [...] This is a reasonable behavior unless it's not. For instance, if you're unlucky enough and the first field of your struct has the same value for most instances, then a hash function will provide the same result all the time. And, as you may imagine, this will cause a drastic performance impact if these instances are stored in a hash set or a hash table.
>
> [...] Reflection-based implementation is slow. Very slow.
>
> [...] Both ValueType.Equals and ValueType.GetHashCode have a special optimization. If a type does not have "pointers" and is properly packed [...] then more optimal versions are used: GetHashCode iterates over an instance and XORs blocks of 4 bytes and Equals method compares two instances using memcmp. [...] But the optimization is very tricky. First, it is hard to know when the optimization is enabled [...] Second, a memory comparison will not necessarily give you the right results. Here is a simple example: [...] -0.0 and +0.0 are equal but have different binary representations.

Real-world issue described in the post:

private readonly HashSet<(ErrorLocation, int)> _locationsWithHitCount;
readonly struct ErrorLocation
{
    // Empty almost all the time
    public string OptionalDescription { get; }
    public string Path { get; }
    public int Position { get; }
}

> We used a tuple that contained a custom struct with default equality implementation. And unfortunately, the struct had an optional first field that was almost always equals to [empty string]. The performance was OK until the number of elements in the set increased significantly causing a real performance issue, taking minutes to initialize a collection with tens of thousands of items.

So, to answer the question "in what cases I should pack my own and in what cases I can safely rely on the default implementation", at least in the case of structs, you should override Equals and GetHashCode whenever your custom struct might be used as a key in a hash table or Dictionary.
I would also recommend implementing IEquatable<T> in this case, to avoid boxing.
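For illustration, here is one way such an override could look for the ErrorLocation struct above. This sketch is mine, not from the blog post, and System.HashCode.Combine is only available on .NET Core 2.1+/.NET Standard 2.1 (on older targets, combine the fields manually as in Solution 1):

readonly struct ErrorLocation : IEquatable<ErrorLocation>
{
    public string OptionalDescription { get; }
    public string Path { get; }
    public int Position { get; }

    public ErrorLocation(string optionalDescription, string path, int position)
    {
        OptionalDescription = optionalDescription;
        Path = path;
        Position = position;
    }

    public bool Equals(ErrorLocation other)
    {
        return OptionalDescription == other.OptionalDescription
            && Path == other.Path
            && Position == other.Position;
    }

    public override bool Equals(object obj)
    {
        return obj is ErrorLocation other && Equals(other);
    }

    public override int GetHashCode()
    {
        // Combines all fields, so the mostly-empty first field no longer
        // dominates the distribution.
        return HashCode.Combine(OptionalDescription, Path, Position);
    }
}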

As the other answers said, if you're writing a class, the default hash using reference equality is usually fine, so I wouldn't bother in this case, unless you need to override Equals (then you would have to override GetHashCode accordingly).

Solution 4 - .Net

The documentation for the GetHashCode method for Object says "the default implementation of this method must not be used as a unique object identifier for hashing purposes." and the one for ValueType says "If you call the derived type's GetHashCode method, the return value is not likely to be suitable for use as a key in a hash table.".

The basic data types like byte, short, int, long, char and string implement a good GetHashCode method. Some other classes and structures, like Point for example, implement a GetHashCode method that may or may not be suitable for your specific needs. You just have to try it out to see if it's good enough.

The documentation for each class or structure can tell you if it overrides the default implementation or not. If it doesn't override it you should use your own implementation. For any classes or structs that you create yourself where you need to use the GetHashCode method, you should make your own implementation that uses the appropriate members to calculate the hash code.

Solution 5 - .Net

Generally speaking, if you're overriding Equals, you also want to override GetHashCode, because both are used when comparing your class/struct for equality.

Equals is used when checking whether two instances Foo A, B are equal, for example:

if (A.Equals(B)) // or A == B, if you also overload the == operator

Since two distinct instances will never be the same reference, Equals has to compare the internal members instead:

public override bool Equals(object o)
{
    if (o == null) return false;

    MyType other = o as MyType;
    if (other == null) return false;

    if (other.Prop1 != this.Prop1) return false;

    return other.Prop2 == this.Prop2;
}

GetHashCode is generally used by hash tables. The hash code generated by your class should always be the same for a given object state.

I typically do,

public override int GetHashCode()
{
    int hashCode = this.GetType().ToString().GetHashCode();
    hashCode ^= this.Prop1.GetHashCode();
    // ... repeat for the remaining members

    return hashCode;
}

Some will say that the hashcode should only be calculated once per object lifetime, but I don't agree with that (and I'm probably wrong).

With the default implementation provided by object, two instances of your class are only equal if they are the very same reference. By overriding Equals and GetHashCode, you can report equality based on internal values rather than the object's reference.
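As a small illustration (assuming MyType has the overrides sketched above, with settable Prop1/Prop2 of type int and string), two separately constructed instances with the same values are then treated as the same key by a HashSet:

var set = new HashSet<MyType>();
set.Add(new MyType { Prop1 = 1, Prop2 = "a" });

// True with value-based Equals/GetHashCode;
// False with the default reference-based implementation.
Console.WriteLine(set.Contains(new MyType { Prop1 = 1, Prop2 = "a" }));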

Solution 6 - .Net

If you're just dealing with POCOs you can use this utility to simplify your life somewhat:

var hash = HashCodeUtil.GetHashCode(
           poco.Field1,
           poco.Field2,
           ...,
           poco.FieldN);

...

public static class HashCodeUtil
{
    public static int GetHashCode(params object[] objects)
    {
        unchecked // let the arithmetic wrap rather than throw if overflow checking is enabled
        {
            int hash = 13;

            foreach (var obj in objects)
            {
                hash = (hash * 7) + (!ReferenceEquals(null, obj) ? obj.GetHashCode() : 0);
            }

            return hash;
        }
    }
}

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Fung | View Question on Stackoverflow
Solution 1 - .Net | Marc Gravell | View Answer on Stackoverflow
Solution 2 - .Net | David Brown | View Answer on Stackoverflow
Solution 3 - .Net | geekley | View Answer on Stackoverflow
Solution 4 - .Net | Guffa | View Answer on Stackoverflow
Solution 5 - .Net | Bennett Dill | View Answer on Stackoverflow
Solution 6 - .Net | Daniel Marshall | View Answer on Stackoverflow