There are a few conceptual mix-ups here that need to be cleared up before thinking about the most appropriate solution:
Is there really a "collision", or is your problem "duplicate keys"?
If your data set has, for example, the key "a" mapping to both 1 and 2, then the problem is not a hash collision. A collision would be if we had "a" mapping to 1 and "b" mapping to 2, but hash(a) == hash(b).
3 collisions in 59 entries does not seem "very small" to me; it is an excessive number, especially if the hash function is well made (as I believe is the case with GetHashCode). So I suspect your case is really one of duplicate keys, but only you can confirm that. If so, and it is normal in your application for one key to point to more than one value, then a solution like the one you proposed (mapping each key to a list) would indeed be correct.
If there is a collision, who is responsible for handling it?
Collisions can occur with any hash function, and the libraries that use them are (or should be) prepared to handle them. In the example above, if you insert "a" and "b" into a Dictionary and there is a collision between their hashes, it is the Dictionary's obligation to do something about it, not yours. Unless that library is very poorly implemented (I have no practical experience with C#, but I doubt that is the case), this will be handled transparently for the programmer-user [of the library, i.e. you].
Your example above leads me to wonder whether a Dictionary<int, int> was even what you wanted. If your keys are strings, wouldn't it make more sense to use a Dictionary<string, int>? Because if you call the string's GetHashCode manually and use its result as the key (rather than the original string itself), then you are taking onto yourself a responsibility that belongs to the library (handling collisions). As far as I know, there's no reason to do this...
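As a rough sketch of the difference (the strings and values here are made up for the example):

```csharp
using System;
using System.Collections.Generic;

class KeyChoiceExample
{
    static void Main()
    {
        // Risky: using the hash code itself as the key. If two different
        // strings ever produced the same hash code, the indexer below would
        // silently overwrite the first entry (and Add would throw), so the
        // collision becomes *your* problem.
        var byHash = new Dictionary<int, int>();
        byHash["apple".GetHashCode()] = 10;
        byHash["pear".GetHashCode()] = 20;

        // Safer: use the original string as the key and let the Dictionary
        // call GetHashCode and handle any collisions internally.
        var byString = new Dictionary<string, int>();
        byString["apple"] = 10;
        byString["pear"] = 20;

        Console.WriteLine(byString["apple"]);  // 10
    }
}
```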
If, after reading the above, you still need to handle collisions manually, please update your question with more details and I'll try to point you toward your options (chaining, rehashing...).