Problem Detail: Under the assumption that the hash function is uniform, the worst-case performance of the search operation in a separate-chaining hash table (e.g. java.util.HashMap) is stated as $O(\log n)$. Why? I cannot really understand this. If the hash values are uniformly distributed, each bucket should contain approximately the same number of elements. So with a load factor (elements_number/buckets_number) of, say, $0.5$, shouldn't search be guaranteed constant time, $O(1)$?
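To make the question concrete, here is a small sketch (the class name, bucket counts, and seed are illustrative, not from the original post): it throws $n$ uniformly random keys into $m$ buckets and compares the average chain length (the load factor $n/m$) with the length of the longest chain. Even under perfect uniformity, the longest chain is noticeably longer than the average.

```java
import java.util.Random;

public class ChainLengths {
    // Distribute n uniformly random keys over m buckets (the "uniform hash"
    // assumption) and return the length of the longest resulting chain.
    public static int maxChain(int n, int m, long seed) {
        int[] count = new int[m];
        Random rnd = new Random(seed);
        for (int i = 0; i < n; i++) {
            count[rnd.nextInt(m)]++;   // uniform choice of bucket
        }
        int max = 0;
        for (int c : count) max = Math.max(max, c);
        return max;
    }

    public static void main(String[] args) {
        int n = 1_000_000, m = 2_000_000;   // load factor n/m = 0.5
        System.out.println("average chain = " + (double) n / m);
        System.out.println("longest chain = " + maxChain(n, m, 1L));
        // Even with a perfectly uniform hash, the longest chain grows with n
        // (roughly Theta(log n / log log n)), so the worst bucket is not O(1)
        // even though the average bucket is.
    }
}
```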
Asked By : user3663882
Answered By : Philip Becker
The load factor denotes the average expected length of a chain, so it is relevant to an average-case analysis, not a worst-case analysis. That is why, on average, you expect search to take constant time.

In the worst case, however, all of your elements hash to the same bucket and form a single chain of length n. What happens then depends on the data structure used to implement the chains. If you use a sorted array, binary search gives a worst-case search cost of O(log n). If you use an unsorted list, the worst case for search is O(n). More generally, the choice of chaining structure determines the (worst- and average-case) performance of insert, delete and search.

According to Coding-Geek.com, Java 7's HashMap implements its chains as (unsorted) linked lists, while Java 8 converts a long chain into a balanced binary tree (a red-black tree) once it grows past a threshold. Insert, delete and search in a balanced tree of size n all have complexity O(log n).
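The worst case described above can be forced deliberately. The sketch below (the class name BadKey and the constant 42 are illustrative, not from the original post) gives every key the same hash code, so all entries land in one bucket: lookups remain correct, but in Java 7 each get() walks the whole chain in O(n), while Java 8+ treeifies the oversized bin and answers in O(log n).

```java
import java.util.HashMap;
import java.util.Map;

public class CollidingKeys {
    // Hypothetical key type whose hashCode sends every instance to the
    // same bucket -- the pathological worst case discussed above.
    static final class BadKey {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; }   // constant: all keys collide
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            map.put(new BadKey(i), i);   // every entry lands in the same bucket
        }
        // Lookups are still correct; only their cost degrades: a Java 7
        // HashMap walks a 1000-node list here, a Java 8+ HashMap searches
        // the red-black tree the bin was converted into.
        System.out.println(map.get(new BadKey(500)));  // prints 500
    }
}
```

Note that this is why overriding hashCode() with a poor (or constant) implementation silently turns a hash table into a linear scan in older JDKs; the Java 8 treeification is a mitigation, not a license to write bad hash functions.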
Best Answer from Stack Exchange
Question Source : http://cs.stackexchange.com/questions/57564