Question

I am a computer science sophomore doing a data structures and algorithms course. My professor said that insertion sort requires random access, while merge sort does not.

According to him, the insertion step in insertion sort requires random access. But can't it be implemented with sequential access in a linked list? You walk through the sorted portion node by node, and as soon as the next node's element is greater than the element you wish to insert, you splice the new element in after the current node (for an ascending-order list).
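For concreteness, here is a rough sketch of that idea in Python. The Node class and the function name are my own, purely to illustrate the sequential walk and the splice:

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def insertion_sort(head):
        """Insertion sort on a singly linked list, finding each
        insertion point by walking the sorted prefix from its head."""
        sorted_head = None          # head of the already-sorted list
        while head is not None:
            node, head = head, head.next
            if sorted_head is None or node.value < sorted_head.value:
                # New smallest element: splice it in at the front.
                node.next = sorted_head
                sorted_head = node
            else:
                # Walk sequentially until the next node is larger,
                # then splice the new node in after the current one.
                cur = sorted_head
                while cur.next is not None and cur.next.value <= node.value:
                    cur = cur.next
                node.next = cur.next
                cur.next = node
        return sorted_head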

He is almost never wrong, but he also doesn't entertain doubts, due to which I'm forced to ask here. Please let me know. Thanks!


The solution

Ok, so I asked this question on the theoretical CS stack exchange website.

Louis has given what I think is the most suitable answer to this question:

Your implementation of linked lists also needs to be able to access memory non-sequentially for the pointer operations that splice in the new value.

That is, when I insert a new node at an arbitrary location in the linked list, I am assuming I have random access to memory; otherwise, I would have to shift all the subsequent nodes ahead in memory before adding the new one.

Other tips

Wikipedia describes a variant of insertion sort that requires random access to binary search for the insertion point, noting that it is useful when comparisons are much more expensive than swaps. It is labeled "binary insertion sort", and it could well be what your professor is talking about.

Its average- and worst-case complexity is O(n^2) swaps and O(n log n) comparisons, which is better than the O(n^2) swaps and comparisons in your method.
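A minimal array-based sketch of that variant, assuming Python's bisect module for the binary search (the function name is mine):

    import bisect

    def binary_insertion_sort(a):
        """Sort the list a in place. The insertion point is found with a
        binary search, but making room for the element still shifts O(i)
        items on each step, hence O(n^2) moves overall."""
        for i in range(1, len(a)):
            x = a[i]
            # Binary search over the sorted prefix a[:i] -- this is the
            # step that needs random access.
            pos = bisect.bisect_right(a, x, 0, i)
            # Shift the larger elements one slot right and drop x in place.
            a[pos + 1:i + 1] = a[pos:i]
            a[pos] = x
        return a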

If you are operating on a data structure, such as a heap, with O(log n) insert and O(log n) search, then the overall running time becomes O(n log n). This would normally be described as heapsort or similar.
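A minimal sketch of that idea using Python's heapq module (the function name is mine): n inserts at O(log n) each, followed by n extract-min operations at O(log n) each, give O(n log n) overall.

    import heapq

    def heap_sort(items):
        """O(n log n) sort built from a structure with O(log n) insert
        and O(log n) extract-min: push everything, then pop in order."""
        heap = []
        for x in items:
            heapq.heappush(heap, x)      # n inserts, O(log n) each
        return [heapq.heappop(heap) for _ in range(len(heap))]  # n extracts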

Licensed under: CC-BY-SA with attribution