Based on the table linked by @bigown, there are at least 4 sorting algorithms which, on average, have O(n*log(n)) complexity, namely:
- Quicksort
- Mergesort
- Timsort
- Heapsort
Copying the data to an array, sorting it, and copying it back can be an option.
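If O(1) extra space is not a hard requirement, that option is the simplest route. A minimal sketch in Java (the language here is purely illustrative), letting the standard library do the work:

import java.util.Collections;
import java.util.LinkedList;

public class SortViaArray {
    public static void main(String[] args) {
        LinkedList<Integer> list = new LinkedList<>(java.util.List.of(5, 1, 4, 2, 3));

        // Collections.sort copies the list into an array, sorts the array
        // (Timsort for objects) and writes the elements back through a ListIterator,
        // so it stays O(n log n) in time at the cost of O(n) extra space.
        Collections.sort(list);

        System.out.println(list); // [1, 2, 3, 4, 5]
    }
}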
The Heapsort page on Wikipedia says:
Merge sort can be adapted to operate on singly linked lists with O(1) extra space. Heapsort can be adapted to operate on doubly linked lists with only O(1) extra space overhead.
Unfortunately, there is no example or reference. How can this be done? Basically, you replace the random access into the array with references to the elements of the linked list.
In the case of Heap sort, the Wikipedia algorithm looks like this:
end ← count - 1
while end > 0 do
    (a[0] is the root and largest value. The swap moves it in front of the sorted elements.)
    swap(a[end], a[0])
    (the heap size is reduced by one)
    end ← end - 1
    (the swap ruined the heap property, so restore it)
    siftDown(a, 0, end)
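The quoted snippet is only the outer loop; heapify and siftDown are where all the index arithmetic lives, and those indices are exactly what has to be replaced by list traversal. For reference, a direct Java rendering of the full Wikipedia array pseudocode could look like this:

public class ArrayHeapsort {
    static void heapsort(int[] a) {
        int count = a.length;
        heapify(a, count);
        int end = count - 1;
        while (end > 0) {
            swap(a, end, 0);      // a[0] is the root and largest value
            end--;                // the heap shrinks by one
            siftDown(a, 0, end);  // restore the heap property
        }
    }

    static void heapify(int[] a, int count) {
        // start at the last parent and sift every earlier node down
        for (int start = (count - 2) / 2; start >= 0; start--) {
            siftDown(a, start, count - 1);
        }
    }

    static void siftDown(int[] a, int start, int end) {
        int root = start;
        while (2 * root + 1 <= end) {            // while root still has a left child
            int child = 2 * root + 1;
            int swap = root;
            if (a[swap] < a[child]) swap = child;
            if (child + 1 <= end && a[swap] < a[child + 1]) swap = child + 1;
            if (swap == root) return;            // heap property already holds
            swap(a, root, swap);
            root = swap;                         // keep sifting the moved value down
        }
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 3};
        heapsort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}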
So, for example, wherever there is an increment or decrement, replace it with a step forward or backward in the list. Wherever the first element of the array is referenced, a reference to the first node of the list is used, and the same is done with the last one.
Example:
procedure heapsort(first, last, count) is
    input: first and last elements of a doubly linked list with length of count
    heapify(first, last, count)
    end ← last
    while end.previous != null do
        swap(end, first)
        end ← end.previous
        count ← count - 1
        siftDown(first, end, count)

procedure heapify(first, last, count) is
    start ← iParent(first, count)
    while start != null do
        siftDown(start, last, count - 1)
        start ← start.previous
        count ← count - 1

procedure siftDown(start, end, count) is
    root ← start
    while iLeftChild(start, root, count) ≤ end do
        child ← iLeftChild(start, root, count)
        swap ← root
        if swap.value < child.value
            swap ← child
        if child.next ≤ end and swap.value < child.next.value
            swap ← child.next
        if swap = root
            return
        else
            swap(root, swap)
            root ← swap
Note that in my example above, each element (first, last, start, end, root, swap) is now actually a reference to a node that has 3 attributes: value, previous and next.
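In Java, for example, such a node could be sketched like this (class and field names are purely illustrative):

class Node {
    int value;        // the element being sorted
    Node previous;    // null for the first node
    Node next;        // null for the last node

    Node(int value) {
        this.value = value;
    }
}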
You still have to implement the iParent(first, count) and iLeftChild(start, root, count) routines:
- iParent basically needs to find the middle element. This is not trivial in a linked list, but it is easy to implement since we have the reference to the first element and the count: all that needs to be done is to advance floor((count-1) / 2) times from the first element.
- iLeftChild basically advances to position 2*i + 1, where i is the position of the root element relative to start. It can therefore be implemented by advancing a reference 2*i + 1 times from start (or, equivalently, i + 1 times from root). A sketch of both routines follows this list.
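A possible Java sketch of these two helpers, using the Node class above plus a hypothetical advance routine that simply follows the next references; count is assumed to be the number of nodes in the heap, counted from start:

class HeapHelpers {
    // follow the next references the given number of times
    static Node advance(Node node, int steps) {
        for (int i = 0; i < steps; i++) {
            node = node.next;
        }
        return node;
    }

    // iParent: walk floor((count - 1) / 2) positions forward from the first node
    static Node iParent(Node first, int count) {
        return advance(first, (count - 1) / 2);
    }

    // iLeftChild: if root sits at position i (counted from start), its left child
    // sits at position 2*i + 1, i.e. i + 1 positions beyond root itself.
    // Returns null when that position falls outside the heap of count nodes.
    static Node iLeftChild(Node start, Node root, int count) {
        int i = 0;
        for (Node n = start; n != root; n = n.next) {
            i++;                                   // find the position of root
        }
        return (2 * i + 1 < count) ? advance(root, i + 1) : null;
    }
}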
Does it work? It should, but I did not test it. Implement it at your own risk, bearing in mind that the traversal routines described above will make the algorithm somewhat slower, although I believe other optimizations can be found.
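To make the idea concrete, here is one possible way of turning it into running code, reusing the Node class and the advance helper sketched above. It keeps integer positions purely as bookkeeping (still O(1) extra space) and inlines the iParent/iLeftChild arithmetic so that the position of root never has to be re-scanned; treat it as a sketch rather than a polished implementation, and all names in it are illustrative:

class LinkedListHeapsort {

    static void swapValues(Node a, Node b) {
        int t = a.value; a.value = b.value; b.value = t;
    }

    static void heapsort(Node first, int count) {
        heapify(first, count);
        Node end = HeapHelpers.advance(first, count - 1);  // last node of the heap
        for (int heapEnd = count - 1; heapEnd > 0; heapEnd--) {
            swapValues(first, end);            // root (largest value) joins the sorted tail
            end = end.previous;                // the heap shrinks by one node
            siftDown(first, 0, heapEnd - 1);   // restore the heap property from the root
        }
    }

    static void heapify(Node first, int count) {
        // iParent: start from the middle element, floor((count - 1) / 2), and walk back to the head
        for (int start = (count - 1) / 2; start >= 0; start--) {
            siftDown(first, start, count - 1);
        }
    }

    // Sift the value at position start down; the heap ends at position end (inclusive).
    // Positions are plain integers used only for bookkeeping; nodes are reached by walking.
    static void siftDown(Node first, int start, int end) {
        int root = start;
        Node rootNode = HeapHelpers.advance(first, root);
        while (2 * root + 1 <= end) {                          // root still has a left child
            int child = 2 * root + 1;
            // iLeftChild: the left child sits root + 1 steps ahead of the root node
            Node childNode = HeapHelpers.advance(rootNode, root + 1);
            int swap = root;
            Node swapNode = rootNode;
            if (swapNode.value < childNode.value) {
                swap = child;
                swapNode = childNode;
            }
            // the right child, if it is still inside the heap, is simply the next node
            if (child + 1 <= end && swapNode.value < childNode.next.value) {
                swap = child + 1;
                swapNode = childNode.next;
            }
            if (swap == root) return;                          // heap property already holds
            swapValues(rootNode, swapNode);
            root = swap;                                       // keep sifting the moved value down
            rootNode = swapNode;
        }
    }

    public static void main(String[] args) {
        int[] values = {5, 1, 4, 2, 3};
        Node first = new Node(values[0]);
        Node tail = first;
        for (int i = 1; i < values.length; i++) {              // build 5 <-> 1 <-> 4 <-> 2 <-> 3
            tail.next = new Node(values[i]);
            tail.next.previous = tail;
            tail = tail.next;
        }
        heapsort(first, values.length);
        StringBuilder out = new StringBuilder();
        for (Node n = first; n != null; n = n.next) out.append(n.value).append(' ');
        System.out.println(out.toString().trim());             // 1 2 3 4 5
    }
}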