Sort a linked list with an O(n*log(n)) method

6

I need to sort a linked list (my own implementation, without using the Java API) with a sort method that has O(n*log(n)) complexity. Searching for methods that satisfy this condition I found quicksort, but it assumes random access to the elements being sorted, which is impractical for a linked list. So I had the idea of copying all the elements of my list into an array, sorting them with quicksort, and copying them back into the list once sorted. Is this solution feasible? Does it maintain the desired complexity? Also, are there other methods with the same complexity that better suit a linked list?

asked by anonymous 02.02.2016 / 01:55

3 answers

3

The question is about what can guarantee the complexity. It says nothing about speed; those are different concepts.

Achieving this complexity on a linked list is tricky. The thing is, quicksort cannot guarantee this complexity (its worst case is O(n²)). Copying beforehand only makes the situation worse.

Copying to another structure may give you more speed, but it does not give you this complexity guarantee.

It seems other people have managed to do it with mergesort, and without copying anything first :) The accepted answer to the question on SO talks about this solution.

mergesort, as well as heapsort, can give the complexity required in the question. Both are well-known algorithms; of course, it depends a bit on the implementation. A problem with these two algorithms is that they normally consume a lot of extra space, O(n). The question says nothing about space, but precisely because it is a linked list, these algorithms can achieve O(1) extra space.

Take a look at radix sort. It can work miracles, but it is harder to implement.

This table can help; although it is not specific to linked lists, a properly implemented algorithm can guarantee the required complexity.
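As a sketch of what mergesort on a singly linked list can look like (using a minimal, hypothetical `ListNode` class rather than the asker's own implementation): top-down merge sort splits the list with the slow/fast pointer technique and merges by relinking nodes, giving O(n log n) time with no auxiliary array.

```java
// Hypothetical minimal singly linked node for this sketch.
class ListNode {
    int value;
    ListNode next;
    ListNode(int v) { value = v; }
}

public class LinkedMergeSort {
    // Top-down merge sort: O(n log n) time, no auxiliary array.
    static ListNode mergeSort(ListNode head) {
        if (head == null || head.next == null) return head;
        // Split the list in half using the slow/fast pointer technique.
        ListNode slow = head, fast = head.next;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
        }
        ListNode second = slow.next;
        slow.next = null; // cut the list in two
        return merge(mergeSort(head), mergeSort(second));
    }

    // Merge two sorted lists by relinking nodes (no value copying).
    static ListNode merge(ListNode a, ListNode b) {
        ListNode dummy = new ListNode(0), tail = dummy;
        while (a != null && b != null) {
            if (a.value <= b.value) { tail.next = a; a = a.next; }
            else                    { tail.next = b; b = b.next; }
            tail = tail.next;
        }
        tail.next = (a != null) ? a : b;
        return dummy.next;
    }

    public static void main(String[] args) {
        // Build the list 4 -> 1 -> 5 -> 3 -> 2 by prepending.
        int[] values = {4, 1, 5, 3, 2};
        ListNode head = null;
        for (int i = values.length - 1; i >= 0; i--) {
            ListNode n = new ListNode(values[i]);
            n.next = head;
            head = n;
        }
        StringBuilder sb = new StringBuilder();
        for (ListNode n = mergeSort(head); n != null; n = n.next) sb.append(n.value);
        System.out.println(sb); // prints 12345
    }
}
```

The recursion uses O(log n) stack frames; a bottom-up variant can bring that down to O(1) extra space, as the quote from Wikipedia in the answer below suggests.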

02.02.2016 / 02:33
2

Based on the table linked by @bigown , there are at least 4 sorting algorithms which, on average, have O(n*log(n)) complexity, namely:

  • Quicksort
  • Mergesort
  • Timsort
  • Heapsort

Copying the data to an array can be an option.

The Heapsort page on Wikipedia says:

  Merge sort can be adapted to operate on singly linked lists with O(1) extra space. Heapsort can be adapted to operate on doubly linked lists with only O(1) extra space overhead.

Unfortunately, there is no example or reference there. How can this be done? Basically, you replace the random array accesses with references to the nodes of the linked list.

In the case of Heap sort, the Wikipedia algorithm looks like this:

end ← count - 1
while end > 0 do
    (a[0] is the root and largest value. The swap moves it in front of the sorted elements.)
    swap(a[end], a[0])
    (the heap size is reduced by one)
    end ← end - 1
    (the swap ruined the heap property, so restore it)
    siftDown(a, 0, end)

So, for example, wherever there is an increment or decrement, just replace it with a step forward or backward in the list. Wherever there is a reference to the first element, a reference to the first node is used, and the same is done for the last one.
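To illustrate that substitution with a hypothetical `Node` class: advancing k positions in the list plays the role of the array index `a[i + k]`, and `prev`/`next` play the role of decrement/increment.

```java
// Hypothetical doubly linked node (not the asker's class).
class Node {
    int value;
    Node prev, next;
    Node(int v) { value = v; }
}

public class AdvanceDemo {
    // Moving forward k positions replaces the array expression a[i + k].
    static Node advance(Node n, int k) {
        while (k-- > 0) n = n.next;
        return n;
    }

    public static void main(String[] args) {
        // Build 10 <-> 20 <-> 30.
        Node a = new Node(10), b = new Node(20), c = new Node(30);
        a.next = b; b.prev = a;
        b.next = c; c.prev = b;
        System.out.println(advance(a, 2).value); // a[0 + 2] -> prints 30
        System.out.println(c.prev.value);        // end - 1  -> prints 20
    }
}
```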

Example:

procedure heapsort(first, last, count) is
    input: first and last elements of a doubly linked list with length of count

    heapify(first, last, count)
    end ← last
    while end.previous != null do
        swap(end, first)
        end ← end.previous
        count ← count - 1
        siftDown(first, end, count)


procedure heapify(first, last, count) is
    start ← iParent(first, count)
    while start != null do
        siftDown(start, last, count - 1)
        start ← start.previous
        count ← count - 1


procedure siftDown(start, end, count) is
    root ← start
    while iLeftChild(start, root, count) is within the heap (not past end) do
        child ← iLeftChild(start, root, count)
        swap ← root
        if swap.value < child.value
            swap ← child
        (child.next is the right child; it is inside the heap only if child ≠ end)
        if child ≠ end and swap.value < child.next.value
            swap ← child.next
        if swap = root
            return
        else
            swap(root, swap)
            (continue sifting down from the child we swapped with)
            root ← swap

Note that in my example above, each element (first, last, start, end, root, swap) is now actually a node reference with three attributes: value, previous and next.

You still have to implement the iParent(start, count) and iLeftChild(start, end, count) routines.

  • iParent basically needs to find the middle element. This is not trivial in linked lists, but it can be implemented easily since we have the reference to the first element and the count: all that needs to be done is to advance floor((count - 1) / 2) times from the first element.
  • iLeftChild basically advances to position 2*i + 1, i being the position of the root element relative to start. Since 2*i + 1 = i + (i + 1), it can be implemented by advancing a reference i + 1 times from root.

Does it work? It should, but I did not test it. Implement it at your own risk, bearing in mind that the traversal routines described above will make the algorithm somewhat slower, though I believe other optimizations can be found.
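For reference, here is one possible Java sketch of the pseudocode above, assuming a hypothetical `DNode` class (value, prev, next); it swaps values rather than relinking nodes. Note that the `advance` traversals replace O(1) array indexing, so, as the answer warns, this is noticeably slower than the array version even though the structure of heapsort is preserved.

```java
// Hypothetical doubly linked node; the question's own list class will differ.
class DNode {
    int value;
    DNode prev, next;
    DNode(int v) { value = v; }
}

public class LinkedHeapSort {
    // a[i + k] in the array version becomes "advance k steps" here.
    static DNode advance(DNode n, int k) {
        while (k-- > 0) n = n.next;
        return n;
    }

    static void swapValues(DNode a, DNode b) {
        int t = a.value; a.value = b.value; b.value = t;
    }

    static void heapsort(DNode first, int count) {
        // heapify: sift down every parent, from the last parent back to the root
        for (int i = (count - 2) / 2; i >= 0; i--)
            siftDown(first, i, count);
        DNode end = advance(first, count - 1);
        for (int heapSize = count; heapSize > 1; heapSize--) {
            swapValues(first, end);   // move the current max behind the heap
            end = end.prev;           // shrink the heap by one node
            siftDown(first, 0, heapSize - 1);
        }
    }

    static void siftDown(DNode first, int rootIndex, int heapSize) {
        DNode root = advance(first, rootIndex);
        while (2 * rootIndex + 1 < heapSize) {
            int childIndex = 2 * rootIndex + 1;
            // Left child of index i is i + 1 steps ahead of the node at i.
            DNode child = advance(root, rootIndex + 1);
            DNode largest = root;
            int largestIndex = rootIndex;
            if (largest.value < child.value) { largest = child; largestIndex = childIndex; }
            // child.next is the right child, inside the heap if its index fits.
            if (childIndex + 1 < heapSize && largest.value < child.next.value) {
                largest = child.next; largestIndex = childIndex + 1;
            }
            if (largest == root) return;
            swapValues(root, largest);
            root = largest;           // continue sifting from the swapped child
            rootIndex = largestIndex;
        }
    }

    public static void main(String[] args) {
        // Build the doubly linked list 5 <-> 1 <-> 4 <-> 2 <-> 3.
        int[] values = {5, 1, 4, 2, 3};
        DNode first = null, last = null;
        for (int v : values) {
            DNode n = new DNode(v);
            if (first == null) first = n;
            else { last.next = n; n.prev = last; }
            last = n;
        }
        heapsort(first, values.length);
        StringBuilder sb = new StringBuilder();
        for (DNode n = first; n != null; n = n.next) sb.append(n.value);
        System.out.println(sb); // prints 12345
    }
}
```

Because `siftDown` walks the list to reach each child, the total work is dominated by traversal, so this sketch trades the array version's speed for working directly on the list.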

03.02.2016 / 01:49
0

Copying might not be a bad idea.

The copy adds O(n), and O(n*log(n) + n) is, for sufficiently large n, the same as O(n*log(n)) .

Note, however, that Big-O notation is a very important, but not definitive, guide for practical algorithm analysis. There are many hardware and implementation details that a purely Big-O analysis does not take into account.
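A minimal sketch of the copy-out / sort / copy-back approach, with a hypothetical `SNode` class and `Arrays.sort` used only as a stand-in for whatever O(n log n) sort you implement yourself (the question rules out using the Java API for the actual sort):

```java
import java.util.Arrays;

// Hypothetical minimal singly linked node; your list class will differ.
class SNode {
    int value;
    SNode next;
    SNode(int v) { value = v; }
}

public class CopySortCopyBack {
    // Copy out: O(n). Sort: O(n log n). Copy back: O(n). Total: O(n log n).
    static void sortViaArray(SNode head, int count) {
        int[] a = new int[count];
        int i = 0;
        for (SNode n = head; n != null; n = n.next) a[i++] = n.value;
        Arrays.sort(a); // stand-in for your own O(n log n) sort
        i = 0;
        for (SNode n = head; n != null; n = n.next) n.value = a[i++];
    }

    public static void main(String[] args) {
        // Build 3 -> 1 -> 2 by prepending.
        int[] values = {3, 1, 2};
        SNode head = null;
        for (int j = values.length - 1; j >= 0; j--) {
            SNode n = new SNode(values[j]);
            n.next = head;
            head = n;
        }
        sortViaArray(head, values.length);
        StringBuilder sb = new StringBuilder();
        for (SNode n = head; n != null; n = n.next) sb.append(n.value);
        System.out.println(sb); // prints 123
    }
}
```

Note the complexity caveat from the first answer still applies: the copy itself is O(n), so the overall bound is whatever your array sort guarantees in the worst case.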

02.02.2016 / 17:46