From selection to sorting

When I first saw quicksort, it was in an algorithms class back in the day. We learned quicksort first, then how to choose a good pivot element, and then, as an afterthought, we did quickselect.

Fast forward to teaching. I was never really happy teaching quicksort. Mergesort is easy to motivate and it's pretty easy to write. Quicksort always felt a little forced.

I thought I'd try switching things up this time and doing quickselect first.

The motivating problem: find the kth smallest item in a list (counting from 1, so k = 1 is the minimum) - in our case the list is an array of ints.

I want to start with the least efficient algorithm so I stack the deck. I remind them that we've been finding the smallest item in a list for two years now.

They don't disappoint and suggest something like this:

L = [10, 3, 28, 82, 14, 42, 66, 74, 81]

def findKth(L, k):
    # Repeatedly find the smallest item we haven't already seen.
    # After k passes, ans is the kth smallest (k counts from 1).
    # Assumes the items are distinct, since omits stores values.
    omits = []
    for i in range(k):
        ans = max(L)
        for item in L:
            if item < ans and item not in omits:
                ans = item
        omits.append(ans)
    return ans

print(findKth(L, 3))   # 14

Clearly an \(O(n^2)\) algorithm: \(k\) passes, each scanning the whole list, and \(k\) can be on the order of \(n\).

Can we do better?

Certainly.

The students then suggest sorting the data set first. If we use mergesort, we can sort in \(O(n \lg n)\) time. This led to a great conversation about sorting being so fast it's practically free, and about how you don't have to code everything from scratch. Not only is sorting the data set and then plucking the kth item out much faster, but if you already have a sort written, or you use your language's library sort, it's much easier as well:

def findKth(L, k):
    tmp = sorted(L)      # sorted() returns a new sorted list
    return tmp[k - 1]    # kth smallest, with k counting from 1

But we can do even better. So now we talk about quickselect.

We pick a random pivot and partition the list à la quicksort: reorder it so that every item less than the pivot is to its left and every item greater than the pivot is to its right.

We now know that after partitioning, the pivot is in its final sorted position. If that position is the kth spot, we're done. If not, we can recursively quickselect on whichever side must contain the kth smallest item.
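Here's a minimal sketch of the idea in Python. It's not what we wrote in class (the in-place version was left as homework); it partitions with throwaway lists for clarity, and a random pivot is just one possible choice:

import random

def quickselect(L, k):
    # Return the kth smallest item in L (k counts from 1).
    # Sketch only: builds new lists rather than partitioning
    # in place, and assumes 1 <= k <= len(L).
    pivot = random.choice(L)
    left = [x for x in L if x < pivot]    # items less than the pivot
    mid = [x for x in L if x == pivot]    # the pivot (and any duplicates)
    right = [x for x in L if x > pivot]   # items greater than the pivot
    if k <= len(left):
        return quickselect(left, k)
    elif k <= len(left) + len(mid):
        return pivot
    else:
        return quickselect(right, k - len(left) - len(mid))

print(quickselect([10, 3, 28, 82, 14, 42, 66, 74, 81], 3))   # 14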

Pretty cool, but is it faster?

It's easy to see that if we keep choosing a bad pivot (the smallest or largest item in the list), each pass takes \(n\) time to partition and removes only one item from contention. The total work is \(n+(n-1)+(n-2)+\dots\), which takes us back to \(O(n^2)\).

However…

If we choose a good pivot, one that lands near the middle of the list, each successive partition takes less and less time. We get a run time of:

\(n+\frac{n}{2} +\frac{n}{4}+\frac{n}{8}+\dots\), and since \(\frac{n}{2} +\frac{n}{4}+\frac{n}{8}+\dots=n\), the total is at most \(2n\), making this an \(O(n)\) algorithm.
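Written as a recurrence, under the idealized assumption that every split is perfectly even: \(T(n)=T(n/2)+cn\), which unrolls to \(cn\left(1+\frac{1}{2}+\frac{1}{4}+\dots\right)\le 2cn=O(n)\).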

That's really cool.

Homework was the actual implementation.

I think this might be a better way to approach quicksort. It seems less forced, plus the class gets to go through the exercise of taking an algorithm from \(O(n^2)\) to \(O(n \lg n)\) to \(O(n)\).

Next, moving to quicksort and also showing that we can indeed avoid those really bad pivots.
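For the curious, one standard trick (though not necessarily what we'll do in class) is median-of-three: use the median of the first, middle, and last items as the pivot. A sketch:

def median_of_three_index(L, lo, hi):
    # Return the index of the median of L[lo], L[mid], L[hi].
    # Using that item as the pivot sidesteps the worst case on
    # sorted and reverse-sorted input, the most common bad cases.
    mid = (lo + hi) // 2
    trio = sorted([(L[lo], lo), (L[mid], mid), (L[hi], hi)])
    return trio[1][1]   # index of the middle value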

Addendum

We moved to quicksort today and overall I'm happy with this approach. The only thing I think needs tweaking is going from the idea of partitioning to Java code. Java makes it somewhat of a bear.
