- Understand how quicksort partitions around a pivot and recurses
- See why quicksort is O(n log n) average but O(n²) worst case
- Learn when quicksort beats merge sort: in-place, cache-friendly, fast average case
Quick Sort Algorithm
You need to sort a pile of exams by score. Pick one exam (the pivot). Go through the rest: scores lower than the pivot go left, higher go right. Now you have two piles. Repeat on each pile.
That's quicksort. Pick a pivot, partition around it, recurse.
Why is it called "quick"? Because in practice, it's often the fastest sorting algorithm. Better cache locality and in-place sorting beat merge sort's guarantees.
The Algorithm
```python
def quicksort(arr, low, high):
    if low < high:
        # Partition and get pivot position
        pivot_index = partition(arr, low, high)
        # Sort left and right of pivot
        quicksort(arr, low, pivot_index - 1)
        quicksort(arr, pivot_index + 1, high)

def partition(arr, low, high):
    # Choose pivot (using last element here)
    pivot = arr[high]
    # i tracks position for next element ≤ pivot
    i = low - 1
    # Move all elements ≤ pivot to the left
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    # Place pivot in its final position
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1
```
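A quick sanity check, calling quicksort over the full index range (the two functions are repeated here so the snippet runs on its own):

```python
def quicksort(arr, low, high):
    if low < high:
        pivot_index = partition(arr, low, high)
        quicksort(arr, low, pivot_index - 1)
        quicksort(arr, pivot_index + 1, high)

def partition(arr, low, high):
    pivot = arr[high]          # last element as pivot (Lomuto)
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

data = [10, 80, 30, 90, 40, 50, 70]
quicksort(data, 0, len(data) - 1)
print(data)  # [10, 30, 40, 50, 70, 80, 90]
```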
Step-by-Step Trace
Sort [10, 80, 30, 90, 40, 50, 70].
First partition (pivot = 70):
i = -1, j = 0
arr[0] = 10 ≤ 70 → i++, swap arr[0] with arr[0]
Array: [10, 80, 30, 90, 40, 50, 70], i=0
j = 1
arr[1] = 80 > 70 → skip (i stays 0)
j = 2
arr[2] = 30 ≤ 70 → i++, swap arr[1] with arr[2]
Array: [10, 30, 80, 90, 40, 50, 70], i=1
j = 3
arr[3] = 90 > 70 → skip (i stays 1)
j = 4
arr[4] = 40 ≤ 70 → i++, swap arr[2] with arr[4]
Array: [10, 30, 40, 90, 80, 50, 70], i=2
j = 5
arr[5] = 50 ≤ 70 → i++, swap arr[3] with arr[5]
Array: [10, 30, 40, 50, 80, 90, 70], i=3
After loop, place pivot: swap arr[i+1] with arr[high]
Array: [10, 30, 40, 50, 70, 90, 80], pivot at index 4
Now 70 is in its final position. Everything left of it is ≤ 70, everything right is > 70.
Recurse left [10, 30, 40, 50] (pivot = 50):
10 ≤ 50 → left
30 ≤ 50 → left
40 ≤ 50 → left
Array: [10, 30, 40, 50], pivot at index 3
Already sorted; the recursive calls on the smaller subarrays find nothing to move.
Recurse right [90, 80] (pivot = 80):
90 > 80 → skip
Swap pivot: [80, 90]
Done. Final array: [10, 30, 40, 50, 70, 80, 90].
Choosing the Pivot
Last element (Lomuto scheme): Simple but can degrade on sorted/reverse sorted arrays.
First element: Same issue.
Random element: Randomization avoids worst case on specific inputs. Good in practice.
Median-of-three: Pick the median of the first, middle, and last elements. Cheap, deterministic, and handles already-sorted input well.
Median-of-medians: Guaranteed to pick a good pivot, but complex. Rarely used in practice.
For sorted [1, 2, 3, 4, 5] with last-element pivot:
- Pivot is 5 (largest), so nothing goes right
- Left subarray is [1, 2, 3, 4], again pivot is largest
- Degenerates to O(n²)
With random pivot, this is unlikely.
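One way to randomize, sketched below: swap a randomly chosen element into the last slot, then run the Lomuto partition unchanged (randomized_partition and quicksort_random are names of my choosing):

```python
import random

def partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def randomized_partition(arr, low, high):
    # Move a random element into the pivot slot, then partition as usual
    k = random.randint(low, high)
    arr[k], arr[high] = arr[high], arr[k]
    return partition(arr, low, high)

def quicksort_random(arr, low, high):
    if low < high:
        pi = randomized_partition(arr, low, high)
        quicksort_random(arr, low, pi - 1)
        quicksort_random(arr, pi + 1, high)

data = [1, 2, 3, 4, 5]  # already sorted: the bad case for a fixed last-element pivot
quicksort_random(data, 0, len(data) - 1)
print(data)  # [1, 2, 3, 4, 5]
```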
Time Complexity
Best/Average case: O(n log n)
- Good pivot splits array roughly in half
- log n levels, O(n) work per level
Worst case: O(n²)
- Terrible pivot always picks smallest/largest
- n levels, O(n) work per level
For random pivots, worst case is extremely unlikely. In practice, quicksort is O(n log n).
Why faster than merge sort on average? Better cache locality (in-place), fewer array copies, simpler operations.
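You can watch the O(n log n) vs O(n²) gap appear with a small experiment. The sketch below instruments the Lomuto partition with a comparison counter (the counter is an addition for measurement, not part of the algorithm): with a fixed last-element pivot, a sorted input of n elements forces exactly n(n-1)/2 comparisons, while a shuffled input typically needs only a few thousand.

```python
import random

comparisons = 0

def partition(arr, low, high):
    global comparisons
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        comparisons += 1          # count every element-vs-pivot comparison
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quicksort(arr, low, high):
    if low < high:
        pi = partition(arr, low, high)
        quicksort(arr, low, pi - 1)
        quicksort(arr, pi + 1, high)

def count_for(data):
    global comparisons
    comparisons = 0
    quicksort(data, 0, len(data) - 1)
    return comparisons

n = 300
c_shuffled = count_for(random.sample(range(n), n))
c_sorted = count_for(list(range(n)))
print(c_shuffled)  # roughly n log n: a few thousand
print(c_sorted)    # exactly n(n-1)/2 = 44850
```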
Space Complexity
In-place: Quicksort sorts within the original array. O(1) extra space for variables.
Recursion stack: O(log n) on average for the call stack. O(n) in worst case (unbalanced partitions).
Total: O(log n) average, O(n) worst case.
Compare to merge sort's guaranteed O(n) extra space. Quicksort wins for space.
Stability
Quicksort is not stable. Equal elements can change relative order during partitioning.
Example: Sort [(3, a), (1, b), (3, c)] by number.
After partitioning, the two 3's might swap order. Result could be [(1, b), (3, c), (3, a)].
If you need stability, use merge sort or add extra logic to quicksort (which negates its speed advantage).
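The sketch below makes the reorder concrete, using a hypothetical key-aware Lomuto variant (quicksort_by_key is not part of the code above, and the input here differs from the example: it is chosen so the reorder actually fires):

```python
def quicksort_by_key(arr, low, high):
    if low < high:
        pivot = arr[high][0]  # compare on the number only, ignore the letter
        i = low - 1
        for j in range(low, high):
            if arr[j][0] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        pi = i + 1
        quicksort_by_key(arr, low, pi - 1)
        quicksort_by_key(arr, pi + 1, high)

items = [(3, "a"), (3, "b"), (1, "c")]
quicksort_by_key(items, 0, len(items) - 1)
print(items)  # [(1, 'c'), (3, 'b'), (3, 'a')] -- the two 3's reversed
```

The first partition swaps (3, "a") past (3, "b") while moving the pivot (1, "c") to the front, and nothing later restores their original order.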
Hoare Partition Scheme
An alternative to Lomuto, using two pointers from both ends:
```python
def hoare_partition(arr, low, high):
    pivot = arr[low]  # Choose first element
    i = low - 1
    j = high + 1
    while True:
        # Move i right past elements smaller than the pivot
        i += 1
        while arr[i] < pivot:
            i += 1
        # Move j left past elements larger than the pivot
        j -= 1
        while arr[j] > pivot:
            j -= 1
        # Pointers crossed: the array is partitioned around j
        if i >= j:
            return j
        # Swap the out-of-place pair and continue
        arr[i], arr[j] = arr[j], arr[i]
```
Hoare makes fewer swaps on average. More efficient, but trickier to implement correctly.
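One subtlety worth spelling out: Hoare's return value is a partition boundary, not the pivot's final index, so the recursive calls differ from the Lomuto driver. A sketch (note the first call uses j, not j - 1; hoare_partition is repeated so the snippet runs standalone):

```python
def hoare_partition(arr, low, high):
    pivot = arr[low]
    i, j = low - 1, high + 1
    while True:
        i += 1
        while arr[i] < pivot:
            i += 1
        j -= 1
        while arr[j] > pivot:
            j -= 1
        if i >= j:
            return j
        arr[i], arr[j] = arr[j], arr[i]

def quicksort_hoare(arr, low, high):
    if low < high:
        j = hoare_partition(arr, low, high)
        quicksort_hoare(arr, low, j)       # j, not j - 1: the pivot may still move
        quicksort_hoare(arr, j + 1, high)

data = [10, 80, 30, 90, 40, 50, 70]
quicksort_hoare(data, 0, len(data) - 1)
print(data)  # [10, 30, 40, 50, 70, 80, 90]
```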
Optimization: Tail Recursion
Quicksort makes two recursive calls. You can eliminate one by replacing it with a loop (Python won't do tail-call optimization for you, so the transformation is done by hand):

```python
def quicksort_optimized(arr, low, high):
    while low < high:
        pi = partition(arr, low, high)
        # Recurse on the smaller partition, loop on the larger
        if pi - low < high - pi:
            quicksort_optimized(arr, low, pi - 1)
            low = pi + 1
        else:
            quicksort_optimized(arr, pi + 1, high)
            high = pi - 1
```

Always recursing into the smaller partition keeps the stack depth at O(log n) even in the worst case; the larger side is handled by the loop, not the stack.
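A sketch that adds a depth counter to the optimized version (the depth parameter and max_depth global are instrumentation, not part of the algorithm) shows the effect. Even on already-sorted input, Lomuto's worst case, the stack stays shallow while the running time is still quadratic:

```python
def partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

max_depth = 0

def quicksort_optimized(arr, low, high, depth=1):
    global max_depth
    max_depth = max(max_depth, depth)
    while low < high:
        pi = partition(arr, low, high)
        if pi - low < high - pi:
            quicksort_optimized(arr, low, pi - 1, depth + 1)
            low = pi + 1
        else:
            quicksort_optimized(arr, pi + 1, high, depth + 1)
            high = pi - 1

data = list(range(1000))  # worst-case input for a last-element pivot
quicksort_optimized(data, 0, len(data) - 1)
print(max_depth)  # 2 -- the larger side is looped over, never recursed into
```

In general the depth is bounded by about log2(n) + 1, since each recursive call handles a partition less than half the current size.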
When to Use Quicksort
General-purpose sorting: Fast on average, in-place, widely used.
Limited memory: In-place sorting is crucial when you can't afford O(n) extra space.
Good cache performance: Quicksort's in-place operations are cache-friendly.
Random access: Works well on arrays. Less suitable for linked lists (merge sort is better there).
When NOT to Use Quicksort
Worst-case guarantees: If you absolutely need O(n log n) worst case, use merge sort or heapsort.
Stability required: Quicksort is unstable. Use merge sort for stable sorting.
Nearly sorted data: Without randomization, quicksort degrades on sorted/reverse sorted input. Use insertion sort or merge sort.
Quicksort vs Merge Sort
| Aspect | Quicksort | Merge Sort |
|---|---|---|
| Time (average) | O(n log n) | O(n log n) |
| Time (worst) | O(n²) | O(n log n) |
| Space | O(log n) avg | O(n) |
| In-place | Yes | No |
| Stable | No | Yes |
| Cache locality | Good | Poor |
Quicksort: faster in practice, but no guarantees.
Merge sort: slower but predictable.
Many languages use a quicksort variant (such as introsort or dual-pivot quicksort) for their built-in sort.
Introsort: The Hybrid
Standard library sorts often use introsort: start with quicksort, switch to heapsort if recursion depth exceeds 2×log n (avoiding O(n²)).
Python's Timsort is a merge sort/insertion sort hybrid. Java uses dual-pivot quicksort for primitives and Timsort for objects (to preserve stability).
These hybrids combine the best of multiple algorithms.
Three-Way Partitioning
For arrays with many duplicates, three-way partitioning is better:
```python
def three_way_partition(arr, low, high):
    pivot = arr[high]
    lt = low   # Next slot for an element < pivot
    gt = high  # Next slot for an element > pivot
    i = low
    while i <= gt:
        if arr[i] < pivot:
            arr[i], arr[lt] = arr[lt], arr[i]
            lt += 1
            i += 1
        elif arr[i] > pivot:
            arr[i], arr[gt] = arr[gt], arr[i]
            gt -= 1
        else:
            i += 1
    return lt, gt
```
Result: [< pivot | == pivot | > pivot]
For arrays like [5, 2, 5, 5, 3, 5, 1, 5], this avoids re-sorting the many 5's.
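Wiring the partition into a full sort looks like this (a sketch: quicksort_3way is a name of my choosing, and three_way_partition is repeated so the snippet runs standalone); note that the recursion skips the entire == pivot block:

```python
def three_way_partition(arr, low, high):
    pivot = arr[high]
    lt, gt, i = low, high, low
    while i <= gt:
        if arr[i] < pivot:
            arr[i], arr[lt] = arr[lt], arr[i]
            lt += 1
            i += 1
        elif arr[i] > pivot:
            arr[i], arr[gt] = arr[gt], arr[i]
            gt -= 1
        else:
            i += 1
    return lt, gt

def quicksort_3way(arr, low, high):
    if low < high:
        lt, gt = three_way_partition(arr, low, high)
        quicksort_3way(arr, low, lt - 1)   # everything in [lt, gt] is already placed
        quicksort_3way(arr, gt + 1, high)

data = [5, 2, 5, 5, 3, 5, 1, 5]
quicksort_3way(data, 0, len(data) - 1)
print(data)  # [1, 2, 3, 5, 5, 5, 5, 5]
```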
The Power of Quicksort
Quicksort is elegant. The partition logic is clever. The performance is excellent in practice.
It's not perfect—worst case can bite you. But with randomization, it's reliable enough for most uses.
Understanding quicksort means understanding the trade-offs between average-case performance and worst-case guarantees. And knowing when to prioritize one over the other.
That's algorithm design: not just solving the problem, but choosing the right tool for the job.
