Sorting Algorithms
Various sorting algorithms with C implementations and complexity analysis.
Bubble Sort
Simple comparison-based algorithm that repeatedly swaps adjacent elements if they are in the wrong order.
#include <stdio.h>

void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n-1; i++) {
        int swapped = 0;
        for (int j = 0; j < n-i-1; j++) {
            if (arr[j] > arr[j+1]) {
                // Swap
                int temp = arr[j];
                arr[j] = arr[j+1];
                arr[j+1] = temp;
                swapped = 1;
            }
        }
        // A pass with no swaps means the array is sorted;
        // this early exit is what makes the best case O(n)
        if (!swapped)
            break;
    }
}
| Case | Time Complexity |
|---|---|
| Best | O(n) |
| Average | O(n²) |
| Worst | O(n²) |
| Space | O(1) |
Selection Sort
Finds the minimum element of the unsorted portion and places it at the beginning, then repeats for the remaining elements.
#include <stdio.h>

void selectionSort(int arr[], int n) {
    for (int i = 0; i < n-1; i++) {
        int minIdx = i;
        for (int j = i+1; j < n; j++) {
            if (arr[j] < arr[minIdx])
                minIdx = j;
        }
        // Swap
        int temp = arr[i];
        arr[i] = arr[minIdx];
        arr[minIdx] = temp;
    }
}
| Case | Time Complexity |
|---|---|
| All Cases | O(n²) |
| Space | O(1) |
Insertion Sort
Builds the sorted array one element at a time by inserting each element into its correct position.
#include <stdio.h>

void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j+1] = arr[j];
            j--;
        }
        arr[j+1] = key;
    }
}
| Case | Time Complexity |
|---|---|
| Best | O(n) |
| Average | O(n²) |
| Worst | O(n²) |
| Space | O(1) |
Quick Sort
Divide-and-conquer algorithm that picks a pivot and partitions the array around it.
#include <stdio.h>

int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            int temp = arr[i];
            arr[i] = arr[j];
            arr[j] = temp;
        }
    }
    int temp = arr[i+1];
    arr[i+1] = arr[high];
    arr[high] = temp;
    return i + 1;
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
| Case | Time Complexity |
|---|---|
| Best | O(n log n) |
| Average | O(n log n) |
| Worst | O(n²) |
| Space | O(log n) |
Merge Sort
Divide-and-conquer algorithm that divides the array in half, sorts each half, and merges them.
#include <stdio.h>
#include <stdlib.h>

void merge(int arr[], int left, int mid, int right) {
    int n1 = mid - left + 1;
    int n2 = right - mid;
    int L[n1], R[n2];   // C99 variable-length arrays for the two halves
    for (int i = 0; i < n1; i++)
        L[i] = arr[left + i];
    for (int j = 0; j < n2; j++)
        R[j] = arr[mid + 1 + j];
    int i = 0, j = 0, k = left;
    while (i < n1 && j < n2) {
        if (L[i] <= R[j])
            arr[k++] = L[i++];
        else
            arr[k++] = R[j++];
    }
    while (i < n1)
        arr[k++] = L[i++];
    while (j < n2)
        arr[k++] = R[j++];
}

void mergeSort(int arr[], int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        mergeSort(arr, left, mid);
        mergeSort(arr, mid + 1, right);
        merge(arr, left, mid, right);
    }
}
| Case | Time Complexity |
|---|---|
| All Cases | O(n log n) |
| Space | O(n) |
Heap Sort
Real-world analogy: Think of organizing a library where the most popular books stay at eye level. When a book is checked out, you reorganize so the next most popular one moves up. This "heap" structure keeps the maximum (or minimum) element always at the top for quick access!
Heap Sort uses a binary heap data structure. A heap is a complete binary tree where each parent node is greater than or equal to its children (Max Heap).
How the Heap Structure Works
Array: [9, 7, 8, 3, 6, 5, 4]
Visualized as Max Heap:

            9          (root - largest)
          /   \
         7     8
        / \   / \
       3   6 5   4
Array representation: the parent at index i has its children at indices 2i+1 and 2i+2.
Step-by-Step Visual: Build Max Heap
| Step | Array | Action |
|---|---|---|
| Initial | [4, 10, 3, 5, 1] | Unsorted array |
| Heapify | [10, 5, 3, 4, 1] | 10 moves to root (largest) |
| Swap & Heapify | [1, 5, 3, 4, 10] | 10 goes to end (sorted) |
| Heapify | [5, 4, 3, 1, 10] | 5 moves to root |
| Swap & Heapify | [1, 4, 3, 5, 10] | 5 goes to end |
| Continue... | [1, 3, 4, 5, 10] | All elements sorted! |
#include <stdio.h>

// To heapify a subtree rooted at index i
// n is size of heap
void heapify(int arr[], int n, int i) {
    int largest = i;        // Initialize largest as root
    int left = 2 * i + 1;   // Left child
    int right = 2 * i + 2;  // Right child
    // If left child is larger than root
    if (left < n && arr[left] > arr[largest])
        largest = left;
    // If right child is larger than largest so far
    if (right < n && arr[right] > arr[largest])
        largest = right;
    // If largest is not root
    if (largest != i) {
        // Swap
        int temp = arr[i];
        arr[i] = arr[largest];
        arr[largest] = temp;
        // Recursively heapify the affected sub-tree
        heapify(arr, n, largest);
    }
}

void heapSort(int arr[], int n) {
    // Build max heap (rearrange array)
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);
    // Extract elements one by one
    for (int i = n - 1; i > 0; i--) {
        // Move current root to end
        int temp = arr[0];
        arr[0] = arr[i];
        arr[i] = temp;
        // Heapify reduced heap
        heapify(arr, i, 0);
    }
}

int main() {
    int arr[] = {12, 11, 13, 5, 6, 7};
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("Original: ");
    for (int i = 0; i < n; i++) printf("%d ", arr[i]);
    heapSort(arr, n);
    printf("\nSorted: ");
    for (int i = 0; i < n; i++) printf("%d ", arr[i]);
    return 0;
}
| Case | Time Complexity |
|---|---|
| Best | O(n log n) |
| Average | O(n log n) |
| Worst | O(n log n) |
| Space | O(1) |
Radix Sort
Real-world analogy: Imagine sorting exam papers by student ID. First, you sort by the last digit, then by the second-to-last digit, and so on. Each pass organizes the numbers more, until they're fully sorted!
Radix Sort processes digits from least significant to most significant (LSD) using counting sort as a subroutine.
Step-by-Step Visual: Sorting [170, 45, 75, 90, 2, 802, 24, 66]
| Pass | By Digit | Result After Pass |
|---|---|---|
| Initial | - | [170, 45, 75, 90, 2, 802, 24, 66] |
| Pass 1 | Ones (0-9) | [170, 90, 2, 802, 24, 45, 75, 66] |
| Pass 2 | Tens (0-9) | [2, 802, 24, 45, 66, 170, 75, 90] |
| Pass 3 | Hundreds (0-9) | [2, 24, 45, 66, 75, 90, 170, 802] ✓ Sorted! |
Bucket Visualization (Pass 1 - Sort by Ones Digit)

Digit:      0      1      2      3     4      5      6     7    8    9
            ↓      ↓      ↓      ↓     ↓      ↓      ↓     ↓    ↓    ↓
        [170,90]  [ ]  [2,802]  [ ]  [24]  [45,75]  [66]  [ ]  [ ]  [ ]

Collect: 170 → 90 → 2 → 802 → 24 → 45 → 75 → 66
#include <stdio.h>
#include <stdlib.h>

// Get maximum value in array
int getMax(int arr[], int n) {
    int max = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > max)
            max = arr[i];
    return max;
}

// Counting sort based on digit represented by exp (1, 10, 100...)
void countingSortByDigit(int arr[], int n, int exp) {
    int output[n];       // Output array
    int count[10] = {0}; // Count array for digits 0-9
    // Store count of occurrences
    for (int i = 0; i < n; i++)
        count[(arr[i] / exp) % 10]++;
    // Change count[i] to contain actual position
    for (int i = 1; i < 10; i++)
        count[i] += count[i - 1];
    // Build output array (traverse from end for stability)
    for (int i = n - 1; i >= 0; i--) {
        output[count[(arr[i] / exp) % 10] - 1] = arr[i];
        count[(arr[i] / exp) % 10]--;
    }
    // Copy output to original array
    for (int i = 0; i < n; i++)
        arr[i] = output[i];
}

void radixSort(int arr[], int n) {
    // Find maximum to know number of digits
    int max = getMax(arr, n);
    // Do counting sort for every digit
    // exp is 10^i where i is current digit number
    for (int exp = 1; max / exp > 0; exp *= 10)
        countingSortByDigit(arr, n, exp);
}

int main() {
    int arr[] = {170, 45, 75, 90, 2, 802, 24, 66};
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("Original: ");
    for (int i = 0; i < n; i++) printf("%d ", arr[i]);
    radixSort(arr, n);
    printf("\nSorted: ");
    for (int i = 0; i < n; i++) printf("%d ", arr[i]);
    return 0;
}
| Case | Time Complexity |
|---|---|
| Best | O(d × (n + k)) |
| Average | O(d × (n + k)) |
| Worst | O(d × (n + k)) |
| Space | O(n + k) |
Where d = number of digits, k = range of digits (0-9, so k=10)
Counting Sort
Real-world analogy: Imagine you're a teacher counting how many students scored each grade (A, B, C, D, F). You tally counts in a frequency table, then use those counts to place students in order. No comparing needed!
Counting Sort works by counting occurrences of each unique element, then calculating positions.
Step-by-Step Visual: Sorting [4, 2, 2, 8, 3, 3, 1]
Step 1: Count Frequencies

Input: [4, 2, 2, 8, 3, 3, 1]
Range: 1 to 8

Index:  0  1  2  3  4  5  6  7  8
Count: [0, 1, 2, 2, 1, 0, 0, 0, 1]
            ↑  ↑  ↑  ↑           ↑
           (1)(2)(3)(4)         (8)

Counts: 1 appears 1 time
        2 appears 2 times
        3 appears 2 times
        4 appears 1 time
        8 appears 1 time

Step 2: Calculate Cumulative Count (for positions)

Index:  0  1  2  3  4  5  6  7  8
Count: [0, 1, 3, 5, 6, 6, 6, 6, 7]
            ↑  ↑  ↑  ↑           ↑
           (1)(2)(3)(4)         (8)

Meaning: Element 1 goes at position 1
         Element 2 goes at positions 2-3
         Element 3 goes at positions 4-5
         Element 4 goes at position 6
         Element 8 goes at position 7

Step 3: Place Elements

Result: [1, 2, 2, 3, 3, 4, 8]
#include <stdio.h>
#include <stdlib.h>

void countingSort(int arr[], int n) {
    // Find max and min
    int max = arr[0], min = arr[0];
    for (int i = 1; i < n; i++) {
        if (arr[i] > max) max = arr[i];
        if (arr[i] < min) min = arr[i];
    }
    int range = max - min + 1;
    int *count = (int *)calloc(range, sizeof(int));
    int *output = (int *)malloc(n * sizeof(int));
    // Count occurrences
    for (int i = 0; i < n; i++)
        count[arr[i] - min]++;
    // Calculate cumulative count
    for (int i = 1; i < range; i++)
        count[i] += count[i - 1];
    // Build output array (traverse from end for stability)
    for (int i = n - 1; i >= 0; i--) {
        output[count[arr[i] - min] - 1] = arr[i];
        count[arr[i] - min]--;
    }
    // Copy back to original array
    for (int i = 0; i < n; i++)
        arr[i] = output[i];
    free(count);
    free(output);
}

int main() {
    int arr[] = {4, 2, 2, 8, 3, 3, 1};
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("Original: ");
    for (int i = 0; i < n; i++) printf("%d ", arr[i]);
    countingSort(arr, n);
    printf("\nSorted: ");
    for (int i = 0; i < n; i++) printf("%d ", arr[i]);
    return 0;
}
| Case | Time Complexity |
|---|---|
| Best | O(n + k) |
| Average | O(n + k) |
| Worst | O(n + k) |
| Space | O(n + k) |
Where k = range of input values (max - min + 1). Best when k is not much larger than n.
Shell Sort
Real-world analogy: Imagine organizing a messy deck of cards. Instead of comparing adjacent cards, you first group cards that are far apart and sort those groups. Then you reduce the gap and repeat. By the time you compare adjacent cards, the deck is already almost sorted!
Shell Sort is a generalization of insertion sort that allows exchanges of far-apart elements. It uses a shrinking gap sequence (such as n/2, n/4, ..., 1).
Step-by-Step Visual: Sorting [64, 34, 25, 12, 22, 11, 90]
n = 7, starting gap = n/2 = 3
| Gap | Groups Being Sorted | Result |
|---|---|---|
| 3 | [64,12,90], [34,22], [25,11] | [12, 22, 11, 64, 34, 25, 90] |
| 1 | Standard insertion sort on nearly sorted array | [11, 12, 22, 25, 34, 64, 90] |
Gap Sequence Visualization
Array: [64, 34, 25, 12, 22, 11, 90]
Index:   0   1   2   3   4   5   6

GAP = 3: Compare elements 3 positions apart
Chain 1 (indices 0,3,6): 64---12---90  →  swap 64,12  →  12---64---90
Chain 2 (indices 1,4):   34---22       →  swap        →  22---34
Chain 3 (indices 2,5):   25---11       →  swap        →  11---25
After gap=3: [12, 22, 11, 64, 34, 25, 90]

GAP = 1: Standard insertion sort
[11, 12, 22, 25, 34, 64, 90] ✓ Sorted!
#include <stdio.h>

void shellSort(int arr[], int n) {
    // Start with large gap, then reduce
    for (int gap = n / 2; gap > 0; gap /= 2) {
        // Do insertion sort for this gap
        for (int i = gap; i < n; i++) {
            int temp = arr[i];
            int j;
            // Shift elements of arr[0..i-gap] that are > temp
            // to their position gap ahead
            for (j = i; j >= gap && arr[j - gap] > temp; j -= gap)
                arr[j] = arr[j - gap];
            // Place temp in correct location
            arr[j] = temp;
        }
    }
}

// Alternative with different gap sequence (better performance)
void shellSortImproved(int arr[], int n) {
    // Knuth's sequence: 1, 4, 13, 40, 121, 364...
    int gap = 1;
    while (gap < n / 3)
        gap = 3 * gap + 1;
    while (gap >= 1) {
        for (int i = gap; i < n; i++) {
            for (int j = i; j >= gap && arr[j] < arr[j - gap]; j -= gap) {
                // Swap
                int temp = arr[j];
                arr[j] = arr[j - gap];
                arr[j - gap] = temp;
            }
        }
        gap /= 3;
    }
}

int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90};
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("Original: ");
    for (int i = 0; i < n; i++) printf("%d ", arr[i]);
    shellSort(arr, n);
    printf("\nSorted: ");
    for (int i = 0; i < n; i++) printf("%d ", arr[i]);
    return 0;
}
| Case | Time Complexity |
|---|---|
| Best | O(n log n) |
| Average | O(n log n) to O(n^1.5) |
| Worst | O(n²) |
| Space | O(1) |
The complexity depends on the gap sequence; the O(n²) worst case occurs with the n/2, n/4, ... sequence.
Understanding Stability
What is a stable sort? A sorting algorithm is stable if two elements with equal values appear in the same order in the sorted output as they appeared in the input.
Why Stability Matters
Real-world example: Sorting student records by grade, then by name. You want students with the same grade to stay in alphabetical order.
Original: [(Alice, A), (Bob, B), (Charlie, A), (David, B), (Eve, A)]

Sort by Grade (Stable):
[(Alice, A), (Charlie, A), (Eve, A), (Bob, B), (David, B)]
✓ Original order preserved for same grades

Sort by Grade (Unstable - BAD):
[(Charlie, A), (Alice, A), (Eve, A), (David, B), (Bob, B)]
✗ Alice and Charlie swapped! Original order lost.
Stability of Each Algorithm
| Algorithm | Stable? | Why? |
|---|---|---|
| Bubble Sort | ✓ Yes | Only swaps adjacent elements when strictly greater |
| Selection Sort | ✗ No | Non-adjacent swaps can change relative order |
| Insertion Sort | ✓ Yes | Shifts elements, preserves order of equals |
| Merge Sort | ✓ Yes | Takes left element when equal during merge |
| Quick Sort | ✗ No | Long-distance partition swaps can reorder equals |
| Heap Sort | ✗ No | Heap restructuring changes relative positions |
| Radix Sort | ✓ Yes | Counting sort used is stable |
| Counting Sort | ✓ Yes | Processes from end, maintains order |
| Shell Sort | ✗ No | Long-distance exchanges change order |
Complete Algorithm Comparison
| Algorithm | Best | Average | Worst | Space | Stable | Method |
|---|---|---|---|---|---|---|
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) | Yes | Exchange |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) | No | Selection |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) | Yes | Insertion |
| Shell Sort | O(n log n) | O(n^1.5) | O(n²) | O(1) | No | Insertion |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes | Merge |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) | No | Partition |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) | No | Selection |
| Counting Sort | O(n+k) | O(n+k) | O(n+k) | O(n+k) | Yes | Non-comparison |
| Radix Sort | O(d×(n+k)) | O(d×(n+k)) | O(d×(n+k)) | O(n+k) | Yes | Non-comparison |
When to Use Each Algorithm
- Bubble/Insertion Sort: Small arrays, nearly sorted data
- Selection Sort: Minimal swaps needed (data movement is expensive)
- Merge Sort: Stable sort needed, guaranteed O(n log n), linked lists
- Quick Sort: General purpose, fastest average case
- Heap Sort: Memory constrained, guaranteed O(n log n) in-place
- Counting/Radix Sort: Small integer ranges, fixed-length keys
- Shell Sort: Simple implementation; performs better than the plain O(n²) sorts in practice