- Understand greedy choice property and optimal substructure
- See when greedy works (coin change, activity selection) and when it fails
- Learn the difference between greedy and dynamic programming
Introduction to Greedy Algorithms
You're at a vending machine with coins: 25¢, 10¢, 5¢, 1¢. You need to make 63¢. What do you do?
You probably grab two quarters (50¢ so far), then one dime (60¢ so far), then three pennies (63¢). That's six coins total.
Why did you pick that way? Because at each step, you took the biggest coin that didn't overshoot. Two quarters, then can't use another quarter, so one dime, then can't use another dime, so pennies.
That's a greedy algorithm. Make the locally optimal choice at each step and hope it leads to a globally optimal solution.
The Greedy Strategy
Greedy algorithms build solutions piece by piece. At each step, pick the option that looks best right now, commit to it, and never reconsider.
For making change with 63¢:
- Biggest coin ≤ 63¢? 25¢. Take it. Remaining: 38¢
- Biggest coin ≤ 38¢? 25¢. Take it. Remaining: 13¢
- Biggest coin ≤ 13¢? 10¢. Take it. Remaining: 3¢
- Biggest coin ≤ 3¢? 1¢. Take it. Remaining: 2¢
- Biggest coin ≤ 2¢? 1¢. Take it. Remaining: 1¢
- Biggest coin ≤ 1¢? 1¢. Take it. Remaining: 0¢
Done. Six coins.
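The loop above translates directly into a few lines of Python. This is a sketch, not a library routine; `greedy_change` is an illustrative name.

```python
def greedy_change(amount, coins):
    """Make change greedily: at each step take the largest coin that fits."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

print(greedy_change(63, [25, 10, 5, 1]))  # [25, 25, 10, 1, 1, 1]
```

The sort makes the whole algorithm O(n log n) in the number of denominations, with the inner loop doing constant work per coin taken.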
Is this optimal? For standard US coins, yes. But greedy doesn't always work.
When Greedy Fails
Change the coin system. You have coins: 1¢, 3¢, 4¢. Make 6¢.
Greedy says:
- Biggest ≤ 6¢? 4¢. Take it. Remaining: 2¢
- Biggest ≤ 2¢? 1¢. Take it. Remaining: 1¢
- Biggest ≤ 1¢? 1¢. Take it. Remaining: 0¢
Three coins: 4¢ + 1¢ + 1¢.
But the optimal solution: 3¢ + 3¢ = two coins.
Greedy failed. It made the locally best choice (take the 4¢ coin) but that prevented a better global solution.
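A short comparison makes the failure concrete. The helper names here are illustrative; `optimal_count` computes the true minimum with a small dynamic program, previewing the technique discussed at the end of this section.

```python
def greedy_count(amount, coins):
    """Number of coins the greedy strategy uses."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count

def optimal_count(amount, coins):
    """True minimum number of coins, via dynamic programming:
    best[v] = fewest coins that sum to v."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for v in range(1, amount + 1):
        for c in coins:
            if c <= v and best[v - c] + 1 < best[v]:
                best[v] = best[v - c] + 1
    return best[amount]

print(greedy_count(6, [1, 3, 4]))   # 3  (4 + 1 + 1)
print(optimal_count(6, [1, 3, 4]))  # 2  (3 + 3)
```

For the standard US system both functions agree; for the {1¢, 3¢, 4¢} system they diverge, which is exactly the failure shown above.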
Why Greedy Sometimes Works
For greedy to guarantee an optimal solution, the problem needs two properties:
Greedy choice property: A locally optimal choice leads to a globally optimal solution. In standard coin change, taking the biggest coin first always works.
Optimal substructure: An optimal solution contains optimal solutions to subproblems. If the best way to make 63¢ uses a quarter, then the best way to make the remaining 38¢ must also be optimal.
Not all problems have these properties. When they do, greedy is fast and simple. When they don't, greedy gives you a suboptimal answer.
Activity Selection Problem
You have n activities with start and finish times. Pick the maximum number of non-overlapping activities.
Activities:
A1: [1, 4)
A2: [3, 5)
A3: [0, 6)
A4: [5, 7)
A5: [3, 9)
A6: [5, 9)
A7: [6, 10)
A8: [8, 11)
A9: [8, 12)
A10: [2, 14)
Greedy strategy: always pick the activity that finishes earliest.
Sort by finish time:
A1: [1, 4), A2: [3, 5), A3: [0, 6), A4: [5, 7), A5: [3, 9), A6: [5, 9), A7: [6, 10), A8: [8, 11), A9: [8, 12), A10: [2, 14)
Pick A1 (finishes at 4)
A2 starts at 3 < 4, overlaps with A1. Skip.
A3 starts at 0 < 4, overlaps. Skip.
A4 starts at 5 ≥ 4. Pick A4 (finishes at 7)
A5, A6, and A7 all start before 7, overlapping A4. Skip.
A8 starts at 8 ≥ 7. Pick A8 (finishes at 11)
A9 starts at 8 < 11, overlaps. Skip.
A10 starts at 2 < 11, overlaps. Skip.
Selected: A1, A4, A8 (three activities)
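The whole procedure fits in a few lines: sort by finish time, then sweep once. This is a sketch; the function name and the `(name, start, finish)` tuple layout are just illustrative choices.

```python
def select_activities(activities):
    """Greedy activity selection: sort by finish time, then take each
    activity whose start is at or after the last chosen finish."""
    selected = []
    last_finish = float("-inf")
    for name, start, finish in sorted(activities, key=lambda a: a[2]):
        if start >= last_finish:
            selected.append(name)
            last_finish = finish
    return selected

activities = [("A1", 1, 4), ("A2", 3, 5), ("A3", 0, 6), ("A4", 5, 7),
              ("A5", 3, 9), ("A6", 5, 9), ("A7", 6, 10), ("A8", 8, 11),
              ("A9", 8, 12), ("A10", 2, 14)]
print(select_activities(activities))  # ['A1', 'A4', 'A8']
```

The sort dominates the cost, so the algorithm runs in O(n log n).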
Why does "earliest finish time" work? Intuition: finishing early leaves the most room for future activities. If we pick an activity that finishes late, it blocks more potential activities.
Proof sketch: suppose some optimal solution does not start with the earliest-finishing activity. Swap that activity in for the solution's first choice: it finishes no later, so it conflicts with none of the remaining activities. The result is still optimal and agrees with greedy on the first choice, so the greedy choice is safe. Repeating the argument on the remaining activities covers every choice.
Fractional Knapsack
You have a knapsack with capacity W. Items have weights and values. You can take fractions of items. Maximize value.
Items:
Item 1: value=60, weight=10 → value/weight = 6
Item 2: value=100, weight=20 → value/weight = 5
Item 3: value=120, weight=30 → value/weight = 4
Capacity: W = 50
Greedy: sort by value/weight ratio, take as much of the highest-ratio item as possible.
1. Take all of Item 1: weight=10, value=60. Remaining capacity: 40
2. Take all of Item 2: weight=20, value=100. Remaining capacity: 20
3. Take 20/30 of Item 3: weight=20, value=80. Remaining capacity: 0
Total value: 60 + 100 + 80 = 240
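The three steps above can be sketched in Python. The function name and the `(value, weight)` tuple layout are illustrative assumptions.

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight). Take items in decreasing
    value/weight order; take a fraction of the first item that
    no longer fully fits."""
    total = 0.0
    for value, weight in sorted(items, key=lambda i: i[0] / i[1], reverse=True):
        take = min(weight, capacity)        # whole item, or whatever fits
        total += value * take / weight      # proportional value
        capacity -= take
        if capacity == 0:
            break
    return total

items = [(60, 10), (100, 20), (120, 30)]
print(fractional_knapsack(items, 50))  # 240.0
```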
This is optimal for fractional knapsack. But if you can't take fractions (0/1 knapsack), greedy fails. You'd need dynamic programming.
Huffman Coding
Compress text by assigning shorter codes to frequent characters.
Text: "ABRACADABRA"
Frequencies: A=5, B=2, R=2, C=1, D=1
Build a tree bottom-up by merging the two least-frequent nodes:
Step 1: Merge C(1) and D(1) → CD(2)
Step 2: Merge B(2) and R(2) → BR(4)
Step 3: Merge CD(2) and BR(4) → CDBR(6)
Step 4: Merge A(5) and CDBR(6) → root(11)
Codes:
A: 0
B: 110
R: 111
C: 100
D: 101
Greedy choice: always merge the two smallest frequencies. This minimizes the tree's weighted path length, which minimizes the encoded size.
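A common way to implement this uses a min-heap of partial trees. The sketch below merges code tables instead of explicit tree nodes; exact bit patterns depend on how ties between equal frequencies are broken, but the code lengths, and therefore the compressed size, come out the same.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Greedy Huffman coding: repeatedly merge the two least-frequent
    nodes. Each heap entry is (frequency, tiebreaker, {char: code})."""
    freq = Counter(text)
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # prefix left-subtree codes with 0, right-subtree codes with 1
        merged = {ch: "0" + c for ch, c in left.items()}
        merged.update({ch: "1" + c for ch, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("ABRACADABRA")
bits = sum(len(codes[ch]) for ch in "ABRACADABRA")
print(bits)  # 23 bits, versus 33 for a fixed 3-bit code
```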
The Pattern
Greedy works when:
- You can prove the greedy choice is safe (won't block better solutions)
- After the greedy choice, the remaining problem is similar to the original (optimal substructure)
When those hold, greedy is beautiful: simple, fast, easy to code.
When they don't, greedy might give a "good enough" approximation, but you need dynamic programming or other techniques for optimality.
Greedy vs Dynamic Programming
Both solve optimization problems. How to choose?
Greedy: Make one choice per step, never reconsider. Fast (often O(n log n) for sorting).
Dynamic Programming: Consider all choices, remember results, build up from subproblems. Slower (often O(n²) or worse) but handles cases greedy can't.
Example: 0/1 knapsack (can't take fractions). Greedy by value/weight fails. DP works.
Example: Fractional knapsack. Greedy by value/weight works and is faster than DP.
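The contrast shows up clearly with the same three items from the fractional knapsack example. This is a minimal sketch assuming small integer weights; the DP runs in O(n·W) time and the function names are illustrative.

```python
def knapsack_01(items, capacity):
    """0/1 knapsack via DP over capacities: best[c] is the max value
    achievable with capacity c, each item used at most once."""
    best = [0] * (capacity + 1)
    for value, weight in items:
        # iterate capacities downward so each item is counted once
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

def knapsack_greedy(items, capacity):
    """Greedy by value/weight, whole items only: suboptimal in general."""
    total = 0
    for value, weight in sorted(items, key=lambda i: i[0] / i[1], reverse=True):
        if weight <= capacity:
            total += value
            capacity -= weight
    return total

items = [(60, 10), (100, 20), (120, 30)]
print(knapsack_greedy(items, 50))  # 160: takes items 1 and 2, item 3 no longer fits
print(knapsack_01(items, 50))      # 220: items 2 and 3 are the true optimum
```

The greedy version commits to the highest-ratio item and can never undo that choice; the DP version implicitly considers both taking and skipping every item.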
When to Try Greedy
Ask yourself:
- Is there an obvious "locally best" choice at each step?
- Does taking that choice leave a similar problem to solve?
- Can you prove it won't block better solutions?
If yes, try greedy. If it works, great—you have a simple, fast algorithm. If not, you learned the problem needs something more powerful.
Greedy algorithms are optimistic. They assume the obvious choice is the right choice. Sometimes they're right, and you get elegant solutions. Sometimes they're wrong, and you need to be smarter. But they're always worth trying first.
