Learning objectives
  • Understand what Big O notation represents and why we use it
  • Learn to simplify complexity expressions by dropping constants and lower terms
  • Recognize common Big O patterns (O(1), O(log n), O(n), O(n²), O(2ⁿ))
  • Distinguish between worst-case, average-case, and best-case complexity

Big O Notation Basics

Big O notation is how we talk about algorithm efficiency. It's a universal scale that lets you compare algorithms regardless of hardware, programming language, or implementation details.

Instead of saying "this takes 2.3 seconds on my laptop," you say "this is O(n)" or "this is O(n²)." Big O describes how an algorithm behaves as problems get larger, focusing on the shape of growth rather than exact numbers.

What it actually tells you

Big O describes the upper bound of how an algorithm slows down as input grows. In the worst case, how much slower does this get?

  • O(1) stays about the same speed no matter what
  • O(n) gets proportionally slower — double the input, double the time
  • O(n²) gets much slower, quadratically — double the input, quadruple the time

If you're scanning an array to find the maximum value, you check every item once. That's O(n). If you're comparing every pair of items (like in bubble sort), you have nested loops — that's O(n²).
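A minimal Python sketch of both cases (the function names are illustrative, not from the lesson): one pass to find the maximum is O(n), while comparing every pair requires nested loops and is O(n²).

```python
def find_max(items):
    """O(n): one pass, each element inspected exactly once."""
    largest = items[0]
    for value in items[1:]:
        if value > largest:
            largest = value
    return largest

def all_pairs(items):
    """O(n^2): nested loops touch every pair of elements."""
    pairs = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            pairs.append((items[i], items[j]))
    return pairs

print(find_max([3, 7, 2, 9, 4]))     # 9
print(len(all_pairs([3, 7, 2, 9])))  # 6 pairs from 4 items: n*(n-1)/2
```

Note that `all_pairs` does n(n−1)/2 comparisons, not n² exactly; after dropping the constant ½ and the −n term, that is still O(n²).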

Why we drop constants and smaller terms

Big O focuses on the dominant term. Constants and lower-order terms get ignored.

Say your algorithm does 3n² + 2n + 5 operations. When n is 1000, that's roughly 3 million (from 3n²) plus 2 thousand (from 2n) plus 5. The n² term dominates. So we write O(n²).

A few more examples:

  • 4n log n + 3n → O(n log n)
  • 100 → O(1)
  • 5n + 1000 → O(n)

Even if you have a constant of 1000, when n gets large enough (say 100,000), the linear term overwhelms it.
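You can verify the dominance claim directly. This sketch counts operations for the hypothetical cost function 3n² + 2n + 5 from above and shows what fraction of the total comes from the n² term:

```python
def operations(n):
    """Exact count for the hypothetical cost function 3n^2 + 2n + 5."""
    return 3 * n**2 + 2 * n + 5

for n in [10, 100, 1000]:
    total = operations(n)
    quadratic_share = 3 * n**2 / total
    print(f"n={n:>5}: total={total:>9}, n^2 term = {quadratic_share:.1%} of the work")
```

By n = 1000 the quadratic term accounts for more than 99.9% of the operations, which is why the notation keeps only O(n²).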

Common Big O Complexities

Notation     Name           Example                  Description
O(1)         Constant       Array access by index    Same time regardless of size
O(log n)     Logarithmic    Binary search            Cuts the problem in half each step
O(n)         Linear         Finding max in array     Proportional to input size
O(n log n)   Linearithmic   Efficient sorting        Common in divide-and-conquer
O(n²)        Quadratic      Bubble sort              Nested loops over the data
O(2ⁿ)        Exponential    Subset generation        Grows extremely fast
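Binary search is the classic O(log n) example from the table: each comparison discards half of the remaining range, so a sorted array of a million items needs at most about 20 comparisons. A standard sketch:

```python
def binary_search(sorted_items, target):
    """O(log n): each comparison halves the remaining search range."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1   # target is in the upper half
        else:
            high = mid - 1  # target is in the lower half
    return -1  # not found

data = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
print(binary_search(data, 23))  # 5
print(binary_search(data, 40))  # -1
```

The precondition matters: the input must already be sorted, or the halving logic is invalid.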

How these grow in practice

Here's a rough comparison of operation counts at n = 10:

Input Size (n) → Relative Time
     10    →    1    | O(1)       - constant
     10    →   ~3    | O(log n)   - logarithmic
     10    →   10    | O(n)       - linear
     10    →  ~33    | O(n log n) - linearithmic
     10    →  100    | O(n²)      - quadratic
     10    → 1024    | O(2ⁿ)      - exponential

As n grows larger, the differences become dramatic. An O(2ⁿ) algorithm with n = 20 would take over a million operations. That's why exponential algorithms are usually impractical for anything beyond small inputs.
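The comparison above can be generated for any n. This sketch prints rounded operation counts for each complexity class; at n = 20 the exponential row is 2²⁰ = 1,048,576, the "over a million operations" mentioned above:

```python
import math

def growth_table(n):
    """Rounded operation counts for common complexity classes at a given n."""
    return {
        "O(1)":       1,
        "O(log n)":   round(math.log2(n)),
        "O(n)":       n,
        "O(n log n)": round(n * math.log2(n)),
        "O(n^2)":     n * n,
        "O(2^n)":     2 ** n,
    }

for name, ops in growth_table(20).items():
    print(f"{name:<10} {ops:>10}")
```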

Worst case, best case, average case

Big O is most often quoted for the worst case, but watch the fine print: when people say "quicksort is O(n log n)," they usually mean its average case. Its worst case (for example, already-sorted input with a naive first-element pivot) is O(n²).

You'll also see:

  • Big Ω (Omega): a lower bound on growth (often quoted for the best case)
  • Big Θ (Theta): a tight bound, used when the upper and lower bounds match

For practical purposes, Big O (worst-case) gives you what you need to make good choices. You design for the worst, hope for the best.
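The best/worst gap is easy to see with linear search. In this illustrative sketch (a hypothetical helper, not from the lesson), finding the target at index 0 is the best case, O(1), while a target at the end forces a full scan, O(n):

```python
def linear_search(items, target):
    """Best case O(1) (target is first); worst case O(n) (target is last or absent)."""
    comparisons = 0
    for index, value in enumerate(items):
        comparisons += 1
        if value == target:
            return index, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
print(linear_search(data, 7))  # (0, 1)  best case: found immediately
print(linear_search(data, 5))  # (4, 5)  worst case: scanned everything
```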


Remember: Big O shows growth rate, not exact time. Focus on the dominant term — constants and smaller terms fade away as n grows. Lower is better: O(1) is ideal, O(2ⁿ) is usually impractical. And context matters — choose based on your expected input sizes.

© 2026 forEach. All rights reserved.
