# Why Big-O Matters
Writing code that works is step one. Writing code that works efficiently at scale is step two. Big-O notation is the standard vocabulary developers use to describe and compare how algorithms behave as data grows. Whether you're preparing for technical interviews or designing a system that needs to handle millions of records, Big-O is a tool you use constantly.
## What Big-O Actually Measures
Big-O describes the upper bound of an algorithm's growth rate — how its runtime (or memory use) scales relative to the size of the input, denoted n. It deliberately drops constants and lower-order terms because they become insignificant at large n. You're answering the question: "If I double the input size, what happens to the cost?"
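As a minimal illustration of dropping constants and lower-order terms, consider a (hypothetical) function that does n² + n units of work. The n² term dominates as n grows, so the whole thing is O(n²):

```python
def count_pairs_and_sum(arr):
    """Does n^2 + n units of work, but is still O(n^2):
    the extra n term and any constant factors vanish at large n."""
    pairs = 0
    for a in arr:           # n iterations
        for b in arr:       # n iterations each -> n^2 comparisons
            if a == b:
                pairs += 1
    total = 0
    for x in arr:           # an extra n -- a lower-order term
        total += x
    return pairs, total

# Doubling the input roughly quadruples the dominant n^2 cost.
```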
## The Most Common Complexity Classes
### O(1) — Constant Time
The operation takes the same time regardless of input size.
```python
def get_first(arr):
    return arr[0]  # always one operation
```
Examples: array index access, hash map lookup (average case), stack push/pop.
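A quick sketch of the listed operations, using Python's `dict` and `list` as stand-ins for a hash map and a stack (the names and values here are illustrative):

```python
def demo_constant_ops():
    prices = {"apple": 1.50, "pear": 2.00}  # hash map

    price = prices["pear"]  # average-case O(1) hash lookup

    stack = []
    stack.append(price)     # O(1) push (amortized)
    return stack.pop()      # O(1) pop
```

None of these operations gets slower as the dictionary or list grows, which is exactly what O(1) promises.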
### O(log n) — Logarithmic Time
Each step eliminates half the remaining work. Common in divide-and-conquer algorithms.
```python
def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # halve the search range each pass
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```
Examples: binary search, balanced BST operations, heap operations.
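To see heap operations in action, here is a small sketch using Python's standard-library `heapq` (the `k_smallest` helper is illustrative, not a built-in):

```python
import heapq

def k_smallest(nums, k):
    """heapify is O(n); each heappop is O(log n),
    so extracting k items costs O(k log n)."""
    heap = list(nums)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(k)]
```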
### O(n) — Linear Time
Work grows proportionally to input size. You touch each element once.
```python
def find_max(arr):
    max_val = arr[0]
    for x in arr:  # n iterations
        if x > max_val:
            max_val = x
    return max_val
```
### O(n log n) — Linearithmic Time
The sweet spot for comparison-based sorting: no algorithm that sorts by comparing elements can beat O(n log n) in the general case.
Examples: Merge sort, Heap sort, Timsort (Python's built-in sort).
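Since this class has no code example above, here is a minimal merge sort sketch. The O(n log n) cost comes from log n levels of splitting, each doing O(n) merge work:

```python
def merge_sort(arr):
    """O(n log n): log n levels of recursion, O(n) merging per level."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    # Merge the two sorted halves in O(n)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```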
### O(n²) — Quadratic Time
A nested loop over the same data. Fine for small inputs; catastrophic for large ones.
```python
def bubble_sort(arr):
    for i in range(len(arr)):          # n passes
        for j in range(len(arr) - 1):  # n comparisons per pass
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
```
### O(2ⁿ) and O(n!) — Exponential & Factorial
These grow so fast they're only practical for tiny inputs. Classic examples: recursive Fibonacci without memoization (O(2ⁿ)), generating all permutations (O(n!)).
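A side-by-side sketch of the Fibonacci example: the naive version recomputes the same subproblems exponentially many times, while caching results with the standard-library `lru_cache` brings it down to O(n):

```python
from functools import lru_cache

def fib_naive(n):
    # O(2^n): each call spawns two more, recomputing the same values
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # O(n): each value of n is computed once, then served from cache
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

`fib_naive(40)` takes noticeable seconds; `fib_memo(400)` is instant. Same answer, wildly different growth rate.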
## Complexity Comparison at a Glance
| Complexity | n = 10 | n = 100 | n = 1,000 |
|---|---|---|---|
| O(1) | 1 | 1 | 1 |
| O(log n) | 3 | 7 | 10 |
| O(n) | 10 | 100 | 1,000 |
| O(n log n) | 33 | 664 | 9,966 |
| O(n²) | 100 | 10,000 | 1,000,000 |
| O(2ⁿ) | 1,024 | ~10³⁰ | ~10³⁰¹ |
## Space Complexity
Big-O applies to memory too, not just time. An algorithm that creates a new copy of its input uses O(n) space. One that sorts in place uses O(1) extra space. Both dimensions matter when designing systems under memory constraints.
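The copy-versus-in-place distinction can be sketched with list reversal (both helpers below are illustrative):

```python
def reversed_copy(arr):
    # O(n) extra space: allocates a new list the same size as the input
    return arr[::-1]

def reverse_in_place(arr):
    # O(1) extra space: swaps elements within the existing list
    lo, hi = 0, len(arr) - 1
    while lo < hi:
        arr[lo], arr[hi] = arr[hi], arr[lo]
        lo += 1
        hi -= 1
    return arr
```

Both are O(n) in time; they differ only in the space dimension.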
## Common Mistakes
- Ignoring constants in practice. O(n) with a huge constant can be slower than O(n²) for small n. Profile real code; don't only reason abstractly.
- Assuming average-case is worst-case. Quicksort is O(n log n) on average but O(n²) worst case. Know the difference.
- Forgetting hidden loops. Operations like `in` on a Python list are O(n). Calling one inside a loop makes the loop O(n²).
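The hidden-loop mistake, and its standard fix, can be sketched as (function names here are illustrative):

```python
def common_items_slow(a, b):
    # `x in b` scans list b linearly: O(n) per check -> O(n^2) overall
    return [x for x in a if x in b]

def common_items_fast(a, b):
    b_set = set(b)  # O(n) once, up front
    # set membership is O(1) on average -> O(n) overall
    return [x for x in a if x in b_set]
```

Same result, but converting the lookup target to a set removes the hidden inner loop.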
## Conclusion
Big-O gives you a shared language for reasoning about code efficiency. Start by recognizing common patterns — loops give O(n), nested loops O(n²), halving gives O(log n) — and build intuition from there. The goal isn't to memorize charts; it's to see the scaling behavior in your own code before it becomes a production problem.