Understanding Big O Notation
You may be surprised to learn that writing a function that works correctly isn't enough to pass a coding interview. Interviewers also test your ability to write efficient code that performs well at scale. The most common way to evaluate (and explain) this is Big O notation, which you'll often hear in discussions about time and space complexity, with phrases like:
- "Solve this in time."
- "Try to achieve time complexity."
- "The space complexity should be or less."
That's all Big O notation!

The table above shows how time and space requirements change with input size for different complexity classes. Don't worry if it seems unclear for now; we'll break it down below.
Time Complexity: This measures how an algorithm's execution time scales with input size. It helps estimate the run time growth as input increases, regardless of hardware or environment.
Space Complexity: This gauges the memory usage of an algorithm relative to its input size, helping assess its impact on system resources, especially under hardware constraints.
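To make the trade-off between the two concrete, here is a small sketch (the functions and names are illustrative, not from the text above) of two ways to check a list for duplicates: one trades extra memory for speed, the other uses no extra memory but does more work.

```python
def has_duplicate_set(items):
    """O(n) time, O(n) space: trades memory for speed.

    The set can grow as large as the input, but each lookup
    and insertion is a constant-time operation on average.
    """
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False


def has_duplicate_nested(items):
    """O(n^2) time, O(1) space: no extra memory, but slower.

    Every pair of elements is compared, so the work grows
    quadratically with the input size.
    """
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```

Both return the same answers; they differ only in how their time and space requirements scale, which is exactly what complexity analysis captures.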
Clarify constraints early: Ask questions like, "Are there any time or space limits?" This shows the interviewer that you're thinking ahead and helps you tailor your solution to what's expected.
So what does it mean to solve something in O(n) time? Simply put, it means that the worst-case time taken for the function to complete grows linearly with the size of the input.
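As a quick sketch (a hypothetical example, not one given above), a function that scans its input exactly once runs in O(n) time:

```python
def find_max(numbers):
    """Touches each element exactly once, so the work grows
    linearly with len(numbers): O(n) time in the worst case."""
    best = numbers[0]
    for x in numbers[1:]:
        if x > best:
            best = x
    return best
```

Doubling the length of `numbers` roughly doubles the number of comparisons, which is the defining property of linear time.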
Asymptotic notation
Big O notation is one of three types of asymptotic notation. Let's explore all three and understand why Big O is preferred by comparing two functions, yours and a friend's, that achieve the same result. We'll compare the number of operations each requires in the worst case as the input size grows:
At smaller input sizes (e.g., 1-5), your function might seem more efficient. But as the input grows, your function's operation count climbs steeply, while your friend's grows far more slowly. A plot of operations versus input size would show this clearly:

The plot shows that your function doesn't scale as efficiently as your friend's. Since real-world inputs are usually larger than 10, your friend's function is better suited for larger inputs.
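Assuming the two solutions looked something like the following (a hypothetical pair-sum task, chosen only for illustration), the difference in scaling comes from how many operations each performs:

```python
def two_sum_yours(nums, target):
    """Hypothetical 'your' version: checks every pair of elements,
    roughly n * n operations in the worst case."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False


def two_sum_friends(nums, target):
    """Hypothetical 'friend's' version: one pass with a set,
    roughly n operations in the worst case."""
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False
```

For 10 elements the gap is barely noticeable; for 1,000,000 elements the nested-loop version performs on the order of a trillion comparisons while the single-pass version performs about a million.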
Ask about input size: Knowing the input size helps you decide if efficiency is needed. For example, a quadratic solution might be fine for small inputs but inefficient for larger ones.
To describe how algorithms scale with input size, we use asymptotic notation, which includes Big-O, Omega, and Theta.
Big-O Notation. Big-O notation describes the upper bound of an algorithm's running time. It provides the worst-case scenario of how an algorithm performs as the input size grows. For example, if an algorithm runs in O(n) time, the time to complete will grow linearly with the input size.
Omega Notation. Omega notation describes the lower bound of an algorithm's running time. It provides the best-case scenario. For example, if an algorithm runs in Ω(n) time, it means that in the best-case scenario, the algorithm's performance will be proportional to the input size.
Theta Notation. Theta notation describes the tight bound of an algorithm's running time. It provides both the upper and lower bounds. For example, if an algorithm runs in Θ(n) time, it means that the algorithm's performance is tightly bound to the input size, both in the best and worst cases.
Among the three types of asymptotic notation (Big O, Omega, and Theta) Big O notation is the most commonly used because it provides an upper bound on an algorithm’s running time, representing the worst-case scenario. This is crucial for ensuring predictability and reliability in performance. Big-O notation simplifies the comparison of algorithm efficiencies and is widely accepted in both academia and industry.
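A classic way to see different bounds on one algorithm is linear search. This sketch (an illustrative example, not from the text) counts comparisons so the best and worst cases are visible: if the target is the first element, one comparison suffices (the Ω(1) best case); if it is absent, all n elements are checked (the O(n) worst case).

```python
def linear_search(items, target):
    """Returns (index, comparisons_made).

    Best case (target is first): 1 comparison  -> Omega(1).
    Worst case (target absent):  n comparisons -> O(n).
    """
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons
```

When we quote a single complexity for linear search, we quote the worst case, O(n), because that is the guarantee that holds for every input.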
Comparing Big O complexity
Understanding how different asymptotic complexities affect the performance of algorithms is crucial for designing efficient code. Let's explore this with a table of common time complexities and their performance rankings:

| Complexity | Name | Ranking |
|---|---|---|
| O(1) | Constant | Excellent |
| O(log n) | Logarithmic | Excellent |
| O(n) | Linear | Good |
| O(n log n) | Linearithmic | Fair |
| O(n²) | Quadratic | Poor |
| O(n³) | Cubic | Very poor |
The complexities listed above are not exhaustive. Theoretically, you can encounter complexities like O(2ⁿ) or O(n!). However, such complexities are rarely seen in practical programs because they perform poorly with larger inputs. Programs with these complexities typically exhibit significant performance issues, which is why more efficient algorithms are preferred.
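To get a feel for how quickly these classes diverge, this sketch (illustrative, not from the text) evaluates each growth function at a fixed input size:

```python
import math

# Illustrative growth functions for common complexity classes
complexities = {
    "O(1)":       lambda n: 1,
    "O(log n)":   lambda n: math.log2(n),
    "O(n)":       lambda n: n,
    "O(n log n)": lambda n: n * math.log2(n),
    "O(n^2)":     lambda n: n ** 2,
    "O(2^n)":     lambda n: 2 ** n,
}

for name, growth in complexities.items():
    print(f"{name:>10}: ~{int(growth(20)):,} operations at n=20")
```

Even at n = 20, the exponential class needs over a million operations while the logarithmic class needs only a handful; the gap widens explosively as n grows.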
Asymptotic complexities help us describe both the time and space requirements of an algorithm.
For example, if a function function_a has a time complexity of O(n) and a space complexity of O(1), it means that function_a runs in linear time, which is generally acceptable, and uses a constant amount of memory, which is considered excellent. This way, we can easily communicate how an algorithm performs in terms of both its execution time and memory usage.
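A minimal sketch of such a function (the body is illustrative; only the name function_a comes from the text): summing a list takes one pass over the input, which is O(n) time, while using only a single accumulator, which is O(1) space.

```python
def function_a(numbers):
    """O(n) time: one pass over the input.
    O(1) space: a single accumulator, regardless of input size."""
    total = 0
    for x in numbers:
        total += x
    return total
```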
Asymptotic equivalence
When analyzing the running time of an algorithm, we often encounter complex expressions. However, not all terms in these expressions equally affect the growth rate as the input size n increases. The idea of asymptotic equivalence allows us to ignore constants and lower-order terms, simplifying the expression to focus on the dominant term.
For example, consider O(5n). The constant factor 5 in 5n does not affect the growth rate because Big O notation ignores constant multipliers. Therefore, O(5n) simplifies to O(n).
Now consider O(n² + n). In this case, the n² term dominates as n becomes large. Since n² grows much faster than n, the n term becomes insignificant in the context of Big O notation. Thus, O(n² + n) simplifies to O(n²).
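This dominance can be checked numerically. The sketch below assumes an illustrative exact cost function of n² + 5n and shows that its ratio to n² approaches 1 as n grows, which is why only the dominant term survives in the Big O expression:

```python
def exact_ops(n):
    # Illustrative exact cost: a dominant n^2 term plus a lower-order 5n term
    return n * n + 5 * n

for n in (10, 100, 10_000):
    ratio = exact_ops(n) / (n * n)
    # Ratio approaches 1 as n grows: the n^2 term dominates
    print(n, round(ratio, 4))
```

The printed ratios shrink from 1.5 at n = 10 to 1.0005 at n = 10,000: the lower-order term contributes a vanishing fraction of the total cost.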