Analyzing Time Complexity
Analyzing the time complexity of an algorithm involves examining the number of operations it performs as the size of the input grows. An operation, in this context, refers to a basic computational step, such as a comparison or arithmetic operation.
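We can make this counting explicit. The sketch below (illustrative, not part of the lesson's code) counts the comparisons performed while finding the maximum of a list:

```python
def find_max(arr):
    """Return the maximum of a non-empty list and the number of comparisons made."""
    comparisons = 0
    best = arr[0]
    for item in arr[1:]:
        comparisons += 1  # one comparison per remaining element
        if item > best:
            best = item
    return best, comparisons

# A list of n elements requires exactly n - 1 comparisons.
```

Counting operations this way is the starting point for the complexity analyses that follow.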
Example 1: Linear time complexity
Consider a simple for loop that iterates over an array. Here's an example in Python:
```python
def linear_example(arr):
    for item in arr:
        print(item)  # Operation: printing each item
```
In this function, the number of operations scales linearly with the size of the input array `arr`. Thus, this function has a time complexity of O(n), where n is the number of elements in the array. This is known as linear time complexity.
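One way to see this linear growth, under the assumption that one loop iteration counts as one operation, is to count iterations instead of printing:

```python
def linear_operations(arr):
    # Mirrors linear_example, but counts iterations instead of printing
    count = 0
    for _ in arr:
        count += 1  # one operation per element
    return count

small = linear_operations(list(range(100)))  # 100 operations
large = linear_operations(list(range(200)))  # 200 operations
# Doubling the input size doubles the operation count: linear growth.
```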
Example 2: Quadratic time complexity
Now, let’s look at a function with nested loops and an additional loop:
```python
def quadratic_example(arr):
    # Nested for loops: O(n) * O(n)
    for i in range(len(arr)):
        for j in range(len(arr)):
            print(arr[i], arr[j])  # Operation: printing pairs
    # Separate for loop: O(n)
    for item in arr:
        print(item)
```
In this function:

- The nested `for` loops iterate over all pairs of elements, resulting in O(n²) complexity.
- The separate `for` loop adds O(n) complexity.

When combined, the total time complexity is dominated by the quadratic term, so the overall complexity is O(n²).
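To make the count concrete, here is a sketch that tallies the operations `quadratic_example` would perform, assuming each print counts as one operation:

```python
def quadratic_operations(arr):
    # Counts the operations quadratic_example performs, without printing
    count = 0
    for i in range(len(arr)):
        for j in range(len(arr)):
            count += 1  # n * n pair operations
    for _ in arr:
        count += 1  # n additional operations
    return count

# For n = 10 elements, the total is n^2 + n = 110 operations.
```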
When analyzing time complexity, we focus on how the running time of an algorithm grows as the input size increases. This involves comparing the rates at which different terms grow. In the example:
- The nested `for` loops contribute O(n²) complexity.
- The separate `for` loop contributes O(n) complexity.

The total time complexity is the sum of these two terms: O(n² + n). However, in asymptotic analysis, we are interested in the term that grows fastest as n becomes very large. As n increases, n² grows much faster than n.

In Big O notation, we simplify the expression by focusing on the term that grows the fastest. Therefore, O(n² + n) simplifies to O(n²) because the quadratic term dominates the linear term as n becomes large.

Thus, we can say that O(n² + n) is asymptotically equivalent to O(n²). This simplification is a standard practice in asymptotic notation to focus on the most significant term.
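A quick numeric check (purely illustrative) shows how fast the linear term becomes negligible:

```python
for n in (10, 1_000, 1_000_000):
    quadratic = n * n
    linear = n
    share = linear / (quadratic + linear)  # linear term's share of n^2 + n
    print(f"n={n}: the linear term is {share:.6%} of the total")

# As n grows, the linear term's share shrinks toward zero,
# which is why O(n^2 + n) simplifies to O(n^2).
```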
Example 3: Built-in functions
It's important to consider the time complexity of built-in functions when analyzing algorithms. For example, Python’s built-in sort function:
```python
def sort_example(arr):
    arr.sort()  # Python's sort function
```
While sorting may look like a linear operation due to its one-line invocation, Python's sort function actually has a time complexity of O(n log n). Therefore, if you incorrectly assume linear time complexity, you might misjudge the performance of your entire function. Here's a table of some common built-in functions in Python and their associated time complexities:

| Function | Time complexity |
| --- | --- |
| `len(lst)` | O(1) |
| `lst.append(x)` | O(1) amortized |
| `x in lst` (list membership) | O(n) |
| `x in s` (set membership) | O(1) on average |
| `min(lst)`, `max(lst)` | O(n) |
| `lst.sort()`, `sorted(lst)` | O(n log n) |
Note that these complexities are specific to Python’s implementation and may vary in other programming languages due to differences in underlying data structures and implementations.
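As a concrete illustration of why these complexities matter, the micro-benchmark sketch below compares membership testing on a list (an O(n) scan) with a set (an O(1) average hash lookup); exact timings will vary by machine:

```python
import timeit

n = 100_000
data_list = list(range(n))
data_set = set(data_list)

# Searching for the last element forces the list to scan all n items.
list_time = timeit.timeit(lambda: (n - 1) in data_list, number=100)
set_time = timeit.timeit(lambda: (n - 1) in data_set, number=100)

# On most machines the set lookup is orders of magnitude faster.
```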
Test your knowledge
Question: True or False: A function with a time complexity of O(n²) is less efficient than a function with a time complexity of O(n).
Question: True or False: The time complexity O(n² + n) is considered the same as O(n²) when comparing their growth rates.
Question: What is the time complexity of func1?
```python
def func1(arr):
    return arr[0]
```
Question: What is the time complexity of func2?
```python
def func2(arr):
    arr.sort()
    for i in range(len(arr)):
        print(arr[i])
```
Question: What is the time complexity of func3?
```python
def func3(arr):
    for i in range(len(arr)):
        for j in range(len(arr)):
            for k in range(len(arr)):
                print(arr[i], arr[j], arr[k])
```
Question: What is the time complexity of func4?
```python
def func4(n):
    if n <= 1:
        return 1
    else:
        return func4(n - 1) + func4(n - 2)
```