Algorithms & Data Structures: Big O notation

In computer science, big-O notation is used to express the performance or complexity of an algorithm. Big-O describes the worst-case scenario and can refer either to the amount of time an algorithm takes to execute or to the amount of space it takes up.

To implement a good system that is used by millions and possibly billions of users, it is important to understand the performance of the code and algorithms that you write or use. For those with no background in computer science, understanding big-O notation can be a daunting task. Having recently started learning DSA, I'm going to explain big-O notation as simply as I understood it. A programmatic solution to a problem has three main cases:

  • Best case
  • Expected case
  • Worst case

Often the worst-case scenario is given the most attention because it represents the point of failure for the solution. For example, looking for an element in a list of 10 items is close to a best case, as it won't take up significant computational resources. However, looking for an item in a list of 100 million items will consume significant resources. This complexity can then simply be described with a big-O notation. Different algorithms can have the same big-O notation, but it is important to only compare algorithms from the same domain, i.e. compare bubble sort with other sorting algorithms and not with search algorithms.

The most common big-O notations include:

  • O(1)
  • O(n)
  • O(n^2)
  • O(log n)

To help us understand what these notations mean, I'm going to use theoretical examples with multiples of 10. However, it is important to note that big-O notation simplifies the math, so the relation between the input and the number of operations may not be exactly what the notation indicates. This is because only the dominating factor of the complexity is considered. For example, an algorithm that performs 3n^2 + 5n + 20 operations is simply O(n^2), because for large inputs the n^2 term dominates the rest.

1. O(1): Constant complexity

  • 1 item: 1 operation
  • 10 items: 1 operation
  • 100 items: 1 operation
  • 1000 items: 1 operation

Note that as the number of items increases by a factor of 10, the number of operations stays at 1. This is denoted by O(1), where the 1 in the brackets indicates a constant.
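
Here is a minimal Python sketch of a constant-time operation (the function name is my own, just for illustration): indexing into a list costs the same no matter how long the list is.

```python
def get_first_item(items):
    # Indexing into a Python list takes the same time
    # whether it holds 10 items or 10 million: O(1).
    return items[0]

print(get_first_item([4, 8, 15, 16, 23, 42]))  # 4
```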

2. O(n): Linear complexity

  • 1 item: 1 operation
  • 10 items: 10 operations
  • 100 items: 100 operations
  • 1000 items: 1000 operations

Note that as the number of items increases by a factor of 10, the number of operations increases by the same factor. This is to say the complexity of a given solution grows in equal proportion with the amount of input. This is represented by O(n), where n is the number of items in the input.
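
A rough Python sketch of a linear-time operation (the function name is mine, for illustration): a linear search may have to check every item, so the work grows in step with the input size.

```python
def contains(items, target):
    # Worst case: every item is compared against the target,
    # so the number of comparisons grows with len(items): O(n).
    for item in items:
        if item == target:
            return True
    return False

print(contains([4, 8, 15, 16, 23, 42], 23))  # True
```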

3. O(n^2): Quadratic complexity

  • 1 item: 1 operation
  • 10 items: 100 operations
  • 100 items: 10,000 operations
  • 1000 items: 1,000,000 operations

Note that as the number of items increases by a factor of 10, the number of operations increases by a factor of 10^2, i.e. 100. The number of operations is the square of the input size. This is represented by O(n^2), where n^2 captures the squaring.
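
Bubble sort, mentioned earlier, is a classic quadratic-time algorithm. Here is a rough Python sketch of it, assuming an in-place sort of a list of comparable items.

```python
def bubble_sort(items):
    # For each of the n items the inner loop makes up to n - 1 comparisons,
    # so the total work grows roughly with n * n: O(n^2).
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([42, 8, 23, 4, 16, 15]))  # [4, 8, 15, 16, 23, 42]
```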

4. O(log n): Logarithmic complexity

  • 1 item: 1 operation
  • 10 items: 2 operations
  • 100 items: 3 operations
  • 1000 items: 4 operations

Note that the number of operations grows only with the logarithm of the input size. Here we use log with base 10: every time the input grows by a factor of 10, the operation count goes up by just 1, since log 10 = 1, log 100 = 2, log 1000 = 3, and so on. (The counts in the list above are one more than log n, because even a single item needs one operation, but big-O ignores such constants.) This is then represented by O(log n).
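
Binary search is the textbook example of logarithmic time. Here is a rough Python sketch (the function name is mine, for illustration), assuming the input list is already sorted.

```python
def binary_search(sorted_items, target):
    # Each comparison discards half of the remaining range,
    # so the number of steps grows with the logarithm of the input size: O(log n).
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not found

print(binary_search([4, 8, 15, 16, 23, 42], 23))  # 4
```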

Understanding the performance and complexity of the algorithms you write will definitely help you improve as a programmer. Here is a video link that will help you better understand big-O notation.

Happy learning.