Part-2: What is Big O Notation?

In our previous tutorial we covered some basics of algorithms and why we need Big O. In this tutorial we will learn about Big O notation in depth. If you haven't read my last tutorial, I would suggest reading it first to get a deeper understanding of algorithms:

What is Algorithm?

What is Big O Notation?

A program or algorithm mainly depends on two factors:

  • Time: how long it takes to complete the program or algorithm
  • Space: how much memory it needs to run a specific program or algorithm

A problem might have multiple solutions, and each solution can have a different time/space complexity. Big O notation is used to describe the time/space complexity of a solution.

Big O notation helps us figure out the upper, lower and tight bounds for time/space complexity. Using these bounds you can check which solution would perform better or worse in each case.

The complexity of an algorithm f(n) gives us:

  • time required to complete a program
  • space required by a program

Big-O notation:

Big O refers to the upper bound of the time/space complexity of an algorithm, i.e. the worst-case runtime scenario.

In simple words, the complexity of an algorithm f(n) will never exceed c*g(n) once the input is large enough.

0 <= f(n) <= c*g(n) for all n >= n0, where c and n0 are positive constants.

Big-Ω notation:

Big-Ω refers to the lower bound of the time/space complexity of an algorithm, i.e. the best-case runtime scenario.

In simple words, the complexity of an algorithm f(n) will never drop below c*g(n) once the input is large enough.

0 <= c*g(n) <= f(n) for all n >= n0, where c and n0 are positive constants.

Big-θ notation:

Big-θ refers to the tight bound of the time/space complexity of an algorithm, i.e. the intersection of Big-O and Big-Ω.

In simple words, the complexity of an algorithm f(n) will always stay between its lower and upper limits.

0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0, where c1 and c2 are positive constants.
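
As a quick check of these definitions, take f(n) = 3n + 2 with g(n) = n:

0 <= 3n + 2 <= 4n for all n >= 2, so f(n) = O(n) with c = 4
0 <= 3n <= 3n + 2 for all n >= 1, so f(n) = Ω(n) with c = 3
0 <= 3n <= 3n + 2 <= 4n for all n >= 2, so f(n) = θ(n) with c1 = 3 and c2 = 4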

What is a program or an algorithm?

Before we move on to time/space complexity, it is crucial to understand what an algorithm is. An algorithm is basically the sequence of steps required to solve a programming problem.

For example: Designing an algorithm that can add two numbers.

A program in general consists of:

  • statements, like
    • declaring a variable
    • adding two numbers
    • control structures like loops, etc.
  • each statement is made up of one or more operations
    • addition
    • subtraction
    • multiplication
    • assignment, etc.

Let's understand this with the examples below:

# write an algorithm that adds two numbers
function sum(a, b) {
   return a + b;  // this is a statement which has one operation (addition)
}

# write a program to swap two numbers
function swap(a, b) {
    
    let c = a; // statement: 1 operation (assignment)
    
    a = b; // statement: 1 operation (assignment)
    b = c; // statement: 1 operation (assignment)
    
    console.log(a, b); // statement: 1 operation (print)
}

Once you understand the difference between a statement and an operation, you are ready to learn about time/space complexity.

What is space complexity of an algorithm?

Space complexity of an algorithm is basically the amount of memory required to run a specific algorithm. Space complexity is the sum of:

  • fixed part: space required for storing data that does not depend on the input size.
  • variable (moving) part: space that grows with the input size.

# Equation can be defined as below
# Space complexity for input size n can be written as
S(n) = Fixed Part + Variable Part

To calculate the space complexity of an algorithm, you assign 1 unit of space to each operation that requires memory. For example, creating a new variable requires some space; let's say it takes 1 unit of space.

We are not going to learn in depth how to calculate the exact amount of memory a program uses; instead, we will count how many units of space a program requires to solve a specific problem.

Let's look at some examples below:

// FIXED PART: constant space (1 unit of space)
var a = 1;  // variable assignment = 1 unit of space


// MOVING PART: an operation repeated based on the input size n
for (var i=0; i<n; i++) {   // this loop statement repeats n times
    var z = i; // 1 unit of space * n
}

In the above example, we have a fixed part and a moving part. The fixed part is constant space, i.e. 1 unit of space. The moving part is different: it usually depends on the input size, which in our case is n.

When a statement is repeated n times, all the operations inside it are also repeated n times. In our case the for loop repeats n times, therefore the variable z, which takes 1 unit of space, is counted n times.

To calculate the overall space complexity of the above code:

# Equation: S(n) = Fixed Part + Moving Part
S(n) = 1 unit of space + n units of space
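
To make the fixed vs. moving distinction more concrete, here is a minimal sketch (the function names are made up for illustration): the first function only ever creates a couple of variables, while the second builds an array whose size grows with n.

// Fixed part only: a few variables, no matter what the input is
function average(a, b) {
    let total = a + b;       // 1 unit of space
    let result = total / 2;  // 1 unit of space
    return result;           // S(n) = 2 units of space (constant)
}

// Fixed part + moving part: the array grows with the input size n
function firstNSquares(n) {
    let squares = [];            // 1 unit of space (fixed part)
    for (let i = 0; i < n; i++) {
        squares.push(i * i);     // 1 unit of space * n (moving part)
    }
    return squares;              // S(n) = 1 + n units of space
}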

What is the time complexity of an algorithm?

The time complexity of an algorithm is basically the time required by the algorithm to produce its output, i.e. to complete the program. Just like space complexity, time complexity has two parts:

  • Fixed part: 1 unit of time (constant time)
  • Moving part: n units of time (constant time * number of repetitions)

Each operation is considered to take 1 unit of time. To calculate the overall time complexity, you simply add up the cost of all the statements; each statement might take 1 or more units of time.

Let's understand by two examples:

# Let's assume that
1 operation = 1 unit of time

// FIXED PART
function sum(a, b) {
    return a + b; // one statement with one operation (1 unit of time)
}

// Overall Time complexity of sum function
T(n) = 1 unit of time

// MOVING PART
function sumOfN(n) {
    let sum = 0; // 1-statement, 1-operation = 1 unit of time

    // the loop statement repeats n times
    // any operation inside this loop is repeated n times
    for(let i=0; i < n; i++) { 
        sum += i; // 1 operation * n
    }

    return sum; // 1-statement, 1-operation (return)
}

// Overall Time complexity for the sumOfN function is
T(n) = 1 unit of time + n units of time + 1 unit of time
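
One way to convince yourself of this is to count the operations while the function runs. The sketch below adds a hypothetical operations counter purely for illustration; it is not part of the original algorithm.

// Counting operations to see how T(n) grows with n (illustrative only)
function sumOfNCounted(n) {
    let operations = 0;

    let sum = 0;                 // 1 unit of time
    operations++;

    for (let i = 0; i < n; i++) {
        sum += i;                // 1 unit of time, repeated n times
        operations++;
    }

    operations++;                // count the return statement below
    console.log(`n = ${n}, operations = ${operations}`);
    return sum;
}

sumOfNCounted(10);   // prints: n = 10, operations = 12
sumOfNCounted(100);  // prints: n = 100, operations = 102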

Writing equations for time/space complexity

Now that we have learned how to calculate space and time complexity, it's time to learn how to write equations for an algorithm. Writing equations will help you clearly understand the time/space complexity of a program.

Later on we will use these equations to find Big O complexities. Let's take one example: you are designing an algorithm that sums the first n numbers.

# calculate time/space complexity of following function
function sumOfN(n) {
    let sum = 0;
    for(let i=0; i < n; i++) { 
        sum += i;
    }
    return sum;
}

# calculate the time complexity of the above program
# time complexity for input size n is
T(n) = sum of the time units of all statements

# Let x equal 1 unit of time
function sumOfN(n) {
    let sum = 0; // x

    for(let i=0; i < n; i++) { // n repetition
        sum += i; // x*n
    }

    return sum; // x
}

T(n) = x + xn + x 
     = 2x + xn 
     = 2 + n (because x = 1 unit of time)

# calculate the space complexity of the above program
# space complexity for input size n is
S(n) = sum of the space units of all statements

# Let x equal 1 unit of space
function sumOfN(n) {
    let sum = 0; // x

    for(let i=0; i < n; i++) { // n repetition
        sum += i; // x*n
    }

    return sum; // x
}

S(n) = x + xn + x 
     = 2x + xn 
     = 2 + n (because x = 1 unit of space)
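
Once you have an equation like T(n) = 2 + n, matching it against the Big O definition from earlier is straightforward: 0 <= 2 + n <= 2n for all n >= 2, so T(n) = O(n) with c = 2. The constant terms drop away and only the part that grows with n is left; we will use this idea again in the next tutorials.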

What are the different types of Big O complexities?

The following are some of the different types of time/space complexities:

  • Constant Time: O(1)
  • Logarithmic Time: O(log n)
  • Linear Time: O(n)
  • Linearithmic Time: O(n log n)
  • Quadratic Time: O(n^2)
  • Cubic Time: O(n^3)
  • Polynomial Time: O(n^k)
  • Exponential Time: O(b^n)
  • Factorial Time: O(n!)

Here n is the size of the input, and the complexities are ordered from smallest to largest (a few of them are sketched below).

Reference: https://danielmiessler.com/study/big-o-notation
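
To get an initial feel for the first few classes, here is a minimal sketch (the function names are made up for illustration):

// Constant Time O(1): one operation, no matter how large the array is
function firstItem(arr) {
    return arr[0];
}

// Linear Time O(n): the loop body runs once per element
function total(arr) {
    let sum = 0;
    for (let i = 0; i < arr.length; i++) {
        sum += arr[i];
    }
    return sum;
}

// Quadratic Time O(n^2): a loop inside a loop runs n * n times
function hasDuplicate(arr) {
    for (let i = 0; i < arr.length; i++) {
        for (let j = 0; j < arr.length; j++) {
            if (i !== j && arr[i] === arr[j]) {
                return true;
            }
        }
    }
    return false;
}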

Next, we will learn about Constant Time O(1) complexity of an algorithm.

Constant Time O(1) Complexity