Think about the moment you’ve successfully written an algorithm to solve a problem. You’ll be happy, of course. But what if your algorithm is actually quite slow, and a faster one is possible? How can you tell how fast or slow your algorithm is going to be? Big O Notation is the answer.

**Big O Notation is a method for measuring the complexity of an algorithm based on the number of inputs (n), without actually running it. As n gets larger, the algorithm has more work to do and runs more slowly. Big O is used to describe how much slower the algorithm gets as the input grows, based on the efficiency of the algorithm.**

If you need a refresher on algorithms, read my other post, What is Algorithm And Why is It Important?

## How Does Big O Work?

Think of it this way: we have n inputs. Every operation the algorithm performs (+, -, /, *, if) adds to the total amount of work, and the question is how that total grows as n grows.

That’s quite abstract, right? Let’s try an example.

Say you have a simple algorithm that multiplies an input by 2.

```javascript
function multiply(x) {
  return x * 2;
}
```

The complexity of this algorithm is simply 1 because there is only one operation in it. No matter how large x is, the operation is done only once, so it is constant.

We write this complexity as O(1) and call it constant time complexity. The convention is to write an O with the actual complexity inside the brackets.

Now, look at this algorithm:

```javascript
function loop(x) {
  let a = 1;
  for (let i = 0; i < x.length; i++) {
    a++;
  }
  return a;
}
```

This one is a bit more complex: the running time grows as the input gets larger. The loop runs as many times as the length of x. If x has 7 elements, it runs 7 times; if x has 8, it runs 8 times. The running time grows linearly, matching the number of inputs, so this is called linear complexity, or O(n).

Constants are usually ignored in Big O. So if your complexity is O(2n), the 2 is dropped and it is considered O(n), because when your input is very large, the constant becomes insignificant.
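To see why the constant gets dropped, here is a hypothetical example of an O(2n) algorithm: it makes two separate passes over the same array, so it does roughly 2n operations, but it is still considered O(n).

```javascript
// Two separate passes over the same array: roughly 2n operations.
// In Big O terms this is still O(n), since the 2 is just a constant.
function sumAndMax(arr) {
  let sum = 0;
  for (let i = 0; i < arr.length; i++) { // first pass: n operations
    sum += arr[i];
  }
  let max = -Infinity;
  for (let i = 0; i < arr.length; i++) { // second pass: another n operations
    if (arr[i] > max) max = arr[i];
  }
  return { sum, max };
}
```

Doubling the input doubles the work whether the function loops once or twice, so both shapes grow linearly.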

One more thing: to formally write the Big O, if the complexity is n, for example, we write it as O(n) to show it is a Big O Notation.

## Complexity Growths

As you saw in the previous example, O(n) complexity is also called linear complexity.

But there are also various other complexities:

- O(1) or constant complexity. It is found in simple algorithms which only involve simple operations like equations. Its runtime would stay the same no matter how large the input is.
- O(log n) or logarithmic complexity. It is an algorithm that grows slowly. It is usually found in algorithms such as binary search.
- O(n) or linear complexity is a complexity that grows according to the number of inputs. It is usually found when you loop through each input.
- O(n log n) or super-linear complexity is found in algorithms such as quick sort and merge sort.
- O(n²) or quadratic complexity is usually found in nested loops.
- O(n³) or cubic complexity is found when you write a triple nested loop.
- O(cⁿ) or exponential complexity is found when you enumerate all subsets of n items.
- O(n!) or factorial complexity is found in permutations or orderings of n items.

The chart below shows the growth rate of each complexity and whether it is considered ideal or not.

## Closing

So that’s the overview of Big O. It is still a pretty broad overview, but I hope you guys got the point.

If you want a more mathematical explanation, I recommend The Algorithm Design Manual by Steven Skiena.

If you want more content about JavaScript, Laravel, or Algorithms, feel free to check out the rest of my blog.