Big O Notation

What is it and why do I need to know it?

Fay Vera
2 min read · Jul 2, 2021

Big O notation is simply a way of comparing the performance of one piece of code to another so that we can determine which one is the “best.”

The “best” or “better” code is the one that is more efficient! With small pieces of code, it’s hard to see how it would really matter, or whether time and space complexity will impact performance at all. Keep in mind that different machines will record different times for the same piece of code (and the SAME machine will often record different times, too), so raw timing isn’t a reliable measure; we have to think about the efficiency of the code itself.

Big O notation focuses on the BIG PICTURE. It’s a general way of describing how our code’s runtime grows as the input grows, a pattern that holds across different machines. All we care about is the general trend!

There are many ways to calculate and check our code — which we won’t get into today, but will in the future! A quick example of “better code” would be to avoid nested loops. You would be looping over the data you have (O(n)), only to loop over it again for each element (another O(n)). This would bring your Big O to O(n²), a quadratic curve that grows fast. Not ideal.
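To make that concrete, here’s a small sketch in Python. The problem (checking a list for duplicates) and the function names are my own illustration, not from the article; the point is simply that the nested-loop version does O(n²) comparisons while a single pass with a set does O(n).

```python
def has_duplicates_quadratic(items):
    """Nested loops: for each item, scan everything after it -- O(n^2)."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicates_linear(items):
    """One pass, remembering what we've seen in a set -- O(n) time."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both functions give the same answer; the second just gets there without looping over the data again for every element.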

If we’re working with smaller code like a small array: [1,2,3,4,5]

Iterating through it wouldn’t be a problem. But imagine we have a big database and a huge amount of data that we need to loop over and check. That would take super long! Keeping Big O notation in mind and striving for “better code” will certainly improve the speed and performance of your task!
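The same idea can be sketched with the small array from above. The million-element list here is a stand-in I chose for the article’s “huge amount of data”: a membership check on a list scans element by element (O(n)), while the same check against a set is O(1) on average, which is exactly the kind of “better code” choice Big O points us toward.

```python
small = [1, 2, 3, 4, 5]
# At this size, a linear scan is instant -- no reason to optimize.
found_small = 4 in small

# Hypothetical "huge" dataset: 'x in list' must scan up to a million items.
big_list = list(range(1_000_000))
found_slow = 999_999 in big_list      # O(n) scan each time

# Converting to a set makes each lookup O(1) on average.
big_set = set(big_list)
found_fast = 999_999 in big_set       # same answer, far less work per lookup
```

For a single lookup the difference is invisible, but repeat it inside a loop and the list version becomes that dreaded O(n²) pattern again.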
