# Big O notation

Big O Notation is a way of comparing algorithms. It describes how the amount of memory or time an algorithm needs grows as the size of its input grows.

The Big O Notation is often used in identifying how complex a problem is, also known as the problem's complexity class. The mathematician Paul Bachmann (1837-1920) was the first to use this notation, in the second edition of his book "Analytische Zahlentheorie", in 1896. Edmund Landau (1877-1938) made the notation popular. For this reason, when people talk about Landau symbols, they are referring to this notation.

Big O Notation is named after the term "order of the function", which refers to the growth of functions. Big O Notation is used to find the upper bound (the highest possible amount) of the function's growth rate, meaning it works out the longest time it will take to turn the input into the output. This means an algorithm can be grouped by how long it can take in a worst-case scenario where the longest route will be taken every time.

Big O is an expression that finds worst-case scenario run-time, showing how efficient an algorithm is without having to run the program on a computer. This is also useful as different computers may have different hardware and therefore need different amounts of time to complete the same task. Since Big O always assumes the worst case, it can show a consistent measurement of speed: regardless of your hardware, for a large enough input an $O(1)$ algorithm will always complete faster than an $O(n!)$ one, because they have different levels of efficiency.

## Examples

The following examples all use code written in Python. Note that this is not a complete list of Big O types.

### Constant

$O(1)$ . Always takes the same amount of time regardless of input. For example, take a function that accepts an integer (called x) and returns double its value:

```python
def double(x):
    return x * 2   # Return the value of x times 2
```


After accepting the input this function will always take one step to return an output. It is constant because it will always take the same amount of time, so it is $O(1)$ .
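We can see this by calling the function with inputs of very different sizes; the same single operation runs either way:

```python
def double(x):
    return x * 2   # One multiplication, no matter how large x is

# The same single step runs whether the input is small or very large
print(double(3))      # → 6
print(double(10**6))  # → 2000000
```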

### Linear

$O(n)$ . Increases according to the size of the input, represented by $n$ . Let's assume a function accepts n, and prints every number from 1 to n:

```python
def count(n):
    i = 1           #Create a counter called "i" with a value of 1
    while i <= n:   #While i is less-than or equal to n
        print(i)    #Print the value of i
        i = i + 1   #Redefine i as "the value of i + 1"
```


If we were to input the value of 5 then this would output $1,2,3,4,5$ , requiring 5 loops to complete. Similarly, if we input 100 it would output $1,2,3...98,99,100$ , requiring 100 loops to complete. If the input is $n$ then the algorithm's run time is exactly $n$ loops every time, therefore it is $O(n)$ .
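To check this, we can sketch a helper (not part of the original example) that uses the same loop structure but tallies the iterations instead of printing:

```python
def count_steps(n):
    """Count how many times the loop in count() runs for a given n."""
    steps = 0
    i = 1
    while i <= n:   # Same loop structure as count(), but counting iterations
        steps += 1
        i = i + 1
    return steps

print(count_steps(5))    # → 5
print(count_steps(100))  # → 100
```

The number of loops always equals the input, which is exactly what $O(n)$ describes.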

### Factorial

$O(n!)$ . Increases in factorial amounts, meaning the time taken increases massively with the input. For example, say we wish to visit five cities around the world and want to see every possible ordering (permutation). An algorithm we could write using Python's itertools library is as follows:

```python
import itertools    #Import the itertools library

cities = ['London', 'Paris', 'Berlin', 'Amsterdam', 'Rome'] #A list of our chosen cities

def permutations(cities):                    #Taking a list of cities as input:
    for i in itertools.permutations(cities): #For each permutation of our items (assigned to variable "i")
        print(i)                             #Output i
```


This algorithm will calculate each unique permutation of our cities and then output it. Examples of output will include:

```
('London', 'Paris', 'Berlin', 'Amsterdam', 'Rome')
('London', 'Paris', 'Berlin', 'Rome', 'Amsterdam')
('London', 'Paris', 'Amsterdam', 'Berlin', 'Rome')
...
('Rome', 'Amsterdam', 'Paris', 'Berlin', 'London')
('Rome', 'Amsterdam', 'Berlin', 'London', 'Paris')
('Rome', 'Amsterdam', 'Berlin', 'Paris', 'London')
```

Here our input list is 5 items long, and each time we pick a city the number of remaining options decreases by 1. In other words, our 5 cities can be ordered in $5\times 4\times 3\times 2\times 1$ ways (or $5!$ ). If our input is $n$ cities long then the number of outputs is $n!$ ; in other words, assuming we go through every permutation, we will require $O(n!)$ loops to complete it.
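We can confirm the count with Python's standard itertools and math modules (a small check, not part of the original example):

```python
import itertools
import math

cities = ['London', 'Paris', 'Berlin', 'Amsterdam', 'Rome']

# Number of unique orderings of the 5 cities
num_perms = len(list(itertools.permutations(cities)))
print(num_perms)                    # → 120
print(math.factorial(len(cities)))  # → 120
```

Both values agree: 5 cities produce $5! = 120$ permutations.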

## Little-o Notation

A stricter version of Big O is little-o. The difference between Big O and little-o is that little-o is a strict upper bound: $f(n) = O(g(n))$ means $f$ grows no faster than $g$ , and the two may even grow at the same rate, while $f(n) = o(g(n))$ means $f$ grows strictly slower than $g$ . For example, $2n = O(n)$ is true, because $2n$ and $n$ grow at the same rate, but $2n = o(n)$ is false; on the other hand, $n = o(n^2)$ , because $n^2$ eventually outgrows any multiple of $n$ .
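Formally (a standard textbook definition, not from the text above), little-o can be stated as a limit:

```latex
f(n) = o(g(n)) \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0
```

For instance, $n / n^2 = 1/n$ tends to $0$ as $n$ grows, so $n = o(n^2)$ ; but $n / n = 1$ never tends to $0$ , so $n \neq o(n)$ .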