# Big O notation

Consider, for example, that we're trying to find the maximum number in a given set of numbers. To find it, we have to look at each and every number in the set. If we were given twice as many numbers, we would take twice as long to find the maximum. Thus, finding the maximum in a set is $O(n)$ - the time, or the number of steps required to find the answer, grows or shrinks in proportion to the size of the input. On the other hand, if an algorithm is said to be $O(n^2)$, then the time it takes grows with the *square* of the input size - for example, if someone gave it an input twice as large as before, it would take about $2^2 = 4$ times as long.
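The two growth rates above can be sketched in code. This is a minimal illustration, not a definitive implementation; the function names `find_max` and `count_pairs` are made up for this example. The first function touches each element once ($O(n)$); the second compares every element with every other element, so doubling the input roughly quadruples the work ($O(n^2)$).

```python
def find_max(numbers):
    """O(n): examines each element exactly once."""
    maximum = numbers[0]
    for x in numbers[1:]:
        if x > maximum:
            maximum = x
    return maximum

def count_pairs(numbers):
    """O(n^2): compares every element with every other one.
    For n elements this performs n * (n - 1) / 2 comparisons,
    so doubling n roughly quadruples the count."""
    comparisons = 0
    for i in range(len(numbers)):
        for j in range(i + 1, len(numbers)):
            comparisons += 1
    return comparisons
```

For instance, `count_pairs` on 100 elements performs 4950 comparisons, while on 200 elements it performs 19900 - about four times as many, just as the $O(n^2)$ label predicts.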
Notice that when we're talking about Big O, we mean the big picture. We're not talking about how the time changes when we're given a set of 10 or 20 numbers. Instead, we're considering huge volumes of data - think of analyzing data from the Hubble Telescope, for example, with 120GB pouring in every week. In such situations, it's unnecessary to give the "exact" relation between the input size and the growth in running time. That's why you won't find expressions like $O(n + 3)$ or $O(n^2 + 3n + 5)$: the $3$ in the first case and the $3n + 5$ in the second are trivial compared to the $n$ and $n^2$ terms respectively, so we simply write $O(n)$ and $O(n^2)$. Expressions like $O(3n)$ and $O(2n^2)$ are avoided as well, because what we need is a gist of how the running time changes with the input. So again, these are written as $O(n)$ and $O(n^2)$ (strictly speaking, the provision for a constant coefficient is built into the mathematical definition of Big O).