Big O notation
Big O notation is often used in the context of complexity classes.
Use
Big O notation is an expression meant to give a person a feel for the worst-case behavior of an algorithm without requiring any detailed calculations.
Consider, for example, that we are trying to find the maximum number in a given set of numbers. To find it, we have to look at each and every number in the set. If we were given twice as many numbers, we would take twice as long to find the maximum. Thus, finding the maximum in a set is O(n): the time or number of steps required to find the answer increases or decreases in proportion to the size of the input. On the other hand, if an algorithm were said to be O(n²), then the time taken to produce the output would grow with the square of the input size; for example, if someone gave an input that was twice as large as an initial input, the time taken would be four times as large.
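The two behaviors above can be sketched in Python. Neither function is part of the original article; they are minimal illustrations. The first scans every element once, so its running time grows linearly with the input (O(n)). The second, a hypothetical duplicate check, compares every pair of elements, so doubling the input roughly quadruples the work (O(n²)).

```python
def find_max(numbers):
    """Return the largest number by looking at every element once: O(n)."""
    largest = numbers[0]
    for x in numbers[1:]:
        if x > largest:
            largest = x
    return largest

def has_duplicate(numbers):
    """Check every pair of elements: O(n^2).
    With twice as many numbers there are about four times as many pairs."""
    for i in range(len(numbers)):
        for j in range(i + 1, len(numbers)):
            if numbers[i] == numbers[j]:
                return True
    return False
```

For instance, `find_max([3, 1, 4, 1, 5])` looks at five numbers; with ten numbers it would do roughly twice the work, which is exactly what O(n) promises.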
Notice that when we talk about Big O, we mean the big picture. We are not talking about how the time changes when we are given a set of 10 or 20 numbers. Instead, we consider a huge bulk of data (think of analyzing data from the Hubble Telescope, for example, where 120 GB of data pours in every week). In such situations, it becomes unnecessary to give the "exact" relation between the input size and the growth in running time. Thus you will not find expressions like O(n² + 2n) or O(n³ + n): the 2n in the first case and the n in the second case are trivial when compared to the n² and n³ terms respectively, so you simply write them as O(n²) and O(n³). Expressions like O(2n²) and O(3n) are avoided as well, because what we need is a gist of how the time for the output changes with the input. So again the expressions are written as O(n²) and O(n) (strictly speaking, the provision for a coefficient is contained in the mathematical definition of the Big O).
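A quick numerical check shows why lower-order terms and coefficients are dropped. The step-count formula below is a made-up example, not taken from the article: suppose an algorithm performs exactly 2n² + 3n steps. Dividing by n² shows that, as n grows, the lower-order 3n term fades away and only the constant coefficient 2 remains, which Big O ignores as well, leaving O(n²).

```python
def exact_steps(n):
    # Hypothetical step count: a quadratic term with a coefficient,
    # plus a lower-order linear term.
    return 2 * n**2 + 3 * n

# The ratio exact_steps(n) / n**2 approaches the coefficient 2 as n grows,
# showing the 3n term becomes trivial for large inputs.
for n in (10, 1_000, 1_000_000):
    print(n, exact_steps(n) / n**2)
```

For n = 10 the linear term still contributes noticeably (the ratio is 2.3), but by n = 1,000,000 it is negligible, which is why both the 3n term and the factor 2 are dropped and the algorithm is simply called O(n²).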