Talk:Big O notation


I've marked this article as complex because it covers a difficult concept and it hasn't been explained simply enough. I've also removed the following text and highlighted the parts that led to its removal:

Most of this came from,, and for that Rob Bell, I thank you. Honestly, I learned Big O Notation from that and started out in other places.

Lawl in progress, is this considered good practice in wikipedia?

Text relates to personal experiences. Also asks how to write articles within an article.

--Tb240904 (talk) 00:53, 27 June 2009 (UTC)

clearer explanations


Big O notation is used to show how long an algorithm will take to finish (in the worst-case scenario) as the number of items it must operate on grows, instead of in absolute units of time. This makes it possible to compare two algorithms without even needing to write either of them as a program. For example, if an algorithm takes O(n) time, the time it needs grows in proportion to the number of items n: it spends about the same amount of time on each item. If another algorithm can do the same job in O(1) time, it will always take about the same amount of time to finish no matter how many items it must operate on.
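A small sketch (not from the article) of what this difference can look like in practice, assuming we compare a linear scan, which is O(n), with a hash-set membership test, which is O(1) on average:

```python
def contains_linear(items, target):
    """O(n): in the worst case, every item is checked once,
    so the time grows with the number of items."""
    for item in items:
        if item == target:
            return True
    return False

def contains_hashed(item_set, target):
    """O(1) on average: a hash set answers membership in
    roughly constant time, no matter how many items it holds."""
    return target in item_set

data = list(range(1000))
print(contains_linear(data, 999))       # worst case: scans all 1000 items
print(contains_hashed(set(data), 999))  # a single hash lookup
```

Both calls give the same answer; Big O only tells you how their running time changes as the list grows.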

For example, if the first algorithm took one millisecond to process each item and the second one took 50 milliseconds to operate on any number of items, then the second algorithm would be faster whenever there were more than 50 items to process.