Variance

From Wikipedia, the free encyclopedia

In probability theory and statistics, the variance is a measure of how far a set of numbers is spread out. It describes how much a random variable differs from its expected value. The variance is defined as the average of the squared differences between the individual (observed) values and the expected value, so it is always non-negative. In practice, it measures how much a quantity changes. For example, temperature has more variance in Moscow than in Hawaii.
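The definition above can be sketched directly: take the average of the squared differences between each value and the mean. This is a minimal illustration, not a library implementation; the sample data is made up.

```python
# Minimal sketch of the definition: variance is the average of the
# squared differences between each value and the mean (expected value).
def variance(values):
    mean = sum(values) / len(values)  # the expected value of the sample
    return sum((x - mean) ** 2 for x in values) / len(values)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(variance(data))  # 4.0 (mean is 5.0; squared differences average to 4)
```

Because every difference is squared, each term is at least zero, which is why the result can never be negative.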

The variance is not simply the average difference from the expected value. The standard deviation, which is the square root of the variance, comes closer to the average difference, but it is not the average difference either. Variance and standard deviation are used because they make the mathematics easier: for example, when two independent random variables are added together, their variances simply add.

In accountancy, a variance refers to the difference between the budgeted cost and the actual cost.

History

Karl Pearson, the father of biometry, first used the term variance: "It is here attempted to (show) the biometrical properties of a population of a more general type that has (..) been examined, inheritance in which follows this scheme. It is hoped that in this way it will be possible to make a more exact analysis of the causes of human variability. The great body of available statistics shows us that the deviations of a human measurement from its mean follow very closely the Normal Law of Errors, and that therefore, the variability may be uniformly measured by the standard deviation, corresponding to the square root of the mean square error."[1]

References
