Central limit theorem
The central limit theorems are theorems of probability theory. They say that when a large number of independent random variables are added together, their sum, after suitable scaling and centering, approximately follows a stable distribution. If the random variables have finite variance, the result is a Gaussian distribution. This is one of the reasons why this distribution is also known as the normal distribution.
The best known and most important of these is simply called the central limit theorem. It is about large numbers of random variables that all have the same distribution, with a finite expected value and a finite variance.
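The classical theorem can be stated as follows (a standard formulation, written here in amsmath notation; the symbols are the usual conventions, not taken from a particular source): if the random variables have expected value μ and variance σ² > 0, then the standardized sum converges in distribution to a standard normal distribution.

```latex
% X_1, X_2, \dots independent and identically distributed,
% with \mathbb{E}[X_i] = \mu and \operatorname{Var}(X_i) = \sigma^2 < \infty:
\[
  \frac{X_1 + X_2 + \dots + X_n - n\mu}{\sigma\sqrt{n}}
  \;\xrightarrow{\,d\,}\; \mathcal{N}(0, 1)
  \qquad \text{as } n \to \infty .
\]
```

Subtracting nμ centers the sum at zero, and dividing by σ√n keeps its variance equal to one for every n; without this scaling the sum itself would not converge to any fixed distribution.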
There are different generalisations of this theorem. Some of these generalisations no longer require all the random variables to have the same distribution. In these generalisations, another condition makes sure that no single random variable has a larger influence on the outcome than the others. Examples are the Lindeberg condition and the Lyapunov condition.
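The theorem can be illustrated with a small simulation. The sketch below (standard library only; the sample sizes and seed are arbitrary choices for illustration) adds up many independent Uniform(0, 1) variables, standardizes each sum as in the theorem, and checks that the results behave like a standard normal distribution.

```python
# Minimal simulation sketch of the central limit theorem: sums of many
# independent Uniform(0, 1) random variables, standardized, should look
# approximately like a standard normal distribution.
import math
import random

random.seed(0)

n = 1000        # number of variables in each sum
trials = 5000   # number of independent sums to generate

# Uniform(0, 1) has expected value 1/2 and variance 1/12.
mu, var = 0.5, 1.0 / 12.0

zs = []
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    # Standardize: subtract the mean of the sum (n * mu) and divide
    # by its standard deviation (sqrt(n * var)).
    zs.append((s - n * mu) / math.sqrt(n * var))

mean = sum(zs) / trials
std = math.sqrt(sum((z - mean) ** 2 for z in zs) / trials)
# Fraction of standardized sums within one standard deviation of zero;
# for a standard normal distribution this is about 0.68.
within = sum(1 for z in zs if abs(z) <= 1.0) / trials

print(mean, std, within)
```

The empirical mean should be close to 0, the empirical standard deviation close to 1, and roughly 68% of the values should fall in the interval [-1, 1], as the normal distribution predicts.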