Algorithmic information theory

Algorithmic information theory is a field of theoretical computer science. It is concerned with how information and computation are related. Most information can be represented as a string (a sequence of characters). Algorithmic information theory studies the complexity of information represented that way: in other words, how difficult the information is to describe or to reproduce. Unlike classical information theory, it measures this complexity with Kolmogorov complexity, not with the entropy measure developed by Claude Shannon and Warren Weaver. Kolmogorov complexity was developed independently by Andrey Kolmogorov and Gregory Chaitin.
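
As an informal illustration (a sketch added here, not part of the original article), the Kolmogorov complexity of a string can be thought of as the length of the shortest program that prints that string. It cannot be computed exactly in general, but the idea can be shown in a few lines of Python:

    # A short description (here, a tiny Python expression) that expands
    # into a much longer string. The length of the shortest such
    # description is, informally, the string's Kolmogorov complexity.
    description = '("1"*8 + "0"*8) * 1000'  # 22 characters
    output = eval(description)              # a 16,000-character string
    print(len(description), len(output))    # prints: 22 16000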

Example

According to Claude Shannon's measure, the following two binary strings contain almost exactly the same amount of information (this holds only for first-order entropy):

1000110111100101
1111111100000000
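
A minimal Python sketch (added for illustration, not from the article) shows why first-order entropy scores the two strings almost identically: it only counts how often each symbol appears, and ignores the order of the symbols.

    from collections import Counter
    from math import log2

    def first_order_entropy(s):
        """Shannon entropy in bits per symbol, from single-symbol frequencies."""
        n = len(s)
        return -sum(c / n * log2(c / n) for c in Counter(s).values())

    print(first_order_entropy("1000110111100101"))  # about 0.99 bits per symbol
    print(first_order_entropy("1111111100000000"))  # exactly 1.0 bit per symbol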


The first string was generated with a random number generator, for example by flipping a coin. The second is easier to describe: eight times "1", then eight times "0". For this reason the first sequence has more algorithmic information: it is harder to shorten ("compress") the description of how to generate it, and shortening it may not be possible at all. The information value of a string is higher if its description is more difficult to compress. Random strings and white noise do not contain repeating patterns. For this reason they cannot be compressed, and they have a high information value.
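
Kolmogorov complexity itself cannot be computed exactly, but the size of a compressed encoding gives an upper bound on it. A rough Python sketch (an illustration added here, using the standard zlib module; the strings are made longer so the compressor has something to work with):

    import random
    import zlib

    random.seed(0)
    random_bits = "".join(random.choice("01") for _ in range(4096))
    regular_bits = ("1" * 8 + "0" * 8) * 256  # also 4096 characters

    # The regular string shrinks to a few dozen bytes; the random one
    # stays much larger, because there is no repeating pattern to exploit.
    print(len(zlib.compress(random_bits.encode())))
    print(len(zlib.compress(regular_bits.encode())))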