# Harmonic series (mathematics)


In mathematics, the harmonic series is the divergent infinite series:

$\sum _{n=1}^{\infty }{\frac {1}{n}}=1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+{\frac {1}{5}}+\cdots$

Divergent means that as you add more terms, the sum never stops getting bigger; it does not go towards a single finite value.

Infinite means that you can always add another term. There is no final term to the series.

Its name comes from the idea of harmonics in music: the wavelengths of the overtones of a vibrating string are 1/2, 1/3, 1/4, etc., of the string's fundamental wavelength. Apart from the first term, every term of the series is the harmonic mean of the terms on either side of it. The phrase harmonic mean also comes from music.

## History

The fact that the harmonic series diverges was first proven in the 14th century by Nicole Oresme, but was forgotten. Proofs were given in the 17th century by Pietro Mengoli, Johann Bernoulli, and Jacob Bernoulli.

Harmonic sequences have been used by architects. In the Baroque period architects used them in the proportions of floor plans, elevations, and in the relationships between architectural details of churches and palaces.

## Divergence

There are several well-known proofs of the divergence of the harmonic series. A few of them are given below.

### Comparison test

One way to prove divergence is to compare the harmonic series with another divergent series, where each denominator is replaced with the next-largest power of two:

${\begin{aligned}&{}1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+{\frac {1}{5}}+{\frac {1}{6}}+{\frac {1}{7}}+{\frac {1}{8}}+{\frac {1}{9}}+\cdots \\[12pt]\geq {}&1+{\frac {1}{2}}+{\frac {1}{\color {red}{\mathbf {4} }}}+{\frac {1}{4}}+{\frac {1}{\color {red}{\mathbf {8} }}}+{\frac {1}{\color {red}{\mathbf {8} }}}+{\frac {1}{\color {red}{\mathbf {8} }}}+{\frac {1}{8}}+{\frac {1}{\color {red}{\mathbf {16} }}}+\cdots \end{aligned}}$

Each term of the harmonic series is greater than or equal to the corresponding term of the second series, and therefore the sum of the harmonic series must be greater than or equal to the sum of the second series. However, the sum of the second series is infinite:

${\begin{aligned}&{}1+\left({\frac {1}{2}}\right)+\left({\frac {1}{4}}\!+\!{\frac {1}{4}}\right)+\left({\frac {1}{8}}\!+\!{\frac {1}{8}}\!+\!{\frac {1}{8}}\!+\!{\frac {1}{8}}\right)+\left({\frac {1}{16}}\!+\!\cdots \!+\!{\frac {1}{16}}\right)+\cdots \\[12pt]={}&1+{\frac {1}{2}}+{\frac {1}{2}}+{\frac {1}{2}}+{\frac {1}{2}}+\cdots =\infty \end{aligned}}$

It follows (by the comparison test) that the sum of the harmonic series must be infinite as well. More precisely, the comparison above proves that

$\sum _{n=1}^{2^{k}}{\frac {1}{n}}\geq 1+{\frac {k}{2}}$ for every positive integer k.

This proof, proposed by Nicole Oresme in around 1350, is considered to be a high point of medieval mathematics. It is still a standard proof taught in mathematics classes today.
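Oresme's bound can be checked directly with exact rational arithmetic. A minimal Python sketch (the helper name and the range of k are choices made here, not part of the proof):

```python
from fractions import Fraction

def harmonic_partial(m):
    """Exact partial sum H_m = 1 + 1/2 + ... + 1/m."""
    return sum(Fraction(1, n) for n in range(1, m + 1))

# Oresme's bound: the first 2^k terms already sum to at least 1 + k/2.
for k in range(1, 11):
    assert harmonic_partial(2 ** k) >= 1 + Fraction(k, 2)
```

Exact fractions avoid any floating-point doubt about the inequality for these small cases.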

### Integral test

It is possible to prove that the harmonic series diverges by comparing its sum with an improper integral. Consider an arrangement of rectangles in which the nth rectangle is 1 unit wide and 1/n units high, sitting above the interval from n to n + 1. The total area of this infinite collection of rectangles is the sum of the harmonic series:

${\begin{array}{c}{\text{area of}}\\{\text{rectangles}}\end{array}}=1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+{\frac {1}{5}}+\cdots$

The total area under the curve y = 1/x from 1 to infinity is given by a divergent improper integral:

${\begin{array}{c}{\text{area under}}\\{\text{curve}}\end{array}}=\int _{1}^{\infty }{\frac {1}{x}}\,dx=\infty .$

Since this area is entirely contained within the rectangles, the total area of the rectangles must be infinite as well. This proves that

$\sum _{n=1}^{k}{\frac {1}{n}}>\int _{1}^{k+1}{\frac {1}{x}}\,dx=\ln(k+1).$

The generalization of this argument is known as the integral test.
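The inequality above can be spot-checked numerically. A small Python sketch (the list of cutoffs is an arbitrary choice):

```python
import math

def harmonic_partial(k):
    """Floating-point partial sum H_k = 1 + 1/2 + ... + 1/k."""
    return sum(1.0 / n for n in range(1, k + 1))

# Each partial sum strictly exceeds the area under 1/x from 1 to k+1.
for k in (1, 10, 100, 1000, 10_000):
    assert harmonic_partial(k) > math.log(k + 1)
```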

## Rate of divergence

The harmonic series diverges very slowly. For example, the sum of the first $10^{43}$ terms is less than 100. This is because the partial sums of the series have logarithmic growth. In particular,

$\sum _{n=1}^{k}{\frac {1}{n}}=\ln k+\gamma +\varepsilon _{k}\leq (\ln k)+1$

where γ is the Euler–Mascheroni constant and $\varepsilon _{k}\sim {\frac {1}{2k}}$, which approaches 0 as k goes to infinity. Leonhard Euler proved both this and also that the sum which includes only the reciprocals of the primes diverges, that is:

$\sum _{p{\text{ prime }}}{\frac {1}{p}}={\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{5}}+{\frac {1}{7}}+{\frac {1}{11}}+{\frac {1}{13}}+{\frac {1}{17}}+\cdots =\infty .$

## Partial sums

The first thirty harmonic numbers

| n | Hn as a fraction | Hn (decimal) |
|---|---|---|
| 1 | 1 | ~1 |
| 2 | 3/2 | ~1.5 |
| 3 | 11/6 | ~1.83333 |
| 4 | 25/12 | ~2.08333 |
| 5 | 137/60 | ~2.28333 |
| 6 | 49/20 | ~2.45 |
| 7 | 363/140 | ~2.59286 |
| 8 | 761/280 | ~2.71786 |
| 9 | 7129/2520 | ~2.82897 |
| 10 | 7381/2520 | ~2.92897 |
| 11 | 83711/27720 | ~3.01988 |
| 12 | 86021/27720 | ~3.10321 |
| 13 | 1145993/360360 | ~3.18013 |
| 14 | 1171733/360360 | ~3.25156 |
| 15 | 1195757/360360 | ~3.31823 |
| 16 | 2436559/720720 | ~3.38073 |
| 17 | 42142223/12252240 | ~3.43955 |
| 18 | 14274301/4084080 | ~3.49511 |
| 19 | 275295799/77597520 | ~3.54774 |
| 20 | 55835135/15519504 | ~3.59774 |
| 21 | 18858053/5173168 | ~3.64536 |
| 22 | 19093197/5173168 | ~3.69081 |
| 23 | 444316699/118982864 | ~3.73429 |
| 24 | 1347822955/356948592 | ~3.77596 |
| 25 | 34052522467/8923714800 | ~3.81596 |
| 26 | 34395742267/8923714800 | ~3.85442 |
| 27 | 312536252003/80313433200 | ~3.89146 |
| 28 | 315404588903/80313433200 | ~3.92717 |
| 29 | 9227046511387/2329089562800 | ~3.96165 |
| 30 | 9304682830147/2329089562800 | ~3.99499 |

The finite partial sums of the diverging harmonic series,

$H_{n}=\sum _{k=1}^{n}{\frac {1}{k}},$ are called harmonic numbers.

The difference between Hn and ln n converges to the Euler–Mascheroni constant. The difference between any two distinct harmonic numbers is never an integer, and no harmonic number is an integer except H1 = 1.
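Both the table entries and the logarithmic approximation are easy to verify numerically. A minimal Python sketch (exact fractions for small n, floats for the γ estimate; the cutoff and tolerance are arbitrary choices):

```python
import math
from fractions import Fraction

def H(n):
    """Exact n-th harmonic number as a fraction."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

# Reproduce entries of the table, e.g. H_4 = 25/12 and H_10 = 7381/2520.
assert H(4) == Fraction(25, 12)
assert H(10) == Fraction(7381, 2520)

# H_n - ln n approaches the Euler-Mascheroni constant (~0.577216);
# the residual epsilon_n is about 1/(2n).
gamma_est = sum(1.0 / k for k in range(1, 100_001)) - math.log(100_000)
assert abs(gamma_est - 0.5772156649) < 2e-5
```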

## Related series

### Alternating harmonic series

The first fourteen partial sums of the alternating harmonic series (black line segments) converge to the natural logarithm of 2 (red line).

The series

$\sum _{n=1}^{\infty }{\frac {(-1)^{n+1}}{n}}=1-{\frac {1}{2}}+{\frac {1}{3}}-{\frac {1}{4}}+{\frac {1}{5}}-\cdots$ is known as the alternating harmonic series. This series converges by the alternating series test. In particular, the sum is equal to the natural logarithm of 2:

$1-{\frac {1}{2}}+{\frac {1}{3}}-{\frac {1}{4}}+{\frac {1}{5}}-\cdots =\ln 2.$

The alternating harmonic series is conditionally convergent but not absolutely convergent: if the terms of the series are systematically rearranged, the sum generally changes and, depending on the rearrangement, may even become infinite.

The alternating harmonic series formula is a special case of the Mercator series, the Taylor series for the natural logarithm.

A related series can be derived from the Taylor series for the arctangent:

$\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{2n+1}}=1-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+\cdots ={\frac {\pi }{4}}.$

This is known as the Leibniz series.
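Both limits can be checked numerically using the standard alternating-series error bound (the error is at most the first omitted term). A Python sketch with an arbitrary cutoff N:

```python
import math

N = 100_000

# Partial sum of 1 - 1/2 + 1/3 - ... ; the alternating series test
# bounds the error by the first omitted term, 1/(N+1).
alt = sum((-1) ** (n + 1) / n for n in range(1, N + 1))
assert abs(alt - math.log(2)) < 1 / (N + 1)

# Leibniz series for pi/4; first omitted term is 1/(2N+1).
leib = sum((-1) ** n / (2 * n + 1) for n in range(N))
assert abs(leib - math.pi / 4) < 1 / (2 * N + 1)
```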

### General harmonic series

The general harmonic series is of the form

$\sum _{n=0}^{\infty }{\frac {1}{an+b}},$ where a ≠ 0 and b are real numbers, and b/a is not zero or a negative integer.

By the limit comparison test with the harmonic series, all general harmonic series also diverge.
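The limit comparison is immediate: dividing the general term by the harmonic term gives

$\lim _{n\to \infty }{\frac {1/(an+b)}{1/n}}=\lim _{n\to \infty }{\frac {n}{an+b}}={\frac {1}{a}},$

which is finite and nonzero since a ≠ 0, so the general harmonic series diverges exactly as the harmonic series does.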

### p-series

A generalization of the harmonic series is the p-series (or hyperharmonic series), defined as

$\sum _{n=1}^{\infty }{\frac {1}{n^{p}}}$ for any real number p.

When p = 1, the p-series is the harmonic series, which diverges. Either the integral test or the Cauchy condensation test shows that the p-series converges for all p > 1 (in which case it is called the over-harmonic series) and diverges for all p ≤ 1. If p > 1 then the sum of the p-series is ζ(p), i.e., the Riemann zeta function evaluated at p.

The problem of finding the sum for p = 2 is called the Basel problem; Leonhard Euler showed it is π2/6. The value of the sum for p = 3 is called Apéry's constant, since Roger Apéry proved that it is an irrational number.
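Euler's value for p = 2 is easy to approximate numerically. A minimal Python sketch (the cutoff N is an arbitrary choice):

```python
import math

N = 1_000_000
s = sum(1.0 / (n * n) for n in range(1, N + 1))

# The omitted tail lies between 1/(N+1) and 1/N, so the partial sum
# undershoots pi^2/6 by about 1e-6 here.
gap = math.pi ** 2 / 6 - s
assert 0 < gap < 1.5 / N
```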

### ln-series

Related to the p-series is the ln-series, defined as

$\sum _{n=2}^{\infty }{\frac {1}{n(\ln n)^{p}}}$ for any positive real number p.

This can be shown by the integral test to diverge for p ≤ 1 but converge for all p > 1.
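The integral behind this claim can be evaluated with the substitution u = ln x, du = dx/x:

$\int _{2}^{\infty }{\frac {dx}{x(\ln x)^{p}}}=\int _{\ln 2}^{\infty }{\frac {du}{u^{p}}},$

and the right-hand integral converges exactly when p > 1, matching the stated behavior of the series.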

### φ-series

For any convex, real-valued function φ such that

$\limsup _{u\to 0^{+}}{\frac {\varphi \left({\frac {u}{2}}\right)}{\varphi (u)}}<{\frac {1}{2}},$ the series

$\sum _{n=1}^{\infty }\varphi \left({\frac {1}{n}}\right)$ is convergent.

### Random harmonic series

The random harmonic series

$\sum _{n=1}^{\infty }{\frac {s_{n}}{n}},$ where the sn are independent, identically distributed random variables taking the values +1 and −1 with equal probability 1/2, is a well-known example in probability theory for a series of random variables that converges with probability 1. The fact of this convergence is an easy consequence of either the Kolmogorov three-series theorem or of the closely related Kolmogorov maximal inequality. Byron Schmuland of the University of Alberta further examined the properties of the random harmonic series, and showed that the convergent series is a random variable with some interesting properties. In particular, the probability density function of this random variable evaluated at +2 or at −2 takes on the value 0.124999999999999999999999999999999999999999764..., differing from 1/8 by less than 10−42. Schmuland's paper explains why this probability is so close to, but not exactly, 1/8. The exact value of this probability is given by the infinite cosine product integral C2 divided by π.

### Depleted harmonic series

The depleted harmonic series, in which every term whose denominator contains the digit 9 is removed, can be shown to converge, and its value is less than 80. In fact, removing all terms containing any particular string of digits (in any base) leaves a convergent series.
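The depletion is easy to express in code. A Python sketch summing the first million candidate denominators (the cutoff is arbitrary, and convergence is far too slow for this partial sum to come near the true limit):

```python
# Depleted series: drop every n whose decimal digits include a 9.
# Partial sums grow extremely slowly toward the (finite) limit.
s = sum(1.0 / n for n in range(1, 1_000_001) if '9' not in str(n))

assert s < 80      # consistent with the stated bound
assert 5 < s < 15  # still far below the limit at this cutoff
```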

## Applications

The harmonic series can be counterintuitive. This is because it is a divergent series even though the terms of the series get smaller and go towards zero. The divergence of the harmonic series is the source of some paradoxes.

• The "worm on the rubber band". Suppose that a worm crawls along an infinitely elastic one-meter rubber band at the same time as the rubber band is uniformly stretched. If the worm travels 1 centimeter per minute and the band stretches 1 meter per minute, will the worm ever reach the end of the rubber band? The answer, counterintuitively, is "yes", for after n minutes, the ratio of the distance travelled by the worm to the total length of the rubber band is

${\frac {1}{100}}\sum _{k=1}^{n}{\frac {1}{k}}.$

Because the series gets arbitrarily large as n becomes larger, eventually this ratio must exceed 1, which implies that the worm reaches the end of the rubber band. However, the value of n at which this occurs must be extremely large: approximately $e^{100}$, which exceeds $10^{43}$ minutes ($10^{37}$ years). Although the harmonic series does diverge, it does so very slowly.
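The threshold $e^{100}$ is far too large to reach by direct summation, but a scaled-down variant (a hypothetical worm crawling 10 cm per minute, a choice made here for illustration) can be solved exactly. A Python sketch:

```python
import math

# Scaled-down variant: the worm covers 1/10 of the band's growth rate,
# so it arrives at the first n with (1/10) * H_n >= 1, i.e. H_n >= 10.
h, n = 0.0, 0
while h < 10.0:
    n += 1
    h += 1.0 / n

# The answer agrees with the asymptotic estimate n ~ exp(10 - gamma).
assert abs(n - math.exp(10 - 0.5772156649)) < 2
```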

• The Jeep problem asks how much total fuel is required for a car with a limited fuel-carrying capacity to cross a desert, leaving fuel drops along the route. The distance the car can cover with a given amount of fuel is related to the partial sums of the harmonic series, which grow logarithmically, so the fuel required increases exponentially with the desired distance.
• The block-stacking problem: given a collection of identical dominoes, it is possible to stack them at the edge of a table so that they hang over the edge without falling. The counterintuitive result is that, provided there are enough dominoes, the overhang can be made as large as desired.
• A swimmer who goes faster each time they touch the wall of the pool. The swimmer starts crossing a 10-meter pool at a speed of 2 m/s, and with every crossing another 2 m/s is added to the speed. In theory, the swimmer's speed is unlimited, but the number of pool crossings needed to reach any given speed becomes very large; for instance, to reach the speed of light (ignoring special relativity), the swimmer needs to cross the pool 150 million times. Despite this large number, the time needed to reach a given speed depends on the sum of the series at any given number of pool crossings:
${\frac {10}{2}}\sum _{k=1}^{n}{\frac {1}{k}}.$

Calculating the sum shows that the time required to reach the speed of light is only about 97 seconds.
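The 97-second figure can be reproduced from the logarithmic approximation of the partial sums. A Python sketch (the rounded value for the speed of light is an assumption of convenience):

```python
import math

SPEED_OF_LIGHT = 3.0e8                 # m/s, rounded
crossings = round(SPEED_OF_LIGHT / 2)  # speed after n crossings is 2n m/s

# The k-th crossing takes 10 / (2k) seconds, so the total time is
# 5 * H_n; approximate H_n by ln n + gamma.
total_seconds = 5 * (math.log(crossings) + 0.5772156649)
assert 96 < total_seconds < 98
```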