Dynamic programming

From Simple English Wikipedia, the free encyclopedia

Dynamic programming is a method of solving problems that is used in computer science, mathematics and economics. With this method, a complex problem is split into simpler problems, which are solved first. The solutions of the simpler problems are then combined to find the solution of the original complex problem.

Dynamic programming can be used when a problem can be split into smaller problems that are all quite similar. These smaller problems often overlap, so each one only needs to be solved once and its solution can be reused.
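As an illustration (not part of the original article), computing Fibonacci numbers is a classic case of overlapping subproblems: a naive recursion solves the same subproblem many times, while dynamic programming remembers each answer. A minimal Python sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Return the n-th Fibonacci number.

    Each subproblem fib(k) is computed only once; the cache stores
    its result so later calls reuse it instead of recomputing it.
    """
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # → 55
```

Without the cache this recursion would take exponential time; with it, each of the n subproblems is solved exactly once.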

Richard Bellman, a US mathematician, first used the term in the 1940s when he wanted to solve problems in the field of control theory. He also stated what is now known as Bellman's Principle of Optimality:

An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.

— Bellman, 1957
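The principle says that an optimal solution is built from optimal solutions to its subproblems. As a hedged sketch (the coin-change problem is a standard textbook example, not one used in the original article), the fewest coins for an amount extends the fewest coins for some smaller amount by one coin:

```python
def min_coins(coins, amount):
    """Fewest coins from `coins` summing to `amount`, or None if impossible.

    best[a] holds the optimal count for the sub-amount a. By the
    Principle of Optimality, the optimal answer for `amount` is one
    coin plus an optimal answer for a smaller remaining amount.
    """
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] < INF else None

print(min_coins([1, 3, 4], 6))  # → 2 (3 + 3)
```

Each entry of `best` is itself an optimal answer, so the final answer is assembled entirely from optimal pieces, exactly as the principle states.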
