Dynamic Programming

Gopika Singh
6 min read · Mar 24, 2021

When the rest of the world was busy finding solutions to complex problems, Richard Bellman developed a method called Dynamic Programming in the 1950s to simplify the complicated problems themselves and find their optimal solutions.


What is Dynamic Programming?

Dynamic Programming is an optimization method that simplifies complex recursive problems by breaking them down into smaller subproblems. Let's understand this by taking the most common recursive sequence as an example: the Fibonacci sequence. The Fibonacci sequence is a series of numbers in which every number is the sum of the two preceding numbers, starting from 0 and 1. The numbers of the sequence are 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144 and so on…

To calculate the Nth number of the Fibonacci sequence, we can use the recursive relation F(N) = F(N-1) + F(N-2).

Code for Nth Fibonacci number
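A minimal Java sketch of this recursive function (assuming the usual convention F(0) = 0 and F(1) = 1) looks like this:

```java
public class Fibonacci {
    // Naive recursive computation of the Nth Fibonacci number.
    // Exponential time: the same subproblems are recomputed many times.
    static int fibonacci(int n) {
        if (n <= 1) return n; // base cases: F(0) = 0, F(1) = 1
        return fibonacci(n - 1) + fibonacci(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(10)); // prints 55
    }
}
```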

From the code given above, we can observe that the main problem (i.e. fibonacci(int N)) can clearly be divided further into smaller subproblems (fibonacci(N-1) and fibonacci(N-2)). Therefore we can use Dynamic Programming to solve it.

In Computer Science, Dynamic Programming cannot be applied to every problem and requires the problem to follow certain criteria.

Dynamic Programming can be applied to a problem if it has the following two attributes — optimal substructure and overlapping sub-problems.

Optimal substructure

If a problem can be broken down into smaller sub-problems, and its optimal solution can be found by recursively combining the optimal solutions of those sub-problems, then it is said to have optimal substructure. In the example given above, the main Fibonacci problem (fibonacci(int N)) of size ‘N’ was divided into smaller problems of sizes ‘N-1’ and ‘N-2’. Therefore the problem given above has the optimal substructure property.

Overlapping sub-problems

If finding a problem’s solution involves solving the same subproblem more than once, then the problem has overlapping sub-problems. Taking the example of the Fibonacci sequence again: to find the solution to fibonacci(4), we need to break it down into the following sub-problems:

Recursion tree for fibonacci(4)

We can observe that fibonacci(2) is being calculated twice and fibonacci(1) is being solved three times here. Thus the problem has overlapping subproblem property.

Since the Nth Fibonacci number problem has both the required attributes of dynamic programming, it can be optimized using the dynamic programming method.

Dynamic Programming has two methods that can be used to solve a complex problem:

  1. Top-down approach with Memoization
  2. Bottom-up approach with Tabulation
top-down approach vs bottom-up approach

Top-Down approach

For this approach, we try to solve the complex problem by finding the solutions to its smaller sub-problems recursively. While solving each sub-problem, we cache its result, so that if it is needed again we use the already saved result instead of solving it a second time. This technique of storing the results of already solved subproblems is called Memoization.

In other words, in memoization, we do it top-down in the sense that we solve the top problem first (which typically recurses down to solve the sub-problems).

Top-down approach example

In the example given above, the original problem (i.e. F(5)) is divided into subproblems, but the repeating subproblems are not solved again; the result of every subproblem is stored using Memoization.
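As a concrete sketch, a memoized Java version of the Fibonacci function could look like the following (the HashMap cache is one possible choice of store):

```java
import java.util.HashMap;
import java.util.Map;

public class FibonacciMemo {
    // Cache of already-solved subproblems.
    static Map<Integer, Long> memo = new HashMap<>();

    // Top-down: recurse from the top, but return a cached result when available.
    static long fibonacci(int n) {
        if (n <= 1) return n;                        // base cases
        if (memo.containsKey(n)) return memo.get(n); // already solved once
        long result = fibonacci(n - 1) + fibonacci(n - 2);
        memo.put(n, result);                         // memoize before returning
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(50)); // fast: each subproblem is solved once
    }
}
```

Each subproblem is now solved at most once, so the running time drops from exponential to linear in N.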

Bottom-Up approach

In this approach, contrary to memoization, we begin by finding the solutions to the smallest subproblems first and filling these values into a table. The results stored in the table are then used to compute larger subproblems, until we finally obtain the solution of the ‘top’, or original, problem. We solve the problem ‘bottom-up’.

The solutions of the subproblems are tabulated, which is the opposite of Memoization: there, as we recurse and solve subproblems top-down, a map of the already solved ones is maintained so that they are not solved again.
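A bottom-up Java version of the same problem might look like this (a one-dimensional table suffices for Fibonacci):

```java
public class FibonacciTab {
    // Bottom-up: fill a table from the smallest subproblems upward.
    static long fibonacci(int n) {
        if (n <= 1) return n;
        long[] table = new long[n + 1];
        table[0] = 0; // smallest subproblems first
        table[1] = 1;
        for (int i = 2; i <= n; i++) {
            table[i] = table[i - 1] + table[i - 2]; // reuse tabulated results
        }
        return table[n]; // the 'top' problem is answered last
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(50)); // 12586269025
    }
}
```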

Some programming languages use a call-by-need evaluation mechanism and can automatically memoize the result of a function call with a particular set of arguments, which speeds up call-by-name (or call-by-need) evaluation.

Some languages make this possible portably (e.g. Scheme, Common Lisp, Perl or D).

Some languages have automatic memoization built in, such as tabled Prolog and J, which supports memoization with the M. adverb. In any case, this is only possible for a referentially transparent function.

Memoization is also encountered as an easily accessible design pattern within term-rewrite based languages such as Wolfram Language.

Dynamic Programming Examples -

Floyd-Warshall Algorithm

The Floyd-Warshall Algorithm is an algorithm for computing the shortest paths between all pairs of vertices in a weighted graph. A weighted graph is a graph in which each edge has a numerical value (its weight) associated with it.

The weighted graph used can be either directed or undirected but it should not have negative cycles. A negative cycle is a cycle within a graph for which the sum of its edges is negative. Floyd-Warshall Algorithm fails to work for such a graph.

The Floyd-Warshall algorithm is also often known as Floyd’s algorithm, the WFI algorithm, the Roy-Warshall algorithm, or the Roy-Floyd algorithm.

This algorithm follows the dynamic programming approach to find the shortest paths: it applies the principle of optimality to compute the optimal value for each pair of vertices and then builds the full solution out of these sub-solutions.
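A minimal Java sketch of the algorithm follows; the 4-vertex directed graph and its edge weights are made up purely for illustration:

```java
import java.util.Arrays;

public class FloydWarshall {
    static final int INF = Integer.MAX_VALUE / 2; // "no edge", safe against overflow

    // dist[i][j] starts as the weight of edge i -> j (or INF if absent);
    // on return it holds the length of the shortest path from i to j.
    static void floydWarshall(int[][] dist) {
        int n = dist.length;
        // Try each vertex k as an intermediate stop on the path from i to j.
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (dist[i][k] + dist[k][j] < dist[i][j])
                        dist[i][j] = dist[i][k] + dist[k][j];
    }

    public static void main(String[] args) {
        int[][] dist = {
            {0,   3,   INF, 7},
            {8,   0,   2,   INF},
            {5,   INF, 0,   1},
            {2,   INF, INF, 0}
        };
        floydWarshall(dist);
        for (int[] row : dist) System.out.println(Arrays.toString(row));
    }
}
```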

Matrix Chain Multiplication

Matrix chain multiplication is another example that illustrates the application of dynamic programming. The chain of matrices can be multiplied in several different ways, for example:

((A1 × A2) × A3) × … An

A1×(((A2×A3)× … ) × An)

(A1 × A2) × (A3 × … An)

…and so on.

All these different ways of multiplying the chain of matrices yield the same result. However, they do not all take the same time: depending on which matrices are multiplied first, the number of scalar multiplications, and hence the time taken, can vary.

If we take two matrices A and B with dimensions a×b and b×c respectively, then the product C = A×B has dimensions a×c, and computing it with the simple matrix multiplication algorithm requires a*b*c scalar multiplications.

Let’s take another example with matrices A, B and C with dimensions a×b, b×p, and p×s, respectively. The size of the matrix A×B×C will be a×s and can be calculated in 2 ways presented below:

  1. A×(B×C) which requires (bps + abs) scalar multiplications.
  2. (A×B)×C which requires (abp + aps) scalar multiplications.

Let a = 10, b = 100, p = 10 and s = 1000. The first way then requires 1,000,000 + 1,000,000 = 2,000,000 scalar multiplications, while the second requires only 10,000 + 100,000 = 110,000. We can clearly observe that the second method is faster, so we should perform the multiplication for this example using the second arrangement of parentheses.

Therefore, the order of multiplication matters for finding the solution in optimal time, and dynamic programming is used to find this optimal parenthesization.
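A Java sketch of this dynamic program is given below. The dims array encodes the chain, with matrix i having dimensions dims[i-1] × dims[i]; the values in main reproduce the A, B, C example above:

```java
public class MatrixChain {
    // Returns the minimum number of scalar multiplications needed to
    // multiply the chain A1 x A2 x ... x An, where Ai is dims[i-1] x dims[i].
    static int matrixChainOrder(int[] dims) {
        int n = dims.length - 1;              // number of matrices
        int[][] cost = new int[n + 1][n + 1]; // cost[i][j]: cheapest Ai..Aj
        for (int len = 2; len <= n; len++) {  // chain length
            for (int i = 1; i + len - 1 <= n; i++) {
                int j = i + len - 1;
                cost[i][j] = Integer.MAX_VALUE;
                for (int k = i; k < j; k++) { // split point: (Ai..Ak)(Ak+1..Aj)
                    int q = cost[i][k] + cost[k + 1][j]
                          + dims[i - 1] * dims[k] * dims[j];
                    if (q < cost[i][j]) cost[i][j] = q;
                }
            }
        }
        return cost[1][n];
    }

    public static void main(String[] args) {
        // a = 10, b = 100, p = 10, s = 1000 from the example above.
        System.out.println(matrixChainOrder(new int[]{10, 100, 10, 1000})); // 110000
    }
}
```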
