
Dynamic Programming

Sandip Bhuyan edited this page Oct 16, 2019 · 3 revisions

Dynamic Programming and Subproblems

In dynamic programming, we divide a problem into multiple subproblems and solve each of them separately. We store the solution of a subproblem to avoid redundantly recomputing that particular result. From this approach, the two key properties of dynamic programming are

  • Overlapping Subproblem
  • Optimal Substructure

Let us take the first property, Overlapping Subproblem, as the topic of this blogpost.

Overlapping Subproblem

Before going into the whole context, let us cover some important points: why do we need dynamic programming? In which scenarios can we use it? What are its basic applications?

Need of Dynamic Programming

Suppose there is a problem we want to solve, and its solution requires a certain computation to be done again and again. In that case, instead of recalculating the value each time, we can store it somewhere and fetch it from that store when needed. If you are doing so, then congratulations: you are using dynamic programming, knowingly or unknowingly. This also describes the scenarios in which you can use dynamic programming.

Applications of Dynamic Programming

  1. 0/1 knapsack problem
  2. Mathematical optimization problem
  3. All pair shortest path problem
  4. Reliability design problem
  5. Longest common subsequence (LCS)
  6. Flight control and robotics control
  7. Time-sharing: It schedules the job to maximize CPU usage
  8. Cutting of rod
  9. Bio Informatics(Floyd Warshall, LCS, Genetic Algorithm)
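The store-and-reuse idea described above can be sketched with a tiny cache. This is a hypothetical example (the class and method names are made up for illustration, not one of the classic problems listed): an expensive result is computed once, stored in a map, and served from the store on every later request.

```java
import java.util.HashMap;
import java.util.Map;

public class SquareCache
{
	// Store for already-computed results, keyed by input.
	private final Map<Integer, Long> cache = new HashMap<>();

	long expensiveSquare(int x)
	{
		// Compute only if the value is not already in the store.
		return cache.computeIfAbsent(x, k -> (long) k * k);
	}

	public static void main(String[] args)
	{
		SquareCache s = new SquareCache();
		System.out.println(s.expensiveSquare(12)); // computed
		System.out.println(s.expensiveSquare(12)); // fetched from the store
	}
}
```

The second call never recomputes the square; it is answered from the map, which is exactly the "store the value and get the data when needed" pattern.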

The sections above cover the use and applications of dynamic programming. However, dynamic programming cannot be used where there are subproblems but the subproblems are independent of each other, e.g. binary search, merge sort, etc. For those, we use divide and conquer. So it is good to know when to use dynamic programming and when to use divide and conquer. After all, algorithm design is meant to make the algorithm run fast in practice, not just look good on paper. If we apply dynamic programming to something that doesn't require it, we may end up increasing the time complexity. To illustrate, let us take the example of computing Fibonacci numbers. A naive recursive Fibonacci function can be written like below

 int fibonacci(int num)
{
	if (num <= 1)
		return num;
	return fibonacci(num - 1) + fibonacci(num - 2);
}

If we look at the recursion call diagram, we will see the same functions being called and executed again and again. Let us take the example of computing fibonacci(5). If we analyze the call structure, we will see that fibonacci(3) is called several times (2 times). If we store the value of fibonacci(3), the number of executions will be reduced and we can get the output sooner.

fibonacci(5) -> fibonacci(4) + fibonacci(3)
    left child:  fibonacci(4) -> fibonacci(3) + fibonacci(2)
                     fibonacci(3) -> fibonacci(2) + fibonacci(1)
                         fibonacci(2) -> fibonacci(1) + fibonacci(0)
                     fibonacci(2) -> fibonacci(1) + fibonacci(0)
    right child: fibonacci(3) -> fibonacci(2) + fibonacci(1)
                     fibonacci(2) -> fibonacci(1) + fibonacci(0)
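The redundancy in this call tree can be checked by instrumenting the naive recursion with a call counter (the counter and class name here are added for illustration):

```java
public class FibCallCounter
{
	// Counts every invocation of fib, including the repeated ones.
	static int calls = 0;

	static int fib(int n)
	{
		calls++;
		if (n <= 1)
			return n;
		return fib(n - 1) + fib(n - 2);
	}

	public static void main(String[] args)
	{
		System.out.println(fib(5));  // prints 5
		System.out.println(calls);   // prints 15: fifteen calls just for n = 5
	}
}
```

Fifteen calls for an input of 5 is exactly the waste that storing previously computed values removes.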

As we can see above, fibonacci(3) is called twice. If we store its value in a table and look it up instead, it will save a lot of time. To convert the program into dynamic programming, we can follow either of the below processes.

  1. Memoization (Top Down)
  2. Tabulation (Bottom Up)

Memoization

It follows the same recursive version with a little modification. On each call it looks into the lookup table first; if the value is not there, it computes it and stores the computed value in the lookup table for later use.

Code

public class Fibonacci
{
	final int MAX = 100;
	final int NIL = -1;

	// Lookup table; NIL marks entries that have not been computed yet.
	int lookup[] = new int[MAX];

	void _initialize()
	{
		for (int i = 0; i < MAX; i++)
			lookup[i] = NIL;
	}

	int fib(int n)
	{
		if (lookup[n] == NIL)
		{
			if (n <= 1)
				lookup[n] = n;
			else
				lookup[n] = fib(n - 1) + fib(n - 2);
		}
		return lookup[n];
	}

	public static void main(String[] args)
	{
		Fibonacci f = new Fibonacci();
		int n = 40;
		f._initialize();
		System.out.println("Fibonacci number is " + f.fib(n));
	}
}

Tabulation

In this process, the values are computed and stored into a table first, in a bottom-up manner, i.e. starting from the smallest subproblems; when a value is required later, it is simply read from the table.

Code

public class Fibonacci
{
	int fib(int n)
	{
		// Guard small inputs so f[1] is never written for n == 0.
		if (n <= 1)
			return n;
		// f[i] holds the i-th Fibonacci number, filled bottom-up.
		int f[] = new int[n + 1];
		f[0] = 0;
		f[1] = 1;
		for (int i = 2; i <= n; i++)
			f[i] = f[i - 1] + f[i - 2];
		return f[n];
	}

	public static void main(String[] args)
	{
		Fibonacci f = new Fibonacci();
		int n = 9;
		System.out.println("Fibonacci number is " + f.fib(n));
	}
}

Difference between tabulation and memoization

Both the tabulated and the memoized versions store the solutions of subproblems. In the memoized version, the table is filled on demand, while in the tabulated version all entries are filled one by one, starting from the first. Unlike the tabulated version, the memoized version does not necessarily fill all entries of the lookup table.
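This difference is easiest to see with a recurrence whose recursion does not visit every index. The recurrence below, F(n) = max(n, F(n/2) + F(n/3)) with integer division, is a made-up example chosen purely for illustration: a tabulated solution would fill every entry from 0 to n, but the memoized version fills only the entries the recursion actually reaches.

```java
import java.util.Arrays;

public class FillDemo
{
	static int[] memo;

	// Hypothetical recurrence: F(n) = max(n, F(n/2) + F(n/3)).
	static int f(int n)
	{
		if (n == 0)
			return 0;
		if (memo[n] == -1)
			memo[n] = Math.max(n, f(n / 2) + f(n / 3));
		return memo[n];
	}

	public static void main(String[] args)
	{
		int n = 30;
		memo = new int[n + 1];
		Arrays.fill(memo, -1);
		f(n);

		// Count how many table entries the memoized recursion touched.
		long filled = Arrays.stream(memo).filter(v -> v != -1).count();
		System.out.println("Entries filled by memoization: " + filled + " of " + (n + 1));
		// prints: Entries filled by memoization: 8 of 31
	}
}
```

Only 8 of the 31 entries are ever computed, while a bottom-up table for the same recurrence would fill all 31; for Fibonacci the two happen to coincide, since its recursion reaches every index.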

Conclusion

I think we all now have an overview of how to solve a problem through overlapping subproblems. If you have any queries, then please send a mail to sandipbhuyan@gmail.com.

Cheers Sandip

