Dynamic programming is a widely used technique for optimization. In this article, I will introduce you to the concept of dynamic programming, one of the best-known techniques in competitive programming and almost all coding interviews.
Introduction to Dynamic Programming

Dynamic programming refers to the simplification of a complicated problem by breaking it down into simpler subproblems in a recursive fashion, solved either top-down with memoization or bottom-up with iteration.
A problem must have two key attributes for dynamic programming to be applicable: “optimal substructure” and “overlapping subproblems”. To achieve its optimization, dynamic programming uses a technique called memoization.
Applications of Dynamic Programming
The basic idea of dynamic programming is to break down a complex problem into several small, simple problems that repeat themselves. If you can identify a simple subproblem that is calculated over and over again, chances are there is a dynamic programming approach to the problem.
As this section is titled Applications of Dynamic Programming, it will focus more on applications than on the process of building dynamic programming algorithms.
Fibonacci Numbers:
Fibonacci numbers are a classic example for dynamic programming because the traditional recursive approach does a lot of repeated calculations. In these examples, I’ll use the base case of f(0) = f(1) = 1.
Here is an example of a recursion tree for Fibonacci(4); note the repeated calculations:
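Sketched as text, the tree looks like this:

```
                    fib(4)
                   /      \
              fib(3)       fib(2)
             /      \      /     \
         fib(2)  fib(1) fib(1)  fib(0)
        /      \
    fib(1)  fib(0)
```

fib(2) is computed twice and fib(1) three times; this duplicated work is exactly what dynamic programming eliminates.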

Non-dynamic programming: O(2^n) runtime complexity, O(n) stack complexity:
This is the most intuitive way to write the problem. At most, the stack space will be O(n) as you descend the first recursive branch, making Fibonacci(n-1) calls until you reach the base case n < 2.
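A minimal sketch of this naive approach in Python (the function name fib is my own, since the article doesn’t include source code):

```python
def fib(n):
    # Base case: f(0) = f(1) = 1
    if n < 2:
        return 1
    # Each call branches into two more calls, giving O(2^n) runtime;
    # the chain of fib(n - 1) calls gives the O(n) stack depth
    return fib(n - 1) + fib(n - 2)
```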
Memoized: O(n) runtime complexity, O(n) space complexity, O(n) stack complexity:
With the memoized approach, we introduce an array that can be thought of as a record of all previous function calls. The location memo[n] holds the result of the function call Fibonacci(n). This lets us trade O(n) space for an O(n) runtime, because we no longer need to recompute duplicate function calls.
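A sketch of the memoized version, assuming a memo list indexed by n as described:

```python
def fib_memo(n, memo=None):
    if memo is None:
        memo = [None] * (n + 1)  # one slot per possible call
    if n < 2:
        return 1
    if memo[n] is None:  # compute each fib(n) only once
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
```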
Iterative dynamic programming: O(n) runtime complexity, O(n) space complexity, no recursive stack:
If we break the problem down into its basic parts, you will notice that to calculate Fibonacci(n), we need Fibonacci(n-1) and Fibonacci(n-2). We can also see that the base cases appear at the leaves of the recursion tree, as seen above.
With this information, it now makes sense to compute the solution in reverse, starting with the base cases and working upward. To calculate Fibonacci(n), we first calculate all the Fibonacci numbers up to and including n.
The main advantage here is that we have now eliminated the recursive stack while keeping the O(n) runtime. Unfortunately, we still have O(n) space complexity, but that can be changed as well.
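A sketch of this bottom-up version; the table list is the O(n) space we still pay for:

```python
def fib_iterative(n):
    # Start from the base cases f(0) = f(1) = 1 and build upward
    table = [1] * (n + 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```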
Advanced iterative dynamic programming: O(n) runtime complexity, O(1) space complexity, no recursive stack:
As stated above, the iterative approach starts from the base cases and works up to the final result.
The key observation for getting the space complexity down to O(1) (constant) is the same one we made about the recursive stack: we only need Fibonacci(n-1) and Fibonacci(n-2) to construct Fibonacci(n). This means that at any point in our iteration, we only need to keep the results for Fibonacci(n-1) and Fibonacci(n-2).
To store these last two results, I use an array of size 2 and index it with i % 2, which alternates as follows: 0, 1, 0, 1, 0, 1, …
I add the two entries of the array together; because addition is commutative (5 + 6 == 11 and 6 + 5 == 11), it doesn’t matter which slot holds which value. The result is then assigned to the older of the two slots (indexed by i % 2), and the final result ends up at position n % 2.
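Putting this together, a sketch of the constant-space version using the size-2 array and i % 2 indexing described above:

```python
def fib_constant_space(n):
    if n < 2:
        return 1
    last_two = [1, 1]  # holds only fib(i-1) and fib(i-2)
    for i in range(2, n + 1):
        # Overwrite the older of the two slots with the new value
        last_two[i % 2] = last_two[0] + last_two[1]
    return last_two[n % 2]  # the final result lands at position n % 2
```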
It’s important to note that sometimes it may be better to come up with a memoized solution for functions that do large calculations over and over again, as you will be building a cache of answers for subsequent function calls, which may return in O(1) if the result has already been calculated. This is what dynamic programming is all about.
Hope you liked this article on the concept of dynamic programming. Please feel free to ask your valuable questions in the comments section below.