Difference between backtracking and dynamic programming
Algorithm - Data Structures - Algorithm Problem Overview
I heard the only difference between dynamic programming and backtracking is that DP allows overlapping of subproblems, e.g.
fib(n) = fib(n-1) + fib(n-2)
Is it right? Are there any other differences?
Also, I would like to know some common problems solved using these techniques.
Algorithm Solutions
Solution 1 - Algorithm
There are two typical implementations of the Dynamic Programming approach: bottom-to-top and top-to-bottom.
Top-to-bottom Dynamic Programming is nothing more than ordinary recursion, enhanced with memorizing the solutions for intermediate subproblems. When a given subproblem arises a second (third, fourth...) time, it is not solved from scratch, but instead the previously memorized solution is used right away. This technique is known under the name memoization (no 'r' before 'i').
This is actually what your example with the Fibonacci sequence is supposed to illustrate. Just use the recursive formula for the Fibonacci sequence, but build the table of fib(i) values along the way, and you get a top-to-bottom DP algorithm for this problem (so that, for example, if you need to calculate fib(5) a second time, you get it from the table instead of calculating it again).
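A minimal Python sketch of that idea (the `memo` dictionary plays the role of the table of fib(i) values; the names are my own):

```python
def fib(n, memo=None):
    """Top-to-bottom DP: plain recursion plus a table of solved subproblems."""
    if memo is None:
        memo = {}
    if n in memo:        # subproblem seen before: reuse the stored answer
        return memo[n]
    if n < 2:
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(10))  # 55, with each fib(i) computed only once
```

Without the table this recursion takes exponential time; with it, each fib(i) is computed once, so fib(50) is instant.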
In bottom-to-top Dynamic Programming the approach is also based on storing subsolutions in memory, but they are solved in a different order (from smaller to bigger), and the resultant general structure of the algorithm is not recursive. The LCS algorithm is a classic bottom-to-top DP example.
Bottom-to-top DP algorithms are usually more efficient, but they are generally harder (and sometimes impossible) to build, since it is not always easy to predict which primitive subproblems you are going to need to solve the whole original problem, and which path you have to take from small subproblems to get to the final solution in the most efficient way.
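For illustration, a bottom-to-top LCS table in Python might look like the following (a minimal sketch that computes only the length, not the subsequence itself):

```python
def lcs_length(a, b):
    """Bottom-to-top DP for Longest Common Subsequence: fill a table
    from the smallest subproblems upward, with no recursion at all."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of the prefixes a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (e.g. "BCBA")
```

Note how the loop order guarantees that every smaller subproblem is already solved when a cell needs it; that ordering is exactly what a bottom-to-top formulation has to get right.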
Solution 2 - Algorithm
Dynamic programming also requires "optimal substructure".
According to Wikipedia:
> Dynamic programming is a method of solving complex problems by breaking them down into simpler steps. It is applicable to problems that exhibit the properties of 1) overlapping subproblems which are only slightly smaller and 2) optimal substructure.
>
> Backtracking is a general algorithm for finding all (or some) solutions to some computational problem, that incrementally builds candidates to the solutions, and abandons each partial candidate c ("backtracks") as soon as it determines that c cannot possibly be completed to a valid solution.
For a detailed discussion of "optimal substructure", please read the CLRS book.
Common problems for backtracking I can think of are:
DP problems:
- This website at MIT has a good collection of DP problems with nice animated explanations.
- A chapter from a book from a professor at Berkeley.
Solution 3 - Algorithm
One more difference could be that Dynamic Programming problems usually rely on the principle of optimality. The principle of optimality states that in an optimal sequence of decisions or choices, each subsequence must also be optimal.
Backtracking problems are usually NOT optimal on their way! They can only be applied to problems which admit the concept of a partial candidate solution.
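The N-queens puzzle is a standard example of such a partial candidate solution: a partial placement of queens is abandoned the moment it conflicts. A minimal Python sketch (function names are my own):

```python
def n_queens(n):
    """Backtracking: extend a partial placement column by column, and abandon
    ("backtrack" from) any partial candidate that cannot become a solution."""
    solutions = []

    def safe(cols, row):
        c = len(cols)  # column where the new queen would go
        # conflict if same row, or same diagonal (|row delta| == |col delta|)
        return all(r != row and abs(r - row) != c - i
                   for i, r in enumerate(cols))

    def extend(cols):
        if len(cols) == n:               # complete candidate: record it
            solutions.append(tuple(cols))
            return
        for row in range(n):
            if safe(cols, row):          # prune conflicting partial candidates
                cols.append(row)
                extend(cols)
                cols.pop()               # undo the choice, try the next row

    extend([])
    return solutions

print(len(n_queens(6)))  # 4 distinct solutions for the 6-queens puzzle
```

There is no notion of one partial placement being "better" than another here, only valid vs. doomed, which is exactly why optimality plays no role.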
Solution 4 - Algorithm
DP allows for solving a large, computationally intensive problem by breaking it down into subproblems whose solution requires only knowledge of the immediate prior solution. You will get a very good idea by picking up Needleman-Wunsch and solving a sample, because it is so easy to see the application.
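A small Needleman-Wunsch score computation makes this concrete; the +1/-1/-1 scoring values below are illustrative assumptions, not part of the original answer. Note how each cell needs only the immediately prior row and column:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global sequence alignment score by DP (simple scoring scheme assumed)."""
    m, n = len(a), len(b)
    score = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        score[i][0] = i * gap            # a's prefix aligned against gaps
    for j in range(1, n + 1):
        score[0][j] = j * gap            # b's prefix aligned against gaps
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1]
                                          else mismatch)
            # best of: align a[i-1] with b[j-1], or gap in either sequence
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    return score[m][n]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # 0 with this scoring scheme
```

Each cell depends only on its three already-computed neighbors, which is the "only the immediate prior solution" property the answer describes.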
Backtracking seems to be more complicated: the solution tree is pruned as soon as it is known that a specific path will not yield an optimal result.
Therefore one could say that backtracking optimizes for memory, whereas DP assumes that all the computations are performed, and the algorithm then steps back through the lowest-cost nodes.
Solution 5 - Algorithm
Say that we have a solution tree, whose leaves are the solutions for the original problem, and whose non-leaf nodes are the suboptimal solutions for part of the problem. We try to traverse the solution tree for the solutions.
Dynamic programming is more like BFS: we find all possible suboptimal solutions represented by the non-leaf nodes, and only grow the tree by one layer under those non-leaf nodes.
Backtracking is more like DFS: we grow the tree as deep as possible and prune the tree at a node if the solutions under that node are not what we expect.
Then there is one inference derived from the aforementioned theory: dynamic programming usually takes more space than backtracking, because BFS usually takes more space than DFS (O(N) vs O(log N) for a balanced tree). In fact, dynamic programming requires memoizing all the suboptimal solutions from the previous step for later use, while backtracking does not require that.
Solution 6 - Algorithm
IMHO, the difference is very subtle since both (DP and BCKT) are used to explore all possibilities to solve a problem.
As of today, I see two subtleties:

1. BCKT is a brute-force solution to a problem; DP is not. Thus, you might say: DP explores the solution space more optimally than BCKT. In practice, when you want to solve a problem using the DP strategy, it is recommended to first build a recursive solution. Well, that recursive solution could also be considered the BCKT solution.

2. There are hundreds of ways to explore a solution space (welcome to the world of optimization) "more optimally" than a brute-force exploration. DP is DP because at its core it implements a mathematical recurrence relation, i.e., the current value is a combination of past values (bottom-to-top). So we might say that DP is DP because the problem admits exploring its solution space by means of a recurrence relation. If you explore the solution space based on another idea, then that won't be a DP solution. As with any problem, the problem itself may facilitate one optimization technique or another, based on the problem structure itself. The structure of some problems enables the use of the DP optimization technique. In this sense, BCKT is more general, though not all problems allow BCKT either.
Example: Sudoku enables BCKT to explore its whole solution space. However, it does not allow using DP to explore that space more efficiently, since there is no recurrence relation anywhere that can be derived. That said, there are other optimization techniques that fit the problem and improve on brute-force BCKT.
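A brute-force BCKT sketch for Sudoku might look like the following (a minimal illustration, assuming a 9x9 grid of lists with 0 marking empty cells; solved in place):

```python
def solve_sudoku(grid):
    """Brute-force backtracking over a 9x9 grid: place a digit, recurse,
    and undo the placement when it leads to a dead end. Note there is no
    recurrence relation here, so no DP formulation is available."""
    def valid(r, c, d):
        if d in grid[r] or any(grid[i][c] == d for i in range(9)):
            return False                  # digit already in row or column
        br, bc = 3 * (r // 3), 3 * (c // 3)
        return all(grid[br + i][bc + j] != d
                   for i in range(3) for j in range(3))  # or in 3x3 box

    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if valid(r, c, d):
                        grid[r][c] = d
                        if solve_sudoku(grid):
                            return True
                        grid[r][c] = 0    # backtrack: undo and try next digit
                return False              # no digit fits: prune this branch
    return True                           # no empty cells left: solved
```

The only intelligence here is the pruning in `valid`; everything else is exhaustive search, which is what makes it BCKT rather than DP.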
Example: Just get the minimum of a classic mathematical function. This problem does not allow BCKT to explore the state space of the problem.
Example: Any problem that can be solved using DP can also be solved using BCKT. In this sense, the recursive solution of the problem could be considered the BCKT solution.
Hope this helps a bit.
Solution 7 - Algorithm
In a very simple sentence: dynamic programming is a strategy for solving optimization problems. An optimization problem asks for a minimum or maximum result (a single result). Backtracking, by contrast, is a brute-force approach, not for optimization problems; it is for when you have multiple results and you want all or some of them.
Solution 8 - Algorithm
Depth-first node generation of the state space tree with a bounding function is called backtracking. Here the current node is dependent on the node that generated it.
Depth-first node generation of the state space tree with a memory function is called top-down dynamic programming. Here the current node is dependent on the nodes it generates.