Algorithm - Dynamic Programming (DAG)

Dynamic Programming (DP) is well suited to optimal sub-structure problems, where the optimal answers to sub-problems make up the solution to the original problem.

If the problem can be represented as a Directed Acyclic Graph (DAG), DP can solve it in O(V+E) running time. To solve a DAG with DP, process the vertices in topological order (the reverse of DFS finishing order) and perform one round of Bellman-Ford relaxation within a single recursion.

Why DAGs?

DAGs guarantee that there are no -ve weight cycles. With DP, -ve cycles cause infinite recursion: when there is a -ve cycle, the sub-problems have cyclic dependencies.

Graph with -ve cycle causing infinite recursion due to dependency of sub-problems.

Note that problems with cyclic dependencies in graphs can still be solved with DP, see (Overcoming Cyclic Dependencies).

Example : Shortest Path (SP)

Finding the SP is an example of an optimal sub-structure problem, since finding the SP involves discovering the optimal paths between the vertices that make up the SP.

Example of SP problem.

To solve this problem with DP, we need to process vertices (from S to G) in the topological order.
Once you have the topological order, the topologically sorted vertices can be processed with 2 similar approaches:

  1. Start at S: find the SP to each successor S’, and so on all the way to V. (Bottom-up)
  2. Start at V: recursively find the SP to each predecessor U, then U’s predecessor, and so on all the way back to S. (Top-down; the recursion resolves the dependencies.)

The bottom-up approach needs an explicit topological sort to ensure vertices are processed in the correct dependency order. With memoized DP, however, the recursion through the sub-problems of a DAG already processes dependencies in topological order (so no separate topological sort is needed).

Either approach works, as long as vertices are processed in topological order and, at each sub-problem, the minimum of all possible paths (optimal sub-structure) is taken. In general the solution can be represented as:

\delta(S,V) = \min\{\, \delta(S,U) + w(U,V) \mid (U,V) \in E \,\}

Two similar approaches...

Implementation

DP Memoization for Shortest Path DAG
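A minimal sketch of the memoized (top-down) approach, assuming the graph is given as an adjacency list of incoming edges with weights (the names `inEdges` and `shortestPathDAG` are illustrative, not from a specific library):

```javascript
// Memoized DP shortest path on a DAG (top-down).
// inEdges[v] lists the incoming edges [u, w] of v; dist(v) is exactly
// δ(S,V) = min{ δ(S,U) + w(U,V) | (U,V) ∈ E }.
function shortestPathDAG(inEdges, source, target) {
  const memo = new Map();
  function dist(v) {
    if (v === source) return 0;           // base case: δ(S,S) = 0
    if (memo.has(v)) return memo.get(v);  // sub-problem already solved
    let best = Infinity;
    for (const [u, w] of inEdges[v] || []) {
      best = Math.min(best, dist(u) + w); // recursion resolves dependencies
    }
    memo.set(v, best);
    return best;
  }
  return dist(target);
}

// Usage: S->A (1), S->B (4), A->B (2), A->G (6), B->G (1)
const inEdges = {
  A: [["S", 1]],
  B: [["S", 4], ["A", 2]],
  G: [["B", 1], ["A", 6]],
};
console.log(shortestPathDAG(inEdges, "S", "G")); // 4, via S->A->B->G
```

Note the recursion follows edges backwards (from target to source), so each vertex is finished only after all of its predecessors, which is the topological order.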

Bottom-up approach for Shortest Path DAG
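A sketch of the bottom-up variant, which performs the topological sort explicitly (via DFS, reversed finish order) and then relaxes edges in that order; the graph shape and function names are illustrative assumptions:

```javascript
// Bottom-up shortest path on a DAG: topological sort first, then one
// Bellman-Ford-style relaxation pass over the vertices in that order.
function shortestPathBottomUp(outEdges, vertices, source) {
  // Topological sort via DFS; reversing the finish order gives the order.
  const visited = new Set(), order = [];
  function dfs(v) {
    visited.add(v);
    for (const [u] of outEdges[v] || []) if (!visited.has(u)) dfs(u);
    order.push(v); // v finishes after all vertices reachable from it
  }
  for (const v of vertices) if (!visited.has(v)) dfs(v);
  order.reverse();

  // Relax each vertex's outgoing edges in topological order.
  const dist = Object.fromEntries(vertices.map((v) => [v, Infinity]));
  dist[source] = 0;
  for (const u of order) {
    if (dist[u] === Infinity) continue; // unreachable from source
    for (const [v, w] of outEdges[u] || []) {
      dist[v] = Math.min(dist[v], dist[u] + w);
    }
  }
  return dist;
}

// Same example graph as before, as outgoing edges this time.
const outEdges = {
  S: [["A", 1], ["B", 4]],
  A: [["B", 2], ["G", 6]],
  B: [["G", 1]],
};
const dist = shortestPathBottomUp(outEdges, ["S", "A", "B", "G"], "S");
console.log(dist.G); // 4
```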

Time complexity

The running time is the sum, over all sub-problems, of the time per sub-problem (ignoring the recursion). With memoized DP, each sub-problem v does \theta(\text{indegree}(v) + 1) work, and there are V.length sub-problems:

\theta\left(\sum_{v \in V} (\text{indegree}(v) + 1)\right)

Time complexity for processing V vertices and E edges:

\theta(V + E)

This is the same for the bottom-up approach (topological sort plus visiting all edges and vertices).

Overcoming Cyclic Dependencies

For DP to work, the sub-problems must be acyclic. To work around a cyclic dependency, we can transform the graph and modify the sub-problem. Below is a visual representation of this transformation:

Breaking the dependency cycle by graph transformation and limiting recursion to V.length - 1 layers

  • For every vertex v in the graph G=(V,E), make V.length copies of v as v_0, v_1, …, v_{V.length-1} in a new graph G’
  • For every edge (u, v) in graph G, create an edge between consecutive layers: (u_{k-1}, v_k)
  • The new graph G’ has no cyclic dependencies, since every edge moves forward one layer.

With DP, you do not need to actually build the graph G’, since the transformation can be represented as recursion:

DP(graph_as_copy, k, vertexV) {
  // δ_k(s, s) = 0 for any k (base case)
  // δ_0(s, v) = ∞ for v ≠ s (base case - unreachable using 0 edges)
  // ...
  graph_inverse_neighbors(vertexV).each((u) => {
    DP(graph_as_copy, k - 1, u); // recursion through layers.
  });
}
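The sketch above can be fleshed out. Here is a minimal runnable version of the layered recursion, where δ_k(s, v) is the shortest path from s to v using at most k edges; the graph shape and function name are illustrative assumptions:

```javascript
// Memoized δ_k(s, v): shortest path from s to v using at most k edges.
// This walks the implicit layered graph G' instead of materialising it,
// which is effectively Bellman-Ford expressed as a recursion.
function shortestPathWithCycles(inEdges, vertices, s, target) {
  const memo = new Map();
  function delta(k, v) {
    if (v === s) return 0;          // δ_k(s, s) = 0 for any k
    if (k === 0) return Infinity;   // δ_0(s, v) = ∞ for v ≠ s
    const key = k + ":" + v;
    if (memo.has(key)) return memo.get(key);
    let best = delta(k - 1, v);     // allow stopping at the previous layer
    for (const [u, w] of inEdges[v] || []) {
      best = Math.min(best, delta(k - 1, u) + w); // recursion through layers
    }
    memo.set(key, best);
    return best;
  }
  return delta(vertices.length - 1, target); // a simple path uses ≤ V-1 edges
}

// The cycle A <-> B does not break the recursion, because k strictly
// decreases on every call, so the sub-problems stay acyclic.
const inEdges = {
  A: [["S", 2], ["B", 1]],
  B: [["A", 1]],
  G: [["B", 3]],
};
console.log(shortestPathWithCycles(inEdges, ["S", "A", "B", "G"], "S", "G")); // 6
```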

In general, the solution for SP on graphs with cycles is the same, with a modified sub-problem:

\delta_k(s,v) = \min\{\, \delta_{k-1}(s,u) + w(u,v) \mid (u,v) \in E \,\}

Time complexity

V layers, each costing the same as the memoized DP on a graph with no cycles (which is also Bellman-Ford):

\theta\left(V \cdot \sum_{v \in V} (\text{indegree}(v) + 1)\right) or \theta(V \cdot E)