The main idea behind approximation algorithms is as follows. When one has to solve a problem that is proved to be NP-Complete, one has two choices:
- Use the usual (exact) algorithm, get the answer in non-polynomial time (most often exponential), and be ready to wait until eternity, OR
- Compromise on the quality of the answer but get things done faster
As suggested by the second option above, approximation algorithms have exactly that aim. Instead of finding the exact optimal solution to the given problem, they try to find an approximate one; but they are guaranteed to find it in polynomial time, so they run faster. The question then is: even if they are faster, by how much does the quality of the solution degrade? This is measured by the approximation factor, or approximation ratio, which captures how far we are from the best possible, i.e. from the optimal. Consider optimization in terms of minimization. Suppose a minimization problem is NP-Complete and the optimal solution has value 5. If a proposed approximation algorithm (i.e. an algorithm that runs in polynomial time) gives a solution whose value is 7.5, then we say we are doing 1.5 times worse than the optimal. So here the approximation factor = approximation ratio = 7.5/5 = 1.5. Note that in the case of a minimization problem, if we call OPT the value of the optimal solution and V the value of the approximate solution, then V >= OPT always; i.e. V can be equal to or worse than (here, larger than) OPT, and thus the approximation ratio for a minimization problem is always greater than or equal to 1. Similarly, for a maximization problem, V <= OPT, and thus the approximation ratio is always less than or equal to 1.
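To make this concrete, here is a tiny Python check of the minimization example above (the numbers 5 and 7.5 are just the illustrative values from the text, and the variable names are mine):

```
# Minimization example: optimal value OPT = 5, approximation algorithm returns V = 7.5
OPT = 5.0
V = 7.5

ratio = V / OPT      # 1.5 -- we are doing 1.5 times worse than the optimal
assert V >= OPT      # for minimization, V >= OPT always
assert ratio >= 1    # so the approximation ratio is always >= 1
# For a maximization problem the inequalities flip: V <= OPT and V / OPT <= 1.
print(ratio)         # 1.5
```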
Thus, one comes up with an (approximation) algorithm A such that
- It solves the problem in polynomial time
- One can give a proof of correctness for A
- It does not degrade the quality of the solution too much; i.e. one can come up with a reasonable approximation ratio for A and prove that it holds in the general case
There exist many ways to come up with approximation algorithms and to prove that they have a reasonable approximation ratio. One such recipe is as follows.
- Suppose it is a maximization problem, so that V <= OPT, where V is the value of the solution returned by the approximation algorithm. We have to come up with a ratio k s.t. V is at most k times worse than OPT.
- In order to prove that V is at most k times worse than OPT, we would have to know OPT, which is not possible because computing it itself takes non-polynomial time in the general case.
- Suppose instead we know some number M such that the optimal solution has value at most M over all possible cases, i.e. OPT <= M.
- Thus, if we can prove that V is at most k times worse than M, it is guaranteed that V is at most k times worse than OPT as well: given that V >= k*M and M >= OPT, we get V >= k*M >= k*OPT.
- For example, suppose OPT is 100 and we know that M is 120. If V is 80 (the best we can achieve with the approximation algorithm), then k = V / M = 80 / 120 = 2/3, meaning we can guarantee 2/3 of the maximum possible solution value. Note that the true ratio V / OPT = 80 / 100 = 4/5 is even better than 2/3 (since M >= OPT, V/OPT >= V/M always). Thus working against the upper bound M is enough (see the small check after this list).
- For a minimization problem, as we shall see below, we have to consider a lower bound instead.
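Here is the small check mentioned above, again in Python, using the illustrative numbers from the maximization example (OPT = 100, M = 120, V = 80):

```
# Maximization example: true optimum OPT, upper bound M on OPT, algorithm's value V
OPT, M, V = 100.0, 120.0, 80.0

k = V / M                 # 2/3 -- the guarantee we can prove against the upper bound M
assert M >= OPT           # M is an upper bound on the optimum
assert V >= k * M         # by the definition of k
assert V >= k * OPT       # follows from V >= k*M >= k*OPT
print(k, V / OPT)         # 0.666..., 0.8 -- the true ratio V/OPT is never worse than k
```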
I came across a nice lecture by Prof. Abhiram Ranade (IIT Bombay). In this lecture, in addition to introducing the basics of approximation algorithms and defining the above-mentioned notions, Sir also illustrates approximation algorithms designed for two NP-Complete problems. One of them (my favorite and the easier one :D) is briefly explained below.
(Note: I have tried to explain the problem and its approximation algorithm the way I understood it, so it is recommended to go through the original lecture by Ranade Sir)
Metric TSP (Traveling Salesperson Problem)
Input: A graph G with N cities and a matrix D giving the distance between every pair of cities, s.t. D is a metric.
A distance measure D is a metric if
- D[i,i] = 0 for all i
- D[i,j] = D[j,i] for all i,j (D is Symmetric)
- D[i,j] <= D[i,k] + D[k,j] for all i, j and k (D follows Triangle Inequality)
Output: A cycle in G that goes through every city exactly once and has the shortest length (in terms of the distance values in D)
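Before going to the recipe, here is a minimal Python sketch (the function name and the example matrix are mine) of checking the three metric conditions listed above for a concrete distance matrix:

```
def is_metric(D):
    """Check the three metric conditions for a square distance matrix D (list of lists)."""
    n = len(D)
    for i in range(n):
        if D[i][i] != 0:                          # D[i,i] = 0
            return False
        for j in range(n):
            if D[i][j] != D[j][i]:                # symmetry
                return False
            for k in range(n):
                if D[i][j] > D[i][k] + D[k][j]:   # triangle inequality
                    return False
    return True

# Example: a small metric on 3 cities
D = [[0, 2, 3],
     [2, 0, 4],
     [3, 4, 0]]
print(is_metric(D))   # True
```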
Recipe:
This will follow the recipe mentioned above. This being a minimization problem, we will come up with a lower bound. For the given graph G, suppose we know the optimal tour and we remove an edge from it. As the required output is a cycle containing all vertices of G, the edge_removed_tour is a spanning path in G (spanning = containing all vertices). Hence it is also a spanning tree of G, because it contains all vertices and no cycle. Now consider the minimum spanning tree T of G, and let its cost be M. Since any other spanning tree of G is at least as expensive as T, the spanning tree we obtained by removing an edge from the optimal tour has length no less than that of the MST, i.e. no less than M. So, length of edge_removed_tour >= M.
Since the length of the (optimal) tour is at least the length of the edge_removed_tour (obvious, as the removed edge has non-negative length), we get
length of (optimal) tour >= M
Here, the LHS is nothing but OPT, and thus M is a lower bound on OPT.
So, we come up with an algorithm whose solution has value V no more than twice M, i.e. V <= 2 * M. Since M <= OPT, this immediately gives V <= 2 * OPT, i.e. the approximation ratio is 2.
The algorithm and its correctness are described in the lecture. The main idea used is the triangle inequality property of D, which holds because D is a metric.
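The standard way this is done (and, as far as I understood, the construction discussed in the lecture) is: compute a minimum spanning tree, walk it in preorder, and use the triangle inequality to shortcut vertices that were already visited, which yields a tour of length V <= 2 * M. Below is a rough, self-contained Python sketch of that idea; all names are mine and it is only meant to illustrate the recipe, not the lecture's exact presentation.

```
def mst_edges(D):
    """Prim's algorithm on the complete graph given by the distance matrix D.
    Returns (cost of the MST, adjacency list of the MST)."""
    n = len(D)
    in_tree = [False] * n
    best = [float('inf')] * n          # cheapest known edge from each vertex to the tree
    parent = [None] * n
    best[0] = 0
    adj = [[] for _ in range(n)]
    cost = 0
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        cost += best[u]
        if parent[u] is not None:
            adj[u].append(parent[u])
            adj[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and D[u][v] < best[v]:
                best[v] = D[u][v]
                parent[v] = u
    return cost, adj

def approx_tsp_tour(D):
    """Metric-TSP 2-approximation: preorder walk of the MST, shortcut by triangle inequality."""
    M, adj = mst_edges(D)
    order, stack, seen = [], [0], set()
    while stack:                       # iterative preorder (DFS) traversal of the MST
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        order.append(u)
        stack.extend(reversed(adj[u]))
    tour = order + [order[0]]          # return to the start to close the cycle
    V = sum(D[a][b] for a, b in zip(tour, tour[1:]))
    return tour, V, M

D = [[0, 2, 3],
     [2, 0, 4],
     [3, 4, 0]]
tour, V, M = approx_tsp_tour(D)
print(tour, V, M)                      # [0, 1, 2, 0] 9 5 on this instance
assert V <= 2 * M                      # the guarantee argued above: V <= 2*M <= 2*OPT
```

On this tiny 3-city instance the preorder tour is [0, 1, 2, 0] with length 9, the MST cost M is 5, and indeed 9 <= 2 * 5, as the bound promises.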