Bruteforce and backtracking to solve NP-complete problems

Reading time: 30 min

In brief

Article summary

In this article, we present an approach to solving the TSP (traveling salesman problem), or more generally NP-complete problems. This approach is based on an exhaustive search over possible solutions, with a small acceleration strategy called “backtracking”.

Main takeaways

  • A bruteforce (exhaustive) approach can be used to solve an NP-complete problem.

  • For the TSP, it builds a tree such that each branch is a possible solution.

  • The complexity of solving the TSP with a bruteforce algorithm is $O(n!)$.

  • Backtracking is an optimization that makes it possible to stop exploring branches early in the search.

  • Branch-and-bound is another possible optimization.

Article contents

1 — Video

Please check the video below. You can also read the transcript if you prefer.

Important

There are some “codes” that appear in this video. Please keep in mind that these “codes” are what we call pseudo-codes, i.e., they are not written in a real programming language. Their purpose is just to describe the algorithms, not to be interpretable per se. You will thus need to do a bit of work to turn them into fully functional Python code if you wish to.

Information

Hi all! Welcome back to this course on algorithmics and discrete mathematics.

In today’s lesson, we’re going to talk about solving NP-Complete problems with two algorithms: an exhaustive one, and backtracking. At the end of the video, you should be able to implement these two algorithms to solve the traveling salesman problem.

Computing a solution

Classically, NP-complete problems are tackled either through:

  • “Exact algorithms”, which are reasonably fast for small problems.
  • “Sub-optimal” or “heuristic” algorithms, which deliver good-enough solutions that are not necessarily optimal.

Today, let’s explore two exact algorithms to solve the traveling salesman problem, or TSP.

Brute force algorithm

The most direct solution to the TSP is to try all possible elementary paths – or permutations – of the $n$ cities, and see which one is the cheapest. This is known as a “brute force”, or “exhaustive search”.

If the starting city is fixed, the number of such permutations equals $(n-1)!$. Since evaluating each permutation takes $O(n)$ time, the total calculation time, i.e., the time it takes for the computer to solve the problem with this approach, is $O(n!)$.

To give some more concrete meaning to this complexity, imagine that evaluating one permutation, or one elementary path, takes 1 microsecond. The following table summarizes the calculation time when the number of cities increases. For example, for a problem with 5 cities, the number of possible elementary paths equals 24, and the calculation time will be 24 microseconds. Multiplying the number of cities by only 3 (to 15) increases the resolution time to approximately 24 hours, and multiplying it by 6 (to 30 cities) raises it to around 3 million billion centuries.
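These figures can be checked with a small helper (the function name and the 1-microsecond assumption are ours, chosen to match the table above):

```python
import math

def brute_force_time_seconds(n, eval_time_us=1.0):
    """Time to try all (n-1)! elementary paths from a fixed starting city,
    assuming each path is evaluated in `eval_time_us` microseconds."""
    return math.factorial(n - 1) * eval_time_us * 1e-6

print(brute_force_time_seconds(5))   # 24 paths: 2.4e-05 seconds
print(brute_force_time_seconds(15))  # about 24 hours
```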

So, it's quite obvious that this algorithm becomes impractical even for only 20 cities. Let's have a look at an example. Consider this graph.

The starting point for the traveling salesman is vertex $v_1$, and imagine for the sake of simplicity that he does not have to return to $v_1$ in the end. The number of solutions to explore is therefore $(n-1)!$ if $n$ is the number of cities.

Trying all possible solutions starting from $v_1$ can be done using a depth-first search, during which we keep in memory the shortest possible path encountered.

So far in this MOOC, we have considered DFS in which each vertex is explored only once. Here, we are interested in the case where a vertex cannot appear twice on the same path, but can appear in several different paths. So, here we consider a variation of DFS where the list of explored vertices is maintained for the current branch only, and not shared with the other branches.

The tree of all the possible paths is shown here.

The search starts on the left, by proposing the first solution $( v_1, v_2, v_3, v_4 )$ with a cost of 16. This solution is stored in memory as the best solution identified so far.

The next explored solution is $( v_1, v_2, v_4, v_3 )$, which has a cost of 10. This solution is better than the current best one, $( v_1, v_2, v_3, v_4 )$, so this new solution is stored in memory as the current best solution.

The search continues until all possible solutions have been evaluated. The output of the algorithm is solution $( v_1, v_3, v_4, v_2 )$, which has the lowest possible cost of 7. A total of six solutions have been evaluated in this depth-first search.

Algorithm

The algorithm that runs through the previous tree can be seen here.

It is a recursive algorithm that calls itself until a stopping condition is reached. The idea behind the algorithm is to keep in memory the best possible path that has been encountered thus far, and to update it when a better or shorter one is encountered.

The arguments of the call are as follows: the list of vertices (minus the starting point), the initial vertex, an empty list (which will contain the optimal path in the end), the initial weight of this empty path (which is 0), and the graph.
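A possible Python translation of such a recursive search could look as follows. The graph representation (a dict of neighbor-weight dicts for a complete graph) and all names are our assumptions; the edge weights are reconstructed from the costs quoted in the example above (16, 10, and a best of 7):

```python
def brute_force(to_visit, current, path, weight, graph, best=None):
    """Depth-first enumeration of all elementary paths from `current`.
    Returns the best (path, weight) pair found."""
    if not to_visit:                                  # stopping condition
        if best is None or weight < best[1]:
            best = (path + [current], weight)
        return best
    for v in to_visit:                                # try every remaining vertex
        best = brute_force([u for u in to_visit if u != v], v,
                           path + [current], weight + graph[current][v],
                           graph, best)
    return best

# Edge weights reconstructed from the costs quoted in the example
G = {"v1": {"v2": 7, "v3": 4, "v4": 3}, "v2": {"v1": 7, "v3": 8, "v4": 2},
     "v3": {"v1": 4, "v2": 8, "v4": 1}, "v4": {"v1": 3, "v2": 2, "v3": 1}}

print(brute_force(["v2", "v3", "v4"], "v1", [], 0, G))
# (['v1', 'v3', 'v4', 'v2'], 7)
```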

Backtracking solution

A quick improvement of the previous algorithm is to stop the depth-first search as soon as it is certain that the currently explored branch cannot generate a better solution than the current best one.

This is the case when the total weight of the edges used in the current branch of the tree is already larger than the total weight of the best solution found.

This means that the order in which branches are explored has an influence on the overall calculation time.

For example, imagine that by chance we start with the branch that generates the optimal solution. When exploring the other branches, we will abort exploring them as soon as the total weight exceeds the one found for the first branch, which will save a lot of calculation time.

Various strategies can be used to further optimize backtracking, by preselecting which branches to examine first. As an example, we could select the branch whose starting vertex is the closest to the current vertex.
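Combining the pruning rule with this closest-neighbor ordering, a possible Python sketch (graph representation, names, and edge weights are our assumptions, consistent with the example used in this article) could be:

```python
def backtrack(to_visit, current, path, weight, graph, best=None):
    """Brute force with pruning; `best` is a (path, weight) pair,
    or None before any complete path has been found."""
    if best is not None and weight >= best[1]:
        return best                                   # prune: cannot improve
    if not to_visit:                                  # complete path found
        return (path + [current], weight)
    # Strategy: explore the closest unvisited neighbor first
    for v in sorted(to_visit, key=lambda u: graph[current][u]):
        best = backtrack([u for u in to_visit if u != v], v,
                         path + [current], weight + graph[current][v],
                         graph, best)
    return best

G = {"v1": {"v2": 7, "v3": 4, "v4": 3}, "v2": {"v1": 7, "v3": 8, "v4": 2},
     "v3": {"v1": 4, "v2": 8, "v4": 1}, "v4": {"v1": 3, "v2": 2, "v3": 1}}

print(backtrack(["v2", "v3", "v4"], "v1", [], 0, G))
# (['v1', 'v3', 'v4', 'v2'], 7)
```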

Example

Let's illustrate this particular backtracking approach on the same graph again, shown here. The tree of all possible paths from vertex $v_1$ is shown here.

First, we try to move to the closest neighbor of $v_1$, which is $v_4$. It costs 3 to move to $v_4$, so our route has a cost of 3 so far. Then we continue to $v_3$, which is the closest unexplored neighbor of $v_4$. It costs 1 more, so the total length is 4. Finally, we move to $v_2$, the last remaining vertex. It adds 8 to our sum, and so we obtain a total of 12. We thus have a first estimate of the shortest path, $(v_1,v_4,v_3,v_2)$, even though this path is not optimal.

There is no possible movement from this configuration, so we move backwards. Again, there are no other alternatives here, as we've already explored $v_2$ after $v_3$, so we continue backwards. Now we have an alternative option, which is to go to $v_2$. We add the corresponding weight of 2 and obtain a partial route with a length of 5. There's only one option left, which is to visit $v_3$. However, this move would cost us an additional 8, which would result in a total that's more than our previously found path of length 12. So, we move backwards instead.

We've now explored all the options starting with $v_1$ then $v_4$, so we continue backwards. We now select another option when starting at $v_1$, which is to go to the second nearest neighbor of $v_1$, which is $v_3$. From here we go to $v_4$ first. And finally we go to $v_2$. We obtain a total length of 7, and thus update the shortest route so far.

We continue exploring the tree by going backwards, twice. Now we have the option to go to $v_2$, but it would cost too much compared to our best route, so we continue backwards.

From $v_1$, there's only one option left: going to $v_2$. It would already cost 7, which is as much as our best route, so we cannot expect to find a shorter path in this direction.

So, we can conclude that the shortest route has a length of 7, and yet we've only explored a fraction of the tree of all possible paths.

So, in practice, the backtracking solution explores far fewer branches than the brute force approach, even though its worst-case complexity is the same.

Concluding words

OK, that’s it for today! I hope you’ve understood how to obtain an exact solution of the TSP using a brute force or exhaustive search, and how we can solve the TSP faster using backtracking.

In the next lesson, Patrick will tell you how to identify the complexity of a problem.

To go further

Important

The content of this section is optional. It contains additional material for you to consolidate your understanding of the current topic.

2 — The branch-and-bound algorithm

A third algorithm for solving the TSP exactly is the branch-and-bound algorithm.

The branch-and-bound algorithm explores the tree of possibilities (in the case of the traveling salesman, the tree of possible paths). To do this, it uses a BFS-like traversal. At each vertex it encounters, it estimates an interval (as narrow as possible) bounding the length of the shortest path obtainable from the branch of the tree where it currently is.

To obtain these intervals, there are several techniques:

  • To obtain an upper bound, it is sufficient to use a greedy algorithm that estimates the length of the remaining path by always moving to the closest vertex. This results in a path that is not necessarily optimal, but whose length is at least that of a shortest path.

  • To obtain a lower bound, we can for example count the number of vertices not yet visited (let's call this number $k$). In the best case, they can be visited by going through the $k$ edges with the lowest weights, so the sum of these $k$ weights gives a lower bound. This bound is often loose, and tighter bounds are preferable when they are not too costly to compute.
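As a sketch, these two bounds could be computed as follows in Python. The graph representation and names are our assumptions (the same complete, symmetric graph as in the article's example), and each undirected edge is counted once in the lower bound:

```python
def upper_bound(to_visit, current, weight, graph):
    """Greedy nearest-neighbor completion of the current partial path:
    it yields a feasible path, so its length is a valid upper bound."""
    remaining, total = set(to_visit), weight
    while remaining:
        nxt = min(remaining, key=lambda v: graph[current][v])
        total += graph[current][nxt]
        current = nxt
        remaining.remove(nxt)
    return total

def lower_bound(to_visit, weight, graph):
    """Add the k lightest edge weights, with k = number of unvisited
    vertices: no completion can be cheaper, so this is a valid lower bound."""
    k = len(to_visit)
    edges = {frozenset((u, v)): w
             for u in graph for v, w in graph[u].items()}
    return weight + sum(sorted(edges.values())[:k])

G = {"v1": {"v2": 7, "v3": 4, "v4": 3}, "v2": {"v1": 7, "v3": 8, "v4": 2},
     "v3": {"v1": 4, "v2": 8, "v4": 1}, "v4": {"v1": 3, "v2": 2, "v3": 1}}

print(upper_bound(["v2", "v3", "v4"], "v1", 0, G))  # 12 (greedy path)
print(lower_bound(["v2", "v3", "v4"], 0, G))        # 6 (1 + 2 + 3)
```

Note that the optimal length of 7 indeed lies inside the interval $[6, 12]$.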

When the algorithm has determined several intervals (one per branch to explore), it can compare them to each other. When searching for a path of minimum length, if the interval of a branch $b_1$ has an upper bound smaller than the lower bound of the interval of another branch $b_2$, then the sub-tree corresponding to $b_2$ is eliminated. Indeed, there is no point in exploring sub-tree $b_2$: even its best solution cannot beat the worst solution obtainable by continuing on branch $b_1$.
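This elimination rule can be sketched as a simple comparison of intervals (a toy illustration with hypothetical bounds):

```python
def prune_branches(intervals):
    """Given (lower, upper) bound pairs for each branch, discard any branch
    whose lower bound exceeds the best (smallest) upper bound seen."""
    best_upper = min(upper for _, upper in intervals)
    return [(lo, up) for lo, up in intervals if lo <= best_upper]

# The branch with bounds (14, 20) cannot beat the branch bounded by (5, 9)
print(prune_branches([(6, 12), (14, 20), (5, 9)]))  # [(6, 12), (5, 9)]
```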

2.1 — Complexity

In the worst case, the complexity of the algorithm is the same as that of an exhaustive search (provided that estimating the bounds is not too costly). This is for example the case for a complete graph in which all the edges have the same weight.

In practice, accurately estimated intervals can save a lot of time. In the best case, the complexity can even become linear.

2.2 — Precision-performance compromise

As with all approximate approaches, interval estimation poses a precision-performance trade-off. Indeed, the narrower the interval, the more parts of the tree can be pruned. However, if its estimation requires expensive calculation, the total complexity suffers.

It is therefore necessary to find estimates of the bounds of the intervals that offer a good compromise between computation time and accuracy.

To go beyond

Important

The content of this section is entirely optional. It suggests directions to explore if you wish to go deeper into the current topic.