Approximate solutions

Reading time: 25 min

In brief

Article summary

As mentioned in other articles, heuristics provide approximate solutions. Here, we give more details on how to characterize the quality of an approximate solution. In particular, we focus on speed and correctness.

Main takeaways

  • An approximate solution is a compromise between quality and required time.

  • Solutions that achieve an optimal trade-off are called Pareto-optimal.

Article contents

1 — Video

Please check the video below. You can also read the transcripts if you prefer.

Information

So, here we are again. Nice to see you’ve made it to week 5!

Today, we’ll talk about comparing implementations. When facing practical problems like the traveling salesman problem, the TSP, we’d obviously prefer the solution that’s both fast and correct.

Pareto frontier and Pareto optimal

For NP-complete problems, which is the case for the TSP, we know that this just isn’t possible. We have to sacrifice speed, correctness, or a bit of both.

This kind of trade-off is often represented by a "Pareto frontier", which answers the following question: "what is the fastest algorithm we can find that solves the problem with a given level of correctness?".

We call "Pareto optimal" a point lying on the Pareto frontier, which means it corresponds to an optimal trade-off between speed and correctness.

Which Pareto-optimal point is best typically depends on the relative importance of two or more parameters, such as the size of the problem we are dealing with.
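As a minimal sketch of this idea, assume we have measured (execution time, relative error) pairs for a few candidate algorithms (the names and numbers below are purely hypothetical). The following Python snippet keeps only the Pareto-optimal candidates, i.e. those not dominated by another candidate that is at least as fast and at least as correct, and strictly better on one of the two criteria:

# Hypothetical measurements: algorithm name -> (execution time in seconds, relative error)
candidates = {
    "greedy": (0.01, 0.15),
    "local_search": (0.5, 0.05),
    "slow_and_inaccurate": (2.0, 0.20),
    "brute_force": (60.0, 0.0),
}

# A candidate is Pareto-optimal if no other candidate is at least as fast
# and at least as correct, and strictly better on one of the two criteria
def pareto_optimal (candidates):
    result = []
    for name, (time, error) in candidates.items():
        dominated = any(t <= time and e <= error and (t < time or e < error)
                        for other, (t, e) in candidates.items() if other != name)
        if not dominated:
            result.append(name)
    return result

print(pareto_optimal(candidates))  # Prints ['greedy', 'local_search', 'brute_force']

Here, "slow_and_inaccurate" is discarded because "greedy" is both faster and more correct; the three remaining candidates each represent a different trade-off.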

Measuring speed and correctness

Let's define some notions related to measuring speed and correctness. We can measure how fast an algorithm is using its complexity, or by measuring its execution time. The correctness of an algorithm, however, is tricky to define, which makes it difficult to measure.

For the TSP, we know that we want to find a shortest path, and so correctness could simply be measured as a deviation from the length of the shortest path.
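For instance, if $L$ denotes the length of the path returned by a heuristic and $L^*$ the length of an actual shortest path, a natural error measure is the relative deviation $(L - L^*) / L^*$, which equals 0 for an optimal solution.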

But in practice, it isn't always so simple. Finding the actual shortest path can be impossible when there are more than a few dozen vertices, since we saw that the complexity of an exhaustive search grows exponentially. This means that the correctness of an algorithm approximating a solution to the TSP is very hard to estimate.

Comparing algorithms

What we know is that the more complexity we are willing to accept, the better the solution should be in terms of correctness. In the worst case, we can always fall back on a less complex solution that has already been proposed.

Let's take the TSP again as an example. There are many intermediate steps between the greedy algorithm, which always targets the next closest city, and the brute-force one, which exhaustively examines all possible paths visiting every city. An intermediate between these two extremes is only worth considering if it actually provides a better trade-off between correctness and complexity.

To determine whether a proposed solution is efficient, you should always compare it with a simple approach, both in terms of correctness and complexity. If your solution needs to be 10 times more complex to provide a 1% improvement in terms of correctness, maybe it’s not worth the effort, unless this 1% improvement makes a significant difference compared with other solutions.

Concluding words

Well, that’s it for today, thanks for your attention! This is my last video. I really enjoyed teaching you guys! I will leave you in good hands for week 6, during which Patrick and Vincent will tell you about combinatorial game theory. Bye bye!

To go further

Important

The content of this section is optional. It contains additional material for you to consolidate your understanding of the current topic.

2 — Example – coloring graphs

2.1 — The exact algorithm

Let us consider the following example: we call a “coloring” of a graph a vector of integers, each associated with one of its vertices. These integers are called vertex colors, and several vertices can share the same color.

A coloring is said to be “clean” if any two vertices connected by an edge have different colors. The “chromatic number” of a graph is the minimum number of colors needed to obtain a clean coloring. For example, the chromatic number of a triangle is 3, as its three vertices are pairwise connected.

This problem is a well-known example of an NP-complete problem. One way to solve it exactly is to list all possible colorings with 1 color, then with 2 colors, etc., until a clean coloring is found.

The following Python code solves the problem as described above:

# Easy iteration over permutations
import itertools

# Function to check if a coloring is correct
# A coloring is correct if no neighbors share a color
def check_coloring (graph, colors):
    for vertex in range(len(graph)):
        for neighbor in graph[vertex]:
            if colors[vertex] == colors[neighbor]:
                return False
    return True

# This function returns a coloring of the given graph using a minimum number of colors
# We gradually increase the number of available colors
# For each number, we test all possible arrangements of colors
def exhaustive_coloring (graph):
    for nb_colors in range(1, len(graph) + 1):
        for coloring in itertools.product(range(nb_colors), repeat=len(graph)):
            if check_coloring(graph, coloring):
                return coloring
    
# Test graph
# Here, we represent graphs as adjacency lists
graph = [[1, 2, 5], [0, 2, 5], [0, 1], [4, 5], [3, 5], [0, 1, 3, 4]]
result = exhaustive_coloring(graph)
print(result)

In this example, a graph is implemented as a list of lists. It contains 6 vertices (named $v_1$ to $v_6$) and 8 (symmetric) edges. The solution returned by the algorithm is:

Output
[0, 1, 2, 0, 1, 2]

This indicates that three colors are sufficient to have a clean coloring of the graph, with:

  • $v_1$ and $v_4$ sharing color 0.
  • $v_2$ and $v_5$ sharing color 1.
  • $v_3$ and $v_6$ sharing color 2.
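As a quick sanity check, we can feed the returned coloring back into check_coloring:

print(check_coloring(graph, [0, 1, 2, 0, 1, 2]))  # Prints True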

2.2 — The greedy approach

This algorithm is very complex. A well-known approximate alternative is to sort the vertices by decreasing number of neighbors, then color them in this order, each time choosing the smallest color (starting from 0) that keeps the coloring clean. This approximate algorithm is described below:

# For min-heaps
import heapq

# Function to check if a coloring is correct
# A coloring is correct if no neighbors share a color
def check_coloring (graph, colors):
    for vertex in range(len(graph)):
        if colors[vertex] is not None:
            for neighbor in graph[vertex]:
                if colors[neighbor] is not None:
                    if colors[vertex] == colors[neighbor]:
                        return False
    return True

# This function greedily tries to color the graph from highest degree node to lowest degree one
# First, we sort nodes in descending degree order using a max-heap (negative min-heap)
# Then we color nodes using that heap
def greedy_coloring (graph):
    heap = []
    for vertex in range(len(graph)):
        heapq.heappush(heap, (-len(graph[vertex]), vertex))
    colors = [None] * len(graph)
    while len(heap) > 0:
        degree, vertex = heapq.heappop(heap)
        for color in range(len(graph)):
            colors[vertex] = color
            if check_coloring(graph, colors):
                break
    return colors

# Test graph
graph = [[1, 2, 5], [0, 2, 5], [0, 1], [4, 5], [3, 5], [0, 1, 3, 4]]
result = greedy_coloring(graph)
print(result)

This algorithm is much less complex than the previous one, and therefore makes it possible to handle graphs with many more vertices. Here, we obtain the following result:

Output
[1, 2, 0, 1, 2, 0]

Note that here, we find a clean coloring using three colors, which matches the number of colors found by the exhaustive algorithm. However, this may not always be the case: the greedy approach can require more colors than the optimum.
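As an illustration (this small example is not from the course), consider a cycle of 6 vertices. It is bipartite, so 2 colors suffice, but with the vertex numbering below all vertices have the same degree, the heap then processes them in index order, and the greedy algorithm ends up using 3 colors:

# A 6-vertex cycle (0 - 3 - 4 - 1 - 2 - 5 - 0), which is bipartite: 2 colors suffice
graph = [[3, 5], [2, 4], [1, 5], [0, 4], [1, 3], [0, 2]]
print(greedy_coloring(graph))  # Prints [0, 0, 1, 1, 2, 2], i.e., 3 colors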

2.3 — Simulations

In order to evaluate the quality of our approximate algorithm, we will focus on two things: computation time and accuracy. To measure these two quantities, we will average results over a large number of random graphs.

In order to automate things a bit, we will adapt our code to take arguments from the command line and to output a file that can be processed later. Let’s create two Python files.

The following program, measure_greedy.py, returns the average execution time of the greedy approach for solving the considered problem, as well as the average number of colors needed for a fixed-size graph:

# Various imports
import math
import random
import heapq
import time
import sys

# Arguments
NB_NODES = int(sys.argv[1])
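# Note: an edge probability of ln(n) / n keeps the average degree around ln(n);
# it is also the connectivity threshold of Erdos-Renyi graphs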
EDGE_PROBABILITY = math.log(NB_NODES) / NB_NODES
NB_TESTS = int(sys.argv[2])

# Set a fixed random seed for comparing scripts on same graphs
random.seed(NB_NODES)

# Generates an Erdos-Renyi random graph
def generate_graph ():
    graph = [[] for i in range(NB_NODES)]
    for i in range(NB_NODES):
        for j in range(i + 1, NB_NODES):
            if random.random() < EDGE_PROBABILITY:
                graph[i].append(j)
                graph[j].append(i)
    return graph

# Function to check if a coloring is correct
# A coloring is correct if no neighbors share a color
def check_coloring (graph, colors):
    for vertex in range(len(graph)):
        if colors[vertex] is not None:
            for neighbor in graph[vertex]:
                if colors[neighbor] is not None:
                    if colors[vertex] == colors[neighbor]:
                        return False
    return True

# This function greedily tries to color the graph from highest degree node to lowest degree one
# First, we sort nodes in descending degree order using a max-heap (negative min-heap)
# Then we color nodes using that heap
def greedy_coloring (graph):
    heap = []
    for vertex in range(len(graph)):
        heapq.heappush(heap, (-len(graph[vertex]), vertex))
    colors = [None] * len(graph)
    while len(heap) > 0:
        degree, vertex = heapq.heappop(heap)
        for color in range(len(graph)):
            colors[vertex] = color
            if check_coloring(graph, colors):
                break
    return colors

# Tests
average_time = 0.0
average_solution_length = 0.0
for i in range(NB_TESTS):
    graph = generate_graph()
    time_start = time.time()
    solution = greedy_coloring(graph)
    average_solution_length += len(set(solution)) / NB_TESTS
    average_time += (time.time() - time_start) / NB_TESTS

# Print average time as a function of problem size
print(NB_NODES, ";", average_time, ";", average_solution_length)

Similarly, the following program measure_exhaustive.py evaluates the exhaustive solution:

# Various imports
import math
import random
import time
import sys
import itertools

# Arguments
NB_NODES = int(sys.argv[1])
EDGE_PROBABILITY = math.log(NB_NODES) / NB_NODES
NB_TESTS = int(sys.argv[2])

# Set a fixed random seed for comparing scripts on same graphs
random.seed(NB_NODES)

# Generates an Erdos-Renyi random graph
def generate_graph ():
    graph = [[] for i in range(NB_NODES)]
    for i in range(NB_NODES):
        for j in range(i + 1, NB_NODES):
            if random.random() < EDGE_PROBABILITY:
                graph[i].append(j)
                graph[j].append(i)
    return graph

# Function to check if a coloring is correct
# A coloring is correct if no neighbors share a color
def check_coloring (graph, colors):
    for vertex in range(len(graph)):
        for neighbor in graph[vertex]:
            if colors[vertex] == colors[neighbor]:
                return False
    return True

# This function returns a coloring of the given graph using a minimum number of colors
# We gradually increase the number of available colors
# For each number, we test all possible arrangements of colors
def exhaustive_coloring (graph):
    for nb_colors in range(1, len(graph) + 1):
        for coloring in itertools.product(range(nb_colors), repeat=len(graph)):
            if check_coloring(graph, coloring):
                return coloring

# Tests
average_time = 0.0
average_solution_length = 0.0
for i in range(NB_TESTS):
    graph = generate_graph()
    time_start = time.time()
    solution = exhaustive_coloring(graph)
    average_solution_length += len(set(solution)) / NB_TESTS
    average_time += (time.time() - time_start) / NB_TESTS

# Print average time as a function of problem size
print(NB_NODES, ";", average_time, ";", average_solution_length)

We can now use the following commands to run both algorithms on random graphs (100 graphs per graph order). Due to time considerations, the exhaustive search will only be evaluated on a subset of the graph orders. Depending on your system, use one of the following variants:

Windows (cmd):

@echo off
for /l %%n in (5,1,100) do (
    python measure_greedy.py %%n 100 >> results_greedy.csv
)

Windows (PowerShell):

for ($n = 5; $n -le 100; $n++) {
    python measure_greedy.py $n 100 >> results_greedy.csv
}

Linux / macOS (bash):

for n in {5..100}; do
    python3 measure_greedy.py $n 100 >> results_greedy.csv
done

And similarly for the exhaustive search:

Windows (cmd):

@echo off
for /l %%n in (5,1,20) do (
    python measure_exhaustive.py %%n 100 >> results_exhaustive.csv
)

Windows (PowerShell):

for ($n = 5; $n -le 20; $n++) {
    python measure_exhaustive.py $n 100 >> results_exhaustive.csv
}

Linux / macOS (bash):

for n in {5..20}; do
    python3 measure_exhaustive.py $n 100 >> results_exhaustive.csv
done

Let’s compare execution times using LibreOffice:

Clearly, the gain in execution time is significant. Similarly, let us compare the average chromatic numbers found for the two algorithms:

Looking at the precision curve, it seems that the precision loss is not very high, at least for the problem sizes that could be handled by the exhaustive approach. For larger graphs, we cannot compute the exact solution, due to the complexity of finding it.

In order to evaluate this aspect a bit further, we are going to evaluate the solutions found by the greedy approach on bipartite graphs, for which we know the chromatic number is at most 2:

# Generates a bipartite Erdos-Renyi random graph
def generate_bipartite_graph ():
    graph = [[] for i in range(NB_NODES)]
    for i in range(NB_NODES):
        for j in range(i + 1, NB_NODES):
            if i % 2 != j % 2 and random.random() < EDGE_PROBABILITY:
                graph[i].append(j)
                graph[j].append(i)
    return graph
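To reproduce the measurements above on bipartite graphs, one can for instance substitute this function for generate_graph in measure_greedy.py and rerun the same commands.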

We obtain the following results:

Here again, the result seems reasonable, which leads us to think this heuristic is well-suited to the problem.

You should still note that for low problem sizes, the chromatic number found is sometimes lower than 2, which may seem wrong. However, notice that the random graph generator does not check the connectivity of the graph; for low vertex counts, it can generate graphs with isolated vertices, or even with no edges at all, which genuinely require fewer than 2 colors.

To go beyond

Important

The content of this section is very optional. We suggest directions to explore if you wish to go deeper into the current topic.