# Cracking the world's hardest sudoku

Before powerful GPUs and multi-core processors made it possible for machines to learn from data, AI meant coding deterministic algorithms. The old and well-explored techniques of tree search and constraint propagation still find many applications today.

# Constraint Propagation and Search

Artificial intelligence is all about designing computer systems able to perform tasks that normally require human intelligence. We already know computers can do some arithmetic tasks, like multiplying large numbers, much faster than any human ever will. But what about non-arithmetic tasks? Well, by now everyone knows that Tesla, Google, Apple and many other tech companies are working on autonomous driving. And yet, they haven't completely cracked it. On the other hand, it is now 20 years since IBM's Deep Blue won both a chess game and a chess match against Garry Kasparov - the reigning world champion at the time. To sum it up: driving a car is obviously an easy task for humans - over a billion people drive every day - but it is very hard for a computer system to manage. At the same time, computer systems can beat the world champion at chess - a task that hardly any human can achieve. Makes you wonder, doesn't it?

## Coding a Sudoku Environment

Another non-arithmetic and seemingly human task at which computers excel is solving a sudoku. The use of constraint propagation and search is illustrated in this great blog post by Peter Norvig. In this post I will go one step further by introducing a small but powerful optimization for Norvig's solution. My whole sudoku solver implementation can be found in this repo: AIND-Sudoku.

In a sudoku, each row, column and 3x3 square contains every digit from 1 to 9 exactly once. Norvig introduces a very flexible design that is easily extended to a diagonal sudoku: it is enough to add the two diagonals to the units used in the constraint propagation steps:
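A sketch of that extension, following Norvig's naming conventions (`cross`, `boxes`, `unitlist`, `units`, `peers`) - the exact code in the repo may differ slightly:

```python
# Norvig-style board setup, extended with the two main diagonals.
rows = 'ABCDEFGHI'
cols = '123456789'

def cross(a, b):
    """All concatenations of a letter in `a` with a digit in `b`."""
    return [r + c for r in a for c in b]

boxes = cross(rows, cols)                     # 'A1' .. 'I9'

row_units = [cross(r, cols) for r in rows]
column_units = [cross(rows, c) for c in cols]
square_units = [cross(rs, cs) for rs in ('ABC', 'DEF', 'GHI')
                              for cs in ('123', '456', '789')]

# The only change needed for a diagonal sudoku: two extra units.
diagonal_units = [[r + c for r, c in zip(rows, cols)],
                  [r + c for r, c in zip(rows, reversed(cols))]]

unitlist = row_units + column_units + square_units + diagonal_units

# Every constraint below operates on units and peers, so the new
# diagonal constraint comes for free.
units = {box: [u for u in unitlist if box in u] for box in boxes}
peers = {box: set(sum(units[box], [])) - {box} for box in boxes}
```

Because elimination and only-choice iterate over `unitlist`, no other part of the solver has to change.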

## Naked twins strategy

In solution_performance_test.py I added a small performance test to measure the time needed to solve 20 hard sudoku puzzles. I furthermore modified the code to print the number of search attempts the solver needs for each puzzle. A search attempt is made whenever the potential of constraint propagation is exhausted and the algorithm has to try different digits for the same box. When executed, the test output looks like this:
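The test file itself is in the repo; a minimal sketch of such a timing harness might look like this (the `solve` callable and the puzzle-string format are assumptions - any function mapping a puzzle string to a solved grid will do):

```python
import time

def time_solver(solve, puzzles):
    """Return the total wall-clock time solve() spends on the puzzles.

    solve:   assumed callable taking a puzzle string and returning a
             solved grid (or False) -- not reproduced here.
    puzzles: list of puzzle strings.
    """
    start = time.perf_counter()           # monotonic, high-resolution clock
    for puzzle in puzzles:
        solve(puzzle)
    return time.perf_counter() - start
```

Counting search attempts is just a module-level counter incremented each time the search function branches on a box.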

As previously mentioned, in order to solve a sudoku puzzle one needs only constraint propagation and search. To increase the performance of Norvig's solution, I simply added one more constraint, called naked twins:

## Putting it all together

Adding just this single constraint led to a significant performance boost. The time needed to solve twenty sudoku puzzles was cut in half. You can clearly see the algorithm is making far fewer attempts than before:
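The search that drives all of this is a plain depth-first search with the constraints applied at every node. A sketch, with `reduce_puzzle` (the function that applies eliminate, only-choice and naked twins until they stall, returning `False` on a contradiction) passed in as an assumption rather than reproduced here:

```python
def search(values, reduce_puzzle):
    """Depth-first search with constraint propagation (a sketch).

    values:        dict mapping each box to its candidate digits.
    reduce_puzzle: assumed callable applying all constraints until no
                   progress is made; returns False on a contradiction.
    """
    values = reduce_puzzle(values)
    if values is False:
        return False                                   # dead end
    if all(len(values[box]) == 1 for box in values):
        return values                                  # solved
    # Branch on the unfilled box with the fewest candidates --
    # fewer branches means fewer unsuccessful attempts.
    _, box = min((len(values[b]), b) for b in values if len(values[b]) > 1)
    for digit in values[box]:
        attempt = dict(values)                         # copy, then guess
        attempt[box] = digit
        result = search(attempt, reduce_puzzle)
        if result:
            return result
    return False
```

Each pass through the `for digit` loop that fails and falls through to the next digit is one of the "search attempts" counted in the performance test.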

One can even go further and implement additional constraints. In the sudoku world those constraints are called sudoku strategies. So how good is a computer at solving a sudoku? In this Telegraph article I found a sudoku puzzle designed by Japanese scientists to be especially hard to solve. It is supposed to take a human hours, if not days. Below is a slow-motion video of the algorithm solving it. Note that the video would be much longer if not for the naked twins strategy, which significantly reduces the number of unsuccessful attempts.

As you can see in the video, the algorithm still makes quite a few unsuccessful attempts and subsequent backtracks. One thing is certain: an AI engineer will be faster at writing the code that solves a sudoku than at actually solving a puzzle that hard.