Question

I have a homework assignment to write a multi-threaded sudoku solver, which finds all solutions to a given puzzle. I have previously written a very fast single-threaded backtracking sudoku solver, so I don't need any help with the sudoku solving aspect.

My problem is probably related to not really grokking concurrency, but I don't see how this problem benefits from multi-threading. I don't understand how you can find different solutions to the same problem at the same time without maintaining multiple copies of the puzzle. Given this assumption (please prove it wrong), I don't see how a multi-threaded solution would be any more efficient than a single-threaded one.

I would appreciate it if anyone could give me some starting suggestions for the algorithm (please, no code...)


I forgot to mention, the number of threads to be used is specified as an argument to the program, so as far as I can tell it's not related to the state of the puzzle in any way...

Also, there may not be a unique solution - a valid input may be a totally empty board. I have to report min(1000, number of solutions) and display one of them (if it exists)


Solution

Pretty simple really. The basic concept is that in your backtracking solution you would branch when there was a choice. You tried one branch, backtracked and then tried the other choice.

Now, spawn a thread for each choice and try them both simultaneously. Only spawn a new thread if there are fewer than some number of threads already in the system (that would be your input argument); otherwise just use a simple (i.e. your existing) single-threaded solution. For added efficiency, get these worker threads from a thread pool.

This is in many ways a divide-and-conquer technique: you are using the choices as an opportunity to split the search space in half and allocate one half to each thread. Most likely one half is harder than the other, meaning thread lifetimes will vary, but that is what makes the optimisation interesting.

The easy way to handle the obvious synchronisation issues is to copy the current board state and pass it into each instance of your function, so it is a function argument. This copying means you don't have to worry about any shared state. If your single-threaded solution used a global or member variable to store the board state, you will need a copy of this either on the stack (easy) or per thread (harder). All your function needs to return is a board state and the number of moves taken to reach it.

Each routine that invokes several threads to do work should invoke n-1 threads when there are n pieces of work, do the nth piece of work itself, and then wait on a synchronisation object until all the other threads are finished. You then evaluate their results - you have n board states; return the one with the least number of moves.
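That branch-at-a-choice scheme might be sketched as follows (a minimal sketch, not the answerer's actual code: the flat-list board encoding, `MAX_THREADS`, and the `candidates` helper are all illustrative assumptions):

```python
import threading

# Sketch: fork a thread per choice, up to a cap. The board is a flat list
# of 81 ints, 0 marking an empty cell. MAX_THREADS stands in for the
# program's thread-count argument.
MAX_THREADS = 4
thread_budget = threading.Semaphore(MAX_THREADS - 1)  # main thread counts as one

def candidates(board, idx):
    """Values not already used in the cell's row, column or 3x3 box."""
    r, c = divmod(idx, 9)
    used = set()
    for i in range(9):
        used.add(board[r * 9 + i])                                         # row
        used.add(board[i * 9 + c])                                         # column
        used.add(board[(r // 3 * 3 + i // 3) * 9 + (c // 3 * 3 + i % 3)])  # box
    return [v for v in range(1, 10) if v not in used]

def solve(board, solutions, lock):
    try:
        idx = board.index(0)
    except ValueError:                          # no empty cell: a solution
        with lock:
            solutions.append(board[:])
        return
    choices = candidates(board, idx)
    workers = []
    for value in choices[1:]:
        if thread_budget.acquire(blocking=False):  # spawn only under the cap
            child = board[:]                       # each thread owns a copy
            child[idx] = value

            def run(b=child):
                solve(b, solutions, lock)
                thread_budget.release()

            t = threading.Thread(target=run)
            t.start()
            workers.append(t)
        else:                                   # cap reached: stay single-threaded
            child = board[:]
            child[idx] = value
            solve(child, solutions, lock)
    if choices:                                 # handle one choice ourselves
        board[idx] = choices[0]
        solve(board, solutions, lock)
    for t in workers:                           # wait for the spawned branches
        t.join()
```

The semaphore plays the role of the "< some number of threads" check: when it can't be acquired, the branch is explored in the current thread, which is exactly the existing single-threaded solver.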

OTHER TIPS

Multi-threading is useful in any situation where a single thread has to wait for a resource and you can run another thread in the meantime. This includes a thread waiting for an I/O request or database access while another thread continues with CPU work.

Multi-threading is also useful if the individual threads can be farmed out to different CPUs (or cores) as they then run truly concurrently, although they'll generally have to share data so there'll still be some contention.

I can't see any reason why a multi-threaded Sudoku solver would be more efficient than a single-threaded one, simply because there's no waiting for resources. Everything will be done in memory.

But I remember some of the homework I did at Uni, and it was similarly useless (Fortran code to see how deep a tunnel got when you dug down at 30 degrees for one mile then 15 degrees for another mile - yes, I'm pretty old :-). The point is to show you can do it, not that it's useful.

On to the algorithm.

I wrote a single threaded solver which basically ran a series of rules in each pass to try and populate another square. A sample rule was: if row 1 only has one square free, the number is evident from all the other numbers in row 1.

There were similar rules for all rows, all columns, all 3x3 mini-grids. There were also rules which checked row/column intersects (e.g. if a given square could only contain 3 or 4 due to the row and 4 or 7 due to the column, then it was 4). There were more complex rules I won't detail here but they're basically the same way you solve it manually.
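The first rule above (a row with only one free square) might be sketched like this (a hypothetical illustration, assuming the grid is a 9x9 list of lists with 0 marking an empty cell):

```python
# Sketch of the "one free square in a row" rule: the missing digit is
# whatever brings the row's sum up to 45 (the sum of 1..9).
def fill_single_gap_rows(grid):
    """Apply the rule to every row; return True if any cell was filled."""
    progress = False
    for row in grid:
        empties = [c for c, v in enumerate(row) if v == 0]
        if len(empties) == 1:
            row[empties[0]] = 45 - sum(row)  # digits 1..9 sum to 45
            progress = True
    return progress
```

A rule-based solver would run a battery of such functions in a loop until none of them reports progress.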

I suspect you have similar rules in your implementation (since other than brute force, I can think of no other way to solve it, and if you've used brute force, there's no hope for you :-).

What I would suggest is to allocate each rule to a thread and have them share the grid. Each thread would do its own rule and only that rule.

Update:

Jon, based on your edit:

[edit] I forgot to mention, the number of threads to be used is specified as an argument to the program, so as far as I can tell it's not related to the state of the puzzle in any way...

Also, there may not be a unique solution - a valid input may be a totally empty board. I have to report min(1000, number of solutions) and display one of them (if it exists)

It looks like your teacher doesn't want you to split based on the rules but instead on the fork-points (where multiple rules could apply).

By that I mean, at any point in the solution, if there are two or more possible moves forward, you should allocate each possibility to a separate thread (still using your rules for efficiency but concurrently checking each possibility). This would give you better concurrency (assuming threads can be run on separate CPUs/cores) since there will be no contention for the board; each thread will get its own copy.

In addition, since you're limiting the number of threads, you'll have to work some thread-pool magic to achieve this.

What I would suggest is to have a work queue and N threads. The work queue is initially empty when your main thread starts all the worker threads. Then the main thread puts the beginning puzzle state into the work queue.

The worker threads simply wait for a state to be placed on the work queue and one of them grabs it for processing. The work thread is your single-threaded solver with one small modification: when there are X possibilities to move forward (X > 1), your worker puts X-1 of those back onto the work queue then continues to process the other possibility.

So, let's say there's only one solution (true Sudoku :-). The first worker thread will whittle away at the solution without finding any forks and that will be exactly as in your current situation.

But with two possibilities at move 27 (say, 3 or 4 could go into the top left cell), your thread will create another board with the first possibility (put 3 into that cell) and place that in the work queue. Then it would put 4 in its own copy and continue.

Another thread will pick up the board with 3 in that cell and carry on. That way, you have two threads running concurrently handling the two possibilities.

When any thread decides that its board is insoluble, it throws it away and goes back to the work queue for more work.

When any thread decides that its board is solved, it notifies the main thread which can store it, over-writing any previous solution (first-found is solution) or throw it away if it's already got a solution (last-found is solution) then the worker thread goes back to the work queue for more work. In either case, the main thread should increment a count of solutions found.

When all the threads are idle and the work queue is empty, main either will or won't have a solution. It will also have a count of solutions.

Keep in mind that all communications between workers and main thread will need to be mutexed (I'm assuming you know this based on information in your question).
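The queue-and-workers design above might be sketched like this (a minimal sketch, not a definitive implementation: the flat-list board encoding, `N_WORKERS`, and the `candidates` helper are illustrative assumptions; `MAX_SOLUTIONS` mirrors the min(1000, ...) requirement):

```python
import queue
import threading

N_WORKERS = 4        # stands in for the thread-count argument
MAX_SOLUTIONS = 1000

def candidates(board, idx):
    """Legal values for the empty cell at idx (board: flat list of 81 ints)."""
    r, c = divmod(idx, 9)
    used = {board[r * 9 + i] for i in range(9)}
    used |= {board[i * 9 + c] for i in range(9)}
    used |= {board[(r // 3 * 3 + i // 3) * 9 + (c // 3 * 3 + i % 3)]
             for i in range(9)}
    return [v for v in range(1, 10) if v not in used]

def worker(work, results, lock):
    while True:
        board = work.get()
        if board is None:                    # sentinel: time to shut down
            work.task_done()
            return
        while True:
            if 0 not in board:               # solved: report to the main thread
                with lock:
                    if len(results) < MAX_SOLUTIONS:
                        results.append(board)
                break
            idx = board.index(0)
            vals = candidates(board, idx)
            if not vals:                     # insoluble: throw the board away
                break
            for v in vals[1:]:               # park the other X-1 forks
                child = board[:]
                child[idx] = v
                work.put(child)
            board[idx] = vals[0]             # keep working on the first fork
        work.task_done()

def solve_all(puzzle):
    """Run N_WORKERS threads over a shared work queue; return found boards."""
    work = queue.Queue()
    results, lock = [], threading.Lock()
    threads = [threading.Thread(target=worker, args=(work, results, lock))
               for _ in range(N_WORKERS)]
    for t in threads:
        t.start()
    work.put(puzzle[:])
    work.join()                              # queue drained and all boards done
    for _ in threads:
        work.put(None)                       # one sentinel per worker
    for t in threads:
        t.join()
    return results
```

`queue.Queue` handles the mutexing of the queue itself; the explicit lock guards the shared results list, and `join()`/`task_done()` give you the "all threads idle and queue empty" termination test.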

The idea behind multi-threading is taking advantage of having several CPUs, allowing you to make several calculations simultaneously. Of course each thread is going to need its own memory, but that's usually not a problem.

Mostly, what you want to do is divide the possible solution state into several sub-spaces which are as independent as possible (to avoid having to waste too many resources on thread creation overhead), and yet "fit" your algorithm (to actually benefit from having multiple threads).

Here is a greedy brute-force single-threaded solver:

  1. Select next empty cell. If no more empty cells, victory!
  2. Possible cell value = 1
  3. Check for invalid partial solution (duplicates in row, column or 3x3 block).
  4. If the partial solution is invalid, increment the cell value and return to step 3; if the value passes 9, clear the cell and backtrack to the previous cell. Otherwise, go to step 1.

If you look at the above outline, the combination of steps 2 and 3 is an obvious candidate for multithreading. More ambitious solutions involve creating a recursive exploration that spawns tasks that are submitted to a thread pool.

EDIT to respond to this point: "I don't understand how you can find different solutions to the same problem at the same time without maintaining multiple copies of the puzzle."

You can't. That's the whole point. However, a concrete 9-thread example might make the benefits more clear:

  1. Start with an example problem.
  2. Find the first empty cell.
  3. Create 9 threads, where each thread has its own copy of the problem with its own index as a candidate value in the empty cell.
  4. Within each thread, run your original single-threaded algorithm on this thread-local modified copy of the problem.
  5. If one of the threads finds an answer, stop all the other threads.

As you can imagine, each thread is now running a slightly smaller problem space and each thread has the potential to run on its own CPU core. With a single-threaded algorithm alone, you can't reap the benefits of a multi-core machine.
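The 9-way split might be sketched like this (a hedged illustration: `solve_single`, `fits` and the flat-list board encoding are assumptions, and in CPython the GIL prevents true core parallelism for CPU-bound threads, so a real implementation would use processes or a language with free-running threads):

```python
import concurrent.futures

def fits(board, idx, v):
    """Is v legal at empty cell idx? Board: flat list of 81 ints, 0 = empty."""
    r, c = divmod(idx, 9)
    return (v not in board[r * 9:r * 9 + 9]
            and all(board[i * 9 + c] != v for i in range(9))
            and all(board[(r // 3 * 3 + i) * 9 + c // 3 * 3 + j] != v
                    for i in range(3) for j in range(3)))

def solve_single(board):
    """The original single-threaded backtracker; returns a solved copy or None."""
    if 0 not in board:
        return board
    idx = board.index(0)
    for v in range(1, 10):
        if fits(board, idx, v):
            child = board[:]
            child[idx] = v
            result = solve_single(child)
            if result is not None:
                return result
    return None

def solve_nine_way(puzzle):
    """Fork the first empty cell nine ways, one thread-local copy per value."""
    idx = puzzle.index(0)
    copies = []
    for v in range(1, 10):
        if fits(puzzle, idx, v):           # skip obviously illegal forks
            child = puzzle[:]              # each thread gets its own copy
            child[idx] = v
            copies.append(child)
    with concurrent.futures.ThreadPoolExecutor(max_workers=9) as pool:
        for result in pool.map(solve_single, copies):
            if result is not None:         # first answer wins; this sketch
                return result              # awaits rather than cancels the rest
    return None
```

Note that stopping the losing threads early (step 5) is the fiddly part; futures that are already running can't be cancelled, so production code usually polls a shared "found it" flag instead.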

Does it need to benefit from multithreading, or just make use of multithreading so you can learn for the assignment?

If you use a brute force algorithm it is rather easy to split into multiple threads, and if the assignment is focused on coding threads that may be an acceptable solution.

When you say all solutions to a given puzzle, do you mean the final one and only solution to the puzzle? Or the different ways of arriving at the one solution? I was of the understanding that by definition, a sudoku puzzle could have only one solution...

For the former, either Pax's rule based approach or Tom Leys' take on multi-threading your existing backtracking algorithm might be the way to go.

If the latter, you could implement some kind of branching algorithm which launches a new thread (with its own copy of the puzzle) for each possible move at each stage of the puzzle.

Depending on how you coded your single threaded solver, you might be able to re-use the logic. You can code a multi-threaded solver to start each thread using a different set of strategies to solve the puzzle.

Using those different strategies, your multi-threaded solver may find the total set of solutions in less time than your single threaded solver (keep in mind though that a true Sudoku puzzle only has one solution...you're not the only one who had to deal with that god awful game in class)

Some general points: I don't run processes in parallel unless 1) it is easy to divide the problem 2) I know I'll get a benefit to doing so - e.g. I won't hit another bottleneck. I entirely avoid sharing mutable values between threads - or minimize it. Some people are smart enough to work safely with mutexes. I'm not.

You need to find points in your algorithm that create natural branches or large units of work. Once you've identified a unit of work, you drop it in a queue for a thread to pick up. As a trivial example: 10 databases to upgrade. Start the upgrade asynchronously on all 10 servers and wait for all to complete. I can easily avoid sharing state between threads/processes, and can easily aggregate the results.

What comes to mind for sudoku is that an efficient sudoku solver should combine 2-3 (or more) strategies that never run past a certain depth. When I do Sudoku, it's apparent that, at any given moment, different algorithms provide the solution with the least work. You could simply fire off a handful of strategies, let them investigate to a limited depth, and wait for a report. Rinse, repeat. This avoids "brute-forcing" the solution. Each algorithm has its own data space, but you combine the answers.

Sciam.com had an article on this a year or two back - it looks like it isn't public, though.

You said you used backtracking to solve the problem. What you can do is split the search space in two and hand each half to a thread; each thread then does the same until you reach the last node. I wrote a solution to this, which can be found at www2.cs.uregina.ca/~hmer200a, but using a single thread; the mechanism for splitting the search space is there, using branch and bound.

A few years ago when I looked at solving sudoku, it seemed like the optimal solution used a combination of logical analysis algorithms and only fell back on brute force when necessary. This allowed the solver to find the solution very quickly, and also rank the board by difficulty if you wanted to use it to generate a new puzzle. If you took this approach you could certainly introduce some concurrency, although having the threads actually work together might be tricky.

I have an idea that's pretty fun here: do it with the Actor Model! I'd say using Erlang. How? You start with the original board, and...

  • 1) at the first empty cell, create 9 children, each with a different number, then commit suicide
  • 2) each child checks whether its board is invalid; if so it commits suicide, else
    • if there is an empty cell, go to 1)
    • if the board is complete, this actor is a solution

Clearly every surviving actor is a solution to the problem =)

Just a side note. I actually implemented an optimized sudoku solver and looked into multithreading, but two things stopped me.

First, the simple overhead of starting a thread took 0.5 milliseconds, while the whole resolution took between 1 to 3 milliseconds (I used Java, other languages or environments may give different results).

Second, most problems don't require any backtracking. And those that do need it only late in the resolution of the problem, once all game rules have been exhausted and we need to make a hypothesis.

Here's my own penny. Hope it helps.

Remember that inter-processor/inter-thread communications are expensive. Don't multi-thread unless you have to. If there isn't much work/computation to be done in other threads, you might as well just go on with a single-thread.

Try as much as possible to avoid sharing data between threads. Share only when necessary.

Take advantage of SIMD extensions wherever possible. With vector extensions you can perform calculations on multiple data items in one swoop. They can help you aplenty.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow