Question

I searched the web for different solutions to the n-queens problem in Haskell, but couldn't find any that check for unsafe positions in O(1) time, such as the classic trick of keeping one array for the / diagonals and one for the \ diagonals.

Most solutions I found just checked each new queen against all the previous ones. Something like this: http://www.reddit.com/r/programming/comments/62j4m/nqueens_in_haskell/

nqueens :: Int -> [[(Int,Int)]]
nqueens n = foldr qu [[]] [1..n]
    where qu k qss = [ ((j,k):qs) | qs <- qss, j <- [1..n], all (safe (j,k)) qs ]
          safe (j,k) (l,m) = j /= l && k /= m && abs (j-l) /= abs (k-m)

What would be the best way to implement such an "O(1) approach" in Haskell? I am not looking for anything "super-optimized". Just some way to produce the "is this diagonal already used?" array in a functional manner.

UPDATE:

Thanks for all the answers, folks! The reason I originally asked the question is that I wanted to solve a harder backtracking problem. I knew how to solve it in an imperative language, but could not readily think of a purely functional data structure to do the job. I figured that the queens problem would be a good model (being the backtracking problem :) ) for the overall data-structure problem, but it isn't my actual problem.

I actually want to find a data structure that allows O(1) random access and holds values that are either in an "initial" state (free line/diagonal, in the n-queens case) or in a "final" state (occupied line/diagonal), with transitions (free to occupied) being O(1). This can be implemented using mutable arrays in an imperative language, but I feel that the restriction to one-way updates allows for a nice purely functional data structure (as opposed to Quicksort, for example, which really wants mutable arrays).

I figure that sth's solution is as good as you can get using immutable arrays in Haskell and the "main" function looks like what I wanted it to be:

-- try all positions for a queen in row n-1
place :: BoardState -> Int -> [[(Int, Int)]]
place _ 0 = [[]]
place b n = concatMap place_ (freefields b (n-1))
   where place_ p = map (p:) (place (occupy b p) (n-1))

The main problem seems to be finding a better data structure though, as Haskell Arrays have O(n) updating. Other nice suggestions fall short of the mythical O(1) holy grail:

  • DiffArrays come close but mess up in the backtracking. They actually get super slow :( .
  • STUArrays conflict with the pretty functional backtracking approach so they are discarded.
  • Maps and Sets have only O(log n) updating.
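For what it's worth, the Set-based bookkeeping the last bullet refers to is quite compact. Here is a small sketch of my own (not from any of the answers) using Data.IntSet for the columns and both diagonal families, counting solutions rather than listing them:

```haskell
import qualified Data.IntSet as IS

-- Count n-queens solutions. Three IntSets mark used columns and both
-- diagonal directions (indexed by r+c and r-c). Each insert/lookup is
-- O(log n) rather than O(1), but the sets are persistent, so
-- backtracking costs nothing: we just reuse the old set.
queensCount :: Int -> Int
queensCount n = go 0 IS.empty IS.empty IS.empty
  where
    go r cols diag1 diag2
      | r == n    = 1
      | otherwise = sum
          [ go (r + 1) (IS.insert c cols)
                       (IS.insert (r + c) diag1)
                       (IS.insert (r - c) diag2)
          | c <- [0 .. n - 1]
          , c `IS.notMember` cols
          , (r + c) `IS.notMember` diag1
          , (r - c) `IS.notMember` diag2
          ]
```

Persistence is what makes this pleasant: "undoing" a move is simply returning to the caller's copy of the sets.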

I am not really sure there is a solution overall, but it seems promising.

UPDATE:

The most promising data structures I found were Trailer Arrays. Basically a Haskell DiffArray, but one that mutates back when you backtrack.

Was it helpful?

Solution

In general you are probably going to be stuck paying the O(log n) complexity tax for a functional non-destructive implementation or you'll have to relent and use an (IO|ST|STM)UArray.

Strict pure languages may have to pay an O(log n) tax, relative to an impure language that can write to references, by implementing references through a map-like structure. Lazy languages can sometimes dodge this tax, although there is no proof either way whether the extra power offered by laziness is always sufficient to do so -- even if it is strongly suspected that laziness isn't powerful enough.

In this case it is hard to see a mechanism by which laziness could be exploited to avoid the reference tax. And, after all, that is why we have the ST monad in the first place. ;)

That said, you might investigate whether or not some kind of board-diagonal zipper could be used to exploit locality of updates -- exploiting locality in a zipper is a common way to try to drop a logarithmic term.

OTHER TIPS

Probably the most straightforward way would be to use a UArray (Int, Int) Bool to record safe/unsafe bits. Although copying this is O(N²), for small values of N this is the fastest method available.

For larger values of N, there are three major options:

  • Data.DiffArray removes copy overhead as long as you never use the old values again after modifying them. That is, if you always throw away the old value of the array after mutating it, the modification is O(1). If, however, you access the old value of the array later (even for only a read), the O(N²) is then paid in full.
  • Data.Map and Data.Set allow O(lg n) modifications and lookups. This changes the algorithmic complexity, but is often fast enough.
  • Data.Array.ST's STUArray s (Int, Int) Bool will give you imperative arrays, allowing you to implement the algorithm in the classic (non-functional) manner.
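To make the third option concrete, here is a small sketch of my own (names are mine, not from the answer) of the classic imperative formulation using STUArray, counting solutions only. The helper newFlags exists just to pin the ambiguous array type:

```haskell
import Control.Monad (forM_, unless)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, newArray, readArray, writeArray)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Monomorphic helper so type inference picks STUArray for newArray.
newFlags :: (Int, Int) -> ST s (STUArray s Int Bool)
newFlags bnds = newArray bnds False

-- Count n-queens solutions with mutable Bool arrays for the columns and
-- both diagonals: genuine O(1) checks and updates, undone in place when
-- the recursion backtracks.
queensST :: Int -> Int
queensST n = runST $ do
  count <- newSTRef (0 :: Int)
  cols  <- newFlags (0, n - 1)
  d1    <- newFlags (0, 2 * n - 2)   -- "/" diagonals, indexed by r + c
  d2    <- newFlags (1 - n, n - 1)   -- "\" diagonals, indexed by r - c
  let place r
        | r == n    = modifySTRef' count (+ 1)
        | otherwise = forM_ [0 .. n - 1] $ \c -> do
            usedC  <- readArray cols c
            usedD1 <- readArray d1 (r + c)
            usedD2 <- readArray d2 (r - c)
            unless (usedC || usedD1 || usedD2) $ do
              writeArray cols c       True
              writeArray d1 (r + c)   True
              writeArray d2 (r - c)   True
              place (r + 1)
              writeArray cols c       False   -- undo on backtrack
              writeArray d1 (r + c)   False
              writeArray d2 (r - c)   False
  place 0
  readSTRef count
```

The mutation never leaks: runST guarantees the arrays are unobservable from outside, which is exactly the trade the bullet describes.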

The basic potential problem with this approach is that the arrays for the diagonals need to be modified every time a queen is placed. The small improvement of constant lookup time for the diagonals might not necessarily be worth the additional work of constantly creating new modified arrays.

But the best way to know the real answer is to try it, so I played around a bit and came up with the following:

import Data.Array.IArray (array, (//), (!))
import Data.Array.Unboxed (UArray)
import Data.Set (Set, fromList, toList, delete)

-- contains sets of unoccupied columns and lookup arrays for both diagonals
data BoardState = BoardState (Set Int) (UArray Int Bool) (UArray Int Bool)

-- an empty board
board :: Int -> BoardState
board n
   = BoardState (fromList [0..n-1]) (truearr 0 (2*(n-1))) (truearr (1-n) (n-1))
   where truearr a b = array (a,b) [(i,True) | i <- [a..b]]

-- modify board state if queen gets placed
occupy :: BoardState -> (Int, Int) -> BoardState
occupy (BoardState c s d) (a,b)
   = BoardState (delete b c) (tofalse s (a+b)) (tofalse d (a-b))
   where tofalse arr i = arr // [(i, False)]

-- get free fields in a row
freefields :: BoardState -> Int -> [(Int, Int)]
freefields (BoardState c s d) a = filter freediag candidates
   where candidates = [(a,b) | b <- toList c]
         freediag (a,b) = (s ! (a+b)) && (d ! (a-b))

-- try all positions for a queen in row n-1
place :: BoardState -> Int -> [[(Int, Int)]]
place _ 0 = [[]]
place b n = concatMap place_ (freefields b (n-1))
   where place_ p = map (p:) (place (occupy b p) (n-1))

-- all possibilities to place n queens on a n*n board
queens :: Int -> [[(Int, Int)]]
queens n = place (board n) n

This works, and for n=14 it is roughly 25% faster than the version you mentioned. The main speedup comes from using the unboxed arrays bdonian recommended. With the normal Data.Array it has about the same runtime as the version in the question.

It might also be worth it to try the other array types from the standard library to see if using them can further improve performance.

I am becoming skeptical about the claim that purely functional is generally O(log n). See also Edward Kmett's answer, which makes that claim. That may apply to random mutable array access in the theoretical sense, but random mutable array access is probably not what most algorithms require, once they are properly studied for repeatable (i.e. non-random) structure. I think Edward Kmett refers to this when he writes, "exploit locality of updates".

I am thinking O(1) is theoretically possible in a pure functional version of the n-queens algorithm, by adding an undo method for the DiffArray, which requests a look back in differences to remove duplicates and avoid replaying them.

If I am correct in my understanding of the way the backtracking n-queens algorithm operates, then the slowdown caused by the DiffArray is because the unnecessary differences are being retained.

In the abstract, a "DiffArray" (not necessarily Haskell's) has (or could have) a set element method which returns a new copy of the array and stores a difference record with the original copy, including a pointer to the new changed copy. When the original copy needs to access an element, this list of differences has to be replayed in reverse on a copy of the current array to undo the changes. Note there is even the overhead that this singly-linked list has to be walked to the end before it can be replayed.

Imagine instead that these were stored as a doubly linked list, and that there was an undo operation as follows.

At an abstract, conceptual level, what the backtracking n-queens algorithm does is recursively operate on some arrays of booleans, moving the queen's position incrementally forward in those arrays at each recursive level. See this animation.

Working this out in my head only, I visualize that the reason DiffArray is so slow is that when the queen is moved from one position to another, the boolean flag for the original position is set back to false and the new position is set to true. These differences are recorded, yet they are unnecessary, because when replayed in reverse the array ends up with the same values it had before the replay began.

Thus, instead of using a set operation to set the flag back to false, what is needed is an undo method call, optionally with an input parameter telling DiffArray what "undo to" value to search for in the aforementioned doubly linked list of differences. If that "undo to" value is found in a difference record in the list, no conflicting intermediate changes to the same array element are found while walking back through the list, and the current value equals the "undo from" value in that difference record, then the record can be removed and that old copy can be re-pointed to the next record in the list.

What this accomplishes is to remove the unnecessary copying of the entire array on backtracking. There is still some extra overhead as compared to the imperative version of the algorithm, for adding and undoing the add of difference records, but this can be nearer to constant time, i.e. O(1).

If I correctly understand the n-queens algorithm, the lookback for the undo operation is only one, so there is no walk. Thus it isn't even necessary to store the difference of the set element when moving the queen position, since it will be undone before the old copy will be accessed. We just need a way to express this type safely, which is easy enough to do, but I will leave it as an exercise for the reader, as this post is too long already.
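The trail-with-undo idea above can be sketched with a toy wrapper (my own illustration, not the answer's code). Every write records (index, old value) on an explicit stack, so the most recent write can be undone directly, with no difference list to replay. Because this sketch sits on an immutable Array, each write is still O(n) to copy; only the bookkeeping is the point, and a real implementation would mutate an ST array in place:

```haskell
import Data.Array (Array, listArray, (//), (!))

-- A toy "trail array": the Array holds the current values, and the list
-- is a stack of (index, previous value) records for undoing writes.
data TrailArray e = TrailArray (Array Int e) [(Int, e)]

fromListTA :: [e] -> TrailArray e
fromListTA xs = TrailArray (listArray (0, length xs - 1) xs) []

-- Write a value, pushing the overwritten value onto the trail.
writeTA :: Int -> e -> TrailArray e -> TrailArray e
writeTA i x (TrailArray a trail) =
  TrailArray (a // [(i, x)]) ((i, a ! i) : trail)

-- Undo the most recent write by restoring the saved old value.
undoTA :: TrailArray e -> TrailArray e
undoTA (TrailArray a ((i, old) : trail)) = TrailArray (a // [(i, old)]) trail
undoTA t = t

readTA :: Int -> TrailArray e -> e
readTA i (TrailArray a _) = a ! i
```

In the backtracking use case, undoTA plays the role of the "mutate back" step that a Trailer Array performs when the search retreats.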


UPDATE: I haven't written the code for the entire algorithm, but in my head n-queens can be implemented, at each iterated row, as a fold over the following array of diagonals, where each element is a triple of: (index of the row it is occupied by, or None; array of row indices intersecting the left-right diagonal; array of row indices intersecting the right-left diagonal). The rows can be iterated with recursion or a fold over an array of row indices (the fold does the recursion).

Here follows the interfaces for the data structure I envision. The syntax below is Copute, but I think it is close enough to Scala, that you can understand what is intended.

Note that any implementation of DiffArray will be unreasonably slow if it is multithreaded, but the n-queens backtracking algorithm doesn't require DiffArray to be multithreaded. Thanks to Edward Kmett for pointing that out in the comments for this answer.

interface Array[T]
{
   setElement  : Int -> T -> Array[T]     // Return copy with changed element.
   setElement  : Int -> Maybe[T] -> Array[T]
   array       : () -> Maybe[DiffArray[T]]// Return copy with the DiffArray interface, or None if first called setElement() before array().
}
// An immutable array, typically constructed with Array().
//
// If setElement() is first called before array(), setElement doesn't store differences,
// array() will return None, and thus setElement is as fast as a mutable imperative array.
//
// Else setElement stores differences, thus setElement is O(1) but with a constant extra overhead.
// And if setElement has been called, getElement incurs an up to O(n) sequential time complexity,
// because a copy must be made and the differences must be applied to the copy.
// The algorithm is described here:
//    http://stackoverflow.com/questions/1255018/n-queens-in-haskell-without-list-traversal/7194832#7194832
// Similar to Haskell's implementation:
//    http://www.haskell.org/haskellwiki/Arrays#DiffArray_.28module_Data.Array.Diff.29
//    http://www.haskell.org/pipermail/glasgow-haskell-users/2003-November/005939.html
//
// If a multithreaded implementation is used, it can be extremely slow,
// because there is a race condition on every method, which requires internal critical sections.

interface DiffArray[T] inherits Array[T]
{
   unset       : () -> Array[T]        // Return copy with the previous setElement() undone, and its difference removed.
   getElement  : Int -> Maybe[T]       // Return the element, or None if element is not set.
}
// An immutable array, typically constructed with Array( ... ) or Array().array.

UPDATE: I am working on the Scala implementation, which has an improved interface compared to what I had suggested above. I have also explained how an optimization for folds approaches the same constant overhead as a mutable array.

I have a solution. However, the constant may be large, so I don't really hope to beat anything.

Here is my data structure:

-- | Zipper over a list of integers
type Zipper = (Bool,  -- does the zipper point to an item?
               [Int], -- previous items
                      -- (positive numbers representing
                      --   negative offsets relative to the previous list item)
               [Int]  -- next items (positive relative offsets)
               )

type State =
  (Zipper, -- Free columns zipper
   Zipper, -- Free diagonal1 zipper
   Zipper  -- Free diagonal2 zipper
   )

It allows all of the required operations to be performed in O(1).

The code can be found here: http://hpaste.org/50707
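In case the paste is no longer available, here is a minimal plain list zipper of my own to illustrate why those operations are O(1) (the real structure additionally stores relative offsets and a focus flag, as in the Zipper type above):

```haskell
-- A plain list zipper: the first list is the reversed prefix, the second
-- list is the suffix with the focused item first. Moving the focus one
-- step or deleting the focused item touches only the list heads, so each
-- operation is O(1).
type ListZipper a = ([a], [a])

moveRight :: ListZipper a -> ListZipper a
moveRight (ps, x : xs) = (x : ps, xs)
moveRight z            = z

moveLeft :: ListZipper a -> ListZipper a
moveLeft (p : ps, xs) = (ps, p : xs)
moveLeft z            = z

-- Remove the focused item; the next item becomes the focus.
deleteFocus :: ListZipper a -> ListZipper a
deleteFocus (ps, _ : xs) = (ps, xs)
deleteFocus z            = z
```

The State above keeps one such zipper per constraint family, so marking a column or diagonal as used is a constant-time pointer move plus delete.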

The speed is bad -- it's slower than the reference solution posted in the question on most inputs. I've benchmarked them against each other on inputs [1,3 .. 15] and got the following time ratios ((reference solution time / my solution time) in %):

[24.66%, 19.89%, 23.74%, 41.22%, 42.54%, 66.19%, 84.13%, 106.30%]

Notice the almost linear slow-down of the reference solution relative to mine, showing the difference in asymptotic complexity.

My solution is probably horrible in terms of strictness and things like that, and must be fed to some very good optimizing compiler (like Don Stewart for example) to get better results.

Anyway, I think that in this problem O(1) and O(log n) are practically indistinguishable, because log(8) is just 3, and constants like that are the subject of micro-optimisation rather than of the algorithm.

Licensed under: CC-BY-SA with attribution