Question

I have a set of integer constraints that I would like to solve. The constraints can consist of additions of variables that are greater than, less than or equal to some constant.

Example:

A >= 20
A <= 30
B <= 10
A + B <= 25
...

There will be hundreds of such simple constraints, and the constants are much larger values (hundreds of thousands) in practice.

However, I don't just want a solution to these constraints: I want a random solution from the solution space. That doesn't mean each solution has to have equal probability (I don't think that's possible without enumerating them all?) but what I want is that for instance for the variable A the solution will typically not be 20 or 30, but rather that values in between are just as likely (or even more likely) to be picked.

What techniques would be appropriate for this kind of problem? I'm having trouble knowing where to look, because most algorithms focus on finding optimal or fast or minimal solutions rather than random ones.


Solution

Many Constraint Programming systems have a search heuristic (often called "indomain_random" or something similar) that yields solutions in random order (given some seed). Here's a MiniZinc model for a simple version of the problem:

 var 20..30: A;
 var 0..10: B;
 solve :: int_search([A,B], first_fail, indomain_random, complete) satisfy;
 constraint A + B <= 25;
 output [ show([A,B]) ];

Here are some solutions for a few different seeds using Gecode's FlatZinc solver:

Seed   Solution
---------------
0      [22,0]
2      [25,0]
3      [22,2]
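Outside of MiniZinc, the same idea is easy to sketch by hand: a backtracking search that tries domain values in a random order, so different seeds produce different first solutions. This is an illustrative Python sketch, not the actual Gecode implementation; the variable names and constraint mirror the model above.

```python
import random

def solve(domains, constraints, rng):
    """Return the first complete assignment found, trying values in random order."""
    def consistent(assignment):
        return all(c(assignment) for c in constraints)

    def backtrack(assignment, unassigned):
        if not unassigned:
            return assignment
        var = unassigned[0]
        values = list(domains[var])
        rng.shuffle(values)  # the "indomain_random" part: random value order
        for v in values:
            assignment[var] = v
            if consistent(assignment):
                result = backtrack(assignment, unassigned[1:])
                if result is not None:
                    return result
            del assignment[var]
        return None

    return backtrack({}, list(domains))

domains = {"A": range(20, 31), "B": range(0, 11)}
constraints = [
    # Only check a constraint once all of its variables are assigned.
    lambda a: "A" not in a or "B" not in a or a["A"] + a["B"] <= 25,
]

print(solve(domains, constraints, random.Random(0)))
```

Because the value order is shuffled, running this with different `random.Random` seeds tends to land on different corners of the solution space, just like the seed table above.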

OTHER TIPS

I would start by establishing relationships between all nodes that interact with variables of other nodes.

Make a pass over your graph marking all nodes that depend on no other nodes as visited. Then iterate over each of the nodes that depend on those nodes, shrinking their range (increasing min and decreasing max) in such a way that their formulas stay consistent. So if you have A.min=10, A.max=20, B.min=10, A+B=25, you can lower A.max to 15 (because B is at least 10, and 25-10=15). You've just cut A's range roughly in half.
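The range-shrinking step above can be sketched as a small bounds-tightening routine. This is a minimal illustration assuming constraints of the form `x + y == c`; the variable names and starting bounds are made up for the example (B is given an arbitrary upper bound of 30).

```python
def tighten_sum_eq(bounds, x, y, c):
    """Narrow the [min, max] intervals of x and y so x + y == c stays satisfiable."""
    xmin, xmax = bounds[x]
    ymin, ymax = bounds[y]
    # Since x = c - y, clip x's interval against c - [ymin, ymax],
    # and symmetrically for y. Both updates use the original intervals.
    bounds[x] = (max(xmin, c - ymax), min(xmax, c - ymin))
    bounds[y] = (max(ymin, c - xmax), min(ymax, c - xmin))

bounds = {"A": (10, 20), "B": (10, 30)}
tighten_sum_eq(bounds, "A", "B", 25)
print(bounds)  # A.max drops to 15, as in the example above; B.max drops to 15 too
```

Repeating this pass over all constraints until no interval changes is the classic bounds-propagation fixpoint that CP solvers run before (and during) search.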

This gets easier if you establish a master node: if A+B=25, does A depend on B or does B depend on A? Making your graph a directed graph is much easier to deal with, as the algorithms are simpler in directed graphs.

Once you've done all this you will notice islands appear: this is a good thing, because islands represent independent subgraphs that provide walls of separation - if you attempt a trial-and-error method, you only need to retry the islands that failed to reach a consistent state.
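Finding those islands is just connected-components over the variable graph: two variables are connected if they appear in the same constraint. A small sketch using union-find (the constraint lists here are illustrative):

```python
from collections import defaultdict

def islands(constraint_vars):
    """Group variables into connected components via union-find."""
    parent = {}

    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    def union(a, b):
        parent[find(a)] = find(b)

    for vars_in_c in constraint_vars:
        for v in vars_in_c[1:]:
            union(vars_in_c[0], v)

    groups = defaultdict(set)
    for v in parent:
        groups[find(v)].add(v)
    return list(groups.values())

# A+B<=25 links A and B; C+D<=40 links C and D; the two pairs never interact,
# so they form two separate islands that can be solved independently.
print(islands([("A", "B"), ("C", "D")]))
```

With hundreds of constraints this decomposition can shrink the retry cost dramatically, since a failed random attempt only invalidates one island.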

Not really a complete answer, but maybe useful, and too long for a comment:

It may help you to know that the solution space is convex. Meaning, if you have two solutions A1, B1, C1 and A2, B2, C2, then any triple in between them also satisfies the constraints (strictly speaking this holds for the real-valued relaxation; with integer variables an in-between point may need rounding, after which it should be re-checked).

(Here, "in between" means that there is some real number t in the range [0,1], so that:

  • Anew = t * A1 + (1 - t) * A2
  • Bnew = t * B1 + (1 - t) * B2
  • Cnew = t * C1 + (1 - t) * C2

To see why, just plug these expressions for Anew, Bnew, Cnew into your inequalities: because each inequality is linear and holds for both A1, B1, C1 and A2, B2, C2, it also holds for any weighted average of the two.)

You can use this information to limit the region of n space you need to search. For instance, if you find one solution, and you want to know how far out in some direction your solution space extends, you can run something like a binary search down towards the known solution. Etc...
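One cheap way to exploit this: once you have two (or more) solutions, blend them with a random weight t to generate new interior points. This is a hedged sketch, not a full sampler; the `feasible` check and the two seed solutions are made up to match the example constraints, and the re-check after rounding is needed because rounding an integer blend can, in general, step outside the feasible region.

```python
import random

def feasible(s):
    """The example constraints: A in [20,30], B in [0,10], A + B <= 25."""
    return 20 <= s["A"] <= 30 and 0 <= s["B"] <= 10 and s["A"] + s["B"] <= 25

def blend(s1, s2, rng):
    """Round a random convex combination of two known solutions."""
    t = rng.random()
    return {k: round(t * s1[k] + (1 - t) * s2[k]) for k in s1}

s1 = {"A": 20, "B": 0}
s2 = {"A": 25, "B": 0}
rng = random.Random(42)
samples = [blend(s1, s2, rng) for _ in range(5)]
print(samples)  # integer points on the segment between s1 and s2
print(all(feasible(s) for s in samples))
```

Repeatedly blending previously generated points in random directions is essentially a crude "hit-and-run"-style walk over the feasible region, and it naturally favors interior values over the extreme corners, which is exactly the bias the question asks for.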

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow