Question

I am trying to model the following constraint in a MIP:

x_1 + x_2 + ... + x_n != d

The idea is to introduce a binary variable z that is 1 if x_1 + x_2 + ... + x_n = d, and to add the constraint

z <= 0.

But I cannot figure out how to model the constraint

(x_1 + x_2 + ... + x_n = d) ==> z = 1

in an integer program.


Solution

I assume all x_i are integers. Let L and U be constants such that

L <= x_1 + x_2 + ... + x_n <= U

and let y be a binary variable. The following constraints express what you are looking for:

x_1 + x_2 + ... + x_n >= d + 1 + (L - d - 1)y

x_1 + x_2 + ... + x_n <= d - 1 + (U - d + 1)(1 - y)

If y = 0, the first constraint becomes x_1 + x_2 + ... + x_n >= d + 1, which must hold, while the second constraint reduces to x_1 + x_2 + ... + x_n <= U, which is satisfied by the definition of U.

If y = 1, the second constraint becomes x_1 + x_2 + ... + x_n <= d - 1, which must hold, while the first constraint reduces to x_1 + x_2 + ... + x_n >= L, which is satisfied by the definition of L.
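
To make this concrete, here is a minimal sketch in Python using the PuLP modeling library (my choice; any MIP modeler works the same way). The instance values n = 3, d = 10, and 0 <= x_i <= 5 are hypothetical and give L = 0 and U = 15:

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpInteger, LpBinary, lpSum, value

# Hypothetical instance: n = 3 integer variables in [0, 5], forbidden sum d = 10.
n, d = 3, 10
L, U = 0, 15  # bounds on x_1 + ... + x_n implied by 0 <= x_i <= 5

prob = LpProblem("sum_not_equal_d", LpMinimize)
x = [LpVariable(f"x{i}", lowBound=0, upBound=5, cat=LpInteger) for i in range(1, n + 1)]
y = LpVariable("y", cat=LpBinary)

s = lpSum(x)
prob += s >= d + 1 + (L - d - 1) * y        # y = 0 forces s >= d + 1
prob += s <= d - 1 + (U - d + 1) * (1 - y)  # y = 1 forces s <= d - 1

prob += -s  # arbitrary objective for the demo: maximize the sum
prob.solve()
print([value(v) for v in x], "sum =", value(s), "y =", value(y))
```

Every feasible solution has a sum of at most d - 1 = 9 or at least d + 1 = 11, never exactly 10; here the solver maximizes the sum and returns 15 with y = 0.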



This is the infamous big-M method in integer programming. It can lead to weak LP relaxations, and it can also lead to numerically ill-conditioned problems.
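
To see the weakness concretely: once y may take fractional values in the LP relaxation, the forbidden sum becomes feasible again. A quick check with the same hypothetical numbers as above (d = 10, L = 0, U = 15):

```python
from fractions import Fraction

# With s = d = 10 (the value we want to exclude), a fractional y satisfies
# both big-M constraints, so the LP relaxation does not cut off s = d.
d, L, U = 10, 0, 15
s = d
y = Fraction(d + 1 - s, d + 1 - L)  # smallest feasible y for the first constraint: 1/11
print(s >= d + 1 + (L - d - 1) * y)        # True: the first constraint holds with equality
print(s <= d - 1 + (U - d + 1) * (1 - y))  # True: 10 <= 159/11 ~ 14.45
```

In fact, for every value of the sum in [L, U] there is some fractional y satisfying both constraints, so the relaxation alone excludes nothing; the disjunction only takes effect once y is forced to be integral.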


For further tricks, google "integer programming tricks". In particular, see the AIMMS Modeling Guide - Integer Programming Tricks chapter for this big-M technique.

Licensed under: CC-BY-SA with attribution