Question

Consider the following algorithm for linear programming, minimizing [c,x] with A.x <= b.


(1) Start with a feasible point x_0

(2) Given a feasible point x_k, find the greatest alpha such that x_k - alpha.c is admissible (straightforward: look at the component-wise ratios of b - A.x_k to A.c)

(3) Take the inward-pointing unit normal n of the hyperplane just reached. Project n onto the level plane of [c, .], giving r = n - ([n,c]/[c,c]).c, then find the greatest beta for which x_k - alpha.c + beta.r is admissible. Set x_{k+1} = x_k - alpha.c + (1/2).beta.r

(4) If x_{k+1} is within tolerance of x_k, return it; otherwise go to (2).
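In case it helps pin the description down, here is a rough numpy sketch of one iteration (steps 2 and 3) as I read them. The helper name `max_step`, the strict-feasibility assumption, and the way the active constraint is picked are my own choices, and corner cases (unbounded directions, ties between several constraints) are ignored:

```python
import numpy as np

def max_step(A, b, x, d, eps=1e-12):
    """Largest t >= 0 such that A @ (x + t*d) <= b (component-wise ratio test)."""
    Ad = A @ d
    slack = b - A @ x              # non-negative for a feasible x
    mask = Ad > eps                # only constraints that tighten along d matter
    if not np.any(mask):
        return np.inf              # the polyhedron is unbounded along d
    return np.min(slack[mask] / Ad[mask])

def iterate(A, b, c, x):
    """One step of the proposed method: slide along -c, then kick back inside."""
    # Step (2): go as far as possible along the descent direction -c.
    alpha = max_step(A, b, x, -c)
    y = x - alpha * c

    # Step (3): inward unit normal of the constraint that became active.
    i = np.argmin(b - A @ y)            # row with (nearly) zero slack
    n = -A[i] / np.linalg.norm(A[i])    # -A[i] points back into {A x <= b}

    # Project n onto the level plane of [c, .]: r = n - ([n,c]/[c,c]).c
    r = n - (n @ c) / (c @ c) * c

    # Move halfway to the next constraint along r.
    beta = max_step(A, b, y, r)         # may be inf if unbounded along r
    return y + 0.5 * beta * r
```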


The basic idea is to follow the negative gradient until one hits a wall. Then, rather than following the shell of the simplex, as the simplex algorithm would, the solution is kicked back inside the simplex, in the direction of the normal vector, onto a plane where the solution is no worse. The new point lies halfway between the point where the wall was hit and the next constraint in that direction. It is no worse than before, but now it is more "inside" the simplex, where it has a shot at taking long leaps towards the optimum.

  • Though the probability of hitting an intersection of more than one hyperplane is 0, if one gets close enough to multiple hyperplanes within a certain tolerance, the average of their normals may be taken.

  • This can be generalized to any convex objective function by following geodesics on the level sets of the function. For quadratic programming in particular, one rotates the solution towards the inside of the simplex.


Questions:

  • Does this algorithm have a name or fall within a category of linear-programming algorithms?

  • Does it have an obvious flaw that I'm overlooking?


Solution

I am pretty sure this doesn't work, unless I am missing something: your algorithm will not start moving in most cases.

Assume your variable x is taken in R^n.

A polyhedron of the form Ax <= b is contained in a 'maximal' affine subspace P of dimension p <= n, and usually p is much smaller than n (you will have a lot of equality constraints, which can be implicit or explicit: you cannot assume that the expression of P is simple to obtain from A and b).

Now assume you can find an initial point x_0 (which is far from obvious, by the way); there is very little chance that the direction of the gradient c is a feasible direction. You would need to consider the projection of the direction c on P, and this is very difficult to do in practice (how would you compute such a projection?).
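To make this concrete with a tiny example of my own (not from the original exchange): write the implicit equality x1 + x2 = 1 as the two inequalities x1 + x2 <= 1 and -x1 - x2 <= -1. The feasible set then lies in a one-dimensional affine subspace P, and the plain ratio test along -c already returns alpha = 0, so the algorithm cannot move at all unless c is first projected onto P:

```python
import numpy as np

A = np.array([[ 1.0,  1.0],    #  x1 + x2 <= 1
              [-1.0, -1.0],    # -x1 - x2 <= -1  (together: x1 + x2 = 1)
              [-1.0,  0.0],    #  x1 >= 0
              [ 0.0, -1.0]])   #  x2 >= 0
b = np.array([1.0, -1.0, 0.0, 0.0])
c = np.array([1.0, 0.0])
x0 = np.array([0.5, 0.5])      # feasible (and on the implicit equality)

d = -c                         # the proposed descent direction
Ad = A @ d
slack = b - A @ x0
mask = Ad > 0                  # constraints that tighten along d
print(np.min(slack[mask] / Ad[mask]))  # 0.0 -- the -x1 - x2 <= -1 row blocks any move
```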

Then, what you want in your step (3) is not the normal direction of the hyperplane you reached, but again its projection on P (visualizing the polyhedron as a 2D polygon embedded in 3D space can help).

There is a very good reason why barrier functions are used in interior point methods: it is very difficult in practice to describe the geometry of high-dimensional convex sets from the constraints (even simple ones like polyhedra), and things that "seem obvious" when you draw a picture on a piece of paper usually stop working as the dimension of the polyhedron increases.
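For reference, this is the standard inequality-form log barrier those methods work with (a textbook formula, not something from this thread): the barrier term blows up as any slack b_i - [a_i, x] approaches zero, so the constraint geometry is handled smoothly by ordinary calculus instead of by tracking faces and normals explicitly.

```python
import numpy as np

def log_barrier(A, b, c, x, t):
    """Value and gradient of t*[c,x] - sum_i log(b_i - [a_i, x]) for A @ x < b."""
    slack = b - A @ x
    assert np.all(slack > 0), "x must be strictly feasible"
    value = t * (c @ x) - np.sum(np.log(slack))
    grad = t * c + A.T @ (1.0 / slack)   # the barrier's gradient pushes away from the walls
    return value, grad
```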

One last point is that your algorithm would not give the exact solution, whereas the simplex method does in theory (and I read somewhere that it can be done in practice by working with exact rational numbers).

Other tips

Read up on interior point methods: http://en.wikipedia.org/wiki/Interior_point_method

This approach can have nice theoretical properties, but the algorithm's performance tends to tail off in practice.
