Question

I'm trying to find the minimum of a function of N parameters using gradient descent, while constraining the sum of the absolute values of the parameters to be 1 (or <= 1, it doesn't matter which). To do this I'm using the method of Lagrange multipliers: if my function is f(x), I will be minimizing f(x) + lambda * (g(x) - 1), where g(x) is a smooth approximation of the sum of the absolute values of the parameters.

Now, as I understand it, the gradient of this function is only 0 where g(x) = 1, so a method that finds a local minimum should find the minimum of my function subject to my constraint. The problem is that this addition makes my function unbounded, so gradient descent simply finds larger and larger lambdas with larger and larger parameters (in absolute value) and never converges.

At the moment I'm using Python's (SciPy's) implementation of CG, so I would really prefer suggestions that use an existing method rather than requiring me to rewrite or tweak the CG code myself.
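To make the setup concrete, here is a minimal sketch of the approach described above, with a hypothetical f and a sqrt-based smoothing for g (the question does not say which smoothing is used); as described, minimizing this augmented objective directly tends not to converge:

    import numpy as np
    from scipy.optimize import fmin_cg

    # Hypothetical objective, standing in for the question's f.
    def f(x):
        return np.sum((x - 0.3) ** 2)

    # Smooth approximation of sum(|x_i|); sqrt-smoothing is one common
    # choice (an assumption here, not taken from the question).
    def g(x, eps=1e-8):
        return np.sum(np.sqrt(x ** 2 + eps))

    # Augmented objective over (x, lambda). It is unbounded below, which
    # is exactly why plain descent drifts to ever-larger multipliers.
    def lagrangian(z):
        x, lam = z[:-1], z[-1]
        return f(x) + lam * (g(x) - 1.0)

    z0 = np.append(np.ones(4) / 4.0, 1.0)  # N = 4 parameters, plus lambda
    fmin_cg(lagrangian, z0)                # typically fails to converge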


Solution

The problem is that when using Lagrange multipliers, the critical points don't occur at local minima of the Lagrangian; they occur at saddle points instead. Since gradient descent is designed to find local minima, it fails to converge when you give it a problem with constraints in this form.

There are typically three solutions:

  • Use a numerical method which is capable of finding saddle points, e.g. Newton's method. These typically require analytical expressions for both the gradient and the Hessian, however. (Applying an existing root finder to the gradient of the Lagrangian achieves the same thing; see the first sketch after this list.)
  • Use penalty methods. Here you add an extra (smooth) term to your cost function which is zero when the constraints are satisfied (or nearly satisfied) and very large when they are not. You can then run gradient descent as usual. However, this often has poor convergence properties, as it makes many small adjustments to ensure the parameters satisfy the constraints. (See the second sketch after this list.)
  • Instead of looking for critical points of the Lagrangian, minimize the square of the gradient of the Lagrangian. Obviously, if all derivatives of the Lagrangian are zero, then the square of the gradient will be zero, and since the square of something can never be less than zero, you will find the same solutions as you would by extremizing the Lagrangian. However, if you want to use gradient descent, you then need an expression for the gradient of the square of the gradient of the Lagrangian, which might not be easy to come by.
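For the first option, one way to reuse an existing routine is to hand the gradient of the Lagrangian to a root finder, which is perfectly happy to land on saddle points. A minimal sketch with scipy.optimize.fsolve, where f and g are hypothetical placeholders for the question's functions and the gradient is approximated numerically:

    import numpy as np
    from scipy.optimize import fsolve, approx_fprime

    # Hypothetical f and smooth constraint g, standing in for the real ones.
    def f(x):
        return np.sum((x - 0.3) ** 2)

    def g(x, eps=1e-8):
        return np.sum(np.sqrt(x ** 2 + eps))

    def lagrangian(z):
        x, lam = z[:-1], z[-1]
        return f(x) + lam * (g(x) - 1.0)

    def grad_lagrangian(z):
        # Numerical gradient; an analytic expression would be more robust.
        return approx_fprime(z, lagrangian, 1e-6)

    z0 = np.append(np.ones(4) / 4.0, 1.0)
    z_star = fsolve(grad_lagrangian, z0)  # solves grad L = 0, saddles included
    x_star, lam_star = z_star[:-1], z_star[-1]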
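For the second option, a quadratic penalty with an increasing weight is the usual recipe. A minimal sketch, again with hypothetical f and g, running the questioner's CG routine on the penalized objective:

    import numpy as np
    from scipy.optimize import fmin_cg

    def f(x):
        return np.sum((x - 0.3) ** 2)      # hypothetical objective

    def g(x, eps=1e-8):
        return np.sum(np.sqrt(x ** 2 + eps))

    def penalized(x, mu):
        # Zero when g(x) = 1, and increasingly expensive otherwise.
        return f(x) + mu * (g(x) - 1.0) ** 2

    x = np.ones(4) / 4.0
    for mu in [1.0, 10.0, 100.0, 1000.0]:  # gradually tighten the penalty
        x = fmin_cg(penalized, x, args=(mu,), disp=False)

Warm-starting each solve from the previous solution is what keeps the ill-conditioning of large mu manageable.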

Personally, I would go with the third approach, and find the gradient of the square of the gradient of the Lagrangian numerically if it's too difficult to get an analytic expression for it.
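A minimal sketch of that third approach, reusing the hypothetical f and g from the sketches above and differentiating everything numerically (which is exactly the caveat mentioned: CG will finite-difference a function that is itself built from finite differences, so expect some noise):

    import numpy as np
    from scipy.optimize import fmin_cg, approx_fprime

    def f(x):
        return np.sum((x - 0.3) ** 2)      # hypothetical objective

    def g(x, eps=1e-8):
        return np.sum(np.sqrt(x ** 2 + eps))

    def lagrangian(z):
        x, lam = z[:-1], z[-1]
        return f(x) + lam * (g(x) - 1.0)

    def sq_grad_norm(z):
        grad = approx_fprime(z, lagrangian, 1e-6)
        return np.dot(grad, grad)          # zero exactly at stationary points of L

    z0 = np.append(np.ones(4) / 4.0, 1.0)
    z_star = fmin_cg(sq_grad_norm, z0)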

Also, you don't quite make it clear in your question: are you using gradient descent, or CG (conjugate gradients)?

Other Tips

Probably too late to be helpful to the OP, but this may be useful to others in the same situation:

A problem with absolute-value constraints can often be reformulated into an equivalent problem that only has linear constraints, by adding a few "helper" variables.

For example, consider problem 1:

Find (x1,x2) that minimises f(x1,x2) subject to the nonlinear constraint |x1|+|x2|<=10.

There is a linear-constraint version, problem 2:

Find (x1,x2,x3,x4) that minimises f(x1,x2) subject to the following linear constraints:

  1. x1<=x3
  2. -x1<=x3
  3. x2<=x4
  4. -x2<=x4
  5. x3+x4<=10

Note:

  • If (x1,x2,x3,x4) satisfies the constraints for problem 2, then (x1,x2) satisfies the constraints for problem 1 (constraints 1-2 force x3 >= |x1| and constraints 3-4 force x4 >= |x2|, so constraint 5 gives |x1| + |x2| <= x3 + x4 <= 10)
  • If (x1,x2) satisfies the constraints for problem 1, then we can extend it to (x1,x2,x3,x4) satisfying the constraints for problem 2 by setting x3 = |x1|, x4 = |x2|
  • x3 and x4 have no effect on the objective function

It follows that finding an optimum for problem 2 will give you an optimum for problem 1, and vice versa.
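Here is a minimal sketch of this reformulation with SciPy's SLSQP solver, using a hypothetical f whose unconstrained minimum violates the constraint (inequality constraints in this API are expressed as fun(z) >= 0):

    import numpy as np
    from scipy.optimize import minimize

    def f(x12):
        # Hypothetical objective; its unconstrained minimum (20, -5)
        # has |x1| + |x2| = 25 > 10, so the constraint is active.
        return (x12[0] - 20.0) ** 2 + (x12[1] + 5.0) ** 2

    def objective(z):
        return f(z[:2])                    # x3, x4 do not enter the objective

    cons = [
        {'type': 'ineq', 'fun': lambda z: z[2] - z[0]},         # 1.  x1 <= x3
        {'type': 'ineq', 'fun': lambda z: z[2] + z[0]},         # 2. -x1 <= x3
        {'type': 'ineq', 'fun': lambda z: z[3] - z[1]},         # 3.  x2 <= x4
        {'type': 'ineq', 'fun': lambda z: z[3] + z[1]},         # 4. -x2 <= x4
        {'type': 'ineq', 'fun': lambda z: 10.0 - z[2] - z[3]},  # 5.  x3 + x4 <= 10
    ]

    res = minimize(objective, np.zeros(4), method='SLSQP', constraints=cons)
    x1, x2 = res.x[:2]                     # |x1| + |x2| <= 10 holds here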

I've found an old paper from 1988 titled "Constrained Differential Optimization" that solves this problem in a really nice and simple way.

In that paper, the author claims that for the Lagrangian L(x, b) = f(x) + b * g(x),

doing gradient descent on x while doing gradient ascent on b will eventually converge to a stationary point of L(x, b), which is a local minimum of f(x) under the constraint g(x) = 0. A penalty method can also be combined with this to make convergence faster and more stable.

Generally, just reversing the sign of the gradient step for b will work.

I've tried it in several simple cases and it works, though I don't fully understand why even after reading that paper.
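A minimal sketch of this descent/ascent scheme on a toy problem (the problem, learning rate, and iteration count are all illustrative assumptions, not taken from the paper):

    import numpy as np

    # Toy problem: minimize f(x) = x1^2 + x2^2 subject to g(x) = x1 + x2 - 1 = 0.
    # The known solution is x = (0.5, 0.5) with multiplier b = -1.
    def grad_f(x):
        return 2.0 * x

    def g(x):
        return x[0] + x[1] - 1.0

    def grad_g(x):
        return np.ones(2)

    x, b, lr = np.zeros(2), 0.0, 0.01
    for _ in range(20000):
        x = x - lr * (grad_f(x) + b * grad_g(x))  # descend on x: dL/dx
        b = b + lr * g(x)                         # ascend on b:  dL/db = g(x)

    print(x, b)  # approximately [0.5 0.5] and -1.0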

License: CC-BY-SA with attribution
Not affiliated with StackOverflow