Question

I am trying to globally minimize three functions that share common variables. I tried combining them into one function and minimizing that with L-BFGS-B (I need to set bounds on the variables), but it has proven very difficult to balance the terms with weightings, i.e. when one is minimised the others are not. I also tried the SLSQP method, minimizing one of them while setting the others as constraints, but the constraints are often ignored/not met. Here is what needs to be minimized; all the maths is done in meritscalculation, and meritoflength, meritofROC, meritofproximity and heightorder are returned from the calculations as globals.

def lengthmerit(x0):
    meritscalculation(x0)
    print(meritoflength)
    return meritoflength

def ROCmerit(x0):
    meritscalculation(x0)
    print(meritofROC)
    return meritofROC

def proximitymerit(x0):
    meritscalculation(x0)
    print(meritofproximity + heightorder)
    return meritofproximity + heightorder

I want to minimize all of them using a common x0 (with bounds) as the independent variable. Is there a way to achieve this?
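For reference, a sketch of the kind of SLSQP setup described above (only a sketch: x0 and bnds stand for the starting point and bounds, and the constraint limits shown are placeholders rather than the real targets):

from scipy.optimize import minimize

# SLSQP "ineq" constraints require fun(x) >= 0; the limits below are placeholders.
constraints = (
    {"type": "ineq", "fun": lambda x: 1.0 - ROCmerit(x)},          # keep meritofROC <= 1.0
    {"type": "ineq", "fun": lambda x: 0.02 - proximitymerit(x)},   # keep proximity + heightorder <= 0.02
)

res = minimize(lengthmerit, x0, method="SLSQP", bounds=bnds, constraints=constraints)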


The solution

Is this what you want to do?

minimize a * amerit(x) + b * bmerit(x) + c * cmerit(x)
over a, b, c, x:
    a + b + c = 1
    a >= 0.1, b >= 0.1, c >= 0.1 (say)
    x in xbounds

If x is say [x0 x1 .. x9], set up a new variable abcx = [a b c x0 x1 .. x9], constrain a + b + c = 1 with a penalty term added to the objective function, and minimize this:

def fabc(abcx):
    """ abcx = a, b, c, x
        -> a * amerit(x) + ... + penalty 100 (a + b + c - 1)^2
    """
    a, b, c, x = abcx[0], abcx[1], abcx[2], abcx[3:]  # split
    fa = a * amerit(x)
    fb = b * bmerit(x)
    fc = c * cmerit(x)
    penalty = 100 * (a + b + c - 1) ** 2  # 100 ?
    f = fa + fb + fc + penalty
    print("fabc: %6.2g = %6.2g + %6.2g + %6.2g + %6.2g   a b c: %6.2g %6.2g %6.2g" % (
        f, fa, fb, fc, penalty, a, b, c))
    return f

and bounds = [[0.1, 0.5]] * 3 + xbounds, i.e. each of a, b, c in 0.1 .. 0.5 or so.
The long prints should show you why one of a, b, c approaches 0 -- maybe one of amerit(), bmerit(), cmerit() is way bigger than the others? Plots instead of prints would be easy too.
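A minimal sketch of running this with L-BFGS-B (assuming xbounds and a starting point x0 already exist, and that fabc and the merit functions are defined as above):

import numpy as np
from scipy.optimize import minimize

bounds = [[0.1, 0.5]] * 3 + xbounds               # a, b, c first, then the x bounds
abcx0 = np.concatenate(([0.33, 0.33, 0.34], x0))  # start with roughly equal weights
res = minimize(fabc, abcx0, method="L-BFGS-B", bounds=bounds)
a, b, c, xbest = res.x[0], res.x[1], res.x[2], res.x[3:]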

Summary:
1) Formulate the problem clearly on paper, as at the top.
2) Translate that into Python.

Other tips

Here is the result of some scaling and weighting.

objective function:

merit_function = wa*meritoflength*1e3 + wb*meritofROC + wc*meritofproximity + wd*heightorder*10 + 1000*(wa + wb + wc + wd - 1)**2
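As a sketch only (assuming meritscalculation and the global merit values from the question, and that the first four entries of abcdex are the weights), the objective above wrapped as a single callable might look like:

def merit_function(abcdex):
    wa, wb, wc, wd = abcdex[:4]   # weights
    x = abcdex[4:]                # the actual design variables
    meritscalculation(x)          # updates the global merit values
    return (wa * meritoflength * 1e3
            + wb * meritofROC
            + wc * meritofproximity
            + wd * heightorder * 10
            + 1000 * (wa + wb + wc + wd - 1) ** 2)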

input:

abcdex = np.array((0.5, 0.5, 0.1, 0.3, 0.1...))

output:

fun: array([ 7.79494644])
  x: array([  4.00000000e-01,   2.50000000e-01,   1.00000000e-01,
              2.50000000e-01...])


meritoflength:    0.00465499380753   # target 1e-5, usually starts at 0.1
meritofROC:       23.7317956542      # target ~1, range < 33
heightorder:      0                  # target: strictly 0, range < 28
meritofproximity: 0.0                # target: less than 0.02, range < 0.052

I realised after a few runs that all the weightings tend to stay at the minimum values of their bounds, and I'm back to manually tuning the scaling problem I started with.

Is there a possibility that my optimisation function isn't finding the true global minimum?

Here is how I minimised it:

from scipy.optimize import basinhopping, minimize

minimizer_kwargs = {"method": "L-BFGS-B", "bounds": bnds, "tol": 1e0}

ret = basinhopping(merit_function, abcdex, minimizer_kwargs=minimizer_kwargs, niter=10)
zoom = ret['x']

res = minimize(merit_function, zoom, method='L-BFGS-B', bounds=bnds, tol=1e-6)
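One way to probe whether this is only a local minimum is to run basinhopping longer and from several random starts inside the bounds, then compare the best values found. This is just a sketch, with merit_function and bnds assumed as above:

import numpy as np
from scipy.optimize import basinhopping

rng = np.random.default_rng(0)
best = None
for trial in range(5):
    # Random starting point inside the box bounds.
    start = np.array([lo + rng.random() * (hi - lo) for lo, hi in bnds])
    out = basinhopping(merit_function, start,
                       minimizer_kwargs={"method": "L-BFGS-B", "bounds": bnds},
                       niter=100, seed=trial)
    if best is None or out.fun < best.fun:
        best = out

print(best.fun)
print(best.x)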