Question

Are there any libraries for sequential non-linear optimization with upper and lower bounds, as well as inequality constraints, that are written in or easily callable from Haskell?

The solution

The bindings-levmar package provides bindings to a C Levenberg-Marquardt optimizer.

Other tips

A quick grep of Hackage suggests that nonlinear-optimization is the best (only) already-written thing; however, it doesn't seem to include anything for bounded optimization.

Your best bet seems to be one of these (in order of increasing attractiveness):

  1. Start your own project.
  2. Extend the above package.
  3. Find a decent C library and learn enough FFI to bind to it (a minimal FFI sketch follows).
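For option 3, the FFI part is smaller than it may look. Here is a minimal sketch of the binding pattern, using cbrt from the C math library as a stand-in, since the actual entry point, header and signature depend on which optimization library you pick:

{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types (CDouble)

-- Import a plain C function; an optimizer binding follows the same pattern,
-- usually with Ptr arguments for vectors and a FunPtr for the objective callback.
foreign import ccall unsafe "math.h cbrt"
    c_cbrt :: CDouble -> CDouble

main :: IO ()
main = print (realToFrac (c_cbrt 27) :: Double)  -- prints 3.0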

I know the OP asked for a general optimization library; my experience with those is:

  • The levmar package depends on the blas and lapack libraries, which makes it very complicated to install; I didn't manage to install it on Windows in a way that left ghci working.
  • The nonlinear-optimization package requires a gradient function.
  • The optimization package also seems to require gradients, though I couldn't figure out how to actually use it.

Moreover, none of the mentioned packages seem to have any real documentation.
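If you only have the objective function but want to try one of the gradient-requiring packages anyway, you can approximate the gradient numerically and hand that in. This is a generic central-difference sketch, not tied to any of the above packages' actual APIs, and the step size is a rough assumption:

-- Central-difference approximation of the gradient of f at x.
-- Fine for smooth functions; the step size trades truncation error against rounding error.
numericalGradient :: ([Double] -> Double) -> [Double] -> [Double]
numericalGradient f x =
    [ (f (bump i h) - f (bump i (-h))) / (2 * h) | i <- [0 .. length x - 1] ]
  where
    h = 1e-6
    bump i dx = [ if j == i then xj + dx else xj | (j, xj) <- zip [0 ..] x ]

For example, numericalGradient (\[x, y] -> x*x + y*y) [1, 2] gives approximately [2, 4].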

Fortunately, for simple problems a simple solution can be enough. If you want to optimize a one-dimensional, smooth and convex function with a single bracketed extremum, but you do not know a gradient function (see below if you do [1]), then a simple method like Golden Section Search will do.

Translated from the Wikipedia page:

import Data.Maybe (fromMaybe)

-- 1 / phi
invphi :: Double
invphi = (sqrt 5 - 1) / 2
-- 1 / phi^2
invphi2 :: Double
invphi2 = (3 - sqrt 5) / 2

-- | Enable optional arguments syntax. Use with Maybe a as parameter type, then in the function write param // defaultValue
(//) :: Maybe a -> a -> a
(//) = flip fromMaybe

-- | Just a wrapper function that hides all the ugly Nothings of the recursive function
goldenSectionSearch :: (Double -> Double) -> Double -> Double -> Double -> (Double, Double)
goldenSectionSearch f a b tolerance = goldenSectionSearchRecursive f a b tolerance Nothing Nothing Nothing Nothing Nothing

-- | Golden section search, recursive.
-- Given a function f with a single local maximum in the interval [a, b], golden section search returns a subset interval [c, d] that contains the maximum with d-c <= tolerance
-- Taken from the python implementation at https://en.wikipedia.org/wiki/Golden-section_search
goldenSectionSearchRecursive ::
    (Double -> Double) -- ^ Function with a single maximum in [a, b]
    -> Double -- ^ One side of the interval
    -> Double -- ^ Other side of the interval
    -> Double -- ^ Tolerance
    -> Maybe Double -- ^ h, current search interval width. If Nothing, it is computed as b - a.
    -> Maybe Double -- ^ c, left inner point. If Nothing, a new point is chosen.
    -> Maybe Double -- ^ d, right inner point. If Nothing, a new point is chosen.
    -> Maybe Double -- ^ f(c), Function value at c
    -> Maybe Double -- ^ f(d), Function value at d
    -> (Double, Double) -- ^ The interval in which the maximum is

goldenSectionSearchRecursive f a' b' tolerance h' c' d' fc' fd'
    | h < tolerance = (a, b)
    | fc > fd = goldenSectionSearchRecursive f a d tolerance (Just (h * invphi)) Nothing (Just c) Nothing (Just fc)
    | otherwise = goldenSectionSearchRecursive f c b tolerance (Just (h * invphi)) (Just d) Nothing (Just fd) Nothing
    where
        a = min a' b'
        b = max a' b'
        h = h' // (b - a)
        c = c' // (a + invphi2 * h)
        d = d' // (a + invphi * h)
        fc = fc' // f c
        fd = fd' // f d

and then you call it with goldenSectionSearch (\x -> -(x-2)^2) 1 5 1e-5, which returns (1.9999959837979107,2.0000050911830893). This simple function would of course be much easier to solve by hand, but it's just an example.

PS: An interesting property of Golden Section Search is that its convergence rate is exactly known: on each iteration the length of the interval containing the optimum is divided by the golden ratio.
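That also lets you estimate the work up front: since the interval shrinks by a factor of φ ≈ 1.618 per iteration, roughly log_φ((b - a) / tolerance) iterations are needed. A small helper sketch (the name is mine, not part of the code above):

-- Rough number of golden section iterations needed to shrink [a, b] below the tolerance.
gssIterations :: Double -> Double -> Double -> Int
gssIterations a b tolerance = ceiling (logBase phi ((b - a) / tolerance))
  where phi = (1 + sqrt 5) / 2

For the example above, gssIterations 1 5 1e-5 gives 27.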

PPS: I put it on GitHub.

[1] Note that if you know a gradient function, equating it to zero and applying a root-finding method is often much faster. For example, in one dimension, Will Ness pointed to his answer, which has a simple method with a faster convergence rate than Golden Section Search. You could of course also use one of the mentioned packages that require a gradient function.
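To make the footnote concrete: with a gradient g = f' in hand, even plain bisection on g converges faster than Golden Section Search (it halves the bracket each step instead of dividing it by φ). A minimal sketch, assuming g changes sign on [a, b]:

-- Find a zero of g in [a, b] by bisection, assuming g a and g b have opposite signs.
-- For optimization, pass the gradient f'; its zero is a candidate extremum of f.
bisect :: (Double -> Double) -> Double -> Double -> Double -> Double
bisect g a b tolerance
    | b - a < tolerance            = m
    | signum (g m) == signum (g a) = bisect g m b tolerance
    | otherwise                    = bisect g a m tolerance
  where m = (a + b) / 2

For the same example, bisect (\x -> -2 * (x - 2)) 1 5 1e-5 returns roughly 2.0.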

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow