Question

I am trying to get a physically significant fit to some experimental data. I know that the y values increase monotonically with x, and that dy/dx also increases monotonically. I have tried a number of fitting functions, including a polynomial fit and a univariate spline, but neither has let me produce the fit I am looking for.

So, I'm looking for a curve-fitting function (in scipy?) that will let me impose these known constraints on the final curve. Below is an example of my data, with a fitted line whose derivative is not monotonically increasing.

import numpy as np
import matplotlib.pyplot as plt

data =  np.array([[  6.30991828, -10.22329935],
                  [  6.30991828, -10.2127338 ],
                  [  6.47697236, -10.01359361],
                  [  6.47697236,  -9.89353722],
                  [  6.47697236,  -9.81708052],
                  [  6.55108034,  -9.42113403],
                  [  6.55108034,  -9.21932801],
                  [  6.58617165,  -8.40428977],
                  [  6.62007321,  -7.6500927 ]])

x_interp = np.linspace(data[:, 0].min(), data[:, 0].max(), 20)
f = np.polyfit(data[:, 0], data[:, 1], 3)
y_interp = np.polyval(f, x_interp)
plt.plot(data[:, 0], data[:, 1], 'x', x_interp, y_interp, '-')
plt.show()

EDIT: I believe that you can do this in MATLAB with the slmengine.


Solution

Edit: Another attempt. I posted a half-baked answer before, and I misread the question as well. I hope this one is better.

from scipy.optimize import minimize
import numpy as np
import matplotlib.pyplot as plt

data = np.array([[  6.30991828, -10.22329935],
                 [  6.30991828, -10.2127338 ],
                 [  6.47697236, -10.01359361],
                 [  6.47697236,  -9.89353722],
                 [  6.47697236,  -9.81708052],
                 [  6.55108034,  -9.42113403],
                 [  6.55108034,  -9.21932801],
                 [  6.58617165,  -8.40428977],
                 [  6.62007321,  -7.6500927 ]])

x = data[:, 0]

# Cubic model and its derivatives, evaluated at the data's x values.
# For SLSQP, each 'ineq' constraint means fun(p) >= 0 elementwise.
def polynomial(p, x):
    return p[0] + p[1]*x + p[2]*x**2 + p[3]*x**3

def constraint_1st_der(p):
    # dy/dx >= 0 at every data point: y increases monotonically
    return p[1] + 2*p[2]*x + 3*p[3]*x**2

def constraint_2nd_der(p):
    # d2y/dx2 >= 0 at every data point: dy/dx increases monotonically
    return 2*p[2] + 6*p[3]*x

def objective(p):
    # Sum of squared residuals of the cubic against the data
    return ((polynomial(p, x) - data[:, 1])**2).sum()


cons = (dict(type='ineq', fun=constraint_1st_der),
        dict(type='ineq', fun=constraint_2nd_der))
res = minimize(objective, x0=np.zeros(4), method='SLSQP', constraints=cons)
if res.success:
    pars = res.x
    x_plot = np.linspace(data[:, 0].min(), data[:, 0].max(), 100)
    plt.plot(data[:, 0], data[:, 1], 'x', x_plot, polynomial(pars, x_plot), '-')
    plt.show()
else:
    print('Failed:', res.message)
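It is worth verifying numerically that the constrained fit actually satisfies both conditions everywhere between the data points, not just at them. Here is a quick sanity check (my addition, not part of the original answer) that reruns the same fit and evaluates both derivatives on a dense grid:

```python
import numpy as np
from scipy.optimize import minimize

data = np.array([[6.30991828, -10.22329935],
                 [6.30991828, -10.2127338 ],
                 [6.47697236, -10.01359361],
                 [6.47697236,  -9.89353722],
                 [6.47697236,  -9.81708052],
                 [6.55108034,  -9.42113403],
                 [6.55108034,  -9.21932801],
                 [6.58617165,  -8.40428977],
                 [6.62007321,  -7.6500927 ]])
x = data[:, 0]

def polynomial(p, x):
    return p[0] + p[1]*x + p[2]*x**2 + p[3]*x**3

def objective(p):
    return ((polynomial(p, x) - data[:, 1])**2).sum()

# 'ineq' constraints require fun(p) >= 0 elementwise at the data's x values
cons = (dict(type='ineq', fun=lambda p: p[1] + 2*p[2]*x + 3*p[3]*x**2),
        dict(type='ineq', fun=lambda p: 2*p[2] + 6*p[3]*x))
res = minimize(objective, x0=np.zeros(4), method='SLSQP', constraints=cons)

# Evaluate first and second derivatives of the fitted cubic on a dense grid
grid = np.linspace(x.min(), x.max(), 200)
deriv = res.x[1] + 2*res.x[2]*grid + 3*res.x[3]*grid**2
second = 2*res.x[2] + 6*res.x[3]*grid
print(res.success, deriv.min(), second.min())
```

Because the second derivative of a cubic is linear in x, constraining it at the smallest and largest data points makes it non-negative on the whole interval, so the first derivative is guaranteed monotone between data points too (up to the solver's tolerance).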

OTHER TIPS

Your data is interesting: it is multi-valued at three x locations: 6.30991828, 6.47697236 and 6.55108034. Are those vertical jumps real? Is that what you're trying to capture?

No single-valued continuous function can reproduce those jumps. Your only hope is piecewise fitting on either side of them. You'll have three fits:

  1. x < 6.47697236
  2. 6.47697236 < x < 6.55108034
  3. x > 6.55108034

The function is multi-valued at the discontinuities, of course.
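As a minimal sketch of that piecewise idea (my addition, not the original answer's), assuming a straight-line least-squares fit per segment and including each multi-valued boundary point in both of its neighboring segments:

```python
import numpy as np

data = np.array([[6.30991828, -10.22329935],
                 [6.30991828, -10.2127338 ],
                 [6.47697236, -10.01359361],
                 [6.47697236,  -9.89353722],
                 [6.47697236,  -9.81708052],
                 [6.55108034,  -9.42113403],
                 [6.55108034,  -9.21932801],
                 [6.58617165,  -8.40428977],
                 [6.62007321,  -7.6500927 ]])

# x locations where the data is multi-valued, used as segment edges
edges = (6.47697236, 6.55108034)
bounds = ((-np.inf, edges[0]), (edges[0], edges[1]), (edges[1], np.inf))

fits = []
for lo, hi in bounds:
    seg = data[(data[:, 0] >= lo) & (data[:, 0] <= hi)]
    fits.append(np.polyfit(seg[:, 0], seg[:, 1], 1))  # [slope, intercept]

for (lo, hi), (slope, intercept) in zip(bounds, fits):
    print(lo, hi, slope, intercept)
```

With this data every segment contains at least two distinct x values, so each linear fit is well posed; for real data you would want to guard against segments with a single unique x.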

If those aren't meaningful to you: a cubic with a positive leading coefficient has a continuous first derivative, but that derivative only increases to the right of the inflection point at x = -p2/(3*p3), not over the whole real line. So a plain cubic fit is not automatically guaranteed to satisfy your constraints on the data range, which is why the constrained fit above enforces them explicitly.
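As a quick diagnostic (again my addition), you can locate the unconstrained cubic's inflection point, where its second derivative 2*p2 + 6*p3*x changes sign, and check whether it falls inside the data range:

```python
import numpy as np

data = np.array([[6.30991828, -10.22329935],
                 [6.30991828, -10.2127338 ],
                 [6.47697236, -10.01359361],
                 [6.47697236,  -9.89353722],
                 [6.47697236,  -9.81708052],
                 [6.55108034,  -9.42113403],
                 [6.55108034,  -9.21932801],
                 [6.58617165,  -8.40428977],
                 [6.62007321,  -7.6500927 ]])

# np.polyfit returns coefficients highest degree first: [p3, p2, p1, p0]
f = np.polyfit(data[:, 0], data[:, 1], 3)
p3, p2 = f[0], f[1]
x_infl = -p2 / (3.0 * p3)  # root of the second derivative 2*p2 + 6*p3*x
in_range = data[:, 0].min() < x_infl < data[:, 0].max()
print(x_infl, in_range)
```

If `in_range` comes out True, the fitted cubic's derivative turns around inside the data, matching the behavior the questioner observed with `np.polyfit`.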


Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow