Question

I have an equation that I am trying to fit to some experimental data. In the past I have used lsqcurvefit and passed in the experimental data, and the function that describes my fitted data. E.g.

model = @(p,x) exp(-p(1).*x);
startingVals = 0.5;
lsqcurvefit(model,startingVals,expData_x,expData_y)

This works perfectly, with MATLAB returning the value of p that most closely fits my data. Internally, I assume, it tweaks the value of p in a smart way to minimise the sum of the squared differences.

Now I have a model that is not analytic, and I want to find the closest-fitting B. An example is:

model = integral(@(v)besselj(0,x.*v.*B), 0, 40);

(Just an example; maybe this one can be solved analytically, but mine definitely can't.)

So to calculate the model I put in a trial value of B, and it calculates the function for each x. What I have been doing so far is calculating the model for a series of trial B values, e.g. B = 1:1:10. This gives me 10 vectors, each one a different set of model points. I then run a script which finds the smallest sum of squared residuals between each model calculation and the experimental data.
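A minimal sketch of that brute-force scan might look like the following (expData_x and expData_y stand in for the experimental data, which the question does not show):

```matlab
% Brute-force scan over trial values of B
Btrials = 1:1:10;
resid = zeros(size(Btrials));
for k = 1:numel(Btrials)
    B = Btrials(k);
    % evaluate the non-analytic model at every experimental x
    modelY = arrayfun(@(x) integral(@(v) besselj(0,x.*v.*B), 0, 40), expData_x);
    resid(k) = sum((modelY - expData_y).^2);  % sum of squared residuals
end
[~, best] = min(resid);
bestB = Btrials(best);
```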

This seems to work okay, but now I am coming to an equation with more than one fitting term. For instance:

model = integral(@(v)(C.*D).*besselj(0,x.*v.*B).^(E), 0, 40);

I now may want to find the best-fitting values of B, C, D and E. My method would still work, but the number of trials explodes: looping through just 10 values of each parameter would generate 10^4 = 10,000 individual curves.
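The same brute force extends to several parameters with ndgrid, which makes the combinatorial cost explicit. A sketch, again assuming hypothetical expData_x and expData_y vectors:

```matlab
vals = 1:1:10;
[Bg, Cg, Dg, Eg] = ndgrid(vals, vals, vals, vals);  % 10^4 = 10,000 combinations
resid = zeros(size(Bg));
for k = 1:numel(Bg)
    % one full model curve per parameter combination -- this is the slow part
    modelY = arrayfun(@(x) integral(@(v) (Cg(k).*Dg(k)).*besselj(0,x.*v.*Bg(k)).^Eg(k), ...
                      0, 40), expData_x);
    resid(k) = sum((modelY - expData_y).^2);
end
[~, best] = min(resid(:));
fprintf('best: B=%g C=%g D=%g E=%g\n', Bg(best), Cg(best), Dg(best), Eg(best));
```

This is exactly the 10,000-curve cost described above, which is why handing the search over to lsqcurvefit is attractive.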

Is my method fine or am I missing out on a much easier way to fit these kinds of functions?

Thanks

Edit: working code, thanks to David.

Note that sometimes lsqcurvefit comes back with complex numbers, but that is another issue. Obviously real data won't be a perfect fit, but I had no idea you could pass such functions to lsqcurvefit.

A = 0.2; %input variables to 'solve' for later
B = 0.3;
C = 0.4;
D = 0.5;
x = logspace(-2,2,200); %x data

options = optimset('MaxFunEvals', 200,'MaxIter', 200,'TolFun',1e-10,'Display','off');

genData = arrayfun(@(x) integral(@(v) A.*B.*besselj(0,x.*v.*C).^D, 0, 40),x); %generate some data
genData = real(genData); 

model = @(p,x) real(arrayfun(@(x) integral(@(v) p(1).*p(2).*besselj(0,x.*v.*p(3)).^p(4), 0, 40),x)); 

startingVals = [0.5 0.5 0.5 0.5]; %guess values
lb = [0.1 0.1 0.1 0.1]; %lower bound
ub = [1 1 1 1]; %upper bound

[p] = lsqcurvefit(model,startingVals,x,genData,lb,ub,options); %do the fit, takes a while

fitData = real(arrayfun(@(x) integral(@(v) p(1).*p(2).*besselj(0,x.*v.*p(3)).^p(4), 0, 40),x)); %regenerate data based on fitted values

semilogx(x,genData,'ro')
hold on
semilogx(x,fitData,'b')

Solution

This should let lsqcurvefit work on the model:

model=@(p,x) arrayfun(@(x) integral(@(v) p(1).*p(2).*besselj(0,x.*v.*p(3)).^p(4), 0, 40),x);

However, I made up some coefficients B, C, D and E, and performance isn't very good. I'm not sure whether it's because I picked bad numbers or because the method is inherently slow.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow