Question

I am fitting an exponential decay function with lsqcurvefit in MATLAB. To do this I first normalize my data, because the values differ by several orders of magnitude. However, I'm not sure how to denormalize my fitted parameters.

My fitting model is s = O + A * exp(-t/T), where t and s are known, t is on the order of 10^-3 and s is on the order of 10^5. So I subtract the mean from each and divide by their standard deviation. My goal is to find the A, O and T that best reproduce s at the given times t. However, I don't know how to denormalize the resulting A, O and T.
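Roughly, the fit on the normalized data looks like this (the starting guess and the exact lsqcurvefit call are only an illustrative sketch, assuming t and s already hold the raw data):

tm = mean(t); ts = std(t);        % save the statistics for later
sm = mean(s); ss = std(s);
tn = (t - tm) / ts;               % normalized times  (raw t ~ 1e-3)
sn = (s - sm) / ss;               % normalized signal (raw s ~ 1e5)

model = @(p, x) p(1) + p(2) * exp(-x / p(3));   % p = [O, A, T]
p0 = [0, 1, 1];                                 % illustrative starting guess
p  = lsqcurvefit(model, p0, tn, sn);            % fit on normalized data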

Does somebody know how to do this? I only found this question on SO about normalisation, but it does not really address the same problem.


Solution

When you normalize, you must record the mean and standard deviation of each of your features. Then you can easily use those values to denormalize.

e.g.

A = [1 4 7 2 9]';
B = [100 475 989 177 399]';

So you could just normalize right away:

An = (A - mean(A)) / std(A)

but then you can't get back to the original A. So first save the means and stds.

Am = mean(A); Bm = mean(B);
As = std(A);  Bs = std(B);
An = (A - Am)/As;
Bn = (B - Bm)/Bs;

Now do whatever processing you want, and then denormalize:

Ad = An*As + Am;
Bd = Bn*Bs + Bm;

I'm sure you can see that this becomes an issue if you have a lot of features (you'd have to type that code out for each feature, what a mission!), so let's assume your data is arranged as a matrix, data, where each row is a sample and each column is a feature. Now you can do it like this:

data = [A, B];

means = mean(data);
stds = std(data);

datanorm = bsxfun(@rdivide, bsxfun(@minus, data, means), stds);   %// column-wise (x - mean)/std

%// Do processing on datanorm

datadenorm = bsxfun(@plus, bsxfun(@times, datanorm, stds), means);   %// undo: x*std + mean

EDIT:

After you have fit your model parameters (A, O and T) using normalized t and f, your model will expect normalized inputs and produce normalized outputs. So to use it you should first normalize t and then denormalize f.

To find a new f, run the model on a normalized new t, i.e. evaluate f(tn) where tn = (t - tm)/ts, with tm the mean of your training (or fitting) t set and ts its std. Then, to get f back at the correct magnitude, you must denormalize only f, so the full solution would be

 f(tn)*fs + fm

So once again, all you need to do is save the mean and std you used to normalize.
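As a short sketch of using the fitted model on new times (tm, ts, fm, fs here stand for the saved statistics of the fitting data and t_new for the new input; the names are just placeholders):

tn    = (t_new - tm) / ts;         % normalize the new input with the saved stats
f_hat = O + A * exp(-tn / T);      % run the fitted model -> normalized output
f     = f_hat * fs + fm;           % denormalize only the output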

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow