Question

I am testing a temperature sensor for a project. I found that there is a discrepancy between the expected and measured values. As the difference is non-linear over the temperature range, I can't simply add a fixed offset. Is there a way I can apply some kind of correction to the acquired data?

UPDATE: I have a commercial heater element which heats up to a set temperature (I call this temperature the expected value). On the other side I have a temperature sensor (my project) which measures the temperature of the heater (I call this the measured value).

I noticed a difference between the measured and expected values which I would like to compensate for, so that the measured value ends up close to the expected value.

Example: if my sensor measures 73.3, it should be processed by some means (mathematically or otherwise) so that it shows a value close to 70.25.

Hope this clears things up a little.

Measured    Expected
30.5    30.15
41.4    40.29
52.2    50.31
62.8    60.79
73.3    70.28
83      79.7
94      90.39
104.3   99.97
114.8   109.81

Thank you for your time.


Solution

You are interested in describing the deviation of one variable from the other. What you are looking for is the function

g(x) = f(x) - x

which returns an approximation, a prediction, of what number to add to x to get y. First you need a prediction of y based on the observed x values, i.e. f(x). This is what you get from doing a regression:

x = MeasuredExpected   (what you have estimated; I assume you will know this value)
y = MeasuredReal       (what has actually been observed instead of x)

f(x) = MeasuredReal(estimated) = alfa*x + beta + e

In the simplest case of just one variable you don't even need any special tools for this. The coefficients of the equation are:

alfa = covariance(MeasuredExpected, MeasuredReal) / variance(MeasuredExpected)
beta = average(MeasuredReal) - alfa * average(MeasuredExpected)

so for each expected value x you can now state that the most probable measured value is:

f(x) = MeasuredReal(expected) = alfa*x + beta   (under the assumption that the error
                                                 is normally distributed, iid)

So you have to add

g(x) = f(x) - x = (alfa - 1)*x + beta

to account for the difference you have observed between the Expected and Measured values.
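
As a minimal sketch (in Python, which is an assumption here since the question doesn't name a language), the formulas above applied to the data from the question look like this; the corrected() inversion at the end, which maps a raw reading back to an estimate of the expected temperature, is my addition for convenience and not part of the derivation above:

# Least-squares fit using the covariance/variance formulas above.
# Following the naming in the answer: x = Expected (heater setpoint),
# y = Measured (raw sensor reading). Data is taken from the question's table.

expected = [30.15, 40.29, 50.31, 60.79, 70.28, 79.70, 90.39, 99.97, 109.81]
measured = [30.5, 41.4, 52.2, 62.8, 73.3, 83.0, 94.0, 104.3, 114.8]

n = len(expected)
mean_x = sum(expected) / n
mean_y = sum(measured) / n

# covariance(x, y) / variance(x); the 1/n factors cancel in the ratio.
alfa = (sum((x - mean_x) * (y - mean_y) for x, y in zip(expected, measured))
        / sum((x - mean_x) ** 2 for x in expected))
beta = mean_y - alfa * mean_x

def f(x):
    """Most probable sensor reading at expected temperature x."""
    return alfa * x + beta

def g(x):
    """Deviation the sensor is predicted to show on top of x: g(x) = f(x) - x."""
    return (alfa - 1.0) * x + beta

def corrected(reading):
    """Invert the fitted line to estimate the expected temperature from a raw
    reading (an addition for convenience, not part of the answer's derivation)."""
    return (reading - beta) / alfa

print(f"alfa = {alfa:.4f}, beta = {beta:.4f}")
for raw, ref in zip(measured, expected):
    print(f"raw {raw:6.2f} -> corrected {corrected(raw):6.2f} (reference {ref:6.2f})")

On this data the corrected readings come out close to the Expected column; if a single straight line is not accurate enough, the lookup-table approach under OTHER TIPS below handles the remaining non-linearity.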

OTHER TIPS

Maybe you could use a data sample to do a regression analysis on the variation and use the resulting regression function as an offset function.

http://en.wikipedia.org/wiki/Regression_analysis
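
For example (a hypothetical sketch, assuming Python and NumPy; the function name correct is just illustrative):

import numpy as np

# Data from the question's table.
measured = np.array([30.5, 41.4, 52.2, 62.8, 73.3, 83.0, 94.0, 104.3, 114.8])
expected = np.array([30.15, 40.29, 50.31, 60.79, 70.28, 79.70, 90.39, 99.97, 109.81])

# Fit expected ~ slope*measured + intercept (a degree-1 polynomial).
slope, intercept = np.polyfit(measured, expected, 1)

def correct(reading):
    """Use the regression line as the offset/correction function."""
    return slope * reading + intercept

print(correct(73.3))   # should come out close to 70.28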

You can create a calibration lookup table (LUT).

The error in the sensor reading is not linear over the entire range of the sensor, but you can divide the range into a number of sub-ranges within which the error is nearly linear. Then you calibrate the sensor by taking a reading in each sub-range and calculating the offset error for that sub-range. Store the offset for each sub-range in an array to create a calibration lookup table.

Once the calibration table is known, you can correct a measurement by performing a table lookup for the proper offset: use the measured value itself to determine the array index from which to read the offset.
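
As an illustration (a hypothetical Python sketch; the 10-degree sub-range width and the 30-120 range limits are assumptions, and the offsets are simply Expected minus Measured from the question's table):

# Calibration LUT with same-sized 10-degree sub-ranges covering 30..120.
# OFFSETS[i] is the correction for readings in [30 + 10*i, 30 + 10*(i+1)).
RANGE_START = 30.0
SUB_RANGE_WIDTH = 10.0
OFFSETS = [-0.35, -1.11, -1.89, -2.01, -3.02, -3.30, -3.61, -4.33, -4.99]

def correct(reading):
    """Look up the offset for the sub-range containing the reading and apply it."""
    index = int((reading - RANGE_START) // SUB_RANGE_WIDTH)
    index = max(0, min(index, len(OFFSETS) - 1))   # clamp to the table
    return reading + OFFSETS[index]

print(correct(73.3))   # falls in [70, 80): 73.3 - 3.02 = 70.28
print(correct(62.8))   # falls in [60, 70): 62.8 - 2.01 = 60.79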

The sub-ranges don't need to be the same size, although uniform sub-ranges make it easy to calculate the proper table index for any measurement. (If the sub-ranges are not the same size, you could use a multidimensional array (matrix) and store not only the offset but also the beginning or end point of each sub-range. Then you would scan through the begin points to determine the proper table index for any measurement.)

You can make the correction more accurate by dividing the range into smaller sub-ranges and creating a larger calibration lookup table, or by interpolating between two table entries to get a more accurate offset.
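
A sketch of the interpolating variant (again hypothetical Python), using the question's calibration points directly as the table; the offset for a reading between two points is interpolated linearly, which amounts to a piecewise-linear correction:

from bisect import bisect_right

# Calibration points from the question's table; OFFSETS are Expected - Measured.
MEASURED = [30.5, 41.4, 52.2, 62.8, 73.3, 83.0, 94.0, 104.3, 114.8]
OFFSETS  = [-0.35, -1.11, -1.89, -2.01, -3.02, -3.30, -3.61, -4.33, -4.99]

def correct(reading):
    """Interpolate the offset between the two nearest calibration points."""
    if reading <= MEASURED[0]:
        return reading + OFFSETS[0]
    if reading >= MEASURED[-1]:
        return reading + OFFSETS[-1]
    i = bisect_right(MEASURED, reading)      # first calibration point above the reading
    x0, x1 = MEASURED[i - 1], MEASURED[i]
    y0, y1 = OFFSETS[i - 1], OFFSETS[i]
    t = (reading - x0) / (x1 - x0)           # fractional position between the points
    return reading + y0 + t * (y1 - y0)

print(correct(73.3))   # exactly on a calibration point -> 70.28
print(correct(68.0))   # between 62.8 and 73.3 -> offset interpolated between -2.01 and -3.02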

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow