Question

I have a control loop running at high frequency and need to compute a square root each cycle. Typical square root functions work fine but take excessive time. Since the value I'm taking the square root of doesn't change too much on each cycle, I would like to find an iterative square root that will converge and then track the correct result. This way I could do a single iteration at each time step, rather than many.

The problem is that all of the iterative square root methods I've seen will probably fail when the input is changing. In particular it looks like there will be problems when the input goes to zero and then increases again - the methods don't like to start with a guess of zero.

My input range is 0-4.5 and I need a precision of about 0.01, so stepping by increments of 0.01 could take far too long; I want it to mostly converge in 10 cycles or fewer.

FYI, I'm using 16/32-bit fixed point; the input is 16-bit Q12. It's on a microcontroller, so I'm not interested in spending 1K on a lookup table. The code is also generated from a Simulink model, and their table-lookup functions are rather full of overhead.

Is there a nice solution to this?


Solution

You can use a single step of Halley's method. It has cubic convergence and should therefore stay accurate when the value moves only slightly between cycles:

x_{n+1} = x_n * (x_n^2 + 3Q) / (3 x_n^2 + Q)

This converges cubically to sqrt(Q).

Reference: http://www.mathpath.org/Algor/squareroot/algor.square.root.halley.htm

Other tips

The range 0-4.5 is fairly small. With a precision of 0.01, that's only about 450 distinct inputs. You could compute them all at compile time as constants and just do a lookup at runtime.

I would suggest you use a lookup table, if you know in advanced the ranges you are dealing with. Generate an array or hash table (depending on the language you're working in) to the level of precision you need and refer to this when you need your roots.

I tried a second-order Taylor expansion of sqrt(x) and got the following result:

if y=sqrt(x) and you know y_c = sqrt(x_c) already then:

t = x-3*x_c;
y = (12*x_c*x_c-t*t)/(8*y_c*y_c*y_c);

The larger x is, the better the approximation. For the worst case, with x_c = 0.01 and x = 0.02, the result comes out to 0.1375 vs. the true value sqrt(0.02) ≈ 0.1414, a difference of 0.0039, which is under 0.01.
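As a runnable sketch of the formula above (`sqrt_track` is an illustrative name; double precision stands in for the Q12 arithmetic):

```c
/* Given last cycle's input x_c and its root y_c = sqrt(x_c),
   estimate sqrt(x) for the new input x via the second-order
   Taylor expansion.  y_c must be nonzero (it appears cubed in
   the denominator). */
static double sqrt_track(double x, double x_c, double y_c)
{
    double t = x - 3.0 * x_c;
    return (12.0 * x_c * x_c - t * t) / (8.0 * y_c * y_c * y_c);
}
```

Feeding in the worst case from the text, `sqrt_track(0.02, 0.01, 0.1)` reproduces the quoted 0.1375.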

I tested the code in C# and saw a steady 33% speedup vs. Math.Sqrt().

License: CC-BY-SA with attribution
Not affiliated with StackOverflow