Question

This guy: http://andrew-hoyer.com/experiments/cloth/ made a JavaScript algorithm to simulate cloth. He pointed out that the code was too slow, so he had to optimize the square roots using a Taylor series.

  1. Couldn't this be optimized instead by pre-calculating every possible value and using a lookup table to fetch it? (See the sketch after this list.)

  2. Is this often used? For instance, in 3D games, do they actually perform the calculations, or do they already have a lookup table for sin, cos, tan, sqrt and similar functions?

  3. Why isn't this pre-programmed into processors?
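For concreteness, here is a minimal sketch of the kind of thing I mean in question 1 - written in C rather than the demo's JavaScript, with an arbitrary table size and input range:

```c
/* Hypothetical illustration of question 1: precompute sqrt over a fixed
 * range and look results up instead of calling sqrt() at runtime.
 * Table size and input range are arbitrary choices for this sketch. */
#include <math.h>
#include <stdio.h>

#define TABLE_SIZE 4096
#define MAX_INPUT  16.0     /* only inputs in [0, MAX_INPUT) are covered */

static float sqrt_table[TABLE_SIZE];

static void init_sqrt_table(void) {
    for (int i = 0; i < TABLE_SIZE; i++) {
        /* store sqrt of the midpoint of each bucket */
        sqrt_table[i] = (float)sqrt((i + 0.5) * MAX_INPUT / TABLE_SIZE);
    }
}

static float sqrt_lookup(float x) {
    int i = (int)(x * (TABLE_SIZE / MAX_INPUT));
    return sqrt_table[i];   /* approximate: no bounds check, no interpolation */
}

int main(void) {
    init_sqrt_table();
    printf("sqrt_lookup(2.0f) = %f, sqrtf(2.0f) = %f\n",
           sqrt_lookup(2.0f), sqrtf(2.0f));
    return 0;
}
```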

Solution

Lookup tables are pretty much dead and buried on modern processors, especially for things like sqrt. Most FPUs can compute a square root in 9-20 cycles, and that work usually interleaves with other calculations. Memory access is now often the bottleneck for CPUs, with cache misses costing hundreds of cycles; even a second-level cache hit can take 20-30 cycles. It is often faster to do the calculation than to fetch a precomputed value.
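To put rough numbers behind that claim, here is a micro-benchmark sketch. The table size, iteration count, and the linear congruential generator used for the indices are arbitrary choices, and absolute timings depend heavily on compiler flags, cache sizes, and access pattern - the point is just that a large table walked in a random order pays for cache misses:

```c
/* Rough micro-benchmark sketch: sqrtf() vs. a lookup table too large to
 * stay in cache, accessed in a pseudo-random order. Results vary wildly
 * across machines and compiler settings; this is an illustration only. */
#include <math.h>
#include <stdio.h>
#include <time.h>

#define TABLE_SIZE (1 << 20)   /* 4 MiB of floats: larger than L2 on many CPUs */
#define N 10000000

static float table[TABLE_SIZE];

int main(void) {
    for (int i = 0; i < TABLE_SIZE; i++)
        table[i] = sqrtf((float)i);

    /* pseudo-random indices defeat the prefetcher, so lookups pay for misses */
    unsigned seed = 12345u;
    float sum = 0.0f;

    clock_t t0 = clock();
    for (int i = 0; i < N; i++) {
        seed = seed * 1664525u + 1013904223u;
        sum += table[seed % TABLE_SIZE];
    }
    clock_t t1 = clock();
    for (int i = 0; i < N; i++) {
        seed = seed * 1664525u + 1013904223u;
        sum += sqrtf((float)(seed % TABLE_SIZE));
    }
    clock_t t2 = clock();

    printf("lookup: %ld ticks, sqrtf: %ld ticks (sum=%f)\n",
           (long)(t1 - t0), (long)(t2 - t1), sum);
    return 0;
}
```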

Other tips

Because your average CPU defines its floating-point operations according to the IEEE-754 standard, which specifies rather strictly what the result of any basic mathematical operation must be. A lookup table is by design only an approximation and only covers the particular range and granularity you need for your specific problem - that makes it rather impossible to implement sensibly in hardware. If you wanted to store every possible value - do the math yourself.
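To do that math, here is the back-of-the-envelope calculation as a tiny program; it only counts the raw storage needed for one 4-byte result per possible 32-bit float input:

```c
/* Back-of-the-envelope illustration of why "store every possible value"
 * doesn't scale: an exact sqrt table indexed by every 32-bit float bit
 * pattern needs one 4-byte result per pattern. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t inputs_f32 = 1ULL << 32;                /* every 32-bit pattern */
    uint64_t bytes_f32  = inputs_f32 * sizeof(float);
    printf("exact float table: %llu GiB\n",
           (unsigned long long)(bytes_f32 >> 30));   /* prints 16 GiB */

    /* for doubles the table would need 2^64 entries, which is hopeless */
    return 0;
}
```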

That does not mean we never approximate results - lookup tables just aren't a great solution for it. For example, SSE has both sqrtss and rsqrtss - the latter returns an approximation of the real result and is a good bit faster. Just a bit of math there.
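As a quick sketch of that approximation (assuming an x86-64 compiler where the SSE intrinsics from xmmintrin.h are available):

```c
/* Compare the fast rsqrtss estimate of 1/sqrt(x) with the exact value
 * computed via sqrtf(). rsqrtss trades accuracy for speed. */
#include <math.h>
#include <stdio.h>
#include <xmmintrin.h>

int main(void) {
    float x = 2.0f;
    float approx = _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(x)));  /* rsqrtss */
    float exact  = 1.0f / sqrtf(x);                             /* sqrt + divide */
    printf("rsqrtss: %.8f  exact: %.8f  rel. err: %.2e\n",
           approx, exact, fabsf(approx - exact) / exact);
    return 0;
}
```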

Using a lookup table is probably not wrong, but it is likely a micro-optimization; it is usually smarter to improve the algorithm or focus on other code. As for why it isn't hard-coded into the CPU, maybe it just comes down to development constraints. What about the logarithm - is there a lookup table for it in the CPU?

License: CC-BY-SA with attribution