Question

How is floating point math performed on a processor with no floating point unit, e.g. low-end 8-bit microcontrollers?

The solution

Have a look at this article: http://www.edwardrosten.com/code/fp_template.html

(from this article)

First you have to think about how to represent a floating point number in memory:

struct this_is_a_floating_point_number
{
    static const unsigned int mant = ???;   // mantissa
    static const int          expo = ???;   // exponent (power of two)
    static const bool         posi = ???;   // sign: true if positive
};

Then you'd have to consider how to do basic calculations with this representation. Some might be easy to implement and rather fast at run time (multiplying or dividing by 2 comes to mind).
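
For instance, with a run-time version of that record, multiplying or dividing by two never has to touch the mantissa at all. A minimal sketch, using a hypothetical fp_number struct (a plain run-time counterpart of the compile-time template above):

#include <cstdint>

// Hypothetical run-time representation: value = (posi ? +1 : -1) * mant * 2^expo
struct fp_number
{
    std::uint32_t mant;   // mantissa
    int           expo;   // exponent (power of two)
    bool          posi;   // sign: true if positive
};

// Multiplying by two is just an exponent increment; the mantissa is untouched.
fp_number mul2(fp_number x) { ++x.expo; return x; }

// Dividing by two is just an exponent decrement.
fp_number div2(fp_number x) { --x.expo; return x; }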

Division might be harder; Newton's algorithm, for instance, could be used to calculate the answer.
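
Newton's method computes 1/d using only multiplication and subtraction: iterate x ← x·(2 − d·x), and each step roughly doubles the number of correct digits, so a/b becomes a·(1/b). A sketch in plain double arithmetic (a soft-float or fixed-point routine would run the same recurrence on the integer mantissa, but the idea is identical):

#include <cmath>
#include <cstdio>

// Approximate 1/d with Newton-Raphson: x_{n+1} = x_n * (2 - d * x_n).
// Assumes d > 0; the scaling into [0.5, 1) lets a fixed linear initial
// guess converge in a handful of iterations.
double newton_reciprocal(double d, int iterations = 5)
{
    int exp2 = 0;
    double m = std::frexp(d, &exp2);           // d = m * 2^exp2, with m in [0.5, 1)
    double x = 48.0 / 17.0 - 32.0 / 17.0 * m;  // classic linear initial estimate
    for (int i = 0; i < iterations; ++i)
        x = x * (2.0 - m * x);                 // only multiplies and subtracts
    return std::ldexp(x, -exp2);               // undo the scaling: 1/d = (1/m) * 2^-exp2
}

int main()
{
    std::printf("1/7 ~ %.12f\n", newton_reciprocal(7.0));   // a / b == a * newton_reciprocal(b)
}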

Finally, smart approximations and generated values in tables might speed up the calculations at run time.

Many years ago, C++ templates helped me get floating point calculations running on an Intel 386 SX.

In the end I learned a lot of math and C++, but at the same time decided to buy a co-processor.

Especially the polynomial algorithms and the smart lookup tables (who needs a cosine or tangent function when you have a sine function?) helped a lot in thinking about using integers for floating point arithmetic. Taylor series were a revelation too.

Other tips

In systems without any floating-point hardware, the CPU emulates it using a series of simpler fixed-point arithmetic operations that run on the integer arithmetic logic unit.

Take a look at the Wikipedia page Floating-point_unit#Floating-point_library, where you might find more info.

It is not actually the CPU that emulates the instructions. The floating point operations for low-end CPUs are built out of integer arithmetic instructions, and the compiler is what generates those instructions. Basically, the compiler (toolchain) comes with a floating point library containing the floating point functions.
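
As a hedged illustration of what such a library routine has to do, here is a deliberately simplified single-precision multiply written only with integer instructions. It handles normal numbers only (no zeros, infinities, NaNs, subnormals or proper rounding), and softfloat_mul is just a name made up for this sketch, not the routine an actual toolchain ships:

#include <cstdint>
#include <cstdio>
#include <cstring>

// Multiply two IEEE-754 singles using only integer arithmetic.
// Normal numbers only; the product mantissa is truncated, not rounded.
float softfloat_mul(float a, float b)
{
    std::uint32_t ua, ub;
    std::memcpy(&ua, &a, sizeof ua);
    std::memcpy(&ub, &b, sizeof ub);

    std::uint32_t sign = (ua ^ ub) & 0x80000000u;                  // sign of the product
    std::int32_t  ea   = (std::int32_t)((ua >> 23) & 0xFFu) - 127; // unbiased exponents
    std::int32_t  eb   = (std::int32_t)((ub >> 23) & 0xFFu) - 127;
    std::uint64_t ma   = (ua & 0x7FFFFFu) | 0x800000u;             // restore the implicit leading 1
    std::uint64_t mb   = (ub & 0x7FFFFFu) | 0x800000u;

    std::uint64_t prod = ma * mb;              // 48-bit product of the two 24-bit mantissas
    std::int32_t  e    = ea + eb;

    if (prod & (1ull << 47)) {                 // mantissa product in [2,4): renormalize
        prod >>= 24;
        ++e;
    } else {                                   // mantissa product in [1,2)
        prod >>= 23;
    }

    std::uint32_t out = sign
                      | ((std::uint32_t)(e + 127) << 23)
                      | ((std::uint32_t)prod & 0x7FFFFFu);         // drop the implicit 1 again

    float r;
    std::memcpy(&r, &out, sizeof r);
    return r;
}

int main()
{
    std::printf("%g\n", softfloat_mul(3.5f, -2.0f));   // expect -7
}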

The short answer is "slowly". Specialized hardware can do tasks like extracting groups of bits that are not necessarily byte-aligned very fast. Software can do everything that can be done by specialized hardware, but tends to take much longer to do it.
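
For instance, the sign, exponent and mantissa of an IEEE-754 single are 1-, 8- and 23-bit fields that do not fall on byte boundaries, so software has to dig them out with shifts and masks on every operation. A small sketch (the field layout is the standard one; the code is only an illustration):

#include <cstdint>
#include <cstdio>
#include <cstring>

// Pull a 32-bit float apart into its integer fields with shifts and masks -
// the kind of bit extraction an FPU does implicitly in hardware.
void unpack(float f)
{
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);           // reinterpret the bit pattern

    unsigned sign     = bits >> 31;                // 1 bit
    unsigned exponent = (bits >> 23) & 0xFFu;      // 8 bits, biased by 127
    unsigned mantissa = bits & 0x7FFFFFu;          // 23 bits, implicit leading 1 not stored

    std::printf("sign=%u exponent=%d mantissa=0x%06X\n",
                sign, (int)exponent - 127, mantissa);
}

int main()
{
    unpack(6.5f);   // 6.5 = +1.625 * 2^2  ->  sign=0 exponent=2 mantissa=0x500000
}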

Read "The complete Spectrum ROM disassembly" at http://www.worldofspectrum.org/documentation.html to see examples of floating point computations on an 8 bit Z80 processor.

For things like sine functions, you precompute a few values and then interpolate, for example using Chebyshev polynomials.
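
A minimal sketch of the precompute-then-interpolate idea, using a 256-entry full-turn table in Q15 fixed point with linear interpolation instead of a Chebyshev fit, so the per-call work is pure integer arithmetic (the table size, the 16-bit "binary angle" input and the isin name are choices made for this example; on a real 8-bit target the table would be a constant array generated offline):

#include <cmath>
#include <cstdint>
#include <cstdio>

static std::int16_t sine_table[257];   // one extra entry so index+1 never wraps

// Fill the table once; 32767 is the Q15 scale factor for 1.0.
void init_sine_table()
{
    const double PI = 3.14159265358979323846;
    for (int i = 0; i <= 256; ++i)
        sine_table[i] = (std::int16_t)std::lround(32767.0 * std::sin(2.0 * PI * i / 256.0));
}

// angle: 0..65535 maps to 0..2*pi; returns sin(angle) in Q15 (-32767..32767).
std::int16_t isin(std::uint16_t angle)
{
    unsigned index = angle >> 8;                   // top 8 bits pick the table entry
    unsigned frac  = angle & 0xFFu;                // low 8 bits interpolate between entries
    std::int32_t a = sine_table[index];
    std::int32_t b = sine_table[index + 1];
    return (std::int16_t)(a + (((b - a) * (std::int32_t)frac) >> 8));
}

int main()
{
    init_sine_table();
    // 30 degrees = 65536 * 30 / 360 ~ 5461; expect roughly 0.5 * 32767 ~ 16384
    std::printf("sin(30 deg) ~ %d / 32767\n", isin(5461));
}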

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow