Question

I'm looking for an extremely fast atof() implementation on IA-32, optimized for the US-English locale, ASCII, and non-scientific notation. The Windows multithreaded CRT falls down miserably here, as it checks for locale changes on every call to isdigit(). Our current best is derived from the best of Perl's and Tcl's atof implementations, and it outperforms msvcrt.dll's atof by an order of magnitude. I want to do better, but I am out of ideas. The BCD-related x86 instructions seemed promising, but I couldn't get them to outperform the Perl/Tcl C code. Can any SO'ers dig up a link to the best out there? Solutions not based on x86 assembly are also welcome.

Clarifications based upon initial answers:

Inaccuracies of ~2 ulp are fine for this application.
The numbers to be converted will arrive in ASCII messages over the network in small batches, and our application needs to convert them with the lowest latency possible.

Solution

What is your accuracy requirement? If you truly need it "correct" (always getting the nearest floating-point value to the decimal specified), it will probably be hard to beat the standard library versions (other than removing locale support, which you've already done), since this requires doing arbitrary-precision arithmetic. If you're willing to tolerate an ulp or two of error (and more than that for subnormals), the sort of approach cruzer proposed can work and may be faster, but it definitely will not produce <0.5 ulp output.

You will do better accuracy-wise to compute the integer and fractional parts separately and combine them at the end (e.g. for 12345.6789, compute it as 12345 + 6789 / 10000.0, rather than 6*.1 + 7*.01 + 8*.001 + 9*.0001), since 0.1 has no exact binary representation and error accumulates rapidly as you compute 0.1^n. This also lets you do most of the math with integers instead of floats.
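
A minimal sketch of that integer/fraction split (hypothetical helper; assumes well-formed ASCII digits only, no sign, no exponent, and no overflow checking):

/* Minimal sketch of the integer/fraction split described above.
   Hypothetical helper: assumes well-formed ASCII input, no sign,
   no exponent, and ignores overflow. */
double parse_split(const char* s)
{
    unsigned long ipart = 0, fpart = 0, fdiv = 1;
    while (*s >= '0' && *s <= '9')
        ipart = ipart * 10 + (unsigned long)(*s++ - '0');
    if (*s == '.')
    {
        s++;
        while (*s >= '0' && *s <= '9')
        {
            fpart = fpart * 10 + (unsigned long)(*s++ - '0');
            fdiv *= 10;
        }
    }
    /* one floating divide and add at the end, instead of accumulating 0.1^n */
    return (double)ipart + (double)fpart / (double)fdiv;
}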

The BCD instructions haven't been implemented in hardware since (IIRC) the 286, and are simply microcoded nowadays. They are unlikely to be particularly high-performance.

OTHER TIPS

This implementation I just finished coding runs twice as fast as the built-in 'atof' on my desktop. It converts 1024*1024*39 numeric inputs in 2 seconds, compared to 4 seconds with my system's standard GNU 'atof' (including the setup time, getting memory, and all that).

UPDATE: Sorry, I have to revoke my twice-as-fast claim. It's faster if the thing you're converting is already in a string, but if you're passing it hard-coded string literals, it's about the same as atof. However, I'm going to leave it here, as with some tweaking of the Ragel file and state machine, you may be able to generate faster code for specific purposes.

https://github.com/matiu2/yajp

The interesting files for you are:

https://github.com/matiu2/yajp/blob/master/tests/test_number.cpp

https://github.com/matiu2/yajp/blob/master/number.hpp

Also you may be interested in the state machine that does the conversion:

[Number parsing state machine diagram]

I've implemented something you may find useful. Compared with atof it's about 5x faster, and about 10x faster if used with __forceinline. Another nice thing is that it seems to have exactly the same arithmetic as the CRT implementation. Of course it has some cons too:

  • it supports only single precision float,
  • and doesn't scan any special values like #INF, etc...

#include <math.h> // for pow, frexp, floor

__forceinline bool float_scan(const wchar_t* wcs, float* val)
{
int hdr=0;
while (wcs[hdr]==L' ')
    hdr++;

int cur=hdr;

bool negative=false;
bool has_sign=false;

if (wcs[cur]==L'+' || wcs[cur]==L'-')
{
    if (wcs[cur]==L'-')
        negative=true;
    has_sign=true;
    cur++;
}
else
    has_sign=false;

int quot_digs=0;
int frac_digs=0;

bool full=false;

wchar_t period=0;
int binexp=0;
int decexp=0;
unsigned long value=0;

while (wcs[cur]>=L'0' && wcs[cur]<=L'9')
{
    if (!full)
    {
        // 0x19999999 = floor(0xFFFFFFFF / 10): one more digit would overflow
        if ((value>=0x19999999 && wcs[cur]-L'0'>5) || value>0x19999999)
        {
            full=true;
            decexp++;
        }
        else
            value=value*10+wcs[cur]-L'0';
    }
    else
        decexp++;

    quot_digs++;
    cur++;
}

if (wcs[cur]==L'.' || wcs[cur]==L',')
{
    period=wcs[cur];
    cur++;

    while (wcs[cur]>=L'0' && wcs[cur]<=L'9')
    {
        if (!full)
        {
            if ((value>=0x19999999 && wcs[cur]-L'0'>5) || value>0x19999999)
                full=true;
            else
            {
                decexp--;
                value=value*10+wcs[cur]-L'0';
            }
        }

        frac_digs++;
        cur++;
    }
}

if (!quot_digs && !frac_digs)
    return false;

wchar_t exp_char=0;

int decexp2=0; // explicit exponent
bool exp_negative=false;
bool has_expsign=false;
int exp_digs=0;

// even if value is 0, we still need to eat exponent chars
if (wcs[cur]==L'e' || wcs[cur]==L'E')
{
    exp_char=wcs[cur];
    cur++;

    if (wcs[cur]==L'+' || wcs[cur]==L'-')
    {
        has_expsign=true;
        if (wcs[cur]==L'-')
            exp_negative=true;
        cur++;
    }

    while (wcs[cur]>=L'0' && wcs[cur]<=L'9')
    {
        if (decexp2>=0x19999999)
            return false;
        decexp2=10*decexp2+wcs[cur]-L'0';
        exp_digs++;
        cur++;
    }

    if (exp_negative)
        decexp-=decexp2;
    else
        decexp+=decexp2;
}

// end of wcs scan, cur contains value's tail

if (value)
{
    while (value<=0x19999999)
    {
        decexp--;
        value=value*10;
    }

    if (decexp)
    {
        // ensure 1bit space for mul by something lower than 2.0
        if (value&0x80000000)
        {
            value>>=1;
            binexp++;
        }

        if (decexp>308 || decexp<-307)
            return false;

        // convert exp from 10 to 2 (using FPU)
        int E;
        double v=pow(10.0,decexp);
        double m=frexp(v,&E);
        m=2.0*m;
        E--;
        value=(unsigned long)floor(value*m);

        binexp+=E;
    }

    binexp+=23; // rebase exponent to 23 bits of mantissa


    // so the value is: +/- VALUE * pow(2,BINEXP);
    // (normalize mantissa to 24 bits, update exponent)
    while (value&0xFE000000)
    {
        value>>=1;
        binexp++;
    }
    if (value&0x01000000)
    {
        if (value&1)
            value++;
        value>>=1;
        binexp++;
        if (value&0x01000000)
        {
            value>>=1;
            binexp++;
        }
    }

    while (!(value&0x00800000))
    {
        value<<=1;
        binexp--;
    }

    if (binexp<-127)
    {
        // underflow
        value=0;
        binexp=-127;
    }
    else
    if (binexp>128)
        return false;

    //exclude "implicit 1"
    value&=0x007FFFFF;

    // encode exponent
    unsigned long exponent=(binexp+127)<<23;
    value |= exponent;
}

// encode sign
unsigned long sign=negative<<31;
value |= sign;

if (val)
{
    *(unsigned long*)val=value;
}

return true;
}
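
A minimal usage sketch (assuming the float_scan above is in scope):

#include <stdio.h>

int main(void)
{
    float f;
    if (float_scan(L"  -12.5e3", &f))
        printf("%f\n", f);   /* prints -12500.000000 */
    return 0;
}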

It seems to me you want to build (by hand) what amounts to a state machine where each state handles the Nth input digit or exponent digits; this state machine would be shaped like a tree (no loops!). The goal is to do integer arithmetic wherever possible, and (obviously) to remember state variables ("leading minus", "decimal point at position 3") in the states implicitly, to avoid assignments, stores and later fetch/tests of such values. Implement the state machine with plain old "if" statements on the input characters only (so your tree gets to be a set of nested ifs). Inline accesses to buffer characters; you don't want a function call to getchar to slow you down.

Leading zeros can simply be suppressed; you might need a loop here to handle ridiculously long leading-zero sequences. The first nonzero digit can be collected without zeroing an accumulator or multiplying by ten.

The first 4-9 nonzero digits (for 16-bit or 32-bit integers) can be collected with integer multiplies by the constant ten (turned by most compilers into a few shifts and adds). [Over the top: zero digits don't require any work until a nonzero digit is found, and then a single multiply by 10^N for N sequential zeros is required; you can wire all this into the state machine.] Digits following the first 4-9 may be collected using 32- or 64-bit multiplies depending on the word size of your machine. Since you don't care about accuracy, you can simply ignore digits after you've collected 32 or 64 bits' worth; I'd guess you can actually stop when you have some fixed number of nonzero digits, based on what your application actually does with these numbers.

A decimal point found in the digit string simply causes a branch in the state-machine tree. That branch knows the implicit location of the point and therefore how to scale by a power of ten later. With effort, you may be able to combine some state-machine sub-trees if you don't like the size of this code.
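
For flavor, a heavily truncated, hypothetical sketch of such a tree (a real version would continue the nesting for each digit position and fill in the '.' and 'E' branches as described):

/* Hypothetical, heavily truncated sketch of the nested-if tree.
   Each nesting depth *is* the state for that character position, so
   facts like "leading minus" or "point at position k" live in the
   branch taken, not in stored variables. */
double parse_tree(const char* p)
{
    if (p[0] >= '1' && p[0] <= '9')               /* first nonzero digit: */
    {
        unsigned v = (unsigned)(p[0] - '0');      /* no accumulator zeroing */
        if (p[1] >= '0' && p[1] <= '9')
        {
            v = v * 10 + (unsigned)(p[1] - '0');  /* constant mul -> shifts+adds */
            /* ... nest further for digits 3..9, then '.', then 'E' ... */
            return (double)v;
        }
        if (p[1] == '.')
        {
            /* this branch knows the point is at position 1; scale accordingly */
        }
        return (double)v;
    }
    /* ... sibling branches for '0', '-', '.' would go here ... */
    return 0.0;
}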

[Over the top: keep the integer and fractional parts as separate (small) integers. This will require an additional floating point operation at the end to combine the integer and fraction parts, probably not worth it].

[Over the top: collect 2 characters for digit pairs into a 16 bit value, lookup the 16 bit value. This avoids a multiply in the registers in trade for a memory access, probably not a win on modern machines].
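
A hypothetical sketch of that pair-lookup idea (names are illustrative; the table here is 64KB, indexed directly by the two ASCII bytes):

/* Hypothetical sketch: map two ASCII digit bytes to their two-digit value. */
static unsigned char pair_table[256 * 256];

void init_pair_table(void)
{
    int hi, lo;
    for (hi = 0; hi <= 9; hi++)
        for (lo = 0; lo <= 9; lo++)
            pair_table[(('0' + hi) << 8) | ('0' + lo)] = (unsigned char)(hi * 10 + lo);
}

/* In the scan loop: one multiply and one load instead of two multiplies. */
unsigned long collect_pair(unsigned long value, const char* p)
{
    unsigned idx = ((unsigned)(unsigned char)p[0] << 8) | (unsigned char)p[1];
    return value * 100 + pair_table[idx];
}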

On encountering "E", collect the exponent as an integer as above; look up accurately precomputed/scaled powers of ten up in a table of precomputed multiplier (reciprocals if "-" sign present in exponent) and multiply the collected mantissa. (don't ever do a float divide). Since each exponent collection routine is in a different branch (leaf) of the tree, it has to adjust for the apparent or actual location of the decimal point by offsetting the power of ten index.

[Over the top: you can avoid the cost of ptr++ if you know the characters for the number are stored linearly in a buffer and do not cross the buffer boundary. In the kth state along a tree branch, you can access the kth character as *(start+k). A good compiler can usually hide the "...+k" in an indexed offset in the addressing mode.]

Done right, this scheme does roughly one cheap multiply-add per nonzero digit, one cast-to-float of the mantissa, and one floating multiply to scale the result by exponent and location of decimal point.

I have not implemented the above. I have implemented versions of it with loops; they're pretty fast.

I remember we had a WinForms application that performed very slowly while parsing some data interchange files, and we all thought it was the DB server thrashing, but our smart boss actually found out that the bottleneck was in the call that converted the parsed strings into decimals!

The simplest approach is to loop over each digit (character) in the string, keeping a running total: multiply the total by 10, then add the value of the next digit. Keep doing this until you reach the end of the string or encounter a dot. If you encounter a dot, separate the whole-number part from the fractional part, then use a multiplier that is divided by 10 for each digit. Keep adding them up as you go.

Example: 123.456

running total = 0, add 1 (now it's 1)
running total = 1 * 10 = 10, add 2 (now it's 12)
running total = 12 * 10 = 120, add 3 (now it's 123)
encountered a dot, prepare for the fractional part
multiplier = 0.1, multiply by 4, get 0.4, add to running total, makes 123.4
multiplier = 0.1 / 10 = 0.01, multiply by 5, get 0.05, add to running total, makes 123.45
multiplier = 0.01 / 10 = 0.001, multiply by 6, get 0.006, add to running total, makes 123.456

Of course, testing for a number's correctness as well as negative numbers will make it more complicated. But if you can "assume" that the input is correct, you can make the code much simpler and faster.
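
In code, that walk-through looks roughly like this (hypothetical sketch, assuming well-formed non-negative input as described):

/* Hypothetical sketch of the running-total method for well-formed,
   non-negative input such as "123.456". */
double simple_atof(const char* s)
{
    double total = 0.0;
    double multiplier;
    while (*s >= '0' && *s <= '9')
        total = total * 10.0 + (double)(*s++ - '0');
    if (*s == '.')
    {
        s++;
        multiplier = 0.1;
        while (*s >= '0' && *s <= '9')
        {
            total += (double)(*s++ - '0') * multiplier;
            multiplier /= 10.0;
        }
    }
    return total;
}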

Have you considered looking into having the GPU do this work? If you can load the strings into GPU memory and have it process them all you may find a good algorithm that will run significantly faster than your processor.

Alternately, do it in an FPGA - There are FPGA PCI-E boards that you can use to make arbitrary coprocessors. Use DMA to point the FPGA at the part of memory containing the array of strings you want to convert and let it whizz through them leaving the converted values behind.

Have you looked at a quad core processor? The real bottleneck in most of these cases is memory access anyway...

-Adam

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow