Question

I need to implement 1024-bit math operations in C. I implemented a simple BigInteger library where the integer is stored as an array "typedef INT UINT1024[400]", where each element represents one decimal digit. It turned out to be so slow that I decided to implement the BigInteger as a 1024-bit array of UINT64: "typedef UINT64 UINT1024[16]".

So, for example, the number 1000 is represented as {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1000}, 18446744073709551615 as {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0xFFFFFFFFFFFFFFFF}, and 18446744073709551616 as {0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0}.

I started with writing the function to convert a char array number to a UINT1024 and a UINT1024 back to a char array; it worked for numbers <= 0xFFFFFFFFFFFFFFFF. Here's what I did:

void UINT1024_FROMSTRING(UINT1024 Integer, const char szInteger[], UINT Length)
{
    int c = 15;
    UINT64 Result = 0, Operation, Carry = 0;
    UINT64 Temp = 1;

    while (Length--)
    {
        Operation = (szInteger[Length] - '0') * Temp;
        Result   += Operation + Carry;
        /* Overflow? */
        if (Result < Operation || Temp == 1000000000000000000)
        {
            Carry  = Result - Operation;
            Result = 0;
            Integer[c--] = 0;
            Temp = 1;
        }
        else Carry = 0;

        Temp *= 10;
    }

    if (Result || Carry)
    {
        /* I DONT KNOW WHAT TO DO HERE ! */
    }

    while (c--) Integer[c] = 0;
}

So please, how can I implement it? Is it possible to implement it using UINT64 for speed, or should I just stick with one decimal digit per array element, which is very slow for 1024-bit operations?

PS: I can't use any existing library!

Thanks in advance!


Update: I still can't figure out how to do the multiplication. I am using this function:

void _uint128_mul(UINT64 u, UINT64 v, UINT64 *ui64Hi, UINT64 *ui64Lo)
{
    UINT64 ulo, uhi, vlo, vhi, k, t;
    UINT64 wlo, whi, wt;

    /* Split both operands into 32-bit halves. */
    uhi = u >> 32;
    ulo = u & 0xFFFFFFFF;
    vhi = v >> 32;
    vlo = v & 0xFFFFFFFF;

    /* Accumulate the four 32x32 partial products, carrying between halves. */
    t   = ulo * vlo;
    wlo = t & 0xFFFFFFFF;
    k   = t >> 32;

    t   = uhi * vlo + k;
    whi = t & 0xFFFFFFFF;
    wt  = t >> 32;

    t   = ulo * vhi + whi;
    k   = t >> 32;

    *ui64Lo = (t << 32) + wlo;
    *ui64Hi = uhi * vhi + wt + k;
}

Then

void multiply(uint1024_t dUInteger, uint1024_t UInteger)
{
    int i = 16;
    UINT64 lo, hi, Carry = 0;

    while (i--)
    {
        _uint128_mul(dUInteger[i], UInteger[15], &hi, &lo);
        dUInteger[i] = lo + Carry;
        Carry = hi;
    }
}

I really need some help with this. Thanks in advance!


Solution

You need to implement two functions for your UINT1024 type: multiply by a small integer and add a small integer. Then, for each digit you convert, multiply the value accumulated so far by 10 and add the value of the digit.
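
A minimal sketch of that approach, assuming the big-endian limb layout from the question (element 15 holds the least significant 64 bits) and reusing the 64x64 -> 128 helper _uint128_mul from the update; the names uint1024_mul_small and uint1024_add_small are made up for illustration:

    #include <stdint.h>

    typedef uint64_t UINT1024[16];  /* element 15 is the least significant limb */

    /* 64x64 -> 128 multiply, as posted in the update above. */
    void _uint128_mul(uint64_t u, uint64_t v, uint64_t *hi, uint64_t *lo);

    /* Multiply the whole number by a small factor (e.g. 10), propagating the carry. */
    static void uint1024_mul_small(UINT1024 n, uint64_t factor)
    {
        uint64_t carry = 0;
        for (int i = 15; i >= 0; i--) {
            uint64_t hi, lo;
            _uint128_mul(n[i], factor, &hi, &lo);
            lo += carry;
            if (lo < carry) hi++;           /* carry out of the low half */
            n[i] = lo;
            carry = hi;
        }
        /* A non-zero carry here means the value no longer fits in 1024 bits. */
    }

    /* Add a small value (a single decimal digit) to the number. */
    static void uint1024_add_small(UINT1024 n, uint64_t value)
    {
        for (int i = 15; i >= 0 && value != 0; i--) {
            uint64_t sum = n[i] + value;
            value = (sum < n[i]) ? 1 : 0;   /* carry into the next limb */
            n[i] = sum;
        }
    }

    void UINT1024_FROMSTRING(UINT1024 n, const char *s, unsigned length)
    {
        for (int i = 0; i < 16; i++) n[i] = 0;
        while (length--) {
            uint1024_mul_small(n, 10);
            uint1024_add_small(n, (uint64_t)(*s++ - '0'));
        }
    }

With this structure the string is consumed from the most significant digit onwards, so the partial value is always a correct prefix of the final number and no per-digit power of ten is needed.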

OTHER TIPS

Writing, debugging, defining test cases, and checking that they all work right is a huge undertaking. Just get one of the packaged multiprecision arithmetic libraries, like GMP, or perhaps NTL or CLN for C++. There are other alternatives; trawl the web. Jörg Arndt's Matters Computational gives source code in C++.

If you are doing this for your education, you should take the middle road between your two previous approaches. Pack more bits into each leaf (digit) than a single decimal digit, but do not use the full bit range of the integer type.

The reason is that this can significantly simplify the multiplication: at first just accumulate the products a[i]*b[j] in c[i+j], and only afterwards normalize the result to the fixed digit range. c has length 2N-1, and this should fit into 1024 bits, so a and b are restricted to 512 bits.

If the arrays a and b hold N digits with maximum value B-1, where B = 2^b, then the largest of the c[k] is c[N-1], with bound N*(B-1)^2. Thus the design constraints are

(2N) * b >= 1024
ld(N) + 2b <= 64

b   N   2N*b    ld(N)+2b

32  16  1024    68
24  22  1056    53
28  19  1064    61

So one possibility is to set b = 28, B = 1<<28, with N = 19 limbs.
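
A rough sketch of that choice (the names and exact sizes are assumptions): 28-bit limbs, B = 2^28, N = 19, stored least-significant-first in 32-bit words (the opposite limb order to the question's UINT1024), with the products accumulated first and normalized afterwards as described above:

    #include <stdint.h>

    #define LIMB_BITS 28
    #define LIMB_MASK ((1u << LIMB_BITS) - 1)   /* B - 1, with B = 2^28 */
    #define N 19                                 /* 19 * 28 = 532 bits per operand */

    /* Schoolbook multiplication: first accumulate the raw products a[i]*b[j]
       in c[i+j], then normalize.  Because each limb uses only 28 bits,
       even N products summed into one 64-bit accumulator cannot overflow
       (ld(19) + 2*28 < 64), so the inner loop needs no carry handling. */
    void mul(const uint32_t a[N], const uint32_t b[N], uint32_t result[2 * N])
    {
        uint64_t c[2 * N] = {0};

        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                c[i + j] += (uint64_t)a[i] * b[j];   /* delayed carries */

        /* Normalization pass: reduce every accumulator back to the 28-bit digit range. */
        uint64_t carry = 0;
        for (int k = 0; k < 2 * N; k++) {
            uint64_t t = c[k] + carry;
            result[k] = (uint32_t)(t & LIMB_MASK);
            carry = t >> LIMB_BITS;
        }
    }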


Even more suited for educational purposes would be to set B=10^d, e.g. with d=9, so that conversion from and to string is relatively trivial.
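
For instance, a rough sketch of that decimal layout (the limb count and helper names here are just assumptions): with B = 10^9, converting a decimal string is only a matter of cutting it into 9-digit groups from the right, and printing is the reverse:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define DEC_BASE  1000000000u  /* B = 10^9: each limb holds 9 decimal digits */
    #define DEC_LIMBS 35           /* 35 * 9 = 315 decimal digits, enough for 1024 bits */

    /* Decimal string -> base-10^9 limbs (least significant limb first). */
    void dec_from_string(uint32_t n[DEC_LIMBS], const char *s)
    {
        memset(n, 0, DEC_LIMBS * sizeof n[0]);
        int len = (int)strlen(s);
        for (int limb = 0; limb < DEC_LIMBS && len > 0; limb++) {
            int start = len > 9 ? len - 9 : 0;
            uint32_t value = 0;
            for (int i = start; i < len; i++)
                value = value * 10 + (uint32_t)(s[i] - '0');
            n[limb] = value;
            len = start;
        }
    }

    /* Base-10^9 limbs -> stdout: the top limb prints as-is, the rest padded to 9 digits. */
    void dec_print(const uint32_t n[DEC_LIMBS])
    {
        int top = DEC_LIMBS - 1;
        while (top > 0 && n[top] == 0) top--;
        printf("%u", n[top]);
        for (int limb = top - 1; limb >= 0; limb--)
            printf("%09u", n[limb]);
        printf("\n");
    }

The price is that the base is no longer a power of two, so the arithmetic routines have to carry at 10^9 instead of at a bit boundary.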

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow