Question

I need to write an algorithm (I cannot use any third-party library, because this is an assignment) to divide (integer division; the fractional part is not important) very large numbers, around 100-1000 digits. I found http://en.wikipedia.org/wiki/Fourier_division but I don't know if it's the right way to go. Do you have any suggestions?

This is the approach I have in mind so far:

1) check that the divisor is smaller than the dividend, otherwise the result is zero (because it is an int division)
2) start from the left of the dividend
3) take a portion of the dividend with as many digits as the divisor
4) if the divisor is still bigger than that portion, take one more digit of the dividend
5) multiply the divisor by 1-9 in a loop
6) when the product exceeds the dividend portion, the previous multiplier is the next quotient digit
7) subtract, bring down more digits, and repeat steps 3 to 6 until the end of the dividend is reached
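
A minimal sketch of these steps in Python, assuming the numbers arrive as decimal strings (the function name is mine, and Python's built-in big integers stand in for the string compare/subtract/multiply routines I would have to write myself; the point is only the control flow of the steps above):

    def divide_decimal_strings(dividend: str, divisor: str) -> str:
        """Integer division following the numbered steps above (illustrative only)."""
        a, b = int(dividend), int(divisor)
        if b == 0:
            raise ZeroDivisionError("division by zero")
        if a < b:                           # step 1: divisor bigger -> quotient is 0
            return "0"
        quotient_digits = []
        portion = 0
        for digit in dividend:              # steps 2-4: extend the portion one digit at a time
            portion = portion * 10 + int(digit)
            q = 0
            while (q + 1) * b <= portion:   # step 5: try multipliers 1..9
                q += 1                      # step 6: the largest one that still fits is the digit
            quotient_digits.append(str(q))
            portion -= q * b                # step 7: subtract and carry the rest forward
        return "".join(quotient_digits).lstrip("0") or "0"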

Solution

Knuth, Donald, The Art of Computer Programming, ISBN 0-201-89684-2, Volume 2: Seminumerical Algorithms, Section 4.3.1: The Classical Algorithms

Other tips

I'd imagine that dividing the 'long' way, like in grade school, would be a potential route. I'm assuming you are receiving the original number as a string, so you parse it digit by digit. Example:

Step 0:

   /-----------------
13 | 453453453435....

Step 1: "How many times does 13 go into 4? 0

     0
   /-----------------
13 | 453453453435....

Step 2: "How many times does 13 go into 45? 3

     03
   /-----------------
13 | 453453453435....
   - 39
     --
      6

Step 3: "How many times does 13 go into 63? 4

etc etc. With this strategy, the number can be any length; you only really need to hold the divisor and the small running remainder in machine-sized variables. You append each quotient digit to the end of your result string.

When you reach the point where no digits remain, you return your result, which is already formatted as a string (because it could potentially be larger than an int).
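A small sketch of this digit-at-a-time approach in Python (the function name is mine; it assumes the divisor fits in a machine integer, as in the example above, while the dividend stays a string of any length):

    def long_divide(dividend: str, divisor: int) -> str:
        """Grade-school long division, one decimal digit at a time (illustrative)."""
        result = []
        remainder = 0
        for ch in dividend:
            remainder = remainder * 10 + int(ch)       # bring down the next digit
            result.append(str(remainder // divisor))   # "how many times does it go in?"
            remainder %= divisor                       # keep what is left for the next step
        return "".join(result).lstrip("0") or "0"

    # long_divide("453453453435", 13) -> "34881034879", matching the steps above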

The easiest division algorithm to implement for large numbers is shift and subtract.

if numerator is less than denominator then finish
shift denominator as far left as possible while it is still smaller than numerator
set bit in quotient for amount shifted
subtract shifted denominator from numerator
repeat
the numerator is now the remainder

The shifting need not be literal. For example, you can write an algorithm to subtract a left shifted value from another value, instead of actually shifting the whole value left before subtracting. The same goes for comparison.
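Here is a rough Python sketch of the shift-and-subtract loop described above (the names are mine; Python's unbounded integers stand in for whatever multi-word representation the assignment ends up using, and the shifts are written literally even though, as noted, they don't have to be):

    def shift_subtract_divide(numerator: int, denominator: int):
        """Binary shift-and-subtract division; returns (quotient, remainder)."""
        if denominator <= 0 or numerator < 0:
            raise ValueError("sketch assumes denominator > 0 and numerator >= 0")
        quotient = 0
        while numerator >= denominator:
            # Shift the denominator as far left as possible while it still fits.
            shift = numerator.bit_length() - denominator.bit_length()
            if (denominator << shift) > numerator:
                shift -= 1
            quotient |= 1 << shift               # set the quotient bit for that shift
            numerator -= denominator << shift    # subtract the shifted denominator
        return quotient, numerator               # what is left is the remainder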

Long division is difficult to implement because one of the steps in long division is itself a division: guessing each quotient digit. If the divisor fits in an int, then you can do long division fairly easily.

You should probably try something like long division, but using computer words instead of digits.

In a high-level language, it will be most convenient to consider your "digit" to be half the size of your largest fixed-precision integer. For the long division method, you will need to handle the case where your partial intermediate result may be off by one, since your fixed-precision division can only handle the most-significant part of your arbitrary-precision divisor.
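In the simplest case, where the divisor itself fits in one word, "long division with computer words as digits" reduces to short division. A hypothetical Python sketch (the base and names are my choice, following the half-word suggestion above; a multi-word divisor would additionally need the off-by-one quotient correction just mentioned, as in Knuth's Algorithm D):

    BASE = 1 << 16   # half of a 32-bit word, so every intermediate fits in one word

    def short_divide(limbs, d):
        """Divide a little-endian list of base-2**16 digits by a single-word d."""
        quotient = [0] * len(limbs)
        remainder = 0
        for i in reversed(range(len(limbs))):   # most-significant limb first
            cur = remainder * BASE + limbs[i]   # always < BASE * d, so it stays word-sized
            quotient[i] = cur // d              # one fixed-precision division per limb
            remainder = cur % d
        return quotient, remainder

    # 0x123456789 as little-endian base-2**16 limbs, divided by 7 -> remainder 5
    q, r = short_divide([0x6789, 0x2345, 0x0001], 7)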

There are faster and more complicated means of doing arbitrary-precision arithmetic. Check out the appropriate Wikipedia page. In particular, the Newton-Raphson method, when implemented carefully, can ensure that the time performance of your division is within a constant factor of your arbitrary-precision multiplication.
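For illustration, here is one way a fixed-point Newton-Raphson division can look (the structure and names are my own, not from the answer; Python's big ints again stand in for your own representation, and a real implementation would grow the reciprocal's precision gradually instead of iterating at full width as this sketch does):

    def newton_divide(n: int, d: int) -> int:
        """floor(n / d) via a fixed-point reciprocal refined by Newton-Raphson."""
        if d <= 0 or n < 0:
            raise ValueError("sketch assumes d > 0 and n >= 0")
        if n < d:
            return 0
        k = 2 * n.bit_length()              # fraction bits of the fixed-point reciprocal
        x = 1 << (k - d.bit_length())       # crude first estimate of 2**k / d
        while True:
            # Newton step x <- 2x - d*x*x / 2**k doubles the correct bits each round.
            x_next = 2 * x - ((d * x * x) >> k)
            if x_next <= x:                 # no further progress: x is at or just past 2**k / d
                break
            x = x_next
        q = (n * x) >> k                    # multiply by the reciprocal instead of dividing
        while q * d > n:                    # the estimate can be off by one; fix it up
            q -= 1
        while (q + 1) * d <= n:
            q += 1
        return q

The quotient estimate from the reciprocal lands within one of the true value, so a constant number of fix-up steps at the end is enough.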

Unless part of your assignment was to be completely original, I would go with the algorithm I (and I assume you) were taught in grade school for doing large division by hand.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow