Question

For integers x, y, and n (only x and y are given), test whether there is an integer n such that x^n = y. If x = 8 and y = 512, then n = 3, so the statement is true. But if x = 8 and y = 500, n would have to be about 2.99 (not an integer), so the statement evaluates as false. Is using a logarithm the best method for this test?

The question "Check if one integer is an integer power of another" offers some solutions:

int n = y; while(n < x) n *= y; return n == x;

while (x%y == 0)  x = x / y;
return x == 1;

and the logarithm method (this is my version):

return ((log(y, x) % 1) == 0) // log(expression, base)

where log(y, x) means log_x(y), the base-x logarithm of y.

Which method evaluates faster, especially for big numbers?


Solution

The logarithm method needs more care, because the logarithm function has a small amount of inaccuracy due to the use of floating-point approximations. For example 3^10 = 59049, but:

log(59049, 3)
   ===> 9.999999999999998

Whether you can compensate for this (by checking to see if the answer is "close enough" to the nearest integer) depends on the range of your x and y. If y is less than 2^32, then I think the closest the logarithm can proportionately get to an integer (without the true answer being an integer) is:

1 - log(4294967295, 65536) / 2
   ===> 1.049693665322593e-11

so pick an epsilon smaller than this and you can use the logarithm method with confidence:

n = log(y, x);
e = round(n);
if (abs(1 - n / e) < epsilon) {
    /* y == x to the power of e */
} else {
    /* y not a power of x */
}
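As a concrete illustration, here is that check as a small Java sketch. The class and method names are my own, and it assumes x ≥ 2 and 2 ≤ y < 2^32 so that the epsilon of 1e-11 derived above is safe:

```java
public class LogPowerCheck {
    // Epsilon chosen per the bound above; only valid for y below 2^32.
    static final double EPSILON = 1e-11;

    static boolean isPowerLog(long x, long y) {
        double n = Math.log(y) / Math.log(x);  // log base x of y
        long e = Math.round(n);
        if (e == 0) return y == 1;             // avoid dividing by zero below
        return Math.abs(1 - n / e) < EPSILON;  // relative closeness to an integer
    }

    public static void main(String[] args) {
        System.out.println(isPowerLog(3, 59049)); // 3^10, despite log giving 9.999...998
        System.out.println(isPowerLog(8, 500));   // not a power of 8
    }
}
```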

If the allowable range for y is bigger, then you'll have to find the appropriate value for epsilon. But beware: for large enough y, there might be no suitable epsilon in double-precision floating-point. For example, if y can be as large as 2^48 − 1 then that's the case, because

log(281474976710655, 16777216)
   ===> 2.0 exactly

So for large enough y, you can't rely on the logarithm: you'll need to explicitly perform the exponentiation afterwards as a check.
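That belt-and-braces approach can be sketched as follows: use the logarithm only to pick a candidate exponent, then confirm it with exact integer arithmetic (BigInteger here is my choice, not part of the original answer):

```java
import java.math.BigInteger;

public class ExactPowerCheck {
    // Use the logarithm only as a guess, then verify exactly.
    static boolean isPower(long x, long y) {
        if (x <= 1 || y < 1) return false;  // degenerate cases left out of this sketch
        long n = Math.round(Math.log(y) / Math.log(x));  // candidate exponent
        if (n < 0) return false;
        return BigInteger.valueOf(x).pow((int) n).equals(BigInteger.valueOf(y));
    }

    public static void main(String[] args) {
        System.out.println(isPower(3, 59049));                   // true: 3^10
        // 2^48 - 1: the log comes out as exactly 2.0, but the exact check rejects it
        System.out.println(isPower(16777216, 281474976710655L)); // false
    }
}
```

A production version might also try the neighbouring exponents n − 1 and n + 1 to be safe against rounding.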

OTHER TIPS

A number n is a perfect power if there exist integers b and e (both at least 2) for which b^e = n. For instance 216 = 6^3 = 2^3 * 3^3 is a perfect power, but 72 = 2^3 * 3^2 is not. If n is a perfect power, the exponent e can be at most log2 n, because if e were larger then 2^e would exceed n. Further, it is only necessary to test prime values of e, because if a number is a perfect power with a composite exponent it is also a perfect power with each prime factor of that exponent; for instance, 2^15 = 32768 = 32^3 = 8^5 is both a perfect cube and a perfect fifth power.
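The linked blog has the full implementation; here is a minimal Java sketch of the idea. The names, and guessing the root via Math.pow with a check of the neighbouring candidates, are my own choices rather than the blog's code:

```java
import java.math.BigInteger;

public class PerfectPower {
    // Returns true if n = b^e for some integers b >= 2, e >= 2.
    static boolean isPerfectPower(long n) {
        if (n < 4) return false;
        int maxE = 63 - Long.numberOfLeadingZeros(n); // floor(log2 n)
        for (int e = 2; e <= maxE; e++) {
            if (!isPrime(e)) continue; // composite exponents are covered by their prime factors
            long b = Math.round(Math.pow(n, 1.0 / e)); // floating-point guess at the e-th root
            for (long cand = Math.max(2, b - 1); cand <= b + 1; cand++) {
                if (BigInteger.valueOf(cand).pow(e).equals(BigInteger.valueOf(n)))
                    return true;
            }
        }
        return false;
    }

    static boolean isPrime(int e) {
        for (int d = 2; d * d <= e; d++) if (e % d == 0) return false;
        return e >= 2;
    }

    public static void main(String[] args) {
        System.out.println(isPerfectPower(216));   // true: 6^3
        System.out.println(isPerfectPower(72));    // false
        System.out.println(isPerfectPower(32768)); // true: 2^15 = 32^3 = 8^5
    }
}
```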

I have an implementation at my blog.

For reasonably small inputs, I'm fairly sure your third method will work best. (You will need to be much more careful about checking for integrality, though.)

Some food for thought:

  1. You can compute a power of x that is roughly sqrt(y) very quickly (i.e. O(M(log y)) time), then mod by this power (in O(M(log y) log log y) time) to get two subproblems that are half the size.

  2. You can use fairly poor approximations to the logarithm to get fairly tight bounds on n. If I know an integer that's within a constant of log_x(y), then I only have to check a constant number of powers of x. These checks are done fastest using the "square and multiply technique" exemplified in Edward Falk's answer.

  3. Given even a fairly wide bound on n, you can use arithmetic modulo a random prime of modest size to considerably narrow down the set of candidate n. It should be possible to use arithmetic modulo several random primes of modest size together with the Chinese Remainder Theorem to narrow down the possible n to zero or one very efficiently.

Running with the second idea: Notice that I only need the top bit of y to get an approximation to log y that is off by at most a constant; everything else is gravy. You should just be able to take the logarithm of the top 53 bits using your FPU, adjust using the length of the number, and then check the closest integer in O(M(log y)) time. The third idea lets you use randomisation to check whether x^n == y in O(log y) time, which is optimal.
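For the exact check of a candidate n, here is a square-and-multiply sketch (my own code, assuming x ≥ 2 and y ≥ 1) that bails out as soon as the partial product must exceed y, which also avoids overflow:

```java
public class PowEquals {
    // Binary (square-and-multiply) exponentiation that tests x^n == y,
    // returning false early once the result must exceed y.
    static boolean powEquals(long x, long n, long y) {
        long result = 1, base = x;
        while (n > 0) {
            if ((n & 1) == 1) {
                if (result > y / base) return false; // result * base would exceed y
                result *= base;
            }
            n >>= 1;
            if (n > 0) {
                if (base > y / base) return false;   // base^2 exceeds y, but a bit is still set
                base *= base;
            }
        }
        return result == y;
    }

    public static void main(String[] args) {
        System.out.println(powEquals(8, 3, 512));   // true
        System.out.println(powEquals(8, 3, 500));   // false
        System.out.println(powEquals(2, 10, 1024)); // true
    }
}
```

This does O(log n) multiplications, which is what makes checking a handful of candidate exponents cheap.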

For large numbers, logs might be faster, but you should benchmark it. This will involve conversions to double and back, which could be a problem.

I think this might be a better solution:

// Is y a power of x? e.g. x = 8, y = 512 gives true.
static boolean isPower(long x, long y) {
    if (x <= 1) return false;
    while (y > 1) {
        // Find maximum xp = x^(2^n) with xp <= y
        long xp = x;   // x raised to some power of two
        long xp2;      // candidate for the next squaring
        while ((xp2 = xp * xp) <= y)  // note: xp*xp can overflow for y near Long.MAX_VALUE
            xp = xp2;
        if (y % xp != 0) return false;  // reject
        y /= xp;
    }
    return y == 1;
}

There are still ways this could be improved, and I've only tested a few cases, but it seems to work.

This method should solve the problem:

// needs: import java.util.Scanner;
void toCheckPower(){
    Scanner sc = new Scanner(System.in);

    int n = sc.nextInt();
    System.out.println("No whose power is checked = " + n);

    int pow = sc.nextInt();
    System.out.println("No to check as a power = " + pow);

    if (n <= 1) {  // guard: n = 1 (or less) would loop forever below
        System.out.println(pow + (pow == n ? " is" : " is not") + " a Power of " + n);
        return;
    }

    int prod = 1;  // running product n^k
    while (prod <= pow) {
        prod = prod * n;
        if (prod == pow) {
            System.out.println(pow + " is a Power of " + n);
            break;
        } else if (prod > pow) {
            System.out.println(pow + " is not a Power of " + n);
            break;
        }
    }
}

Without benchmarking, and just eyeballing what's happening, this one should be the fastest.

int n = y; while(n < x) n *= y; return n == x;

(note, however, you've reversed the meaning of x & y from your description, and n is completely different)

while (x%y == 0)  x = x / y;

this does a divide and a modulo (which is essentially a second divide), so it's doing twice as much work per iteration, but it can exit the loop earlier, so it may win.
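Wrapped up as a complete function (a sketch with my own base/value names, to avoid the x/y reversal confusion noted above):

```java
public class DivisionCheck {
    // The divide-and-modulo loop as a complete check: is value a power of base?
    static boolean isPowerOf(long base, long value) {
        if (base <= 1 || value < 1) return false; // degenerate cases need their own rules
        while (value % base == 0)
            value /= base;
        return value == 1;
    }

    public static void main(String[] args) {
        System.out.println(isPowerOf(8, 512)); // true
        System.out.println(isPowerOf(8, 500)); // false
    }
}
```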

return ((log(y, x) % 1) == 0)

This uses floating point, which is inherently risky (although floating point keeps getting better and faster on modern CPUs). Also, the exact `% 1` comparison is too strict: because of floating-point error you need to test whether the result is close to a whole number, not exactly equal to one.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow