This code changed multiple times while I was experimenting with it, but one problem with the current version is that since factFactors is a generator, in
x = factFactors(100)
print list(x), reduce(op.mul, [p**e for p, e in x], 1)==math.factorial(100)
calling list will exhaust the generator, so reduce has nothing to act on. Use x = list(factFactors(100)) instead.
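To see the exhaustion effect in isolation, here is a Python 3 sketch with a stand-in generator (fake_factors is hypothetical, not the original factFactors; functools.reduce stands in for the Python 2 builtin):

```python
import math
from functools import reduce
from operator import mul

def fake_factors(n):
    # stand-in for factFactors: yields (prime, exponent) pairs for n!
    for pair in [(2, 3), (3, 1)]:  # 2**3 * 3 == 24 == 4!
        yield pair

x = fake_factors(4)
print(list(x))                   # first pass exhausts the generator
leftover = [p**e for p, e in x]  # second pass sees nothing
print(leftover)                  # [] -- so reduce gets an empty list

x = list(fake_factors(4))        # materialize once instead
print(reduce(mul, [p**e for p, e in x], 1) == math.factorial(4))
```

With the list, both the print and the reduce see the same pairs, and the comparison comes out True.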
-
After correcting the result/results typo (well, the one that existed when I started writing this!) I can't run the code:
~/coding$ python2.7 factbug4.py
factbug4.py:31: RuntimeWarning: overflow encountered in long_scalars
print x, reduce(lambda a, b: a*b, [p**e for p, e in x], 1)==math.factorial(100)
[(2, 97), (3, 48), (5, 24), (7, 16), (11, 9), (13, 7), (17, 5), (19, 5), (23, 4), (29, 3), (31, 3), (37, 2), (41, 2), (43, 2), (47, 2), (53, 1), (59, 1), (61, 1), (67, 1), (71, 1), (73, 1), (79, 1), (83, 1), (89, 1), (97, 1)]
Traceback (most recent call last):
File "factbug4.py", line 31, in <module>
print x, reduce(lambda a, b: a*b, [p**e for p, e in x], 1)==math.factorial(100)
File "factbug4.py", line 31, in <lambda>
print x, reduce(lambda a, b: a*b, [p**e for p, e in x], 1)==math.factorial(100)
TypeError: unsupported operand type(s) for *: 'long' and 'numpy.int32'
but it does hint at what the problem probably is. (Since the code won't run for me, I can't be certain, but I'm reasonably sure.) Most of the elements returned by primes aren't Python arbitrary-precision integers but limited-range numpy integers:
>>> primes(10)
[2, 3, 5, 7]
>>> map(type, primes(10))
[<type 'int'>, <type 'numpy.int32'>, <type 'numpy.int32'>, <type 'numpy.int32'>]
and operations on those can overflow. If I convert p and e to int:
print x, reduce(lambda a, b: a*b, [int(p)**int(e) for p, e in x], 1)==math.factorial(100)
I get
[(2, 97), (3, 48), (5, 24), (7, 16), (11, 9), (13, 7),
(17, 5), (19, 5), (23, 4), (29, 3), (31, 3), (37, 2),
(41, 2), (43, 2), (47, 2), (53, 1), (59, 1), (61, 1),
(67, 1), (71, 1), (73, 1), (79, 1), (83, 1), (89, 1), (97, 1)] True
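The same fix can be checked directly in Python 3 (the factor list below is the one printed above; functools.reduce stands in for the Python 2 builtin):

```python
import math
from functools import reduce

# prime factorization of 100! as printed above
factors = [(2, 97), (3, 48), (5, 24), (7, 16), (11, 9), (13, 7),
           (17, 5), (19, 5), (23, 4), (29, 3), (31, 3), (37, 2),
           (41, 2), (43, 2), (47, 2), (53, 1), (59, 1), (61, 1),
           (67, 1), (71, 1), (73, 1), (79, 1), (83, 1), (89, 1), (97, 1)]

# int() coerces any numpy scalar to an arbitrary-precision Python int,
# so the running product can no longer overflow
product = reduce(lambda a, b: a * b, [int(p)**int(e) for p, e in factors], 1)
print(product == math.factorial(100))  # True
```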
If you want the convenience of numpy array indexing with arbitrary precision, you can use a dtype of object, i.e.
>>> np.arange(10,dtype=object)
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=object)
but honestly, I'd recommend not using numpy here at all.
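For completeness, a small sketch of what dtype=object buys you, assuming numpy is installed: the array elements are ordinary Python ints, so arithmetic on them never overflows.

```python
import math
import numpy as np

# an object array holds Python ints, not fixed-width numpy integers
a = np.arange(1, 101, dtype=object)

# the product is computed with Python's arbitrary-precision ints,
# so it is exactly 100! rather than an overflowed fixed-width value
print(a.prod() == math.factorial(100))  # True
```

This works, but you pay for it: object arrays lose numpy's vectorized speed, which is part of why plain Python is the simpler choice here.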