You are using the wrong approach to time execution differences. Use the timeit module instead: it picks the most accurate clock available for your system, runs the statement in an efficient loop for repeated testing, and disables the garbage collector so that collection pauses don't distort the timings. Using timeit you'll find that for a single-digit input, your method is faster:
>>> import timeit
>>> def manual(n):
...     x = 0
...     for i in n:
...         x = ord(i) - 48 + 10 * x
...
>>> def using_int(n):
...     int(n)
...
>>> timeit.timeit('manual("5")', 'from __main__ import manual')
0.7053060531616211
>>> timeit.timeit('using_int("5")', 'from __main__ import using_int')
0.9772920608520508
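Absolute timings like these vary between machines and between runs; if you want more stable numbers, a common pattern (standard timeit practice, not part of the measurements above) is to repeat the measurement and take the minimum, since the fastest run is the one with the least outside interference:

>>> min(timeit.repeat('manual("5")', 'from __main__ import manual', repeat=5))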
However, a large input string slows the manual approach to a crawl; I tried this with 1000 digits first, but ran out of patience after 10 minutes. This is with just 50 digits:
>>> timeit.timeit('manual("5"*50)', 'from __main__ import manual')
15.68298888206482
>>> timeit.timeit('using_int("5"*50)', 'from __main__ import using_int')
1.5522758960723877
int() now beats the manual approach by a factor of 10; the manual loop goes back through the interpreter for every character, building a new intermediate integer each time, while int() performs the whole conversion in a single C call.
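To see where the crossover lies on your own machine, a quick sketch like this times both functions over a range of sizes (the sizes and the loop count are arbitrary choices here; building the test string in the setup keeps that work out of the timed statement):

>>> for size in (1, 5, 10, 50):
...     setup = 'from __main__ import manual, using_int; s = "5" * %d' % size
...     m = timeit.timeit('manual(s)', setup, number=100000)
...     u = timeit.timeit('using_int(s)', setup, number=100000)
...     print('%2d digits: manual %.3fs, int() %.3fs' % (size, m, u))
...

You should see the manual version ahead only for the shortest inputs, consistent with the numbers above.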