Question

I need to compare a few CDN services, so I wrote a short Python script that repeatedly sends GET requests to resources deployed on these CDNs and records the round-trip time. I run the script on several PCs in different cities. This is how I did it:

import time
import requests

t0 = time.clock()  # note: time.clock() is deprecated and was removed in Python 3.8
r = requests.get(test_cdn_url)
t1 = time.clock()
roundtrip = t1 - t0  # in seconds

For most requests, the round-trip time is within 1 second (200-500 ms), but occasionally it reports a request that finishes in several seconds: 3-5 seconds, and once 9 seconds.

Is this just the way it is, or am I using the wrong tool for measuring? In other words, does the requests library do something (caching or some heavyweight operation) that makes the metric totally wrong?


Solution

The Response object provides an elapsed attribute:

The amount of time elapsed between sending the request and the arrival of the response (as a timedelta)

Your code would then look like:

import requests

r = requests.get(test_cdn_url)
roundtrip = r.elapsed.total_seconds()
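
Note that elapsed measures the time up to the point where the response headers have finished parsing, so it excludes the time spent downloading the body. If you want the full round trip including the body, here is a minimal sketch (the URL is a placeholder, and time.perf_counter is used as a wall-clock timer):

import time
import requests

test_cdn_url = "https://cdn.example.com/asset.js"  # placeholder URL

samples = []
for _ in range(20):
    t0 = time.perf_counter()   # monotonic wall-clock timer
    r = requests.get(test_cdn_url)
    _ = r.content              # force the full body to be read
    t1 = time.perf_counter()
    samples.append(t1 - t0)

samples.sort()
print(f"median: {samples[len(samples) // 2] * 1000:.0f} ms, max: {samples[-1] * 1000:.0f} ms")

Collecting several samples and looking at the median and the maximum makes the occasional multi-second outlier easy to spot.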

Other Tips

If you're worried that requests is doing anything heavyweight (or caching), you could always use urllib directly (urllib.request in Python 3). Note that the timer has to start before urlopen, since urlopen already sends the request and receives the response headers:

import time
import urllib.request

t0 = time.time()                  # start before the connection is opened
nf = urllib.request.urlopen(url)
page = nf.read()                  # read the full response body
t1 = time.time()
nf.close()

roundtrip = t1 - t0               # in seconds, including connect, headers and body

Alternatively, if you include a Cache-Control: no-cache header with your request, intermediate caches should not serve a stored copy without revalidating, and your original code should time the request effectively.
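
For example, a minimal sketch (the URL is a placeholder; note that the CDN edge may still answer from its own cache, depending on its configuration):

import requests

# ask intermediate caches not to serve a stored copy without revalidating
headers = {"Cache-Control": "no-cache"}
r = requests.get("https://cdn.example.com/asset.js", headers=headers)
print(r.elapsed.total_seconds())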

Licensed under: CC-BY-SA with attribution