I'm writing a program that downloads files anywhere up to 1 GB in size. Right now I'm using the requests package to download files, and although it works (I think it times out sometimes) it is very slow. I've seen some multi-part download examples using urllib2, but I'm looking for a way to use urllib3 or requests, if either package has that ability.

No correct solution

Other tips

How closely have you looked at the requests documentation?

The Quickstart documentation describes the following:

r = requests.get(url, stream=True)
r.raw.read(amount)

The better way to do this, however, is:

with requests.get(url, stream=True) as r, open(filename, 'wb') as fd:
    for chunk in r.iter_content(chunk_size=amount):
        fd.write(chunk)

(Assuming you are saving the downloaded content to a file.)
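To make the pattern above concrete, here is a minimal, self-contained sketch of a streaming download with requests. The `download` helper, the `PAYLOAD` constant, and the local test server are all illustrative assumptions, not part of the requests API; the local server just stands in for a real URL so the example runs without network access. The `timeout=(connect, read)` tuple also addresses the intermittent timeouts mentioned in the question:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

# Fake "large" file served locally so the demo is self-contained.
PAYLOAD = b"x" * 100_000


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(PAYLOAD)))
        self.end_headers()
        self.wfile.write(PAYLOAD)

    def log_message(self, *args):
        pass  # silence per-request logging


def download(url, filename, chunk_size=8192, timeout=(5, 30)):
    """Stream a URL to disk without loading the whole body into memory."""
    # stream=True defers the body download; iter_content pulls it in chunks.
    with requests.get(url, stream=True, timeout=timeout) as r:
        r.raise_for_status()
        with open(filename, "wb") as fd:
            for chunk in r.iter_content(chunk_size=chunk_size):
                fd.write(chunk)


# Spin up the throwaway local server on an ephemeral port and download from it.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
download(f"http://127.0.0.1:{server.server_port}/file", "downloaded.bin")
server.shutdown()
```

Because each chunk is written to disk as it arrives, peak memory use stays around `chunk_size` regardless of how large the file is.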

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow