Question

I'm writing a program that downloads files anywhere up to 1 GB in size. Right now I'm using the requests package to download files, and although it works (I think it sometimes times out), it is very slow. I've seen some multi-part download examples using urllib2, but I'm looking for a way to do this with urllib3 or requests, if that package has the ability.

There is no accepted solution

Other tips

How closely have you looked at requests' documentation?

The Quickstart documentation describes the following:

import requests

r = requests.get(url, stream=True)  # stream=True defers downloading the body
r.raw.read(amount)  # read `amount` bytes from the underlying urllib3 response
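
Note that r.raw is the underlying urllib3 response, so reading from it bypasses requests' content decoding (e.g. gzip). If you do want the raw bytes written straight to disk, a common pattern is to pipe them with shutil.copyfileobj; this is a sketch added here, not part of the quoted answer, and it assumes url and filename are defined elsewhere:

import shutil
import requests

# Copy the raw, undecoded body straight to disk in buffered chunks.
with requests.get(url, stream=True) as r:
    with open(filename, 'wb') as fd:
        shutil.copyfileobj(r.raw, fd)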

The better way, however, is usually to let requests decode the content and iterate over it in chunks:

import requests

with requests.get(url, stream=True) as r:
    with open(filename, 'wb') as fd:
        # iter_content yields the decoded body in chunks of roughly `amount` bytes
        for chunk in r.iter_content(amount):
            fd.write(chunk)

(Assuming you are saving the downloaded content to a file.)
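As for the multi-part download the question asks about: requests has no built-in support for it, but you can approximate one with HTTP Range requests. Below is a minimal sketch, assuming the server honors Range (responds with 206 Partial Content) and reports Content-Length on a HEAD request; download_ranged, the four-worker split, and the chunk size are illustrative choices, not an established API:

import concurrent.futures
import requests

def download_ranged(url, dest, workers=4):
    # Ask the server for the total size; a HEAD request avoids downloading the body.
    head = requests.head(url, allow_redirects=True, timeout=30)
    size = int(head.headers['Content-Length'])

    # Pre-allocate the output file so each worker can write into its own slice.
    with open(dest, 'wb') as fd:
        fd.truncate(size)

    def fetch(start, end):
        # Range is inclusive on both ends; a robust version would
        # also verify that the server returned 206, not a full 200 body.
        headers = {'Range': f'bytes={start}-{end}'}
        r = requests.get(url, headers=headers, stream=True, timeout=30)
        r.raise_for_status()
        with open(dest, 'r+b') as fd:
            fd.seek(start)
            for chunk in r.iter_content(1024 * 1024):
                fd.write(chunk)

    # Split the file into `workers` contiguous byte ranges and fetch them in parallel.
    step = size // workers
    ranges = [(i * step, size - 1 if i == workers - 1 else (i + 1) * step - 1)
              for i in range(workers)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda rng: fetch(*rng), ranges))

Whether this is actually faster than a single streamed GET depends on the server and the network; the timeout arguments also help with the stalls the question mentions.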

License: CC-BY-SA with attribution