In Python 3.3, is there a way to save a URL's contents to a file and keep those contents in memory, all in one step? I want to avoid downloading the file twice (wasted bandwidth), and I also want to avoid downloading the file to disk and then reading it back from disk (wasted disk I/O).

To download it into memory I see this:

import urllib.request
myurl = 'http://blahblah/acsvfile.csv'
myreq = urllib.request.urlopen(myurl)
mydata = myreq.read()

and to download it straight to disk I see this (which, it appears, may soon be deprecated):

urllib.request.urlretrieve(myurl, myfilename)

But I can't see how to get urlopen to also save a copy of the file to disk, or how to get urlretrieve to also give me the URL's data in memory.

Thanks in advance for any ideas.

Solution

Just write mydata to a file:

import urllib.request
myurl = 'http://blahblah/acsvfile.csv'
myreq = urllib.request.urlopen(myurl)
mydata = myreq.read()          # single download: the bytes are now in memory
with open('mydata.csv', 'wb') as ofile:
    ofile.write(mydata)        # write those same bytes to disk; no second request
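
If the file could be large, one option is to stream the response in chunks, writing each chunk to disk while also keeping it in memory, so the payload still crosses the network only once. Here is a minimal sketch, assuming a saved filename of acsvfile.csv and a 64 KiB chunk size (both arbitrary choices, not from the original post):

import io
import urllib.request

myurl = 'http://blahblah/acsvfile.csv'

buf = io.BytesIO()
with urllib.request.urlopen(myurl) as myreq, open('acsvfile.csv', 'wb') as ofile:
    while True:
        chunk = myreq.read(64 * 1024)   # read up to 64 KiB at a time
        if not chunk:                   # empty bytes means end of stream
            break
        buf.write(chunk)                # keep a copy in memory
        ofile.write(chunk)              # and stream the same bytes to disk

mydata = buf.getvalue()                 # full payload, downloaded only once

io.BytesIO could be swapped for a plain bytearray; the point is simply that read() can be called repeatedly with a size argument until it returns empty bytes.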