Question

In Python 3.3, is there a way to save a URL to a file and keep its contents in memory, all in one step? I want to avoid downloading the file twice (wasted bandwidth), and also avoid downloading it to disk and then reading it back from disk (wasted disk IO).

To download it into memory I see this:

import urllib.request
myurl = 'http://blahblah/acsvfile.csv'
myreq = urllib.request.urlopen(myurl)
mydata = myreq.read()

and to download it straight to disk I see this (which the docs describe as a legacy interface that might become deprecated in the future):

urllib.request.urlretrieve(myurl, myfilename)

But I can't see how to get urlopen to also save a copy of the file to disk, or urlretrieve to also let me read the URL's data.

Thanks in advance for any ideas.


Solution

Just write mydata to a file. After read() you already have the entire download in memory, so writing those same bytes out costs no extra bandwidth and no second download:

import urllib.request
myurl = 'http://blahblah/acsvfile.csv'
myreq = urllib.request.urlopen(myurl)
mydata = myreq.read()
# mydata now holds the entire file in memory; write the same bytes to disk
# (the URL points at a CSV, so use a .csv filename rather than .html)
with open('mydata.csv', 'wb') as ofile:
    ofile.write(mydata)
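
If the file might be large, one variant (just a sketch; the 64 KiB chunk size and the filenames are arbitrary choices, not anything from the question) is to stream the response in chunks, writing each chunk to disk as it arrives while also collecting the chunks in memory:

import urllib.request
myurl = 'http://blahblah/acsvfile.csv'
chunks = []
with urllib.request.urlopen(myurl) as myreq:
    with open('acsvfile.csv', 'wb') as ofile:
        while True:
            # read the response in 64 KiB pieces (size is an arbitrary choice)
            chunk = myreq.read(64 * 1024)
            if not chunk:
                break          # empty read means the download is finished
            ofile.write(chunk)   # copy this piece to disk as it arrives
            chunks.append(chunk) # and keep it in memory as well
mydata = b''.join(chunks)

Either way the URL is fetched exactly once and the file is written to disk exactly once; the chunked loop just lets the disk write proceed while the download is still in progress instead of waiting for one big read() to complete.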