If you want to download URLs to different files sequentially (no concurrent connections):
    import pycurl

    c = pycurl.Curl()
    for i, url in enumerate(urls):
        c.setopt(pycurl.URL, url)
        # pycurl writes bytes, so the file must be opened in binary mode
        with open("output%d.html" % i, "wb") as f:
            c.setopt(c.WRITEDATA, f)  # c.setopt(c.WRITEFUNCTION, f.write) also works
            c.perform()
    c.close()
Note:

- `storage.getvalue()` returns everything that was written to `storage` from the moment it was created. In your case, it contains the output from multiple URLs concatenated together.
- `open(filename, "w")` overwrites the file (previous content is gone), i.e., `update.html` contains whatever is in `content` on the last iteration of the loop.
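Both behaviours are easy to demonstrate without pycurl, since a `BytesIO` buffer accumulates writes the same way regardless of who calls `write()` (the buffer contents and file path below are made up for illustration):

```python
import io
import os
import tempfile

# getvalue() returns everything written since the buffer was created:
storage = io.BytesIO()
storage.write(b"response from url 1\n")
storage.write(b"response from url 2\n")
print(storage.getvalue())  # both responses, concatenated

# open(filename, "w") truncates the file each time, so only the
# last write of the loop survives:
path = os.path.join(tempfile.mkdtemp(), "update.html")
for content in ("first page", "last page"):
    with open(path, "w") as f:
        f.write(content)
with open(path) as f:
    print(f.read())  # "last page"
```

If you want one file per URL, open a fresh file inside the loop (as in the snippet above); if you want everything in one buffer, create the `BytesIO` once before the loop.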