Question

I am working on an appliance that runs an old version of Python (2.5.2).

I'm writing a script that needs to read a web page, but I can't use the usual libraries: urllib, urllib2, and requests are not available.

How did people fetch web pages in the olden days?

I could shell out to wget or curl, but I'd prefer to stick to Python if possible. I also need to go through a proxy, which may force me into system calls anyway.


Solution

If you really want to do it old-school entirely within Python but without urllib, then you'll have to use socket and implement a tiny subset of HTTP/1.0 to fetch the page, as sketched below. Jumping through the hoops to get through a proxy will be really painful, though.
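Here is a minimal sketch of that approach, using only the socket module (available on 2.5.2). The host, path, and proxy values are hypothetical placeholders, not from the original question:

```python
import socket

def http_get(host, path, port=80, proxy_host=None, proxy_port=8080):
    """Fetch a page with a bare-bones HTTP/1.0 GET request."""
    if proxy_host:
        # Through a plain HTTP proxy: connect to the proxy and put the
        # absolute URL in the request line instead of just the path.
        connect_to = (proxy_host, proxy_port)
        request_target = 'http://%s%s' % (host, path)
    else:
        connect_to = (host, port)
        request_target = path

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(connect_to)
    try:
        sock.sendall('GET %s HTTP/1.0\r\n'
                     'Host: %s\r\n'
                     'Connection: close\r\n'
                     '\r\n' % (request_target, host))
        # HTTP/1.0 with Connection: close -- read until the server hangs up.
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    finally:
        sock.close()

    response = ''.join(chunks)
    # Split the status line and headers from the body.
    headers, _, body = response.partition('\r\n\r\n')
    return headers, body

headers, body = http_get('example.com', '/')
print body
```

Note this only covers the easy case: a forwarding proxy for plain HTTP just takes the absolute URL in the request line. Proxy authentication, redirects, chunked responses, or HTTPS CONNECT tunnelling are where the real pain starts.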

Use wget or curl and save yourself a few days of debugging.
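If you go that route, you can still drive it from Python via subprocess, which is in the standard library as of 2.4. A rough sketch, assuming curl is installed on the appliance and using a hypothetical proxy address:

```python
import subprocess

def fetch(url, proxy=None):
    cmd = ['curl', '--silent', '--show-error']
    if proxy:
        cmd += ['--proxy', proxy]  # e.g. 'http://proxy.example.com:8080'
    cmd.append(url)
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError('curl failed: %s' % err.strip())
    return out

page = fetch('http://example.com/', proxy='http://proxy.example.com:8080')
```

This keeps the script in Python while letting curl handle the proxy, redirects, and the rest of HTTP for you.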
