Question

I found that you can't read from some sites using Python's urllib2 (or urllib). An example:

urllib2.urlopen("http://www.dafont.com/").read()
# Returns ''

These sites work when you visit them in a browser, and I can even scrape them using PHP (I didn't try other languages). I have seen other sites with the same issue, but I can't remember the URLs at the moment.

My questions are...

  1. What is the cause of this issue?
  2. Any workarounds?

Solution

I believe the request is being blocked based on the User-Agent header. You can set a custom User-Agent with the following sample code:

import urllib2

USERAGENT = 'something'  # e.g. a browser-like string
HEADERS = {'User-Agent': USERAGENT}

req = urllib2.Request(URL_HERE, headers=HEADERS)  # URL_HERE is the target URL
f = urllib2.urlopen(req)
s = f.read()
f.close()
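For what it's worth, on Python 3 the `urllib2` module was folded into `urllib.request`; a minimal equivalent sketch (the User-Agent string here is just a placeholder, not a requirement):

```python
import urllib.request

# A browser-like User-Agent; any non-default string may be enough.
headers = {'User-Agent': 'Mozilla/5.0'}

# Build the request with custom headers; urllib.request stores
# header names with only the first letter capitalized.
req = urllib.request.Request('http://www.dafont.com/', headers=headers)

# urllib.request.urlopen(req).read() would then fetch the page body.
```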

OTHER TIPS

Try setting a different user agent. Check the answers in this link.

I'm the guy who posted the question. I have some suspicions - but not sure about them - that's why I posted the question here.

What is the cause of this issue?

I think it's due to the host blocking the urllib library, maybe via robots.txt or .htaccess. But I'm not sure about that. Not even sure if it's possible.
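One note on that suspicion: robots.txt is only advisory, and urllib doesn't consult it automatically, so it can't by itself cause an empty response; the blocking would have to happen server-side (e.g. by User-Agent). You can still check a site's rules locally with the stdlib `urllib.robotparser` (the rules below are made-up sample data):

```python
from urllib.robotparser import RobotFileParser

# Parse a sample robots.txt directly from lines instead of fetching it.
rules = """\
User-agent: *
Disallow: /private
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# can_fetch(useragent, url) reports whether the rules *permit* a fetch;
# nothing enforces this on the client side.
allowed_private = rp.can_fetch('*', 'http://example.com/private/page')
allowed_public = rp.can_fetch('*', 'http://example.com/public/page')
```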

Any workaround for this issue?

If you are on Unix, this will work (the `commands` module is Python 2 only):

import commands
contents = commands.getoutput("curl -s '" + url + "'")
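A safer variant of that workaround, sketched with the stdlib `subprocess` module (works on Python 3, where `commands` was removed): passing the URL as its own argv element avoids the shell-quoting problems of string concatenation. The `fetch_with_curl` helper and the Mozilla/5.0 User-Agent are illustrative choices, and this of course still requires `curl` to be installed.

```python
import subprocess

def build_curl_cmd(url, user_agent='Mozilla/5.0'):
    # argv list form: no shell is involved, so quotes or spaces
    # in the URL cannot break the command or inject anything.
    return ['curl', '-s', '-A', user_agent, url]

def fetch_with_curl(url):
    # check=True raises CalledProcessError if curl exits non-zero.
    result = subprocess.run(build_curl_cmd(url),
                            capture_output=True, text=True, check=True)
    return result.stdout
```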
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow