Question

How can I fetch a URL from inside my spider and extract something from the page via HtmlXPathSelector, when the URL is a string I build in the code rather than a link the spider follows?

I tried something like this:

import urllib2
from scrapy.selector import HtmlXPathSelector

req = urllib2.Request('http://www.example.com/' + some_string + '/')
req.add_header('User-Agent', 'Mozilla/5.0')
response = urllib2.urlopen(req)
hxs = HtmlXPathSelector(response)  # fails: this is not a Scrapy Response object

but it throws this exception:

[Failure instance: Traceback: <type 'exceptions.AttributeError'>: addinfourl instance has no attribute 'encoding'

Solution

You will need to construct a scrapy.http.HtmlResponse object with body=urllib2.urlopen(req).read(). But why exactly do you need urllib2 instead of returning the request with a callback?

OTHER TIPS

Scrapy is not explicit about how to do unit testing; I don't recommend using Scrapy to crawl data if you want to unit test each spider.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow