Basically, at the very least you can send a `User-Agent` header so your request looks like it comes from a regular browser:

```python
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:20.0) Gecko/20100101 Firefox/20.0'}
response = requests.get(url, headers=headers)  # url is the page you are scraping
```
Besides `requests`, you can simulate a real user by using selenium: it drives a real browser, so there is no easy way to distinguish your automated client from other users. Selenium can also use a "headless" browser (one that runs without a visible window).
Also, check whether the web site you are scraping provides an API. If there is no API, or you are not using it, make sure the site actually allows automated web crawling like this: study its Terms of Use. There is probably a reason why they block you after too many requests in a given period of time.
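If you do keep scraping, pacing your requests is the simplest way to stay under such limits. A minimal sketch (the `fetch_all` helper and the delay value are my own illustration, not from any site's documentation):

```python
import time

import requests

DELAY_SECONDS = 2.0  # hypothetical pacing; tune to whatever the site tolerates

def fetch_all(urls, delay=DELAY_SECONDS):
    """Fetch URLs one by one, sleeping between requests to respect rate limits."""
    session = requests.Session()  # reuse one connection across requests
    responses = []
    for i, url in enumerate(urls):
        if i > 0:
            time.sleep(delay)  # pause between consecutive requests
        responses.append(session.get(url))
    return responses
```

Using a `Session` also keeps cookies and headers consistent across requests, which looks more like a normal browsing session than a series of cold connections.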
Also see:
- Sending "User-agent" using Requests library in Python
- Headless Selenium Testing with Python and PhantomJS
edit1: selenium controls the browser through a webdriver rather than acting like a hand-driven browser; the browser advertises this by setting the `navigator.webdriver` property to `true` (a JavaScript property, not an HTTP header), which gives sites a straightforward way to detect a selenium session that plain `requests` traffic does not trigger.