Question

import urllib.request

url = 'http://www.oddsportal.com/ajax-next-games/1/0/1/20130820/'
print(url)
page = urllib.request.urlopen(url)
print(page)

Any idea why this script gives the error "urllib.error.HTTPError: HTTP Error 405: Not Allowed" when trying to open the URL? I couldn't find anything with Google. The URL opens normally in Google Chrome, and the script had been working for a couple of months until today.
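A common cause of 4xx responses like this is that the server rejects requests carrying urllib's default User-Agent (`Python-urllib/3.x`). A minimal sketch of sending a browser-like User-Agent instead, using a `urllib.request.Request` object (the `urlopen` call is commented out to avoid a live network request, and the site may additionally require cookies):

```python
import urllib.request

url = 'http://www.oddsportal.com/ajax-next-games/1/0/1/20130820/'

# Some servers answer 403/405 to urllib's default User-Agent
# (Python-urllib/3.x). A browser-like value often avoids that.
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})

# page = urllib.request.urlopen(req)  # live request; may still need cookies

# Request stores header keys in capitalized form, so query it as 'User-agent'.
print(req.get_header('User-agent'))
```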

Edit: Thanks to the first comments, I managed to create a script which fixes the problem described above. Here's the script with the necessary cookie retrieved with Chrome:

import urllib.request
import http.cookiejar

url = 'http://www.oddsportal.com/ajax-next-games/1/0/1/20130820/'

cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
# Both headers must go in a single assignment; assigning addheaders
# twice would overwrite the User-Agent header with the Cookie header.
opener.addheaders = [('User-Agent', 'Mozilla/5.0'),
                     ('Cookie', 'D_UID=F1BC6DD9-DF9C-380C-A513-6124F4C86999')]

for cookie in cj:
    print(cookie.name, cookie.value)

print(opener.open(url).read()[:50]) # the full page is very long

Removing the Cookie header causes an unwanted page to be retrieved instead (the last line of the script prints 'ROBOTS' etc.). Why doesn't CookieJar store that cookie automatically?

Edit (2): Apparently that cookie changes regularly, so it has to be retrieved automatically. But how?
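One plausible explanation: `CookieJar` only captures cookies delivered in HTTP `Set-Cookie` headers, whereas anti-bot cookies like this `D_UID` are typically set by JavaScript running in the browser, which urllib never executes. A hedged sketch of the header-based approach, which works only if the server does send the cookie over HTTP: open the site's front page first with the same opener so any `Set-Cookie` headers populate the jar, then request the ajax URL (the network calls are commented out here):

```python
import urllib.request
import http.cookiejar

# Jar + opener: the HTTPCookieProcessor stores any Set-Cookie headers
# it sees and sends them back on later requests through this opener.
cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
opener.addheaders = [('User-Agent', 'Mozilla/5.0')]

# Step 1: visit the front page so HTTP-set cookies land in the jar.
# opener.open('http://www.oddsportal.com/')

# Step 2: the same opener now replays those cookies automatically.
# page = opener.open('http://www.oddsportal.com/ajax-next-games/1/0/1/20130820/')

for cookie in cj:
    print(cookie.name, cookie.value)
```

Note that if the cookie is set by JavaScript (as anti-bot schemes usually do), no HTTP-only client can pick it up this way; a browser-automation tool would be needed instead.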

No correct solution

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow