Question

I need to crawl a few websites for a university project and I have reached a dead end with a site that requires a log-in. I am using the urllib, urllib2, and cookielib modules in Python to log in. It does not work for http://www.cafemom.com. The HTTP response that I receive gets saved to a .txt file and corresponds to the 'unsuccessful log-in' page.

I also tried using the package "twill" for this purpose, which didn't work out for me either. Can anyone suggest what I should do?

Below is the code containing the main login() method that I used for this purpose.

import os
import urllib
import urllib2
import cookielib

def urlopen(req):
    try:
        r = urllib2.urlopen(req)
    except IOError, e:
        if hasattr(e, 'code'):
            print 'The server couldn\'t fulfill the request.'
            print 'Error code: ', e.code
        elif hasattr(e, 'reason'):
            print 'We failed to reach a server.'
            print 'Reason: ', e.reason
        raise

    return r

class Cafemom:
    """Communication with Cafemom"""

    def __init__(self, cookieFile = 'cookie.jar', debug = 0):
        self.cookieFile = cookieFile
        self.debug = debug
        self.loggedIn = 0
        self.uid = ''
        self.email = ''
        self.passwd = ''
        self.cj = cookielib.LWPCookieJar()

        if os.path.isfile(cookieFile):
            self.cj.load(cookieFile)

        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(self.cj))
        urllib2.install_opener(opener)

    def __del__(self):
        self.cj.save(self.cookieFile)
    def login(self, email, password):
        """Logging in Cafemom"""

        self.email  = email
        self.passwd = password
        url = 'http://www.cafemom.com/login.php'
        cnt = 'http://www.cafemom.com'
        headers = {'Content-Type': 'application/x-www-form-urlencoded'}
        body = {'identifier': email, 'password': password}
        if self.debug == 1:
            print "Logging in..."

        req = urllib2.Request(url, urllib.urlencode(body), headers)
        print urllib.urlencode(body)
        handle = urlopen(req)

        h = handle.read()
        f = open("responseCafemom.txt", "w")
        f.write(h)
        f.close()

I also tried the following code, without success:

import urllib, urllib2, cookielib

username = 'myusername'   # placeholder for the real user name
password = 'mypassword'   # placeholder for the real password

cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
login_data = urllib.urlencode({'identifier' : username, 'password' : password})
opener.open('http://www.cafemom.com/login.php', login_data)
resp = opener.open('http://www.cafemom.com')
print resp.read()

Solution

I'm not sure if this is exactly what you need, but it's worth a try. The excellent requests module for Python supports both cookies and HTTP basic auth.

These examples are adapted from its documentation.

Here is an example of POSTing the login credentials:

import requests

payload = {'identifier': email, 'password': password}
r = requests.post("http://www.cafemom.com/login.php", data=payload)
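A quick way to sanity-check whether the POST actually logged you in is to look at the status code, the cookies the server set, and the page text. This is only a rough sketch: the field names identifier and password are carried over from your snippets, and the exact success indicator (a redirect, a particular cookie, or a logout link on the returned page) depends on the site.

import requests

payload = {'identifier': email, 'password': password}
r = requests.post("http://www.cafemom.com/login.php", data=payload)

# Inspect what the login request returned
print r.status_code               # final status code (redirects are followed by default)
print r.cookies                   # cookies the server set during login
print 'logout' in r.text.lower()  # crude check: a logged-in page usually links to a logout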

Here is how to pass cookies saved previously (which you can access from an earlier response as r.cookies). Cookie stores are just dictionaries.

r = requests.get(url, cookies=cookies)
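If you would rather not pass the cookie jar around by hand, a requests.Session keeps cookies across requests automatically, which is usually what you want when crawling pages behind a login. A minimal sketch, assuming the same login URL and form field names as above:

import requests

session = requests.Session()

# Cookies set by the login POST are stored on the session object...
session.post('http://www.cafemom.com/login.php',
             data={'identifier': email, 'password': password})

# ...and are sent automatically with every later request made on the same session.
resp = session.get('http://www.cafemom.com')
print resp.text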

Here is how to read the response to your request and save it to a file:

f = open("responseCafemom.txt", "w")
f.write(r.text)
f.close()
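One caveat: under Python 2, r.text is a unicode string, so writing it to a file opened with plain open() can raise UnicodeEncodeError on non-ASCII pages. A small sketch that avoids this by opening the file through codecs:

import codecs

# codecs.open encodes the unicode text as UTF-8 on the way out
with codecs.open("responseCafemom.txt", "w", encoding="utf-8") as f:
    f.write(r.text)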
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow