Question

So I have this bit of Python code that runs through a Delicious page and scrapes some links off of it. The extract method contains some magic that pulls out the required content. However, fetching the pages one after another is pretty slow. Is there a way to do this asynchronously in Python, so that I can launch several GET requests and process the pages in parallel?

url= "http://www.delicious.com/search?p=varun"
page = br.open(url)
html = page.read()
soup = BeautifulSoup(html)
extract(soup)

count=1
#Follows regexp match onto consecutive pages
while soup.find ('a', attrs={'class': 'pn next'}):
    print "yay"
    print count
    endOfPage = "false"
    try :
        page3 = br.follow_link(text_regex="Next")
        html3 = page3.read()
        soup3 = BeautifulSoup(html3)
        extract(soup3)
    except:
        print "End of Pages"
        endOfPage = "true"
    if valval == "true":
        break
    count = count +1

Solution

Beautiful Soup is pretty slow. If you want better performance, use lxml instead, or if you have many CPUs, you could try multiprocessing with queues.
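For example, here is a minimal sketch of the multiprocessing idea: a few worker processes pull page URLs off a queue, fetch them with urllib2, parse them with lxml, and push the results onto a second queue. The paged URL pattern and the xpath query standing in for the original extract() are assumptions, not something the question specifies.

import urllib2
from multiprocessing import Process, Queue
import lxml.html

def worker(url_queue, result_queue):
    # each worker keeps pulling URLs until it sees the None sentinel
    while True:
        url = url_queue.get()
        if url is None:
            break
        try:
            html = urllib2.urlopen(url).read()
            tree = lxml.html.fromstring(html)   # lxml parses far faster than Beautiful Soup
            links = tree.xpath('//a/@href')     # stand-in for the original extract() logic
            result_queue.put((url, links))
        except Exception:
            result_queue.put((url, []))         # always report back so the parent never blocks

if __name__ == '__main__':
    # hypothetical paged URLs built up front instead of following "Next" links one at a time
    urls = ["http://www.delicious.com/search?p=varun&page=%d" % i
            for i in range(1, 11)]

    url_queue, result_queue = Queue(), Queue()
    for url in urls:
        url_queue.put(url)

    workers = [Process(target=worker, args=(url_queue, result_queue))
               for _ in range(4)]               # 4 fetcher processes running in parallel
    for w in workers:
        url_queue.put(None)                     # one sentinel per worker so each one exits
        w.start()

    # drain all results before joining so the workers can flush their queues
    results = [result_queue.get() for _ in urls]
    for w in workers:
        w.join()

    for url, links in results:
        print url, len(links)

Whether this pays off depends on where the time goes: if the bottleneck is network latency, the parallel fetches alone help; if it is parsing, switching to lxml is the bigger win.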

Licensed under: CC-BY-SA with attribution