Question

I have a list of nearly 500 RSS/Atom feed URLs to parse and fetch the links from.

I am using the Python feedparser library to parse the URLs. To parse the list of URLs in parallel, I thought of using Python's threading library.

My code looks something like this:

import threading
import feedparser

class PullFeeds:
    def __init__(self):
        self.data = open('urls.txt', 'r')

    def pullfeed(self):
        threads = []
        for url in self.data:
            t = RssParser(url)  # one thread per feed URL
            threads.append(t)
        for thread in threads:
            thread.start()
        for thread in threads:
            thread.join()

class RssParser(threading.Thread):
    def __init__(self, url):
        threading.Thread.__init__(self)
        self.url = url

    def run(self):
        print("Starting:", self.name)
        rss_data = feedparser.parse(self.url)
        for entry in rss_data.get('entries', []):  # default to [] in case the parse failed
            print(entry.get('link'))
        print("Exiting:", self.name)


pf = PullFeeds()
pf.pullfeed()

The problem is that when I run this script, feedparser returns an empty list of entries. But without threading, feedparser prints out the list of links parsed from the supplied URL.

How do I fix this?


Solution 2

The problem was with Vagrant. I was running the script inside one of my Vagrant machines. The same script runs fine outside the Vagrant box.

This needs to be reported, but I am not yet sure where to report the bug: whether it is a problem with Vagrant, Python threading, or the feedparser library.
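
As a minimal sketch for narrowing it down (the URL below is a placeholder): feedparser does not raise on network or parse errors; it sets a bozo flag on the result and stashes the swallowed exception in bozo_exception, so printing both from inside a worker thread should show what actually fails inside the Vagrant box.

import threading
import feedparser

def diagnose(url):
    # feedparser swallows fetch/parse errors and records them on the result
    data = feedparser.parse(url)
    if data.get('bozo'):
        print(url, '->', repr(data.get('bozo_exception')))
    else:
        print(url, '->', len(data.get('entries', [])), 'entries')

# One worker thread per URL, mirroring the original setup
urls = ['http://example.com/feed.xml']  # placeholder URL
threads = [threading.Thread(target=diagnose, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()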

Other tips

To see whether the issue is with multithreading, you could try to use multiple processes instead:

#!/usr/bin/env python
#from multiprocessing.dummy import Pool # use threads
from multiprocessing import Pool # use processes
from multiprocessing import freeze_support
import feedparser

def fetch_rss(url):
    try:
        data = feedparser.parse(url)
    except Exception as e:
        return url, None, str(e)
    else:
        e = data.get('bozo_exception')
        return url, data['entries'], str(e) if e else None

if __name__ == "__main__":
    freeze_support()
    with open('urls.txt') as file:
        urls = (line.strip() for line in file if line.strip())
        pool = Pool(20) # no more than 20 concurrent downloads
        for url, items, error in pool.imap_unordered(fetch_rss, urls):
            if error is None:
                print(url, len(items))
            else:
                print(url, error)
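
If the process pool succeeds where the threaded version returned nothing, the commented-out import above makes a direct comparison cheap: multiprocessing.dummy exposes the same Pool API backed by threads, so changing a single line reruns the identical script with threads instead of processes.

from multiprocessing.dummy import Pool  # same Pool API, but thread-backed
pool = Pool(20)  # the rest of the script runs unchanged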