Question

I would like to use scrapy in conjunction with couchbase to store/retrieve data.

In order to store and retrieve my data, I am confused about which solution to adopt:

  1. Should I implement a pipeline?

I mean something like:

    class CouchbasePipeline(object):
        def __init__(self):
            ## init client here using settings
            pass

        def process_item(self, item, spider):
            ## store item here
            pass
  2. Or should I implement a downloader middleware?

Something like:

class CouchBaseCacheStorage(object):

    def __init__(self, settings):
        ## init client here using settings
        pass

    def get_response(self, spider, request):
        pass

    def save_response(self, spider, request, response):
        pass

Or maybe I should implement both (manage cache/database)?

I am really confused, especially since I am new to Python/Couchbase/Scrapy. My question is not about the best implementation/tool, but about the standard way to do this in Scrapy, since I can't find it in the documentation or on the web.

Thanks in advance for any help.


Solution

A code suggestion following @agstudy's published answer.

See below:

from datetime import datetime

from scrapy import log, signals
from couchbase.exceptions import CouchbaseError
from couchbase import Couchbase

class CouchbaseStore(object):

    @classmethod
    def from_crawler(cls, crawler):
        o = cls(crawler.settings)
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(o.spider_closed, signal=signals.spider_closed)
        return o

    def __init__(self, settings):
        self._server = settings.get('COUCHBASE_SERVER')
        self._port = settings.get('COUCHBASE_PORT', 8091)
        self._bucket = settings.get('COUCHBASE_BUCKET')
        self._password = settings.get('COUCHBASE_PASSWORD')

    def process_item(self, item, spider):
        data = {}
        for key in item.keys():
            if isinstance(item[key], datetime):
                data[key] = item[key].isoformat()
            else:
                data[key] = item[key]
        ## I assume the item has a unique 'time' field
        key = "{0}".format(item['time'].isoformat())
        self.couchbase.set(key, data)
        log.msg("Item with key % s stored in bucket %s/ node %s" %
                    (key, self._bucket, self._server),
                level=log.INFO, spider=spider)  
        return item

    def spider_opened(self, spider):
        try:
            self.couchbase = Couchbase.connect(bucket=self._bucket,
                                               host=self._server,
                                               port=self._port,
                                               password=self._password)
        except CouchbaseError:
            log.msg('Connection problem to bucket %s' % self._bucket,
                    level=log.ERROR, spider=spider)
        log.msg("CouchbaseStore.spider_opened called", level=log.DEBUG)

    def spider_closed(self, spider):
        self.couchbase._close()
        log.msg("CouchbaseStore.spider_closed called", level=log.DEBUG)

Other tips

This is the solution I implemented:

  1. I used signals/events to make sure the Couchbase connection is initialized/closed only once per spider, since the connection requires some overhead to discover the server.
  2. For each item, I assume there is a field that can be used to build the key. You should modify this according to your use case.

The code:

from datetime import datetime

from scrapy import log, signals
from scrapy.xlib.pydispatch import dispatcher
from couchbase.exceptions import CouchbaseError
from couchbase import Couchbase

class CouchbaseStore(object):
    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings)


    def __init__(self, settings):
        self._server = settings.get('COUCHBASE_SERVER')
        self._bucket = settings.get('COUCHBASE_BUCKET')
        self._password = settings.get('COUCHBASE_PASSWORD')
        dispatcher.connect(self.spider_opened, signals.spider_opened)
        dispatcher.connect(self.spider_closed, signals.spider_closed)


    def process_item(self, item, spider):
        data = {}
        for key in item.keys():
            if isinstance(item[key], datetime):
                data[key] = item[key].isoformat()
            else:
                data[key] = item[key]
        ## I assume the item has a unique 'time' field
        key = "{0}".format(item['time'].isoformat())
        self.cb.set(key, data)
        log.msg("Item with key %s stored in bucket %s / node %s" %
                    (key, self._bucket, self._server),
                level=log.INFO, spider=spider)
        return item

    def spider_opened(self, spider):
        try:
            self.cb = Couchbase.connect(bucket=self._bucket,
                                        host=self._server,
                                        password=self._password)
        except CouchbaseError:
            log.msg('Connection problem to bucket %s' % self._bucket,
                    level=log.ERROR, spider=spider)
        log.msg("CouchbaseStore.spider_opened called", level=log.DEBUG)

    def spider_closed(self, spider):
        self.cb._close()
        log.msg("CouchbaseStore.spider_closed called", level=log.DEBUG)

The standard way to store your data is with an item pipeline; to retrieve data, I think you should use a downloader middleware. For clarity, check the Scrapy architecture overview, in particular this diagram:

[Scrapy architecture diagram]
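
For the retrieval/caching side, note that Scrapy's built-in HttpCacheMiddleware already delegates to a pluggable storage backend via the HTTPCACHE_STORAGE setting, so instead of writing a downloader middleware from scratch you can implement just the storage interface on top of Couchbase. A minimal sketch, assuming the same couchbase 1.x client used above and a hypothetical COUCHBASE_CACHE_BUCKET setting; it assumes text bodies (binary responses would need e.g. base64 encoding):

    from scrapy.http import Headers
    from scrapy.responsetypes import responsetypes
    from scrapy.utils.request import request_fingerprint
    from couchbase import Couchbase

    class CouchbaseCacheStorage(object):
        """Cache storage for HttpCacheMiddleware, backed by Couchbase."""

        def __init__(self, settings):
            # COUCHBASE_CACHE_BUCKET is a made-up setting name for this sketch.
            self._bucket = settings.get('COUCHBASE_CACHE_BUCKET', 'scrapy-cache')

        def open_spider(self, spider):
            self.cb = Couchbase.connect(bucket=self._bucket)

        def close_spider(self, spider):
            pass

        def retrieve_response(self, spider, request):
            # Returning None signals a cache miss, so Scrapy downloads the page.
            result = self.cb.get(request_fingerprint(request), quiet=True)
            if result.value is None:
                return None
            data = result.value
            headers = Headers(data['headers'])
            respcls = responsetypes.from_args(headers=headers, url=data['url'])
            return respcls(url=data['url'], headers=headers,
                           status=data['status'],
                           body=data['body'].encode('utf-8'))  # text bodies only

        def store_response(self, spider, request, response):
            # The request fingerprint makes a stable, unique cache key.
            self.cb.set(request_fingerprint(request), {
                'status': response.status,
                'url': response.url,
                'headers': dict(response.headers),
                'body': response.body,
            })

It would then be enabled in settings.py with something like:

    HTTPCACHE_ENABLED = True
    HTTPCACHE_STORAGE = 'myproject.middlewares.CouchbaseCacheStorage'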

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow