Question

I have a list of URLs that I want to test scraping with Nutch: just that fixed list, with no further crawling.

I was referring to this post for disabling crawling.

I noticed that all 5 of my test URLs were rejected, leaving 0 after normalization and filtering:

$:~/apache-nutch-1.7$ bin/nutch crawl urls -dir crawl -depth 3 -topN 1000
solrUrl is not set, indexing will be skipped...
crawl started in: crawl
rootUrlDir = urls
threads = 10
depth = 3
solrUrl=null
topN = 1000
Injector: starting at 2013-12-18 23:07:32
Injector: crawlDb: crawl/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Injector: total number of urls rejected by filters: 5
Injector: total number of urls injected after normalization and filtering: 0
Injector: Merging injected urls into crawl db.
Injector: finished at 2013-12-18 23:07:39, elapsed: 00:00:06
Generator: starting at 2013-12-18 23:07:39
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: true
Generator: normalizing: true
Generator: topN: 1000
Generator: jobtracker is 'local', generating exactly one partition.
Generator: 0 records selected for fetching, exiting ...
Stopping at depth=0 - no more URLs to fetch.
No URLs to fetch - check your seed list and URL filters.
crawl finished: crawl

I left the filters and normalization at their defaults, which I assumed wouldn't filter anything out.

Can anyone help me understand what is going on?

Injector: total number of urls rejected by filters: 5

Can anyone tell me which configuration file I should change to remove the 'filters' mentioned in the line above?

Also, my test URLs look like this:

http://example.com/store/em?action=products&cat=1&catalogId=500201&No=0
http://example.com/store/em?action=products&cat=1&catalogId=500201&No=25
http://example.com/store/em?action=products&cat=1&catalogId=500201&No=50
http://example.com/store/em?action=products&cat=1&catalogId=500201&No=75
http://example.com/store/em?action=products&cat=1&catalogId=500201&No=100

Solution

The default URL filters live in Nutch's conf/ directory (the link below is the 2.2.1 tree, but Nutch 1.7 ships the same file):

http://svn.apache.org/repos/asf/nutch/tags/release-2.2.1/conf/

The file is regex-urlfilter.txt (if your distribution only ships regex-urlfilter.txt.template, make a copy under that name). Take a look at the regular expressions there, specifically this line:

# skip URLs containing certain characters as probable queries, etc.
-[?*!@=]

That rule is what's filtering out your URLs: every one of them contains ? and = characters, so Nutch rejects them as probable queries.
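
A minimal sketch of the fix, assuming you keep the rest of the stock rules in conf/regex-urlfilter.txt, is to relax that character class so ? and = (and therefore query strings) pass through:

# skip URLs containing certain characters as probable queries, etc.
# default was: -[?*!@=]  -- relaxed here to let '?' and '=' through
-[*!@]

# accept anything else (the stock file already ends with this rule)
+.

After that change, re-running the inject step should report your 5 urls as injected rather than rejected by filters.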

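To check which URLs survive filtering without running a full crawl, Nutch 1.x also ships a URLFilterChecker tool; a usage sketch (assuming your seeds are in urls/seed.txt) is:

bin/nutch org.apache.nutch.net.URLFilterChecker -allCombined < urls/seed.txt

It reads URLs from stdin and echoes each one back prefixed with + (accepted) or - (rejected by some filter), so you can see immediately whether your edits took effect.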
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow