If the parsing logic is the same for all pages, there are two options:
- For a large number of webpages, keep the URLs in a list, read that list when the spider starts (for example in the `start_requests` method or in the constructor), and assign it to `start_urls`.
- Alternatively, pass the webpage link to the spider as a command-line argument; then, again in `start_requests` or in the constructor, read that parameter and assign it to `start_urls`.
Passing parameters in Scrapy:

```shell
scrapy crawl spider_name -a start_url=your_url
```

When scheduling the spider through scrapyd, replace `-a` with `-d`.
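For scrapyd, the parameter is passed to the `schedule.json` endpoint; a sketch of the equivalent request, assuming a default scrapyd instance on port 6800 and a hypothetical project name `myproject`:

```shell
# Spider arguments are sent as extra -d fields on the schedule request
curl http://localhost:6800/schedule.json \
    -d project=myproject \
    -d spider=spider_name \
    -d start_url=your_url
```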