Problem

Does ScraperWiki somehow automatically rate limit scraping, or should I add something like sleep(1 * random.random()) to the loop?

Solution

There is no automatic rate limiting. You can add a sleep call, written in your scraper's language, to rate-limit it yourself.
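For example, a minimal Python sketch along the lines the question suggests (the `urls` list is hypothetical; `scraperwiki.scrape` is the platform's page-fetch helper):

```python
import random
import time

import scraperwiki  # ScraperWiki's built-in fetch/save helpers

# Hypothetical list of target pages; substitute your real URLs.
urls = ["http://example.com/page/%d" % i for i in range(1, 6)]

for url in urls:
    html = scraperwiki.scrape(url)  # fetch one page
    # ... parse `html` and save your data here ...

    # Polite pause: sleep between 0 and 1 seconds, as in the question
    time.sleep(random.random())
```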

Very few servers enforce rate limits, and servers hosting public data usually don't.

It is, however, good practice to make sure you don't overload the remote server. By default, scrapers run in a single thread, so there is a built-in limit on the load you can generate.

License: CC-BY-SA with attribution
Not affiliated with Stack Overflow