Question

Is it possible to prevent a site from being scraped by scrapers while at the same time allowing search engines to parse your content?

Just checking the User-Agent header is not the best option, because it is very easy to spoof.

JavaScript checks could be an option (Googlebot executes JS), but a good scraper can execute JavaScript too.

Any ideas?


Solution

Use DNS checking, Luke! :)

  1. Check the User-Agent to see whether it identifies itself as a search engine bot
  2. If so, get the IP address requesting the page
  3. Do a reverse DNS lookup on the IP address to get a hostname
  4. Do a forward DNS lookup on that hostname to get an IP address
  5. Verify that the forward lookup returns the original IP address and that the hostname belongs to the search engine's domain (for example, googlebot.com)

The same idea is described in Google's help article Verifying Googlebot.
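
A minimal Python sketch of that verification, assuming the client IP has already been extracted from the request; the accepted domain suffixes are illustrative examples, not an official list:

    import socket

    # Illustrative assumption: suffixes accepted as "real" search engine hosts.
    SEARCH_ENGINE_DOMAINS = (".googlebot.com", ".google.com", ".search.msn.com")

    def is_verified_search_bot(ip_address):
        """Return True only if reverse + forward DNS confirm a search engine bot."""
        try:
            # Step 3: reverse DNS lookup of the requesting IP.
            hostname, _, _ = socket.gethostbyaddr(ip_address)
            # Step 5 (part 1): the hostname must belong to a known search engine domain.
            if not hostname.endswith(SEARCH_ENGINE_DOMAINS):
                return False
            # Step 4: forward DNS lookup of that hostname.
            _, _, forward_ips = socket.gethostbyname_ex(hostname)
            # Step 5 (part 2): the forward lookup must resolve back to the original IP.
            return ip_address in forward_ips
        except socket.herror:
            # No reverse DNS record: treat as unverified.
            return False
        except socket.gaierror:
            # Forward lookup failed: treat as unverified.
            return False

Cache the result per IP: DNS lookups are slow, and genuine crawlers hit a site from a fairly stable pool of addresses.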

Other tips

Checking link access times might also be possible: if the front page is hit and then all of the links on the front page are hit "quickly", it is probably not a human.
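
As a rough illustration of that heuristic (my own sketch, with arbitrary threshold values), flag any client that requests more than a handful of pages within a few seconds:

    import time
    from collections import defaultdict, deque

    # Assumed thresholds: more than 10 page hits within 5 seconds looks automated.
    WINDOW_SECONDS = 5.0
    MAX_HITS_IN_WINDOW = 10

    _recent_hits = defaultdict(deque)  # client IP -> timestamps of recent requests

    def looks_like_scraper(ip_address):
        """Record a page hit and report whether the client is requesting pages too fast."""
        now = time.monotonic()
        hits = _recent_hits[ip_address]
        hits.append(now)
        # Discard timestamps that have fallen out of the observation window.
        while hits and now - hits[0] > WINDOW_SECONDS:
            hits.popleft()
        return len(hits) > MAX_HITS_IN_WINDOW

A client that passes the DNS verification above would be exempted from this rule so legitimate crawlers are not blocked.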

Even easier, drop some hidden links in the page; bots will follow them, people almost never will.
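
One way to wire that up, sketched here with Flask purely for illustration (the route names and inline style are assumptions, not anything from the answer), is to hide a trap link with CSS and flag any client that requests it:

    from flask import Flask, abort, request

    app = Flask(__name__)
    flagged_ips = set()  # clients that followed the hidden trap link

    @app.route("/")
    def index():
        if request.remote_addr in flagged_ips:
            abort(403)
        # The trap link is hidden from humans by CSS but still present for bots.
        return ('<a href="/do-not-follow" style="display:none">trap</a>'
                '<p>Real page content here.</p>')

    @app.route("/do-not-follow")
    def trap():
        # No human can see this link, so whoever requested it is a bot.
        flagged_ips.add(request.remote_addr)
        abort(403)

Remember to disallow the trap URL in robots.txt so that legitimate search engine crawlers do not follow it and get flagged.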

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow