One robots.txt to allow crawling of only the live website; the rest should be disallowed

StackOverflow https://stackoverflow.com/questions/3828597

26-09-2019

Question

I need guidance on using robots.txt. The problem is as follows.

I have one live website, "www.faisal.com" (also reachable as "faisal.com"), and two testing web servers:

"faisal.jupiter.com" and "faisal.dev.com"

I want one robots.txt to handle all of this. I don't want crawlers to index pages from "faisal.jupiter.com" or "faisal.dev.com"; they should only be allowed to index pages from "www.faisal.com" or "faisal.com".

I want one robots.txt file that will sit on all the web servers and should allow indexing of the live website only.


Solution

Disallow rules take only relative URLs (there is no way to name a host in them), so I guess you cannot use the same static robots.txt file for all three servers.
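
For reference, the two files would have to differ per host: the test servers need a blanket disallow, while the live site can allow everything. A minimal sketch of each:

    # robots.txt on faisal.jupiter.com and faisal.dev.com - block all crawlers
    User-agent: *
    Disallow: /

    # robots.txt on www.faisal.com - allow all crawlers (empty Disallow allows everything)
    User-agent: *
    Disallow: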

Why not force HTTP authentication on the dev/test servers?

That way, robots won't be able to crawl those servers.

It seems like a good idea if you want to allow specific people to check them, but not everybody trying to find flaws in your not-yet-debugged new version...

Especially now that you have given the addresses to everybody on the web.
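
As a minimal sketch, assuming Apache and a hypothetical password-file path (nginx has an equivalent auth_basic directive), Basic authentication on the test servers could look like:

    # .htaccess on faisal.jupiter.com and faisal.dev.com
    AuthType Basic
    AuthName "Test server - authorized users only"
    # Hypothetical path; create the file with: htpasswd -c /etc/apache2/.htpasswd someuser
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user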

OTHER TIPS

Depending on who needs to access the dev and test servers, and from where, you could use .htaccess or iptables to restrict access at the IP address level.
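
A sketch of both options, using the documentation-only range 203.0.113.0/24 as a stand-in for your own network:

    # .htaccess (Apache 2.4) - only the trusted network may reach the test server
    Require ip 203.0.113.0/24

    # or the iptables equivalent: accept the trusted range on port 80, drop the rest
    iptables -A INPUT -p tcp --dport 80 -s 203.0.113.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j DROP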

Or, you could serve robots.txt from the web application itself rather than as a static file, so that its contents can vary with the environment.
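
As a sketch of that approach, assuming a Python/Flask app (the hostnames come from the question; everything else is illustrative), the application can pick the response based on the Host header:

    from flask import Flask, request, Response

    app = Flask(__name__)

    # Hosts from the question that crawlers are allowed to index.
    LIVE_HOSTS = {"www.faisal.com", "faisal.com"}

    @app.route("/robots.txt")
    def robots():
        # request.host may include a port; strip it before comparing.
        host = request.host.split(":")[0]
        if host in LIVE_HOSTS:
            body = "User-agent: *\nDisallow:\n"    # allow everything
        else:
            body = "User-agent: *\nDisallow: /\n"  # block everything
        return Response(body, mimetype="text/plain")

The same one-deployment-fits-all idea works in any framework: route /robots.txt to a handler instead of a file, and branch on the request's hostname.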

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow