Question

Say I have a site at http://example.com. I would really like to allow bots to see the home page, but every other page needs to be blocked, as it is pointless to spider them. In other words:

http://example.com & http://example.com/ should be allowed, but http://example.com/anything and http://example.com/someendpoint.aspx should be blocked.

Further, it would be great if I could allow certain query strings to pass through to the home page: http://example.com?okparam=true

but not http://example.com?anythingbutokparam=true

Solution

So after some research, here is what I found: a solution acceptable to the major search providers, Google, Yahoo, and MSN (I could only find a validator here):

User-Agent: *
Disallow: /*
Allow: /?okparam=
Allow: /$

The trick is using the $ to mark the end of the URL.
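
For a quick sanity check, here is a rough Python sketch of how Google-style wildcard matching behaves for these rules: * matches any run of characters, $ anchors the end of the URL, the most specific (longest) matching rule wins, and Allow wins ties. This is only an illustration of the matching behavior, not the crawlers' actual implementation:

import re

# The rules from the robots.txt above.
RULES = [
    ("disallow", "/*"),
    ("allow", "/?okparam="),
    ("allow", "/$"),
]

def to_regex(pattern):
    # Escape regex metacharacters, then restore the robots.txt wildcards.
    escaped = re.escape(pattern).replace(r"\*", ".*").replace(r"\$", "$")
    return re.compile("^" + escaped)

def is_allowed(path):
    # Most specific (longest) matching rule wins; Allow wins ties.
    matches = [(len(p), kind) for kind, p in RULES if to_regex(p).match(path)]
    if not matches:
        return True
    _, kind = max(matches, key=lambda m: (m[0], m[1] == "allow"))
    return kind == "allow"

for path in ["/", "/?okparam=true", "/?anythingbutokparam=true", "/someendpoint.aspx"]:
    print(path, "->", "allowed" if is_allowed(path) else "blocked")

Running it prints allowed for / and /?okparam=true and blocked for the other two, which matches the intent in the question.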

OTHER TIPS

Google's Webmaster Tools reports that disallow always takes precedence over allow, so there's no easy way of doing this in a robots.txt file.

You could accomplish this by putting a noindex,nofollow META tag in the HTML of every page but the home page.
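
A minimal sketch of that idea (the helper name is mine, not from the answer): emit the tag for every page except the home page, so only the root stays indexable.

def robots_meta(path):
    # Home page stays indexable; every other page asks crawlers to skip it.
    if path in ("", "/"):
        return ""
    return '<meta name="robots" content="noindex, nofollow">'

for path in ("/", "/anything", "/someendpoint.aspx"):
    print(path, "->", robots_meta(path) or "(no tag)")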

Basic robots.txt:

Disallow: /subdir/

I don't think that you can create an expression meaning 'everything but the root'; you have to fill in all subdirectories.

The query string limitation is also not possible with robots.txt alone. You have to handle it in the background code (the processing part), or maybe with server rewrite rules.
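
As a rough illustration of the "background code" approach, the sketch below decides per request whether a page should be marked noindex, keying on the path and on the okparam query parameter from the question. The function name and the choice of urllib are my own assumptions, not part of the original answer:

from urllib.parse import urlsplit, parse_qs

def should_noindex(url):
    # Only the home page is indexable, and only with no query string
    # or with the okparam parameter present.
    parts = urlsplit(url)
    if parts.path not in ("", "/"):
        return True
    params = parse_qs(parts.query)
    return bool(params) and "okparam" not in params

for url in ("http://example.com/?okparam=true",
            "http://example.com/?anythingbutokparam=true",
            "http://example.com/someendpoint.aspx"):
    print(url, "->", "noindex" if should_noindex(url) else "indexable")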

Disallow: *
Allow: index.ext

If I remember correctly, the second clause should override the first.

As far as I know, not all crawlers support the Allow directive. One possible solution might be putting everything except the home page into another folder and disallowing that folder.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow