Note that the `*` wildcard in `Disallow` is not part of the original robots.txt specification. Some parsers support it, but as there is no specification for it, they might all handle it differently.
As you seem to be interested in Googlebot, have a look at Google’s robots.txt documentation.
In the examples it becomes clear that `*` means "any string". "Any string" may, of course, also contain `/`.
So your first line, `Disallow: /parent/*`, should block every URL whose path starts with `/parent/`, including paths with further slash-separated segments.
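To illustrate, here is a minimal Python sketch of how a wildcard-aware parser might match such rules, following Google's documented semantics (`*` matches any sequence of characters, a trailing `$` anchors the match at the end of the path). This is an illustration, not Googlebot's actual implementation; the function names are made up for this example:

```python
import re

def rule_to_regex(rule: str) -> "re.Pattern":
    # Escape regex metacharacters, then restore the robots.txt wildcards:
    # '*' matches any sequence of characters (including '/'),
    # a trailing '$' anchors the match at the end of the path.
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.compile(pattern)

def is_blocked(path: str, rule: str) -> bool:
    # Rules match from the start of the URL path (prefix match).
    return rule_to_regex(rule).match(path) is not None

print(is_blocked("/parent/foo/bar", "/parent/*"))  # True
print(is_blocked("/parent/foo/bar", "/parent/"))   # True: same effect
```

Under this interpretation, `/parent/*` and `/parent/` block exactly the same paths, since the prefix match already extends past the wildcard.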
Note that this would be the same as `Disallow: /parent/` in the original robots.txt specification, which also blocks any URL whose path starts with `/parent/`, for example:
http://example.com/parent/
http://example.com/parent/foo
http://example.com/parent/foo.html
http://example.com/parent/foo/bar
http://example.com/parent/foo/bar/
http://example.com/parent/foo/bar/foo.html
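You can check the original-spec prefix behavior with Python's standard-library `urllib.robotparser`, which does plain prefix matching and no wildcard handling. All of the example URLs above come back as blocked under `Disallow: /parent/`:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /parent/",
])

urls = [
    "http://example.com/parent/",
    "http://example.com/parent/foo",
    "http://example.com/parent/foo.html",
    "http://example.com/parent/foo/bar",
    "http://example.com/parent/foo/bar/",
    "http://example.com/parent/foo/bar/foo.html",
]
for url in urls:
    # can_fetch() returns False for every URL whose path starts with /parent/
    print(url, rp.can_fetch("*", url))
```

A URL outside that prefix, such as `http://example.com/other`, is still allowed.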