lucy24 - 4:30 am on Mar 29, 2012 (gmt 0)
Robots meta tags and X-Robots-Tag HTTP headers are discovered when a URL is crawled. If a page is disallowed from crawling through the robots.txt file, then any information about indexing or serving directives will not be found and will therefore be ignored. If indexing or serving directives must be followed, the URLs containing those directives cannot be disallowed from crawling.
Know what's scary? To the folks at g### who wrote that paragraph, it is perfectly reasonable and logical. They seem to be saying:
You're not allowed to shut your door. You have to allow everyone into your house so you can ask them individually not to tell anyone what they've seen.
Or am I reading it backward? Wouldn't be the first time.
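For anyone who wants the mechanics rather than the metaphor, here's a rough sketch of what a well-behaved crawler does, with made-up URLs and a made-up bot name. The point is just that the robots.txt check happens before the request that would ever deliver a noindex, whether it's a robots meta tag or an X-Robots-Tag header.

from urllib.robotparser import RobotFileParser
import urllib.request

UA = "ExampleBot"  # hypothetical crawler name
robots = RobotFileParser("https://www.example.com/robots.txt")
robots.read()  # fetch and parse robots.txt

url = "https://www.example.com/private/page.html"  # made-up URL

if not robots.can_fetch(UA, url):
    # robots.txt says Disallow, so a polite crawler stops here --
    # it never sees <meta name="robots" content="noindex"> in the page
    # or an "X-Robots-Tag: noindex" header on the response.
    print("Disallowed by robots.txt; any noindex on the page is never read.")
else:
    # Only a crawled URL can deliver its indexing/serving directives.
    response = urllib.request.urlopen(url)
    print("X-Robots-Tag:", response.headers.get("X-Robots-Tag"))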