Forum Moderators: goodroi
so you have to add a Disallow line for each directory or file you don't want indexed. For example, you could disallow /cgi-bin/, /images/ and noindex.html.
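A robots.txt covering those three entries might look like this (a sketch; one Disallow line per path, matching the paths mentioned above):

```
User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /noindex.html
```

Note that each path gets its own Disallow line; you can't list several paths on one line.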
Or does it matter? Do I need a / before files if I'm excluding files in root web dir?
Also, once this is placed on the server, will previously indexed pages be removed from the Google index once they are listed as Disallowed in robots.txt?
There is a wildcard nature to the Disallow directive. The standard dictates that /bob would disallow /bob.html and /bob/index.html (both the file bob and files in the bob directory will not be indexed).
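To illustrate that prefix matching, a rule like this (the path is just an example):

```
User-agent: *
Disallow: /bob
```

would block /bob, /bob.html, /bobby.html and everything under /bob/, because Disallow matches any URL path that begins with the given string.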
To block all robots from one file in a subdirectory:
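Something like this, assuming a hypothetical file /subdir/private.html:

```
User-agent: *
Disallow: /subdir/private.html
```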
To block all robots from the files in an entire subdirectory:
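For a hypothetical directory /subdir/, note the trailing slash, which limits the rule to the directory's contents:

```
User-agent: *
Disallow: /subdir/
```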
Other techniques may work but I'm confident these work as advertised.
A sticky question, that one...
If Google finds a robots.txt Disallow for a page, it will remove the page's title and description from its search results. It will also no longer match search terms to the words on that page. So, the page essentially disappears from the Google search results pages. However, if Google finds a link to that page, it will still show that page in results when someone clicks on "More results from <this domain>".
I went around and around with this, trying to find a way to tell them "don't mention my contact forms pages at all, please", and here's what I ended up with:
For Google, don't Disallow the page in robots.txt, but place a <meta name="robots" content="noindex"> tag in the head section of the page itself.
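In the page itself, that tag goes in the head section, along these lines (the title is just a placeholder):

```html
<head>
  <title>Contact Form</title>
  <!-- tells spiders that honor this tag not to index the page -->
  <meta name="robots" content="noindex">
</head>
```

Because the page is not Disallowed in robots.txt, Google can fetch it, see the tag, and drop it from the index entirely rather than showing a bare URL listing.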
You'll need to do this for Ask Jeeves/Teoma as well; their handling of robots.txt is the same as Google's.
All the others seem to interpret a robots.txt Disallow as "don't mention this page at all." (I'm speaking of major U.S. search engines here - there may be other national and regional search engines which act like Google and AJ/T, but I am not aware of them.)
After reading the above, you may ask, "Well then, what good is robots.txt, if these search engines treat Disallows this way? Why not just use the robots metatag and forget robots.txt?"
The answer is that using robots.txt saves bandwidth. If a page is Disallowed in robots.txt, Google and AJ/T will list the page URL (with no title or description) if they find a link to it, but they will not download the page. On the other hand, in order to see the on-page robots metatag, a search engine *must* download the page. So using a robots.txt Disallow for those engines which treat it as "don't mention it" can save you a lot of bandwidth if the pages are large or spidered often because the site has high PR or link popularity. As a result, I have many pages which are disallowed for all engines except Google and AJ/T, and also are tagged with a meta name="robots" content="noindex,nofollow" specifically for Google and AJ/T.
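A sketch of that combined setup, assuming a hypothetical page /contact-form.html (the user-agent tokens Googlebot and Teoma are the real spider names; everything else here is illustrative):

```
# Let Google and Ask Jeeves/Teoma fetch the page,
# so they can see the on-page noindex meta tag
User-agent: Googlebot
Disallow:

User-agent: Teoma
Disallow:

# All other spiders treat Disallow as "don't mention it at all",
# and they never download the page, which saves bandwidth
User-agent: *
Disallow: /contact-form.html
```

The page itself then carries <meta name="robots" content="noindex,nofollow"> in its head section for the two spiders that are allowed to fetch it.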
I've tried to use my language carefully and specifically above - hopefully, this isn't too confusing...
Bandwidth is not a big issue, so I will probably use <meta name="robots" content="noindex"> instead. Oh well, wish I'd known this before; I have to start all over.
Does <meta name="robots" content="noindex"> also work for most or all other engines besides Google?