Forum Moderators: goodroi
User-agent: *
Disallow: /ord/
Disallow: /scan/
Disallow: /images/
Disallow: /customerservice.html
Disallow: /login.html
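As a quick sanity check on rules like the ones above, Python's standard-library robots.txt parser can show which paths a well-behaved crawler would skip. This is just an illustrative sketch; the file paths (`/ord/basket.html`, `/products.html`) are made-up examples, not from the actual site.

```python
from urllib.robotparser import RobotFileParser

# The same rules as in the robots.txt above
rules = """\
User-agent: *
Disallow: /ord/
Disallow: /scan/
Disallow: /images/
Disallow: /customerservice.html
Disallow: /login.html
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Hypothetical URLs to illustrate the effect of the Disallow lines
print(rp.can_fetch("Googlebot", "/ord/basket.html"))  # disallowed -> False
print(rp.can_fetch("Googlebot", "/products.html"))    # no rule matches -> True
```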
...
and Google seems to be obeying it because it hasn't downloaded any disallowed files since August 03.
However, it still hasn't dropped the pages from its index; there's just no title, snippet or cached copy. Some even have a PageRank of 5!
So how long does it take for Google to actually drop these pages? Or do they stay there indefinitely?
Hmm - read through that and can't find anything I don't already know: robots.txt, robots meta tag etc. Tried the automatic URL link - seems to be dead.
These entries are over 8 months old - surely they should have been removed by now (according to Google's FAQ it takes 6-8 weeks).
(BTW our site is crawled heavily every day).
Google will list any page it finds a link to, whether or not it is allowed (by robots.txt) to fetch and analyze that page. If the page is disallowed in robots.txt, it just lists it as a URL, with no title or description.
The solution to this problem is non-intuitive: You must *allow* the pages to be fetched in robots.txt, and then use the on-page html <meta name="robots" content="noindex"> tag to tell them to ignore the page.
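So the disallowed page itself would carry the tag in its head, something like this (the page name is just an example from the robots.txt above):

```html
<!-- robots.txt must NOT disallow this page, or the crawler never
     sees the tag; the meta tag then keeps it out of the index -->
<head>
  <meta name="robots" content="noindex">
  <title>Customer Service</title>
</head>
```

The point is that the crawler has to be allowed to fetch the page before it can read the noindex instruction.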
Ask Jeeves/Teoma does the same thing, and as of this month, Yahoo's Slurp is now apparently doing it, too. Yahoo adds the interesting twist of using the anchor text of the link as the title for the listing.
Jim