Forum Library, Charter, Moderators: goodroi

Sitemaps, Meta Data, and robots.txt Forum

    
Disallowing content previously indexed
What happens to indexed content subsequently blocked by robots.txt
daveozzz
msg:3334652
8:28 am on May 9, 2007 (gmt 0)

If you have certain pages which have been indexed, and subsequently you decide to block those pages via robots.txt, will search engines eventually drop those pages from the index? (Specifically Google)

I know I can manually remove URLs with the Google URL removal tool as long as they are blocked by robots.txt or return a 404.
It's stated that pages returning 404 will eventually be dropped anyway, but I couldn't see what happens over time to pages that are merely blocked in robots.txt if you don't manually remove the URLs.
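For reference, the kind of robots.txt rule being discussed looks like this (the /private/ path is purely illustrative, not from the thread):

```
User-agent: *
Disallow: /private/
```

Note that this only tells compliant crawlers not to fetch URLs under that path; it is not, by itself, a removal instruction.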

 

Quadrille
msg:3334687
10:08 am on May 9, 2007 (gmt 0)

I don't think it's reliable or rapid.

You may get better results with on-page instructions.

It may also be worth removing that page entirely and placing the content at a new, easily blockable URL.
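The "on-page instructions" mentioned above would be the robots meta tag; a minimal sketch:

```html
<!-- Placed in the <head> of the page you want dropped from the index -->
<meta name="robots" content="noindex">
```

One caveat worth noting: a page blocked by robots.txt cannot be crawled, so a noindex tag on it will never be seen. For the meta tag to take effect, the page has to remain crawlable.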

daveozzz
msg:3334733
11:22 am on May 9, 2007 (gmt 0)

Ah good idea.
I've changed the unwanted indexed items to return 404, and I've remade them in a way that'll be blocked by robots.txt.
Hopefully that'll sort it out eventually.
Cheers.
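For anyone following the same route, the new robots.txt rules can be sanity-checked offline with Python's standard-library parser before going live. This is a minimal sketch; the domain and paths are placeholders:

```python
# Sketch: verify a Disallow rule with Python's urllib.robotparser.
# The /private/ path and example.com domain are illustrative placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A URL under the disallowed path should be blocked for all crawlers...
print(rp.can_fetch("*", "http://example.com/private/page.html"))  # False
# ...while other URLs stay fetchable.
print(rp.can_fetch("*", "http://example.com/public/page.html"))   # True
```

Running a check like this catches pattern mistakes (a missing slash, a wrong path prefix) before a crawler does.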

All trademarks and copyrights held by respective owners. Member comments are owned by the poster.
WebmasterWorld is a Developer Shed Community owned by Jim Boykin.
© Webmaster World 1996-2014 all rights reserved