
Sitemaps, Meta Data, and robots.txt Forum

    
Noindex vs Disallow within a Sitemap
What is the point of using Noindex within a sitemap?
shaunm



 
Msg#: 4478700 posted 9:03 am on Jul 25, 2012 (gmt 0)

Hi,

I have never heard of using 'noindex' within a sitemap. What is the difference between the 'disallow' and 'noindex' parameters?

Suppose I want to block a single URL on my website from being crawled and indexed; which is the best way?

user-agent: googlebot
noindex: /fr/content.aspx

user-agent: googlebot
disallow: /fr/content.aspx


I just came across a website that uses both in its robots.txt file to make sure its pages are not indexed in the search results.
[korinaithacahotel.com...]


I would very much appreciate your help guys. Thanks a lot!


Best,

 

lucy24




 
Msg#: 4478700 posted 7:17 pm on Jul 25, 2012 (gmt 0)

Short answer: the word 'noindex' is not part of the Robots Exclusion Standard. Use it at your own risk.

Disallow = robots stay out, no crawling allowed
Noindex = page is not mentioned in google's* search index

Yes, a page can be indexed even if a search engine has not seen it. They only have to know it exists.


* I say specifically google, because That Other Search Engine has indexed a few pages that are clearly and explicitly labeled noindex.

phranque




 
Msg#: 4478700 posted 12:30 am on Jul 26, 2012 (gmt 0)

the problem with Noindex: in a robots exclusion protocol is that robots are for crawling, not indexing.


according to their documentation google only supports the Disallow: and Allow: directives in robots.txt.

Block or remove pages using a robots.txt file - Webmaster Tools Help:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449 [support.google.com]
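
if all you want is to keep that one URL from being crawled, a plain Disallow: is the documented way. a minimal sketch using the /fr/content.aspx path from the first post (the Allow: line is only there to show the other supported directive; googlebot applies the more specific rule to that URL):

User-agent: Googlebot
Allow: /fr/
Disallow: /fr/content.aspx

keep in mind that blocking the crawl doesn't guarantee the URL stays out of the index if other sites link to it, as lucy24 said above.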

shaunm



 
Msg#: 4478700 posted 6:58 am on Jul 26, 2012 (gmt 0)

@phranque
Thank you so much for answering! "robots are for crawling, not indexing" - it couldn't be put any better :)

Cheers!

shaunm



 
Msg#: 4478700 posted 7:06 am on Jul 26, 2012 (gmt 0)

@lucy24

Thanks! After a lot of research, I found that 'noindex' is not a robots.txt directive. But I am still confused: the robots.txt checkers available online do not flag the use of 'noindex' as an error. Why is that?

And also, when you say "I say specifically google, because That Other Search Engine has indexed a few pages that are clearly and explicitly labeled noindex", do you refer to NOINDEX in robots.txt or NOINDEX in a meta tag?

Thanks again.

shaunm



 
Msg#: 4478700 posted 7:07 am on Jul 26, 2012 (gmt 0)

And sorry about the mistyped title.

The correct one is 'Noindex vs Disallow within robots.txt'.

:)

lucy24




 
Msg#: 4478700 posted 9:54 pm on Jul 26, 2012 (gmt 0)

Oops. I meant the "noindex" meta tag. It would never occur to me to say "noindex" in robots.txt. I don't even use "allow", since only a handful of robots recognize the word.
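
A sketch of what I mean, using the page from the first post as an example: put this in the page's <head> and leave the URL crawlable, because the robot has to fetch the page to see the tag at all:

<meta name="robots" content="noindex">

If the same URL is Disallowed in robots.txt, the crawler never fetches the page and never sees the tag.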

Incidentally, when I first saw the topic header I thought it was going to be the perennial unanswered question: how the bleepity bleep do you prevent g### from indexing your sitemap and robots.txt? :)

phranque




 
Msg#: 4478700 posted 11:19 pm on Jul 26, 2012 (gmt 0)

how the bleepity bleep do you prevent g### from indexing your sitemap and robots.txt?


you could always try using the X-Robots-Tag HTTP header:
http://developers.google.com/webmasters/control-crawl-index/docs/robots_meta_tag [developers.google.com]
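
a rough sketch for an apache server with mod_headers enabled (the file names and server setup here are assumptions - adjust for your own configuration):

<FilesMatch "(robots\.txt|sitemap\.xml)$">
    Header set X-Robots-Tag "noindex"
</FilesMatch>

that puts the noindex signal in the HTTP response itself, which is the only place it can go for non-HTML files like robots.txt and an XML sitemap.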

shaunm



 
Msg#: 4478700 posted 5:28 am on Jul 27, 2012 (gmt 0)

@lucy24

hahaha... yes, it's all because of my wrong title. Good that you asked; otherwise phranque would not have shared that resource link :)

Thanks to both lucy24 and phranque for your detailed answers.

Cheers!
