|Robots.txt disallow everything in folder, but not folder itself|
| 1:55 pm on Sep 10, 2009 (gmt 0)|
Hi there, here's the situation:
I have about 10 brand pages like this, which it is very important to keep indexed.
Next, we have a lot of clickout links that need to be blocked by robots.txt. These clickouts live as IDs under the brands:
How do we block the latter links without disallowing the brand pages?
| 3:04 pm on Sep 10, 2009 (gmt 0)|
There's no good way to do this that will work for all robots. You should really put URLs you don't want spidered into a separate directory, or divide the brands directory into spiderable and non-spiderable directories, such as
/brands/public/brand/ and /brands/private/brand/
/brands-public/brand/ and /brands-private/brand/
/brands/brand-public/ and /brands/brand-private/
That is, spidering should be considered in the design of the directory layout.
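With a layout like the first option above, the robots.txt rule stays simple and works under the original Standard for Robot Exclusion (the directory names here are just the illustrative ones from the list):

```
User-agent: *
Disallow: /brands/private/
```

Everything under /brands/private/ is blocked for all compliant robots, while /brands/public/... remains spiderable.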
For Google and some other major search engines, you can use the "Allow:" directive and/or wild-card paths in robots.txt. But many search engines don't support "Allow:" or wild-card paths because they are not part of the original Standard for Robot Exclusion. That leaves you with the on-page (HTML meta-tag) robots control method, which may or may not be applicable to your situation. Or look into the X-Robots-Tag HTTP header -- but again, this is not supported by all robots.
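For the engines that do support wild-cards, a sketch might look like this (the /go/ path segment is an assumption based on the URL layout described later in this thread, not something every site will have):

```
User-agent: Googlebot
Disallow: /*/go/
```

The on-page alternative is a meta tag in each clickout page's HTML head, `<meta name="robots" content="noindex, nofollow">`, and the server-side alternative is sending an `X-Robots-Tag: noindex` response header for those URLs. Both only take effect after the page is actually fetched, so they control indexing rather than crawling.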
Really, the best approach is to consider file organization, spiderability, access-control, and cacheability as a fundamental part of directory-layout design...
| 3:15 pm on Sep 10, 2009 (gmt 0)|
Ok thanks for the information. So the best way is to move clickouts to a subfolder, say:
That wouldn't hurt the brand pages themselves, would it?
| 3:34 pm on Sep 10, 2009 (gmt 0)|
No, it won't "hurt" the brand pages. Only /brand<numbers>/go/<numbers>, /brand<numbers>/go/, and /brand<numbers>/go (if they exist) would be Disallowed.
Robots.txt uses prefix-matching: any URL-path that begins with the specified string is affected.
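That prefix-matching behavior can be verified with Python's standard urllib.robotparser; the /brand1/go/ path and example.com host below are just illustrations of the layout discussed above:

```python
import urllib.robotparser

# Build a parser from robots.txt lines that Disallow the clickout
# sub-path but leave the brand page itself unblocked.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /brand1/go/",
])

# Prefix match: any path beginning with /brand1/go/ is blocked...
print(rp.can_fetch("*", "http://example.com/brand1/go/123"))  # → False
# ...but the brand page itself does not match the prefix and stays fetchable.
print(rp.can_fetch("*", "http://example.com/brand1/"))        # → True
```

The same check explains why /brand1/go/ and /brand1/go/123 are both disallowed by one rule: both paths begin with the string given in the Disallow line.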