
Forum Moderators: goodroi


Robots.txt exclusion for dynamic page



8:42 pm on Jun 17, 2003 (gmt 0)

rogerd (WebmasterWorld Administrator)

For some reason, some of my shopping cart pages are getting indexed despite my attempts to stop this. The pages take the form of:

My robots.txt file contains the following:

User-agent: *
Disallow: /cgi-bin/
Disallow: /store/cart.asp

The file validates using Brett's checker. Do I need to change the syntax of the URL to make this work?
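For anyone wanting to double-check rules like these, the stock rules above can be sanity-checked with Python's standard-library robots.txt parser (a sketch; example.com is a placeholder host, and the item query string is an invented example of a dynamic cart URL):

```python
from urllib import robotparser

# The rules from the post above, fed to the stdlib parser.
rules = """\
User-agent: *
Disallow: /cgi-bin/
Disallow: /store/cart.asp
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Disallow lines are prefix matches, so dynamic query strings
# on the same path are covered as well.
print(rp.can_fetch("*", "http://example.com/store/cart.asp"))           # False
print(rp.can_fetch("*", "http://example.com/store/cart.asp?item=123"))  # False
print(rp.can_fetch("*", "http://example.com/store/index.asp"))          # True
```

If this reports the cart URLs as blocked, the syntax is fine, and the listings are likely the uncrawled-URL case described below.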


9:22 pm on Jun 17, 2003 (gmt 0)

jdmorgan (WebmasterWorld Senior Member)


Your robots.txt should prevent spiders from fetching your shopping cart pages as-is. However, Google will list any page it finds a link to, even without crawling that page. So, the answer depends entirely on what you mean by "cart pages getting indexed". Are these pages listed with a title and description, or is it just the URL that is showing in the SERPs?

In order to prevent the "list just the URL" scenario, the solution is counter-intuitive: you must allow Google to fetch the page, and then put a <meta name="robots" content="noindex"> tag on each page. On a large site with dynamic URLs, this may be most easily done with a "light cloak" that redirects search engines to a "noindex" page. Since there is no attempt to mislead a searcher, there should be no risk of a penalty.
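The "light cloak" idea could be sketched as follows, assuming a server-side rendered page (the original site used ASP; the function name, bot signatures, and user-agent matching here are all illustrative assumptions, not the poster's actual implementation):

```python
# Hypothetical sketch: serve a noindex robots meta tag only to known
# crawlers, so they can fetch the cart page but will not index it.
BOT_SIGNATURES = ("googlebot", "teoma", "ask jeeves")

def robots_meta(user_agent: str) -> str:
    """Return a robots meta tag for cart pages when the requester looks
    like a crawler; regular visitors get no directive at all."""
    ua = user_agent.lower()
    if any(sig in ua for sig in BOT_SIGNATURES):
        return '<meta name="robots" content="noindex">'
    return ""

print(robots_meta("Mozilla/5.0 (compatible; Googlebot/2.1)"))
# <meta name="robots" content="noindex">
```

Note that a plain noindex tag on every cart page works without any user-agent detection; the cloaking variant only matters when the tag cannot easily be injected into every dynamic page.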

The "list just the URL" problem exists with Ask Jeeves/Teoma, as well as Google.


