


Robots.txt disallow: /index.php?

then /index.php?param=example still allowed?

6:20 pm on Sep 20, 2003 (gmt 0)

Senior Member

WebmasterWorld Senior Member 10+ Year Member

joined:Apr 8, 2002
votes: 0

I want to allow Googlebot to crawl my index page, named index.php, but I want to disallow index.php?param=example. So if I make a robots.txt entry Disallow: /index.php? will index.php still get crawled? I saw in Google's own robots.txt [google.com] that they themselves exclude /mac? but I assume this would still allow /mac ...!? Is this valid? Can I use it? I couldn't find ANY info about it, neither on Google nor within the W3C specs ...

In clear words:

Disallow: /this.php?

-> /this.php?param=example
==> DOES NOT get crawled
-> /this.php
==> DOES get crawled?
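The question above boils down to plain prefix matching. A minimal sketch of that reading of the robots exclusion rules (a URL is disallowed if its path starts with any Disallow value) would behave exactly as the poster hopes; note this is an illustration of one possible interpretation, not a guarantee of what any particular crawler does:

```python
# Sketch of prefix matching as described in the original robots
# exclusion draft: a URL is disallowed if its path-plus-query starts
# with any Disallow value. Under that reading, "/this.php?" blocks
# the query version but not the bare page. Real crawlers may differ.

def is_allowed(url_path: str, disallow_rules: list[str]) -> bool:
    """Return True if no Disallow rule is a prefix of the path."""
    return not any(url_path.startswith(rule) for rule in disallow_rules if rule)

rules = ["/this.php?"]

print(is_allowed("/this.php", rules))                # True  (crawled)
print(is_allowed("/this.php?param=example", rules))  # False (not crawled)
```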

2:26 am on Sept 21, 2003 (gmt 0)

Full Member

10+ Year Member

joined:May 8, 2002
votes: 0

Following on from your Google example: a direct request to the unblocked URL shows that it has both PageRank and cache info, while the URL that is blocked in robots.txt has neither.

From this behaviour I would take the leap and say that /this.php will be crawled and indexed if your robots.txt has a Disallow: /this.php? directive.

At least for Google; I'm not sure how the other engines will take it. I checked the robots documentation as well and couldn't find any specific examples.

3:42 am on Sept 21, 2003 (gmt 0)

Senior Member

WebmasterWorld Senior Member 10+ Year Member

joined:June 18, 2003
votes: 0

Hmm. I have the reversed situation. I blocked every SE from accessing /cgi-bin/script.pl, but Googlebot still took all the pages with parameters (/cgi-bin/script.pl?something=here); now there are a bunch of them in the index, but they have no info.

4:44 am on Sept 21, 2003 (gmt 0)

Senior Member

WebmasterWorld Senior Member jdmorgan is a WebmasterWorld Top Contributor of All Time 10+ Year Member

joined:Mar 31, 2002
votes: 0

Regarding Moltar's comment:
> but they have no info.

Google and Ask Jeeves have a behaviour which is different from most other spiders: If either of these spiders finds a link to a page, they will list the page, regardless of whether robots.txt disallows crawling of that page. If the page is disallowed, they won't crawl (fetch) it, but they will list it by URL. Other search engine spiders interpret a Disallow as meaning "don't mention this page at all," but the "listing" behaviour of Google and AJ is not explicitly defined by A Standard for Robots Exclusion; all it describes is fetching behaviour.

Yidaki brings up another grey-area question: Since a query string is not technically part of a URL's path (it is instead an argument passed to an agent at a specific URL), is a robot expected/required to recognize different query string values as part of the URL for the purposes of matching a Disallow directive? My guess is that it is not a good idea to depend on any standard behaviour of different robots with respect to query strings. This may be another good argument in favor of using URL rewriting to make dynamic URLs look like static ones.
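The grey area described above can be made concrete with two equally plausible matchers, neither of which the original robots exclusion standard mandates; the same URL and rule give opposite answers depending on whether the robot matches against the full request target or strips the query string first:

```python
# Two plausible ways a robot might match a Disallow rule against a URL
# with a query string. Neither behaviour is mandated by the original
# robots exclusion standard, so crawlers can legitimately disagree.

from urllib.parse import urlsplit

def allowed_matching_full_target(path_and_query: str, rule: str) -> bool:
    # Treat "path?query" as one string and prefix-match the rule.
    return not path_and_query.startswith(rule)

def allowed_matching_path_only(path_and_query: str, rule: str) -> bool:
    # Strip the query string first, then prefix-match the rule.
    path = urlsplit(path_and_query).path
    return not path.startswith(rule)

url = "/index.php?param=example"
rule = "/index.php?"

print(allowed_matching_full_target(url, rule))  # False: disallowed
print(allowed_matching_path_only(url, rule))    # True: allowed
```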

Just some comments...
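The URL-rewriting suggestion above could look like the following in Apache mod_rewrite; the URL pattern and parameter name here are purely illustrative, not taken from the thread:

```apache
# Hypothetical sketch: expose a static-looking URL and map it
# internally to the dynamic one, so robots.txt only ever has to
# match simple paths without query strings.
RewriteEngine On
RewriteRule ^page/([a-z0-9]+)$ /this.php?param=$1 [L]
```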

12:17 pm on Sept 21, 2003 (gmt 0)

New User

10+ Year Member

joined:Sept 3, 2003
votes: 0

In my case, Google will crawl URLs with parameters, like these:

subcategory.php?param=16&subcat=blablaa ==> allowed
subcategory.php?param=16 ==> allowed
or even just page.php ==> allowed

So don't worry about using parameters, as long as you want your pages crawled by Googlebot.

I have more than 2000 pages and still increasing, using PHP, including parameters.

