How to stop crawlers from crawling https:// pages

   
10:20 am on Feb 27, 2008 (gmt 0)

5+ Year Member



How can I stop crawlers from crawling my https:// pages?
6:26 pm on Feb 28, 2008 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Q: Are you serving identical content via both http and https?

If I visit your site and request http://www.example.com/foo.html is the very same content also available at https://www.example.com/foo.html?

6:18 am on Mar 3, 2008 (gmt 0)

5+ Year Member



Hi Webdoctor,
Yes, some of my pages, including the index page, have been crawled by Google with both http:// and https://.
Ex: [mydomain.co.uk...]
[mydomain.co.uk...]
[mydomain.co.uk...]
[mydomain.co.uk...]

Both of the above URLs contain the same content.
So how can I stop Google from crawling my pages over https://,
and does this kind of issue have any bad effect on my site or my rankings?

6:29 am on Mar 31, 2008 (gmt 0)

WebmasterWorld Administrator incredibill is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month



If I follow the problem, you probably have ecommerce on your site and allow bots to crawl your shopping cart and/or checkout pages, which is where this issue typically starts.

From an old Google web page, assuming this is still accurate:

Each port must have its own robots.txt file. In particular, if you serve content via both http and https, you'll need a separate robots.txt file for each of these protocols. For example, to allow Googlebot to index all http pages but no https pages, you'd use the robots.txt files below.

For your http protocol (http://yourserver.com/robots.txt):
User-agent: *
Allow: /

For the https protocol (https://yourserver.com/robots.txt):
User-agent: *
Disallow: /

However, this is a problem if your HTTP and HTTPS sites share the same root directory; in that case you'd need a small Perl or PHP script to serve the proper robots.txt file depending on whether the secure server is being used.
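If you're on PHP, a minimal sketch of such a script might look like the one below. The file name robots.php is hypothetical, and you'd still need to point requests for /robots.txt at it (for example with a rewrite rule), so treat this as an illustration rather than a drop-in solution:

<?php
// Hypothetical robots.php -- map requests for /robots.txt to this script
// so one shared docroot can answer differently per protocol.
header('Content-Type: text/plain');

if (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') {
    // Secure server: keep crawlers out entirely
    echo "User-agent: *\n";
    echo "Disallow: /\n";
} else {
    // Plain http: allow everything
    echo "User-agent: *\n";
    echo "Allow: /\n";
}
?>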

6:31 am on Mar 31, 2008 (gmt 0)

WebmasterWorld Administrator incredibill is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month



BTW, another solution is to conditionally add a robots meta tag containing "NOINDEX,NOFOLLOW" to the pages served by the HTTPS server.
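Assuming your pages are generated by PHP, that conditional could be as simple as this sketch, placed in the <head> of your page template:

<?php
// Emit the robots meta tag only when the request came in over HTTPS.
if (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') {
    echo '<meta name="robots" content="noindex,nofollow">';
}
?>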
6:41 pm on May 6, 2008 (gmt 0)

5+ Year Member



Agree with the last post. :) The best thing is to use the meta tag (noindex, nofollow).
6:56 pm on May 6, 2008 (gmt 0)

WebmasterWorld Senior Member jdmorgan is a WebmasterWorld Top Contributor of All Time 10+ Year Member



You could also use mod_rewrite to detect the protocol and serve an alternate robots.txt file for HTTPS.

And while you're at it, add some rules so that HTTPS pages are redirected if requested via HTTP, and HTTP pages are redirected if requested via HTTPS. Just one of many "canonicalizations" you should do so that each page on your site is directly accessible by one and only one URL...
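A rough sketch of what such rules might look like in an .htaccess file is below. The hostname, the robots_ssl.txt file name, and the /checkout and /cart paths are placeholders; adapt them to your own site, and on older Apache setups you can test %{SERVER_PORT} 443 instead of %{HTTPS}:

RewriteEngine On

# 1) When a request arrives over HTTPS, answer /robots.txt with an
#    alternate file that disallows everything (robots_ssl.txt is a
#    placeholder name -- you create that file yourself).
RewriteCond %{HTTPS} =on
RewriteRule ^robots\.txt$ robots_ssl.txt [L]

# 2) Redirect HTTPS requests for ordinary pages back to http://,
#    excluding the areas that genuinely need SSL and the robots files.
RewriteCond %{HTTPS} =on
RewriteCond %{REQUEST_URI} !^/(checkout|cart) [NC]
RewriteCond %{REQUEST_URI} !robots.*\.txt$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

# 3) The mirror image: force the secure areas onto https://
RewriteCond %{HTTPS} !=on
RewriteCond %{REQUEST_URI} ^/(checkout|cart) [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]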

Jim

6:03 am on May 7, 2008 (gmt 0)

5+ Year Member



Hey jdMorgan,

Can you please explain which rules to use and how to write them?