Msg#: 3585963 posted 6:18 am on Mar 3, 2008 (gmt 0)
Hi Webdoctor. Yes, some of my pages, including the index page, are crawled by Google under both http:// and https://. For example: [mydomain.co.uk...] https://www.mydomain.co.uk/ [mydomain.co.uk...] https://www.mydomain.co.uk/product1.html
Both URLs serve the same content. How can I stop Google from crawling my pages over https://, and can this kind of duplication hurt my site or my rankings?
Msg#: 3585963 posted 6:29 am on Mar 31, 2008 (gmt 0)
If I follow the problem, you probably run ecommerce on your site and allow the bots to crawl your shopping cart and/or checkout pages, which is where this issue typically starts.
From an old Google web page, assuming this is still accurate:
Each port must have its own robots.txt file. In particular, if you serve content via both http and https, you'll need a separate robots.txt file for each of these protocols. For example, to allow Googlebot to index all http pages but no https pages, you'd use the robots.txt files below.
For your http protocol (http://yourserver.com/robots.txt):
User-agent: *
Allow: /

For the https protocol (https://yourserver.com/robots.txt):
User-agent: *
Disallow: /
However, this is a problem if your HTTP and HTTPS sites share the same root directory; in that case you'd need a small Perl or PHP script to serve the proper robots.txt file depending on whether or not the request came in over the secure server.
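As a sketch of what such a script might look like (shown here in Python rather than Perl or PHP; the function name and structure are my own illustration, not a standard recipe), it only needs to check whether the request arrived over SSL and emit the matching robots.txt body. Under Apache with mod_ssl, secure requests set the HTTPS environment variable:

```python
# Sketch: pick a robots.txt body based on whether the request
# came in over HTTPS. Usable as a CGI-style script where the
# web server sets HTTPS=on for secure requests (Apache/mod_ssl).
import os

ALLOW_ALL = "User-agent: *\nAllow: /\n"
DISALLOW_ALL = "User-agent: *\nDisallow: /\n"

def robots_body(environ):
    """Return the robots.txt body appropriate for this request."""
    if environ.get("HTTPS", "").lower() in ("on", "1"):
        return DISALLOW_ALL   # block crawling of the https:// duplicate
    return ALLOW_ALL          # let bots crawl the normal http:// site

if __name__ == "__main__":
    print("Content-Type: text/plain\n")
    print(robots_body(os.environ), end="")
```

You would then map requests for /robots.txt to this script on both ports, so each protocol sees its own rules even though they share one document root.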
Msg#: 3585963 posted 6:56 pm on May 6, 2008 (gmt 0)
You could also use mod_rewrite to detect the protocol and serve an alternate robots.txt file for HTTPS.
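Something along these lines should work (a sketch, assuming Apache with mod_rewrite enabled; the robots_ssl.txt filename is my own invention, use whatever name you like):

```apache
# In .htaccess or the vhost config: serve an alternate file
# in place of robots.txt when the request comes in over HTTPS
RewriteEngine On
RewriteCond %{HTTPS} =on
RewriteRule ^robots\.txt$ /robots_ssl.txt [L]
```

The robots_ssl.txt file would then contain the "Disallow: /" rules, while the real robots.txt keeps your normal rules for HTTP.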
And while you're at it, add some rules so that HTTPS pages are redirected if requested via HTTP, and HTTP pages are redirected if requested via HTTPS. That's just one of many "canonicalization" steps you should take so that each page on your site is directly accessible at one and only one URL...
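A sketch of those redirects (again assuming Apache/mod_rewrite; here the secure area lives under /checkout/ and the hostname is www.example.com, both placeholders for your own setup):

```apache
RewriteEngine On
# Secure pages requested over plain HTTP -> redirect to https://
RewriteCond %{HTTPS} !=on
RewriteRule ^checkout/ https://www.example.com%{REQUEST_URI} [R=301,L]
# Everything else requested over HTTPS -> redirect back to http://
RewriteCond %{HTTPS} =on
RewriteCond %{REQUEST_URI} !^/checkout/
RewriteRule ^ http://www.example.com%{REQUEST_URI} [R=301,L]
```

With 301s in place, the bots eventually drop the duplicate https:// URLs and keep only the canonical versions.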