Two-part question...
1) Has anyone else observed an increasing number of results showing up as secure (httpS) pages instead of non-secure?
2) Of all the types of duplicate content Google has to identify and act on, shouldn't they be able to tell that a page is an exact copy, just in a secure version, and ignore it? Errors in coding and design aside, this is by far the dumbest Google issue I've had to deal with.
I've only really begun to take notice of #1 because I've been dealing with the issues related to #2 over the last few weeks, so I'm unsure whether the secure results have been there all along.
Is it possible that Google's push for secure searches has led them to look for and overvalue httpS pages on a site?
Our issue appears to stem from relative links within secure areas of the website: once a bot lands on an HTTPS page, every relative link resolves to an HTTPS URL, so it ends up crawling a copy of every page in secure form.
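For anyone wondering how that happens, here's a minimal sketch (hypothetical URLs, Python standard library only) showing that the same relative href resolves to whatever scheme the current page was crawled under:

```python
from urllib.parse import urljoin

# Hypothetical pages: the same relative link inherits the scheme of the
# page it was found on, which is how a full HTTPS copy of the site gets discovered.
page_over_http = "http://www.example.com/account/orders"
page_over_https = "https://www.example.com/account/orders"
relative_link = "/products/widget"

print(urljoin(page_over_http, relative_link))   # http://www.example.com/products/widget
print(urljoin(page_over_https, relative_link))  # https://www.example.com/products/widget
```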
As a result, most pages previously ranking high on page two have been replaced by other options (related pages, child pages, etc.) farther down on pages 2-3. These option Bs, if you want to call them that, are pages still being cached as non-secure.
The occasional page does still rank in the same or similar spot even though it is now cached as the secure version.
We are attempting to remedy the issue by serving a separate robots.txt for the secure version of the site, and we've had some success in what has been recrawled so far.
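If it helps anyone check their own setup, here's a rough sketch (hypothetical domain, Python standard library only) that fetches the robots.txt served over HTTP and over HTTPS and confirms the secure one actually blocks crawling while the non-secure one stays open:

```python
from urllib import robotparser

# Hypothetical domain; swap in your own. The idea is that the robots.txt
# served over HTTPS disallows everything, while the HTTP one keeps the normal rules.
for scheme in ("http", "https"):
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{scheme}://www.example.com/robots.txt")
    rp.read()
    allowed = rp.can_fetch("Googlebot", f"{scheme}://www.example.com/")
    print(f"{scheme}: Googlebot allowed to fetch homepage? {allowed}")
```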
Any thoughts? Suggestions? Similar experiences?