Because of duplicate http and https content, I have a redirect on every page specifying whether it should be served over http or https, and I am already seeing some changes in indexing.
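To give an idea of what I mean by the redirect, it is essentially a per-page 301, roughly like this (an Apache .htaccess sketch; example.com and the /secure-section/ pattern are only placeholders, not my actual setup):

    RewriteEngine On
    # Force https on pages that should be secure (placeholder pattern)
    RewriteCond %{HTTPS} off
    RewriteRule ^(secure-section/.*)$ https://example.com/$1 [R=301,L]
    # Force http on everything else
    RewriteCond %{HTTPS} on
    RewriteCond %{REQUEST_URI} !^/secure-section/
    RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]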
However, in one folder I have pages similar to those in the root, i.e. the content is the same but the menu, footer, etc. are different. My robots.txt states that these pages should not be crawled.
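The robots.txt rule is just a folder-level Disallow along these lines (the folder name here is a placeholder):

    User-agent: *
    Disallow: /duplicate-folder/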
When I check my indexed pages I see too many: these pages are indexed under both http and https, but in Google search results they show no description because of robots.txt.
In order to get the https pages removed from the index more quickly, should I let Google crawl them?
On the other hand, I suppose Google will crawl them sooner or later and deindex the https pages anyway, but I assume that takes longer because robots.txt tells it not to crawl these pages.