Msg#: 4579374 posted 9:54 am on May 30, 2013 (gmt 0)
I'm running an online store that has a duplicate-content problem in Google. The category pages offer a lot of filter, sorting, and display options, and of course each option modifies the URL. The plugin that sets rel=canonical for these pages was configured improperly: instead of pointing to the basic category page with the products (as it should), it pointed to a non-existent page. Google ignored the tag and indexed many duplicate pages (we have about 6,000 real pages, but Google thinks there are about 42,000). Because of this we now have only about 200 pages in Google's main index and all the rest are in the supplemental index, so the whole site is pretty much treated as low quality by Google.

We have already fixed the plugin so rel=canonical is set properly now, and besides that I configured the Google crawler in Webmaster Tools to ignore all URL parameters. I assume it will take Google a lot of time to drop the duplicates, so I have two questions:

1. Should I block the duplicate pages in robots.txt? I've heard that's not a good idea, since hiding almost the whole site may look suspicious to the algorithm.

2. Is there anything else I can do to get my site back into the main index?
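To illustrate what the fix looks like: on a filtered or sorted variant of a category URL, the canonical should point back to the base category page. A minimal sketch with made-up URLs (example.com and /shoes are placeholders, not our real store):

```html
<!-- Served on a filtered/sorted variant such as
     https://www.example.com/shoes?sort=price&color=red -->
<head>
  <!-- The canonical must point to an existing, indexable page:
       the base category URL without any parameters. -->
  <link rel="canonical" href="https://www.example.com/shoes" />
</head>
```

The key point is that the canonical target has to be a real page that returns 200; a canonical pointing at a non-existent page gets ignored, which is exactly what happened to us.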