Forum Moderators: goodroi
There are several hundred articles using this convention, and thus many duplicate pages. The site does poorly in Google SERPs despite regular indexing and bucketloads of good content. I can only assume this is at least part of the reason.
From my understanding of robots.txt it isn't possible to exclude pages by extension or suffix. Is there a way?
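(For what it's worth, Googlebot does honor wildcard patterns in robots.txt — this is a Google extension, not part of the original robots.txt standard, so other crawlers may ignore it. Assuming the print pages share a common suffix such as print.html — a hypothetical pattern here, substitute whatever your print URLs actually end in — something like this might work:)

```
User-agent: Googlebot
Disallow: /*print.html$
```

(The `*` matches any sequence of characters and the trailing `$` anchors the match to the end of the URL, so only URLs ending in print.html are blocked.)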
I have already added a noindex, nofollow meta tag to the print pages - will this be effective? Or will it be interpreted as an attempt to hide duplicate content?
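(To be clear, the meta tag I'm describing is the standard robots meta tag, placed in the head of each print page:)

```
<meta name="robots" content="noindex, nofollow">
```

(This tells compliant crawlers not to index the page or follow its links, which is generally treated as a legitimate way to handle duplicate print versions rather than cloaking.)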