Needless to say, Google indexing our tracking URLs like this screws up our tracking, looks bad, and is generally confusing.
Would adding a meta noindex tag to pages that have tracking codes in their query strings do the trick? Are there better ways?
This seems obvious, but I don't want to discourage Google from indexing the real page. I would rather have an ugly URL and confusing logs than no traffic!
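For what it's worth, here is a minimal sketch of that conditional-noindex idea, assuming a Python-served page where TRACKING_PARAMS holds the query-string keys the ads use (the names are made up; adjust to your own). The clean URL gets no tag at all, so its indexing is untouched:

    # Emit a robots meta tag only when the request arrived via a
    # tracking URL, so the plain URL stays fully indexable.
    from urllib.parse import parse_qs

    TRACKING_PARAMS = {"trackcode", "engid", "gid", "af"}  # assumed names

    def robots_meta(query_string: str) -> str:
        params = set(parse_qs(query_string).keys())
        if params & TRACKING_PARAMS:
            # Tracked duplicate: ask spiders not to index this copy.
            return '<meta name="robots" content="noindex,follow">'
        return ""  # clean URL: no tag, index as normal

The follow part keeps the spider crawling the links on the tracked copy even though it won't index that URL itself.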
I just wanted to make sure I am not doing something stupid first :)
Thanks in advance.
I'm wondering if by any remote chance it could be fouling up the works with duplicate content. Too many home pages have been disappearing lately for no apparent reason.
And in our case, neither version shows up in the SERPs, since our site was a victim of the recent issues I have been blathering about elsewhere.
Curious...
www.domain.com%2F&engid=1694&gid=2370&af=0&qtype=0&qw=+keyword1+keyword2&ts=1082392794&cs=1dc0f/2
www.domain.com/pagename.html?trackcode=variousads
and dozens of others. How can you make a robots file tell Google not to spider these pages when the pages are on another server, or when there are far too many different tracking URLs to list? We must have over 3,000 different tracking URLs used over the past year, for dozens of different advertising networks and hundreds of keywords, and now all of these others have been dynamically generated. I don't think a robots file can be made for these.
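One partial answer: Googlebot does honor wildcard patterns in robots.txt (a Google extension to the original standard), so you may be able to block whole families of tracking URLs by parameter name instead of listing each one. A rough sketch, assuming parameters named like the examples above:

    User-agent: Googlebot
    Disallow: /*?trackcode=
    Disallow: /*&trackcode=
    Disallow: /*?engid=
    Disallow: /*&engid=

This only helps for URLs on your own server, though; your robots.txt can't control spidering of pages hosted elsewhere.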
Also, I note that allinurl: has been showing several old pages that have been gone from our site for over a year, along with cached versions of other pages that are also very old.
Our website took a pretty big hit when our home page and several main category pages were replaced in the index by a tracking URL. Our rankings are way down for these pages, and the listings have no titles or descriptions, probably because the pages have several duplicate listings with different tracking URLs behind them.
Now watch G slap me with a duplicate content penalty. I wish they could get this straight: 1 page = 1 URL.
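One way to actually enforce 1 page = 1 URL is to record the tracking hit server-side and then 301-redirect the visitor to the clean URL, so spiders only ever see one address per page. A rough sketch as Python WSGI middleware (log_hit is a hypothetical stand-in for whatever records the ad click; the parameter names are taken from the example URLs above):

    from urllib.parse import parse_qsl, urlencode

    # Assumed tracking parameter names.
    TRACKING_PARAMS = {"trackcode", "engid", "gid", "af", "qtype", "qw", "ts", "cs"}

    def canonicalize(app):
        """301-redirect any request carrying tracking parameters
        to the same path with those parameters stripped."""
        def wrapper(environ, start_response):
            pairs = parse_qsl(environ.get("QUERY_STRING", ""))
            kept = [(k, v) for k, v in pairs if k not in TRACKING_PARAMS]
            if len(kept) != len(pairs):
                # log_hit(pairs)  # hypothetical: record the click first
                location = environ.get("PATH_INFO", "/")
                if kept:
                    location += "?" + urlencode(kept)
                start_response("301 Moved Permanently", [("Location", location)])
                return [b""]
            return app(environ, start_response)
        return wrapper

Spiders that follow the redirect then credit everything to the clean URL, and the duplicate listings should drop out over time.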