Forum Moderators: open
This morning, I noticed that the URL, including the affiliate query string with the ID I'd given the site owner, was listed in Google. This brought up a couple of questions:
1) Does Google see the affiliate-tagged URLs as completely different pages? Could I get penalized for duplicate pages if I have a lot of affiliates linking back with different IDs?
2) Does Google make an effort to identify sites that use affiliate programs and penalize them, for example sites that use a query string variable named afid/af/affiliateid/referer/etc.?
I have been having doubts about whether to continue the affiliate program or shut it down, so any thoughts on this (or links to relevant topics) would be greatly appreciated. I did notice that of the few people who have set up affiliate links, none are really in directly related subject matter. That kind of kills off any hope of affiliate-donated PR, which is another reason I'm considering dropping the program.
Thanks!
Now, if YOU set up 100 domains to promote YOUR own affiliate program, and the content across each domain was identical, then you'd be in grave danger.
[domain.com...]
[domain.com...]
[domain.com...]
...
[domain.com...]
All the URLs above would be linked to from different affiliates. If Google sees each URL as a different "page" on my site, since the URLs differ (in the query string only), it might look to Google like I have n different pages that are all just identical to my home page. Maybe I'm just being paranoid. :)
Another thing I was wondering is whether the donated PageRank would be split among all those URLs, or credited to the base URL without the query string. I would be very interested to know if anyone knows what would happen there.
Thanks for everyone's input on the affiliate program; I was worried that maybe it was considered bad practice by search engines. I did notice today that my affiliate page has a PR0 (empty white box), but none of my other pages seem affected. I wonder if maybe the word "affiliate" is bad?
GoogleGuy has stated that they are getting much better at crawling dynamic pages, so I would expect that they are also working on eventually figuring out which query strings cause duplicate content.
I think it would be more important to figure out which clicks come from Google so you can reduce the affiliate payments, unless you want to give those sites extra credit for getting you high results in Google. You could check the referer and do a reverse domain lookup.
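A rough PHP sketch of that referer check, just the first half of the idea (the helper name and the credit-withholding logic are my own assumptions, not anything from this thread):

```php
<?php
// Sketch only: decide whether a click on an affiliate-tagged URL arrived
// via a Google search result, based on the HTTP referer header.
function referer_is_google($referer) {
    // Pull the host out of the referer URL; parse_url() returns null when
    // there is no host component and false when the URL is malformed.
    $host = parse_url($referer, PHP_URL_HOST);
    if ($host === null || $host === false) {
        return false;
    }
    // Match "google." at the start of the host or after a dot
    // (www.google.com, google.de, ...), but not e.g. notgoogle.com.
    return preg_match('/(^|\.)google\./i', $host) === 1;
}

// Hypothetical usage: only record the referral when the visit did NOT
// come through Google.
// $afid = isset($_GET['afid']) ? $_GET['afid'] : '';
// $ref  = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
// if ($afid != '' && !referer_is_google($ref)) { /* credit the affiliate */ }
?>
```

Note the referer header is trivially spoofed and often stripped, so this only catches the honest case; the reverse-lookup part of the suggestion would be a separate, heavier check.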
Alternatively they're choosing to spider more of them. This will cause quite a lot of duplication, but Google is very good at putting the duplicates and other fluff at the bottom of the listings.
I guess that most people will only find the extra pages when using very specific word combinations, in which case the extra results will be a good thing.
<?php
if (isset($_GET['print']) && $_GET['print'] == "yes") {
    echo '<meta name="robots" content="noindex">';
}
?>
While I'm just a beginner with this sort of thing, if you are using PHP, something like this might work if you add it to your header:
<?php
if (isset($_GET['afid']) && $_GET['afid'] != "") {
    echo '<meta name="robots" content="noindex">';
}
?>
In other words, if afid is present in the query string, the script will add:
<meta name="robots" content="noindex">
... to the header, thus preventing Google from indexing it.
At least that's what I want it to do!