| 11:16 pm on Mar 31, 2006 (gmt 0)|
I doubt it's the links, content is more like it.
| 10:51 am on Apr 1, 2006 (gmt 0)|
I have the same exact problem.
Google will not respond to emailed questions.
IMO this was/is a planned attack on affiliates.
Companies with affiliate programs are feeling the
negative impact and dropping their programs. All affiliate
companies should pressure Google to give their affiliate
webmasters fair treatment by letting them compete in the
search results. Google has a cadre of very smart intellectuals who could
remedy this problem. Why won't they?
| 1:33 pm on Apr 1, 2006 (gmt 0)|
Interesting to note this; I would agree, having seen similar behavior.
What is funny is that if you run an affiliate directory that directly sells your own affiliate links, with the link URLs going via your own site before they go on to the customer, your site will rank at the top of Google.
The reason is that Google is naive enough to treat these as links to your own content and doesn't pick up on the affiliate issue.
Meanwhile, if your links show Overture, AdWords, Miva or another network, you have a problem, IMO.
| 1:43 pm on Apr 1, 2006 (gmt 0)|
Here's a dumb question (forgive me)
What exactly are 'affiliate links' in the context discussed here?
Are they simply swapped or traded links? Mutual link farms and the like?
How about 'affiliate sites'? Same thing, mostly? Sounds rather fishy so far. -LH
| 11:53 pm on Apr 1, 2006 (gmt 0)|
Thanks for the feedback. I have to go somewhere in a moment and will write more later...
However, I just wanted to add that I have done some testing on whether my pages are coming up as "duplicate content" (e.g. via Copyscape), and on average they are not, or have only one duplicate result (which I also get on other sites that are not flagged supplemental).
This corroborates my belief that they are not flagged for duplicate content. The other thought is that they simply rank too low (a minimum of inbound links/PR). More thoughts later... thanks.
| 12:10 am on Apr 2, 2006 (gmt 0)|
larryhatch, I'm talking about well-known affiliate programs. For example, a huge corporation has an affiliate program to sell their products.
I get the database of all the products from them with info on the product, image etc.--so that part is what I am calling the duplicate content part, since other sites are using the same exact info. I pull the product images from their servers, so the img src would show their url. Also on each page there are several links and a form submit that go to their site with my affiliate code on it.
This is mixed in with some unique content.
Previously it was not supplemental; now it is. Just wondering what changed--did the algo/dupe filter get more strict?--and whether anybody else is having similar issues...
| 12:49 am on Apr 2, 2006 (gmt 0)|
I believe that recently (Big Daddy) Google started to boost original content relative to derivatives.
I mean not exact duplicates, but rather "nothing especially new compared with the original".
35% original content is probably not enough to count as original.
Google's priority, probably, is the interest of the searchers and not the site owners.
For searchers it is generally better to find the original content, so I believe this feature is going to stay.
| 11:37 am on Apr 2, 2006 (gmt 0)|
|I get the database of all the products from them with info on the product, image etc.--so that part is what I am calling the duplicate content part, since other sites are using the same exact info. I pull the product images from their servers, so the img src would show their url. Also on each page there are several links and a form submit that go to their site with my affiliate code on it. |
Even without Google's evident flair for complex algorithms, this reads like duplicate content to me.
In my experience there are many online traders whose idea of ecommerce is to throw a load of images and boilerplate descriptions online and call it a 'store'. That's fine for eBay, but the trend online is toward more unique, hand-crafted sites that offer something new. Why would you expect your similar/identical site to be considered any better than all the other people doing exactly the same thing?
In my view this is a throwback to bricks and mortar style stores. A big supplier might supply thousands of retailers across the country, each with the same products and info on the products, which was fine because they were all in different towns and cities. Customers wouldn't really have the opportunity to see lots of them in one go because of the geographical distance.
In short, this model doesn't transfer well online. If I search for consumer goods, e.g. a TV, I can sample lots of different stores in one go.
It makes sense for Google to try and do something about this. A very simple method is to make sure unique content is considered more important than duplicate content. At the very least this favors people who are doing something different.
It has been a long time coming, but I reckon all you're seeing is the death of ecommerce operations that have put little or no effort into developing and selling their products. It was only a matter of time, really.
| 12:20 pm on Apr 2, 2006 (gmt 0)|
Interestingly enough, I was actually going to post about this.
I have been persistently seeing affiliate URLs (of the form [affiliate.networksite.com...] etc.) in the SERPs and actually ranking for terms, albeit not so great.
It seems that if it smells like a URL, might be a URL, could be a URL, Google is going to try and index it.
| 2:01 pm on Apr 2, 2006 (gmt 0)|
One of my company's two main websites has an affiliate program with 1,400 affiliates that has been running for almost 3 years now. Google crawling activity on this site fell dramatically on Dec 15, 2005, and rankings plummeted the following week. Since Feb 06 the entire site (80K pages) has been supplemental on BD servers, except for the home page. We have not noticed a significant loss of affiliate traffic/orders as of yet.
I am not sure if Google is targeting actual affiliates, the merchants, or both? HiltonHead made a comment that sounded like merchants are dropping their affiliate programs. Are they really dropping their entire aff program to remain in the Google index?
Does anyone have any info on whether Google has really declared war on both affiliates and merchant sites w/ aff programs, or do you think it could have more to do with the type of links/redirects that are being employed, i.e. 302's or other techniques Google may find too spammy?
| 8:15 pm on Apr 2, 2006 (gmt 0)|
Is there a published method to look up only supplemental results for a search query?
I know this can be done fairly easily with the Google Search api, but I'm just wondering if anybody knows of a tool or syntax already out there.
| 8:26 pm on Apr 2, 2006 (gmt 0)|
There is no way to look up purely Supplemental Results. Did you know that a page might be Supplemental for one keyword search, and then show up as a normal result for a different keyword search that brings up the exact same page with the exact same URL?
As for links with affiliate IDs actually showing up in the SERPs, you should disallow those in your robots.txt file so that they cannot be indexed.
You want their page to be indexed with their URL. You want your page to be indexed using its native URL. You do not want your page with their affiliate code in the URL to be indexed.
You can control that using robots.txt, or by adding <meta name="robots" content="noindex"> to the page whenever it has been called with an affiliate ID tagged onto the URL. The code for doing that is very easy.
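In case a concrete sketch helps: the idea is to serve the noindex tag only when the request URL carries an affiliate parameter. A minimal illustration in Python (the parameter names "aff" and "affid" are invented stand-ins; any server-side language works the same way):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical affiliate parameter names -- substitute whatever
# your program actually appends to the URL.
AFFILIATE_PARAMS = {"aff", "affid"}

def robots_meta_for(url):
    """Return a noindex meta tag when the URL carries an affiliate ID,
    otherwise an empty string so the page stays indexable."""
    query = parse_qs(urlparse(url).query)
    if AFFILIATE_PARAMS & query.keys():
        return '<meta name="robots" content="noindex">'
    return ""

# The canonical URL stays indexable; the tagged variant does not:
# robots_meta_for("http://example.com/widget")          -> ""
# robots_meta_for("http://example.com/widget?aff=123")  -> noindex tag
```

The same condition could instead drive a robots.txt Disallow pattern, but the meta tag has the advantage of being decided per request by the script that already serves the page.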
| 9:02 pm on Apr 2, 2006 (gmt 0)|
Google has totally ignored all my pages tagged noindex, nofollow. Only by using robots.txt have I been able to achieve this.
Also, regarding a URL being supplemental for one search and normal for another: are you sure it's the same URL from Google's end? I have URLs that I can bring up as both supplemental and normal at the same time. Clearly they are two different URLs to Google.
| 9:07 pm on Apr 2, 2006 (gmt 0)|
Yes. The very same exact page.
I changed a phone number on a page.
If you search for the old phone number, the page is returned as a supplemental result, and the snippet shows the old phone number.
If you search for the new phone number, the page is shown as a normal result, and the snippet shows the new phone number.
In both cases the page links through to a cache page that is only a week old, and shows the new phone number.
| 9:16 pm on Apr 2, 2006 (gmt 0)|
g1smd, thanks for the info, no I didn't know that the page can be both supplemental or non-supp depending on the query.
As far as the robots.txt--I don't think that really covers my problem, although it may be tangentially related. My urls pointing to the affilates site are not being indexed. The problem is the pages on my site (which have those urls) are not being indexed.
| 9:20 pm on Apr 2, 2006 (gmt 0)|
Right, that's what I see: different versions of the same URL. But you are saying it's the same URL. I am saying it appears to me that the supplemental version and the normal version are in fact different URLs to Google.
In other words, Google sees them as two different URLs, and my observation is that you can get both to appear in a single search.
| 9:26 pm on Apr 2, 2006 (gmt 0)|
What do you mean by different "versions" of "the same" URL? If all the characters are not 100% identical, beginning with the "h" at the start and going right through to the last character of any query string, then those ARE two different URLs.
| 9:35 pm on Apr 2, 2006 (gmt 0)|
I mean the HTML of the page is different. The actual URL is identical. Some changes were made to the HTML, but how significant that is I'm not sure. As g1smd
said, his difference was minimal. All I'm saying is that the same URL is indexed as both supplemental and normal. However, the rankings of the normal page are affected by the supplemental one. The same URL seems to be treated as two URLs by Google by being both supplemental and normal at the same time. It seems that the two indexes each think they have a unique URL, and you can get both to show as a result in a single search, one the indented version of the other.
[edited by: soapystar at 9:36 pm (utc) on April 2, 2006]
| 9:35 pm on Apr 2, 2006 (gmt 0)|
|MikeAA - Does anyone have any info on whether Google has really declared war on both affiliates and merchant sites w/ aff programs |
Within the last year I read a quote from a Google engineer to a meeting participant to the effect of "you won't have to worry about affiliate webpages in Google much longer because they are going away".
I don't recall where I read it or what meeting was involved, but surely someone on this forum does.
|or do you think it could have more to do with the type of links/redirects that are being employed, i.e. 302's or other techniques Google may find too spammy? |
I don't use any spammy techniques.
All my affiliate webpages include links directly back to each company and they all are Supplemental.
No one from Google will answer these questions, because it was part of their plan to bust affiliates and increase their own revenues. A ton of cash flow based on the historic sales of my websites has gone somewhere, and the suspected recipient is Google.
It would be shocking to receive an explanation from Google - they will not respond to email requests.
| 9:39 pm on Apr 2, 2006 (gmt 0)|
>>>because they are going away<<<
Personally, right now I see an increase in pure affiliate links being returned for competitive phrases. The domain is the main site, but with the affiliate ID in the URL. Not isolated URLs; for different searches, different affiliate URLs.
Of course, these are the links FROM the webpages. The pages that link to them don't rank. Is that what he was saying?
| 10:22 pm on Apr 2, 2006 (gmt 0)|
|personally right now i see an increase in pure affiliate links being returned for competitive phrases. However the domain is the main site but with the affiliate id in the url. Not isolated urls but for different searches different affiliate urls. |
Yes, that is what I am seeing everywhere too. I hate it.
Of course, I congratulate the guy who invented it first. He is making a lot of money now, as I see mostly one affiliate ID for anything I search.
On the other hand, I think merchants do not like/allow that activity. It is like paying an independent agent for each sale performed in your own store.
| 10:48 pm on Apr 2, 2006 (gmt 0)|
If the links are taking visitors to the main merchant site, and the links include an affiliate ID, then the owner of the main site has everything in their power to get those duplicate URLs delisted.
They simply need to make those URLs serve a <meta name="robots" content="noindex"> tag if the affiliate ID is present in the URL. This involves a very simple change to the script that serves the page.
| 10:08 am on Apr 3, 2006 (gmt 0)|
Thanks g1smd for the tip! We are going to give that a try.
| 12:21 pm on Apr 3, 2006 (gmt 0)|
I think Google should first help honest webmasters and sort out the problems with their index, and AFTER THAT, if they want to kill affiliates (and a big part of the Internet along the way), they can try to do what they want.
Sadly this is already happening: one of my clean sites was manually checked and then penalized by Google. The only reason was that the main content came from an outside source (think of about 99% of all ringtone sites). I think it's outrageous. Yes, I will use AdWords, but how willing am I to do that if my domain has been manually filtered?
The Internet is a good place to find shopping information and do your shopping, and affiliates do a good job of helping people find good online stores. Now many affiliate sites are gone from the Google index and have been replaced with non-relevant content, forum messages and splogs. For some keywords I follow, I can no longer find what I'm looking for, whereas 6 months ago the same keyword listed several pages of what I was searching for. Mostly affiliate content, but it was better than nothing.
| 7:39 pm on Apr 5, 2006 (gmt 0)|
What's happening is this:
BD DCs now try to resolve all index pages to one URL. So suppose domain.com has its affiliate links as
domain.com/index.jsp/pageName=hotNetList&cid=#*$!&city=xxxxxx&stateProvince=&country etc etc
All those URLs are given the PR of domain.com, not of the linking page. The URLs are then shown in the SERPs as though they were the best choice for domain.com. So if it has millions of pages all under index.jsp, then they are all considered relevant for thousands of different searches, with the most relevant URL being shown. Do you follow? It's as though Google now allows thousands of pages all to be considered the legitimate index page, and will root through them all to get the most relevant one for each search.
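One way to picture the opposite of this behavior: a crawler that strips session/affiliate parameters would collapse every such variant onto a single canonical URL instead of treating them all as candidates for the index page. A minimal sketch (parameter names like "cid" and "aff" are invented for illustration):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical tracking/affiliate parameter names; the real ones
# depend on the affiliate program in question.
TRACKING_PARAMS = {"cid", "aff", "sessionid"}

def canonicalize(url):
    """Drop tracking parameters so every parameterized variant of a
    page maps to one canonical URL."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k.lower() not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

# Both variants below collapse to the same canonical URL:
# canonicalize("http://domain.com/index.jsp?pageName=hotNetList&cid=123")
# canonicalize("http://domain.com/index.jsp?pageName=hotNetList")
```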
| 4:47 am on Apr 7, 2006 (gmt 0)|
soapystar, I don't quite follow this. Can you explain a bit more?
Did you mean that all affiliate URLs created under domain.com/index.jsp/ (targeting product A), such as...
...will rank better than the merchant's real URL? Such as...
www.domain.com/abcde/abce.htm (targeting product A as well)...
Please advise...
| 10:36 am on Apr 10, 2006 (gmt 0)|
Given my limited knowledge, this looks more like a 302 hijack by the affiliate network operators themselves.
A header check of the URL gives this
HTTP/1.1 302 Found
Date: Mon, 10 Apr 2006 09:53:20 GMT
Cache-Control: private, max-age=0
P3P: policyref="#*$! CP="NOI DSP COR NID CUR OUR NOR"
Content-Type: text/html; charset=iso-8859-1
#2 Server Response: [xxx.com...]
HTTP Status Code: HTTP/1.0 302 Object moved
Date: Mon, 10 Apr 2006 09:54:55 GMT
P3P: CP="ALL DSP ETC"
Set-Cookie: Session etc etc
Redirect Target: [xxx.com...]
#3 Server Response: [xxx.com...]
HTTP Status Code: HTTP/1.0 200 OK
Date: Mon, 10 Apr 2006 09:54:56 GMT
P3P: CP="la di la"
cache-control: no-store, must-revalidate, private
Content-Type: text/html; Charset=ISO-8859-1
Expires: Mon, 10 Apr 2006 09:53:56 GMT
The final target page is relegated to [more results from www.xxx.com] in favor of another page that appears in the results yet seems less relevant for the search phrase, with the affiliate URL positioned 3 or 4 places below.
If anyone can tell me what's going on here, I'd be interested to know.
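For reference, the kind of header check shown above can be reproduced with a short script. The sketch below parses a raw response head and pulls out the status code and redirect target; the sample response is invented for illustration, not taken from any real site:

```python
def parse_head(raw):
    """Parse a raw HTTP response head into (status_code, location).
    location is None when the response is not a redirect."""
    lines = raw.strip().splitlines()
    status = int(lines[0].split()[1])  # e.g. "HTTP/1.1 302 Found" -> 302
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return status, headers.get("location")

# Invented sample mirroring the shape of the responses quoted above.
sample = """HTTP/1.1 302 Found
Date: Mon, 10 Apr 2006 09:53:20 GMT
Location: http://www.example.com/landing
Content-Type: text/html; charset=iso-8859-1"""

status, target = parse_head(sample)
# status holds the 302 code; target holds the next hop in the chain.
```

Chasing each Location header in turn (the way the three-step trace above was produced) is just a loop over this until a 200 comes back.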