| 5:27 am on Aug 12, 2005 (gmt 0)|
So, what I'm hearing is that I may have cut my own throat with AdSense? I have a site that sells "stuff"; I added some AdSense and found I could make more money with the AdSense than I could when I actually sold some "stuff", so I papered my website in AdSense. (OH Yea, I papered it baby! LOL)
On July 16, 2005 every page on my site was moved back 40-50 places in the serps. It was a huge site-wide penalty, and I'm waiting for the next update to see if it was the ads (which are now the only things supporting my sorry ***, 'cause I didn't take them down), or if it was some of the other little things I've found that I did wrong.
Whatever it ends up being, I'm not pulling down the AdSense until I've tried everything else first.
I'll let you know...
| 5:52 am on Aug 12, 2005 (gmt 0)|
>>On July 16, 2005 every page on my site was moved back 40-50 places in the serps.<<
Lucky you ;-)
Most of my pages moved down to position 300+ in the serps.
And OH Yea, I papered my website in AdSense too ;-)
| 6:26 am on Aug 12, 2005 (gmt 0)|
I think it's quite obvious that numerous sites with reciprocal link directory/resources pages which link out to many unrelated sites still rank very well in all sectors. The question is: what causes some other sites - that also use the same link directory structure - to be banned or penalized (which I'm assuming has happened to a few sites)? Could it just be linking out to a few bad neighborhoods, or does Google ban a site the instant a manual review finds a reciprocal link directory on that site? I don't think there's an algorithmic way of detecting a custom-programmed reciprocal link directory, and even if there were, many innocent sites could get banned if a filter were applied across the entire index.
| 6:41 am on Aug 12, 2005 (gmt 0)|
|Could it just be linking out to a few bad neighborhoods, or does Google ban a site the instant a manual review finds a reciprocal link directory on that site? I don't think there's an algorithmic way of detecting a custom-programmed reciprocal link directory, and even if there was, many innocent sites could get banned if a filter was applied across the entire index. |
I understand that Google only does manual reviews by exception, and previous experience has shown that it is willing to accept the collateral damage of a few innocent sites getting banned with the introduction of any new filter or algo change.
| 7:36 pm on Aug 12, 2005 (gmt 0)|
"The question is: what causes some other sites - that also use the same link directory structure - to be banned or penalized"
Airpal, that's exactly my question currently as well. I'm leaning towards some type of manual intervention due to the factors you mentioned; I'm not seeing any consistent application of this penalty. And it's not only the same link directory structure, but the same directory-generating software, at least judging by the sites I'm seeing remaining in the serps.
| 8:16 pm on Aug 12, 2005 (gmt 0)|
Should Google Panelize Resources Pages with lots of Outbound links?
Nope. They should "Wallpaperize" them instead.
| 10:10 pm on Aug 12, 2005 (gmt 0)|
I still don't have solid facts for this one. But I have noticed that it's seldom that you see an AdSense publisher site in the top 10 of Google's serps. Maybe it's only on the keywords/keyphrases I'm testing.
Any of you folks have noticed the same or the opposite?
| 12:49 am on Aug 13, 2005 (gmt 0)|
Well, I think the significant majority of sites out there do not use Adsense, so that is sort of what I'd be expecting. If an Adsense publisher was on top of every search, I think I'd be a little suspicious of Google's intentions, wouldn't you? (-:
| 6:48 am on Aug 13, 2005 (gmt 0)|
Speaking of directories and scraped stuff: on my "papered with AdSense" site, I had about 10 pages out of 100 where most of the information was "taken" from a gov't website. I give full credit to the gov't and even go so far as to quote what I took (just like we learned in school, LOL), and don't really want to take down the info. BUT, looking at the reply Matt Cutts has on his blog about scraped info, he says you have to get rid of it before you can get reincluded...
Here is the big question...... drum roll.....
Can I just disallow the pages in robots.txt and still use them for my visitors, or do I have to dump them altogether from the site? (I like them and want to keep them. They're mine dang it, I stole them fair and square.)
| 7:34 am on Aug 13, 2005 (gmt 0)|
>>Can I just disallow the pages in robots.txt and still use them for my visitors, or do I have to dump them altogether from the site?<<
Not sure if this will work, but there is still the possibility of adding this meta tag to the pages you wish Google to exclude:
<META NAME="GOOGLEBOT" CONTENT="NOINDEX, NOFOLLOW">
Have any of you folks done that before?
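For comparison, here's a rough sketch of the two approaches being discussed (the path is a made-up placeholder, not the actual site). A robots.txt Disallow stops Googlebot from fetching the pages at all, while the meta tag lets the page be fetched but asks Google not to index it (and, with FOLLOW, still lets it follow the links on the page):

```
# robots.txt — blocks Googlebot from crawling the quoted pages entirely
# (the /govt-quotes/ path is a hypothetical example)
User-agent: Googlebot
Disallow: /govt-quotes/

<!-- Alternative: a per-page meta tag. The page can still be crawled,
     is not indexed, but links on it are still followed. -->
<META NAME="GOOGLEBOT" CONTENT="NOINDEX, FOLLOW">
```

One practical difference: pages blocked in robots.txt can still show up in the index as URL-only listings, since Google knows they exist from links; the NOINDEX meta tag is the more reliable way to keep a page out of the index entirely.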
| 3:44 pm on Aug 13, 2005 (gmt 0)|
Don't cut your nose off with the nofollow in the meta tag.
Noindex tells Google not to index the page, but there may be pages you have linked to from it that s/b indexed.
Google is interested in having only one copy in its index, and the noindex does that.
Now if you are quoting a small amount of a document and have lots of your own content on the page then I'd say to have the page indexed and to cite the original via a link.
Have you had any luck finding anything that is common to the banned sites that also doesn't exist on unbanned sites?
If the banning was the result of complaints to Google from other parties then I'll bet you won't find anything.
All I have seen so far look like possible duplicate content issues that got out of hand.
However, it is hard to be certain since the sites are no longer in the index.
BTW I'm still trying to figure out what a resource page is.
| 3:49 pm on Aug 13, 2005 (gmt 0)|
|Can I just disallow the pages in robots.txt and still use them for my visitors, or do I have to dump them altogether from the site? (I like them and want to keep them. They're mine dang it, I stole them fair and square.) |
Have you considered just linking to the government documents?
If not, why would Google want to index those pages on your site when it can index the original (and more authoritative) sources?
| 4:09 pm on Aug 13, 2005 (gmt 0)|
Well said. The more European beer I drink [burp!] the more sense you make. -Larry
| 4:25 pm on Aug 13, 2005 (gmt 0)|
Of course we are assuming that the government document is in fact:
1: The original.
and of course
3: On the web.
Beer good, with that I invoke rule four.
| 4:29 pm on Aug 13, 2005 (gmt 0)|
"Have you considered just linking to the government documents?"
I have a similar situation. I link to societies and organizations that have to do with my topic. It took years for me to build this resource section and I am happy that my readers have another place to look for information on my topic. Maybe I should prevent google from indexing it. But then again, they completely banned me, for what I have no idea. I just changed from a shared IP to a dedicated one last week, so I am hoping that was my problem. I had no idea that I had a shared IP and that it could be a problem.
I do not use adsense. I never did.
| 6:14 pm on Aug 13, 2005 (gmt 0)|
Much of the world suffers from Beer Deficiency Anemia, and they don't even know it. -Larry
| 10:09 pm on Aug 13, 2005 (gmt 0)|
Hmmm, I did link directly to the documents on the gov't site AND I quoted about 1/3 to 1/2 of the information (the important stuff), so that my readers would have the important info, but be able to follow to the original document if they wanted more.
It was better to put the info on my site, because that is where all the other supporting info is located, like adsense... Just kidding.
I like porter and stout, other types of beer don't have enough nutritional value to replace solid food.
| 6:57 am on Aug 14, 2005 (gmt 0)|
>>Don't cut your nose off with the nofollow in the meta,
noindex tells Google not to index it. There may be pages that you have linked it to that s/b indexed.<<
Thanks for the info. Much appreciated.
I myself also considered adding the meta tag
<META NAME="GOOGLEBOT" CONTENT="NOINDEX, FOLLOW">
to the pages causing problems with Google. But honestly I have no idea which pages are in question.
As to my site, most of my resource pages are doing very well on Yahoo, Ask Jeeves, Wanadoo UK and MSN, and I don't intend to make big changes there. The 10% of Google referrals I have left come mostly from international Googles, especially Google UK.
And when I run the command site:www.mysite.dk everything seems perfect (complete, correct listings; no duplicates; no supplemental results; no non-www vs. www.mysite.dk problem, etc.). Also, a search for www.mysite.dk and the site name returns top placements. Googlebot visits once or twice a day, etc.
So I'm waiting for the next "crawl" which the Google Team wrote me about, whatever that means ;-)
| 6:00 am on Aug 17, 2005 (gmt 0)|
It seems that we have to classify the sites affected by the latest Google updates in an effort to understand and adapt to the changes:
- Sites which are removed from the index totally. Such sites might be called banned sites.
- Sites where all pages have lost their positions in the serps, but the site is still indexed correctly. Let's call these penalized sites.
- Sites where only some of their pages lost their positions in the serps, but the site is still indexed correctly. These sites might be called semi-penalized sites.
Based on the posts I read in several threads and the replies I got from the Google Team (which could be just standard ones), I guess what happened is:
Sites with resource pages have ended up as either penalized or semi-penalized sites. The reason(s) could be much tighter filters/algos. The question now is: will such sites regain their positions in the serps after the next "crawl", update, reshuffling, etc.?
| 9:41 am on Aug 18, 2005 (gmt 0)|
Just to add my twopennyworth to the discussion.
1) Google may discriminate between resource pages that are kept up to date and those that are not maintained and contain a percentage of broken links.
2) Google may discriminate against resource pages where the amount of link text exceeds (or is some percentage of) the other page text. A useful resource page should have explanatory comments about each link.
3) As someone stated earlier, Google may discriminate against alphabetized resource pages. A useful resource page is usually divided into categories.
4) Google may discriminate against resource pages that carry AdSense. (Not sure about that.)
Having said all that, the discussions in this thread take little account of what actual users do, which no doubt Google is continuously researching and changing its algo to accommodate. I suspect the average user doesn't even know what search engine they are using, and wouldn't recognize the difference between a scraper site and a legitimate resource.
Let's say they search on "foobar hotels". If they land on a scraper directory, so what? It's got a list of "foobar hotels". It's also got adverts about "foobar hotels". Does the average user back off to the original Google results? I doubt it. I suspect they mindlessly plough on, clicking a link here, or an advert there, until they find what they want or give up. On the web you are not dealing with individuals, but the mob, and the chief characteristic of the mob is stupidity.
We must all have encountered this stupidity. I get continual traffic to a sitemap page where users have searched for "map of foobar". Why? Don't they read the snippet, which clearly shows it's not what they want? Of course not. It's just a world of click, click, click, click, click...
| 5:16 pm on Aug 18, 2005 (gmt 0)|
Us users be lusers and really stupid; didn't your mouse buttons come as part of your original equipment?
Mine did, now if I can just get the mouse and my hand out of my cat's mouth. I feel a good case of the clickies coming on.
BTW, the mouse says squeak ;-).
| 7:09 pm on Aug 18, 2005 (gmt 0)|
"Have you had any luck finding anything that is common to the banned sites that also doesn't exist on unbanned sites."
theBear, it's hard to say for sure. There are possibilities, but a lot of errors were made recently on the banned sites, so it's hard to pinpoint any one unique factor. I suspect manual intervention of some type, or possibly a very loose algo component with at least one more trigger factor that nobody here has found.
| 9:36 pm on Aug 18, 2005 (gmt 0)|
The strange thing is that most of the 10% of Google referrals left for my site after the 22nd July "disaster" originates from international Googles, especially Google UK. Though my site is www.mysite.dk and not hosted in the UK.
However.. a big thank you to the folks of UK ;-)
| 1:55 am on Aug 21, 2005 (gmt 0)|
Our site just got back up as well. It took a couple of reviews before we got back in the index.
We found that if you remove pages it is extremely important to delete them so the user gets a 404 file not found.
Initially we just put up a redirect to our main page from the pages in question, and we were not reinstated into the index after Google's initial review.
We then deleted them completely and submitted a new reinclusion request. This time our efforts paid off.
The problem pages were a DMOZ section on our site!
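For anyone in the same boat, here's one way to make sure removed pages return a hard error instead of a redirect (a sketch only; the path is a hypothetical placeholder, not the actual DMOZ section). Simply deleting the files makes the server return 404; if you want to be explicit, Apache's mod_alias can mark the old URLs as gone (410), which tells crawlers the removal is intentional, much like a 404:

```
# .htaccess sketch — pages removed for good should NOT redirect
# to the home page. Either delete the files (server returns 404),
# or explicitly mark the old URLs gone (410) with mod_alias:
Redirect gone /dmoz-directory/
```

The key point from the post above: a redirect to the main page tells Google the content still exists somewhere, while a 404/410 tells it the pages really are gone.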
| 6:23 am on Aug 21, 2005 (gmt 0)|
I have noticed today some movement on several DCs for my testing keyphrases. Any of you have noticed the same?
| 10:45 am on Aug 21, 2005 (gmt 0)|
I have noticed a change, although it is too early to be certain. I lost all my Google traffic on June 16, and when it returned it was from everywhere in the world except the US. Today I am getting US traffic again. Have to see if it holds up.
| 12:40 pm on Aug 22, 2005 (gmt 0)|
I noticed the same starting on the 21st and continuing today. Might be something brewing for the end of August or the beginning of September.
|Small Website Guy|
| 1:21 pm on Aug 22, 2005 (gmt 0)|
There is no way that Google would punish outbound links in general, because that's what the web is about! Without websites linking to each other, how would search engines even be able to figure out which websites are more important?
BUT, if I were trying to make a good search engine, I would want to show pages that had natural links, and not links in which the webmasters gamed the system through unnatural link exchanges.
But some link exchanges are natural. Blogs link to each other, not because they are trying to game Google, but because the two bloggers feel that the other blog is interesting to their readers. (Well, bloggers also want traffic, but unlike most SEO link exchanges, you actually get quality traffic to your blog via having lots of blogroll links. Blog readers do click on them.)
Google has definitely done something, because for the keyword I follow, you used to get nothing in the top results but zillions of widgeting "directories" that boosted their PR by exchanging links. Now you get a lot of sites that actually sell widgeting services. The new results are more useful to real people doing searches.
My site has been getting a lot more traffic since early July, so whatever Google did I'm really happy about it. And my site does have many pages of outbound links and does link exchanges. Maybe human beings at Google actually looked at my site and found it better than most of the other sites doing the same thing as me?
| 2:22 pm on Aug 22, 2005 (gmt 0)|
And I thought I was an SEO Deluxe ;-)
The truth is, the more cleaning I do on my site and the more I optimize pages to please Google, the more referrals I get from Yahoo... while Google referrals are still only 10%.
Accordingly, the title of my next e-book shall be:
Rankings Revealed - How to get your site to the top of Yahoo by pleasing Google ;-)
| 11:23 am on Aug 23, 2005 (gmt 0)|
>>On the web you are not dealing with individuals, but the mob, and the chief characteristic of the mob is stupidity.
That is both the funniest and truest thing I have read today. Sad, but so true.