Forum Moderators: Robert Charlton & goodroi


Google Updates and SERP Changes - March 2011


Whitey

4:53 am on Mar 1, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



< continued from [webmasterworld.com...] >

< related Panda Farm Update [webmasterworld.com] >


I keep dropping mentions of this, but no uptake, so I did some digging for clues to support my theory that Chrome is passing back intelligence that could influence this new algo and future changes:

New Chrome extension: block sites from Google's web search results
Monday, February 14, 2011 | 12:00 PM

Today the Google web search team launched a new Chrome extension to block low-quality sites from appearing in Google’s web search results. Read more in the post below, cross-posted from the Official Google Blog. - Ed


[chrome.blogspot.com...]

Also - [webmasterworld.com...]

I think user behaviour data is being underestimated in this thread. Each website will have an in-depth profile building that feeds into a potential quality assessment by Google. What say you?

[edited by: tedster at 8:15 pm (utc) on Mar 15, 2011]

browsee

3:59 am on Mar 17, 2011 (gmt 0)

10+ Year Member



@walkman, initially I thought of adding noindex just for Googlebot. But you know other search engines will follow suit soon, so I decided to add a general robots noindex.
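
For anyone weighing the same choice, here is a minimal sketch of the "all robots" approach (my own illustration, not browsee's actual code; the file paths are hypothetical, and it assumes each page already has a <head>):

    # Add a general robots noindex meta tag to a list of thin pages.
    # A Googlebot-only variant would use name="googlebot" instead.
    # Requires: pip install beautifulsoup4
    from pathlib import Path
    from bs4 import BeautifulSoup

    THIN_PAGES = ["thin-page-1.html", "thin-page-2.html"]  # hypothetical paths

    for filename in THIN_PAGES:
        path = Path(filename)
        soup = BeautifulSoup(path.read_text(encoding="utf-8"), "html.parser")
        # Skip pages that already carry a robots meta tag.
        if soup.find("meta", attrs={"name": "robots"}):
            continue
        tag = soup.new_tag("meta")
        tag["name"] = "robots"      # "robots" applies to all crawlers
        tag["content"] = "noindex"  # keep the page out of the index
        soup.head.append(tag)
        path.write_text(str(soup), encoding="utf-8")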

crobb305

4:19 am on Mar 17, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I'm still contemplating the issue of how noindex will make a site recover. If some thin pages brought a whole site down (as they did mine, with a 60% drop in traffic), presumably because Google wants to eradicate "low quality", then how will "noindex" on the thin pages allow a site to recover in ranking? For all practical purposes, the site is still "low quality" with just noindex on the pages that were graded poorly. Would G still allow that site to rerank? 85% of my traffic comes in through the homepage, with a bounce rate under 28% (so my users tend to like my site), yet Google spanked it anyway. I could easily add "noindex", but I have a hard time believing that my site will recover from it.

Now, I could add rel=nofollow to the links from my homepage (allowing for one followable link), which may indicate that I want those links for usability and NOT for SEO. The "thin" pages they link to would get to stay in the index, and I can build them up with new content to make them better. I have heard of ranking mishaps shortly after implementing nofollow (I have never used nofollow, so I worry a little about what will happen) -- then again, I am already down 60%; it can't get much worse.
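
As a sketch of what that could look like (my own illustration, not crobb305's actual code; the paths and URLs are hypothetical), leave the first homepage link to each thin page followed and nofollow any repeats:

    # On the homepage, keep one followable link per thin page and add
    # rel="nofollow" to any additional links pointing at the same page.
    # Requires: pip install beautifulsoup4
    from pathlib import Path
    from bs4 import BeautifulSoup

    THIN_URLS = {"/thin-money-page.html"}  # hypothetical thin-page URLs

    home = Path("index.html")  # hypothetical homepage path
    soup = BeautifulSoup(home.read_text(encoding="utf-8"), "html.parser")

    seen = set()
    for a in soup.find_all("a", href=True):
        if a["href"] not in THIN_URLS:
            continue
        if a["href"] in seen:
            a["rel"] = "nofollow"  # extra links become nofollow
        else:
            seen.add(a["href"])    # first link stays followable

    home.write_text(str(soup), encoding="utf-8")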

tedster

4:33 am on Mar 17, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



The full recommendation started with "get rid of the thin content". Then he went on to talk about using noindex if you planned to fill out a page but couldn't get to it immediately.

Problem is, no one seems to have recovered because of changes - at least so far. So we're really in the dark.

supercyberbob

5:00 am on Mar 17, 2011 (gmt 0)

10+ Year Member



Just speaking in general terms, not directed at anyone.

I really see no point in rewriting content or making any onsite changes, especially if you are seeing scrapers above you in the SERPs.

Unless, of course, you see something super obviously wrong with your site, like useless content, etc.

The reason I say this is that Google has acknowledged they are working on issues with Panda, like scrapers outranking original content in some cases.

I'm waiting till the end of March, maybe even April, for it to shake out and for Google to fix some of the issues mentioned. Hopefully.

Definitely not expecting a full recovery, don't misunderstand me; this is a huge algo change, and I just think it needs time to settle.

crobb305

5:01 am on Mar 17, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Well, I am going to bed, still very much in the dark, but I am very intrigued by my finding tonight: one surviving and well-ranking semi-thin affiliate page is still showing up on G for some big-money terms. It seems that page is untouched by Panda. So tomorrow I will spend my day emulating that page and trying to get back into GoogleBot's good graces :)

By the way, earlier I reported that my biggest drop was a -300 average position on a unique-content page: NO ads, well-written, about 1,000 words, and spanked down 300 positions. So I started analyzing the page, and the VERY first thing I found was a major word in the page header <h3> that was misspelled. To boot, the title/description metatags were very short and were exact duplicates of each other. These were all signs of poor quality on an otherwise GREAT piece. So I immediately made changes and corrected the spelling error in the <h3> tag this past Saturday. As of today, WMT reports that page has GAINED 200 positions! So basically, we have nearly offset the losses from the 11th for that page. Maybe this is a sign of what John Mu said yesterday: that pages are being reevaluated and that it can and will take time. It's a page-by-page, day-by-day chore... for all of us.
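
Incidentally, the short/duplicate title and description issue is easy to scan for in bulk. Here is a rough sketch of such a check (my own illustration, not a Google or WMT tool; the site root and length threshold are hypothetical):

    # Flag pages whose <title> and meta description are very short
    # or exact duplicates of each other.
    # Requires: pip install beautifulsoup4
    from pathlib import Path
    from bs4 import BeautifulSoup

    MIN_LEN = 30  # hypothetical threshold, in characters

    for path in Path("site").rglob("*.html"):  # hypothetical site root
        soup = BeautifulSoup(path.read_text(encoding="utf-8"), "html.parser")
        title = soup.title.get_text(strip=True) if soup.title else ""
        desc_tag = soup.find("meta", attrs={"name": "description"})
        desc = desc_tag.get("content", "").strip() if desc_tag else ""
        if len(title) < MIN_LEN or len(desc) < MIN_LEN:
            print(f"{path}: title/description too short")
        if title and title == desc:
            print(f"{path}: title duplicates meta description")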

Jane_Doe

6:02 am on Mar 17, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I think the collective wisdom in these threads has been very good. I'm guessing about 70% of the things people report having found that might be problematic are actually true. I see the same issues in my downgraded pages.

I doubt Google will back out any major parts of this algorithm. I think overall they achieved their objectives.

I'm busy bringing all of my sites "up to code". I guess it is hard to see what is wrong if your whole site got hit, but with mine I have entire sites and sections of sites that are still doing fine. So I'm just making the other pages more like those pages.

anne10

6:56 am on Mar 17, 2011 (gmt 0)

10+ Year Member



Hi,

For the past few weeks, traffic to my site has been decreasing gradually, and my Google Shopping traffic in particular has dropped more than usual.

zerillos

9:13 am on Mar 17, 2011 (gmt 0)

10+ Year Member Top Contributors Of The Month



This is the first time since the Panda update that I have seen better results in the US SERPs than in the global ones.

For a new search term, for which I just wrote a new page yesterday, I'm seeing better results in the US SERPs than in the global or other-country ones.

It's a long-tail term that probably no one is searching for. The global results are filled with spam and pages that were probably never meant to rank for that search term. The US SERPs, however, do offer some information on it.

AlyssaS

2:17 pm on Mar 17, 2011 (gmt 0)

10+ Year Member



By the way, earlier I reported that my biggest drop was a -300 average position on a unique-content page: NO ads, well-written, about 1,000 words, and spanked down 300 positions. So I started analyzing the page, and the VERY first thing I found was a major word in the page header <h3> that was misspelled. To boot, the title/description metatags were very short and were exact duplicates of each other. These were all signs of poor quality on an otherwise GREAT piece. So I immediately made changes and corrected the spelling error in the <h3> tag this past Saturday. As of today, WMT reports that page has GAINED 200 positions!



crobb305 - that's incredibly interesting! It ties in with what Google said on Jan 21st about their new document classifiers:

[googleblog.blogspot.com...]

To respond to that challenge, we recently launched a redesigned document-level classifier that makes it harder for spammy on-page content to rank highly.

The new classifier is better at detecting spam on individual web pages, e.g., repeated spammy words—the sort of phrases you tend to see in junky, automated, self-promoting blog comments.


Perhaps they are cracking down on what they see as deliberate misspellings and the like, used to try to rank for terms where people misspell the keywords. Maybe they feel their spell-suggestion tool does a good job of inferring what the searcher really wanted, and they don't want people muddying things by deliberately putting in bad spellings.

AlyssaS

2:30 pm on Mar 17, 2011 (gmt 0)

10+ Year Member



Here's something else people may want to chew on: a patent on "manipulated articles" which was filed in 2003 but not granted until Nov 2007:

[patft.uspto.gov...]

Some interesting bits:

Manipulation techniques that can, for example, be used are: using the domain name of a once legitimate document; filling the text of the document or anchor text associated with links in the document with certain popular query terms; automatically creating links from other documents to the manipulated document;


The bold bit is mine - are people overdoing the internal linking?

Here's another interesting bit:

Any one or a variety of document signals may be used by various embodiments of the invention. Examples of document signals include, without limitation, one or more of the following:

- The text of the document -- whether the text appears to be normal English (or other language) text or text generated by a computer, such as containing a large number of keywords and not containing any sentences;
- Meta tags -- whether the document has meta tags and whether the meta tags contain a large number of repeated keywords;
- Redirect -- whether there is any script in the document such as JavaScript or HTML script that redirects a user to another document upon access;
- Similarly colored text and background -- whether there is a large amount of text in the document that is the same color as the background of the document (Systems and methods for detecting hidden text and links in articles are described in U.S. patent application Ser. No. 10/726,483, filed Dec. 4, 2003, which is hereby incorporated by this reference);
- A large number of random links -- whether the document contains a large number of unrelated links;
- History of the document -- whether the text of the document, the link structure of the document, or the ownership of the website on which the document appears has changed recently;
- Anchor text -- whether there are a lot of links on the page and there is little or no text that is not anchor text.
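
To make that list concrete, here is a toy sketch of two of those signals (my reading of the patent, not Google's implementation): the share of visible text that is anchor text, and repeated keywords in the meta keywords tag:

    # Compute two toy document signals from a page's HTML.
    # Requires: pip install beautifulsoup4
    from collections import Counter
    from bs4 import BeautifulSoup

    def document_signals(html: str) -> dict:
        soup = BeautifulSoup(html, "html.parser")
        text = soup.get_text(" ", strip=True)
        anchor_text = " ".join(a.get_text(" ", strip=True)
                               for a in soup.find_all("a"))
        # High ratio = little or no text that is not anchor text.
        ratio = len(anchor_text) / max(len(text), 1)

        # Count repeated keywords in the meta keywords tag.
        meta = soup.find("meta", attrs={"name": "keywords"})
        words = meta.get("content", "").lower().split(",") if meta else []
        counts = Counter(w.strip() for w in words if w.strip())
        max_repeats = max(counts.values(), default=0)

        return {"anchor_text_ratio": ratio,
                "max_keyword_repeats": max_repeats}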


And this with respect to machine learning:

These rules can be designed manually to determine whether all documents in the cluster or a subset of documents in the cluster are manipulated. Alternatively, a machine learning approach can be used to define the rules. With the machine learning approach, a set of clusters, known as a training set, can be hand classified as manipulated or not manipulated. This information is provided to a classification system to train the system and allow the system to compute which signals to use and in what way.
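
For illustration, here is a minimal sketch of the training setup that passage describes (my own toy example; the features, labels, and values are made up): hand-classified examples train a model that learns which signals to use and in what way:

    # Train a classifier on hand-labeled document-signal vectors.
    # Requires: pip install scikit-learn
    from sklearn.linear_model import LogisticRegression

    # Each row: [anchor_text_ratio, max_keyword_repeats, hidden_text_flag]
    X_train = [
        [0.05, 1, 0],   # normal page
        [0.10, 2, 0],   # normal page
        [0.85, 9, 1],   # manipulated page
        [0.70, 6, 1],   # manipulated page
    ]
    y_train = [0, 0, 1, 1]  # hand-classified: 1 = manipulated

    model = LogisticRegression().fit(X_train, y_train)

    # The learned coefficients show which signals matter and how.
    print(model.coef_)
    print(model.predict([[0.60, 7, 1]]))  # classify a new document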

tedster

2:59 pm on Mar 17, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Nice find. So Google has a methodology they can put to work against article spinning. It would be good to see it become effective. Right now, even word-for-word scraping is an obvious problem, to say nothing of automated spinning.

crobb305

5:10 pm on Mar 17, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



AlyssaS, great finds!

Based on the second document that you linked to, you asked:

are people overdoing the internal linking?


Seems I may have. In addition to the discovery I made regarding a misspelling in an <h3> and a very short, duplicated title/description on my most heavily penalized page (-300 positions), I also found a correlation with the number of links from my homepage to my thin/money pages. I discussed this on the previous page of this thread (page 7, post #4282753). One of my thinnest money pages that still ranks well (and GAINED position) is the page I linked to most conservatively from the homepage (only once); the pages that were hit hardest were linked to most frequently and with a large variance of anchor text.
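
A quick sketch of that kind of homepage-link audit (my own illustration; the file path is hypothetical): for each page the homepage links to, count how many links point at it and how many distinct anchor texts they use.

    # Tally homepage links per target URL and distinct anchor texts.
    # Requires: pip install beautifulsoup4
    from collections import defaultdict
    from pathlib import Path
    from bs4 import BeautifulSoup

    home = Path("index.html")  # hypothetical homepage path
    soup = BeautifulSoup(home.read_text(encoding="utf-8"), "html.parser")

    anchors = defaultdict(set)
    counts = defaultdict(int)
    for a in soup.find_all("a", href=True):
        counts[a["href"]] += 1
        anchors[a["href"]].add(a.get_text(strip=True).lower())

    for href, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        print(f"{href}: {n} links, {len(anchors[href])} distinct anchor texts")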

Also, a couple of my most heavily penalized pages were flagged in the "HTML suggestions" section of my WMT account for having short descriptions.

There is virtually no duplication of any of my content. Searching snippets of my articles in both Google and Yahoo reveals very little, if any, duplication. In the rare instance of another site stealing my content, my pages were still listed #1. So for me, duplication doesn't seem to be the issue. My site's penalty appears to be a matter of perceived quality, based on spelling, grammar, and excessive internal linkage to the 6 thinnest pages that happen to have ads.

outland88

7:56 pm on Mar 17, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



So Google has a methodology they can put to work against article spinning


You've probably not had much reason to do "phrase searches", but Google will often show you the spinners even when their caches are updated. Many businesses that deal in volume DMCAs or user-generated content (outside of Google) readily spot the spinners and will quickly remove the pages. Google does use a thesaurus.

sjgreatdeals

8:17 pm on Mar 17, 2011 (gmt 0)

10+ Year Member



I am seeing some new results popping in and out now on some searches. Not good news on our end; it looks like more drops.

helpnow

9:48 pm on Mar 17, 2011 (gmt 0)

10+ Year Member



@sjgreatdeals - yes, we are also seeing big shifts at the moment across a variety of SERPs (Thursday, PM). No pattern apparent yet.

maximillianos

1:28 am on Mar 18, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I see one site that shot to the top of the SERPs that has no quality control on their user-generated content. Vulgar language, misspellings, you name it. AdSense ads show up blank on some of their pages, the quality is so bad.

So I don't think quality of content is a big factor from a site-wide penalty perspective. They also scrape and repost hundreds of articles each day from other sites, from 2-3 years ago.

I don't understand how Google can't figure it out. They post a copied article, and a few minutes later G has it ranking above the source. Mind you, the source published it 2 years ago, and Google has it indexed. At what point does G decide that the scraped page is now the original and drop the rank of the 2-year-old listing?

It just seems so trivial; I can't figure out how they are messing this up.

tedster

1:48 am on Mar 18, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Google is messing it up because their goal is not journalistic or academic accuracy - crediting the original source at the top of search rankings. Their goal is giving users links that they will click on and be happy with - and that's not always the original source. It might be a bigger brand, it might be a fresher web page, and so on.

The minute that second goal comes in, we have serious complications. There are definitely other technical complications - the original source may not be the first one crawled, for instance. Or the content may be modified slightly, or broken up across the content area.

But in my opinion, the big reason is that giving search prominence to the original source has not ever been a priority. They say they intend to work on it this year. We'll see, but I don't hope for too much. For Google, end-user satisfaction will always be the most important metric.

crobb305

1:56 am on Mar 18, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Things are going from bad to worse. I have never seen such thin, junky, spammy sites at the top. It's a scammer's haven now. Create a junk site and you can sit at the top of Google and scam away. The authority sites that worked hard to serve their visitors or fight fraud (like ours did) are being demoted further and further. Google shouldn't care about ads if the site has proven itself with content, age, etc. If the site makes money via sales or even affiliate links, and has a low bounce rate (like ours at 28%), then BY DEFINITION it gives users what they wanted and deserves to be found... not spanked on "assumed" quality. So much for using click/performance data. You don't often see a site with an under-30% bounce rate, but ours has been doing that well for 6 years.

Dan01

2:11 am on Mar 18, 2011 (gmt 0)

10+ Year Member



This new update kind of reminds me of a "reverse" PageRank. That old ranking system gave some sites an advantage, but the new system penalizes sites instead. It is an interesting way to deal with spam.

minnapple

2:18 am on Mar 18, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Google recently said:
That process is usually not something that takes place overnight after a webmaster has uploaded a fresh copy of the code for the website. For example, it takes time for us to recrawl the pages, the bigger the site, the longer it will take. The better a site is structured (less duplicate content, no infinite URL spaces, etc), the faster we'll be able to recrawl parts of a site and take that content into consideration. Sometimes, even after recrawling parts of a site, our algorithms will need a bit of time to confirm that the site has really changed for good.

maximillianos

3:26 am on Mar 18, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



The frustrating part is for those of us who got caught in this net and really do have good-quality sites. It is almost like we have to start over and reprove ourselves to G after building a successful site for 11 years. All because we got stuck in their filter, for whatever reason, on the one day they decided to flip the switch.

And the most frustrating part is watching the site that has scraped us for years shoot to the top. And continue scraping.

It seems G doesn't care; they reward it as long as you provide a better user experience as defined by a program and one week of data from a Chrome plugin that was probably abused by these same scraper/spammer sites.

SEOPTI

3:35 am on Mar 18, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I will wait 3-4 weeks. If my domains stay punished (-50% traffic and earnings) I will 301 the contents and use new domains.

This reminds me of the -50 manual penalty, where I waited 18 months and nothing happened. They simply closed all doors, which is the case right now with the Panda mess.
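
For what it's worth, here is a small sketch for sanity-checking that kind of 301 move (my own illustration; the URLs are hypothetical): confirm each old URL returns a 301 pointing at the new domain.

    # Verify old URLs 301-redirect to their new-domain counterparts.
    # Requires: pip install requests
    import requests

    MOVES = {  # hypothetical old -> new URL pairs
        "http://old-example.com/page.html": "http://new-example.com/page.html",
    }

    for old, new in MOVES.items():
        r = requests.get(old, allow_redirects=False, timeout=10)
        ok = r.status_code == 301 and r.headers.get("Location") == new
        status = "OK" if ok else "unexpected (%s)" % r.status_code
        print(old, "->", status)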

Dan01

3:41 am on Mar 18, 2011 (gmt 0)

10+ Year Member



Max.

I too have seen scrapers take my content and rank higher.

walkman

5:53 am on Mar 18, 2011 (gmt 0)



This reminds me of the -50 manual penalty, where I waited 18 months and nothing happened. They simply closed all doors, which is the case right now with the Panda mess.

SEOPTI, did you change anything? Everyone so far has said that this is not manual and can be removed automatically when/if the content changes and Google sees it. What I really took away was that enough thin pages can hurt your entire site. Manual penalties are nasty; either wait it out or beg Google.

My main concern is how long this initial assessment will last, or when Google will give the site a new quality ranking, or whatever this new thing is. Once a month? Once a quarter?


I too have seen scrapers take my content and rank higher.

This week I have seen CNN and Forbes outranked by scrapers (of CNN and Forbes content). Something is really, really wrong.

Interista

10:43 am on Mar 18, 2011 (gmt 0)

10+ Year Member



Hi all, as always it is an interesting read in this forum!

Not sure if I am in the right thread right now, but we got hit heavily on google.it in January. Where we ranked #1 (very competitive kw), we dropped to the 2nd page, sometimes the 3rd page.

The odd thing is that now our home page outranks the more relevant subdomain that used to rank for those keywords. And this has happened across the board for us. Has anyone else seen something similar?

It is utterly annoying to see crappy spam sites, with irrelevant spam links coming from different languages, ranking where we used to be. And don't get me started on the content on those sites...

If I come off as bitter, well as of now I am ;)

TheMadScientist

10:55 am on Mar 18, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



The odd thing is that now our home page outranks the more relevant subdomain that used to rank for those keywords. And this has happened across the board for us. Has anyone else seen something similar?

Hi Interista,
Welcome to WebmasterWorld!

It sounds like you've been reading a bit, and I hope you don't feel like I'm picking on you too much, especially since you're new, but I don't understand why people complain about this. (I'm not talking about garbage outranking you and the frustration that can cause, because I feel ya there, but about the main page outranking a more specific page or pages.)

Why not take advantage of the situation and try to capitalize on it?

You get another page view and a chance to 'sell' visitors. I've actually been known to cause this situation on occasion, so I get more 'on site time' from visitors. I don't always want them landing at precisely the right answer.

I want it to be easy to find, so I might grab the keywords they searched for and make sure there's a prominent link or graphic directly related to the search query in an easy-to-spot landing location, rather than only appearing halfway down the page. But I would definitely not complain about receiving an extra page-view opportunity and 'on site time' from a visitor.

Anyway, personally, I'd try my best to take advantage of it, and like I said I've actually gone out of my way to cause the situation on certain sites, so maybe you can find a way to do something similar in your situation?

TheMadScientist

12:52 pm on Mar 18, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Even when you start to see changes, don't stop there -- make your site into the best resource of its kind.

From John Mu's post, linked earlier in this thread ... Here I go over-analyzing again, but stick with me for a second, because it might be important.

Don't stop there? That's 'official' advice, so how could it apply and be solid? It might indicate they are looking for sites that are continually upgraded, and I don't mean the type of 'upgrade' or 'update' a WP blog gets from a new template, plug-in, or posts being added to it. I think things are moving more and more toward pattern-based evaluation, because there are things that are much more difficult to 'simulate' over time than simply adding content or installing another plug-in at some interval. If things are moving to 'historical patterns', that would make the 'don't stop there' advice really important, because it would mean future rankings are based partially on what you did yesterday, today, and tomorrow, rather than on the 'state of the site' right now.

I hope I'm saying what I mean well enough for people to 'get it', because I think it could be important, so I'll try to 'examplify' a bit: if a person owns 50 blogs, how much time can they put into the function and 'uniqueness' of each, compared to the person who owns one site and really focuses on it?

IMO the person who owns (or focuses on) a lower number of sites and really works on improving them will (and IMO should) have an advantage over the person managing more sites, because they can sustain a constant pattern of 'upgrading' and 'improving', whereas trying to do the same with 50 unique sites would simply take too much time for a single person.

Interista

1:13 pm on Mar 18, 2011 (gmt 0)

10+ Year Member



Hi MadScientist (awesome name btw) thanks for the welcome!

Yes, I have been reading quite a bit on this forum; you can never get too much information. Don't worry, you didn't make me cry or anything, I am tougher than that. I see your point.

The thing is, I don't mind our main page ranking well for the terms; considering the amount of Flash and iframes we have on our site, it is a miracle we rank at all...

I was just freaked out when my number 1 position turned into a 2nd- or 3rd-page ranking overnight. And shortly after that, I saw our home page outranking the relevant subdomain (on other keywords as well). I am just trying to figure out what happened, because after doing a backlink analysis on our own links built over the last 6 months, I am still clueless...

TheMadScientist

1:27 pm on Mar 18, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Thanks on the user name ... I had another one for years, but it was a 'late night, I need an account' user name I didn't ever really like, so one day I finally took the plunge and changed it.

Glad you weren't offended or anything.

Something to think about with your situation: you're not talking about google.com, so Panda hasn't hit you yet ... I would guess it has to do with one of the more minor 'we changed the way we score backlinks' updates we've been hearing about for a while, and I would guess you're looking in the right direction when you talk about backlinks. You might try looking at how you can increase the 'home page importance' for those terms, or, if that doesn't seem like a great idea in your situation, transfer more of the weight from the home page to the sub-pages that used to rank.

It's interesting they're on subdomains, and that could be something I would look at ... They're not something I use, so I don't know if one of the 'backlink scoring' changes maybe affected subdomain inbound link scoring more than main-domain scoring? Not sure, but I would probably include that in any analysis.

browsee

2:12 pm on Mar 18, 2011 (gmt 0)

10+ Year Member



@Interista, this is clearly a Panda effect. If I type my domain, my Twitter account shows up first. Seriously, how low can they go...

You proved one more point here: G is treating subdomains as different entities, and that's good news. We can create subdomains and redirect there if this thing does not improve soon.