
Google SEO News and Discussion Forum

    
Panda recovery time

whatson - msg:4468620 - 2:41 am on Jun 23, 2012 (gmt 0)

Just following on from Search engines need time & other signals [webmasterworld.com] but in a Panda direction.

So we see that Google definitely has time built into the algo. Therefore we can probably assume that when a site crosses the Panda line, it takes a while to be affected, rather than being hit on the strength of the latest cache alone. By the same logic, recovery will take time too.

Judging by the above post, it might take about six months, which is about how long it took one of my sites to get hit by Panda after launch.

This means that when you make your Panda changes (likely dramatically reducing income and increasing costs), you have to bear with it for six months or so in the hope of recovery.

Has anyone actually experienced any of this?

[edited by: tedster at 4:03 am (utc) on Jun 23, 2012]

 

tedster - msg:4468624 - 4:09 am on Jun 23, 2012 (gmt 0)

The first site I worked with that had Panda troubles recovered within a month of making their changes live. And the recovery was to BETTER traffic than they had before. The recovery happened right after the next Panda refresh was confirmed by Google. I'll bet that if the Panda algorithm could run in "real time", it would have recovered even faster.

I don't think "time is built into the algo", as in a rule that says "don't rank before X days have gone by." Rather, certain signals (user engagement, trust) need to be detected, and usually those things do take time to manifest stably.

I agree that recovery could take months in many cases, even with all the right changes in place. At the same time, there were several high-visibility Panda cases (e.g. HubPages) that recovered a lot faster after they made changes. So there is no hard and fast rule. Given the wide variety of websites and the wide variety of Panda factors that can be in play, I guess that's to be expected.

Lenny2 - msg:4468730 - 5:05 pm on Jun 23, 2012 (gmt 0)

My opinion is that your worst nightmare isn't how long it takes to recover... it's figuring out how to fix your site. My site was hit by Panda 1.0, and while we have a ways to go before I'd say we've exhausted everything we can do to fix it, when you're talking about 5,000 pages and quickly dwindling income streams, it's no easy juggling act! Good luck!

manny123 - msg:4468733 - 5:29 pm on Jun 23, 2012 (gmt 0)

tedster, there have been a number of panda recovery stories. Have you told yours? I searched around a bit and didn't see where you might have. I bet there are a lot of folks that could benefit from your recovery story.

tedster - msg:4468749 - 6:30 pm on Jun 23, 2012 (gmt 0)

I have mentioned the details in several places, but here's another summary. In this case, it looked like the site was hit because other sites were either reprinting their articles (with permission) or else flat-out scraping. The site had lost the authority/trust needed to be credited as the original publisher. This fact jumped out during the analysis.

In addition, there was some fluff on the site, written basically to rank. And finally, some canonical problems were creating internal duplicate URLs. Nevertheless, I should emphasize that the site's foundations were, and are, really solid.

Step 1 - get rid of the fluff

Step 2 - fix the canonical issues

Step 3 - get the content to begin higher on the page (it was sometimes below the fold)

Step 4 - the pages were overly ad-stuffed. That was backed off

Step 5 - begin rel="author" mark-up

Step 6 - begin using pubsubhubbub to send "fat pings" to Google whenever something new is published

Step 7 - delay the RSS feed for an hour after publication

My gut feeling is that the entire combination of steps probably helped - except I'm not convinced that canonical issues are part of Panda. But the most important steps, IMO, were 5, 6 & 7. They are aimed squarely at regaining credit for the content. The site has some really good authors and they deserve the credit, which they now get, little headshot photos and all ;)
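For step 6, the publisher's side of PubSubHubbub is just a light ping to the hub; the hub then fetches the updated feed and pushes the full ("fat") entries on to subscribers such as Google. Here's a minimal sketch (TypeScript on Node 18+, using the built-in fetch; the feed URL is a placeholder, and pubsubhubbub.appspot.com was Google's public hub):

```typescript
// Notify a PubSubHubbub hub that our feed has new content. The hub
// fetches the feed and pushes the full entries to its subscribers.
const HUB = "https://pubsubhubbub.appspot.com/";
const FEED = "https://www.example.com/feed.xml"; // placeholder feed URL

async function pingHub(): Promise<void> {
  const res = await fetch(HUB, {
    method: "POST",
    // URLSearchParams sets Content-Type: application/x-www-form-urlencoded
    body: new URLSearchParams({ "hub.mode": "publish", "hub.url": FEED }),
  });
  // The spec has the hub answer 204 No Content on a successful ping.
  if (res.status !== 204) {
    throw new Error(`Hub ping failed with status ${res.status}`);
  }
}

pingHub().catch(console.error);
```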

manny123 - msg:4468752 - 6:44 pm on Jun 23, 2012 (gmt 0)

Thanks tedster. If you didn't have 35,806 posts here, it would be easier to find some of your past advice. :-)

claaarky - msg:4468758 - 8:02 pm on Jun 23, 2012 (gmt 0)

Well done for asking that question, manny123. I was thinking the same thing!

This week I hit on a theory which seems to fit all the advice from Amit Singhal last year, so I started re-reading lots of posts here to see whether someone had already said what I've discovered, or whether I'd misinterpreted something. It seems clear that people who have recovered from Panda don't know which factors made the difference.

If I'm right, it's actually very simple. If I'm right, I'll be very angry at Google for being so vague with their guidelines. If what they want is quality, why don't they help people understand how to identify good and bad content on their site, so they know what to attack? If I'm right, a lot of people may be destroying their sites and spending huge amounts of time and money unnecessarily trying to solve this silly riddle (the goal is admirable, but the secrecy is almost childish).

As for recovery time, it may depend on how busy your site is. A month is long enough for Google to collect reliable stats for a fairly busy site. For example, if you were conducting a survey on a busy high street, it wouldn't take long to gather a reliable sample, but a quieter location would take longer. Stats aren't reliable with low numbers.

If I'm right about my discovery, Panda is not even about Google; it's about your users and what they inadvertently tell you about your site while using it. Google has just figured out the connection and used it to classify sites. Seen that way, everything about Panda makes sense: why they run it each month, why they say it's about content and about thinking of your users. Once I could see that, I could see where my bad content was.

All the guidelines are appropriate, but what I found is that it's very difficult to judge your own content without some help. Once you know a page is bad, you'll understand why. It's very difficult to judge what most people think is good content. You need stats. The clues are there and the answers are in front of your eyes. It's so simple it's brilliant.
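To put a rough number on the survey analogy: the standard error of a measured proportion shrinks with the square root of the number of visits, so a quiet site needs far longer to produce a stable signal. A quick illustrative sketch (TypeScript):

```typescript
// Standard error of a proportion p measured over n visits:
// SE = sqrt(p * (1 - p) / n). Smaller SE means a more reliable stat.
function standardError(p: number, n: number): number {
  return Math.sqrt((p * (1 - p)) / n);
}

console.log(standardError(0.5, 100));   // ~0.050 -> +/- 5 points of noise
console.log(standardError(0.5, 10000)); // ~0.005 -> +/- 0.5 points
```

So a month of traffic on a busy site can pin a page-level stat down to a fraction of a point, while the same month on a quiet site leaves it swamped by noise.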

tedster - msg:4468814 - 1:51 am on Jun 24, 2012 (gmt 0)

I agree with your direction - but I wouldn't exactly call Panda "simple" in its entirety. In the case of any one site's demotion, maybe it is. For my site, what it basically took was staking a claim to our content authorship instead of literally giving it away. But across the whole spectrum of sites being analyzed by Panda, I don't think the algorithm is at all simple. It's just that the slice of the algorithm that YOUR site gets hurt by may be.

However, I do think that gathering in-depth user data points toward an important future direction for SEO. Here's my current thinking on this. Google is learning to use this data. Bing is learning to use it. But we site owners have direct access to our visitors' browser experience. We can, potentially, do this analysis better and more completely than any search engine.

Not only that, but when we then take action to improve the visitor experience based on our user data, we essentially "cut out the middle man" (the search engine). Instead we're doing something directly for our visitors and only indirectly for the search engines. Since they will see the improved user engagement, this kind of action still optimizes search engine rankings - it's still most definitely SEO.

If site owners begin to gather the kind of browser data that search engines can (did the visitor scroll the page? did they hover over a link?) then we also have a real advantage when we do A/B testing because we have some major clues about what changes have the best chance to improve things. Often that's the hardest part of A/B testing - figuring out what changes to test!

Right now, as far as I know, there are only a few paid services that will report on some of these browser measurements. There is also a free application called boomerang.js that can give some great insight into page speed for each user - instead of some abstract testing service.

The browser is capable of yielding a lot of data. All kinds of events fire and can be captured with JavaScript beacons of one sort or another. Time to get programming!
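As a starting point, here's a minimal sketch of such a beacon (TypeScript; the /beacon endpoint and the payload field names are made up for the example, and navigator.sendBeacon is the modern browser API for firing a request as the page unloads):

```typescript
// Track two of the engagement signals mentioned above: how far the
// visitor scrolled and whether they hovered over links.
let maxScrollDepth = 0;
let linkHovers = 0;

window.addEventListener("scroll", () => {
  const seen = window.scrollY + window.innerHeight;
  const depth = seen / document.documentElement.scrollHeight;
  maxScrollDepth = Math.max(maxScrollDepth, depth);
});

document.addEventListener("mouseover", (e) => {
  const target = e.target as HTMLElement | null;
  if (target && target.closest("a")) linkHovers++; // hovered over a link
});

window.addEventListener("pagehide", () => {
  // sendBeacon queues the request reliably even while the page unloads.
  navigator.sendBeacon("/beacon", JSON.stringify({
    page: location.pathname,
    scrollDepthPct: Math.round(maxScrollDepth * 100),
    linkHovers,
    visibleMs: Math.round(performance.now()),
  }));
});
```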

Zivush - msg:4468830 - 7:23 am on Jun 24, 2012 (gmt 0)

tedster wrote:
There is also a free application called boomerang.js that can give some great insight into page speed for each user - instead of some abstract testing service.

Haven't tested boomerang.js yet. Just some initial questions: what is the difference between boomerang.js and Google Analytics? Google Analytics already gives in-page analytics, page speed, and time on site per landing page, doesn't it?
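The short version of the difference: Google Analytics aggregates its reports inside Google's interface, while boomerang.js beacons raw per-page-view timing data from each real visitor to an endpoint you control, so you can slice it however you like. A minimal setup sketch (BOOMR.init with a beacon_url is boomerang's documented entry point; the URL here is a placeholder):

```typescript
// boomerang.js exposes a global BOOMR object once its script has loaded.
declare const BOOMR: {
  init(config: { beacon_url: string }): void;
};

// Every page view fires a beacon carrying that visitor's real timing
// data to YOUR server, rather than into Google's aggregated reports.
BOOMR.init({
  beacon_url: "https://www.example.com/boomerang-beacon", // placeholder
});
```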

claaarky - msg:4468837 - 7:58 am on Jun 24, 2012 (gmt 0)

Tedster, what I've discovered is there is one piece of data, already collected in Google Analytics, which has a direct correlation with page quality. I think it really is that simple.

I started comparing our good and bad pages (as defined by this statistic) side by side and it just smacked me between the eyes. I could suddenly see very clearly WHY one page is better than another, and I could immediately see what to fix. The more I did this, the more I realised how hard quality is to define and how subtle things can make a huge difference... and that this is how Google is doing it. This is what Panda is all about.

I think this single piece of information, available to everyone, is what Google is using to determine page quality. They then assess the ratio of bad pages to good, how bad the bad ones really are, and where they sit in your site. If there are too many, throughout your site or in one particular area, they hit your traffic to the bad areas and, to a lesser extent, to any good areas from which users can quickly reach bad areas (I'd guess within one click). The result is that Google visitors are protected from your bad pages.

The beauty is Google don't even need to analyse your site, they just collect this statistic from users via their browser and it tells them everything they need to know about every page people visit on your site without looking at anything else.

I am now working through our entire site trying to find an example that destroys my theory, but it just works every time. I've also been working through Google's guidelines on Panda, and it works for every point. It tells you if there's something about a page people don't like, whether they trust your site, everything. You then just look at the page armed with that knowledge and it's so obvious it makes you laugh. It's genius.

I think when Google sat people down and asked them to compare good and bad pages (not telling them which is which) they had a feeling this statistic told them everything they needed to know about page quality, and when their human research confirmed it they must have been wetting themselves with excitement.

It also explains why Panda is 'run' each month. Anyone looking at their own site's stats for trends would look at data over an extended period. Analytics defaults to one month because it's a good period in which to make a judgement. They need one month's stats on your site to make a judgement.

What I've found really amazing about using this method, though, is it reveals how vast the range of things that cause a bad experience can be, and how each page is an individual case. There are no rules. We're an ecommerce site, and we're finding that price, image, product description, a fact about a product, reviews, etc. can all have positive or negative effects on how people react to a page. In some cases where we worked on a page to improve the content, it actually made things worse; that's how hard it is to judge quality. Looking at those pages again, knowing they are not so good, it's obvious what we did wrong.

Absolutely everything about Panda now makes sense to me. There may be other factors in the mix but I have a suspicion it could be all about this one simple statistic because it is all encompassing. If people react badly to a page or your site overall, there is a reason - it may be isolated to your site or it could be because they've already seen that content on another site. So this brings in unique content. It's not essential but if lots of sites have the same content as you, people could react badly to your site if they saw the other sites first (which is perhaps why getting rid of sites that scrape your content can improve your own quality signals and get you out of Panda).

I have an ecommerce site and when Panda hit I thought it was the obvious ecommerce duplicate content issues but now I realise it wasn't, not really. People didn't like a large proportion of our pages for whatever reason. One reason could be that they saw similar products on another site, but equally it could just be down to price, our description, the way our site looks, anything. What I now realise is we have many, many good pages and most of the bad ones are actually bad products that don't sell or generate search engine traffic, but people see them and react badly. Removing them won't cost us traffic or sales but it will bring down the bad signals and maybe even improve conversions. Some we have to keep, so we'll address what we think is causing the bad experience and see how that affects things. This, I believe, will get us out of Panda. Understanding this will then keep us out of Panda and make our site much, much better.

If only Google could have told me this so I could have made my site better a year ago. Instead I've been paying out for professional content, adding articles, rewriting product descriptions, rebranding and redesigning the site, link building, you name it. Looking at our stats from before Panda until now, I can see all that made no difference (and in some cases created new 'bad' pages which increased my Panda demotion!).

It just explains everything: why small sites go under the radar (not enough traffic to produce reliable stats over a month), why bigger sites might recover faster, why duplicate content can harm your site, why linking out to bad sites can harm your site. It's all about the user experience, as Google and many others keep saying. Think about your users. Well, yes Google, I do, but I didn't know how to judge what they like... until now.

It won't take me long to fix this, now I know what I'm looking for, so hopefully I'll be able to report back in a month or two with some good news!
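Whatever the statistic turns out to be (claaarky deliberately isn't naming it yet), the triage described above is mechanical once you have a per-page number: rank pages by the metric, flag the worst for review, and track the site-wide bad-page ratio. A hypothetical sketch (TypeScript), with `metric` standing in for the unnamed Google Analytics statistic:

```typescript
// `metric` is a stand-in for the unnamed per-page quality statistic;
// higher is assumed to be better in this sketch.
interface PageStat {
  url: string;
  metric: number;
}

// Share of pages falling below a chosen quality threshold.
function badPageRatio(pages: PageStat[], threshold: number): number {
  return pages.filter((p) => p.metric < threshold).length / pages.length;
}

// The n worst pages, worst first - the ones to fix or remove.
function worstPages(pages: PageStat[], n: number): PageStat[] {
  return [...pages].sort((a, b) => a.metric - b.metric).slice(0, n);
}

const pages: PageStat[] = [
  { url: "/widgets/blue", metric: 0.72 },
  { url: "/widgets/red", metric: 0.18 },
  { url: "/about", metric: 0.55 },
];

console.log(badPageRatio(pages, 0.3)); // 0.33... (one bad page in three)
console.log(worstPages(pages, 1));     // [{ url: "/widgets/red", ... }]
```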

LostOne - msg:4468848 - 9:43 am on Jun 24, 2012 (gmt 0)

claaarky wrote:
The beauty is Google don't even need to analyse your site, they just collect this statistic from users via their browser and it tells them everything they need to know about every page people visit on your site without looking at anything else.
It almost makes sense that the problem could be as simple as that.

As for author markup: does it make sense to use it after all this time?

Good stuff, claaarky. Let us hope you're not just lucky. It doesn't explain why empty big-brand pages rank. All they have to do nowadays, or so it seems, is throw up a page, put a good title on it and bam, it's sticking to the top of the SERPs. It almost sounds as if they could be the next breed of farmers.

claaarky - msg:4468849 - 9:59 am on Jun 24, 2012 (gmt 0)

LostOne, it does explain brands as well (in fact, any site not hit by Panda).

As long as your ratio of bad pages overall is low you won't be hit by Panda. Therefore, some terrible pages can rank. Users of brand sites already know and trust the brand, so they may navigate to a bad page and it won't seriously affect their interaction with that site.

Hence really big brands can avoid Panda: even their bad pages don't cause a user reaction bad enough to reduce the quality signals.

If I'm right on this, it's the result of over 12 months of studying everything written about Panda, experimenting, spending money unnecessarily, having to let staff go, finding a good SEO, and knowing other people with sites that were and weren't affected by Panda. In short, if I've cracked it, it's been through bloody hard work. If I haven't, the quest goes on.

MarvinH - msg:4468869 - 12:42 pm on Jun 24, 2012 (gmt 0)

claaarky wrote:
What I've discovered is there is one piece of data, already collected in Google Analytics, which has a direct correlation with page quality.

Which piece of data are you referring to?

claaarky wrote:
I think this single piece of information, available to everyone, is what Google is using to determine page quality.

Which single piece of information are you talking about?

claaarky, am I missing something in your post, or do you purposely not wish to share? If so, I see no point in posting.

:-)

onebuyone - msg:4468873 - 12:44 pm on Jun 24, 2012 (gmt 0)

IMO it's the (organic traffic : direct visits) ratio; the more direct visits you have, the better.

Obviously, only Google Chrome users with old cookies count.

Lenny2 - msg:4468932 - 3:59 pm on Jun 24, 2012 (gmt 0)

Simple and elegant observations, claaarky! Thanks!

synthese - msg:4468991 - 9:25 pm on Jun 24, 2012 (gmt 0)

claaarky wrote:
In short, if I've cracked it, it's been through bloody hard work. If I haven't, the quest goes on.
C'mon claaarky, put me out of my misery. I've done all of the above (researched, read everything, laid off 3 part-timers, wasted far too many $$$). What is this mystical piece of info that helps define quality?

PS. Great post, enjoyed reading it.

zeus - msg:4468999 - 9:43 pm on Jun 24, 2012 (gmt 0)

I'm also in Panda hell with my main site, hit since April 2011: visits are down from 30,000 a day to 2,800, and I don't have to mention earnings. I'm not sure I will ever get out of Panda.

claaarky - msg:4469003 - 10:06 pm on Jun 24, 2012 (gmt 0)

Sorry, I thought it would be obvious from what I said but before spilling all the beans I'd like to verify it against the stats of sites that have recovered from Panda and sites that were never hit.

Will be speaking to a few people this week who can help me with that and I'll post here with the conclusions.

claaarky - msg:4469117 - 7:33 am on Jun 25, 2012 (gmt 0)

Okay people, I'm not waiting to see whether research confirms my theory, I'm going to open it up for wider discussion with a new thread. I've just posted it - [webmasterworld.com...]

Hope it helps.

[edited by: tedster at 12:15 pm (utc) on Jun 25, 2012]
