This 33 message thread spans 2 pages.
Alexa data on Pandalized sites from Google thread
I went through the first five pages (200 posts) of the famous Google forum thread "Think you're affected by the recent algorithm?". Using Alexa data, I found 31 sites that were actually hit by Panda. Most of the full recoveries came in September; others recovered temporarily and then crashed again. It was a very strange tour.
The number in parentheses is the current Alexa US rank.
#1 (2588) recovered late September
#2 (10,948) recovered mid May, fell again late June
#3 (26,451) down and flat
#4 (38,329) down and flat
#5 (56,683) down and flat
#6 (6,884) roller coaster, and jagged
#7 (10,813) up and down, full recovery September
#8 (13,688) down 2010, down Panda, some recovery September
#9 (61,027) down, summer recovery, down worse
#10 (37,401) way down, way up, down (IN rank)
#11 (77,601) down and bumpy, was falling since mid-2010
#12 (9678) down and stayed down
#13 (18,697) down and flat (big 2010 run-up)
#14 (1882) down and still drooping (down since 2010 peak)
#15 (93,976) down, up, flat, not 100% it's Panda
#16 (4809) down, down, big July recovery, par
#17 (755) down, down, drooping
#18 (22,913) down, down, spiky
#19 (46,275) down, down, down
#20 (4,367) down, up, down, flat
#21 (25,842) down, down, drooping
#22 (54,886) down, drooping
#23 (47,232) down, down, flat
#24 (2,439) minor Panda hit and flat after big fall earlier
#25 (34,449) down, down, flat (with seasonal spikes)
#26 (12,817) down, way up, way-way down, flat
#27 (28,013) down, drooping
#28 (54,128) down, down, drooping
#29 (25,21) down, trend up, full recovery late June
#30 (56,389) down, down, flat
#31 (31,800) down, down, flat to 2010 baseline
So there were only four full recoveries in the bunch so far, but that beats none. It seems to me that the penalized sites fell into a small number of subject groups, though they are self-selected reporters to the Google thread. I'll look for reasons as I have time in the coming days.
Writing it up with subject areas and site descriptions, maybe some webmaster interviews, would make great link bait for somebody who wants SEO traffic. Yuk!
[edited by: tedster at 7:53 am (utc) on Nov 22, 2011]
Panda is a new component of Google's overall ranking algorithm this year. It is named after a Google engineer credited with a breakthrough that made it feasible. After at least a year in development, it first rolled out in February. Its stated purpose was to measure with machine intelligence what human intelligence commonly calls quality.
Google recognized that some businesses were gaming their rankings by creating content that worked for both relevance and PageRank, but that was "shallow" and provided very little value. As such, the problem fell in between Matt Cutts' spam team (it's not, strictly speaking, spam) and the overall rankings generated by Amit Singhal's team.
Some of these sites were being called "content farms", so one early name webmasters gave this new algorithm component was "the Farmer Update". However, Panda has largely taken over as the name everyone uses now.
Panda is apparently quite complex. What seems to cause a devaluation of pages for one site is not always what seems to cause the devaluation of another site. Here's a thread with links to the many discussions that surrounded each new iteration of the Panda algorithm: Panda Iteration Dates [webmasterworld.com]
@whitey - This last month we've taken the content purge much deeper, so I'll keep you posted on any positive results. It's not so much that the content is spammy; it's that the content is dated. In its time (as a blog) the content was great, and the ensuing 20+ comments on each page showed this.
But looking at it now, I've decided the content is no longer relevant... (as Google Panda also decided). The trouble is that Panda punished every single page.
@synthese - many sites lived with an 80/20 rule for traffic and conversions prior to Panda. If your analytics confirm this, it may make your decision easier on how to rework your site and improve the quality-content ratio. It's best to block those marginal pages rather than delete them, as you may later decide to reintroduce some of the ones that simply need updating. Then you may need to wait two or three Panda iterations to see the results of your decision.
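The 80/20 triage above can be sketched with a few lines of Python. This is just an illustration of the idea: the page paths and visit counts are made-up sample data, not from any real analytics export, and the 80% cutoff is the rule of thumb from the post, not a Panda threshold.

```python
# Toy check of the 80/20 traffic rule against an analytics export.
# Page paths and visit counts are invented illustration data.
pages = {
    "/guide-to-widgets": 5200,
    "/widget-reviews": 2100,
    "/blog/2009-old-post": 140,
    "/blog/2008-older-post": 90,
    "/about": 60,
}

total = sum(pages.values())
ranked = sorted(pages.items(), key=lambda kv: kv[1], reverse=True)

core, marginal, running = [], [], 0
for path, visits in ranked:
    # Pages inside the top 80% of cumulative traffic are "core";
    # the long tail becomes a candidate for blocking or rewriting.
    if running < 0.8 * total:
        core.append(path)
    else:
        marginal.append(path)
    running += visits

print("core pages:", core)
print("candidates to block or update:", marginal)
```

Blocking (via robots.txt or noindex) rather than deleting the long tail keeps the option of reintroducing updated pages later, as suggested above.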
Google's machine learning methodology likely takes in a lot of variables to establish different thresholds of tolerance in the algorithm, which is why webmasters can only really share at a generalised level. Signals such as "brand", "freshness", "link profiles", "site duplication", "competitive duplication" and so on are the sorts of things that might get evaluated in combination with a range of other value attributes. If this is accepted, then it's clear that not all sites have the same thresholds and "trust" as part of the overall quality score equation.
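To make the "signals combined against a threshold" idea concrete, here is a purely hypothetical toy model. The signal names, weights, scores, and pass threshold are all invented for illustration; nobody outside Google knows the actual factors or values.

```python
# Hypothetical illustration of combining quality signals into one score.
# Weights and thresholds are invented, not Google's actual factors.
def quality_score(signals, weights):
    """Weighted sum of named signal values (all values are made up)."""
    return sum(weights[name] * value for name, value in signals.items())

weights = {"brand": 0.4, "freshness": 0.2, "link_profile": 0.3, "duplication": -0.5}

# Two imaginary sites: similar content signals, very different "brand".
site_a = {"brand": 0.9, "freshness": 0.5, "link_profile": 0.7, "duplication": 0.1}
site_b = {"brand": 0.2, "freshness": 0.5, "link_profile": 0.7, "duplication": 0.6}

# A strong "brand" signal can offset some duplication, giving a
# different effective tolerance -- the varying thresholds described above.
for name, site in [("A", site_a), ("B", site_b)]:
    score = quality_score(site, weights)
    print(name, round(score, 2), "pass" if score >= 0.5 else "flag")
```

The point of the sketch is only that the same duplication level can pass on one site and get flagged on another once other signals are weighed in, which would explain why webmasters see such inconsistent Panda outcomes.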
On the subject of "trust", I think it's worth defining it in the context of predictable patterns. Google wants to see these types of signals repeated to form a "trust profile", with brand reputation being a very strong element.
Different folks have different strategies as well, so it's worth canvassing a range of views with people who are prepared to share. Often a fresh pair of eyes and different experience can help you pick out the nuances and make adjustments more easily.