Forum Library, Charter, Moderators: Robert Charlton & aakk9999 & brotherhood of lan & goodroi

Google SEO News and Discussion Forum

This 145 message thread spans 5 pages: < < 145 ( 1 [2] 3 4 5 > >     
Panda 2.2 Update Part 2

 1:57 pm on Jun 21, 2011 (gmt 0)

The following was continued from http://www.webmasterworld.com/google/4326253.htm [webmasterworld.com]
According to Search Engine Land [searchengineland.com...]
Google has given us confirmation that they have run an update to the Panda filter recently.

We have been expecting the Panda 2.2 update based on news coming out of the SMX Advanced conference. Matt Cutts told SMX attendees that Panda 2.2 has been approved and hasn't been rolled out yet, but that should happen soon.

The update hit sometime late last week. I believe Google manually pushed out the Panda 2.2 update around June 16th.

Bad news for all of us still Pandalized waiting to have the penalty lifted.

[edited by: Brett_Tabke at 11:34 pm (utc) on Jun 21, 2011]
[edit reason] split from previous thread last week [/edit]



 10:12 pm on Jun 21, 2011 (gmt 0)

We just got hit by Panda 2.1 - trying to work out why?

We are a manufacturer and have an ecommerce site. All our descriptions are unique and we have excellent content. The site is 6+ years old.

The only thing I can think of is comparison shopping sites and affiliates using the descriptions on their sites. These are now appearing above our original written copy.

Secondly, we have written different descriptions for the same product in various colours. But again, all the content is original and rewritten (including original features, bullet points etc).


 10:16 pm on Jun 21, 2011 (gmt 0)

But it would be interesting and revealing to know why a site escaped 1.0 and was caught in 2.1.

I think it was the type of keyword. Panda 1 sites ranked for the trophy one word and two word keywords. Panda 2.1 sites ranked for three word keywords and long-tails.

Panda 2.1 sites were less important in G's eyes as they were sending less traffic to them.


 10:17 pm on Jun 21, 2011 (gmt 0)

Tell us: did you revise your META descriptions as well, so that searchers are more likely to click through to your site based on the SERP squib?

Yes - customised them for every single post.

With reworked content that more completely provides the information (or "answer") your long-tail searchers seek, what is the typical reader's next action?

A lot of the time the answer was nowhere in the SERPs, so by researching (by sending out emails etc) and adding the info in I was already at a significant advantage, as I was the sole provider of the answer! Start to add up a serious number of pages where only you provide the answer (because everyone else is regurgitating) and G can't ignore you.

But be sure to protect all your work from the scrapers. I have feeds set to short, never full, and have traps set to ban the IP addresses of bots scraping my stuff (have done this for years). Occasionally I just block countries like Ukraine, because a lot of the scraper bots are located there.
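For anyone curious what such a trap looks like in practice, here is a minimal sketch of the idea: a hidden URL, disallowed in robots.txt and invisible to human visitors, that bans any IP requesting it. The trap path and in-memory ban list are purely illustrative assumptions; a real setup would persist bans at the firewall or server-config level.

```python
# Toy bot trap: any client that fetches the hidden trap URL gets its IP banned.
# TRAP_PATH and the in-memory set are illustrative only.

banned_ips = set()

TRAP_PATH = "/do-not-crawl/"  # hypothetical trap URL, disallowed in robots.txt


def handle_request(ip: str, path: str) -> str:
    """Return a response status for a request from `ip` for `path`."""
    if ip in banned_ips:
        return "403 Forbidden"
    if path == TRAP_PATH:
        # A scraper that ignored robots.txt just identified itself: ban it.
        banned_ips.add(ip)
        return "403 Forbidden"
    return "200 OK"
```

A polite crawler never trips the trap because it honors robots.txt; scrapers that ignore it ban themselves on first contact.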


 10:38 pm on Jun 21, 2011 (gmt 0)

I think it was the type of keyword. Panda 1 sites ranked for the trophy one word and two word keywords. Panda 2.1 sites ranked for three word keywords and long-tails.

Panda 2.1 sites were less important in G's eyes as they were sending less traffic to them.

I thought about that, but my one pandalized site does not fall into that category; I normally rank/ranked for 3+ word phrases, and that's perfectly normal for my niche/domain name. I am thinking that maybe G targeted the top XX% of the 'worst' sites and then tried to expand in the other Pandas, going back and forth by adding/removing/changing filters.

But your theory could explain why many cheesy 200-or-so-word single-focus 'sites' still rank. They're under the radar, where bigger sites that sent spammy signals, like too many tags etc, were caught. Probably just a matter of time before G removes the smirk from their faces :)


 10:50 pm on Jun 21, 2011 (gmt 0)

Walkman - is your niche a high traffic niche? I think G was aiming for the "big boys" with panda 1. Big traffic, trophy keywords, plus UGC sites, as those are the ones most open to spam unless they have really vigilant controllers (and most don't because that takes manpower and employing lots of staff destroys their business model).

The rest of the net is small fry, and yes, I agree, they are progressing down the ranks and the tiddlers will get caught in their net too.


 11:08 pm on Jun 21, 2011 (gmt 0)

I think it was the type of keyword. Panda 1 sites ranked for the trophy one word and two word keywords.

I gained traffic in Panda 1 and got nailed by Panda 2. I was ranking for some pretty good 2-word phrases, but interestingly they are phrases that have good traffic but don't have a lot of money in them (and thus not a lot of adwords advertisers).

I bet they used adwords data to determine which keywords to look at first. The higher the adwords bid, the more likely it is to be spammed.


 11:12 pm on Jun 21, 2011 (gmt 0)

Alyssa, yeah high traffic niche.

I have to really see if my competitors got hurt in Panda 2+ but I think a few did. That probably means that Google is doing both, going after high traffic niches and moving down to 'less bad' sites in that niche, among other ones.

Anyway, we're speculating for the most part. Hit sites will either come back or not :)


 11:21 pm on Jun 21, 2011 (gmt 0)

But your theory could explain why many cheesy 200-or-so-word single-focus 'sites' still rank. They're under the radar, where bigger sites that sent spammy signals, like too many tags etc, were caught. Probably just a matter of time before G removes the smirk from their faces :)

Then again, the non-cheesy ones will continue to prosper..and stay at the top :)..Google are only refining Panda to deal with the crap, the scrapers, and the only-in-it-for-the-ads sites..

The real quality sites, IME, have nothing to worry about..

The 4 sites immediately below my "200 word single focus site" (I'm touched you remembered ;-)..are all large, all real genuine quality, all original material, and they, and mine, haven't moved since February..I have no doubt we'll all be on page 1 next year for the same "focused terms"..:)

Quality and clarity, and originality, expressed uniquely, counts..

btw the results returned are 20 million more than 15 days ago..which demonstrates that even with more competition in a particular area, those of us who did not build for search engines, but built for users, weather the storms.


 12:06 am on Jun 22, 2011 (gmt 0)

The real quality sites, IME, have nothing to worry about..


those who are caught in Panda through inadvertence

Sorry, Leo, 'nuff respec' as I'm normally a big fan of your reasoning, but which is it? You can't have it both ways...


 12:10 am on Jun 22, 2011 (gmt 0)

Be aware of Google's commercial motives, be very aware. Separate them from the purism of search quality and the teams that attempt to get this right. I remain convinced Panda was, and is, an ancillary strategy to ultimately push more websites into using Adwords.

Specific targeting , trophy keywords ... yes I've seen that too.

AlyssaS is on the ball. A very valuable contribution and highly informative - thanks. Merge this with Nippi's AU weekend revival and a picture is forming, along with others here. Keep it coming.


 12:41 am on Jun 22, 2011 (gmt 0)

Seems to me duplication through category to product pages is tipping the threshold, especially on aggregated or machine driven content. Taxonomy may be playing into this as well.

If this is true, and these early reports suggest it's at least highly plausible, I don't understand the rationale behind their elimination in trying to conquer scrapers. Maybe it's collateral damage, or is it just a case of "let's remove 50% of aggregated content and not discriminate"?

Which could be why some of the better sites using aggregated content have been caught and some of the lowest quality ones have not.


 12:44 am on Jun 22, 2011 (gmt 0)

@Whitey..it's not that I don't think G would prefer more adwords spend (although they did kick out many and lost a lot of revenue "short term"), and one sees some of what they allow adsense to run on (in particular some of the "premium partner" sites, many of which fell, though some of the "horrors" should have been removed and were not :( )..But this time around, money does not appear to have been their immediate imperative.

That said, although my niches and those that I watch and touch via searches are well spread out, I cannot by any stretch claim that I see every area; in some, part of the knock-on effect may well be that, in order to regain lost exposure, some sites will buy ads, and thus Google will have a net gain to their finances.

That would gel with what I have always considered to be (and have disputed here as to the ethics of it with their PR reps in the past) the idea that one's quality score can prevent one's ad from showing..but if one bids more, then a "hitherto not good enough to point an ad at" landing page suddenly becomes "good enough"..if one pays more for the ad..

That really is saying that some advertisers can pay to be "quality"..priceminister still dominates adwords here with "buy your dead popes, for the biggest and best choice of deceased pontiffs " type ads.. :(


 8:10 am on Jun 22, 2011 (gmt 0)

Did anyone that was penalized on Feb 24th, 2011 (Panda 1.0) come back?

If you did, can you please share with us, if possible:
    The traffic increase - is it now at pre-Panda level, a 5%-95% increase from where it was, much better than pre-Panda, etc.?

    Was it universal or just a few pages/sections? Do a few keywords make up your total Google referrals, or is it a sitewide thing?

    Did the traffic increase start before Panda, or was it one shot?

Hopefully we can see what's going on month five.


 8:36 am on Jun 22, 2011 (gmt 0)

Improved in May Panda, only to be slapped down again in June. Boo hoo... No time for self pity though....

I have three possible theories on what's going on:-

1) Google is doing some kind of document classification that works on a search-phrase basis (notice how when you 'twist' the phrase a little you shoot back to the top?).

In short, Google compares everything in the results set with everything else, computes means or standard deviations (or similar), maybe using n-grams and assigns multiple scores that filter pages down according to what's being searched. It then builds an overall site-wide score based on means. So, your pandalized pages screw themselves and drag down your site.

The fact that it has to build these comparative matrices of search words and scores might explain why it's only calculated periodically.

This doesn't really explain how pages with very limited content win out, although they may benefit from the absence of content to critique and a good site-wide score.

2) Google is using the greatest resource at its disposal to rate quality: its searchers. Again it has to build complex matrices based on search terms/keywords, comparing everything with everything (an 80% bounce is not bad if everyone else's is 99%!). Individual pages are pandalized and then a site-wide mean score is applied, affecting domain trust/authority and pulling everything down (from a bit to a lot).

This kind of fits with google's original objective of tackling content farms. After all, they didn't hate content farms per se, they hated the fact that users hated content farms (and were therefore not getting optimum satisfaction from their search at Google).

Explains why the update is infrequent (lots of data to collect and mash up).

Explains why good content on ugly/non-user-friendly sites still gets pandalized (like that book review site mentioned here - sorry, no offence to the owner).

Explains why big brand sites do well; people give them more of the benefit of the doubt and spend longer trying to satisfy their search. Unbranded sites have a harder time convincing folks there's any hope of satisfying their needs there unless it's bloomin' obvious.

Explains why eHow bombed less than EzineArticles; the latter was just way too ugly and the former superficially engaging (like one of those chat magazines!).

3) It's a combination of the two. Classify the worst statistically, then let the public decide.

Anyone support any of these ideas/ dispute them/ have an alternative theory/ have anything to add?
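One way to picture the site-wide mean idea common to both theories above: score pages individually, then blend each page's score with the site average, so that weak pages drag strong ones down. This is a toy sketch with made-up numbers and an assumed 50/50 blend, not Google's actual formula.

```python
# Toy model of a site-wide Panda score: each page's effective score is a blend
# of its own quality score and the site-wide mean. The 50/50 blend weight and
# the example scores below are illustrative assumptions only.

def effective_scores(page_scores, site_weight=0.5):
    """page_scores maps URL -> per-page quality score in [0, 1]."""
    site_mean = sum(page_scores.values()) / len(page_scores)
    return {
        page: (1 - site_weight) * score + site_weight * site_mean
        for page, score in page_scores.items()
    }

scores = {"/good-article": 0.9, "/thin-page-1": 0.2, "/thin-page-2": 0.1}
print(effective_scores(scores))
```

Under this model the thin pages "screw themselves and drag down your site": the good article's effective score falls toward the mean, which would match the observation that deleting or improving the weakest pages helps everything else.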


 8:46 am on Jun 22, 2011 (gmt 0)

Sucks that you got hit, I thought you found the secret formula.

Option 2 is out. There's not enough data to do it on a per-page basis. Maybe on a site basis, but even then my time on site went up and bounce went down to around 50%, which is very good for me.


 8:59 am on Jun 22, 2011 (gmt 0)

I have recovered.

I was affected by Panda on 24th Feb (first Panda), traffic down from 28k uniques to 14k uniques. Native English writer fired.

13 Jun - 14k uniques
14 Jun - 14k
15 Jun - 18k uniques - recovery starts here
16 Jun - 23k
17 Jun - 24k
18 Jun - 25k
19 Jun - 26k
20 Jun - 25k
21 Jun - 25k
22 Jun - 26k

What I have changed:
- I moved the useful content to be very visible to users (before, when users arrived on my site, they usually clicked ads; the useful content was at the bottom of the page)
- improved page views and bounce rate by 30%
- started deleting unuseful pages
- started updating pages with a low quantity of content to be more useful
- resolved duplicate titles from WMT
- deleted some categories with only a few listings
- deleted Related links made for Google, not for the user
- reduced internal links a little
- cleaned category pages of some content added for Google (sorry)
- added user comments, with verification and a spell check before approval

My theory about Panda:
- it is related to the quality of web pages
- it is related to user satisfaction on the site
- it is related to the user's first impression of the site (like another member said)
- I think they have made a list of (for example) 10 conditions that an unuseful site has; if your site meets at least 6 conditions -> apply Panda
- it is very important whether the user clicks back and searches again for the same thing
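That "at least 6 of 10 conditions" guess amounts to a simple threshold classifier, which can be sketched like so (the 6-of-10 threshold and the idea of named boolean signals are the poster's hypothetical example, not anything confirmed by Google):

```python
# Toy "N of M conditions" filter: each signal is a boolean low-quality
# condition (thin content, aggressive ads, high bounce, etc.); the filter
# trips once enough of them hold. Threshold is the poster's example value.

def is_pandalized(signals: dict, threshold: int = 6) -> bool:
    """signals maps condition name -> True if the bad condition holds."""
    return sum(signals.values()) >= threshold
```

The appeal of this theory is that it explains partial recoveries: fixing a few conditions can drop a site below the threshold even if others remain.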

My advice:
- engage users
- make them happy, so they do not search Google again for the same term
- if a user enters your page, clicks ads, goes to the ad's site, and from there goes back to Google and searches again, that is a problem; make your ads a little less aggressive
- shift your thinking to >60% for the user instead of SEO for Google: user is happy -> Google is happy -> you are happy (you come at the end, not first!)

p.s. thanks tedster & co; I still have emotions about Panda, but at least I see the Sun now.


 9:18 am on Jun 22, 2011 (gmt 0)

Hi Walkman

Humble pie for me! I still have pages that rank really well, but many slipped in latest update.

Why do you think there's not enough data for number 2?

OK, so more about us (in the interests of spotting the problem)...

We are e-commerce and always write our own content and take our own photos (OK < 1% mfr's photos).

If there's one thing that makes us stand out (maybe our problem?), it's that we typically write rather more than our competitors. We have untypically long content for our sector.

Our blog seems to be unaffected by panda, though it's on the same domain but in a discrete folder.

Our category pages have lots of content, first up in document order (but lurking in side bar), arranged in paragraphs, with inline links.

Our product pages have a lot more words on them than competitors. They also use a template, so generic content across pages and some laziness in just rewriting a similar product.

We wrote articles, some of which were picked up by local radio and even CNN (even though we're in the UK), but many of which have been scraped to some extent or another.

We allowed 20,000 search pages (only when good match shows) to be indexed (my bad Google, I'm sorry!). These had terrible bounce rates (90-odd %) but we thought we needed them to be a 'big' site.

We're hosted on Rackspace on a dedicated server, same ip and host for 5 years.

We're McAfee Secure (since it was HackerSafe), use Verisign Extended Validation (that's the sexy green bar) certificates, and have one of the obligatory safe shopping logos prominent in the UK.

Design-wise, the site's not the prettiest, but not the ugliest. It's certainly not hobby-like. It looks professional. We have used Website Optimizer extensively to make the best of my average design skills and have run a lot of tests. We do have fantastic cut-out photography that is well above average for our sector.

We have our own CMS, running on XSL transforms (server side), which means we can change our pages any way we like to improve user experience, and we frequently do.

Our site is considered medium speed by Google; fast by Alexa (go figure).

We have verified the site in Webmaster tools and provide a feed for Googlebase (where we do well for product searches).

We spend between £500 to £5000 (worst case, rarely) a month on Adwords.

We don't run adsense on this site; we have disguised affiliate links on two pages out of several thousand. This is where we no longer sell that product category.

We've been online since 2003. Never bought a link and haven't participated in link exchanges for 6 years or more.

We don't have tonnes of spamming back links. We've done a little article marketing, but it's a drop in the ocean. And the same doesn't seem to have done my unpandalized sites any harm at all.

All I can think is that number 1 in my original theories has us looking too much like a content website (and not enough like commerce), and our content isn't good enough.


 9:28 am on Jun 22, 2011 (gmt 0)

rowtc2, not to cast aspersions, but how do you have traffic numbers for 22/6? Even if you were on the Date line, it would only be 9pm when you posted.


 9:31 am on Jun 22, 2011 (gmt 0)

Congrats to rowtc2 first and thanks for the detailed description.
I have related links, but they actually serve the user, and just this week I added a snippet preview on the categories. It serves users, as they get an idea if they start by category; I had removed it for Google, as I feared a dupe penalty. Now I've got nothing left to lose :) (Plus, both the related links and the text in the category help me with bounce rates and time on site.) My bounce rate per page is about the same as my competitors', as far as I can tell, and they have many more junk pages, since many weren't hit by Panda. But probably other factors null that out.

Suggy, did your bounce rate/time on site change for the worse between the Pandas?

rowtc2, not to cast aspersions, but how do you have traffic numbers for 22/6? Even if you were on the Date line, it would only be 9pm when you posted.
Probably a typo, watch him post a traffic graph now :)

sleep time for me.


 12:32 pm on Jun 22, 2011 (gmt 0)

rowtc2, not to cast aspersions, but how do you have traffic numbers for 22/6? Even if you were on the Date line, it would only be 9pm when you posted.

The statistics are from Statcounter; I was in a hurry because I was leaving home. It was a writing mistake.

Tuesday 21 Jun - 26k
For 22 Jun the day has not ended


 12:47 pm on Jun 22, 2011 (gmt 0)

rowtc Congrats! And now I'm thrilled because you were hit in the first round of Panda and still recovered. There goes my theory that Panda 1 couldn't recover completely yet, even though Panda 2+ could. But it's a good thing that my theory was wrong. Just means I haven't done enough to completely recover. Obviously I've done some good things to get a partial recovery, but not enough for a complete one. Now I feel good about working on it. There's hope! Yay! :D


 5:46 pm on Jun 22, 2011 (gmt 0)

shift your thinking to >60% for the user instead of SEO for Google: user is happy -> Google is happy -> you are happy (you come at the end, not first!)

That, IMNSHO, is the closest we're going to get to a "magic answer."


 6:33 pm on Jun 22, 2011 (gmt 0)

For the earlier posters in this thread discussing internal navigation...

tedster posted this back in April:

The site-wide demotion seems to flow backwards through the site's internal linking. This I'm still not totally certain of, but there does seem to be a pattern that says "the negative site-wide factor is strongest for pages that are just one click away from the really bad page and not as strong for pages that are more distant."


Google has assumed responsibility for user experience not just on the landing page (where visitors landed from Google), but also for the entire site. Thus the sitewide demotion of Panda. It logically follows that, if the same principle is applied, your site could/would be penalized in part based on the number of internal links to weak/low-quality pages. You would get in trouble for linking to these bad "neighborhoods" or pages within your site.
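tedster's pattern, demotion strongest for pages one click away from the bad page and weaker for more distant ones, amounts to a breadth-first traversal of the internal link graph with a decaying penalty. A sketch, with the halving-per-click decay as a purely illustrative assumption:

```python
# Toy model of distance-based Panda demotion: the bad page gets full demotion
# (1.0) and each additional click of link distance halves the effect. The
# decay factor is an illustrative assumption, not a known Google parameter.

from collections import deque


def demotion_by_distance(links, bad_page, decay=0.5):
    """links: page -> list of pages it links to.
    Returns page -> demotion factor by breadth-first link distance."""
    # Treat links as bidirectional, since the demotion "flows backwards"
    # through the site's internal linking.
    graph = {}
    for src, dsts in links.items():
        for dst in dsts:
            graph.setdefault(src, set()).add(dst)
            graph.setdefault(dst, set()).add(src)

    demotion = {bad_page: 1.0}
    queue = deque([(bad_page, 0)])
    while queue:
        page, dist = queue.popleft()
        for neighbour in graph.get(page, ()):
            if neighbour not in demotion:
                demotion[neighbour] = decay ** (dist + 1)
                queue.append((neighbour, dist + 1))
    return demotion
```

Under this model, removing internal links to the worst pages (or the pages themselves) would shrink the penalised neighbourhood fastest, which fits the recovery steps reported earlier in the thread.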


 11:23 am on Jun 23, 2011 (gmt 0)

^^^ It's a possibility. Or it could just be that G got fed up with people using (abusing, from their point of view) internal links to rank stuff that no one outside the website would link to. Some of the really big sites like Ezine and Hubpages were really overdoing it. It was an easier method of ranking stuff than paid links! Once all those plugins that added tag pages to your site (to bump up internal links to the tagged posts) started to appear and became popular on the WF, you sort of knew the game was up.

And yes, probably lots of innocent sites got caught up, because that's always what happens.

BTW - don't know if anyone else has posted this, but the following blogger thinks eHow took another knock with Panda 2.2



 11:35 am on Jun 23, 2011 (gmt 0)

As a former hedge fund analyst, and a career webmaster for a credit card site, I was a bit surprised at the lack of news coverage surrounding eHow's plight after the international roll out of Google's (GOOG) algorithm change on April 11.

This was written by the author in April, before the article you refer to. That combination of skills could wreak havoc on share prices, but I guess you can only cover up diminished earnings for so long anyway. Four months of Panda, and anyone not out yet is starting to struggle.


 12:11 pm on Jun 23, 2011 (gmt 0)

You have to appreciate his candour:
Disclosure: I am short DMD

I wonder how many G engineers are in their private portfolios.


 1:48 pm on Jun 23, 2011 (gmt 0)

An immediate look at the Alexa graph for eHow does show a staircase structure that corroborates it. Though Alexa graphs aren't anywhere close in predicting traffic, they do show the traffic direction to some extent.


 2:02 pm on Jun 23, 2011 (gmt 0)

I wonder how many G engineers are in their private portfolios.

I would have hoped that G engineers and family, friends etc would not be allowed to hold stock in Demand Media et al..and that internal company structures at G or elsewhere (nay, laws even) would be in place to prevent such..can you say "conflict of interest", or worse, what would amount to "insider trading"!


 2:12 pm on Jun 23, 2011 (gmt 0)

[deleted for getting off track]

[edited by: indyank at 2:44 pm (utc) on Jun 23, 2011]


 2:17 pm on Jun 23, 2011 (gmt 0)

wishful thinking as it can no way be classified as insider trading.

It would here..no way would someone whose job involved actions in company A, which could affect the stock of another company B (because company B was dependent upon the actions of company A to the degree that Demand Media is on G), be allowed to hold stock in company B..

It however does show that G can control the internet economies to a great extent,if it wants.

agreed :(


 2:18 pm on Jun 23, 2011 (gmt 0)

Not to get sidetracked (apologies, I started it and am continuing it), but G engineers are very specifically NOT hurting DMD. They are working on an algo, with no express intent to harm or promote any given web property.

They merely seek to return the results that give peak aggregate utility (trying to strip away the subjective connotations of "best") to their users.

As always, it's the algo that did it.



Philosophically, it's a bit like the death-by-automated-drone argument.

Say you're an innocent civilian attending a wedding, and you get killed by an automated drone (non-existent at this point) that identified you as hostile. Who killed you?
The person designing the drone?
The person designing the rules of engagement?
The person who launched the drone?
The person who wrote the "identifying" software?

If Google reduce their traffic to DMD properties, was it...
Mr Panda, with his machine-learning techniques?
The human testers, who created seed-sets?
The engineers who selected the criteria to apply to Mr Panda's heuristic?
The engineers designing the SERP user satisfaction system?

[edited by: Shaddows at 2:43 pm (utc) on Jun 23, 2011]
