Penguin 2.0 is upon us - May 22, 2013
We started rolling out the next generation of the Penguin webspam algorithm this afternoon (May 22, 2013), and the rollout is now complete. About 2.3% of English-US queries are affected to the degree that a regular user might notice. The change has also finished rolling out for other languages world-wide. The scope of Penguin varies by language, e.g. languages with more webspam will see more impact.
This is the fourth Penguin-related launch Google has done, but because this is an updated algorithm (not just a data refresh), we’ve been referring to this change as Penguin 2.0 internally. For more information on what SEOs should expect in the coming months, see the video that we recently released.
You do understand, of course, that even if your experiences and anecdotal evidence covered hundreds, or thousands, or even ten thousand sites, it would still be statistically insignificant compared to the total number of sites or URLs out there, of every type and every topic.
It's really easy to fall into the trap of "because it's happening to me and these hundred other people, it must be universal" and I fall into that trap all the time myself. But it really gets in the way of trying to parse out what's going on, and how to navigate through it. And of course, to figure out what to do next if you can't.
Over the last 14 months, it probably falls into the "thousands" range. However, the aggregate knowledge/experience is from a wide range of sources, both related to me and also unrelated. Are you saying that all of these sources cannot be representative of a larger constituency of websites? Perhaps it's just a wild coincidence that all of these sources experienced the same types of demotions for sites that had zero SEO work done on them?
That would indeed be jarring and jolting news, and it would be grounds for a huge class-action lawsuit for unfair business practices.
I am positive wilburforce will seek legal counsel with that tidbit.
I agree, but I doubt it would ever be provable. The algo isn't open to any kind of scrutiny, as we all know - how can you possibly "prove" something when the evidence isn't available to view?
I am not simply taking Google's word...
The lack of a Google acknowledgement is the best evidence.
According to your theory, I also must be whitelisted.
Then you really need to read up on random sampling - like I say, many sources whose experiences matched my own were unrelated to me, plus there's the added randomness of various developers having all kinds of clients who are not related to one another in any way (same with my own clients too). When you add it up, it's a large random mix of thousands of sites. BUT... "pretty much, yep", they are all somehow related to one another - granted, statistically this is possible in the same way you may win the lottery twice in a row, but I'd (understatedly) say it's unlikely that you are right.
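For what it's worth, the statistics cut both ways here, and a quick back-of-the-envelope calculation shows why. With a genuinely random sample, the margin of error depends on the sample size, not on the size of the population being sampled - the total number of sites on the web doesn't enter into it. A minimal sketch in Python (all numbers invented for illustration; this is not real Penguin data):

    import math

    def margin_of_error(n, p, z=1.96):
        # Approximate 95% margin of error for an observed proportion p
        # from a simple random sample of size n. Note that the total
        # population size (millions vs. billions of sites) appears nowhere.
        return z * math.sqrt(p * (1 - p) / n)

    # Hypothetical numbers: if 60% of sampled sites showed a demotion,
    # the uncertainty shrinks with sample size alone.
    for n in (100, 2000, 10000):
        print(n, round(margin_of_error(n, 0.6) * 100, 1), "points")
    # 100   -> ~9.6 points
    # 2000  -> ~2.1 points
    # 10000 -> ~1.0 points

The catch is the "genuinely random" part: the clients of a handful of developers are not a random draw from the whole web, and that selection bias - not the raw count - is really what the two of you are disputing.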
What evidence would there be to suggest that Wikipedia is not whitelisted?
I am not simply taking Google's word...
You most certainly did when you stated:
The lack of a Google acknowledgement is the best evidence.
Your statement is not evidence at all and can be classified as unsubstantiated opinion. Case studies do exist which support the theory that Wikipedia is whitelisted. Whether you choose to research those studies and determine their legitimacy for yourself is a matter of personal choice.
According to your theory, I also must be whitelisted.
If it were subjected to the same spam links that were used in the case studies I noted, and did not suffer a Penguin demotion, then that would provide evidence of whitelisting. But it has not been, and it is therefore inconsequential for the purposes of this discussion.
What I think most people talk about [when they ask about whitelists] is, "Is there some type of overriding, you-are-golden, you-can-never-be-touched philosophy or list?" And to the best of my knowledge, there is nothing like that.
Sullivan: So there is no overall - let's call it the "Wikipedia list" - no overall "this site should always be fine for everything." But if you have a particular signal that you are implementing and you think, "This signal is working well to deal with 99% of the sites out there, but wow, it is really hurting this other site for completely unrelated reasons, so let's exempt them from that" type of thing?
Cutts: Well, think about if you were in charge of Google. Suppose we all got hit by a bus and you guys had to come into the Googleplex and run the search engine yourselves, right? After you ate all the free food, the next thing you would do is think about the philosophy: how do we make it work? And I think the right instinct is to try to do as much as you can algorithmically.
It is the same thing with spam. You try to solve hidden text or hacked sites algorithmically, and then the stuff you can't catch you are willing to take manual action on to remove, because that is abuse and it is a bad user experience. And then you use that data to try to do it the next round, so it is completely algorithmic, or it is algorithmic for more languages.
And I think that is the philosophy almost anybody would come to: you want to be as scalable as you can.
Sullivan: So that is yes?
Cutts: I think that is a yes.
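What Cutts describes there is essentially a feedback loop: flag what you can algorithmically, let humans take manual action on what slips through, then fold those manual labels back into the next algorithmic round. A toy sketch of that loop in Python (every page, score, and threshold is invented for illustration; this is in no way Google's actual pipeline):

    def algorithmic_pass(pages, threshold):
        # Flag pages whose spam score clears the current threshold.
        return [p for p in pages if p["spam_score"] >= threshold]

    def manual_review(pages, flagged):
        # Human reviewers catch spam the algorithm missed and label it.
        return [p for p in pages if p["is_spam"] and p not in flagged]

    def retune(threshold, missed):
        # Next round: pull the threshold down toward what reviewers
        # caught, so more of it is handled algorithmically next time.
        return min([threshold] + [p["spam_score"] for p in missed])

    pages = [
        {"url": "a", "spam_score": 0.9, "is_spam": True},
        {"url": "b", "spam_score": 0.4, "is_spam": True},  # missed at first
        {"url": "c", "spam_score": 0.1, "is_spam": False},
    ]
    threshold = 0.8
    flagged = algorithmic_pass(pages, threshold)       # catches only "a"
    missed = manual_review(pages, flagged)             # humans catch "b"
    threshold = retune(threshold, missed)              # threshold -> 0.4
    print(len(algorithmic_pass(pages, threshold)))     # next round: 2

The design point is the one Cutts names explicitly: manual review is expensive, so its output gets treated as data for the next round, because you want to be as scalable as you can.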
First you suggest that all that stuff is evidence and cannot simply be wild coincidence and then you say none of it proves anything.
What you mean to say is that you do not have the expertise to determine what is and what is not evidence, and that the other suggestive references are not immediately available to you - which is why you cannot conclusively prove anything to a preponderance of the evidence, which is all that is needed to file a suit of this nature.
And yet again, the random sample begins with a constant... PENGUIN 2.0 targeting link webspam.
I have yet to see a single website negatively impacted that did not have something to do with links.
If you have a domain that does not have something to do with links, point it out - it is a false positive, and Google would love to fix your issue immediately.
Fair enough - you discount my anecdotal evidence by presenting your own anecdotal evidence. People tend to learn from their experience, but if they want to hold their personal experiences up to scrutiny, they go deeper and get involved in some data collection and research. That's what I did. I'm a developer, and so I asked other developers - ones I knew personally (and trusted), and ones I didn't know, via forums. We all noticed commonalities between many sites that got Penguin demotions - i.e. sites that didn't engage in off-page SEO getting penalised. Very perplexing and disturbing. Clearly there's more to Penguin than purely off-page signals. That's my experience. I guess your only recourse, if you disagree with me, is to flat-out call me a liar, but that's what my research found, no matter how counterintuitive it is to you.
And then, when you look at many of the niches the affected sites were competing in after Penguin updates, brands really took over (and even more so after Penguin 2.0).
They suggest (not me) it is primarily link oriented.
Question to a developer: how could Google programmatically favor only brands, such that anything that isn't a brand can't get in on it?
Surely they don't do that manually.
They suggest (not me) it is primarily link oriented.
Links may be part of it, but I believe the way it has happened is user metrics: searches for the brand name, time on site, bounce rate, pages per visit. Take a look at the brands, then take a look at your site. I would be surprised if you are beating the brand in these four areas.
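If user metrics really were the mechanism, no manual list of brands would be needed - a single score over those four signals would separate most brands from most non-brands automatically, which also answers the "surely they don't do that manually" question above. A purely speculative sketch (the weights and normalizations are my own invention; there is no evidence Google computes anything like this):

    # Speculative brand-strength score over the four signals named above.
    def brand_score(branded_search_share, avg_time_on_site_s,
                    bounce_rate, pages_per_visit):
        return (0.4 * branded_search_share                  # share of queries naming the brand
                + 0.3 * min(avg_time_on_site_s / 300, 1.0)  # capped at 5 minutes
                + 0.2 * (1.0 - bounce_rate)
                + 0.1 * min(pages_per_visit / 5, 1.0))

    big_brand = brand_score(0.60, 240, 0.35, 4.2)
    small_site = brand_score(0.05, 90, 0.70, 1.8)
    print(round(big_brand, 2), round(small_site, 2))  # 0.69 vs 0.21

Plug in your own analytics numbers against the brand that outranks you; on metrics like these, the gap is usually obvious.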
They suggest (not me) it is primarily link oriented.
I personally think - and my own site's behavior is consistent with this, though it is neither proof nor my only evidence - that links are a strong element, but certainly not an exclusive one.
Sure, and that is a competitive advantage... but link quality TRUMPS link quantity... and that levels the playing field again.
I could not find your case studies; maybe you could post a thread on them.
I don't think I am allowed to post a direct link to the case studies.
But I will provide you some final stats for one of them and a snippet you can use to find it on your own, if you so choose.
The goal: test whether blasting hundreds of thousands of links each day can penalize a site or help it.
Site 1 - Wikipedia page - final position 3 (starting position = 4)
Site 2 - Government site - final position 258 (starting position = 36)
Site 3 - Small private site - final position not in top 500 (starting position = 42)
Every website not whitelisted is vulnerable to external forces outside of its control. I believe this because I have watched many of these case studies with interest, and they all end in some unknown website being driven into oblivion.
I won't discuss this any further with you, fathom. My opinion is formed by evidence, whereas yours appears to be subjectively based on biased interpretations of what Google employees say. Such statements by Google employees are not evidence but merely public statements that are in direct conflict with everything I have witnessed.
You're not ranking in the top 10 for a product found in stores without some serious net clout.
The problem is that really embarrassing results are not confined to niche service queries.
I really do think Google organic listings are kaput, both for webmasters and for users.
Does anyone see any problems I may face by doing this?
I'm not sure what you're considering commercial/transactional queries. The main keywords I'm referring to are used by people looking for free options and/or looking to buy.
As much as Penguin is supposedly about link profiles and anchor text, could it be that other factors are used to determine whether a site qualifies for a Penguin penalty?
You do understand, of course, that even if your experiences and anecdotal evidence covered hundreds, or thousands, or even ten thousand sites, it would still be statistically insignificant compared to the total number of sites or URLs out there, of every type and every topic.
I have yet to see a single website negatively impacted that did not have something to do with links.
I just don't buy that Penguin is a punisher of websites anymore; it's more of a "does this site fit within our strict new expectations for this keyword" and "is this site targeting our shortlist of elite keywords" type of scenario.
I'm not an Amazon guru, but the last time I heard, Amazon UK would not accept orders for shipping destinations in the USA.
Is Penguin more than just links?