Something is now becoming clear, and I think it's time we put away the name "950 Penalty". The first people to notice this were the heaviest hit, and 950 was a descriptive name for those instances.
But thanks to the community here, the many examples shared in our monthly "Google SERP Changes" threads and in the "950 Penalty" threads themselves, we can now see a clearer pattern. The demotion can be by almost any amount, small or large -- or it might even mean removal from the SERPs altogether.
It's not exactly an "OOP" and it's not the "End of Results" penalty. From the examples I've seen, it's definitely not an "MSSA Penalty" -- as humorous as that idea is. (Please use Google to find that acronym's definition.)
It's also not just a Local Rank phenomenon, although there are definitely some similarities. What it seems to be is some kind of "Phrase-Based Reranking" - possibly related (we're still probing here) to the Spam Detection Patent [webmasterworld.com] invented by Googler Anna Lynn Patterson.
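If that phrase-based reading is right, the mechanism might look, in spirit, something like the sketch below. This is only my reading of the patent, not Google's actual code - the phrase map, threshold, and function names are all hypothetical. The core idea: honest pages use a natural handful of phrases related to a query, while stuffed pages use nearly all of them, and those get thrown to the end of the results.

```python
# Minimal sketch of phrase-based reranking (my reading of the Patterson
# patent; every name and number here is a hypothetical illustration).

# Assumption: we already have a map from each query phrase to the
# phrases statistically "related" to it.
RELATED_PHRASES = {
    "blue widgets": {"widget reviews", "buy widgets", "widget prices",
                     "widget repair", "wholesale widgets"},
}

# Hypothetical threshold: honest pages use a few related phrases;
# spam pages use an unnaturally large share of them.
SPAM_THRESHOLD = 4

def phrase_spam_score(page_text: str, query: str) -> int:
    """Count how many phrases related to `query` appear in the page."""
    text = page_text.lower()
    related = RELATED_PHRASES.get(query, set())
    return sum(1 for phrase in related if phrase in text)

def rerank(results: list[tuple[str, str]], query: str) -> list[str]:
    """Demote pages whose related-phrase count looks unnaturally high.

    `results` is a list of (url, page_text) in original rank order.
    """
    clean, demoted = [], []
    for url, text in results:
        (demoted if phrase_spam_score(text, query) >= SPAM_THRESHOLD
         else clean).append(url)
    return clean + demoted  # demoted pages fall to the end: the "950" effect
```

Note that a scheme like this would explain why the demotion can be by any amount: how far a page falls depends on how many clean pages rank above it, not on a fixed penalty size.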
So let's continue scrutinizing this new critter - we may not have it nailed yet, but I'm pretty sure we're closer. The discussion continues:
I've been lurking on the 950 penalty threads because I wasn't entirely sure what was going on, given that multiple factors do seem to be in play, but I have found some commonality with my own experiences.
1. Phrase-based penalties & URL-based penalties; I'm seeing both.
2. On phrase-based penalties, I can look at the allinanchor: for that KW phrase, find several *.blogspot.com sites, run a Copyscape check on the site with the phrase-based penalty, and will see these same *.blogspot.com sites listed...scraping my and some of my competitors' content (see the sketch after this list).
3. On URL-based penalties allinanchor: is useless because it seems to practically dump the entire site down to the dregs of the SERPs. Copyscape will still show a large amount of *.blogspot.com scraping though.
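For anyone who wants to automate step 2, here's a rough sketch of the cross-check. It assumes you've already exported the allinanchor: result URLs and the Copyscape match URLs to two text files, one URL per line - the file names and the blogspot focus are just my assumptions:

```python
# Rough sketch: find *.blogspot.com hosts that appear both in the
# allinanchor: results and in a Copyscape report for the same phrase.
# Assumes one URL per line in each (hypothetical) file.
from urllib.parse import urlparse

def hosts(path: str) -> set[str]:
    with open(path) as f:
        return {urlparse(line.strip()).hostname or ""
                for line in f if line.strip()}

allinanchor_hosts = hosts("allinanchor_results.txt")
copyscape_hosts = hosts("copyscape_matches.txt")

# Hosts in both lists are scrapers that also rank for your anchor phrase
suspects = {h for h in allinanchor_hosts & copyscape_hosts
            if h.endswith(".blogspot.com")}
for host in sorted(suspects):
    print(host)
```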
Getting rid of scrapers is a thousand-page thread in and of itself, but what I've been doing so far is a mixture of modifying titles, slightly modifying on-page text, getting some new links that match the new title, and where possible, turning in the *.blogspot.com junk as spam on both the Blogger and Google spam-report sides.
Normally scrapers wouldn't be a huge problem, but with Google continually tweaking their authority knob, those *.blogspot.com sites are becoming instant authorities, which is really, really bad. That has to stop as of last year. I don't have an answer as to why the penalty is sometimes phrase-based and sometimes URL-based, but I can say that I've seen them alternate on the same domain, I've seen just the phrase-based issue occur and resolve itself, and I've seen the URL-based issue occur and resolve itself.
Confusing isn't it?
So that's my vote...false authority scrapers that are causing temporary filtering as Google attempts to determine which is the more valid source, rectified by modification of both on-page and off-page tactics.
Thinking that PR is the be-all and end-all is very 2002 thinking - this is 2007.
PR is still an important anti-spam tool. If a page that has been around for a while has no links at all, it's most probably spam.
This seems to be unrelated to PR. In fact, hundreds of pages with lower or even 0 PR are ahead of my missing pages.
I wouldn't expect Google to make these changes visible in the toolbar; they are most likely only temporary.
Even if the toolbar shows PR0, a new page may still have some PR - and usually does.
Whether or not PR is the tool that Google uses to send sites to the background isn't even important. What matters is the question why better-content, non-spamming sites are temporarily deranked.
No.... look again, especially at cache dates.
>Whether or not PR is the tool that Google uses to send sites to the background isn't even important. What matters is the question why better-content, non-spamming sites are temporarily deranked.
Welcome to 2007. This is way more than a PR issue.
How else could Google rank the same URL in positions 2 and 943 at the same time?
Basically, the Google cache is not a cache, guys - it's a live download of your page popped into the browser.
No, it's not. The images are loaded live from your own server, but the HTML code is cached.
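You can see this for yourself in your access logs: when someone views Google's cached copy, the HTML comes from Google, but the image requests still hit your server with the cache page as the Referer. A quick sketch - the log path is an assumption, and it expects the common combined log format:

```python
# Sketch: count image requests whose Referer is a Google cache page,
# i.e. visitors reading the cached HTML while your server serves the
# images. Assumes a combined-format access log at a hypothetical path.
import re

LOG_PATH = "access.log"
# Capture the request path and the Referer field from each log line
line_re = re.compile(r'"GET (\S+) HTTP/[^"]*" \d+ \S+ "([^"]*)"')

cached_views = 0
with open(LOG_PATH) as log:
    for line in log:
        m = line_re.search(line)
        if not m:
            continue
        path, referer = m.groups()
        # Google cache URLs carry a q=cache: parameter in the Referer
        if "q=cache:" in referer and path.lower().endswith(
                (".gif", ".jpg", ".jpeg", ".png")):
            cached_views += 1

print(f"image hits from Google-cache views: {cached_views}")
```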
It may happen occasionally, but I've never seen a page that ranks #50 (for the poison word search) hit with this penalty (unless it was in a directory beneath a penalized page). High-scoring pages are the ones at risk, which means authority sites mistakenly get hit along with the spam sites being targeted.
My missing subdirectory and a few other pages are still gone. Still working on them.
I think getting a new cache of your changes is obviously important, but there is also an offline analysis which needs to be updated. The big problem is knowing what data is being used offline - and whether they are now using old data with previous 'fixes' that you have since abandoned!
annej - Are you ranking as high as before, despite your changes? Also, are you getting the same indented page combinations?