Forum Moderators: Robert Charlton & goodroi
[edited by: tedster at 8:00 pm (utc) on Feb 26, 2011]
More hard data on the sites that lost out in the Farm update
It looks to me like something was added to the algo that devalued sites, rather than a change in the ranking order. None of the SERPs I look at changed with sites moving up and down and all around. Sites got devalued, and other sites moved up as a result of them moving down. Does anyone else have this view?
I've looked at my site as well as others for which I have stats. Every page on an affected site moved down, but some moved down a little and some moved down a lot. For example, one page moved from #4 to #12 while another moved from #5 to #49.
There are pages that dropped eight spots that seem to be as strong as the pages that dropped 44 spots. There are other factors that play into the difference in the levels of drop, though. I'm seeing some of those factors, but I need to dig deeper to find more.
Yeah, now that I read what you're saying correctly ... Could be a 'near duplication and origination of information' type of assessment, e.g., "there are already 10 of these which seem to have been published first, so we'll discount the rest that have a high degree of similarity."
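To make the 'high degree of similarity' idea concrete: one common way such a check is sketched in the literature is word shingling plus Jaccard similarity. This is purely an illustration of the general technique, not a claim about Google's actual implementation; the example texts are made up.

```python
# Hypothetical sketch: scoring "near duplication" with word shingles
# and Jaccard similarity. Not Google's actual algorithm.

def shingles(text, k=3):
    """Return the set of k-word shingles (overlapping word windows) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets, from 0.0 to 1.0."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

original  = "The quick brown fox jumps over the lazy dog near the river bank"
rewrite   = "The quick brown fox jumps over the lazy dog by the river bank"
unrelated = "Stock prices rallied today as investors digested earnings reports"

sim_close = jaccard(shingles(original), shingles(rewrite))
sim_far   = jaccard(shingles(original), shingles(unrelated))
print(round(sim_close, 2), round(sim_far, 2))
```

A filter built on something like this could discount every page whose similarity to an earlier-spidered page exceeds some threshold, which would fit the "published first wins" behavior described above.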
My first guess here (and it is a guess) is that the Farm Update leans too heavily on the Scraper Update of a few weeks ago. And the Scraper Update does not yet do a very good job at identifying the original source of content, especially for extensively republished content.
If you got whacked, you are similar to a scraper site in some respects, even if you are a good site. Think about the attributes of a scraper site and then be the opposite of those bad boys. What does a scraper site not do, because they are robo-created and have crap content, that is a sign of quality? Figure out the tough signs of quality that a scraper could never fake, and do more of that. My two cents, anyway.
The content on my site is entirely unique.
The content on your site (pages affected) may not fit a QDRL (query determines reading level) type filter or assessment.
The content on your site may be totally unique, but you might not have been the first of, say, 10 to publish (or, more importantly, to get spidered with) something similar, topically and informationally, out of all the pages GoogleBot has spidered.
The content (documents affected) may fit a classifier pattern of a 'spammy site' based on phrase frequency, related phrases, and 'natural language' factors. +(User Behavior and/or Links)?
It's similar to Bayesian spam filters -- your email can be unique but it can still get flagged as spam based on its characteristics.
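The Bayesian spam-filter analogy can be shown with a toy naive Bayes classifier: unique text can still score as "spam" if its word statistics look like the spam class. The training corpora below are made up, and equal class priors are assumed (same number of training docs in each class, so the priors cancel); this illustrates the general technique only.

```python
import math
from collections import Counter

# Toy training data (invented) for a naive Bayes text classifier.
spam_docs = ["cheap pills buy now", "buy cheap watches now", "win money now"]
ham_docs = ["meeting notes for the project", "lunch with the project team",
            "notes on the quarterly report"]

def train(docs):
    """Return per-word counts and the total word count for a class."""
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam_docs)
ham_counts, ham_total = train(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(words, counts, total):
    """Log-likelihood of the words under one class, with add-one smoothing."""
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in words)

def is_spam(text):
    """Classify by comparing class log-likelihoods (equal priors assumed)."""
    words = text.split()
    return (log_prob(words, spam_counts, spam_total) >
            log_prob(words, ham_counts, ham_total))

print(is_spam("buy cheap pills"))        # True: word stats match the spam class
print(is_spam("notes for the meeting"))  # False: word stats match the ham class
```

The point of the analogy: neither test message appears verbatim in the training data, yet each gets classified by its statistical resemblance to a class, just as a unique page can still match a "spammy site" pattern.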
[edited by: vordmeister at 8:56 pm (utc) on Feb 28, 2011]
There are so many content-scraper sites ranking ahead of us with our content (or content copied from the MFG).