Forum Moderators: Robert Charlton & goodroi
Google Updates and SERP Changes - March 2011
< continued from [webmasterworld.com...] >
< related Panda Farm Update [webmasterworld.com] >
New Chrome extension: block sites from Google's web search results
Monday, February 14, 2011 | 12:00 PM
Today the Google web search team launched a new Chrome extension to block low-quality sites from appearing in Google’s web search results. Read more in the post below, cross-posted from the Official Google Blog. - Ed
[chrome.blogspot.com...]
Also - [webmasterworld.com...]
I think user behaviour data is being underestimated in this thread. Each website will have an in-depth profile being built that feeds into a potential quality assessment by Google. What say you? [edited by: tedster at 8:15 pm (utc) on Mar 15, 2011]
By the way, earlier I reported that my biggest drop was a -300 average position on a unique content page with NO ads, well-written, about 1,000 words, and spanked down 300 positions. So I started analyzing the page, and the VERY first thing I found was that a major word in the page header <h3> was misspelled. To boot, the title/desc metatags were very short and were exact duplicates of each other. Taken together, these were all signs of poor quality on an otherwise GREAT piece. So I immediately made changes and corrected the spelling error in the <h3> tag this past Saturday. As of today, WMT reports that page has GAINED 200 positions!
To respond to that challenge, we recently launched a redesigned document-level classifier that makes it harder for spammy on-page content to rank highly.
The new classifier is better at detecting spam on individual web pages, e.g., repeated spammy words—the sort of phrases you tend to see in junky, automated, self-promoting blog comments.
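As a rough illustration of the kind of repeated-word signal described above (the tokenizer and scoring here are a hypothetical sketch, not anything Google has published), keyword-repetition density can be measured by how much of a page's text is accounted for by its single most repeated word:

```python
import re
from collections import Counter

def repeated_keyword_score(text: str) -> float:
    """Fraction of all tokens accounted for by the single most
    repeated word -- high values suggest keyword-stuffed text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    most_common_count = Counter(tokens).most_common(1)[0][1]
    return most_common_count / len(tokens)

# A stuffed blog-comment-style string scores far higher than normal prose.
spammy = "cheap pills cheap pills buy cheap pills cheap"
normal = "we recently launched a redesigned document level classifier"
spam_score, normal_score = repeated_keyword_score(spammy), repeated_keyword_score(normal)
```

A real classifier would combine many such signals; this one alone would misfire on legitimately repetitive text like lyrics or glossaries.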
Manipulation techniques that can, for example, be used are: using the domain name of a once-legitimate document; filling the text of the document, or the anchor text of links in the document, with certain popular query terms; automatically creating links from other documents to the manipulated document.
Any one or a variety of document signals may be used by various embodiments of the invention. Examples of document signals include, without limitation, one or more of the following:
- The text of the document -- whether the text appears to be normal English (or other language) text or text generated by a computer, such as containing a large number of keywords and not containing any sentences;
- Meta tags -- whether the document has meta tags and whether the meta tags contain a large number of repeated keywords;
- Redirect -- whether there is any script in the document, such as JavaScript or HTML script, that redirects a user to another document upon access;
- Similarly colored text and background -- whether there is a large amount of text in the document that is the same color as the background of the document (systems and methods for detecting hidden text and links in articles are described in U.S. patent application Ser. No. 10/726,483, filed Dec. 4, 2003, which is hereby incorporated by this reference);
- A large number of random links -- whether the document contains a large number of unrelated links;
- History of the document -- whether the text of the document, the link structure of the document, or the ownership of the website on which the document appears has changed recently;
- Anchor text -- whether there are a lot of links on the page and there is little or no text that is not anchor text.
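To make the last signal in that list concrete, here is a minimal sketch (my own illustration, not code from the patent) of computing an anchor-text ratio: the fraction of a page's visible text that sits inside <a> tags, using only Python's standard-library HTML parser:

```python
from html.parser import HTMLParser

class AnchorRatioParser(HTMLParser):
    """Accumulates total visible text length and the portion inside <a> tags."""
    def __init__(self):
        super().__init__()
        self.anchor_depth = 0   # nesting level of open <a> tags
        self.anchor_chars = 0
        self.total_chars = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.anchor_depth += 1

    def handle_endtag(self, tag):
        if tag == "a" and self.anchor_depth:
            self.anchor_depth -= 1

    def handle_data(self, data):
        text = data.strip()
        self.total_chars += len(text)
        if self.anchor_depth:
            self.anchor_chars += len(text)

def anchor_text_ratio(html: str) -> float:
    parser = AnchorRatioParser()
    parser.feed(html)
    return parser.anchor_chars / parser.total_chars if parser.total_chars else 0.0

# A link-farm-style page scores near 1.0; a normal paragraph scores low.
linky = '<a href="/a">buy</a><a href="/b">cheap</a><a href="/c">now</a>'
prose = '<p>Some ordinary paragraph with <a href="/x">one link</a> in it.</p>'
```

A page consisting almost entirely of anchor text would push this ratio toward 1.0, which is exactly the "little or no text that is not anchor text" condition the patent names.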
These rules can be designed manually to determine whether all documents in the cluster or a subset of documents in the cluster are manipulated. Alternatively, a machine learning approach can be used to define the rules. With the machine learning approach, a set of clusters, known as a training set, can be hand classified as manipulated or not manipulated. This information is provided to a classification system to train the system and allow the system to compute which signals to use and in what way.
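The training loop described above can be sketched in miniature. The signal values, hand labels, and the perceptron learner below are illustrative stand-ins (the patent does not specify a learning algorithm); the point is only that a hand-classified training set lets the system learn which signals matter and how to weight them:

```python
# Each training example: a vector of signal values and a hand-assigned label.
# Signals here (hypothetical): keyword repetition, anchor-text ratio,
# hidden-text flag. Label 1 = manipulated, 0 = not manipulated.
training_set = [
    ((0.50, 0.90, 1.0), 1),
    ((0.45, 0.80, 0.0), 1),
    ((0.10, 0.05, 0.0), 0),
    ((0.08, 0.20, 0.0), 0),
]

def train(examples, epochs=50, lr=0.1):
    """Simple perceptron: learns one weight per signal plus a bias."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    """1 = manipulated, 0 = not manipulated."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

w, b = train(training_set)
```

After training, the learned weights encode "which signals to use and in what way": a new page's signal vector is scored against them, e.g. `classify(w, b, (0.6, 0.85, 1.0))` flags a stuffed, link-heavy page as manipulated.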
Are people overdoing the internal linking?
So Google has a methodology they can put to work against article spinning.
This reminds me of the -50 manual penalty, where I waited 18 months and nothing happened. They simply closed all doors, which is the case right now with the Panda mess.
I too have seen scrapers take my content and rank higher.
The odd thing is that now our home page outranks the more relevant subdomain that used to rank for those keywords, and this has happened across the board for us. Has anyone else seen something similar?
Even when you start to see changes, don't stop there -- make your site into the best resource of its kind.