...we used our standard evaluation system that we've developed, where we basically sent out documents to outside testers. Then we asked the raters questions like: "Would you be comfortable giving this site your credit card? Would you be comfortable giving medicine prescribed by this site to your kids?"
There was an engineer who came up with a rigorous set of questions, everything from: "Do you consider this site to be authoritative? Would it be okay if this was in a magazine? Does this site have excessive ads?"
...we actually came up with a classifier to say, okay, IRS or Wikipedia or New York Times is over on this side, and the low-quality sites are over on this side. And you can really see mathematical reasons.
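To make the classifier idea concrete, here is a toy sketch of what a linear separator over per-site quality signals might look like. The features (ad density, average words per page, duplication score), the sample data, and the choice of logistic regression are all invented for illustration; nothing is publicly known about Google's actual model.

[code]
# Toy quality classifier: logistic regression over a few hypothetical
# per-site signals. Features, data, and labels are invented; this shows
# the general shape of a linear separator, not Google's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [ad_density, avg_words_per_page (thousands), duplication_score]
sites = np.array([
    [0.05, 1.20, 0.02],  # reference-style site (think Wikipedia)
    [0.10, 0.90, 0.05],  # established news site
    [0.55, 0.25, 0.70],  # thin, ad-heavy pages
    [0.60, 0.18, 0.85],  # scraper-style site
])
labels = np.array([1, 1, 0, 0])  # 1 = high quality, 0 = low quality

model = LogisticRegression().fit(sites, labels)

# The wide margin between the two groups is the "mathematical reason"
# a simple separator can tell them apart.
print(model.predict_proba([[0.40, 0.30, 0.60]])[0][1])  # P(high quality)
[/code]

With signals this cleanly separated, almost any linear model draws the same boundary; the hard part in practice is engineering signals the crawler can actually measure.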
The bot can't see AdSense, but Google knows it's there.
I wanted to update this thread with some additional guidance for those who have sites that may be affected by this update.
Our recent update is designed to reduce rankings for low-quality sites, so the key thing for webmasters to do is make sure their sites are the highest quality possible. We looked at a variety of signals to detect low-quality sites. Bear in mind that people searching on Google typically don't want to see shallow or poorly written content, content that's copied from other websites, or information that is just not that useful. In addition, it's important for webmasters to know that low-quality content on part of a site can impact a site's ranking as a whole. For this reason, if you believe you've been impacted by this change you should evaluate all the content on your site and do your best to improve the overall quality of the pages on your domain. Removing low-quality pages or moving them to a different domain could help your rankings for the higher-quality content.
We've been reading this thread within the Googleplex and appreciate both the concrete feedback as well as the more general suggestions. This is an algorithmic change and it doesn't have any manual exceptions applied to it, but this feedback will be useful as we work on future iterations of the algorithm.
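For anyone wondering how to "evaluate all the content on your site" at scale, here is a minimal thin-content audit sketch: pull every URL from the sitemap and flag pages whose visible text falls below a word-count threshold. The sitemap URL and the 200-word cutoff are placeholders of mine, not anything Google has specified, and word count is only a crude proxy for "shallow."

[code]
# Rough thin-content audit: flag pages whose visible text falls below a
# word-count threshold. The sitemap URL and threshold are placeholders.
import re
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP = "https://example.com/sitemap.xml"  # hypothetical
MIN_WORDS = 200                              # arbitrary cutoff

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def visible_words(html):
    # Drop script/style blocks, then strip remaining tags.
    text = re.sub(r"<(script|style)[^>]*>.*?</\1>", " ", html,
                  flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", text)
    return len(text.split())

tree = ET.fromstring(fetch(SITEMAP))
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
for loc in tree.findall(".//sm:loc", ns):
    url = loc.text.strip()
    words = visible_words(fetch(url))
    if words < MIN_WORDS:
        print(f"THIN ({words} words): {url}")
[/code]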
reduce rankings for low-quality sites,
If I search "yellow pages <business> <city>", and the first directory site has exactly the phone number I need, I don't need to browse anymore...
mind that people searching on Google typically don't want to see shallow or poorly written content, content that's copied from other websites, or information that is
Interesting PDF... So if bounce rate off of Google ads helps page rank, should I be spam-clicking my Google ads and going x pages deep to fake a "low bounce rate"? I wonder if this isn't just about bounce rates off the sponsored ads, but about the general SERPs too.
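For what it's worth, here's a rough sketch of the kind of "short click" rate people are speculating about: of the visitors who land on a page from a results page, what fraction leave within a few seconds? The log format and the 10-second threshold are my assumptions; nobody outside Google knows what, if anything, they measure.

[code]
# Hypothetical "short-click" rate: of search visitors landing on a page,
# what fraction bounce within a few seconds? Data and threshold invented.
SHORT_CLICK_SECONDS = 10

# (landing_url, dwell_seconds) pairs, e.g. reconstructed from analytics
visits = [
    ("/widgets", 4), ("/widgets", 120), ("/widgets", 2),
    ("/about", 45), ("/widgets", 7),
]

def short_click_rate(visits, page):
    dwell = [d for url, d in visits if url == page]
    return sum(d < SHORT_CLICK_SECONDS for d in dwell) / len(dwell)

print(f"/widgets short-click rate: {short_click_rate(visits, '/widgets'):.0%}")
[/code]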
If I search "yellow pages <business> <city>", and the first directory site has exactly the phone number I need, I don't need to browse anymore... but did I just hurt that site?
There's probably a time element in play too. Perhaps a range of times, and your site will get plotted on a graph.
It takes at least 10 seconds to read a sentence, so if someone is bouncing faster than that, is it because the site was loading slowly and they got fed up...
Lots of people search in tabs. I mean, they look at the Google results, open in tabs the pages they think could have what they're searching for, and then read them one after another. How could G separate this user behaviour (which I think is quite common) from bouncing off all the pages?
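One speculative way to separate the two patterns: look at the gaps between successive result clicks in a single query session. A burst of clicks a second or two apart looks like tabs being opened for later reading; click, pause, return, click again looks more like rejection. The thresholds and data below are invented to illustrate the heuristic, nothing more.

[code]
# Speculative heuristic: classify a query session as "tabbed browsing" vs
# "pogo-sticking" from the timestamps of its result clicks. Thresholds and
# data are invented; this is a guess at the distinction, not Google's logic.
TAB_BURST_SECONDS = 3  # clicks this close together look like tab-opening

def classify(click_times):
    gaps = [b - a for a, b in zip(click_times, click_times[1:])]
    if not gaps:
        return "single click"
    burst = sum(g <= TAB_BURST_SECONDS for g in gaps)
    return "tabbed browsing" if burst >= len(gaps) / 2 else "pogo-sticking"

print(classify([0, 1, 2, 3]))  # four clicks in 3s -> tabbed browsing
print(classify([0, 25, 60]))   # visit, return, visit -> pogo-sticking
print(classify([0]))           # single click
[/code]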
EU? Or .co.uk, .fr, .de, or .sp? (And if any of the latter group, which specifically? Hosted where?) This group is hosted outside the USA. Many of these bookmarks are 6-7 years old. The sites still exist, but when I searched for them again in G, most no longer appear in the first three pages of results, even though they were on page one when I found them back then. As I said, just an observation.
If you're in the Googleplex... can you let us know how YOU know who copied which content from which other site? Who had it first? Who copied it? Do we have to resort to DMCAs to supply that info? We all recognize that scraping is the problem, but why are so many ORIGINAL CONTENT CREATORS getting creamed?
I believe Google knows they screwed something up in the US, but putting the old results back would amount to publicly admitting: "Yes, we screwed up, and we've put everything back..."