
Google SEO News and Discussion Forum

"Introduction to Google Ranking" post on Official Google Blog
Hach3
msg:3694558 - 4:48 pm on Jul 9, 2008 (gmt 0)

An interesting post on the Google Blog that gives an overview of Google ranking and some of the criteria that Google uses for improving their search results.

Introduction to Google Ranking [googleblog.blogspot.com]

I thought this principle of "no query left behind" was interesting, but I'd love to find out more about how they determine what "less than ideal results for any query" are.

Do they monitor specific keyword phrases and use those as benchmarks? Are they low volume searches? Are they numerous queries for the same keyword phrase? What indicators do they use to modify their results?

What factors do we know about?
An old NY Times article gave an example for local results, but I'm curious what other factors have been mentioned that I've missed.

Curious if the upcoming posts will have any more insight.

I was also curious what the impact is of their "We make about ten ranking changes every week". I don't seem to notice these changes much for my sector, and I wondered if they are for concentrated niches or area-specific results.

Curious if anyone has any insight into the matter.

[edited by: Robert_Charlton at 6:10 pm (utc) on July 9, 2008]
[edit reason] added Permalink [/edit]

 

Receptional Andy
msg:3694784 - 8:15 pm on Jul 9, 2008 (gmt 0)

"less than ideal results for any query"

From Google's point of view, they likely want to deliver the best result first, or certainly within the top 3 results.

I don't think I've heard anything specific about how Google judges whether results are 'ideal' or not, but I would speculate that a lot of user data is involved. I imagine the worst-case scenario for a search is the user not clicking on any results at all, especially if they click through multiple result pages. Of course, this has to be weighed against the fact that there are bad search queries as well as bad results. "Less than ideal" search queries are one of the reasons Google offers 'related searches' for many words and phrases.

Another factor may well be use of the back button: if a user clicks a result and then comes straight back to the results to click a different one, the chances are high that the result didn't match their search.
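
Purely as a speculative sketch of how click signals like these might be rolled up per query (nothing here is confirmed Google practice; the log fields, the 10-second "quick return" threshold, and the scoring are invented for illustration):

```python
# Speculative sketch only: scoring per-query "satisfaction" from click logs.
# The log fields (query, clicked, dwell_seconds) and the 10-second quick-return
# threshold are invented assumptions, not anything Google has confirmed.

from collections import defaultdict

def satisfaction_scores(log_entries, quick_return_secs=10):
    """Return a rough satisfaction ratio per query from search log entries.

    Each entry is a dict like:
        {"query": "blue widgets", "clicked": True, "dwell_seconds": 4.2}
    A search counts as unsatisfied if the user clicked nothing at all,
    or clicked and bounced back within quick_return_secs (pogo-sticking).
    """
    totals = defaultdict(int)
    unsatisfied = defaultdict(int)
    for entry in log_entries:
        q = entry["query"]
        totals[q] += 1
        no_click = not entry["clicked"]
        pogo_stick = entry["clicked"] and entry["dwell_seconds"] < quick_return_secs
        if no_click or pogo_stick:
            unsatisfied[q] += 1
    # Lower scores flag "less than ideal results" candidates for a closer look.
    return {q: 1 - unsatisfied[q] / totals[q] for q in totals}

log = [
    {"query": "blue widgets", "clicked": False, "dwell_seconds": 0},
    {"query": "blue widgets", "clicked": True, "dwell_seconds": 3},
    {"query": "blue widgets", "clicked": True, "dwell_seconds": 95},
]
print(satisfaction_scores(log))  # {'blue widgets': 0.333...}
```

Queries whose score drops well below their peers would be obvious candidates for the "less than ideal results" bucket.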

And of course, there are also obviously poor results like spam, which among other methods can be identified by direct user reports (e.g. the "dissatisfied" link) and by the people Google uses specifically to identify spam within results.

I imagine that clear patterns emerge from the substantial user data Google has, enabling a pretty reliable judgement of whether a particular result 'tweak' improved user satisfaction or not.

tedster
msg:3694928 - 10:57 pm on Jul 9, 2008 (gmt 0)

Google also uses a group of human editors to evaluate the SERPs. The help wanted ads for these Google positions have been visible off and on for several years.

No single human editor can directly change the ranking of a url. Instead, a number of them (say, 5 per SERP) rate the same SERP independently of each other. If a url in some position is seen by all the raters as either better or worse than its current ranking (according to the specific criteria they are trained to use), then that human input can be integrated back into the algo and either raise or lower the ranking of that url.

Here's our thread on the human editorial input patent [webmasterworld.com]. And here's a related thread about Google Quality Raters [webmasterworld.com].

kapow
msg:3698220 - 7:00 pm on Jul 14, 2008 (gmt 0)

10 ranking changes per week... You could never understand it... You can't keep up... Just pay for ranking... "There are no droids here"...

AjiNIMC
msg:3699660 - 9:03 am on Jul 16, 2008 (gmt 0)

90% of the search algo factors are 90% stable, and it takes only 10% of your time to learn them. So it's good to ignore the other 10%, which takes 90% of your time, and focus on building your business. Let that 10% be studied by SEO companies :).

I was talking to an SEO firm and they said they have an application running to check for algo changes and so on. I wonder why people need to do that.

For me it's simple: "Make it talkable and you will win".

Whitey
msg:3700333 - 11:26 pm on Jul 16, 2008 (gmt 0)

In the patent, the human editorial input is used in a very creative and algorithmic way. [webmasterworld.com...]

Tedster - can you explain "creative and algorithmic way"?

tedster
msg:3700370 - 12:16 am on Jul 17, 2008 (gmt 0)

1. Give the SERPs for a specific query to a group of human editors, trained in Google's rating criteria and working independently. I've heard a rumor that a group was made up of five individuals. Unverified, of course, but five makes good sense statistically.

2. Get their ratings for each url that appears in that SERP - whether it's really good for the user, should be higher, or should be lower.

3. If every one of the editors agrees, then flag that domain with a factor to raise or lower it for that particular SERP. If there's an obvious disagreement (especially over whether something is or isn't spam), then kick it up to a supervisory level.

4. Rinse and repeat.

5. Also take that feedback and see how the algorithm might be made stronger.
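
A minimal sketch of steps 2-3 as described above, assuming a panel of five raters and simple rating labels; the label names, panel size, and +1/-1 adjustment values are illustrative guesses, not details taken from the patent text:

```python
# Speculative sketch of the unanimous-agreement step described above.
# The rating labels, the panel of five, and the +1/-1 adjustments are
# illustrative assumptions, not details from the patent.

def aggregate_ratings(ratings):
    """ratings: one verdict per rater for a single url on a single SERP,
    each one of 'higher', 'lower', 'fine', or 'spam'."""
    if len(set(ratings)) == 1:               # every rater agrees
        verdict = ratings[0]
        if verdict == "higher":
            return {"action": "boost", "factor": +1}
        if verdict in ("lower", "spam"):
            return {"action": "demote", "factor": -1}
        return {"action": "none"}
    if "spam" in ratings:                    # is/isn't-spam disagreement
        return {"action": "escalate_to_supervisor"}
    return {"action": "none"}                # mixed opinions: no direct change

# Five independent raters looking at one url on the query's SERP:
print(aggregate_ratings(["lower"] * 5))                             # unanimous demotion
print(aggregate_ratings(["spam", "fine", "fine", "fine", "fine"]))  # kicked upstairs
```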

Whitey
msg:3700407 - 12:59 am on Jul 17, 2008 (gmt 0)

appears in that SERP

Do you mean [ SERP ] as an industry or subject vertical, or a regional SERP [ e.g. google.co.uk ] ?

tedster
msg:3700421 - 1:38 am on Jul 17, 2008 (gmt 0)

The patent makes it pretty clear - the editorial input is for the rankings on a specific query term (Search Engine Results Page = SERP). If you don't rank for important terms in your industry, you won't get inspected ;)

AjiNIMC
msg:3700454 - 3:05 am on Jul 17, 2008 (gmt 0)

Tedster,

Not from any patent, just some logical thoughts: if a website produces some negative signals (maybe because of bad linking patterns, content issues, etc.), can it get inspected manually before triggering some filters?

I remember it happening with Yahoo once. A friend of mine was ranking in the top 10 for a very highly competitive term. All of a sudden they were thrown to nowhere while the rest of the SERP remained the same (so not on the basis of a query term; it looked site-specific). After talking to some people inside Yahoo, we learned it was manual filtering. Only after a series of email exchanges could they get it removed.

Whitey
msg:3700509 - 5:05 am on Jul 17, 2008 (gmt 0)

If you don't rank for important terms in your industry, you won't get inspected ;)

I wonder which subjects or industries would be most prone to these ranking inspections. Any ideas?

[edited by: Whitey at 5:14 am (utc) on July 17, 2008]

tedster
msg:3700516 - 5:20 am on Jul 17, 2008 (gmt 0)

Rough guess (very rough) - they'd look at the top 70% of all queries by volume, rather than doing anything industry-specific. They also might look at any "burstiness" that hadn't been checked before, or not recently enough. From what we can tell, Google has thousands of human evaluators, and that means a lot of human input is possible.
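
If the "top 70% of all queries by volume" guess is roughly right, the selection itself would be trivial: sort queries by volume and keep adding them until they cover 70% of total searches. A toy sketch, with invented query counts:

```python
# Toy sketch of "top 70% of all queries by volume": sort queries by volume
# and keep adding them until they cover 70% of total searches. The query
# counts below are invented for illustration.

def queries_to_review(query_volumes, coverage=0.70):
    total = sum(query_volumes.values())
    selected, covered = [], 0
    for query, volume in sorted(query_volumes.items(),
                                key=lambda kv: kv[1], reverse=True):
        if covered / total >= coverage:
            break
        selected.append(query)
        covered += volume
    return selected

volumes = {"blue widgets": 50000, "widget repair": 30000,
           "antique widgets": 15000, "widget collectors": 5000}
print(queries_to_review(volumes))  # ['blue widgets', 'widget repair']
```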

I also think they're working on statistical means to back up (and eventually replace a lot of) the human editorial input.

Along the lines of the need for human editorial input, I recently read some heavy math that demonstrated pretty conclusively that search as a whole is not a process that submits to ordinary statistical analysis - standard deviations and all that jazz. In other words, AI as it's normally conceived is not going to take over here, because the model keeps getting surprised too often. That may be why we hear about Google-style analysis working on "no modeling at all [webmasterworld.com]".

There was an interview with a top Google employee earlier this year where he said that they have a machine-learning version of the algo running in parallel with the live one that gets intensive, eyes-on checking. And they don't feel that the machine learning version is up to the job, as of now, so it doesn't run live.

I've been wondering recently if all this cycling we hear reported might be an artifact of trying to integrate some machine learning into the live SERPs. No one's likely to tell us, of course, but it's fun for me to think about anyway - I can't stop myself, in fact.
