| 1:43 am on Aug 26, 2006 (gmt 0)|
Maybe we could get a little more explanation as to what this patent is about. I can't tell if it's about adding editorial comments to SERP results or modifying something like PageRank by looking at whether a reference to a site was positive or negative.
Sometimes I wonder if Google doesn't file patents just to muddy the SEO waters.
| 1:49 am on Aug 26, 2006 (gmt 0)|
Definitely going to have to read this one when I don't have so much going on.
|A server improves the ranking of search results. The server includes a processor and a memory that stores instructions and a group of query themes. The processor receives a search query containing at least one search term, retrieves one or more objects based on the at least one search term and determines whether the search query corresponds to at least one of the group of query themes. The processor then ranks the one or more objects based on whether the search query corresponds to at least one of the group of query themes and provides the ranked one or more objects to a user. |
Huh? >> based on the at least one search
Rush to the patent office?
| 1:50 am on Aug 26, 2006 (gmt 0)|
I'm guessing it means that, should they want to, Google can hire teams of people to cleanse the results pages for valuable and popular keywords. Plus they could weight results in favour of Google-friendly companies.
How that would warrant a patent, I don't know. It would be a tiny change to the existing results code. Seems the US Patent Office will give out patents to anyone with a bit of cash.
| 1:52 am on Aug 26, 2006 (gmt 0)|
|How that would warrant a patent, I don't know. It would be a tiny change to the existing results code. Seems the US Patent Office will give out patents to anyone with a bit of cash. |
I think our patent system is so underfunded that it would give out a patent to anyone who put together a technical sounding paper. Just look at some of the patents that have been invalidated in the past few years.
| 2:21 am on Aug 26, 2006 (gmt 0)|
Here's my current, and very simplistic reading --
1. Get a bunch of people to find and rate really good websites and really spammy websites for certain searches
2. Make their rating into a parameter
3. Look at what the algo says the top results "should" be for a particular search
4. See if that search is in one of the topic areas that has an editorial rating
5. If so, look to see if there is some relationship to either the good guy list or the bad guy list
6. Shift the search rankings according to whatever parameter the editors generated.
7. Serve the shifted results to the user.
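For anyone who wants to see the flow, here's a rough Python sketch of how those seven steps might fit together. Everything in it - the site lists, the themes, the weights - is made up purely for illustration; it's just my reading of the claim, not Google's actual code.

```python
# Hypothetical sketch of the theme-based re-ranking described above.
# All data structures, names, and weights are invented for illustration.

# Steps 1-2: editors' ratings, boiled down to a per-site parameter
editor_ratings = {
    "goodsite.example": +0.5,   # "good guy" list
    "spamsite.example": -0.8,   # "bad guy" list
}

# Step 4: topic areas that have an editorial rating
rated_themes = {"travel", "health"}

def rerank(query_theme, results):
    """results: list of (url, algo_score) pairs from the normal algorithm."""
    # Steps 3-5: only shift results when the query falls in a rated theme
    if query_theme not in rated_themes:
        return results
    shifted = []
    for url, score in results:
        site = url.split("/")[0]
        # Step 6: shift by whatever parameter the editors generated
        score += editor_ratings.get(site, 0.0)
        shifted.append((url, score))
    # Step 7: serve the shifted results
    return sorted(shifted, key=lambda pair: pair[1], reverse=True)
```

The point being: the human input is just one extra term folded into the score, and queries outside the rated themes pass through untouched.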
| 2:26 am on Aug 26, 2006 (gmt 0)|
What I find immediately notable, if I'm interpreting this correctly, is that it breaks with Google's policy of processing everything via algorithm, and introduces a defined human element.
| 2:31 am on Aug 26, 2006 (gmt 0)|
>> introduces a defined human element.
Matt Cutts blogged or mentioned that in a post about a year or so ago (and maybe mentioned it at a PubCon). I think the gist of it was "if it can be made scalable, human elements will be used".
Yeah, I'm probably misquoting him, but I do recall his response being vague enough to the point where a misquote does not matter. :)
And then there was the webspam leak from a few months ago involving that process paper / students in Germany (?).
| 2:33 am on Aug 26, 2006 (gmt 0)|
With almost unlimited resources, adding some carbon based units is scalable. ;)
| 3:03 am on Aug 26, 2006 (gmt 0)|
|What I find immediately notable, if I'm interpreting this correctly, is that it breaks with Google's policy of processing everything via algorithm, and introduces a defined human element. |
They've always used a human element. PageRank, Google's original Unique Selling Proposition, is a formula based on humans' decisions (in theory, editorial decisions) when linking.
I wonder if this isn't related to the Google manual for spam raters that was leaked by a Dutch SEO blog last year, and which has been mentioned a few times in this forum?
| 3:05 am on Aug 26, 2006 (gmt 0)|
|With almost unlimited resources, adding some carbon based units is scalable. |
Pigeons are really cheap carbon-based units, and if I recall correctly, PigeonRank was the real meaning of the acronym "PR". ;)
| 3:09 am on Aug 26, 2006 (gmt 0)|
EFV, Google has repeatedly stated that they prefer 'automated' solutions. This is the FIRST time they've broken with that, IF I'm reading that patent correctly.
It's quite obvious that any system created by humans can't entirely eliminate the 'human' element. But adding active human editors is a switch from their previous PR.
[edited by: digitalghost at 3:26 am (utc) on Aug. 26, 2006]
| 3:22 am on Aug 26, 2006 (gmt 0)|
Sounds to me like they may be considering offering a VOTE feature: YES if you like it, NO if you don't...
I think this would be fantastic, especially for websites like mine that shine above most of my competition and have unrivaled product diversity and helpful content.
I would love a "ban" feature... i.e. "Don't ever show me this stupid webpage again"... because let's face it, when we're trying to find something difficult on the web we often do dozens of searches, and the last thing we want to see is the same results as before.
They could even take the voting concept one step further by trying to establish the authority of the voters, kind of like PageRank now. For example, they could have paid web spam reporters at the Googleplex, and then, by comparing the spam votes of ordinary Google users against those of their own trusted spam reporters, give higher authority to the users who find the best spam at the highest rate.
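To make that voter-authority idea concrete, here's a toy sketch. The agreement metric and all the names are my own invention, just to show the shape of it: users whose votes track the trusted in-house reporters get more weight.

```python
# Hypothetical sketch of weighting user spam votes by "voter authority",
# as suggested above. Votes are +1 (spam) or -1 (not spam) per URL.

def voter_authority(user_votes, trusted_votes):
    """Score a user 0.0-1.0 by how often their votes agree with
    the trusted in-house spam reporters."""
    if not user_votes:
        return 0.0
    agree = sum(1 for url, vote in user_votes.items()
                if trusted_votes.get(url) == vote)
    return agree / len(user_votes)

def weighted_spam_score(url, all_user_votes, trusted_votes):
    """Sum users' votes on a URL, each weighted by that user's authority."""
    total = 0.0
    for user_votes in all_user_votes:
        if url in user_votes:
            weight = voter_authority(user_votes, trusted_votes)
            total += weight * user_votes[url]
    return total
```

A user who disagrees with the trusted raters ends up with zero weight, so their votes simply stop counting.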
| 3:55 am on Aug 26, 2006 (gmt 0)|
I just made an interesting find -- eWeek covered the new Google patent [googlewatch.eweek.com] on Wednesday, but their take on it was much more in a web 2.0, "social search" direction. My reading is more like what shri and europeforvisitors mentioned above -- the eval.google.com story from June 2005 [webmasterworld.com]
| 3:57 am on Aug 26, 2006 (gmt 0)|
My take: It means that the scoring of a *page* can be modified by a prior human opinion of the entire *site*, even though a human has never rendered an opinion on the page in question.
| 4:01 am on Aug 26, 2006 (gmt 0)|
Good observation, jk3210, and a potential loophole if that's the way it really is.
| 4:04 am on Aug 26, 2006 (gmt 0)|
|...introduces a defined human element. |
Seems to me Google has done this with Google Co-Op, where they're using humans with "specific expertise" to suggest and tag authoritative sites. Other humans subscribe, and perhaps their usage patterns become a measure of quality. When results become good enough, they appear on the main page.
Strikes me that this is a hybrid people-seeded/algo-driven variant of TrustRank. The refinements to Google's health and medical results produced by this system are impressive.
Using a human-selected set of results as a seed, btw, goes back to Google's early days. I feel that they owe a lot of their initial quality to the directories, Yahoo and then ODP, that they were able to spider.
They're probably always looking for dependable seeding mechanisms to provide cores for TrustRank-type overlays to their algo.
[edited by: Robert_Charlton at 4:06 am (utc) on Aug. 26, 2006]
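For anyone curious, the TrustRank-style seeding idea mentioned above is simple enough to sketch: trust starts on a small human-picked seed set and flows out along links with a decay factor. This is a toy version of the published TrustRank concept, not anything from Google; the graph and parameters are made up.

```python
# Toy TrustRank-style propagation: trust originates at human-selected
# seed pages and spreads along outlinks, decaying as it goes.

def trustrank(links, seeds, damping=0.85, iterations=20):
    """links: {page: [pages it links to]}; seeds: set of trusted pages."""
    pages = set(links) | {p for outs in links.values() for p in outs}
    # All initial trust sits on the human-picked seeds
    seed_mass = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(seed_mass)
    for _ in range(iterations):
        # Teleport back to the seed set instead of to all pages
        nxt = {p: (1 - damping) * seed_mass[p] for p in pages}
        for page, outs in links.items():
            if outs:
                share = damping * trust[page] / len(outs)
                for out in outs:
                    nxt[out] += share
        trust = nxt
    return trust
```

The only difference from plain PageRank is that the "teleport" probability goes back to the editor-chosen seeds rather than being spread uniformly, which is exactly what makes the human selection matter.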
|smells so good|
| 4:07 am on Aug 26, 2006 (gmt 0)|
Not to stray too far OT, but I got a sale from someone doing just that, according to him for the last three years. He was waiting for the right time. There are many valid reasons to search for a page.
|and the last thing we want to see is the same results from before |
Carbon based input would be welcomed. I haven't read the patent yet, does it include an automated system to deal with site owners?
| 4:55 am on Aug 26, 2006 (gmt 0)|
Sounds a bit like Garry Kasparov vs IBM's Deep Blue to me. "It doesn't play like a computer, it feels as though it has been guided by a human hand".
I can hear it now ... "why don't they release the logs?"
| 5:21 am on Aug 26, 2006 (gmt 0)|
I wonder if it's going to be some kind of ranking ability for searchers. If so, it's already being done by other companies, so how can they patent that?
| 6:17 am on Aug 26, 2006 (gmt 0)|
Google long ago gave up on the algo alone; GoogleGuy even commented on that when the story about Google-employed students rating sites broke.
They could simply put a link on each result and let USERS evaluate whether the site matched the search term: if you searched for "Lexus reviews," was the link you just clicked on useful? That may be confusing and maybe intrusive, but who knows... maybe do it only for people who have a Google account, or who agree in advance to "help" Google.
If this is a new idea, can I get #1 rank for 12 months ;)?
| 7:32 am on Aug 26, 2006 (gmt 0)|
In the Google Sitemaps section, they have been inserting three choice options as to how useful the tools are - Good / Okay / Poor (I think)
I imagine that those simple smilie faces could easily be added to the search results and people could rate how good the page is they accessed.
It would be a wonderful addition to the results, cutting out the AdSense scammers by making it possible to rank them down.
If you couple that with the SERPs, then people would have a tool to help rank sites on a scale that would be too big for individuals to cheat.
|norton j radstock|
| 8:14 am on Aug 26, 2006 (gmt 0)|
This is a really significant move on the part of Google: in essence it is similar to teaching a computer to recognise your handwriting. You don't need to instruct it on every piece of information, but rather just give it a big enough sample.
So how might this work? Let's say a team of human editors are scoring sites according to the Google manual and so called 'thin affiliate' or scraper sites are being given low scores. The computer then works out what the common factors are for those low scoring sites and makes that part of the algorithm.
Thus in simple terms it might calculate, for example, that scraper sites are commonly composed in the form of a directory of links followed by 20-30 words of text and a line break, all repeated several times, and with a high proportion of those links pointing at similar affiliate URLs. Other sites sharing similar characteristics would then be marked down in terms of ranking of search results.
Inevitably there will be good sites that fall victim to such systems, but the ultimate aim is continual improvement aided by ongoing human input. The important thing is that the humans are unbiased, so it is far more likely that Google will continue to employ trained expert assessors rather than leave it up to the web at large.
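Here's a toy illustration of that "work out the common factors" step: a nearest-centroid classifier over a few simple page features, with editors supplying the labels. This is just my guess at the general technique, not Google's method, and the feature names (link count, words per text block, affiliate-link ratio) are invented.

```python
# Toy learner: editors label example pages, the machine finds the
# "average" spam and good feature vectors, and new pages are classified
# by whichever centroid they sit closer to.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def learn(labeled):
    """labeled: list of (feature_tuple, 'spam' | 'good') pairs."""
    spam = centroid([f for f, lab in labeled if lab == "spam"])
    good = centroid([f for f, lab in labeled if lab == "good"])
    return spam, good

def classify(features, spam_centroid, good_centroid):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    near_spam = dist(features, spam_centroid) < dist(features, good_centroid)
    return "spam" if near_spam else "good"
```

The editors never see most of the web; they just seed the labels, and pages resembling the labeled scrapers get marked down automatically.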
| 8:34 am on Aug 26, 2006 (gmt 0)|
Does this mean we can all stop being link wh*res now?
| 9:00 am on Aug 26, 2006 (gmt 0)|
Isn't it similar to TrustRank? That one also involved humans assessing sites.
I'm just wondering how they're going to combine the human ratings with the off-site stuff.
I wish we could get access to one of their internal papers for assessors ;)
| 9:31 am on Aug 26, 2006 (gmt 0)|
Excuse me, but isn't ranking a site high because another human placed a link to you on their site basically just human ranking anyway? So now it's like them saying: we still want humans to pick out the best sites, but now we're using humans that have their own trust ranking... i.e. paid or selected!
| 9:39 am on Aug 26, 2006 (gmt 0)|
IMO it is cool, soapystar.
We can concentrate more on building cool sites.
Anyway, the patent was filed in 2000, so it has presumably already been implemented by G.
|norton j radstock|
| 9:44 am on Aug 26, 2006 (gmt 0)|
You can find the leaked paper by searching for "Spam Recognition Guide for Raters"
| 10:56 am on Aug 26, 2006 (gmt 0)|
|I imagine that those simple smilie faces could easily be added to the search results and people could rate how good the page is they accessed. |
Actually, Google did this a long time ago. If you use the Google Toolbar, you'll see two faces: one to "vote for this page" and the other to "vote against this page".
The patent in question is not new; it was filed in December 2000. I don't know when the first toolbar was available from Google, but I think we should see this patent in the context of the technology used in 2000, rather than the current anti-spam measures, eval.google.com, or the spam rater guide, which are all from a later date. All the talk about Web 2.0 is irrelevant; this patent is six years old.
Some searching at www.archive.org showed the following:
toolbar.google.com was introduced in February 2001, just two months after the filing of this patent. The vote buttons were first present in October 2002.
| 11:33 am on Aug 26, 2006 (gmt 0)|
These patents have become amusing.
"scalable" = "cheap" so I would expect this to be done thousands of miles away from the phd filled offices in MV, most likely in a place where English is not the first language.
It wouldn't be very surprising if google started using human review more than they already do. Somewhere along the line, google seems to have completely forgotten about the core concept of relevancy though, so, as a user, I would far prefer to see the odd crap result than 5/10 pages being barely relevant to the search.