Forum Moderators: goodroi
Inventors: Krishna Bharat [searchwell.com]
Assignee: Google, Inc.
A re-ranking component in the search engine then refines the initially returned document rankings so that documents that are frequently cited in the initial set of relevant documents are preferred over documents that are less frequently cited within the initial set.
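The re-ranking idea the patent describes can be sketched roughly as follows. This is a hypothetical Python illustration, not the patent's actual implementation; `rerank` and `links_from` are made-up names, and the scoring is simplified to a bare citation count within the initial result set:

```python
# Hypothetical sketch of the patent's re-ranking idea: documents that
# are cited often by OTHER documents in the initial result set get
# boosted above less-cited ones.

def rerank(initial_results, links_from):
    """initial_results: list of doc ids, already ranked by the engine.
    links_from: dict mapping doc id -> set of doc ids it links to."""
    result_set = set(initial_results)
    citations = {doc: 0 for doc in initial_results}
    for doc in initial_results:
        for target in links_from.get(doc, set()):
            if target in result_set and target != doc:
                citations[target] += 1
    # Stable sort: citation count first, original rank breaks ties.
    return sorted(initial_results, key=lambda d: -citations[d])

results = ["a.com", "b.com", "c.com"]
links = {"a.com": {"c.com"}, "b.com": {"c.com"}}
print(rerank(results, links))  # ['c.com', 'a.com', 'b.com']
```

Because Python's sort is stable, documents with equal citation counts keep their original engine-assigned order, which matches the "refines the initially returned rankings" wording.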
Well, that means it's theirs then, doesn't it?
Even if what he says isn't directly related to Google, that doesn't mean it can't be applied to their results. This patent calls for re-ranking results based on query dependency, which can mean "take a Google query and rerank it."
So basically, to get into the top 10 you would need control over the linking of at least 10-20 sites among the top 100. To push one site, you would need to own 10-20 sites on the same subject, all with "traditional" high PR and all hosted separately...
I can clearly see sophisticated linking empires emerging in future :)
The only thing I'm trying to get a handle on is whether this would make topical reciprocal links the most valuable kind. I would think that is rather anti-web, and that it discourages the branching out that makes for a more natural linking pattern.
Back to the patent and how it works: never mind when it was filed, and never mind whether it can be done on the fly computationally or is precomputed on a limited scale for the most-used search queries; there is some sense to this stuff.
I just really wonder how they would weed out the artificial reciprocal linkage stuff.
I would start with discounting links from links/resources pages.
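A crude first cut at discounting links/resources pages could key on the URL alone. This is a hypothetical Python heuristic; the hint list is invented for illustration, and a real system would presumably look at page structure, not just the filename:

```python
# Hypothetical heuristic: discount links found on pages whose URL
# suggests a links/resources directory page.

LINK_PAGE_HINTS = ("links", "resources", "partners", "link-exchange")

def is_links_page(url):
    # Look only at the last path segment of the URL.
    path = url.lower().rsplit("/", 1)[-1]
    return any(hint in path for hint in LINK_PAGE_HINTS)

print(is_links_page("http://example.com/links.html"))         # True
print(is_links_page("http://example.com/widget-guide.html"))  # False
```

Obviously this would miss links pages with innocuous filenames and falsely flag pages like "useful-resources-for-beginners", which is exactly the weeding-out difficulty being discussed.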
From the point of view of a niche ISP, such a system would be a nightmare. Hosting companies use a limited number of class C networks. If an ISP markets its services to, say, widget producers, and wins a fairly big chunk of that market, this system may end up consigning most of its customers to Google irrelevancy: only one site will show, while the rest of the competitors, sitting on the same class C network and unaware of the problem, would be mistakenly flagged as "same host or an affiliated host" and consequently delisted.
I'm afraid niche ISPs may become collateral damage in Google's war against spam.
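The "same class C" problem is easy to see in a quick sketch. This is hypothetical Python, assuming the affiliation filter keys on the first three octets of the IP (the /24, i.e. class C, prefix); the site names and IPs are invented:

```python
# Hypothetical: if affiliation is judged by the /24 (class C) prefix,
# every site a niche ISP hosts on one network collapses to one key,
# and the filter would keep only one of them.

def class_c_key(ip):
    return ".".join(ip.split(".")[:3])

sites = {
    "widgets-r-us.com": "203.0.113.10",
    "best-widgets.com": "203.0.113.55",   # same ISP, same /24
    "widget-news.com":  "198.51.100.7",   # hosted elsewhere
}

groups = {}
for site, ip in sites.items():
    groups.setdefault(class_c_key(ip), []).append(site)

for prefix, hosted in groups.items():
    if len(hosted) > 1:
        print(prefix, "-> only one of", hosted, "would survive")
```

Two unrelated competitors on the same shared network are indistinguishable from one owner with two sites under this scheme, which is the nightmare scenario described above.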
It would be easy and effective to just look at it this way:
Page A comes up in the subset of B but no others.
Page B comes up in the subset of A but no others.
Therefore, page A and B are crosslinked and they are each removed from each other's LR calculation. (This could be easily done over a range of many sites that all link to each other, but not others that are in the main set.)
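The mutual-subset check above could be sketched like this. Hypothetical Python: `subset_of` is a made-up name for "the set of pages appearing in a given page's subset," and the pair test is a literal translation of the three steps, not anything Google has confirmed:

```python
# Hypothetical sketch: find pairs of pages that appear ONLY in each
# other's subsets, so each can be dropped from the other's LR
# (LocalRank) calculation.

def find_crosslinked_pairs(subset_of):
    """subset_of: dict mapping page -> set of pages in its subset."""
    pairs = []
    pages = list(subset_of)
    for i, a in enumerate(pages):
        for b in pages[i + 1:]:
            # A appears in B's subset and in no other page's subset.
            a_only_in_b = all(a not in subset_of[p]
                              for p in pages if p != b)
            # B appears in A's subset and in no other page's subset.
            b_only_in_a = all(b not in subset_of[p]
                              for p in pages if p != a)
            if (a in subset_of[b] and b in subset_of[a]
                    and a_only_in_b and b_only_in_a):
                pairs.append((a, b))
    return pairs

subsets = {
    "A": {"B"},   # B appears in A's subset
    "B": {"A"},   # A appears in B's subset, and nowhere else
    "C": {"D"},
    "D": set(),
}
print(find_crosslinked_pairs(subsets))  # [('A', 'B')]
```

As the post notes, the same pass generalizes to rings of sites that all link to each other but to nothing else in the main set; you'd look for closed groups rather than just pairs.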
Bear in mind that even though they've been using this since November or December of last year, no one has really optimized for it. Since there's no way of visually verifying exactly what the LR factor is for any page on any search, they could very well do many things that people wouldn't even suspect or notice (e.g. any pages that appear in each other's subsets don't get counted, or whatever).
NOTE: The patent (according to image 3) says that the filtering is done only at the IP level. That could very well be the case, but I doubt it. The patent outlines the technology's foundation, not necessarily its current implementation.
So how are visitors counted? Surely the only thing Google can check is click-through from Google. I doubt they would be allowed to check your web-logs.