The phrase "goodbye stalebot" was used in connection with Brett's theory that a move towards continuous updating could be envisaged.
Now, while I like the idea, surely this will make it easier to decipher the algo: it would take only days to start figuring it out, compared to the month per incremental change at present.
As a side subject, if they do pull this feat off (a fantastic technological achievement) it will surely leave the other engines far behind. That could lead to some (more) of the rumblings about world domination which we already have at present. What will the other players do, or what can they do, to keep up? They need to think of something if they wish to compete in the same league.
Just my thoughts
Cheers
I would venture to guess it will make it harder. Basically, SEOs will now be looking at a moving target, as compared to a "usually" static one that held still at least for the few days between a new index and the new crawl, even if they could work that out!
Add to that the continuous merging and shifting of different databases (and different algos, even) and it spells the end for opportunistic reverse-engineering attempts on the algo (i.e. exploitation of "holes" between logic and implementation). On the other hand, optimizing according to the principles of good document design as they are recognized by Google is not at an end!
A lot of big SEO companies, with the resources and client base to work out the new "monthly" principles in a hurry and put 100 or so guys on automatically updating clients' sites before the next crawl, will be hit. As far as the small guys are concerned, it may save them heaps of time competing in an arena they could not possibly win, and the playing field will be levelled out somewhat.
All a good move, I agree...
As a user I'd welcome different results whenever I search, as long as the SERPs are relevant.
I don't think we can assume that there is one ideal index.
[edit] Actually, it would also need to be tied to the number of results returned. For example, if a user searched for my domain/company name (one word), I would expect them to get my page, and not the page which just happens to have our company name in a bit of sample computer programming. Because it is a unique word with few results, the most relevant should be at the top. So maybe it wouldn't work so well; I dunno, I'm just thinking out loud here. [/edit]
As a user, how would you like a search engine that returned random results for your query? Does that sound like a search engine that you would want to use routinely?
If the "random results" weren't really random, but were a shuffling of closely ranked results, it might be okay...in theory. In practice, it could be annoying or confusing to users. Here's why:
Let's say I search on "widgets", look through the first 10 search results, then go off to lunch. Later in the day, I search on "widgets" again and discover that I can't just skip past the first 10 search results because the results are in a different order than they were on my last visit.
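For what it's worth, here's a minimal sketch of what "shuffling closely ranked results" could look like, in Python. The band width, data shapes, and function name are all my own assumptions for illustration, not anything Google has described:

[code]
import random

def shuffle_near_ties(results, band=0.05):
    """Reorder only results whose scores sit within `band` of each other.

    `results` is a list of (url, score) pairs, sorted by descending score.
    """
    out, group = [], []
    for url, score in results:
        # Flush the current group once the score gap exceeds the band.
        if group and group[0][1] - score > band:
            random.shuffle(group)  # reorder the near-ties only
            out.extend(group)
            group = []
        group.append((url, score))
    random.shuffle(group)
    out.extend(group)
    return out

# A unique company name with few, well-separated results keeps its order:
print(shuffle_near_ties([("acme-widgets.example", 0.95),
                         ("code-sample.example", 0.40)]))
[/code]

Note this would also answer the one-word brand search worry raised earlier: a query with few, well-separated results has no near-ties to shuffle.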
Another possibility: Google goes ahead and serves up different sets of results on a rotating basis, and is able to discern via click-through (CT) data which sites seem most relevant to consumers as a result.
Then, just like AdWords, consumers' clicks help determine relevance and rankings...
And G already has the technology for that.
Guess what I meant was, they could simply assess clicks, and that could lead to conclusions about relevance.
Example: I know of a category where a top performing site is ONLY about *very large widgets*, but comes up high for searches on "widgets". It's not a great result for most consumers, 90% of whom don't care about *very large widgets*...only other kinds of widgets.
If Google can determine that not many people click on the *very large widgets* site, then over time they could let that site drop in the SERPs for searches on "widgets"...but it would continue to do well for searches on *very large widgets*... no privacy issues there...and better search results.
wack
G is doing this right now with AdWords. Rankings are a result of price paid + CT rates.
They could easily do this on SERPs too...no need to cookie users...just monitor clicks. If a site that's doing well in the SERPs is not receiving clicks, G could reasonably conclude that the site is not as relevant as its position had indicated...
Hence, a new element is added to the algo, based on user behaviour. Make sense?
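To make that concrete, here's a rough Python sketch of an observed-versus-expected click check. The per-position CTR table, names, and numbers are purely illustrative assumptions:

[code]
# Assumed click-through rates by ranking position (illustrative only).
EXPECTED_CTR = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def click_signal(position, impressions, clicks):
    """Ratio of observed clicks to what the position predicts.

    Below 1.0 means users are skipping the listing; above 1.0 means
    it draws more clicks than its rank would suggest.
    """
    expected = EXPECTED_CTR.get(position, 0.03) * impressions
    return clicks / expected if expected else 1.0

# The *very large widgets* site ranks #1 for "widgets" but is rarely clicked:
print(click_signal(position=1, impressions=10000, clicks=600))  # 0.2
[/code]

A persistently low ratio is exactly the "doing well but not receiving clicks" case described above.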
I'm not convinced that click tracking is a great way to assess sites. Basically, it measures how good the site title and Google snippet are rather than the site itself, since it only captures the decision to click. Now, that may be related to site relevance and quality, but it may not be.
Of course they could also check whether someone came back to the SERPs quickly, but that adds another level of complexity and another source of error. Remember, DirectHit used this method for a long time and still failed to get reasonable SERPs for anything other than simple one- or two-word queries and the biggest sites.
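For illustration, the "came back to the SERPs fast" check might look something like this sketch; the 30-second threshold and event shape are arbitrary assumptions, and the error sources above still apply:

[code]
from datetime import datetime, timedelta

# Assumed cutoff: returning to the same SERP within 30 seconds counts
# as a rejection of the clicked result (roughly the DirectHit idea).
QUICK_RETURN = timedelta(seconds=30)

def judge_click(click_time: datetime, return_time: datetime | None) -> int:
    """+1 if the user stayed with the result, -1 for a quick bounce back."""
    if return_time is not None and return_time - click_time < QUICK_RETURN:
        return -1  # the page, once seen, did not satisfy
    return 1       # no quick return observed
[/code]

Note the built-in source of error: a user who simply closes the browser and a user who was satisfied produce the same signal.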
completely agree with the assessment of the downside here...I could go either way on this, but if I was G, I'd at least test it.
the approach makes sense for AdWords because the GOAL of the ads is greater CT for the advertisers, and thus greater profit for G...
SERPs are of course somewhat different.
Part of the process of listing sites in the SERPs has to be deciding what the site description should be. Google has obviously put some thought into this. Let's assume they are happy with how that is handled.
Now, if G sees that users don't click on the *very large widgets* site noted above, isn't that a good indicator that people don't think the listing is relevant to the "widgets" search? I'd say so. The question then becomes how to use this CT data...
How about using it two ways...
Macro: start offering different SERPs on a rotating basis, and assess overall CTs from one set of results to the next to get an idea of which set of SERPs is most appealing to users...
Micro: add this factor into the algo over time (maybe give it an importance factor of 15%), to dampen the rankings of sites that regularly do NOT receive CTs on keywords where they perform well. A rough sketch of that blend is below...
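Here's how that micro blend might work in Python; the 15% weight comes from the suggestion above, while the normalization and names are my assumptions:

[code]
CT_WEIGHT = 0.15  # the suggested importance factor

def blended_score(algo_score, ct_signal):
    """Both inputs normalized to 0..1; CT data nudges, not dominates."""
    return (1 - CT_WEIGHT) * algo_score + CT_WEIGHT * ct_signal

# A site ranking well (0.9) but rarely clicked (0.1) gets dampened:
print(blended_score(0.9, 0.1))   # 0.78, down from 0.9
# ...while a rival with a slightly weaker algo score but strong clicks
# can overtake it:
print(blended_score(0.85, 0.7))  # 0.8275
[/code]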