tedster - 12:53 am on May 31, 2010 (gmt 0) [edited by: tedster at 12:53 am (utc) on May 31, 2010]
when you click through, the relevance was really weak or absent
Can you elaborate on this some more?
I'm really speaking as a search user here, not as a webmaster or SEO. And by relevance, I really mean "user intention" behind the query terms, rather than mere text matching.
It's something like this: many pages on strong sites used to get long-tail rankings apparently from a combination of overall site strength and the mere presence of the query terms somewhere on the page. There was no proximity or semantic association between the keywords at all, just mere presence - and often in different parts of the page template.
These strong-site pages were even outranking other pages that had the exact query phrase right there in the content area. In the 1990s I learned to put up with this kind of "matching", but who likes it? The link I clicked on wasn't relevant to my query's intention, and the need to reformulate the query over and over again could be quite frustrating - heck, it still is.
Early on in this Mayday discussion, I mentioned the possibility of a change or evolution in phrase-based indexing. If you add to that an increased weight for content-area terms, you've got the picture I'm contemplating.
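To make that picture concrete, here's a toy sketch of proximity-plus-zone scoring. The zone names, weights, and formula are my own illustration of the idea, not anything taken from an actual Google patent or implementation:

```python
# Toy sketch of proximity-plus-zone scoring. The zone names, weights,
# and formula are hypothetical illustrations of the idea, not
# anything Google has documented.

# Hypothetical weights: terms in the content area count for far more
# than terms scattered through template zones.
ZONE_WEIGHTS = {"content": 1.0, "comments": 0.2, "navigation": 0.1, "sidebar": 0.1}

def zone_score(query_terms, zones):
    """Score one page; `zones` maps a zone name to its tokenized text."""
    score = 0.0
    for zone, tokens in zones.items():
        weight = ZONE_WEIGHTS.get(zone, 0.1)
        positions = {}  # term -> first position within this zone
        for i, tok in enumerate(tokens):
            if tok in query_terms and tok not in positions:
                positions[tok] = i
        if len(positions) < len(query_terms):
            continue  # all terms must co-occur in one zone to count
        span = max(positions.values()) - min(positions.values()) + 1
        # A tight span (an exact phrase) scores far higher than a
        # scattered match.
        score += weight * len(query_terms) / span
    return score

query = {"blue", "widgets"}
# Strong site, mere presence: one term in navigation, one in a sidebar.
strong_site_page = {
    "navigation": "home blue pages contact".split(),
    "sidebar": "featured widgets deals".split(),
    "content": "our company history since 1990".split(),
}
# Weaker site, but the exact phrase sits in the content area.
relevant_page = {"content": "we review blue widgets in depth".split()}

print(zone_score(query, strong_site_page))  # 0.0 - terms never co-occur in one zone
print(zone_score(query, relevant_page))     # 1.0 - adjacent phrase in the content
```

Under the old "mere presence" behavior, the strong-site page would have ranked anyway; under something like the above, it scores nothing.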
...by the time it drills down to weaker pages, the relevance signals have also become very weak.
That too, absolutely yes. Again, template areas such as the main navigation, user comments, or even sidebar "features" used to help generate too many long-tail rankings. The deep page itself had minimal relevance to the keywords.
I guess I'm saying it would be good to look at the lost traffic at a very granular level rather than only in aggregate. Pull out some specific examples and ask whether that traffic was truly right for that page. The bounce rate observation from Sgt_Kickaxe would support this direction.
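If you want to run that kind of granular check, a quick script along these lines could help. It assumes a CSV export from your analytics package with the column names shown below - those names are made up, so adjust them to match your own export:

```python
# Quick-and-dirty sketch for auditing lost long-tail traffic page by
# page. The CSV layout and column names are assumptions - adapt them
# to whatever your analytics package actually exports.
import csv

def suspect_rows(path, min_bounce=0.80):
    """Yield (landing_page, query, visits, bounce) rows where the
    bounce rate suggests the traffic never matched the page."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            bounce = float(row["bounce_rate"])
            if bounce >= min_bounce:
                yield (row["landing_page"], row["query"],
                       int(row["visits"]), bounce)

# Sort the worst offenders first, then eyeball whether the query's
# intention ever really fit the page at all.
if __name__ == "__main__":
    rows = sorted(suspect_rows("search_traffic.csv"),
                  key=lambda r: r[2], reverse=True)
    for page, query, visits, bounce in rows[:25]:
        print(f"{visits:6d}  {bounce:.0%}  {query!r} -> {page}")
```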
Perhaps Google has categorized the net
IMO, they absolutely have done that. There are some patents from a while back that talked about creating an automated taxonomy for websites. I'm pretty sure something like this kicked in last year with the focus on user intention.
You may notice that some queries (1-word and 2-word) almost never show a "transactional" result, or never show an "informational" result. Other queries always show a mix. At a very high level, that's an indication of one kind of taxonomy kicking in, but the full taxonomy is much more granular.
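As a rough illustration of the kind of high-level bucketing I mean - the categories and trigger words here are my own guesses at the idea, nothing like Google's actual taxonomy, which would be far more granular:

```python
# Rough illustration of high-level query-intent bucketing. The
# categories and trigger words are guesses for illustration only.

TRANSACTIONAL = {"buy", "cheap", "price", "coupon", "order", "discount"}
INFORMATIONAL = {"how", "what", "why", "guide", "tutorial", "history"}

def classify(query):
    """Bucket a query as transactional, informational, or mixed/unknown."""
    terms = set(query.lower().split())
    if terms & TRANSACTIONAL:
        return "transactional"
    if terms & INFORMATIONAL:
        return "informational"
    return "mixed/unknown"

for q in ("buy blue widgets", "how do widgets work", "blue widgets"):
    print(q, "->", classify(q))
```

A real system would blend many more signals (click behavior, query logs, and so on), and the SERP itself hints at the verdict: some queries simply never show a transactional result at all.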
The takeaway for me has been not even to be concerned about ranking certain pages for certain queries - just write it off as a lost cause. Google has decided that those terms want a different type of page from a different taxonomy category, and that's that.