TheMadScientist - 2:55 am on May 14, 2010 (gmt 0)
The more I think about it, the more I suspect the people at Google are fairly smart and there's a bit of a method to the seeming madness... Back in the days of 'updates' there were often stretches with a bunch of 'G is broken' posts, but somehow they worked things out after each one. So my personal guess is there's probably something going on 'behind the scenes', and if tedster's guess is correct, maybe the system is more automated than before and it's expected for 'the wrong site' (or page) to float to the top for a while before being dropped to page 97?
It's tough to tell what's 'better' or 'worse' without a comparison, isn't it? So, if you let sites (or pages) float through an automated system 'naturally', you get some comparative data, and IMO you should be able to determine faster which pattern (better than or worse than) fits each page or site as the amount of data increases...
IOW: Maybe they need to let things 'run their course' for a bit?