So I've got a website that was affected by Panda. I identified the pages I considered weak and blocked them via robots.txt.
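For context, the block looks something like this (the /weak-pages/ path is just a made-up example, not our actual structure):

    User-agent: *
    Disallow: /weak-pages/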
Guess what: although Google has dropped our cached copies from the index, they're continuing to rank the URLs, and in some cases better than before I blocked them! Presumably this means these pages will still be contributing poor user metrics to whatever signals Google is using to calculate Panda?
I realise I've taken a few shortcuts here (I probably should've used noindex rather than a robots.txt block), but I kinda presumed robots.txt would be enough to get poor-quality pages to drop out of the SERPs.
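If I do switch to noindex, I assume it'd be the standard meta robots tag in the page head:

    <meta name="robots" content="noindex">

or, for non-HTML files, the X-Robots-Tag HTTP header. And, as I understand it, I'd also have to lift the robots.txt block first, since Googlebot can't see a noindex directive on a page it isn't allowed to crawl.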
What do you guys reckon? Is this counter-intuitive behaviour on Google's part?