aakk9999 - 10:33 pm on May 8, 2011 (gmt 0)
Ever since that tweet exchange was published, I have been wondering why Panda has (so far) been run only once. I think this is deliberate and not due to a lack of technical resources. My thoughts run along these lines:
a) Stopping webmasters from figuring out what works and what doesn't by testing small incremental changes to recover their pandalised sites
b) Ensuring that any new content that comes onto the web is as close as possible to G.'s definition of "high quality", and in that way perhaps eliminating many future "low content" and spin-off pages
So when (if) Panda is rerun:
- if your site comes back, you have no idea which of your actions actually fixed it, so there is less chance to game it. And knowing that you may have a chance to fix the site before the next Panda rerun, and that if you miss that chance you may stay pandalised for another X months until a further rerun, should ensure you do your best to follow their quality guidelines
- if you have not been hit by Panda, you must be careful about what you publish on your website, as you might get hit in the next Panda run, and so you think twice before publishing any new page
And perhaps in this way G. is hoping to reduce the flood of pages pushed onto the web in a shotgun approach, in the hope that "if I produce 100 pages, maybe one or two will rank..."