zerillos - 2:19 am on Mar 19, 2011 (gmt 0)
after yet another sleepless week, this thread made me see things from a different perspective.
first of all, I can't really see the point of focusing on this whitelist. from what I remember from my college years, when we had to analyze stats, the first thing we did was remove the top highs and the bottom lows from the data set. This whitelist looks like a top high to me. Unless you have a real chance of getting onto it, why bother? And if you do get on it, every little alarm you trip will probably result in a manual evaluation, which means that even a small, honest mistake could mean the end.
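just to illustrate what I mean by dropping the extremes before you look at the numbers (a rough sketch only, in Python, and the trim fraction is a made-up number, not anything from G):

# a minimal sketch of "remove the top highs and the bottom lows":
# trim a fixed fraction off each end before averaging, so a single
# extreme value (like a whitelisted site) doesn't skew the picture.
def trimmed_mean(values, trim_fraction=0.1):
    ordered = sorted(values)
    cut = int(len(ordered) * trim_fraction)      # how many to drop from each end
    kept = ordered[cut:len(ordered) - cut] or ordered  # fall back if the list is tiny
    return sum(kept) / len(kept)

# one huge "top high" barely moves the trimmed figure:
print(trimmed_mean([3, 4, 5, 5, 6, 7, 1000], trim_fraction=0.15))  # -> 5.4

that's all the analogy is: the whitelist is the outlier you set aside, not the data you plan around.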
I'll toss out something on the other end. I have a site that I am currently working on
I have a feeling G is now using data from its history books. Your site seems new, so this could be one of the reasons you're not seeing any effects. On the other hand, because it's new you could be coding it differently. Maybe you threw in a few ideas you picked up along the way without realizing it.
and poor sites like eHow gained
having read a lot of HTML docs lately, I'd say eHow seems a lot more 'quality' than before...
regarding the scraped content, I don't get it either. intellectual property should be recognized regardless of the algorithm or other interests.
I was trying to crash two sites which survived this panda mess.
no offense, SEOPTI, but to me this seems like an extreme exercise. if this is happening on a larger scale, though, it could explain some of the interesting results we see in the SERPs.