dataguy - 1:44 pm on Jul 15, 2011 (gmt 0)
I'm having second thoughts on this. Maybe this is a legitimate strategy.
I know that historically Google has treated isolated subdomains largely as individual sites, more so than I thought was proper. Maybe this isn't a hack to bypass Panda; maybe this is actually what Google wants us to do.
For whatever reason, Google has had a hard time determining the author of content when it's mixed with other authors' content. That would explain why they announced their support for the rel="author" attribute a few weeks ago. It seemed like a pointless feature to most website owners, but apparently Google really is having a hard time figuring out who the author is on multiple-author sites.
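For reference, the rel="author" markup is just a link attribute on a page's byline. This is a hypothetical sketch; the author page URL and names here are placeholders, not anything from Google's announcement:

```html
<!-- Hypothetical example of rel="author" markup on a multi-author site.
     The /authors/alice URL and the name are made-up placeholders. -->
<article>
  <h1>Post title</h1>
  <p>Post body goes here.</p>
  <p>Written by <a rel="author" href="/authors/alice">Alice</a></p>
</article>
```

The idea is that the link points at a single page identifying the author, giving the crawler an explicit signal instead of forcing it to guess from bylines.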
Then consider blogspot. Maybe this is the model that Google is most comfortable with. We know that more than 80% of blogspot is spam, but Google doesn't devalue every blog on blogspot.com. They are kinda like silos, and Google obviously recognizes them independently. As mentioned above, wordpress is another good example of this.
As the owner of a website with thousands of users submitting content, my hardest job is separating the spam from the legitimate content. If I separate the authors into their own subdomains, my job becomes much easier. Hopefully Google will just devalue the offending content and leave the good content alone.
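The subdomain split could start with nothing more than a per-author hostname scheme. A minimal sketch, assuming a made-up example.com domain and made-up usernames (none of this is from the post itself):

```python
# Hypothetical sketch: mapping each contributor onto their own
# subdomain, so a quality penalty on one author's content need not
# drag down everyone else's. "example.com" and the usernames are
# placeholders.

def author_subdomain(username: str, domain: str = "example.com") -> str:
    """Build a per-author hostname, e.g. 'alice.example.com'."""
    # Lowercase and keep only characters valid in a DNS label.
    label = "".join(c for c in username.lower() if c.isalnum() or c == "-")
    return f"{label}.{domain}"

def page_url(username: str, slug: str) -> str:
    """URL for one author's article, served from their own subdomain."""
    return f"http://{author_subdomain(username)}/{slug}"

if __name__ == "__main__":
    print(page_url("Alice", "my-first-post"))
```

In practice this also needs a wildcard DNS record and server-side routing, but the naming scheme is the part that isolates each author in the search engine's eyes.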
I've already started working on building tests for this.