|Self Sabotage and Google Panda|
I did a lot of cleanup on my biggest site over the weekend. One year ago, it was hit badly by Panda. Amazing. I struggled and bitched and complained about evil Google without realizing I had been partly at fault for my own misfortune.
I spoke to many SEO gurus who could not figure out how a "real" site with unique content, established for so many years and with tons of natural links (I've never run a link campaign, yet my site is referenced all over, including on Wikipedia), could have been hit. We were never taken out of the SERPs, yet the site lost big time.
What was weird was reading about all those people talking about brands. My site was a brand before Panda ever came along. Google still visited and all. So what went wrong?
1-Since Panda, Google has been very anal about code and rules. I ran afoul of them without even trying to over-optimize.
2-I overlooked basic coding principles. For some reason, in one of my site-wide updates, I removed the H1 tag. I have no clue how that happened, and it took someone else to point it out to me. The moment I put the H1 back, we gained a few percentage points overnight.
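For anyone auditing their own templates after reading this, the check is simply that every page renders one visible H1 describing its main topic. A minimal sketch - the markup and class names here are made up for illustration, not the poster's actual template:

```html
<!-- Hypothetical page template: the key point is a single H1
     for the page's main topic, kept through every redesign. -->
<body>
  <header class="site-header">Site name, navigation...</header>
  <main>
    <h1>Main Topic of This Page</h1>
    <p>Article content...</p>
  </main>
</body>
```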
3-I moved to another server a few years ago. For some reason, I left the older site up as a backup and never cleaned it out. All the pages at the old IP were still available, provided one knew where to find them. Google sure did. Whenever I checked the number of pages Google reported for my site, it indicated triple the amount I knew I had. Essentially, the older site was still live as far as Google was concerned. The new stuff was only found in one place, which would explain why years of evergreens suddenly fell into no man's land. I couldn't explain it. Pages with tons of links, used all over by all kinds of other sites, just vanished from the SERPs...
4-I had variants of the pages left on both the old server and the new server. I remember testing a feature with my CMS once and generating copies of the pages with other tags at the root level. Well, these pages - total duplicates of the originals - had been lying around for years without me ever remembering them, until recently. In all, including the old server, I had at least 5 copies of the canonical pages on my main site. No wonder I got hit so badly by Panda!
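For anyone hunting for the same problem, one quick way to spot stray copies is to hash each page body and group URLs that serve identical content; those groups are the candidates for removal, 301 redirects, or rel=canonical tags. A minimal sketch - the URLs and page text below are made up for illustration, and a real audit would fetch the bodies with a crawler first:

```python
import hashlib
from collections import defaultdict

def find_duplicate_groups(pages):
    """Group URLs that serve byte-identical content.

    pages: dict mapping URL -> page body (str).
    Returns a list of URL groups (2+ URLs each) serving
    the same content - candidates for redirects/canonicals.
    """
    by_hash = defaultdict(list)
    for url, body in pages.items():
        digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
        by_hash[digest].append(url)
    return [sorted(urls) for urls in by_hash.values() if len(urls) > 1]

# Hypothetical example: the same article living at three URLs
# (live site, forgotten old server, CMS test copy).
pages = {
    "https://www.example.com/article.html": "The article text.",
    "http://old-server.example.com/article.html": "The article text.",
    "https://www.example.com/test/article.html": "The article text.",
    "https://www.example.com/other.html": "Different text.",
}
print(find_duplicate_groups(pages))
```

Exact-hash matching only catches byte-identical copies; near-duplicates (different boilerplate around the same article) would need fuzzier comparison.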
5-It's been said that Panda is about Google cleaning up its mess and forcing webmasters like me - who test stuff, generate copies of their sites for development purposes, and then forget about them - to clean up their act as well.
Well, it was a painful year that completely dominated my mind. How could a real site with real unique content, with so many natural links, that had thrived through every single Google update of the past and that wasn't spammy, get hit so hard? Now I know. Google got tired of figuring out which copy of my site to use.
I kinda feel like an idiot, because all of these mistakes were beginners' mistakes. I was so confident in the power of my site to weather any Google update, because it was a real site, that I forgot to clean my own house while continuing to build the site. There was no way I was gonna rename my site and start all over. As a real brand, that just wouldn't have been easy.
One thing I did learn: mobile saved my ass last year. Mobile is not as vulnerable to Google search as regular sites. Apps help capture an audience that bypasses Google, Facebook, and Twitter completely. You're talking directly to your users once they download your apps. It helped the site survive and, of course, partly explains why I just could not pack up and start under a new name.
Not sure what will happen to my traffic over time. The bleeding definitely stopped, but we never picked up the pace we had in the past. Before I figured out how I had sabotaged my site, I did clean up the most obvious problems - empty pages auto-generated by my CMS, and any kind of duplicate content.
I may never recover - yeah, I'll recover in time. I built my site once, I can rebuild it again. It's way tougher now than it was way back when I started, but then, it's not easy to build a large-scale website used by so many people every day, overnight. I never gave up on the site and kept adding content. What I did do was figure out a plan to allow it to survive the next information crunch. At this point, I'm not even sure Google itself will survive the next iteration of the Web unscathed. It doesn't look good for them on some fronts.
Maybe Google will remember that it built its success in partnership with early webmasters like me. Maybe one day, we'll have no choice but to be friends again to weather the next big permutation of the Internet.
It's too bad that your site was unjustly demoted for technical issues. Your experience seems to illustrate the value of two principles I always try to follow:
1. Keep things simple.
2. If a site is doing really well, don't change anything.
Anyway, I hope that the measures you're taking will restore your site to its previous rankings and traffic.
I've made many technical mistakes this year too, got pandalized, fixed most of those issues, and now my websites are coming back. It may be just a coincidence, but I believe it's not :)
It's not a coincidence, you've gotta believe :)
I noticed that content on page A now impacts rankings of page B a whole lot more than it used to. If you have a high percentage of low-quality pages, it seems you can expect less traffic to your high-quality pages. Remove the low-quality pages and the traffic returns to the high-quality pages. At first I thought this must be due to internal link structure and how rank flows around the site, but after running a bunch of tests I found that it's not.
It's a shame too, because even a crappy site can occasionally have a gem page - geocities sites were a good example. Moving away from determining quality on a page's OWN merit is going to allow more great pages to be missed altogether by regular search users.
Did you just publish the best article of all time? Awesome, but it's too bad your site is new and/or was given a mediocre overall grade! Ahh well.
|even a crappy site can occasionally have a gem page, geocities sites were a good example |
Obvious corollary question: Can g### tell where one site ends and another begins? geocities was a pretty extreme case, but there are lots of other configurations where "domain" and "site" aren't the same thing. Does having your own domain name in and of itself give an advantage?
|Can g### tell where one site ends and another begins? |
I think that depends on what you consider a "site." Recently, I had a blog show up within the sitelinks which was not actually part of the website, but completely separate. The blog did link quite often to the website, but the website never linked to the blog.
The blog was a subdomain of the website. So maybe that's why, but I thought G used to treat them completely separately.