This 216 message thread spans 8 pages.
Panda key algo changes summarized
Folks, I have been reading a lot, thinking a lot, and analyzing a lot. I am still not sure how to get US traffic back to pre-February 24th levels! But I think it is time to summarize the key theories about the algo change in the US:
- Internal links devalued; only external links really count
- Thin pages cause substantially bigger problems for a domain
- Duplicate content snippets on your pages cause substantially bigger problems
- Too many external links with keyword anchor text ("widget keyword" instead of, e.g., "more...") cause penalties
These are the theories that have kept me working for the past 4 weeks. Do you have any additional ones?
We're still working on recovering from Panda. No significant gain yet after a 60% drop. I've got a lot of pages to touch...
I am seeing a sharp increase in non-search-engine referrals (i.e., our site is being included as a link on many more sites). My guess is that we are seen as an authority and lots of sites are linking to us as part of their Panda optimizations. I've picked up about half of what I lost from Google, but the traffic seems to be of lesser quality.
Anyone else seeing a spike from random other websites? The ramp-up is unmistakable.
Bounce rate is the joint result of my SEO and Google's search results. Since I haven't made any material changes, the current bounce rate is on G's head. While it is nice to know where to point a finger, knowing this doesn't fix the problem.
I put out quality, relevant, original content and use a light hand with tags and categories to help the G 'home in' on what I'm offering.
Look, last month my unique visits grew by about 30%. I'd be very happy about that, except that most of those people were bad referrals... my bounce rate soared from <30% to 98.5% on one site and from <30% to ~65% on another.
These are information sites. The one with adsense got the 98% bounce. I think that people, at least MY readers, are seeing adsense ads and bolting for the door. I just wish they'd use the ads once in a while on their way out. :-(
You are bang on the money.
Fact is Google is a confidence trick of marketing. Their results have been horrific for years but they are the market leader and can get away with it.
Pick a keyword, any keyword, do some research around it or its close relatives, and you will find several obvious discrepancies.
Couple this with the fact Google have always said: Good content wins.
a) Google can never know what is good content.
b) Google have never proved they know what is good content.
c) Google want a site to LOOK like it's made by an expert.
The SERPs right now are dog#*$!. There is ZERO point changing anything until things settle down and we can gain an understanding.
1) Security certs CANNOT be used as a ranking factor. Why?
a) they cost money
b) google don't sell them
c) google have their own tool for this
d) they don't work - please don't challenge me on this, the whole industry is a massive scam.
2) Loading times...yah, ok. This is just like "your site should be coded to modern standards"...tables still do just fine (god help us they suck though :P).
3) Keyword-targeted content - apparently people are scared they're cracking down on this. So let me understand this right: Google are penalizing people for using keywords to describe their content in titles, breadcrumbs, navigation and so on. I see terms like "it's unnatural". Well, if I have a blue widget and a red widget and I sell them in the widgets section, guess what, I'll link to them in each of the several possible ways. Why? Because this is merchandising.
Google became the game, they can't change it without becoming even worse than they already are.
This post is not designed to be negative to the really great comments you guys are making, it is entirely aimed at Google and how poor they have become.
Has anyone found Google patents which relate to decision trees or anything Panda like?
Meanwhile, to get inside Mr. Biswanath Panda's head, since Google has not been very forthcoming, and there aren't many reports of Panda recoveries, we should read his writings.
Panda has put out at least one paper in 2011, 2010, and 2009. Below are the pdf files for these 10- to 12-page papers:
2011. Fast Algorithms for Finding Extremal Sets
2010. The Model-Summary Problem and a Solution for Trees
2009. PLANET: Massively Parallel Learning of Tree Ensembles with MapReduce
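For anyone unfamiliar with what "learning tree ensembles" (the subject of the PLANET paper) actually means, here is a toy, pure-Python sketch of how an ensemble of tiny decision trees votes on a page. To be clear: every feature name and threshold below is a hypothetical illustration made up for this sketch, not anything Google has confirmed; PLANET is about training such trees at massive scale, not about these particular signals.

```python
# Toy sketch of tree-ensemble classification (pure Python).
# All feature names and thresholds are hypothetical, for illustration only.

def stump(feature, threshold):
    """A one-split decision tree: vote 1 ('quality') if feature > threshold."""
    def classify(page):
        return 1 if page[feature] > threshold else 0
    return classify

# An ensemble of trees votes; PLANET describes learning many such trees
# in parallel with MapReduce. Here we just hand-pick three stumps.
ensemble = [
    stump("original_word_count", 300),   # thin pages vote 'low quality'
    stump("external_inlinks", 10),       # a trust/popularity signal
    stump("unique_content_ratio", 0.8),  # duplicate snippets drag this down
]

def ensemble_score(page):
    """Fraction of trees voting 'quality' for this page."""
    votes = sum(tree(page) for tree in ensemble)
    return votes / len(ensemble)

page = {"original_word_count": 450, "external_inlinks": 3, "unique_content_ratio": 0.9}
print(ensemble_score(page))  # 2 of 3 stumps vote 'quality' -> 0.666...
```

The point of the sketch is structural: no single signal decides anything, the ensemble's aggregate vote does, which is why single-factor "smoking gun" hunting tends to fail against this kind of classifier.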
|to get inside Mr. Biswanath Panda's head |
He appears to be a machine learning specialist - but that doesn't mean he knows a lot about WHAT the machine is learning, or even what data the process ranges over. So even though we assume it was his breakthrough in large scale processes that made Panda possible, I doubt that his papers are going to provide a lot of insight into the rating details of the new algorithm.
I can say I looked through his papers and I got no insight into the specific criteria used for rating - only the processes that were used to pick those criteria.
That Fast Algorithms paper is a good read, potentialgeek. I read a bit of Mr. Panda's stuff too and am struck with the feeling that he is smart but was also limited in scope and doing as told, so I stepped back to consider what his instructions might have been.
It's important to remember that Panda is a product of the search quality team and not the web spam team. Finding good stuff fast is the goal so it stands to reason that a mix of old data is tempering whatever data Panda is using.
From that, which is highly simplified, you can make assumptions about what's going on. IMO these are some of the most likely attributes of Panda: a) it uses trust and quality signals not under your control; b) traditional evaluations are tempered with new trust and popularity factors; c) a blind eye is turned to things such as SEO, which is left to traditional methods of evaluation and only forms part of any given page's 'value'.
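The "tempering" idea in (b) can be made concrete with a minimal sketch: assume (purely hypothetically; the weights, score names, and the linear form are my guesses, not anything documented) that a page's final score is a weighted blend of a traditional relevance score and a Panda-style quality score.

```python
# Hypothetical sketch of 'tempering': blend a traditional relevance score
# with a Panda-style quality/trust score. The linear form and the weight
# alpha are assumptions for illustration, not a known Google formula.

def blended_rank_score(traditional, panda_quality, alpha=0.7):
    """Linear blend: alpha weights the old signals, (1 - alpha) the new ones."""
    return alpha * traditional + (1 - alpha) * panda_quality

# A page that ranked well on traditional signals but scores poorly on the
# new quality signal gets dragged down, without being zeroed out entirely.
print(blended_rank_score(0.9, 0.2))  # 0.7*0.9 + 0.3*0.2 -> 0.69
```

Under a blend like this, a strong legacy page with weak quality signals sinks but doesn't vanish, which matches the partial (rather than total) traffic drops many posters here describe.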
I ran a test, I took a heavily pandalized page and gave it some buzz. I added a single paragraph update which did nothing to rankings for 2 months and then I sent it into the social networks. As it got attention it very much came back to life.
To sum it up - the pages and folders on your desk are worth more to Panda than the ones with cobwebs in your file room, but that file room is golden for traditional search. Today we're seeing a mix of the two data sets, and we're seeing it lightning fast.
Conclusion: It may be dangerous to dust off old pages, since you'll be affecting history metrics, but it may also be dangerous to ignore getting new buzz for your recent content. Ideally all your content is met with much fanfare and then settles into a prime location within your archives to maintain its value. The 'smoking gun' is not any one factor anymore, though changes are making ripples, both good and bad at the same time.
Gameplan - leave older content alone, create wonderful new content that mentions the old where appropriate, stop reading into Google so deeply and just build good stuff.
P.S. I can't believe I just read the 215 posts above this one. Brett, when you have time, come up with a system that grabs the most pertinent info in long threads and condenses it! (you'd become a rich-er man :-).