| 5:07 am on Jul 4, 2011 (gmt 0)|
|In the "good old days" pre-Panda, articles/content could rank for anything -- even when people wanted to shop, not read. Big retailers wouldn't clutter up their slick stores with tonnes of content just to compete, so content sites did disproportionately well -- even when they were just in the way of searchers' real intentions. Not under Panda. |
I don't want to argue the merits (or lack thereof) of my site. The searchers coming to the site are looking for brick and mortar stores, not ecommerce sites. The content is ads for those stores, written by the owners, or by me. It seemed to work, as the owners said they'd received new customers, and were happy with the service.
As I said before, if your theory is correct--and there's more reason than not to think it is--then Panda is seeing the content of the site as something that matches a pattern it's identified as "bad". If that's the case, then there's really no way for my site to come back again, at least in the form in which I'd created it.
| 7:03 am on Jul 4, 2011 (gmt 0)|
|Panda is seeing the content of the site as something that matches a pattern it's identified as "bad" |
A genuine false positive, I'm afraid...
| 8:24 am on Jul 4, 2011 (gmt 0)|
So what you are basically saying is that there is no way to identify the specific pages that caused the pandalization.
So possible rewrites must happen site-wide?
| 9:03 am on Jul 4, 2011 (gmt 0)|
|So what you are basically saying is that there is no way to identify the specific pages that caused the pandalization. |
Erm...no...what I am saying is that rewrites might actually be a complete waste of your time.
If your site is a square peg and users are looking to fill a round hole, you can write Nobel Prize-winning prose, but it won't make a jot's worth of difference.
| 9:17 am on Jul 4, 2011 (gmt 0)|
Sorry, didn't mean you specifically, but the quintessence of this thread...
| 12:20 pm on Jul 4, 2011 (gmt 0)|
suggy - what you are saying makes perfect sense, if I am understanding this correctly:
Google is collecting user data based on queries/clusters of queries - and creating a profile of what the page should provide based on the positive signals.
If enough of your pages do not fulfil the ‘profile’ for the targeted queries, you will ultimately see a site-wide demotion?
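The "profile" idea above can be sketched as a toy scorer. Every feature name, weight, and cluster label below is invented purely for illustration -- nothing is known about Google's actual signals. The idea: learn which page features attract positive user signals for a query cluster, then score a page by how much of that profile it covers.

```python
from collections import Counter

# Hypothetical feature profiles per query cluster, as if learned from
# positive user signals. All names and weights are made up.
CLUSTER_PROFILES = {
    "transactional": Counter(price=3, add_to_cart=3, product_image=2, reviews=1),
    "informational": Counter(long_text=3, headings=2, references=2, author=1),
}

def profile_match(page_features, cluster):
    """Score how well a page fits the cluster's learned profile:
    the fraction of the profile's total weight the page covers."""
    profile = CLUSTER_PROFILES[cluster]
    covered = sum(w for f, w in profile.items() if f in page_features)
    return covered / sum(profile.values())

shop_page = {"price", "add_to_cart", "product_image"}
article_page = {"long_text", "headings", "references"}

print(profile_match(shop_page, "transactional"))     # high fit
print(profile_match(article_page, "transactional"))  # no fit for a shopping query
```

On this sketch, an article page scores zero against a transactional profile no matter how good its prose is -- which is exactly the square-peg point made earlier in the thread.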
| 3:12 pm on Jul 4, 2011 (gmt 0)|
Suggy, with all respect, while what you're saying makes sense, so do many of the other theories that have been offered recently.
The problem with nearly all of the theories is that it's impossible to test any of them to see if they work without completely redoing a site. Even then, with Google not bringing sites back in a timely fashion, one can never know for certain if his site didn't come back because he didn't redo the site to address the correct issues, or if the site didn't come back because Google is waiting.
|If your site is a square peg and users are looking to fill a round hole, you can write Nobel-prize winning prose, but it won't make a jots worth of difference. |
That makes sense, but I'm finding enough examples with my site that I can't say with any certainty that you're right.
For example, I mentioned the ads I have for widget stores on my site. The number of views of those ads increased 21% from the period 2/24/2011 to 7/3/2011 over 2/24/2010 to 7/3/2010.
I also have a very small online store on my site. Some brand pages on the site increased in page views by 75%, while others decreased 43%.
To draw in visitors for the widget stores on my site, I feature photos and descriptions of widgets. These widget pages were hit very, very hard by Panda, and page views are down anywhere from 37% to 59%. However, I've no-indexed about half these pages, so page views would of course be down.
With Google playing this cat and mouse game of not ranking Pandalized sites that have been redone, it makes little sense to redo a site. At some point the picture should be clearer (I hope).
| 3:23 pm on Jul 4, 2011 (gmt 0)|
|Maybe this is micro not macro.... |
And also include in that the variables for geotargeting, quite possibly TLD extension, and also whether, or how, G has evaluated a site as being international or national.
I'm seeing this, and have been for some weeks now. For instance, my UK-registered example.com, UK-hosted since 1994, can hardly ever be found in G.uk but is consistently in G.com, whereas my example.co.uk is in the top 3 for everything targeted in G.uk and rarely in G.com.
Interestingly my new example.eu can already be found ranking extremely well in both G.com and G.uk.
| 3:42 pm on Jul 4, 2011 (gmt 0)|
|If enough of your pages do not fulfil the ‘profile’ for the targeted queries- you will ultimately see a site wide demotion? |
Only it's not site wide. It's per page per keyphrase and it's largely predicted from related searches and semantic clustering. It's linked to Google's ability to better understand intention.
It's genius because Google does not have to understand 'quality', but only that users don't like pages featuring those 'signals' for this search (or that we predict they won't, based upon other searches/the search phrase/semantics).
I think the site wide element is possibly just that if your round peg type page is linked to lots of other pages with the same signals, the effect of being 'out of place' in a square peg type of search is magnified. Likewise, if your square peg page links to lots of round peg types this might diminish your square pegness!
OK, I think that's enough with the pegs now!
[edited by: suggy at 3:56 pm (utc) on Jul 4, 2011]
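The peg metaphor describes something like score propagation over a site's internal link graph. A minimal sketch of that idea -- all scores, page names, round counts, and the mixing weight are invented for illustration, not Google's actual method: blend each page's own query-fit score with the average score of the pages it links to, so a well-fitting page surrounded by off-target pages drifts downward.

```python
def propagate_fit(own_fit, links, rounds=2, mix=0.5):
    """Blend each page's own query-fit score with the average score of
    its linked neighbors, for a few rounds. A page linked to lots of
    off-target pages is dragged toward their lower scores."""
    scores = dict(own_fit)
    for _ in range(rounds):
        nxt = {}
        for page, score in scores.items():
            neighbors = links.get(page, [])
            if neighbors:
                avg = sum(scores[n] for n in neighbors) / len(neighbors)
                nxt[page] = (1 - mix) * score + mix * avg
            else:
                nxt[page] = score
        scores = nxt
    return scores

own = {"a": 0.9, "b": 0.2, "c": 0.2}  # "a" fits the query; "b" and "c" do not
links = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
result = propagate_fit(own, links)
# "a" ends up well below its own 0.9: its off-target neighbors dilute it
```

Under this toy model, the site-wide effect falls out naturally: one "out of place" page does little damage in isolation, but a site full of mutually linked off-target pages converges to a low score everywhere.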
| 3:51 pm on Jul 4, 2011 (gmt 0)|
Ironically, I can see this page being #1 for "pegs" within a few days, and some legitimate peg sellers will not be happy about this ... :-)
| 4:56 pm on Jul 4, 2011 (gmt 0)|
Here is the issue as I see it right now. If there were an obvious pattern to Panda demotion, it would be very un-Google-like. If things become obvious, then cheats and spammers can exploit them. If there's a Panda playbook, then Google has failed. That's the problem here, folks.
If there is some playbook to recover from Panda, then guess what? All the "crap", as they call it, can retool and get right back into the rankings. So if it's easy to rebound, the whole exercise is a waste of Google's time. Make sense? This is awful; some of us realize this, but some of us are still in denial mode. The ground is still shaking and it's not over.
| 5:00 pm on Jul 4, 2011 (gmt 0)|
All I got left is tricks, I'm done with the rest. I am objective enough to judge competition and see that something's obviously wrong. So I am going to try a few tricks, just for Google. All my pages have a < previous - next > and link to a set of popular pages from each category. That I am not going to change since it makes sense and it's good for the users to see and compare.
| 11:03 am on Jul 5, 2011 (gmt 0)|
Interesting theory here by suggy. I've noticed the same thing to a large degree with some queries, particularly with the queries my Pandalized sites used to rank for. My feeling was/is that Google wants to return certain types of pages for certain types of queries, especially related to e-commerce, but also for some information searches (vs. transactional). I'm definitely seeing some exceptions to this though, and there are a couple of reasons I have doubts.
Remember that right after Panda and repeatedly since then, Google employees such as Matt and Amit have recommended noindexing/removing/moving "low quality" content as a solution to Panda devaluations. They've also repeatedly said that "low quality" pages will affect the ranking of your entire site with Panda.
So, one question we have to ask ourselves with any Panda theory (assuming Google isn't stating the above as a form of misdirection...definitely possible) is: Would "removing" low quality content have the same result as implementing what this theory implies?
If Panda has something (or especially everything) to do with matching page types and query types, how would removing low quality content help with that? It seems to me that it would not. If Google's suggestion about removing low quality content to escape from Pandalization isn't BS, then it seems this theory about page type/query type would be incorrect.
I should add that I have removed low quality content (not actually low quality, but low quantity...product level pages with lots of similarity, etc.), reducing two of my sites by literally 90%, and it has had zero effect.
One suggestion I think would be useful would be to rule out certain factors as primary causes of devaluation. It's easier to theoretically eliminate a cause than to prove a particular one. Not only have I removed or noindexed low quality pages to no effect, but I've also substantially increased the quality of remaining pages, to no effect. Changes I made to one site just 2 weeks after Panda 1 decreased the bounce rate by 50% and increased pages per visitor by 40%. I'm 90% sure Panda has nothing to do with user data, as I saw no improvement after making these improvements, and other ranking sites (thin affiliates vs. my thick 30+ year old, legit business) would most definitely have far worse usage stats.
The theory of matching query types to page types feels good, and I'm seeing what looks like that to some degree. I'm going to test it. But it pretty much violates Google's advice on "removing" low quality content IMO. Thoughts?
| 11:49 am on Jul 5, 2011 (gmt 0)|
In a way it could reinforce it. If Google are focusing on intent from both the user and destination then you could assume that low quality content blurs the intent of the site.
| 12:53 pm on Jul 5, 2011 (gmt 0)|
|but I've also substantially increased the quality of remaining pages to no effect. |
But have you added the qualities of 'quality' Google wants for that search? If your definition of quality doesn't extend beyond adding more words and pictures, possibly not.
On the site wide front, previous poster has that figured. Off target pages linked from the page to rank dilute it. Plus, Panda is likely two sided... working both sides of the line Matt/Amit spoke of. Too many of the wrong sort of 'qualities' will mark pages as spam/ not the sort of content we ever want to show and dent trust in the whole site.
| 1:15 pm on Jul 5, 2011 (gmt 0)|
Regarding quality...my e-commerce site VERY much matches the profiles of the top ranking sites that were not hit by Panda. And by quality I mean the same profile (general amount of links, text, images, and arrangement) but content that has increased user engagement. There's no way to know for sure, but yes, I do think I've added the qualities that Google wants for those searches. I'm almost certain of that.
I'm not so sure about the "too many of the wrong sort of 'qualities'" pages. Like I said, on two sites I removed literally 90% of pages to get rid of thin pages...primarily through consolidation...to match existing top ranking sites in internal architecture, content, and images. I see your point about removing pages with too many of the "wrong qualities", but you'd still be left with the remaining pages having the wrong qualities, and they still wouldn't match the query if you were hit in the first place.
| 2:20 pm on Jul 5, 2011 (gmt 0)|
when did you make those changes?
| 2:23 pm on Jul 5, 2011 (gmt 0)|
From what I gather Panda is able to look at individual pages a lot more than previously and rank them on their own merits.
This might be useful - a site I work on (not our main one) has seen an increase in PR to 6... it has a stack load more quality inbound links (probably twice as many as nearest competitor) but it has slightly less content. Still good content, but not as much.
Result of Panda - moved down a position or two on average. So... the strength of links in the algorithm seems to have diminished, while content and of course other factors seem to have increased...
| 2:42 pm on Jul 5, 2011 (gmt 0)|
walkman: We've been continuously making them since we were hit on 2/24. On our e-commerce site, within a month we had consolidated all individual product pages so that they no longer existed...listing all products for a specific category on category pages rather than having individual product pages. That reduced our site size from ~2000 pages to ~200. We simultaneously improved the text content on top category level pages, etc. Our site has become more and more like the sites that didn't get hit. But in most cases we're "better" on every conceivable metric that used to matter...anchor text links from unique linking domains, brand links, product selection, etc., etc. We also changed style elements (colors, etc.) that had a very significant effect on our usage stats...on 3/18. We moved fast and did everything Google suggested, and our site dropped further for our efforts.
| 2:45 pm on Jul 5, 2011 (gmt 0)|
I've read and re-read this thread probably a dozen times, thinking about Suggy's theory, and some of the other posts here.
Again, the theory does make sense. There's been a ton of talk on this forum lately about Google utilizing user behavior as a ranking metric. Suggy's theory would mean that Google is predicting user behavior based upon what Google is establishing as patterns for various types of sites.
If the theory is correct, then it might be possible to figure out where your site really belongs--in the ecommerce bin, the informational bin, etc--and either trash the site if it just won't fit, or redo the content to make it fit the patterns more strongly. Some pages on a site might be easier to do this with than others.
Is that a reasonable conclusion to draw from what you've said, Suggy?
As I said before, it really doesn't make sense to redo an entire site to test a theory, and Panda seems to be sitewide enough that one can't experiment with a few dozen pages to see if there's an improvement.
Are there sites that have returned from Panda that could be examined to see if Suggy's theory applies to them?
| 3:05 pm on Jul 5, 2011 (gmt 0)|
I think it makes very good sense as well, and goes a long way towards explaining the sites that I have seen that were hit, and the ones that were not (but that, under the traditional way of thinking, you might think would be)
| 3:08 pm on Jul 5, 2011 (gmt 0)|
Well that's the thing. I have basically redesigned both of my sites that got hit, in their entirety. My e-commerce site was most definitely, clearly an e-commerce site before and after. The only problem (according to what Google said) was that I had product pages with thin content. There's almost no difference between hundreds of our products, as they're variations on the same things. By getting rid of those pages we also dramatically changed each of our category pages...since the product info was added to them. (Most of the remaining sites had a similar structure, with either no product pages, product pages on a sub-domain, or far fewer products.) So we DID redesign our entire site to make it fit the current ranking sites.
With our information site we also did a complete redesign. We removed 90% of the pages that could be seen as "low quality" (via noindex), even though they were not low quality. And we completely re-wrote the remaining pages to resemble, yet be different from, the remaining top ranking sites.
It hasn't helped for either site.
I agree that suggy's theory does make sense, and as I said I'm seeing what looks like the application of that theory myself. But it doesn't line up with what Google has said and it hasn't worked for me. I like it nevertheless.
| 3:43 pm on Jul 5, 2011 (gmt 0)|
There is definitely a pattern. We just don't know it. =)
| 3:54 pm on Jul 5, 2011 (gmt 0)|
There may not be a discoverable pattern, ever. It could easily be multiple factors that are each complex, and the relationships between them could matter in ways that make any pattern impossible to see.
| 3:55 pm on Jul 5, 2011 (gmt 0)|
I'm no SEO novice, and I've been looking at this non-stop for 5 months. Many other "experts" have too. If it were simple, we'd have seen it already.
| 4:17 pm on Jul 5, 2011 (gmt 0)|
|There may not be a discoverable pattern, ever. |
|I'm no SEO novice, and I've been looking at this non-stop for 5 months. Many other "experts" have too. If it were simple, we'd have seen it already. |
Maybe someone (G) needs to remove the [en.wikipedia.org...] . Until then, almost whatever you do will not matter.
| 4:34 pm on Jul 5, 2011 (gmt 0)|
|Maybe someone (G) needs to remove the [en.wikipedia.org...] . Until then, almost whatever you do will not matter. |
Definitely. And I'm pretty sure that hasn't happened yet. I'm skeptical of the very rare mentions of recoveries without data. Many people are calling every ranking loss a Panda hit, mistakenly so. And others are seeing small gains and calling them recoveries. It's definitely possible to see some gains even while under the devaluation, and I think that's fooling some. I'm not so sure the glass ceiling has been removed at all, in any of the new iterations to Panda. I wouldn't be surprised if the new iterations are devaluation-only.
| 4:55 pm on Jul 5, 2011 (gmt 0)|
of all the theories, maxmoritz makes the most sense to me.
People here are confusing ecommerce, transactional, and informational searches, and I suspect suggy seems to be tilting more towards the ecommerce stuff when he means "satisfying Google's quality".
I do agree and see what Suggy sees in demotions being more query specific but I also see a site wide phenomenon.
maxmoritz, I agree with you on everything, as I too see the same.
The only explanation for how they could do this is that they not only seem to run it manually, but there also seems to have been a huge one-year (a few man-years) manual effort in working out these demotions.
Considering that we see a few sites returning often for many queries, this suggests that a huge list is being maintained, and that it is the result of this one-year effort.
[edited by: indyank at 5:10 pm (utc) on Jul 5, 2011]
| 5:09 pm on Jul 5, 2011 (gmt 0)|
Thanks indyank. But unfortunately I don't have a theory myself! I think theories are largely impossible to confirm right now due to the lack of real recoveries.
| 5:10 pm on Jul 5, 2011 (gmt 0)|
|It's definitely possible to see some gains even while under the devaluation, and I think that's fooling some. |
Hell, if I have a good day with 2,000 more page views than the past five days, I can report a 10% "recovery".
maxmoritz, what you're reporting is discouraging with regard to Suggy's theory. It would be nice to have more re-worked ecommerce sites to use as tests of the theory, though.
If I do searches for phrases that I ranked well for in the past, I'm seeing the following pattern (if you want to call it a pattern).
If I search for "Acme widgets", the first page will have two to four results for the manufacturer, then the rest will be ecommerce sites selling Acme widgets. The second page is ecommerce sites mixed with informational sites and/or sites selling accessories for the widgets. Informational sites don't begin to show up to any great degree until page three or four.
If I search for "Acme model XYZ", the first page is a mix of ecommerce sites, forum posts, review sites, and informational sites, and subsequent pages are much the same. In other words, more of a mix of results.
My site confuses Google, whether Suggy's theory is right or not, because it's a mix. It has the ads for the retail stores. It has pages with product information, much like ecommerce sites have (but usually more lengthy descriptions) that serve as pre-sell pages for the retail store ads. I also have some articles and niche-related books. Then there's my own small online store, which I'm splitting off into its own domain to make its purpose more clear.
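The SERP mix described above can be checked mechanically. A hypothetical sketch -- the two queries and every result label below are hand-invented to mirror the described pattern, not scraped data: tally the result types on page one and see whether a single type dominates (suggesting Google has settled on one intent) or the mix stays broad.

```python
from collections import Counter

# Hand-labeled first-page results for two query types (labels invented
# to mirror the pattern described above).
serps = {
    "Acme widgets": ["manufacturer", "manufacturer", "ecommerce",
                     "ecommerce", "ecommerce", "ecommerce", "ecommerce",
                     "ecommerce", "ecommerce", "ecommerce"],
    "Acme model XYZ": ["ecommerce", "forum", "review", "informational",
                       "ecommerce", "review", "forum", "informational",
                       "ecommerce", "review"],
}

def result_mix(query):
    """Return the most common result type on page one and the share of
    results it holds; a lopsided share suggests a dominant intent."""
    counts = Counter(serps[query])
    top_type, top_count = counts.most_common(1)[0]
    return top_type, top_count / sum(counts.values())

print(result_mix("Acme widgets"))    # ('ecommerce', 0.8) -- one intent dominates
print(result_mix("Acme model XYZ"))  # no type dominates: mixed intent
```

Run over a real set of queries, a tally like this would make the "brand query = ecommerce page one, model query = mixed page one" observation measurable rather than anecdotal.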
| 5:16 pm on Jul 5, 2011 (gmt 0)|
maxmoritz, I know, because none of the theories have helped anyone so far, and I too don't have anything left now.
Since I don't see a pattern in this like you, I am suspecting google's manual tools in play.
But yes, every point of yours makes sense as you seem to have tried everything and from how you explained it, I could easily see an expert in you.
Suggy definitely has a good theory but we are still scratching the surface.
[edited by: indyank at 5:26 pm (utc) on Jul 5, 2011]