| This 337 message thread spans 12 pages |
|Analyze Panda Losers That Don't Fit The Mold|
| 5:55 pm on Apr 14, 2011 (gmt 0)|
So we've had two iterations of Panda now, and with each iteration has come a published list of the biggest losers. We all know, if we're honest, that many of the sites on those lists deserved to lose, and lost for obvious reasons.
The point of this thread is to pick out the sites from those lists which DO NOT fit that mold, sites where it's not obvious why they lost, and figure out why they were hit.
In doing so, maybe we'll understand why Panda has hit so many people here who don't seem to deserve it either. Here's the list of sites to discuss. I suggest we take them one at a time, going down the list in order, and each list the reasons we think each site might have been Pandalized. Once we've come up with an explanation for a site, we check it off and move on to the next one:
| 10:48 pm on Apr 30, 2011 (gmt 0)|
Methinks only those who've not been penalised by the G team are comfortable with GWMT.
So nothing I say will mean anything to you folks.
Some time ago, a site of mine which had been penalised started ranking again. Very, very shortly after, a Google IP appeared to trace through a few URLs that were marked as errors in GWMT.
I investigated the URLs and found, belatedly, that they were real errors, and that was that: penalised again.
Basically, no matter what they say, these folks like to pick on those who talk to them first.
After all, it's a lot easier than pursuing all the multi-hyphenated sites supported by links from sites of distinctly similar provenance, which now proliferate in many of the SERPs I am interested in.
Anyway, enjoy your warm cosy relationship with them.
A very successful fellow once posted on this forum, "stay at arm's length from the G", and I cheerfully ignored him.
Anyway, what do I know.
P.S. The length of this thread reassures me that my understanding of Panda is sound :) They say that exceptions prove a rule; the problem is how you define the word "exception".
| 11:34 pm on Apr 30, 2011 (gmt 0)|
I'm sorry you feel so jaded about GWT, scooterdude. You know that Google would have detected the errors whether or not you had an account, right? The difference is that without an account you would not get a report about what they found.
| 3:53 am on May 1, 2011 (gmt 0)|
There is no such thing as 'honesty' when talking to a machine. This is a damn machine. I would not recommend setting up GWMT; don't make it easy for a poorly paid reviewer to punish all of your sites one day if you make something that looks like a 'mistake' in their eyes. You'll be rewarded with a -50 hell for *all* domains listed in your account, even those domains which made no mistake. I know what I'm talking about; this happened to me a few weeks ago, and they hit all of them (10-15 domains).
Common sense and your error_log are your friends; their WMT reports, which are 99% buggy, are really not needed.
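On the "your error_log is your friend" point: the poster doesn't share a script, but the idea can be sketched without GWMT at all. A minimal example, assuming an Apache-style combined log format and that Googlebot identifies itself in the user-agent string (the regex and function name here are illustrative, not from the thread):

```python
import re

# Apache combined log format:
# host ident user [time] "request" status size "referer" "user-agent"
LOG_RE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def googlebot_errors(log_lines):
    """Return (path, status) pairs where Googlebot received a 4xx/5xx response."""
    hits = []
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and "Googlebot" in m.group("agent") and m.group("status")[0] in "45":
            hits.append((m.group("path"), m.group("status")))
    return hits
```

Run against your raw access log, this surfaces the same crawl errors GWMT would eventually report, but from your own data and with no lag, which is exactly the alternative the post is arguing for.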
| 5:39 am on May 1, 2011 (gmt 0)|
I used to feel the same way, and then over time I changed my mind. So I can certainly sympathize with the position.
| 8:24 am on May 1, 2011 (gmt 0)|
I say this cautiously: methinks some folks are held exempt from this, and penalties for them probably need a higher level of authorisation.
Ask yourself why it happens to some while others have no conception of it at all. Does that mean it does not happen?
| 2:50 pm on Jun 6, 2011 (gmt 0)|
|There is no such thing as 'honesty' when talking to a machine. This is a damn machine. I would not recommend setting up GWMT, don't make it easy for a poor paid reviewer to punish all of your sites one day if you make something like a 'mistake' in their eyes |
I have found GWMT to be helpful (though you are more experienced; my site is only a year old), such as sending me messages when my site was down (glitches on the web hosting side). Yet I have some concerns about how much you can 'trust' data from GWMT.
I fixed crawl errors over four months ago. Now GWMT is showing the same crawl error on the pages that were corrected (URL missing the [)...]), dating the error back to November 8th, 2010, and claiming the crawl error data was updated on April 8th, 2011.
What the h**? So Google is crawling my site every 5 months or so?
And I did check back after the updates to see that the errors were cleared in my webmaster tools account.
They have now reappeared, though of course they are still 'fixed' on my site and are not actual errors.
Lovely. So maybe having a GWMT account can give you some insight into the workings of 'the machine' - and a valuable one at that.
| 3:25 pm on Jun 6, 2011 (gmt 0)|
Now the page is updated to the current date (June 5th, 2011), but the errors still stand, and there are pages they say have broken links that do NOT have the link anywhere on the page.
How can I trust information from GWMT in evaluating my site with this type of error?
If it's the 'machine' at work, then their processing of data is sorely mis-sorted. Evidence of buggy programming?
To their favor, the original crawl errors were accurate... and were fixed months ago; it's just that the data is now wrong. Very wrong.
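The "broken links that aren't on the page" claim above is something you can verify yourself rather than take GWMT's word for. A minimal sketch using Python's standard-library HTML parser (the class and function names are illustrative; this is one way to check, not what the poster actually did):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every <a href=...> on a page so a reported 'broken link'
    can be checked against what the page really contains."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def page_links_to(html, target):
    """True if the HTML actually contains a link pointing at target."""
    parser = LinkCollector()
    parser.feed(html)
    return target in parser.hrefs
```

Fetch the page GWMT blames, feed its source to `page_links_to` with the reported broken URL, and you have independent evidence of whether the link exists, which is exactly what this dispute with GWMT's stale data comes down to.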