| 5:47 pm on Apr 28, 2011 (gmt 0)|
indyank: some of my pages on hubpages are #1 and make a couple hundred $$ per day (and yes, they are #1 for high-CPC/high-traffic keywords, which Panda definitely included). Same with EZA and Buzzle; I use a lot of Web2 sites. The point is: do not generalize. Google's ranking is page specific, not site specific (unless it is a site-wide manual penalty, which is not the case with Web2 sites). The key is a diversified backlink profile with lots of social signals, plus good long posts that include a diversity of elements. Throw in an image, a video. Make sure all elements are ranking high themselves. If you throw in a video, make sure it is ranking high for your niche in Google, etc. You get the idea.
| 5:52 pm on Apr 28, 2011 (gmt 0)|
|some of my pages on hubpages are #1 and make a couple hundred $$ per day (and yes, they are #1 for high-CPC/high-traffic keywords, which Panda definitely included). Same with EZA and Buzzle; I use a lot of Web2 sites. |
Two of my writers have contributed very good research/expertise to those hubs you mentioned. It may be a function of author credibility in a lot of cases. There has been speculation (earlier in this thread) that author identity, trust, and reputation modeling could be used as a ranking factor.
| 6:01 pm on Apr 28, 2011 (gmt 0)|
|point is: do not generalize, Google's ranking is page specific, not site specific |
Correct... from where I'm sitting.
| 6:07 pm on Apr 28, 2011 (gmt 0)|
crobb305: Google can't read and judge that you have good research on there. Come on! It needs signals to judge it: backlinks, tweets, FB likes, etc.
| 6:12 pm on Apr 28, 2011 (gmt 0)|
|crobb305: Google can't read and judge that you have good research on there. Come on! It needs signals to judge it: backlinks, tweets, FB likes, etc. |
Google can do a lot of things using the very signals you just mentioned (which has been the gist of this entire thread). Credible authors will have backlinks, tweets, FB likes, etc. Google can model an author's trust, credibility, and reputation by creating a statistical profile from the aggregation of those data. This may or may not be a ranking factor, but it can be done. I did NOT say that Google can fact check a document. Read more carefully.
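Purely as an illustration, the kind of aggregation I mean could be sketched like this. Everything here is hypothetical: the signal names, the weights, and the scoring function are made up for the sketch, not anything Google has published.

```python
# Hypothetical sketch: model an author's credibility by aggregating
# link/social signals across their documents. All weights are invented.
from dataclasses import dataclass

@dataclass
class DocSignals:
    backlinks: int
    tweets: int
    fb_likes: int

def author_credibility(docs: list[DocSignals]) -> float:
    """Aggregate per-document signals into a single author score."""
    if not docs:
        return 0.0
    weights = {"backlinks": 1.0, "tweets": 0.3, "fb_likes": 0.2}  # invented
    total = sum(
        weights["backlinks"] * d.backlinks
        + weights["tweets"] * d.tweets
        + weights["fb_likes"] * d.fb_likes
        for d in docs
    )
    # Average per document so sheer volume of output doesn't dominate
    return total / len(docs)

print(author_credibility([DocSignals(10, 5, 20), DocSignals(2, 0, 3)]))
```

The point is just that a statistical profile like this is mechanically easy to build once the raw signals exist; whether it is actually a ranking factor is a separate question.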
| 7:38 pm on Apr 28, 2011 (gmt 0)|
I have the advantage (or disadvantage) of about 100 similar domains, each targeted locally. My observation is that sites with higher quality pages have survived and actually improved. We have two "classes" of pages, I guess you could say. The sites that got hit the hardest had more of the "lower class" (less text) pages than "higher class" (more text on the page) pages. I am testing now by rewriting the content on pages that were #1 pre-Panda 2 (we survived Panda 1 just fine). Early indications are favorable: pages that I have rewritten are moving up, albeit slowly. I am also seeing G taking longer to recache the new content. Where they used to recache pages in 2-3 days, it's taking 7-10 now to recache changes.
| 7:47 pm on Apr 28, 2011 (gmt 0)|
It's funny how things are working out. It's like the algo is going back to the 1990s again.
Back in those days, it was all about the text on the page, and here we are over a decade later saying that pages with more words rank better.
Presumably just adding 200 nonsense words at the end isn't going to make any difference. It's got to have something to do with the subject matter... which sounds like keyword density is making a comeback too!
Maybe webrings will make a comeback too. Any site on a webring gets a bonus.
| 8:21 pm on Apr 28, 2011 (gmt 0)|
|maybe webrings will make a comeback too. any site on a webring gets a bonus |
ROTFLMAO! That's the quote of 2011, IMO. Thanks for the laugh.
| 9:40 pm on Apr 28, 2011 (gmt 0)|
>> Sites like hubpages will never move up as long as Panda is active. The kind of content on their site is total crap, and a huge percentage of it is scraped.
It makes you wonder why so many Wikipedia articles with questionably 'original' content continue to rank #1.
| 10:21 pm on Apr 28, 2011 (gmt 0)|
Andem: Good point. My guess is that since Wiki has all-encompassing content, the user is actually more likely to stay within the wiki domain clicking around, meaning that the back button is not used as often and time spent on the wiki domain is significantly higher than, say, on hubpages. Thus wiki may win in terms of the new set of user-generated signals. The system of highly contextual internal links used within wiki is very sound, and wiki is HUGE, in terms of traffic, PR, content, etc. It is as big as Google itself. This hugeness gives it a special spot in Google. While Hubpages or EZA is a content farm, wiki is not a farm; it is, so to speak, a super farm. It is like the economic laws that apply to average countries do not apply to economic superpowers. This is the best analogy I can come up with.
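The back-button behaviour being speculated about here is what search folks usually call pogo-sticking. A toy sketch of how click logs could be turned into such a signal, with the field names and the 10-second threshold entirely invented for illustration:

```python
# Toy sketch of a pogo-sticking signal from search click logs.
# Field names and the 10-second bounce threshold are made up.

def engagement_score(clicks: list[dict]) -> float:
    """Fraction of clicks that were NOT quick bounces back to the SERP.

    Each click dict has 'dwell_seconds': the time spent on the page
    before returning to the results page, or None if the user never
    came back (the best outcome for the landing site).
    """
    if not clicks:
        return 0.0
    bounces = sum(
        1 for c in clicks
        if c["dwell_seconds"] is not None and c["dwell_seconds"] < 10
    )
    return 1.0 - bounces / len(clicks)

# A page users click around on and stay scores higher than one
# they abandon immediately.
wiki_like = [{"dwell_seconds": None}, {"dwell_seconds": 240}]
thin_page = [{"dwell_seconds": 4}, {"dwell_seconds": 7}]
print(engagement_score(wiki_like))  # 1.0
print(engagement_score(thin_page))  # 0.0
```

Under that kind of metric, a sprawling site that keeps users clicking internally would naturally beat a thin page that sends them straight back to the results.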
| 12:43 am on Apr 29, 2011 (gmt 0)|
Anyone find it crazy that, as a result of Panda, we are having to evaluate our sites on a page-by-page basis to work out whether to block "thin" or lower quality pages so that the whole domain doesn't get a penalty?
Absolutely mental. How can non-webmasters deal with this? How do e-commerce sites with manufacturers' descriptions and better prices deal with this?
It is not about content by itself in the commercial web.
Why can't Google just ignore the "thin" or listing type pages like category links, blank profiles, etc. etc.
Just give weight to the good stuff and ignore stuff that you can't score as quality (and that could be lower "quality" for other reasons that there aren't 500 words of unique content on it).
At the end of the day this "quality" recognition algorithm does seem to be based on lots of what is being discussed on this forum - but fixing it is near impossible for so many people.
I have recovered my penalised sites. I did this by buying aged PR 5/6 domains and putting unique content on them with price comparison tech. I am now doing this aggressively and not pursuing my "one site" objective where I create branded sites etc.
Surely this is the opposite of what Google really wants in its index.
| 12:51 am on Apr 29, 2011 (gmt 0)|
I also think it is ironic that I was speaking to my adsense for search rep the other day and they were saying how much they loved eHow.
So the algo that was the eHow killer has turned out to be much more than that - the clamour to downgrade "content farms" by webmasters has created the Panda algo.
Unfortunately Google always have to go one step further - algorithmic rather than manual - and have spent the last year working out how to downgrade these types of sites using technology.
As usual it is crap and takes out several thousand real sites (not mine - I have a few thousand that are jumping up nicely with no content; I accept I am an affiliate and it is up to me to stay ahead of the game).
Why not just employ quality raters and manually downgrade domains that are mass content farms? It would have taken about 5 hours with a few guys doing it manually. You could even go crazy, employ a few of them ongoing, and keep on top of it.
| 12:55 am on Apr 29, 2011 (gmt 0)|
I could do it in a few minutes:
SELECT * FROM GOOGLEDATABASE
WHERE DOMAIN NOT IN (SELECT DOMAIN FROM WHITELISTEDSITES)
  AND (HASADSENSE = 1 OR AFFLINKS = 1)
ORDER BY INDEXEDPAGES DESC
LIMIT 500000
substitute 500,000 for whatever figure you want to look at - then employ raters to look at those sites.
[edited by: Swanson at 1:00 am (utc) on Apr 29, 2011]
| 12:57 am on Apr 29, 2011 (gmt 0)|
|I am now doing this aggressively and not pursuing my "one site" objective where I create branded sites etc. |
I'm aware of 2 networks generating billions of $'s across a sophisticated, combined network of 50,000+ websites on different domains, owner/whois profiles, etc. There must be others out there. They have overtaken branded sites and of course cannot be maintained to any degree of quality - it's just unworkable in the long run. They completely escaped Panda - in fact they improved, so I'm told, and the pages are well constructed. But I take your point, Swanson - is this what Google wants: websites created to avoid getting penalised?
Surely it's better to work with webmasters and brands through WMT and help them meet the standards. If the sites are no good, they're no good, but if they can be improved, then I recommend Google be a bit more straightforward in its communication.
It'll keep everyone happy and stop this propagation of unnecessary sustainability-type strategies.
| 1:19 am on Apr 29, 2011 (gmt 0)|
Whitey, I totally agree.
The core problem here is that a bunch of tech guys have done this, huddled away writing code, backslapping each other, riding around on crazy surfer-dude bikes in Google's cool office.
But this is about language, and content made of language - no algorithm is good at judging written content.
I have a master's degree in English and have written 3 novels - and I don't call myself an expert.
But what I do know is that the tech guys working on Panda are light years behind my ability in language - and I don't mean I am some sort of clever writer; I mean I can write for whatever audience is needed.
But looking at the insider stuff on how they worked out synonyms, related queries, etc., it is child's play - and they have rolled that out into a billion-dollar business.
I retain the view that they are being allowed to develop stuff based on language that is out of their league - they really are little kids showing off how clever they are at coding. Their job descriptions encourage it.
| 1:33 am on Apr 29, 2011 (gmt 0)|
Whitey, I'm going to keep buying those PR 5/6 domains till I hit 50k!
I have 4k now and have developed 3k - they were originally just keyword-rich domains to sell, but now I am developing them with content to make more from them.
| 3:14 am on Apr 29, 2011 (gmt 0)|
|Just give weight to the good stuff and ignore stuff that you can't score as quality |
I hope Google uses this mentality when it comes to comments and other forms of UGC. Unfortunately, I don't believe that to be the case for Mr. Panda.
Two of my sites are set up identically to each other, just on different topics. One escaped Panda, and one got hit hard.
The pandalized site has 5,000+ comments, while the escapee has always had comments disabled.
Pages with 50+ comments have sunk to page 5 and beyond in the SERPs. Now, this site caters to a young, uneducated crowd, so a lot of the comments aren't grammatically correct. But still, am I supposed to sacrifice user engagement to get in Panda's good graces?
I suppose I should wipe the slate clean and just install Facebook Comments.
| 4:13 am on Apr 29, 2011 (gmt 0)|
Come on, the one-site strategy still works if you are Wikipedia or GoDaddy - take a look around, guys. It's not all that bad. But if you are an AFFILIATE, diversification is a must, period.
| 4:39 am on Apr 29, 2011 (gmt 0)|
Hey Google, my /about.html page is really SUPER thin, should I delete it to avoid being pandalized?
What about all those Wikipedia stub pages with 5 words, should I delete all 9 billion of them?
How about I spam you into the ground with hexadecimal domain names and scraped content?
Just kidding, or am I?
| 9:48 am on Apr 29, 2011 (gmt 0)|
>>The mere size of those sites makes recovery even more problematic. I think the probability of recovery may increase with decreasing site size.
This. Is anyone with a large site, with say... 50,000 pages or more reporting any significant recovery of any kind?
| 10:27 am on Apr 29, 2011 (gmt 0)|
|50,000 pages or more reporting any significant recovery of any kind? |
60% Pandalized in Panda 1; no change with Panda 2.
My site had about 60,000 pages. I found about 25,000 pages of junk user-generated content in there. I noindexed all that user-generated content March 1 and removed the 25,000 pages over two months. I just finished Monday, removed the noindex, and saw a slight blip up of 10% around Tuesday, when everyone was reporting it.
But nowhere near a recovery.
| 11:08 am on Apr 29, 2011 (gmt 0)|
Shatner: over 3 million pages, a drop of 15% on the 24th, and now at around 7-8% traffic loss - so I am slowly working my way up again.
| 2:46 pm on Apr 29, 2011 (gmt 0)|
|Why not just employ quality raters and manually downgrade domains that are mass content farms |
The main problem I see is that this could be abused or applied unfairly when subjective critiques allow a site to be penalized or banned. Each reviewer will have a different level of tolerance, they could be in a bad mood and ban your site if they think it's ugly (despite the content), or they may simply not understand what they are looking for.
I remember the old Inktomi blacklist that carried over to Yahoo. Reviewers were banning "affiliate sites" from the search results. I was on that blacklist for 5 years. Review after review, they kept seeing the same affiliate link. In my traffic logs, I could see the reviewer go straight to a page with an affiliate link -- nevermind the 120 pages of unmonetized content. Unfortunately, the reviewers weren't looking at the content. They would find an affiliate link and hit the ban button, making the judgment in under 5 seconds before leaving. So, I think it's good to have an objective measure. At least we have hope of recovery since it is algorithmically determined.
| 3:08 pm on Apr 29, 2011 (gmt 0)|
crobb305: I am testing something with my penalized affiliate site now and will share my experience when I finally get it out of the box, but I feel like the same thing you are talking about is happening with Google manual reviewers who slap -50 penalties on affiliate sites. At least we know that these are manual, as the Cutts and "quality" team video confirmed.
| 6:30 pm on Apr 29, 2011 (gmt 0)|
|I remember the old Inktomi blacklist that carried over to Yahoo. Reviewers were banning "affiliate sites" from the search results. I was on that blacklist for 5 years. Review after review, they kept seeing the same affiliate link. In my traffic logs, I could see the reviewer go straight to a page with an affiliate link -- nevermind the 120 pages of unmonetized content. Unfortunately, the reviewers weren't looking at the content. They would find an affiliate link and hit the ban button, making the judgment in under 5 seconds before leaving. So, I think it's good to have an objective measure. At least we have hope of recovery since it is algorithmically determined. |
I hope the person who came up with that idea lost every penny in the stock market. Stupid b@stards, banning sites for life with almost zero chance of recovery just because one person didn't like them at a certain point. They loved the $200 to list a site in their checked-for-quality directory though, and they had just introduced pay-for-inclusion in the SERPs too. It felt great when arrogant Yahoo search died. I LOVED IT.
</end somewhat off topic>
| 10:18 pm on Apr 29, 2011 (gmt 0)|
|It felt great when arrogant yahoo search died, I LOVED IT |
Yes it did! In fact, I just had a surge of endorphins when I read your post. It was a great feeling of vindication after being on the Inktomi blacklist for 5 years. Thankfully, with Bing, search has changed for the better, I think.
I think it's critical that websites be ranked objectively (algorithmically), to the greatest extent possible.
| 11:33 pm on Apr 29, 2011 (gmt 0)|
crobb305, after thinking about it I totally agree with you about it being better to use an algorithm - and I remember the Inktomi blacklist which was really amateur.
I suppose Google do, in a way, use manual intervention to a degree - for example, when they react to spam reports, most recently in the case of J C Penney buying links; once reported, Google manually downgraded the effect of those links. They also try to discover link-selling networks and manually penalise the participating sites.
Ironically the quality raters are used to check the quality of the algorithm changes - and they gave this one the green light.
Illustrates perfectly why the quality raters are useless!
| 11:55 pm on Apr 29, 2011 (gmt 0)|
Noticed some movement today. A lot of different sites shuffled around.
An exact-match domain overtook the #1 position today. The actual domain is not in use and has one of those prefab default landing pages with a bunch of ads on it.
I can almost understand a lot of Panda's glitches, but how in the hey can Google actually favor a site with one of these landing pages? They are so common, Google should be well aware of them by now and understand they offer no value to the searcher. This landing page is not new; it's been on this domain for years.
| 8:52 am on May 3, 2011 (gmt 0)|
My sites that were affected have been working their way back, and one site which had a PageRank of 5 and was completely pulled from the results is back with a PageRank of 6. I guess Panda is still reviewing sites.
| 3:48 pm on May 3, 2011 (gmt 0)|
I've noticed many pages coming back as well - traffic is still down from pre-Panda 2 but appears to be slowly returning. Interestingly, I did make some changes to a few pages - not too many - and those pages HAVEN'T come back yet; Google still has the old content cached and the old post-Panda rank.
| 8:23 pm on May 3, 2011 (gmt 0)|
We're still working on recovering from Panda. No significant gain yet after a 60% drop. I've got a lot of pages to touch...
I am seeing a sharp increase in non-search-engine referrals (i.e. our site is being included as a link on lots more sites). My guess is that we are seen as an authority and lots of sites are linking to us as part of their Panda optimizations. I've picked up about half of what I lost from Google. The traffic seems to be of lesser quality.
Anyone else seeing a spike from random other websites? The ramp-up is unmistakable.