This 195 message thread spans 7 pages.
|Documenting my attempt to re-rank after "Farmer" update|
| 1:58 am on Mar 3, 2011 (gmt 0)|
Ok, so for the sake of all who have been hit by the so-called "Farmer" update, I thought it might be useful to document my own attempt to increase my website's "Quality" measurement (whatever that means), in an attempt to gain back some of my previous good ranking in the SERPs.
My site is rather small compared to many mentioned on these boards (just 700 or so pages), and at its peak it has never made me more than $1,000 in any one month, but in this economy that lost $1,000 is kicking my butt.
The site I am referencing is built on Wordpress as a CMS, has been copied copiously by scrapers in the past, and is a review/news-type site, where I post long, honest, well-written reviews, news, how-tos, etc. about my favorite types of widgets.
I have never paid for a link, and I do not run any sort of linking campaigns, though the site is plenty popular, gets a fair amount of natural links, and has nearly 7,000 RSS subscribers.
Obviously, we know very little about the particulars of this latest Google update, but the consensus seems to be that this is a radical change in the algo, and we can't expect our rankings to just suddenly reverse any time soon.
With that said, I thought I'd document my meager attempt to gain back some of my good graces with Google.
After reading as much as I can find on the subject, I came across this article posted in one of the other threads here, and it seemed like a good place to start.
The quote that jumped out at me was this:
|Sites that believe they have been adversely impacted by the change should be sure to extensively evaluate their site quality. In particular, it's important to note that low quality pages on one part of a site can impact the overall ranking of that site. |
With that in mind, I decided it was time to clean up my site... but where to start? Like most of the good folks here, I tend to think everything I write is pretty good, and I certainly don't copy and paste other people's work, etc. To the extent that it's possible, my pages are unique and well-written.
I finally decided to go through my Google Webmaster Tools account and take a look at exactly which pages lost ranking, paying particular attention to the pages that lost BIG. I'm talking drops of 50, 100, even 200 or 300 places.
Luckily there were only a handful of these.
As I began to dig through them, it became apparent to me that at least 80% of these pages were very obviously "thin." Much thinner than I even remember writing.
So, for lack of a better idea, I canned them; my hope being that by removing entirely the pages that have suffered the most in this update, my entire website will look better to Google.
All told I threw away approx 40 pages. Now this is Wordpress, so "throwing away" just means moving them to the trash, where they are no longer viewable online, but can be easily retrieved should I decide I want them indexed again.
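The thin-page hunt described above can be roughly automated. This is only a sketch under my own assumptions: the 300-word threshold is arbitrary (Google has never published one), and the tag-stripping regex is naive, so treat the output as a starting list for manual review, not a verdict.

```python
# Rough sketch: flag "thin" pages from your own site for manual review.
# Assumption (mine, not Google's): a page is "thin" if its body text
# falls under an arbitrary word-count threshold.
import re

THIN_THRESHOLD = 300  # arbitrary guess; tune to your own site


def word_count(html):
    """Strip tags crudely and count the remaining words."""
    text = re.sub(r"<[^>]+>", " ", html)
    return len(text.split())


def find_thin_pages(pages, threshold=THIN_THRESHOLD):
    """pages: dict of {url: html}. Returns URLs sorted thinnest-first."""
    thin = [(word_count(html), url) for url, html in pages.items()
            if word_count(html) < threshold]
    return [url for count, url in sorted(thin)]


# Hypothetical example pages, just to show the shape of the output.
pages = {
    "/long-review": "<p>" + "word " * 500 + "</p>",
    "/old-stub": "<p>Just a couple of sentences here.</p>",
}
print(find_thin_pages(pages))  # ['/old-stub']
```

Anything this flags still deserves a human look before trashing it; short pages can be genuinely useful.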
I cannot think of any other ways to "clean up" my site... I long ago blocked things like Tag Pages, Category Pages, etc. from being indexed, so there is really nothing extraneous from my site in Google's index.
Now I just cross my fingers and pray. Nothing I've done is irreversible, so what did I have to lose... my already tanked rankings?
I'll let you know how it turns out.
| 9:40 pm on Mar 3, 2011 (gmt 0)|
dazzlindonna, I think you are spot-on. I found some rather pathetic pages on my own website, from way back in the day.
Honestly, at the least it felt good to get rid of them. Who knows whether it will have a positive effect, but hey I can't imagine it having a negative effect! lol
| 10:06 pm on Mar 3, 2011 (gmt 0)|
I hadn't really thought until now about my articles section. I have articles that I'd paid experts to write back in 2005. They were all original content.
I did some searching for the first sentences in quotes, and found that all of the articles had been scraped, extensively. Rather than try to chase down all of the scrapers, or file complaints with Google, I decided that it would be more time-efficient to "noindex" them. They don't draw much traffic anyway, but I'd like to keep the articles section for future content.
As I read what others are doing with their sites, I wonder if putting noindex tags on those is enough for this new update.
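For anyone going the noindex route, it's worth verifying that the tag actually made it into the served HTML (plugins and themes sometimes don't put it where you expect). A minimal sketch of such a check, using only the standard library; fetch the page HTML however you like:

```python
# Sanity-check that a page's HTML carries a robots "noindex" meta tag.
# Sketch only: checks the meta tag, not HTTP X-Robots-Tag headers.
from html.parser import HTMLParser


class NoindexChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        # Look for <meta name="robots" content="...noindex...">
        if tag == "meta":
            d = dict(attrs)
            if (d.get("name", "").lower() == "robots"
                    and "noindex" in (d.get("content") or "").lower()):
                self.noindex = True


def has_noindex(html):
    checker = NoindexChecker()
    checker.feed(html)
    return checker.noindex


print(has_noindex('<meta name="robots" content="noindex,follow">'))  # True
print(has_noindex('<meta name="robots" content="index,follow">'))    # False
```

Running this against each article URL after the change takes the guesswork out of whether the noindex is really live.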
| 5:42 pm on Mar 4, 2011 (gmt 0)|
Just to update: I have now hand-picked and removed about 130 pages from my website that were very old, and honestly pretty weak.
I've also made some minor changes to my layout in order to reduce clutter, and have deleted one affiliate ad unit. The site does not contain AdSense, but when my rank dropped I had about four affiliate ads on a page. That is now reduced to three, and they are not at all intrusive.
The overall layout of each page is now easier to follow, and more pleasing to the eye, though honestly, it's always been a good looking website (I'm a graphic designer by trade.)
I can't see much else to modify at this time, so I suppose I've entered the waiting game. ;)
| 6:11 pm on Mar 4, 2011 (gmt 0)|
dickbaker, it seems reasonable to think that noindex would be enough, but who knows... It would be great to have at least one person noindexing, and one person removing, so we could see if both work, or just one does. (Not that we'll ever know "for sure" that what we do has a real cause/effect, but we'll just have to assume for now). But I understand if you'd be cautious about being the noindex guinea pig. :)
| 6:45 pm on Mar 4, 2011 (gmt 0)|
I'm hoping that the noindex route will work, but my site has been hit so hard across so many pages that I think it's possible I could never know if it helped.
Since I started my site, I've had about 200 pages of niche-related books from you can guess who. It really never occurred to me that the descriptions would be duplicate content, as I simply assumed that the SE's would recognize the pages for what they were. I'm going to noindex those as well.
I have a short list of phrases for which I was page one that I'm using to check each day to see if anything is improving. Sad to say, nothing is moving up, but some have gotten even worse.
| 6:59 pm on Mar 4, 2011 (gmt 0)|
Looking at analytics for about 50 quality domains that took an initial drop, I see a lot of step-wise recovery. The Google traffic graphs all look bowl-shaped, but with one side steep and the other sloped more gradually. In some cases the recovery is back to previous traffic levels as of Thursday (yesterday), but not in all.
None of these recovering sites took any steps to make improvements happen, so I can only assume that there is some form of algorithm adjustment happening. It may be automated machine learning, or it may be newly introduced changes - clearly, that's hard to determine. But seeing an incremental improvement one day after the next, makes me lean toward automated machine learning, with "layer 2" of the algorithm yet to come.
| 7:07 pm on Mar 4, 2011 (gmt 0)|
So, it looks like duplicate content is the main issue. I own a real estate website where we have a six-line description (in our own words) of what we are and what we aim to become. I just searched Google for eight consecutive words from those sentences, in quotes, and we are nowhere! Our site is not a content site; 99% of its content is generated by real estate listing posters, yet that tiny fraction of our own content (we have other bits as well: help pages, about us, etc.) has been copied by scrapers who try to compete with us! Shame on them, they are not capable of writing 2-3 sentences... And Google lists my site nowhere for my own sentence!
Now, I have replaced those sentences, as my homepage is the page that was hit the most; for some terms we went from position 3 to 9, and for another we went from 13 to 100-110. Let's see what the effect will be. Lucky me, within 20 minutes the new content was already cached; let's see if this has a positive effect in getting back my earlier positions.
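The "search for 8 consecutive words in quotes" check mentioned above can be semi-automated: generate every quoted 8-word run (shingle) from your own copy, then paste them into Google one at a time. A small sketch; the blurb below is a made-up example, not anyone's real site text:

```python
# Generate quoted n-word "shingles" from your own copy, for pasting
# into a search engine to look for scrapers. Sketch only.
def shingles(text, n=8):
    words = text.split()
    return ['"' + " ".join(words[i:i + n]) + '"'
            for i in range(len(words) - n + 1)]


# Hypothetical homepage blurb (12 words -> 5 overlapping 8-word shingles).
blurb = "We are a small real estate portal serving local buyers and sellers"
for q in shingles(blurb)[:3]:
    print(q)
```

Eight words is just the figure used in the post above; longer runs give fewer false matches, shorter runs catch partial rewrites.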
| 7:13 pm on Mar 4, 2011 (gmt 0)|
zoltan, I don't doubt there is 'something wrong' with your results, but I would guess your 'standard text' is discounted as 'boilerplate' text and not a factor in the ranking of your pages ... I'd look somewhere else to try and find the issue.
There have also been reports of changes in rankings and rankings returning lately ... I really don't think your 2 sentences were the issue, but I guess anything is possible.
| 7:30 pm on Mar 4, 2011 (gmt 0)|
What is "boilerplate" text? Do you mean, header, footer and navigation? If so, that text is only added to the homepage, no other page contains that text.
| 7:45 pm on Mar 4, 2011 (gmt 0)|
Oh, I thought you meant it was repeated ... Boilerplate text is text repeated throughout the site, usually a header or footer or copyright, but it doesn't have to be.
| 7:38 am on Mar 5, 2011 (gmt 0)|
This is quite funny... As stated above, I changed the text that was scraped from my homepage, and less than 12 hours later I dropped another 20-30 positions; now I am only at position 130-140 for one of my main keywords... It looks like I should stop doing anything and wait to see if the traffic will recover, leaving everything as it is now...
| 7:54 am on Mar 5, 2011 (gmt 0)|
|I should stop doing anything and wait to see |
Ever thus has it been, and probably ever thus will it be. I don't worry about G anymore... I've moved on (diversification, etc.)
That eggs-in-one-basket kind of thing is the really scary part!
| 10:27 am on Mar 5, 2011 (gmt 0)|
Killing 1/3 of the site is not a great idea, considering the internal linking structure and the link juice flow. I've got many pages which don't have any "good content", but they are part of the whole scheme; if I disallow/delete them, it will break the site's internal linking. I've done this before and the whole site's ranking dropped :-( Re-writing the content might be a better idea... but that's just me :)
| 12:57 pm on Mar 5, 2011 (gmt 0)|
Will Google apply their algorithm again every time a page is re-indexed? Or do they run it on a schedule, say once every two days?
| 2:50 pm on Mar 5, 2011 (gmt 0)|
Why should you avoid nofollow, or the JS nofollow?
If you don't trust a story on a site that you want to reference in your article, then you shouldn't be writing the article in the first place.
Imagine a situation where you nofollow most external links on every page, or on most pages. That would signal to Google that you don't trust those links. And if that's the case, you shouldn't be linking to them at all: when you don't trust those links, why should you refer your site visitors to them?
This simple logic could very well be part of this new algo.
Never use nofollow for the sake of preserving PR, as Google long ago changed the way it distributes link juice. By using nofollow you are just wasting the juice; it is better to credit it to the source.
Of course, this doesn't mean you should never use nofollow. But it should be rare, and there should never be a situation where most of your stories reference sources you don't trust.
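If you want to eyeball the nofollow-to-dofollow ratio being discussed here for one of your own pages, a quick standard-library sketch (it only counts `rel="nofollow"` anchors; JS-obfuscated links obviously won't show up):

```python
# Count dofollow vs. nofollow anchors in a page's HTML. Sketch only;
# this is a diagnostic, not a ranking tool.
from html.parser import HTMLParser


class LinkCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.nofollow = 0
        self.dofollow = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        d = dict(attrs)
        if "nofollow" in (d.get("rel") or "").lower():
            self.nofollow += 1
        else:
            self.dofollow += 1


# Hypothetical page fragment with two plain links and one nofollow link.
html = ('<a href="/x">one</a>'
        '<a rel="nofollow" href="/aff">two</a>'
        '<a href="/y">three</a>')
c = LinkCounter()
c.feed(html)
print(c.dofollow, c.nofollow)  # 2 1
```

Whether any particular ratio matters to Google is pure speculation; the script just makes the number visible.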
| 3:22 pm on Mar 5, 2011 (gmt 0)|
Yes, wikipedia does it, but in Matt's own words sites like wikipedia are a special case, and Google may not give the same treatment to every other site.
| 3:51 pm on Mar 5, 2011 (gmt 0)|
My understanding is that you should dofollow editorial links and nofollow affiliate links. Is this the right way to go? Any confirmation is appreciated.
| 4:26 pm on Mar 5, 2011 (gmt 0)|
@falsepositive you are right.
Google folks claim that they do things using algorithms. Mathematically, nofollow links as a percentage of dofollow links should be minimal.
But this whole idea of nofollow is crap. It was introduced by Google, and I never understood why.
[edited by: indyank at 4:42 pm (utc) on Mar 5, 2011]
| 4:41 pm on Mar 5, 2011 (gmt 0)|
Instead of chopping pages, have you guys thought about combining pages together? We did that last year and it has helped.
| 4:46 pm on Mar 5, 2011 (gmt 0)|
@indyank, the JS example I suggested is based on how eHow uses its nofollow. I am just trying to help fellow webmasters. It is similar to nofollow, but G can't read JS. TheMadScientist wrote several comments about eHow's JS nofollow use. Remember, eHow is doing great, and I am definitely following eHow's best practices and using the JS nofollow. I don't see any side effects.
| 5:44 pm on Mar 5, 2011 (gmt 0)|
browsee, I know where you are coming from, as I am the originator of the thread on eHow. But my point is that Google may not treat every site on par with eHow.
To me, what eHow is doing isn't good, and it definitely wouldn't be good in the long run.
| 5:52 pm on Mar 5, 2011 (gmt 0)|
EHow's pages are laid out in a very user friendly and readable fashion. That is what helps them pass this particular algo's test - the first impression "Blink" test. Not only that - they do have some OK content along with the rest of the filler.
| 6:11 pm on Mar 5, 2011 (gmt 0)|
This is not true. [google.com...]
References dating back as far as 2005: [webmasterworld.com...]
| 3:18 am on Mar 6, 2011 (gmt 0)|
Anyone who has made changes: Have you seen any new cache dates since you made the changes?
| 5:30 am on Mar 6, 2011 (gmt 0)|
|Anyone who has made changes: Have you seen any new cache dates since you made the changes? |
Yes, lots. No noticeable traffic changes so far, but then I am not done fixing them up and they still have a number of issues.
As another poster mentioned, they may not rerank the cleaned up sites again right away, otherwise it would be a giveaway as to what works. In fact I am prepared for another drop or two before they start ranking again just as a fake out.
I don't expect my pages to return due to algorithm tweaks, as I can see they have some stuff that Google was probably right to target and that should be cleaned up.
| 12:47 pm on Mar 6, 2011 (gmt 0)|
For one of my keywords, the position is constantly changing every couple of days. Before the algo update, the page was ranked 2nd; then it moved down to 10th, then 7th, then 11th, and now it has settled at 7th since yesterday.
I made a change before the last move, and I believe it's the reason for climbing from 11th to 7th. But I am not sure about the other moves. Maybe they were caused by ranking changes of other sites for that keyword.
The page I was referring to is cached frequently, twice or thrice every day for the last few months.
| 1:27 pm on Mar 6, 2011 (gmt 0)|
|positions are constantly changing every couple of days |
I've seen this pattern in the past. It looked to me like Google might be doing some sort of elaborate multivariate testing -- generating and analyzing sample data which allows them to run all sorts of statistical tests, comparing CTRs, frequency of users returning to try the same search again, etc.
I've even seen this pattern of position swapping on an intraday basis -- our page usually appears in position 9, but I'll see it appear in position 5 for a few minutes at random points during the day (or at least that's the impression I'm getting -- I haven't been curious enough to actually check 300 times at 5 minute intervals).
| 3:36 pm on Mar 6, 2011 (gmt 0)|
econman, is it possible that you're being given the different results from different datacenters?
| 4:03 pm on Mar 6, 2011 (gmt 0)|
|they do have some OK content along with the rest of the filler |
Regarding EHow's OK content - they also seem to concentrate on some really mainstream topics and avoid the dodgy stuff.
Have a look at this article on the topics on Ezine articles:
Seven phrases appear in 10% of their articles! Whereas those phrases appear in only 0.03% of EHow's pages.
We know that ad placement must be playing a part; that could be the only reason a really useful, unique site like askthebuilder.com got hurt: they have eight ad units on each article page.
But G may also have been penalising sites in certain "hot" topics. So if you don't have ads, then is your site on one of the dodgy topics?
Then there's the collateral damage thing - you may have gotten hurt mainly because your links are from article directories which have been penalised.
Then there is the thin/useful content and duplicate content issue.
Then there is the issue of un-moderated comments - didn't they say on the Google blog that their document classifier looks for the sort of spammy words used in automated blog commenting and some user-generated content?
And there's probably another dozen other factors that we haven't thought of.
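The phrase-frequency comparison above ("seven phrases appear in 10% of their articles") can be approximated for your own site: for each suspect phrase, compute the share of pages that contain it. A sketch; the phrases and pages below are made-up examples, and nothing here implies Google uses this exact signal:

```python
# What fraction of my pages contain a given boilerplate/spammy phrase?
# Sketch only; case-insensitive substring match.
def phrase_share(pages, phrase):
    """pages: list of page texts. Returns the fraction containing phrase."""
    hits = sum(1 for text in pages if phrase.lower() in text.lower())
    return hits / len(pages)


# Hypothetical corpus of four article texts.
pages = [
    "Click here for more great tips on widgets.",
    "An honest, in-depth review of the new widget.",
    "Click here for more amazing offers today.",
    "How to repair a widget at home.",
]
print(phrase_share(pages, "click here for more"))  # 0.5
```

If one filler phrase turns up on a large share of your pages, that's at least a readability problem worth fixing, whatever the algorithm thinks.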
| 4:08 pm on Mar 6, 2011 (gmt 0)|
You can have 8 banner ads and be/look fine, so Google is penalizing their own system and small-time publishers who can't call HP or Verizon and sell their ads for a whole year.
It's a shame that askthebuilder was hit; I used it a lot, and the author even answered questions in the comments section on each article.
| 4:15 pm on Mar 6, 2011 (gmt 0)|
|you can have 8 banner ads and be /look fine |
Agree, but I think G doesn't want the ads in the content area. Askthebuilder has ad blocks within the content.
See this url:
Above the fold on the chrome browser, three ad units, and the ads appear to take up more space than the content.
Whereas cultofmac guy seems to have moved all his ads to the right-hand sidebar and he is back in the SERPs.
Both useful unique sites with good content and good backlinks - the only difference being ad placement.
Edit: Sorry, askthebuilder had four ad units above the fold; I forgot to count the one in the header. That's too much for above the fold.