I doubt 1200 words would be too long - provided the content is good, I can't see that causing a problem - there are lots of high quality articles longer than that.
1200 words isn't long at all. I would call it mid-length. I have a 6000 word article ranked no. 1 in Bing and no. 4 in Google - it used to be no. 1 in Google, but the site was hit by Penguin. I've seen Wikipedia articles that are 10,000 words or more ranking no. 1.
Do what's best for your readers. Personally I don't like to try to read an article that's broken up into short pieces so that you have to keep clicking to a new page.
If you are worried about length, then you are doing it wrong.
Why am I doing it wrong?
I ask because the articles outranking me are shorter and cover less detail. One article above mine is actually content scraped from another site that scraped its content from mine. So a scraper of a scraper is now doing better. Mine covers the topic well and includes original images (it's a medical condition).
What happens when Google changes the rules? Are you going to rewrite your articles again? Stop worrying about Google. Tell me what your readers want. Do you even know? Are you writing the articles for Google or for humans?
I'm writing for humans. My aim is to give them all the information they need, which I feel I've achieved.
I worry about Google because despite having good articles, scraper and thin sites are outranking me. I guess I'm just trying to understand why.
Articles were traditionally split to increase page views. This typically increased revenue generated by CPM ads.
Google might now consider the extra page views a sign of user satisfaction. A person lands on your site having searched for "widgets" and reads one page; your competitor, with the same article split up, enjoys 4 page views.
|Do what's best for your readers. Personally I don't like to try to read an article that's broken up into short pieces so that you have to keep clicking to a new page. |
A number of years ago, WIRED reported on an academic study that compared two versions of an article: one on a single page, the other divided into multiple pages.
The findings were interesting: Users preferred the article broken into multiple pages, and they thought it was the shorter of the two versions--even though it was, in fact, longer.
The study's findings may have lost relevance over the years, thanks to higher-resolution displays and broadband Internet connections. But now that many people are viewing the Web on tablets (which typically have resolutions no greater than 1024 x 768), the study may not be obsolete after all.
My own approach, which has worked well over the years, is to break long articles into logical units. If I were to write a comprehensive article about breeds of unicorns, I might have an introductory page followed by a separate illustrated page for each breed and ending with a page of links to other unicorn resources. And while this might be useful to me in terms of SEO and ad revenue, it would also--and perhaps more importantly--present a better user experience to the reader.
I don't mind articles broken into two pages, but I always click back if it's more than that. I find it quite irritating.
On breeds of unicorns, I would write an article on each specific breed. Long horned unicorns, short horned unicorns, mixed breed unicorns etc.
Besides the keyword phrases you think the articles should be ranking for, how many other phrases are driving traffic to the articles? How many other phrases are driving traffic to your competitors' articles?
It has nothing to do with the length of your articles. Quite frankly, you are being beaten by sites that have more authority than yours, for whatever reason. For example, if the Huffington Post decided to target your keywords, they could summarize your articles for $5 and be beating you within 24 hours. Unfortunately that is where we are today with Google. If a site's "brand signal" (whatever that may entail) is better than yours and its content is close to yours, it will beat you, end of story.
Bill Slawski wrote an article on this a little while ago. Google actually has a patent for it. What Google is doing is finding what they consider a "trustworthy" source for content. They really don't care who the originator is - as long as some site that they trust puts it up, they will display it.
I may be wrong, but your site is probably suffering from this effect. We have seen scraper sites do well from this. They fool Google into thinking they have some kind of trust factor or brand signal, and they take your content and beat you with it.
The problem is that Google appears to be heading down this road more and more. After all, Eric Schmidt said: "Brands are the solution, not the problem. Brands are how you sort out the cesspool."
I agree on dividing into logical units. The length of articles itself isn't an issue. The ability of readers to quickly find and narrow down the information they want is more important.
If an article is long, does it have clear organization so readers can scan through it and zero in on the parts they need? Or is it just long for length's sake - poor organization, repeated information, and such?
Dividing articles for the sake of random division is pointless from my point of view. However, dividing articles into pieces that users find useful can prove very helpful. It's a lot more work, but it's better for the visitors.
When the length starts to muddle the information you want to get across, then the length becomes a hindrance.
So, my conclusion is... yes, an article can indeed be too long, when it gets in the way between your readers and the information they need.
For example, a niche article can have different parts:
History of niche.
How to use niche.
Why to use niche.
What to do when niche breaks down.
You can have 1 long, huge article with all 4 parts and the information scattered throughout. Or you can divide it into logical pieces that work and are helpful.
The key is that users can land on any of the 4 pages and quickly navigate to the part they find useful.
A random "article divider" that splits the long article into 4 evenly spaced pages is a hindrance to usability. Align your position with the readers and decide whether or not dividing the article makes sense. Imagine reading a book, and how you would like to read it.
However, do avoid splitting articles into divisions that are too short, as that can trigger some sort of spam signal... but that is not really your problem. Just something to watch out for if you do decide to break the articles apart.
I forgot to emphasize: interlink the articles in a series to help users jump between the different parts of the series.
On breeds of unicorns, you want visitors to be able to jump freely between the different unicorn breeds without using the back button.
Although, if each breed of unicorn does not justify a page of its own, grouping them can be more helpful for ranking for "unicorn breeds" than targeting a single "short horned unicorns" page. Again, think about what is useful in terms of information and necessity.
You will have to be creative and work to figure out how to help your visitors navigate through your article content structure.
I wasn't sure if interlinking was okay anymore and have started removing links to other articles. I'm terrified of Google penalising my site more.
This article in question is a health one. It is broken up into sections.
How it's transmitted
Who's most at risk
How it's treated
Whether it can be passed on to people
How it's treated in people
So visitors can quickly jump to whatever section they want instead of trawling the entire article. They may only want to know the symptoms, or how to treat it, or if it's zoonotic.
The article beating me is just 250 words of text, not broken down into sections - too short for that anyway.
And yes, it's a branded site...theirs, not mine.
|And yes, it's a branded site...theirs, not mine. |
@Saffron unfortunately you have probably done nothing wrong. Which is the ironic part of these situations. You said you have ranked for these keywords for a long time and all of a sudden these "branded" sites have taken over. This is happening more and more and it is sad to see.
Google is placing emphasis on big scraper sites because they don't really care as their audience ends up seeing the content anyway. The problem is what happens when all the little guys are gone? Who will they scrape then?
[edited by: Robert_Charlton at 2:29 am (utc) on Jul 15, 2013]
One last thing... Google is looking for freshness these days. So you might want to write new articles about the same things but with an updated feel? I am sure there are some smart people here who can tell you more about doing this.
Right now I'm writing about stuff the other sites aren't. They cover mostly <widgets>, so I'm writing about non-<widget> topics. I've written some interesting articles this year, but it takes a while for them to really take off and time isn't something we have anymore.
[edited by: Robert_Charlton at 3:44 am (utc) on Jul 15, 2013]
[edit reason] examplified niche [/edit]
So how do the scrapers get Google's trust? Can we emulate it?
|The article beating me is just 250 words of text. |
Again, you're focusing on rankings for some certain key phrases and not overall traffic that an article might be attracting. In fact, you never mention traffic.
Traffic is much more important than rankings.
Delve into your analytics and try to answer the couple of questions I posed above. You should get a more nuanced picture -- and better understanding -- of the situation.
|One last thing.. Google is looking for freshness these days. So you might want to write new articles about the same things but with an updated feel? |
For our main site (which has quite a few articles dating back to the late 1990s), long-established pages often rank much higher in Google than newer pages on similar topics do--even when the newer pages are a better fit for given search queries.
On the other hand, Google seems to be much slower to index and rank newer articles or pages these days--at least on traditional Web sites. Our blog posts--which are on a separate domain--are indexed and ranked quickly, but our non-blog articles seem to get stuck at the end of a queue. Maybe Google's idea of "freshness" is "recent posts on a blog."
|The length of articles itself isn't an issue. The ability for the readers to quickly find and narrow down the information that they want is more important. |
FrankLeeceo beat me to the punch, but I'll expand regardless (my opinion, for whatever it's worth). I don't think the length of the article is the issue. I would wager people are looking for a quick answer and that your user metrics (bounce rate, CTR, etc.) are worse than theirs - or at least that a handful of your past traffic was looking for a quick answer.
Like viral said, you probably did nothing wrong, just think about the user experience on the page. As FrankLeeceo said, can they find what they are looking for right away? That may be a good reason to break it up into two or three sections.
Also, like viral said, "Google is looking for freshness these days". I don't know how long you've been outranked by the scraper site, but maybe they're just benefiting from being a new site and will drop down soon?
I am sorry I don't have an answer for how to improve your rankings. I wish I did.
However, I think I MIGHT have something that will help you in determining if your articles are too long FOR YOUR READERS....
Advanced Content Tracking with Google Analytics: Part 1
This is some Google Analytics code that tracks events, and there are instructions on how to determine how many people read to the bottom of your content compared to how many start reading the article.
You can also set it up to measure how many people make it to the very bottom of the page - past the comments and past the footer.
Anyway, that will at least let you know how many people are making it through the article. And it will give you (in my opinion) a better idea of your true bounce rate, since people who scroll down your article past a certain point (that you decide on) will NOT be counted as bounces.
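To give a feel for the idea - this is a rough sketch in the spirit of that article, not its actual code, and the thresholds and event-sending callback are my own assumptions - the core is computing how far down the page the reader has scrolled and firing an Analytics event once per milestone:

```javascript
// Pure helper: percent of the page the reader has scrolled past.
function percentRead(scrollTop, viewportHeight, pageHeight) {
  if (pageHeight <= viewportHeight) return 100; // whole page fits on screen
  return Math.min(100, Math.round(((scrollTop + viewportHeight) / pageHeight) * 100));
}

// Fires sendEvent(label) once per threshold as the reader crosses it.
// sendEvent is whatever your Analytics setup uses to record an event
// (e.g. a wrapper around a GA "Reading" event) - an assumption here.
function makeScrollTracker(thresholds, sendEvent) {
  const fired = new Set();
  return function onScroll(scrollTop, viewportHeight, pageHeight) {
    const pct = percentRead(scrollTop, viewportHeight, pageHeight);
    for (const t of thresholds) {
      if (pct >= t && !fired.has(t)) {
        fired.add(t); // each milestone is reported only once per page view
        sendEvent(t + '%');
      }
    }
  };
}
```

In a browser you would call the returned `onScroll` from a (throttled) `scroll` listener, passing the current scroll position, viewport height, and document height; the events then show up in your Analytics reports and tell you what fraction of readers reach each milestone.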
Hope this helps.
Mods note: The Cutroni article cited above was written by Justin Cutroni, currently the Analytics Advocate at Google, with input from members of the Google Analytics team and authorities in the field. It's a step beyond the default content tracking in Google Analytics.
[edited by: Robert_Charlton at 5:28 am (utc) on Jul 23, 2013]