
Google SEO News and Discussion Forum

The SEO Consequences of NoIndex and Bot Blocked Content

 1:59 pm on Apr 15, 2013 (gmt 0)

I'm just curious about people's opinions on this.

I have been "no indexing" a lot of the new content I add to my site because it is just data that users find helpful, but that Gbot probably sees as very thin.

Because the data is so popular, I was going to focus on adding it almost exclusively for a long time to come.

If I keep this content noindexed or blocked in robots.txt, do you think there will be any SEO consequences? I plan to add 95% blocked content and 5% Gbot-readable content in the future.
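For reference, the two blocking mechanisms mentioned here behave quite differently; a minimal sketch of each (the /data/ path is hypothetical):

```text
# robots.txt at the site root -- blocks *crawling* of /data/;
# blocked URLs can still show up in the index from links alone:
User-agent: *
Disallow: /data/

# In the <head> of an individual page -- allows crawling but blocks
# *indexing*; the page must stay crawlable for Googlebot to see it:
<meta name="robots" content="noindex">
```

The practical difference matters for the question above: a robots.txt Disallow hides the page content from Gbot entirely, while a meta noindex lets Gbot read the page but keeps it out of the results.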



 4:30 pm on Apr 15, 2013 (gmt 0)

If the data is relevant, I wouldn't noindex it; just let G decide whether or not to index it (you know in WMT where it shows the number of pages you submitted in the sitemap versus the pages indexed?). That doesn't apply if your basic "template" or "layout" is going to cause a duplicate content penalty or "soft 404" errors.

EDIT: I'd expect ranking related problems if you're blocking most of your site.


 10:11 pm on Apr 15, 2013 (gmt 0)

I'd expect ranking related problems if you're blocking most of your site.



 12:16 am on Apr 16, 2013 (gmt 0)

just let G decide if they aren't going to index it or not

Is there any case in history of g### crawling a non-duplicate page and then deciding "Naah, we won't bother"? Keeping in mind that they will happily index pages they've never even seen -- along with stylesheets, javascript, raw logs, midi files, robots.txt and anything else they can get their hands on...
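That "index pages they've never even seen" point can be sketched with Python's standard urllib.robotparser: a well-behaved crawler checks robots.txt before fetching, so a Disallowed URL is never fetched and any noindex tag on it is never read -- which is exactly how a blocked-but-linked URL can still end up indexed. The site layout below is hypothetical:

```python
# Minimal sketch, assuming a hypothetical site with a Disallowed /data/
# section. The crawler consults robots.txt before fetching, so for a
# blocked URL it never sees the page body or its meta robots tags.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /data/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Blocked: Googlebot may not fetch this, so a noindex tag on the page
# would never be seen -- the URL can still be indexed from links alone.
print(rp.can_fetch("Googlebot", "https://example.com/data/widget-stats.html"))  # False

# Allowed: this page can be crawled, so a meta noindex there would
# actually be honored.
print(rp.can_fetch("Googlebot", "https://example.com/about.html"))  # True
```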


 1:07 am on Apr 16, 2013 (gmt 0)

I just left another thread where someone is allowing Gbot access to the WordPress admin because Webmaster Tools complained when they password protected it -- and it's not even April Fool's Day.

I'd let it crawl. I have both thick and thin content and have let Gbot crawl it all without issue for many years. People can't find what isn't indexed.

You'll fare best IMO if you pretend spiders don't exist and just build a site for your visitors and ignore all the robot directives except NOARCHIVE.
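For completeness, the NOARCHIVE exception mentioned above is just another robots meta value; a page carrying it stays crawlable and indexable, but no cached copy is offered in the results:

```text
<meta name="robots" content="noarchive">
```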


 1:36 am on Apr 16, 2013 (gmt 0)

@jimbeetle I can't find the source, but I'm certain it was an MC video in which he says blocking too many pages causes problems for a site. Personally, I've had rankings drop over a week's time after blocking 100 or so URLs, then recover after removing the disallow... twice... and those are pages they don't even index.

@lucy24 Yes. Of the pages in our sitemaps for large sites, not all are indexed, and I know why -- it's not duplicate content either. It's less than 0.20% of the total pages, and they do get crawled, but not indexed. That's according to GWT, and it can be verified with the site: operator.

You'll fare best if you pretend spiders don't exist and just build a site for your visitors


 1:37 am on Apr 16, 2013 (gmt 0)


I always had the concept of letting Gbot figure it out themselves, but after a huge Panda hit destroyed my main money site and income, I don't know what to think these days.


 1:40 am on Apr 16, 2013 (gmt 0)

Personally I've had ranking drop over a week's time blocking 100 or so URLs then recover after removing the disallow.... twice

That's interesting. Thanks for sharing!

I've noindexed 1000s of pages at a time* and had better traffic than without it; in fact, I use it frequently as a "tool" to help get the right pages ranking in the right places.

I would not have thought a robots.txt block would have the opposite effect.

* Including currently, and unlike the robots.txt situation, traffic is up compared to when it was left to Google to decide which pages should be included. I didn't "have the final say" for a while and was told to include them, but when I "got control back" one of the first things I did was return to "strategic noindexing", and the results have been very positive since the right pages started appearing in the right places in the SERPs. I'm really surprised at the difference seen with robots.txt, and it's good to know there is one.
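Noindexing thousands of pages at a time doesn't require editing each one. One common option, assuming an Apache server with mod_headers enabled (the file pattern below is hypothetical), is the X-Robots-Tag response header, which behaves like the meta tag but is applied server-side:

```text
# .htaccess / httpd.conf -- send noindex for a whole class of URLs:
<FilesMatch "^widget-data-.*\.html$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```

As with the meta tag, the URLs must remain crawlable so the header can actually be seen.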


 2:35 am on Apr 16, 2013 (gmt 0)

@TOI - as I mentioned, they aren't indexed pages, and I've had the same thing happen when adding "noindex" to the meta robots tag too. I could have been doing it poorly, though.

I still can't find the video reference I'm looking for, but SEOmOZ offers this information on robots directives [seomoz.org...]


 2:57 am on Apr 16, 2013 (gmt 0)

Meh, SEOmoz. A year or two ago they had a piece on what the keyword density of a page "should be", or what the average keyword density of N pages was, or something like that, and I mostly stopped bothering to click their links after that. "The right" keyword density varies from niche to niche, with expected averages and phrase / related-phrase predictability, so some "study of N sites and their average keyword density", published as a big deal or as something people should follow, is really just BS people want to hear rather than "ranking impactful" information.

If someone followed the "average keyword density" they published, the likely impact would be none to negative, because natural writing style trumps keyword density these days, and has for a while.

I think it's highly irresponsible for a "leading SEO site" to publish something like that. If they don't know it varies from niche to niche based on phrase predictability and natural writing style wrt a specific topic, they shouldn't be giving SEO advice. And if they do know and published it anyway, then it's not only misleading, it's plain BS intended to get more traffic by telling people what they want, expect, or search for, rather than explaining where things actually stand and how things should be written wrt SEO and rankings.



 9:04 am on Apr 16, 2013 (gmt 0)


It's great to hear the strategic noindexing works as it was intended, at least that's the impression I get from all of MC's videos.

I'm interested to know everyone's opinion on whether "no indexing" a good portion of one's site will have negative outcomes for the other "indexed" pages of the site, assuming the content on both sides is high quality.


 5:45 pm on Apr 16, 2013 (gmt 0)

Str82u, I've had 80% or so of a site blocked for the past couple of years with no problems at all for the surviving section.

And I'm not really sure what the referenced moz thread is meant to say, except way too many words warning folks not to shoot themselves in the foot.


 6:09 pm on Apr 16, 2013 (gmt 0)

@jimbeetle - that's good to know. The pages that caused my grief weren't even indexed, but it was only a day after removing the directives that the traffic started creeping back; GWT showed the number of blocked pages drop, and Analytics showed the traffic coming back... none of it to those pages, of course.

GWT was showing that blocked pages had doubled; perhaps that's the clue.


 2:51 am on Apr 18, 2013 (gmt 0)

It's great to hear the strategic noindexing works as it was intended, at least that's the impression I get from all of MC's videos.

In my experience it does.

To expand a bit: one thing I've noticed is that if I have a page that ranks for "widget" and another that is more specific, and I noindex "the wrong page", they replace it with the "right page" -- "the next best choice" based on the factors -- in the same position in the SERPs.

When I get the "right page" to replace the "wrong page" for a query and wait, the "right page" usually moves up in the rankings, which I believe is due to visitor behavior. I don't like to "let Google decide" if there could be a better page ranking for a term, because I believe visitor behavior is a factor (based on my experience), and if I can get people to the "right page" for the query more often, then it's a better result not only for me, but for the visitor and Google.

I guess what I try to do is get things to "mesh" together to the point where, if Google has determined the "wrong page" should rank, I'll pull it out, leave the "right page" in, and let the algo "do its thing" as visitors respond to finding what they were looking for easily.

* I should note I'm a bit "contrary to many", because I aim for a high bounce rate: to me, as long as the query and the info on the page "mesh", then the search ending says "good result". IOW: Google's visitor searched, clicked on the page I "tailored" for the results, found the info they were looking for on the page, and the search ended. I don't see how that's bad for Google, the visitor or my site, and I had a page with a +90% bounce rate that sat in the top 3 (usually only behind the EMD I wrote the info on) for years before I let the site go, which backs up the "thought" I have about a high bounce rate not necessarily being a bad thing.

ADDED: As far as bounce rate goes, one site I've been working on regularly averages +75% and is getting closer to 80% on average. Traffic is finally starting to climb consistently, and bounce rate has been increasing at the same time. I know it shows red in Analytics, and "everyone" says that if it's too high that's bad, so they try to manipulate the number, but w/e -- more traffic (not "seeing green" in Analytics or doing what "everyone" says) is what I'm after.
