I'm doing some SEO for a site that also has a blog attached at http://www.example.com/blog/. The problem is that the blog contains duplicate content, and I need a robots.txt file that will let the bots index only one copy of each piece of content while skipping the others.
For example, one post will appear in three different locations:
1. Category page
2. Single post page
3. Archive page
Ideally, only the single post page would be indexed and the other two would be ignored by the bots (but I want to keep all three copies for ease of user navigation).
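To make this concrete, here is roughly what I was thinking of trying. This is only a sketch, and the paths are assumptions based on a typical WordPress setup (permalinks like /blog/post-name/ with no date in the post URL, the default "category" base, and date archives under /blog/2006/, /blog/2007/, and so on), not my actual URLs:

User-agent: *
# category archive pages (assumes the default /category/ base)
Disallow: /blog/category/
# date archive pages (only safe if post permalinks do not start with the year)
Disallow: /blog/2006/
Disallow: /blog/2007/
# paged listings of the blog home page
Disallow: /blog/page/

If I understand it right, that should leave the single post pages and the blog home page crawlable while keeping the category, date archive, and paging URLs from being crawled, but I'm not sure whether that's the right approach.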
I know how to create a robots.txt file, but I'm not confident that I can pull this off without accidentally blocking pages on the root domain and/or blocking every copy of the content instead of just the duplicates.
I know this is a common problem for WordPress blogs, and I've done some research and found some answers, but I'm not confident in what I'm doing.
Is there anyone out there who has a robots.txt file addressing this same issue? Can I see it? I would greatly appreciate it.
Thanks
[edited by: encyclo at 1:48 am (utc) on April 27, 2007]
[edit reason] switched to example.com [/edit]