

How to exclude a subdomain from crawling

   
3:02 pm on Feb 10, 2007 (gmt 0)

5+ Year Member



I have a problem. I want to prevent search bots from indexing all the pages on subdomain.example.com, because the content there is not unique and I am afraid it will harm our rankings.

The text on those pages is something many sites also have on their sites, so it is definitely duplicate content.

Can I use the robots.txt file for this?

If so, how do I do it, please?

Thank you

[edited by: tedster at 3:21 pm (utc) on Feb. 10, 2007]
[edit reason] use example.com, remove specifics [/edit]

9:56 pm on Feb 12, 2007 (gmt 0)

WebmasterWorld Administrator goodroi is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month



Upload a robots.txt file to subdomain.example.com/robots.txt.

To block all robots, the robots.txt should contain only the following two lines:
User-agent: *
Disallow: /
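
If you want to confirm the rule is in effect once the file is uploaded, here is a minimal sketch (not from the thread) using Python's standard urllib.robotparser module; the subdomain URL and the sample page path are just placeholders taken from the example above:

import urllib.robotparser

# fetch and parse the subdomain's live robots.txt
parser = urllib.robotparser.RobotFileParser()
parser.set_url("http://subdomain.example.com/robots.txt")
parser.read()

# with "User-agent: *" / "Disallow: /" in place, this should print False
# for any path on the subdomain, i.e. compliant bots will not crawl it
print(parser.can_fetch("*", "http://subdomain.example.com/some-page.html"))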

3:37 am on Feb 15, 2007 (gmt 0)

5+ Year Member



But do you think we should do it? Will it harm our site if we don't?

1:00 pm on Feb 15, 2007 (gmt 0)

WebmasterWorld Administrator goodroi is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month



so it is definitely duplicate content

I'm not a believer in having duplicate content on a site. It is generally possible, with some extra work, to turn it into original content: rewrite it, add new material, or reformat it. The search engines will filter or penalize duplicate content, which from my point of view means the same thing = no traffic. My personal preference is to block duplicate content if I can't make it unique.
 
