I currently have two sites (white label) that share the same files and server directory. However, I don't want the second site to be seen by search engine spiders, as I know this can be very bad for search engine listings (duplicate content, mirror sites, etc.).
So I thought I might be able to do this with my robots.txt file; the problem is how.
Let's say site one (A) is www.whatever.co.uk and the other (B) is www.whatevermore.co.uk (fictional sites...hopefully!).
Site B shares the same files as site A, so I can't exclude any files or folders, as I need them indexed for site A. Site B simply says 'if site = B then show the text for site B instead of the text for site A'.
Can I do this with robots.txt? Or do you have something better I could implement?
As I said at the beginning, I don't want site B to be spidered in any way.
Many thanks and season's greetings to you all,
This is not something that you can do with robots.txt alone. You have two options. The first is to use mod_rewrite to map robots.txt to a script which determines the HTTP_HOST name and echoes the appropriate robots.txt content for that host (a sketch follows below).
The second, if you are already using server-side scripting within the site, is to detect the HTTP_HOST and add a
<meta name="robots" content="none">
to the page head whenever the request is for site B.
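A minimal sketch of the first option, assuming Apache with mod_rewrite and PHP available (the poster didn't say which server-side language is in use; the robots.php filename and the hostnames are just the fictional examples from the question):

.htaccess (assumes mod_rewrite is enabled and overrides are allowed):

RewriteEngine On
# route requests for robots.txt through the script instead of a static file
RewriteRule ^robots\.txt$ robots.php [L]

robots.php:

<?php
// serve a host-specific robots.txt
header('Content-Type: text/plain');
$host = strtolower($_SERVER['HTTP_HOST']);
if (strpos($host, 'whatevermore.co.uk') !== false) {
    // Site B: tell all well-behaved spiders to stay out
    echo "User-agent: *\n";
    echo "Disallow: /\n";
} else {
    // Site A: allow everything (or put your normal rules here)
    echo "User-agent: *\n";
    echo "Disallow:\n";
}
?>

An empty Disallow line means "allow everything", so site A keeps being crawled as normal while site B hands every spider a blanket Disallow: /.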
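And a sketch of the second option, again assuming PHP; something like this in the shared header template outputs the tag only when the page is being served as site B:

<?php
// emit the robots meta tag only for the site B hostname
if (strpos(strtolower($_SERVER['HTTP_HOST']), 'whatevermore.co.uk') !== false) {
    echo '<meta name="robots" content="none">';
}
?>

content="none" is shorthand for noindex, nofollow, so even pages a spider reaches through an off-site link won't end up indexed under the site B hostname.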