Forum Moderators: goodroi
How can I allow only Google to index the primary domains (e.g. www.primarydomains.com/robots.txt) and keep it away from the roughly 300 secondary domains, using obj.conf and robots.txt? I have just one document root directory, under which the robots.txt resides.
Specifically, how could I achieve the following using obj.conf (iPlanet) and robots.txt:
1. All primary domains are served robots.txt_a, which contains a rule allowing only Google to crawl.
2. All secondary domains are served robots.txt_b, which blocks all crawlers (see the sketch after this list).
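For illustration only, here is a minimal sketch of what the two files could contain. The file names robots_a.txt and robots_b.txt are just placeholders for your robots.txt_a and robots.txt_b.

robots_a.txt (primary domains - allow only Googlebot, block everyone else):

User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /

robots_b.txt (secondary domains - block all crawlers):

User-agent: *
Disallow: /

And one possible way to map /robots.txt to a different file per host in obj.conf, using the <Client> container to test the Host header. This is an assumed sketch, not a tested configuration: the host names and paths are made up, and whether pfx2dir can be used as a single-file prefix rewrite like this depends on your iPlanet/Sun ONE version, so please check it against your server's documentation before relying on it.

<Object name="default">
# Primary domain(s): hand out the Google-only file
<Client urlhost="www.primarydomains.com">
NameTrans fn="pfx2dir" from="/robots.txt" dir="/docs/robots_a.txt"
</Client>
# Everything else (the ~300 secondary domains): hand out the block-all file
<Client urlhost="*~www.primarydomains.com">
NameTrans fn="pfx2dir" from="/robots.txt" dir="/docs/robots_b.txt"
</Client>
... your existing NameTrans / PathCheck / Service directives ...
</Object>

The idea is that NameTrans processing stops once a directive succeeds, so the primary-domain block should take precedence when its urlhost matches; the "*~" negation in the second block is belt-and-braces. Again, treat the exact syntax as something to verify on your version.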
You may also need to consider the 'removal tool', but that, too, can have disadvantages.
What exactly are you trying to achieve?
This is to consolidate search results going forward and to improve search visibility for the primary domains.