Should I be looking at the Robots.txt file of all my linking partners? Has anyone ever discovered that one of their link partners was doing this?
Welcome, clueless, to WebmasterWorld. Happy posting.
So unless a link is made purely because of value to visitors, it is wise to make sure your reciprocal link partners are not blocking the search engines from spidering their outbound links.
Only if the site explicitly stated in its agreement with you that your links would be spiderable by search engines; in that case it's not only rude, it's lying.
Don't assume that other sites realise why you wanted the link! Many wouldn't have a clue what PR is and probably thought you were just being neighbourly and social!
We rarely do reciprocal links, but when we do, we assume that people asking for links want direct referrals from visitors reading our pages. Unless they specifically state that they want the "link popularity benefit" or that the linked page needs to be spiderable, we usually don't bother to go any further.
'Disallow: /' would disallow crawling of the whole site.
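For reference, here is roughly what that looks like in a partner's robots.txt. The first rule blocks all compliant crawlers from the entire site; the commented alternative shows a narrower block that would only hide a single directory (the `/links/` path is just a made-up example):

```
# Blocks all compliant crawlers from the whole site
User-agent: *
Disallow: /

# By contrast, this would block only one directory:
# User-agent: *
# Disallow: /links/
```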
Check out robotstxt.org [robotstxt.org], there are only a few pages of light reading, all about robots.txt and how to use it.
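If you have a long list of link partners, checking each robots.txt by hand gets tedious. A minimal sketch of an automated check using Python's standard-library `urllib.robotparser` follows; the rules and the `example.com` URL are hypothetical stand-ins for a partner's actual robots.txt and links page:

```python
from urllib.robotparser import RobotFileParser

# Parse a partner's robots.txt rules (here supplied inline;
# in practice you'd use rp.set_url(...) and rp.read() to fetch them).
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# can_fetch() reports whether a given crawler may fetch a given URL.
# With "Disallow: /" in place, the partner's links page is off-limits,
# so any reciprocal link on it would never be spidered.
print(rp.can_fetch("Googlebot", "https://example.com/links.html"))  # False
```

A `False` here for the page carrying your link is exactly the situation the thread warns about.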