Forum Moderators: goodroi
This is probably an extremely simple question so forgive me if it sounds simplistic.
We're going to be using a robots.txt file for a new client, and in the course of my research I've seen that some engines ignore it. Simple question: which ones, so we can advise the client properly?
Many thanks from sunny Scotland,
It is not unreasonable to suggest that if a search engine spider can't gain access to lots of websites, the result is a less useful search engine. Once an engine is in that situation, there are only so many options:
1) Fix your search engine to work with robots.txt
2) Leave the search engine business
3) Carry on and pretend that everything is fine
Obviously, it's a lot easier to build a working spider (or at least learn from webmaster comments describing where your spider is failing) than to fix it only after lots of sites have blocked it and your business is failing as a result.
People are generally very tolerant of most things SE spiders do, but that tolerance does not extend to ignoring robots.txt. That is a clear-cut violation, because the protocol protects both the website and the spider.
As a rule, most bots with names containing variations of "rip", "siphon", "harvest", "download", etc. are going to disregard robots.txt and do as they please. It's more constructive to concentrate on the spiders you do want visiting your site, and use robots.txt to tell them which pages and directories to crawl, than to try to ban rude bots with it.
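That "concentrate on the spiders you want" approach can be sketched as a short robots.txt. The directory name below is a hypothetical example, not a recommendation for any particular site; remember that rude bots will simply ignore these rules:

```
# Well-behaved named spiders may crawl everything except /private/
User-agent: Googlebot
Disallow: /private/

# Ask every other bot to stay out entirely. Rude bots ignore this,
# which is why robots.txt is guidance for polite spiders, not a defense.
User-agent: *
Disallow: /
```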
If you look through the posts at WebmasterWorld you'll see lots of people report whether a given bot disregarded robots.txt, and you can form your own conclusions about who and what to include or exclude in your robots.txt file. Also, many of the current legitimate spiders can be found at searchengineworld.
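For anyone curious what a compliant spider actually does with the file, here is a minimal sketch using Python's standard urllib.robotparser. The rules and bot names ("GoodBot", "RudeRipperBot") are made-up examples for illustration:

```python
# Sketch: how a well-behaved spider consults robots.txt before fetching
# a page. A rude bot simply skips this check.
from urllib import robotparser

# Hypothetical robots.txt contents, supplied as lines.
rules = [
    "User-agent: *",
    "Disallow: /private/",
    "",
    "User-agent: RudeRipperBot",  # hypothetical bad bot, banned outright
    "Disallow: /",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("GoodBot", "/index.html"))         # allowed
print(rp.can_fetch("GoodBot", "/private/data.html"))  # blocked by Disallow
print(rp.can_fetch("RudeRipperBot", "/index.html"))   # banned entirely
```

In a real crawler you would load the live file with rp.set_url(".../robots.txt") and rp.read() instead of parsing a hard-coded list.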