Assuming you know nothing: here is the answer.
A robots.txt file on a website functions as a request that specified robots ignore specified files or directories when crawling the site. A site owner might want this, for example, out of a preference for privacy from search engine results, because the content of the selected directories could be misleading or irrelevant to the categorization of the site as a whole, or out of a desire that an application only operate on certain data.
For websites with multiple subdomains, each subdomain must have its own robots.txt file. If example.com has a robots.txt file but a.example.com does not, the rules for example.com will not apply to a.example.com.
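If you want to check this behaviour programmatically, here is a minimal sketch using Python's standard-library urllib.robotparser (the example.com hostnames are placeholders). A polite crawler fetches and checks the robots.txt for the exact host it is about to crawl:

import urllib.robotparser

def allowed(page_url: str, robots_url: str, agent: str = "*") -> bool:
    """Fetch robots_url and report whether `agent` may fetch page_url."""
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # download and parse the file
    return rp.can_fetch(agent, page_url)

# Each host has its own file: example.com's rules say nothing about
# a.example.com, so the subdomain must be checked separately.
print(allowed("http://example.com/page.html",
              "http://example.com/robots.txt"))
print(allowed("http://a.example.com/page.html",
              "http://a.example.com/robots.txt"))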
Open Notepad or a similar plain-text editor and create your rules as per the examples below. Save the file as robots.txt (all lowercase) and upload it to the root of your website. A separate robots.txt file is required for each site root (each domain or subdomain) you host on your server.
Examples
This example allows all robots to visit all files because the wildcard "*" specifies all robots:
User-agent: *
Disallow:
This example keeps all robots out:
User-agent: *
Disallow: /
This example tells all crawlers not to enter four directories of a website:
User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /tmp/
Disallow: /private/
This example tells a specific crawler not to enter one specific directory:
User-agent: BadBot # replace the 'BadBot' with the actual user-agent of the bot
Disallow: /private/
This example tells all crawlers not to enter one specific file:
User-agent: *
Disallow: /directory/file.html
Note that all other files in the specified directory will still be crawled.
Example demonstrating how comments can be used:
# Comments appear after the "#" symbol at the start of a line, or after a directive
User-agent: * # match all bots
Disallow: / # keep them out
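You can also sanity-check rules like the ones above without touching a live server, since urllib.robotparser can parse in-memory lines. A small sketch (example.com is a placeholder):

import urllib.robotparser

rules = """\
User-agent: *
Disallow: /tmp/
Disallow: /private/
Disallow: /directory/file.html
""".splitlines()

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)  # parse in-memory lines; no network access needed

print(rp.can_fetch("*", "http://example.com/tmp/x.html"))           # False
print(rp.can_fetch("*", "http://example.com/directory/file.html"))  # False
print(rp.can_fetch("*", "http://example.com/directory/other.html")) # True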
Compatibility
To prevent robots from accessing any page on the site, do not use
Disallow: * # DO NOT USE! Use "/" instead.
as the "*" wildcard is not a stable standard extension and not all robots understand it.
Instead:
Disallow: /
should be used.
Nonstandard extensions
Crawl-delay directive
Several major crawlers support a Crawl-delay parameter, set to the number of seconds to wait between successive requests to the same server: [2] [3]
User-agent: *
Crawl-delay: 10
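For what it's worth, Python's urllib.robotparser (3.6 and later) exposes this value, so a crawler can honour it; a minimal sketch:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse(["User-agent: *", "Crawl-delay: 10"])
print(rp.crawl_delay("*"))  # 10 -- sleep this many seconds between requests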
Allow directive
Some major crawlers support an Allow directive, which can counteract a following Disallow directive.[4] [5] This is useful when you disallow an entire directory but still want some HTML documents in that directory crawled and indexed. Under the standard implementation the first matching robots.txt pattern always wins; Google's implementation differs in that it first evaluates all Allow patterns and only then all Disallow patterns. To stay compatible with all robots, if you want to allow single files inside an otherwise disallowed directory, place the Allow directive(s) first, followed by the Disallow, for example:
Allow: /folder1/myfile.html
Disallow: /folder1/
This example will disallow anything in /folder1/ except /folder1/myfile.html, since the latter will match first. In Google's case, though, the order does not matter.
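You can verify the ordering with urllib.robotparser, which, as far as I know, also uses first-match semantics; a quick sketch:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Allow: /folder1/myfile.html",  # must come first for first-match parsers
    "Disallow: /folder1/",
])
print(rp.can_fetch("*", "http://example.com/folder1/myfile.html"))  # True
print(rp.can_fetch("*", "http://example.com/folder1/other.html"))   # False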
Sitemap
Some crawlers support a Sitemap directive, allowing multiple sitemaps in the same robots.txt in the form:[6]
Sitemap: http://www.gstatic.com/s2/sitemaps/profiles-sitemap.xml
Sitemap: http://www.google.com/hostednews/sitemap_index.xml
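Python's urllib.robotparser (3.8 and later) will collect these for you via site_maps(); a minimal sketch reusing the two URLs above:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "Sitemap: http://www.gstatic.com/s2/sitemaps/profiles-sitemap.xml",
    "Sitemap: http://www.google.com/hostednews/sitemap_index.xml",
])
print(rp.site_maps())  # list of both sitemap URLs, or None if there are none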
Extended standard
An Extended Standard for Robot Exclusion has been proposed, which adds several new directives, such as Visit-time and Request-rate. For example:
User-agent: *
Disallow: /downloads/
Request-rate: 1/5 # maximum rate is one page every 5 seconds
Visit-time: 0600-0845 # only visit between 06:00 and 08:45 UTC (GMT)
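Support for these is spotty. Python's urllib.robotparser (3.6 and later) understands Request-rate but, as far as I know, simply ignores Visit-time; a sketch:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /downloads/",
    "Request-rate: 1/5",
])
rate = rp.request_rate("*")         # RequestRate(requests=1, seconds=5)
print(rate.requests, rate.seconds)  # 1 5
# Visit-time is not parsed by this library and is silently skipped.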
I hope that answers your question.