Forum Moderators: Robert Charlton & goodroi
Over the last 24 hours, Googlebot encountered 24 errors while attempting to access your robots.txt. To ensure that we didn't crawl any pages listed in that file, we postponed our crawl.
Google's user agent for web search should be "Googlebot" and not "Google".
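A matching record in robots.txt would therefore address the Googlebot token, for instance (the disallowed path here is just a placeholder):

```
User-agent: Googlebot
Disallow: /private/
```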
Sitemap: [example.com...] Might your robots.txt be at https://www.example.com/robots.txt rather than http://www.example.com/ as shown in your question? If the site is set to serve all content from https: instead of http:, you will want to do an address change in GWT and verify the "new" site so Google will look for your files at the correct URL, especially if proper 301 redirects are in place.
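The point above is that robots.txt lives at the root of each scheme+host combination, so the http: and https: versions of a site have distinct robots.txt URLs. A minimal sketch (the page URLs are placeholders):

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url):
    """Build the robots.txt URL for the scheme+host of a given page URL."""
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

# The http: and https: sites resolve to different robots.txt locations:
print(robots_url("http://www.example.com/some/page"))   # http://www.example.com/robots.txt
print(robots_url("https://www.example.com/some/page"))  # https://www.example.com/robots.txt
```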
This means that although the robots.txt may have errors -- as detailed above -- it isn't the source of the problem, because Google couldn't get to robots.txt in the first place.
The robot should be liberal in interpreting this field. A case insensitive substring match of the name without version information is recommended.
3.2.1 The User-agent line
Name tokens are used to allow robots to identify themselves via a simple product token. Name tokens should be short and to the point. The name token a robot chooses for itself should be sent as part of the HTTP User-agent header, and must be well documented.
These name tokens are used in User-agent lines in /robots.txt to identify to which specific robots the record applies. The robot must obey the first record in /robots.txt that contains a User-agent line whose value contains the name token of the robot as a substring. The name comparisons are case-insensitive. If no such record exists, it should obey the first record with a User-agent line with a "*" value, if present. If no record satisfied either condition, or no records are present at all, access is unlimited.
The name comparisons are case-insensitive.
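The selection rule quoted above can be sketched in a few lines. This is a rough illustration, not any crawler's actual implementation; the sample records are made up, and stripping the version with `split("/")` is an assumption based on the "name without version information" wording:

```python
def select_record(records, robot_token):
    """Pick the robots.txt record that applies to a robot, per the quoted draft:
    first record whose User-agent value contains the robot's name token
    (without version info) as a case-insensitive substring; otherwise the
    first "*" record; otherwise no record, meaning access is unlimited."""
    token = robot_token.split("/")[0].lower()  # drop version info, e.g. "Googlebot/2.1" -> "googlebot"
    for ua_value, rules in records:
        if token in ua_value.lower():
            return rules
    for ua_value, rules in records:
        if ua_value == "*":
            return rules
    return []  # no matching record: access is unlimited

# Hypothetical records parsed from a robots.txt file:
records = [
    ("Googlebot", ["Disallow: /private/"]),
    ("*", ["Disallow: /tmp/"]),
]

print(select_record(records, "Googlebot/2.1"))  # ['Disallow: /private/']
print(select_record(records, "SomeOtherBot"))   # ['Disallow: /tmp/']
```

Note that the match is on the short name token ("Googlebot"), not the full HTTP User-agent header, which is why a record for "Google" rather than "Googlebot" would not behave as intended.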
I did not fully understand the O.P.'s issue, and the pedantic follow-up posts only clouded it further in my mind.
Over the last 24 hours, Googlebot encountered 24 errors while attempting to access your robots.txt.
The best way to check whether robots.txt is the problem is to use "Fetch as Googlebot" in WMT and fetch both the home page and the robots.txt file. If you get the message "unreachable robots.txt", then this could be the problem even if robots.txt does not exist or never existed on the site -- in which case go and check your response codes!
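The distinction being drawn here -- a missing robots.txt is fine, an unreachable one is not -- can be sketched as a simple classifier. The status-code handling below is an assumption inferred from the error message and advice quoted in this thread, not an official Google specification:

```python
def robots_fetch_outcome(status):
    """Hypothetical interpretation of a robots.txt fetch result.

    A 404 (file simply doesn't exist) permits normal crawling, but a
    server error or failed connection makes a cautious crawler postpone
    crawling rather than risk fetching disallowed pages."""
    if status is None:           # connection failed or timed out
        return "unreachable robots.txt: postpone crawl"
    if 200 <= status < 300:
        return "parse robots.txt and obey it"
    if status == 404:
        return "no robots.txt: crawl everything"
    if 500 <= status < 600:
        return "unreachable robots.txt: postpone crawl"
    return "unexpected response: check your response codes"

print(robots_fetch_outcome(404))   # no robots.txt: crawl everything
print(robots_fetch_outcome(503))   # unreachable robots.txt: postpone crawl
```

This is why "robots.txt does not exist" and "unreachable robots.txt" are very different situations, even though both mean Google got no rules file.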
Also note that the "Blocked URLs" option in WMT, which "tests" the robots.txt, is not a good way to test this particular case, as it still reports the home page as "Allowed".