Sitemaps, Meta Data, and robots.txt Forum

    
Robots.txt validator updated.
Brett_Tabke
4:57 am on Apr 19, 2002 (gmt 0)

I've updated the robots.txt validator over on SearchEngineWorld [searchengineworld.com].

Change log:

Filenames
Adjusted the URL scheme so that the filename "robots.txt" is _not_ forced. This lets you check any robots.txt-formatted file, so you can validate development copies without the risk of a robot hitting the real file while it is in an invalid state.

Duplicate Agent Fields
Added several checks and warnings for duplicate agent fields, including wildcard parsing. Repeating a wildcard agent name for multiple disallows is very common; however, one search engine has informed me that it may have a problem with duplicate agent wildcards.

Although not specifically addressed by the robots.txt standard, formats such as the following may be a problem with some spiders:

User-agent: *
Disallow: /apples

User-agent: *
Disallow: /oranges

User-agent: Zippybot
Disallow: /chevy

User-agent: Zippybot
Disallow: /chrysler
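
If a spider does have trouble with repeated agent names, the same rules can be collapsed into one record per agent, since a record may contain multiple Disallow lines:

User-agent: *
Disallow: /apples
Disallow: /oranges

User-agent: Zippybot
Disallow: /chevy
Disallow: /chrysler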

Case Checks
Several more case-check errors have been added for agent and disallow field names. There is some controversy about how the standard should be interpreted, so I felt the stricter interpretation should be used.

Feedback appreciated.

 

physics
5:05 am on Apr 19, 2002 (gmt 0)

Thanks Brett.

pageoneresults
8:22 am on Apr 19, 2002 (gmt 0)

Brett, thank you! This clears up a nagging question on the trailing slash issue. There are quite a few who will find this rather interesting.

Doofus
1:13 pm on Apr 19, 2002 (gmt 0)

I've never found it confusing; maybe I do too much C programming and need to get out more. But there are two other aspects of robots.txt that I think need to be mentioned. First, this is how I see the slash thing:

You have two terms, one shorter than the other. The short term is the one in the robots.txt file; the long one is the URL the spider is considering. It's actually a non-issue which is which, because you only consider the situation to the depth of the shorter term.

The URL the spider is considering always starts with a slash, because the scheme and domain have already been stripped off, per standard practice. A single slash represents your website's root directory. That's why a single slash represents a total Disallow.

You want a "leading letters" match to the depth of the shorter term. It's like a strncmp(A,B,X) where X is the length of the shorter term, and A and B are the two terms.

If you use /help/ then the match must be perfect 6 characters into the URL. If you use /help then the match must be perfect only 5 characters into the URL.

So the 6-character example is necessarily a directory disallow. But the 5-character match could be either a directory or something else.
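
A minimal C sketch of that test (the function name is made up, it uses the rule's own length as the comparison depth, and real spiders do more than this):

#include <stdio.h>
#include <string.h>

/* Leading-letters match: the URL path is disallowed when the rule
   matches it character-for-character to the depth of the rule. */
static int is_disallowed(const char *rule, const char *path)
{
    return strncmp(rule, path, strlen(rule)) == 0;
}

int main(void)
{
    printf("%d\n", is_disallowed("/help/", "/help/index.html")); /* 1: blocked, directory match */
    printf("%d\n", is_disallowed("/help",  "/helpdesk.html"));   /* 1: blocked, 5-character match */
    printf("%d\n", is_disallowed("/help/", "/helpdesk.html"));   /* 0: allowed, 6th character differs */
    return 0;
}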

A /help* is a no-no because you will never get a leading letters match on a URL this way. No URL has an asterisk in it.

PROBLEM 1:

Now this is why I don't like the apparently acceptable format of telling a spider that it's okay to spider everything by using this:

User-Agent: *
Disallow: [ nothing after the Disallow ]

It worries me for this reason:

Assuming the spider properly disregards all whitespace after the colon that follows Disallow on that line (space, carriage return, and/or line feed), the shorter term now has a length of zero. Guess what? A standard string comparison to depth zero returns a match for any two strings (the null set is a true set) with strncmp(). That means the spider has to be smart enough to add a second test and exclude the null case from the comparison. This makes me nervous.

Let me state it more succinctly: in most programming languages the null set is a true set, and comparisons against a null will not throw an error but will return true. In the robots.txt standard, a Disallow: with nothing after it is a null set, yet the standard says this should come out false: every URL is considered a mismatch with the null, and the spider proceeds to crawl the entire site. My confidence that all spiders actually test for the null condition is not very high.
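
Concretely, extending the sketch above, the extra test would look something like this:

#include <string.h>

/* strlen("") is 0, and strncmp(a, b, 0) always returns 0 ("equal"),
   so without this guard an empty Disallow value would block everything. */
static int is_disallowed_safe(const char *rule, const char *path)
{
    if (rule[0] == '\0')
        return 0;   /* empty Disallow excludes nothing; spider the whole site */
    return strncmp(rule, path, strlen(rule)) == 0;
}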

It's much better to think of robots.txt as ONLY an exclusion standard. I cannot conceive of a case where it's necessary to use nothing after the "Disallow:". Much better to leave it out.

PROBLEM 2:

The standard says that the first User-agent record a particular spider encounters in a robots.txt that applies to it, either by direct name or by wildcard, is the record that spider should obey. At that point the spider has what it needs and should not be consulting the rest of the robots.txt.

That's the second thing I often see done improperly in robots.txt files: the arrangement of the various sections matters. It makes no sense to have a User-Agent: * on top and then a long list of specific bots below it, each with their own Disallows.
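
Under that reading, a specific bot's record has to come before the catch-all or the bot will never reach it, for example:

User-agent: Zippybot
Disallow: /chevy
Disallow: /chrysler

User-agent: *
Disallow: /apples
Disallow: /oranges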

Is my way of understanding this stuff reasonable?

Xoc
3:05 pm on Apr 19, 2002 (gmt 0)

Does anyone else think the "standard" isn't much of one? It's not very precise. Where is the W3C spec for robots.txt?

Doofus
6:39 pm on Apr 19, 2002 (gmt 0)

All I could find was a paper written in 1996 where they concede that there are lots of things that need improving:

[w3.org...]

Apparently it fell through the cracks.

A couple of months ago, on a minor freebie site I have, I was trying to get Google to remove everything. The site's URL results from the fact that it was free web space from Sprint (now part of EarthLink). It looked like this:

[home.sprintmail.com...]

Of course, I don't have access to anything above my directory. When removing stuff, you have to put in a robots.txt before you click on the Google remove. I can't remember how it turned out, but I do remember that I had a lot of trouble getting Google to understand that my robots.txt WAS in my root directory. Google's respondbot kept insisting that it was too far down in the site.

Doofus
6:48 pm on Apr 19, 2002 (gmt 0)

And another thing -- I screwed up in my post above. I've read that the "User-agent:" and the "Disallow:" are case-sensitive, which means that my upper-case "A" on "Agent" was a mistake. It's all so counter-intuitive.
