Now, check out Google's robots.txt. I thought it was pretty clear to everybody that the Allow: directive is not part of the original robots.txt standard... If you try to validate www.google.com/robots.txt with the [searchengineworld.com] validator, you simply get an error because Google uses the Allow: directive in its robots.txt!
Does this mean that Google's crawlers actually honor the Allow: directive?
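For what it's worth, some parsers do understand Allow:. Here's a rough Python sketch using the standard library's robotparser, which handles Allow lines; the second path below is just a made-up example, not something taken from Google's actual file:

# Rough sketch: see how a parser that understands Allow: treats Google's robots.txt.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://www.google.com/robots.txt")
rp.read()

# /search is disallowed in the file, so this should come back False.
print(rp.can_fetch("*", "http://www.google.com/search"))
# A path covered by an Allow: line (hypothetical example) would come back True
# in a parser that supports Allow, even if it sits under a disallowed prefix.
print(rp.can_fetch("*", "http://www.google.com/search/about"))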
A bit off-topic, but I see /search disallowed in Google's robots.txt, and if you look at these results: [search.msn.com...] you'll see a Google search page listed there, which means search.msn has indexed a disallowed URL. It's funny to see a Google results page listed in a Microsoft one :)
The msnbot has probably picked up that URL from links on several other websites; robots.txt only blocks crawling, so a disallowed URL can still show up in results when other sites link to it...