At one time, searching Bing or Google for ".well-known" would return web pages that merely mentioned the string ".well-known", a string many WebmasterWorld users had been seeing in their log files.
GoogleBot started making requests like these:
GET /.well-known/apple-app-site-association
GET /apple-app-site-association
And many people were wondering what was up with that...
RFC 5785, "Well-Known Uniform Resource Identifiers", explains what these are, why they exist, and how to implement them: a reserved /.well-known/ path prefix under which site-wide metadata files can be published.
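For context, the apple-app-site-association file that GoogleBot was probing for is a JSON document that Apple devices fetch from a site to link it to an iOS app (Universal Links). A minimal illustrative example might look like this (the team ID, bundle ID, and paths below are placeholders, not real values):

```json
{
  "applinks": {
    "apps": [],
    "details": [
      { "appID": "TEAMID.com.example.app", "paths": ["/shop/*"] }
    ]
  }
}
```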
After testing, I have found that this works for GoogleBot (in robots.txt, where the rules must sit inside a User-agent group):
User-agent: *
Disallow: /.well-known/
Disallow: /apple-app-site-association
Note that the second rule has no trailing slash: GoogleBot requests the file /apple-app-site-association itself, and since robots.txt rules are prefix matches, a rule ending in a slash would not match that bare path.
As an aside, as a test, I appended
/.well-known/
to the URLs of some popular CMS sites, such as Joomla, to see what they would do. Joomla served up a (very large) "Page not found" page; obviously, Joomla's entire code base loaded and ran, only to discover that the requested page did not exist. (That is a fault of many CMS programs.)
My point being that web developers really need to detect certain obviously-never-will-exist requests *before* they load their entire code base (often tens of megabytes of PHP or Perl code).
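As a sketch of that idea (the prefix list and function name below are hypothetical, not from any particular CMS): a front controller can check the request path against a short deny list and answer 404 immediately, before pulling in the heavy framework code.

```python
# Illustrative sketch: cheaply reject requests that can never exist,
# before importing/booting the full CMS code base.
# The prefixes and names here are hypothetical examples.
NEVER_EXIST_PREFIXES = (
    "/.well-known/",
    "/apple-app-site-association",
)

def is_cheap_404(path: str) -> bool:
    """Return True if this request should get an immediate, cheap 404."""
    return path.startswith(NEVER_EXIST_PREFIXES)

# Only when is_cheap_404() returns False would the full framework be loaded.
```

The same check could be pushed even earlier, into the web server's rewrite rules, so the application never starts at all for these paths.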
But I digress...
(Joomla now detects and properly ignores
/.well-known/
requests.)