enigma1 - 6:14 pm on Mar 28, 2012 (gmt 0)
In my view the main problem with this is server resources.
Perhaps Google wants to do additional validation on these supporting scripts, and that could have some benefits for webmasters too, such as a report of non-existent CSS or JS files that we could then fix.
However, the bandwidth wasted to do that will be tremendous, and other spiders will surely follow this approach, which may outweigh the benefits. Of course we could use compression, caching, etc. for these scripts on the server, but I don't know whether the number of extra connections alone will interfere with the site's functionality.
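For what it's worth, here is a minimal sketch of that compression/caching idea on an Apache server (assuming mod_deflate and mod_expires are enabled; the directives would need adjusting for your own setup):

    # Compress and cache CSS/JS so repeated bot fetches cost less bandwidth
    <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/css application/javascript
    </IfModule>
    <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType text/css "access plus 1 week"
        ExpiresByType application/javascript "access plus 1 week"
    </IfModule>

That only helps with bandwidth, though, not with the number of connections.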
Personally I don't block these scripts because there is no need yet, as Googlebot won't crawl them, but we will see how this evolves and what resources it will consume. I think at this point Matt owes us a better explanation of the topic and of what the extra accesses will be used for. Is it for some kind of file validation, in-depth cross-referencing of code, indexing, etc.?
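If it ever does become a resource problem, blocking would only take a couple of lines in robots.txt; a rough sketch (the /css/ and /js/ paths are just placeholders for wherever the files actually live):

    User-agent: Googlebot
    Disallow: /css/
    Disallow: /js/

Whether that hurts you in other ways once Google starts relying on rendering the page is another question.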
As a side note, Yahoo already does something like this, and from what I see in my logs it is terrible at the moment. Then again, they don't follow the cache headers.