Silvery - 3:24 pm on Jun 28, 2010 (gmt 0)
MichaelBluejay asked how it helps anyone -- I think Google would say that it helps them discover many more links/pages that unknowing webmasters unintentionally hid from bots. They'd also say that it's necessary in order for them to discover various cloaking-style exploits and other spammer hacks, and so that they can suppress/warn users of malicious content.
There is a difference between actually executing code as a useragent and merely interpreting what the code does in a predictive-modelling fashion. The difference lies primarily in the potential impact that executing the code would have on a live application. Otherwise, if Google simply knows what an application does and how it affects users' interactions with a site, the difference is largely semantic -- Google's use of that information would be the same regardless of the method used.
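To illustrate the distinction (a hypothetical sketch, not Google's actual pipeline): a link assembled at runtime by JavaScript never appears verbatim in the page source, so a crawler that only pattern-matches the raw HTML misses it, while a crawler that executes the script and inspects the resulting DOM finds it. The page markup and URLs below are invented for the example.

```python
import re

# Hypothetical page: the href is built by string concatenation at
# runtime, so the full URL never appears in the HTML source.
html = """
<script>
  var base = "http://example.com/";
  document.write('<a href="' + base + 'hidden-page">link</a>');
</script>
"""

# A crawler that merely scans the source text finds no complete URL.
static_links = re.findall(r'href="(http[^"]+)"', html)
print(static_links)  # → []

# A crawler that executes the script sees the final rendered markup.
# (Execution is simulated here by performing the concatenation.)
base = "http://example.com/"
rendered = '<a href="' + base + 'hidden-page">link</a>'
executed_links = re.findall(r'href="(http[^"]+)"', rendered)
print(executed_links)  # → ['http://example.com/hidden-page']
```

The same gap is what lets cloaking-style tricks hide content from source-scanning bots, which is the discovery problem described above.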