1script - 4:36 pm on Nov 3, 2011 (gmt 0)
They've been executing JS for quite some time now (3-4 years, I think) to fish for hidden links outside the HTML source. I understand that could be done during indexing, i.e. not in real time.
But are they now executing it during the crawl, or still at the indexing/processing stage? I mean, those Facebook comments and the like are all time-sensitive info. It would make no sense to execute THAT JS later - so are they going to run all the JS on every page they download right that instant?
I didn't think there was enough computing power in the world to let anyone do that, not at their scale anyhow. How about JS with errors? Endless loops and otherwise bad code? Wow, this looks like a can of worms I would prefer to keep sealed ...
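(For what it's worth, the "endless loops and bad code" problem is usually handled by giving the renderer a hard time budget rather than trusting the page's JS to finish. A minimal sketch of that idea, using a headless browser with a timeout - this is just an illustration with an assumed 10-second budget, not anything Google has confirmed about its own pipeline:)

```typescript
import puppeteer from 'puppeteer';

// Render a page with JS enabled, but under a hard time budget so that
// infinite loops, slow scripts, or broken code can't stall the crawler.
async function renderWithBudget(url: string): Promise<string | null> {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();

    // Log page-level JS errors instead of letting them abort the fetch.
    page.on('pageerror', (err) => {
      console.warn(`JS error on ${url}: ${err.message}`);
    });

    // Give up if the page hasn't settled within 10 seconds (assumed budget).
    await page.goto(url, { waitUntil: 'networkidle0', timeout: 10_000 });

    // Return the DOM as it looks after scripts have run.
    return await page.content();
  } catch {
    // Timed out or crashed: the crawler can fall back to the raw HTML.
    return null;
  } finally {
    await browser.close();
  }
}
```

So a crawler doesn't have to run every page's JS to completion; it just spends a fixed slice of CPU per page and moves on, which makes the "not enough computing power" objection less absolute than it first looks.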