IMHO, informational websites probably need to update their "stale" pages, while other types of site, such as news, can get away with leaving them as they are.
Ah, but how is a few thousand lines of code going to discriminate accurately when humans can read the same text and still misinterpret it, especially when skim reading?
When I say "a few thousand lines of code" I don't actually know how many lines of code Google uses, but I wrote an application a couple of years ago that consisted of 10,000 lines of code. Most of that code was there to compensate for human error in the inputs for scoreboards and overall percentages/ratings across a variety of usage scenarios, so I guess it could be similar. Google could be using 100,000 lines of code or more, with only 5-10% of it actually being referenced in any one instance, depending on which conditions are met.
So when it comes down to it, I think that expecting Google indexing to be able to tell good grammar from bad is absurd. For example, is it going to check absolutely every word against a dictionary to see whether it a) exists, b) is a proper noun, new terminology, an abbreviation, etc., c) is misspelt, or d) is spelled according to which language? And after all that is done, is it then going to pedantically grade literary composition?
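Just to illustrate the point, here's a rough sketch (in Python, and obviously nothing like whatever Google actually does) of what even a naive per-word check would involve. The word lists and categories are entirely made up for the example; a real system would need vastly bigger dictionaries and a lot of context that a simple lookup can't provide.

```python
# Hypothetical sketch of a naive per-word classifier -- NOT Google's logic.
# All the word lists below are tiny stand-ins for illustration only.
import re

ENGLISH_WORDS = {"colour", "color", "grammar", "website", "says", "and", "are", "both", "fine"}
PROPER_NOUNS  = {"Google", "London"}
ABBREVIATIONS = {"IMHO", "SEO", "HTML"}
BRITISH_ONLY  = {"colour", "optimise"}
AMERICAN_ONLY = {"color", "optimize"}

def classify_word(word: str) -> str:
    """Return a crude label for one token; real grammar checking needs far more context."""
    if word in ABBREVIATIONS:
        return "abbreviation"
    if word in PROPER_NOUNS:
        return "proper noun"
    lower = word.lower()
    if lower in ENGLISH_WORDS:
        if lower in BRITISH_ONLY:
            return "valid (British spelling)"
        if lower in AMERICAN_ONLY:
            return "valid (American spelling)"
        return "valid"
    # Anything unrecognised could be a typo, new terminology, slang, a name...
    return "unknown -- misspelt? new term? another language?"

if __name__ == "__main__":
    for token in re.findall(r"[A-Za-z']+", "Google says colour and color are both fine IMHO"):
        print(f"{token:10s} -> {classify_word(token)}")
```

Even this toy version can't tell a misspelling from a new coinage or a foreign word without context, which is exactly why I'm sceptical that an indexer could grade composition reliably.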