Exactly...not to sound all high-and-mighty, but sometimes people who haven't been up to their elbows in stuff like this only see input and output; they can't imagine what needs to happen in between. As worthless and outdated as some of my CS coursework seemed at the time, I realize part of it was just intended to teach us how to think. At least I hope that's what they were going for...it definitely had no immediate practical application. ;)
That's a very naive assumption. IMO, googlebot will be parsing & running basic JS long before it can spot even the simplest implementations of JS hidden text. CSS doesn't seem especially meaningful to Google except in spotting hidden text, but that's going to be a very difficult task as well. Combine JS+CSS in any reasonably clever way and things are going to get extremely dicey, nearly impossible to detect.
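To make the point concrete, here's a minimal sketch (the element object, text, and `onLoad` function are all hypothetical, just modeling what a browser does) of why a crawler that parses markup but doesn't execute JS can't tell that a block of text ends up hidden:

```javascript
// Hypothetical model: a DOM element as a plain object. The text is present
// in the raw markup, which is all a non-JS-executing crawler ever sees.
const el = { text: "keyword stuffing here", style: { display: "" } };

// What the crawler indexes: the text as it appears before any script runs.
const crawlerView = el.text;

// What the browser does after page load: a one-liner, possibly buried in an
// external .js file, flips the element invisible. The user never sees it.
function onLoad(node) {
  node.style.display = "none";
}
onLoad(el);

console.log(crawlerView);      // "keyword stuffing here" -- indexed
console.log(el.style.display); // "none" -- invisible to the user
```

The gap between `crawlerView` and the post-script state is exactly what a detector would have to close, and closing it means actually running (or faithfully simulating) the script.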
Think about everything a compiler has to do, and that's just what's needed to transform language code into machine code. Compilers can't tell you the first thing about what a program will do once it runs, and this particular problem is more on that level. In order to detect this stuff, Google has to determine the effect of code, which is a very difficult task for a machine...and a relatively simple one for a human.
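Here's a hedged sketch of why pattern-matching on the code itself isn't enough (again, the element object is a hypothetical stand-in for the DOM): a scanner grepping for the literal string `"none"` or a `display: none` rule finds nothing, because the value only exists once the script runs.

```javascript
// Hypothetical model of a DOM element, as before.
const el = { style: { display: "" } };

// The telltale value is assembled at runtime; no static string search on
// the source will ever match "none".
const parts = ["no", "ne"];
el.style.display = parts.join("");

console.log(el.style.display); // "none" -- but only after execution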
I remain convinced that it is very difficult. It goes far beyond a question of development effort.
First of all, any kind of simulated scripting is going to increase processing time by orders of magnitude. They'll need every second they can shave off PR calculations and maybe another 50,000 boxes. Sure, they can afford it, but I don't think it's going to look like a good idea to the board of directors come IPO time, let alone the current leadership.
Secondly, assuming a detection system like this would dole out automatic penalties, false positives are going to be a major issue. No system like this is foolproof in either direction, and getting both the right detection code to begin with and then the right threshold for a penalty is going to require lots of analysis.
So this is indeed a non-trivial problem, perhaps one not suited for algorithmic detection. That's not to say that they shouldn't try, but in the meantime, it's as compelling an argument for paying attention to and responding to spam reports as I've got.