I think the key issue here is that Google must have sufficient objectivity to realise how bad they are at accurately assessing the quality of the written word.
Consider Google Translate: a project entirely dedicated to language, yet it rarely returns a translated sentence that reads correctly to human eyes. More often than not the output is total nonsense.
Testing Google’s reading level assessment this morning against sites full of spun articles so bad as to be unreadable, Google rated many of them intermediate or advanced. Why? Probably because vocabulary must be part of the assessment algo: spinning software searches for opportunities to replace words and inevitably reaches for less common synonyms, and that exotic vocabulary may register as an advanced reading level.
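To make the guess concrete, here is a toy sketch of how a purely vocabulary-driven heuristic could misfire on spun text. Everything below is hypothetical: the common-word list, the thresholds, and the function names are invented for illustration, and Google's actual reading-level algo is not public.

```python
# Hypothetical list of "common" words; a real system would use a large
# frequency-ranked corpus, but the idea is the same.
COMMON_WORDS = {"the", "a", "is", "good", "use", "make", "easy", "word"}

def rarity_score(text: str) -> float:
    """Fraction of words that fall outside the common-word list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    rare = sum(1 for w in words if w not in COMMON_WORDS)
    return rare / len(words)

def reading_level(text: str) -> str:
    """Map rarity to a level; thresholds are invented for this sketch."""
    score = rarity_score(text)
    if score < 0.3:
        return "basic"
    elif score < 0.6:
        return "intermediate"
    return "advanced"

# A spun sentence that swaps ordinary words for exotic synonyms scores
# higher, despite being worse prose.
plain = "the word is easy to use"
spun = "the lexeme is facile to utilise"
print(reading_level(plain))  # → basic
print(reading_level(spun))   # → advanced
```

The spun sentence is strictly less readable, but because a vocabulary-only heuristic has no notion of coherence or fluency, the rarer synonyms push its score up rather than down.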
Google must know they can’t do this very well, and whilst they will certainly continue to test, improve, and integrate such emergent techniques, it doesn’t make sense for this metric to figure strongly in the algo just yet.