Swa, I think the concern of mine that conflicts with your statement was actually that people don't have a good structure for learning things such as CSS and other web-oriented languages. It's good that people can just jump in, though my concern is that when people don't understand the fundamentals of how a language started, at least in the context of CSS, they will attempt to use newer properties and selectors to achieve simpler goals that they should be using older properties and selectors for. I think that's probably a better way for me to word it than my "challenge". :o
Trying to get back on topic I'll go over some of the warnings I encountered and why I feel they're invalid concerns.
Use efficient CSS selectors
This is an invalid warning because nothing is more annoying than not knowing which label is associated with which field. In fact I even like WebKit's take that when you hover over a label element it actually triggers the :hover pseudo-class of the input element it's associated with, presuming of course that the label's for attribute's value matches the input's id attribute's value. These are visual cues that aid the user. Aiding the user is more important than saving 0.04ms or whatever amount of time is "wasted".
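As an illustration (the "email" id and the highlight color are my own, not from any particular site), the label/input association takes nothing more than matching attribute values:

```html
<!-- The label's for value must match the input's id value. -->
<label for="email">E-mail address:</label>
<input type="text" id="email" name="email">
```

```css
/* In WebKit, hovering the associated label also applies this rule
   to the input, giving the user a visual cue of the pairing. */
input:hover { background-color: #ffc; }
```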
Additionally, for WAI AAA compliance I am very strict about how I construct my CSS selectors. You have to put form elements inside a fieldset, just as you can't put a block-level element inside of an inline element. There were so many warnings that I can't remember exactly where this one was located, but it was another false warning that can safely be ignored.
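To sketch what I mean by structurally strict selectors (the specific rules here are hypothetical examples, not from my actual style sheets), the descendant chain mirrors the required markup:

```css
/* Form fields live inside a fieldset, which lives inside a form,
   so the selectors spell out that structure explicitly. */
form fieldset label { cursor: pointer; }
form fieldset input { border: 1px solid #999; }
```

These are exactly the "inefficient" descendant selectors the warning complains about, yet they document the document structure for free.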
Remove unused CSS
This is perhaps one of the most misleading warnings. Its implementation is too simple and therefore should have been kept out of the extension until it was built to the level of complexity the problem demands. I don't make a style sheet for each page; that would be a total waste! Why download a whole new CSS file per page? So it's only natural that each page will lack some traits of other pages...you're not going to make a single-page website, are you? Aren't the SEO forums screaming about providing as much good content as possible? Aren't the search engines trying to find good sites and weed out the junk? So why slap good sites in the face for having more than one page, and thus CSS for more than one page?
This is no different than Vista's (and to a lesser extent 7's) memory management, where the OS treats the RAM like a RAM drive (RAM drives are battery-backed "hard drives" for super-enthusiasts, tending to hold 8GB+ for insanely fast OS and application launching). It simply presumes you want to open anything and everything, and thus dumps it all into memory. A proper implementation would determine that when the user does A they are likely to do B, and load the things associated with B. The hard drive is the slowest part of the computer, so why slow it down with unnecessary reads that have no justification of ever being needed in the first place?
Implementation is key, and if this feature were to be implemented correctly I would have dictated that it crawl through the site (say, limited to a domain/path of localhost/projects/happy_fish) and then, after looking at all the pages, determine what went unused...now that would be insanely useful! Are you listening, Google?
Parallelize downloads across hostnames
I can sort of understand the concern here because there is a connection limit, especially on Windows XP Service Pack 2, where simultaneous half-open TCP/IP connections were limited to a maximum of ten to combat spyware and zombie computers sending out hordes of spam. However, that restriction had been eased the last time I read on the subject. Besides, it doesn't make sense for smaller businesses and individuals to spend money on a second domain, spend time ripping files out of their main project and sticking them somewhere else under a different domain, and deal with all the headaches associated with that. Honestly I think this issue is negated by caching headers. If you set the right expiration headers for your file types, the browser won't even make an HTTP request for those files until the expiration date passes, so you won't even have to wait for the server to return HTTP 304. That is the direction Google should have taken with this warning.
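For example, assuming an Apache server with mod_expires enabled (your server and preferred lifetimes may differ), a minimal sketch of such headers might look like:

```apache
# Requires mod_expires. With far-future expiration dates the browser
# serves these file types from its cache without any HTTP request,
# avoiding even the round trip for a 304 Not Modified response.
ExpiresActive On
ExpiresByType text/css "access plus 1 month"
ExpiresByType application/javascript "access plus 1 month"
ExpiresByType image/png "access plus 1 month"
```

The trade-off is that cached files won't refresh until they expire, so sites that do this usually rename or version a file when it changes.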
I couldn't find this option so if someone would be kind enough to point it out I'd appreciate it! :-)