CSS Selectors and Page Speed
alt131 - 11:56 pm on Mar 7, 2011 (gmt 0)
Thread source: http://www.webmasterworld.com/css/4274514.htm
|fickering [scottish word for messing]|
he he. Is this the modern form of fichering? Recalling our Scots stopped evolving ... or maybe it's treated as the Germanic root, if that is still in use.
... But hey, you got my undivided attention :)
|weighted - definitely yes - some sites might take care of one part some of another, making it balance itself out?|
But if that's the case, the whole scoring system is as sensible as putting a sea anchor on a Formula One race car.
I think weightings are of two (maybe more) relevant types. The first is in the allocation of "points" - e.g. having style rules automatically scores a certain number of points. The second is the comparative weighting - is compression worth more than efficient CSS, or the reverse, etc.?
It's the failure to disclose that irritates, and YSlow reinforces that by recording different "performance scores" based on the selected ruleset. The site's performance didn't change, just the points allocated for different features. If I cared, I'd be offended by the assumption coders are too dumb to make the distinction.
An even worse example, from the build notes:
"Removed "Avoid CSS Expressions" rule since CSS expressions are no longer supported in any modern browsers"
What! So that (not so) old code may get 100% because CSS expressions simply aren't measured any more. That would be fine if the reason was (as you've identified) that unused selectors aren't penalised as much as non-specific selectors. But it's not: it's because in someone's universe the world updated to IE8 sometime in May 2010 - and all coders immediately updated their code? Double irony - I was surfing using IE7.
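For anyone who never met one, this is the sort of code that rule used to flag - the old IE-only expression() syntax, which the browser re-evaluates constantly (selector and values made up for illustration):

#content {
/* works in IE5-7 only; re-run on every style recalculation, hence the old performance rule */
width: expression(document.body.clientWidth > 800 ? "800px" : "auto");
}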
|[how can the unused code differ if I haven't changed it? hmm]|
I know that's rhetorical - but possibilities:
# The desire to reward combine/minify means the internal logic realigns assumptions and weightings - without following through on the impact that would have on some of the details
# The program can't count accurately - which may explain other oddities in results
# You discovered a bug. I found a similar one reported in January, but not actioned - although the report was a little confusing to read - maybe worth reporting again?
Thanks for the pointer to GTmetrix - very handy.
|and make the selectors as efficient as possible at the same time|
Oh what fun :) Looking forward to the results - but is that "pagespeed efficient", which seems to mean specific?
Random thoughts on testing
Barons said Moz dumps all ids into a hash. That suggests using ids or classes - and probably ids, on the basis that the browser just applies the defaults, then checks ids, then paints, as opposed to having to check for classes as well. Plus, as an id can only be used once, it could be purged after it has been used, so assuming no unused selectors, parsing speed should increase as the page loads. Trouble is, browsers will happily apply the same id more than once, so the ids aren't purged - no speed gain there.
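If I have the mechanics right (this is my reading of Mozilla's old "Writing Efficient CSS" notes, so treat it as a sketch, with made-up selectors), each rule is bucketed by its rightmost "key" selector - so the id hash only helps when the id actually is the key:

#sidebar { float: left; } /* keyed in the id hash on "sidebar" */
.item { margin: 0; } /* keyed in the class hash on "item" */
li { list-style: none; } /* keyed in the tag hash on "li" */
#sidebar li a { color: red; } /* key is "a", so tag hash - the id is only checked while matching leftwards up the tree */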
Second, a characteristic of heavily id-ed divitis seems to be id plus class plus multiple classes. So I'm doubting there will be much gain from using just ids. In fact, approaching this from the reverse, wouldn't that make classes, especially multiple classes, faster - because you cut out the need to create and search the id hash as well?
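To illustrate the pattern I mean (made-up selectors):

/* divitis: id plus class plus multiple classes at every level */
div#header div#nav ul.menu li.item a.link { color: blue; }
/* the flat alternative - one class doing the same job */
.menu-link { color: blue; }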
So do your tests show whether there is a score differential between (rough sketches of each follow the list):
# just ids versus just classes,
# just classes versus multiple classes,
# specific descendant selectors versus just ids versus just classes?
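By way of illustration, the flavours I have in mind (made-up selectors, rule bodies just placeholders):

/* just ids */
#nav { color: blue; }
/* just classes */
.nav { color: blue; }
/* multiple classes on one element */
.nav.top.wide { color: blue; }
/* a specific descendant selector */
#header ul.nav li a { color: blue; }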
Second, is it possible to get 100 with flat, semantically coded html that uses descendants - or is some form of specificity required?
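The kind of thing I mean - bare semantic elements styled by descent alone, no ids or classes (a made-up fragment):

/* flat and semantic: the stylesheet leans entirely on element names */
ul li { display: inline; }
ul li a { text-decoration: none; }
h1 + p { margin-top: 0; }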
Finally, out of interest, have you run your test pages through Web Page Analyzer [websiteoptimization.com] to get a "feel" for how they perform according to more generally understood definitions of "performance" and "efficiency"?
|if you were just to optimise a module image to the same location it comes from it would overwrite the next time that module updated - so I think I'm about to learn another lesson|
And this would be ... responsible citizenship means official repositories should only publish modules that meet minimum standards of optimisation? :)