CSS Selectors and Page Speed
howto & busting the unused selectors problem
SuzyUK

Msg#: 4274514 posted 11:41 am on Mar 1, 2011 (gmt 0)

I've ranted on long enough about the "warnings" that Google's PageSpeed puts out about "inefficient selectors" and "remove unused CSS", so I thought I'd try to discover what actually makes selectors more "efficient", and whether looking for those "needle in a haystack" unused ones would be worth it.

As part of my quest I watched a very long (and, sorry, very boring/dry) video talk by David Baron: Faster HTML and CSS: Layout Engine Internals for Web Developers [youtube.com]. I did take from it a bit about Mozilla's theory of how they parse CSS in relation to everything else that has to happen to output a page, and why they have to do it like this because of subsequent or immediate changes any scripts may make to the page rendering.

I also came across an article by Snook [snook.ca], which starts off on a subject you wouldn't think is related, but it is.. as well as explaining the "lookback" dilemma nicely, the article has a further enlightening bit from Snook in the comments, re: simplifying selectors:

Complex applications like Yahoo! Mail benefit from simplified selectors and improved rendering times because of the amount of work that is being done on the client. If you have a blog then it really isn't an issue because the page is rendered once, with few rules needing to get evaluated. Changing everything to ID or class selectors won't be worth the maintenance headache.


This is what I'd always thought. I knew it was Yahoo's (Grids) and Google's (Blueprint) policy to want us all to use class names for everything.. it turns out that it's because they have to, as it does affect their rendering: a Yahoo page especially is in a constant state of change, so the quicker they can make a selector unique (reading it from the right side) the better.. for most normal websites it isn't worth it. The time to look back through a rarely changing page would be minimal to none. IMO a possible exception is those dropdown menus.. which I might get to refining later

So I made a page, a very simple test page (no images!), made sure it "passed" the test on all points - compression and minimised HTML especially.. IOW I wanted to get to a point where the CSS and only the CSS was affecting the "score"

the results slightly surprised me, but not much

with 5 very inefficient selectors I still got a score of 100, with a green tick next to the "inefficient selectors" although the arrow could still be clicked to see those 5

repeating those same 5 to have 10, the score did drop - to 99 - with the yellow triangle

repeating the 10 to make 20, no change; in fact repeating on up to 80 there was still no change. The rules were only repeated, which may have had an effect, but I think it's more likely that this "warning" is only worth a minor score drop. The fact that the selectors were repeated did not trigger "unused selector" warnings - I wanted it that way, as I didn't want one warning contaminating another.

However, on to them: Unused Selectors. I deliberately input some. With the page score at 100 - with 1 very inefficient rule - I added 10 unused EFFICIENT ones (I copied the 1, made it efficient, and changed the class name on all 10 rules to 10 names not in use), then 20, then 30.. no change to the score, it remained at 100. When I made these unused selectors inefficient again, the score dropped to 99 - and that was because of the inefficient selector warning, not the unused warning!

my conclusion (would welcome views!) is that it doesn't matter one single bit about the unused selectors, though in theory it should as they are unnecessary lookbacks.. what was more important was the efficiency of them. IF unused selectors are efficient the "lookback" load must be so tiny it's not worth the effort, nor necessary to "penalise".

Further conclusion, if you want the nice green tick, and to actually affect page performance make all selectors, used or not, "VERY efficient"

How do you do that?

Well, the good news is that it doesn't matter about "qualifying" class names or IDs (though qualifying a unique ID shouldn't be necessary if that ID is indeed unique!) - I thought PageSpeed must have got smart enough to figure that out; it was one of my biggest gripes, as reuse of class names is a big part of CSS. Indeed, by the end of my tests I finally realised why that particular warning was there at all!

"qualifying a tag" means that you add the element name it belongs to your selector e.g.
ul.one li p a - you have qualified the ".one" class by adding the "ul" element to it. This always irked me that it, "qualifying", appeared in warning messages, as it's possible to use a class name on multiple elements (tags) and you may very well need to qualify the class in order to get specific. Theoretically modules or plugins can introduce identical class names too so it's not just a simple case of ensuring your own CSS only attaches a class name to a particular element, e.g. class="list" can only be used on <li> elements, which is most people understanding of the warnings given.
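To make that concrete, a minimal sketch (the class names and colour values are just made up for illustration):

.one li p a { color: blue; }    /* unqualified: ".one" matches the class on any element */
ul.one li p a { color: blue; }  /* qualified: ".one" only counts when it sits on a <ul> */
div.one p a { color: green; }   /* same class name reused on a <div>, styled differently */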

So to the good news.. if you make
ul.one li p a efficient, it becomes (for my code - do not take this as gospel without reading further!)
ul.one>li>p>a

by introducing the child selector (>) you make things really easy for the browser to look back through the tree.. it starts at the <a> - selectors are read from right to left - and can immediately start reducing the number of lookbacks, as it no longer has to traverse very far up the tree to see if the next element, <p>, is in the <a>'s ancestry.. it only has to look at its immediate parent, and so it goes on.. whereas with the default descendant selector
li p a {} it might have to look back through the whole document tree, all the way back to the root body tag at times, to see if there's any chance there's a <p> in the <a>'s ancestry, before it can then start the lookback all over again to see if all the <p>'s matched have an <li> in their ancestry.. in other words, multiple lookbacks may have to occur
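For reference, a sketch of the kind of markup those selectors are walking (purely illustrative):

<ul class="one">
  <li>
    <p><a href="#">link text</a></p>
  </li>
</ul>

With ul.one>li>p>a the browser only ever checks each element's immediate parent; with li p a it may have to keep climbing ancestors until it runs out of tree.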

but with my code, it's not flexible enough.. what happens if I want to target all <a>'s inside an li, no matter if they're inside a <p> or not.. should I write the selector

  1. ul>li a {}
  2. ul>li>a {}



2. will not work to target the <a>'s inside any other element that happens to also be in the <li>.. e.g. <p> <div>

1. will work! no warnings

#1 was the surprise. I really expected it to throw the warning again, but apparently it's smart enough not to warn about the descendant selector if it "knows" that there could be other possibilities.. it fits with something David Baron said in that video too.. make a selector efficient/specific as soon as possible, starting at the right side.. you see, in my code those <a>'s could have been inside a <p> element which was inside the <li>.. or they could have been inside a <strong> element inside the <li> (they were, actually!) - in other words it could not be made any more efficient without me having to add more rules, e.g.
ul>li>p>a, ul>li>strong>a {} - btw I removed the <p> and <strong> elements, which would have meant that rule #2 would possibly have been my best choice for efficiency, then rechecked, just in case PageSpeed had got really clever at matching subtly unused to efficiency.. but no.. it still didn't flag ul>li a {} as inefficient. Cool - I would have hoped not, as that would really make "efficient" maintenance an impossibility ;)

Lightbulb moment, on qualifying..

..then I added a class somewhere into the mix, on the ul:
ul.one>li a - not an unreasonable request, given that one way around most of the warnings is to add classes to everything - I got a warning again, though this time it was not "Very Inefficient (good to fix on all pages):" like the previous ones had been.. just "Inefficient rules (good to fix on interactive pages):" - and there's the clue.. interactive pages; most pages are these days, what with JS libraries being built into most CMSes and other fancy plugins abounding

so how to "fix" for all?

Choices: adding selectors for every contingency,
ul.one>li>p>a, ul.one>li>strong>a {} is one way; changing the class to an ID, making it unique and removing the need to qualify the tag, does it too: #one>li a {}; or simply ensuring that you never have clashing class names (fairly impossible), which removes the need to qualify as well..

Most modules or plugins added to the page will come with their own "namespaced" CSS, i.e. they will already be wrapped in a uniquely identified div; this is so the JS/required interactivity can be targeted.. the module CSS that accompanies it may not be as efficient as it could be, however ;)
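A rough sketch of what that namespacing usually looks like, and where the refactoring opportunity is (the #mywidget wrapper and rules are hypothetical, not from any particular module):

#mywidget ul li a { color: #036; }       /* as shipped: descendant selectors inside the namespace */
#mywidget > ul > li > a { color: #036; } /* refactored with child selectors, where the markup allows it */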

In the grand scale of things this was just an eye-opening information exercise. It's probably not worth it for smaller sites, as Snook points out; however, if you want to squeeze every last drop of performance out of your page, or just want that green tick... refactor your CSS, or at the very least your module/plugin/addon CSS, by adding to your selectors :o.. and if you're worried about unused selectors, DON'T - just make them efficient, then it won't matter if they're there or not (it should no longer affect the score or speed). However, the act of refactoring them to be more efficient should help you decide if they're required or not ;)

Any thoughts or other discoveries or any views on when/if it's worth it to refactor?

Suzy

 

SuzyUK

Msg#: 4274514 posted 12:56 pm on Mar 1, 2011 (gmt 0)

just a note if you saw this in mid-edit - I made a mistake in my original post so the choices at the end have changed, however I'm not allowed to edit any more, so to clarify one point..

when I say "fairly impossible" I mean specifically with interactive pages it's not always possible to ensure a classname is not reused, and it's likely dangerous just to blindly assume that because your classes are only used once (like an ID) on your page that removing the qualifying element will be enough.. it might break a page later on down the line. Some widget you install at a later date might use the same class names.. usually this is not the case with ID's as widgets make sure their ID's are "namespaced" i.e. very particular to their code to ensure there's not a clash .. hence my advice to use ID's if unsure. Your ID's as well as still being unique(hopefully ;))- will likely not be the same, as specific as, plugged in code is likely to be, their best practices mean their id's are usually prefixed with "mywidget" e.g. #mywidget-left

alt131

Msg#: 4274514 posted 11:54 am on Mar 2, 2011 (gmt 0)

Baron was a great find, and I thought he was brilliant. There should be a rule that obliges browser engineers to present more often ;)

Anyway, refactoring:
I've never been impressed by pagespeed, so I wouldn't refactor on its results alone.

and other shocking ideas ....

This one isn't mine, I copied from Using efficient selectors [code.google.com]
... the fewer rules the engine has to evaluate the better. So, of course, removing unused CSS is an important step ... After that, for pages that contain large numbers of elements and/or large numbers of CSS rules, optimizing the definitions of the rules themselves ... The key to optimizing rules lies in defining rules that are as specific as possible and that avoid unnecessary redundancy, to allow the style engine to quickly find matches without spending time evaluating rules that don't apply.
Which makes clear that pagespeed's approach to optimisation is premised on large numbers of elements and css rules, salted with lots of unused css. Firmly in the yui grids family.

However, as you say, pagespeed doesn't care too much about unused selectors. Of course. It's designed to evaluate templates targeted at people who don't want to get their hands too dirty with code. That's laudable, but having a tool that then told them they had to get their hands dirty searching for unused selectors would undermine the whole marketing premise that content can be dropped into the templates without any need to understand much code. Self-defeating.

Aside from that, I once tried to unpack a yui grid. The things need a hazardous substance label and if I'd done such a silly thing on my first day coding I'd still be undergoing trauma counselling.

Second, of course pagespeed advocates maximum specificity. I'm not disputing that id/classes are more efficient than descendant selectors. What I am disputing is whether that is the real efficiency question. To me, the real issue is why the html has so many nested elements that css specificity would improve painting to the point it would be noticeable by a human. Computers process information in a micro-flash. I cannot imagine the amount of over-engineering required to slow code processing to the point that it could be delayed enough to be noticed by a human. OK, I looked inside grids, so I actually can imagine code like that. Politely put, it clashes with the opening gambit that "the fewer rules the engine has to evaluate the better".

But that isn't inefficient selectors, it's inefficient html.

Note that neither pagespeed nor y-slow measures total time from first server call to displaying content. To me that highlights that the definition of efficiency is neither user-centric nor delivery-focussed. And that makes any subsequent measurement as helpful as body#mybody {display:none}, which is valid, efficient - and useless from a user perspective.

I'm not against writing code that uses the most efficient selectors. I just doubt claims that making selectors more specific will make code more efficient when the real inefficiency problem is html and the bloated css used to style it, all bolstered by measuring efficiency in terms that don't relate to efficient display to the user. That, I thought, was the principle underpinning Baron's presentation - although not the one he spoke to.

Baron and snook both emphasise the excellent point that css matches right to left, so the shorter the selector the faster the match. On that, it might seem, pagespeed and y-slow base their claim that specific selectors will make code more efficient. I interpreted it as a good reason to avoid spaghetti code in the first place.

Personally I only use pagespeed because it's conveniently located in firebug and might highlight something really extreme. Neither pagespeed nor y-slow gives me the information I need, and your findings tend to reinforce how superficial they are. For things I would refactor to improve delivery to users, I prefer the information provided by the likes of websiteoptimization.com

SuzyUK

Msg#: 4274514 posted 1:35 am on Mar 4, 2011 (gmt 0)

yea, it's a merry-go-round for sure, but as this has been a pet peeve of mine for a while I am going to take the time to optimise the site in my profile as best I know how, using CDNs, sprites, data URIs, htaccess tricks, compression, minification (and eventually caching!) - tricks like you find on that site you mention, or here on WebmasterWorld actually (thanks to my twitter friend).. anything really

then when all that is done (or a balance of it), and because I don't have to support IE6 any more, I am going to try the CSS optimisation and make all my selectors (even more) EFFICIENT haha ;)

I'm not against writing code that uses the most efficient selectors. I just doubt claims that making selectors more specific will make code more efficient when the real inefficiency problem is html and the bloated css used to style it,


I have always doubted it, but with everything web-like a corner will eventually turn, so I'm thinking now that the above test may prove it one way or the other, and if it doesn't then I'll have some more data or experience. BUT of course I've already tweaked/re-coded half the template to not be quite so bloated HTML; the other half is not completely outwith my php capability, I'm just trying to retain a balance of "if it ain't broke".. this is not the fault of the software we have available today either way; it's the fact that it works for the majority, and that most big sites still need to (and will continue to need to) support IE6.

You are right though, unless you've already optimised everything else, HTML, Images, reduced http requests, parallelized downloads etc.. then "fine tuning" the CSS likely won't register.. but if you are really tight or are chasing that green tick, like some seem focused on, then I figured it might be an interesting exercise, if nothing else

SevenCubed

Msg#: 4274514 posted 2:28 am on Mar 4, 2011 (gmt 0)

Thanks for sharing all your observations Suzy. I missed this thread when it was originally posted 2 days ago. I just caught it now because it floated to the top of the active thread recent posts.

It's been a long day and I'm too tired to concentrate or focus on it so I'm going to need to read it through a few times for full appreciation.

Just 2 real quick comments though. First one is that PageSpeed is still in Google Labs beta, so no doubt there will be changes to it. Secondly, I'm still avoiding CSS sprites because at one time they caused grief on iPhones and made them freeze. I don't know if they still do or not, but that was based on tests done by... can't remember for sure, but I think it was AOL engineers.

alt131

Msg#: 4274514 posted 2:23 pm on Mar 4, 2011 (gmt 0)

I don't have to support IE6 any more
;) ... and some sensitivity for the less fortunate would be appreciated ...sniff ... ;)

You are right though, unless you've already optimised everything else, HTML, Images, reduced http requests, parallelized downloads etc.. then "fine tuning" the CSS likely won't register.. but if you are really tight or are chasing that green tick, like some seem focused on, then I figured it might be an interesting exercise, if nothing else
Of course an interesting exercise - your findings were fascinating, as usual. I'm especially impressed by your discovery that specificity is weighted more highly than redundancy is penalised.

However, my point was a bit more direct than "unless". Being tight is fine, my concern is "chasing the green tick". From the blurb:
The Page Speed extension is the fastest way to get accurate performance analysis of your web pages.
I don't think pagespeed does that at all, and my concern is that unless that is fully understood, we risk developing a belief system that really confuses efficiency with pages getting a green tick, even though the tick is meaningless outside the scope of the purpose-specific definition of efficiency, and in some cases the techniques used to acquire it are really quite harmful.

Also, just to clarify my reference was to the online analysis tool provided for free rather than the "tips and tricks" (some of which are showing their age now), because the page analysis tool does try to analyse a page for efficiency from a delivery and user perspective.

Questions/thoughts
I told myself I would not test ... but I did. So I coded a basic html document, without style, therefore relying on defaults.
It scored 99, so I introduced the rule * {color:red} and it scored a perfect 100.
So to me that reinforces that pagespeed measures selectors, not "performance", and that as it is selector-focussed it requires the presence of a selector - and on that logic an inefficient selector earns a better score than none. Or is there another way of interpreting this?

I then tested basically unmodded themes. Most scored a respectable 88/100. That's despite between 70 and 100 images, more than 10 scripts, and in some cases over 30 css files (nothing compressed, combined or minified). On a high-speed connection they take more than half a minute to deliver content, on 128k almost 2 minutes; dial-up users, please book over-night accommodation.

A bespoke site with the same visuals (a few less icons to download, but not critical to the overall point) scored 89/100. Wahoo! It wins!
And delivers in 1.09 seconds. <--- note seconds, not minutes

The bespoke site had some files uncompressed, short expires-by and the like, so much is improvable, but I am intrigued it scored only one point more than sites with no compression, all those server calls, and bandwidth hogging measured in minutes. Being a fan of black comedy, that did produce the quote of the day:
With Page Speed, you can make your site faster, keep Internet users engaged with your site, reduce your bandwidth and hosting costs and improve the web!
But to the point, I've always suspected the "scores" were weighted. Your finding regarding the redundancy/specificity relationship seems to reinforce that. Did your testing produce anything interesting in terms of weightings between different factors - like compression over server hits, etc?

Finally, I'm sure common scores are 83 or 88, as if there is a benchmark/weighting scheme that makes those reasonably/generally achievable. Any thoughts?

SuzyUK

Msg#: 4274514 posted 10:36 pm on Mar 5, 2011 (gmt 0)

I'm sure common scores are 83 or 88, as as if there is a benchmark/weighting scheme that makes that reasonably/generally achievable. Any thoughts?


lots..!

weighted - definitely yes - some sites might take care of one part some of another, making it balance itself out?

see below, and I've only just started..

I started it today, I have 2 x benchmarks, with everything else optimised as best I can without fickering [scottish word for messing] with "core" - I can then have Drupal optimise (as in combine.. not minify) the CSS or not

so without the Drupal optimisation:
YSlow - 85
Pagespeed - 72
Load speed - 1.217
http requests: 34
---
PS score for unused CSS - (72.8) C
PS score for Efficient CSS - (0) F



with the Drupal "optimisation" - which is no more than combining the multiple sheets into one:
YSlow - 91
Pagespeed - 84
Load Speed - 1.06
http requests - 14
---
PS score for unused CSS - (59.2) E
PS score for Efficient CSS - (0) F


So with no changes to the code whatsoever.. the overall scores increased and page load time decreased as expected (due to the optimisation and http requests being reduced) - but the PS 'unused' score has actually decreased while the overall score has increased? - I checked it 10 times! [how can the unused code differ if I haven't changed it? hmm]

I'm using GTMetrix, which gives me history and a combined summary

Bearing in mind the findings earlier in the thread about unused v efficient, I'm not worried about the "unused"; in fact these first-pass results almost underline the first findings ;)

so now, I'm going to attempt to combine the sheets by myself by pulling the modules sheets into my theme and make the selectors as efficient as possible at the same time

I'm still getting lots of warnings/penalisation about compressing images - and yes, that part of the score does appear to be more heavily weighted (am ignoring CDNs for now, but am using a fake CDN to split resources.. though that is included in both scores). The images still giving the "warnings" are mostly all involved in the module CSS files, so I'll be dealing with them as I think best - smush, sprite or data URI - as I reach them in my CSS optimisation. I'll just move them first, then I'll run another test after optimising them, once they can't be overwritten by an upgrade.

info: you can't integrate a module CSS file without changing the image paths anyway, and if you were just to optimise a module image in the same location it comes from, it would be overwritten the next time that module updated - so I think I'm about to learn another lesson ;)

at the end, regardless of the results, I will be able to compress the single css file as well, which is more than my Drupal setup can do at the minute. I know it probably could minify too if I looked for YAM (yet another module), however I want to leave this as "vanilla" as possible while I prove or disprove the "efficient" CSS myth! (btw this is now outwith tests, it is happening on a live site. I'm trying to dis/prove it a little more than theoretically, just to see if it's worth tweaking those themes - WP, Drupal or Joomla)

alt131

Msg#: 4274514 posted 11:56 pm on Mar 7, 2011 (gmt 0)

fickering [scottish word for messing]
he he. Is this the modern form of fichering? Recalling our Scots stopped evolving ... or maybe treated as the Germanic root if that is still in use.
... But hey, you got my undivided attention :)

weighted - definitely yes - some sites might take care of one part some of another, making it balance itself out?
But if that's the case, the whole scoring system is as sensible as putting a sea anchor on a Formula One race car.

I think weightings are of two (maybe more) relevant types. The first is in the allocation of "points" - for eg, having style rules automatically scores a certain number of points. The second is the comparative weightings - so is compression worth more than efficient css, or the reverse, etc?

It's the failure to disclose that irritates, and yslow reinforces that by recording different "performance scores" based on the selected ruleset. The site's performance didn't change, just the points allocated for different features. If I cared, I'd be offended by the assumption coders are too dumb to make the distinction.

Even worse example from the build notes:
"Removed "Avoid CSS Expressions" rule since CSS expressions are no longer supported in any modern browsers"
What! So that (not so) old code may get 100% because css expressions aren't measured. That would be fine if the reason was (as you've identified) that unused selectors aren't penalised as much as non-specific selectors. But it's not: it's because in someone's universe the world updated to ie8 sometime in May 2010 - and all coders immediately updated their code? Double irony - I was surfing using ie7.

[how can the unused code differ if I haven't changed it? hmm]
I know that's rhetorical - but possibilities:
# the desire to reward combine/minify means the internal logic realigns assumptions and weightings - without following through the impact that would have on some of the details
# The programme can't count accurately - which may explain other oddities in results
# You discovered a bug. I found a similar one reported in January, but not actioned - but the report was a little confusing to read - maybe worth reporting again?

Thanks for the pointer to GTMetrix - very handy.

and make the selectors as efficient as possible at the same time
Oh what fun:) Looking forward to the results - but is that "pagespeed efficient" - which seems to mean specific?

Random thoughts on testing
Baron said Moz dumps all ids into a hash. That suggests using ids or classes, and probably ids, on the basis the browser just applies the defaults, then checks ids, then paints - as opposed to having to check for classes as well. Plus, as ids can only be used once, the id could be purged after it has been used. So, assuming no unused selectors, parsing speed should increase as the page is loaded. Trouble is, browsers will apply the same id more than once, so the ids aren't purged, so no speed gain there.

Second, a characteristic of heavily id-ed divitis seems to be id plus class plus multiple classes. So I'm doubting there will be gain by using just id's. In fact approaching this from the reverse, wouldn't that make classes, especially multiple classes faster - because you cut out the need to create and search the id hash as well?

So do your tests show if there is a score differential between
# just id's versus just classes,
# just classes versus multiple classes
# specific descendent selectors versus just id's versus just classes

Next, is it possible to get 100 with flat, semantically coded html that uses descendants - or is some form of specificity required?

Finally, out of interest, have you run your test pages through web page analyser [websiteoptimization.com] to get a "feel" for how they perform according to more generally understood definitions of "performance" and "efficiency"?

if you were just to optimise a module image to the same location it comes from it would overwrite the next time that module updated - so I think I'm about to learn another lesson
And this would be ... responsible citizenship means official repositories should only publish modules that meet minimum standards of optimisation? :)
SuzyUK

Msg#: 4274514 posted 11:06 pm on Mar 9, 2011 (gmt 0)

heh alt,

out of interest, have you run your test pages through web page analyser [websiteoptimization.com] to get a "feel" for how they perform according to more generally understood definitions of "performance" and "efficiency"?


Yes, I've run them through every optimisation known to man I think :) I settled on the GTMetrix values as I can download a history and see what changes what. I'm not really bothered about it as an exact science, if you get my drift.. I know it's common sense really - I'm just more concerned about people who descend on the CSS forum thinking the unused/inefficient CSS warning is an easy "out" that they can cope with.. and also, as an aside, I'm learning as always what all this means with an interactive site.

.. for instance, just now I was tackling a Forum page - I've no longer got "inefficient" warnings btw, at least not ones that I'm willing to tackle yet - but on this page, sitting at a PS score of 83 (not finished yet! haven't minified or compressed), I had just completed the "optimisation" of images as suggested and was awaiting a big score change LOL - it did change.. it went down!

that I wasn't expecting, as I'd had an awful time getting my uncompressed page to come out of the 70's; that turned out to be solely down to one unoptimised image, a single button in the WYSIWYG editor for replies/posts.. after that I considered the rest a formality, having done most of the leg work.. but this latest one just made me spit coffee! It's gone down because now, instead of asking me to optimise, it's asking me to sprite, and giving me a lower "score".. btw I have no intention of spriting a set of Forum icons on a forum that I need to update anyway.. so I'll data URI them eventually

nearly there though,

And this would be .


That no matter if you can (it's awfully easy to pop open that editor and modify/fix a provided CSS or image).. do not mess with core; find a way to isolate the file or folder, then tweak to your heart's content - if all else fails you can just restore the provided files.. also it tells me that we do not live in a perfect world and that provided modules and plugins are not perfect. hehe, if they were there would be no issue queues or help forums I suppose, so I'm not complaining, except about the people who try to benchmark this stuff

So do your tests show if there is a score differential between
# just id's versus just classes,
# just classes versus multiple classes
# specific descendent selectors versus just id's versus just classes


so far..

no differential between ID's and classes, though the more descendant classes you have, the bigger a "parent" chain you need to make it VERY efficient (unique)..

a selector with only one descendant will be deemed "inefficient" no matter whether its parent is an ID or a class.. which surprises me, an ID with one descendant should be absolutely fine.. even two or three for goodness sake

think tables.. to make a table selector efficient, you can't say
table#mytable td {} - because not only do you get the "over-qualified by an element" warning, but a td sits at the end of one of two possible descendant trees for a table, and even if you remove the qualifier (reducing specificity) you get the same warning.. the tree from that could be #mytable>thead>tr>td or it could be #mytable>tbody>tr>td

now put two classes on the <tr>, say
tr class="even bold" - and no, I don't advocate naming a class after an effect, but I can only describe so much in type!

now what does that do to the number of selectors required to make it "very efficient"? yes, rhetorical - but a rough sketch follows
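Something like this, with the declarations purely illustrative:

/* the single descendant rule that triggers the warning */
#mytable td { padding: 4px; }

/* the "very efficient" equivalent, spelled out per possible tree */
#mytable > thead > tr > td,
#mytable > tbody > tr > td { padding: 4px; }

/* and once the row carries classes, each variation needs its own chain */
#mytable > tbody > tr.even > td { background: #eee; }
#mytable > tbody > tr.even.bold > td { font-weight: bold; }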

multiple classes are good if, as a CSS'er, you use them in Cascading order. So far I haven't found the need to "chain" any of my classes.. but I would if I had to. I find that re-factoring the stylesheet into a kind of "nested" structure means the Cascade can use the multiple classes as intended.. obviously - but if you then take the time to "group" your selectors - and by that I don't mean comma-separate, I mean with a comment or something (I'm using hidden comments) to spell out the HTML chain - so that these selectors sit beside each other in the stylesheet (gasp! rather than grouped selectors), it's much easier to see..
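A rough sketch of that grouping-by-comment idea (selectors invented; the comments are the kind that get stripped at minification anyway):

/* == #content > ul.menu ======================== */
ul.menu > li > a        { color: #036; }
ul.menu > li.active > a { color: #000; }

/* == #content > table#mytable ================== */
#mytable > tbody > tr > td { padding: 4px; }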

as to your last point/question: no matter how specific you make your HTML, with as many classes/descendants as possible, if you don't write them as child selectors you will not see the benefit - and if you do write them all out as required, you will lose the benefit of CSS; you will be writing a style per selector.. that is in fact what I'm doing to "beat" that rule.. and if that does indeed turn out to be the case (I will be able to tell fairly easily!) I will give up, as CSS will have been spoiled.. may as well call it SS

and the biggest, and I mean biggest, downside to all of this is that you will need to repeat yourself in the stylesheet. Think now about nested lists, as well as tables, that might need the same CSS applied at various levels.. easy peasy in descendant selector language, but as descendants are apparently the biggest CSS penalty giver.. all I can say is I'm lost until the figures show themselves at the end of this little experiment

for now, one of the top CSS "wants" is that we should be able to nest selectors to save repeating ourselves and maintain readability.. try it, get something like LESS or SASS, both of which work as intended, but is that the way CSS is supposed to work?.. darned browsers, still got their own agenda! btw my site is super fast on IE8 - after all this time they start to get it right, and apparently are not entering the "browser wars" as far as lowly CSS is concerned; they actually deal with it better. FF is the worst, Opera can't manage it all, though it does what it can quite fast.. and Safari is just hovering between FF and IE..
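For anyone who hasn't tried it, a minimal sketch of that nesting in SCSS syntax (selectors invented) - the preprocessor does the repetition for you and emits the child-selector chains discussed above:

ul.one {
  > li {
    > a     { color: #036; }
    > p > a { color: #036; }
  }
}

/* compiles to: */
ul.one > li > a { color: #036; }
ul.one > li > p > a { color: #036; }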

and lastly it's quite weird to peer inside a provided module and see some of your own code staring back at you :o

less emotion and more figures will follow!


alt131

Msg#: 4274514 posted 9:33 am on Mar 10, 2011 (gmt 0)

Yes I've run them through every optimsation known to man
I was looking for an independent measurement. To illustrate using an extreme example, if the code failed to get a single "congratulation", but pagespeed scored 100, then I have a feel for the code, plus a "gross error check" on the pagespeed scoring system.
a selector with only one descendant will be deemed "inefficient" no matter whether its parent is an ID or a class.. which surprises me, an ID with one descendant should be absolutely fine.. even two or three for goodness sake
That's the effect that needs to be removed from the results. Recalling this is not about how selectors should work, organising css, or best code practices: purely limited to identifying the easiest/fastest way to obtain the "pagespeed efficiency" tick.

I'm smiling because our minds often arrive at similar points, but via opposite off-ramps. My tests are about deduction, not induction, analysis not synthesis, data not discussion - whatever description makes sense. And overall - yes - that requires measuring what I think of as "the dark-side" ;) Anyway, let me put the question differently;

Backstory: We suspect pagespeed has an internal weighting scheme, but don't know what it is. Logic says understanding the internal scheme will help score 100. You have already identified some things deemed "pagespeed efficient", but we don't know if one is weighted more/less efficient than others.
no differential between ID's and classes, though the more descendant classes you have, the bigger a "parent" chain you need to make it VERY efficient (unique)..
... a selector with only one descendant will be deemed "inefficient" no matter whether its parent is an ID or a class...
... think tables.. to make a table selector efficient, you can't say table#mytable td {} - because not only do you get the "over qualified by an element"
That's the exact testing environment to avoid. Without knowledge of the internal weighting scheme combining selector types contaminates the results – the selectors need to be isolated. To illustrate, that td becomes:
# just id's versus just classes,
give it an id= #mytd, test, record score#1
Replace the id's with classes, record score#2 and compare to score#1.
# just classes versus multiple classes
Just classes is score#2.
Modify the css so the same style outcomes require multiple classes - while keeping the css file size and numbers of rules the same. Something like class="my td bold".
Record score#3 and compare to score#2.
# specific descendent selectors versus just id's versus just classes
Not well put - I was referring to using the child selector to make descendants more "pagespeed efficient". So remove all classes/ids and write the css using "pagespeed efficient descendants" - which seems to mean the child selector. I guess that would be tr>td, although that's dumb, even for the dark side - I was thinking something like div>p when designing the parameters.
That will produce score#4. Compare with score#1 and score#2 - and, as it's already generated, score#3. (The three isolated variants are sketched below.)
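For clarity, the three isolated variants look roughly like this (markup and names invented for the sketch; each file keeps identical declarations so only the selector type changes):

/* variant 1: ids only */
#mytd { padding: 4px; }

/* variant 2: classes only */
.mytd { padding: 4px; }

/* variant 3: "pagespeed efficient" descendants, i.e. the child selector, no ids or classes */
div > p { padding: 4px; }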

That will disclose much internal logic - particularly how much efficiency and specificity are treated as the same. If so, stop wasting time on html5 and css3 selectors because the way to avoid an seo penalty is to use id'ed divs. I'm hoping the results will allow a more intelligent option.

That said, if you don't have time to run the tests but are prepared to make your test pages available to me, I'll do it myself. I'm suggesting using the same code because this is an exercise in converting real-world pages to 100 - so using the same pages means anything learned can be fed back into the "improvement" workflow and continue to provide useful information about what it takes to get 100.

it's gone down because now instead of asking me to optimise it's asking me to sprite and giving me a lower "score".
Recalling the previously reported counting issue, to me this indicates either:
1. massive fiddling with the weighting scheme to produce pre-determined outputs (ie, first optimise, then sprite, next transfer money to our personal bank account)
2. there are errors in the programmatic logic. Any of false cause, denying the antecedent, illusionary correlation ... I could go on.

you will be writing a style per selector ... CSS will have been spoiled.. may as well call it SS … and … you will need to repeat yourself in the stylesheet.
Yup, a style per selector, a selector per element - pagespeed seemed to be heading in this direction from the outset. Big on computing power, low on learning code. Hence my tests - to see how much the dark-side is already being rewarded. Frankly, I'd expected more tests and more outcry by now, because so much indicates the proclaimed best seo advantage is achieved by discarding many of the basic premises of html and css.

for now, one of the top CSS "wants" is that we should be able to nest selectors to save repeating ourselves and maintain readability.. try it, get something like LESS or SASS, both of which work as intended, but is the way CSS is supposed to work?
Nope ;) and best I can tell pagespeed doesn't want that either. But I think this highlights the mismatch – coders debate nesting selectors while the seo world is influenced by a tool that doesn't want more from html than divs. Small wonder many think css isn’t viable in the real world.

and lastly it's quite weird to peer inside a provided module and see some of your own code staring back at you
Ooooooooo K. Leaving aside the intellectual theft issue, ... you've got your code (which is usually clean and beautiful to read) with a big overhead, liberally salted with someone else's messes? You know I dislike libraries - are you trying to give me more reasons not to budge? ;)
alt131

Msg#: 4274514 posted 10:13 am on Mar 10, 2011 (gmt 0)

I forgot this gem from optimi[sic]ze browser rendering [code.google.com]
Use class selectors instead of descendant selectors.

For example, if you need two different styles for an ordered list item and an unordered list item, instead of using two rules:
ul li {color: blue;}
ol li {color: red;}

You could encode the styles into two class names and use those in your rules; e.g:
.unordered-list-item {color: blue;}
.ordered-list-item {color: red;}

... and we're assuming an internal logic ;)

SuzyUK

Msg#: 4274514 posted 11:15 pm on Mar 10, 2011 (gmt 0)

I can't take it all in.. and posts are getting too long..

but suffice to say I agree we may well arrive at the same point via a different route

I guess that would be tr>td, although that's dumb, even for the dark side


I don't think that's dumb at all, that's using the flow - the natural document flow, that is. I mean, yes, on the surface it may seem a bit OTT, but.. I think TABLES are exactly the element to demonstrate this on: if you have a TD that contains some "warning" text, you give it, the TD, the class "warning", either manually or via JS; but you also have a div that contains some warning text with the same class, as you intend for them to look virtually the same

but the div has double borders and is padded a bit extra, so you can't apply the exact same CSS to a div as to a table cell, so you can't just use
.warning {} - if you do, it will ultimately affect the entire table layout, which you don't want.. so how do you differentiate between div.warning {} and td.warning {} without the "inefficient" qualifier?

tr > .warning {}
- of course

or even
thead > tr > .warning {} over tbody > tr > .warning {} to differentiate the main td's from the header ones.. that is, if the header ones aren't th's - they aren't always.. that's why I say that tables are exactly the logic to illustrate this point with (darn tables are always going to haunt us ;))
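A minimal sketch of that scenario (border and padding values are just illustrative):

.warning              { color: #c00; background: #fee; }          /* the shared look */
div.warning           { border: 3px double #c00; padding: 1em; }  /* the "qualified" rule PageSpeed dislikes */
tbody > tr > .warning { padding: 2px; }                           /* cell-specific tweaks, no qualifier needed */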

now the same goes for


For example, if you need two different styles for an ordered list item and an unordered list item, instead of using two rules:
ul li {color: blue;}
ol li {color: red;}

You could encode the styles into two class names and use those in your rules; e.g:
.unordered-list-item {color: blue;}
.ordered-list-item {color: red;}


hmm yea, so that really stopped me using two rules?!?! - not! An li is either a child of an ol or a ul, so at its most efficient, rather than adding class names as suggested by the PS optimisation rules, what's wrong with

ol > li {color: red;}
ul > li {color: blue;}


hmphh I'm with you "what internal logic?"

but then again you won't always get logic from me alt.. I do that in every other walk of life, accounting, databases, spreadsheets, rebooting the digital TV ("it's not a computer mum".. "hmm, yes it is"), you name it I do it.. however CSS is the one "out" that combines, or accepts, both left and right brain logic [is that even possible?] that's why I like (and understand?) it so much ;)

SuzyUK

Msg#: 4274514 posted 11:26 pm on Mar 10, 2011 (gmt 0)

added because I said I couldn't keep up..

I won't need to give you, or anyone, a reason to budge (don't do it! ;)), you will just need to believe what you see as opposed to what you read.. it doesn't matter about SEO or if it does you just need to think outside the "logical algo" - which you are already doing - you are the few, the others are the many .. the many is what any SE algo is based on, but as always do feel free to test, and be questioned when you do - it's 'only' CSS after all..

swa66

Msg#: 4274514 posted 8:43 am on Mar 11, 2011 (gmt 0)

The more I read about it, the more I'm convinced that for a somewhat static page all this stuff about "inefficient" selectors is futile.

Somebody show me a real page that's not dynamically changing all the time (e.g. google's maps.google.com) where they can measure a difference in rendering speed based on a "ul li {...}" vs. a "ul>li {...}".
I claim it's unmeasurable.

Where does the difference come in, IMHO: when you continue to add and remove stuff in the DOM - of course then the CSS needs to be re-applied all the time as well.

Aside from that: there are far more horrible things wrong with the CSS people throw out there (thanks to MSFT's IE in many cases; thanks to ignorance and/or bad training in most others) than to worry about this.

And as far as SEO goes: the rules are dynamic; they will change, and even those behind this tool they made available will eventually see they're plainly wrong about it and change their metrics. Or, more evil: they'll detect who's been going for what the tool says instead of going for what the visitors need :evil-grin:

SuzyUK

Msg#: 4274514 posted 9:50 pm on Mar 11, 2011 (gmt 0)

Somebody show me a real page that's not dynamically changing all the time (e.g. google's maps.google.com) where they can measure a difference in rendering speed based on a "ul li {...}" vs. a "ul>li {...}".
I claim it's unmeasurable.

With that swa, I tend to agree

this experiment is being carried out (not on google maps unfortunately ;)) on a Drupal-powered community site with a Forum and photo galleries (e.g. Lightbox); it was the nearest thing I could get to a dynamic site. Drupal has, behind the scenes, quite a lot of JS-based functionality, which does indeed push the "unused" CSS count up per page, and there is no way it's ever going to be worth trying to serve only a "used CSS" sheet per page, even on a site of this - relatively small - size.

I've noticed that the unused CSS count is a percentage of the "bytes" of the total sheet, not the number of rules - so if the CSS is minified the unused count will be a higher percentage and that score will be lower (worse), but without the minification and compression the unused score looks better while the overall score is obviously a lot worse. Like you say, immeasurable, and I see no point in that particular "rule" being there at all.. in most, if not all, of today's interactive sites there is a lot of JS functionality, even on the admin or user content-updating side; it's simply not worth trawling all the plugins/modules to then try to provide a custom sheet per page - probably impossible for some bigger sites I would think!

Anyway I have some new figures and am rapidly losing the will to live :o - if you're still reading then I applaud you.. 'cos I'm bored!

I have not yet been able to tackle all the "inefficient" selectors; some of the modules I use are called in different parts of pages and have different "trees" to their target "key" element, so until I can track them all down I can't (or won't) even try to change them

some selectors are IE or "no-js" specific, and a class for these has been put on the HTML element, which means that to make them specific a route has to be traced all the way back to the HTML element.. e.g.

html.no-js fieldset {}

apart from the obvious difficulty in doing this, as there could be multiple ancestor paths from any one fieldset back to the HTML element.. the point of making the selectors efficient is to avoid the browser having to do unnecessary lookbacks, and in this case we actually want it to look back anyway.. no matter if we give every ancestor path specifically in the CSS, I just don't see a case for this one at all.. so it stays (but I'll remove the html qualifier, changing it to
.no-js > body fieldset {} to make sure it goes back to the HTML element! ;))

What this means is that although I've managed to change more than half of the inefficient selectors to be efficient, the score for them has not budged - it's still 0 (F).. and as the selectors I have managed to change have likely increased in size, they're contributing to the unused score now too (swings and roundabouts), though like I said I don't think unused is actually penalised. I still can't prove this, but I'll keep working on them as and when I have a minute.

I still very much believe they (unused/inefficient) are not a cause of poor scores - but a symptom.. the whole exercise of re-factoring the CSS to be as efficient as possible has reaped the reward I'd hoped.. check out these scores (with previous ones for comparison!)

without the Drupal optimisation:
BEFORE:
YSlow - 85
Pagespeed - 72

Page Size: 241KB
Load speed: 1.217s
http requests: 34
---
PS score for unused CSS - (72.8) C
PS score for Efficient CSS - (0) F

AFTER:
YSlow - 86
Pagespeed - 86

Page Size: 168KB
Load speed - 1.39s
http requests: 33
---
PS score for unused CSS - (79) C
PS score for Efficient CSS - (0) F


not much difference, except the PS score , what they "liked" here was although the actual CSS sheets are not yet combined into 1, the CSS inside them is now minified

with the Drupal "optimisation" - which is to combine the multiple sheets into one:
BEFORE:
YSlow - 91
Pagespeed - 84

Page Size: 235KB
Load Speed - 1.06s
http requests - 14
---
PS score for unused CSS - (59.2) E
PS score for Efficient CSS - (0) F

AFTER:
YSlow - 93
Pagespeed - 93

Page Size: 164KB
Load Speed - 1.12s
http requests - 14
---
PS score for unused CSS - (56.1) E
PS score for Efficient CSS - (0) F


The load speed figure is very deceiving; it appears to be when the "onload" event fires, not when the content actually downloads onto the page - the content is actually downloading quicker, honestly, almost instantly in some cases! I'm sorry I have no figures to support that, but the waterfall charts show it too..

So there you have it, we can do things via CSS to help. The actual load speed reported in all 4 sets of figures above is for the non-cached Home page, and it varies across browsers.. IE is instant! A primed cache for that final combined page is 1 x HTTP request for a total weight of 26.4K apparently, so no wonder IE is instant

What I didn't do was check the scores along the way, well apart from the one time as mentioned last night LOL.. as I knew it would only be the end result which would show the most savings.. i.e. when everything was combined

So what did I do? I took every individual stylesheet into my theme instead of it being called from the module/plugin/widget. I was then able to refactor it, knowing I could switch back to the module's own, "unbroken" one if needed

to re-factor, I didn't want to change the actual CSS content, i.e. what it was doing - except where I knew a rule to be outdated or wrong - I just wrote the rules more efficiently where I could. I was able to remove some rules that I could see were just plain wrong (targeting an element that no longer exists in the module) or were duplicated; I guess those module writers are too busy to clean up their CSS at times too ;)

Then, because I'd moved the sheets into the theme, I also had to change the paths to all the images used by each CSS. This was a good wake-up call to exactly what images the site was using from modules, so firstly I changed their paths (an easy job using SASS!) and copied the module image folder to my theme too - this let me (and users - remember, live site! no fancy dev area set up prior to this; there is now, another side effect of this experiment!) view the page as it had been.

Then I ran PageSpeed and optimised each image. Optimisation is made very easy using PageSpeed itself; it very kindly offers you an option to "Save Optimised" images, so I saved them all (I checked a few first and the quality is fine for web!) - but I didn't upload them to my newly copied image folder.. instead, using a handy encoding tool [abluestar.com], I base64 encoded them all and changed the CSS to use them in their data URI format. I also have an mhtml file so IE can use them too.. and this does not seem to negatively affect IE at all, in fact quite the opposite!
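For anyone who hasn't seen one, a minimal sketch of an image inlined in the refactored CSS (the selector is invented and the base64 string is truncated - a real one runs to hundreds of characters):

.forum-icon {
  /* image inlined as a data URI instead of a separate HTTP request */
  background-image: url("data:image/png;base64,iVBORw0KGg...truncated...");
  background-repeat: no-repeat;
}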

OK, so what all that did was reduce HTTP requests for background images.. except these images weren't being requested on page load anyway, as background images are not requested until they're needed, and most of these are hidden until some user action takes place. However, even though they weren't being requested on page load, PS and Y!Slow still reported them as extra requests and suggested spriting them. Like I said, I don't want to build and maintain sprites for plugin code that could change; this way I still have control over each individual background image, and I simply copy and paste into one place in my SASS file, which will then update every occurrence of that image for the final CSS. I'm happy, and the tools, it seems, are extremely happy by the looks of things

With my theme itself I was able to data URI all but one of my images; these wouldn't all sprite anyway, as they are repeating backgrounds in many cases.. the one exception being a larger header.jpg.. that did cause me a few headaches, as it increased the onload event time somewhat (according to the pretty waterfall views you can get). CSS backgrounds are always called last, so it wasn't really affecting the content load time, but you know, I wanted to see what I could do.. I cheated: I set the header.jpg to load first in a hidden div in the HTML (hmm, maybe I should do this on the front page only?). I already had parallel downloading working, so it no longer kicked the onload event out, and the final waterfall looks beautiful ;)
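A rough sketch of that cheat (the path is hypothetical): the <img> makes the parser fetch the file early, and the CSS background is then served from cache:

<div style="display: none;">
  <img src="/themes/mytheme/header.jpg" alt="" />
</div>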

As part of the re-factoring with SASS I was also able to remove ALL comments from the final CSS file and then also minify it very easily.. should I need to troubleshoot or change the CSS, I simply open the SASS file, complete with the comments and unminified, change where necessary and rebuild the required CSS.. this process was taking me less than 5 minutes as I worked on the individual sheets laterally. So in adding the extra bytes due to data URIs I was able to remove just as much, if not more, through comments and whitespace

actually it's likely more given that you can see that resulting overall Page Size is about 33% smaller

I left the sheets separate and used the Drupal functionality to combine them, because the way Drupal works it sometimes only calls a sheet if it's required on a page (e.g. you won't see the admin CSS if you visit!), and this way if I disable a module it will automagically not include that sheet. Therefore minifying them separately, before letting Drupal combine them, was I figured the best way to get the best of both worlds.

I presume this might be a different workflow for other CMS's, but hopefully the theory should be the same

So the upshot is.. OPTIMISE ALL IMAGES and combine/sprite them or DATA URI them, then re-factor the CSS around them; don't worry your pretty little heads about unused or so called inefficient selectors, but bear the inefficient ones in mind. If you can un-qualify them, then do.

I'm now sitting at 93 with both Pagespeed and Y!Slow, and a big part of raising this will now be minifying the JS.. outwith my area of expertise hehe.. Y!Slow still wants me to use a CDN, but I figure I've no need with only 9 static resources, already parallelised via subdomains. I apparently cannot satisfy the cookieless-domains rule for these resources, as I'm already serving the site from the non-WWW and would have to get/use a completely different domain/IP to satisfy that clause; I don't think my wee community site warrants this.

so just how do you minify JS?

I think I can treat the JS files the same way as the CSS, pulling them into the theme separately, and if I can minify them before combining, that's fairly much it for this experiment ;)

Hope that gives an insight into "do as I say and not as I do".. i.e. don't stress over those CSS warnings ;) - but do use them to get yourself familiar with what else may be lurking in those CSS files that you can fix

Suzy
:)

alt131

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 4274514 posted 11:32 pm on Mar 11, 2011 (gmt 0)

Wahoo! You have been busy - and without a yes/no, I did the tests myself. Want more time to reflect on your last post, but a first read suggests my results confirm - maybe inform - some parts of it.

How this relates to your tests
Although you have now moved on to general optimisation techniques, these tests were designed around the OP, so they provide a sub-set of more detailed information. The issue was that pagespeed wants efficient selectors and seems to treat some selectors as more efficient than others - which prompts the first question:
"Does pagespeed consider one "efficient" selector more "efficient" than others?"
Second, although unused selectors are not significantly "penalised", they are the opposite of "efficient" - so what happens when they are removed?

Where this takes the thread
Nowhere new - just providing finer detail on a sub-set of questions prompted by the OP. It does reinforce the general thread theme that pagespeed scores reflect the usual performance improvement techniques (optimisation, caching, etc). It confirms already identified oddities, and I think it confirms that "refactoring" around the detailed warnings/suggestions relating to css and selectors is best left until the tool matures.

The chosen test page:
On a well-known site, mostly text, reasonably well-written code. I'm still struggling to find a pagespeed definition for "interactive", and this one may not qualify. But it provides an opportunity to test whether "non-interactive" pages respond the same way as "interactive" pages.

In the original, the web optimisation analyser gave all "congratulations": total weight 43000kbs, 7 http requests, 4s download at 128k. Pagespeed (version 1.10.2) scored 91, complaining of un-combined css, inefficient selectors and unused css*. Downloaded and reassembled so it could be manipulated for testing, it scored 79 - an acceptable difference given it was uncached, no expires-by, a couple of bad requests, etc.
* Included alternate, print and other media css - all treated as required downloads, then all selectors reported "unused"

The tests:
# just id's versus just classes
Made no difference to the score.
# just classes versus multiple classes
Could not be tested because (randomly) first or last class was reported as unused.
# "pagespeed efficient" descendent selectors versus just id's versus just classes
Results ceased to be meaningful:
  1. Definite counting issues
  2. Unused selectors were incorrectly reported.
  3. Sometimes a certain number of unused selectors seemed to be required before the warning triggered. In small test files only one, in larger pages either: adding/deleting a particular selector would trigger/remove the warning, OR large numbers of rulesets (but not all selectors reported as unused) had to be removed. (Code was double-checked for validity)
  4. Attribute selectors not recognised and therefore reported as unused. (Confirmed by a test file using only attribute selectors.)
  5. Randomly reported pseudo-elements, pseudo-classes and adjacent siblings as unused (as for 4). Also randomly failed to count siblings correctly, which meant different test runs would report a different group of adjacent sibling selectors as unused.
  6. When all unused selectors were removed, bar one to trigger the warning, pagespeed warned "0 of 0 is not used by the current page" followed by a list of a few selectors from each of the css files. (Confusing as previously reported unused, not used selectors.)
  7. Consistently reported 42.7% unused selectors and the same (but wrong) size css files. This was maintained despite deliberately increasing/reducing file size. Continued despite trying to clear a "sticky memory" - clearing caches, starting/restarting browser and pagespeed, cold booting, etc
  8. Score increased to 81 when all css was moved to a single file.
  9. Score increased to 85 when all claimed unused selectors were removed. As these were incorrectly reported, all page style was lost - but pagespeed then reported 87.4% unused selectors.

SuzyUK

WebmasterWorld Senior Member suzyuk us a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 4274514 posted 12:21 am on Mar 12, 2011 (gmt 0)

"Does pagespeed consider one "efficient" selector more "efficient" than others?


No, just whether it's deemed efficient or not - but back to the OP: even if a selector is efficient, it's quantity not quality ;) I'm fairly positive I would need to reduce my "inefficient" count to under 5 before it would have an overall score impact, but for the reasons explained above that's not only not possible, it's very likely a waste of time just to gain a point (actually, after last night's debacle, I might lose a point :o)
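For anyone skimming, the distinction (as far as I can make out from the help pages) hinges on the rightmost part of the selector - a rough illustration, with invented selectors:

    /* flagged as "inefficient": the key (rightmost) part is a bare tag,
       so every <a> on the page gets checked against the whole chain */
    #content .post ul li a { text-decoration: none; }

    /* treated as "efficient": the class narrows the candidates immediately */
    .post-link { text-decoration: none; }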

Although you have now moved to general optimisation techniques,

I haven't really deviated from the OP, or at least I didn't intend to.. I said I was going to use the opportunity to see what lurked beneath, and in fact I landed where I instinctively thought I would: I can say without a shadow of a doubt that it is not worth looking for unused selectors.. that is what started me out on this and I'm happy.. I'm fed up with people looking for "unused selector" tools without knowing why they're looking, like it's a cure-all or something.

The inefficient ones have made me think differently.. I could pretty much say they're not worth it either but they taught me something more, not that I can "beat them" but that I can USE them.. so that I can't say "shadow of a doubt" about them.. but nearly ;)

The chosen test page:
On a well-known site, mostly text, reasonably well-written code. I'm still struggling to find a pagespeed definition for "interactive", and this one may not qualify. But it provides an opportunity to test whether "non-interactive" pages respond the same way as "interactive" pages.


interactive means that users interact with the page - sometimes to add content, sometimes to vote, sometimes to open a larger version of an image (or, as on the WebmasterWorld homepage.. what's up with that right now? 11s to get the page - I thought BT thought that homepage load speed was more important than anything.. thank goodness I never visit it that way! - all that to click a like button or to tweet it *shrug*)

all of these "interactive" functions should not impact the page load; they could be deferred if necessary - it's highly unlikely you are going to want to add content, vote or view an image until the actual page is available for you to click on!

So my lesson was.. my gut instinct is still correct. I knew this already, it was common sense; I just needed to see how some parts acted with others. And that means yes, I'm still going to rant about the misleading PS and Y!Slow warnings, because the reasoning behind them is not at all apparent and they are misleading.. or, if you do visit their "help" pages, you will get lost in double-speak (George Orwell would be proud)

alt, I'd love to hear if you can prove the weighting one way or another, but I think swa has it.. it exists.. and they'll change their "algo" to suit once they realise that some parts are useless (I actually think they have already - in the built-in FF addon all is not the same as it was), and unless we want to start becoming like the SE Update threads, my guess, and now a semi-informed opinion, is that it's just not worth it

coopster

WebmasterWorld Administrator coopster us a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 4274514 posted 5:43 pm on Mar 14, 2011 (gmt 0)

so just how do you minify JS?


My preference is the Yahoo YUI compressor [developer.yahoo.com]. Download [yuilibrary.com] the java package and expand it in a directory of your choice. Next, to make it as easy as possible, do this:
  1. Copy/move your original js file into the "build" directory
  2. Run the following command:
    java -jar yuicompressor-x.y.z.jar myfile.js -o myfile-min.js --charset utf-8
  3. Move the js and the minified js back to your working directory

Only use the --charset option if your file is encoded in utf-8.
SuzyUK

WebmasterWorld Senior Member suzyuk us a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 4274514 posted 12:53 pm on Mar 15, 2011 (gmt 0)

Thanks coop, I will try that

However, if any of you have been checking my progress this last weekend, you will note that everything is now uncompressed again and scores are down accordingly - the CDN (parallel downloading thing) is not active. This is because of an unrelated (really stupid!) error on my part: going to update/delete one module, I deleted them all - I meant to back them all up first before deleting/updating only the Forum (stupid stupid Suzy!)

It is now rebuilt, and just to show that every cloud does have a silver lining: because of the work I did re-factoring the CSS for this post, I was quickly able to update it for the modules that had changed, i.e. the ones I hadn't wanted to update

The remaining errors are non-CSS related, but they mean I'll need time to get back to that 93 score: I have a module clash now and need to work with the files uncombined until I work through the issues.. did I tell you I was stupid?.. I will try the compressor once the JS is the only thing left to tidy up

The other good news about this "forced update" is that I now know which of the core modules are modified - namely the Rich Text Editor and the User Login Bar (you do forget over time if you've changed something in earlier versions; I only change something I've no intention of updating, or that I really need - non-tech users mean the RT Editor is a requirement) - so I have separated them even further and took copies as soon as they were reconfigured. Unfortunately the "Fake" CDN module is one of them: while I had it working before to ease the parallel downloading issue, it no longer works with my updated image cacher, so I presume there's some trade-off there, and with all the images being data URIs I'm not sure it will be needed any more anyway.. we'll see what happens when I finish the Forum and re-compress

and another good thing.. the server where this site is hosted needs to become a multisite install for another related project so it would have required a bit of shuffling around anyway, so that's done now too..

seems fate intended me to "get familiar" with the inner workings of the site for more reasons than one ;)

Anyway I just wanted to let you know in case anyone's looking at the site and thinking "what's she on!" .. I will recombine and minimise all files again as soon as I've re-themed/optimised the Forum.. which was the only thing I had left to do anyway!

Suzy
:)

alt131

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 4274514 posted 9:34 pm on Mar 22, 2011 (gmt 0)

Not meaning to string the thread along, but we've recently had a national disaster. The rescue is over, now locating lost souls, but we're near-by for "refugees", and the weather has just turned. Not much time to post, but appreciate WebmasterWorld and less gloomy subjects.

@coopster Appreciate the reference to YUI Compressor. Without doing a full "product feature" comparison, can you briefly summarise why that's the tool you prefer?
(Reason for asking - (lots), but the large number of tools available must make it hard for coders to "decide" which one might be "best". Some highlights might help that decision)

@Suzy Your last post caused lots of thoughts, but need some time to internally summarise - "I'll be back" ... ;)

coopster

WebmasterWorld Administrator coopster us a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 4274514 posted 10:49 pm on Mar 22, 2011 (gmt 0)

hehe, great question alt131. Short answer? I did my research. At the time this was the best thing going. Smallest footprint (highest compression), most accurate. The bonus was ease of use (once you read all the documentation). But that's why I threw down the 3 steps, to save others time on how to use this utility.

I review my own processes quite often and am still using this effort. I wish Google would make their Closure Inspector and JS minifier easier to use in Page Speed. I need a tool that runs locally because my initial development is 100% offline. I don't have time to implement a FireFox add-on to minify my JS within the Add-ons available today. But one would think G could do that for you. I've seen quite a bit of back-and-forth in the G forums in this regard and eventually it will probably get to that point, but for now, I use that which is, IMHO, the best option.

SuzyUK

WebmasterWorld Senior Member suzyuk us a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 4274514 posted 12:39 am on Mar 23, 2011 (gmt 0)

alt I wish you well, and send {hugs} to you and yours - a diversion is often best and that we can give you, hopefully..

I got it all back, CDN as well, but ended up sidetracking myself - deliberately, I hasten to add - because I realised how fragile an open source CMS can be.. I left it 3 years before trying to implement more than what was on the surface, but learned that in one fell swoop I can still break it! I spent nearly 2 weeks trying to figure out why something that was there (in the database, and there when trying to edit) wasn't appearing.. nothing to do with CSS btw!

I eventually got it back but I'm nae happy (and this should no longer be in CSS, it's so far off track!). In order to work on the PHP I had to know how to "undo" any compression/caching I might have done - thankfully that's saved, as mentioned above

As for a live/dev environment - this was such a stupid mistake it wouldn't have mattered. I always, always back up a directory before making changes; in this case I simply deleted instead of copying.. still can't believe I did that!

As for this project, I've sickened myself and haven't been able to get back to the point I was at. Although I did get the image cache and the CDN working again, I can't bear to start CSS'ing a forum until I know all the users can even log back in (yes, that was a side effect! I had to learn how to force them to log off and wait.. did I mention that patience is not one of my better qualities ;))

coop, re: the JS on that other site I sent you (alt, it's my "embarrassing site") - it appears it's only the google (analytics) code itself causing a problem, and I presume you don't touch that.. I did update the adsense to the async code and the warnings about it stopped

I sidetracked myself by trying to apply the logic in this thread to my archived site - not that it was necessary, because the site isn't ever changing any more.. however it started at alt's perceived average - 85% on both PS and Y!Slow - and it's now 99% on PS and 96 on Y!Slow

This is a static site now, so I don't expect those numbers to be as easily achievable elsewhere, but the point is that with some CSS work (no image optimisation was done on this site, nor was unused CSS removed) and some htaccess expires settings it was hitting 99%. ALL the CSS was made "very efficient", yet it's still giving me advice to remove unused CSS, optimise 1 image to save 3%, and set an expires header on my GA js. I don't think the score would change even if I were to remove the unused CSS... to get from 99 to 100 I think the whole lot would need doing, which a: is not possible right now and b: proves to me that unused selectors are not worth looking for, but "very efficient" ones are worth writing (if you can!)

coopster

WebmasterWorld Administrator coopster us a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 4274514 posted 10:45 pm on Mar 23, 2011 (gmt 0)

it appears it's only the google (analytics) code itself causing a problem I would presume you don't touch that


HA! You are correct! I didn't see that because I (tinfoil hat guy) block GA in my browser. Therefore the script didn't activate and your site returned the full 100 in PageSpeed. But isn't that just funny? GA is causing your site to not score as desired. Go figure.

SuzyUK

WebmasterWorld Senior Member suzyuk us a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 4274514 posted 1:27 am on Mar 24, 2011 (gmt 0)

well well well.. "too funny" indeed, thanks.. you and your tin-foil hat eh! I too block GA (normally - except on my own sites or ones I'm checking) - I couldn't figure out how you got the 100 and I got 99.. so if you're getting the 100 and it's just the script, that means I have nothing further to do with the CSS.. and I can leave all that unused stuff and be damned, mwahaha ;)
