Forum Moderators: open
In this research-based talk, Steve does not look at database efficiency or other back end improvements. He focuses instead on the front end, the user's experience within their browser. His research shows that, by far, the front end is the main area where significant website speed gains can be had.
His list gives us 14 best practices, culled from his research:
1. Make Fewer HTTP Requests
2. Use a Content Delivery Network
3. Add an Expires Header
4. Gzip Components
5. Put Stylesheets at the Top
6. Put Scripts at the Bottom
7. Avoid CSS Expressions
8. Make JavaScript and CSS External
9. Reduce DNS Lookups
10. Minify JavaScript
11. Avoid Redirects
12. Remove Duplicate Scripts
13. Configure ETags
14. Make Ajax Cacheable
There are some great tips here in the many details and side comments he makes, such as using multiple host names to allow browsers to do more parallel downloads. Another key point is that IE and Firefox will stall or block other downloads and executions whenever they are downloading any JavaScript file. Opera is a bit better and will continue to download image files in parallel, but even Opera will not download any other script in parallel.
Souders has integrated this information into a Yahoo tool called YSlow [developer.yahoo.com], an extension of the Firebug add-on to Firefox. He has also published a book with O'Reilly about all these goodies, called "High Performance Web Sites".
Sometimes you hear someone talk and just know that they've "got the goods." Steve Souders has definitely got the goods.
[edited by: tedster at 10:16 pm (utc) on Jan. 13, 2008]
I'm not 100% sure that I agree with all of the findings for the site I just tested it against, but it does give a lot of food for thought in improving design methods. There is one change that I am making immediately, one that I had long forgotten.
I think I found a minor bug or two.
I have a main CSS file which further imports several other CSS files. They are all reported as being outside the document <head>, but they are not (the report doesn't go so far as to suggest they are in the <body> or some such).
On the page listing the objects that don't have an expires header, or whose expiry isn't set far enough in the future, the date format is the default US m/dd/yyyy style, not the one I have selected in the main Windows options.
I am not sure why this is listed as the second object in the list: http://www.domain.eu/robots.txt#resize_iframe%26remote_iframe_0%26102$.
I am going to have to look up what "minify" a JS file means.
Pfffft. I got an "F".
I must check why GZIP isn't on for this site, and set up the expires headers correctly.
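For anyone in the same boat, here's a minimal sketch of what those two fixes can look like in Apache, assuming mod_deflate and mod_expires are compiled in (the MIME types and lifetimes are illustrative assumptions, not recommendations for any particular site):

```apache
# Sketch only: adjust MIME types and lifetimes to your own site
<IfModule mod_deflate.c>
    # Gzip text responses (rule 4: Gzip Components)
    AddOutputFilterByType DEFLATE text/html text/css application/x-javascript
</IfModule>

<IfModule mod_expires.c>
    # Far-future Expires headers for static assets (rule 3)
    ExpiresActive On
    ExpiresByType image/gif "access plus 1 month"
    ExpiresByType image/png "access plus 1 month"
    ExpiresByType text/css  "access plus 1 week"
</IfModule>
```

Remember that once an Expires header is set far in the future, you have to rename the file (or version its URL) whenever you change it, or returning visitors will keep the stale copy.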
If you look in google webmaster tools, and check out Tools - Set Crawl Rate you'll see some interesting graphs including time spent downloading a page.
Before the changes I had huge spikes and a very erratic graph - now the graph shows the average time cut in half, and almost totally steady.
Adsense and other revenue went up and traffic as well.
Needless to say, implementing some of those changes was probably the best thing I did for my site all year. Take some time, read the post or watch the video, and work on it. Some of the changes only take a few minutes to do.
Seeing as he mentioned it, I would recommend madmatt69's thread 20% gain in adsense income after speeding up site [webmasterworld.com] which has some good tips for increasing speed with a PHP-driven site.
I am going to have to look up what "minify" a JS file means
In this context, the author is suggesting reducing to a minimum the Javascript file size:
Minification is the practice of removing unnecessary characters from code to reduce its size thereby improving load times. When code is minified all comments are removed, as well as unneeded white space characters (space, newline, and tab).
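As a rough illustration of what minification does, here is a naive sketch in Python - deliberately simplistic, since a real minifier such as JSMin also has to handle comment-like text inside strings and regex literals, which this ignores:

```python
import re

def naive_minify(js):
    """Toy JS minifier: strips /* */ and // comments and collapses
    whitespace. A sketch for illustration only - NOT safe for real
    code (it ignores string and regex literal contents)."""
    js = re.sub(r"/\*.*?\*/", "", js, flags=re.DOTALL)  # block comments
    js = re.sub(r"//[^\n]*", "", js)                    # line comments
    js = re.sub(r"\s+", " ", js)                        # collapse whitespace
    return js.strip()

source = """
/* banner comment */
var total = 0;  // running sum
var label  = "items";
"""
print(naive_minify(source))  # -> var total = 0; var label = "items";
```

Even this crude pass typically shaves a noticeable fraction off a heavily commented file, and the savings compound with gzip.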
The example uses an extra parameter on a URL to show the date the information was last changed. This could potentially lead to duplicate content issues if those URLs should ever be indexed.
Although eTags are set up for all files, I got an F for that too. Not sure why.
I've seen cases, for instance, where using a second host for images actually hurt speed - for reasons that were difficult to address in that particular configuration. But just knowing about the issue and the options is a good thing. Getting a 40% improvement in website front end speed can be a major factor in online success, and such improvements are often well within reach.
When a responsible designer understands that their design is more than commercial art and can directly influence business success from a technical direction, they are often happy to contribute to the overall achievement. I had one such conversation with a designer, and the next version of the site abandoned visual frills such as rounded corners, gratuitous gradients and the like. This made a real difference in load times and site stats altogether, and the site still looked really sharp.
(Interesting side note - on January 7, 2008, Steve Souders left Yahoo for Google [blog.wired.com].)
g1smd wrote: "One site recommends turning off eTags not adding them."
The Yahoo help page also recommends turning off eTags in some situations, but not in others. The wording in the rule - "configure eTags" - is a bit ambiguous and further reading illustrates why.
If you host your web site on just one server, this isn't a problem. But if you have multiple servers hosting your web site, and you're using Apache or IIS with the default ETag configuration, your users are getting slower pages, your servers have a higher load, you're consuming greater bandwidth, and proxies aren't caching your content efficiently. If you're not taking advantage of the flexible validation model that ETags provide, it's better to just remove the ETag altogether.
[developer.yahoo.com...]
Also, using eTags does give you a lower "grade" with the YSlow tool.
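If removal turns out to be the right call for your setup, the commonly cited Apache sketch looks like this (assuming mod_headers is available; check your own build before relying on it):

```apache
# Sketch: stop Apache generating ETags, and strip any that slip through
FileETag None
<IfModule mod_headers.c>
    Header unset ETag
</IfModule>
```

With ETags gone, caches fall back to Last-Modified validation, which behaves consistently across a multi-server cluster.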
1. Make Fewer HTTP Requests
Here's some impressive statistics...
Google - 2 HTTP requests at 14,438 bytes total.
WebmasterWorld - 4 HTTP requests at 59,139 bytes total.
Live - 5 HTTP requests at 22,640 bytes total.
And, not so impressive?
Yahoo! - 77 HTTP requests at 353,113 bytes total.
CNN - 262 HTTP requests at 712,377 bytes total.
For the CNN site, 175 of those requests are CSS background images.
He focuses instead on the front end, the user's experience within their browser.
Then Steve should be FIRED because he's failing miserably.
They slowed down the Yahoo Movie site to the point my older laptop now kicks up an error asking if I want to terminate the javascript on the page due to it running too long.
Likewise the new Yahoo Mail is slower than hell, so I reverted back to the original format, and so on and so forth.
I'm not saying the changes weren't neat, but they quickly made a still-useful old ThinkPad next to useless where Yahoo is concerned.
Yup, I'll sit right down and waste an hour learning how I too can mess up my site the same way.
Thanks for the tip Tedster! ;)
Some designs may have incredibly bloated JavaScript that could be made more efficient, but that part of efficiency is NOT within his purview. His question is more: "given that you have to serve up this bloated page that you have no control over, how do you do it fast?"
You shouldn't take Yahoo's abysmal speed as an indicator of Souders' expertise.
Of course not all of these suggestions can be implemented on all websites - but a few of the more obvious ones can help in almost all cases: adding "expires" headers and cutting the comments out of JavaScript files... I wonder if doing the same to PHP files would help? Of course this means keeping a duplicate, commented "working file" locally so that I can keep track of what my script does...
My biggest problem is with IE's caching of Ajax http requests... for the time being I am obliged to add a "dummy" timestamp variable to catalog pages to ensure that any updates get shown instead of the old version. No can-do for any improvement there.
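A sketch of that dummy-timestamp trick, written in Python for illustration (the `_ts` parameter name and the example URL are arbitrary - any throwaway name works, as long as your server ignores it):

```python
import time
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def bust_cache(url, ts=None):
    """Append a throwaway timestamp parameter so browsers like IE
    treat each Ajax GET as a fresh URL instead of serving a cached
    response. `_ts` is an arbitrary name, not a standard."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["_ts"] = str(ts if ts is not None else int(time.time()))
    return urlunparse(parts._replace(query=urlencode(query)))

print(bust_cache("http://www.example.com/catalog?page=2", ts=1200000000))
# -> http://www.example.com/catalog?page=2&_ts=1200000000
```

The same one-liner is trivial in client-side JavaScript (append `"&_ts=" + new Date().getTime()`), which is where it usually lives. Note the trade-off: this defeats caching entirely, which works against "Make Ajax Cacheable" - it's a workaround, not an optimization.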
According to the video, page generation (the PHP part) takes on average only five percent of the total time the visitor has to wait before the whole page is rendered on his screen. So removing the comments may reduce that 5% to maybe 4.9%.
Reducing JavaScript size has much more effect, because the size determines not only how much time the browser needs to parse it, but also how many bytes/packets have to be sent over the connection before parsing can even start.
>>no one mentioned CSS sprites.
#1. Reduce HTTP Requests. He specifically talks about sprites.
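For anyone unfamiliar with the technique: a sprite combines many small images into one file (one HTTP request instead of dozens), and CSS background-position reveals the right slice for each element. A hypothetical example (the file name and offsets are invented):

```css
/* icons.png is assumed to hold two 16x16 icons stacked vertically */
.icon-home   { background: url(/img/icons.png) 0 0 no-repeat;     width: 16px; height: 16px; }
.icon-search { background: url(/img/icons.png) 0 -16px no-repeat; width: 16px; height: 16px; }
```

That CNN figure above - 175 CSS background images - is exactly the kind of page where spriting pays off most.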
First off, what most concerned me was the "order of load" bit - this is not essential to many of my websites, but it would seem logical to keep all the (generated) html on the same domain to avoid conflict.
Based on the above question, would it be feasible to have HTML on one server, and, say, images and swf content on another? Would putting JavaScript requests on another domain as well create possible "load order" conflicts? How about CSS? I could well imagine spreading things out like that if it sped things up.
Lastly, what exactly does he mean by "domains"? Would sub-domains suffice as "alternate domains"?
From what I understand, the optimal way to load images is indeed to have them served by a different server, but with certain prerequisites. For the most part, the real advantage comes from being able to use a stripped-down, super-light HTTP server for the static content (your images, CSS, JS files, etc.). That way the static server can be compiled without support for PHP, MySQL, Perl, mod_rewrite, and so on, giving it a much smaller memory footprint and quick load time. Your main web server naturally retains all that functionality. And all this is in addition to the browser's parallel-loading advantage!
There are a few issues with having an images.domain.tld static file server, though. If you have a single box running both your main HTTP server and your optimized static file server, you can't run both on port 80 on the same IP. One option is to move your static file server to another port, e.g. 81. This is obviously a pain because your images will need to be referenced as images.domain.tld:81..., which can cause logistics problems down the road.
The better way to do it is to have a separate box (or the same box with 2 unique IP addresses). This way you can map your main domain to one IP, the static server which is the sub-domain to the other IP, and both can run on port 80!
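A hypothetical sketch of that second-IP setup using a lightweight server such as nginx (the IP address, host name and paths are invented placeholders):

```nginx
# Static-asset server bound to the second IP, so both servers share port 80
server {
    listen      192.0.2.2:80;          # placeholder second IP
    server_name static.example.com;    # the images/CSS/JS sub-domain
    root        /var/www/static;
    expires     1M;                    # long-lived caching for static assets
}
```

The main site's DNS A record points at the first IP, static.example.com at the second, and visitors never see a non-standard port.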
It's quite an interesting subject that I've spent a bit of time on... To be honest, though, I wouldn't recommend doing any of this unless you have servers under heavy load combined with an obvious problem in page-load times; the load-speed increase is very slight in the majority of cases. It really only matters with very large pages and very high traffic levels. Hope this helps.