In this research-based talk, Steve doesn't look at database efficiency or other back-end improvements. He focuses instead on the front end: the user's experience within their browser. His research shows that, by far, the front end is the main area where significant website speed gains can be had.
His list gives us 14 best practices, culled from his research:
1. Make Fewer HTTP Requests
2. Use a Content Delivery Network
3. Add an Expires Header
4. Gzip Components
5. Put Stylesheets at the Top
6. Put Scripts at the Bottom
7. Avoid CSS Expressions
8. Make JavaScript and CSS External
9. Reduce DNS Lookups
10. Minify JavaScript
11. Avoid Redirects
12. Remove Duplicate Scripts
13. Configure ETags
14. Make Ajax Cacheable
Souders has integrated this information into a Yahoo tool called YSlow [developer.yahoo.com], an extension of the Firebug add-on to Firefox. He has also published a book with O'Reilly about all these goodies, called "High Performance Web Sites".
Sometimes you hear someone talk and just know that they've "got the goods." Steve Souders has definitely got the goods.
I'm not 100% sure that I agree with all of the findings for the site I just tested it against, but it does give a lot of food for thought in improving design methods. There is one change that I am making immediately, one that I had long forgotten.
I think I found a minor bug or two.
I have a main CSS file which further imports several other CSS files. They are all reported as being outside the document <head> but they are not (as far as suggesting they are in the <body> or somesuch).
On the page listing the objects that don't have an expires header, or whose header isn't set far enough in the future, the date format is the default US mm/dd/yyyy style and not the one that I have selected in the main Windows options.
I am not sure why this is listed as the second object in the list: http://www.domain.eu/robots.txt#resize_iframe%26remote_iframe_0%26102$.
I am going to have to look up what "minify" a JS file means.
Pfffft. I got an "F".
I must check why GZIP isn't on for this site, and set up the expires headers correctly.
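For anyone in the same boat: on Apache, both fixes usually come down to a few lines of configuration using mod_deflate and mod_expires. A sketch (the MIME types and cache lifetimes below are illustrative, not a recommendation):

```apache
# Gzip text-based responses on the fly (mod_deflate)
AddOutputFilterByType DEFLATE text/html text/css application/javascript

# Send far-future Expires headers for static assets (mod_expires)
ExpiresActive On
ExpiresByType image/png "access plus 1 year"
ExpiresByType text/css "access plus 1 month"
ExpiresByType application/javascript "access plus 1 month"
```

Remember that once a far-future Expires header is out there, you need to rename a file (or change its URL) to force browsers to fetch a new version.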
If you look in google webmaster tools, and check out Tools - Set Crawl Rate you'll see some interesting graphs including time spent downloading a page.
Before the changes I had huge spikes and a very erratic graph - now the graph shows the average time cut in half, and almost totally steady.
AdSense and other revenue went up, and traffic as well.
Needless to say, implementing some of those changes was probably the best thing I did for my site all year. Take some time and read the post or watch the video and work on it. Some of the changes only take a few minutes to do.
Seeing as he mentioned it, I would recommend madmatt69's thread 20% gain in adsense income after speeding up site [webmasterworld.com] which has some good tips for increasing speed with a PHP-driven site.
I am going to have to look up what "minify" a JS file means
Minification is the practice of removing unnecessary characters from code to reduce its size, thereby improving load times. When code is minified, all comments are removed, as well as unneeded whitespace characters (spaces, newlines, and tabs).
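A minimal before-and-after sketch (the function names are illustrative; in practice a tool such as JSMin or the YUI Compressor does this for you):

```javascript
// Original, readable source: comments and whitespace inflate the byte count.
function sumPrices(prices) {
    // Add up every item in the cart
    var total = 0;
    for (var i = 0; i < prices.length; i++) {
        total += prices[i];
    }
    return total;
}

// The same logic after minification: comments stripped, whitespace
// removed, local variable names shortened. Behavior is identical.
function sumPricesMin(a){var t=0,i;for(i=0;i<a.length;i++)t+=a[i];return t}

console.log(sumPrices([5, 10, 15]));    // 30
console.log(sumPricesMin([5, 10, 15])); // 30
```

The saving per file is small, but across every script on a busy page it adds up, and it stacks with gzip compression.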
The example uses an extra parameter on a URL to show the date the information was last changed. This could potentially lead to duplicate content issues if those URLs should ever be indexed.
Although eTags are set up for all files, I got an F for that too. Not sure why.
I've seen cases, for instance, where using a second host for images actually hurt speed - for reasons that were difficult to address in that particular configuration. But just knowing about the issue and the options is a good thing. Getting a 40% improvement in website front end speed can be a major factor in online success, and such improvements are often well within reach.
When a responsible designer understands that their design is more than commercial art and can directly influence business success from a technical direction, they are often happy to contribute to the overall achievement. I had one such conversation with a designer, and the next version of the site abandoned such visual frills as rounded corners, gratuitous gradients and the like. This made a real difference in load times and site stats altogether, and the site still looked really sharp.
(Interesting side note: on January 7, 2008, Steve Souders left Yahoo for Google [blog.wired.com].)
g1smd wrote: "One site recommends turning off eTags not adding them."
The Yahoo help page also recommends turning off eTags in some situations, but not in others. The wording in the rule - "configure eTags" - is a bit ambiguous and further reading illustrates why.
If you host your web site on just one server, this isn't a problem. But if you have multiple servers hosting your web site, and you're using Apache or IIS with the default ETag configuration, your users are getting slower pages, your servers have a higher load, you're consuming greater bandwidth, and proxies aren't caching your content efficiently.
If you're not taking advantage of the flexible validation model that ETags provide, it's better to just remove the ETag altogether.
Also, using eTags does give you a lower "grade" with the YSlow tool.
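If you decide to follow the "remove them" advice, on Apache it's a one-liner (a sketch; IIS needs a different mechanism):

```apache
# Stop Apache generating ETags; the default format includes the file's
# inode, which differs across servers and defeats caching behind a
# load balancer.
FileETag None
```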
1. Make Fewer HTTP Requests
Here are some impressive statistics...
Google - 2 HTTP requests at 14,438 bytes total.
WebmasterWorld - 4 HTTP requests at 59,139 bytes total.
Live - 5 HTTP requests at 22,640 bytes total.
And, not so impressive?
Yahoo! - 77 HTTP requests at 353,113 bytes total.
CNN - 262 HTTP requests at 712,377 bytes total.
For the CNN site, 175 of those requests are CSS background images.
He focuses instead on the front end, the user's experience within their browser.
Then Steve should be FIRED because he's failing miserably.
Likewise, the new Yahoo Mail is slower than hell; I reverted to the original format, and so on and so forth.
Didn't say the changes were neat, but they quickly made a still useful old Thinkpad next to useless where Yahoo is concerned.
Yup, I'll sit right down and waste an hour learning how I too can mess up my site the same way.
Thanks for the tip Tedster! ;)
You shouldn't take Yahoo's abysmal speed as an indicator of Souders' expertise.
My biggest problem is with IE's caching of Ajax HTTP requests... for the time being I am obliged to add a "dummy" timestamp variable to catalog pages to ensure that any updates get shown instead of the old version. I can't see any improvement possible there.
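The usual cache-busting workaround looks something like this (a sketch; the helper name and the `_ts` parameter name are illustrative):

```javascript
// Append a throwaway timestamp parameter so IE treats every Ajax GET
// as a fresh URL and bypasses its cache.
function bustCache(url) {
    // Use '?' if the URL has no query string yet, '&' otherwise.
    var sep = url.indexOf('?') === -1 ? '?' : '&';
    return url + sep + '_ts=' + new Date().getTime();
}

console.log(bustCache('/catalog/page1'));
console.log(bustCache('/catalog/page1?lang=en'));
```

The obvious downside is that nothing gets cached at all, which works against the rest of Souders' advice; where you control the server, correct Cache-Control/Expires headers on the Ajax responses are the cleaner fix.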
According to the video, page generation (the PHP part) takes on average only five percent of the total time the visitor has to wait before the whole page is rendered on their screen. So removing the comments may reduce that 5% to maybe 4.9%.
>>no one mentioned CSS sprites.
#1. Reduce HTTP Requests. He specifically talks about sprites.
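For anyone unfamiliar with the technique: a sprite combines many small images into one file, so one HTTP request serves them all, and CSS crops each icon out of the shared image. A sketch (file name and pixel offsets are illustrative):

```css
/* One combined image replaces dozens of individual icon requests. */
.icon { background-image: url(/img/sprite.png); width: 16px; height: 16px; }

/* Each class shifts the background to expose a different icon. */
.icon-home   { background-position: 0 0; }
.icon-search { background-position: -16px 0; }
.icon-mail   { background-position: -32px 0; }
```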
First off, what most concerned me was the "order of load" bit - this is not essential to many of my websites, but it would seem logical to keep all the (generated) html on the same domain to avoid conflict.
Lastly, what exactly does he mean by "domains"? Would sub-domains suffice as "alternate domains"?
From what I understand, the optimal way to load images is indeed to have them served by a different server, but with certain prerequisites. For the most part, the real advantage comes from being able to use a stripped-down super-light HTTP server for the serving of static content (your images, CSS, JS files, etc.). In this way, the HTTP server can be pre-compiled without support for PHP, MySQL, Perl, mod_rewrite, etc.. This gives the server a much smaller memory footprint and quick load-time. Naturally, your main Web server will retain such functionality. This all being in addition to the browser's parallel-loading advantage!
There are a few issues with having an images.domain.tld static file server though. If you have a single box with both your main HTTP server and your optimized static file server, you can't run both of them on port 80 on the same IP. One option is to re-port your static file server to another port, e.g. 81. This is obviously a pain because your images will need to be referenced as images.domain.tld:81..., which can cause logistics problems down the road.
The better way to do it is to have a separate box (or the same box with 2 unique IP addresses). This way you can map your main domain to one IP, the static server which is the sub-domain to the other IP, and both can run on port 80!
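On Apache, that two-IP setup might be sketched like this (the IP addresses, hostnames, and paths are illustrative):

```apache
# Main dynamic site bound to the first IP, port 80
<VirtualHost 203.0.113.10:80>
    ServerName www.domain.tld
    DocumentRoot /var/www/main
</VirtualHost>

# Stripped-down static file host bound to the second IP, also port 80
<VirtualHost 203.0.113.11:80>
    ServerName images.domain.tld
    DocumentRoot /var/www/static
</VirtualHost>
```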
It's quite an interesting subject that I've spent a bit of time on... To be honest though, I wouldn't recommend doing any of this unless you have a combination of servers under heavy load along with an obvious problem in page-load times; the load-speed increase is very slight in the majority of cases. It really only matters with very large pages and very high traffic levels. Hope this helps.