Forum Moderators: incrediBILL
In this research-based talk, Steve does not look at database efficiency or other back end improvements. He focuses instead on the front end, the user's experience within their browser. His research shows that, by far, the front end is the main area where significant website speed gains can be had.
His list gives us 14 best practices, culled from his research:
1. Make Fewer HTTP Requests
2. Use a Content Delivery Network
3. Add an Expires Header
4. Gzip Components
5. Put Stylesheets at the Top
6. Put Scripts at the Bottom
7. Avoid CSS Expressions
8. Make JavaScript and CSS External
9. Reduce DNS Lookups
10. Minify JavaScript
11. Avoid Redirects
12. Remove Duplicate Scripts
13. Configure ETags
14. Make Ajax Cacheable
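For rules 3 and 4, the Apache side is only a few directives. A minimal sketch (assuming mod_expires and mod_deflate are loaded; the lifetimes here are illustrative, not from the talk):

```apache
# Rule 3: far-future Expires headers for static assets (mod_expires)
ExpiresActive On
ExpiresByType image/png "access plus 1 year"
ExpiresByType text/css  "access plus 1 month"

# Rule 4: gzip text responses on the fly (mod_deflate)
AddOutputFilterByType DEFLATE text/html text/css application/javascript
```

As the later posts discuss, directives like these mostly cover static files; dynamic scripts have to send equivalent headers themselves.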
Souders has integrated this information into a Yahoo tool called YSlow [developer.yahoo.com], an extension of the Firebug add-on to Firefox. He has also published a book with O'Reilly about all these goodies, called "High Performance Web Sites".
Sometimes you hear someone talk and just know that they've "got the goods." Steve Souders has definitely got the goods.
[edited by: tedster at 10:16 pm (utc) on Jan. 13, 2008]
Yeah, I'm wondering about moving the images to images.domain.com, what effect that has, and whether it depends on the subdomain being on the same server or a different one.
To do this properly, create a CNAME for images.domain.com. This will let the browser open an additional set of parallel connections, speeding up load time (as ted mentioned in the post above). See my last post in this [webmasterworld.com] thread.
What's the downside? (Other than making sure there are no duplicate content issues created in the process.)
What's the downside?
DNS lookup times are the downside. And they can be expensive.
The real question is whether the cost of the additional DNS lookups will be outweighed by the gain from parallel connections. If your HTML page has only 3 or 4 embedded static files (CSS, images, JS, etc.), a multi-hostname approach might not make sense, because the DNS lookup time for the additional hostname(s) could exceed the time saved over letting a single connection run on a single hostname.
A page with 100 images or something would definitely benefit from using a multi-hostname approach because the DNS lookup times wouldn't be that costly as compared to letting a single connection serve all those hits in a serial/sequential manner (versus the parallel loading).
HOWEVER, just adding a single second hostname is of little value if you're loading a lot of objects from a single page, since it suffers the same problem as the main server (serial requests). The real performance increase here comes from having multiple hostnames, e.g. www.domain.tld, img1.domain.tld, img2.domain.tld, img3.domain.tld; i.e. you'd split up the images over the three img hostnames and keep the main HTML document served from the www. That's where the logistics become a bit of a pain!
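One way to keep that split manageable (a sketch; the hostnames and the hashing scheme are illustrative assumptions, not anything from the talk) is to assign each image a hostname deterministically, so the same image always maps to the same host and the browser cache keeps working:

```javascript
// Deterministically shard image URLs across several hostnames.
// A stable hash keeps each image on the same host between page
// views, so browser caching is not defeated by the sharding.
const IMAGE_HOSTS = [
  "img1.domain.tld",
  "img2.domain.tld",
  "img3.domain.tld",
];

function imageUrl(path) {
  // Simple stable string hash over the path; any stable hash will do.
  let hash = 0;
  for (const ch of path) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  const host = IMAGE_HOSTS[hash % IMAGE_HOSTS.length];
  return "http://" + host + path;
}
```

Because the mapping depends only on the path, repeated page views hit the browser cache rather than re-downloading the same image from a different host.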
- Darrin Ward.
3. Add an Expires Header
4. Gzip Components
Conteg does both of the above by default. Include an edit-date in the DB with the content & you can also take advantage of 304s, etc.
I'm especially pleased to see Gzip included. It seems self-evident that smaller files will be delivered faster, yet folks have argued over this in the past ad nauseam. Conteg's Gzip is also load-balanced, which removes the last objection.
OK, I'll go and watch the video now.
I had not been keeping track, but I had been following your Conteg development since the old days:
It's been on my TODO list *forever* to take your Conteg class and see how much of it I can work into Drupal (some aspects Drupal already handles, some it doesn't).
It is such a shortcoming in dynamic scripts and I think this is really great work.
Regarding Conteg, can't you just set those same headers in httpd.conf?
The point of dynamic content is just that - it is dynamic; every page potentially is different. Most (not all) of the httpd.conf settings are ignored by Apache as soon as it handles dynamic content, as the assumption behind most of the settings is that it is handling static content.
Let's take one of the most fundamental parts of the HTTP spec: returning a 304 when the content hasn't changed (typically driven by the If-Modified-Since header). The video (excellent, btw, and fully recommended) assumed that every website would be set up for 304s, and talked in terms of how to avoid them (304 as a backstop). The default for PHP is never to send a 304; PHP doesn't even get into this particular ballpark to play.
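The conditional-GET check a dynamic script has to supply itself is small. A sketch of the core decision (in JavaScript for illustration; Conteg does the PHP equivalent):

```javascript
// Decide whether a request can be answered with 304 Not Modified.
// imsHeader: the client's If-Modified-Since value (or undefined);
// lastModified: a Date for when the content actually last changed.
function canSend304(imsHeader, lastModified) {
  if (!imsHeader) return false;          // no conditional header sent
  const since = Date.parse(imsHeader);   // NaN if the date is malformed
  if (Number.isNaN(since)) return false;
  // HTTP dates have one-second resolution, so compare whole seconds.
  return Math.floor(lastModified.getTime() / 1000) * 1000 <= since;
}
```

When this returns true, the response is just the 304 status line and headers with no body at all, which is where the bandwidth saving comes from.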
There are certainly some things that are better placed into the apache config. One of the efforts with Conteg has been to provide a simple means (via the Apache Notes mechanism) to allow interaction between the httpd.conf & PHP.
The ETag is supposed to suggest an 'expires'/reload. Apparently there's a way to disable generation of these, but it's new to me, too. I was interested in what it takes to gzip-compress files, particularly on IIS 6.0 when you're on a virtual host and don't have administrative access to the control panels. On Apache, the default is apparently deflate, and you merely have to configure your own .htaccess to comply. Couldn't be easier.
If IIS6 proves to be a headache, I might end up switching a few sites over to Apache. I don't know.
As for images, wasn't JPEG 2000 supposed to provide better compression with less apparent visual loss (sort of like 32-40bit stereo MP3 with music)? And if requests themselves are the problem, rather than repeated partial requests and responses, then someone did suggest sprites for various images. I use a sprite 'block' myself for a whole host of buttons, on/off states, and up/down visual feedback (they didn't work in IE6, no surprise, even with the Imagecache 'fix').
The argument against compression has (I think) been of the extra load that it would put on the server, coupled with bugs within early browsers. Since the *nix mindset has seemingly been that a '386 was good enough to serve up a (static) website, that was always a good argument. It kind of falls away with modern multi-GHz servers + clients, however.
Typical compression for HTML pages is 75% (a four-fold reduction). I have some pages that do better than 90% (approaching a 20-fold reduction).
Most image files are never compressed (no point, for the reason you mention).
Another good method is to pre-compress as-is files (html files that already contain a header, with a compressed body). That way you can get the best of both worlds.
That was the official reply I got, as well, when I asked the hosting outfit about this. They think it's switched off. Little do they know; I figured it out. But then that's Apache. I don't know about mod_gzip, but deflate can be set at the directory or 'root' level, as well as system-wide. So.
I wish it were as easy with IIS. But from what I read, you need administrative privileges. And these Apache guys were not idiots. They weren't some 'call center'. Yet even they didn't understand, really, how the server worked, or at least the guy replying to my email didn't. I'll get far less comprehension from the hosting outfits running IIS, I would guess. I understand there is something roughly equivalent to .htaccess in IIS. But that would be a question properly aimed at that forum.
As you suggest, I tried to send out pre-gzipped .js, for example as x.js.gz, using rewrite rules, but I couldn't get it to work. And the auto-compression from the server sends a file of about the same size anyway. I have no problem uploading a precompressed file, but maybe I used the wrong deflate with gzip, I don't know. As for sub-domains, I have a lot of separate graphics used for selectable tiled backgrounds (part of a 'preferences' screen, and they can't all be made into a single image 'sprite'). If half came off the main domain and the other half off a sub-domain, would that mean faster downloading of the site?
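For the record, a commonly used mod_rewrite recipe for serving a pre-gzipped x.js.gz alongside x.js looks something like this (a sketch, untested against this particular host; it assumes mod_rewrite and mod_headers are available):

```apache
RewriteEngine On
# If the client accepts gzip and a .gz twin of the file exists,
# serve the .gz file instead of the original.
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteCond %{REQUEST_FILENAME}.gz -f
RewriteRule ^(.*)\.js$ $1.js.gz [L]

# Tell the browser what it's getting, and keep caches honest.
<FilesMatch "\.js\.gz$">
    ForceType application/javascript
    Header set Content-Encoding gzip
    Header append Vary Accept-Encoding
</FilesMatch>
```

The usual failure mode is omitting the Content-Encoding header, in which case the browser receives raw gzip bytes and the script appears broken.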
As you suggest, I tried to send out pre-gzipped .js
# For files that include their own HTTP headers:
#AddHandler send-as-is asis
If half came off the main domain, and another half of the images off a sub-domain, would that mean faster downloading of the site?
The fundamental issue is that browsers restrict themselves to 2 simultaneous connections per site. Hence, serving images from a different site than the webpages will double the number of parallel browser connections and can substantially cut the page-load time. So the answer lies in the browser's definition of "site". Certainly, Google views sub-domains as different sites, but does a browser? Possibly.
A really good question. Test it & report back.