Speeding Up Your Site - best practices for the front end
a Google video from Yahoo's Steve Souders
tedster
6:07 pm on Jan 13, 2008 (gmt 0)

I highly recommend investing an hour in watching this video: High Performance Web Sites and YSlow [video.google.com]. It's part of the Google Tech Talks series, and this one features Yahoo's Steve Souders. Steve currently holds the job of "Chief Performance Yahoo!" (Note: The sound is uneven at the beginning of the video, but that gets corrected early on.)

In this research-based talk, Steve does not look at database efficiency or other back end improvements. He focuses instead on the front end, the user's experience within their browser. His research shows that, by far, the front end is the main area where significant website speed gains can be had.

His list gives us 14 best practices, culled from his research:

1. Make Fewer HTTP Requests
2. Use a Content Delivery Network
3. Add an Expires Header
4. Gzip Components
5. Put Stylesheets at the Top
6. Put Scripts at the Bottom
7. Avoid CSS Expressions
8. Make JavaScript and CSS External
9. Reduce DNS Lookups
10. Minify JavaScript
11. Avoid Redirects
12. Remove Duplicate Scripts
13. Configure ETags
14. Make Ajax Cacheable

There are some great tips here in the many details and side comments he makes, such as using multiple host names to allow browsers to do more parallel downloads. Another key point is that IE and Firefox will stall or block other downloads and executions whenever they are downloading any JavaScript file. Opera is a bit better, and will continue to download image files in parallel. But Opera still will not do a parallel download of any other script.

Souders has integrated this information into a Yahoo tool called YSlow [developer.yahoo.com], an extension of the Firebug add-on to Firefox. He has also published a book with O'Reilly about all these goodies, called "High Performance Web Sites".

Sometimes you hear someone talk and just know that they've "got the goods." Steve Souders has definitely got the goods.

[edited by: tedster at 10:16 pm (utc) on Jan. 13, 2008]

 

tedster
4:13 pm on Jan 16, 2008 (gmt 0)

There's also an advantage in that a browser will run only two parallel HTTP requests at one time to any given hostname, so using a dedicated second server for images takes you to four parallel downloads - that can give a nice jump in speed in some cases.

g1smd
4:48 pm on Jan 16, 2008 (gmt 0)

So, to clarify, a subdomain would be a different hostname in this context?

What's the downside? (Other than making sure there are no duplicate content issues created in the process.)

Tastatura
5:17 pm on Jan 16, 2008 (gmt 0)

g1smd:
Yeah, I'm wondering about moving the images to images.domain.com, what effect that has, and how that depends on whether the subdomain is on the same server or on a different one.

To do this properly, create a CNAME record for images.domain.com. This will allow the browser to open an additional set of parallel connections, hence speeding up load time (as tedster mentioned in the post above). See my last post in this [webmasterworld.com] thread.

What's the downside? (Other than making sure there are no duplicate content issues created in the process.)

There shouldn't be any duplicate content issues - you are not moving a copy of the whole site, just the images (or CSS, scripts, etc.).

Chico_Loco
6:20 pm on Jan 16, 2008 (gmt 0)

What's the downside?

DNS lookup times are the downside. And they can be expensive.

The real question is whether the expense of the additional DNS lookups will be outweighed by the gain from the extra parallel connections. If your HTML page has only 3 or 4 embedded static files (CSS, images, JS, etc.), then it might not make sense to use a multi-hostname approach, because the DNS lookup time required for the additional hostname(s) could be longer than just letting a single connection run on a single hostname.

A page with 100 images or something would definitely benefit from using a multi-hostname approach because the DNS lookup times wouldn't be that costly as compared to letting a single connection serve all those hits in a serial/sequential manner (versus the parallel loading).

HOWEVER, just having a single second hostname is of trivial value if you're loading a lot of objects from a single page, since it too suffers the same problem as the main server (serial requests). The real performance increase in this case would come from having multiple hostnames, e.g. www.domain.tld, img1.domain.tld, img2.domain.tld, img3.domain.tld - i.e. you'd split up the images over the three img hostnames and keep the main HTML document served from the www. That's where the logistics become a bit of a pain!

- Darrin Ward.
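A minimal PHP sketch of that multi-hostname split (not from the thread; the hostnames and helper name are made up, and it assumes the extra CNAMEs already exist). Hashing the path keeps each image pinned to one hostname, so repeat views still hit the browser cache:

(image-hosts.php):
<?php
// Hypothetical helper: spread image URLs over several hostnames so the
// browser can open more parallel connections (two per hostname in the
// browsers discussed above).
function image_url($path, $hosts = array('img1.example.com',
                                         'img2.example.com',
                                         'img3.example.com'))
{
    // Hash the path so a given image always maps to the same hostname;
    // a random choice would defeat the browser cache between page views.
    $index = abs(crc32($path)) % count($hosts);
    return 'http://' . $hosts[$index] . $path;
}

echo '<img src="' . image_url('/images/logo.png') . '" alt="Logo">';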

g1smd
11:47 pm on Jan 16, 2008 (gmt 0)

Thanks for that.

More things to digest - and with all the aspects here, it very much suggests that "it depends" and that "one size does not fit all".

The duplicate content concern arises when something.domain.com serves the same content as domain.com/something if you're not careful.

AlexK
11:00 am on Jan 17, 2008 (gmt 0)

A small promotion for Conteg - Content Negotiation for PHP [webmasterworld.com].

3. Add an Expires Header
4. Gzip Components

Conteg does both of the above by default. Include an edit-date in the DB with the content & you can also take advantage of 304s, etc.

I'm especially pleased to see Gzip included. It seems self-evident that smaller files will be delivered faster, yet folks have argued over this in the past ad nauseam. Conteg's Gzip is also load-balanced, which removes the last objection.

OK, I'll go and watch the video now.
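For reference, a bare-bones PHP sketch of those two practices (this is not Conteg's code; the 30-day lifetime is an arbitrary example, and Conteg wraps all of this up with rather more care):

(expires-gzip.php):
<?php
// Item 3: send a far-future Expires header - an arbitrary 30 days here.
$lifetime = 30 * 86400;
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $lifetime) . ' GMT');
header('Cache-Control: max-age=' . $lifetime);

// Item 4: gzip the output if the browser advertises support;
// ob_gzhandler checks the Accept-Encoding header itself.
ob_start('ob_gzhandler');

echo '<html>...the page...</html>';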

ergophobe
4:48 pm on Jan 17, 2008 (gmt 0)

Alex,

I had not been keeping track, but I had been following your Conteg development since the old days:

[webmasterworld.com...]

It's been on my TODO list *forever* to take your Conteg class and see how much of it I can work into Drupal (some aspects Drupal already handles, some it doesn't).

It is such a shortcoming in dynamic scripts and I think this is really great work.

Tom

madmatt69
9:57 pm on Jan 17, 2008 (gmt 0)

In regards to conteg, can't you just set those same headers in the httpd.conf?

AlexK
4:18 am on Jan 18, 2008 (gmt 0)

madmatt69:
In regards to conteg, can't you just set those same headers in the httpd.conf?

You can certainly set site-wide defaults in the Apache config, and that will help a lot. That also includes gzip, but once again, it is not load-balanced.

The point of dynamic content is just that - it is dynamic; every page potentially is different. Most (not all) of the httpd.conf settings are ignored by Apache as soon as it handles dynamic content, as the assumption behind most of the settings is that it is handling static content.

Let's take one of the most fundamental parts of the HTTP spec: providing a 304 when the content hasn't changed (typically via the If-Modified-Since header). The video (excellent, btw, and fully recommended) assumed that every website would be set up for 304s, and talked in terms of how to avoid them (304 as a backstop). The default for PHP is never to send a 304. PHP doesn't even get into this particular ballpark to play.

There are certainly some things that are better placed into the apache config. One of the efforts with Conteg has been to provide a simple means (via the Apache Notes mechanism) to allow interaction between the httpd.conf & PHP.
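To make the 304 point concrete, a minimal PHP sketch of the kind of handling being described (not Conteg's actual code; the edit-date helper is hypothetical and stands in for a database lookup):

(not-modified.php):
<?php
// Unix timestamp of the page's edit-date, pulled from the database.
$lastModified = get_edit_date_from_db();   // hypothetical helper

header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastModified) . ' GMT');

// If the browser sent If-Modified-Since and nothing has changed,
// answer 304 and send no body at all.
if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) &&
    strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $lastModified) {
    header('HTTP/1.1 304 Not Modified');
    exit;
}

// ...otherwise build and send the full page as normal.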

fside
3:33 pm on Mar 9, 2008 (gmt 0)

> URLs have an eTag: <

The ETag is supposed to act as a cache validator, telling the browser whether to reuse its copy or reload. Apparently, there's a way to disable generation of these. But it's new to me, too. I was interested in what it took to gzip-compress files, particularly in IIS 6.0 if one is on a virtual host and doesn't have administrative access to the control panels. On Apache, the default is apparently mod_deflate, and you merely have to configure your own .htaccess to comply. Couldn't be easier.

If IIS6 proves to be a headache, I might end up switching a few sites over to Apache. I don't know.

As for images, wasn't JPEG 2000 supposed to provide better compression with less apparent visual loss (sort of like 32-40 kbit/s stereo MP3 with music)? And if requests themselves are the problem, and not repeated partial requests and responses, then someone did suggest sprites for various images. I use a sprite 'block' myself for a whole host of buttons, on/off states, and up/down visual feedback (they didn't work in IE6, no surprise, even with the Imagecache 'fix').

AlexK
8:26 am on Mar 10, 2008 (gmt 0)

GZip & Compression:
It's only within the last month or so that the MS bot has started employing IMS (304s) & compression. I think that that is the best comment on just how up-to-date IIS & MS is with Content Negotiation.

The argument against compression has (I think) been about the extra load it would put on the server, coupled with bugs in early browsers. Since the *nix mindset has seemingly been that a '386 was good enough to serve up a (static) website, that was always a good argument. It rather falls away with modern multi-GHz servers and clients, however.

Typical compression for html pages is 75% (four-fold reduction). I have some pages that better 90% (approaching 20-fold reduction).

Most image files are never compressed (no point, for the reason you mention).

Another good method is to pre-compress as-is files (HTML files that already contain their own HTTP headers, with a compressed body). That way you can get the best of both worlds.

fside
11:21 pm on Mar 12, 2008 (gmt 0)

> the extra load <

That was the official reply I got, as well, when I asked the hosting outfit about this. They think it's switched off. Little do they know. I figured it out. But then that's Apache. I don't know about mod_gzip, but mod_deflate can be set at the directory or 'root' level, as well as system-wide.

I wish it were as easy with IIS. But from what I read, you need administrative privileges. And these Apache guys were not idiots. They weren't some 'call center'. Yet even they didn't understand, really, how the server worked, or at least the guy replying to my email didn't. I'll get far less comprehension from the hosting outfits running IIS, I would guess. I understand there is something roughly equivalent to .htaccess in IIS. But that would be a question properly aimed at that forum.

As you suggest, I tried to send out pre-gzipped .js, for example as x.js.gz, using rewrite rules. But I couldn't get it to work, and the auto-compression from the server sends a file of about the same size anyway. I have no problem uploading a pre-compressed file, but maybe I used the wrong deflate/gzip settings, I don't know. As for sub-domains, I do have a lot of separate graphics which are used for selectable tiled backgrounds (part of a 'preferences' screen, and they can't all be made into a single image 'sprite'). If half came off the main domain, and the other half of the images off a sub-domain, would that mean faster downloading of the site?

AlexK
5:00 pm on Mar 13, 2008 (gmt 0)

As you suggest, I tried to send out pre-gzipped .js

Yes, that is one way to handle it, but I was talking about as-is files:

(httpd.conf):
#
# For files that include their own HTTP headers:
#
#AddHandler send-as-is asis

As you can see, I do not use them. But it occurred to me that it is perfectly possible to have such a file with a header stating that it has a compressed body (and, of course, to gzip - or whatever - the body of the file). That gives the perfect combination of a static file and compression, with no extra load whatever.
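A sketch of how such a file might be pre-built with PHP (the file names are made up; the stored file carries its own HTTP headers plus a gzipped body, ready for Apache's send-as-is handler):

(build-asis.php):
<?php
// Compress the body once, at build time.
$body = gzencode(file_get_contents('page.html'), 9);

// mod_asis expects the file to start with its own headers, including a
// Status line, followed by a blank line and then the body.
$headers = "Status: 200 OK\r\n"
         . "Content-Type: text/html\r\n"
         . "Content-Encoding: gzip\r\n"
         . "Content-Length: " . strlen($body) . "\r\n"
         . "\r\n";

file_put_contents('page.asis', $headers . $body);

The obvious caveat is that every client then receives the gzipped body whether or not it sent Accept-Encoding: gzip, so this suits controlled situations rather than a general audience.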

If half came off the main domain, and another half of the images off a sub-domain, would that mean faster downloading of the site?

Ach! I do not know. Test it, and find out.

fside
1:19 am on Mar 15, 2008 (gmt 0)

> Ach! I do not know. <

Gots to be allowed to ask questions, here. Don't know, is fine. Don't need to say more.

AlexK
5:24 pm on Mar 15, 2008 (gmt 0)

It's a really good question, but I'm uncertain of the answer.

The fundamental issue is that browsers will restrict themselves to two simultaneous requests per site. Hence, using a different site for images than for the web pages will double the number of parallel browser HTTP requests and can roughly halve the page-load time. So the answer lies in the browser's definition of "site". Certainly, Google views sub-domains as different sites, but does a browser? Possibly.

A really good question. Test it & report back.

directrix
2:14 pm on Apr 10, 2008 (gmt 0)

Tastatura:
To do this properly, create a CNAME record for images.domain.com. This will allow the browser to open an additional set of parallel connections, hence speeding up load time (as tedster mentioned in the post above). See my last post in this thread.

Would creating an A record for images.domain.com also suffice to fool the browser into opening an additional set of parallel connections?
