D O N
I have found that careful optimization of GIFs can yield nice results with smaller file sizes, even for photos (the smaller and the more monochromatic the image, the better).
A couple of things to keep in mind:
1. Reducing the palette of a GIF file by even a couple of colors will make a significant difference in file size. Photoshop 5.5+ has a greatly improved GIF export tool that makes this a snap. Keep this in mind when you're laying out your slicemap.
2. A GIF file is optimized horizontally, meaning that making pixels side-by-side the same color will have a greater impact on file size than making pixels on top of each other the same color. This is what makes CRLI optimization so effective. CRLI (Consecutive Run-Length Insertion) simply alternates rows of pixels with a solid color. It's a very simple process: you can use it to "water-mark" an image if white is used, or you can increase the contrast by using a darker color, while almost halving the file size. This is the most versatile optimization technique in my toolbox. Here's how:
a. Create a new image in PS: 1x2 px, 72 dpi, transparent background.
b. Make one pixel the color you want and keep the other transparent.
c. Select the entire image with the Rectangular Marquee, then click Edit->Define Pattern and name it.
d. Open the file you wish to optimize, then create a new layer on top of the image and select it.
e. Edit->Fill->Pattern, and select your pattern.
f. Export the GIF as you normally would.
Note: By locking the transparency on your CRLI layer you can play with the colors and find the effect you want. Reducing the opacity of this layer to anything below 100% will negate the file size advantages.
It will take some experimentation to get the desired results, but this simple step will allow for rich, textured graphics at a fraction of the download cost (see the code sketch at the end of this post).
3. My final bit of advice is that no program, no matter how sophisticated, will be able to replace the pain of going through a graphic pixel by pixel; it is the only way to have nice-looking, anorexic graphics. The payoff is cumulative: 5 bytes here and 2 bytes there across an entire slicemap can translate into seconds of performance.
Depending on the color and the image, CRLI can look more like a screen pattern because the lines are a single pixel tall. That's why experimenting with the colors is important; for example, I've found that #333333 usually looks less liney than straight black.
I would not propose that CRLI should be applied blindly across the board. It's appropriate for certain graphics and simply takes advantage of one of the properties of a gif that most designers (in my experience anyway) don't know very much about.
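If you want to preview the CRLI effect without committing to the Photoshop pattern workflow, here is a minimal browser-side sketch of the same alternating-scanline idea. The element ids and file name are hypothetical, and you'd still export the final GIF from your image editor; this just shows the effect on screen.

```html
<img id="source" src="photo.gif" alt="">
<canvas id="target"></canvas>
<script>
window.onload = function () {
  // Draw the image, then paint a 1px solid line across every other row.
  // Solid horizontal runs are exactly what GIF's LZW compression likes.
  var img = document.getElementById('source');
  var canvas = document.getElementById('target');
  canvas.width = img.width;
  canvas.height = img.height;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  ctx.fillStyle = '#333333'; // often looks less "liney" than straight black
  for (var y = 0; y < canvas.height; y += 2) {
    ctx.fillRect(0, y, canvas.width, 1);
  }
};
</script>
```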
CRLI technique - don't you end up with very liney images?
I've used this approach at times where bandwidth really matters, and yes, you see obvious lines. But you can even use that look intentionally. It looks a bit like TV scan lines, kind of artsy, and for a while it was quite the trend on corporate sites.
Here are a couple of other tricks to lower the number of colors in a GIF palette. They involve choosing "Save As" and saving as a GIF, not using the GIF export plug-in.
1. Select every pixel that LOOKS white (or the lightest color) and make it actually that hex color. One easy way is to keep your target color as the background color and then press Delete. The safest approach here is to be sure your monitor is set to 24-bit color; if you do this in 16-bit, it may look odd on a 24-bit color display.
2. Do the same kind of operation on every pixel that looks black, or the darkest color.
3. If you're already in indexed color mode for techniques 1 and 2 above, then go to RGB and back to indexed color to actually make the embedded palette smaller. Saves a few more bytes.
4. Here's a poorly documented Photoshop feature that can weight the color palette toward a particular part of the GIF image. It's very handy when a GIF "almost" works, but one part of the image shows color banding unless you crank the number of colors up too high.
Just make a selection in the area of the banding and leave the selection active when you go to indexed color mode. The chosen palette will give higher weight to those colors.
5. If you want to squeeze more compression out of a JPEG image, try taking it to Lab color space first and running a very big (3-4 px) Gaussian Blur filter on just the a and b channels. Because the L channel holds all the lightness detail, you can afford this big a blur without losing clarity. When you return to RGB space, the JPEG compression can squeeze the image down smaller thanks to your blurring.
However, the new algorithms in Photoshop 7's companion application, ImageReady 7, do a humdinger of a job even without CRLI or the above tricks.
Adobe has done some solid work tuning up the compression algorithms in version 7. The old rules of thumb about what should be a jpg or a gif don't necessarily apply any more. Sometimes just using "Save for the web" without any of the above techniques gives surprising results - results that are counter to conventional advice on which format to use.
Alternatively, if a photographic image is limited in its color range (mostly warm, mostly cool, etc.), then it's worth giving GIF a try.
If a "flat color" image has lots of curvy edges, then it's worth giving jpg a try. The new JPEG algos allow much better compression before the artifact around sharp edge transitions become visible. And there's also been great improvement to the jpg weighting feature (it uses an alpha channel) which was pretty buggy in version 6.
Fireworks, in my opinion, is the best program for optimising images; I've hammered images down to tiny files and continue to do so. But then maybe this is because I don't use ImageReady, and I still hold in my mind that ImageReady is used by designers coming from Photoshop, which I've found lends itself to look and feel rather than speed of download.
You could also use progressive JPEGs, which means the image will at least load on the screen gradually, giving the visitor something to look at sooner.
If the images are big then you might consider using some kind of animated GIF of, say, 0.5K; this loads first while the other, larger file is cached. The result could be an animation that captures the interest while the rest of the page loads (a sketch follows).
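A minimal sketch of that placeholder idea, assuming JavaScript is available (the file names are hypothetical): show the tiny animated GIF immediately, and swap in the big image once it has finished downloading.

```html
<img id="hero" src="tiny-loader.gif" alt="">
<script>
// Preload the large image; when it's ready, replace the placeholder.
var big = new Image();
big.onload = function () {
  document.getElementById('hero').src = big.src;
};
big.src = 'large-photo.jpg';
</script>
```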
Style sheets will definitely cut the page sizes down.
You could also put a quick-nav drop-down at the top of the page; that way repeat visitors on a slower connection can select from the drop-down box and jump to the relevant page before the whole page has loaded. See the snippet below.
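A bare-bones version of such a drop-down might look like this (the URLs are placeholders):

```html
<form>
  <select onchange="if (this.value) location.href = this.value;">
    <option value="">Jump to...</option>
    <option value="/products.html">Products</option>
    <option value="/support.html">Support</option>
    <option value="/contact.html">Contact</option>
  </select>
</form>
```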
Hope this helps.
Glyn.
Furthermore, HTML compression depends on the style of the site. I've made great savings on pages with large tables (hundreds of rows), first by splitting them into smaller tables to create an incremental display. Don't split at every line or you're wasting space again; split into sets of perhaps 20 rows, about a screenful (see the snippet below).
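The reason splitting helps is that browsers typically wait for a table's closing tag before rendering any of it, so several smaller tables paint as they arrive. Schematically (the row contents are placeholders):

```html
<!-- Each table renders as soon as its </table> arrives -->
<table>
  <tr><td>row 1</td></tr>
  <!-- ... rows 2-20 ... -->
</table>
<table>
  <tr><td>row 21</td></tr>
  <!-- ... rows 22-40 ... -->
</table>
```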
Also, when you want to go extreme on long pages of repeated, similar elements (such as navigation or item selection), you can use JavaScript compression.
I had a page that with font tags was about 150k, with CSS was about 36k, and with JavaScript compression is around 12k. I'm currently using the CSS version because it reduces to around 5-7k with mod_gzip, and because the JavaScript can be slow on slow machines and foil spiders.
But if you really need the extra speed, and I mean SPEED, you should use JavaScript compression. I could deliver a 700-row table to dial-up almost instantly. Basically it consists of storing only the changing elements in the HTML, and building all the repeated elements using JavaScript. This can create extreme savings, and make certain kinds of pages possible that are normally unbearable (like a 700-row browsable list).
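As a rough illustration of what's meant by JavaScript compression here (the item data and markup are made up): the page ships only the values that differ per row, and a loop rebuilds the repeated table markup on the client.

```html
<script>
// Only the changing data travels over the wire...
var items = [
  ['1001', 'Widget', '4.95'],
  ['1002', 'Gadget', '7.50'],
  ['1003', 'Gizmo',  '2.25']
];
// ...and the repeated markup is generated client-side.
document.write('<table>');
for (var i = 0; i < items.length; i++) {
  document.write('<tr><td>' + items[i][0] + '</td><td>' +
                 items[i][1] + '</td><td>' + items[i][2] + '</td></tr>');
}
document.write('</table>');
</script>
```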
SN
I think that someone posted a link to Andy King's site earlier in this thread (www.websiteoptimization.com). In his book, Speed Up Your Site, he goes into some pretty 'extreme' stuff, like file sizes and TCP/IP packets:
On a high-speed connection, the first packet out carries roughly 1500 bytes - 40 bytes (TCP/IP stuff) - (HTTP header size) bytes. Each subsequent packet then carries 1500 bytes - 40 bytes.
So, for example, a 900-byte CSS file would leave room in either the first or later packets for more data for that HTTP request.
Alternatively, a 1600-byte file (script or image or whatever) would use 2 packets, and could leave close to 1000 bytes of empty packet.
I think I have that right. Interesting stuff, and hard to optimize for (the packets), but minimizing HTTP requests is easier... the sketch below shows the arithmetic.
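Here's a small back-of-the-envelope helper for that packet arithmetic. The 1500-byte MTU and 40-byte TCP/IP overhead come from the figures above; the 300-byte HTTP header size is just an assumed value for illustration.

```html
<script>
var MTU = 1500;         // typical Ethernet MTU, in bytes
var TCP_IP = 40;        // TCP + IP headers per packet
var HTTP_HEADERS = 300; // assumed size of the HTTP response headers

// Rough number of packets needed to deliver a file of the given size.
function packetsFor(fileBytes) {
  var first = MTU - TCP_IP - HTTP_HEADERS; // body bytes in packet one
  var later = MTU - TCP_IP;                // body bytes in packets two+
  if (fileBytes <= first) return 1;
  return 1 + Math.ceil((fileBytes - first) / later);
}

// packetsFor(900)  -> 1 (fits in the first packet)
// packetsFor(1600) -> 2 (the second packet is mostly empty)
</script>
```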
Re: combining graphics. I've found that certain small GIFs are faster as one image, e.g. the header bars of rounded boxes, rather than having the corners and title separate.
On the other hand, the REAL important factor is perceived speed, and I've found sites where large graphics were split into small squares, which simulated incremental loading, making the site FEEL much faster, which is the important thing in the end.
SN
However, I notice that it removes all the spacing and carriage returns in the HTML code.
Apart from this being difficult to read, does anyone know whether this will affect how a search engine bot reads the code? I.e. will it get meta tags mixed up, or will it read them exactly the same as before?
I doubt it will make any difference to spiders at all.
JavaScript, on the other hand, is a client-side scripting language. So if you put common data into an external JS file, you'd indeed download that part only once and be able to use it in many places.
But this is HIGHLY inadvisable, because of the associated scripting issues (some visitors have JavaScript turned off).
Best to rely on something like mod_gzip, which can deal very effectively with highly repetitive HTML code. With the compression you can save large amounts of data.
SN
There's no gain in speed from using PHP to generate your HTML as such, as the same HTML has to get sent to the user's browser regardless of how it's created, but a lot of the comments in this thread still relate to the actual HTML you generate. In other words, make sure that any dynamically generated HTML in your PHP code uses CSS and isn't loaded down with spaces and tabs.
PHP will allow you to shift any browser sniffing you do to the server, which can be a speedier alternative to delivering two sets (or more) of content and working it out on the client side with Javascript. Having a single external PHP-enabled style sheet with simple PHP Netscape 4 sniffing is a personal favourite :)
Hi Mipapage
That goes against my philosophy, so I was wondering if you have any stats to show the pros & cons, or any guidelines on how to do that properly.
My philosophy for scripts, CSS, etc. is to move as much as possible to external files. The reason is that you typically use the same scripts and CSS on all the pages in your site, and most browsers are pretty good at caching things, so they only download them once. The overhead of the HTTP request is trivial, and only applies for the first page. Subsequent pages then load much faster, as the JavaScript files and CSS are cached.
As for images, breaking them up into a matrix makes sense for two reasons. Firstly, if you choose the matrix such that some of the image segments are just plain colours or images with very low complexity, then you can really reduce the size with little degradation by using higher JPEG optimisation for some segments (or reducing the colours if it's a GIF). Compared to the saving, the HTTP request overhead is trivial. Secondly, you get the impression of faster loading because the files are downloaded in parallel.
Regarding the rest of your post, paying attention to packet sizes to make optimum use of bandwidth is a great idea; thanks for the tip!
Shawn
I agree with killroy & jetboy_70, but I'd like to add:
No, you'd get performance degradation! Use php for dynamic (i.e. context sensitive) content. If the content is static, use a templating engine to generate the static html pages so your server can serve them faster.
Shawn
I know there are plenty of PHP fans out there who might not agree... That's what makes the world interesting: all the different points of view ;)
A badly coded template engine is slower than a well-coded PHP processor.
(PS: I've written my own scripting language and processor and have used it in all my commercial sites for years.)
But of course, everything else being the same, the template engine might be a tiny fraction faster as it's simpler.
I wouldn't rely on it though.
SN
I don't mean a server-side scripting templating engine like smarty, which in effect just produces php. I mean a program which merges your content with your template to produce static html pages, which you then put on your server. Whenever your content changes, you regenerate the relevant html pages and put them on your server, so your server serves static html, not php.
But you guys are right, it is a big debate, and I don't think there is a right answer for all situations. It is very contextual. Static HTML pages might be good for a small-to-medium site; on a big site the number of pages might be prohibitive, so server-side scripting might be more sensible.
Shawn
Anyway, this is only a saving in CPU time on the server (which may be important) but doesn't change at all what needs to be downloaded.
In fact, for my own scripting language I created caching systems which would store the result of the script execution in a database, and re-execute the script only if certain source databases had changed. It all really depends on your traffic and needs.
But I recently estimated the cost of several servers in a load-balanced cluster, and found it really cheap and practicable. In fact I could get about 3 functional servers for the price of a standard desktop, easily sidestepping performance issues.
Excuse the tangent; this isn't a page download speed issue, just server CPU load stuff.
SN
In the past I never used external JavaScript (.js) files to lower the HTML size of the page, because of the mentioned fact that some visitors will have this option turned off or not available. But a site I am currently working on will have the download size savings without the support concerns. Though I should mention up front, this can only be done on a dynamic site (server-side scripting or SSI/ASP).
This site is 100% Perl based. No static pages; in fact the document_root only has .htaccess with a mod_rewrite bla bla bla...
Back to the point. On the first request of a new session, the page includes a small JavaScript snippet to add a variable (&js=yes) to all anchors/forms AND to download the JavaScript include (.js) files. The code to download the .js files is added at the end of the page (just before </body>) so the page will display without lag. These files are to be used on the next page request. On the next request my scripts will change the output of the HTML to be supported by the already-downloaded .js files. If the scripts do not see the 'js=yes', then JavaScript is not supported and the entire page is sent to those poor souls. A sketch of the bootstrap follows.
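A minimal sketch of that first-request bootstrap, placed just before </body> (the include path and the exact parameter handling are hypothetical; the real site presumably has its own details):

```html
<script>
// Tag every link with js=yes so the server knows JavaScript is on.
for (var i = 0; i < document.links.length; i++) {
  var link = document.links[i];
  link.href += (link.href.indexOf('?') == -1 ? '?' : '&') + 'js=yes';
}
// Add a hidden js=yes field to every form for the same reason.
for (var f = 0; f < document.forms.length; f++) {
  var flag = document.createElement('input');
  flag.type = 'hidden';
  flag.name = 'js';
  flag.value = 'yes';
  document.forms[f].appendChild(flag);
}
// Fetch the .js includes now so they're cached for the NEXT page.
document.write('<script src="/includes/site.js"><\/script>');
</script>
```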
On the server side, to keep script bloat down (yes, this matters too), I actually have 2 different .pm files to write the pages: one with the JS and one with standard HTML. Then in the BEGIN block I just 'require' the correct .pm file depending on the value of param('js').
The .js files basically define variables, and then a document.write() of the relevant variable is used where needed.
Since the first request of any page supported by external .js files needs to download them on the first request anyway, I see no difference in speed at all from the "unsafe" way of doing this, apart from the fact that the first page has <1k extra of JavaScript. The only thing to watch for in doing this is to make sure to use the correct 'version' of JavaScript when you test for it (the latest version you're actually using on your pages, as some visitors may support only older versions).
The site is fast: small reused images (except for specific product images), almost zero white space, external .js/.css files, streamlined code (few tables, no nesting), and nothing but metas and actual page content on subsequent requests.
It achieves only 2-3 seconds on 56k, yet shows an actual page that would normally take more than 10 seconds.
Q: Can anyone see any reason I should not continue using this?
------------------
Since I read so many good tips on image use, I'll throw mine in (maybe not so great, though). This has come in handy many times for some great reductions in image file size and physical size.
<img src="./some_graphic.ext" style="background-image: url('./some_supporting_graphic.ext');">
This can work great for tiling a background behind a transparent GIF. Or:
<span style="border: 1px solid #999; padding: 4px; background-color: #eee;"><img src="./some_graphic.ext"></span>
Excellent for emphasising product images without baking the borders/padding/background colors into the image itself (the style values above are just examples).
Remember, your image must have transparent areas (e.g. a transparent GIF) or nothing will show through...