If WebmasterWorld supported just level 1 GZIP compression on its servers, 56K modem users would see page loads roughly 4 times faster, and the servers' communication load would be cut by the same factor. System CPU usage would drop while application CPU usage would increase; at level 1, the savings from fewer bytes to handle, far fewer context switches per message, and so on would roughly balance the added application CPU cost. It seems like a win-win.
Vastly better performance for end users!
That results in machines wasting CPU cycles trying to compress something that is already compressed. In fact, the machines you would most expect to benefit from compression (dial-ups) are the ones most likely to already have hardware compression running on the connection.
If you have your own server, or even just a page with PHP, try the gzip function or mod_gzip yourself - forget theories without real-world trials. Whatever connection you use, gzip on the server side will always be faster. Anyone who tries this will instantly understand why it's a no-brainer. Forget all the theories about dialup modems doing 4:1 compression, etc. Sending one fourth the data in the first place is always faster than having the host ISP's modem re-compress it. V.42bis compression steps out of the way when it sees pre-compressed data anyway.
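The double-compression point is easy to check for yourself. Here's a minimal Python sketch - the input is synthetic HTML-ish text standing in for a forum page, so the sizes are illustrative, not measurements from WebmasterWorld:

```python
import gzip
import random

random.seed(0)
# Synthetic stand-in for a text-heavy forum page (~48 KB of repetitive words).
words = ["gzip", "modem", "forum", "server", "apache", "thread", "bytes", "<td>"]
page = " ".join(random.choice(words) for _ in range(8000)).encode()

once = gzip.compress(page, compresslevel=1)   # what the server would send
twice = gzip.compress(once, compresslevel=9)  # what a link-level compressor would then try

print(len(page), len(once), len(twice))
# The first pass shrinks the page to a fraction of its size; the second pass
# finds nothing left to squeeze and only adds container overhead.
```

That second number is the whole argument: compress once at the server and the modem's compressor has nothing useful left to do.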
The only worthy argument against is FBIB, and I think it's moot anyway with the right configuration. The more text on a page, the more valuable gzip is - and it's *extremely* valuable for any forum.
I'm sure there's a good reason for it though. Maybe a wholesale change to external CSS for WebmasterWorld would disrupt WebmasterWorld's excellent G rankings? Is there another possible reason?
A better solution than mod_perl is FastCGI. FastCGI forks your Perl web apps as daemons (outside of Apache), preforking if necessary, and then intercepts the I/O and redirects it to a small Apache module. It will dynamically adjust the waiting process pool based on utilization and can kill off processes that grow too large after they finish their current request. The process pool can exist on multiple machines, if necessary. The "application server"-ish paradigm ensures that the number of potentially memory-intensive processes is minimized and that a thin Apache can be used to serve content.
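As a sketch, a setup along those lines might look like this in Apache with mod_fastcgi - the paths and pool sizes here are hypothetical, so check them against your own install:

```apache
# Load the thin FastCGI glue module (path depends on your build).
LoadModule fastcgi_module modules/mod_fastcgi.so

# Hand .fcgi scripts to mod_fastcgi.
AddHandler fastcgi-script .fcgi

# Statically start the Perl app as an external daemon pool:
# 4 preforked processes, restarted if they die.
FastCgiServer /var/www/app/forum.fcgi -processes 4

# Or let mod_fastcgi manage a dynamic pool, reaping idle
# processes between requests.
FastCgiConfig -minProcesses 2 -maxProcesses 10 -killInterval 300
```

The static `FastCgiServer` form gives you the predictable preforked pool; the dynamic `FastCgiConfig` form is what does the utilization-based grow/shrink described above.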
But wow, look at the AdSense forum: almost 10x compression if gzipped.
http://www.webmasterworld.com/forum89/ is not gzipped. If it were, the requested page (42832 bytes) would compress to the following sizes:
Level   Bytes    % of orig   sec @1 KB/s   @3.5 KB/s   @10 KB/s   @100 KB/s   utime
0       42848    100.0374       41.8          12          4.2        0.4        0
1        5307     12.3903        5.2           1.5        0.5        0.1        0
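For anyone checking the arithmetic: the transfer-time columns are just bytes divided by link speed, taking 1 KB = 1024 bytes. A quick Python verification using the byte counts quoted above:

```python
def transfer_seconds(nbytes: int, kb_per_sec: float) -> float:
    """Seconds to move nbytes over a link, taking 1 KB = 1024 bytes."""
    return nbytes / (kb_per_sec * 1024)

for nbytes in (42848, 5307):  # level 0 and level 1 sizes from the table
    times = [round(transfer_seconds(nbytes, r), 1) for r in (1, 3.5, 10, 100)]
    print(nbytes, times)
# 42848 [41.8, 12.0, 4.2, 0.4]
# 5307 [5.2, 1.5, 0.5, 0.1]
```

At 3.5 KB/s (a realistic modem throughput), that's 12 seconds versus 1.5 seconds per page.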
Again Webmasterworld is greatly appreciated, I'm sure, by all who posted here.
I found it an interesting discussion with quite a few useful suggestions, with some ads (useful as well) thrown in! Time to put it to rest.
On a separate note, how are you measuring this:
" the mass majority of this audience is on high speed net connections. We run about 80% on dsl/cable or other high speed. We even have about 10% on t1's or higher here. "
> We run about 80% on dsl/cable or other high speed.
Some mods really let me have it on that stat. That stat is page views and *not* uniques; by uniques, it's about 60% cable/DSL/faster. That means cable users abuse things at a higher rate... ;)
badtzmaru - thanks for the tip on mod_perl - I might just try it on a couple of mission-critical programs.
Out of "everything" (CPU & RAM)... the box is a 2.8 GHz Intel with 2 GB of RAM running Red Hat.
> perceived
There is a noticeable 5-second delay before anything shows in the browser. At that point people apparently start hitting reload because they think the page has stalled (which causes more server load and more reload presses). That 5-second delay continues to grow as the server slowly fades away under the load.