Forum Moderators: phranque
I tried to find info on server performance for mid-size files but came up blank. How much CPU and memory is consumed by a single request for a 1Mb file? My thinking was that any file that takes the server more than a second to send means the server's CPU is tied up for more than a second, which I don't like. But maybe serving a file takes such a small fraction of the CPU's attention that I needn't worry about it.
Perhaps a better, more specific question is: at what rate can I serve 1Mb files before performance degrades, on an otherwise healthy P4/1Mb server that normally has a load average of about 0.1? And does anyone know of any good references on server performance for newbies? Thanks much.
mod_status might be able to give you an idea of the CPU time spent servicing a request; the inverse of that is the maximum number of connections per second you can take before getting backed up on the CPU.
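If mod_status isn't already enabled, a minimal httpd.conf fragment along these lines turns it on (module path and the access restriction are assumptions you'd adjust for your setup; ExtendedStatus adds the per-request CPU and timing detail):

```apache
# Load the status module (exact path varies by distro -- assumption)
LoadModule status_module modules/mod_status.so

# Collect per-request CPU and timing info (small overhead)
ExtendedStatus On

# Expose the report at /server-status, restricted to localhost
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
```

Then hit http://yourserver/server-status from the box itself to see per-child CPU usage and requests in flight.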
My only concern would be keeping enough httpd children around to service the other requests. If the file takes 30 seconds to download and 10 people/second start downloading it, you need MaxClients >= 300 just to serve those downloads, let alone everything else. As much as I love Apache, lighttpd or a reverse proxy like squid would be better suited for the higher-volume cases.
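The arithmetic above is just Little's Law: concurrent connections = arrival rate x time per download. A quick sketch, using the numbers from this example:

```python
def required_clients(requests_per_sec, seconds_per_download):
    """Concurrent httpd children tied up by these downloads alone
    (Little's Law: L = lambda * W)."""
    return requests_per_sec * seconds_per_download

# 10 new downloads/second, each holding a connection for 30 seconds:
print(required_clients(10, 30))  # 300 -- so MaxClients must be at least 300
```

The download time itself depends on the client's bandwidth, which is why slow clients are what eat your child processes, not CPU.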
I wrote a series on LAMP tuning over at ibm.com/developerWorks/linux that covers some of the basics of Apache tuning, though it's mostly targeted at dynamic content.
Sean