Forum Moderators: bakedjake
I've got a dedicated server that gets very sluggish under heavy load. It's running CGI forum software and a CGI ad-serving script. My memory allocation looks like this:
             total       used       free     shared    buffers     cached
Mem:        516952     502776      14176     134000      10488      68160
-/+ buffers/cache:     424128      92824
Swap:       265064     248048      17016
Symptoms are slow performance, with scripts timing out. I could barely log in via SSH due to slow response time.
How would you Linux pros start attacking this problem? If I do a ps aux, I've got pages of processes listed, but nothing pops out as a gigantic problem. Maybe the pages of processes are themselves the gigantic problem. Any advice for digging into this?
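A first-pass triage I'd run on a box like this, using only standard procps/coreutils tools (these are generic diagnostics, not anything specific to your setup):

```shell
# How much memory and swap are actually in use? (values in KiB)
free -k

# Which processes hold the most memory right now?
ps aux --sort=-%mem | head -n 15

# Count processes per command name -- a flood of identical CGI
# children will show up at the top of this list.
ps -eo comm= | sort | uniq -c | sort -rn | head
```

If the last command shows dozens of copies of your forum or ad script, each one is holding its own copy of the interpreter in memory, and that alone can push a 512 MB box into swap.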
You want the second line
-/+ buffers/cache: 424128 92824
to show something closer to a 50/50 split under normal load: roughly half your memory taken up permanently by long-running processes (Apache, MySQL, whatever), and the other half available for dynamic use, such as performing large SQL queries, building up a huge page, or caching data in memory for speed.
You need free memory for the system to stay snappy, even under load.
Adding memory should help if you are swapping, and your output shows swap nearly full, so you are. Also, if you suspect that the zillions of processes are causing the problem, try adding up the resident set sizes (RSS) of all the processes that are not present under light load. (There are options to ps that make the resident set sizes show up.)
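One way to do that tally with a modern procps ps; "httpd" here is just a placeholder for whatever your CGI children are actually called:

```shell
# Sum the resident set size (RSS, in KiB) of every process whose
# command name matches a pattern -- replace "httpd" with your own.
ps -eo rss=,comm= | awk '/httpd/ { total += $1 } END { print total+0, "KiB" }'

# Or just list all processes sorted by RSS, biggest first:
ps -eo rss=,comm= | sort -rn | head
```

Run the same commands during light load and during a slowdown; the difference between the two totals is roughly how much extra RAM the traffic spike is demanding.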