
Testing a webserver


timboellis

10:42 am on Apr 19, 2005 (gmt 0)



I have set up a webserver (Apache 1.3.33) and I'm wondering what would be the best way to test it with regard to CPU usage. Is there something I could install on the public_html side and hit the way a visitor would, something that might challenge the server, just so I can see whether my computer is up to it or not?

mack

4:08 am on Apr 20, 2005 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



What you really need to look for is a server load tester. I can't recommend any one specifically because they all tend to be pretty much the same. There is a free one available from Microsoft, and I believe there are quite a few open source ones as well.

Rather than installing it on the server, you install it on a different computer. The load tester then hits the server with a high volume of simulated requests to see how well it holds up. It works best if you run the test over a local area network; that way your internet connection speed won't slow things down, and the traffic will come in as fast as your network will allow.
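Just to illustrate the kind of tool that simply hammers one URL, here's a minimal sketch in Python that fires a pool of concurrent requests at a single page and reports requests per second. The URL, request count, and concurrency below are placeholders for whatever your own setup looks like:

    # Brute-force load sketch: hit one URL with N concurrent workers and
    # report how many requests per second the server sustained.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://192.168.1.10/index.html"   # test server on the local network
    TOTAL_REQUESTS = 1000
    CONCURRENCY = 20

    def fetch(_):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
            return True
        except Exception:
            return False

    start = time.time()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(fetch, range(TOTAL_REQUESTS)))
    elapsed = time.time() - start

    print("completed: %d  failed: %d" % (sum(results), results.count(False)))
    print("requests/sec: %.1f" % (TOTAL_REQUESTS / elapsed))

Run something like that from a second machine on the same LAN while you watch CPU usage on the server itself.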

Mack.

sitz

1:06 am on Apr 21, 2005 (gmt 0)

10+ Year Member



Mmmm. They /can/ be the same, if they're just brute-force 'hammer on this URL until I tell you to stop' types. There are also the "feed me a snippet of an access log and I'll replay those requests back to the server" types.
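As a very rough sketch of the second kind, assuming a standard Apache access log, something along these lines reads the GET lines out of a log and replays them against a test host (the target address and log filename are placeholders):

    # Replay the GET requests from an Apache access log against a test server.
    import re
    import urllib.request

    TARGET = "http://192.168.1.10"
    LOG_FILE = "access_log"

    # Pull the path out of the '"GET /path HTTP/1.x"' part of each log line.
    request_re = re.compile(r'"GET (\S+) HTTP/1\.[01]"')

    with open(LOG_FILE) as log:
        for line in log:
            m = request_re.search(line)
            if not m:
                continue   # skip POSTs, malformed lines, etc.
            try:
                with urllib.request.urlopen(TARGET + m.group(1), timeout=10) as resp:
                    resp.read()
            except Exception:
                pass       # a real tool would record the failure and its timing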

Benchmarking is a bit of a black art, really. Unless you REALLY know what you're doing (or there's something very specific you're testing for), the numbers you get may actually be meaningless. Let's take the simplest example: you have a site with 20-30 static documents, and you want to know how many hits/sec your server can handle. Easy, right? Benchmarking a single URL will tell you how fast your webserver returns that page (and ONLY that page); what if 30% of your documents are < 15k, but the others are all 350k? The time the server spends serving up one of the larger files will cut into your overall capacity. Additionally, knowing how many of each file you can return per second isn't all that useful if you don't know what your actual traffic is going to look like; you won't know what ratio of requests comes in for small files vs. larger files.
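To put some made-up numbers on that last point: suppose 70% of the requests hit the ~15k pages and 30% hit the 350k files. The average transfer per request is then nowhere near what a naive single-small-page benchmark would suggest:

    # Illustration with a made-up request mix: the average bytes moved per
    # request sits much closer to the big files than the small ones.
    small_kb, large_kb = 15, 350
    mix_small, mix_large = 0.7, 0.3   # fraction of *requests*, not of documents

    avg_kb = mix_small * small_kb + mix_large * large_kb
    print("average KB per request: %.1f" % avg_kb)                            # ~115.5 KB
    print("vs. a small-file-only test: %.1fx more data" % (avg_kb / small_kb))  # ~7.7x

Change the mix and the "capacity" number changes with it, which is exactly why a single-URL benchmark can mislead.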

With dynamic content, things get a lot trickier: are you waiting on a database? Are you parsing local files to generate your HTML? Are you making HTTP requests to remote systems to generate your HTML?

The main thing is to figure out *precisely* what you're testing *for*, and then test for that one thing. For instance, I'm going to be setting up some benchmarks tonight to compare the performance of two different webservers in a controlled environment, since they currently have radically different (and as-yet unexplained) performance characteristics. For this test, I'll fire up the 'good' webserver, replay a log at it at a specific rate, see how many requests per second I get back, and watch the server with a syscall tracer (such as 'truss' or 'strace') to see where the process is spending its time; this test will run for 10-15 minutes at minimum. I'll then shut down the 'good' webserver, fire up the 'bad' one, and run the same test. The syscall traces will be compared, and the overall results brought up in a meeting tomorrow for discussion.
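For what it's worth, the client half of that kind of test (offer requests at a fixed rate, measure what actually comes back) can be sketched like this. The target, paths, rate, and duration are placeholders; it's single-threaded, so a slow server will drag the offered rate down, and a real harness would use multiple workers; the syscall tracing (truss/strace) happens separately on the server itself:

    # Offer requests at a fixed target rate and report the sustained rate.
    import time
    import urllib.request

    TARGET = "http://192.168.1.10"
    PATHS = ["/index.html", "/about.html"]   # in practice, paths pulled from a log
    RATE = 50          # requests per second we try to offer
    DURATION = 600     # seconds (a 10-minute run)

    interval = 1.0 / RATE
    sent = completed = 0
    start = time.time()

    while time.time() - start < DURATION:
        path = PATHS[sent % len(PATHS)]
        sent += 1
        try:
            with urllib.request.urlopen(TARGET + path, timeout=5) as resp:
                resp.read()
            completed += 1
        except Exception:
            pass
        # crude pacing: sleep out whatever is left of this request's time slot
        time.sleep(max(0.0, (start + sent * interval) - time.time()))

    elapsed = time.time() - start
    print("offered %.1f req/s, completed %.1f req/s" % (sent / elapsed, completed / elapsed))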