|Logging Request-to-Response Times|
Live monitoring via PHP script(s)
Hi folks, thought I'd teach myself a new language, namely PHP. Getting on alright so far, but a new contract has a strict requirement:
"Response to client clicks should be no slower than 5 seconds maximum. Server request-to-response times should be recorded accurately, and presented on a regular basis."
Basically, I'd like to create a monitor for this using PHP, as I imagine it's probably the best language for the job. I would like the script(s) to work in two ways:
1. Log the date, time and IP of each request-to-response activity in a file (a simple database or even just a text file). This log file would then be interrogated, on a regular date, to create the required report.
2. Show these stats, live, in a back-office page of the main site (or maybe even from the front, as one of those "current active users" blurbs).
Could anyone shed some light on which script tags I should be working with for this? All comments greatly appreciated - deadline for this feature is quite tight...
You might want to look at PHP.net under microtime()
There is some excellent feedback there on how to track script running times and server response times.
Here is a direct link to the knowledge base: [php.net...]
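For anyone landing here later, a minimal sketch of the microtime() approach (the usleep() is just a stand-in for real work such as a DB query):

```php
<?php
// Time a block of code with microtime().
// Note: microtime(true) returns a float and needs PHP 5+;
// on PHP 4 you'd split the "msec sec" string it returns instead.
$start = microtime(true);

usleep(50000); // 50 ms of simulated work (db access, template build, etc.)

$elapsed = microtime(true) - $start;
printf("Request processed in %.4f seconds\n", $elapsed);
```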
You might also want to look at exec() so you can "ping" the client at least once for connection speed. A 5 second response time requirement is nice, but what about latency that you should not be responsible for? If I have a 14.4K connection, your latency can be as high as 2 or even 3 seconds... What about traffic jams between the visitor and the server?
I think you should make sure of just what they want or expect. It's one thing to average a .01 second response time on something that doesn't rely on external resources, but it's a whole other problem when you do depend on external resources. Spending time gauging how fast a client will be able to request a page is in the same boat: it's extremely difficult and inaccurate, and if you do it from your script you could be slowing the entire process down. If you must measure response times for browsers, your best option is to write a program clients can run on their own machines that makes a few requests an hour and keeps track of the times there.
Yah, the above post is completely right.
I use Mozilla under Linux for development. Using microtime to time my scripts (db access, etc), the times would be sub-second, but it would take seconds to render in Mozilla.
Using other faster browsers (IE, Opera, Konq) would make the site really responsive.
Thanks for the responses.
The project brief is actually quite vague, and while I have tried to get firmer information about exactly which response time they need, none of the contractees actually seems to know anything about it(!!!).
All of which is very irritating, and proving to be a major headache to spec at this end.
Basically, for the moment I'm only focussing on server-side processing times, as I'm certain the client doesn't understand what they're really asking for outside of this.
You can also use a tick function to time how long each function is taking so that you can rewrite parts to be less time consuming... that is what I had to do with an extremely large parser.
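For the record, a minimal sketch of the tick-function idea; the names here are my own, and note that this adds real overhead, so it's for profiling runs only:

```php
<?php
// declare(ticks=1) makes PHP call the registered function after
// (roughly) every statement, so you can see where a long script
// spends its time.
declare(ticks=1);

$tickCount = 0;

function profiler() {
    global $tickCount;
    // In a real profiler you'd log microtime(true) against a backtrace here
    $tickCount++;
}
register_tick_function('profiler');

for ($i = 0; $i < 100; $i++) {
    $x = $i * $i; // the code being profiled
}
unregister_tick_function('profiler');

echo "profiler fired $tickCount times\n";
```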
Call me mad, but I've had an idea;
Each page in this new project will rely on dynamic template header and footer includes. I was thinking that I could create a counter in the header, tag it with the session ID, and insert the current session details (e.g. YY/MM/DD, HH:MM, URL, Referer, etc) into a text file (etc.), then immediately start the counter.
Then the rest of the page is displayed, the footer is loaded, which includes a stop command for the counter. The counter result is then output in relation to the information taken via the header (date, time, etc.) and appended to the relevant entry.
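That header/footer scheme might look something like this; the file names, log path and line format are all my own assumptions, and it uses the session to carry the start time from header to footer:

```php
<?php
// --- header.php (sketch): very first thing, start the counter ---
session_start();
$_SESSION['req_start'] = microtime(true);

// ... rest of the page renders here ...

// --- footer.php (sketch): very last thing, stop the counter and log ---
$elapsed = microtime(true) - $_SESSION['req_start'];
$line = sprintf("%s\t%s\t%s\t%s\t%.1f\n",
    date('y/m/d H:i'),                                                // date, time
    session_id(),                                                     // session ID
    isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '-',   // IP
    isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '-',   // URL
    $elapsed                                                          // tenths of a second
);
// FILE_APPEND + LOCK_EX so concurrent requests don't interleave lines
file_put_contents('response_times.log', $line, FILE_APPEND | LOCK_EX);
```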
1. Would this be worth doing, with regards to how much it might slow down the server under heavy load?
2. (This may be redundant, but) How accurate would this be when considering heavy server load/stress?
Just as a point of reference regarding accuracy, I was thinking of setting the timer to record 10ths of a second (rather than milliseconds).
NetGrease, here I was all ready to post a reply by the 4th sentence of this thread and you beat me to it.
A counter logger at the very top of the page, and a counter logger at the very bottom of the page.
Then compare the two. You will sometimes see major differences (5%-20%) between the two counters. It is a very good indication of box speed. I usually use an SSI counter as it will be the most accurate. A graphic counter will not work because browsers don't download graphics in the order they appear on the page.
I've also tried putting a graphic at the very top of a graphics-intensive page and one at the very bottom, then comparing the pull times for all the graphics on the page (find the lowest, then find the highest).
As for specific script tags, just general PHP will do.
- Read the environment variables. (get referrer, time, browser, page viewed...etc)
- Append data to the disk log file.
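In plain PHP those two steps are just $_SERVER reads plus an append; the log file name and tab-separated layout are my own choices:

```php
<?php
// Grab the environment/server variables worth logging.
// HTTP_REFERER and HTTP_USER_AGENT are only set if the browser sends them.
$entry = array(
    'time'    => date('Y-m-d H:i:s'),
    'ip'      => isset($_SERVER['REMOTE_ADDR'])     ? $_SERVER['REMOTE_ADDR']     : '-',
    'page'    => isset($_SERVER['REQUEST_URI'])     ? $_SERVER['REQUEST_URI']     : '-',
    'referer' => isset($_SERVER['HTTP_REFERER'])    ? $_SERVER['HTTP_REFERER']    : '-',
    'browser' => isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-',
);
// One tab-separated line per request, appended under an exclusive lock
file_put_contents('access.log', implode("\t", $entry) . "\n", FILE_APPEND | LOCK_EX);
```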
It's nice to know I'm not (completely?) bonkers.
The actual time taken won't need to be made public, although I do like the graphics idea - a very nice visual yardstick I think. It just needs to be recorded for "back office"-style performance reports each week/month.
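For the weekly/monthly report itself, a quick sketch of interrogating the log; I'm assuming the elapsed time is the last tab-separated field on each line:

```php
<?php
// Summarise a tab-separated response-time log: count, average, worst,
// and how many requests broke the 5-second requirement.
$times = array();
foreach (file('response_times.log', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
    $fields  = explode("\t", $line);
    $times[] = (float) end($fields); // elapsed time is the last field
}
if (count($times) > 0) {
    printf("requests: %d  avg: %.1fs  max: %.1fs  over 5s: %d\n",
        count($times),
        array_sum($times) / count($times),
        max($times),
        count(array_filter($times, function ($t) { return $t > 5.0; }))
    );
}
```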
So I will go with the header/footer idea. I'm assuming of course that my "startCounter" and "stopCounter" need to be the very first, and very last, line of the header and footer respectively, to garner the most accurate results?
For those of you interested, a colleague and I have completed the code to record request processing times. If you're interested, you can sticky mail [webmasterworld.com] me for the code. I (obviously) won't provide support for this, but you're welcome to muddle through it at your leisure.
Interested in client-side response time measurement?
Nothing could be easier if you use MSIE and have some C++ programming skills.
MSIE fires events when the page is requested and when the transfer has finished; take timestamps at both and you are OK.
How useful would this kind of client-side tool be?
I like the idea of "start to finish" timing since it eliminates all external factors. I wonder, though, if some clients might find the numbers suspect - what if there is a delay, for example, in grabbing the file to start loading?
One other way to demonstrate the performance of a site but eliminate client factors is to place page requests from a variety of different locations at about the same time, and track the time to deliver the page. I used a free service at www.tracert.com to demonstrate to a web host that their server had a problem. They speculated that slow click response times were due to some weird problem on my end - my ISP, my PC, my browser, etc. (Naturally, they alleged that mine was the ONLY site experiencing any problems.) When I documented that pages were slow (or timed out) from ten different locations around the country, they got serious and fixed the problem. The same test, of course, could be used to demonstrate that a site is working properly. I don't think this is exactly what NetGrease's client is looking for, but it's a nice "real world" test that could be used to support the locally calculated load times.