
PHP Server Side Scripting Forum

    
Slow response times (php + MySQL)
takes about 4-5 secs for page to load
walkman · msg:1252965 · 12:19 am on Apr 29, 2005 (gmt 0)

So my site is finally done, but the main page takes about 4-5 seconds to load. It's about 50KB and I have a 1.5 Mb DSL line, so it's not the download time. Right now it searches through about 1500 records and returns about 50 of them with name, URL, image and a few other options.

The site is not public yet, and I notice that mysql takes 60-90+% of the CPU but barely 3% of the memory when I call that page. Tried it plenty of times to make sure that's what causes the spike.

What's scary is that the records will increase to at least 10,000. I have eAccelerator, and I have no problems with a php-nuke site with way more records. It loads really fast, but there I have removed a lot of the bells and whistles; here I need them.

The server: Fedora Core 2, a 2.4 GHz P4, 1 GB RAM, MySQL 3.23.58. Server load almost never goes above 0.50, even during the "delays" and when mysql is taking 98%. After an OS / Apache 2.0 upgrade, I've noticed that I only have one mysql process running; before, I had dozens. Is this better or worse?

[root@********etc]# more my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

[mysql.server]
user=mysql
basedir=/var/lib

[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

I have no problems at all with individual pages on this site. What do you guys suggest? Is my .cnf the problem? Unfortunately the program is finished, so it's too late for database changes if that's where the problem is. Can I "cache" the index page with an additional program and update it manually or every hour or so? Eventually I will upgrade to MySQL 4.*, but right now I am swamped with updating and testing the new site and can't risk downtime. Any help would be greatly appreciated.

some more mysql data, in case it makes sense to someone ;):
------------------------
system locking is not in use

Possible variables for option --set-variable (-O) are:
back_log current value: 50
bdb_cache_size current value: 8388600
bdb_log_buffer_size current value: 0
bdb_max_lock current value: 10000
bdb_lock_max current value: 10000
binlog_cache_size current value: 32768
connect_timeout current value: 5
delayed_insert_timeout current value: 300
delayed_insert_limit current value: 100
delayed_queue_size current value: 1000
flush_time current value: 0
innodb_mirrored_log_groups current value: 1
innodb_log_files_in_group current value: 2
innodb_log_file_size current value: 5242880
innodb_log_buffer_size current value: 1048576
innodb_buffer_pool_size current value: 8388608
innodb_additional_mem_pool_size current value: 1048576
innodb_file_io_threads current value: 4
innodb_lock_wait_timeout current value: 50
innodb_thread_concurrency current value: 8
innodb_force_recovery current value: 0
interactive_timeout current value: 28800
join_buffer_size current value: 131072
key_buffer_size current value: 8388600
long_query_time current value: 10
lower_case_table_names current value: 0
max_allowed_packet current value: 1048576
max_binlog_cache_size current value: 4294967295
max_binlog_size current value: 1073741824
max_connections current value: 100
max_connect_errors current value: 10
max_delayed_threads current value: 20
max_heap_table_size current value: 16777216
max_join_size current value: 4294967295
max_sort_length current value: 1024
max_tmp_tables current value: 32
max_user_connections current value: 0
max_write_lock_count current value: 4294967295
myisam_max_extra_sort_file_size current value: 256
myisam_max_sort_file_size current value: 2047
myisam_sort_buffer_size current value: 8388608
net_buffer_length current value: 16384
net_retry_count current value: 10
net_read_timeout current value: 30
net_write_timeout current value: 60
open_files_limit current value: 0
query_buffer_size current value: 0
record_buffer current value: 131072
record_rnd_buffer current value: 0
slave_net_timeout current value: 3600
slow_launch_time current value: 2
sort_buffer current value: 2097144
table_cache current value: 64
thread_concurrency current value: 10
thread_cache_size current value: 0
tmp_table_size current value: 33554432
thread_stack current value: 65536
wait_timeout current value: 28800

 

AlexK · msg:1252966 · 2:07 am on Apr 29, 2005 (gmt 0)

walkman:
After an OS / Apache 2.0 upgrade, I've noticed that I only have one mysql process running ...
some more mysql data ... :
------------------------
thread_cache_size current value: 0
tmp_table_size current value: 33554432
thread_stack current value: 65536
wait_timeout current value: 28800

I have had exactly the same problem on RH8 (which also runs Apache 2.0, so there may be a connection here). The phpMyAdmin page (as root) showed hundreds of concurrent connections; 30,000 in a 24-hour period. The combination of the two highlighted lines above - thread_cache_size and wait_timeout - is the reason. The values shown are the installation defaults, BTW, so everyone suffers this (dumb, huh?).

Your instinct was right. Alter `my.cnf'. See [webmasterworld.com...] (msg #3) for details.
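
(For anyone reading along, the kind of [mysqld] additions being suggested would look roughly like the sketch below; the numbers happen to match the second variable dump walkman posts further down, so treat them as an illustration rather than a recommendation:)

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
set-variable = key_buffer_size=300M     # default is only ~8M
set-variable = table_cache=300          # default 64
set-variable = thread_cache_size=10     # default 0: a new thread is built for every connection
set-variable = wait_timeout=30          # default 28800 (8 hours): idle connections pile up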

walkman · msg:1252967 · 5:55 am on Apr 29, 2005 (gmt 0)

Thanks for replying,
I just changed all that, restarted MySQL, even rebooted the server, and got very little improvement (and that could be because things are slow right now anyway, at 1 am). The script does call three tables on the same db, but still. I was told this box could handle 200+ vBulletin or other BB users...

I use one of those
$time_end = microtime_float();
$time = $time_end - $time_start; (there's more to it)
timers, and it says that every page loads within this range:
0.0018****... seconds. Are those numbers reliable, or do they just count the HTML drawing?
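
(For reference, the usual microtime_float() helper from the PHP manual looks roughly like the sketch below. Whatever it reports is script execution time measured from wherever $time_start was set and nothing more; the echo at the end and the variable names are just illustrative.)

// minimal sketch of the standard PHP 4 timing wrapper; in PHP 4, microtime()
// returns a "msec sec" string, hence the explode()
function microtime_float() {
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}

$time_start = microtime_float();   // must sit above the code you want to measure
// ... queries and page generation ...
$time_end = microtime_float();
$time = $time_end - $time_start;
echo 'Page generated in ' . round($time, 4) . ' seconds';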

**************current MySQL settings**************
system locking is not in use

Possible variables for option --set-variable (-O) are:
back_log current value: 50
bdb_cache_size current value: 8388600
bdb_log_buffer_size current value: 0
bdb_max_lock current value: 10000
bdb_lock_max current value: 10000
binlog_cache_size current value: 32768
connect_timeout current value: 5
delayed_insert_timeout current value: 300
delayed_insert_limit current value: 100
delayed_queue_size current value: 1000
flush_time current value: 0
innodb_mirrored_log_groups current value: 1
innodb_log_files_in_group current value: 2
innodb_log_file_size current value: 5242880
innodb_log_buffer_size current value: 1048576
innodb_buffer_pool_size current value: 8388608
innodb_additional_mem_pool_size current value: 1048576
innodb_file_io_threads current value: 4
innodb_lock_wait_timeout current value: 50
innodb_thread_concurrency current value: 8
innodb_force_recovery current value: 0
interactive_timeout current value: 28800
join_buffer_size current value: 131072
key_buffer_size current value: 314568704
long_query_time current value: 10
lower_case_table_names current value: 0
max_allowed_packet current value: 1047552
max_binlog_cache_size current value: 4294967295
max_binlog_size current value: 1073741824
max_connections current value: 100
max_connect_errors current value: 10
max_delayed_threads current value: 20
max_heap_table_size current value: 16777216
max_join_size current value: 4294967295
max_sort_length current value: 1024
max_tmp_tables current value: 32
max_user_connections current value: 0
max_write_lock_count current value: 4294967295
myisam_max_extra_sort_file_size current value: 256
myisam_max_sort_file_size current value: 2047
myisam_sort_buffer_size current value: 67108864
net_buffer_length current value: 16384
net_retry_count current value: 10
net_read_timeout current value: 30
net_write_timeout current value: 60
open_files_limit current value: 0
query_buffer_size current value: 0
record_buffer current value: 1044480
record_rnd_buffer current value: 0
slave_net_timeout current value: 3600
slow_launch_time current value: 2
sort_buffer current value: 2097144
table_cache current value: 300
thread_concurrency current value: 8
thread_cache_size current value: 10
tmp_table_size current value: 67108864
thread_stack current value: 65536
wait_timeout current value: 30

walkman · msg:1252968 · 6:57 am on Apr 29, 2005 (gmt 0)

I'm also thinking of having the script(s) in question print their output to a .txt file and including that. Tried it with some bogus test data and it's very fast. I can call the script from an admin URL, and every time it's called the .txt file is updated.

I hope I can solve this though...

arran · msg:1252969 · 7:35 am on Apr 29, 2005 (gmt 0)

What SQL are you running against the database?

walkman · msg:1252970 · 2:37 pm on Apr 29, 2005 (gmt 0)

Hi arran,
MySQL version 3.23.58

arran · msg:1252971 · 4:07 pm on Apr 29, 2005 (gmt 0)

Sorry walkman, I meant the actual query :)

walkman · msg:1252972 · 4:11 pm on Apr 29, 2005 (gmt 0)

"sorry walkman, i meant the actual query"

no need to be sorry for my stupidity ;)
how /where would I find that out?

jusdrum · msg:1252973 · 4:49 pm on Apr 29, 2005 (gmt 0)

I experienced the same problem because I had a complex query and did not index certain fields, which made it run really slowly. Once I indexed a particular field it ran lightning quick.

Can you paste in your query and the table schemas (if they're not too big)?

walkman · msg:1252974 · 5:20 pm on Apr 29, 2005 (gmt 0)

OK,
here is one example, for the popular "products":
I have a table that holds a bunch of numbers, each corresponding to either a type1 or a type2 product. I manually enter and save those numbers via a web form. The script checks which numbers are in that table, then it searches the type1, type2 and product tables to show the link to the product_name page and to draw the list of the matching type1 + type2 entries, grouped.

popular (table: each type1 or type2 item has a number; I enter a few of them here)
product_name (with general product info, contact etc. etc.)
type1
type2
---------widgetized----------
$query = mysql_query("select S.product_name, S.id_product
from popular P, product S, type1 T1, type2 T2
where (T1.id_type1=P.id_type1 and T1.id_product=S.id_product) or (T2.id=P.id_type2 and T2.id_product=S.id_product)
group by S.id_product order by S.product_name asc",$conn) or die(mysql_error() . " - popular");

after that it searches for type1:
// checking type1s //
$q = mysql_query("select T1.id_type1, T1.expire, T1.description, T1.code, T1.name
from type1 T1, type1_popular P
where T1.id_type1=P.id_type1 and T1.id_product = '$product->id_product'
group by T1.id_type1
",$conn) or die(mysql_error());

and then the same thing for type2.
// checking type2s //
$q = mysql_query("select T2.id,T2.id_product,T2.type2,T2.description,T2.exp
from type2 T2, popular P
where T2.id_product='$product->id_product' and T2.id=P.id_type2
",$conn) or die(mysql_error());

This takes about 10 seconds with just one person (me) requesting it. We all know how long 10 seconds can be.

As far as indexing... is it too hard to do on an already finished script? What would happen to the data already entered? Also, how would I know if it's already turned on or not (maybe the performance is a hint, but still ;))? Adding records is blazing fast now, and even if indexing takes a lot more resources there, I'd rather wait 5 seconds myself on the admin side...

arran · msg:1252975 · 6:30 pm on Apr 29, 2005 (gmt 0)

Based on your queries you should probably add the following indexes:

popular(id_type1, id_type2) (i.e. create an index on the 'id_type1' and 'id_type2' columns for the table 'popular')
product(id_product)
type1(id_type1, id_product)
type2(id, id_product)
type1popular(id_type1)

Don't worry about the index creation having any negative effect on the existing data - this data will also be indexed.

You could also have a look at changing your schema. For example, tables 'type1' and 'type2' look pretty similar - why not merge them into one table of the form:

type(id_type, id_product, <other columns>, is_type_1) (where 'is_type_1' is a boolean value)

This would reduce the number of queries from three to two.
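
(A rough sketch of what that merged table could look like; the column names and types below are only illustrative, lifted from the queries above:)

create table type (
    id_type      int not null,
    id_product   int not null,
    name         varchar(100),
    description  text,
    expire       date,
    is_type_1    tinyint(1) not null,    -- 1 = former type1 row, 0 = former type2 row
    primary key (id_type, is_type_1),
    index idx_type_product (id_product)
);

The two follow-up lookups by product would then collapse into a single query against this one table.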

walkman · msg:1252976 · 6:41 pm on Apr 29, 2005 (gmt 0)

Thanks for the suggestions,
I will suggest them to the programmer in a nice way.

Just so I understand it better: is the "popular(id_type1, id_type2)" etc. added to the script in question, or somewhere else? Also, where is that index saved, the mysql cache or ...? This entire db will never be larger than 7-10MB, so I don't think memory is an issue. Sorry for the novice questions.

As far as merging: unfortunately the entire site (dozens of tiny scripts) is built that way. They're very, very different things in the way they are shown and entered in the database. It may be too late to change that.

arran · msg:1252977 · 6:48 pm on Apr 29, 2005 (gmt 0)

popular(id_type1, id_type2) is only my shorthand. The real syntax would be:

create index <index_name> on <table_name> (<column-1>,<column-2>,...,<column-n>)

e.g.

create index popular_index on popular (id_type1, id_type2)

The indexes are persistent so these "create index" commands need only be run once - no changes would be necessary in your script.
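
(Spelling out the earlier shorthand list as runnable statements, assuming the table and column names used in walkman's queries; product(id_product) would only be needed if id_product is not already the primary key of that table:)

create index idx_popular_types on popular (id_type1, id_type2);
create index idx_type1_lookup  on type1 (id_type1, id_product);
create index idx_type2_lookup  on type2 (id, id_product);
create index idx_t1pop_type1   on type1_popular (id_type1);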

walkman · msg:1252978 · 7:06 pm on Apr 29, 2005 (gmt 0)

Cool. I looked at another DB I have, and some fields show "Index" under "Key". Looks like it's a mysql command that changes the database fields, and everything that's already there, or entered afterwards, gets indexed. Did I get close ;)?

Any drawbacks to indexing on a dedicated server (2.4 GHz, 1 GB RAM, low server load) with a relatively small DB (8-10MB if that)?

On edit: I noticed that all my tables have the first field set as a Primary key (it's an ID).

thanks again,

[edited by: walkman at 7:26 pm (utc) on April 29, 2005]

arran · msg:1252979 · 7:25 pm on Apr 29, 2005 (gmt 0)

An index is basically a series of pointers to the underlying data in a table. By having these pointers, you can bring order to your data and make it easier/faster to extract the rows you want. Running a query against an unindexed table is like looking for the queen of hearts in a shuffled pack of cards - all you can do is start at the beginning and look through the pack until you find it (this is called a "table scan" in the database world and is a performance killer). On the other hand, if I gave you an ordered pack of cards, you would find the queen much quicker - that is what indexing is all about.

Your server specification is not an issue - it looks like the bottleneck is the database.

Indexing does incur a slight overhead when you insert data into the table (a new pointer needs to be added to the index) but this shouldn't be significant.

The fact that your database is only 8MB suggests that the schema might be more of a problem than a lack of indexes. However, try adding them and see if things improve.

ps. sorry for the pack of cards analogy ;)

AlexK · msg:1252980 · 7:27 pm on Apr 29, 2005 (gmt 0)

(Boy, but this brings back memories; I spent 9 months going through every sql statement on my php-site because my host said it was my lousy PHP that was causing my server problems. It turned out to be a bad memory chip. However, the server folks were correct about [some of] the SQL. And [some of] the PHP.)

walkman:
I just changed all that, restarted MySQL, even rebooted the server and got very little improvement ...

Can you confirm that the 99% CPU-spikes have gone?

I use one of those
$time_end = microtime_float();
$time = $time_end - $time_start; (there's more to it)
and it says that every page loads within this range
0.0018****... seconds.

If that is start-of-script => end-of-script then forget the SQL for now, it is fine. Obviously, such benchmarking is purely script-execution-time and cannot include network-transmission-time.

The best tool I found for SQL optimisation was phpMyAdmin. If you paste a sql statement into the SQL-run box (echo() it out from your PHP), it will not only give a page of results but will benchmark it at the same time. There is also an "Explain SQL" link, which is useful for tracing the specifics within a table schema that may be causing the greatest delays. I obtained the greatest reductions by adding indexes, but some very slow queries needed the schema altering to fix them.

As far as indexing...is it too hard to do on an already finished script?
Using phpmyadmin, as easy as pi**ing into a pot.

What woudl happen to the data already entered?
Doesn't touch it.

Also, how would I know if it's already turned on or not
If the performance does not improve then there is no point in having the index. That is what it is for.

$query = mysql_query("select S.product_name, S.id_product
from popular P, product S, type1 T1, type2 T2
where (T1.id_type1=P.id_type1 and T1.id_product=S.id_product) or (T2.id=P.id_type2 and T2.id_product=S.id_product)
group by S.id_product order by S.product_name asc",$conn) or die(mysql_error() . " - popular");

There are 2 suspicious items here (I think - phpMyAdmin will confirm or deny my suspicions): the WHERE clause contains an OR, and those are not very well optimised in mysql. Probably even worse, however, is the GROUP BY + ORDER BY combo. The latter is probably forcing mysql to examine every single row in all the relevant tables and to build temporary tables to process them, and that will take an age.

The point of indexes is that, whenever possible, mysql will pull what it needs from the index alone. This makes queries lightning fast. If it is forced to examine the actual data in the tables row by row, the whole thing slows down. The key to speed, then, is to both provide indexes AND write the SQL so that all the pre-processing can be done from the info in the indexes. Extracting the data is then just the final step, involving only a handful of rows, and very fast.
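
(One common workaround for the OR here, given that MySQL 3.23 has no UNION, is to run the two halves as separate, simpler queries and merge them in PHP; this also sidesteps the four-table join the OR forces, which is what makes the GROUP BY build such large temporary results. A sketch only, re-using the table and column names from the query above:)

// half 1: products linked through type1
$sql1 = "select distinct S.product_name, S.id_product
         from popular P, product S, type1 T1
         where T1.id_type1=P.id_type1 and T1.id_product=S.id_product";
// half 2: products linked through type2
$sql2 = "select distinct S.product_name, S.id_product
         from popular P, product S, type2 T2
         where T2.id=P.id_type2 and T2.id_product=S.id_product";

$products = array();
foreach (array($sql1, $sql2) as $sql) {
    $res = mysql_query($sql, $conn) or die(mysql_error());
    while ($row = mysql_fetch_object($res)) {
        $products[$row->id_product] = $row->product_name;  // de-duplicates on id_product
    }
}
asort($products);  // same effect as the old ORDER BY product_name asc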

walkman · msg:1252981 · 9:59 pm on Apr 29, 2005 (gmt 0)

The microtime / page-loading stats I posted are misleading. Looks like they only show how long the timed part itself took to display.
I found another script and placed part of it at the top and the echo part at the bottom, and the index page takes 2+ seconds to load, just with me calling it (not live yet). It's not a lifetime, but add 100 more people calling it and we have a problem :). Based on how much longer I wait for the popular script, that one probably takes 10-12 seconds to load.
That one I've disabled for now.

I tried removing the sort and it didn't make a noticeable difference. At most there are only 30-50 items pulled / to be sorted. I'm stumped. I suggested the index to the programmer (in addition to the primary key he has on now) and will make this write to a .txt file (when I call it, or via cron) to be included as a php include. I tried that and it's very fast.

thanks again to everyone,

walkman · msg:1252982 · 4:21 am on Apr 30, 2005 (gmt 0)

Update:
After spending almost two days on the mysql side, I can safely say that it isn't the server.

I left just one product on the "popular" page and it still takes 10+ seconds to load. One product and nothing else. Not even a template. A total of 2-3KB or so.

On the index page, I left about 3-4 products and the server load was cut to 15% when called (compared to 60+%), and it loaded much faster, but it's still way too slow compared to the rest of the scripts. The products that show on the index have a "Y" in one field, so I guess it checks them all every time and then pulls all their info. I hope he can index them or something.

what a day ;)

SeanW · msg:1252983 · 3:27 pm on May 1, 2005 (gmt 0)

Look into the EXPLAIN command. Just put it in front of your query, and it will tell you how the query would be executed, including what the candidate and used indices are, and any intermediate filesorts. From there you can figure out what is required to speed up the query.
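
(For walkman's first query that would look like the statement below; in the output, "ALL" in the type column means a full table scan, and "Using temporary; Using filesort" in the Extra column flags the GROUP BY / ORDER BY work:)

EXPLAIN select S.product_name, S.id_product
from popular P, product S, type1 T1, type2 T2
where (T1.id_type1=P.id_type1 and T1.id_product=S.id_product) or (T2.id=P.id_type2 and T2.id_product=S.id_product)
group by S.id_product order by S.product_name asc;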

While it is possible (and tempting) to index every column, it takes a lot of overhead on disk, not to mention a big decrease in INSERT/UPDATE speed.

Feel free to post the results of the EXPLAIN if you need some help interpreting it.

Sean

walkman · msg:1252984 · 4:06 pm on May 1, 2005 (gmt 0)

Hi Sean,
Thanks. I will try that even though the problem is "fixed". The programmer made it so the server-intensive script writes its output to an html file, which is then pulled in as an include. That is FAST. The rest of the pieces (minor) on these pages are still generated on the fly. This works for me since where everything goes (index page, popular, etc.) is decided manually, and when I'm done I just rebuild the page. The category and individual product pages are very fast and don't need the fix.

Regardless of that, I think I should be able to call up a few records without crashing a dedicated server, so I will try to get it fixed somehow.
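
(The write-to-a-file-and-include approach described above usually amounts to output buffering around the expensive block. A minimal sketch; the file names and the script being included are only illustrative, not the programmer's actual code:)

// rebuild_popular.php - hit from an admin URL, or via cron, whenever the data changes
ob_start();
include 'popular_query.php';              // the slow, query-heavy script (illustrative name)
$html = ob_get_contents();
ob_end_clean();

$fp = fopen('/path/to/cache/popular.html', 'w');   // illustrative path
fwrite($fp, $html);
fclose($fp);

// index.php - serve the pre-built fragment instead of running the queries
include '/path/to/cache/popular.html';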
