Forum Moderators: coopster
The site is not public yet, and I notice that MySQL takes 60-90+% of the CPU but barely 3% of the memory when I call that page. I tried it plenty of times to make sure that's what causes the spike.
What's scary is that the records will increase to at least 10,000. I have eAccelerator, and I have no problems with a php-nuke site with far more records. It loads really fast, but there I removed a lot of the bells and whistles; here I need them.
The server: Fedora Core 2, a 2.4 GHz P4, 1 GB RAM, MySQL 3.58; the server load almost never goes above 0.50, even during the "delays" and while MySQL is taking 98%. After an OS/Apache 2.0 upgrade, I've noticed that I have only one mysql process running, where before I had dozens. Is this better or worse?
[root@******** etc]# more my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
[mysql.server]
user=mysql
basedir=/var/lib
[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
I have no problems with individual pages at all on this site. What do you guys suggest? Is my .cnf the problem? Unfortunately the program is finished, so it's too late for database changes if that's the problem. Can I "cache" the index page with an additional program and update it manually, or every hour or so? Eventually I will upgrade to MySQL 4.*, but right now I am swamped with updating and testing the new site and can't risk downtime. Any help would be greatly appreciated.
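For context on the question above: the my.cnf shown sets no tuning variables at all, so a MySQL 3.x server falls back to its very conservative compiled-in defaults. A minimal sketch of the kind of lines one might add for a 1 GB box (the values here are illustrative assumptions, not tested recommendations):

```
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# MySQL 3.x tuning uses the set-variable syntax.
# Assumed values -- adjust against your own SHOW STATUS output.
set-variable = key_buffer=64M      # MyISAM index cache
set-variable = table_cache=256     # number of open tables kept cached
set-variable = sort_buffer=2M      # per-connection sort buffer
```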
some more mysql data, in case it makes sense to someone ;):
------------------------
system locking is not in use
Possible variables for option --set-variable (-O) are:
back_log current value: 50
bdb_cache_size current value: 8388600
bdb_log_buffer_size current value: 0
bdb_max_lock current value: 10000
bdb_lock_max current value: 10000
binlog_cache_size current value: 32768
connect_timeout current value: 5
delayed_insert_timeout current value: 300
delayed_insert_limit current value: 100
delayed_queue_size current value: 1000
flush_time current value: 0
innodb_mirrored_log_groups current value: 1
innodb_log_files_in_group current value: 2
innodb_log_file_size current value: 5242880
innodb_log_buffer_size current value: 1048576
innodb_buffer_pool_size current value: 8388608
innodb_additional_mem_pool_size current value: 1048576
innodb_file_io_threads current value: 4
innodb_lock_wait_timeout current value: 50
innodb_thread_concurrency current value: 8
innodb_force_recovery current value: 0
interactive_timeout current value: 28800
join_buffer_size current value: 131072
key_buffer_size current value: 8388600
long_query_time current value: 10
lower_case_table_names current value: 0
max_allowed_packet current value: 1048576
max_binlog_cache_size current value: 4294967295
max_binlog_size current value: 1073741824
max_connections current value: 100
max_connect_errors current value: 10
max_delayed_threads current value: 20
max_heap_table_size current value: 16777216
max_join_size current value: 4294967295
max_sort_length current value: 1024
max_tmp_tables current value: 32
max_user_connections current value: 0
max_write_lock_count current value: 4294967295
myisam_max_extra_sort_file_size current value: 256
myisam_max_sort_file_size current value: 2047
myisam_sort_buffer_size current value: 8388608
net_buffer_length current value: 16384
net_retry_count current value: 10
net_read_timeout current value: 30
net_write_timeout current value: 60
open_files_limit current value: 0
query_buffer_size current value: 0
record_buffer current value: 131072
record_rnd_buffer current value: 0
slave_net_timeout current value: 3600
slow_launch_time current value: 2
sort_buffer current value: 2097144
table_cache current value: 64
thread_concurrency current value: 10
thread_cache_size current value: 0
tmp_table_size current value: 33554432
thread_stack current value: 65536
wait_timeout current value: 28800
After an OS/Apache 2.0 upgrade, I've noticed that I only have one mysql process running ...
some more mysql data ... :
------------------------
thread_cache_size current value: 0
wait_timeout current value: 28800
I have had exactly the same problem on RH8 (also Apache 2.0, so there may be a connection here). The phpMyAdmin page (as root) showed hundreds of concurrent connections; 30,000 in a 24-hour period. The combination of the two quoted lines above (thread_cache_size = 0 and wait_timeout = 28800) is the reason. Those are the default installation values, BTW, so everyone suffers this (dumb, huh?).
Your instinct was right. Alter my.cnf. See [webmasterworld.com...] (msg #3) for details.
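Concretely, the fix described amounts to adding lines like these to my.cnf (the values match the later settings listing in this thread; the MySQL 3.x set-variable syntax is assumed):

```
[mysqld]
# Reuse threads instead of creating a new one per connection:
set-variable = thread_cache_size=10
# Drop idle connections after 30 seconds instead of 8 hours:
set-variable = wait_timeout=30
```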
I use one of those
$time_end = microtime_float();
$time = $time_end - $time_start; (there's more to it)
and it says that every page loads within this range
0.0018****... seconds. Is that reliable, or does it just count the HTML generation?
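For reference, the part the post elides ("there's more to it") is usually the microtime_float() helper from the PHP manual; the poster's own version may differ, and on PHP 5 microtime(true) returns the float directly:

```php
<?php
// Classic microtime_float() helper from the PHP manual.
function microtime_float() {
    list($usec, $sec) = explode(' ', microtime());
    return ((float)$usec + (float)$sec);
}

$time_start = microtime_float();
// ... page work (queries, templating) happens here ...
$time_end = microtime_float();
$time = $time_end - $time_start; // elapsed seconds as a float
?>
```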
**************current MySQL settings**************
system locking is not in use
Possible variables for option --set-variable (-O) are:
back_log current value: 50
bdb_cache_size current value: 8388600
bdb_log_buffer_size current value: 0
bdb_max_lock current value: 10000
bdb_lock_max current value: 10000
binlog_cache_size current value: 32768
connect_timeout current value: 5
delayed_insert_timeout current value: 300
delayed_insert_limit current value: 100
delayed_queue_size current value: 1000
flush_time current value: 0
innodb_mirrored_log_groups current value: 1
innodb_log_files_in_group current value: 2
innodb_log_file_size current value: 5242880
innodb_log_buffer_size current value: 1048576
innodb_buffer_pool_size current value: 8388608
innodb_additional_mem_pool_size current value: 1048576
innodb_file_io_threads current value: 4
innodb_lock_wait_timeout current value: 50
innodb_thread_concurrency current value: 8
innodb_force_recovery current value: 0
interactive_timeout current value: 28800
join_buffer_size current value: 131072
key_buffer_size current value: 314568704
long_query_time current value: 10
lower_case_table_names current value: 0
max_allowed_packet current value: 1047552
max_binlog_cache_size current value: 4294967295
max_binlog_size current value: 1073741824
max_connections current value: 100
max_connect_errors current value: 10
max_delayed_threads current value: 20
max_heap_table_size current value: 16777216
max_join_size current value: 4294967295
max_sort_length current value: 1024
max_tmp_tables current value: 32
max_user_connections current value: 0
max_write_lock_count current value: 4294967295
myisam_max_extra_sort_file_size current value: 256
myisam_max_sort_file_size current value: 2047
myisam_sort_buffer_size current value: 67108864
net_buffer_length current value: 16384
net_retry_count current value: 10
net_read_timeout current value: 30
net_write_timeout current value: 60
open_files_limit current value: 0
query_buffer_size current value: 0
record_buffer current value: 1044480
record_rnd_buffer current value: 0
slave_net_timeout current value: 3600
slow_launch_time current value: 2
sort_buffer current value: 2097144
table_cache current value: 300
thread_concurrency current value: 8
thread_cache_size current value: 10
tmp_table_size current value: 67108864
thread_stack current value: 65536
wait_timeout current value: 30
I hope I can solve this though...
no need to be sorry for my stupidity ;)
how /where would I find that out?
popular (table; each type1 or type2 has a number, and I enter a few here)
product_name (with general product info, contact, etc.)
type1
type2
---------widgetized----------
$query = mysql_query("select S.product_name, S.id_product
from popular P, product S, type1 T1, type2 T2
where (T1.id_type1=P.id_type1 and T1.id_product=S.id_product) or (T2.id=P.id_type2 and T2.id_product=S.id_product)
group by S.id_product order by S.product_name asc",$conn) or die(mysql_error() . " - popular");
after that it searches for type1:
// checking type1s //
$q = mysql_query("select T1.id_type1, T1.expire, T1.description, T1.code, T1.name
from type1 T1, type1_popular P
where T1.id_type1=P.id_type1 and T1.id_product = '$product->id_product'
group by T1.id_type1
",$conn) or die(mysql_error());
and then the same thing for type2.
// checking type2s //
$q = mysql_query("select T2.id,T2.id_product,T2.type2,T2.description,T2.exp
from type2 T2, popular P
where T2.id_product='$product->id_product' and T2.id=P.id_type2
",$conn) or die(mysql_error());
This takes about 10 seconds with just one person (me) requesting it. We know how long 10 seconds can be.
As far as indexing... is it too hard to do on an already finished script? What would happen to the data already entered? Also, how would I know if it's already turned on or not (maybe performance is a hint, but still ;))? Adding is blazing fast, and even if it takes a lot more resources, I'd rather wait 5 seconds for myself on the admin side...
popular(id_type1, id_type2) (i.e. create an index on the 'id_type1' and 'id_type2' columns for the table 'popular')
product(id_product)
type1(id_type1, id_product)
type2(id, id_product)
type1popular(id_type1)
Don't worry about the index creation having any negative effect on the existing data - this data will also be indexed.
You could also have a look at changing your schema. For example, tables 'type1' and 'type2' look pretty similar - why not merge them into one table of the form:
type(id_type, id_product, <other columns>, is_type_1) (where 'is_type_1' is a boolean value)
This would reduce the number of queries from three to two.
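A hedged sketch of that merge (column types here are placeholders, and <other columns> stays as whatever the real type1/type2 tables hold):

```sql
-- Sketch only: merge type1 and type2 into one table.
CREATE TABLE type (
    id_type    INT NOT NULL,
    id_product INT NOT NULL,
    -- <other columns> from the original type1/type2 tables
    is_type_1  TINYINT(1) NOT NULL,  -- 1 = was a type1 row, 0 = type2
    INDEX (id_type, id_product)
);
```

The two per-product lookups shown earlier would then collapse into a single query that filters on is_type_1.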
This is so I understand it better: is the
"popular(id_type1, id_type2)" etc. added to the script in question or in another script? Also, where is that index saved: the MySQL cache, or ...? This entire DB will never be larger than 7-10 MB, so I don't think memory is an issue. Sorry for the novice questions.
As far as merging: unfortunately the entire site (dozens of tiny scripts) is done that way. They're very, very different things in the way they are shown and entered in the database. It may be too late to change that.
create index <index_name> on <table_name> (<column-1>,<column-2>,...,<column-n>)
e.g.
create index popular_index on popular (id_type1, id_type2)
The indexes are persistent so these "create index" commands need only be run once - no changes would be necessary in your script.
any drawbacks to indexing on a dedicated server (2.4 GHz, 1 GB RAM, low server load) with a relatively small DB (8-10 MB, if that)?
on edit: I noticed that all my tables have the first field set as a primary key (it's an ID)
thanks again,
[edited by: walkman at 7:26 pm (utc) on April 29, 2005]
Your server specification is not an issue - it looks like the bottleneck is the database.
Indexing does incur a slight overhead when you insert data into the table (a new pointer needs to be added to the index) but this shouldn't be significant.
The fact that your database is only 8 MB suggests that the schema might be more of a problem than a lack of indexes. However, try adding them and see if things improve.
ps. sorry for the pack of cards analogy ;)
walkman:
I just changed all that, restarted MySQL, even rebooted the server and got very little improvement ...
I use one of those
$time_end = microtime_float();
$time = $time_end - $time_start; (there's more to it)
and it says that every page loads within this range
0.0018****... seconds.
If that is start-of-script => end-of-script, then forget the SQL for now; it is fine. Obviously, such benchmarking is purely script-execution time and cannot include network-transmission time.
The best tool I found for SQL optimisation was phpMyAdmin. If you paste a SQL statement into the SQL-run box (echo() it out from your PHP), it will not only give a page of results but will benchmark it at the same time. There is also an "Explain SQL" link which is useful for tracing the specifics within a table schema that may be causing the greatest delays. I obtained the greatest reductions by adding indexes, but some very slow queries needed the schema altering to fix them.
"As far as indexing... is it too hard to do on an already finished script?" Using phpMyAdmin, as easy as pi**ing into a pot.
"What would happen to the data already entered?" It doesn't touch it.
"Also, how would I know if it's already turned on or not?" If the performance does not improve, then there is no point in having the index. That is what it is for.
$query = mysql_query("select S.product_name, S.id_product
from popular P, product S, type1 T1, type2 T2
where (T1.id_type1=P.id_type1 and T1.id_product=S.id_product) or (T2.id=P.id_type2 and T2.id_product=S.id_product)
group by S.id_product order by S.product_name asc",$conn) or die(mysql_error() . " - popular");
The where clause contains an or, and these are not very well optimised in MySQL. Probably even worse, however, is the group by + order by combo. The latter is probably forcing MySQL to examine every single row in all the relevant tables and build temporary tables to process them, and this will take an age.
The point of indexes is that, whenever possible, MySQL will pull what it needs from the index alone. This makes queries lightning fast. If it is forced to examine the actual data in the tables row by row, the whole thing slows down. The key to speed, then, is both to provide indexes AND to write the SQL so that all pre-processing can be done from the information in the indexes. Extracting the data is then just the final step, involving a handful of rows, and very fast.
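One way to see this in practice is to run EXPLAIN on the slow query from earlier in the thread (phpMyAdmin's "Explain SQL" link does the same thing):

```sql
EXPLAIN
SELECT S.product_name, S.id_product
FROM popular P, product S, type1 T1, type2 T2
WHERE (T1.id_type1 = P.id_type1 AND T1.id_product = S.id_product)
   OR (T2.id = P.id_type2 AND T2.id_product = S.id_product)
GROUP BY S.id_product
ORDER BY S.product_name ASC;
```

In the output, type = ALL on a table means a full scan (no usable index), and "Using temporary; Using filesort" in the Extra column marks the group-by/order-by overhead described above.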
I tried removing the sort and it didn't make a noticeable difference. At most there are only 30-50 items pulled/to be sorted. I'm stumped. I suggested the index to the programmer (in addition to the primary key he has on now), and I will make this write to a .txt file (when I call it, or via cron) to be included as a PHP include. I tried that and it's very fast.
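The write-it-to-a-file idea can be sketched like this; the paths, filename, and one-hour TTL are assumptions, not from the thread:

```php
<?php
// File-cache sketch: serve a saved copy of the slow page and
// rebuild it at most once an hour. Paths are hypothetical.
$cache_file = '/var/www/cache/popular.html';
$ttl = 3600; // seconds

if (file_exists($cache_file) && (time() - filemtime($cache_file)) < $ttl) {
    include $cache_file;           // cached copy: no MySQL work at all
} else {
    ob_start();
    include 'popular.php';         // the slow script discussed above
    $html = ob_get_contents();
    ob_end_clean();
    $fp = fopen($cache_file, 'w'); // fopen/fwrite also works on PHP 4
    fwrite($fp, $html);
    fclose($fp);
    echo $html;
}
?>
```

The same effect can be had by running the rebuild from cron instead of on the first request after expiry, which keeps every visitor on the fast path.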
thanks again to everyone,
I left just one product on the "popular" page and it still takes 10+ seconds to load. One product and nothing else. Not even a template. A total of 2-3 KB or so.
On the index page, I left about 3-4 items; the server load was cut to 15% when called (compared to 60+%) and it loaded much faster, but still way too slow compared to the rest of the scripts. The products that show on the index have a "Y" in one field, so I guess it checks them all every time and then pulls all their info. I hope he can index them or something.
what a day ;)
While it is possible (and tempting) to index every column, doing so takes a lot of overhead on disk, not to mention causing a big decrease in INSERT/UPDATE speed.
Feel free to post the results of the EXPLAIN if you need some help interpreting it.
Sean
Regardless of that, I think I should be able to call a few records without crashing a dedicated server, so will try to get it fixed somehow.