Forum Moderators: phranque


concurrent connections too high?

concurrent connections performance

         

x_merlin_x

8:11 am on Sep 24, 2010 (gmt 0)

10+ Year Member



Hello,

I am running Apache 2.2.11 on a SUSE Linux box with PHP 5 and MySQL 5.
The system gets very slow at about 100 concurrent connections and above (measured with ps -ef | grep httpd | wc -l). I recently had 160 connections with a load of 24 (measured with top), and Apache delivered pages in about 20s. The problem is that the access log shows other users hitting the same slowdown, so page impressions (PI) go down even though there are many users who want to reach the site.

Hardware is: amd dual core opteron 1218HE, 4Gig RAM

My questions are:

- How many connections do you think are possible on this system? Maybe it is not configured that well and I could improve it.
- Do you believe it would be good to redirect connections above 100 to a static page that tells the user the system is overloaded? If so, how?
- Should I lower the maximum number of concurrent connections? It is currently 200.

Thank you for any help!

Merlin

jdMorgan

2:15 pm on Sep 24, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Don't redirect because of overload. Fix the overload.

The first thing to look at is database utilization. Check whether your database-accessing scripts are slow and inefficient, and whether they keep their database connections open longer than they actually need to.

If the format of your database has changed or grown over time, you may want to re-examine the "keys" that are used to index and sort its records, to be sure that they are still the most-efficient choice.

Next, look at your scripts in general. Perhaps they (or some part of them) are coded inefficiently.

Also look at your Keep-Alive settings. If many of these concurrent connections are really idle, you may want to shorten the Keep-alive time, so that once a user request is "done" his HTTP connection will be closed sooner.
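For example, the relevant httpd.conf directives look like this (the values are purely illustrative, not a recommendation for any particular site):

# httpd.conf -- keep-alive example (illustrative values only)
# Reuse each TCP connection for multiple requests:
KeepAlive On
# Close an idle persistent connection after this many seconds:
KeepAliveTimeout 2
# Upper bound on requests served over one persistent connection:
MaxKeepAliveRequests 100

With the default prefork MPM, every idle keep-alive connection ties up a whole httpd child process, which is why shortening the timeout frees up capacity so quickly.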

In and of itself, Apache is very small and efficient compared to most applications. A plain Apache server, with few or no applications, can be run on 10-year-old hardware while supporting 200 connections with fast responses.

I'm no Apache server-tuning expert, so the above are just the basics. But the first step in fixing a slow server is to find out specifically why it is slow.

Jim

x_merlin_x

8:44 pm on Sep 24, 2010 (gmt 0)

10+ Year Member



Hello Jim,

thank you for your very helpful response. I also believe that a huge part of the problem is the database. There are a few queries that need to be improved. Unfortunately those are highly complex, so I am looking for other options that give me a quick fix for now. After a bit of reading on your keep-alive suggestion, I reduced the keep-alive time to 2s and the timeout to 50s. Let's see how that helps.
I have configured MySQL to get 64% of the RAM; the rest is for Apache and a few small standard services.

About the DB connections: how could I find out whether the scripts are keeping their connections open and failing to disconnect?

Merlin

jdMorgan

1:16 am on Sep 25, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



You'd be getting "cannot connect" error messages if the requests were never disconnecting. But sometimes the code is written to open a connection at the start and only close it at the end, when in reality it could be re-written to get everything set up, open the db, do what needs to be done (with pre-computed values, for example), close it, and then finish up with the rest of the script. There's no use, for example, in keeping the db open until the entire HTTP response body has been written. That kind of design error causes performance problems -- usually years after the programmer has moved on...
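A rough sketch of the idea in PHP (the table, columns, and credentials are made up; the point is only where mysql_close() sits relative to the output stage):

<?php
// Connect, fetch everything the page needs, and disconnect immediately.
$db = mysql_connect('localhost', 'user', 'password');   // hypothetical credentials
mysql_select_db('example_db', $db);                      // hypothetical database

$result = mysql_query('SELECT title, body FROM articles WHERE id = 1', $db);
$row    = mysql_fetch_assoc($result);

mysql_close($db);   // release the connection *before* building the response

// The (possibly slow) HTML generation runs with the connection already closed.
echo '<h1>' . htmlspecialchars($row['title']) . '</h1>';
echo '<p>'  . htmlspecialchars($row['body'])  . '</p>';
?>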

Your keep-alive time should be above the time needed to serve an "average" page. But some servers are set up with very long keep-alive times -- over a minute. That's too long, at least for most sites that I have experience with.

Jim

x_merlin_x

5:41 pm on Sep 25, 2010 (gmt 0)

10+ Year Member



Hello Jim,

I believe your suggestion to lower the keep-alive time helped a lot. The server is now running at about the same system load (2-3), but with far fewer httpd connections (50-70), and it is much more responsive and faster. However, sometimes MySQL takes a long time to deliver a page (2-3s), while when I keep browsing the site the time drops down to 0.00x s. I believe MySQL cannot cache all the important queries because it is not configured well. Here is what mysqltuner (v. 1.0.1) says:

-------- General Statistics --------------------------------------------------
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.1.32-log
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics -------------------------------------------
[--] Status: -Archive -BDB -Federated -InnoDB -ISAM -NDBCluster
[--] Data in MyISAM tables: 3G (Tables: 209)
[!] Total fragmented tables: 19

-------- Performance Metrics -------------------------------------------------
[--] Up for: 7d 4h 2m 5s (89M q [144.314 qps], 4M conn, TX: 163B, RX: 17B)
[--] Reads / Writes: 91% / 9%
[--] Total buffers: 768.0M global + 10.2M per thread (175 max threads)
[OK] Maximum possible memory usage: 2.5G (65% of installed RAM)
[OK] Slow queries: 0% (88K/89M)
[OK] Highest usage of available connections: 73% (129/175)
[OK] Key buffer size / total MyISAM indexes: 256.0M/870.6M
[OK] Key buffer hit rate: 100.0% (72B cached / 1M reads)
[OK] Query cache efficiency: 57.7% (28M cached / 49M selects)
[!] Query cache prunes per day: 299498
[OK] Sorts requiring temporary tables: 0% (30K temp sorts / 8M sorts)
[!] Joins performed without indexes: 166829
[!] Temporary tables created on disk: 30% (2M on disk / 8M total)
[OK] Thread cache hit rate: 98% (75K created / 4M connections)
[!] Table cache hit rate: 0% (935 open / 1M opened)
[OK] Open file limit used: 43% (1K/2K)
[OK] Table locks acquired immediately: 99% (70M immediate / 70M locks)

-------- Recommendations -----------------------------------------------------
General recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance
Increasing the query_cache size over 128M may reduce performance
Adjust your join queries to always utilize indexes
Temporary table size is already large - reduce result set size
Reduce your SELECT DISTINCT queries without LIMIT clauses
Increase table_cache gradually to avoid file descriptor limits
Variables to adjust:
query_cache_size (> 256M) [see warning above]
join_buffer_size (> 2.0M, or always use indexes with joins)
table_cache (> 1200)

The fragmented tables are not that important; they get slightly fragmented again within a week anyway.

The more important thing is the query cache prunes, I guess.
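For reference, the prune counters can be checked directly from the mysql client (these are standard status variables, nothing site-specific):

-- All query cache counters at once:
SHOW GLOBAL STATUS LIKE 'Qcache%';
-- Qcache_lowmem_prunes : entries evicted because the cache ran out of memory
-- Qcache_hits / Qcache_inserts : rough measure of how useful the cache is
-- And the configured limit:
SHOW VARIABLES LIKE 'query_cache_size';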

Here is my config:

[mysqld]
port = 3306
skip-locking
key_buffer = 256M
max_allowed_packet = 8M
table_cache = 1200
join_buffer_size = 2M
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 256M
tmp_table_size = 256M
max_heap_table_size = 256M
max_connections = 175
thread_concurrency = 4
long_query_time = 1

I do use indexes everywhere. I believe some of the numbers are not accurate because of a tmp table I am building in a search script. Do you see any major flaw I should concentrate on fixing?

Thank you, Merlin

jdMorgan

11:09 pm on Sep 26, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Any MySQL tuners in the house?

<bump>

brotherhood of LAN

11:22 pm on Sep 26, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I'm not a guru tuner, but here's my 2 cents.

[OK] Slow queries: 0% (88K/89M)


Is always a good sign.

I'd suggest using EXPLAIN on some of the more frequent SQL queries to see if indexes are indeed being used and if they can further be optimised.
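For example (table and column names below are invented, just to show the shape of it):

-- Prefix the real query with EXPLAIN; MySQL reports the join order,
-- which index (if any) each table uses, and how many rows it expects to scan.
EXPLAIN SELECT o.id, c.name
FROM   orders o
JOIN   customers c ON c.id = o.customer_id
WHERE  o.created_at > '2010-09-01';
-- "type: ALL" or "Using temporary; Using filesort" in the output are the
-- usual signs that an index is missing or not being used.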

query_cache_size (> 256M) [see warning above]


MySQL will cache query results in memory when possible. If you have spare memory, raise the cache size as the tuner suggests. I believe I've read somewhere that using 25% of your available RAM is a good general yardstick (in your case, 1GB).

join_buffer_size (> 2.0M, or always use indexes with joins)


I think this is suggesting either that some queries can be optimised, or that you could increase this value to give more memory to join queries that read a lot of data.

table_cache (> 1200)


You must have a lot of tables or partitioned tables, as MySQL is already caching > 1200 of them. Tables that aren't cached are read from disk and therefore a lot slower to read. Retaining them (or indexes) in memory is many times quicker.

In an ideal world your indexes would all be stored in memory (maybe possible for you?)
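If you go with the tuner's suggestions, the adjustments are just the usual [mysqld] variables, something like this (figures are only a starting point, and keep the tuner's own warning in mind that a query cache much over 128M can start to hurt):

[mysqld]
query_cache_size = 512M   # tuner suggests > 256M; watch Qcache_lowmem_prunes afterwards
join_buffer_size = 4M     # or, better, add the missing indexes to the join queries
table_cache      = 2048   # tuner suggests > 1200; raise gradually and mind the open file limit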

This site is well worth a look around if you aren't aware of it already: [mysqlperformanceblog.com...] ... there are a lot of benchmark tests on there with questions and outcomes to get a better gist of tuning an SQL server to your needs.

x_merlin_x

6:40 am on Sep 27, 2010 (gmt 0)

10+ Year Member



If I up the query cache size, I will use more than 65% of the total RAM for MySQL. How much would I need for Apache and some smaller services (Subversion, cron, etc.)? I fear that I will give too much RAM to MySQL and Apache will eventually slow down due to a lack of RAM.

About the table_cache: that counter increases every time a temp table is used, so the number is irrelevant, as I create a tmp table in a very frequently hit spot.

I guess it all comes down to the question of how much RAM I can allocate to MySQL without starving Apache. The rest might be trial and error.

Any suggestions?