Home / Forums Index / Code, Content, and Presentation / Databases
Forum Library, Charter, Moderator: open

Databases Forum


 6:36 am on Sep 19, 2005 (gmt 0)

Let me explain my situation first. I've got MySQL 4.1.13a on my hosting server. I use PHP with a persistent connection to connect to and query the database (InnoDB). My database is very simple: just one table with 4 or 5 columns and one primary key, but it has approximately 500,000 records.

Here is the situation: for a couple of hours on one particular day, approximately 20,000 users will try to query the database. Only SELECT statements will execute; my application makes no use of INSERT, DELETE, or UPDATE.

Is MySQL 4.1.13 capable of absorbing this kind of hit?
Do I need to use replication?

Or how can I improve the performance?
Any other suggestions are welcome
Thanks in advance


Matt Probert

 7:14 am on Sep 19, 2005 (gmt 0)

I'm not an expert on MySQL, but with regard to server-side performance you could talk to your host about the possibility of "load balancing": operating a number of identical parallel servers that effectively share the processing effort. You'll want to talk to your host because they need to understand how to set it up, and to be able to quote you a cost.



 8:35 am on Sep 19, 2005 (gmt 0)

If you are talking about 20000 simultaneous users, that may actually be a serious load...

I'm no MySQL expert either (SQL Server here), but my #1 suggestion would be: create the right index(es) to support the query!

Maintaining an index adds overhead on inserts and updates; but since you say your app is "select-only", that cost never applies to you, and an index can make a huge difference when retrieving data.

If your select statement looks like this:

SELECT a, b, c FROM t
WHERE a = 123 AND e < 234

you'll probably want a composite index on the columns (a, e).
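In MySQL that index could be created like this (a sketch only; table t and columns a and e are the placeholder names from the example query above):

```sql
-- Composite index covering both WHERE columns of the example query.
-- "t", "a", and "e" are the hypothetical names from the post above.
CREATE INDEX idx_a_e ON t (a, e);
```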


In addition, you may want to look into "stored procedures". (MySQL only added support for them in version 5.0, so they're not available on 4.1.13; where supported, they can be faster than ad-hoc queries.)



 9:58 am on Sep 19, 2005 (gmt 0)

Correct me if I'm wrong, but I think that using pconnect will leave thousands of idle MySQL connections hanging around. In other words, you may hit the server's limit on the maximum allowed connections (max_connections).
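You can check how close you are to that ceiling with the standard status variables (a sketch; these are the stock MySQL names):

```sql
-- Server-wide connection cap and current usage.
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Threads_connected';
SHOW STATUS LIKE 'Max_used_connections';  -- high-water mark since server startup
```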


 10:27 am on Sep 19, 2005 (gmt 0)

I believe you are correct, RonPK. The user notes in the MySQL documentation suggest using mysql_connect() instead of mysql_pconnect() unless you have a good reason for persistent connections. These are developer-contributed notes, but they've been allowed to stand.

The first thing I thought of was max_questions, the per-hour query limit some hosts impose. I think it's set to 50,000 (it was on my host), and with 20,000 visitors in a short window it could easily be exceeded. I was able to get the limit raised on one account, but on another I was told they couldn't raise it (confused look).

The problems of too many connections or too many questions don't exactly address the issue of improving search performance, but they should not be overlooked. It's advisable to have a 'plan B' in case either becomes an unexpected issue.
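For reference, per-account query limits like max_questions are set through GRANT, which is why a shared host has to raise them for you. A sketch with placeholder account and database names:

```sql
-- Raise the per-hour query limit for one account (requires admin privileges).
-- 'appuser', 'mydb', and the limit value are placeholders, not real settings.
GRANT SELECT ON mydb.* TO 'appuser'@'localhost'
    WITH MAX_QUERIES_PER_HOUR 100000;
```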

Rebrandt

 10:23 pm on Sep 24, 2005 (gmt 0)

How about you show us your table schema, a few rows of data, and a query?


 1:41 pm on Sep 25, 2005 (gmt 0)

This could be very useful if the same queries keep getting executed.
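If the suggestion being discussed here is MySQL's query cache (available in the 4.x series), which serves repeated identical SELECTs from memory, then enabling it is mostly a server-setting exercise. A sketch, assuming you can edit my.cnf or hold the SUPER privilege:

```sql
-- Check whether the query cache is compiled in and how big it is.
SHOW VARIABLES LIKE 'query_cache%';
-- Give it some memory (16 MB here; requires the SUPER privilege,
-- otherwise set query_cache_size in my.cnf and restart).
SET GLOBAL query_cache_size = 16777216;
```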




 6:17 pm on Oct 2, 2005 (gmt 0)

First, you don't have to use a persistent connection.
Second, creating an index (on the column you're searching by) may help you; to do it right, it would be better if you showed the table schema.

InnoDB works well with large tables, but I think that if you don't use transactions, MyISAM will be faster... I don't remember exactly, but I think query optimization in such a case doesn't depend on the table type.
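If you do decide MyISAM fits a select-only workload, switching the storage engine is a one-line change (a sketch; t is the placeholder table name used earlier in the thread):

```sql
-- Convert the table to MyISAM. MySQL 4.1 accepts ENGINE= (the older TYPE= also works).
-- Take a backup first: this statement rebuilds the whole table.
ALTER TABLE t ENGINE = MyISAM;
```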

Best regards,


 2:54 pm on Oct 3, 2005 (gmt 0)

creating an index (on the column you're searching by) may help you

It will be essential: 1,000+ users table-scanning a 0.5-million-row table is a non-starter.
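You can confirm whether a query is table-scanning with EXPLAIN (shown here against the hypothetical query from earlier in the thread):

```sql
EXPLAIN SELECT a, b, c FROM t WHERE a = 123 AND e < 234;
-- "type = ALL" in the output means a full table scan;
-- with a usable index you should see "ref" or "range" instead.
```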

Personally, I think you'd be better off looking at other options (e.g. SQL Server).



 4:48 pm on Oct 3, 2005 (gmt 0)

Hm... let's imagine that you're searching on a VARCHAR field. If you make an index on the first 8-12 characters, such a prefix index can dramatically speed up SELECT queries (as I've seen myself). I haven't had 500,000 records in one table; the largest was about 200,000, but I think MySQL can process much more if you build your queries right.
And whenever possible, try to keep your databases well structured.
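A prefix index like the one described is declared by giving a length after the column name (a sketch; the table and column names are made up for illustration):

```sql
-- Index only the first 10 characters of a VARCHAR column.
-- "t" and "name" are hypothetical names, not the poster's schema.
CREATE INDEX idx_name_prefix ON t (name(10));
```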


 8:12 pm on Oct 16, 2005 (gmt 0)

Eltiti's indexing suggestion solved my problem.

I had been waiting 9 hours for 23,760 rows of data to go into MySQL via UPDATE statements from my PHP script. I indexed the UPC column, and all of the data went in almost instantly; I didn't wait more than 5 seconds.

Eltiti, you rule ;) I'd been trying to do this for years, but no one had ever posted this information.
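For anyone hitting the same wall, the fix described above amounts to something like this (the table and column names are guesses, not the poster's actual schema):

```sql
-- Without this index, every UPDATE ... WHERE upc = '...' scans the whole table.
-- "products" and "upc" are illustrative placeholder names.
ALTER TABLE products ADD INDEX idx_upc (upc);
```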


All trademarks and copyrights held by respective owners. Member comments are owned by the poster.
WebmasterWorld is a Developer Shed Community owned by Jim Boykin.
© Webmaster World 1996-2014 all rights reserved