
PHP Server Side Scripting Forum

    
PHP & MySQL Database Size
username




msg:4118452
 10:05 pm on Apr 19, 2010 (gmt 0)

Hi all, I am building a database table which will hold contact details. What is a reasonable number of records for one table before I should create a second table to hold the remaining records? I would also like to know whether MySQL can be set to create a second table automatically once the first is full, or once it reaches a certain limit, so the process doesn't have to be handled in PHP.

What is the recommended practice for managing large record counts, e.g. 1,000,000 records?

Thanks

 

tangor




msg:4118462
 10:27 pm on Apr 19, 2010 (gmt 0)

Reasonable or max? Depends on your hardware/memory specs and number of indexes. I have one database with 50M records in six tables...

username




msg:4118477
 11:12 pm on Apr 19, 2010 (gmt 0)

The current setup is on a fairly high-spec VPS. I would like to know any tried and tested methods, and also what the maximum record counts are. Any info on creating secondary 'extension' tables once a table is full would be great too.

Readie




msg:4118480
 11:17 pm on Apr 19, 2010 (gmt 0)

the maximum record counts also.

I think this is just the limit of whatever type you set the id field to.

If you set it to an unsigned BIGINT then, by my logic, you should have a limit of, wait for it:

18,446,744,073,709,551,615

(2^64 - 1)
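
For reference, a rough sketch of a table set up that way with PDO; the table name, columns, and connection details are placeholders, not anything from this thread:

<?php
// Sketch only: a contacts table whose auto-increment id is BIGINT UNSIGNED,
// i.e. it can count up to 2^64 - 1. Connection details are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

$pdo->exec("
    CREATE TABLE contacts (
        id    BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        name  VARCHAR(100)    NOT NULL,
        email VARCHAR(255)    NOT NULL,
        PRIMARY KEY (id)
    ) ENGINE=InnoDB
");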

username




msg:4118518
 1:55 am on Apr 20, 2010 (gmt 0)

OK, so that's a massive number. But in terms of lookup performance, is it better to split a total of, say, 200,000 records into two tables, or leave them as one? Presumably this is hardware dependent, but at what point? If each row has only 10 columns, then what's the recommendation?

tangor




msg:4118521
 2:12 am on Apr 20, 2010 (gmt 0)

Performance is always based on the data and indexing. Suffice it to say that dealing with <1M is no problem, even on "small" machines.

brotherhood of LAN




msg:4118524
 2:26 am on Apr 20, 2010 (gmt 0)

- File systems can be a factor limiting table size. Some systems have a 2GB limit.

- You can have *huge* tables. Some major applications on the web use MySQL.

- Have a look at partitioning [dev.mysql.com]. In cases where you would otherwise be limited to 2GB tables, you could bump this up to 2048GB with the (current) maximum of 1024 partitions per table. Partitions are like 'tables within tables', and in any case they save you from having to split the data into separate tables yourself.
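
To make that concrete, here is a rough sketch of a partitioned table; MySQL 5.1+ with partitioning support is assumed, and the table definition and connection details are placeholders:

<?php
// Sketch only: spread one logical table across 16 partitions by hashing the
// primary key. Queries still target `contacts`; MySQL picks the partition.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

$pdo->exec("
    CREATE TABLE contacts (
        id    BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        name  VARCHAR(100) NOT NULL,
        email VARCHAR(255) NOT NULL,
        PRIMARY KEY (id)
    ) ENGINE=MyISAM
    PARTITION BY KEY(id)
    PARTITIONS 16
");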

IanKelley




msg:4118556
 4:29 am on Apr 20, 2010 (gmt 0)

I often store tens of millions of rows in a single table without any performance issues on a single machine.

If you're looking at over 100 million rows, you should probably think about ways to break the data up.

You should be fine with a million records in a single table on a VPS, but if you have columns that contain a large amount of data and aren't accessed as often as the other columns, consider putting them in their own table.
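
One way that split could look, assuming the bulky, rarely-read data is a free-text notes field (all names below are made up for illustration):

<?php
// Sketch only: keep the wide, rarely-read column out of the hot table and
// join it in only when it is actually needed. Names are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

$pdo->exec("
    CREATE TABLE contact_notes (
        contact_id BIGINT UNSIGNED NOT NULL,
        notes      MEDIUMTEXT,
        PRIMARY KEY (contact_id)
    ) ENGINE=InnoDB
");

// Normal lookups read only the slim contacts table; this join runs
// only on the occasions the notes are actually displayed.
$stmt = $pdo->prepare(
    "SELECT c.name, n.notes
       FROM contacts c
       JOIN contact_notes n ON n.contact_id = c.id
      WHERE c.id = ?"
);
$stmt->execute(array(12345));
$row = $stmt->fetch(PDO::FETCH_ASSOC);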

dreamcatcher




msg:4118575
 6:12 am on Apr 20, 2010 (gmt 0)

You definitely have to use indexes or else the db will crawl. I have clients with billions of rows in their databases, but with indexes the data is returned in seconds.

dc
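
For reference, a minimal sketch of adding that kind of index, assuming contacts are usually looked up by email address (the names and connection details are placeholders):

<?php
// Sketch only: index the column you actually search on, otherwise MySQL
// has to scan every row. Names below are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

$pdo->exec("ALTER TABLE contacts ADD INDEX idx_email (email)");

// With the index in place this lookup touches only a handful of index pages.
$stmt = $pdo->prepare("SELECT id, name FROM contacts WHERE email = ?");
$stmt->execute(array('someone@example.com'));
$row = $stmt->fetch(PDO::FETCH_ASSOC);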

username




msg:4118595
 6:54 am on Apr 20, 2010 (gmt 0)

OK, so assuming I implement a primary key, i.e. an ID column, and index on that, there should be a significant performance boost as opposed to searching records based on text fields? If that is the basis of my lookups, there should be little to no issue, correct?
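
Rather than guessing, you can ask MySQL directly with EXPLAIN; a quick sketch, again using the placeholder contacts table from above:

<?php
// Sketch only: EXPLAIN reports which index (if any) a query would use.
// A primary-key lookup shows key = PRIMARY; an unindexed text search
// typically shows type = ALL, i.e. a full table scan.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

print_r($pdo->query("EXPLAIN SELECT * FROM contacts WHERE id = 12345")->fetchAll(PDO::FETCH_ASSOC));
print_r($pdo->query("EXPLAIN SELECT * FROM contacts WHERE name = 'Smith'")->fetchAll(PDO::FETCH_ASSOC));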

tangor




msg:4118604
 7:28 am on Apr 20, 2010 (gmt 0)

Indexing is where most of the performance comes from. You have up to 32 indexes per table; use as many as needed to get the job done.

When you approach a billion records you might think about sub-tables.

IanKelley




msg:4118998
 6:34 pm on Apr 20, 2010 (gmt 0)

I doubt this applies in the case of the OP, just responding for the sake of accuracy :-)

When you approach a billion records you might think about sub-tables.


So, say, 750 million records before you start to rethink your schema? I could see that working for data that you only need to look up occasionally, which is probably what you're referring to.

However, if it's data that's being accessed frequently, especially anything that visitors or members can cause to be accessed, hundreds of millions of records is a bad idea IMO.

With a table that size it doesn't matter which field you're indexing; the index file is going to be huge... and (up to) the entire file has to be read from disk or memory for every search. While that search runs:

A) Memory IO is tied up (provided you've bought the extra RAM to cover that, otherwise disk IO gets tied up and then you've really got a problem).
B) The script calling the database query waits for a response, holding data in memory and burning CPU cycles.

The end result is that you keep your visitors waiting longer and buy extra servers you don't really need, when a simpler solution is to break the data apart using some simple logic that the script can apply before the database is ever queried.
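
As a rough sketch of that kind of script-side logic, here contacts are spread across a fixed number of identical tables and the script picks the right one from the lookup key before any query is run (the table names and shard count are invented for illustration):

<?php
// Sketch only: route each lookup to one of 16 smaller tables (contacts_0
// .. contacts_15, all with the same structure) before touching the database.
define('SHARD_COUNT', 16);

function contactTableFor($email)
{
    // Cheap, stable hash of the key; the mask keeps it non-negative on 32-bit PHP.
    $shard = (crc32(strtolower($email)) & 0x7fffffff) % SHARD_COUNT;
    return 'contacts_' . $shard;
}

$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

$email = 'someone@example.com';
$table = contactTableFor($email);   // e.g. "contacts_7"

// Only this one small table (and its much smaller index) is read.
$stmt = $pdo->prepare("SELECT id, name FROM $table WHERE email = ?");
$stmt->execute(array($email));
$row = $stmt->fetch(PDO::FETCH_ASSOC);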
