Our database contains 70,000,000 records in one table.
We need help making our queries run very fast.
We plan to divide all of these records across multiple tables:
50 tables
10,000 tables
50,000 tables
We want to know how many tables is a good number in MySQL, and how that affects query speed.
Any ideas?
The first thing I would do before making any changes is to check the structure of the DB indexes (keys), and the queries. Could you post more of your structure and selection strings?
Without seeing the structure or knowing what you are selecting, I can only guess, but there are some basic things: use multiple-column indexes; force index selection in your queries (this prevents, or at least limits, table scans); make sure query caching is disabled; run a single key cache while you only have the one table (three is standard); delay inserts and replaces; check your queries; and make sure what you are selecting is indexed and ordered correctly.
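To make a couple of those points concrete, here is a rough sketch. The table and column names (`records`, `category`, `created_at`) are made up for illustration, since we have not seen the real structure; `INSERT DELAYED` applies to MyISAM tables only:

```sql
-- Composite (multiple-column) index covering the usual WHERE + ORDER BY:
ALTER TABLE records ADD INDEX idx_cat_created (category, created_at);

-- Force the optimizer to use that index, limiting table scans:
SELECT id, category, created_at
FROM records FORCE INDEX (idx_cat_created)
WHERE category = 42
ORDER BY created_at DESC
LIMIT 100;

-- Delayed insert: queue the write so reads are not blocked (MyISAM only):
INSERT DELAYED INTO records (category, created_at) VALUES (42, NOW());
```

The column order in the composite index matters: the equality column (`category`) comes first so the range/sort column (`created_at`) can be read in index order.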
I believe the maximum workable size depends much more on how you select, and on how light you can keep the index structure, than on physical size. You are going to have to hit the hard drive at some point; if you can know which row you are looking for before you get there, rather than having to read the disk just to find the index that tells you where the row is, you will still be fast.
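One way to check whether a query can be answered from the index alone, before ever touching the row data on disk, is `EXPLAIN` (again using the hypothetical `records` table from above):

```sql
-- "Using index" in the Extra column means the index alone satisfies
-- the query; no separate disk read of the row is needed:
EXPLAIN SELECT category, created_at
FROM records
WHERE category = 42
ORDER BY created_at DESC;
```

If `Extra` instead shows "Using filesort" or the `type` column shows `ALL`, the query is scanning or sorting on disk and the index structure needs another look.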
Just some of my thoughts.
Justin