| 1:50 pm on Apr 29, 2011 (gmt 0)|
>> This is not a poor database design
You sure? ;)
Seems like you are going to need to set up some sort of aggregate read-only view/table that combines all of these tables into a data warehouse. Not sure if MySQL would be the best DBMS to use for this much data, though, or if there are tools that would make this simpler.
But once you have the DW set up, you would just query against that.
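For example, a UNION ALL view is one way to get that combined read-only layer. A rough sketch, assuming all the tables share the same columns; the table and column names are just placeholders:

    CREATE VIEW combined AS
        SELECT 't1' AS source_table, id, name FROM t1
        UNION ALL
        SELECT 't2', id, name FROM t2
        -- ... one SELECT per table, generated by a script if need be
    ;

    -- Then all queries go against the one view:
    SELECT COUNT(*) FROM combined WHERE name = 'foo';

Bear in mind MySQL views are not materialized, so this only simplifies the querying; a real materialized table (like the cache table suggested below) is what buys the speed.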
|brotherhood of LAN|
| 2:14 pm on Apr 29, 2011 (gmt 0)|
If the tables are MyISAM, the query should be reasonably fast anyway, as there is a row count in the metadata for each table. Otherwise, it'll probably be slow any other way than what eelixduppy suggests.
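For what it's worth, that metadata is why an unfiltered COUNT(*) on MyISAM is nearly free (the table name here is just a placeholder):

    -- MyISAM keeps an exact row count in the table metadata,
    -- so this does not scan the table:
    SELECT COUNT(*) FROM t1;

    -- The stored counts for many tables can be read at once, too:
    SHOW TABLE STATUS LIKE 't%';

As soon as you add a WHERE clause, though, that shortcut no longer applies.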
| 9:10 pm on Apr 29, 2011 (gmt 0)|
This sounds like something I would create a cache table for. Dump all the bits from the other tables that you want to search, then run queries on the cache table.
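Something along these lines, assuming the tables share (id, name) columns; all names here are placeholders:

    CREATE TABLE search_cache (
        source_table VARCHAR(64)  NOT NULL,
        id           INT          NOT NULL,
        name         VARCHAR(255),
        KEY (name)
    );

    INSERT INTO search_cache SELECT 't1', id, name FROM t1;
    INSERT INTO search_cache SELECT 't2', id, name FROM t2;
    -- ... repeat for each table (the statements can be generated
    -- from information_schema.tables), then search just one table:
    SELECT source_table, id FROM search_cache WHERE name = 'foo';

The cache table has to be refreshed on a schedule, but every search then hits a single indexed table.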
| 1:54 pm on Apr 30, 2011 (gmt 0)|
Perhaps the MERGE storage engine could be useful? Although your tables would all have to be identical in structure (and MyISAM).
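A rough sketch of how that would look, assuming two identical MyISAM tables t1 and t2 (names are placeholders, and the column definitions must match the underlying tables exactly):

    CREATE TABLE all_data (
        id   INT NOT NULL,
        name VARCHAR(255)
    ) ENGINE=MERGE UNION=(t1, t2) INSERT_METHOD=NO;

    -- A query against the MERGE table hits every underlying table at once:
    SELECT COUNT(*) FROM all_data WHERE name = 'foo';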
| 3:16 pm on May 2, 2011 (gmt 0)|
Have a look at Sphinx search. You can index your tables in the background with a crontab; from then on you only have to search the index. Lightning fast.
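Roughly, each table gets a source/index pair in sphinx.conf, something like this (database name, credentials, columns and paths are all placeholders):

    source src_t1
    {
        type      = mysql
        sql_host  = localhost
        sql_user  = user
        sql_pass  = pass
        sql_db    = mydb
        sql_query = SELECT id, name FROM t1
    }

    index idx_t1
    {
        source = src_t1
        path   = /var/lib/sphinx/idx_t1
    }

plus a crontab entry to rebuild the indexes in the background, e.g. hourly:

    0 * * * * /usr/bin/indexer --rotate --all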
| 3:26 pm on May 2, 2011 (gmt 0)|
Depends on the type of queries. If a majority of the data will be returned from a query, then using an index may actually slow performance, since per-row index lookups can cost more than one sequential full scan.
| 5:17 pm on May 2, 2011 (gmt 0)|
Data is always returned with a corresponding query when using Sphinx, but I was referring to the context of the OP:
|about 200 tables in it, which need to be searched at once.|
|search all these tables at once|
Once a match is found in the index (assuming this is relational data), an exact query against the table will be substantially faster.
Hopefully that makes sense now.
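In other words, the two-step pattern looks like this (the ids and table name are made up for illustration):

    -- Step 1: the Sphinx index search returns the matching
    --         document ids, say 17, 42 and 105.
    -- Step 2: fetch the exact rows by primary key, which is very fast:
    SELECT * FROM t1 WHERE id IN (17, 42, 105);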