If I can't do that, how should it be done?
Also, separate question: if a table exists but doesn't have a primary key, is there any way to go back and add one (along with a key value for each existing row)?
ALTER TABLE mytable ADD COLUMN mykey INT(8) AUTO_INCREMENT PRIMARY KEY FIRST;
Something like that. I would read the linked page first, though, check what column size you want, and give everything appropriate names.
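To answer the second question above: in MySQL, adding an AUTO_INCREMENT primary key to an existing table also numbers the existing rows for you. A minimal sketch (the table and column names here are made up for illustration):

```sql
-- Hypothetical table that was created without a primary key
CREATE TABLE mytable (
  firstname VARCHAR(40),
  lastname  VARCHAR(40)
);

-- Add an auto-incrementing primary key; MySQL assigns
-- sequential values (1, 2, 3, ...) to the rows already in the table.
ALTER TABLE mytable
  ADD COLUMN mykey INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
```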
As far as reading 50 MB on every page load, yeah, that seems like a lot. What is the table structure, and what info are you storing? Do you see any way to break it up?
Even if you can't, 50 MB is still a relatively small table. A few things that should help:
- index the column you're searching on, obviously
- convert all VARCHAR fields to fixed-length fields
- break out any remaining fields that are not fixed-length (TEXT and BLOB) into a separate table. This is de-normalizing the table, but it should speed up the data seeks.
- retrieve only the columns you actually need, and never use SELECT *
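The steps above might look something like this in practice (table, column, and index names are made up, and the column sizes are guesses — adjust to your schema):

```sql
-- 1. Index the column you search on
CREATE INDEX idx_firstname ON mytable (firstname);

-- 2. Convert VARCHAR columns to fixed-length CHAR
ALTER TABLE mytable
  MODIFY firstname CHAR(40) NOT NULL,
  MODIFY lastname  CHAR(40) NOT NULL;

-- 3. Move TEXT/BLOB fields out to a side table keyed by the same id
CREATE TABLE mytable_text (
  id  INT UNSIGNED NOT NULL PRIMARY KEY,
  bio TEXT
);

-- 4. Select only what you need -- never SELECT *
SELECT id, firstname, lastname FROM mytable WHERE firstname = 'Clara';
```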
For comparison: on a 10 MB table with many variable-length columns, including some pretty big TEXT fields and lots of VARCHAR columns, a very simple query that doesn't join with any tables:
SELECT DISTINCT lastname, firstname FROM mydb WHERE firstname LIKE '%la%'
run on a live and somewhat busy server (though I forget exactly what hardware it's on) returned 845 records in 0.04 seconds.
By way of comparison, a simple seek on the primary key, as in
SELECT id, firstname, lastname FROM mydb WHERE id LIKE '%51'
also takes 0.04 seconds to return 164 records, but this drops to zero (within significant digits) if I retrieve just the "id" field.
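One caveat worth checking with EXPLAIN: a LIKE pattern with a leading wildcard (such as '%la%' or '%51') can't use an index for the lookup, so MySQL ends up scanning every row. When the query retrieves only indexed columns, though, it can be answered from the index alone without touching the table rows, which would explain why the id-only version above drops to effectively zero. A quick way to see what the optimizer is doing (assuming the table name from the examples above):

```sql
-- EXPLAIN reports whether an index is used and how many rows are examined
EXPLAIN SELECT id FROM mydb WHERE id LIKE '%51';
```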