|"hot switch" from MyISAM to InnoDB. Is this dangerous?|
| 4:14 pm on Aug 31, 2010 (gmt 0)|
We have a MyISAM table that is locking up under heavy load. Sometimes it even crashes the database. This is a table that gets a remarkable number of writes of all kinds: Updates, Inserts, Deletes.
We need to switch this table over to InnoDB.
I've chosen one or the other before, but always when setting up the db schema. I've never had to change engines on a full, running database.
I have some questions:
1) How long will it take?
- the table is over 5 million rows
- size is 825.2 MB
- it's getting hit more than 50 times per second at peak load
Is this like, 2 minutes, or 2 hours? What kind of downtime/interruption can I expect?
2) Is it safe to do this change "hot"?
e.g. ALTER TABLE table_name ENGINE = InnoDB;
If I do this while the traffic is flowing (I'd choose a slow time of day, like 4am), will it blow up?
3) Will this introduce any functional glitches?
to wit: will the table work just like it did before? If I introduce functional bugs into this system, my arse is grass.
4) Is there a strict recommended procedure for doing this that I should be aware of?
Please help - traffic is growing daily and my database is groaning under the load. I want to do this in the next 48 hours.
| 5:30 pm on Aug 31, 2010 (gmt 0)|
I'd recommend reading over the manual page first and foremost.
How long it will take depends entirely on your server configuration. How many processors? How much RAM? Is MySQL tuned to take advantage of extra RAM you've installed?
Functional glitches? Not that I can think of.
I wouldn't recommend doing any table-level operations "hot." What I would recommend is a multi-step process:
1. Create a copy of the table "tablename_innodb"
2. Convert the copy from MyISAM to InnoDB
3. Rename the original table "tablename_backup"
4. Rename the copy "tablename"
This obviously leaves out some records affected during the transition which you'd have to synchronize after the fact, but it's better than locking up an entire in-use table for an unknown period of time. If you're using autoincrementing keys, then the synchronization will be a big pain.
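Sketched in SQL, the steps above might look like the following. The table names are placeholders, and I've reordered slightly: converting the copy while it's still empty makes the ALTER instant, so only the row copy takes real time.

```sql
-- 1. Create an empty copy with the same structure
CREATE TABLE tablename_innodb LIKE tablename;

-- 2. Convert the (still empty) copy to InnoDB, then fill it
ALTER TABLE tablename_innodb ENGINE = InnoDB;
INSERT INTO tablename_innodb SELECT * FROM tablename;

-- 3 & 4. Swap the tables; a multi-table RENAME is a single atomic operation
RENAME TABLE tablename        TO tablename_backup,
             tablename_innodb TO tablename;
```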
| 4:56 pm on Sep 1, 2010 (gmt 0)|
I would never do such a hot switch myself. Just put up a "we'll be back" page for a few minutes via a single rule in .htaccess, then take it down as soon as you're done.
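As a sketch, the single .htaccess rule mentioned above could look something like this (the maintenance page's file name is a placeholder):

```apache
# Serve a static maintenance page for every request
# (remove these lines once the conversion is finished)
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/maintenance\.html$
RewriteRule ^ /maintenance.html [L]
```

The condition keeps the rule from rewriting the maintenance page itself into an infinite loop.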
|This is a table that gets a remarkable number of writes of all kinds: Updates, Inserts, Deletes. |
But the bigger question, of course, is whether you need to convert at all. The UPDATEs/INSERTs may be stacking up only because the SELECTs are not being done efficiently and quickly enough. Also, InnoDB does not do well with full-text searches, or generally with any large text or blob fields. Counts, as in COUNT(*), can be slow. And also important: InnoDB file sizes are much bigger, which makes them slow to load, and if your RAM is not too big (common on VPSes) this may be a performance killer.
So, it's not really a silver bullet. You kinda need to look at your application first. Fix the bottlenecks there, see if you can actually use the benefit of transactions, and tweak your MySQL server settings. THEN decide if you still need to convert. And no, definitely not a hot switch.
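For what it's worth, these are the InnoDB settings that tend to matter most for a write-heavy table. The values below are purely illustrative, not recommendations for any particular box - size them to your actual RAM:

```ini
# my.cnf fragment -- values are illustrative only
[mysqld]
innodb_buffer_pool_size = 512M      # biggest single factor for InnoDB performance
innodb_log_file_size    = 128M      # larger logs smooth out write-heavy loads
innodb_flush_log_at_trx_commit = 2  # trades a little durability for write speed
```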
| 6:13 pm on Sep 1, 2010 (gmt 0)|
Thanks @whoisgregg & @1script,
Some reasons why InnoDB seemed like the right solution:
- SELECTs are all very simple single row queries, no joins, no subselects, no full text or "LIKE" queries. There are indexes on all the pertinent columns, and they run extremely fast - when they're not locked up.
- An immense percentage of the queries are writes, and in my SQL admin panel I can see that many threads, sometimes hundreds, are waiting with a status of "LOCKED".
- there are occasionally some COUNT(*) queries, but these are in the minority and have never been a bottleneck.
Here are the results of some initial experiments.
First, I took a dumped backup of the live database and restored it in a sandbox, so I could run these tests against a database of the same size:
ALTER TABLE table_name ENGINE = InnoDB
That one query ran for over 45 minutes, and still wasn't done when I killed the process. My downtime curfew is 30 minutes; any solution that takes the service down for longer than that isn't acceptable.
Then I tried creating a new empty InnoDB table, and dumping the one into the other with INSERT INTO ... SELECT ...
That also ran for about 30 minutes before I killed it; at that point the table was still only 1/3 full.
If this were done "hot" on the live db, there would be some pretty serious synchronization problems at the end. With ~50 queries per second over more than 30 minutes, it would be painful to clean up all the dangling messages when the switch is made to the new InnoDB table. Downtime is the only option, and this method exceeded my downtime limit.
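One variant of the bulk copy that can at least bound how long any single statement holds locks is to copy in primary-key ranges. This assumes an auto-increment primary key (here called `id`, which is an assumption about the schema):

```sql
-- Copy rows in chunks instead of one giant INSERT ... SELECT
-- (assumes an auto-increment PK named `id`; adjust to the real schema)
INSERT INTO table_name_new
SELECT * FROM table_name
WHERE id >= 1 AND id < 100001;

-- ...repeat, advancing the range by 100000 each time,
-- until the maximum id in the source table is reached
```

Each chunk finishes quickly, so other queries get a chance to run between chunks, but the total wall-clock time (and the synchronization problem) remains.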
-- so --
I ended up changing the application logic. The old table is still there, and the new table too. As users interact with the app, their data is moved from the old table to the new one. It's a gradual migration, seamless from the user's perspective, requiring no downtime at all.
The old MyISAM table gets: SELECT, UPDATE, DELETE.
The new InnoDB table gets: SELECT, UPDATE, DELETE, INSERT.
Already, the size of the old MyISAM table is shrinking, and the InnoDB table is growing. I still see locked threads when there's a DELETE performed on the MyISAM table, but overall the performance has improved.
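The read path of a gradual migration like this can be sketched as read-through logic: check the new table first, and on a miss, move the row over from the old one. Here's a minimal sketch in Python, with SQLite standing in for MySQL and hypothetical table/column names:

```python
import sqlite3

def fetch_and_migrate(conn, user_id):
    """Read-through migration: look in the new table first; on a miss,
    move the row from the old table into the new one.
    (Table/column names are hypothetical; SQLite stands in for MySQL.)"""
    row = conn.execute(
        "SELECT user_id, data FROM messages_new WHERE user_id = ?", (user_id,)
    ).fetchone()
    if row is not None:
        return row
    row = conn.execute(
        "SELECT user_id, data FROM messages_old WHERE user_id = ?", (user_id,)
    ).fetchone()
    if row is None:
        return None
    # Migrate the row: insert into the new table, remove it from the old
    conn.execute("INSERT INTO messages_new (user_id, data) VALUES (?, ?)", row)
    conn.execute("DELETE FROM messages_old WHERE user_id = ?", (user_id,))
    conn.commit()
    return row

# Tiny demo with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages_old (user_id INTEGER PRIMARY KEY, data TEXT)")
conn.execute("CREATE TABLE messages_new (user_id INTEGER PRIMARY KEY, data TEXT)")
conn.execute("INSERT INTO messages_old VALUES (1, 'hello')")
row = fetch_and_migrate(conn, 1)   # moves the row from old to new
```

New INSERTs go straight to the new table, so the old table only ever shrinks.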
| 11:59 am on Sep 2, 2010 (gmt 0)|
Follow-up: during peak traffic hours, the InnoDB table frequently encountered deadlocks. Roughly 25% of requests were failing. I've switched back to MyISAM and I'm looking for more/better scaling solutions.
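For anyone hitting the same wall: InnoDB deadlocks are an expected outcome under write contention, and the usual remedy is to catch the deadlock error and retry the whole transaction rather than treat it as a hard failure. A minimal retry sketch in Python, where `do_transaction` is a hypothetical callable that raises an exception containing "Deadlock" when MySQL rolls it back:

```python
import random
import time

def run_with_retry(do_transaction, max_attempts=5):
    """Retry a transaction when the database reports a deadlock.
    (Sketch only: `do_transaction` is a hypothetical callable that raises
    an exception whose message contains 'Deadlock' when it is rolled back.)"""
    for attempt in range(max_attempts):
        try:
            return do_transaction()
        except Exception as exc:
            if "Deadlock" not in str(exc) or attempt == max_attempts - 1:
                raise  # not a deadlock, or out of attempts: re-raise
            # Back off briefly so the competing transaction can finish
            time.sleep(random.uniform(0, 0.05 * (attempt + 1)))
```

Whether retrying would have brought that 25% failure rate down depends on why the transactions were colliding in the first place; it's a mitigation, not a fix for the underlying contention.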