ergophobe - 6:11 pm on Mar 7, 2013 (gmt 0)
It's going to depend on your system and where the bottlenecks are: how much memory you have, and how much you're writing versus reading. I'm not an expert by any means. For a production system, you should get the tuning done by someone who really knows how to scale InnoDB.
I should probably just leave it there and tell the OP to go do some research. I'll elaborate anyway, but take what follows as a starting point for your own research, and please come back and correct any errors I make; I'm just shooting from the hip here.
In my case, I just do basic playing with values to get my local dev station so that it can run Drupal reasonably well with InnoDB tables (and I assume WP would have the same issues with InnoDB).
The basic principle is this: InnoDB defers writing to disk when it can, but constraints on the memory it's allowed to use, the requirement to be ACID compliant, and so forth can force it to write to disk more often than it needs to.
If you are monitoring disk activity and you run a query (say, load a WP page), you'll see disk activity spike to very high levels and stay there long after the page is loaded while InnoDB "catches up", that is, takes care of the deferred work it has queued. With the wrong values, InnoDB can nearly max out your disk all day long with hardly any queries actually being run.
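If you'd rather watch that catch-up from inside MySQL than with an OS disk monitor, a couple of standard InnoDB status counters are useful (the interpretation notes in the comments are my rough reading, not official thresholds):

```sql
-- Dirty pages are modified pages not yet written to disk; if this stays
-- high long after your queries finish, InnoDB is still catching up.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';

-- This counts times a query had to wait for a page to be flushed before
-- it could proceed; anything much above zero suggests the buffer pool
-- is too small for your write load.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_wait_free';

-- The full picture (log activity, pending reads/writes, etc.);
-- \G is a mysql client directive for vertical output.
SHOW ENGINE INNODB STATUS\G
```

Run these before and after loading a page and compare; the deltas tell you more than the absolute numbers.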
So if you're having I/O bottlenecks, your goal is to get it to buffer to memory more and write to disk less. But if you're having memory bottlenecks...
And if you're having both problems, you need to get more memory and an SSD probably.
Anyway, looking over my config file (my.cnf, or my.ini on Windows) and trying to remember which values I've customized... (again, check my work!)
innodb_buffer_pool_size = 256M
I believe the default here is a tiny 8M, which everyone says is inadequate. I've heard it said (I don't know if this is official or just speculation) that the value is so low by default precisely so that you *have* to tune it to get decent performance.
I'm running with a fairly low 256M. The docs say you can go as high as 80% of memory, but that would probably only be okay on a dedicated DB server. Like PHP's memory limit, you don't want to go so high that you starve other processes. I should probably bump this up to 512M, though; this is an 8GB machine.
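One quick sanity check: the value in the config file isn't necessarily what the running server is using (it may not have been restarted since the edit). You can ask the server directly; the variable is stored in bytes:

```sql
-- Check the value the running server actually has (bytes -> MB):
SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;
```

If that doesn't match your my.cnf, the server hasn't picked up your change yet.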
innodb_log_file_size = 64M
The docs suggest around 25% of the buffer pool size; 25% of my 256M pool is 64M, so this lines up.
innodb_flush_log_at_trx_commit = 1
This determines how and when the log buffer is flushed to disk. 1 (the default) flushes on every commit and is required for ACID compliance, but if you can afford to lose roughly the last second of transactions in the event of a crash, you can set this to 0 (or the middle-ground value 2) and cut disk flushing way down.
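Putting the three settings together, the relevant section of my my.cnf looks roughly like this (the values are what I happen to run on an 8GB dev box, not tuned recommendations):

```ini
[mysqld]
# Main InnoDB cache for data and indexes; the biggest single knob.
innodb_buffer_pool_size = 256M

# Redo log size; docs suggest around 25% of the buffer pool.
innodb_log_file_size = 64M

# 1 = flush to disk on every commit (ACID, the default)
# 2 = write on commit, flush to disk about once a second
# 0 = write and flush about once a second (cheapest, least safe)
innodb_flush_log_at_trx_commit = 1
```

On a dev box where the data is disposable, 0 or 2 is a cheap way to quiet the disk; on production, think hard before leaving 1.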
Some of these are not straightforward to change. To resize the log files, for example, you need to cleanly shut down the server, move the existing log files out of the data directory, change the setting, start the server so it generates new files at the new size, and only delete the old files once you've verified that everything works. Get the order wrong and you can end up with a server that won't start.