|Dedicated or Cloud Instance?|
Was looking at a dedicated server, but is cloud a better option in the long run?
| 12:52 pm on Aug 23, 2010 (gmt 0)|
I have a dedicated server. It hosts a number of small sites and one larger one which also uses MySQL heavily, and that is giving me high server loads and therefore a slow site.
I decided I would get a 2nd server and move the database to it; it's a simple, quick process, and if things go smoothly I can run a few cron jobs from the 2nd server as well.
Then 'Cloud Computer Instances' were suggested to me. I was told that if I went that way, it would be best to keep the database on the dedicated server and move apache over to the CCI.
I may be slow but I am struggling to get my head around it.
1. How do I know how many CCIs I am likely to need?
2. My host sells the CCIs by the hour or monthly. Since I would be running a server, and web sites are 24/7, I assume monthly would be better. Is this right?
3. At the monthly price there is not much difference between the cost of a CCI and an equivalent dedicated server. The main difference is scalability: if I get a server that's too small it becomes a pain, so I would probably end up paying more for a server with 'spare' capacity.
Any advice in simple English would be really appreciated.
| 9:40 pm on Aug 23, 2010 (gmt 0)|
Before making a decision you have to know where your current performance bottleneck is. Splitting the database and web server activities over two machines (dedicated or cloud) may give better utilization of the CPU. But at the same time you will be generating a lot of network traffic for data going from the SQL server to the web server that was previously in memory and handled directly by the CPU. Splitting the activities over two servers might therefore give lower overall site performance rather than higher.
You say that you are seeing a high server load. What exactly are you experiencing? Is the load average high, or the CPU utilization? These are two different things, and they may need different actions to solve.
If you have access to the Linux command line, you might want to run top -b -n 1 | head -5 (batch mode gives output you can copy/paste) and post the first lines here in the thread. The header of the top output gives information about load average, CPU utilization and memory use, and can help decide whether moving to a separate dedicated database server plus a cloud frontend would actually increase site performance.
| 10:10 pm on Aug 23, 2010 (gmt 0)|
top - 22:38:03 up 125 days, 11:32, 1 user, load average: 17.57, 6.12, 3.18
Tasks: 300 total, 6 running, 293 sleeping, 0 stopped, 1 zombie
Cpu(s): 11.5%us, 3.4%sy, 0.0%ni, 84.9%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8310788k total, 7111504k used, 1199284k free, 406400k buffers
Swap: 1052248k total, 88k used, 1052160k free, 3189756k cached
This is not constant; it surges for about 10 minutes and then drops back to a low average load. The CPU is a dual quad-core.
After doing some research, it turns out a friend had installed a couple of WordPress sites with a plugin that fetched RSS feeds and posted them. The database was filling up with post revisions and he had tables with 500k rows. We have cleaned up most of it but he still has some work to do.
| 6:53 am on Aug 24, 2010 (gmt 0)|
Memory usage looks fine: even under this load you have over 1GB of free memory in your server. CPU usage with 84.9% idle time is also no problem. The main problem seems to be the load average of 17.57. Load average is often confused with CPU utilization, but they measure different things. The load average is the average number of processes waiting in the run queue. That counts not only processes which are temporarily stopped because another process has the CPU, but also processes which are waiting for I/O. With a database driven site they are probably waiting for disk I/O.
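To make the distinction concrete, here is a small sketch (assuming a Unix system) that reads the same load averages top shows and compares them to the core count:

```python
import os

# os.getloadavg() returns the same 1-, 5- and 15-minute load averages
# that top prints in its header (Unix only).
load1, load5, load15 = os.getloadavg()
cores = os.cpu_count() or 1

print(f"1-min load {load1:.2f} on {cores} cores")

# A sustained load well above the core count means processes are
# queueing -- either for CPU time or, as suspected here, for disk I/O.
# A load of 17.57 on an 8-core box is roughly a 2x-oversubscribed run queue.
if load1 > cores:
    print("run queue longer than the core count: processes are waiting")
```

Note that the 84.9% idle CPU in the top output above, combined with a load of 17.57, is exactly the signature of processes blocked on I/O rather than competing for CPU.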
In this specific situation I doubt that splitting the web server activity and the MySQL activity over two different instances will increase performance much. Your bottleneck is disk I/O, and that won't change unless you make significant changes to the MySQL setup.
The 500k-row tables you mention shouldn't be a problem by themselves. I have a web server with tables of 10M rows each which cause no performance problems. Problems only start when there is not enough memory allocated to MySQL to store intermediate query results, or when there are queries on those tables without matching indexes. In your situation, optimizing the MySQL memory allocation and adding the right indexes for your slowest queries will probably bring a bigger improvement than spending money on extra hardware. The only hardware improvement I can think of is adding extra RAM, and then allocating that RAM to the MySQL process.
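As a starting point, something along these lines in my.cnf covers the two suggestions above (memory for MySQL, and finding the queries that need indexes). The values are purely illustrative assumptions for a mostly-MyISAM server with ~8GB RAM; tune them to your own workload:

```ini
# my.cnf sketch -- illustrative values only, not a recommendation
[mysqld]
key_buffer_size     = 1G    # MyISAM index cache; the single most
                            # important MyISAM memory setting
tmp_table_size      = 64M   # let intermediate query results stay in RAM
max_heap_table_size = 64M   # must match tmp_table_size to take effect
slow_query_log      = 1     # log the queries worth indexing
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 2     # seconds; anything slower gets logged
```

Once the slow log has collected some entries, running EXPLAIN on the logged queries shows which ones are scanning whole tables for lack of an index.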
With heavily used MySQL servers which use the MyISAM storage engine for large tables, you may also run into locking problems. MyISAM only supports table locking: for every update the whole table is locked and other processes can't access it at all. More sophisticated engines like InnoDB do row-level locking, allowing far more simultaneous access to heavily written tables. Converting a table is a single statement (ALTER TABLE tablename ENGINE=InnoDB), but it rebuilds the whole table, so do it in a quiet period and back up first.
| 12:15 pm on Aug 24, 2010 (gmt 0)|
Thanks again. I have taken what you said on board, reviewed my MySQL config, and made changes where it looked like they would benefit the most. I will see what happens over the next few days and then, if necessary, add some more memory.
As for MyISAM and InnoDB, I'm afraid it's one of those things where I used MyISAM when I first started with MySQL and have stuck with it ever since. I will look into the differences and go from there.
| 3:34 pm on Aug 24, 2010 (gmt 0)|
>>CPU usage with 84.9% idle time is also no problem.
At what point would you say it *is* a problem?
| 5:28 pm on Aug 24, 2010 (gmt 0)|
I was going by the average loads rather than CPU usage, which has gone higher than shown at times but is normally lower.
If server load has a detrimental effect on the speed of my sites, which it does at times, then it IS a problem.
| 8:04 pm on Aug 30, 2010 (gmt 0)|
Have you considered a caching solution? I was in a similar predicament a few years back. MySQL was my bottleneck on my dedicated server. I ramped up my RAM, optimized my queries, created indexes, tweaked my config... still had problems.
Then I read about a caching solution that many big-time sites use called Memcached. It is free, general-purpose caching software that you can use from virtually any programming language. You need to do some development work to save pages to the cache, pull them from the cache, and update the cache when a page changes, but once you get that ironed out it works wonders for server performance.
Since we rolled it out, our server has not been bogged down once in over a year. It used to happen almost daily. We even weathered traffic surges a few months ago that peaked around 50k visitors a day from some big press/news/links.
Our server is also a dual-quadcore... we have 10 GB of RAM. We dedicate 2 GB of RAM to the memcache for storing our cached pages.
Plus, with memcache you don't have to store whole pages; you can just store the results of queries, etc.
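The read-through pattern described above looks roughly like this. A minimal sketch: the dict here stands in for a real memcached client, and fetch_from_db is a hypothetical placeholder for the expensive query or page render:

```python
import time

# Stand-in for a memcached client; a real client would do the same
# get/set operations over the network instead of against a local dict.
cache: dict[str, tuple[float, str]] = {}
TTL = 300  # seconds a cached entry stays valid

def fetch_from_db(key: str) -> str:
    # Hypothetical expensive MySQL query or page render
    return f"rendered page for {key}"

def get_page(key: str) -> str:
    entry = cache.get(key)
    if entry is not None and time.time() - entry[0] < TTL:
        return entry[1]                 # cache hit: no database work at all
    value = fetch_from_db(key)          # cache miss: do the real work once
    cache[key] = (time.time(), value)   # store it for the next request
    return value

def invalidate(key: str) -> None:
    # Call this whenever the underlying page or data changes
    cache.pop(key, None)
```

With real memcached the dict operations become client.get(key) and client.set(key, value, time=TTL); keeping invalidation correct when content changes is the development work mentioned above.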