| 1:17 pm on Jan 3, 2005 (gmt 0)|
Yep, sounds like it's time to grow again!
Raid is important - it will keep you running if a hard drive crashes.
And you probably need to start looking at failover options as well.
| 1:43 pm on Jan 3, 2005 (gmt 0)|
It is time to move the SQL Server to its own box. Having it compete with IIS for CPU cycles is not good.
RAID 5 will help you in that the INSERT / SELECT requests will now be spread over multiple hard drives. You pay a write penalty in RAID 5 for the parity information that protects you in case of a disk failure. You can avoid most of this penalty by going to RAID 10 (RAID 1 + RAID 0), which stripes data across mirrored pairs of drives. Another good RAID feature to get is hot-swappable hard drives.
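The trade-off can be sketched with a toy model (the numbers are illustrative, not from any vendor): a small random write on RAID 5 costs four back-end I/Os (read old data, read old parity, write new data, write new parity), while RAID 10 just writes both mirror copies.

```python
def random_write_ios(level: str) -> int:
    """Back-of-envelope I/O cost of one small random write per RAID level."""
    penalty = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4}
    return penalty[level]

def usable_capacity_gb(level: str, drives: int, drive_gb: int) -> int:
    """Usable space: RAID 5 loses one drive to parity, mirrored levels lose half."""
    if level == "raid5":
        return (drives - 1) * drive_gb
    if level in ("raid1", "raid10"):
        return (drives // 2) * drive_gb
    return drives * drive_gb

print(random_write_ios("raid5"))             # 4 I/Os per small write
print(usable_capacity_gb("raid5", 6, 250))   # 1250 GB usable
print(usable_capacity_gb("raid10", 6, 250))  # 750 GB usable
```

So RAID 5 gives you more usable space from the same drives, and RAID 10 gives you cheaper writes: that's the whole trade.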
SQL Server will benefit from multiple processors and all the memory that you can throw at it. The ideal situation (if possible) is to have enough memory so that the entire database is memory resident.
| 7:49 pm on Jan 3, 2005 (gmt 0)|
Thanks for the advice :). I increased the RAM to 16 GB and added two 250 GB IDE drives in a RAID 1 configuration, which will hold the OS and the transaction log.
| 8:18 pm on Jan 3, 2005 (gmt 0)|
first of all: NICE setup!
our main DB currently runs on a RAID 5 Xeon box, and a couple of small changes increased disk I/O dramatically (about 50%):
- changing the controller to 'cached I/O' with write-back (not write-through)
- under Linux, 'deadline' is the scheduler to use when it comes to random/DB access
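On 2.6 kernels the active elevator shows up bracketed in /sys/block/&lt;dev&gt;/queue/scheduler, and writing a name to that file switches it. A small sketch of reading that format (the helper name and sample strings are mine, not from the kernel):

```python
def parse_schedulers(raw: str) -> tuple[list[str], str]:
    """Parse the contents of /sys/block/<dev>/queue/scheduler.
    The kernel brackets the active scheduler, e.g. 'noop [cfq] deadline'."""
    names = [t.strip("[]") for t in raw.split()]
    active = next(t.strip("[]") for t in raw.split() if t.startswith("["))
    return names, active

# e.g. on a box still running the default elevator:
names, active = parse_schedulers("noop anticipatory [cfq] deadline")
print(names)   # ['noop', 'anticipatory', 'cfq', 'deadline']
print(active)  # cfq
# To switch (as root): echo deadline > /sys/block/sda/queue/scheduler
```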
All writes go to this 'big' box and most reads go to 3 'slaves' which have ordinary SCSI drives. Some SELECTs, especially search-related ones, can lock up a DB, so these should always go to a dedicated (cheap) slave server.
The best setup would probably be RAID 10, which can let you survive 2 crashed disks at once (as long as they are not in the same mirrored pair) - sometimes a second disk crashes right after the first, while the RAID setup is doing a complete rebuild (heavy disk I/O).
The other thing you can do is to set up multiple DBs for different tasks and to store/cache certain things directly in (shared) memory.
| 9:08 pm on Jan 3, 2005 (gmt 0)|
Move SQL to its own box.
With 25-30 million transactions per day you must be averaging something like 350 database transactions per second, which is pretty considerable.
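The arithmetic behind that estimate (86,400 seconds in a day):

```python
seconds_per_day = 24 * 60 * 60  # 86,400

for per_day in (25_000_000, 30_000_000):
    avg = per_day / seconds_per_day
    print(f"{per_day:,}/day -> ~{round(avg)} tx/sec average")
# 25M/day -> ~289/sec, 30M/day -> ~347/sec; peak load will be well above that
```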
It depends on what you are doing with the database, but unless your queries are pulling large datasets, you are probably more disk limited than you are processor limited.
You'd probably see more of a performance boost with a better disk infrastructure than you would with 4 processors, and you'd also avoid having to get a SQL Enterprise license, which would cost more than the hardware.
High end SQL setups should typically have at least 3 distinct physical volumes -- OS, transaction log, and data. Depending on the data, it may be further segmented onto more physical volumes. In our environment, we typically have the OS on a mirror, the log files on RAID 1 or 10, and the data on RAID 5. The more drives the merrier -- more spindles means better performance. If budget allows, go for RAID 10 on both the transaction log and data arrays.
I wouldn't suggest putting the transaction logs on IDE Drives as they will become a bottleneck -- the log needs to be written before the data is written. Initially you may get a performance boost just because you'll be eliminating some contention all around, but it isn't a good long term strategy.
At the levels you're talking about you need many more smaller drives, not fewer larger drives.
From a memory perspective it also comes back to query patterns -- large querysets will benefit from extra RAM, as SQL Server will use it for more data caching. The RAM won't help your write performance.
You should also be moving to a platform that you can readily expand -- any solid raid platform should let you expand the RAID10 and RAID5 arrays 'live', which lets you add drives as you need them for performance or capacity reasons.
Don't forget about future expansion as well -- clustering is a logical requirement as your site grows and the downtime becomes expensive (or more expensive than it is already, as the case may be) -- make sure your infrastructure will support it.
| 9:21 pm on Jan 3, 2005 (gmt 0)|
I can second that... modern CPUs are incredibly efficient. Our main DB, serving some 400-500 requests per second, shows the CPUs 70-80% idle. It's all about disks, disks, disks and RAM. And of course the DB should be on its own server without any other processes running.
|you are probably more disk limited than you are processor limited. |
Don't waste $$$ on a quad server unless your application really needs these expensive DB CPU cycles.
| 9:22 pm on Jan 3, 2005 (gmt 0)|
Avoid RAID 5 on a database server -
|RAID 5 offers lower performance than RAID 0 or RAID 1 but higher reliability and faster recovery. |
RAID (Optimizing Database Performance (SQL Server))
| 2:21 am on Jan 4, 2005 (gmt 0)|
One of my competitors gets 3 times my traffic and has ten 4-way servers and two 8-way servers for just the database. The webservers number in the hundreds.
Think of selecting all WebmasterWorld members within a 100-mile radius of town abc and then sorting them by the time they last visited the site. Add to that allowing filtering by any criteria, plus paging of results. This is done around 40 times a second, alongside a few hundred not-so-intensive requests a second. Many of the tables contain 2 to 4 million records, so the selects/indexes are fairly large. I figure as the site grows larger I can break off parts of the database onto separate machines, but for the core database I need something super fast.
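A radius query like that boils down to a great-circle distance filter before the sort and paging. A minimal sketch (haversine formula; the member tuples and the town's coordinates are made-up placeholders):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in miles between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# Filter by radius, sort by last visit, then page -- the shape of the
# query described above (members: (name, lat, lon, last_visit_timestamp)):
members = [("alice", 40.71, -74.01, 1104700000),
           ("bob", 34.05, -118.24, 1104800000)]
town = (40.75, -73.99)
nearby = sorted((m for m in members
                 if miles_between(m[1], m[2], *town) <= 100),
                key=lambda m: m[3], reverse=True)
page = nearby[:20]
print([m[0] for m in page])  # ['alice'] -- bob is ~2,400 miles away
```

Doing that filter/sort over millions of rows 40 times a second is exactly the kind of read load the slave-server suggestions above are aimed at.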
As for RAID 5, I was reading that if you put in enough drives, the write limitation is overcome.
| 2:49 am on Jan 4, 2005 (gmt 0)|
|As for RAID 5, I was reading that if you put in enough drives, the write limitation is overcome. |
This is just wishful thinking.
A good source of information is the documentation from the various controller vendors.
| 2:51 am on Jan 4, 2005 (gmt 0)|
sounds like a dating site ;) For this kind of job I would set up one RAID 10 master DB and several replicating DB slaves, and do all these SELECTs on the slaves, which can run on cheap hardware (dual CPU, SCSI)... every time your site gets slow you just add another slave. Your master would mostly do disk I/O (saving / distributing data) and no CPU-intensive work.
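The routing logic for that split is simple: writes always hit the master, plain SELECTs are spread over the replicas. A sketch (the connection objects are string stand-ins, and a real setup also has to cope with replication lag):

```python
import random

class ReplicaRouter:
    """Toy read/write splitter: writes to the master, reads to a random slave."""

    def __init__(self, master, slaves):
        self.master = master
        self.slaves = list(slaves)

    def pick(self, sql: str):
        if sql.lstrip().lower().startswith("select"):
            # Heavy search-style SELECTs land on cheap slaves, keeping
            # the master free for write I/O and log shipping.
            return random.choice(self.slaves) if self.slaves else self.master
        return self.master

router = ReplicaRouter("master", ["slave1", "slave2", "slave3"])
print(router.pick("INSERT INTO hits VALUES (1)"))  # master
print(router.pick("SELECT * FROM members"))        # one of the slaves
```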
| 5:29 pm on Jan 13, 2005 (gmt 0)|
I did use this thread to audit one of our clients, and it was really helpful!