
Webmaster Hardware Forum

SSD Hard Drives, ready for server use?
wheel
msg:4245866
1:46 pm on Dec 24, 2010 (gmt 0)

I'm considering a new web server in 2011. Just installed an SSD in my wife's computer and it smokes.

But are SSD hard drives ready for ongoing server use? Can I drop 2 or 3 drives into a web server and let them run for 3-4 years with no problems?

 

J_RaD
msg:4245878
2:40 pm on Dec 24, 2010 (gmt 0)

SSDs have a set lifetime, so if it's going into a server with high IO, 3 to 4 years might be all they last.

I'm not sure spinning drives will ever die; I just think we'll see hybrid use of SSDs and spinning disks:

SSD for OS
spinning for data

lammert
msg:4246288
9:25 pm on Dec 26, 2010 (gmt 0)

SSDs have a limited lifetime for writing data, not for reading. If your system depends on many random reads per second, they can be a good choice.

BillyS
msg:4246302
11:31 pm on Dec 26, 2010 (gmt 0)

I just installed one in my desktop. I read a lot before purchasing. I would say the answer is... it depends. As lammert points out, the wear is on writing, not reading. In fact, as these devices "wear out" they turn into read-only devices.

MLC devices will wear out after around 10,000 write cycles, while SLC devices last about 10 to 30 times longer.

With wear leveling, you can almost calculate how long the drive will last. That's why I say it depends. But I'm guessing they last a lot longer than people think...

For example, a 120 GB MLC device can last up to 1,000 terabytes of written data. To burn through that, you'd have to rewrite the entire drive nearly six times a day, 365 days a year, for 4 years.
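To make that endurance math concrete, here is a minimal back-of-the-envelope sketch, assuming the figures from the post (10,000 write cycles, 120 GB capacity) plus two simplifications the post implies: perfect wear leveling and no write amplification. Real drives will land somewhere below this.

```python
# Back-of-the-envelope SSD endurance estimate.
# Assumptions: 10,000 write cycles per cell, perfect wear leveling,
# no write amplification -- both idealizations.
capacity_gb = 120
write_cycles = 10_000

raw_endurance_tb = capacity_gb * write_cycles / 1000
print(f"Raw endurance ceiling: {raw_endurance_tb:,.0f} TB written")  # 1,200 TB

# Full-drive rewrites per day needed to burn a 1,000 TB budget in 4 years:
budget_gb = 1_000 * 1_000
days = 4 * 365
rewrites_per_day = budget_gb / capacity_gb / days
print(f"Rewrites per day, sustained for 4 years: {rewrites_per_day:.1f}")  # ~5.7
```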

bcc1234
msg:4246370
11:14 am on Dec 27, 2010 (gmt 0)

For example, a 120 GB MLC device can last up to 1,000 terabytes of written data. To burn through that, you'd have to rewrite the entire drive nearly six times a day, 365 days a year, for 4 years.


One of my servers that runs a busy forum averages 1.2MB of writes per second. That's 1.2MB times 86,400 seconds in a day, or around 100GB per day... or around 25,000,000 4KB blocks written.

Most of it is updating the same files (database) over and over and over and over....

The total database size is only around 2GB.

So don't underestimate how much writing is going on.

Try running vmstat/iostat and see how much writing your server actually does.
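If you'd rather script that measurement than eyeball vmstat/iostat output, here is a minimal sketch that samples /proc/diskstats twice. It assumes Linux; the device name "sda" and the 10-second window are placeholder assumptions.

```python
# Estimate sustained write throughput by sampling /proc/diskstats twice.
# Linux only; "sda" is a placeholder device name.
import time

def sectors_written(device: str = "sda") -> int:
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[9])  # field 10: sectors written (512 B each)
    raise ValueError(f"device {device!r} not found")

INTERVAL = 10  # seconds to sample
before = sectors_written()
time.sleep(INTERVAL)
after = sectors_written()

mb_per_s = (after - before) * 512 / INTERVAL / 1e6
print(f"~{mb_per_s:.2f} MB/s written, ~{mb_per_s * 86400 / 1000:.0f} GB/day")
```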

J_RaD
msg:4246402
3:53 pm on Dec 27, 2010 (gmt 0)

You also have to look at what the bottleneck is in your server situation; if it isn't the HD, then you won't see much improvement.

It's all about your end-to-end baseline.

jskrewson
msg:4246415
4:53 pm on Dec 27, 2010 (gmt 0)

I run SSDs on my front-end servers and on my back end. They are very important to my website's performance, but my website's size is around 80GB, with many random reads.

J_RaD
msg:4246860
1:45 am on Dec 29, 2010 (gmt 0)

don't forget to look at PCIe SSD drives also.

wheel
msg:4247695
10:50 pm on Dec 30, 2010 (gmt 0)

If I were to run SSDs, what should I be looking for? PCIe? There seems to be a variety of types, some meant for servers, some for consumer use. The choices and information are worse than choosing a cell phone plan.

J_RaD
msg:4247853
2:35 pm on Dec 31, 2010 (gmt 0)

PCIe will be the fastest and also the most expensive choice.

For the rest... get your glasses on:

[pcper.com...]

---------------

[ocztechnology.com...]

Looks like these guys are even creating a new controller/interface (HSDL).

shri
msg:4248048
10:11 am on Jan 1, 2011 (gmt 0)

Invest in a very decent SAS card and 15K RPM disks in a RAID 10 type setup for reliability AND performance. SSDs are not worth it if your experience is limited to using them casually on desktops.

Data recovery, cheap replacement parts, etc. are far easier and cheaper with traditional rotating drives.

If you're looking for enterprise reliability and have unlimited $s, invest in a RAMSAN-class device.

I'm not convinced run-of-the-mill consumer-grade SLC or MLC drives are going to be ready for mere-mortal use on Linuxy / FreeBSD type servers for a few more years.

BillyS
msg:4248052
12:30 pm on Jan 1, 2011 (gmt 0)

SSDs are not worth it if your experience is limited to using them casually on desktops.


I disagree. My boot-up wait time is about 15 seconds; it used to be around 2 minutes. Return from sleep is instantaneous. I didn't think it possible, but upgrading to Windows 7 and an SSD increased my productivity drastically.

We're also going to see third-generation devices in Q1. Personally, I don't know if the PCIe devices are necessary for a web server. If you're worried about speed, wait until the SATA III devices (SandForce 2000 series) are out later in Q1.

The most important advice I can offer is to make sure your O/S and hardware can support the device. For example, you might not be able to boot from PCIe, or your machine might not have a SATA III controller.

Anyone willing to use a 7200 RPM drive should feel comfortable with an MLC device. If you're really worried about writes, then go for SLC.

If you don't think they're mainstream, check out what hosts are offering.

jskrewson
msg:4248093
5:13 pm on Jan 1, 2011 (gmt 0)

I'm not convinced run-of-the-mill consumer-grade SLC or MLC drives are going to be ready for mere-mortal use on Linuxy / FreeBSD type servers for a few more years.

I use Intel X25-E Extreme SATA SSDs with an Adaptec 5405 RAID card. The X25-E is a server/enterprise-grade SSD that isn't horribly expensive. BTW, Intel's current generation of enterprise SSDs only comes in two small sizes: 32GB and 64GB.

I'm very excited about the next-gen X25-Es, which should come in 100/200/400 GB sizes in early 2011. I personally feel these larger-capacity SSDs will be a game changer for servers, assuming Intel delivers as they should.

incrediBILL
msg:4248094
5:18 pm on Jan 1, 2011 (gmt 0)

While SSD is screaming fast compared to a standard HDD, most people overlook the simple fact that a couple of GB of RAM used as a smart RAM disk cache coupled with a high end HDD can easily put SSD to shame when it comes to sheer speed and performance.

Most parts of a website that generate the most common bulk of bandwidth usage are typically static such as the images, css files, .js files, etc. which are best served from cache in the first place.

See Squid for starters: [squid-cache.org...]

Even databases tend to read/write mostly from a lot of common sectors that when cached in a RAM drive speed up the entire system significantly.

There's no write limit either, so it's a good long-term choice.

Even if you do go with SSD, the combo of SSD and a RAM drive is even more impressive, and an intelligent write-back cache that avoids rewriting sectors that didn't change (thanks to sloppy software) could even extend the life of the SSD.

If you're using a Windows server, SuperSpeed is a decent choice: [superspeed.com...]
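As a rough illustration of the two ideas in that post, here is a toy sketch: a read-through RAM cache for static assets, plus a write-back path that skips the physical write when content hasn't changed. The function names and the dict-based cache are hypothetical stand-ins, not how Squid or SuperSpeed actually work.

```python
# Toy read-through RAM cache plus change-detecting write-back.
# Illustrative only; serve_static/write_back are hypothetical names.
import os

_cache: dict[str, bytes] = {}

def serve_static(path: str) -> bytes:
    """Serve a file from RAM, hitting the disk only on the first request."""
    if path not in _cache:
        with open(path, "rb") as f:
            _cache[path] = f.read()
    return _cache[path]

def write_back(path: str, data: bytes) -> None:
    """Skip the physical write when nothing changed, sparing SSD write cycles."""
    if _cache.get(path) == data and os.path.exists(path):
        return  # content identical: no disk write needed
    _cache[path] = data
    with open(path, "wb") as f:
        f.write(data)
```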

seoArt
msg:4248099
5:48 pm on Jan 1, 2011 (gmt 0)

shri, what does one's "experience" have to do with using an SSD drive?

If I don't have experience watching an HDTV, should I not get one?

jskrewson
msg:4248102
6:01 pm on Jan 1, 2011 (gmt 0)

While SSD is screaming fast compared to a standard HDD, most people overlook the simple fact that a couple of GB of RAM used as a smart RAM disk cache coupled with a high end HDD can easily put SSD to shame when it comes to sheer speed and performance.

There is no question that RAM is better, if you can afford it. I always buy the maximum amount I can afford, but the amount of RAM my back-end server would truly require adds an extra $1000 a month to my server costs, which I'm not willing to pay.

A RAID 0 array of 2 x 64GB SSDs is costing me less than $100 a month.

Here's just one simple example of how SSDs work really well for my setup. I have a dual quad-core server, and every day I need to unzip and scan about 1,000 large compressed files. I need this scanning to run very quickly, so I run about 14 threads at once. My 15K RPM drive array could not keep up with so many random reads, but if I take the time to copy 14 files at a time to an SSD array, I can keep all 16 CPUs (with hyperthreading) at 85-90% of capacity throughout the entire process.
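A minimal sketch of that fan-out pattern, assuming gzip inputs, a hypothetical /ssd/staging path, and a hypothetical scan() doing the real work (none of which are from the post):

```python
# Parallel unzip-and-scan sketch: 14 worker threads stream compressed
# files in chunks instead of landing the raw data on disk.
import gzip
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

CHUNK = 1 << 20  # 1 MiB at a time, so a terabyte never hits the disk raw

def scan(chunk: bytes) -> None:
    ...  # hypothetical placeholder for whatever inspection the job does

def unzip_and_scan(path: Path) -> None:
    with gzip.open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            scan(chunk)

if __name__ == "__main__":
    files = sorted(Path("/ssd/staging").glob("*.gz"))  # staged on the SSD array
    with ThreadPoolExecutor(max_workers=14) as pool:
        list(pool.map(unzip_and_scan, files))  # list() surfaces worker exceptions
```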

incrediBILL
msg:4248105
6:13 pm on Jan 1, 2011 (gmt 0)

Typically how big are these files you unzip and scan?

Are the SSDs strictly for temporary file scanning, not permanent storage?

If so, why do you care if they last more than a few years?

Simply replace them when it's time, and since you rent them from your host, replacing them is their problem, not yours.

Also, with the kind of operation you're running, I'd assume you'll be upgrading to a bigger, faster server in a couple of years anyway, so it's probably a moot point.

jskrewson
msg:4248113
6:55 pm on Jan 1, 2011 (gmt 0)

Typically how big are these files you unzip and scan?

They are generally fairly large, so it takes a while to unzip. Altogether it's more than a terabyte of uncompressed data. I should mention that I don't write the raw uncompressed data to disk; I process it in chunks.
Are the SSDs strictly for temporary file scanning, not permanent storage?

I can't afford SSD permanent storage yet. With a 64GB max size, getting enough capacity requires a 2U or larger box and a large number of drives. That's why I mentioned the next-gen Intel enterprise SSDs. I'll probably go RAID 10 with 4x400GB SSDs when they come out.

I do upgrade my back-end hardware every year, and because I lease, I don't care about SSD failures, although the X25-Es have been pretty reliable. My front-end servers have been running the same SSDs for 1.5 years. I'd say I write about 90GB a day to them, not counting OS-related writes.

shri
msg:4248186
1:55 am on Jan 2, 2011 (gmt 0)

>> shri, what does one's "experience" have to do with using an SSD drive?

Can you predict how your drive will behave? For example, the whole write-wear issue bugs the hell out of me, simply because I cannot find much written about it at an enterprise level.

Say a cell can handle 100,000 writes. That might seem fine on a consumer-grade device, because cells aren't hammered on a regular basis. But what happens if that cell holds the "views" column of a database table that counts how many times a thread has been read? We have several threads that get 100K+ views a month.

Will the drive behave properly?

Do any of the drives really support TRIM on a RAID card under Linux / FreeBSD setups?

Are they being used in thousands of similar servers and conditions? Is there local data recovery expertise to rebuild SSD data? To rebuild SSD RAID arrays?

And then there is the price question. Is there really a price advantage in the performance of a good SSD-based array vs. a good 15K SAS array?

Have you looked at other existing ways of improving your performance? Is the drive the only part of the hardware/software stack where performance can be improved? (I doubt it is.)

There is still far too little documented about how SSDs behave predictably under enterprise loads for me to bet my business on it.
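For what it's worth, a rough sketch of that hot-counter wear math, under the best-case assumption that the controller's wear leveling spreads every overwrite of that one logical block evenly across the whole drive. Whether a given controller actually achieves that is exactly the open question raised above.

```python
# Per-cell wear from one hot "views" counter, assuming IDEAL wear leveling
# spreads every logical overwrite across all physical pages (a best case;
# with no leveling at all, one cell would absorb all 100K writes).
drive_gb = 64
page_kb = 4
pages = drive_gb * 1024 * 1024 // page_kb        # ~16.8 million 4KB pages

counter_writes_per_month = 100_000               # one busy thread's views
wear_per_page = counter_writes_per_month / pages
print(f"~{wear_per_page:.4f} extra write cycles per page per month")  # ~0.006
```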

seoArt
msg:4248202
2:56 am on Jan 2, 2011 (gmt 0)

Regular hard drives fail all the time. I trust SSDs more since there are no moving parts. If they have a shorter life expectancy, so be it. I don't plan on keeping the same drives in use much longer than a year.

I have seen test results where they outperformed 15K SAS arrays. If I were serving large files, maybe there wouldn't be much difference, but for a busy forum it makes sense to me.

I guess this is where the "experience" factor comes in. I just migrated to an enterprise server with SSDs in RAID 0. I'll have to report back after some time with them. Yes, if they crash out on me in 6 mos., I might go back to spinning platters.

J_RaD
msg:4248340
7:04 pm on Jan 2, 2011 (gmt 0)


Regular hard drives fail all the time. I trust SSDs more since there are no moving parts.


Once a normal spinning HD gets some hours under its belt, it's going to keep on spinning until its MTBF, or well past it.

ddogg
msg:4248673
7:35 pm on Jan 3, 2011 (gmt 0)

I hope this thread stays alive for the whole year, because SSDs are something I'm interested in. Right now they are too expensive and I don't trust them, but I think they could change everything pretty soon. It seems you could fit an entire powerful database server in a 1U with SSDs, since hard drives take up most of the space in a server.

wheel
msg:4248943
2:40 pm on Jan 4, 2011 (gmt 0)

Hard to say if they're too expensive right now. Looks like they're in the range of $200-$500 for sizes in the 60-200 GB range.

I can easily squeeze my entire web server onto a 64 GB drive. And that price range isn't outside the realm of what I'd pay for a decent SCSI drive already.

In a couple of years I'm sure they'll be huge and dirt cheap, but they seem to be in the affordable range already.

J_RaD
msg:4248960
3:11 pm on Jan 4, 2011 (gmt 0)

That is the thing... what else is going to happen in a couple of years?

Six-core processors just got dumped on the masses.

seoArt
msg:4249380
3:24 pm on Jan 5, 2011 (gmt 0)

I put one in my laptop yesterday. I got a 250 GB drive for around $420, and it has sped things up considerably. It's not the "instantaneous" load time I was expecting (based on what a friend had told me), but it has noticeably increased the performance of my laptop. I am running an Intel i7 with 8 GB RAM and the SSD.

The server has a slightly faster response time (just under half a second for the entire web page load time), not very noticeable going from 15K RAID 10 to SSD, but I have a lot of headroom on my web server. I suspect the IO speed difference would be more noticeable if I were getting 2x-3x as much traffic as I'm getting right now. It handles roughly 12k unique visitors per day at present, but has been growing steadily.

So far, so good.

wheel
msg:4249398
4:21 pm on Jan 5, 2011 (gmt 0)

Can I ask what type of SSD you got for your server, i.e. what specs?

seoArt
msg:4249460
6:08 pm on Jan 5, 2011 (gmt 0)

2x 64GB Intel X25-E in RAID 0.

greatstart
msg:4253750
8:37 pm on Jan 15, 2011 (gmt 0)

I have an 80 GB Western Digital HDD in my server that's been running for about 3 years now. The question I have is: before it goes out, can I simply copy all of the files, including the O/S, over to a new SSD and use it, or do I have to reinstall the O/S and files from scratch on the new SSD?

J_RaD
msg:4253934
4:31 pm on Jan 16, 2011 (gmt 0)

Yes you can; ghost it (i.e., clone the disk image over).

wheel
msg:4258010
7:44 pm on Jan 25, 2011 (gmt 0)

There seem to be two interfaces: SATA II and PCIe.

Am I correct that a PCIe SSD is basically not a separate hard drive, but instead a PCI card?

If so, is there a difference between PCIe and PCIe x4?

My servers are older, but still reasonable (two 3 GHz Xeon processors, 8 GB of RAM). But they only take SCSI hard drives; they're older Dell 1750s. Since I can't install a SATA II drive, could I instead buy a PCIe SSD, remove the SCSI drives, and just install the PCI card/drive into the free slot?

That would save me upgrading the entire server (I've got complete backup hardware, so I'm happy keeping the existing machines). Which raises my last question: the machine is 32-bit; will that affect the PCIe card? I don't think it should.
