Forum Moderators: phranque


Using Ramdisks on web servers?


WebBender

2:57 am on Jan 31, 2004 (gmt 0)

10+ Year Member



I came across a mention of using a ramdisk while reading a web server comparison (Intel vs. AMD) on AnandTech.

The disk I/O was an issue so they put their database into a Ramdisk.

This brought back memories of my PC, when 32MB of RAM was _a lot_, and of loading all of Falcon 3 into a RAMDISK. :)

Now, I was wondering: with RAM as cheap as it is, and with disk drives being the bottleneck for some web servers...

Why isn't the use of ramdisks much more widespread? I did a few searches on the web and Google Groups, and it doesn't come up much.

Anyone here use a ramdisk on their web server? Any drawbacks to it if you have plenty of RAM on the server?

TIA

WB

IanTurner

8:23 am on Jan 31, 2004 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



I have thought about using the USB flash disks on a webserver, but my worries at the moment are about disk access speed in comparison with a normal hard disk.

jdMorgan

9:33 am on Jan 31, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



WebBender,

The use of ramdisks was largely supplanted by the advent of much more sophisticated and much larger caching systems. In essence, the now-ubiquitous processor cache is a "smart" ramdisk that keeps copies of the most-frequently-accessed data in memory in or close to the processor. Current Level 2 caches are often several times the size of the entire memory of the older PCs.

In most cases, it's not necessary to do anything to take advantage of this. In a few cases, it can be helpful to write an application that reads the most-frequently-accessed data into memory, locks that area of memory in place, and then references the data using the originally-loaded memory addresses.
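A rough shell analog of that idea is simply to read the hot files once so the operating system's page cache holds them; truly pinning pages in memory would need mlock()/mlockall() in a compiled program. The directory and file here are made up for the demo:

```shell
# Warm the OS page cache by reading "hot" files once; subsequent reads
# are served from RAM until memory pressure evicts the pages.
# (Permanently pinning pages would require mlock() in C - this is only
# a best-effort warm-up, not a lock.)
mkdir -p /tmp/hot-demo
printf 'hello\n' > /tmp/hot-demo/index.html

cat /tmp/hot-demo/*.html > /dev/null   # first read: may hit the disk
cat /tmp/hot-demo/*.html > /dev/null   # later reads: page cache
```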

---

Ian,

Flash memory is not intended to be dynamic storage - it's designed for infrequent, not constant, use, and it has a write-wear-out mechanism. Flash used to be good for 10,000 writes, then it went to 100,000, then 200,000, and then I lost track. But the number of writes is limited by materials quality and physics, so it will not improve indefinitely.

The problem is that in an application such as you propose, there is no way to know which locations get written infrequently and which get written very often - you could easily exceed 100,000 writes to a small number of locations in just a few minutes. After that, the memory at those locations would no longer work.

Use flash for write-infrequently, unlimited-reading applications. Take the consumer products it's used in as examples: cameras, solid-state "floppies" intended to replace the sneakernet, security keys, user-preference settings in automobiles, etc. None of these applications involve frequent writing.

In a hard-disk-replacement application, the most-frequently-accessed and first-to-wear-out area would likely be the "sector allocation map", and that is a *very* bad thing to have errors in!

Jim

sun818

9:59 am on Jan 31, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



> Why isn't the use of Ramdisks much more wide spread?

It's a shame, isn't it? I wish it were more widespread. I can think of at least one situation where RAM disks would be handy. Build yourself a cluster of web servers. Each box would only need a floppy drive and 1GB of RAM. The floppy drive would boot a Linux web server. This web server would present a snapshot of the actual web site and update its RAM drive every 15 minutes with a new image. So, instead of making unnecessary calls to the database for real-time content, you can serve near-time content very quickly from a cluster of web servers.
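On Linux, that setup can be sketched with a tmpfs mount and a periodic rsync; the paths and the master hostname are illustrative, mounting needs root, and the RAM-backed document root vanishes on reboot:

```shell
# RAM-backed document root; contents are lost on reboot, which is fine
# because it's only a snapshot of the real site.
mount -t tmpfs -o size=512m tmpfs /var/www/ramroot

# Refresh the snapshot every 15 minutes via cron, e.g.:
# */15 * * * * rsync -a --delete master:/var/www/site/ /var/www/ramroot/
```

The web server then serves everything out of /var/www/ramroot at RAM speed, and only the rsync ever touches the network or a real disk.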

bcc1234

11:04 am on Jan 31, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Most modern operating systems use a portion of RAM to cache disk I/O operations.
If you tune it correctly, you will not need a RAM drive for those things.

Running a mail server's queue (or a news server's) on a RAM drive might be helpful, assuming you can tolerate loss of data.
But as far as a database goes - that's just asking for trouble.
If you need to do a lot of near-real-time reading, then put a proxy in front of your application server. The proxy will do disk I/O with the hotspots cached by the operating system.
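The queue-on-RAM idea is a one-line mount on Linux; the mount point is illustrative, and anything queued but not yet delivered is lost on a crash or reboot - exactly the data-loss trade-off mentioned above:

```shell
# Put the mail queue on a RAM-backed tmpfs.
# Matching /etc/fstab entry (commented here) to make it survive remounts:
#   tmpfs  /var/spool/mqueue  tmpfs  size=256m,mode=0700  0  0
mount -t tmpfs -o size=256m,mode=0700 tmpfs /var/spool/mqueue
```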

WebBender

1:51 pm on Jan 31, 2004 (gmt 0)

10+ Year Member



Thanks for the replies. The article that got me thinking about it was:

Anandtech [anandtech.com]

I was thinking along the lines of content being placed in a Ramdisk...videos or high res images off an adult site, for example.

So, the gist is that there are just better ways to deal with disk I/O bottlenecks than a ramdisk, and that flash memory, even quality parts, would be prone to wear-out errors?

TIA

WB

Romeo

5:57 pm on Jan 31, 2004 (gmt 0)

10+ Year Member



> Most modern operating systems use a portion of ram to cache disk i/o operations.
> If you can correctly tune it, you will not need a ram drive for those things.

Yes, and you can test this caching behavior of the operating system by doing a lot of file operations once, noting a lot of disk activity, and then a second time, being amazed how fast it runs without much disk I/O (e.g. try grepping through a lot of files).

If a system has a lot of memory, it will serve the most common webserver requests out of system I/O buffers.
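That two-pass grep test can be scripted directly; the second pass is typically served almost entirely from the page cache, so `time` reports far less real time. The directory and file names here are made up for the demo:

```shell
# Create a few hundred throwaway files to grep through.
mkdir -p /tmp/cachedemo
for i in $(seq 1 200); do
    head -c 16384 /dev/urandom | base64 > "/tmp/cachedemo/file$i.txt"
done

# First pass reads from disk; second pass is served by the OS page cache.
# (grep exits nonzero when nothing matches, hence the "|| true".)
time grep -rl "needle-that-matches-nothing" /tmp/cachedemo || true
time grep -rl "needle-that-matches-nothing" /tmp/cachedemo || true
```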

Furthermore, subsystems with high I/O rates, like database systems (DB2, Oracle), keep large internal cache buffers within their own address space, under their own control, with sophisticated buffering/staging algorithms, so true classical ramdisking at the filesystem level is not needed at all these days.

Regards,
R.

Macro

10:18 pm on Jan 31, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



jdMorgan is correct. CPUs now have a lot more L1 and L2 cache than they used to. While you do get Xeons with different cache sizes, even the humble P4 now comes with an option of 2MB (the Intel Pentium 4 Extreme Edition).

RAMdisk, Speeddisk, etc. were good ideas in their time. The idea is to speed up operations by having the CPU(s) access the faster RAM (nanoseconds) rather than the slower hard disk (milliseconds) for regular data reads. But in the last couple of years, hard disks have evolved to be phenomenally fast (74GB Raptors with 8MB of cache and 10K RPM in RAID 0 will stun you), most motherboards still take a maximum of only 4GB of RAM, 1GB RAM modules (which are generally required if you want to reach 4GB) are very unreliable and fussy about which boards they work with, and RAMdisk programs tend not to deliver the speed advantages they were originally expected to provide. I wouldn't bother with them.

There are a lot of other bottlenecks if you want to go looking. The PCI bus is getting ridiculous now. PCI-X is still a server-only product, so you're stuck with a 33MHz PCI bus that has to handle FireWire 400 in addition to loads of other devices.

You can tune VCache settings and virtual memory. You can put virtual memory on a separate or faster hard disk, or spread swap files (virtual memory) across more than one disk, but I think you'll find the benefits are very marginal.

I haven't had a chance to read Anandtech's review in full, but I did glance at it. I respect them greatly, but I believe there are some serious flaws in this review. And flaws aside, it's all very nice to have a system that runs faster on a benchmark but is a nightmare in real-world situations, with hardware incompatibilities, problems running your software, half-baked motherboard chipsets, etc. That's the problem with reviews.
:-(