
Forum Moderators: bakedjake


Redundancy, Fault Tolerant, Server Cluster etc.

1:47 am on Apr 1, 2009 (gmt 0)

New User

10+ Year Member

joined:Mar 31, 2009
votes: 0


I am a noob. I am trying to create a fault tolerant system for my web site. How do people normally do this?

Do you make multiple DNS entries? How do you sync data? I am using LAMP. Is there a Linux program that would let me do this?

How do you create MySQL DBs (and/or web servers) to have a cluster of sync'ed servers to save/edit data from?

Thanks for your help!


7:16 pm on Apr 1, 2009 (gmt 0)

Senior Member

WebmasterWorld Senior Member wheel is a WebmasterWorld Top Contributor of All Time 10+ Year Member

joined:Feb 11, 2003
votes: 12

People 'don't' normally do this. That level of fault tolerance can be done a variety of ways, all expensive, all complex. It's a nice ideal, but don't waste your time. You're not amazon (neither am I :) ).

Here's the route to take. First, assume you get a good hosting company. That means from the internet right up to your webserver things should work fine all the time. Most decent hosting companies are at this level now; my hosting company probably hasn't had unscheduled outages for years. So don't worry about that, just get a good hosting company.

Secondly, web hardware tends to be pretty robust. If you're colocating, get a good server. If you're a bit retentive, buy two servers and keep one offsite for spare parts. I do this, but most people don't. My server's been running for years, no problems. So now I own two out of date servers instead of one :). But I knew if I needed a power supply or memory stick, I had one right there and could get it installed in the time it takes to get to the data center. Risk on this is pretty low to negligible for most people.

So, the internet to your computer is pretty robust. Your webserver hardware is robust. Where you're really going to get screwed on downtime is the information and files ON the webserver. That's prone to hacking, failure, and more likely D'Oh! as you do something stupid like delete some files. While the other things are unlikely to happen, if there's anything that's guaranteed about websites, it's that you're going to screw your own data someday. So it's that level that you need redundancies, backups, archives, all that.

There's a variety of ways to approach this, but basically you want something that gives you a complete, current and easy to access duplicate of the content on your webserver. Complete, current and easy are all variables that you'll need to decide on how much work and effort you want to put into it.

Some folks just keep backup copies somewhere else online. Me, I back up nightly offsite, then keep easy-to-access copies of each nightly build for a couple of months, then I have a once-a-week archive copy elsewhere for copies prior to that. For me that works well. I change most files rarely, so if I screw up, get hacked, whatever, I pull the files from last night or the night before. Anything from today has potential of being lost - but I'm OK with that for my business.
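As a rough sketch of that kind of rotation, assuming rsync over ssh to an offsite box (the host name, paths, and retention window below are placeholders, not anyone's actual setup):

```
# nightly: snapshot the site into a dated directory offsite
# (note: % must be escaped as \% inside crontab entries)
30 2 * * *  rsync -a --delete /var/www/ backup.example.com:/backups/site-$(date +\%F)/

# prune nightly copies older than ~60 days on the backup box
45 2 * * *  ssh backup.example.com 'find /backups -maxdepth 1 -name "site-*" -mtime +60 -exec rm -rf {} +'
```

This assumes passwordless ssh keys between the hosts; a weekly archive to a second location would be a third entry along the same lines.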

The alternative is for you to spend $20,000 to cover the risk of you maybe being offline for an hour in the middle of the night, and the likelihood of that happening is one in a million over the next 5 years.

5:00 am on Apr 3, 2009 (gmt 0)

New User

10+ Year Member

joined:Mar 31, 2009
votes: 0

Thanks for your reply. I do understand your point. However, I'd like to know what technologies can be implemented, or if anyone knows any resources on the web.


5:07 am on Apr 3, 2009 (gmt 0)

New User

10+ Year Member

joined:Mar 31, 2009
votes: 0

By that I meant both hardware/servers and data redundancy (besides backup).

4:06 am on June 14, 2009 (gmt 0)

Junior Member

10+ Year Member

joined:Feb 17, 2004
votes: 0

Assuming two servers, here is a slight level of redundancy on the cheap. The underlying assumption is that you have the same software on both machines. Also, I am only covering HTTP/HTTPS/DNS below.

Set up www.example.com so that it goes round robin on all the IPs which you are using. All hosts should be running a DNS server (tinydns or BIND) and should be set up as nameservers with your hosting company.
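For illustration, a round-robin record set in a BIND zone file might look like this (the two addresses are placeholders from the documentation range; a short TTL helps clients move on from a dead IP faster):

```
; both A records answer for www - resolvers rotate through them
www.example.com.  300  IN  A  192.0.2.10
www.example.com.  300  IN  A  192.0.2.11
```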

The web server's listener either needs to be set up on a non-default port or on a LAN IP, so that squid can take port 80 on the external IP.

squid is a proxy that can be set up and run as a reverse proxy in front of your webserver(s). There are some benefits for doing this on a single server (i.e. caching static files and not passing the request on to something heavier such as apache). With multiple servers there is more benefit, as squid can distribute requests and handle hosts going down.
Squid should be listening on [external ip]:80 (you can make squid handle SSL too if you wish - recommended, otherwise your redundancy is lost for SSL).
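A squid.conf fragment along those lines might look like the following sketch (the backend IPs and Apache's port 8080 are assumptions for illustration):

```
# squid answers on port 80 as a reverse proxy (accelerator)
http_port 80 accel defaultsite=www.example.com

# both apaches are origin servers; requests are balanced round-robin,
# and a peer that stops answering is taken out of the rotation
cache_peer 10.0.0.1 parent 8080 0 no-query originserver round-robin name=web1
cache_peer 10.0.0.2 parent 8080 0 no-query originserver round-robin name=web2
cache_peer_access web1 allow all
cache_peer_access web2 allow all
```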

Monit will check the local machine for a downed service and restart it.
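For instance, a monit stanza that watches the local Apache and restarts it on failure could look like this (the pidfile path, init script, and port are assumptions - adjust to your distro and to wherever Apache actually listens behind squid):

```
check process apache with pidfile /var/run/apache2.pid
  start program = "/etc/init.d/apache2 start"
  stop program  = "/etc/init.d/apache2 stop"
  if failed host 127.0.0.1 port 8080 protocol http then restart
```

Similar stanzas would cover squid and the DNS daemon.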

I use rsync to share some of the config files and /var/www between the hosts.
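As one possible shape for that, a crontab entry pushing the docroot to the peer host (the hostname and interval are placeholders, and this assumes ssh keys between the hosts):

```
# push the docroot to the peer every 10 minutes
*/10 * * * *  rsync -az --delete /var/www/ web2.example.com:/var/www/
```

Note that --delete makes the peer an exact mirror, so a mistaken deletion propagates too - which is why this is redundancy, not a backup.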

In essence the above will give you 2 (or more) IPs with redundant round-robin DNS. HTTP(S) requests go into squid on any of the hosts. Squid routes to Apache on any of the hosts. Apache serves up the request.

If Apache fails on one of the machines, squid catches it fairly quickly and routes all requests among the remaining machine(s). Hopefully monit is able to restart it.

If squid fails, DNS redundancy will force the user to try the other IP. Hopefully monit is able to restart it.

If DNS fails, the client tries the next DNS server. Hopefully monit is able to restart it.

Note: on most of the above, the configuration is a pain - I have set up a few different environments in this way, but mainly do so now because I have done it before and can modify preexisting configuration files. Had I known up front what goes into setting up this type of configuration the first time, I would not have done it...actually I probably still would have, because it is cool :)

