| 6:34 pm on Oct 29, 2012 (gmt 0)|
A very topical point, iBill.
I thought that 'the cloud' was supposed to resolve the issue of potential failure.
Having had sites go offline because the webhost's data center was flooded, I know from experience how painful that failure can be.
What are your top tips to protect the site and the domain, should the worst happen?
| 7:27 pm on Oct 29, 2012 (gmt 0)|
|What are your top tips to protect the site and the domain, should the worst happen? |
I kind of mentioned these above but I'll spell 'em out.
My top tips would be:
* Keep local backups so you can restore to a different host, because the backups at your host will probably be offline as well. Using a cloud backup service isn't a bad idea as long as it's in a different location on a different backbone. For instance, you wouldn't want both to be on Peer 1's infrastructure.
* If you have multiple servers and your hosting company has multiple data center locations, spread your servers around to various locations and keep your accounts mirrored, so that in an emergency at one data center you can switch the DNS on affected domains to the other servers. If feasible, spread your servers across multiple hosts, though that's not always practical just from a configuration standpoint.
* Use third-party DNS on a different host and backbone. Where the DNS server is hosted is actually your weakest link. If possible, find a DNS service hosted in the side of a mountain protected by NORAD.
* Use alternative email accounts, not in your primary domain, for any required account contacts, so registrar and host notices can still reach you if your domain goes dark.
* When possible, try to host with a company that uses redundant backbones.
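To make the first tip concrete, here's a minimal sketch in Python of a dated local archive you could restore to any host. The site path and backup directory are hypothetical, and a real setup would also dump databases and copy the archive somewhere off-site:

```python
import shutil
from datetime import datetime
from pathlib import Path

SITE_DIR = "/var/www/example"               # hypothetical document root
BACKUP_DIR = Path.home() / "site-backups"   # local copy, off the host

def backup_site(site_dir: str = SITE_DIR, backup_dir: Path = BACKUP_DIR) -> str:
    """Create a dated .tar.gz of the site's files in a local directory."""
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    # shutil.make_archive appends the .tar.gz extension itself
    return shutil.make_archive(str(backup_dir / f"site-{stamp}"),
                               "gztar", root_dir=site_dir)
```

The point is simply that the archive lands somewhere your host can't take down with it.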
Think about how it's linked together:
Registrar -> DNS server -> Hosting Company
I never put all 3 in the same location, ever. People who host with their registrar have a serious issue: all their eggs are in one basket. That's perfectly fine for sites you don't care about, like a personal blog or family genealogy site, but not for income sites that pay the bills.
If the registrar gets in trouble, as recently happened when hackers and DDoS attacks targeted registrars, you may not be able to change your DNS servers when you need to, so at least be able to redirect your DNS elsewhere. And if the reason you're changing DNS is that your host went down, that's a good reason not to use your host for DNS in the first place. Being decoupled from both the registrar and the host can save your bacon in a crisis.
However, I assume if both your registrar and host are down, and they're in different geography and on different backbones, it's probably Armageddon anyway and perhaps planning for the end of the world is a little overkill ;)
| 10:28 pm on Oct 29, 2012 (gmt 0)|
Having had serious downtime due to a hurricane (Wilma), I gave it quite a bit of thought. Everyone is vulnerable to power outages, so predicting the weather and trying to figure out who wouldn't go down wasn't the question; for me it boiled down to how fast a grid will come back up from a power outage. My thinking was "follow the money," so I concluded that the financial district might go down but would be back up pronto, given its importance.
Now I'm hoping I was right.
| 12:08 am on Oct 30, 2012 (gmt 0)|
|how fast a grid will come back up from a power outage |
It shouldn't matter: a good data center typically has a generator, and some claim they can supply their own power for a few days to a week in the event of an emergency.
However, that doesn't mean the backbone providers, other than the actual phone company, can do the same thing, so it's a crap shoot IMO.
| 12:31 am on Oct 30, 2012 (gmt 0)|
The service that usually suffers during a sudden, unforeseen quick-fix DNS move is email. It usually gets left behind until the confusion settles, because of all the email client configurations scattered beyond the control of any one person. Users also tend to leave their keepers in folders on the server rather than in local folders. I guess a good separately hosted email service would prove its worth at a time like that.
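A separately hosted email service along those lines comes down to MX records that point away from the web host, so a hosting outage doesn't take mail down with it. A sketch of the relevant zone fragment (the domain and provider names are hypothetical):

```
; example.com: mail handled by a third-party provider on its own network,
; independent of the web host
example.com.    IN  MX  10  mx1.mailprovider.example.
example.com.    IN  MX  20  mx2.mailprovider.example.
```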
| 1:51 am on Oct 30, 2012 (gmt 0)|
|so I concluded that the financial district might go down but would be back up pronto, given its importance. |
CNN just reported the NYSE trading floor is under a meter of water.
| 1:55 am on Oct 30, 2012 (gmt 0)|
|It shouldn't matter: a good data center typically has a generator, and some claim they can supply their own power for a few days to a week in the event of an emergency. |
What they think they have and what they actually have can be two very different things; the outage I experienced lasted 10 days. Generators break, fuel runs out, etc. I was with a real-deal, big-name outfit, but they didn't have the goods to withstand a 10-day power outage. That's why I assume outages will happen and look at the underlying infrastructure. Whoever is the squeaky wheel in an outage situation is the one that will get the grease.
| 2:01 am on Oct 30, 2012 (gmt 0)|
At a previous host the power went out for several days, and while they had a generator, the transfer switch that moves the load from external power to the generator refused to work, so they went down. I forget how many days they were out of operation, but their uptime guarantee didn't mean much at that point.
That was one of those times when having my DNS hosted elsewhere, and having offsite backups, was hugely helpful.
| 2:03 am on Oct 30, 2012 (gmt 0)|
You'd be hard pressed to find anywhere in the US that doesn't have weather. That sounds flippant, but honestly I can't think of anywhere; temperature extremes count as weather. What happens when the a/c or heating for your server goes on the blink and the nearest repair guy is in the next county? (Edit: see the preceding post, which overlapped mine, and for once the timestamps give us both a good excuse!)
Central Europe, now...
At least some kinds of weather are strongly seasonal. So if you've got a site with seasonal fluctuations, your options are a lot happier.
| 2:27 am on Oct 30, 2012 (gmt 0)|
CNN is now saying the reports of NYSE flooding were incorrect.
Lucy: No matter where I've hosted my sites, weather has been a potential issue. In Dallas, where I've got most of my servers now, it gets blistering hot, and when the power and a/c go out, the servers go down. At least this host has data centers scattered around the country, so I can bring up a new server in a new location fairly quickly. I think that's an issue more people should consider when picking a host.
| 4:34 am on Oct 30, 2012 (gmt 0)|
What to avoid:
* Coastal areas prone to tropical storms or tsunami warnings; nothing too close to the ocean.
I don't lose power or internet during anything; we also have bulletproofish infrastructure, thanks to the common occurrence of bad weather. I've built data centers that could run 12 hours with no utility power and no interruption in service (thank you, UPS).
Wiki runs most of their stuff out of FLA, so I'd say we're pretty safe.
| 10:21 am on Oct 30, 2012 (gmt 0)|
|Registrar -> DNS server -> Hosting Company |
I never put all 3 in the same location, ever.
This is the most important snippet from this entire thread.
Within that, I would add one more extremely important bit of redundancy: ensure that your DNS configuration specifies at least two completely autonomous systems, with different subnets, organizations, and locations.
Next, regarding the timer settings on your SOA records:
Make sure the Refresh time is short enough to match your maximum allowable downtime.
8 hours is an optimal number for efficiency reasons on a good day, but when the news is all about the weather, you might want to temporarily bump that down to an hour or two in advance of a possible disaster.
Likewise, the Expire number should be long enough to ride out a bad connection to, or a temporary problem with, the primary server.
7 days is typically a good number here.
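Putting those numbers together, here's a sketch of what the SOA and NS records might look like in a zone file (names and serial are hypothetical; BIND-style times in seconds):

```
; example.com zone: timers per the numbers above, plus two
; nameservers on completely separate networks and organizations
example.com.  IN  SOA  ns1.dns-a.example. hostmaster.example.com. (
                  2012103001 ; serial
                  28800      ; refresh: 8 hours (drop to 1-2 hours ahead of a storm)
                  7200       ; retry: 2 hours
                  604800     ; expire: 7 days
                  86400 )    ; negative-caching TTL
example.com.  IN  NS  ns1.dns-a.example.
example.com.  IN  NS  ns2.dns-b.example.  ; different subnet, org, and location
```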
| 11:42 pm on Oct 30, 2012 (gmt 0)|
:: insert boilerplate about yawn-provoking coincidences ::
I've just come from an unrelated forum at a site that was unavailable for several hours last night. There was nothing wrong with the site itself, its server, the colo, the DNS, or the registrar. But requests from some regions were relayed through a particular facility, and that facility was down. There was no alternative routing. And this wasn't in my corner of the country, where there is one physical Internet cable connecting us to the rest of the world; the site lives in a densely populated part of the east coast.