A fire at a Seattle building disrupted a server farm that provides service to multiple Web sites, and affected television and radio stations that broadcast from the building. The small fire broke out around 11 p.m. Thursday in the basement of Fisher Plaza at an electrical vault - the section of the building where city power lines meet the building's transformers (...)
The small fire also affected a data center in the building, disrupting service to multiple Web sites and other Web services. How many Web sites experienced problems because of the fire is not clear yet.
Verizon Communications Inc. spokesman Jon Davies said the company's DSL service in the Seattle area was temporarily disrupted. Another company affected was Authorize.net Holdings Inc., based in Marlborough, Mass. The company provides credit card services for merchants. Authorize.net's Web site was down Friday morning.
I'm no expert, but a backup center IN the same place as the data center seems like bad practice. What if... earthquakes, terrorist attacks, fire, etc.? Hopefully this will prod them into relocating the backup center somewhere else.
Does anyone know of any alternative gateways that offer something equivalent to CIM? (storing CC data for you so you don't have to go through PCI)
"The blown transformer knocked out power to the entire building, which is home to the Bing Travel servers," a message on the site says. "This is isolated to Bing Travel only, and there is no impact to any other aspect of Bing."
Bing Travel says it's working hard to restore service, and has set 5 p.m. PDT Friday as the target time for resumption of service. "In the meantime, you may use Microsoft travel partner Orbitz for your travel needs," the site says.
This is a good reason to have multiple payment processing options available, not only for the customer but for yourself. And have those in place [i]before[/i] you need them.
PayPal Web Pro is a great alternative for blind CC processing (make sure you set up the statement display name in your PP profile). We take PP anyway, and our cart system has secure card capture JIC we need to resort to that and run the payments later via Virtual Terminal.
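The fallback chain described above can be sketched in a few lines. This is a hypothetical illustration, not a real gateway API: `GatewayDown`, the processor names, and the `charge` helper are all made up to show the shape of "try the primary, fall back to the backup, and securely queue the card for a later Virtual Terminal run if everything is down."

```python
# Hypothetical sketch of processor failover. None of these names come from
# a real gateway SDK; they only illustrate the ordering of fallbacks.

class GatewayDown(Exception):
    """Raised when a payment processor cannot be reached."""

def charge(order, processors, offline_queue):
    """Try each (name, process) pair in order; queue the order if all fail."""
    for name, process in processors:
        try:
            return name, process(order)
        except GatewayDown:
            continue  # this processor is down; try the next one
    offline_queue.append(order)  # capture securely, run manually later
    return "queued", None

# Simulated outage: the primary raises, the backup approves.
def primary(order):
    raise GatewayDown

def backup(order):
    return "approved"

queue = []
print(charge({"amount": 42}, [("authnet", primary), ("paypal", backup)], queue))
# → ('paypal', 'approved')
```

If both processors are down, the order lands in `offline_queue` instead of being lost, which mirrors the "capture now, run later" approach in the post above.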
Have moved over to backup systems once again.
@authorizenet #authorizenet Trying to find out what's going on with Interfaces and public site.
edit - transactions seem to be processing now...just slowly and access to homepage is intermittent
Don't have enough volume to warrant a backup processor. I can imagine the stress you guys are under. Hope they get y'all back online soon.
they still managed to bill MY credit card for gateway service on july 3rd. go figure.
So glad we have backup processing available. Am switching over to backup processing 'permanently' until they get this mess sorted.
absolute circus. /rant
hopefully, they will spend enough time and money after this fiasco to make sure it never, ever happens again, acts of God notwithstanding.
It doesn't take Murphy's Law to guarantee that backups fail only when they're needed. Backups are _used_ only when they're needed. Everything fails sometime.
I've long argued that the only reliable backup is two systems, running in tandem, ALL the time, each acting as backup for the other. In THAT scenario, the "backup" machine would PROBABLY fail only when the "primary" machine was there to back it up.
So far, nobody's listened to me.
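The tandem idea above can be sketched as a trivial routing rule: two peers both serve traffic all the time, and requests simply go to whichever one is healthy. This is an assumed illustration of the design being argued for, not anyone's actual setup; the node names and health flags are made up.

```python
# Minimal sketch of two systems running in tandem, each acting as the
# other's backup. Node names and health states are illustrative only.

def route(request, nodes):
    """Send the request to the first healthy node; normally both share load."""
    for node in nodes:
        if node["healthy"]:
            return node["name"]
    raise RuntimeError("both peers down - the failure tandem running is meant to avoid")

pair = [
    {"name": "seattle", "healthy": False},      # e.g. the Fisher Plaza outage
    {"name": "backup-east", "healthy": True},
]
print(route({}, pair))  # → backup-east
```

The point of the design is exactly what the poster says: because both machines are live all the time, a "backup" is never discovered to be broken at the moment it is finally needed.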
An engineer tried to free a cable from the underfloor spaghetti - guess which two power leads were disconnected?
Recovery plans never seem to be tested properly, and only ever seem to cover a subset of possible issues. Of course, with 24*7 systems, literally pulling the plug on the live system on a Saturday afternoon to see what happens is no longer an option.
Our backup can get us back online in about a day, which I feel is good.
In this case they most likely have 300-400 servers, plus firewalls and system controls that cost mega bucks; all of that would have to be duplicated and monitored at both sites, doubling the workload.
You're correct, but the cost is so large that for most companies it isn't an option.
hutcheson, is your backup system set up this way?
We did biannual simulations where they would kill the power to data center 1 (after hours, of course) and we would switch over... it took a few hours for DNS to propagate, but overall we considered it a success and always felt confident in case of a real emergency.
Granted, I worked for a VERY large corporation!
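The "few hours for DNS to propagate" above is mostly a function of the record's TTL: clients keep hitting the dead data center until their cached answer expires. A back-of-envelope sketch (the numbers are illustrative assumptions, not from the post):

```python
# Rough worst-case cutover time for DNS-based failover: time to notice the
# outage, plus time to update the record, plus the old record's TTL sitting
# in resolver caches. All inputs below are made-up example values.

def worst_case_cutover_minutes(detect_min, update_min, ttl_seconds):
    return detect_min + update_min + ttl_seconds / 60

# A 1-hour TTL dominates: even fast detection leaves ~an hour of stale cache.
print(worst_case_cutover_minutes(detect_min=10, update_min=5, ttl_seconds=3600))
# → 75.0
```

This is why failover drills often pair the switch with a deliberately low TTL on the records involved; a 5-minute TTL would shrink the stale-cache window from an hour to minutes, at the cost of more DNS query traffic.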
ssgumby, what you did is what hutcheson suggested.
1. Do you have any cost estimates for the backup system you had set up?
2. Did you have the same number of servers set up at both locations?
I haven't, and when I did a live chat they assured me the emails are going out. I told him that the emails are going through to my own servers, not Gmail or Yahoo, etc., and I do not have filters. He still said that they are going out.

The one thing I HATE about authnet is they always just slap a bandaid on the issue and scurry the customer away. Their customer service never does anything about the real problem. I pointed out a glitch in their system and their CS just made up some stupid excuse. Being a programmer myself, I told her it was an easy fix that should take a programmer 5 minutes, but she assured me that it wasn't easy and basically she wasn't going to do anything about it. It is very sad when a company's liaison with the customers is really just there to pat the customer on the head and tell them that that is just the way it is.