PCI guidelines are well documented. You can store the whole credit card number as long as it is encrypted, firewall-protected, etc. The CVV2 must never be stored at all, but last I checked it was only for verification, and thus optional. The CVV2 just ensures that the person making the transaction has the card in hand. You can process transactions manually without it.
Transactions for us are back up -
ancillary services such as Customer Information Manager, verification seal serving still down
We just came back up as well.
From seattlepi.com:
|A fire at a Seattle building disrupted a server farm that provides service to multiple Web sites, and affected television and radio stations that broadcast from the building. The small fire broke out around 11 p.m. Thursday in the basement of Fisher Plaza at an electrical vault - the section of the building where city power lines meet the building's transformers (...) |
The small fire also affected a data center in the building, disrupting service to multiple Web sites and other Web services. How many Web sites experienced problems because of the fire is not clear yet.
Verizon Communications Inc. spokesman Jon Davies said the company's DSL service in the Seattle area was temporarily disrupted. Another company affected was Authorize.net Holdings Inc., based in Marlborough, Mass. The company provides credit card services for merchants. Authorize.net's Web site was down Friday morning.
Seems to be back up here too. But very, very, very slow.
I'm no expert, but putting the backup center IN the same place as the data center seems like bad practice. What if... earthquakes, terrorist attacks, fire, etc.? Hopefully this will prod them into taking action and relocating the backup center to another location.
|Hopefully this will prod them into taking action and relocating the backup center to another location. |
Only losing customers or a lawsuit ever prods companies into taking action.
This will be filed under 'OOOPS' and forgotten.
Ugh... I use CIM to manage subscriptions, and I'm still getting denied. Lots of recurring revenue going down the drain as people can't sign up, and my site (a web service) looks really bad. If I can't handle billing, how can my site be trusted with their business data and their livelihood? This is really JV on Authorize.net's part... how can having a single point of failure for something like this be deemed acceptable on their part?
Does anyone know of any alternative gateways that offer something equivalent to CIM? (storing CC data for you so you don't have to go through PCI)
Bing Travel servers were hit in the same outage:
|"The blown transformer knocked out power to the entire building, which is home to the Bing Travel servers," a message on the site says. "This is isolated to Bing Travel only, and there is no impact to any other aspect of Bing." |
Bing Travel says it's working hard to restore service, and has set 5 p.m. PDT Friday as the target time for resumption of service. "In the meantime, you may use Microsoft travel partner Orbitz for your travel needs," the site says.
We've been back up since about 4pm EST
CIM is back up now. At least they had complete data redundancy - no data appears to have been lost.
That's about the only good thing I can think of to say.
It seems to be back up, but when we upload batches of transactions, i.e. csv files, every transaction fails. Anyone else having problems with batch uploads?
Wow, we went through a similar situation (datacenter fire) with our server host (The Planet) not too long ago... What are the odds?
This is a good reason to have multiple payment processing options available not only to the customer, but yourself. And have those in place [i]before[/i] you need them.
PayPal Web Pro is a great alternative for blind CC processing (make sure you set up the statement display name in your PP profile). We take PP anyway, and our cart system has secure card capture just in case we need to resort to that and run the payments later via Virtual Terminal.
Right now I am finding authorize.net to be extremely slow and to be giving errors. Anyone else? 9:49 am Eastern Time.
Yes, we are also experiencing slowness and timeouts. Having problems even logging into the web interface. Whatever issues Authorize.net had apparently haven't been fully resolved. I wonder what impact the general "slowness" is having on merchants overall.
The best PCI guideline to follow is: don't store any credit card numbers at all, thus eliminating any risk. If you must, store only the last four digits. If your data gets compromised, the credit card company will go after your bank, then your bank will go after you and can charge fines, etc. We had a consultant come in and help explain the somewhat vague PCI guidelines to us. If you do have to store the whole credit card number, every network and server that touches your database and web server must also be PCI compliant to prevent someone getting in from the back end. I was in PCI hell a few months back and unfortunately learned too much about it.
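For anyone following the "store only the last four" advice above, here's a minimal sketch of truncating the card number before it ever reaches your database. Plain Python; the `mask_pan` helper name is hypothetical, not part of any gateway's API:

```python
import re

def mask_pan(pan: str) -> str:
    """Reduce a card number to its last four digits for storage.

    Persisting only the truncated form keeps the record out of
    PCI scope, because the full PAN is never written anywhere.
    """
    digits = re.sub(r"\D", "", pan)  # strip spaces and dashes
    if not 13 <= len(digits) <= 19:
        raise ValueError("not a plausible card number")
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # ************1111
```

The key point is that the masking happens on the way in; there is no "unmask" function, so a database compromise exposes nothing usable.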
As of now, no response from CIM, transactions, homepage intermittently timing out.
Have moved over to backup systems once again.
|@authorizenet #authorizenet Trying to find out what's going on with Interfaces and public site. |
edit - transactions seem to be processing now...just slowly and access to homepage is intermittent
Man, I feel for you guys. All I have is Google Checkout for a backup, as PayPal kicked me out because they think I sell drugs, but all my products are over-the-counter products.
I don't have enough volume to warrant a backup processor. I can imagine the stress you guys are under. Hope they get y'all back online soon.
even though their processing was down for all customers,
they still managed to bill MY credit card for gateway service on July 3rd. Go figure.
So glad we have backup processing available. Am switching over to backup processing 'permanently' until they get this mess sorted.
absolute circus. /rant
hopefully, they will spend enough time and money after this fiasco to make sure it never, ever happens again, acts of God notwithstanding.
very funny rachel .. we saw the same thing. Our account was properly debited on July 3rd! They have a backup server for that ;)
...their backup data center was impacted as well!
It doesn't take Murphy's Law to guarantee that backups fail only when they're needed. Backups are _used_ only when they're needed. Everything fails sometime.
I've long argued that the only reliable backup is two systems, running in tandem, ALL the time, each acting as backup for the other. In THAT scenario, the "backup" machine would PROBABLY fail only when the "primary" machine was there to back it up.
So far, nobody's listened to me.
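hutcheson's two-machines-in-tandem idea can be sketched in a few lines. This is a toy simulation (hypothetical `Node`/`submit` names, not any real gateway's API) of both boxes carrying live traffic all the time, each acting as the other's backup:

```python
class Node:
    """Stand-in for one of two always-live processing machines."""
    def __init__(self, name):
        self.name = name
        self.alive = True

    def handle(self, txn):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return f"{txn} processed by {self.name}"

def submit(txn, nodes):
    """Try each peer in turn. Because both nodes run production
    traffic constantly, either one can absorb the full load the
    moment the other fails -- there is no cold 'backup' to wake up."""
    for node in nodes:
        try:
            return node.handle(txn)
        except ConnectionError:
            continue
    raise RuntimeError("all nodes down")

a, b = Node("A"), Node("B")
print(submit("txn-1", [a, b]))  # txn-1 processed by A
a.alive = False                 # simulate A failing
print(submit("txn-2", [a, b]))  # txn-2 processed by B
```

The point of the design is that the "backup" path is exercised on every request, so you find out it's broken long before you need it.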
At my last company (about 12 years ago) our NonStop Tandem system was supposed to do that. As long as two specific servers weren't taken down at the same moment.
An engineer tried to free a cable from the underfloor spaghetti - guess which two power leads got disconnected?
Recovery plans never seem to be tested properly, and only ever seem to cover a subset of possible issues. Of course, with 24x7 systems, literally pulling the plug on the live system on a Saturday afternoon to see what happens is no longer an option.
hutcheson, what you say is true, but say we have 10 servers that run our system: the cost of that setup isn't an option, since when we replace our servers we'd have to replace 20, the hosting fees are almost doubled, and the list goes on and on.
Our backup can get us back online in about a day, and I feel that's good enough.
In this case they most likely have 300-400 servers, with firewalls that cost megabucks and system controls that would then have to be monitored on both systems, doubling the workload.
You're correct, but the cost is so large that for most companies it just isn't an option.
hutcheson, is your backup system set up this way?
A company I used to work for had the best recovery plan. We had our servers clustered in one data center in a city... we'll say Dallas-Fort Worth. There, if one server went down, the other would take the remaining load. Now, if a bomb went off and brought the entire data center down, then DNS was switched to a completely different data center in, let's say, Newark, New Jersey. The database (Oracle) was always replicated in real time between the two locations.
We did bi-annual simulations where they would kill the power to data center 1 (after hours, of course) and we would switch over. It took a few hours for DNS to propagate, but overall we considered it a success, and we always felt confident in case of a real emergency.
Granted, I worked for a VERY large corporation!
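ssgumby's setup can be sketched roughly like this. A toy simulation (hypothetical names throughout; the real thing was Oracle replication plus a DNS cutover) of real-time replication to a second data center and a DNS failover when the first one dies:

```python
class DataCenter:
    def __init__(self, name):
        self.name = name
        self.up = True
        self.db = {}  # stands in for the replicated database

def replicate(primary, secondary):
    """Real-time replication: the standby's copy is always current,
    so a failover loses no committed data."""
    secondary.db = dict(primary.db)

def failover(dns, primary, secondary):
    """If the primary site is down, repoint DNS at the secondary.
    (In practice the cutover also waits on DNS TTL propagation.)"""
    if not primary.up:
        dns["www.example.com"] = secondary.name
    return dns

dfw = DataCenter("dfw")
ewr = DataCenter("ewr")
dfw.db["order-1"] = "paid"
replicate(dfw, ewr)

dns = {"www.example.com": "dfw"}
dfw.up = False                 # the drill: kill power to DC 1
dns = failover(dns, dfw, ewr)
print(dns["www.example.com"])  # ewr
print(ewr.db["order-1"])       # paid
```

The drill described above is exactly this sequence run against real hardware: kill the primary, flip DNS, and verify the replica already has every transaction.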
I know this is getting off the subject of Authorize.net being down, but since they are back up, it seems a good conversation to take up.
ssgumby, what you did is what hutcheson suggested.
1. Do you have any cost estimates on the backup system you had set up?
2. Did you have the same number of servers set up at both locations?
Is anyone using the ARB (Automatic Recurring Billing) through Authnet?
If so have you gotten settlement emails since the fire?
I haven't, and when I did a live chat they assured me the emails are going out. I told him that the emails go straight to my own servers, not Gmail or Yahoo, etc., and I do not have filters. He still said they are going out. The one thing I HATE about Authnet is they always just slap a bandaid on the issue and scurry the customer away. Their customer service never does anything real about the problem. I pointed out a glitch in their system and their CS just made up some stupid excuse. Being a programmer myself, I told her it was an easy fix that should take a programmer 5 minutes, but she assured me it wasn't easy and basically she wasn't going to do anything about it. It is very sad when a company's liaison with its customers is really just there to pat the customer on the head and tell them that that is just the way it is.
1. I have no idea what the costs were, I was a developer and had no insight into the costs.
2. We had identical servers at both locations. Identical down to the fix pac.