This 45 message thread spans 2 pages.
|WebHostingTalk Hacked and Offline|
Worst incident in ages
WebHostingTalk [webhostingtalk.com] was maliciously attacked over the weekend. WebHostingTalk is the largest online forum for discussion of Webhosting and Server related issues. WebHostingTalk is owned by iNet Interactive [inetinteractive.com]. They are also owners of HotScripts.com, and Search Marketing Standard magazine. They also own numerous other forum sites. These guys are not newbies to forum operations and have a quality tech and management system in place.
A hacker gained access to an offsite backup server and then used information found on that server to walk into the main live server. The hacker deleted the backup databases and then deleted the live site. Apparently, they also covered their tracks and overwrote the drives so that recovery was impossible. This is the most deliberate, sophisticated and calculated hack I have heard of in recent memory.
Unfortunately, the last local offline copy of the system is from late last year. So expect them to be offline for a bit, while they rebuild the db's.
This is a lesson for ALL forum operators. Our thoughts are with the WHT and iNet teams that are working on the issues.
/off doing backups to dvd disks.
Interviews from HostingCon2008
Including interview with iNet CEO Troy Augustine [searchengineworld.com]
Also, a previous thread on the topic here: [webmasterworld.com...]
[edited by: Brett_Tabke at 8:01 pm (utc) on Mar. 26, 2009]
|A friend from Uni has a proper multi-million grossing site with no offline backup. He says he doesn't need it. What do you think? |
There are people who don't wear a seatbelt when driving. Most of them are still alive... but that doesn't make me think it's a good idea
|Application server is the only point of exposure to the internet. It has no authority to do anything to any other bit of hardware. Configuration cannot be done over a public connection. A Watchguard or similar is the DNS target, forwarding HTTP or HTTPS traffic to the App server, blocking all other ports and otherwise protecting against malware and attack. SQL injection is prevented. The application server actually has multiple copies offline for development purposes. The App server has access to the DB server. The DB server can ONLY be accessed from the App server plus the management server, not from the internet. The management server can only be accessed over VPN. The VPN can only be established from pre-determined IPs and terminates on the Watchguard; then a separate SSL VPN needs to be established through that to the management server. (The management server runs the backend stuff, including CRM.) The App server is on a separate VLAN to the management server. |
All of that sounds lovely... so I'll raise you - one rogue employee in the data center with physical access to the servers. This is listed as rule three [technet.microsoft.com] - and once you've read that list, this [technet.microsoft.com] is also worth a read.
As far as I'm concerned, you can come up with a way to attack anything - the question is, is it worth it?
If your friend feels his planning is bulletproof, that's nice. If I was a big investor in his business I'd be asking some pretty pointed questions...
Cron backup outside the webroot, downloaded locally every week and burned to hard media once a month. It's not perfect, but I'm not a bank. I was reminded yesterday, as I watched a lady struggle to load a till with a new printer ribbon, that new technology brings new problems no matter what the promise.
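A minimal sketch of that routine, in case anyone wants to copy it. All paths, database names and hostnames below are hypothetical examples, not anyone's actual setup:

```shell
#!/bin/sh
# Dump the database to a directory OUTSIDE the webroot, then pull the
# dumps down to a local machine on a schedule. Names are placeholders.

BACKUP_DIR=/var/backups/site        # deliberately not under /var/www
STAMP=$(date +%Y%m%d)

# On the server, run daily from cron:
#   0 3 * * * /usr/local/bin/site-backup.sh
mysqldump --single-transaction mydb | gzip > "$BACKUP_DIR/mydb-$STAMP.sql.gz"

# On a LOCAL machine, run weekly from cron to pull the dumps down
# (then burn the month's accumulation to hard media by hand):
#   0 4 * * 0 rsync -az server.example.com:/var/backups/site/ ~/site-backups/
```

The point of the two machines is that the server never needs credentials for the local box: the local machine pulls, so compromising the server doesn't automatically reach the copies.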
I like to print my pages out at the end of every day, and then file them in my attic (to protect them from flooding in the basement). I use photo-paper since it seems more durable... longer lasting backups.
There is NO fool-proof server lock down and/or backup technique. As others have stated, even if you shut down every possible digital door, there is still a lot of physical and social doors open.
If someone is determined enough, they will always find a way. Which seemed to be the case here.
I wonder what percentage of the username/e-mail and password combos they now have access to would also work at major financial websites?
I've been a member there since 2001 with the same username as here and I'm pretty sure the same password. But none of my financial or e-mail accounts use this same username or password combo.
Another important thing to think about is checking your backups to make sure they are actually functioning and backing up what you intend to back up.
"When, not if" seems to be the most useful mentality for approaching data loss. Or really, any type of loss.
Agreed on using rsnapshot. Also, rdiff-backup might be useful for some. If you aren't cronning a part of your backup procedure, a one-word alias for your backup script helps to combat laziness.
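For anyone who hasn't used it, a minimal rsnapshot setup of the shape described above might look like this. Hostnames and paths are hypothetical, and note that real rsnapshot config files require literal tabs between fields:

```shell
# /etc/rsnapshot.conf (fields must be TAB-separated in the real file):
#   snapshot_root   /srv/snapshots/
#   retain          daily   7
#   retain          weekly  4
#   backup          user@web01.example.com:/var/www/    web01/
#
# Cron entries driving the rotation:
#   30 2 * * *   /usr/bin/rsnapshot daily
#   30 3 * * 1   /usr/bin/rsnapshot weekly

# And the one-word alias trick for the parts you still run by hand,
# e.g. in ~/.bashrc (name and paths invented for illustration):
alias grabdb='rsync -az user@web01.example.com:/var/backups/ ~/db-backups/'
```

rsnapshot hard-links unchanged files between snapshots, so seven dailies cost little more disk than one full copy.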
Never fully trust the backup policy of your webhost or datacenter. Routinely check on them by verifying the backups they make for you, and always maintain your own independent solution for keeping local backups. Backups on geodistant servers help me sleep a little better too.
External media, hard drives, databases, software, employees -- they all fail and/or corrupt regularly.
RAID is more of an availability solution than a backup solution.
Anyone have details on how to verify the integrity of an SQL dump file, or a master-slave setup -- beyond just checking/repairing the tables? If you have a forum with millions of posts, how do you "check" your backups for corruption, or nefarious alteration? It's often impractical to backup a new 10 gig file every day. Do you just keep daily diffs, and then backup fully once per month or so? What do you use for verifying database integrity?
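I don't know of a way to fully rule out nefarious alteration, but two things you can verify are (a) that a dump hasn't changed since the moment it was written, and (b) that it actually restores. A sketch, with invented filenames (the real `mysqldump` line is shown commented; a stand-in file is created so the sketch runs anywhere):

```shell
#!/bin/sh
# (a) checksum at dump time, verify before trusting any copy;
# (b) restore into a scratch database and compare against live.

DUMP=/tmp/mydb-$(date +%Y%m%d).sql.gz

# Stand-in for the real dump so this sketch is self-contained:
#   mysqldump --single-transaction mydb | gzip > "$DUMP"
printf 'CREATE TABLE posts (id INT);\n' | gzip > "$DUMP"

# (a) Record a checksum when the dump is made, then verify it on every
# copy; sha256sum -c exits non-zero on any mismatch:
sha256sum "$DUMP" > "$DUMP.sha256"
sha256sum -c "$DUMP.sha256"

# (b) The only real test is a restore into a scratch database, then
# comparing CHECKSUM TABLE / row counts against the live server:
#   gunzip -c "$DUMP" | mysql scratch_db
#   mysql -e 'CHECKSUM TABLE posts' mydb
#   mysql -e 'CHECKSUM TABLE posts' scratch_db
```

As for the 10 GB problem: the rdiff-backup tool mentioned upthread stores a current mirror plus reverse diffs, which is essentially the "daily diffs, periodic full" scheme without managing it by hand.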
Wow, this does sound scary. Even though my site is set for daily backups, it seems like a better idea to also have a separate daily backup pulled from the remote server to a local computer.
It would take tons of time to get a site back up if everything were wiped out.
I feel sorry for WebHostingTalk, but I do know they have capable people to fix the issue, and they will be able to prevent such mishaps in the future.
|it is pretty clear that only those working with the backups actually even knew what the server address was - let alone how someone got into it. |
Anyone who had a link in any article could find the backup server the moment someone clicked on a link from the backup site, be it bot/search engine/or other.
Anyone who had an image in any article could find the backup server the moment the image was loaded.
Having a backup database is great but if you run an actual backup site, meaning the whole site is live even if hidden/protected/robots.txt blocked, you've got the potential to be found.
Did they run a backup site or just maintain a backup copy of code without having a protected (but live) environment?
Disgusting... hoping for better luck for them, and worse luck for the perp.
|Having a backup database is great but if you run an actual backup site, meaning the whole site is live even if hidden/protected/robots.txt blocked, you've got the potential to be found. |
Which is why a backup server, after initially being setup, should be firewalled off and be a black hole, nothing running, no pages being loaded, and only SSH access in and out until being called into service.
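One way to sketch that black-hole posture with iptables; 203.0.113.10 below is just a placeholder for your own management IP, and the rules are illustrative rather than a complete hardening policy:

```shell
#!/bin/sh
# Default-deny in every direction, then punch exactly one hole:
# SSH, and only to/from a single known address.

iptables -P INPUT   DROP
iptables -P FORWARD DROP
iptables -P OUTPUT  DROP              # nothing phones home, either

# Loopback, plus replies belonging to established sessions
iptables -A INPUT  -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# SSH in from the management IP only...
iptables -A INPUT  -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT
# ...and SSH out to the same box (e.g. to pull dumps across)
iptables -A OUTPUT -p tcp -d 203.0.113.10 --dport 22 -j ACCEPT
```

With no web server, no DNS entry, and no outbound chatter, the box can't leak its own existence the way a live hidden mirror can.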
However, odds are whatever weakness is in your primary server has been cloned into the backup as well so if you get hacked once the backup is usually a sitting duck once it goes live.
Funny, I was reprimanded and ridiculed for my real-time offsite backup setups, with daily hardcopy dumps, 3 separate devices onsite doing backups a different way, AND I used different types of systems to tie them together so a new hack on one wouldn't work on the next server. Same deal for new faults introduced by day-to-day updates and revs...
So when I left, the new guy got rid of all the documented processes except some basic server backups. Last I heard, he was restoring my last hardcopy backup (almost a year old) to recover data missing due to power and equipment failures and user error.
Even in the new trend of supposedly easy backups complacency will bankrupt you financially and emotionally when you can least afford either. You must pay homage to the zen of backup or it will stick you good.
Could the attackers have done the same to Slashdot, SourceForge and Freshmeat yesterday/today? We'll know when they come back online. See [webmasterworld.com...]
they used vbulletin which uses a salted md5 hash.
to get any information they'll have to construct a rainbow table using that particular salt, if I'm not mistaken.
in other words, only "weak" passwords (probably dictionary, or simple variations of them) will likely be available to them.
if you were using a strong password, the chances of them finding it are slim to nil.
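To make the point above concrete, here's a rough sketch of the scheme vBulletin 3.x used: md5(md5(password) + per-user salt). The passwords and salts below are invented for illustration:

```shell
#!/bin/sh
# Because the salt differs per user, a generic precomputed rainbow
# table is useless; an attacker needs a fresh table (or brute force)
# for every individual salt. Example inputs are made up.

hashpw() {
    pw="$1"; salt="$2"
    inner=$(printf '%s' "$pw" | md5sum | cut -d' ' -f1)
    printf '%s%s' "$inner" "$salt" | md5sum | cut -d' ' -f1
}

# Same password, two different salts -> two unrelated stored hashes:
hashpw 'hunter2' 'aB3'
hashpw 'hunter2' 'Zq9'
```

Note the flip side: per-salt brute force against a single account is still cheap for dictionary words, which is exactly why only weak passwords are likely at risk.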
The news just got worse over there. Really feel for Troy and his team. Hacker scum and their shadow masters, out to punish!
|UPDATE: 7:14pm est 04/07/09 |
From what we know now, there were more records on the database server from which the credit card dump was taken. If research shows that a larger number of customers' data was compromised, we will contact those individuals directly.
UPDATE: 4:24pm est 04/07/09
We have contacted all major credit card companies and are awaiting their guidance. It should be noted that card holders will not be held liable for any fraudulent purchase made using their credit card.
ANNOUNCEMENT - 1:25pm est 04/07/09
This morning, the hacker who attacked WHT initiated further communication. He provided evidence that credit card information on one of our database servers was, in fact, compromised during that attack.
What is WHT and iNET Interactive doing about it?
If we have evidence or suspicion that your credit card information was leaked, you will be receiving further communication from WHT and iNET Interactive.
Why is WHT down and when do we expect it to be back up?
We're currently doing a full security sweep of our cluster to ensure the servers are secure. The site will be back up once this security review is complete.
Update -- re: the credit card breach I posted a specific thread about this topic here [webmasterworld.com].