incrediBILL - 4:49 am on Feb 23, 2010 (gmt 0)
Let's see you download 50mb+ daily with download limits
Perhaps it's time to find a new vendor.
First, you can use software like rsync and only download the daily changes, not every file you own. Some tools can even transfer just the changed portions of a file rather than the whole thing, so a growing database only needs the new data downloaded, not the entire database each time.
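Just as a rough sketch of the idea (the host name and paths are made up, and it assumes you have SSH access to the account), a nightly pull of only the changed files might look something like this in Python:

    # Rough sketch: pull only what changed since the last run.
    # "user@example-host" and both paths are placeholders.
    import subprocess

    def pull_changes():
        # -a preserves permissions/timestamps, -z compresses on the wire,
        # and rsync's delta transfer only sends the changed parts of files.
        subprocess.run(
            ["rsync", "-az", "--partial",
             "user@example-host:/home/user/public_html/",
             "/backups/site-mirror/"],
            check=True,
        )

    pull_changes()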
Second, 50MB of data is usually much smaller and faster to move once it's zipped, often as little as 2-10MB depending on the content.
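For instance, a quick sketch with Python's standard zipfile module (the file names are placeholders) deflates a dump before it ever goes over the wire:

    # Rough sketch: compress a text-heavy dump before transferring it.
    # "site_dump.sql" and "site_dump.zip" are placeholder names.
    import zipfile

    with zipfile.ZipFile("site_dump.zip", "w",
                         compression=zipfile.ZIP_DEFLATED) as zf:
        zf.write("site_dump.sql")
    # Text-heavy dumps routinely shrink to a fraction of their size.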
Lastly, you can back up to places other than your home connection on a restrictive ISP.
For as little as $99/mo, or even less, you can have a dedicated server with terabytes of bandwidth that you can back up to daily.
Worst case, you can store 7GB in Gmail for free, sent as one big file attachment (or several), or use the unlimited storage on Yahoo Mail.
It's not a matter of whether it can be done, it's which method works best for a given situation and finding (or building) simple scripts and tools to make it happen.
Heck, most modern FTP clients even do file compression on the fly so 50MB isn't 50MB anymore.
In my case I download about 100MB from one site daily; it zips down to 11MB, rotates through 7 folders by day of the week, and a baseline copy gets set aside periodically.
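If anyone wants to roll their own, here's a bare-bones sketch of that kind of rotation (all the paths are invented for the example): each day's zip just overwrites the slot for the same weekday from last week, and once a month a baseline copy is set aside so it never rotates out.

    # Rough sketch of a 7-slot, day-of-week rotation. Paths are placeholders.
    import datetime, os, shutil

    def rotate(todays_zip):
        slot = datetime.date.today().strftime("%a")          # Mon, Tue, ...
        dest_dir = os.path.join("/backups/daily", slot)
        os.makedirs(dest_dir, exist_ok=True)
        shutil.copy2(todays_zip, os.path.join(dest_dir, "backup.zip"))

        # Once a month, keep a baseline copy that the rotation never touches.
        if datetime.date.today().day == 1:
            os.makedirs("/backups/baseline", exist_ok=True)
            shutil.copy2(todays_zip, "/backups/baseline/backup.zip")

    rotate("/backups/incoming/site_dump.zip")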
Like I said before, I'm not blaming the people who got burned; having been a host, I know how the best-laid plans can go totally south. I'm just offering options to help avoid the pitfalls in the future.
That it happened at Westhost seems just to be a case of bad luck. It could, and can, happen at other providers' data centers as well.
I consider my current web host 2nd to none, they're top notch, but one of their data centers (where my big servers are) was hammered by a storm and a failing generator once and we lost a few hours; another time they were running electrical tests and triggered an outage for a couple of hours. Still not bad for a 5 year track record.
When I used to host in FL, a major backbone router blew in GA; the host was up and running fine, but the site was down for 12 hours to everyone outside FL ;)
When I was a host, one fine morning L3 in So Cal had a single expensive part blow up, and the only backup part was 3 hours away, so they had to send someone to get it, 6 hours round trip. Most of So Cal (where our hosting facility was) was dead for a day. We were up and running fine, but the alternative routes were flooded with rerouted traffic, so even backup data pipes were useless given the nature and scope of the outage. All my customers were offline, and it was nothing I did, nothing I had control over, but try convincing customers of that.
Just trying to give some insight into how anything can happen; no matter how well you try to avoid it, a hosting failure will hit the fan eventually. It's inevitable.
Taking a week to restore service still seems a bit crazy.