Congrats guys but umm.. just lost a post of mine today :( Is it related?
Curious as to why you left Rackspace. I've had a lot of problems with their "support" and the misleading title of "fanatical". That get you in the end?
Or did you just get a simply fantastic deal by putting the liquidweb button in the corner? I'd give my right arm and two of my kids to get a button up there!
(Just kidding ... I couldn't do without my right arm!)
You mean this site lives in far-off server land? I would have thought that, with how long the site has been online, it would be one of those you could walk up to and look at in the rack at the office.
> lost post
Sorry Koan - it is gone. There was a permissions issue with several forums. When you would post - it would go off into the ether. SWN (should work now). (sorry)
> rack at the office
That is the dream. I hope I live to see the day.
Seriously, I would love to host it locally. Our best effort at bringing it in house worked out to about $25,000 a year for reliable internet. That's about $1.5k a month for a quality fiber feed, and about $1k a month for a backup provider. That is before any hardware.
Hosting locally on your own server farm is geek porn. Yeah, I'd like it too, having a pile of shiny machines humming and purring out my websites, but who really wants to manage all that? If you're serving content on the scale of Facebook or Amazon, then sure... but even for a really popular site like WebmasterWorld, managed hosting makes so much sense, both for finances and for sanity.
Congratulations Brett & team, and Liquidweb for landing such an influential customer!
|That is the dream. I hope I live to see the day. |
..or a nightmare.
Unless your office has real backup power (24+ hr gas or diesel), redundant connectivity, redundant hardware, security monitoring, a fire suppression system, server monitoring, site monitoring, automated backup and remote restore, etc., all in place, you are probably better off with a quality reliable host or co-lo facility. (Ask me about horror stories -- like a water pipe breaking and the fire department shutting down "everything" for safety reasons, or cleaning people looking for an outlet to use for a couple of hours.) Never mind the "normal" drive failures, loss of connectivity, failure to remote reboot, etc.
What lexipixel said. Seriously - dealing with hardware is a whole other level of headache.
> but who really wants to manage all that?
me me me. We already have a shadow server that is a perfect copy (sans up-to-date forum data). I would relish running this from the office. We also run a pretty good set of machines from the office as it is now.
It's a no-brainer. We have a backyard at the office that would be perfect for a fail-safe generator. Battery backups for about $2k will run a stock server for 12-24 hours without a problem. A generator from Home Depot is $300-400 and would run you 15-24 hrs. All that is easy peasy - no issue.
Finding a reliable high speed net connection at a reasonable rate is the biggest issue. We just can't come up with two. They just want a fortune to install something like a 100meg or 10meg sustained line.
I don't see the server hardware as even remotely challenging (that's the fun part). I find it is more difficult to maintain a server at a data center. Granted, I am not solo any more and could delegate maintenance to someone else while I was away.
|I don't see the server hardware as even remotely challenging (that's the fun part). I find it is more difficult to maintain a server at a data center. |
Obviously you haven't run your own server farm before because it's all fun and games until something goes so horribly wrong you're down.
If you plan to keep everything up and running 24/7 with minimal downtime, it's not cheap. You need clones of all your hardware with everything pre-installed and ready to swap at a moment's notice, because anything can break and need replacing at any time.
Then there are things you may not be able to afford to protect against such as those fancy new routers that can thwart DDOS attacks, etc.
The biggest problems I ever had were 1) UPS batteries dropping dead all the time and 2) routing issues with the upstream provider, which are far fewer now that I'm using a backbone like Peer 1 than when I was using a local solution provider.
BTW, Comcast still hadn't resolved your DNS change many hours later. I could see the change almost immediately from my servers, so I finally had to switch my PC to use OpenDNS and flush my cache and VOILA! here I am.
P.S. it's MUCH faster, hope it stays that way under a full load.
Congratulations Brett and team with the move to the new server!
I can confirm that WebmasterWorld can still be reached from one of the most remote locations on earth. Unfortunately, no IPv6 address for the server yet. You could have used this move to make the IPv6 jump and be ahead of the crowd.
LiquidWeb looks interesting, but they do that "thing" with the prices of dedicated server upgrades whereby they're basically charging you a monthly rate for the upgrade that's nearly the same as what it would cost you to buy the part in the first place! In other words, every month you'll be "buying" that memory or hard drive all over again. Too many hosting companies seem to do this, and at the prices components are at now, it's really time this stops...
|P.S. it's MUCH faster, hope it stays that way under a full load. |
Yes, it is... MUCH faster... and I too hope it stays that way.
I'm seeing a few posts missing too... but otherwise an amazingly smooth transition. Congratulations to all the team that made it happen.
We have pretty close to the same specs on our server. We upgraded last February. This was our first time with CentOS 5.0. We love it. I have not rebooted my server since February and it is fast as ever! It seems to manage memory really well and it cleans up after itself and bad processes well.
I agree with the others on managing one of these boxes in your office. A headache I'd gladly pay someone else to deal with. =)
Brett - Do you use any caching solutions like memcache? We found it can pretty much double to triple the server's capacity if implemented properly.
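For anyone curious what "caching in front of the database" buys you, here is a minimal cache-aside sketch. This is not Brett's setup or a real memcached client -- a plain dict stands in for memcached, and the class and key names are made up -- but the get-or-compute pattern is the same one you'd use with a real client library.

```python
import time

class CacheAside:
    """Toy cache-aside layer; a dict stands in for memcached here.
    With a real memcached client the get/set calls would go over
    the network, but the pattern is identical."""
    def __init__(self, ttl=60):
        self.store = {}          # key -> (expiry_time, value)
        self.ttl = ttl
        self.misses = 0

    def get_or_compute(self, key, compute):
        entry = self.store.get(key)
        now = time.time()
        if entry and entry[0] > now:
            return entry[1]                      # cache hit: skip the DB
        self.misses += 1
        value = compute()                        # cache miss: hit the DB once
        self.store[key] = (now + self.ttl, value)
        return value

cache = CacheAside(ttl=60)

def expensive_query():
    # Stand-in for a slow DB query / page render
    return "thread #12345 rendered HTML"

page = cache.get_or_compute("thread:12345", expensive_query)
again = cache.get_or_compute("thread:12345", expensive_query)
print(cache.misses)  # -> 1: the second request never touched the "DB"
```

The capacity win comes from the miss counter staying low: every hit is a database query the server never runs.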
Congratulations on moving to a new server.
I'm very curious why you're running a 32 bit OS with 8GB of RAM though?
I looked and talked to everyone, and I just didn't have any apps that would really take advantage of 64-bit. All I could see was that we would burn more memory and not see any performance increase. We don't need 8GB for apps. We are barely touching half of that with apps right now. However, we wanted the availability of more RAM for disk caching. I have an app in mind that will need a 1-2 gig RAM disk.
>run your own server farm
Oh, we have about 10 network machines at the office running 24x7, plus the occasional hookup of 5-6 more. Rarely an issue. (Not to mention the occasional testing of the pubcon system with its 35 laptops and 5 desktops.)
|Seriously, I would love to host it locally. Our best effort at bringing it in house worked out to about $25,000 a year for reliable internet. That about $1.5k a month for a quality fiber feed, and about $1k a month for a backup provider. That is before any hardware. |
Only until you've done it.
I had them dig up the street to pull fiber in to an office and all that stuff, ran my own little server farm for a year or two. Never, ever again.
The costs are exorbitant, and it's a crazy business decision to pay them when you can call any one of hundreds of companies that have multi-homed fibre, power backups, and air conditioning already installed. There simply is no way you can duplicate what's available in a pro data site at any sort of sane cost. You'll end up with far more costs, and still not have as good a structure as what you can get elsewhere.
And as bill noted, when stuff goes wrong, and it does, you're now back to 24/7-365. DDOS, hard drive failure, router issues, on and on. Go on vacation for a week? Start worrying about hardware again.
I thought doing this would give me additional control. All it did was drain me for tens of thousands and cause me grief.
Why worry about hardware? Stick to running forums and leave the hardware to the hardware experts. They've got economies of scale and a vested interest.
But yes, the server is working fine, other than last night when the directory was exposed. I trust you've turned that off :).
I'm curious about why the server is set up with 8GB if it can't access over half the memory? Is the hardware from the host or something you guys brought in? And how did you decide on the OS, or was this like a sweet deal at the right time?
I think the best of both worlds would be bringing your own hardware with your own software configuration to an existing host. I know that it's an option with a limited few hosts, though I don't know how much effect it would have on on-site service.
Woohoo! She's pretty quick Brett - nice upgrade.
|I'm curious about why the server is setup with 8GB if it can't access over half the memory |
I have the same question... can a 32-bit OS address more than 4GB of RAM? Is some kind of middleware layer required between RAM and the disk drive (e.g., a 4GB cache)?
I have CentOS 5.0 32-bit and it recognizes 10 GB RAM... there was a simple kernel patch my ISP applied to allow it to use all the RAM.
Haha, well, we can def see who the pure web dudes are vs. the network/server guys here.
It is a logical choice for Brett to go with the 32-bit version vs. 64-bit. The limit for CentOS 5 is 3GB per process (16GB for the whole system), which is more than adequate for a web application like this forum. The average Apache process running on the server will use on the order of 10MB. The main problem with a 64-bit operating system is that all data is aligned to boundaries of 8 bytes, in general causing 64-bit applications to use about 30% more memory than the same application compiled for a 32-bit OS. And 30% more memory per process counts.
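To make the alignment point concrete, here is a rough back-of-the-envelope sketch. The record layout and field counts are invented for illustration; real compilers pack structs more cleverly, and the actual growth depends on how pointer-heavy your data is (pointer-dense structures can grow far more than 30%).

```python
def record_size(n_pointers, n_ints, pointer_bytes, align):
    """Rough size of a record with pointer fields and 4-byte int
    fields, each field padded up to the given alignment boundary.
    Illustrative only -- real struct packing is more subtle."""
    def padded(size):
        return ((size + align - 1) // align) * align
    return n_pointers * padded(pointer_bytes) + n_ints * padded(4)

# A pointer-heavy record, e.g. a linked-list node with a few fields:
size_32 = record_size(n_pointers=4, n_ints=2, pointer_bytes=4, align=4)
size_64 = record_size(n_pointers=4, n_ints=2, pointer_bytes=8, align=8)
print(size_32, size_64)  # -> 24 48
```

Same logical data, twice the footprint in this (deliberately pointer-dense) case: 4-byte pointers and 4-byte alignment vs. 8-byte pointers with 4-byte ints padded out to 8-byte slots.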
> there was a simple kernel patch my ISP applied to allow it to use all the RAM.
Ya, stock redhat enterprise and centos are coming that way now.
hmmm - I was under the impression it could recognize up to 64gig of ram?
Ya JAB, I mean the server is only using about half the memory right now. We'll see how that holds next week when server load doubles and we flip on mod_deflate.
Typical memory usage so far:
I have a programming project that is going to need about 2 gig in a RAM disk for an ultra-fast random access db. Linux has a killer disk cache system by default. It is almost always as fast as a RAM disk. The one exception, where a RAM disk will beat the disk cache, is on a web server dealing with tens of thousands of files and/or lots of disk writes interspersed with reads. (Going to toy with putting the entire board into a RAM disk and see what happens - I think it will be radically faster - and a fun project.)
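The page-cache effect Brett describes is easy to see for yourself. A small sketch (file path and sizes are arbitrary): the first read of a file may touch the disk, while a repeat read is normally served straight from RAM by the Linux page cache, which is why a RAM disk often shows little benefit for read-heavy workloads.

```python
import os
import tempfile
import time

# Write a file, then compare a first read against a repeat read.
# On Linux the second read is typically served from the page cache.
path = os.path.join(tempfile.gettempdir(), "pagecache_demo.bin")
payload = os.urandom(4 * 1024 * 1024)  # 4 MB of test data
with open(path, "wb") as f:
    f.write(payload)

def timed_read(p):
    start = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return data, time.perf_counter() - start

data1, t1 = timed_read(path)   # may involve the disk
data2, t2 = timed_read(path)   # typically served from the page cache
print(f"first read: {t1*1e3:.2f} ms, repeat read: {t2*1e3:.2f} ms")
os.remove(path)
```

Heavy write traffic interleaved with reads is the case where the page cache helps less, matching the exception noted above.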
Other than that, I think I got all the box we could take advantage of here.
>The average Apache process running on the server will use in the order of 10MB.
Ya, most threads appear to run about 4-5meg. The largest 5% of threads can pull 15-20meg on replies. Mod and Admin actions (like moving a thread to a new forum) can pull 30-50meg. We will be addressing that top usage problem shortly (eg: fixing the dreaded 'forum x needs to be reindexed' issue).
> Only until you've done it.
Maybe it is just me then, Wheel. Remember, I've been doing this for a few years. Having "my box" on someone else's property feels wrong. I've always been a 'buy - don't lease' kinda guy. It's moot anyway until I find a killer location for an office with lower priced inbound internet. I hear there is space next to Google's Austin office on Mopac (not kidding).
|Oh, we have about 10 network machines at the office running 24x7, |
I think you overlooked the hardware redundancy issues I spoke about.
If you're sitting around at 3am and an OC3 card drops dead, you can't just pick up a spare. Most people don't keep spares of $38K cards lying around, and it's usually a specialty part that takes a long time to get replaced.
If you prefer to own instead of rent, colo is an option which avoids getting your own internet provider, just your server hardware.
Then again, when you're sitting around at 3am and the hardware fails, you're on the hook to drive to the colo to fix it which is why leasing theirs is always better IMO.
Now consider what would happen when you and your crew are at some tradeshow and your roll-your-own server network goes belly-up, like during the middle of PubCon.
How much fun would THAT be? :)
P.S. I've been that guy up at 3am many times, not fun.
IIRC, webhostingtalk moved to LiquidWeb, too.
[edited by: creeking at 7:38 pm (utc) on Dec 5, 2010]
>If you prefer to own instead of rent, colo is an option
Looked at that as well. However, the cost variance vs. 'your own server' is negligible with most colos. It is less attractive to me than leasing because now you are responsible for the hardware with limited access to it. If it is on the rack next to your desk, you can fix anything or throw up another box in a short time. If it is at a colo, you have to go diagnose the problem, then design a fix that will only work with that box, go get the fix, install it, and hope it works. No bonus points there - only downside. Every time you want to throw another box at a problem, they are going to want that connection fee. If it is at your office - run 1 or 100 boxes - no worries.
Ownership is about control. About throwing a $1k static ram drive at a system to 'just see' what if? Or working your own load balancer on 4-5 servers for near-zero latency. You would no longer be a slave to high performance hardware and could easily get by with 4-5 modest servers that could share the load.
> oc3 card
The line card for the 7200VXR we looked at was $300 before discounts. (They have dropped profoundly in the last 2 years, Bill.) 3com has one out for their OC3 router that is $150 on amazon. The fibre OC3 we looked at into the office came with a line router included.
Part of what I want to toy with the extra RAM on is setting up a "read from ram - write to all ram/and hard disks" system: a simple load balancer that would work with an almost unlimited number of boxes. You read from source RAM, then write to a control server that writes to all the HDs in your network - 100% real-time syncing without the annoying delay (like /.) a lot of load balancer systems have as they sync dbs.
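The read-local/write-to-all idea above can be sketched in a few lines. This is purely an illustration of the scheme as I read it -- the class names, round-robin read policy, and in-memory dicts are all made up, and a real version would need to handle partial write failures and node recovery.

```python
class Node:
    """One box: its own in-memory copy of the data set."""
    def __init__(self, name):
        self.name = name
        self.data = {}

class WriteAllCluster:
    """Reads hit any one node's local RAM copy; every write fans
    out synchronously to all nodes, so there is no replication
    lag to wait out. (Hypothetical names, toy error handling.)"""
    def __init__(self, nodes):
        self.nodes = nodes
        self._rr = 0

    def write(self, key, value):
        for node in self.nodes:          # synchronous fan-out write
            node.data[key] = value

    def read(self, key):
        node = self.nodes[self._rr % len(self.nodes)]  # round-robin reads
        self._rr += 1
        return node.data.get(key)

cluster = WriteAllCluster([Node("box1"), Node("box2"), Node("box3")])
cluster.write("thread:42", "post body")
# Every node sees the write immediately -- no sync delay:
assert all(n.data["thread:42"] == "post body" for n in cluster.nodes)
print(cluster.read("thread:42"))  # -> post body
```

The trade-off is the classic one: writes cost as much as the slowest node, in exchange for reads that never see stale data.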
|The line card for the 7200VXR we looked at was $300 before discounts |
I looked at some current prices before posting and I'm seeing 7200VXRs starting at $13K on dealtime, sure we're talking about the same gear?
I like to play with hardware too but trust me, in the end it's just a big time suck, time that is better spent doing actual money making activities that allow standard sleep cycles :)
Getting an internal server error trying to post in the database forum. Connected to the new server?
Might add, I was able to put a new post in Foo, but still getting rejected by the database forum with an internal server error :(
|Ownership is about control. |
|...in the end it's just a big time suck, time that is better spent doing actual money making activities... |
Having last year poured thousands of dollars into a project where the gearheads insisted we needed the control only a firm such as Rackspace could provide, only to have very little tech control (or control over the budget), you can put me in Bill's camp now.
I understand what Brett is saying regarding wanting to
|toy with the extra ram is to setup a "read from ram - write to all ram/and hard disks" system to create a simple load balancer that would work in an almost unlimited number of boxes system... |
But, to make Bill's point another way, it is amazing how budget and management considerations can inform technical requirements in a useful way.
Still, this said, budget and management considerations can shift as the business matures and technical capacity can provide creative insights.
I don't see his logic, but it's difficult to dismiss Brett's gut. And, it's great that he shared this. Very interesting.
|Remember, I've been doing this for a few years. |