|Upgrade Linux web server|
| 2:08 pm on Sep 18, 2009 (gmt 0)|
So my Linux web server in the offsite datacenter is running on a 2006 OS which no longer gets security updates. Time to upgrade to a 2009 OS.
I've got my live web server, a typical jungle: a dozen user accounts, each with some websites, a variety of libraries installed, and services running.
I've got an almost-clone of the hardware at my office: the same machine with different-sized hard drives, a bit less RAM, and no RAID controller. Otherwise, same platform. And during the upgrade I'm moving from hardware RAID to software RAID, so that difference shouldn't be an issue.
(One other consideration: the hard drives in my production server are bigger, but I think only 10K RPM. My spare/cold server's drives are smaller but 15K RPM. I might consider ending up with the 15K drives, though honestly the machine's still got way more horses than I need.)
What would you suggest is the best way to end up with a tested webserver that's been upgraded?
| 3:33 am on Nov 11, 2009 (gmt 0)|
Here's the route I've taken, almost there.
First, I hired a Linux guy to come into my office and work on the spare machine directly. After 3-4 days, he still didn't have a working machine. Sigh.
So I installed a fresh copy of Linux on the spare machine.
Then I created new user accounts on this machine.
Next, I drove the machine to the datacenter and installed it. This will let me migrate slowly, in a low-stress fashion. I've given this spare machine its own external IP, and both the current server and this second server have a second network interface on an internal network, so I can copy large amounts of data between the two without going over the internet.
Next, I rsynced all the user files over.
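For anyone following along, the sync step looks something like this -- a rough sketch, with "newbox" as a hypothetical hostname for the new machine on the internal network:

```shell
# -a  preserve permissions, ownership, timestamps, symlinks
# -H  preserve hard links too
# -v  verbose; consider adding --delete only on the final pass,
#     once you're sure nothing on the new box should survive it
rsync -aHv /home/ root@newbox:/home/
```

Because rsync only transfers changes, the final pass just before switchover is fast even if the first one takes hours.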
Next, I did a

mysqldump -u username -p --all-databases > alldatabases.txt

which makes mysqldump write all the databases into one text file. I copied that over and then did:

mysql -u username -p < alldatabases.txt

which imports the databases into the new machine.
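The copy step can even be skipped by piping the dump straight over the internal network. A sketch, assuming ssh access to the new box as "newbox" (hypothetical name) and a ~/.my.cnf holding credentials on both ends so neither command prompts for a password:

```shell
# --single-transaction gets a consistent snapshot of InnoDB tables
# without locking them for the duration of the dump
mysqldump --single-transaction --all-databases | ssh newbox mysql
```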
Copied over the Apache config file; that seemed to mostly work. It's serving files.
Left to do:
- get BIND running. I've hired a Linux consulting firm to do that; I don't have time to learn a BIND install right now.
- install postgrey, and copy over the email config files
- make sure the shadow/password files are copied over for the users.
- do a final rsync and database dump.
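On the shadow/password step: since the system accounts differ between the old and new installs, I only want the regular users' lines, not the whole files. A rough sketch, assuming regular accounts start at UID 500 (the Mandriva-era default; check /etc/login.defs):

```shell
# Pull just the regular-user lines out of /etc/passwd. The matching
# /etc/shadow and /etc/group lines can be grabbed the same way, then
# appended to the files on the new machine.
awk -F: '$3 >= 500 && $3 < 65534' /etc/passwd > users.passwd
```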
In addition, I want to fine tune apache/mysql once it's running on the new server.
Then I'll let things settle. Then I'll wipe the original server and do a fresh Linux install. After that it should be mostly just a straight copy of files. Then I can pull my spare machine from the datacenter again.
What I like about this method is that with both machines in the datacenter, I can switch at my leisure just by reassigning IP addresses from one machine to the other, rather than an abrupt, time-sensitive switch which can go wrong. With both machines there, if something on the new server isn't right, I can reassign the IP addresses back, then go home for dinner and worry about it in the morning.
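The reassignment itself is just a couple of commands. A sketch using iproute2, with an example address and assuming the interface is eth0 (adjust to yours):

```shell
# On machine A (old): release the service IP
ip addr del 203.0.113.10/24 dev eth0
# On machine B (new): claim it
ip addr add 203.0.113.10/24 dev eth0
```

To switch back, run the same pair in the other direction.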
| 5:18 am on Nov 15, 2009 (gmt 0)|
Out of curiosity, which Linux distro did you originally have on the machine and which did you choose to replace it?
Be aware that if you had mismatched versions of MySQL, blindly copying all databases over (which would include the mysql database itself) can be a bad thing. Make sure you check version compatibility, especially with regard to passwords.
| 4:45 pm on Nov 16, 2009 (gmt 0)|
I was using Mandriva 2006. It's at the end of its life expectancy -- no more updates -- so I had to move to Mandriva 2010.
Mandriva 2010, and I suspect other distros, now have an incremental update process. I shouldn't ever have to do a full upgrade again; just keep the updates going. My desktops have already done this: they updated seamlessly from 2009 to 2010 just by auto-downloading updates.
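The weekly routine on Mandriva is just two commands -- this is from memory, so check the urpmi man pages:

```shell
# refresh all configured package media
urpmi.update -a
# install every package that has a newer version available
urpmi --auto-select
```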
In terms of MySQL, I didn't do a blind copy. The commands I list above do a database dump, including the MySQL statements necessary to re-import. I've run the commands and they worked seamlessly. Actually, it burped on a couple of old databases that were screwed up, but I deleted them and the rest were fine.
| 8:49 am on Nov 17, 2009 (gmt 0)|
Incremental upgrades can go wrong. Ubuntu (admittedly not the people with the best QA) has had serious issues at least twice.
| 11:50 am on Nov 19, 2009 (gmt 0)|
Everything can go wrong, but it's got to be updated somehow. Either a two-month process to upgrade everything all at once, or something I can do once a week from the comfort of my office chair :).
| 7:01 pm on Nov 19, 2009 (gmt 0)|
The main problem with incremental upgrades of Linux versions is the kernel. I have never had problems upgrading individual applications, but upgrading the kernel in a running system with a new pre-compiled version from a repository has always caused problems for me.
One of the reasons is that loadable modules, like hardware drivers for RAID etc., need to be upgraded at the same time because of possible small differences in API calls, available kernel functions, and so on. Upgrading the kernel without properly upgrading the modules at the same time is a sure way to create an unstable Linux system.
| 8:14 pm on Nov 19, 2009 (gmt 0)|
Yep, almost. But I believe my distro handles all that stuff. All of it: when I update, all that kernel stuff is looked after, including modules.
I've already done kernel updates on this new distro and it's gone flawlessly every time. I'm a happy guy.
As for RAID (tangent), I quit using the hardware RAID in my server. I was told that if a drive failed, hardware RAID meant I had to move that drive to a machine with the exact same RAID controller. You can't just pull a drive from a hardware RAID controller, plug it into another, identical machine without the RAID controller, and have it work. Which coincidentally is exactly the setup I've got: a spare/cold server that's the same except no RAID card. I've gone to Linux software RAID, which I'm led to believe does allow me to pull a drive, insert it in a new machine, and have it boot.
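Setting up the Linux software RAID is a short mdadm session. A sketch for a two-disk mirror; the device names are hypothetical, and --create destroys anything on the named partitions:

```shell
# build a RAID1 array from two partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# watch the initial resync
cat /proc/mdstat
# persist the array definition so it assembles at boot
mdadm --detail --scan >> /etc/mdadm.conf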
| 2:13 pm on Nov 26, 2009 (gmt 0)|
Another hurdle in the upgrade. I had a unique IP address on machine A (old) and a unique one on machine B (new). And I had all the rest of my current IPs assigned to the old machine.
I then moved all the IPs from A to B, then restarted the network on both machines. Should've worked flawlessly. But it didn't. When I queried the IPs that should now be on machine B, I got a 'network address unreachable'... from machine A!
Turns out that my host's routing equipment ties IPs to MAC addresses (the physical machine), and that mapping is only refreshed every three hours. So I scrambled and put the IPs back on machine A, restarted, and I'm still live :). This type of screw-up is exactly why I like this idea of two live servers that I can transfer between. I figured this problem out Saturday afternoon, just moved the IPs back to machine A, and continued on about my weekend, no stress.
The solution is that I need to move the IPs from machine A to machine B, then reboot both machines (not just restart the networking, apparently). That's a bit trickier; I never like a remote reboot.
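In theory a gratuitous ARP announcement can nudge the upstream cache without waiting out the timer; whether the host's gear honors it is another matter. A sketch using iputils arping (interface and address are examples, and it needs root):

```shell
# broadcast "203.0.113.10 now lives at this MAC" three times
arping -U -c 3 -I eth0 203.0.113.10
```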
| 8:19 pm on Dec 7, 2009 (gmt 0)|
Reboot went OK, except that it didn't fix the router problem. Apparently a power-down is actually needed, not just a reboot. In any event, we've moved over to the new machine and we're fine-tuning. Some packages have changed, we missed installing some stuff like the PEAR libraries, etc. But we're almost there. Then I just need to rebuild the original server and move everything back over again.
| 2:23 am on Dec 8, 2009 (gmt 0)|
In terms of rebuilding the original server so we can move back onto the 'better' hardware, now running a current version of Mandriva, here's what we've cooked up (because the server is in a remote datacenter):
- I'm uploading a Linux .iso file to the 'new' server.
- The tech guys at the datacenter are going to burn the DVD for me.
- The datacenter is going to give me a KVM-over-IP solution. Basically I get keyboard/monitor control like a KVM switch, but over an IP address. That means I've got control right from boot. They'll put the DVD in the drive, and I'll reboot to the DVD remotely via the KVM. That will let me do a fresh install.
Once I've reinstalled linux, I can pretty much just repeat the process again to move everything back - except the second time should be faster as I've already got all the config files and stuff tested and working on the new version of the OS.
End result: move from the old Linux distro to a new one on the backup hardware, reformat and reinstall on the main server, then move back to the original machine. All remotely, with minimal downtime.
Once done, I can drive back to the datacenter and pick up my spare server.