Welcome to WebmasterWorld
Forum Moderators: bakedjake
I've got my live webserver: a typical jungle of a dozen user accounts, each with a few websites, an assortment of installed libraries, and services running.
I've got an almost-clone of the hardware at my office. Same machine with different-sized hard drives, a bit less RAM, and no RAID controller. Otherwise, same platform. And during the upgrade I'm moving from the hardware RAID to software RAID, so that difference shouldn't be an issue.
(One other consideration - the hard drives in my production server are bigger, but I think only 10K RPM. My spare/cold server drives are smaller but 15K. Might consider ending up with the 15K drives, though honestly the machine's still got way more horses than I need.)
What would you suggest is the best way to end up with a tested webserver that's been upgraded?
First, I hired a linux guy to come into my office to work on the spare machine directly. After 3-4 days, he still didn't have a working machine. Sigh.
So, I installed a new copy of linux on the spare machine, fresh.
Then I created new user accounts on this machine.
Next, I drove the machine into the datacenter and installed it. This will let me migrate slowly, in a low-stress fashion. I've given this spare machine its own external IP, and the current server and this second server also have a second network interface on an internal network, so I can copy large amounts of data between the two without going over the internet.
Next, I rsynced all the user files over.
Next, I did a
mysqldump -u username -p --all-databases > alldatabases.txt
that makes mysqld dump all the databases into a text file. I copied that over and then did:
mysql -u username -p < alldatabases.txt
which imports the databases into the new machine.
Copied over the Apache config file. That seemed to mostly work. It's serving files.
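One sanity check that helps when carrying an Apache config to a fresh install (a sketch - the config location is a placeholder and varies by distro): list any absolute paths the file references that don't exist on the new box yet, then run configtest before reloading.

```shell
# Flag paths referenced in the config (DocumentRoot, logs, modules)
# that don't exist on the new machine yet. CONF is a placeholder.
CONF=${CONF:-/etc/httpd/conf/httpd.conf}
grep -oE '"?/[A-Za-z0-9._/-]+' "$CONF" | tr -d '"' | sort -u |
while read -r p; do
  [ -e "$p" ] || echo "missing: $p"
done > missing_paths.txt
# review missing_paths.txt, then: apachectl configtest
```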
Left to do:
- get bind running. I've hired a linux consulting firm to do that; I don't have time to learn a BIND install right now.
- install postgrey, and copy over the email config files
- make sure the shadow/password files are copied over for the users.
- do a final rsync and database dump.
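On the shadow/password item above: it's safer to carry over just the regular accounts rather than overwrite the files whole, since the fresh install's system accounts may sit at different UIDs. A sketch - the source path is a placeholder, and the UID cutoff of 1000 is an assumption (older distros started regular users at 500):

```shell
# SRC points at copies of the OLD machine's passwd and shadow files
# (placeholder path). Extract entries with UID >= UID_MIN, skipping
# "nobody" (whose high UID would otherwise match).
SRC=${SRC:-/root/old-server-etc}
UID_MIN=1000
awk -F: -v min="$UID_MIN" '$3 >= min && $1 != "nobody"' "$SRC/passwd" > passwd.users
# shadow has no UID column, so match it by the user names found above:
awk -F: 'NR==FNR {u[$1]; next} $1 in u' passwd.users "$SRC/shadow" > shadow.users
# append both .users files to the new machine's /etc files, then run pwck
```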
In addition, I want to fine tune apache/mysql once it's running on the new server.
Then I'll let things settle. Then I'll wipe the original server and do a fresh linux install. Then it should be mostly just a straight copy of files. Then I can pull my spare machine from the data center again.
What I like about this method is that with both machines in the datacenter, I can switch at my leisure just by reassigning IP addresses from one machine to the other, rather than an abrupt, time-sensitive switch which can go wrong. With both machines there, if something on the new server isn't right, I can reassign the IPs back, then go home for dinner and worry about it in the morning.
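The switch itself is just address reassignment; roughly (a sketch - the address, prefix length, and interface name are placeholders, and on a 2010-era distro you might be using ifconfig rather than ip):

```shell
# On the old server: release the service address
ip addr del 203.0.113.10/24 dev eth0
# On the new server: claim it
ip addr add 203.0.113.10/24 dev eth0
```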
Be aware that if you had mismatched versions of MySQL, blindly copying all databases over (which would include the mysql database itself) can be a bad thing. Make sure you check version compatibility esp. with regard to passwords.
Mandriva 2010, and I suspect other distros, now have an incremental update process. I shouldn't ever have to do a full upgrade again - just keep the updates going. My desktops have already done this - they updated seamlessly from 2009 to 2010 just by auto-downloading updates.
In terms of MySQL, I didn't do a blind copy. The commands I list above do a database dump, including the MySQL statements necessary to re-import. I've done the command and it worked seamlessly. Actually, it burped on a couple of old databases that were screwed up, but I deleted them and the rest were fine.
One of the reasons is that loadable modules like hardware drivers for RAID etc need to be upgraded at the same time because of small possible differences in API calls, available kernel functions etc. Upgrading the kernel without properly upgrading modules at the same time is a sure way to create an unstable Linux system.
I've already done kernel updates on this new distro and it's gone flawlessly every time. I'm a happy guy.
As for RAID (tangent), I quit using the hardware RAID in my server. I was told that if a drive failed, hardware RAID meant I had to move that drive to a machine with the exact same RAID controller. You can't just pull a drive from a hardware RAID controller, plug it into another, identical machine without the RAID controller, and have it work. Which coincidentally is exactly the setup I've got - a spare/cold server that's the same except no RAID card. I've gone to Linux software RAID, which I'm led to believe does allow me to pull a drive, insert it in a new machine, and have it boot.
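For reference, the Linux software RAID in question is md/mdadm; a sketch of the relevant bits (device names are placeholders, and this is general mdadm usage rather than my exact setup):

```shell
# Create a two-disk RAID1 mirror out of two partitions:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# Later, on a different machine with no special controller, a surviving
# member can be inspected and the array started degraded:
mdadm --examine /dev/sdb2
mdadm --assemble /dev/md0 /dev/sdb2 --run
```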
I then moved all the IP's from A to B, then restarted the network on both machines. Should've worked flawlessly. But it didn't. When I queried the IP's that should now be on machine B, I got a 'network address unreachable'... from machine A!
Turns out that my host's routing equipment ties IP's to MAC addresses - the physical machine - and that's only refreshed every three hours. So I scrambled and put the IP's back on machine A, restarted, and I'm still live :). This type of screw-up is exactly why I like this idea of two live servers that I can transfer over to. I figured this problem out Saturday afternoon, I just moved the IP's back to machine A and continued on about my weekend, no stress.
The solution to this is that I need to move the IP's from machine A to machine B, then reboot both machines (not just restart the networks, apparently). That's a bit trickier; I never like a remote reboot.
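One hedged aside on the stale-mapping problem: with the iputils arping tool, a gratuitous ARP announcement from the machine that just took over an IP is a common way to nudge upstream gear into relearning the IP-to-MAC binding, which might spare the reboot. The address and interface below are placeholders, and whether a particular host's equipment honors gratuitous ARP is another question:

```shell
# Broadcast an unsolicited ("gratuitous") ARP for the moved address
arping -U -c 3 -I eth0 203.0.113.10
```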
Once I've reinstalled linux, I can pretty much just repeat the process again to move everything back - except the second time should be faster as I've already got all the config files and stuff tested and working on the new version of the OS.
End result: move from the old linux distro to a new one, but on backup hardware; reformat and reinstall on the main server; then move back to the original machine. All remotely, with minimal downtime.
Once done, I can drive back to the datacenter and pick up my spare server.