|mount: RPC: Timed out|
Configuring NFS for backup
I have recently been working on a new backup system for a server farm I manage. Most servers will back up to one central machine (from which tape backups are made).
On my backup server I have configured NFS, and I've modified
iptables to only accept incoming connections from the local network.
However, on the client machine I keep getting mount: RPC: Timed out. If I flush iptables on the client, it works fine, so I assume something is amiss with the firewall settings on the client.
Any advice as to what the settings should look like would be welcomed. Here's what I currently have:
-A OUTPUT -p tcp -m tcp -d ***.***.***.*** --dport 111 --syn -j ACCEPT
-A OUTPUT -p tcp -m tcp -d ***.***.***.*** --dport 2049 --syn -j ACCEPT
-A OUTPUT -p tcp -m tcp -d ***.***.***.*** --dport 2219 --syn -j ACCEPT
-A OUTPUT -p udp -m udp -d ***.***.***.*** --dport 111 -j ACCEPT
-A OUTPUT -p udp -m udp -d ***.***.***.*** --dport 2049 -j ACCEPT
-A OUTPUT -p udp -m udp -d ***.***.***.*** --dport 2219 -j ACCEPT
NFS uses several daemons and several ports. You've got two of them (portmap/sunrpc and nfsd), but the others (mountd, lockd, statd, and possibly rquotad) use random ports by default. They can all be forced to use specific ports, though; check the configuration for each of those daemons.
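To give a rough idea of what pinning the ports looks like: this is a sketch, not a recipe. The port numbers here are arbitrary picks, the flag names vary between nfs-utils versions, and the config file locations vary by distribution, so check your man pages.

```shell
# See which random ports the daemons grabbed this boot:
rpcinfo -p

# The in-kernel lock manager is pinned via module options, e.g. in
# /etc/modprobe.conf or /etc/modprobe.d/ (exact file varies by distro):
#   options lockd nlm_udpport=32768 nlm_tcpport=32768

# The userland daemons take port flags (check your versions' man pages):
rpc.mountd --port 32767
rpc.statd --port 32765 --outgoing-port 32766
rpc.rquotad -p 32769

# Once the ports are fixed, add iptables ACCEPT rules for them on the
# client, the same way you already did for 111 and 2049.
```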
Are you open to other suggestions w/r/t the process of getting backup files from the client to the server where they're dumped to tape? I use rsync over ssh -- only needs one port opened (and it's always open anyway), requires much less network and disk activity (backups are always incremental), and is easy to script (and can be configured to not require a password without sacrificing security). NFS requires 5-6 extra daemons running as root, several with long histories of security problems, some of which are design flaws. It's also slow and inefficient.
|open to other suggestions |
Of course... Tell me :)
Here's what I do:
Disk space is relatively inexpensive, so I err on the side of backing up more data than absolutely necessary. I used to be more selective, but it isn't worth it any more. I don't leave extra junk around on production servers, and I don't back up non-production servers (except code repositories on development machines).
On the server, I have directories for each system that will be backed up (e.g.: /backup/server1, /backup/server2, etc)
I like the idea of backups initiated from the central server, rather than from the client. I run a script on the server that goes to each "client" (note label flipflop) on the list and rsync's the whole filesystem. If you wanted to initiate from the client, you could do that too. Rsync can work either way.
rsync --archive --delete --verbose \
    --exclude=/var/log --exclude=/proc --exclude=/dev \
    --exclude=/tmp --exclude=/var/tmp \
    -e ssh root@client1:/ /backup/client1/
(client1 here stands in for whichever machine is being backed up.)
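The per-host runs can be wrapped in the loop described above. A dry-run sketch, with placeholder host names and the /backup layout mentioned earlier (drop the echo to actually run the transfers):

```shell
#!/bin/sh
# Dry-run sketch of a pull-style backup loop run from the backup server.
# "server1 server2" and the /backup paths are placeholders for your own
# hosts and layout.  The "echo" prints the commands instead of running
# them; remove it once the paths look right.
run_backups() {
    for host in server1 server2; do
        echo rsync --archive --delete --verbose \
            --exclude=/var/log --exclude=/proc --exclude=/dev \
            --exclude=/tmp --exclude=/var/tmp \
            -e ssh "root@${host}:/" "/backup/${host}/"
    done
}

run_backups
```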
Rsync is a brilliant piece of work -- it only sends the portions of files that have changed, so if there's nothing new, nothing gets sent (except the file checksums to determine the changes or lack thereof).
Rsync sends all of the data over ssh, so it's encrypted (this might not be important in your environment), and it doesn't require any ports opened beyond ssh's own (22/tcp). You can also get it to run without prompting for a password (useful for backups!), and with a little more work you can make that password-less access safe, so that if someone gets control of your backup server, they can't cause any damage on the client.
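In rough outline, the password-less-but-safe setup looks something like this. It's a sketch: the exact "rsync --server" argument string depends on your rsync version and flags, so capture the real one (e.g. by logging $SSH_ORIGINAL_COMMAND) rather than copying the placeholder below.

```shell
# On the backup server: a dedicated key with no passphrase.
ssh-keygen -t rsa -N "" -f ~/.ssh/backup_key

# On each client, prefix that public key in /root/.ssh/authorized_keys
# with restrictions, so the key can only ever run rsync's server side
# and can't open shells, tunnels, or terminals:
#
#   command="rsync --server --sender ...",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA... backup
```

With that in place, a stolen backup-server key buys an attacker read access to the backups it already had, and nothing more.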
If anyone's interested in that last bit, I'll go into more detail.. But for manual backups, there's no need to go to that complexity.
[edited to fix typo in commandline. duh.]