
preserving server logs

easy way?


smallcompany

10:34 pm on Jan 13, 2010 (gmt 0)




How do you ensure you don't lose your server logs just because you forgot to copy them over in time?

I know that all the serious people hanging around forums like this one already have something in place, and I wonder how they do it.

What's the easiest way to automate the copy procedure to your local machine, or even right on the server, so you can manually sort it out later?

Thanks

wilderness

12:03 am on Jan 14, 2010 (gmt 0)




The new economy hosts ($5 or less per month) don't offer the long-accepted protocols for raw logs. The cheapo host I'm using doesn't even offer raw logs via FTP, which they claim is not possible on their server.

Many hosts have a cron job in place which offers five-day rotating logs. Other hosts traditionally zipped the daily logs for an entire month's traffic.
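That rotation is usually just logrotate running from the host's cron behind the scenes. Something along these lines (the log path is hypothetical, your host's setup will differ) keeps five days of logs and gzips the older copies:

# Hypothetical /etc/logrotate.d entry -- keep 5 rotated copies,
# rotating daily and compressing everything but the newest one.
/var/log/httpd/access_log {
    daily
    rotate 5
    compress
    delaycompress
    missingok
    notifempty
}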

From your explanation, I'm assuming you don't see accumulation beyond a day?
If you want more extensive logs, try communicating with your host or find another host. Raw logs are one of my main priorities. Before I paid for a year, my current cheapo host claimed they provided rotating logs, and their one-day rotating log in no way resembles rotating logs.

smallcompany

6:36 pm on Jan 14, 2010 (gmt 0)




I'm actually referring to hosts where logs are kept; I just wonder about the easiest way of making sure you copy them regularly, whatever the rotation cycle is.

I have some where, by default, I still have logs from a year ago, while with others it's about a week or two. Both are VPSs.

I was curious about what people do in their specific cases.

Thanks

P.S.
I do use shared hosting too, but in my case everything goes into one log, which is really messy, and I'm not sure I could even read anything properly.
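If each line of that combined log starts with the vhost name, which I believe is how Apache's vhost_combined format works, I suppose something like this awk one-liner would pull out one site's entries, though I haven't tried it:

# Hypothetical: extract one site's lines from a combined vhost log,
# assuming the server name (possibly with :port) is the first field.
awk '$1 ~ /^www\.example\.com/' access_log > example.com.log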

lammert

7:02 pm on Jan 16, 2010 (gmt 0)




I have a daily cronjob running which performs a backup of all important files, including the log files, and pushes them to another machine on the local network via FTP. The script first dumps the contents of the MySQL database to an ASCII file, then makes compressed archive files of the files in some subdirectories and tests the archive integrity. The files are then pushed to a second server with an FTP call.

The file names are dynamically generated. Each day of the week a file X-myserver-var-log.tgz is created for the log files, where X runs from 1 to 6. On Sundays the name of the backup file is changed to YYYY-MM-DD-myserver-var-log.tgz, with YYYY-MM-DD the current date.

In this way I have full daily backups of the last seven days, and full weekly backups for older data.

Because this task is run from cron, an email is generated with all output of the commands. So each morning the system sends me an email with the results of the archiving, the result of the FTP push, and a list of the current files on the backup server.
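For reference, the cron entry itself is nothing special; the time and the mail address here are just examples:

# /etc/crontab entry -- run the backup script every night at 04:15
# and mail all output to the MAILTO address (example values).
MAILTO=backups@example.com
15 4 * * * root /usr/local/etc/allback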

File: /usr/local/etc/allback


#!/bin/sh

# Set some general options.

BACKSER='myserver'
BACKDAY=`/bin/date +%w`
BACKDIR=/tmp/backup-dir.$$
BASEOPT="--create --ignore-failed-read --totals --gzip"

if [ "$BACKDAY" = "0" ] ; then
    BACKDAY=`/bin/date +%Y-%m-%d`
fi

# The best way to backup the MySQL databases is to dump the
# contents to an ASCII dump file. We then know that the contents
# are consistent and it is also a sort of check that the database
# is still working fine.

echo ""
echo "Dumping MySQL databases to regular file"
/usr/bin/mysqldump --all-databases --quote-names --opt > /var/www/sqldumps/all_databases.sql

# We want to include the complete path (without the leading /)
# in the tar archives. We therefore have to make sure that we
# are located in the root directory before starting the tar calls.

/bin/mkdir $BACKDIR
cd /

# The tar archives are created. Gzip compression is used. We
# could use bzip2 compression but the results are only slightly
# better, and tests showed that the backup process may take five
# to ten times longer. During backup the server has a higher
# load than normal, which may cause delays in HTTP requests. We
# therefore like to keep the high-load period as short as possible.

echo ""
echo "Creating backup of /var/www"
BACKFILE=$BACKDIR/$BACKDAY-$BACKSER-var-www.tgz
/bin/tar $BASEOPT --file $BACKFILE var/www
/bin/gzip --test --verbose $BACKFILE

echo ""
echo "Creating backup of /var/log"
BACKFILE=$BACKDIR/$BACKDAY-$BACKSER-var-log.tgz
/bin/tar $BASEOPT --file $BACKFILE var/log
/bin/gzip --test --verbose $BACKFILE

# Now do the FTP action. The FTP session does a dir and quota
# listing which should be visible in the resulting email log.
# This makes it easy to check whether the backup succeeded.

cd $BACKDIR
/usr/bin/ftp -v -n -i ftpback < /usr/local/etc/allback.ftp
cd /

#
# We don't need our temporary backup directory anymore. We
# delete it to preserve diskspace.
#

/bin/rm -rf $BACKDIR

File: /usr/local/etc/allback.ftp


user username password

bin
mput *.tgz

dir

site quota

quit
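If your backup machine speaks SSH rather than FTP, the whole FTP part could be replaced by a single rsync call instead; a minimal sketch, assuming key-based login to a hypothetical host named backuphost is already set up:

# Hypothetical replacement for the FTP push in allback: copy the
# day's archives over SSH. $BACKDIR is defined earlier in the script.
/usr/bin/rsync -a $BACKDIR/*.tgz backup@backuphost:/backups/myserver/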

smallcompany

3:10 am on Jan 17, 2010 (gmt 0)




Now I really owe you a six pack lammert. ;)

Is that by any chance running on WestHost?

I'm just guessing since the folder structure looks familiar, although I suppose it's the same on any Apache VPS...

Thanks

lammert

3:58 am on Jan 18, 2010 (gmt 0)




No, it's not on WestHost. This folder structure is used on the Red Hat family of Linux distributions, like Red Hat, CentOS, Fedora and YellowDog.