Automatic databases backup

neojick · Verified User · Joined: Jan 11, 2010 · Messages: 11
Hello,
sorry for my English, I'm French.

I'm the administrator of a server running DirectAdmin, and I have configured an automatic nightly backup of all website data (files, databases, FTP accounts...).
That works fine.

I would like to configure another automatic backup (every hour) of just the databases.
Do you know if that is possible?


Thanks a lot.
 
Make a custom script to do it, or use the system backup feature with only the MySQL databases enabled.
 
What is wrong with just using a one-command cronjob?

Code:
mysqldump -uda_admin -pYourPassword --all-databases > mysqlbackup_`date +"%Y%m%d%H%M%S"`.sql
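For the hourly schedule asked about in the first post, a crontab entry along these lines would run that one-liner every hour (the /backups destination is an assumption, and note that % must be escaped in a crontab):

```
# run at minute 0 of every hour; adjust user, password and destination to taste
0 * * * * mysqldump -uda_admin -pYourPassword --all-databases > /backups/mysqlbackup_`date +\%Y\%m\%d\%H\%M\%S`.sql
```

Pair it with some cleanup (e.g. a find over /backups deleting old dumps), or hourly dumps will eventually fill the disk.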
 
Perhaps this helps:

On my servers I use one script for an additional backup of the databases. It keeps a retention of X days, gzips the files and clears out old files.

I also have one for webshops, so I make a backup every X hours as an extra service; that is the second script. Same functionality, but with a config file to specify which databases to back up and at what interval (useful if you do storage with snapshots... but this one can stop the slave and start it again).

The script also has a destination folder, and you can chown that folder to a user so you can scp/ftp the backups off remotely.

Furthermore, I would keep the scripts in /root and the database destination somewhere else, and have cron run the scripts.

Hope it helps someone.

PS: I know these can be one-liners, but I like this better :p
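The retention part of the full script below boils down to find's age test. A minimal self-contained sketch of just that piece (the directory and its contents are simulated here with a temp dir for illustration):

```shell
#!/bin/sh
# Minimal retention sketch: delete backup.* directories older than KEEPDAYS days.
KEEPDAYS=7
BACKUPPATH="$(mktemp -d)"          # stand-in for a real path like /backups/mysql

# Simulate one stale and one fresh backup folder.
mkdir "$BACKUPPATH/backup.old" "$BACKUPPATH/backup.new"
touch -d '10 days ago' "$BACKUPPATH/backup.old"

# -mtime +N matches items last modified more than N*24h ago;
# -maxdepth 1 limits the search to the top level, as in the full script.
find "$BACKUPPATH" -maxdepth 1 -type d -mtime +"$KEEPDAYS" \
    -name 'backup.*' -exec rm -r {} +
```

After this runs, only backup.new is left in the temp directory.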

I have updated the script. It deletes all backups except the last 7 days, and it is also designed to run on a MySQL SLAVE,
so you get correct backups without lock problems etc. It nicely stops and starts the slave.

Code:
#!/bin/sh
#MySQL backup script
### System Setup ###
KEEPDAYS=7
BACKUPPREFIX=backup
BACKUPPATH="/backups/mysql"
OWNER=sysadmin
### MySQL Setup ###
MUSER="root"
MPASS="#PASSWORD#"
MHOST="localhost"
MYSQL="$(/usr/bin/which mysql)"
MYSQLDUMP="$(/usr/bin/which mysqldump)"
GZIP="$(/usr/bin/which gzip)"
GREP="$(/usr/bin/which grep)"
CUT="$(/usr/bin/which cut)"
RM="$(/usr/bin/which rm)"
DATE="$(/usr/bin/which date)"
CHOWN="$(/usr/bin/which chown)"
FIND="$(/usr/bin/which find)"
MYSQLADMIN="$(/usr/bin/which mysqladmin)"
# sanity checks
regnumcheck='^[0-9]+$'
if  [[ ! $KEEPDAYS =~ $regnumcheck ]] ; then
        echo "please enter a valid number for KEEPDAYS (days of backups to keep)"
        exit 2
fi
if [[ ! -d $BACKUPPATH ]]; then
        echo "unable to find the folder $BACKUPPATH (BACKUPPATH)"
        exit 2
fi
 

echo starting cleanup
### Cleanup old backups ###
LSOUTPUT="$($FIND $BACKUPPATH -maxdepth 1 -type d -ctime +$KEEPDAYS -iname 'backup.*' -printf '%f\n' | sort)"
for results in $LSOUTPUT
do
if [[ ! "x$results" == "x" ]];
        then
        echo "$BACKUPPATH/$results"
        if [[ ! -d $BACKUPPATH/$results ]] ; then
                echo "error: the folder $BACKUPPATH/$results should exist"
                exit 2
        else
                echo "deleting $BACKUPPATH/$results, it is older than $KEEPDAYS days"
                $RM -r $BACKUPPATH/$results
                        if [[ -d $BACKUPPATH/$results ]] ; then
                                echo "error: unable to delete $BACKUPPATH/$results"
                                exit 2
                        fi
        fi
fi
done
echo starting backup
### Start Backup ###
BACKUP=$BACKUPPATH/$BACKUPPREFIX.$(date +"%Y%m%d.%H%M.%S")
[ ! -d $BACKUP ] && mkdir -p $BACKUP || :
  if [[ ! -d $BACKUP ]] ; then
                                echo "error: unable to create $BACKUP"
                                exit 2
  fi

$MYSQLADMIN -u $MUSER -h $MHOST -p$MPASS stop-slave
### Start MySQL Backup ###
# Get all databases name
DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
for db in $DBS
do
        # information_schema is a virtual schema and cannot be dumped normally
        [ "$db" = "information_schema" ] && continue
        echo "$db"
        FILE=$BACKUP/mysql-$db.$(date +"%Y%m%d.%H%M.%S").sql.gz
        $MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
done
$MYSQLADMIN -u $MUSER -h $MHOST -p$MPASS start-slave
$CHOWN -R $OWNER:$OWNER $BACKUP
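To run the script above on the hourly schedule the original poster wanted, a root crontab entry along these lines would do it (the script name and log path are assumptions, following the advice to keep the scripts in /root):

```
# crontab -e (as root): run at minute 5 of every hour, logging output to a file
5 * * * * /root/mysqlbackup.sh >> /var/log/mysqlbackup.log 2>&1
```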
 
Seems weird. Why not just run master-slave replication or something?

You should also be locking tables when you do backups. If not, you're going to get corrupted tables.
 
Seems weird. Why not just run master-slave replication or something?

You should also be locking tables when you do backups. If not, you're going to get corrupted tables.

True, but it depends. For more serious work, of course, master-slave is better... or even better, have a highly available NetApp/SAN cluster behind your servers/clusters or ESX farms.

But this is meant purely for low-load setups, of course, and then you at least have A backup in between (something I unfortunately see many people not doing, so this is at least something). And the script has retention, so no worries about running out of space on the local server... agree? :)

PS: as for locking tables, it could be added if one wants... I guess lock before the dump and unlock after.
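Rather than wrapping the dump in manual LOCK/UNLOCK statements, mysqldump has built-in options for consistent dumps; a sketch with placeholder credentials and a hypothetical database name:

```
# InnoDB tables: a consistent snapshot without blocking writers
mysqldump -u root -p'#PASSWORD#' --single-transaction somedb | gzip -9 > somedb.sql.gz

# MyISAM tables: --lock-tables (on by default) locks each database's
# tables for the duration of its dump
mysqldump -u root -p'#PASSWORD#' --lock-tables somedb | gzip -9 > somedb.sql.gz
```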
 