Where is the proper backup system?

anay
Verified User · Joined Dec 7, 2005 · Messages: 114
Hi,
DA has pretty much everything a webhosting panel needs, but I am not able to locate a backup system that can transfer backups to Amazon S3 or even to some SSH storage. I assumed that, like other panels, there would be an SSH-based backup system where I can make daily backups and keep them for 10 days or so.

For example, in cPanel we have a feature to create backups as we wish and transfer them to another server over SSH automatically; it also takes care of removing old backups.

Please let me know how I can do this in DA, i.e. (a) use SSH to transfer the files, and (b) remove backups that are 10 days old or older.

I hope this feature exists in DA; otherwise it will be a great disappointment considering how mature DA is now.
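Until DA ships this natively, (a) and (b) can already be covered with a plain cron job on the server. Below is a minimal, hypothetical sketch; the backup host, SSH key path and directories are placeholders, not anything DA provides:

Bash:
#!/bin/bash
# Hypothetical sketch: push DA admin backups to an SSH host and prune old copies.
# backup.example.com, backupuser, /srv/da-backups and the key path are placeholders.
KEY="/root/.ssh/backup_key"
HOST="backupuser@backup.example.com"
REMOTE_DIR="/srv/da-backups/$(hostname -s)"
TODAY=$(date +%F)

# (a) transfer today's backups over SSH
ssh -i "$KEY" "$HOST" "mkdir -p $REMOTE_DIR/$TODAY"
rsync -a -e "ssh -i $KEY" /home/admin/admin_backups/ "$HOST:$REMOTE_DIR/$TODAY/"

# (b) remove remote backup directories older than 10 days
ssh -i "$KEY" "$HOST" "find $REMOTE_DIR -mindepth 1 -maxdepth 1 -type d -mtime +10 -exec rm -rf {} +"

Run it from root's crontab after DA's own backup cron has finished.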
 
They are working on it; I think I read somewhere that the ETA was summer.

You can, in the meantime, use 3rd parties like:
 
That's a huge disappointment. I never thought that in all these years they would not have worked on a backup system.
 
I think for Admin Backup level you can do incremental backups by changing this value. In the example below, it keeps incremental backups for up to 7 days. I think this is documented somewhere, but I forgot where.

[Attachment: incremental-backup.PNG]

For uploading to the cloud you can use rclone. You can create a script at /usr/local/directadmin/scripts/custom/user_backup_success.sh (for user-level backups) and /usr/local/directadmin/scripts/custom/all_backups_post.sh (for Admin/Reseller-level backups); DirectAdmin calls these scripts after a backup has been created successfully, and from there we can upload it to any cloud service. You need to configure a remote first using rclone config.
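Since the original question mentions Amazon S3, here is a minimal, hypothetical sketch of an rclone S3 remote (the remote name "s3backup", the bucket name and the credentials are placeholders; the same remote can also be created interactively with rclone config):

Bash:
#!/bin/bash
# Hypothetical sketch: define an rclone S3 remote, verify it, and do a test upload.
# "s3backup", "my-backup-bucket" and the credentials are placeholders.
mkdir -p /root/.config/rclone
cat >> /root/.config/rclone/rclone.conf <<'EOF'
[s3backup]
type = s3
provider = AWS
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region = us-east-1
EOF

rclone lsd s3backup:                                              # list buckets to verify the remote works
rclone copy /home/admin/backups s3backup:my-backup-bucket/user_backups -P   # example upload

With a remote configured, hook scripts like the following can push finished backups to it. Here is a sample of how they can look: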

User-level backup: check the backup and upload it to OneDrive:

Bash:
#!/bin/bash

# DirectAdmin user_backup_success.sh
# Author - Arafat Ali
# This script checks the backup file created by a DA backup run for corruption.
# It also encrypts the backup file and uploads only the encrypted copy to the cloud
# (user-level backups have no built-in encryption feature in DA).

if ! [ "$local_path" == "/backup/admin_backups" ]; then
    script_path=$(dirname $(realpath -s $0))
    script_name=$(basename -- "$0")
    MAIL_BIN="/usr/local/bin/mail"
    REPORT_DATE=`/usr/bin/date +%s`
    REPORT_PATH="$script_path/log"
    REPORT_FILE="$REPORT_PATH/$script_name-$REPORT_DATE.$RANDOM.log"
    MYEMAIL="[email protected]"
    BACKUP_DATE_TIME_NOW="$(date '+%d-%m-%Y_%H-%M-%S')" # e.g. 31-03-2020_11-56-16
    BACKUP_FOLDER_NAME_NOW="${username}_backup_$BACKUP_DATE_TIME_NOW" # e.g. admin_backup_31-03-2020_11-56-16
    backup_source="/home/$username/backups" #eg /home/admin/backups
    BACKUP_TAR_NAME="$backup_source/$BACKUP_FOLDER_NAME_NOW.tar.gz"
    BACKUP_TAR_NAME_ENCRYPTED="$backup_source/$BACKUP_FOLDER_NAME_NOW.tar.gz.enc"
    backup_destination="Backup/Server/server.domain.com/user_backups/$BACKUP_FOLDER_NAME_NOW" # Destination of onedrive to store backup
    WARNING_STATUS="OK"
    ENCRYPT_SCRIPT="/usr/local/directadmin/scripts/encrypt_file.sh"
    # ENCRYPT THIS FILE WITH GPG, then it's only available on memory.
    ENC_PASS="/root/.enc_password" # This file should be encrypted. use GPG
    ENC_PASS_DECRYPT=$(gpg --decrypt $ENC_PASS) # Decrypt and read only at memory level
    mv "$file" "$BACKUP_TAR_NAME"
    file="$BACKUP_TAR_NAME"
    mkdir -p $script_path/log
    find $REPORT_PATH -name "*.log" -mtime +2 -exec rm {} \;
    echo "Performing non-admin level backup ..." | tee -a $REPORT_FILE
    echo "Checking backup archive for corruption ... (this may take some time)" | tee -a $REPORT_FILE
    if gzip -t "$file" &>/dev/null; then
        echo "[$script_name | info]: OK, backup archive file of [$file] is valid" | tee -a $REPORT_FILE
    else
        # Don't upload if the file is corrupted; report and stop here
        echo "[$script_name | info]: Warning, backup archive file of [$file] is corrupted" | tee -a $REPORT_FILE
        rm -f "$file"
        echo "[$script_name | info]: Backup file [$file] has been removed" | tee -a $REPORT_FILE
        WARNING_STATUS="WARNING"
        echo "[$script_name | info]: Backup status: [$WARNING_STATUS]" | tee -a $REPORT_FILE
        $MAIL_BIN -s "[$script_name, $username | $WARNING_STATUS]: User-Level Backup Operation Report" $MYEMAIL < $REPORT_FILE
        exit 1 # nothing left to encrypt or upload
    fi
    #create_backup_dir
    echo "[$script_name | info]: Creating new backup directory in [onedrive] as [$backup_destination] ..." | tee -a $REPORT_FILE
    bash -o pipefail -c "rclone mkdir onedrive:$backup_destination | tee -a $REPORT_FILE"
    return_code=$?
    if [ $return_code  == 0 ]; then
        echo "[$script_name | info]: OK, new backup folder [$backup_destination] created at [onedrive]" | tee -a $REPORT_FILE
    else
        WARNING_STATUS="WARNING"
        echo "[$script_name | info]: [$return_code] Warning, something is wrong with the $return code while creating directory at [onedrive]" | tee -a $REPORT_FILE
    fi
    # encrypt backup file
    if [ -f $ENC_PASS ]; then # If encryption key exist we encrypt the backup tar.gz file to tar.gz.enc
        echo "[$script_name | info]: Encrypting backup file of [$file] as $BACKUP_TAR_NAME_ENCRYPTED ..." | tee -a $REPORT_FILE
        $ENCRYPT_SCRIPT "$BACKUP_TAR_NAME" "$BACKUP_TAR_NAME_ENCRYPTED" "$ENC_PASS_DECRYPT"
        file=$BACKUP_TAR_NAME_ENCRYPTED
    else
        echo "[$script_name | info]: Warning, encryption key is not setup at [$ENC_PASS]. File upload operations aborted for security reason" | tee -a $REPORT_FILE
        echo "[$script_name | info]: Backup status: [$WARNING_STATUS]" | tee -a $REPORT_FILE
        $MAIL_BIN -s "[$script_name, $username | $WARNING_STATUS]: User-Level Backup Operation Report" $MYEMAIL < $REPORT_FILE
        exit 1
    fi

    #upload_backup_file (use move to send encrypted file and leave the unencrypted)
    bash -o pipefail -c "rclone move $file onedrive:$backup_destination/ --log-file=$REPORT_FILE --log-level INFO --stats-one-line -P --stats 2s"
    return_code=$?
    if [ $return_code  == 0 ]; then
        echo "[$script_name | info]: Success, backup file of [$file] has been successfully uploaded into [onedrive]" | tee -a $REPORT_FILE
    elif [ $return_code  == 1 ]; then
        WARNING_STATUS="WARNING"
        echo "[$script_name | info]: Error, syntax or usage error while performing file upload [$file]" | tee -a $REPORT_FILE
    elif [ $return_code  == 2 ]; then
        WARNING_STATUS="WARNING"
        echo "[$script_name | info]: Error, error not otherwise categorised while performing file upload [$file]" | tee -a $REPORT_FILE
    elif [ $return_code  == 3 ]; then
        WARNING_STATUS="WARNING"
        echo "[$script_name | info]: Error, directory not found while performing file upload [$file]" | tee -a $REPORT_FILE
    elif [ $return_code  == 4 ]; then
        WARNING_STATUS="WARNING"
        echo "[$script_name | info]: Error, file not found while performing file upload [$file]" | tee -a $REPORT_FILE
    elif [ $return_code  == 5 ]; then
        WARNING_STATUS="WARNING"
        echo "[$script_name | info]: Error, temporary error (one that more retires might fix) (Retry errors) while performing file upload [$file]" | tee -a $REPORT_FILE
    elif [ $return_code  == 6 ]; then
        WARNING_STATUS="WARNING"
        echo "[$script_name | info]: Error, less serious errors (like 461 erros from dropbox) (NoRetry errors) while performing file upload [$file]" | tee -a $REPORT_FILE
    elif [ $return_code  == 7 ]; then
        WARNING_STATUS="WARNING"
        echo "[$script_name | info]: Error, fatal error (one that more retries won't fix, like account suspended) (Fatal errors) while performing file upload [$file]" | tee -a $REPORT_FILE
    elif [ $return_code  == 8 ]; then
        WARNING_STATUS="WARNING"
        echo "[$script_name | info]: Error, transfer exceeded - limit set by --max-transfer reached while performing file upload [$file]" | tee -a $REPORT_FILE
    elif [ $return_code  == 9 ]; then
        WARNING_STATUS="WARNING"
        echo "[$script_name | info]: Error, operation successful, but no files transferred while performing file upload [$file]" | tee -a $REPORT_FILE
    else
        WARNING_STATUS="WARNING"
        echo "[$script_name | info]: Error, unknown error while performing file upload [$file]" | tee -a $REPORT_FILE
    fi
    echo "[$script_name | info]: Backup status: [$WARNING_STATUS]" | tee -a $REPORT_FILE
    #echo "[$script_name | info]: Email notification is set to [$MYEMAIL]" | tee -a $REPORT_FILE
    echo "===================================================================================="
    $MAIL_BIN -s "[$script_name, $username | $WARNING_STATUS]: User-Level Backup Operation Report" $MYEMAIL < $REPORT_FILE

fi


Admin/Reseller-level backup: check the backups and upload them to OneDrive:

Bash:
#!/bin/bash

# DirectAdmin all_backups_post.sh
# Upload backups created by user, admin and reseller to OneDrive
# Author - Arafat Ali
script_path=$(dirname $(realpath -s $0))
script_name=$(basename -- "$0")
MAIL_BIN="/usr/local/bin/mail"
REPORT_DATE=`/usr/bin/date +%s`
REPORT_PATH="$script_path/log"
REPORT_FILE="$REPORT_PATH/$script_name-$REPORT_DATE.$RANDOM.log"
MYEMAIL="[email protected]"
BACKUP_DATE_TIME_NOW="$(date '+%d-%m-%Y_%H-%M-%S')" # e.g. 31-03-2020_11-56-16
BACKUP_FOLDER_NAME_NOW="admin_backups_$BACKUP_DATE_TIME_NOW" # e.g. admin_backups_31-03-2020_11-56-16
backup_destination="Backup/Server/server.domain.com/admin_backups/$BACKUP_FOLDER_NAME_NOW"
WARNING_STATUS="OK"
MARK_UPLOAD="backup_files_uploaded_to_onedrive.txt"
ENC_PASS="/root/.enc_password" # This should be encrypted. Can use GPG
ENC_PASS_DECRYPT=$(gpg --decrypt $ENC_PASS)
#/usr/local/directadmin/scripts/encrypt_file.sh
ENCRYPT_SCRIPT="/usr/local/maxiscript/utils/encrypt_file.sh" # DA ENCRYPTION DEPRECATED use this instead
return_code=0
backup_error=$error_result
mkdir -p $script_path/log

if [ -z "$backup_error" ]; then
     echo  "[$script_name | info]: Ok, no script error reported from the backup" | tee -a $REPORT_FILE
else
    WARNING_STATUS="WARNING"
    echo  "[$script_name | info]: Warning, there was an error reported from the backup script" | tee -a $REPORT_FILE
    echo  "[$script_name | info]: [$script_name] is terminated. Please inspect the backup files" | tee -a $REPORT_FILE
    $MAIL_BIN -s "[$script_name | $WARNING_STATUS]: Admin-Level Backup Operation Report" $MYEMAIL < $REPORT_FILE
    exit 1
fi
currentPWD=$PWD # Save the current directory
cd $local_path # Directadmin defined backup location eg: local_path=/backup/admin_backups/Tuesday

find $REPORT_PATH -name "*.log" -mtime +2 -exec rm {} \;

function check_valid_tar {

    find_file=(`find $local_path -maxdepth 1 -name "*.tar.gz"`)
    if [ ${#find_file[@]} -gt 0 ]; then
        if [ ${#find_file[@]} == 1 ]; then
            echo  "[$script_name | info]: There is ${#find_file[@]} backup file in [$local_path]" | tee -a $REPORT_FILE
        else
            echo  "[$script_name | info]: There are ${#find_file[@]} backup files in [$local_path]" | tee -a $REPORT_FILE
        fi
        for f in *.tar.gz
        do
            if [ "$f" != "*.tar.gz" ]; then
                echo "[$script_name | info]: Checking backup file of [$f] for corruption ... (this may take some time):" | tee -a $REPORT_FILE
                if gzip -t "$f" &>/dev/null; then
                    echo "[$script_name | info]: OK, backup archive file of [$f] is valid" | tee -a $REPORT_FILE
                    echo "-----" | tee -a $REPORT_FILE
                    # encrypt backup file
                    if [ -f $ENC_PASS ]; then # If encryption key exist we encrypt the backup tar.gz file to tar.gz.enc using DA openssl encryption
                        echo "[$script_name | info]: Encrypting backup file of [$f] as ${f}.enc ..." | tee -a $REPORT_FILE
                        $ENCRYPT_SCRIPT "$f" "${f}.enc" "$ENC_PASS_DECRYPT"
                        return_code=$?
                        if [ "$return_code" == 0 ]; then
                            echo  "[$script_name | info]: Successfully encrypted backup file of [$f] as ${f}.enc" | tee -a $REPORT_FILE
                            rm -f $f #Remove the unencrypted backup
                            echo  "[$script_name | info]: Unencrypted backup file of [$f] was deleted" | tee -a $REPORT_FILE
                        else
                            WARNING_STATUS="WARNING"
                            echo  "[$script_name | info]: Warning, unable to encrypt backup file of [$f]. OpenSSL error code is: $return_code" | tee -a $REPORT_FILE
                        fi
                    else
                        WARNING_STATUS="WARNING"
                        echo "[$script_name | info]: Warning, unable to read encryption key at [$ENC_PASS]. Please setup encryption password at [$ENC_PASS]" | tee -a $REPORT_FILE
                        echo "[$script_name | info]: Warning, backup files are not encrypted because unable to read encryption key at [$ENC_PASS]" | tee -a $REPORT_FILE
                        echo "[$script_name | info]: Warning, unencrypted backup files will stay in $local_path but it will not be uploaded into [onedrive] due to security reason" | tee -a $REPORT_FILE
                        #exit 1
                    fi
                else
                    WARNING_STATUS="WARNING"
                    echo "[$script_name | info]: Warning, backup archive file of [$f] is corrupted" | tee -a $REPORT_FILE
                    rm -f "$f"
                    echo "[$script_name | info]: Backup file [$f] has been removed" | tee -a $REPORT_FILE
                fi
            fi
        done
    else
        echo  "[$script_name | info]: OK, there is no backup files in [$local_path]" | tee -a $REPORT_FILE
    fi
    echo "[$script_name | info]: Finished checking valid tar backup files" | tee -a $REPORT_FILE
}


function create_backup_dir {
    echo "[$script_name | info]: Creating new backup directory in [onedrive] as [$backup_destination] ..." | tee -a $REPORT_FILE
    bash -o pipefail -c "rclone mkdir onedrive:$backup_destination | tee -a $REPORT_FILE"
    return_code=$?
    if [ $return_code  == 0 ]; then
        echo "[$script_name | info]: OK, new backup folder [$backup_destination] created at [onedrive]" | tee -a $REPORT_FILE
    else
        WARNING_STATUS="WARNING"
        echo "[$script_name | info]: [$return_code] Warning, something is wrong with the $return code while creating directory at [onedrive]" | tee -a $REPORT_FILE
    fi
}

function upload_all_backups {
    for f in *.tar.gz.enc #only upload encrypted enc file
    do
        if [ "$f" != "*.tar.gz.enc" ]; then
            echo "[$script_name | info]: Uploading the backup file [$f] into [onedrive] ... (this may take some time):" | tee -a $REPORT_FILE
            bash -o pipefail -c "rclone move $f onedrive:$backup_destination/ --log-file=$REPORT_FILE --log-level INFO --stats-one-line -P --stats 2s"
            return_code=$?
            if [ $return_code  == 0 ]; then
                echo "This backup folder is empty because it has been uploaded into onedrive" > $local_path/$MARK_UPLOAD
                echo "[$script_name | info]: Success, backup file of [$f] has been successfully uploaded into [onedrive]" | tee -a $REPORT_FILE
            elif [ $return_code  == 1 ]; then
                WARNING_STATUS="WARNING"
                echo "[$script_name | info]: Error, syntax or usage error while performing file upload [$f]" | tee -a $REPORT_FILE
            elif [ $return_code  == 2 ]; then
                WARNING_STATUS="WARNING"
                echo "[$script_name | info]: Error, error not otherwise categorised while performing file upload [$f]" | tee -a $REPORT_FILE
            elif [ $return_code  == 3 ]; then
                WARNING_STATUS="WARNING"
                echo "[$script_name | info]: Error, directory not found while performing file upload [$f]" | tee -a $REPORT_FILE
            elif [ $return_code  == 4 ]; then
                WARNING_STATUS="WARNING"
                echo "[$script_name | info]: Error, file not found while performing file upload [$f]" | tee -a $REPORT_FILE
            elif [ $return_code  == 5 ]; then
                WARNING_STATUS="WARNING"
                echo "[$script_name | info]: Error, temporary error (one that more retires might fix) (Retry errors) while performing file upload [$f]" | tee -a $REPORT_FILE
            elif [ $return_code  == 6 ]; then
                WARNING_STATUS="WARNING"
                echo "[$script_name | info]: Error, less serious errors (like 461 erros from dropbox) (NoRetry errors) while performing file upload [$f]" | tee -a $REPORT_FILE
            elif [ $return_code  == 7 ]; then
                WARNING_STATUS="WARNING"
                echo "[$script_name | info]: Error, fatal error (one that more retries won't fix, like account suspended) (Fatal errors) while performing file upload [$f]" | tee -a $REPORT_FILE
            elif [ $return_code  == 8 ]; then
                WARNING_STATUS="WARNING"
                echo "[$script_name | info]: Error, transfer exceeded - limit set by --max-transfer reached while performing file upload [$f]" | tee -a $REPORT_FILE
            elif [ $return_code  == 9 ]; then
                WARNING_STATUS="WARNING"
                echo "[$script_name | info]: Error, operation successful, but no files transferred while performing file upload [$f]" | tee -a $REPORT_FILE
            else
                WARNING_STATUS="WARNING"
                echo "[$script_name | info]: Error, unknown error while performing file upload [$f]" | tee -a $REPORT_FILE
            fi
        else
            WARNING_STATUS="WARNING"
            echo "[$script_name | info]: Error, no backup files found in [$local_path]" | tee -a $REPORT_FILE
        fi
    done
}

check_valid_tar
create_backup_dir
upload_all_backups

echo "[$script_name | info]: Backup status: [$WARNING_STATUS]" | tee -a $REPORT_FILE
echo "[$script_name | info]: Report file location: [$REPORT_FILE]" | tee -a $REPORT_FILE
echo "===================================================================================="

$MAIL_BIN -s "[$script_name | $WARNING_STATUS]: Admin-Level Backup Operation Report" $MYEMAIL < $REPORT_FILE

cd $currentPWD
 
Actually, you should never have a server push backups onto another server. You should have a backup server pull the backups from the webserver. This prevents access to the backup server in case the webserver gets hacked and someone f*cks around with your backups. But... either way, remote backups are always better than local backups.
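As a rough illustration of the pull approach, a hypothetical cron job run on the backup server (which holds the only SSH key) could look like this; the host name, key path and directories are placeholders:

Bash:
#!/bin/bash
# Hypothetical sketch: the backup server pulls DA backups from the webserver.
# webserver.example.com, the key path and /srv/da-backups are placeholders.
TODAY=$(date +%F)
mkdir -p "/srv/da-backups/webserver/$TODAY"

# Pull the finished backups; the webserver holds no credentials for this host.
rsync -a -e "ssh -i /root/.ssh/pull_key" \
    root@webserver.example.com:/backup/admin_backups/ \
    "/srv/da-backups/webserver/$TODAY/"

# Keep 10 days of pulled copies, as asked in the original question.
find /srv/da-backups/webserver/ -mindepth 1 -maxdepth 1 -type d -mtime +10 -exec rm -rf {} +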
 


You can encrypt the backup files (as seen in the script). So somebody who gets hold of a backup file needs a keyfile or passphrase to decrypt it before they can use it. I think this is the cheapest solution so far for using cloud backups with DirectAdmin.
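The encryption itself does not depend on DA. As a hypothetical stand-in for the encrypt_file.sh used above (which may work differently internally), a plain OpenSSL symmetric round-trip looks like this; the passphrase file and paths are placeholders:

Bash:
#!/bin/bash
# Hypothetical sketch: symmetric encryption of a backup archive with OpenSSL.
# Not necessarily what DA's encrypt_file.sh does; paths are placeholders.
PASSFILE="/root/.backup_passphrase"   # readable by root only

# Encrypt the tar.gz and remove the plaintext copy
openssl enc -aes-256-cbc -salt -pbkdf2 \
    -in /home/admin/backups/user.tar.gz \
    -out /home/admin/backups/user.tar.gz.enc \
    -pass "file:$PASSFILE" && rm -f /home/admin/backups/user.tar.gz

# Later, decrypt before restoring through DA
openssl enc -d -aes-256-cbc -pbkdf2 \
    -in user.tar.gz.enc -out user.tar.gz -pass "file:$PASSFILE"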
 

[Attachment: rclone.PNG]
sysdev means that if anybody hacks your DA server, he will get access to the backup server too, because the access credentials are stored there (if DA pushes the backups), so the hacker can do whatever he wants with the backups as well. That's why you must configure the backup server to pull backups from the DA server.
 
Wait, if you pull backups from another server, you'd still need credentials to pull the backups... So if the backup server were hacked (highly unlikely, but you never know), it would be the same thing...
 
It's not the same; in that case you must hack both servers. A backup server doesn't run many services that could have vulnerabilities; usually SSH with port-knocking and password authentication disabled is enough.
 
Well, a backup server should only be accessible by you or a few server managers, not by a lot of (unknown) users. E.g. our backup servers live on a 172.16.* private network only and are only routed from the inside, through a local gateway, to the public servers. So you can't even connect from the webserver -> backup server, only the other way around. Managers can only connect to the backup server using a VPN from specified addresses and require the correct client SSL certs.
 

Yes, I understood what he means. That is only the case if the hacker has gained root access (which comes down to how well the server is set up with proper security permissions). But let's say they have root access: they still need to know the passphrase for the backup file. A backup file can be encrypted with no keyfile stored on the server itself (an offline key file), so only you can open that backup file. If by backup server you mean a cloud backup service such as OneDrive, rclone supports config file encryption, which adds another layer of security for the access credentials to the cloud service: https://rclone.org/docs/#configuration-encryption. Even if they gained access to the cloud service like OneDrive, they couldn't do anything with the backup file because it is encrypted (offline key file). This is the best, cheapest cloud backup solution that works with DirectAdmin so far, so I don't have any problem using this.
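As a hedged illustration: once the rclone config is password-protected (the "Set configuration password" option in rclone config), unattended scripts can supply that password without storing it in plain text. The GPG file path and the remote/destination names below are placeholders:

Bash:
#!/bin/bash
# Hypothetical sketch: run rclone with an encrypted config in an unattended script.
# The GPG-encrypted password file and the remote/destination are placeholders.
rclone move /backup/admin_backups/user.tar.gz.enc onedrive:Backup/Server/ \
    --password-command "gpg --decrypt --quiet /root/.rclone_config_pass.gpg"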
 
Sure, please DO encrypt your backups. But also make sure you have a very good backup plan because encrypted backups can still be deleted and encryption serves no purpose if the webserver is hacked.
 

That's why file permissions in Linux are very important ^_^
 
I really agree that the backup server should pull, or the host should only push if the backup cannot be deleted or altered once it is there. Pull is the easiest to set up, which is why it's the most commonly preferred choice. Having a host server compromised is a real risk; it's accessible to the whole world, and 0-days and unpatched software are always a risk.
 

I agree that keeping backups locally isn't the best option, but the OP asked for a solution "to back up to the cloud". This is one way to do it, and an example that doesn't require spending extra money on a new server. It's also a common method I have seen discussed many times on this forum, and it seems to be the preferred choice for many. rclone also has nice documentation about Amazon S3: https://rclone.org/s3/

But to use this push method, there are actually many ways to harden the script so it doesn't expose plain-text configuration, such as using GPG encryption with gpg-agent to encrypt most of the code in the scripts (this is what I currently use). So even if the server is hacked, the hacker can't do much to mess with the backups (unless he has access to the physical hardware). The worst he could do is DELETE the ENCRYPTED backups, and only IF he has gained access to the PHYSICAL hardware to decrypt the GPG password (which is not easy) and access to the cloud backup service (which also has another layer of protection such as 2-step authentication). Encryption is a must if we choose this method.

Actually, I never store backup files on the local server. All the backup files go straight to the cloud service, with nothing left locally. So if I want to use a backup, I just copy it back from the cloud, decrypt it with the offline key, restore it using DA, and delete it.

But still, the most secure backups are pulled from another server that has less vulnerable software installed.
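For reference, the GPG-protected passphrase file used by the scripts above (/root/.enc_password) could be created like this; this is a hypothetical sketch with a placeholder passphrase, assuming symmetric GPG plus gpg-agent for caching:

Bash:
#!/bin/bash
# Hypothetical sketch: store the backup passphrase GPG-encrypted, decrypt only in memory.
# gpg prompts once for the protecting passphrase when the file is created.
echo -n 'MyStrongBackupPassphrase' | gpg --symmetric --cipher-algo AES256 -o /root/.enc_password

# In the hook scripts, the plaintext never touches disk:
ENC_PASS_DECRYPT=$(gpg --decrypt --quiet /root/.enc_password)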
 
DirectAdmin should not reinvent the wheel; it just needs to finish the borgbackup integration from last year's thread (maybe by adding a client restore option, and database backups as a separate base with a restore option). It would be superb, and borgbackup already has the tools to solve the problems discussed here.


@maxi32

If you use the incremental strategy with borg https://forum.directadmin.com/threads/cli-backup-strategy-for-incremental-backup.58157/

You can use borg serve to limit what a connected SSH client can do: https://borgbackup.readthedocs.io/en/stable/usage/serve.html

Allow an SSH keypair to only run borg, and only have access to /path/to/repo.

+ Append Mode https://borgbackup.readthedocs.io/en/stable/usage/notes.html#append-only-mode

A repository can be made “append-only”, which means that Borg will never overwrite or delete committed data (append-only refers to the segment files, but borg will also reject to delete the repository completely). This is useful for scenarios where a backup client machine backups remotely to a backup server using borg serve, since a hacked client machine cannot delete backups on the server permanently.


command="borg serve --append-only ..." ssh-rsa <key used for not-always-trustable backup clients>
command="borg serve ..." ssh-rsa <key used for backup management>
 

Thanks for this.

-----------

Also, I just want to share something I find useful: when backing up a user's sites, it's recommended to put all of that user's websites into maintenance mode automatically, and serve a maintenance page via .htaccess until the backup finishes. That way the backup data is less likely to be inconsistent than when you back up a live site with a lot of database transactions (even if you pull backups from another server, you still need to put the site into maintenance mode). When the backup is finished, we can switch the sites back to live mode. I wrote an example of an automatic script that works with DA using .htaccess; see the sketch below.

Main script
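The user_site_maintenance.sh itself isn't reproduced in the post; below is a minimal, hypothetical sketch of what such a script could look like. The DA domains.list path follows the standard layout, but the .htaccess rules, file names and maintenance page are assumptions for illustration, not the author's exact code:

Bash:
#!/bin/bash
# Hypothetical sketch of user_site_maintenance.sh <username> <mode>
# mode 0 = maintenance, mode 1 = live.
username="$1"
mode="$2"
domains_list="/usr/local/directadmin/data/users/$username/domains.list"

while read -r domain; do
    docroot="/home/$username/domains/$domain/public_html"
    [ -d "$docroot" ] || continue
    if [ "$mode" = "0" ]; then
        # Save the live .htaccess (if any) and serve a 503 maintenance page
        echo "<h1>Down for maintenance, back shortly.</h1>" > "$docroot/maintenance.html"
        [ -f "$docroot/.htaccess" ] && cp -a "$docroot/.htaccess" "$docroot/.htaccess.live"
        cat > "$docroot/.htaccess" <<'EOF'
ErrorDocument 503 /maintenance.html
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/maintenance\.html$
RewriteRule ^ - [R=503,L]
EOF
        chown "$username:$username" "$docroot/.htaccess" "$docroot/maintenance.html"
    else
        # Restore the live .htaccess and clean up the maintenance files
        if [ -f "$docroot/.htaccess.live" ]; then
            mv "$docroot/.htaccess.live" "$docroot/.htaccess"
        else
            rm -f "$docroot/.htaccess"
        fi
        rm -f "$docroot/maintenance.html"
    fi
done < "$domains_list"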

In /usr/local/directadmin/scripts/custom/user_backup_pre.sh your script would be:

Code:
#!/bin/bash
# DirectAdmin user_backup_pre.sh
MAINTENANCE_SCRIPT="/path_to_script/user_site_maintenance.sh" # the maintenance script above
# Maintenance on for each user; DA calls this hook once per user being backed up
echo "Turning on maintenance mode for user $username"
$MAINTENANCE_SCRIPT $username 0 # 0 means maintenance mode

In /usr/local/directadmin/scripts/custom/user_backup_post.sh:

Code:
#!/bin/bash
# DirectAdmin user_backup_post.sh
MAINTENANCE_SCRIPT="/path_to_script/user_site_maintenance.sh" # the maintenance script above
# Maintenance off for each user; DA calls this hook once per user being backed up
echo "Turning on live mode for user $username"
$MAINTENANCE_SCRIPT $username 1 # 1 means live mode

For example, say user0 has 8 websites, user1 has 3 websites, and user2 has 9 websites.

When you do an Admin/Reseller backup, DirectAdmin backs up each user in sequence by default. Starting with user0, this script will put their 8 websites into maintenance mode while the other users, such as user1 with its 3 websites, stay in live mode. When the backup for user0 is done, those 8 websites go back to live mode automatically and the process moves on to the next user. This keeps the maintenance downtime for each user short. You can also modify the script to put other services into maintenance mode.

This works for both user-level and admin-level backups, so you don't have to write extra code.

Not sure how useful this is on the user-level side, but it's very useful on the admin side. I'm trying to find a way to reduce the maintenance time even further, for example with site-by-site backups: back up a.com (now in maintenance mode), and when it finishes, put a.com back live and move on to the next site. Maybe someone has a better idea.
 