[How-To] Add DirectAdmin current admin backup/restore cloud support

too bad, there is no all_restore_pre.sh

Thought the same, yet you need to mount the remote partition before you open the restore page so DirectAdmin can see the content and show the backup files for you to select.

A possible workaround would be to create a service and keep the partition mounted. You could even mount it read-only to avoid possible data corruption when syncing to the mounted partition. I also highly recommend avoiding writes through the FUSE mount, as it slows down the whole system and you won't be able to see the exit code to confirm the copy was successful, other than by reading the debug logs.
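A rough sketch of such a mount (the remote name and mount point here are placeholders, adjust to your setup):

# keep the cloud remote mounted read-only so the restore page can list the backup files
/usr/bin/rclone mount MyRemote:admin_backups /home/admin/admin_backups --read-only --daemon
# add --allow-other (and enable user_allow_other in /etc/fuse.conf) if DirectAdmin runs as a different user than rclone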

BTW rclone is available in epel
yum install rclone
Yes, yet it's usually an old version, and since cloud storage APIs still change way too often, I recommend using the installation script instead. The current EPEL CentOS 7 rclone version is 1.47.0 while the latest rclone version is 1.53.2, and the changelog between those two is huge: https://rclone.org/changelog/
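For reference, the install script from the rclone docs is a one-liner:

curl https://rclone.org/install.sh | sudo bash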
 
Sure, that's the default location. I'd rather have it outside the admin home folder due to old habits.
So you change the path in Admin Backup/Transfer when you schedule the backup to run?
 
So you change the path in Admin Backup/Transfer when you schedule the backup to run?
Correct. Just select a local path where all your backups can fit with no problem. As I explained above, I have two jobs: one for all accounts once per week and another for daily backups of accounts that pay extra. So I use a different folder for each and a little if statement to see where it should copy them remotely based on which cron was completed, as sketched below.
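A rough sketch of that if statement (the folder names, remote names and log paths are placeholders, not my exact setup), using the $local_path variable DirectAdmin passes to the all_backup_pre/post scripts:

#!/bin/bash
# all_backup_post.sh sketch: pick the remote destination based on which backup job just finished
if [ "$local_path" = "/backup/weekly" ]; then
    /usr/bin/rclone copy "$local_path" WeeklyRemote:weekly/ --log-level INFO --log-file /home/backup_logs/weekly.log
elif [ "$local_path" = "/backup/daily" ]; then
    /usr/bin/rclone copy "$local_path" DailyRemote:daily/ --log-level INFO --log-file /home/backup_logs/daily.log
fi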
 
I like this feature. It keeps asking me for a password to run every command. Did I set something up wrong?

I am using AWS, not OneDrive, so maybe it's different.
Be sure to set the environment variable: RCLONE_CONFIG_PASS

Otherwise, rclone needs to ask for the password to decrypt the config file before it can continue.
 
/usr/bin/rclone copy $LOCAL JCAFTP:$DOW/ --ftp-concurrency 2 --log-level INFO --log-file /home/backup_logs/jcaftp.log

/usr/bin/rclone move $LOCAL Crypt:"$(date +"%Y%m%d")"/ --fast-list --drive-stop-on-upload-limit --drive-chunk-size 128M --delete-empty-src-dirs --log-level INFO --log-file /home/backup_logs/crypt.log
Also, is one an unencrypted FTP copy,
and the other moving the local dir to an encrypted OneDrive remote?

So you're sending the same file to 2 places?

RCLONE_CONFIG_PASS
so like
export RCLONE_CONFIG_PASS

Sorry for all of the questions. Just having a lot of fun learning it.
 
Also, is one an unencrypted FTP copy,
and the other moving the local dir to an encrypted OneDrive remote?

So you're sending the same file to 2 places?
Yes, I send it to two different destinations. The first is sent over a private network to another of my servers using regular non-encrypted FTP (it's a private network, so I don't care); you could use FTP over TLS with the --ftp-explicit-tls flag. The second is the move to the encrypted Crypt: remote in the second command.

so like
export RCLONE_CONFIG_PASS
To be exact, it would be export RCLONE_CONFIG_PASS=password
You could also use --password-command and specify the command directly. More details and ideas on how to work with this are here: https://rclone.org/docs/#configuration-encryption
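For example, something like this (the password file path is just an illustration):

/usr/bin/rclone copy $LOCAL Crypt:backups/ --password-command "cat /root/.rclone_pass" --log-level INFO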

Sorry for all of the questions. Just having a lot of fun learning it.
Not a problem! Happy to help out. It's taken me some time to learn everything; thankfully the rclone documentation is amazing, so I've been able to figure things out as I stumble upon them.
 
Let's cover the basics: what rclone version are you running? Run
rclone version
and paste the result.
rclone v1.51.0
- os/arch: linux/amd64
- go version: go1.12.12

Also, can you share the exact command you used for the transfer?
Same as your code:
/usr/bin/rclone copy /home/admin/user_backups onedrive:backup-folder/ --ftp-concurrency 2 --log-level INFO --log-file /home/admin/user_backups/onedrive_backups.log

Are you using your own client id for OneDrive? Check https://rclone.org/onedrive/#getting-your-own-client-id-and-key
I think this could be the issue. I did not use my own client ID, but I did the URL verification with the token result.
I will look into getting a client ID + key and try the rclone move command again.
 
rclone v1.51.0
- os/arch: linux/amd64
- go version: go1.12.12
Be sure to upgrade to the latest version. rclone 1.53.2 has a lot of OneDrive fixes since 1.51, including an issue with crypt and OneDrive chunk uploads which seems related.

/usr/bin/rclone copy /home/admin/user_backups onedrive:backup-folder/ --ftp-concurrency 2 --log-level INFO --log-file /home/admin/user_backups/onedrive_backups.log

If you are backing up to OneDrive, --ftp-concurrency 2 does nothing as it's for the FTP backend. Check: https://rclone.org/onedrive/

I would recommend updating to the latest version and creating your own client ID. If transfer speeds are slow, check the following flag and increase the limit based on the available RAM on your machine. I found my sweet spot at 160M (since it's cross-compatible with GDrive and OneDrive): https://rclone.org/onedrive/#onedrive-chunk-size
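For example, building on your command (160M is just the value that worked for me, tune it to your available RAM):

/usr/bin/rclone copy /home/admin/user_backups onedrive:backup-folder/ --onedrive-chunk-size 160M --log-level INFO --log-file /home/admin/user_backups/onedrive_backups.log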

I would also recommend the --onedrive-no-versions flag if you are using Office 365 enterprise (SharePoint), but do not use it with consumer OneDrive as it's not compatible: https://rclone.org/onedrive/#onedrive-no-versions. Without it, if you upload files with the same name and path, OneDrive will create a new version, still keeping your old file and taking up space in your quota. For personal accounts, you would need to first delete the files in OneDrive before uploading the new ones (same path/filename) to avoid filling up your quota and running into problems down the road.
 
I am able to use AWS with no issues. I am using sync instead of copy or move. Works great.
 
I am able to use AWS with no issues. I am using sync instead of copy or move. Works great.
Great, glad to hear it! The problem with sync is that if your local backup becomes corrupt for any reason (a dying disk, for example), it will sync the corrupted copy over to AWS and overwrite the old one. That's why for backups I feel more comfortable with move/copy. Also, if a file is accidentally deleted locally, it will also be deleted remotely. If you plan on using sync, I recommend using --dry-run when adding new sources or destinations to confirm nothing will be deleted.
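For example (the path and remote name are placeholders):

/usr/bin/rclone sync /home/admin/admin_backups aws:backups/ --dry-run --log-level INFO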

I am trying to figure out sftp now..
SFTP is pretty straightforward in the limited testing I've done, though I end up falling back to rsync for those scenarios.
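If it helps, an SFTP remote can be created non-interactively with something like this (the host, user and key path are placeholders):

rclone config create myserver sftp host backup.example.com user backupuser key_file /root/.ssh/id_rsa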
 
move/copy.
So let's say on the remote side I use copy/move, isn't the corrupt backup still copied over?
Let's say I have bkup11-2-20, bkup11-3-20, 4, 5, and so on.
So if the latest backup of today, 11-4-20, is corrupt locally, then whether I sync, copy, or move, it's still corrupt?

How do I keep only the last week of backups on the remote side? rclone purge?
 
So let's say on the remote side I use copy/move, isn't the corrupt backup still copied over?
Let's say I have bkup11-2-20, bkup11-3-20, 4, 5, and so on.
So if the latest backup of today, 11-4-20, is corrupt locally, then whether I sync, copy, or move, it's still corrupt?

How do I keep only the last week of backups on the remote side? rclone purge?

Sync will check the hash (if compatible) or the mod time/size of the file; if it's different it will copy it, otherwise it will skip it. It will also make the destination look exactly like local, deleting any files the destination has that local does not. This could result in some data loss depending on how you handle your backups, especially if you keep multiple backups from different days.

Copy would copy any local file over to the remote but would not delete any files the remote has that are missing locally. Move would do the same, but delete each local file after it is successfully uploaded.

For example, let's say you have 3 backup files locally:
file1.tar.gz
file2.tar.gz
file3.tar.gz

and 4 files currently present in your cloud storage:
file1.tar.gz
file2.tar.gz
file3.tar.gz
file4.tar.gz

rclone sync would delete file4.tar.gz from the remote, where its absence locally could just be a temporary error that resulted in the file not being generated. Even worse, you would lose the backup you had of that user in case the data was corrupted or there was a hardware-related read error. Copy/move would not do that and would keep that file.

As I explained before, I'm using GDrive unlimited storage, so I keep all the historical backups there and manually purge them after some time. Since you are using AWS and paying for storage, I can see how you wouldn't want to pay for transfer and storage of the same file. Still, the copy/move commands will also check size/mod time and hash before copying, so unchanged files are skipped.
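If you do want to automate the remote cleanup (I don't, since I purge manually, so treat this as an untested sketch), rclone's filter flags can do it, for example deleting remote files older than 30 days:

/usr/bin/rclone delete aws:backups/ --min-age 30d --log-level INFO
Run it with --dry-run first to confirm what would be removed.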
 
rclone sync would delete file4.tar.gz from the remote, where its absence locally could just be a temporary error that resulted in the file not being generated
For me, that backup would be 4 days old and not really viable. I do an admin backup once a day, every day, mostly for DR purposes. It's not for clients, it's just for my peace of mind.

I can see it would be important if I was backing up, say, every few hours.
deleting any files the destination has that local does not.
In the case of the hash checking, I think I see your point.

I can see how you wouldn't want to pay for transfer and storage of the same file.
It's not so much the paying, it's the cleaning of the remote side. I don't want months and months of files; I want to automate their removal. A simple example would be:
I want to keep the last 7 days locally and the last 30 days remotely. I think I will have to come up with something more.

Mostly it is testing for me.
 
Well, you could append the day of week to your backup path locally.

Checking the all_backup_pre/post.sh documentation, you could use the $local_path variable and upload to a folder named after the $dayofmonth variable to accomplish this. You could use sync with this (to avoid keeping stale backups of users you delete) and use all_backup_pre.sh to purge the directory where you are going to store the backups locally for that day.

Let me know your thoughts.
 
Well, you could append the day of week to your backup path locally.
If you mean in DA, I use the full date option; it gives folders like 2020-11-04, so every day I get
2020-11-04 2020-11-03 2020-11-02 2020-11-01

Which is why I was using sync. After it copies to AWS and reports success, I run a cleanup on the DA side so as to have
2020-11-04 2020-11-03 2020-11-02

I think what you are suggesting is locally on DA
have
2020-11-04 2020-11-03 2020-11-02 2020-11-01

Remotely on AWS have
2020-11-04wed 2020-11-03tues 2020-11-02mon 2020-11-01sun

yes?
 
I think what you are suggesting is locally on DA
have
2020-11-04 2020-11-03 2020-11-02 2020-11-01

Remotely on AWS have
2020-11-04wed 2020-11-03tues 2020-11-02mon 2020-11-01sun

yes?
No, what I'm suggesting is you have locally:
Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday

And remotely you can have 01,02,03,....29,30,31

To do this, you would use the append day of week option in the DirectAdmin backup path, and your rclone command would look like:
rclone sync $local_path Remote:$dayofmonth
Those variables are sent over by DirectAdmin to the all_backup_pre/post scripts.

So it would be set and forget. If you want, you could have all_backup_pre.sh simply run
rm -rf $local_path
to delete the backups before the backup process starts and avoid keeping any stale user backups you no longer need. A minimal sketch of both scripts is below.
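Putting it together, a minimal sketch of the two custom scripts (the remote name and log path are placeholders; $local_path and $dayofmonth are the variables DirectAdmin passes in):

#!/bin/bash
# all_backup_pre.sh: wipe the local day-of-week folder before the new backup run starts
rm -rf "$local_path"

#!/bin/bash
# all_backup_post.sh: once the backup finishes, sync that folder to a day-of-month folder on the remote
/usr/bin/rclone sync "$local_path" Remote:"$dayofmonth"/ --log-level INFO --log-file /home/backup_logs/remote.log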
 