[How-To] Add DirectAdmin current admin backup/restore cloud support

I had to remove rclone because the latest version in YUM was not the same as the one on the rclone site.
So I followed the steps and then added a OneDrive remote with a client ID and secret. Everything OK.
The version is now v1.53.2.

But when I added a configuration password via rclone config, all_backups_post.sh stopped working because it kept waiting for my config password before it could run the rclone move command. The local backups were created nicely, but nothing was moved to OneDrive.

- So, is there a way to run rclone copy local/path remote/path inside all_backups_post.sh without this password prompt?

Thanks in advance!
 
Great! You can do your testing with an unencrypted rclone config and encrypt it once everything is running and stable.

To avoid typing the password you need to set the environment variable RCLONE_CONFIG_PASS. You could also use --password-command and supply the command directly. More details and ideas on how to implement this can be found here: https://rclone.org/docs/#configuration-encryption
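
For example, a minimal sketch of the top of all_backups_post.sh; the /root/.rclone_pass path is just an assumption, and the file should be root-only (chmod 600):

# Read the config password from a root-only file so rclone never prompts
export RCLONE_CONFIG_PASS="$(cat /root/.rclone_pass)"

# Or let rclone fetch it itself on each run:
# /usr/bin/rclone move src/ remote:dst/ --password-command "cat /root/.rclone_pass"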
 
Best I can tell, there is no way to avoid storing the config password in clear text somewhere, which is not safe.

What am I missing?
 
I think you are right; eventually the password has to be stored somewhere. The DirectAdmin MySQL password is also stored in clear text, and having the key inside the binary would be insecure as well, since it would be shared among installations. What the encryption buys you is that an attacker can't just copy the config file; they would need to browse through all your files to find the script holding the password.

In any case, you should always change all your passwords and revoke all permissions once you know you were the victim of an attack.
 
Thanks @jca, I will upgrade my rclone. I could not get my transfers (copy) to OneDrive working; I think something is wrong in my all_backups_post.sh file. I could try Backblaze, but I am afraid of the upload costs for an everyday backup job.
 
Thanks, I upgraded all of my boxes. I assume I need to change the passwords in my config?
There is a script to confirm whether you need to change your passwords. If so, you might need to change your crypt passwords, which will require you to re-upload the data using the new keys.

Thanks @jca, I will upgrade my rclone. I could not get my transfers (copy) to OneDrive working; I think something is wrong in my all_backups_post.sh file. I could try Backblaze, but I am afraid of the upload costs for an everyday backup job.
What error do you get? I back up some of my content to OneDrive, so hopefully I can help.

Please share the all_backups_post.sh file and the rclone log file for review.
 
Thanks @jca! The problem was a coding error of my own. Then I could not keep working on this, but I kept making backups from DA.
Last night I solved the issue and ran some tests moving backup files up to my OneDrive account, and it worked great.

These are the lines I will set in my all_backups_post.sh. What do you think?
# Local backups land in a dated folder under this path
LOCAL=/home/admin/user_backups
LANG=C
FOLDER=$(date +"%Y-%m-%d")
# Move today's folder to OneDrive, logging progress to a file
/usr/bin/rclone move "$LOCAL/$FOLDER/" "onedrive:cloud/$FOLDER/" --fast-list --onedrive-chunk-size 250M --delete-empty-src-dirs --log-level INFO --log-file /home/admin/onedrive_backups.log

I added the 250MB chunk-size flag to see if big files (around 10GB) transfer correctly. Tonight I will test this.
It worked well with 2GB files, but I do not know what is going to happen with larger files of about 15GB.
 
That looks good. OneDrive doesn't support the --fast-list flag, so you can take it out (ref: https://rclone.org/overview/#optional-features).

The chunk size basically helps with speed when you are working with big files, as there is inherent latency between each chunk being scheduled. 250M is the maximum size supported by OneDrive, so you should be fine; if you increase it you might run into problems. (ref: https://rclone.org/onedrive/#onedrive-chunk-size)

OneDrive supports files of up to 100GB so you should be fine with 15GB files.

I plan on updating my script to remove the LOCAL variable, since the path is supplied by DirectAdmin and using it makes the script more flexible: simply remove the variable and change $LOCAL to $local_path (ref: https://docs.directadmin.com/developer/hooks/backup-restore/)
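
As a rough sketch, assuming $local_path is passed to all_backups_post.sh as described in those docs:

LANG=C
FOLDER=$(date +"%Y-%m-%d")
# $local_path is supplied by DirectAdmin and points at the directory holding the new backups
/usr/bin/rclone move "$local_path/" "onedrive:cloud/$FOLDER/" --onedrive-chunk-size 250M --delete-empty-src-dirs --log-level INFO --log-file /home/admin/onedrive_backups.log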

Jose
 
Thanks Jose, I will remove this flag and try it tonight. I have about 70GB in total, across 60 encrypted files.
I will try to back up to OneDrive every day, if that is possible, and then do 2 backups per month to Backblaze.
That's a good idea. My big backup is around 160GB and takes less than 30 minutes to upload.
 
Oh, that is fast, @jca.
In my case I think it is going to take several hours to transfer 70GB. I am testing outbound transfers now and getting about 40 megabits/s.
Is it possible to tell rclone to transfer smaller files first?

Thanks in advance!
 
Not that I'm aware of; you would need to run two commands (one for files smaller than a given size and another for the bigger ones), but it's too much hassle.
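
If you wanted to try it anyway, a minimal two-pass sketch using rclone's --max-size filter (the 1G cutoff is an arbitrary choice, paths as in your script):

# Pass 1: only files up to 1G, so the small ones land first
/usr/bin/rclone move "$LOCAL/$FOLDER/" "onedrive:cloud/$FOLDER/" --max-size 1G
# Pass 2: no size filter, so it picks up whatever big files remain
/usr/bin/rclone move "$LOCAL/$FOLDER/" "onedrive:cloud/$FOLDER/"

Because move deletes source files as they complete, the second pass only sees what the first pass skipped.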

I also do not recommend increasing the number of transfers as Microsoft will throttle you quite fast.

40 Mbps looks kind of low; my server saturates 1 Gbps with no problem. Is your server IPv6 enabled? I had a lot of trouble with IPv6 that got fixed by binding to IPv4. You could also check what route you take to the Azure network; there might be a bottleneck there.
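
If IPv6 turns out to be enabled, one quick test: rclone's --bind flag sets the local address used for outgoing connections, and binding to 0.0.0.0 restricts it to IPv4 (paths here are placeholders):

/usr/bin/rclone move /home/admin/user_backups/ onedrive:cloud/ --bind 0.0.0.0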
 
Yes, I think 40 Mbps is low too. My server is a VPS with SSD and KVM virtualization, located in Chicago.
I doubt Azure is the problem. How can I see the transfer speed in real time, or get more details? I am new to rclone.
I recently tried to move files and could see the transfer speed, but after 2 hours nothing appeared in OneDrive.
Now I will try again.
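
(On the real-time question: rclone prints live transfer statistics when run with --progress, or -P for short, and --stats sets how often the summary is written, e.g., with placeholder paths:

/usr/bin/rclone move /home/admin/user_backups/ onedrive:cloud/ --progress --stats 30s)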
 
This is the rclone log shortly after starting:

2020/11/20 21:21:26 INFO :
Transferred: 121.140M / 64.850 GBytes, 0%, 2.021 MBytes/s, ETA 9h6m35s
Checks: 0 / 4, 0%
Transferred: 0 / 59, 0%
Elapsed time: 1m0.4s
Checking:

Transferring:
* user.admin.george.tar.gz.enc: 0% /5.711G, 567.610k/s, 2h54m50s
* user.admin.paul.tar.gz.enc: 7% /298.476M, 357.533k/s, 13m8s
* user.admin.ringo.tar.gz.enc: 0% /3.311G, 527.239k/s, 1h48m42s
* user.admin.john.tar.gz.enc: 1% /2.487G, 589.126k/s, 1h12m48s

Maybe OneDrive is throttling the speed down?

This is my network speed:

[image: download_speed.png]

From my NETDATA dashboard:

[image: network_vps.PNG]

The process is still running, but far too slow!
2020/11/20 21:46:26 INFO :
Transferred: 1000M / 64.850 GBytes, 2%, 656.437 kBytes/s, ETA 1d4h20m29s
Checks: 0 / 4, 0%
Transferred: 0 / 59, 0%
Elapsed time: 26m0.4s
Checking:

Transferring:
* user.admin.george.tar.gz.enc: 4% /5.711G, 0/s, 0s
* user.admin.paul.tar.gz.enc: 83% /298.476M, 0/s, 0s
* user.admin.ringo.tar.gz.enc: 7% /3.311G, 0/s, 0s
* user.admin.john.tar.gz.enc: 9% /2.487G, 0/s, 0s
 
Run ifconfig and check if you have IPv6 or just an IPv4.
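For example:

ifconfig | grep inet6   # any "inet6" lines mean an IPv6 address is configured
# or with the newer iproute2 tools:
ip -6 addr show
 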
I have only IPv4, but I can ask for an IPv6 address for free. I don't think this could be the reason OneDrive is throttling my transfers down to zero.

These are some lines from my log file, in INFO mode.
It started with:
2020/11/21 01:58:54 INFO :
Transferred: 520.282M / 64.986 GBytes, 1%, 4.339 MBytes/s, ETA 4h13m36s
Checks: 4 / 4, 100%
Transferred: 1 / 60, 2%
Elapsed time: 2m0.3s
Transferring:
* reseller.admin.user1.tar.gz.enc: 2% /5.720G, 1.205M/s, 1h19m12s
* reseller.admin.user2.tar.gz.enc: 44% /298.479M, 1.192M/s, 2m19s
* user.admin.user3.tar.gz.enc: 3% /3.316G, 1.199M/s, 45m21s
* user.admin.user4.tar.gz.enc: 1% /2.489G, 1.187M/s, 35m17s
But then after some time:
2020/11/21 02:32:54 INFO :
Transferred: 1.063G / 64.986 GBytes, 2%, 516.177 kBytes/s, ETA 1d12h4m13s
Checks: 4 / 4, 100%
Transferred: 1 / 60, 2%
Elapsed time: 36m0.3s
Transferring:
* reseller.admin.user1.tar.gz.enc: 4% /5.720G, 0/s, 0s
* reseller.admin.user2.tar.gz.enc: 83% /298.479M, 0/s, 0s
* user.admin.user3.tar.gz.enc: 7% /3.316G, 0/s, 0s
* user.admin.user4.tar.gz.enc: 9% /2.489G, 0/s, 0s
And the whole process finished with this (only the first 4 small files copied OK):
2020/11/21 02:50:28 Failed to copy: couldn't list files: Get "https://graph.microsoft.com/v1.0/drives/c1c51bb6bc6a961/items/C1C51BB5BC6A961!105/children?$top=1000": couldn't fetch token - maybe it has expired? - refresh with "rclone config reconnect onedrive:": oauth2: cannot fetch token: 400 Bad Request
Response: {"error":"unauthorized_client","error_description":"AADSTS750015: Application with identifier 'r941dXXd-2283-4775-af53-c439cdXXdac' was not found in the directory '9188040d-6c67-4c5b-b112-36a304bXXdad'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant.
Trace ID: 86afe882-4a3d-4fr3-be2e-ab0ded966c00
Correlation ID: 26rr3ba1-f46c-47a9-a502-2h351dt6yb7e
Timestamp: 2020-11-21 05:50:30Z","error_codes":[700016],"timestamp":"2020-11-21 05:50:30Z","trace_id":"86aht3d2-4a3d-4fa7-be2e-ab0dwr942c00","correlation_id":"26eeaba1-f46c-47a9-a588-5b351d6a5b7e","error_uri":"https://login.microsoftonline.com/error?code=700016"}

- I was trying to move the files with a bash script inside /usr/local/directadmin/scripts/custom, run from the terminal. The previous log is the result of that process.

- Now I am trying to paste the same code inside all_backups_post.sh to see the difference, with the rclone log in DEBUG mode.

- The last test, tomorrow, is to move all backups to Backblaze and compare speed and errors.
 
IPv4 is usually faster, as routing tables are better with more providers.

If you get good speeds at first and then they plummet, it could be your VPS provider throttling you. You should try running a speed test while you are being throttled to see if you get full speed there at the same time.
 
Well well well. OneDrive is not working properly when I try to send bigger files (from 5GB to 12GB).
There is something in the way it receives chunks that slows the speed down to almost zero.
I had to copy the files with size filters: first all files below 1GB, then between 1GB and 3GB, between 3GB and 5GB, 5GB and 8GB, and over 8GB.
This was the only way to copy the 64 .enc backup files. It took about 6 hours to perform this, checking every set of files after every move. Insane.

So I tried Backblaze and everything was better: about 150-200 Mb/s.
I had to adjust some flags because I got some timeouts due to the number of transfers, but it was always fast.

This was the line in my all_backups_post.sh:
/usr/bin/rclone copy folder_backups/folder/ cloud-backblaze:cloud-whn1/folder/ --b2-upload-cutoff=400M --b2-chunk-size 200M --transfers 2 --b2-disable-checksum --fast-list --log-level DEBUG --log-file /home/admin/bkp_backblaze.log
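
For reference, what those flags do, as I read the rclone B2 docs:

# --b2-upload-cutoff=400M   files above this size switch to chunked (large file) uploads
# --b2-chunk-size 200M      size of each upload chunk
# --b2-disable-checksum     skip SHA-1 checksums for large files (faster, less verification)
# --transfers 2             cap parallel transfers, which is what stopped the timeouts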

I will try again tonight after the backups run.
What I like about Backblaze is that I can set up lifecycle rules, so if I move the backup files over every day I can keep storage costs under control.
 