External backup location protocols

McBart

New member
Joined: Jan 7, 2016
Messages: 2
DirectAdmin only has FTP for external backup locations.

Look at the Installatron external backup location protocols: FTP (clear/TLS/SSL), SFTP, WebDAV and Dropbox.com.

DirectAdmin backups are more complete than Installatron backups: DA backs up all domain information, including mail, while Installatron only backs up the installed web app, such as WordPress.

Clear-text FTP is just not enough in 2016. I would love to have more external backup options in DirectAdmin, especially WebDAV.

Not to mention I have this cool 1GB cloud account that uses WebDAV, with always plenty of space! Wouldn't it be great to use that for my nightly DirectAdmin backups?

Thanks for creating DirectAdmin; I've been happily using it for many years now :-)
 
I would like to see Amazon AWS support as well. It offers affordable storage, versioning, lifecycle rules and fine-grained permissions, e.g. allowing only PUT operations with no delete or download, which, combined with object versioning, is great for backups.
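
To give an idea, here is a rough sketch of that setup with the AWS CLI; the bucket name, user name and policy name are just placeholders:

Code:
#!/bin/sh
# Sketch: enable versioning on the backup bucket and restrict an IAM user
# to uploads only (no delete, no download). Names are placeholders.

aws s3api put-bucket-versioning \
    --bucket my-backup-bucket \
    --versioning-configuration Status=Enabled

cat > /tmp/put-only.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-backup-bucket/*"
    }
  ]
}
EOF

aws iam put-user-policy \
    --user-name backup-writer \
    --policy-name backup-put-only \
    --policy-document file:///tmp/put-only.json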

There are a few options to implement this:

1. Create wrappers for the de facto tools/libraries. For example, I use a custom user_backup_post.sh script, which moves the backup to AWS when I specify the /home/admin/admin_backups/aws folder.

Code:
#!/bin/sh
# user_backup_post.sh hook: DirectAdmin sets $file to the path of the
# finished backup archive.

#############
# set the backup credentials
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_DEFAULT_REGION=

# where do you want to save the AWS copy?
SAVE_PATH=s3://my-bucket/${HOSTNAME}

# set this as needed
RESELLER=admin
#############

# first five path components, e.g. /home/admin/admin_backups/aws
BACKUP_PATH=`echo "$file" | cut -d/ -f1,2,3,4,5`
REQUIRED_PATH=/home/$RESELLER/admin_backups/aws

# only move backups that were written into the "aws" folder
if [ "$BACKUP_PATH" = "$REQUIRED_PATH" ]; then
      NEW_FILE=${SAVE_PATH}/`echo "$file" | cut -d/ -f6-`
      /usr/local/bin/aws s3 mv "$file" "$NEW_FILE"
fi
exit 0
This does the trick nicely for me, but it has no UI integration, etc. I tried changing the FTP script as well, but that was harder (because of some connection checks).

2. Use Flysystem: https://github.com/thephpleague/flysystem

This supports multiple adapters (Dropbox, (S)FTP, WebDAV, AWS, Azure, Copy.com, Rackspace, etc.) with only PHP dependencies (PHP 5.5+). It provides a common interface, with the ability to stream files for efficiency.
These adapters could be installed by default, or via some kind of plugin.
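
As a rough sketch of the dependency footprint (assuming Composer is available; the adapter package names are the ones published by The League at the time, so they may have changed since):

Code:
# Pull in the Flysystem core plus a few adapters via Composer
composer require league/flysystem \
    league/flysystem-aws-s3-v3 \
    league/flysystem-sftp \
    league/flysystem-webdav \
    league/flysystem-dropbox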

3. Make it extendable. I don't know if it's possible, but it would be nice to be able to create custom backup destinations and configure which fields have to be filled in. E.g. AWS would ask for key/secret, bucket and prefix; Dropbox would ask for an app token, secret, folder, etc.
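
To sketch what I mean (entirely hypothetical, nothing like this exists in DirectAdmin today): a custom destination could be little more than a hook script plus a per-destination config file holding the fields entered in the UI, e.g.:

Code:
#!/bin/sh
# Hypothetical custom-destination hook; the config path and field names
# are made up purely for illustration.
CONF=/usr/local/directadmin/custom_destinations/aws.conf

# aws.conf would contain the fields entered in the UI, e.g.:
#   AWS_ACCESS_KEY_ID=...
#   AWS_SECRET_ACCESS_KEY=...
#   BUCKET=my-bucket
#   PREFIX=nightly
. "$CONF"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY

# $file = path of the finished backup archive handed to the hook
BASENAME=`basename "$file"`
/usr/local/bin/aws s3 cp "$file" "s3://${BUCKET}/${PREFIX}/${BASENAME}"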
 
Just having various transfer methods doesn't really add value. If you have large amounts of data, you won't be able to back it up and transfer it within a reasonable time window (nightly). Hence, admins are using custom scripts and making incremental backups.
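
For example, a typical custom script makes hard-link based incremental snapshots with rsync, so a nightly run only transfers and stores what actually changed; a minimal sketch (paths are examples only):

Code:
#!/bin/sh
# Sketch: nightly incremental snapshot using rsync hard links. Unchanged
# files are hard-linked against yesterday's snapshot, so only changed data
# is copied. Paths are examples; adjust to your setup.
SRC=/home
DEST=/backup/snapshots
TODAY=`date +%F`
YESTERDAY=`date -d yesterday +%F`   # GNU date; adjust on BSD systems

# On the first run (no previous snapshot) this falls back to a full copy.
rsync -a --delete \
    --link-dest="$DEST/$YESTERDAY" \
    "$SRC/" "$DEST/$TODAY/"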
 
It isn't just about large amounts of data; it's about reliability and versioning. With AWS you get out-of-the-box versioning and access controls (e.g. only allow PUTs of new versions, no deletes or downloads). FTP access is generally not as 'safe': the credentials have to be stored somewhere, there is no versioning by default, no scaling when the storage fills up, etc.

For example, you can create daily DB backups to AWS with versioning, so you have a version for each day. You can configure lifecycle rules so that after 30 days old versions are moved to Glacier (cheaper) and after xx days they are deleted. With plain FTP you would still have to write your own logic to transfer from the FTP server to AWS, which is cumbersome.
And every weekend, create a full backup to AWS and store that for a different retention period, etc.
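
Roughly, such a lifecycle rule could be set up like this (the bucket name and the exact retention periods are just example values):

Code:
#!/bin/sh
# Sketch: keep every uploaded version, move old versions to Glacier after
# 30 days and delete them after 365 days. Numbers and names are examples.
cat > /tmp/lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "backup-retention",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "NoncurrentVersionTransitions": [
        {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
      ],
      "NoncurrentVersionExpiration": {"NoncurrentDays": 365}
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
    --bucket my-backup-bucket \
    --lifecycle-configuration file:///tmp/lifecycle.json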
 
Curious if you ever did anything in this space or had anything written?
 