Hi guys,
Previously I was creating backups locally and uploading them via FTP to a backup server, but this method has several downsides:
- It requires a lot of free space on the web server to prepare and create the backups (I pretty much have to keep at least half of the HDD free);
- Using the same HDD for both reading and writing while creating the final .tar.gz significantly degrades the performance of the whole server until the backup job completes. This can take hours, so the server is slow for hours;
- Deleting a really huge .tar.gz file after a successful FTP upload (e.g. a 200 GB file) effectively locks down the whole server for about 10 minutes while the HDD subsystem frees up the blocks occupied by the file: CPU load soars, web pages time out, etc.
So I decided to mount a directory from the backup server on the web server (via SSHFS) and back up straight to it as if it were a local directory.
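For reference, this is roughly how I mount it; the host name, paths and options below are just placeholders from my setup, not a recommendation:

```
sshfs backupuser@backup-server:/backups /mnt/backup-server \
    -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3
```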
This method works much better; I have noticed just one issue, and it is as follows:
- For local backups, the backup job seems to first prepare the backup files in a <backup_dir>/username subdirectory, copying lots of files there (SQL backups, user data, unreadable data, etc.), and then tars and gzips the whole thing.
- Tarring and gzipping on a network mount incurs significant network overhead, slows down the backup job, and puts additional load on the network subsystem (see the sketch after this list).
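To make this concrete, here is roughly what I understand happens now when <backup_dir> points at the SSHFS mount (I have not read the actual backup code, so the paths and exact steps are my assumption):

```
BACKUP_DIR=/mnt/backup-server/backups          # <backup_dir> = the SSHFS mount
mkdir -p "$BACKUP_DIR/username"
# ... SQL backups, user data etc. are copied into $BACKUP_DIR/username ...
tar czf "$BACKUP_DIR/username.tar.gz" -C "$BACKUP_DIR" username
# Both the staging copies and the reads done by tar/gzip go over the network.
```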
So what I would like to suggest is adding a text input field to the backup job add/edit form to specify the temporary path where backups are prepared before the final tar/gzip step (in my case this would be a local directory on the web server, outside the network mount).
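To illustrate the idea, with a configurable temporary path the job could do something like this instead (all paths here are hypothetical, just a sketch):

```
TMP_DIR=/backup-tmp                            # local disk on the web server
TARGET_DIR=/mnt/backup-server/backups          # SSHFS mount
mkdir -p "$TMP_DIR/username"
# ... stage SQL backups, user data etc. locally in $TMP_DIR/username ...
tar czf "$TMP_DIR/username.tar.gz" -C "$TMP_DIR" username
mv "$TMP_DIR/username.tar.gz" "$TARGET_DIR/"   # only the finished archive crosses the network
rm -rf "$TMP_DIR/username"                     # clean up the local staging area
```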
Thank you.