large user backup problem

spoonfed

Hi,

I added one automatic backup per account to be sent to a remote FTP server about a week ago. One site is much bigger than the rest, about 30 GB in size compared to the second largest, which is about 2 GB. The first night I got an error that it couldn't create a backup for this site because not enough space was available; the other sites were backed up fine.

So I made some room; there is now about 65 GB of free space, and the night after, all sites were backed up and sent to the remote server without any problems. But ever since then I've been getting this error every night:

Subject: An error occurred during the backup. Today at 03:47
User xxx has been backed up.
Cannot open local file /home/tmp/admin/xxx.tar.gz for reading: No such file or directory.
ncftpput /home/tmp/admin/xxx.tar.gz: could not open file.

And every day I've scratched my head, double-checked that there was room on the server, double-checked the FTP settings, double-checked that there is room on the target server, and with no more ideas just hoped it would work the following night.

The file system on the server is ext2; I don't think it has any problems with files of this size.

Any ideas? All other backups run every night without any issues, and this backup did work that one night before deciding to throw this error every night instead.
 
The maximum file size depends on several factors, including your kernel version and the block size of your file system. You need to do some research. Wikipedia has some good examples.

Though this article is about ext3, the limits are probably the same.

Jeff
 
Thanks, I checked some things out.

Block size is 4096, which should give a maximum file size of 2 TiB.

Right now I've called for an immediate local backup of the account and am tracking how it goes.

I think it might be a disk space problem after all. What I didn't know is that the backup script copies all the stuff to a folder in user backups first (or in /home/tmp/ if it's a remote backup), then creates a gzip from that, making it take up roughly an extra 65 GB of space on the hard drive before the backup is done.
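
For reference, a quick way to check how much headroom the backup will need before the nightly run (the account name below is just a placeholder):
Code:
# size of the account's home directory -- the temporary copy plus the
# finished gzip will need roughly this much extra space during the backup
du -sh /home/username

# free space on the partition DirectAdmin stages the backup on
df -h /home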

I'm going to try to clear up some more space. But I think that site will have to be put on a new server in a few months' time.

I don't suppose there is any way to make the tar process delete files as it adds them to the gzip archive? That would help a lot.
 
Mount an NFS share from the other server and make the tar.gz files be created there instead of on the local drive.
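
Roughly, assuming the backup server exports a directory over NFS (the host name and paths below are made up for illustration), it would look something like this:
Code:
# on the backup server, export a directory in /etc/exports, e.g.:
#   /backups   10.0.0.5(rw,sync,no_root_squash)
# then reload the exports with: exportfs -ra

# on this server, mount that export somewhere local
mkdir -p /mnt/remote-backups
mount -t nfs backupserver.example.com:/backups /mnt/remote-backups

# then point the DirectAdmin backup path at /mnt/remote-backups so the
# tar.gz files are written straight to the remote disk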
 
I don't suppose there is any way to make the tar process delete files as it adds them to the gzip archive? That would help a lot.
If you don't suppose that, you'd be wrong, though DirectAdmin doesn't do it that way.

You can use the -A switch to append tar files to an archive, so you can create a script to add files to the archive, and remove the full files, as you wish.
Code:
$ man tar
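
As a rough sketch of that idea (paths are just examples; GNU tar's --remove-files switch deletes each item as it's archived, and you can only append to an uncompressed archive, so compress it at the end):
Code:
# start an empty archive, then append each item and remove it once added
# (requires GNU tar for --remove-files; the staging path is made up)
tar -cf /home/tmp/admin/xxx.tar --files-from /dev/null
for item in /home/tmp/admin/staging/*; do
    tar --remove-files -rf /home/tmp/admin/xxx.tar "$item"
done

# -r can only append to an uncompressed tar, so gzip it when done
gzip /home/tmp/admin/xxx.tar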

Jeff
 
I don't suppose there is any way to make the tar process delete files as it adds them to the gzip archive?

I would like clarification on your question here. Do you mean delete the files it just backed up? If so, then you end up with only a backup and no user files.
 
I would like clarification on your question here. Do you mean delete the files it just backed up? If so, then you end up with only a backup and no user files.

DirectAdmin does not create the backup file from the user files; it creates the backup file from a folder it creates during the backup. So first it copies everything, makes database dumps and so on, and then it creates a gzip and deletes the temporary backup folder.

Mounting an NFS share sounds interesting, but unfortunately I don't know the first thing about it :p I guess it's something I could look into. Would I be able to connect the remote backup server as a drive on this server, so I could basically tell DA to put its backups there?



Thanks jlasman, it's good to know it would be doable if I need it some time; however, this doesn't seem to be a space problem after all. I cleared more space so there's now almost 80 GB of free space, and the local backup I did manually from DA worked fine (which I then deleted so it is not taking up space).

So again I thought the problem was solved, but this morning I woke up to this:

Subject: An error occurred during the backup. Today at 03:56
User xxx has been backed up.
/home/tmp/admin/xxx.tar.gz.cfg: No such file or directory

Why is it now a .cfg file that is missing? I'm lost.
 
DirectAdmin does not create the backup file from the user files; it creates the backup file from a folder it creates during the backup. So first it copies everything, makes database dumps and so on, and then it creates a gzip and deletes the temporary backup folder.

So you want it to delete the files in /home/tmp as it tar-gzips them? Keep in mind that the bulk of your data is going to be the web site files, which do not get copied to a tmp folder. They are tar-gzipped in place.
 
Oh, so the web site files are not copied first? Ok.

Mostly I just want the scheduled backups to work, whichever method works.

I just performed a manual backup of the large site to the remote server and it worked perfectly, but the scheduled backup still fails every night. I think I'm going to have to report this as a bug to the DirectAdmin team.
 
Hello,

The cfg is an FTP upload config file.
Created by /usr/local/directadmin/scripts/ftp_upload.php
The script runs as root.

One guess might be that /bin isn't in the $PATH: the calls to "echo" would fail, and thus the cfg file would be missing.
Try changing the echo calls to /bin/echo calls (and the same for chmod and touch) to see if that does anything.

The ftp_upload.php is a standard shell script, so if you're able, adding debug output like the following:
Code:
/bin/date >> /tmp/debug.txt 2>&1
/bin/ls -la /home/tmp/admin >> /tmp/debug.txt 2>&1
/bin/ls -la /home/tmp >> /tmp/debug.txt 2>&1
to the script right after the cfg is "created" would help figure out what's going on.
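
Logging the PATH the script actually sees when dataskq runs it would also confirm or rule out the $PATH guess (same example debug file as above):
Code:
/bin/echo "PATH=$PATH" >> /tmp/debug.txt 2>&1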

John
 
Hello,

I've recently checked another system for this issue, and I believe the cause is that /home/tmp/admin is used for all backups, and that directory is always forcibly removed before any backup run to ensure things are "clean". If you have multiple backups (separate crons) set up and there is any overlap, the second run can end up deleting the first one's files, which would explain why it's somewhat random and can't be duplicated when run manually at off-backup hours.

The solution is to spread out your backups so there is guaranteed to be no overlap, or else combine multiple cronjobs into one backup cronjob (if possible) so one instance of the dataskq does them all and they don't end up fighting. I'll be looking at possible DA-side solutions, but for now, that's the workaround.
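
If you want to confirm the overlap theory before rearranging the schedules, a temporary root cron entry like this (the output file name is just an example) will show /home/tmp/admin being emptied partway through a run:
Code:
* * * * * /bin/ls -la /home/tmp/admin >> /tmp/backup_overlap.txt 2>&1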

John
 