Problem backing up to NFS volume

IT_Architect

I back up VMs every night to an NFS volume exported by a Microsoft server, and it works great. I have set up rights for DirectAdmin's admin user, and it has full rights. I do not have the space to back up locally, which is what I have done with other transfer backups, and that has always worked well. I can log into the FreeBSD DirectAdmin server with PuTTY as the admin user, navigate to the NFS directory, and do anything I want, including making folders and files, and deleting and renaming anything I want.
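For context, the share is mounted on the FreeBSD side as a plain NFS mount, something like this (a sketch; the server name nas-1 and the export path are assumptions):
Code:
# /etc/fstab entry for the Windows-exported NFS share
nas-1:/backups   /mnt/nas-1/backups   nfs   rw   0   0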

When I use Admin Backup/Transfer, select Local, and give it the path to the NFS share, /mnt/nas-1/backups/ServerTransfer/, it creates the admin subdirectory, does its thing, deletes the subdirectory, and leaves a file named admin.root.admin.tar.gz at /mnt/nas-1/backups/ServerTransfer/. The entry in Messages shows admin backed up fine.
Code:
User admin has been backed up. <12:07:25>

However, for all of the other users, all of whom are under that admin reseller, I get messages like these:
Code:
Unable to write /mnt/nas-1/backups/ServerTransfer/theuser/backup/theuserd.org/email/quota : Unable to get Lock on file:
open error for /mnt/nas-1/backups/ServerTransfer/theuser/backup/theuserd.org/email/quota.lock: No such file or directory
/mnt/nas-1/backups/ServerTransfer/theuser/backup/theuserd.org/email: No such file or directory

Error reading /mnt/nas-1/backups/ServerTransfer/theuser/backup/theuserd.org/domain.conf to insert local_domain & private_html_is_link: Unable to open /mnt/nas-1/backups/ServerTransfer/theuser/backup/theuserd.org/domain.conf for reading.

Error writing /mnt/nas-1/backups/ServerTransfer/theuser/backup/apache_owned_files.list : Unable to open file for writing

Error Compressing the backup file /mnt/nas-1/backups/ServerTransfer/theuser/backup/home.tar.gz : tar: Failed to open '/mnt/nas-1/backups/ServerTransfer/theuser/backup/home.tar.gz': Permission denied

Error renaming /mnt/nas-1/backups/ServerTransfer/theuser/user.admin.theuser.tar.gz to /mnt/nas-1/backups/ServerTransfer/user.admin.theuser.tar.gz : Unable to move /mnt/nas-1/backups/ServerTransfer/theuser/user.admin.theuser.tar.gz to /mnt/nas-1/backups/ServerTransfer/user.admin.theuser.tar.gz: 
A directory component in oldpath  or  newpath does not exist or is a dangling symbolic link.

What is odd is that it DOES create the subdirectory /mnt/nas-1/backups/ServerTransfer/theuser/backup/ and write the files into those directories, and I can watch them build. However, when it finishes, it deletes the user-named subdirectory and moves on to the next user. All that remains after it has gone through all of the users is the file created at the beginning by the admin backup, /mnt/nas-1/backups/ServerTransfer/admin.root.admin.tar.gz, and it is the same size as it was after the admin backup.
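A quick way to narrow this down (a sketch, not DirectAdmin tooling; theuser is a placeholder for a real account) is to repeat the failing write with the credentials the backup switches to:
Code:
# run as root on the FreeBSD DA server; -m keeps the caller's shell,
# which matters when the account's own shell is nologin
su -m theuser -c 'touch /mnt/nas-1/backups/ServerTransfer/testfile'

# if this fails with "Permission denied" while the same touch run as
# admin/root succeeds, the export is only mapping the one account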
 
I am surprised nobody else does this. If they had, they would never have been able to escape the same experience that I had.

Someone suggested to me earlier in another thread that I had to give the admin user full rights on the NFS volume for it to work. That is false. All you get is an admin backup, just as I got. The user changes as the backup runs: admin reads the files and writes them out, but then the user's credentials are used for certain operations. When those fail, the source files are erased and a failure is reported. When there is a success, such as when you do it locally, you get a tar.gz owned by admin, with the group set to the user. A user not granted rights has read-only access at best. Therefore you need to come up with a way to make the NFS volume world-writable, so that anonymous users have full write access. The reason my nightly backups to the volume work is that they are VM backups, which use only one user.
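To see where the mount stands (a sketch; the chmod only illustrates the goal, since on a Windows NFS export the real change has to be made server-side in the share's NFS permissions, e.g. by granting the anonymous/unmapped user write access):
Code:
# how the share's permissions look from the FreeBSD client
ls -ld /mnt/nas-1/backups/ServerTransfer

# the "world-writable" state described above corresponds to
chmod 1777 /mnt/nas-1/backups/ServerTransfer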
 
FTP backups don't take up much space on the server you are backing up, which can be a good thing when you don't have much space to begin with. However, that also means it works one user at a time: first it copies the files somewhere, then it compresses them, and then it FTPs them. After the transfer has completed, it moves on to the next user. Even allowing for all of that, the copy process is crazy slow; it takes a while to transfer even a 1 MB file. And after all of that time, there were still errors.

I would say at this point, if you don't have the space to do a full admin backup into the admin folder, back up to the default admin directory anyway, and while the archives are building in the admin folder, manually copy each finished one to the NFS folder and delete it from the admin folder. That way you archive and copy at the same time, the copy is far faster, and there are no errors.
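A minimal sketch of that manual copy step, assuming backups land in the default /home/admin/admin_backups and that an archive is finished once gzip can verify it end to end (paths and the polling interval are assumptions):
Code:
#!/bin/sh
# drain finished backups from the local admin directory to the NFS share
SRC=/home/admin/admin_backups
DST=/mnt/nas-1/backups/ServerTransfer

while true; do
    for f in "$SRC"/*.tar.gz; do
        [ -e "$f" ] || continue
        # only move archives that pass gzip's integrity test,
        # so files still being written are left alone
        if gzip -t "$f" 2>/dev/null; then
            cp "$f" "$DST"/ && rm "$f"
        fi
    done
    sleep 60
done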
 
Reading this thread, it almost looks like some posts have been deleted.

Did you solve this? From your explanation it reads like you found the solution.
 
From your explanation it reads like you found the solution.
I haven't settled on a strategy yet.

1. As mentioned, FTP is not practical.

2. Backing up to /home/admin/admin_backups is a solid method. However, one of the reasons to move to a larger server is space, and the largest volume is always the home volume. When you back up a large user, it requires a huge amount of space: it moves everything to a subdirectory, then turns around and compresses it, and after that copies or moves the tar.gz to the parent directory. As you go, /home/admin/admin_backups uses up more and more space. One method is to copy the files deposited in that directory to an NFS share. However, if you have a user that exceeds the amount of space you have left, it compresses until it runs out of room, copies/moves whatever it got done so far, and leaves you with a file that you don't know is incomplete. Sometimes you do know, when it leaves you a tar.gz that is zero bytes long.
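A quick way to catch those incomplete archives after the fact (a sketch; the path is the default admin backup directory):
Code:
# zero-byte archives are the obvious failures
find /home/admin/admin_backups -name '*.tar.gz' -size 0

# truncated-but-nonzero archives fail gzip's integrity test
for f in /home/admin/admin_backups/*.tar.gz; do
    gzip -t "$f" || echo "INCOMPLETE: $f"
done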

3. Using an NFS volume has its issues. When it creates the backup, even for a single domain, it switches users all over the place, not just admin. Even if you export the group and passwd files, you still need to map them all to an NFS user with sufficient rights, and for a Windows Server NFS at least, that means a unique NFS user for each entry.
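For reference, the passwd and group files mentioned above use the standard Unix formats; each DA account the backup switches to needs an entry whose UID/GID matches what id reports for it on the FreeBSD box (a sketch; theuser and the IDs are placeholders):
Code:
# passwd entry (name:password:uid:gid:gecos:home:shell)
theuser:x:1005:1005:DA user:/home/theuser:/bin/sh

# group entry (name:password:gid:members)
theuser:x:1005: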

The way this is shaking out, if you can't do a local admin backup, you can't get a backup. NFS can move the process off the local disk, but then you have to set up the rights mapping, and I don't know where that ends; it may involve more than just admin and the user. So basically, if you need more space, you need to come up with more space on the old server: add a drive and add a volume. I have 3 users that I cannot move. One is 16 GB, one is 8.5 GB, and another is 57 MB. The first two may well fail because of their size, since they are the two largest. The third one produces weird log messages: first it says it failed to open something, next it talks about the passwd file and others, then it says Catch all is now set to :fail:, next :fail: changes to :blackhole:, and when it is all done, it says the user was restored.

Since I put ESXi on the bare metal and all servers are VMs, I'm thinking the safest way is to simply add another virtual disk and back up to that. For bare metal, you would add a USB hard drive. Another, riskier option is to back up some users, copy off the backups, and then delete those users on the old server. However, that is full commitment to a backup that you do not know is good. I want to know the new server is solid before I drop the old one.
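The FreeBSD side of the extra-virtual-disk route is short (a sketch; the device name da1 and the mount point are assumptions, check dmesg for the actual device):
Code:
# after adding the disk in ESXi: partition, format, and mount
gpart create -s gpt da1
gpart add -t freebsd-ufs -a 1m da1
newfs -U /dev/da1p1
mkdir -p /backup
mount /dev/da1p1 /backup

# /etc/fstab entry so it survives a reboot:
# /dev/da1p1   /backup   ufs   rw   2   2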
 
Whenever I've tried to do backups to a mounted network device, I've ended up with a custom script that copies only the final tar.gz file to it. All backups and temporary files stay on a local device while DirectAdmin creates them. I don't have any other solution.
I have found that to be true. The problem comes in when you don't have the room necessary to create the temporary files and the final tar.gz locally before moving it to the NFS volume.

Questions I do have:
1. Were you able to do a restore from the NFS volume, or did you need to copy the compressed files locally first?
2. If you restored from the NFS volume, did it expand them locally before bringing them in?
 
Copying many files to a network drive might be too slow; that's why we use a custom script.

Restoring files might be an issue here; DirectAdmin unpacks them into /home/, I believe, without copying them to a local drive first.
 
Copying to an NFS volume is plenty fast. I'll try to unpack one from an NFS drive and see how it goes. Thanks!
 
Well, lucky you. In our case it was too slow for a big number of small files. I don't recall all the details, but I do know we ended up with a custom script. I don't recall any issues with restoring backups; probably the customer never had an incident and hence no need to restore anything.
 