Backs up to tmp but doesn't FTP

IT_Architect

It backs up to the tmp directory. I get the message that it backed up fine. However, it doesn't back up fine: it never FTPs the archive to the other site. I can go into the tmp directory and FTP it by hand using the same credentials, no problem. I don't see any evidence that it even tried, nor do I know where to look to tell.
 
What backs up to the tmp directory? You're not specific enough to have any idea how to troubleshoot your issue.

Jeff
 
What backs up to the tmp directory? You're not specific enough to have any idea how to troubleshoot your issue.
Jeff
I've tried FTP and normal backups. The archive shows up fine in /home/tmp/admin/test2.tar.gz and in /home/admin/user_backups/test2.tar.gz. However, with the FTP option it never gets FTPed to the other site, nor can I determine whether it ever attempted to. Yet when I SSH in and FTP it with the same credentials and path set up in the backup, it works fine.

Thanks!
 
Have you checked the directadmin log to see if it's got any information?

Are you sure you're using the same username and password?

Are you sure you're manually copying into the same directory the backup uses?

Have you checked the /var/log/messages file on the server you're trying to ftp to?

Jeff
 
Hi Jeff,

Have you checked the directadmin log to see if it's got any information?

No errors in logs

Are you sure you're using the same username and password?

copy and paste. Tried several times

Are you sure you're manually copying into the same directory the backup uses?

Guaranteed. I'm using / in both cases, and since it's FTP, it's mapped to /home/userdir

Have you checked the /var/log/messages file on the server you're trying to ftp to?

No evidence of a login or attempted login for that matter. The proftpd log shows only the manual logins.

The first part of the backup happens every time. What doesn't happen is the ftp transfer.
 
On my system, sysbk (for the System Backups) and DA (for the User Backups at admin or reseller level) both use "ncftpput"; check that your system has it, and run ncftp.sh in /usr/local/directadmin/scripts if it doesn't.
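A quick way to verify that prerequisite from a shell (the script path shown is the stock DA location):

```shell
# DA's FTP backup upload depends on ncftpput; check for it first.
if command -v ncftpput >/dev/null 2>&1; then
    echo "ncftpput found: $(command -v ncftpput)"
else
    echo "ncftpput missing - run /usr/local/directadmin/scripts/ncftp.sh"
fi
```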
 
On my system, sysbk (for the System Backups) and DA (for the User Backups at admin or reseller level) both use "ncftpput"; check that your system has it, and run ncftp.sh in /usr/local/directadmin/scripts if it doesn't.
- It's there in /usr/bin/ and it asks for flags when I run it.
- I also ran the script and it installed again. I restarted DA and tried a backup again. Again, it wrote the file to /home/tmp/admin, but it did not ftp it to the other server.

Thanks!
 
Did you check the "Send a message when a backup has finished." setting in User Backups? When a transfer fails, I see the exact error in the message.
 
Did you check the "Send a message when a backup has finished." setting in User Backups? When a transfer fails, I see the exact error in the message.
There is no way to do an FTP backup at the user level. At the reseller level, where I'm doing the backup for the user, I do have that checked. I don't get a message for FTP backups, presumably because they don't complete, but I do get a message that it completed successfully for non-ftp backups.
 
Uhm, that's weird. If you don't get any message for FTP-enabled backups, it means there is a stale process. Try a transfer from your server to the backup server using both the PASV and PORT transfer methods; my guess is that ncftpput by default uses a method that won't work with your firewall, while the client you use to test uses the other one.
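To compare the two modes by hand, something like this sketch works (host, credentials, and file are placeholders; on most ncftpput builds -F forces passive/PASV data connections and -E forces regular/PORT ones, but double-check against your local man page):

```shell
# Placeholder host/credentials/file -- substitute your own values.
# Passive (PASV) data connections:
ncftpput -F -u admin -p secret backup.example.com / /home/tmp/admin/test2.tar.gz
# Regular/active (PORT) data connections:
ncftpput -E -u admin -p secret backup.example.com / /home/tmp/admin/test2.tar.gz
```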
 
my guess is that ncftpput by default uses a method that won't work with your firewall, while the client you use to test it uses the other one.
- No firewall.
- It left a task.queue.tmp behind. The commands and path make sense, but the password is nothing like what I entered. Maybe it encrypts it?
- If it was even trying at the other server, I would think the proftpd log would have an entry for it. It does for the successful ones at least.
 
Yes, you are right... the stale process must happen before or during the FTP login; otherwise you would see the log entry. Try a "ps -H auxww" and look for "dataskq"; there should be one or many of them. The important part is the last component of the tree: it's the stale process.
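On a typical Linux box that check looks like this (the bracketed grep pattern just keeps grep from matching its own process entry):

```shell
# Look for DirectAdmin's task-queue worker in the process listing;
# the deepest child under dataskq is the stalled step.
ps -H auxww | grep '[d]ataskq' || echo "no dataskq process found"
```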
 
None means the task.queue is complete. So DirectAdmin thinks the process is complete.

That's reasonable behavior even though you don't get a message, if (and presumably this "if" is true) that message is sent by the backup process and not by a specific process called by the task.queue.

Have you tried backing up to an IP address instead of an FQDN? Bad DNS resolution is the only thing I can think of right now.

Jeff
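A quick resolution check from the DA box (the hostname is a placeholder; substitute the backup target's name):

```shell
# Confirm the backup target's name resolves before blaming the transfer.
getent hosts backup.example.com || echo "lookup failed - try the bare IP in DA"
```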
 
Hello,

First, try updating to DA 1.33.4.
Prior to that, commands like echo, touch, chmod, etc. were all called without the full path. The full paths to programs are used in the ftp_upload.php script now. If any of those $PATHs were missing in the env as called by DA, that would explain why they didn't work.

Beyond that, ensure /home/tmp is chmod 1777, which it likely is, since you mentioned the tar.gz files are showing up.
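For reference, the expected mode can be checked like this (demonstrated on a scratch directory, since touching /home/tmp needs root; run the same stat against /home/tmp on the server):

```shell
# /home/tmp should be mode 1777: world-writable with the sticky bit set.
d=$(mktemp -d)
chmod 1777 "$d"
stat -c '%a' "$d"    # prints: 1777
rmdir "$d"
```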

Else it's possibly an issue with the program or the call itself.
Edit:
/usr/local/directadmin/scripts/ftp_upload.php

Find the line:
Code:
$FTPPUT -f $CFG -V -t 25 -m "$ftp_path" "$ftp_local_file" 2>&1
and add one more line, so that it looks like:
Code:
$FTPPUT -f $CFG -V -t 25 -m "$ftp_path" "$ftp_local_file" 2>&1
echo '$FTPPUT -f $CFG -V -t 25 -m "$ftp_path" "$ftp_local_file"' >> /tmp/test.txt
That will dump the exact command as called by DA into /tmp/test.txt, so you can see what it's doing. There should also be a cfg file sitting next to the tar.gz for the upload, so check that as well, as it's needed for the above call.

Another way to test is to call the ftp_upload.php script by hand. eg:
Code:
cd /usr/local/directadmin/scripts
ftp_local_file=/home/tmp/your/file.tar.gz ftp_ip=1.2.3.4 ftp_username=bob ftp_password=secret ftp_path=/ ./ftp_upload.php
You have to edit ftp_upload.php to change its first line from "/bin/sh" to "#!/bin/sh" for this test, then change it back again when done, or you'll get an interpreter error.

Lastly, you can try two other FTP upload programs instead of ncftpput: curl or php:
http://help.directadmin.com/item.php?id=111
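As a quick sanity check of the curl route from that link, a one-shot upload can be done roughly like this (host, credentials, and path are placeholders; curl's -T flag uploads the named file to the given URL):

```shell
# Placeholder credentials -- push the archive to the backup host over FTP.
curl -T /home/tmp/admin/test2.tar.gz ftp://admin:secret@backup.example.com/
```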

John
 
I have a similar problem...

Admin backup creates the tar.gz file in /home/tmp. But there is no sign of ncftp even trying. However, the e-mail I receive says that the backup was completed and files are uploaded...

ncftp works fine if I use it manually. DA is 1.33.6



I have done as suggested above, and the /tmp/test.txt file now shows:
$FTPPUT -f $CFG -V -t 25 -m "$ftp_path" "$ftp_local_file"

Does it mean that the $ftp_path and $ftp_local_file variables simply do not exist? And how can I solve the problem?
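Worth noting: the literal $FTPPUT/$CFG text in test.txt is a shell quoting effect, not proof the variables are empty. The suggested debug line wraps the command in single quotes, which suppress variable expansion. A quick demo:

```shell
# In sh, single quotes keep $variables literal; double quotes expand them.
ftp_path=/backup
echo '$ftp_path'    # prints the literal text: $ftp_path
echo "$ftp_path"    # prints the value: /backup
```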
 
Grrrr. After spending several hours on this, I think I have found the problem...
The remote FTP path, if it is a subdirectory, should be set as "subdirname" rather than "/subdirname". Strangely enough, though, DA's user interface will then show the location as ftp://xx.xx.xx.xxsubdirname/backup, which looks weird without the "/" :-)
 
Short answer:
Modify your backup server's FTP daemon to automatically chroot to the user's home directory.

Long answer:

I'm using "/..." with a proftpd server without any problem, but I use "DefaultRoot ~" on the target system, which tells proftpd to present the user's home directory as "/" rather than exposing the actual system root path.
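For anyone wanting the same setup, the relevant directive is a one-liner in proftpd.conf (other FTP daemons have their own equivalents):

```
# /etc/proftpd/proftpd.conf
# Jail each FTP user into their home directory, so client paths
# beginning with "/" resolve against ~ rather than the system root.
DefaultRoot ~
```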

This is an old problem with FTP (which is a very old and scarily stupid protocol), and I'd guess it's the reason most old anonymous-only FTP server distributions chrooted the ftpd binary to /var/ftp (or wherever): that way /var/ftp/public/bla would be accessible by any browser navigating to ftp://host/public/bla, because the request isn't "public/bla" (which means "~/public/bla") but "/public/bla".

If you want the path to show correctly using /subdirname and you are not using virtual (or real) chrooting, set "/subdirname" in DA then create a symlink from "/subdirname" to "/path/to/subdirname" in the target system (it requires root access).
The best solution would of course be to do virtual chrooting in the user's home. This way when you write "/subdirname" the server knows that you meant "/home/user/subdirname".
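The symlink approach can be sketched like this (shown in a scratch tree; on a real server the link lives at /subdirname, points at the user's actual directory, and creating it requires root):

```shell
# Demo of mapping a top-level name onto a user directory via a symlink.
root=$(mktemp -d)
mkdir -p "$root/home/user/backups"
ln -s "$root/home/user/backups" "$root/subdirname"
readlink "$root/subdirname"    # prints the real target path
rm -r "$root"
```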

This is what I found in the FTP RFC (rfc959):
pathname

Pathname is defined to be the character string which must be
input to a file system by a user in order to identify a file.
Pathname normally contains device and/or directory names, and
file name specification. FTP does not yet specify a standard
pathname convention. Each user must follow the file naming
conventions of the file systems involved in the transfer.
This means that "/" must be the root path, because that is what "/" means on any *nix system, not the user's home path.
 