Solved: (Explicit) FTPS not working for admin backups.

Hello,

The pre-release build contains changes to the FTP upload script, which is why you'd need to update to pre-release.

To downgrade back to the latest stable version, you would just run the upgrade without the '&channel=beta' flag. :)

And to update curl afterwards, just:
Code:
cd /usr/local/directadmin/custombuild/
./build update
./build curl
Just did some testing, and it now works again on my production server, but still not on my development server.
Do you know what was changed in the list/upload scripts? I needed to make custom scripts that can handle Explicit FTPS.
 
Run a diff on the two servers' files and see what is different between them. Also, see if they both use the same version of curl or not. Are they connecting to the same upload server, too?
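As a sketch of that comparison: assuming the custom scripts live under /usr/local/directadmin/scripts/custom/ (the usual location) and you fetch the other server's copy locally first, a small helper like this shows whether they match. The hostname "dev", the file names, and the paths below are illustrative, not taken from your setup:

```shell
#!/bin/sh
# Compare a local script against a copy fetched from the other server.
# You would first fetch the remote copy, e.g. (hostname "dev" is a
# placeholder):
#   scp dev:/usr/local/directadmin/scripts/custom/ftp_upload /tmp/remote_copy
compare_scripts() {
    # diff exits 0 when the two files are identical
    if diff -u "$1" "$2"; then
        echo "identical"
    else
        echo "DIFFERENT"
    fi
}

# Demo on two throwaway files:
printf 'example\n' > /tmp/local_copy
printf 'example\n' > /tmp/remote_copy
compare_scripts /tmp/local_copy /tmp/remote_copy   # prints "identical"
```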
 
I have them both updated to the latest beta and recompiled curl via CustomBuild.
The servers are the same; the only difference is their physical location.
But they both have custom ftp_upload, ftp_download, and ftp_list scripts that I made based on the existing ones, with a couple of changes to allow Explicit FTPS.
And they connect to the same backup server with the same username/password (not at the same time, of course).
 
Pre-release would have made changes to the ftp_* scripts. If you temporarily disable the custom ftp_* scripts, does it work?
 
I cannot disable them. That's why I am asking what was changed, so I can update my custom scripts.
The default scripts always fail with an error that they cannot log in. That is because they try to use ftp:// and do not handle SSL/TLS, but my server requires ftps:// and a working SSL/TLS connection, since it is an Explicit FTPS server.
 
I understand; however, the code that caused that FTPS failure and the fallback to insecure FTP was fixed in pre-release some months ago, so you shouldn't need custom scripts now that you've updated. Here is precisely what was happening:

Despite "Secure FTP" being checked, it still attempted to use insecure FTP, because a newer curl version changed the --help output, so this code in the old ftp_upload script:

Code:
SSL_REQD=""
if ${CURL} --help | grep -m1 -q 'ftp-ssl-reqd'; then
    SSL_REQD="--ftp-ssl-reqd"
elif ${CURL} --help | grep -m1 -q 'ssl-reqd'; then
    SSL_REQD="--ssl-reqd"
fi

never set SSL_REQD, so the transfer did not require SSL. The grep no longer found ftp-ssl-reqd or ssl-reqd, because newer curl no longer mentions them in its plain --help output. The script had to be updated to use '--help tls' instead:

Code:
SSL_REQD=""
if ${CURL} --help tls | grep -m1 -q 'ftp-ssl-reqd'; then
    SSL_REQD="--ftp-ssl-reqd"
elif ${CURL} --help tls | grep -m1 -q 'ssl-reqd'; then
    SSL_REQD="--ssl-reqd"
fi

So with the pre-release version of the script, it properly detects the flag, sets it, and uses FTP with SSL.
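If you'd like to confirm what your own curl yields, here is a standalone sketch of the same detection the updated script performs; it prints the flag that would be chosen. Locating curl with `command -v` is my addition for illustration, not part of the stock script:

```shell
#!/bin/sh
# Run the same SSL-flag detection as the updated ftp_upload script
# against whichever curl is first in PATH, and report the result.
CURL=$(command -v curl)
SSL_REQD=""
if [ -n "$CURL" ]; then
    # Newer curl prints the TLS category for '--help tls'; older curl
    # ignores the argument and prints the full help, which also lists
    # ssl-reqd, so this check works on both.
    if "$CURL" --help tls 2>/dev/null | grep -m1 -q 'ftp-ssl-reqd'; then
        SSL_REQD="--ftp-ssl-reqd"
    elif "$CURL" --help tls 2>/dev/null | grep -m1 -q 'ssl-reqd'; then
        SSL_REQD="--ssl-reqd"
    fi
fi
# An empty value here means secure FTP would silently be skipped.
echo "SSL flag: ${SSL_REQD:-NONE}"
```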

curl 7.75 help output:
Code:
[root@host directadmin]# curl --help
Usage: curl [options...] <url>
 -d, --data <data>   HTTP POST data
 -f, --fail          Fail silently (no output at all) on HTTP errors
 -h, --help <category> Get help for commands
 -i, --include       Include protocol response headers in the output
 -o, --output <file> Write to file instead of stdout
 -O, --remote-name   Write output to a file named as the remote file
 -s, --silent        Silent mode
 -T, --upload-file <file> Transfer local FILE to destination
 -u, --user <user:password> Server user and password
 -A, --user-agent <name> Send User-Agent <name> to server
 -v, --verbose       Make the operation more talkative
 -V, --version       Show version number and quit

This is not the full help, this menu is stripped into categories.
Use "--help category" to get an overview of all categories.
For all options use the manual or "--help all".
[root@host directadmin]#
vs.
curl 7.68 help output:
Code:
[root@host custombuild]# curl --help
Usage: curl [options...] <url>
     --abstract-unix-socket <path> Connect via abstract Unix domain socket
     --alt-svc <file name> Enable alt-svc with this cache file
     --anyauth       Pick any authentication method
 -a, --append        Append to target file when uploading
     --basic         Use HTTP Basic Authentication
     --cacert <file> CA certificate to verify peer against
     --capath <dir>  CA directory to verify peer against
 -E, --cert <certificate[:password]> Client certificate file and password
     --cert-status   Verify the status of the server certificate
     --cert-type <type> Certificate file type (DER/PEM/ENG)
     --ciphers <list of ciphers> SSL ciphers to use
     --compressed    Request compressed response
     --compressed-ssh Enable SSH compression
 -K, --config <file> Read config from a file
     --connect-timeout <seconds> Maximum time allowed for connection
     --connect-to <HOST1:PORT1:HOST2:PORT2> Connect to host
 -C, --continue-at <offset> Resumed transfer offset
 -b, --cookie <data|filename> Send cookies from string/file
 -c, --cookie-jar <filename> Write cookies to <filename> after operation
     --create-dirs   Create necessary local directory hierarchy
     --crlf          Convert LF to CRLF in upload
     --crlfile <file> Get a CRL list in PEM format from the given file
 -d, --data <data>   HTTP POST data
     --data-ascii <data> HTTP POST ASCII data
     --data-binary <data> HTTP POST binary data
     --data-raw <data> HTTP POST data, '@' allowed
     --data-urlencode <data> HTTP POST data url encoded
     --delegation <LEVEL> GSS-API delegation permission
     --digest        Use HTTP Digest Authentication
 -q, --disable       Disable .curlrc
     --disable-eprt  Inhibit using EPRT or LPRT
     --disable-epsv  Inhibit using EPSV
     --disallow-username-in-url Disallow username in url
     --dns-interface <interface> Interface to use for DNS requests
     --dns-ipv4-addr <address> IPv4 address to use for DNS requests
     --dns-ipv6-addr <address> IPv6 address to use for DNS requests
     --dns-servers <addresses> DNS server addrs to use
     --doh-url <URL> Resolve host names over DOH
 -D, --dump-header <filename> Write the received headers to <filename>
     --egd-file <file> EGD socket path for random data
     --engine <name> Crypto engine to use
     --etag-save <file> Get an ETag from response header and save it to a FILE
     --etag-compare <file> Get an ETag from a file and send a conditional request
     --expect100-timeout <seconds> How long to wait for 100-continue
 -f, --fail          Fail silently (no output at all) on HTTP errors
     --fail-early    Fail on first transfer error, do not continue
     --false-start   Enable TLS False Start
 -F, --form <name=content> Specify multipart MIME data
     --form-string <name=string> Specify multipart MIME data
     --ftp-account <data> Account data string
     --ftp-alternative-to-user <command> String to replace USER [name]
     --ftp-create-dirs Create the remote dirs if not present
     --ftp-method <method> Control CWD usage
     --ftp-pasv      Use PASV/EPSV instead of PORT
 -P, --ftp-port <address> Use PORT instead of PASV
     --ftp-pret      Send PRET before PASV
     --ftp-skip-pasv-ip Skip the IP address for PASV
     --ftp-ssl-ccc   Send CCC after authenticating
     --ftp-ssl-ccc-mode <active/passive> Set CCC mode
     --ftp-ssl-control Require SSL/TLS for FTP login, clear for transfer
 -G, --get           Put the post data in the URL and use GET
 -g, --globoff       Disable URL sequences and ranges using {} and []
     --happy-eyeballs-timeout-ms <milliseconds> How long to wait in milliseconds for IPv6 before trying IPv4
     --haproxy-protocol Send HAProxy PROXY protocol v1 header
 -I, --head          Show document info only
 -H, --header <header/@file> Pass custom header(s) to server
 -h, --help          This help text
     --hostpubmd5 <md5> Acceptable MD5 hash of the host public key
     --http0.9       Allow HTTP 0.9 responses
 -0, --http1.0       Use HTTP 1.0
     --http1.1       Use HTTP 1.1
     --http2         Use HTTP 2
     --http2-prior-knowledge Use HTTP 2 without HTTP/1.1 Upgrade
     --http3         Use HTTP v3
     --ignore-content-length Ignore the size of the remote resource
 -i, --include       Include protocol response headers in the output
 -k, --insecure      Allow insecure server connections when using SSL
     --interface <name> Use network INTERFACE (or address)
 -4, --ipv4          Resolve names to IPv4 addresses
 -6, --ipv6          Resolve names to IPv6 addresses
 -j, --junk-session-cookies Ignore session cookies read from file
     --keepalive-time <seconds> Interval time for keepalive probes
     --key <key>     Private key file name
     --key-type <type> Private key file type (DER/PEM/ENG)
     --krb <level>   Enable Kerberos with security <level>
     --libcurl <file> Dump libcurl equivalent code of this command line
     --limit-rate <speed> Limit transfer speed to RATE
 -l, --list-only     List only mode
     --local-port <num/range> Force use of RANGE for local port numbers
 -L, --location      Follow redirects
     --location-trusted Like --location, and send auth to other hosts
     --login-options <options> Server login options
     --mail-auth <address> Originator address of the original email
     --mail-from <address> Mail from this address
     --mail-rcpt <address> Mail to this address
 -M, --manual        Display the full manual
     --max-filesize <bytes> Maximum file size to download
     --max-redirs <num> Maximum number of redirects allowed
 -m, --max-time <seconds> Maximum time allowed for the transfer
     --metalink      Process given URLs as metalink XML file
     --negotiate     Use HTTP Negotiate (SPNEGO) authentication
 -n, --netrc         Must read .netrc for user name and password
     --netrc-file <filename> Specify FILE for netrc
     --netrc-optional Use either .netrc or URL
 -:, --next          Make next URL use its separate set of options
     --no-alpn       Disable the ALPN TLS extension
 -N, --no-buffer     Disable buffering of the output stream
     --no-keepalive  Disable TCP keepalive on the connection
     --no-npn        Disable the NPN TLS extension
     --no-progress-meter Do not show the progress meter
     --no-sessionid  Disable SSL session-ID reusing
     --noproxy <no-proxy-list> List of hosts which do not use proxy
     --ntlm          Use HTTP NTLM authentication
     --ntlm-wb       Use HTTP NTLM authentication with winbind
     --oauth2-bearer <token> OAuth 2 Bearer Token
 -o, --output <file> Write to file instead of stdout
 -Z, --parallel      Perform transfers in parallel
     --parallel-immediate Do not wait for multiplexing (with --parallel)
     --parallel-max  Maximum concurrency for parallel transfers
     --pass <phrase> Pass phrase for the private key
     --path-as-is    Do not squash .. sequences in URL path
     --pinnedpubkey <hashes> FILE/HASHES Public key to verify peer against
     --post301       Do not switch to GET after following a 301
     --post302       Do not switch to GET after following a 302
     --post303       Do not switch to GET after following a 303
     --preproxy [protocol://]host[:port] Use this proxy first
 -#, --progress-bar  Display transfer progress as a bar
     --proto <protocols> Enable/disable PROTOCOLS
     --proto-default <protocol> Use PROTOCOL for any URL missing a scheme
     --proto-redir <protocols> Enable/disable PROTOCOLS on redirect
 -x, --proxy [protocol://]host[:port] Use this proxy
     --proxy-anyauth Pick any proxy authentication method
     --proxy-basic   Use Basic authentication on the proxy
     --proxy-cacert <file> CA certificate to verify peer against for proxy
     --proxy-capath <dir> CA directory to verify peer against for proxy
     --proxy-cert <cert[:passwd]> Set client certificate for proxy
     --proxy-cert-type <type> Client certificate type for HTTPS proxy
     --proxy-ciphers <list> SSL ciphers to use for proxy
     --proxy-crlfile <file> Set a CRL list for proxy
     --proxy-digest  Use Digest authentication on the proxy
     --proxy-header <header/@file> Pass custom header(s) to proxy
     --proxy-insecure Do HTTPS proxy connections without verifying the proxy
     --proxy-key <key> Private key for HTTPS proxy
     --proxy-key-type <type> Private key file type for proxy
     --proxy-negotiate Use HTTP Negotiate (SPNEGO) authentication on the proxy
     --proxy-ntlm    Use NTLM authentication on the proxy
     --proxy-pass <phrase> Pass phrase for the private key for HTTPS proxy
     --proxy-pinnedpubkey <hashes> FILE/HASHES public key to verify proxy with
     --proxy-service-name <name> SPNEGO proxy service name
     --proxy-ssl-allow-beast Allow security flaw for interop for HTTPS proxy
     --proxy-tls13-ciphers <list> TLS 1.3 ciphersuites for proxy (OpenSSL)
     --proxy-tlsauthtype <type> TLS authentication type for HTTPS proxy
     --proxy-tlspassword <string> TLS password for HTTPS proxy
     --proxy-tlsuser <name> TLS username for HTTPS proxy
     --proxy-tlsv1   Use TLSv1 for HTTPS proxy
 -U, --proxy-user <user:password> Proxy user and password
     --proxy1.0 <host[:port]> Use HTTP/1.0 proxy on given port
 -p, --proxytunnel   Operate through an HTTP proxy tunnel (using CONNECT)
     --pubkey <key>  SSH Public key file name
 -Q, --quote         Send command(s) to server before transfer
     --random-file <file> File for reading random data from
 -r, --range <range> Retrieve only the bytes within RANGE
     --raw           Do HTTP "raw"; no transfer decoding
 -e, --referer <URL> Referrer URL
 -J, --remote-header-name Use the header-provided filename
 -O, --remote-name   Write output to a file named as the remote file
     --remote-name-all Use the remote file name for all URLs
 -R, --remote-time   Set the remote file's time on the local output
 -X, --request <command> Specify request command to use
     --request-target Specify the target for this request
     --resolve <host:port:address[,address]...> Resolve the host+port to this address
     --retry <num>   Retry request if transient problems occur
     --retry-connrefused Retry on connection refused (use with --retry)
     --retry-delay <seconds> Wait time between retries
     --retry-max-time <seconds> Retry only within this period
     --sasl-authzid <identity>  Use this identity to act as during SASL PLAIN authentication
     --sasl-ir       Enable initial response in SASL authentication
     --service-name <name> SPNEGO service name
 -S, --show-error    Show error even when -s is used
 -s, --silent        Silent mode
     --socks4 <host[:port]> SOCKS4 proxy on given host + port
     --socks4a <host[:port]> SOCKS4a proxy on given host + port
     --socks5 <host[:port]> SOCKS5 proxy on given host + port
     --socks5-basic  Enable username/password auth for SOCKS5 proxies
     --socks5-gssapi Enable GSS-API auth for SOCKS5 proxies
     --socks5-gssapi-nec Compatibility with NEC SOCKS5 server
     --socks5-gssapi-service <name> SOCKS5 proxy service name for GSS-API
     --socks5-hostname <host[:port]> SOCKS5 proxy, pass host name to proxy
 -Y, --speed-limit <speed> Stop transfers slower than this
 -y, --speed-time <seconds> Trigger 'speed-limit' abort after this time
     --ssl           Try SSL/TLS
     --ssl-allow-beast Allow security flaw to improve interop
     --ssl-no-revoke Disable cert revocation checks (Schannel)
     --ssl-reqd      Require SSL/TLS
 -2, --sslv2         Use SSLv2
 -3, --sslv3         Use SSLv3
     --stderr        Where to redirect stderr
     --styled-output Enable styled output for HTTP headers
     --suppress-connect-headers Suppress proxy CONNECT response headers
     --tcp-fastopen  Use TCP Fast Open
     --tcp-nodelay   Use the TCP_NODELAY option
 -t, --telnet-option <opt=val> Set telnet option
     --tftp-blksize <value> Set TFTP BLKSIZE option
     --tftp-no-options Do not send any TFTP options
 -z, --time-cond <time> Transfer based on a time condition
     --tls-max <VERSION> Set maximum allowed TLS version
     --tls13-ciphers <list> TLS 1.3 ciphersuites (OpenSSL)
     --tlsauthtype <type> TLS authentication type
     --tlspassword   TLS password
     --tlsuser <name> TLS user name
 -1, --tlsv1         Use TLSv1.0 or greater
     --tlsv1.0       Use TLSv1.0 or greater
     --tlsv1.1       Use TLSv1.1 or greater
     --tlsv1.2       Use TLSv1.2 or greater
     --tlsv1.3       Use TLSv1.3 or greater
     --tr-encoding   Request compressed transfer encoding
     --trace <file>  Write a debug trace to FILE
     --trace-ascii <file> Like --trace, but without hex output
     --trace-time    Add time stamps to trace/verbose output
     --unix-socket <path> Connect through this Unix domain socket
 -T, --upload-file <file> Transfer local FILE to destination
     --url <url>     URL to work with
 -B, --use-ascii     Use ASCII/text transfer
 -u, --user <user:password> Server user and password
 -A, --user-agent <name> Send User-Agent <name> to server
 -v, --verbose       Make the operation more talkative
 -V, --version       Show version number and quit
 -w, --write-out <format> Use output FORMAT after completion
     --xattr         Store metadata in extended file attributes
[root@host custombuild]#

If you cannot use the pre-release version of the script, you can try using an older version of curl that does have the older --help output:

Code:
cd /usr/local/directadmin/custombuild
echo "curl:7.68.0:" > custom_versions.txt
./build curl
./build php n

You might also try manually testing to ensure the connection details are correct:
Code:
curl --ssl-reqd -k --tlsv1.1 --show-error --ftp-create-dirs --user "$user:$password" ftp://$ip

Another issue I've seen is the OS-supplied curl being used instead of the much more recent CustomBuild-managed curl. The curl version needs to be new enough to have the updated behavior of the --tlsv1.1 flag: instead of requiring exactly TLSv1.1, it now sets TLSv1.1 as the minimum, so connections work with TLSv1.1 and newer.
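A quick way to see whether that is happening is to check which curl wins in PATH and what each installed copy reports. As an assumption for this sketch, /usr/local/bin/curl is where CustomBuild typically places its build and /usr/bin/curl is the OS package; adjust the paths if yours differ:

```shell
#!/bin/sh
# Print every curl found and its version line; if the first entry is an
# old OS curl, the backup scripts may be picking it up instead of the
# CustomBuild one.
for c in "$(command -v curl)" /usr/local/bin/curl /usr/bin/curl; do
    if [ -n "$c" ] && [ -x "$c" ]; then
        printf '%s -> ' "$c"
        "$c" --version | head -n1
    fi
done
```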

I hope this addresses your issue. Please let me know if not so that we can get a DirectAdmin ticket open and debug your issue further.
 
-A, --user-agent <name> Send User-Agent <name> to server
-v, --verbose       Make the operation more talkative
-V, --version       Show version number and quit
-w, --write-out <format> Use output FORMAT after completion
     --xattr         Store metadata in extended file attributes
[root@host custombuild]#

If you cannot use the pre-release version of the script, you can try using an older version of curl that does have the older --help output:

Code:
cd /usr/local/directadmin/custombuild
echo "curl:7.68.0:" > custom_versions.txt
./build curl
./build php n

You might also try manually testing to ensure the connection details are correct:
Code:
curl --ssl-reqd -k --tlsv1.1 --show-error --ftp-create-dirs --user "$user:$password" ftp://$ip

Another issue I've seen is the OS-supplied curl being used instead of the much more recent CustomBuild-managed curl. The version of curl needs to be new enough to have the updated behavior of the --tlsv1.1 flag: instead of requiring exactly the named version (TLSv1.1), it now treats it as a minimum, so it works with TLSv1.1 and newer.
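Since the fix depends on curl's newer "minimum version" interpretation of --tlsv1.1, a quick sanity check of the binary a script will actually run is a version comparison. This is just a sketch; the 7.54.0 cutoff is my reading of the curl changelog for when the --tlsv1.x options became minimums, so verify it against your curl's own documentation:

```shell
#!/bin/sh
# Sketch: check whether the curl in PATH is new enough for the
# "minimum version" behaviour of --tlsv1.1 (changed around curl 7.54.0;
# the cutoff here is an assumption -- verify against the curl changelog).

version_ge() {
    # True if version $1 >= version $2 (sort -V does version-aware ordering).
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

have=$(curl --version 2>/dev/null | awk 'NR==1 {print $2}')
if version_ge "${have:-0}" "7.54.0"; then
    echo "curl $have: --tlsv1.1 acts as a minimum version"
else
    echo "curl ${have:-not found}: too old, --tlsv1.1 may mean exactly TLS 1.1"
fi
```

The same check also catches the "wrong curl in PATH" case: if `have` reports the distro's old curl rather than the CustomBuild one, the scripts are not running the binary you rebuilt.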

I hope this addresses your issue. Please let me know if not so that we can get a DirectAdmin ticket open and debug your issue further.
Sorry for the late response, but I needed to test a couple of things.
I have changed my custom scripts with the updated code that you provided. (I also added my custom scripts as attachments.)
And here is where I am right now:

On my production server I get a "Backup was uploaded" 99% of the time; the other 1% of the time I get a curl 18 error with an FTP error of 426.
On my development server I always get a curl 18 error with FTP error 426. (Even though they both have the same custom script.)

I also retried the standard script that is provided with DirectAdmin itself, but then I always get a failed login on both servers. (This suggests the default script cannot handle Explicit FTPS. If I am wrong, please let me know.)

My thoughts (but with no proof):
To me it seems like something happens to the SSL connection while the files are being transferred to the backup server.
It is as if it falls back to plain FTP during the transfer. (Which is not allowed with Explicit FTPS, of course.)

So if anyone can help or give pointers, that would be very helpful, since I do not know where to look anymore.
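For reference on the two error numbers above: curl exit 18 is CURLE_PARTIAL_FILE (the transfer ended before the expected amount of data arrived), which lines up with the FTP 426 "connection closed; transfer aborted" reply. A small helper like this (just a sketch, not part of any DirectAdmin script) can make the backup log easier to read; the wording of the hints is mine:

```shell
#!/bin/sh
# Hedged helper: map the curl exit codes seen in this thread to readable
# hints. The code numbers are from curl's documented EXIT CODES list.
explain_curl_exit() {
    case "$1" in
        0)  echo "success" ;;
        18) echo "partial file: server aborted mid-transfer (matches FTP 426)" ;;
        67) echo "login denied: check credentials and AUTH TLS support" ;;
        *)  echo "curl exit code $1: see the EXIT CODES section of man curl" ;;
    esac
}

explain_curl_exit 18
```

In an upload script you would call it right after curl, e.g. `curl ... ; explain_curl_exit $?`.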
 

Attachments

  • ftp_list.php.txt
    3.2 KB · Views: 4
  • ftp_upload.php.txt
    3.8 KB · Views: 5
And just a quick WARNING:
I made the script NOT verify the SSL certificate/CA, for easier testing. (I use Let's Encrypt certificates, but support for them on the backend is a bit hit or miss in my experience.) This way it never fails because of the CA, but in a normal environment you should always verify the CA where possible.
 
Quick update:

After testing the script even more, I did not find any issues with it.
But I still got the same errors and behavior.
So my next step was to look at my FTP server. I did this by changing from Debian to Windows, using FileZilla Server as the FTP server software. (This was to rule out whether vsFTPd is also part of my issues.)
After that I did a quick one-time backup. This completed without any errors on both my production install and my development install. (Do note that I still used my custom script, since the default script still did not work after this change.)
I have now configured 2 backups a day to see over the weekend whether any errors happen, but I am expecting not to find any.
 
Feel free to open a support ticket to log any errors you are still having with the ftp_* scripts after updating to pre-release. If you are still having errors, others may be as well, so it would benefit everyone if we could check and, if necessary, log a bug report to get it fixed for all.
Thanks!
 
Final Update (?):
After a lot more testing I have concluded that my vsFTPd installation was the cause of the error.
This is supported by the fact that when I switch to ProFTPD I get different errors (FTP error 450, to be exact).
The errors are similar, though: both indicate an unexpected file transfer interruption.

So for this configuration/setup I do not have a solution. The whole issue seems to be caused by my FTP server, but since I never got it working I cannot 100% confirm or rule that out. I am almost sure it is an FTP server software issue, since on my Windows Server 2019 machine with FileZilla Server it works with the changes I made to the script.
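One vsFTPd behaviour worth checking in exactly this situation: by default vsFTPd sets require_ssl_reuse=YES, meaning the data connection's TLS session must be a reuse of the control connection's session, and clients that open a fresh TLS session for the data channel get the transfer aborted mid-stream, which surfaces as the 426/curl 18 pattern above. A hedged vsftpd.conf fragment to test with (whether this matches your vsFTPd build is an assumption; check the man page for your version):

```
# vsftpd.conf fragment (sketch): relax the data-channel TLS session-reuse
# requirement that commonly aborts FTPS transfers from some clients.
ssl_enable=YES
require_ssl_reuse=NO
```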

As for how I fixed it in my case:
I changed to SFTP (SSH) with some really big changes to prevent shell access. (As I said before, I am not a fan of SFTP since in most cases it also allows shell access. But it works, so...)

And:
Could someone tell me if I should mark this as resolved? I have no clue whether I should.
It is not really solved, but it also kind of is.
 
@realcryptonight
"with some really big changes to prevent shell access."

Which? ;)
I currently only have the changes in my development branch, but here is the list:

After I have a couple of successful deployments I will merge it into master.
 
Do note that I also have safety changes in the add_ssh_user.sh script.
See: https://github.com/realcryptonight/...euse-scripts/standard/scripts/add_ssh_user.sh
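For anyone taking the same SFTP route: most of the "prevent shell access" hardening can be done in sshd_config itself rather than in scripts. This is a generic OpenSSH sketch, not taken from the linked add_ssh_user.sh; the user name and path are placeholders:

```
# sshd_config fragment (sketch): SFTP-only backup account with no shell,
# no forwarding, chrooted to a root-owned directory (an OpenSSH requirement
# for ChrootDirectory targets).
Match User backupuser
    ChrootDirectory /home/backupuser
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
    PermitTTY no
```

With ForceCommand internal-sftp, any attempt to open an interactive shell or run a command over SSH is replaced by the in-process SFTP server, so the account can only transfer files.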
 
Final update:

After weeks of testing and trying I just gave up and changed to SFTP (SSH-based) backups with the help of zEiter and his SFTP script.
I figured out that the issue was NOT the upload script but my FTP server. (Even though I still do not know why to this day.)
 