Hostname selection during install

kristian

On new servers I've installed recently, the setup script picks a generic server-<something>.da.direct hostname. The server already has a proper hostname, and it's valid in both forward and reverse DNS. Why does the install script insist on setting this third-party hostname? At first glance I can't find da.direct mentioned anywhere in the script - does that mean there are no checks for whether the server has a valid/sane hostname prior to selecting da.direct?

I suppose setting the environment variable DA_HOSTNAME prior to running setup.sh would resolve it, but that shouldn't be necessary when the server's hostname is already set up correctly.
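
For reference, something like this is what I mean (server.example.net being a placeholder for the server's real FQDN):

Code:
# Override the auto-detected hostname before running the DirectAdmin setup script
export DA_HOSTNAME=server.example.net
./setup.sh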

Anyone know what the deal is here?
 
Yes, if DA_HOSTNAME is empty, an auto-generated hostname based on the public IP address will be used. This helps us ensure we are able to issue a TLS certificate and have a TLS-capable DA install from the start (TLS at install time is now in the alpha release channel).

This is a good default; anyone who wants an explicit hostname should use the DA_HOSTNAME environment variable. Using a non-functional or improperly configured hostname is then up to the caller, since an explicit hostname was requested.

I understand this might be annoying for advanced users, but it saves a lot of trouble for less advanced users. Taking your feedback into consideration, we can add some preliminary checks to try reusing the existing hostname (y).
 
Yes, this is the tricky part. It is extremely hard to check locally whether a hostname is valid. Most distros add the server IP and hostname to /etc/hosts, which makes a simple hostname resolution check short-circuit and potentially return a false positive. Another layer of short-circuiting happens at the local DNS server: the hosting provider might allocate a link-local hostname. Forcing DNS checks through external DNS providers like 8.8.8.8 or 1.1.1.1 fails in countries like China, etc...
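
To illustrate the short-circuit problem (hostname and IP are made up): if /etc/hosts already maps the name, a local lookup happily "succeeds" even when public DNS knows nothing about it:

Code:
# /etc/hosts on many fresh installs contains something like:
#   203.0.113.10  server.example.net  server
getent hosts server.example.net          # answered from /etc/hosts -> looks "valid"
dig +short server.example.net @8.8.8.8   # asks public DNS directly -> may return nothing, or be blocked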

So the most reliable way to check it would involve a 3rd-party server: send the server hostname, and on the remote host make sure it resolves to the same IP address the request came from. This is still not ideal, since it needs to be performed over both IPv4 and IPv6 for higher accuracy, to avoid using a hostname that resolves to both IPv4 and IPv6 addresses but only works over one of them...
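
A rough sketch of what the remote side of such a check could do (purely illustrative, not an actual implementation): resolve both A and AAAA records for the claimed hostname and compare them with the source IP of the request:

Code:
#!/bin/sh
# Illustrative only: verify a claimed hostname against the IP the request came from.
# $1 = hostname claimed by the server, $2 = source IP seen by the remote host.
claimed="$1"
source_ip="$2"
for ip in $(dig +short A "$claimed") $(dig +short AAAA "$claimed"); do
    if [ "$ip" = "$source_ip" ]; then
        echo "hostname $claimed resolves to $source_ip - OK"
        exit 0
    fi
done
echo "hostname $claimed does not resolve to $source_ip" >&2
exit 1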

Also, we are working on a shift to make the DA hostname less relevant. At the moment the server hostname might not be in a DNS zone that is under DirectAdmin's control (@Richard G, this is the reason why we are not moving forward with DKIM for the server hostname: we cannot ensure we will be able to add the DKIM DNS records if the server hostname is not managed by DA DNS). Ideally the hostname should just be a way to open the DNS panel over TLS until some domain is added.
 
Ideally the hostname should just be a way to open the DNS panel over TLS until some domain is added.
Will that be sufficient? Because some system messages are also sent via the hostname.
As long as DKIM is not a hard requirement on the receiving side, it does not have to be an issue. Although I've seen a thread that would fix the DKIM issue when a hostname is created in the DA DNS manager (like was done automatically before).
It's not a big thing for most of us, I guess, to add a hostname manually into DA DNS if that would fix the problem.

However, if it were only to be used as a way to open the DNS panel until some domain is added, what about panel system messages and of course OS messages sent via the hostname?

But I don't see the benefit of entering some server-something.da.direct hostname, which isn't valid anyway either, instead of using the hostname command to check for a valid hostname.

Maybe I'm thinking the wrong way here, but that's because I don't understand why some xxx.da.direct hostname is added now, which IMHO is not valid either, unless I'm mistaken.

Addition: is it really necessary to make this foolproof? Doesn't an admin have to be a bit of an admin to begin with and set up a good hostname?
 
Automatic hostnames in the da.direct zone are valid. They always point to the public IPv4 address of the server (or the IPv6 address if the host is IPv6-only). The only problem with them is that they might not be the one you wanted to use, but the hostname can be changed after the DA install if that is the case.

As you said, we still need the hostname for system-originated email. We are not working on this actively, but it could be mitigated to some degree by refusing to send any emails on behalf of the system (from the server hostname) and sending emails from some user-owned domain instead.

There is an eternal debate, with no correct answer, on how much to foolproof the product. Whenever possible we would like to make things as simple as possible while still leaving a way for advanced users to do what they want. Advanced users are usually in a better position to go the extra mile compared to less experienced users. Of course, making both happy is the holy grail :) Verification of a valid FQDN at install time, with a fallback to an auto-generated one, probably falls into this category.
 
but it could be mitigated to some degree by refusing to send any emails on behalf of the system (from the server hostname) and sending emails from some user-owned domain instead.
I would like to ask you to reconsider something like that, as some OS messages could also be sent via the hostname. And it should just be possible, IMHO, maybe even for some external stuff. I don't know of any other panel refusing hostname messages either. Not that you have to comply, but I don't think it's a really good thing to prevent that in some way.

As for the rest, we're on the same page. ;)
 
I have just checked the DA install code, and it turns out we already have logic in place to use the existing hostname if it is valid. My initial assumption (without testing) was that we did not (otherwise, why would there be complaints?).

The current hostname detection logic works like this:
  1. If the DA_HOSTNAME environment variable is set - use it unconditionally.
  2. If uname -n does not contain a . (i.e. is not an FQDN) - use an auto-generated hostname from da.direct.
  3. If uname -n contains a . - perform a DNS lookup for it and iterate over the results. If any of the returned IP addresses matches the server's public or local IP address, use the server hostname; if none match, use an auto-generated hostname from da.direct.
The important point here is that we use the hostname as reported by the kernel (same as calling uname -n or hostname from the CLI). Fresh installations usually set the server hostname to a non-FQDN, for example to server instead of server.example.net, and keep the domain part in /etc/resolv.conf. In such a case rule 2 applies and the hostname will be auto-generated. I suppose @kristian tripped over exactly this case.
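
In shell terms, the above boils down to roughly this (a simplified sketch, not the actual installer code; assume PUBLIC_IP and LOCAL_IP were detected earlier):

Code:
# Simplified sketch of the detection order described above (illustrative only)
if [ -n "$DA_HOSTNAME" ]; then
    hostname="$DA_HOSTNAME"                      # 1. explicit override wins
elif ! uname -n | grep -q '\.'; then
    hostname="<auto-generated>.da.direct"        # 2. kernel hostname is not an FQDN
else
    kernel_name=$(uname -n)
    if dig +short "$kernel_name" | grep -qxF -e "$PUBLIC_IP" -e "$LOCAL_IP"; then
        hostname="$kernel_name"                  # 3. FQDN resolves to this server
    else
        hostname="<auto-generated>.da.direct"    #    ...otherwise fall back
    fi
fi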

The reason for using the hostname directly from the kernel (expecting the FQDN to be in the hostname) instead of trying to guess the domain is a combination of historical, portability and reliability reasons.

TL;DR: if hostname returns server and hostname -f returns server.example.net, you will get an auto-generated hostname from the da.direct zone. Ensure you store the FQDN in the hostname, and DA will use it if it resolves to your server IP.
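
In other words, make sure the kernel reports the FQDN before running setup.sh, e.g. (server.example.net being your real FQDN):

Code:
# Store the FQDN in the kernel hostname so uname -n / hostname return it
hostnamectl set-hostname server.example.net   # on systemd-based distros
hostname                                      # should now print server.example.net
hostname -f                                   # should print the same FQDN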
 
My uname -n/hostname does indeed contain only the short hostname, while hostname -f returns the FQDN. This is how I generally prefer the setup to be (and it is the "Debian way"), but it also really doesn't matter all that much, and for DirectAdmin servers I try to accept what I need to accept. ;)

I think that maybe for a short hostname as returned by uname -n in step 2 above, some additional tests could be done to try to obtain the FQDN and then perform the DNS checks from step 3 on it? Whether that should be hostname -f or something else, I don't really know in terms of portability.
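
Something along these lines is what I had in mind for step 2, though I have no idea how portable it is:

Code:
# Try to derive an FQDN when the kernel hostname is short (portability untested)
short=$(uname -n)
fqdn=$(hostname -f 2>/dev/null)                              # often reads /etc/hosts
[ -z "$fqdn" ] && fqdn=$(getent hosts "$short" | awk '{print $2; exit}')
case "$fqdn" in
    *.*) echo "candidate FQDN: $fqdn - run the step 3 DNS check on it" ;;
    *)   echo "no FQDN found - fall back to da.direct" ;;
esac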

My Ansible playbook failed at a later point in the run because of this third-party hostname: when connecting to the API, the actual hostname didn't match the SSL certificate, which was issued for the da.direct hostname. Frustrating! :)

I will force the hostname with DA_HOSTNAME - it's an easier adjustment for me than changing the way the system hostname is set up prior to DirectAdmin installation.

Thanks for the replies and information! :)
 
On a related note, it seems /etc/hosts is updated to contain a line such as

Code:
<public ip> <fqdn> <hostname>

I don't like having public addresses in the /etc/hosts file; I'd prefer this line to use 127.0.0.1 or 127.0.1.1. Can I affect this in any way during setup?
 
On a related note, it seems /etc/hosts is updated to contain a line such as
Hmm... would that change already-present values? Because on our default OS installs it always contains the localhost values and the server IP with the FQDN hostname.
Or is this only changed by DA if no FQDN hostname is found during the check?

P.S. the hostname detection logic is fine the way you stated it in post #8, thanks for checking @fln.
 
To be completely honest, I do not like the public IP in /etc/hosts either. However, this is how DA (and many Linux distros) have behaved for ages, so it is hard to tell what exactly would break (or not break) if we switched to using 127.0.0.1 there. The DA installer adds a record with the public IP if it is not already present, as @kristian pointed out.
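
For the curious, the effect is roughly equivalent to something like this (a simplified illustration with placeholder values, not the installer's actual code):

Code:
# Append a hosts entry only if the FQDN is not already present (illustrative)
FQDN=server.example.net
PUBLIC_IP=203.0.113.10
if ! grep -qwF "$FQDN" /etc/hosts; then
    echo "$PUBLIC_IP $FQDN ${FQDN%%.*}" >> /etc/hosts
fi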

The historical reason for having the hostname in /etc/hosts was to allow requests to the hostname to work even without network connectivity (access to DNS servers): name resolution would short-circuit via /etc/hosts instead of failing. This is not that big of an issue nowadays compared to decades ago. Internet access is now ubiquitous, and a server without internet access is useless anyway :).


The difference between having the public IP vs 127.0.0.1 there is quite murky. If each and every service on your host is listening on :: or 0.0.0.0, it does not matter much. But once you start binding services to one particular IP, you might run into problems.

If you have a service listening on 127.0.0.1, for example Redis, and your public IP is mapped to your domain name in /etc/hosts, you can reach it with redis-cli -h localhost, but redis-cli -h server.example.net would fail. This kind of works as expected if you think about it from a normal DNS resolution perspective: we try connecting to the same IP we would use if there were no /etc/hosts entry at all, or if we were connecting from a 3rd-party server. The same goes the other way around: if you have an HTTP server listening not on all interfaces but only on your public IP address, connecting to it via localhost would not work, while connecting via the FQDN would work.

Once we start mapping the loopback IP to your hostname in /etc/hosts, the behaviour becomes a bit less like what people would expect. Using the Redis example, you would be able to connect to it with both redis-cli -h localhost and redis-cli -h server.example.net. But if you had an HTTP server bound to your public IP, you would not be able to reach it from the local machine using the FQDN, while other machines could reach it just fine. This behaviour can be very unexpected.
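
To make the two scenarios concrete (placeholder IP 203.0.113.10, Redis listening on 127.0.0.1, an HTTP server bound to the public IP only):

Code:
# With "203.0.113.10 server.example.net" in /etc/hosts:
redis-cli -h localhost ping              # PONG (Redis listens on 127.0.0.1)
redis-cli -h server.example.net ping     # connection refused
curl http://server.example.net/          # works (httpd bound to 203.0.113.10)

# With "127.0.0.1 server.example.net" in /etc/hosts instead:
redis-cli -h server.example.net ping     # now PONG - may surprise people
curl http://server.example.net/          # fails locally, works from other machines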

All of these /etc/hosts hacks are just trying to mimic what public DNS servers do. Ideally you should have the same records in /etc/hosts for your server as you have in your public DNS service. The idea of putting a loopback IP there came from supporting laptops and workstations without a fixed IP address. Those machines do not have an FQDN anyway, so for the short hostname it is even better to resolve to 127.0.0.1 than to try chasing public IP addresses.
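
As an example of the two layouts being discussed (addresses and names are placeholders):

Code:
# Server with a fixed public IP - mirror what public DNS says:
127.0.0.1       localhost
203.0.113.10    server.example.net server

# Laptop/workstation without a fixed IP - Debian's 127.0.1.1 convention:
127.0.0.1       localhost
127.0.1.1       laptop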

@kristian I am a huge fan of the Debian way, despite DA sometimes straying away from it. The official Debian docs say:

Code:
...

For a system with a permanent IP address, that permanent IP address should be used here instead of 127.0.1.1.

For a system with a permanent IP address and a fully qualified domain name (FQDN) provided by the Domain Name System (DNS), that canonical host_name.domain_name should be used instead of just host_name.

...

I personally would not write entries to /etc/hosts at all and would always do a DNS lookup when the server name is used! However, this approach clashes with how hostname -f has historically been designed to work: it treats /etc/hosts as the source of truth for the FQDN!
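
You can see that behaviour directly (placeholder names): hostname -f derives the FQDN by resolving the short hostname, and /etc/hosts usually wins:

Code:
hostname                  # -> server
grep server /etc/hosts    # -> 127.0.1.1 server.example.net server
hostname -f               # -> server.example.net (taken from the /etc/hosts entry)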
 
Yeah, I'm more pragmatic in my approach to things these days than I once was, so whatever works is OK (within reason). It's interesting that the Debian docs actually say to use the public IP though; I must've missed that. I disagree with it, and I'm with you that DNS should be The One Place to do hostname lookups, no matter what they are. I've been bitten by entries in /etc/hosts left over by $someone, causing unexpected results/behaviour. It's just confusing. :) When hostnames and IP addresses are permanent, it tends not to be a big deal either way.

(That said, I like that the /etc/hosts option exists, e.g. when you want to be able to refer to internal-only hostnames that live on private IP addresses, but that's a whole other scenario.)
 
What about the default localhost DNS zone files? They no longer exist on a fresh (Debian 11) install. Do I need to recreate them in DA? When I change the server name via CP, DA doesn't create DNS for the new hostname.
 
When I change the server name via CP, DA doesn't create DNS for the new hostname.
It did before, but that probably stopped when DA stopped using the server's hostname (or whatever the user put in) for a DNS entry and started using its own ip.ad.re.ss.da.direct or something, for better compatibility on fresh installs.
So you have to change the DNS for the new hostname yourself.

I always do what DA did before and use a separate DNS entry for the hostname. If the hostname were to change (I can't think of a reason why we should do that), I would delete that entry and create a new one, which you might also want to do.

As for the default localhost zones, that seems normal. On a VPS I installed last week with AlmaLinux 8 there is also no localhost.zone file anymore, while it is present on an Alma server that was installed around two years ago.

But I would advise keeping at least the localhost entries in the /etc/hosts file, as some applications might look for them.
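
I.e. keep the standard lines in place, something like:

Code:
127.0.0.1   localhost
::1         localhost ip6-localhost ip6-loopback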
 