-- Slave DNS Solution --

Status
Not open for further replies.

Maverick
Verified User, joined Jan 26, 2005, 18 messages

Solution to Secondary DNS Issue:

Dear DA Community:

I purchased my first DA license last night and have been busy configuring and securing it. I have also been reading all the interesting suggestions and solutions that various members of the community have put forward. Here is what I suggest; I believe it should be very simple to implement, and if my shell-scripting knowledge hadn't let me down, I would have done it myself in a few hours:

I have tested this on RedHat EL 3 and it works, and I don't see why it shouldn't in other *nix environments.

OK here it goes:

1- DIRECTADMIN is the primary Name Server

2- Consider an arbitrary Linux box/VDS/whatever as the slave DNS.
Configure it so that named.conf and the "slaves" directory are in an isolated area on its hard disk.

You can call it /slaveconfdir/
(so it will contain the named.conf file and the "slaves" subfolder).

3- NFS-export "slaveconfdir" to the DIRECTADMIN server and NFS-mount it there, say at
/mnt/slaveconfdir
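For concreteness, the export/mount wiring might look like this (the slave's IP 192.0.2.1 and the DA server's IP 192.0.2.10 are illustrative placeholders, not values from any real setup):

```
# On the slave box, in /etc/exports - give the DA server write access,
# trusting root so DA's scripts can own the files they write:
/slaveconfdir 192.0.2.10(rw,no_root_squash,sync)

# then re-export:        exportfs -ra
# and on the DA server:  mount -t nfs 192.0.2.1:/slaveconfdir /mnt/slaveconfdir
```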

By now I hope you can see what I'm getting at.

4- All the DA script needs to do now is to add

zone "example.com" {
    type slave;
    file "/slaveconfdir/slaves/example.com.db";
    masters {
        ip.add.of.directadmin_server;
    };
};

to named.conf in the mounted "slaveconfdir" directory, hence writing directly to the configuration file of the slave server.


I did this manually, and the "slaves" directory was populated after BIND was reloaded on the slave box. A cron job could do this periodically.
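Step 4 above could be sketched as a small shell function; the paths, the one-line masters form, and the duplicate check are my assumptions for illustration, not an existing DA script:

```shell
#!/bin/sh
# add_slave_zone CONF DOMAIN MASTER_IP
# Append a slave zone stanza to the NFS-mounted named.conf, skipping
# domains that are already configured (a crude duplicate check).
add_slave_zone() {
    conf=$1; domain=$2; master_ip=$3
    if grep -q "zone \"$domain\"" "$conf" 2>/dev/null; then
        return 0    # already present; don't write a duplicate stanza
    fi
    cat >> "$conf" <<EOF
zone "$domain" {
    type slave;
    file "/slaveconfdir/slaves/$domain.db";
    masters { $master_ip; };
};
EOF
}

# Example (paths and IP illustrative):
# add_slave_zone /mnt/slaveconfdir/named.conf example.com 192.0.2.10
```

After each change, BIND on the slave still has to pick up the new config, e.g. via the cron-driven reload mentioned above.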



I really wish I were better at shell scripting so that I could implement this today; a script closely linked to DA's scripts would then make sure that the entry is removed from the slave server's conf file and "slaves" directory when it's no longer required.

I'd sure appreciate it if anyone could help me out with this.
This is really getting me down. :(

DA is brilliant, but the lack of this feature is a real let-down. And as you can see, it's really simple to implement.
 
Nice thinking, but how about this one:
You have multiple servers; on one server there's a customer of a reseller who transfers a certain domain directly to another account with you, on another server.
Now we have a slave server that doesn't know which entry is right, so BIND crashes...

So far there are already 2 different working solutions (this makes 3) for a slave DNS floating around. The problem is, the other 2 make use of HTTP for config transfers, so there is no need for a new service.


I admire the initiative. If you want, you can have my DNS replication solution to play around with, the problem above is the only real bug left, so far.


Note: you can also use DA on the slave server, with includes in the named.conf rule :)
 
Good point. If I understand you correctly, I think it should be the job of the script on each master server to make sure there are no duplicate entries. I should think that in the situation you describe, the new master server will have to change the following


zone "example.com" {
    type slave;
    file "/slaveconfdir/slaves/example.com.db";
    masters {
        ip.add.of.directadmin_server;
    };
};


to


zone "example.com" {
    type slave;
    file "/slaveconfdir/slaves/example.com.db";
    masters {
        ip.add.of.new_directadmin_server;
    };
};


So I guess what I missed was the "check for duplicate entries" step.

Again, this shouldn't be difficult, unless I'm missing something? :confused:

I'd appreciate any help. If you have a solution I'm all ears!

P.S. I believe an NFS mount would be more secure than FTP, don't you?
 
Help me understand two points:

1) how a DA script (if I'm following you correctly, you said a DA script, and I'm confused) can write to the slave server

2) how running a dupecheck on an individual master can find dupes on the slave server obtained from multiple masters.

I may be misunderstanding you and I'm willing to be educated.

I'm also waiting for my programmer to finish up the scripts I've already worked on.

If he can't do it by the end of the week I'll pass them on to someone else.

Jeff
 
Server1 = Slave DNS
Server2 = Master DNS1 + DA1 Server
Server3 = Master DNS2 + DA2 Server

Server1 - NFS-export "slaveconfdir" with write access (trust root) to Server2 and Server3. Please keep in mind that "slaveconfdir" contains nothing but named.conf and the "slaves" directory. (I'm not even sure the "slaves" directory needs to be there: if named on Server1 deletes a slave db file automatically when it can no longer find the zone in /slaveconfdir/named.conf, then the "slaves" directory need not be in the same folder as named.conf; otherwise DA on Server2 and Server3 would have to remove the slave db files if/when necessary.)

Server2 - Mount "slaveconfdir" on, say, /mnt/slaveconfdir
Server3 - Mount "slaveconfdir" on, say, /mnt/slaveconfdir

jlasman said:
1) how a DA script can write to the slave server
2) how running a dupecheck on an individual master can find dupes on the slave server obtained from multiple masters.

Perhaps I wasn't clear. If, say, "someclient.net" is to be moved from Server3 to Server2:

DA on Server2 will notice there is already an entry in "/mnt/slaveconfdir/named.conf". Therefore Server2 will register itself as the master for "someclient.net" and will start notifying Server1 of changes,

i.e.

from


zone "someclient.net" {
    type slave;
    file "/slaveconfdir/slaves/someclient.net.db";
    masters {
        ip.add.of.server3;
    };
};


to


zone "someclient.net" {
    type slave;
    file "/slaveconfdir/slaves/someclient.net.db";
    masters {
        ip.add.of.server2;
    };
};




Named on Server3 will stop notifying the slave DNS on Server1. The zone file on Server3 will be removed when the time is right. But DA on Server3 would be intelligent enough to realise Server3 is no longer the master (by examining "/mnt/slaveconfdir/named.conf", plus a flag indicating that "someclient.net" is moving out of it), and therefore it should remove neither the entry in "/mnt/slaveconfdir/named.conf" nor the slave zone file in "/mnt/slaveconfdir/slaves/".
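The master-switch edit described above could be sketched as a shell function; this assumes each stanza keeps its masters list on a single line, and all paths and IPs are illustrative, not part of any actual DA mechanism:

```shell
#!/bin/sh
# take_over_zone CONF DOMAIN NEW_MASTER_IP
# Rewrite the masters line inside one zone stanza of the mounted
# named.conf, leaving every other zone untouched.
take_over_zone() {
    conf=$1; domain=$2; new_ip=$3
    awk -v dom="$domain" -v ip="$new_ip" '
        index($0, "zone \"" dom "\"") { inzone = 1 }
        inzone && /masters/ { $0 = "    masters { " ip "; };" }
        inzone && /^};/ { inzone = 0 }
        { print }
    ' "$conf" > "$conf.tmp" && mv "$conf.tmp" "$conf"
}

# Example (illustrative): Server2 takes over someclient.net
# take_over_zone /mnt/slaveconfdir/named.conf someclient.net ip.add.of.server2
```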

I am not saying my thinking is spot-on; this is just quick brainstorming. I believe that if this idea is worked on, the DA community as a whole can benefit from it. I am not an expert in shell scripting, so I hope someone will pick this up and take things forward. I'll do whatever I can to help, because I believe putting all your eggs in one basket is not a good idea.
 
Your idea is about the same as Jeff's and mine, although the problem will still be downtime.

If you're moving something it will have downtime, but in this case, you're sure you have to do more than just move.

The problem is that you can't sync all servers at exactly the same millisecond. Adding a domain to Server3 while Server2 has stopped notifying Server1 but hasn't yet removed the config/zone on Server1 will cause Server1 to fail when Server3 uploads a new config and restarts BIND.
If this happens, in my latest version BIND will report an error to the logs and die.

In earlier versions I restarted BIND entirely, but reloading all zones is pointless and takes up CPU time.

The only way to do this correctly is to allow Server3 to rewrite, in real time, the config that Server2 created for Server1, and trigger an automatic reload of the modified config or something.
If that's possible, the system could also search for existing domains in the other slave server configs, and skip those domains if needed.

It's an interesting idea to create an 'I am always alive' connection between servers to do this, so that Server3 can read the entire config of Server2 without problems.
With my version of the system this is a bit hard, because all named.conf transfers are done over HTTP.
 
The problem is that you can't sync all servers at exactly the same millisecond. Adding a domain to Server3 while Server2 has stopped notifying Server1 but hasn't yet removed the config/zone on Server1 will cause Server1 to fail when Server3 uploads a new config and restarts BIND.

Perhaps I'm missing something here. I don't understand why you would need to update all servers at exactly the same time. Surely if we follow a procedure for migrating someclient.net from Server3 to Server2 it would be OK. Something like this:


1- Reduce TTL
2- Move data
3- Server3 - step down as master (as explained above), while making sure Server1 is aware of this change
4- Move data to Server2
5- Server2 - take over as master (as explained above), while making sure Server1 is aware of this change
6- Increase TTL after 24 hours
7- Remove data from Server3


It's just like migrating from one ISP to another, except the slave doesn't change. :)
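The TTL changes in steps 1 and 6 are just edits to the zone file's default TTL, plus a serial bump so the slaves notice; a sketch with illustrative values:

```
; Step 1 - at least one old-TTL period before the move:
$TTL 600        ; lowered from 86400 (24 hours); bump the SOA serial
                ; so Server1 transfers the change

; Step 6 - 24 hours after the move has settled:
$TTL 86400      ; back to the normal value, serial bumped again
```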
 
Let's see, where do I begin...

I suppose I'll start by admitting I don't like the idea of running NFS across the Internet.

And I'll continue by saying that the folks over at the bind-users list (run by ISC, who created BIND) still seem to believe that the best way to implement zone transfers is the one built into BIND.

And I continue to support that.

The good news is we're going into private beta testing on our solution this week; possibly as early as tonight or tomorrow.

And barring unforeseen difficulties, it shouldn't be more than another week before we go into public beta.

Jeff
 
I suppose I'll start by admitting I don't like the idea of running NFS across the Internet.

Jeff, I completely agree with you. Running NFS does have its drawbacks. At the end of the day we want it to be automated, and if DA can come up with a solution soon, NFS/FTP/HTTP or whatever, as long as it works I'm happy.

Good Luck with the Beta Testing.

Looking forward to the results.
--
 
Is there a good reason not to use a low TTL 24/7?

I use a TTL of 600 all the time, as I hate waiting a long time for changes to propagate.
 
Time To Live (TTL), as I'm sure you know, is the amount of time any nameserver is allowed to cache the data. After that time expires, the data must be discarded and fresh data requested from the authoritative name servers for a given domain name.

If the TTL is set to a low value, say 600, the disadvantages are:

1- you can't afford to have an outage longer than 600 seconds.
2- your name servers, as the authoritative name servers for your domains, will be under more pressure.

Hope that helps.
 
I am willing to rewrite some of my DNS replication code to make it a bit more readable and publish it in a howto.
I just don't know if anyone will use it, given the big fat warning that it can crash BIND on your slave server entirely...
So if you're still interested in using it for anything, please say so.

Chrysalis said:
I use a TTL of 600 all the time, as I hate waiting a long time for changes to propagate.
I use a TTL of 7200 and I am already getting warnings from the Dutch domain registry that I should make the TTL higher. (Somewhere around 4 hours minimum.)
 
Maverick said:
Time To Live (TTL), as I'm sure you know, is the amount of time any nameserver is allowed to cache the data. After that time expires, the data must be discarded and fresh data requested from the authoritative name servers for a given domain name.

If the TTL is set to a low value, say 600, the disadvantages are:

1- you can't afford to have an outage longer than 600 seconds.
2- your name servers, as the authoritative name servers for your domains, will be under more pressure.

Hope that helps.

Ahh, I see. Well, I have redundancy in my nameservers, so the risk to me is low, and I have never experienced a total DNS outage, but I see how it can be an increased risk.
 
Maverick said:
1- you can't afford to have an outage longer than 600 seconds.
Since this is not true, I'm wondering where you got the idea?

Thanks.

Jeff
 
I always thought expiry was the total length of time the data lives through an outage, and TTL is only how often it's checked for updates, but I've never been 100% sure. Since I run a minimum of 3 nameservers for each domain, I am fairly confident this won't be an issue for me.
 
jlasman said:
Since this is not true, I'm wondering where you got the idea?

Thanks.

Jeff

Clarification: as the issue was propagation and TTL, I thought it went without saying. Your name servers cannot be down for longer than the TTL if you change your zone files: the new data may not be reflected on some caching servers if the TTL is too short and you have an outage longer than the TTL during the update. Obviously your zone file will not expire before the expiry time, but the update may not be reflected in certain parts of the network.

Adding new records is usually not a problem, but if you wish to update your records this is the procedure you need to follow:

1- Generally, updating zone files needs to be planned
2- Reduce "Old-TTL" (the recommended value is 24 hrs) to "New-TTL" for the particular record(s) you wish to update, 24 hours beforehand
3- Update the record(s)
4- Reload the data across all your name servers
5- After "New-TTL" seconds have passed, increase the TTL to its original optimum level, i.e. "Old-TTL"

Basically, "let your data live long enough for it to propagate".

Chrysalis said:
but since I run a minimum of 3 nameservers for each domain, I am fairly confident this wont be an issue for me.

True, you have 3 name servers and this may not be a problem for you, but just imagine what would happen if all name servers had short TTLs. DNS traffic is crazy as it is. Therefore it is recommended to keep your configuration at an optimum level. :)
 
Chrysalis said:
I run a minimum of 3 nameservers for each domain, I am fairly confident this wont be an issue for me.

Just out of interest, are these domains on DirectAdmin? How did you manage that? On DirectAdmin all your nameservers for a given domain are essentially on one actual server. How did you get around this problem? I would be very interested to know. :) That is actually why I started this thread in the first place. DirectAdmin is probably the best option around: it's simple and clean, and its overhead is much lower than its rivals'. But the lack of DNS redundancy is not so good. I suppose it could be argued that if your DirectAdmin server goes down, it wouldn't make any difference whether you have one name server... or ten :) but that's beside the point...
 
I use the hidden-master DNS method; this allows DirectAdmin users to keep control of their DNS.

I have 3 slave servers, each at a different provider for redundancy; they are all slaves using the DirectAdmin server as the master. The DirectAdmin server has no glue record or NS record.
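A hidden-master arrangement like this is plain BIND configuration; a minimal sketch, with every IP and hostname an illustrative placeholder:

```
// On the hidden master (the DirectAdmin box, e.g. 192.0.2.10):
// notify the public slaves and allow them to transfer, but list
// only the slaves in the domains' NS records.
options {
    notify yes;
    allow-transfer { 192.0.2.1; 192.0.2.2; 192.0.2.3; };
};

// On each public slave:
zone "example.com" {
    type slave;
    file "slaves/example.com.db";
    masters { 192.0.2.10; };
};
```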
 
Interesting approach. But how do your slaves know that a domain name has been added to or removed from your DirectAdmin server? :)
 
Maverick said:
Your name servers cannot be down for longer than TTL if you change your zone files.
I still have no idea what you mean, since as far as I understand DNS you can't change your zone files at all while your servers are down.
The new data may not be reflected on some caching servers if TTL is too short and you have an outage longer than TTL during update.
I'm still quite confused. New data transfer to slave servers should have nothing to do with TTL, and in BIND, doesn't. Perhaps you're using some other nameserver software?

What you call "new data" is pulled by slave nameservers according to the value of "refresh", not of TTL. And the "expire" field tells the slave nameserver how long to continue delivering data, in the absence of being able to get an update, before it considers the data too stale to be reliable.

The TTL field is only used by non-authoritative nameservers. Master and slave nameservers shouldn't care about it at all.
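For reference, the "refresh" and "expire" fields above live in the zone's SOA record (all values illustrative):

```
@  IN  SOA  ns1.example.com. hostmaster.example.com. (
        2005020101  ; serial  - slaves transfer the zone when this increases
        14400       ; refresh - how often slaves poll the master for changes
        3600        ; retry   - how soon to retry after a failed refresh
        1209600     ; expire  - how long a slave serves data without a refresh
        86400 )     ; minimum - negative-caching TTL
```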
Obviousely your zone file will not expire before the expiry, but the update may not be reflected on certain parts of the network
Unless I'm missing something this information is a red herring; I don't understand how it makes any point concerning short vs long TTLs.
Basically "let your data live long enough for it to propagate".
I'm not sure what you mean by this either, since the shorter the TTL the less time it will take data to propagate across the 'net.
just imagine what would happen if all name servers had short TTLs. DNS traffic is crazy as it is.
That depends entirely on what you mean by crazy. One of my associates is happily serving DNS for 3,000 domains on a Cobalt RaQ 1, and that was not a powerful machine.

Another correspondent, who runs a Florida-based ISP, recently told me he hosts DNS for 7,500 domains on a PII with 256MB.

There are an awful lot of DNS queries on the 'net, and each one takes one packet in each direction. Since each DNS request is generally followed by an awful lot of packets moving between machines, DNS makes up an almost unmeasurable fraction of data transit on the 'net.
Therefore it is recommended to keep your configuration at an optimum level. :)
The question then becomes what's an optimum level? Perhaps you have your ideas; I know that after hosting DNS for thousands of domains since the middle of 1999, and a few hundred incidentally before that, I have mine.

Jeff
 