[Exim order issue] The bad_sender_hosts_ip does not work with CIDRs?

Btw - may I ask, is there an advantage to using bad_sender_hosts_ip over the CSF IP deny list? Is it faster?
The advantage is less resource usage, if I'm not mistaken. The CSF deny list creates iptables rules which are always loaded, so they use resources continuously.
The bad_sender_hosts_ip is just a file which is only checked when some system tries to send mail, so it's not continuously loaded.

Again.... if I'm not mistaken. I'm not 100% sure.
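For reference, a minimal sketch of what that file could look like, assuming the usual /etc/virtual/bad_sender_hosts_ip location with one entry per line (whether a CIDR range actually matches depends on how exim.conf looks the file up, which is exactly the question in the title):
Code:
# /etc/virtual/bad_sender_hosts_ip (assumed path), one sending host per line
203.0.113.45
198.51.100.0/24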
Also, for myself I like to have a better overview of why a system is present in the blacklist. If it only sends spam, it's not always necessary to have the system blocked completely.
And I also don't have to remember to use the "do not delete" addition to prevent entries being removed when I remove all blacklists in CSF, which I do once in a while.

As for possible ranges, or whether it will slow things down, I can't answer because I don't know.
But it seems logical that if it contains a very large number of IPs or ranges, Exim has to search longer and it might slow down a bit depending on the system resources. But that is just a thought I have about it, not a fact.
Having said that, personally I add this whitelist to the conditions for the RBL.
How do you do that? I'd rather not edit exim.conf because it can be overwritten, and I don't want to chattr it, because if there is a new one, it's mostly improved.
At this moment I've got a /etc/exim.strings.conf.custom file which contains blacklists, but not whitelists.
Code:
RBL_DNS_LIST==cbl.abuseat.org : bl.spamcop.net : b.barracudacentral.org : bl.mxrbl.com : zen.spamhaus.org
I could add hostkarma.junkemailfilter.com in there, but if that is a whitelist, would that work?
 
@Swift-AU There we go, the first false positive is already here. Facebook is on SORBS and, it seems, on Spamhaus too.
Code:
2022-10-27 06:45:11 H=69-171-232-151.mail-mail.facebook.com [69.171.232.151] X=TLS1.2:ECDHE-RSA-AES128-GCM-SHA256:128 CV=no F=<advertise-noreply@support.facebook.com> rejected RCPT <[email protected]>: Email blocked by zen.spamhaus.org (127.0.0.3)
2022-10-27 06:45:12 H=69-171-232-151.mail-mail.facebook.com [69.171.232.151] incomplete transaction (RSET) from <[email protected]>
The 69.171.232.151 IP is from Facebook.

There are also some good blocks, but also other false positives, so I will remove Spamhaus again.
 
I could add hostkarma.junkemailfilter.com in there, but if that is a whitelist, would that work?

The whitelist cannot be added to RBL_DNS_LIST, as it would be treated as a blacklist. You would need to edit exim.conf, and reapply your edit after updating it.
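As a hedged illustration only (not DirectAdmin's actual configuration), a whitelist is normally expressed in exim.conf as a negated condition on the RBL deny, so the deny is skipped when hostkarma returns its documented whitelist code 127.0.0.1:
Code:
# hypothetical sketch inside acl_check_rcpt, not the shipped exim.conf
deny    message   = Email blocked by $dnslist_domain ($dnslist_value)
        !dnslists = hostkarma.junkemailfilter.com=127.0.0.1
        dnslists  = RBL_DNS_LIST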
 
The 69.171.232.151 IP is from Facebook.

Over 1400 SPAM hits for this entry on SORBS. Seems pretty spammy to me :LOL:

It's been removed from zen.spamhaus.org already, so the message would be re-sent and delivered to you. Are you really worried about a message from Facebook being delayed a few hours?
 
Are you really worried about a message from Facebook being delayed a few hours?
Not really, but it's not for me. And even then I wouldn't really mind, but if it's refused as spam, it probably won't be delivered again.
Anyway, as said, I also discovered another false positive again, from a normal domain.

OK, I enabled it again and will test a bit longer. I don't have much trust in Spamhaus due to experiences in the past. :)
 
FYI also @johannes I removed Spamhaus again. Facebook notifications were blocked again and customers started complaining they can't receive their FB mail. Spamhaus is the only one blocking them, so I stopped using it.
 
I too removed Spamhaus and Hostkarma because of too many false positives. Maybe it's only here in Europe that they are getting it wrong.
 
Hi guys,

Thanks for the report.
I've shifted the check around.
The new exim.conf is here:

with the diff:
I ended up pushing the "accept" of "senders = :" to after all of the various denials to prevent abuse of that functionality.
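Roughly, and purely as an illustration of the new ordering rather than the actual diff, the result looks like this inside the ACL:
Code:
# sketch of the reordering, not the actual DirectAdmin diff
deny    message  = Email blocked by $dnslist_domain ($dnslist_value)
        dnslists = RBL_DNS_LIST
# ...the other deny statements...
# the null-sender accept now sits after the denials instead of before them,
# so mail with an empty envelope sender can no longer skip those checks
accept  senders = :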
Let us know if you spot anything else (please do bump us with a ticket as we don't check the forums as often as tickets)

Thanks!
John
 
Maybe it's only here in Europe that they are getting it wrong.

I think maybe that's the case. Here in Australia I've been checking my logs and no Facebook mail has been rejected. I'm using about 6 different RBLs and haven't had a false positive that I'm aware of in a long time.
 
Thanks for the report.
I've shifted the check around.
You're welcome and thank you very much for fixing it!!

As for the ticket, I can use the ticket system on one license if I'm correct, but I don't want to disturb you guys too much, and since fln and smtalk are on the forum regularly I thought mentioning them here would work too.
I'll send in a ticket next time. ;)
Thank you!
 
@DirectAdmin Support When will this file be pushed? I don't see it yet in the spamblocker versions list (it ends with 4.5.42), nor when I do a ./build update.

I presume this will be something for tonight or tomorrow?
Or, due to the new way of working without versions.txt, will this be done on the next DA update?
 
The advantage is less resource usage, if I'm not mistaken

On that note, the absolute least resources would be to blackhole an IP range. You don't get the benefit of seeing how much you rejected, but if we're counting CPU cycles, and I totally get that, blackholing is the absolute winner.
 
would be to blackhole an IP range.
Blackholing can be done in two ways, so which way do you mean? Do you mean blocking in the firewall?
If yes, then isn't it CPU cycles versus memory? Because Exim does not run continuously, while iptables does. But you could be right about CPU cycles when using a lot of CIDR blocks in the Exim block file. I've only been thinking of the memory used, not of CPU cycles.
 
Blackholing can be done in two ways, so which way do you mean? Do you mean blocking in the firewall?
If yes, then isn't it CPU cycles versus memory? Because Exim does not run continuously, while iptables does. But you could be right about CPU cycles when using a lot of CIDR blocks in the Exim block file. I've only been thinking of the memory used, not of CPU cycles.

This is what you want:

ip route add blackhole 1.2.3.4/5

This is the lowest impact way to block an IP range.
 
OK, I just googled it myself to understand what it is, and I got: "All the IP packets destined for the destination address are discarded without notifying the source host. This route is called a blackhole route. When the network is subject to attacks, you can configure a blackhole route to discard packets destined for a certain address."
And to do it, it's a command on the shell; docs: https://lowendbox.com/blog/linux-blackhole-tutorial-adding-and-removing-a-null-route/
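From that tutorial, adding and removing such a route looks like this (the range below is only a placeholder):
Code:
# add a blackhole route for a range, then remove it again
ip route add blackhole 198.51.100.0/24
ip route del blackhole 198.51.100.0/24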
Just one question @mxroute - is this also saved somewhere as a file? Can I see how many I have maybe already blocked?
 
OK, I just googled it myself to understand what it is, and I got: "All the IP packets destined for the destination address are discarded without notifying the source host. This route is called a blackhole route. When the network is subject to attacks, you can configure a blackhole route to discard packets destined for a certain address."
And to do it, it's a command on the shell; docs: https://lowendbox.com/blog/linux-blackhole-tutorial-adding-and-removing-a-null-route/
Just one question @mxroute - is this also saved somewhere as a file? Can I see how many I have maybe already blocked?
You'd do:

ip route | grep blackhole
 
ip route | grep blackhole
So it's only in memory? Or is it also to be found in some file?

Like in the /etc/sysconfig/network-scripts/route-eth0 or something like that?
Because that command must read it from somewhere, right?
 
So it's only in memory? Or is it also to be found in some file?

Like in the /etc/sysconfig/network-scripts/route-eth0 or something like that?
Because that command must read it from somewhere, right?

Much like iptables by its defaults, the rules you add manually don't persist unless you force them. I've never actually tried to configure a persistent blackhole, as by the time I finally reboot, whatever problem I had with them is usually long gone. So I never really spent any effort on figuring out how I'd want to go about making them persist through reboots.

I suppose, if there's no easier way, one could make a file list and have a cron job iterate through it for blackholes; a redundant entry isn't added anyway, so no real loss. Though if the file grew large enough, that cron job might reverse the concept of it being the lowest overhead.
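Purely as a sketch of that idea (the file name and schedule are made up), a small cron script could reapply the routes from a list:
Code:
#!/bin/sh
# hypothetical /etc/cron.hourly/blackholes: reapply blackhole routes from
# /etc/blackhole.list (assumed name), one IP or CIDR per line
while read -r net; do
    case "$net" in ""|\#*) continue ;; esac    # skip blank lines and comments
    ip route add blackhole "$net" 2>/dev/null  # already-present routes just error out
done < /etc/blackhole.list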
 