Community ModSecurity Rules (Apache / DirectAdmin) – Feedback & Contributions Welcome

Hostking - Verified User
Joined: Jan 29, 2021 | Messages: 142 | Location: South Africa
Calling all server admins (ModSecurity / Apache / DirectAdmin)

Following the positive feedback on the SpamAssassin thread, I’m starting a similar community-driven ModSecurity ruleset — this time focused on Apache/DirectAdmin environments.

The goal is to build a practical, real-world rule collection that works well in shared hosting setups — without breaking legitimate traffic.

Looking for contributions such as:
- Rules with low false positives (critical for shared hosting)
- Proven rules for bot abuse, crawlers, and bad user-agents
- Protection against common web exploits (LFI, RFI, SQLi, XSS, etc.)
- Smart rate-limiting / behavioral rules
- Useful whitelisting techniques to reduce noise

Focus areas:
- WooCommerce / WordPress abuse (cart spam, fake requests, etc.)
- AI crawlers / aggressive bots (e.g. meta-externalagent)
- Lightweight rules that won't heavily impact performance
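On the whitelisting side, one technique that works well in shared hosting is disabling a noisy rule only for the path that triggers it, rather than globally. A minimal sketch (the rule ID and path below are placeholders, not from any real ruleset):

```apache
# Disable one specific rule only under /wp-admin/ instead of server-wide
<LocationMatch "^/wp-admin/">
    SecRuleRemoveById 999001
</LocationMatch>
```

This keeps the rule active everywhere else, which cuts false positives without losing coverage.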

⚠️ Important:
When sharing rules, please specify whether they are for:

Imunify360, or
Standard ModSecurity (Apache / standalone ModSec / OWASP CRS)

This is important because rule handling and file locations differ depending on the setup.

DirectAdmin / Imunify360 notes (important)

For DirectAdmin users, it’s recommended to store custom rules in a separate directory (e.g. /etc/custom_modsecurity.d/) and include them via:

Code:
/etc/httpd/conf/extra/httpd-includes.conf

This ensures your rules are not overwritten during updates or ruleset reinstalls.

Example include:

Code:
<IfModule security2_module>
    IncludeOptional /etc/custom_modsecurity.d/*.conf
</IfModule>

Then restart Apache:

Code:
systemctl restart httpd

Helpful reference (bot blocking):

There’s also a useful guide on blocking bad bots with ModSecurity on DirectAdmin:
https://www.vpsbasics.com/security/how-to-block-bad-bots-using-modsecurity-with-directadmin/

Worth checking if you’re dealing with crawler abuse — especially for WooCommerce sites.

Testing & monitoring:

Always test rules before rolling out globally. You can monitor hits via:

Code:
tail -f /var/log/httpd/error_log

Or your ModSecurity audit logs depending on setup.

Code:
tail -f /var/log/httpd/modsec_audit.log
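Once the audit log is flowing, it helps to see which rule IDs actually fire. A rough sketch of pulling IDs out of audit-log lines with sed (the sample line below is hypothetical - the exact format depends on your ModSecurity version and SecAuditLogParts settings):

```shell
# Hypothetical audit-log message line; real format varies by setup
line='Message: Access denied with code 403 (phase 2). [id "999000"] [msg "Custom WAF Rules: WEB CRAWLER/BAD BOT"]'

# Extract the rule ID that fired
echo "$line" | sed -n 's/.*\[id "\([0-9]*\)"\].*/\1/p'
```

Piping the whole log through the same sed and into `sort | uniq -c | sort -rn` gives a quick hit count per rule, which is handy when tuning for false positives.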

If you have rules that have worked well in production, please share — the goal is to build something practical and usable for everyone running shared or high-traffic environments.
 
On this particular server we run Imunify360 with Apache/ModSecurity, so I currently have the following in "/etc/custom_modsecurity.d/custom_rules.conf":

Code:
# Aggressive Bots Slowing Down Websites (Imunify360)

SecRule REQUEST_HEADERS:User-Agent "@rx Amazonbot" "id:88345386,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for Amazon AI bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx anthropic-ai" "id:88345387,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for Anthropic AI bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx Applebot" "id:88345388,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for Applebot AI bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx ChatGPT-User" "id:88345389,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for ChatGPT AI bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx ClaudeBot" "id:88345390,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for Claude AI bots',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx DuckAssistbot" "id:88345391,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for DuckDuckGo AI bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx Factset_spyderbot" "id:88345392,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for FactSet AI bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx Google-CloudVertexBot" "id:88345393,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for Vertex AI bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx GoogleOther/" "id:88345394,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for Google AI bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx GPTBot" "id:88345395,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for GPTBot AI bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx ICCCrawler" "id:88345396,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for ICC-Crawler AI bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx Meta-ExternalAgent" "id:88345397,phase:1,pass,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for Meta AI bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx OAI-SearchBot" "id:88345398,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for OpenAI AI search bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx Perplexity-User" "id:88345399,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for Perplexity AI bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx PerplexityBot" "id:88345400,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for Perplexity AI bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx PetalBot" "id:88345401,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for Petal AI bot',tag:'service_im360'"
SecRule REQUEST_HEADERS:User-Agent "@rx QualifiedBot" "id:88345402,phase:1,block,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for Qualified AI bot',tag:'service_im360'"

# Lowercase variant - @rx is case-sensitive by default, so both spellings need a rule
SecRule REQUEST_HEADERS:User-Agent "@rx meta-externalagent" "id:99345397,phase:1,pass,nolog,auditlog,severity:5,t:none,msg:'IM360 WAF: Request rate tracking for Meta AI bot',tag:'service_im360'"

# General Protection - Outdated Browsers and Windows Versions (Apache)

SecRule REQUEST_HEADERS:User-Agent "(Windows 95|Windows NT 4\.0|MSIE [1-8]\.|Opera/[0-9]\.)" "id:9000100,phase:1,deny,status:403,log,msg:'Fake legacy browser UA blocked'"

and the following in /etc/custom_modsecurity.d/meta_bot_abuse.conf:

Code:
# Block Meta crawler abuse on sensitive WooCommerce endpoints (Apache)
SecRule REQUEST_URI "@rx /(cart|my-account|checkout)" "id:88346002,phase:2,deny,status:403,log,msg:'Block Meta WooCommerce abuse',t:none,chain"
SecRule REQUEST_HEADERS:User-Agent "@rx (?i)meta-externalagent" "chain"
SecRule QUERY_STRING "@rx (add-to-cart|remove_item)"
 
For bad bots, I use a text file listing all the bots I want to block, which keeps the SecRule itself clean:

Code:
# Block bad bots

SecRule REQUEST_HEADERS:User-Agent "@pmFromFile bad_bot_list.txt" "phase:2,t:none,t:lowercase,log,deny,severity:2,status:403,id:999000,msg:'Custom WAF Rules: WEB CRAWLER/BAD BOT'"
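For reference, the @pmFromFile list is just one match phrase per line. The @pm family matches case-insensitively as far as I know, but with t:lowercase in the rule, lowercase entries are the safe choice. The entries below are illustrative examples, not my production list:

```
# bad_bot_list.txt - one lowercase phrase per line (example entries only)
mj12bot
ahrefsbot
semrushbot
dotbot
```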

# Block old browser

SecRule REQUEST_HEADERS:User-Agent "@rx chrome/(?:[0-9]{1,2}\.|10[0-9]\.|11[0-1]\.)" \
"id:999001,\
phase:2,\
block,\
t:lowercase,\
severity:2,\
msg:'Outdated Browser - Chrome Version < 112 Blocked',\
logdata:'Matched UA: %{REQUEST_HEADERS.User-Agent}'"
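Before deploying a version-matching regex like this, it's easy to sanity-check it offline with plain grep -E (UAs are lowercased here the same way t:lowercase would do it):

```shell
# Same pattern as the rule above, tested against sample UA fragments
pattern='chrome/([0-9]{1,2}\.|10[0-9]\.|11[0-1]\.)'
for ua in 'chrome/99.0' 'chrome/111.0.0.0' 'chrome/112.0.0.0'; do
    if echo "$ua" | grep -Eq "$pattern"; then
        echo "$ua -> blocked"
    else
        echo "$ua -> allowed"
    fi
done
# -> chrome/99.0 -> blocked
# -> chrome/111.0.0.0 -> blocked
# -> chrome/112.0.0.0 -> allowed
```

This confirms the cut-off sits exactly at Chrome 112: older versions match, newer ones pass through.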
 
# Block wp-file-manager and fileorganizer access - often installed by attackers to compromise customers' sites. Allow on request only.
SecRule REQUEST_URI "@rx wp-file-manager|fileorganizer" "id:1000001,phase:2,deny,status:403,msg:'Blocked dangerous file manager plugin'"
 
You shouldn't rely on the WAF for rate limiting - better to do a soft limit at the web server layer, e.g. Nginx's rate limiting.
 
I get your point regarding doing rate limiting at the web server layer (e.g. Nginx), and I agree that’s generally the most efficient place for it.

That said, this setup is primarily aimed at LiteSpeed / Apache environments rather than Nginx, where that layer isn’t always part of the stack.

In this case, the rate limiting aspect is handled through Imunify360, which tracks request behaviour and escalates abusive IPs over time (greylisting / blacklisting). The ModSecurity rules themselves are mainly used for detection and triggering that process, rather than acting as a standalone rate limiter.

These rules will still work on standard Apache + ModSecurity setups without Imunify360, but then they behave purely as matching/blocking rules. There’s no behavioural tracking or escalation, so actions like “pass” won’t result in rate limiting on their own. If needed, they can be switched to “deny” or “drop” for stricter enforcement, but that changes the behaviour from rate limiting to outright blocking.
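To make that concrete: on plain Apache + ModSecurity, a "pass" rule like the Meta one only audit-logs, so for hard enforcement it would be rewritten along these lines (the rule ID here is a placeholder - pick one that doesn't clash with your existing ranges):

```apache
# Standalone ModSecurity variant: outright block instead of Imunify360 rate tracking
SecRule REQUEST_HEADERS:User-Agent "@rx (?i)meta-externalagent" \
    "id:88346099,phase:1,deny,status:403,log,t:none,msg:'Meta crawler hard-blocked (no behavioural tracking available)'"
```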

Within the Imunify360 ecosystem, though, this approach provides a practical balance — especially in shared hosting environments — and has been working well for handling this type of abuse without needing additional layers in front.

[Attachment: ratetracking.png - Imunify360 rate-tracking view]
 