Last Reviewed and Updated on July 18, 2024
Introduction
Rate limiting is a foundational tactic in the ongoing fight against bot abuse, API spamming, and denial-of-service attacks. When traffic spikes are legitimate, your infrastructure should scale. But when they’re driven by aggressive crawlers or malicious bots, you need something smarter—something that stops the noise before it drains your resources or affects your users.
This guide dives deep into two popular approaches used on Apache- and NGINX-based web servers: mod_security for Apache and ngx_http_limit_req_module for NGINX. Both can throttle traffic, but they operate differently and suit different deployment goals.
Apache and mod_security
What is mod_security?
mod_security is an open-source web application firewall (WAF) engine designed for Apache (and also available for NGINX via ModSecurity 3). It allows you to write custom rules that inspect incoming HTTP requests and decide what to do: block, log, allow, or modify.
At its core, mod_security acts as a programmable filter that can block bots, identify attack patterns, and apply traffic throttling based on complex rules.
Implementing Rate Limiting with mod_security
To create a rate-limiting rule, you need to define a counter that tracks the number of requests coming from a source (usually an IP address) and take action when a threshold is reached.
Example Configuration:
SecAction "id:900001, phase:1, nolog, pass, setvar:tx.req_counter=+1"
SecRule TX:req_counter "@gt 100" "id:900002, phase:1, deny, status:429, msg:'Rate limit exceeded'"
What this does:
- The first rule runs at the beginning of each request (phase 1) and silently increments a transaction variable, req_counter.
- The second rule checks whether that counter exceeds 100 and, if so, responds with HTTP 429 Too Many Requests.
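The increment-then-compare logic above is a fixed-window counter. A minimal Python sketch of the same idea (the FixedWindowLimiter class and its parameters are illustrative, not part of ModSecurity):

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Per-key request counter over a fixed time window, mirroring the
    'increment a counter, then compare to a threshold' pattern above."""

    def __init__(self, limit=100, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        # key -> [window_start_time, request_count]
        self.counters = defaultdict(lambda: [0.0, 0])

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.counters[key]
        if now - start >= self.window:
            # Window expired: reset, like a collection entry timing out.
            start, count = now, 0
        count += 1
        self.counters[key] = [start, count]
        return count <= self.limit  # False -> respond with HTTP 429
```

A caller would key this by client IP, e.g. `limiter.allow("203.0.113.7")`, and return a 429 whenever it yields False.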
Important considerations:
- This implementation is per transaction, so without persistence, counters reset on each request. Use collections like IP or GLOBAL to make this rule meaningful.
- Persistent rate limiting (e.g., over a 1-minute window) requires rules like:
SecAction "id:900003, phase:1, pass, initcol:ip=%{REMOTE_ADDR},setvar:ip.req_counter=+1"
SecRule IP:req_counter "@gt 100" "id:900004, phase:1, deny, status:429, msg:'IP rate limit exceeded'"
- This tracks request counts per IP. You can add expiration timers and thresholds to further shape the policy.
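One way to bound the window is ModSecurity's expirevar action, which clears a collection variable after a set number of seconds. A sketch of a 100-requests-per-60-seconds policy (the rule IDs here are arbitrary placeholders):

```apacheconf
# Increment a per-IP counter that automatically expires after 60 seconds
SecAction "id:900005,phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR},setvar:ip.req_counter=+1,expirevar:ip.req_counter=60"
# Deny with 429 once the IP exceeds 100 requests within the window
SecRule IP:req_counter "@gt 100" "id:900006,phase:1,deny,status:429,msg:'IP rate limit exceeded (100 req/60s)'"
```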
Strengths and Weaknesses
Strengths:
- Highly customizable with complex matching logic.
- Works as a full WAF; rate limiting is just one part of its toolkit.
- Can integrate with OWASP CRS for broader security rules.
Weaknesses:
- Performance cost is non-trivial, especially at scale.
- Configuration is verbose and error-prone.
- Requires careful tuning to avoid blocking valid traffic or exhausting server resources.
NGINX and ngx_http_limit_req_module
What is ngx_http_limit_req_module?
NGINX's ngx_http_limit_req_module is a lightweight built-in module that provides simple but efficient rate limiting based on request rate per key (usually IP address). It doesn’t inspect request content; it works at a lower level, ideal for high-throughput environments where raw efficiency matters more than deep filtering.
Implementing Rate Limiting with limit_req
Example Configuration:
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

    server {
        location / {
            limit_req zone=one burst=5 nodelay;
            proxy_pass http://backend;
        }
    }
}
Explanation:
- limit_req_zone creates a shared memory zone to store per-client state (a 10 MB zone holds roughly 160,000 IP addresses).
- $binary_remote_addr is a compact binary representation of the client IP.
- rate=1r/s allows one request per second per IP.
- burst=5 tolerates up to 5 requests above the rate before further requests are rejected.
- nodelay means requests within the burst aren’t delayed to match the configured rate but are processed immediately.
This approach applies minimal logic—just raw request counting—and is extremely efficient.
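Under the hood, limit_req does leaky-bucket accounting: each key carries an "excess" value that drains at the configured rate, and a request that would push excess past the burst is rejected. A simplified Python sketch of that behavior (class and field names are illustrative, not NGINX internals):

```python
import time

class LeakyBucket:
    """Sketch of the leaky-bucket accounting behind NGINX's limit_req:
    per-key 'excess' drains at the configured rate, and requests that
    would push excess past the burst are rejected."""

    def __init__(self, rate=1.0, burst=5):
        self.rate = rate      # allowed requests per second
        self.burst = burst    # extra requests tolerated above the rate
        self.excess = 0.0
        self.last = None      # timestamp of the previous request

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last is not None:
            # Drain the bucket for the time elapsed since the last request.
            self.excess = max(0.0, self.excess - (now - self.last) * self.rate)
        self.last = now
        if self.excess + 1 > self.burst + 1:
            return False       # over rate + burst: NGINX rejects (503 by default)
        self.excess += 1
        return True            # with nodelay: served immediately
```

With rate=1r/s and burst=5, a sudden barrage admits the first request plus five burst requests, rejects the rest, and frees one slot per second as the bucket drains.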
Strengths and Weaknesses
Strengths:
- Extremely fast and efficient; native in NGINX.
- Simple to set up and maintain.
- Minimal CPU and memory overhead even with high traffic.
Weaknesses:
- Not suitable for complex filtering or request inspection.
- Limited ability to base limits on request content: keys are simple variables (an IP or a header value), and the request body is never inspected.
- Less visibility and logging control compared to Apache with mod_security.
Real-World Recommendations
So, which one should you use? It depends on what you’re trying to achieve:
- Choose Apache + mod_security when you:
- Already run Apache and need a security layer that can do more than just rate limiting.
- Want granular control over request characteristics and advanced logic.
- Need WAF-style rules that detect and block known patterns.
- Choose NGINX + limit_req when you:
- Need raw speed and efficiency for static sites or microservices.
- Are implementing simple, IP-based throttling for public APIs or frontend endpoints.
- Want a “set and forget” style limit on traffic that won’t chew up system resources.
- Use Both in a Hybrid Setup:
- Deploy NGINX as a reverse proxy in front of Apache. Let NGINX handle the bulk rate-limiting work.
- Offload deeper inspection and WAF duties to Apache + mod_security.
- This is a common architecture when you’re managing high-traffic servers or API gateways.
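A sketch of that hybrid layout, assuming Apache listens locally on port 8080 (the hostnames, zone name, and numbers here are illustrative):

```nginx
# NGINX terminates client traffic and applies coarse per-IP limits,
# then proxies surviving requests to Apache + mod_security on :8080.
http {
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 80;

        location / {
            limit_req zone=perip burst=20 nodelay;
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```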
For ServerGuardian, the smart move may be to implement default recommendations for both NGINX and Apache inside your control panel plugin. That way, sysadmins don’t have to choose—they get a reasonable baseline rate limit regardless of stack, with options to customize as needed.
Conclusion
Rate limiting isn’t a silver bullet, but it’s a strong first line of defense against abusive traffic. Apache’s mod_security is ideal for deep inspection and flexibility, while NGINX’s limit_req module is unbeatable in terms of performance and simplicity.
The right choice depends on your goals, traffic patterns, and stack. If you’re running mixed environments, knowing how both work—and when to use which—gives you the edge in securing your infrastructure.