DDoS. Are we that helpless? [duplicate]

The challenge with this question is that it asks for a solution to a fundamentally unsolvable problem. There's no tool or practice you can adopt that will protect you from a moderately competent attacker who is determined to take down your service.

mod_evasive is about as good a solution as you're going to get in the short term. It implements "best practices" request throttling, and will prevent your system from being taken down by a 5-line Perl script.
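For illustration, a minimal mod_evasive setup for Apache might look like the following sketch. The thresholds are placeholder values you would tune against your own traffic, and the module filename varies by Apache version and distribution:

    # Load the module (on Apache 2.4 the file is typically mod_evasive24.so)
    LoadModule evasive20_module modules/mod_evasive20.so

    <IfModule mod_evasive20.c>
        DOSHashTableSize    3097    # size of the per-child IP tracking table
        DOSPageCount        5       # max requests for the same page per interval
        DOSSiteCount        100     # max requests for the whole site per interval
        DOSPageInterval     1       # page-count interval, in seconds
        DOSSiteInterval     1       # site-count interval, in seconds
        DOSBlockingPeriod   60      # seconds an offending IP gets 403s
        DOSEmailNotify      admin@example.com   # placeholder address
    </IfModule>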

In the longer term, when your application becomes successful, you'll inevitably wind up deploying a load balancer in front of it. The mainstream commercial load balancers (like F5's BIG-IP) all implement "DoS protection" throttling, so you can turn that feature on when you upgrade. But don't upgrade just to get that feature.

The problem with mitigating modern DDoS attacks is that they are launched from numerous unrelated endpoints (often, from huge botnets). Web application firewalls like Citrix/NetScaler, Imperva, and F5 will do a decent job against the canned attacks, but skilled analysts (preferably from your own team) are going to be needed to stop "real" attackers who know your name. You do that job by analyzing the attack traffic, finding characteristics in it particular to the attacker, and filtering on them.
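As a rough sketch of that "find a characteristic and filter on it" workflow: suppose the attack traffic shares a fingerprint, such as a bogus User-Agent emitted by the botnet's request generator. The log path, the fingerprint, and the threshold below are all hypothetical; you'd substitute whatever your analysis turns up:

    import re
    from collections import Counter

    # Hypothetical fingerprint observed in the attack traffic.
    ATTACK_UA = re.compile(r'"Mozilla/4\.0 \(compatible; MSIE 5\.5;')
    THRESHOLD = 1000  # requests before we treat an IP as part of the flood

    hits = Counter()
    with open("/var/log/apache2/access.log") as log:  # path is an assumption
        for line in log:
            if ATTACK_UA.search(line):
                ip = line.split()[0]  # first field of a combined-format log line
                hits[ip] += 1

    # Print drop rules for the worst offenders; review before applying.
    for ip, count in hits.most_common():
        if count >= THRESHOLD:
            print(f"iptables -A INPUT -s {ip} -j DROP  # {count} hits")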

I think you're on the right track with free "plug-and-play" defenses for this, especially with a new application.

@tqbf
They are. The thing about DDoS is that its power is inversely proportional to the strength of the defender's availability and redundancy strategies. The main problem isn't that DDoS can't be mitigated; it's that much of the web relies on centralized architecture, poor redundancy, and cascading single points of failure.

The original internet protocols were designed with availability and redundancy in mind, providing much more fault tolerance at the expense of trust or synchronization. For perfect illustrations, look at how DNS, BGP, SMTP, and NNTP were originally designed.

Moving back to the web, the primary problems with DDoS attacks are ensuring DNS remains available under heavy load, ensuring server redundancy is sufficient to handle stress at peak capacity, and ensuring individual connections can't consume a disproportionate share of system resources.
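One concrete way to address that last point is per-client rate and connection capping at the web server; nginx ships limit_req and limit_conn modules for exactly this. A minimal sketch (zone sizes, rates, and limits are placeholders to tune):

    # Inside the http { } block: track clients by address, one zone for
    # request rate and one for concurrent connections.
    limit_req_zone  $binary_remote_addr zone=perip_req:10m  rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=perip_conn:10m;

    server {
        listen 80;

        location / {
            # Allow short bursts, then return 503 once the rate is exceeded.
            limit_req  zone=perip_req burst=20 nodelay;
            # At most 10 simultaneous connections per client address.
            limit_conn perip_conn 10;
        }
    }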

Mitigation thus becomes a matter of rerouting or blackholing the traffic, spreading the impact over as much hardware as possible, providing non-programmatic mirrors, and other service-assurance mechanisms appropriate to your user community. For anyone interested in the field, much of this is rolled into the concepts of highly available services and threat modeling.
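For a taste of the blackholing half of that, a null route on a Linux box discards traffic routed toward an offending prefix (the prefix below is a documentation-range placeholder). In practice blackholing is usually triggered upstream via BGP communities, as remotely triggered blackholing, so the flood never reaches your link at all:

    # Discard all traffic routed toward a hypothetical attacking prefix.
    ip route add blackhole 192.0.2.0/24

    # Inspect, and remove once the attack subsides:
    ip route show to 192.0.2.0/24
    ip route del blackhole 192.0.2.0/24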

I'll close by pointing out that even more answers exist on Server Fault, for anyone interested in the IT perspective on this problem.