The recent DDoS attacks against many North American financial firms had some unique characteristics that strained the defenses in place and resulted in a number of well-publicized service outages. The escalating threat is not new. It has been building steadily over the last few years as botnet command and control has matured, the tools available to exploit those botnets have gone mainstream, and the cost of using those tools has plummeted. What the attacks did do is raise the industry’s collective consciousness around how bad the situation has gotten. The effectiveness of the attacks has changed the way that Internet operators, whether service provider, hosting provider, government, or enterprise, think about their defenses. It has also raised a number of troubling questions.
The most common question that I have been asked is around the growing size of attacks and the capacity of Internet operators to withstand such threats. How big does an attack have to be to overwhelm the biggest, most prepared financial company? How big does an attack have to be to overwhelm the biggest and most prepared service provider? Is there an Armageddon attack on the horizon that threatens to take down the entire Internet? There are indications that this could be the case.
It should be noted that size is by no means the only way an attack can be effective. It is simply a very visible way of taking down a network, much as a 7-mile backup on a local highway is a visible sign that you are not getting to your destination quickly. Application-layer attacks, IP protocol attacks, connection attacks and other stealthy attack methods can be just as effective at taking down a victim while being much more difficult to detect and mitigate. The financial sector attacks were multi-vector, with aspects of both volumetric and application-layer attack traffic.
This article is going to focus on larger attacks and the possibility of an Armageddon attack. First, there are a few different measures of size, including bandwidth (bits per second, bps), packets (packets per second, pps) and connections (connections per second, cps). In all three cases, every Internet operator, including enterprises, has a limit on what it can handle. Bps is the most commonly considered measure of size, and network bandwidth limits are easy to estimate: if the operator has 10 Gbps of upstream bandwidth, then attacks bigger than this will overwhelm the links. Pps limits are more of a challenge to estimate, because each device that sits in-line with traffic has its own pps-handling limit that depends on its configuration and the type of traffic seen. High-pps attacks often cause more problems than high-bps attacks because multiple bottlenecks may exist on the network. High-cps attacks are typically targeted at stateful devices on the network that maintain a connection table. These tend to be the hardest to measure because network traffic analyzers tend to focus on just bps or pps.
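To see why high-pps attacks expose different bottlenecks than high-bps attacks, it helps to work out how many packets it takes to saturate a link at different packet sizes. The sketch below is a back-of-the-envelope converter, not a vendor specification; the per-frame overhead figure assumes standard Ethernet (8-byte preamble plus 12-byte inter-frame gap), and the link and frame sizes are illustrative.

```python
# Back-of-the-envelope conversion between bps and pps for a saturated link.
# Assumes standard Ethernet framing; numbers are illustrative, not limits
# of any particular device.

PREAMBLE_AND_GAP_BITS = (8 + 12) * 8  # preamble + inter-frame gap per frame


def max_pps(link_bps: float, frame_bytes: int) -> float:
    """Frames per second needed to saturate a link at a given frame size."""
    return link_bps / (frame_bytes * 8 + PREAMBLE_AND_GAP_BITS)


# A 10 Gbps link saturates at very different pps depending on frame size:
for size in (64, 512, 1500):
    print(f"{size:>5}-byte frames: {max_pps(10e9, size) / 1e6:.2f} Mpps")
```

A flood of minimum-size 64-byte frames saturates a 10 Gbps link at roughly 14.9 Mpps, while full-size 1500-byte frames need under 1 Mpps, which is why a small-packet flood can exhaust a device's packet-processing capacity long before the link itself fills.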
With all three attack types, every enterprise, government and hosting provider network has bottlenecks that can be overrun relatively easily by big DDoS attacks. Most enterprise and government datacenters have no more than 10 Gbps of capacity, with some ranging slightly higher. Arbor Networks frequently sees attacks much larger than this. As an example, Arbor’s ATLAS system receives anonymous attack statistics from hundreds of Arbor Peakflow SP deployments. The largest bandwidth attacks measured in 2011 and 2012 were 101.4 Gbps and 100.8 Gbps respectively, and the largest packet-per-second attacks were 139.7 Mpps and 82.4 Mpps respectively. Another source of data is the annual security survey of Internet operators that Arbor runs. One of the survey questions is about the largest bps attacks seen over the previous year. The chart below reflects the biggest attacks reported each year since the survey was first conducted in 2002.
Based on the data in the chart above, there have been DDoS attacks capable of overwhelming a 10 Gbps datacenter since 2005. All this means that enterprises, governments and hosting providers need help from their upstream service providers to deal with threats of this magnitude. Many of these providers offer managed security services that provide protection against bigger attacks. At a certain point, the attacks are big enough that the providers consider them their own responsibility anyway, because of the potential impact to multiple customers. However, it is strongly recommended to have an agreement in place to ensure SLAs and guaranteed response times.
That brings me back to the question of whether an Armageddon attack is possible that could overwhelm not only the end victim but also all the Internet providers in between. Based on the current Internet environment, this is all too possible. The first thing to consider is the bandwidth available to generate an attack. Botnets have been discovered containing more than 1M infected hosts. Assuming an average of 1 Mbps of upstream access per host, a conservative estimate given the number of broadband subscribers and 4G and 3G users in the world, a 1M-host botnet could generate an attack of 1 Tbps. Now what if this botnet and multiple other large botnets attack at the same time? Service providers have a lot of bandwidth throughout their networks, but there are limits to how much traffic they can handle. Attacks of the magnitude described would have a profound effect on the Internet as a whole, exploiting bottlenecks in many places simultaneously. No single service provider, even the largest Tier 1s, could absorb all this traffic without adversely affecting its user base.
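The aggregate-bandwidth estimate above is simple arithmetic, and can be sketched out explicitly. The per-host uplink figure is the article's conservative assumption; real botnet output would vary with host connectivity and attack type.

```python
# Rough feasibility arithmetic for the "Armageddon" scenario: aggregate
# attack bandwidth a botnet could source, under the article's assumption
# of an average per-host upstream rate.

def botnet_bandwidth_tbps(hosts: int, avg_uplink_mbps: float) -> float:
    """Aggregate attack bandwidth in Tbps for a botnet of the given size."""
    return hosts * avg_uplink_mbps * 1e6 / 1e12


# 1M infected hosts at an average of 1 Mbps upstream each:
print(botnet_bandwidth_tbps(1_000_000, 1.0))  # -> 1.0 (Tbps)

# Several such botnets attacking simultaneously scale linearly:
print(sum(botnet_bandwidth_tbps(1_000_000, 1.0) for _ in range(3)))  # -> 3.0
```

Even at these conservative per-host rates, a single large botnet exceeds the capacity of any one datacenter by two orders of magnitude, and multiple cooperating botnets push the total into territory that stresses provider backbones themselves.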
Is this possible? It certainly seems so. Is it likely? It doesn’t seem so, since it would affect everyone on the Internet and not just a single victim. That said, many attacks that didn’t seem likely before are now becoming commonplace as motivations have shifted. It is something that CSOs within the carrier community are likely considering, and hopefully they are taking steps to plan for the worst.