Attacks Used the Internet Against Itself to Clog Traffic

By JOHN MARKOFF and NICOLE PERLROTH
Published: March 27, 2013

An escalating cyberattack pitting an antispam organization against a shadowy group of attackers has now affected millions of people across the Internet, raising the question: How can such attacks be stopped?

The short answer is: Not easily. The digital “fire hose” being wielded by the attackers to jam traffic on the Internet in recent weeks was made possible by both the best and worst aspects of the sprawling global computer network. The Internet is, by default, an open, loosely regulated platform for communication, but many of the servers that make its communication possible have been configured in such a way that they can be easily fooled.

The latest attacks, which appeared to have subsided by Wednesday, have demonstrated just how big a problem that can be.

On Tuesday, security engineers said that an anonymous group unhappy with Spamhaus, a volunteer organization that distributes a blacklist of spammers to e-mail providers, had retaliated with a cyberattack of vast proportions.

In what is called a distributed denial of service, or DDoS, attack, the assailants harnessed a powerful botnet — a network of thousands of infected computers being controlled remotely — to send attack traffic first to Spamhaus’s Web site and later to the Internet servers used by CloudFlare, a Silicon Valley company that Spamhaus hired to deflect its onslaught.

This kind of attack works because many Internet servers will answer requests for information without checking where those requests really came from. The botnet sends huge numbers of requests bearing a forged return address, that of the victim, and the servers that answer are tricked into sending blocks of data not back to the botnet but to the victims, in this case Spamhaus and CloudFlare.

The attack was amplified because each of the servers in this case was asked to send a relatively large block of information in reply to a small request. The data stream grew from 10 billion bits per second last week to as much as 300 billion bits per second this week, the largest such attack ever reported. CloudFlare estimated that hundreds of millions of people experienced delays and error messages across the Web as a result.
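The leverage this gives an attacker can be shown with back-of-the-envelope arithmetic. The packet sizes below are illustrative assumptions, not measurements from this attack: a small DNS query can elicit a much larger response from a misconfigured server, multiplying the attacker's own bandwidth.

```python
# Back-of-the-envelope DNS amplification arithmetic.
# The packet sizes are illustrative assumptions: a small query
# (~64 bytes) eliciting a large response (~3,000 bytes) from a
# misconfigured open server.

QUERY_BYTES = 64        # assumed size of a spoofed request
RESPONSE_BYTES = 3000   # assumed size of the reply it triggers

amplification = RESPONSE_BYTES / QUERY_BYTES

# At this factor, a botnet emitting 1 gigabit per second of spoofed
# requests would direct roughly this much traffic at the victim:
attacker_bps = 1e9
victim_bps = attacker_bps * amplification

print(f"amplification factor: {amplification:.1f}x")
print(f"traffic at victim: {victim_bps / 1e9:.1f} Gbit/s")
```

Under these assumed sizes, the replies outweigh the requests by a factor of roughly 47, which is how a comparatively modest botnet can generate attack traffic on the scale reported here.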

On Wednesday, CloudFlare described the highly technical game of cat-and-mouse between itself and Spamhaus’s opponents that has played out over the course of the last nine days. After the attackers discovered that they could not disable CloudFlare, which had been hired by Spamhaus to absorb its attack traffic, they changed their strategy.

They took aim at the networks CloudFlare connected to, attacking the specialized “peering” points at which Internet networks exchange traffic. The attackers focused on organizations like the London, Amsterdam, Frankfurt and Hong Kong Internet exchanges, which route regional Internet traffic and are also used by sites like Google, Facebook and Yahoo to pass traffic efficiently among themselves.

Here, too, they were unable to stall the Internet completely, but they did slow it, particularly by focusing on the London exchange, known as LINX.

“From our perspective, the attacks had the largest effect on LINX,” said Matthew Prince, CloudFlare’s chief executive, in a description posted on the company’s Web site on Wednesday. For a little over an hour on Saturday, he said, the traffic passing through the LINX infrastructure dropped significantly.

The attacks were episodic, stopping and starting and shifting targets over nine days through Tuesday morning. On Wednesday, Mr. Prince said that there were some indications that the attackers were planning further actions, although he said he did not know if they would include DDoS attacks.

Veteran Internet engineers said the attack was made possible by a combination of defects, loopholes and sloppy configuration of Internet routing equipment. Indeed, a number of computer security specialists pointed out that the attacks would have been impossible if the world’s major Internet firms simply checked that outgoing data packets truly were being sent by their customers, rather than botnets. Unfortunately, a relatively small number of Internet companies actually perform this kind of check.

The depth of the problem is illustrated by the fact that the basic principles for stopping such attacks have been widely recognized since at least 2000. That was the year that the Network Working Group of the Internet Engineering Task Force, a voluntary group of Internet and telecommunications engineers, laid out a set of “best current practices” that Internet companies and organizations were encouraged to follow to defeat a threat known as “I.P. address spoofing,” in which an attacker hides behind a faked Internet address, a technique crucial to denial-of-service attacks.

But these basic Internet engineering “rules of the road,” laid out in a document known as BCP 38, are followed by a relatively small number of companies. “They have just not been willing to invest the effort it would take to make things so much better,” said Mark Seiden, a member of the Security and Stability Advisory Committee of the Internet Corporation for Assigned Names and Numbers, which oversees the domain name system.
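The check BCP 38 calls for is conceptually simple: a provider's edge equipment drops outbound packets whose source address does not belong to the customer sending them, which makes spoofing the victim's address impossible. A minimal sketch of that logic, with hypothetical prefixes and addresses, might look like this:

```python
# Sketch of BCP 38-style source-address validation: accept an outbound
# packet only if its source address falls within a prefix the provider
# has assigned to that customer. The prefixes and addresses below are
# hypothetical examples drawn from documentation address ranges.
from ipaddress import ip_address, ip_network

# Prefixes assumed to be assigned to this customer.
CUSTOMER_PREFIXES = [
    ip_network("203.0.113.0/24"),
    ip_network("198.51.100.0/25"),
]

def source_is_valid(src: str) -> bool:
    """Return True if the packet's source IP belongs to the customer."""
    addr = ip_address(src)
    return any(addr in prefix for prefix in CUSTOMER_PREFIXES)

# A packet genuinely from the customer passes the filter...
print(source_is_valid("203.0.113.7"))   # True
# ...but a packet with a forged (spoofed) source address is dropped.
print(source_is_valid("192.0.2.99"))    # False
```

Because the check runs at the network edge, where the provider knows exactly which addresses it has handed out, it is cheap to perform; the obstacle the engineers describe is willingness, not technical difficulty.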

The Internet security community recently started “naming and shaming” operators of these open, misconfigured servers — called open resolvers — in an effort to shut them down. Organizations like the Measurement Factory published a survey of top offenders by network, and more recently the Open Resolver Project published a full list of the 27 million open servers online.
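Surveys like these work by probing servers with an ordinary recursive DNS query; a server that answers such queries from arbitrary Internet addresses is “open” and can be abused for reflection. A sketch of how such a probe packet is built, using only the standard DNS wire format (the domain name is just an example):

```python
# Sketch of the probe used to detect an open resolver: a standard DNS
# query with the RD (recursion desired) bit set. A server that answers
# this for anyone who asks, with the RA (recursion available) flag set
# in its reply, is an open resolver. The queried name is an example.
import struct

def build_query(name: str, txn_id: int = 0x1234) -> bytes:
    """Encode a DNS query for an A record with recursion desired."""
    flags = 0x0100  # RD bit set, everything else zero
    # Header: id, flags, 1 question, 0 answer/authority/additional.
    header = struct.pack(">HHHHHH", txn_id, flags, 1, 0, 0, 0)
    # Name is encoded as length-prefixed labels, ending with a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    qtype_qclass = struct.pack(">HH", 1, 1)  # type A, class IN
    return header + qname + qtype_qclass

query = build_query("example.com")
# In a real probe, this datagram would be sent over UDP port 53 to the
# server under test and the reply's RA flag examined.
print(len(query))  # 12-byte header + encoded name + 4-byte question tail
```

The fix for an operator on the list is equally small, typically a one-line configuration change restricting recursion to the server's own clients, which is why researchers consider the naming-and-shaming approach worth the effort.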

Jeff Moss, a member of the president’s Homeland Security Advisory Council and chief security officer at the Internet Corporation for Assigned Names and Numbers, or Icann, said the campaign was slowly paying off, with thousands dropping off that list in the last few months.

“We are slowly trying to chip away at these open resolvers and let people know they really need to do the right thing,” he said.

Paradoxically, it is the very strength of the Internet — that it is composed of millions of independent computers — that also makes this type of vulnerability a continuing threat. If the attackers had started their attack from a single computer, it could be stifled, but botnets give the anonymous individuals who control them great potential power.

“Long term, it comes down to those machines being infected,” said Ulf Lindqvist, a director of research and development at the nonprofit research group SRI. “If this one was one source, you could knock that source. But when it’s coming from all over the place, and the targets have a hard time filtering what is legitimate traffic from what’s not, then it becomes extremely difficult to defend against.”

Internet engineers said they hoped that the attacks would have a silver lining. “Because the Internet is so open and so large, it takes one of these really nasty events for those configurations to be done properly,” said Dan Holden, a director of threat response at Arbor Networks, a computer security firm based in Burlington, Mass.

“This is an opportunity for us to educate network operators to reconfigure their networks,” said Rick Wesson, the chief executive of Support Intelligence, a San Francisco-based company that sells information about computer security threats to corporations and federal agencies. “We spend too much time discussing cyberwar and not enough time discussing what a peaceful Internet looks like — and that is one in which people implement BCP 38 and care about their neighbors.”

http://www.nytimes.com/2013/03/28/te...anted=all&_r=0