Two months before Russian tanks rolled into South Ossetia in Georgia, denial of service attacks began shutting down Web sites used by the Georgian government to communicate with citizens. Denial of Service (DOS) attacks use vast networks of computers to bombard servers with millions of messages. Eventually the servers shut down.
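The mechanics are simple to picture: a flood of requests from many sources exhausts a server's capacity to respond. The most basic countermeasure is a per-source request-rate check, sketched below (the window and threshold values are illustrative assumptions, not figures from any real deployment):

```python
from collections import defaultdict, deque
import time

# Illustrative sketch of a server-side rate check that flags a source
# sending far more requests than a legitimate client plausibly would.
WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100  # assumed threshold; tune per service

_recent = defaultdict(deque)  # source address -> timestamps of recent requests

def is_flooding(source_ip, now=None):
    """Return True if source_ip exceeded the request budget in the window."""
    now = time.time() if now is None else now
    q = _recent[source_ip]
    q.append(now)
    # Drop timestamps that have aged out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS_PER_WINDOW
```

Note the limitation: because DOS attacks come from vast networks of machines, each individual source may stay under any per-address threshold, which is why such floods are hard to filter.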
When accused of mounting the cyber attacks against Georgia, Russia denied involvement. The denial might have been credible had a physical attack not followed so soon on the heels of the DOS attacks.
More and more often, cyber attacks on government servers signal a physical attack in the offing.
And it’s not always a military attack. Last year, for example, Estonia, another former Soviet Socialist Republic, removed a war memorial commemorating the bravery of Russian soldiers killed in World War II. Some Russians didn’t like it, and they mounted a month-long offensive against Estonia that fired daily DOS barrages at government Web sites and shut them down.
When the attacks were in full swing, organized protest groups rioted in the city of Tallinn, where the memorial had stood.
Observers seem to believe that those involved were Russian citizens angered by Estonia’s decision to remove the statue. As an act of revenge, they mounted a physical and cyber attack designed to unnerve an entire country. They succeeded.
The attack didn’t shut down power grids, and hospitals didn’t lose power. “It wasn’t that kind of attack,” says Michael Maloof, chief technology officer with TriGeo Network Security in Post Falls, Idaho, who has studied the Estonian attack. “It was a denial-of-service attack, and technically the security people dealt with it effectively. But it did do what the attackers intended. It panicked government officials, who saw it as an act of war.”
Cyber terror is not science fiction or wild speculation. It is a risk recognized by the U.S. Congress in the enactment of the Federal Information Security Management Act, or FISMA. The act orders the Office of Management and Budget (OMB) to establish information security policies for all federal agencies and to monitor compliance. FISMA also orders the National Institute of Standards and Technology (NIST) to develop standards and guidelines to enable federal agencies to demonstrate compliance with policies established by OMB.
If you think a successful cyber attack means little more than a day off from e-mail, consider the Aurora experiment conducted by the Idaho National Laboratory in March 2007 under the auspices of the Department of Homeland Security (DHS). In the test, computer experts from the lab went on-line, broke into the control programs of a demonstration power plant, took over a giant electric generator and caused it to blow up.
“The theory is that if there were a live power grid, downstream systems would have been damaged as well, possibly knocking systems off line,” Maloof says.
Those involved in the test were stunned to discover that a cyber attack could produce physical damage. In a report on CNN last September, Jeanne Meserve interviewed Scott Borg, director and chief economist with the U.S. Cyber Consequences Unit. Meserve set up a nightmare scenario for Borg to comment on:
What if a cyber attack on generating plants cut off power to a third of the country for three months?
At first, Borg said, it would be inconvenient: no lights, dysfunctional ATMs, no working gas pumps, no television, no Internet and no news about what was going on. By the third day, stores would run out of food and businesses running emergency generators would lose their power as the generators ran out of gas.
If the crisis lasted three months, Borg continued, it would cause damage equal to 40 or 50 large hurricanes striking all at once. The consequences would be worse than the Great Depression.
In fact, cyber attackers are already coming at the United States. In 2006, they went after the e-mail system and Web site of the Naval War College in Newport, R.I., presumably looking for sensitive information. When the Navy’s Cyber Defense Operations Command discovered the attack, it had to unplug the system from the Internet. At least a half dozen other federal agencies, including the Defense Information Systems Agency, have reported attacks.
Failing marks for the government’s network security
What is the government doing to secure its networks against hackers, terrorists and nation states? Under FISMA, the OMB grades security systems protecting networks across the U.S. government. The most recent report card (for 2007) found disturbing trends in the nation’s cyber security programs.
While departments and agencies such as the Justice Department and Social Security received A+ grades, DHS got a B+, up from a D the year before. The Department of Defense (DoD) came away with a D-, up from an F in 2006. The Departments of Interior, Treasury and Agriculture all received Fs for both 2007 and 2006.
Perhaps most disturbing of all, the Nuclear Regulatory Commission, which manages and tracks the security plans for the nation’s nuclear power generating plants, got an F.
Why can’t the government secure its networks?
There are many reasons that networks are not well secured. Perhaps the most unnerving explanation is simple human error.
“Recently a nuclear power plant in Georgia (U.S.) had connected its secure network to its administrative network,” says a security software vendor familiar with the case. “Because maintenance activities were taking place on the administrative network, a security feature triggered a plant shutdown. While the security feature eventually worked, for some period of time the secure network was connected to the administrative network which was connected to the Internet and vulnerable to attack.”
Bad as it sounds, mistakes happen, even if an agency does its best to eliminate them. So in addition to trying to secure networks against attacks, it is also important to cultivate the ability to respond and recover.
“While we are developing strategies to defend against different types of attacks, attackers are creative and smart,” says Ron Ross, project leader for the FISMA implementation project in the computer security division of NIST. “Their goal is always to stay a step ahead of us.”
According to Ross, no matter how sophisticated the defenders get, those on the offense will occasionally break in. His recommendation is to develop contingencies.
A federal agency responsible for managing sensitive systems can expect to be attacked, he says, adding that a certain percentage of attacks will succeed. In measuring the effectiveness of security programs, it is important to think not only about how many attacks have been warded off but also about what happens when an attack succeeds: Can the agency absorb it, respond, recover and continue to do business?
“This is a new mindset,” Ross says. “People have the idea that they want a bulletproof system. But that’s not how security works. Right now in government, we’re implementing a cyber defense concept called defense-in-depth. This is about layers of security controls.”
Ross describes 17 layers of controls that fall under three headings: technical, operational and managerial. Technical controls include layers such as network access control systems and encryption. Managerial controls include policies and procedures as well as risk assessments. Operational controls cover physical access control systems, security officers and video surveillance.
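Ross's three headings amount to a simple classification of controls. The sketch below groups the example layers named above; the lists are drawn from the article and are illustrative, not NIST's full set of 17 control families:

```python
# Illustrative grouping of security controls under Ross's three headings.
# The families listed are the examples named in the article, not the
# complete NIST catalog.
CONTROL_CLASSES = {
    "technical": ["network access control", "encryption"],
    "managerial": ["policies and procedures", "risk assessment"],
    "operational": ["physical access control", "security officers",
                    "video surveillance", "contingency planning"],
}

def control_class(family):
    """Look up which heading a given control family falls under."""
    for heading, families in CONTROL_CLASSES.items():
        if family in families:
            return heading
    return None
```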
Operational controls also include contingency plans that lay out what to do after a successful attack. What steps must be taken to respond, recover and continue the agency’s work? “And the contingency plans must be practiced and drilled regularly,” Ross says. “In the military, they say ‘we fight as we train.’ That means that when they go into combat they are executing as they’ve been trained. We have to drill contingency plans for cyber attacks with the same goal in mind: When it happens, we know what to do.”
IP security technologies
Technologies designed to defend networks are growing more capable.
One of the initiatives the federal government is working on today to improve network security is an OMB mandate called TIC, short for Trusted Internet Connection. “A large federal agency might have a half million IP devices, from desktop computers to phones and servers, all connected through the Internet back to a central office or into university research laboratories and vendors’ offices,” says Michael Markulec, COO of Lumeta Corp. in Somerset, N.J.
Lumeta is working on this initiative with the Federal Aviation Administration (FAA), DHS and DoD. The company makes a software application called IPsonar, which scans a network and records the devices connected to it. On a regular schedule, security personnel compare the initial scan with a new scan to discover what new devices might have been connected and whether they are authorized and comply with security requirements.
“IPsonar also enables agencies to reduce the number of connections to a manageable level,” Markulec adds. “The fewer connections you have the easier they are to monitor.”
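The scan-comparison step Markulec describes reduces to a set difference: devices present in the new scan but absent from the baseline are candidates for review. A minimal sketch, assuming scans are collections of IP address strings (a real tool like IPsonar records far more detail per device):

```python
def new_devices(baseline_scan, current_scan):
    """Return addresses seen in the current scan but not in the baseline.

    Each scan is a collection of IP address strings. In a real product
    the records would carry more detail (ports, routes, ownership).
    """
    return sorted(set(current_scan) - set(baseline_scan))

def unauthorized(current_scan, authorized):
    """Return devices on the network that are not on the authorized list."""
    return sorted(set(current_scan) - set(authorized))
```

Reducing the number of connections, as Markulec notes, shrinks both of these difference sets and makes each remaining entry easier to investigate.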
Effect of social engineering
Technology is one layer of defense. Policies, procedures and continuing education for employees compose another layer. Many experts, especially those with the technical expertise that enables them to hack into systems, think that the human side of network security presents risks equal to the technical side.
To make the point, Chris O’Ferrell, vice president with Herndon, Va.-based Command Information, an IPv6 Internet services provider that works with government agencies, describes how a hacker or technically sophisticated terrorist might worm his way inside the networks operated by police, fire and medical first responders in a large city.
“Hackers are methodical,” he says. “They work in war rooms using a host of techniques designed to learn everything they can about a target. If I were a terrorist and wanted to disrupt the ability of first responders to respond to a physical attack — a bombing perhaps — in a big city, I would start by identifying all of the first responders in the agencies I was interested in.”
That’s easy, continues O’Ferrell. Social networking Web sites such as MySpace, Facebook and others make it possible. “It’s illegal to use these sites for this purpose, but still easy,” O’Ferrell says. “I would sign on to a couple of sites, create an account and type in the name of the agency. I’d end up with hundreds of names of people that work there now and have worked there in the past.”
O’Ferrell then checks out what they say about themselves and their jobs. He gets their title, information about their experience and perhaps some personal information. He also gets e-mail addresses.
Next, he makes up an organization chart for the agency, showing the relationships between people. “Once you know the names and relationships, you can impersonate someone and send e-mails asking for things,” he says. “You’ll sound reasonable because you know the relationships. Maybe you use the Human Resources manager’s address and send an e-mail asking someone for a home address and phone number because ‘you’re updating the HR records.’”
When the person hits reply, an embedded feature in the e-mail sends the reply to O’Ferrell instead of to the real HR manager he has impersonated.
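One common way to pull off that redirect is a forged Reply-To header: the visible From line shows the impersonated manager, while replies silently route elsewhere. The article doesn’t say which mechanism O’Ferrell uses, so the sketch below, with made-up addresses, is one plausible example:

```python
from email.message import EmailMessage

# Hypothetical addresses for illustration only.
msg = EmailMessage()
msg["From"] = "hr.manager@agency.example"       # who the victim believes sent it
msg["Reply-To"] = "attacker@elsewhere.example"  # where "Reply" actually goes
msg["To"] = "employee@agency.example"
msg["Subject"] = "Updating HR records"
msg.set_content("Please confirm your home address and phone number.")

# Most mail clients honor Reply-To over From when the recipient hits
# Reply, so the response is routed away from the real HR manager.
```

This is exactly why security-awareness training tells employees to check where a reply is actually addressed before sending sensitive information.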
The kind of information he goes after depends on the plan. Suppose, for instance, that he wants to confuse and delay the first response to a physical attack on a port facility in a coastal city. He can fill up the agencies with disinformation that will send police, fire and medical responders on wild goose chases. He can send commanders downtown for non-existent meetings. He can mount denial of service attacks on e-mail, VoIP and wireless communications servers.
And when the physical attack comes, a confused first response will raise the amount of damage and the casualty count.
The ultimate security piece: Human judgment
When an organization falls for a social engineering attack, it is a human failure, says David Gewirtz, editor-in-chief of Computing Unplugged, who holds a Ph.D. in computer science.
“The biggest mistake we make is not learning enough about network technologies and what a malicious person can do,” Gewirtz says. “We say that we’re not technical, so it is someone else’s problem to figure out the security problems. In fact, it’s dangerous not to have a base level of technical knowledge.”
The Government Accountability Office recently released a study recommending how a number of federal agencies could strengthen e-mail record keeping systems. In reading the report, Gewirtz discovered a network security flaw buried in the footnotes. The Federal Trade Commission (FTC) prohibits staff from accessing external Web-based e-mail with the agency’s Web browsers. But agency employees may use remote application software to obtain access to external Web-based e-mail as a convenience. A footnote explains that the remote application is Citrix.
“Citrix creates a tunnel from a remote desktop to your local desktop,” Gewirtz says. “The tunnel lets you move stuff from point A to point B. But you have no way of knowing who else is moving stuff from point A to point B through the tunnel.”
Like the nuclear power plant that connected its secure network to the Internet, the FTC vulnerability has nothing to do with technology. It is an error in human judgment.