Cyberspace in Peace and War

Length: 1,281 pages (17 hours)
Released: Oct 15, 2016
ISBN: 9781682470336
Format: Book

Description

This book is written to be a comprehensive guide to cybersecurity and cyberwar policy and strategy, developed for a one- or two-semester class for students of public policy (including political science, law, business, etc.). Although written from a U.S. perspective, most of its contents are globally relevant.

It is written essentially in four sections. The first (chapters 1 - 5) describes how compromises of computers and networks permit unauthorized parties to extract information from such systems (cyber-espionage), and/or to force these systems to misbehave in ways that disrupt their operations or corrupt their workings. The section examines notable hacks of systems, fundamental challenges to cybersecurity (e.g., the lack of forced entry, the measure-countermeasure relationship) including the role of malware, and various broad approaches to cybersecurity.

The second (chapters 6 - 9) describes what government policies can, and, as importantly, cannot be expected to do to improve a nation’s cybersecurity, thereby leaving countries less susceptible to cyberattack by others. Among its focus areas are approaches to countering nation-scale attacks, the cost to victims of broad-scale cyberespionage, and how to balance intelligence and cybersecurity needs.

The third (chapters 10 - 15) looks at cyberwar in the context of military operations. Describing cyberspace as the 5th domain of warfare feeds the notion that lessons learned from other domains (e.g., land, sea) apply to cyberspace. In reality, cyberwar (a campaign of disrupting/corrupting computers/networks) is quite different: it rarely breaks things, can only be useful against a sophisticated adversary, competes against cyber-espionage, and has many first-strike characteristics.

The fourth (chapters 16 - 35) examines strategic cyberwar within the context of state-on-state relations. It examines what strategic cyberwar (and threats thereof) can do against whom and how countries can respond. It then considers the possibility and limitations of a deterrence strategy to modulate such threats, covering credibility, attribution, thresholds, and punishment (as well as whether denial can deter). It continues by examining sub rosa attacks (where neither the effects nor the attacker are obvious to the public); the role of proxy cyberwar; the scope for brandishing cyberattack capabilities (including in a nuclear context); the role of narrative and signals in a conflict in cyberspace; questions of strategic stability; and norms for conduct in cyberspace (particularly in the context of Sino-U.S. relations) and the role played by international law.

The last chapter considers the future of cyberwar.

Cyberspace in Peace and War - Martin Libicki

TITLES IN THE SERIES

The Other Space Race: Eisenhower and the Quest for Aerospace Security

An Untaken Road: Strategy, Technology, and the Mobile Intercontinental Ballistic Missile

Strategy: Context and Adaptation from Archidamus to Airpower

TRANSFORMING WAR

Paul J. Springer, editor

To ensure success, the conduct of war requires rapid and effective adaptation to changing circumstances. While every conflict involves a degree of flexibility and innovation, there are certain changes that have occurred throughout history that stand out because they fundamentally altered the conduct of warfare. The most prominent of these changes have been labeled Revolutions in Military Affairs (RMAs). These so-called revolutions include technological innovations as well as entirely new approaches to strategy. Revolutionary ideas in military theory, doctrine, and operations have also permanently changed the methods, means, and objectives of warfare.

This series examines fundamental transformations that have occurred in warfare. It places particular emphasis upon RMAs to examine how the development of a new idea or device can alter not only the conduct of wars but their effect upon participants, supporters, and uninvolved parties. The unifying concept of the series is not geographical or temporal; rather, it is the notion of change in conflict and its subsequent impact. This has allowed a wide variety of scholars, approaches, disciplines, and conclusions to be brought under the umbrella of the series. The works include biographies, examinations of transformative events, and analyses of key technological innovations that provide a greater understanding of how and why modern conflict is carried out, and how it may change the battlefields of the future.

This book has been brought to publication with the generous assistance of Marguerite and Gerry Lenfest.

Naval Institute Press

291 Wood Road

Annapolis, MD 21402

© 2016 by Martin C. Libicki

All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying and recording, or by any information storage and retrieval system, without permission in writing from the publisher.

Library of Congress Cataloging-in-Publication Data

Names: Libicki, Martin C., author.

Title: Cyberspace in peace and war / Martin C. Libicki.

Description: Annapolis, Maryland: Naval Institute Press, [2016] | Includes bibliographical references and index.

Identifiers: LCCN 2016031363 (print) | LCCN 2016035110 (ebook) | ISBN 9781682470336 (ePub)

Subjects: LCSH: Cyberspace operations (Military science) | Cyberspace—Government policy. | Cyberspace—Security measures. | Cyberterrorism—Prevention.

Classification: LCC U163 .L519 2016 (print) | LCC U163 (ebook) | DDC 355.3/43—dc23

LC record available at https://lccn.loc.gov/2016031363

Print editions meet the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).

24 23 22 21 20 19 18 17 16      9 8 7 6 5 4 3 2 1

First printing

Contents

List of Illustrations

List of Acronyms and Abbreviations

INTRODUCTION

PART I. FOUNDATIONS

CHAPTER 1. Emblematic Attacks

Cybercrime and Other System Intrusions

The Advanced Persistent Threat

Distributed Denial-of-Service Attacks

Stuxnet and Other Destructive Attacks

CHAPTER 2. Some Basic Principles

Cyberwar and Cyberspace

How Hackers Work

Agoras and Castles

Most Cyberattacks Have Transitory Effects

CHAPTER 3. How to Compromise a Computer

Abuses by Authorized Internal Users

Abuses by Everyday External Users

Altered Instructions via Supply Chain Attack

Malware

CHAPTER 4. The Search for Cybersecurity

Applications Are Often the Weak Links in the Security Chain

The Role of Input Filtering

The Role of Operating Systems

The Role of People

The Role of Cryptography

A Role for Firewalls?

The Role of Air-Gapping

Relationships among Machines, Systems, and Engineering

Mixing and Matching Security Actions

Measures and Countermeasures

What Could We Learn?

CHAPTER 5. Defending Against Attacks of High and of Broad Consequence

Attacks of High Consequence

Identifying Near-Catastrophes to Get Ahead of Catastrophes

Hedging to Deal with Exceptions to the Power-Law Rule

Scalability Influences How Well a Near-Catastrophe Predicts a Catastrophe

Attacks of Broad Consequence

Implications for Learning

Is Information Sharing a Panacea?

CHAPTER 6. What the Government Can and Cannot Do

First, Why Should the Government Do Anything?

What the Wise Men Recommended

Good Policies

Panaceas

Bad Ideas

On Using Extraordinary Incentives to Juice the Cybersecurity Workforce

Can Governments Cope with Surprise?

PART II. POLICIES

CHAPTER 7. What Should Be Secret

The Calculus

Denying an Adversary Something

Affecting Adversary Knowledge

Affecting Adversary Decisionmaking

Adverse to Us

Some Implications of Logical Classification Rules

The Importance of Aggregate Privacy

The Benefits of Discretion

Conclusions and Implications

CHAPTER 8. What Does China’s Economically Motivated Cyberespionage Cost the United States?

What’s at Stake?

How Much Trade Is at Issue?

Displaced U.S. Exports

Displaced Value Added by U.S. Corporations

Summary Calculations

Conclusions

CHAPTER 9. Return to Vendor

What Should the NSA Do About Zero Days?

Retain or Return: Some Criteria

After How Long Should a Zero Day Be Returned to Vendor?

Irrelevant Considerations

Conclusions

CHAPTER 10. Cybersecurity Futures

Better Offense

A Larger Attack Surface

Better Defense

New Tools for Defense

A Three Mile Island in Cyberspace

CHAPTER 11. Operational Cyberwar

Possible Effects

Timing Cyberattacks

The Role of Surprise

Hiding the Attack to Facilitate Its Repetition

An Operational Cyberwar Scenario

Would China Use Operational Cyberwar the Same Way?

Why Supremacy Is Meaningless and Superiority Unnecessary

Coda: A Note of Skepticism on the Potential of Operational Cyberwar

PART III. OPERATIONS

CHAPTER 12. Organizing a Cyberwar Campaign

Why a Campaign?

The Insertion of Operational Cyberwar into Kinetic Operations

Whose Campaign?

The Rogue Cyberwarrior Challenge

CHAPTER 13. Professionalizing Cyberwar

Battle Damage Assessment

Collateral Damage

Other Parameters

Programming and Budgeting for Cyberwar

CHAPTER 14. Is Cyberspace a Warfighting Domain?

Cyberwar Operations Are About Usurping Command and Control

Cyberspace as Multiple Media

Defend the Domain or Ensure Missions?

As for Offensive Operations

It Raises the Attention to DDOS Attacks

Other Errors from Calling Cyberspace a Warfighting Domain

No Domain, No Cyber Equivalent of Billy Mitchell

Conclusions

CHAPTER 15. Strategic Implications of Operational Cyberwar

Influencing Others Against Digitization

The Importance of Conventional Dissuasion

The Challenge of Alliance Defense in Cyberspace

CHAPTER 16. Stability Implications of Operational Cyberwar

Attack Wins

Getting the Jump Wins

The Risks of Acting Are Reduced

The Risks of Not Acting Are Increased

A Missing Element of Caution

Conclusions

CHAPTER 17. Strategic Cyberwar

Strategic Cyberwar May Focus on Power Grids and Banks

How Coercive Can a Strategic Cyberwar Campaign Be?

Strategic Cyberwar as Information War

The Conduct of Cyberwar

Indications and Warning

Managing the Effects of Cyberwar

Terminating Cyberwar

Conclusions

PART IV. STRATEGIES

CHAPTER 18. Cyberwar Threats as Coercion

A Limited Cyberwar Campaign

A Coercive Campaign

Conclusions and Implications for Mutually Assumed Disruption

CHAPTER 19. The Unexpected Asymmetry of Cyberwar

The Third World Disadvantage

The Particular U.S. Advantage

Was This All an Exercise in Nostalgia?

A Silver Lining Arising from Kerckhoff’s Principle

The Influence of Third Parties on the Balance of Power in Cyberspace

CHAPTER 20. Responding to Cyberattack

First-Strike Cyberattacks May Have a Variety of Motives

Some Supposed Attacks Are Not

Should the Target Reveal the Cyberattack—and When?

Non-Retaliatory Responses

Economic Responses

Sub Rosa Cyberwar

Doing Nothing Is Also an Option

Conclusions

CHAPTER 21. Deterrence Fundamentals

Cyberdeterrence Differs from Nuclear and Criminal Deterrence

The Rationale for Deterrence

What Makes Deterrence Work?

The Core Message of Deterrence

CHAPTER 22. The Will to Retaliate

The Risks of Reprisals

Third-Party Cyberattacks

There May Be Bigger Issues on the Table

Credibility May Not Be Easy to Establish

The Signals Associated with Carrying Out Reprisals May Get Lost in the Noise

The Impact of Good Defenses on Credibility Is Mixed

Can Extended Deterrence Work in Cyberspace?

Why Credibility Makes Attribution an Issue

Conclusions

CHAPTER 23. Attribution

What Will Convince?

How Good Would Attribution Be?

When Attribution Seems to Work

What Could Make Attribution So Hard?

When Can Countries Be Blamed for What Started Within Their Borders?

Will the Attacker Always Avoid Attribution?

Why an Attacker May Favor Ambiguous Attribution over None at All

What Should Be Revealed about Attribution?

CHAPTER 24. What Threshold and Confidence for Response?

A Zero-Tolerance Policy?

Non-Zero Thresholds

Should Pulled or Failed Punches Merit Retaliation?

What About Retaliating Against Cyberespionage?

A Deterministic Posture

Other Advantages of a Probabilistic Deterrence Posture

The Choice to Retaliate Under Uncertainty

CHAPTER 25. Punishment and Holding Targets at Risk

Punishment

The Lack of Good Targets

The Temptations of Cross-Domain Deterrence

Will Targets Actually Hit Back at All?

Summary Observations on Cyberdeterrence

CHAPTER 26. Deterrence by Denial

What Is Being Discouraged?

Complicating Psychological Factors

Dissuading Cyberattack by Defeating Its Strategy

CHAPTER 27. Cyberwar Escalation

Why Escalate?

Escalation and Operational Cyberwar

Escalation in Strategic Cyberwar

The Difficulties of Tit-for-Tat Management

Escalation into Kinetic Warfare

Escalation Risks from Proxy Cyberwar

Proxy Cyberattacks

Conclusions

CHAPTER 28. Brandishing Cyberattack Capabilities

What Is Brandishing?

Your Power or Their Powerlessness?

How to Brandish Cyberattack Capabilities

Escalation Dominance and Brandishing

Counter-Brandishing

Caveats and Cautions

CHAPTER 29. Cyberattack in a Nuclear Confrontation

The Basic Confrontation

Disabling a Capability versus Thwarting a Threat

Rogue State Strategies for Discrediting the Cyberwar Bluff

Third-Party Caveats

Other Caveats

Is There Much Point to Disarming a Target State’s Nuclear Capabilities?

Should Targeting the Nuclear Command and Control Systems of Major Nuclear Powers Be Abjured?

Summary

CHAPTER 30. Narratives and Signals

Narratives to Facilitate Crisis Control

A Narrative Framework for Cyberspace

Narratives as Morality Plays

Narratives to Walk Back a Crisis

Signaling

What Can We Say with Signals that Would Come as News to Others?

Ambiguity in Signaling

Signaling Resolve

CHAPTER 31. Strategic Stability

Would Nuclear Dilemmas Echo in Cyberspace?

Misperception as a Source of Crisis

Excessive Confidence in Attribution or Preemption

Can There Be a Cuban Missile Crisis in Cyberspace?

Conclusions

PART V. NORMS

CHAPTER 32. Norms for Cyberspace

Norms Against Hacking in General

Norms on Who Did What

Arms Control

Law of Armed Conflict: Jus in Bello

Law of Armed Conflict: Jus ad Bellum

From the Tallinn Manual to Las Vegas Rules

What the Tallinn Manual Says

Viva Las Vegas

But Not So Fast

Why Not Las Vegas Rules for Outer Space as Well?

Conclusions

CHAPTER 33. Sino-American Relations and Norms in Cyberspace

The United States Advocates Its Norms

Can We Trade?

One Deterrence, Two Deterrence, Red Deterrence, Blue Deterrence

Why Red and Blue Deterrence Matter to Cyberspace

A Modest Proposal for Improving Cyberspace Behavior

Coda: The September Agreement between President Xi and President Obama

CHAPTER 34. Cyberwar: What Is It Good For?

Modeling the Influence of Cyberattack Options

How Much Cybersecurity Do We Really Need?

Acknowledgments

Notes

Bibliography

Index

Illustrations

TABLES

1.1 Intrusions of Note in Cyberspace

34.1 Kinetic/Cyber Preferences and Outcomes

FIGURES

2.1 A Hierarchical Decomposition of Unwanted Cyberspace Events

4.1 A Hierarchical Decomposition of Cybersecurity Actions

4.2 A Hierarchical Decomposition for Repelling Cyberattacks

21.1 Cost-Effective Curves for Cyberattacks

27.1 An Inadvertent Path to Mutual Escalation

Acronyms and Abbreviations

Introduction

In 1991, the Vice Chairman of the Joint Chiefs of Staff asked Al Bernstein, my boss at the National Defense University, to produce a report contemplating the world of 2025. I was assigned the technology portion to write. My sense of things to come was that the information technology revolution would continue to change conventional warfare dramatically. I envisioned a battlefield filled with sensors from which huge flows of data across a network would be fused to generate real-time information on enemy targets, which could then be struck with missiles and other precision munitions.

Perhaps not so coincidentally, in 1993, the National Defense University started to hire faculty to teach in its newly established school of information warfare studies (which operated from 1994 to 1996). The faculty also had a notion of future warfare—one nowhere close to my perspective, which they deprecated as kinetic and hence so 20th century. In the future, they claimed, people would wage war noiselessly, without violence, destruction, gunpowder, shot, or shell. They would just attack their foes’ computer systems, thereby disarming them. That seemed far-fetched. Moore’s Law implied that the amount of information doubled every year. Waging war against something that doubled every year and hoping for success seemed absurd. Even if one could succeed this year, the job would be twice as hard next year—and twice as hard as that a year or so later.

But they insisted, and because they insisted, I ended up writing a small monograph, What Is Information Warfare?¹ Its take on cyberwar was quite skeptical. Then, as now, it was hard to conclude that cyberwar was going to trump every other form of warfare. I was confident that the threat from cyberspace could be contained, in part because I believed that people, aware of the threat, would not willy-nilly connect critical systems (such as those that supply electric power) to the Internet. In this, I was wrong; for instance, there were next to no articles on cybersecurity in the late 1990s in trade journals such as Electrical World. People made decisions that were poor for cybersecurity because they did not worry about cybersecurity.

Then, as now, cyberwar and various forms of malicious mischief in cyberspace cannot be ignored; they are a real problem. Yet when facing a problem such as the threat from cyberspace, it pays to be serious but not desperate.² Desperate people do desperate things, and sometimes their desperation repels other people from doing anything at all. The problems of cyberspace require sustained attention but nothing close to panic. The world is not going to end if the problem is not addressed. There will be costs, but life will find a way to go on.

Although people have been writing about computer security since the 1960s, it would have been difficult to get into cyberwar much earlier than the early 1990s.³ Before then, much about information war was highly classified (it was linked with strategic deception). More important, popular global networking was scarce before 1992. True, there was the Cuckoo’s Egg incident in 1987 and, before that, in 1983, an entertaining little film called WarGames in which a teenager hacks through the (presumably fictional) modem bank connected to the nation’s nuclear command and control.⁴ But cyberspace as a medium of cyberwar largely had to wait until 1992, when the National Science Foundation declared that people no longer needed to conform to acceptable use policies to get on the Internet.⁵ That essentially opened the Internet up to everybody, for better or for worse.

Fast forward to just after September 11, 2001, when I found myself listening to a lecture from a prominent cyberwar expert discussing the threat from cyberspace. He spoke of a fourteen-year-old who had managed to hack into the Roosevelt Dam and was a hair’s breadth from being able to take over its controls and flood large portions of southern Arizona. The story was passed around Washington, D.C., as a case for why the United States should increase its vigilance against hackers. But such stories rarely made it outside the Beltway—until the Washington Post picked it up, thereby bringing it to nationwide attention and hence to Arizona, whereupon those who actually operated the Roosevelt Dam saw the story.⁶ They wrote to the Washington Post to point out that although someone did get into a system, the systems in question were those that automated office work and matters such as billing and administration. But the hacker was not even in that system very long, and he never got anywhere close to the controls (and he was twenty-seven, not fourteen).⁷

There is a lesson here. Very important people believe the threat from cyberspace is something the United States must take very, very seriously. In June 2012, Secretary of Defense Leon Panetta said he was very concerned about “the potential in cyberspace of being able to cripple our power grid, to be able to cripple our government systems, to be able to cripple our financial systems, and then to virtually paralyze this country—and as far as I’m concerned that represents a potential for another Pearl Harbor.” In 2011, Admiral Michael Mullen, Chairman of the Joint Chiefs of Staff, said, “The biggest single existential threat that’s out there is cyber.”⁸ Federal Bureau of Investigation (FBI) director Robert Mueller said early in 2012 that threats from cyberespionage, computer crime, and attacks on critical infrastructure will surpass terrorism as the number-one threat facing the United States.⁹ In March 2013, James Clapper, the director of national intelligence, named cyberattacks—with the potential for state or nonstate actors to manipulate U.S. computer systems and thereby corrupt, disrupt, or destroy critical infrastructure—as the greatest short-term threat to national security.¹⁰ Early in 2013, the secretary of the Department of Homeland Security (DHS) announced that she believed a cyber 9/11 could happen imminently.¹¹ In late 2014, Admiral Michael Rogers, commander of U.S. Cyber Command (CYBERCOM), testified that there are “nation-states and groups out there that have the capability . . . to shut down, forestall our ability to operate our basic infrastructure, whether it’s generating power across this nation, whether it’s moving water and fuel.”¹² The fear was extant outside the Beltway as well; a survey of security professionals (admittedly, a self-interested source) in 2013 found that 79 percent believe that there will be some sort of large-scale attack on the information technology powering some element of the U.S.’s infrastructure—and utilities and financial institutions were the most likely targets.¹³

While that has not come to pass, there are indeed threats in cyberspace. If your information is on a network connected to the Internet and is of interest to large foreign countries (that need not be any larger or more powerful than Spain), such information is probably in their possession.¹⁴ The fact that (putatively) Chinese hackers could break into the U.S. Office of Personnel Management (OPM), security clearance contractor USIS, health insurers Anthem and Premera, and United and American Airlines (although perhaps resulting in only a partial compromise) raises the question of whether there were any organizations with useful data they did not hack.¹⁵

As Shawn Henry, former head of the FBI’s cybercrime unit, has observed, “There are two types of organizations—those that know they are under attack from cyberspace and those that don’t know they’re under attack from cyberspace.”¹⁶ It has also become apparent that not all threats in cyberspace come from overseas; if even half the material Edward Snowden provided to the media is correct, the National Security Agency has gotten into an impressive number of systems. And the ability to penetrate a network often implies the ability to muck with it, at least for a while. Furthermore, if the United States has yet to experience a major cyberattack, it may be in large part because many of the world’s best hacker organizations are parts of the governments of U.S. allies (for instance, Unit 8200 is a division of the Israel Defense Forces) or have yet to conclude that a major cyberattack on the United States is in their interest (for example, Russia or China). In the latter case, we can only guess whether it is within their capabilities. And mid-size potential cyber powers (such as Iran and North Korea) have discovered the value of having cyberwar capabilities comparatively late vis-à-vis the larger players; they may not yet have reached their full potential. All we can say is that no one has died as a direct result of hacking—and while this should put scare stories in their place, it hardly justifies complacency (after all, no one has died from an intercontinental ballistic missile strike, either). Cyberwar is clearly worth understanding.

All this calls for a text to make readers more intelligent consumers of the news, more intelligent users of technical advice, and more intelligent critics of the decisions that countries make with respect to the threat from cyberspace. This is not a technical treatment of cybersecurity. If there is a virus on your network, if your router is not working, if your systems are misconfigured, seek help elsewhere. But for those who ask what their country should do about cyberespionage or how countries should integrate cyberspace into their threat planning or cyberwar into their war planning, then read on. The goal is to help you integrate how you think about cyberwar into other elements of national power, including military power.

We start with foundational material. Whereas a deep knowledge of atomic physics is not necessary to hold your own in a discussion about nuclear strategy, being conversant in cyberwar is helped by some understanding of what happens in a computer; such knowledge helps illustrate the art of the possible.

The next section deals with policy. How does the exploitation of faults in computers put the systems that use them at risk? What national policies should be adopted to manage this risk without undue cost in terms of economic outlays, privacy, or other values?

The third section deals with operations, specifically the potential military use of cyberwar and some considerations for its command and control. In other words, here we put the war into cyberwar—or, more precisely, put the cyberwar into war.

The last section deals with strategy, the art of integrating cyberwar and peace in cyberspace into the broad approaches that countries use to manage their relationships with other countries. It discusses the potential of strategic cyberwar, deterrence strategies, escalation management, brandishing, signaling, narratives, international law, and negotiations.

PART I FOUNDATIONS

CHAPTER 1

Emblematic Attacks

Those who follow the news are having an increasingly difficult time avoiding stories associated with intrusions into other people’s computer systems. Nevertheless, because the history of these intrusions—even limited to those that hit the news—is long and well populated, a quick review of the more emblematic attacks may help in understanding the topic. Table 1.1 is the author’s list of top cybersecurity incidents (and non-incidents) grouped by type. This variegated list was selected not necessarily for the severity of the incidents but for their impact on the consciousness of the public and policymakers and for the lessons that might be drawn from them.

The rest of the chapter will concentrate on three particular types of cybersecurity incidents: the advanced persistent threat (APT) form of cyberespionage, distributed denial-of-service (DDOS), or flooding, attacks, and destructive cyberattacks. The first category, the APT, covers most of what state intelligence agencies do—spying—but can also be used as an entry point for later cyberattacks. DDOS attacks, the second category, are meant to disrupt; they constitute a lesser form of cyberattack. Destructive attacks, the third category, are rare; even so, the concept of destruction has to be stretched to include wiping the hard drive of computers (which can often then be restored to factory conditions).

But first, a brief run-down of the incidents in the other category of table 1.1 will be useful—partly because they raised consciousness, and partly because the techniques they used or demonstrated shed light on how the other three types of incidents work.

CYBERCRIME AND OTHER SYSTEM INTRUSIONS

Although the techniques of cybercrime are similar to those of cyberwar, crime—however expensive it can sometimes get—is not war, nor is war necessarily cybercrime grown unacceptably large.¹

The Cuckoo’s Egg incident involved state-sponsored cyberespionage on sensitive but unclassified U.S. Department of Energy laboratory systems. It probably would have gone undetected but for the efforts of Clifford Stoll, who noticed a tiny discrepancy between computer time used and computer time charged and assiduously tracked the discrepancy to its source, despite FBI indifference (at least that attitude has changed).

The 1988 Morris Worm resulted from an experiment to understand how an infection could spread across the entire Internet (which, in 1988, was much smaller than the Internet today). One parameter in the code was set incorrectly, infecting 10 percent of the Internet.²

The 1991 Michelangelo virus was one of many viruses that spread from floppy disk to computer to floppy disk. It was notable for two reasons: first, the agitation produced by the rumor that millions of computers would become inoperable on the great artist’s birthday, and second, a demonstration that any computer that could be infected could also be trashed to the point that a complete rewriting and reformatting would be necessary.

The 1995 Citibank hack was the first notable large-scale cybercrime. The hackers managed to transfer $10 million from various accounts but could only remove $400,000 from Citibank accounts before they were caught.³ Two lessons were that all this theoretical musing about cybercrime was real and that removing money from the banking system can be difficult.⁴

Eligible Receiver was a Department of Defense (DoD) exercise undertaken to demonstrate that hackers could disrupt major military operations without using particularly special techniques. Not only was confusion sown within a simulated wartime Pacific Command, but also the hackers made a convincing argument that they could take down electric power in Oahu. Some people in the Pentagon took the results seriously, but many had to have repeated lessons.

Solar Sunrise was an actual intrusion in 1998 into the Pentagon’s computers as DoD was preparing a series of strikes on Iraq’s air defense systems, which had radar-painted U.S. aircraft protecting Kurdish refugees.⁶ Officials initially feared that Solar Sunrise was the work of the Iraqis. Later investigation showed that these intrusions were carried out by two California teenagers working under the tutelage of a young Israeli. Two ostensibly contradictory lessons can be drawn from that episode: that the fears of state-level cyberattacks as an asymmetric response to U.S. military operations were exaggerated, and that mere teenagers could make such mischief.

The same year, a similar, stealthier attack dubbed Moonlight Maze was carried out against Pentagon computers. Patient tracing of the attack’s path led investigators to Moscow. After an initially promising dialogue, the Russians suddenly refused to help chase down the perpetrators or to allow U.S. investigators to do so, suggesting that what at first looked like random hackers may well have been the state security apparatus.

The unwillingness of the United States in 1999 during the Kosovo campaign to attack Greek banks where Serbian leader Slobodan Milosevic supposedly kept his money is a good example of an incident that did not happen.⁷ The lesson here was the sensitivity of lawyers and Treasury officials to attacks on the international banking system. An attack on a bank account is an attack on the bank’s memory. Its obligation to repay the customer for the money it borrowed, which is what a deposit is, does not go away; it just becomes harder to determine.

The 2000 I Love You virus infected millions of users of Microsoft’s Outlook e-mail system. It was another demonstration of the power of malware and was said to cost billions of dollars in lost time and remediation (an estimate that, if true, had to assume that everyone affected was totally immobilized for days).

The EP-3/Hainan confrontation in April 2001 between a U.S. Navy spy plane and a Chinese jet was echoed by an exchange of web defacements by each side’s partisans.⁹ This incident gave rise to the impression that China would wage low-level attacks on the United States through the use of proxies that allowed China deniability. It took a dozen years before that misimpression was corrected.

The 2001 Code Red worm was a rapidly propagating piece of malware strong enough to make people question whether the Internet could survive in its present form much longer.¹⁰ Code Red, however, was only the beginning of a wave of worms; it was followed by NIMDA, MyDoom, SoBig, Slammer, and MSBlaster. Each version seemed more virulent than the previous one, and they kept coming until Microsoft issued Windows XP Service Pack 2 in August 2004.

A story circulated in 2007 via a Central Intelligence Agency (CIA) presentation, and again via a 2009 CBS news report, claiming that hackers had caused a power outage in southern Brazil.¹¹ The Brazilians countered that while the power outage was real, the cause was an accumulation of soot in the power plant smokestacks, for which the company was fined several million dollars.¹²

Buckshot Yankee was the name given to the remediation effort to eradicate a worm (Agent.BTZ) that had worked its way into DoD’s air-gapped secret Internet protocol router network (SIPRnet).¹³ Indications are that the malware was transferred via a universal serial bus (USB) stick to a computer on the SIPRnet and from there to many other machines. The two lessons were that air-gapped systems could be infected and that configuration management—knowing the inventory and state of all machines on the network—was complex and difficult.¹⁴

Heartland Payment Systems, part of the hidden infrastructure of finance, provides credit, debit, and prepaid card services to small and medium-sized businesses. In 2008, an American cyber-criminal managed to steal upward of 150 million credit card numbers from them. That huge number should have been a wake-up call but apparently was not enough of one until the Target hack in 2013.

In the spring of 2013, a defense contractor working at the National Security Agency (NSA), Edward Snowden, leveraged his position as a systems administrator to take millions of files detailing all manner of the NSA’s cyberespionage activities. Although not strictly speaking a hacking attack on computers (but a serious espionage crime nonetheless), it unmasked years of NSA activity, forcing the agency to rework its tools to recover its access to the world’s networks. Many echoes from this revelation are described below.¹⁵

In 2014, researchers at Google and the Finnish firm Codenomicon discovered that a piece of commonly used code within the secure sockets layer (SSL)—a standard method of securing e-commerce transactions—had a vulnerability, Heartbleed, that allowed hackers to extract passwords from systems running the code.¹⁶ Once the flaw was revealed to the world, adroit system administrators quickly patched their systems. Unfortunately, some hackers managed to replicate the exploit and torment the systems of dilatory ones.

The 2013 Christmas season presented Target, the giant retailer, with the revelation that hackers had stolen information from tens of millions of credit cards.¹⁷ This information was sold into the black market and used to burn new credit cards, which led to charges showing up on the accounts of unsuspecting customers. Banks subsequently had to issue new cards and were left on the hook for hundreds of millions of dollars of fraudulent transactions. Normally, credit card transactions are encrypted except for a brief interval during which they are processed in the cash register. Malware in the cash register (notably, Windows XP machines) ensured that a record of that interval was faithfully captured. The subsequent dismissal of Target’s chief executive officer (CEO), albeit also for an ill-advised expansion into Canada, provided the wake-up call that company boards finally heeded.

Hackers of apparent Russian origin (but actually of Israeli and U.S. origin) established a presence on one or more servers of JPMorgan Chase (as well as some smaller banks) and managed to steal nothing more than a list of customers with their physical addresses, e-mail addresses, and phone numbers. Because the hackers appeared to have failed at doing something larger, speculation about their true goals bubbled. Perhaps spending $250 million a year on cybersecurity, as JPMorgan Chase did, may actually have prevented hackers resident in its systems for months from stealing a penny.

Russia and Ukraine, two comparably advanced countries with first-rate hackers, went to war with each other, and except for some minor DDOS capers, nothing much happened. There were no large DDOS attacks. Neither country went after the other’s infrastructure (until late 2015, when a hack apparently led tens of thousands of customers in western Ukraine to lose power for a few hours¹⁸) or successfully robbed the other’s bank accounts. No instances of hacks into the other’s military systems have been revealed. Indeed, so absent was cyberwar that the moniker was stretched to cover propaganda (of which there was plenty), jamming, and Russia’s takeover of Crimea’s phone service.

THE ADVANCED PERSISTENT THREAT

An APT denotes an intruder that can establish a persistent presence in a target network from which data can be constantly extracted and exfiltrated (leveraging a persistent presence also allows a disruptive or corruptive attack to be launched). Most systems are penetrated not to interfere with their workings, but rather to force them to share information with hackers; the number of cyberespionage incidents is far greater than the number of cyberattacks.

The information sought by hackers may not be classified secret. It may not even be particularly sensitive when each hacked piece of information is considered in isolation. But aggregating each datum into data can be valuable: a single datum about Wal-Mart’s inventory is uninteresting, but the way that Wal-Mart manages billions of such records can reveal much of how it came to be the world’s largest retailer. One defining feature of cyberespionage is that it can deal in quantity—literally terabytes. Exploitation—sorting through all the terabytes to find something of value—is another matter, one that begins and ends in darkness (unless the nature or volume of exfiltration is noticeable). By contrast, traditional espionage expends a great deal of time and effort to elicit a key fact—for example, where and when the enemy is going to attack—the exploitation of which is more obvious.

The APT moniker has often been used as a euphemism for Chinese espionage into Western (primarily American) systems. Although China may conduct most such espionage, the Russians, especially since 2014, and other countries are not entirely absent.¹⁹ Chinese sources were implicated in Titan Rain, a penetration of Department of Energy laboratories from 2003 to 2005.²⁰ The individual who chased the attacks down marveled at the methodical, efficient, and faultless procedures used. Subsequent attacks targeted the Naval War College, the National Defense University, and the Departments of Commerce and State. One brazen attack, revealed in 2007, compromised the machine personally used by the secretary of defense.

A well-documented case of cyberespionage, Snooping Dragon, targeted the Free Tibet movement in general and the Dalai Lama’s organization in particular.²¹ That the Chinese were responsible is suggested by the exfiltration of stolen data to Chinese servers in Xinjiang and Sichuan and by the fact that no other country has such an interest in the status of Tibet. The attackers, purportedly posing as members of Tibetan discussion groups, sent e-mail with malware-impregnated attachments to various Tibetan monks. Opening the attachments on the recipients’ computers released malware into these computers in the form of rootkits: programs operating deep within the operating system, thereby allowing hackers to tell infected computers to do anything a legitimate user could. Such rootkits were designed to evade file searches or attempts to log them as running processes. Infected computers then infected others on the network. Such computers, in turn, ran malware that examined and forwarded e-mails. The investigators concluded that such attacks did not require a sophisticated intelligence organization. A sufficiently diligent individual could have done this by exploiting access to hacker sites: “Best-practice advice that one sees in the corporate sector comes nowhere even close to preventing such an attack. . . . The traditional defense against social malware in government agencies involves expensive and intrusive measures . . . not sustainable in the economy as a whole.”²²

In January 2010, Google discovered that it too had been attacked and some of its source code removed via servers in Taiwan. Unprecedentedly, Google executives admitted as much and pressed the U.S. government to raise this intrusion as an international issue.²³ Although the software guarding the repository itself had exploitable errors, it appears that client machines were allowed rather free access into the repository, when more secure practice would have been to require multi-factor authentication (discussed below) for access (although if someone who legitimately wanted to download code had an infected machine, the malware could have gone to work as soon as the connection was made). The route into Google’s system was apparently through a vulnerability in Microsoft’s Internet Explorer version 6 unpatched by Microsoft (also known as a zero-day vulnerability) but actually known about since the previous September. Once the flaw was exposed in the popular press, a fix was generated within two weeks. Google, on its own, decided to swear off Explorer for its staff (but that may be because they had a rival product, Chrome). The so-called Aurora series of attacks affected thirty-three other systems as well.²⁴

In 2011, there were reports that other hackers had attempted to compromise Gmail accounts maintained by U.S. government officials and others. The technique called for sending users a phony e-mail directing them to a fake Google site where they were asked to log in again. The hackers thereby captured the credentials so that they could later log in to user e-mail boxes and steal their correspondence. Note that the security hole that permitted the attack would have been the users’, not Google’s.

Another series of attacks, Shady Rat and Night Dragon, showed the industriousness of the Chinese hackers. Shady Rat was uncovered by researchers at McAfee, who found a server through which stolen files from seventy-four hacked firms were cached for later delivery. Most, but not all, of these firms were in the United States; the businesses ranged from industry to commercial real estate. Night Dragon’s hackers sought hints on how these companies evaluated certain oil patches and what they were prepared to bid on them—helpful in divining what such drilling rights were worth or how to underbid the oil majors for drilling rights.²⁵ Similar attacks have been carried out to determine what the target firm’s negotiation positions were. Law firms have proven to be soft targets for such penetrations, because they keep highly privileged data but have traditionally not been the most computer-savvy of institutions (or large enough to afford sophisticated information technology staffs).²⁶

The 2011 hacks of cybersecurity company RSA proved that even companies in the security business can be had. The effect of this attack may not have been limited to RSA, because the hackers stole the seed numbers from which the pseudo-random numbers in RSA’s tokens were generated. RSA advised its clients to migrate from a four-digit personal identification number (PIN) to a six-digit one but did not call for a rapid wholesale replacement of digital fobs. A few months after the attack, hackers supposedly used information garnered from the attack to go after Lockheed, but that attack was apparently thwarted.²⁷

Lockheed has been a prominent target for the Chinese, who in 2009 managed to break into the systems associated with the F-35 aircraft under development and purloin several terabytes of data.²⁸ The impact of what was taken remains debatable. In theory, all the Chinese took was unclassified data, and the amount that they could usefully learn from such data about the F-35 itself should have been limited (China has likely learned far more about making advanced jet aircraft by buying some from Russia and reverse-engineering what they found). Yet rumors persist that the aggregation of these purloined unclassified data might have provided China with information that was equivalent to top-secret data and that substantial cost overruns in building the aircraft may have been exacerbated by the need to redesign it because of what the Chinese learned about the then-current design.²⁹ There have also been reports that other acts of cyberespionage may explain the rapid increase in the quietness of Chinese submarines; similar stories about Russian submarines also growing quieter very quickly (albeit by purchasing European machine tools) were circulated in the 1980s.

What do all these intrusions say about APTs? First, there is a good reason for the word persistent. The average time between compromise and discovery is up to a year—and that is only for penetrations that have, in fact, been discovered (including as-yet-undiscovered penetrations would likely raise that average detection time substantially).³⁰ Oftentimes these penetrations are discovered only because servers that contain information about some penetrated companies are discovered in the course of looking for information on others. Organizations commonly find out from outsiders (such as the FBI) that they have been penetrated when they themselves had no clue.³¹

Second, the Chinese themselves have poor tradecraft.³² The feeble attempts made to hide the path along which malware came in or data went out seem unimaginative. The fact that the files found on intermediate servers are not encrypted means that those who find such files can read them, guess where they came from, and inform the victims, thereby allowing them to stanch the bleeding. Anyone who uses the same method to penetrate thirty-three companies, à la Aurora, is asking for trouble the first time a penetration is discovered.³³ In 2012, the NSA circulated estimates that a dozen groups in China are responsible for most of the APT intrusions.³⁴ The Mandiant report presented copious evidence that at least one group, Unit 61398, worked for the People’s Liberation Army (PLA) and had its own office building.³⁵ Since then, others have been identified.³⁶ Nothing is usually done about the hackers—which is why they put so little effort into hiding their tracks.

Third, the United States is not the only victim, contrary to China’s line that such accusations are inventions of U.S. media looking to reinvent the Cold War. Accusations have come from Germany (Chancellor Angela Merkel brought this issue up personally with her Chinese counterparts) and the United Kingdom (which warned companies in public against such threats), as well as Canada, Australia, Taiwan, Japan, and India.³⁷

Fourth, cyberespionage is of a piece with many other policies. Chinese or China-associated individuals have been implicated (and convicted) in many physical espionage operations.³⁸ China has trade restrictions on the import of certain types of content—for example, only a dozen films may be imported in any one year. Meanwhile, commercial intellectual property that finds its way into China is largely stolen; in early 2011, nine in ten copies of Microsoft Windows in China were bootlegged (copies that typically go unpatched), which suggests how easily China’s infrastructure can be penetrated by hackers.³⁹ Applications to import products into or start manufacturing in China are frequently held hostage to demands that corporations release a great deal of their intellectual property to native firms before getting permission.

As later chapters relate, the United States has begun pushing back against Chinese APTs, notably by indicting five members of the PLA in May 2014 for carrying out cyberespionage against private corporations and a labor union in the Pittsburgh area. More calls were issued for sterner words with China.⁴⁰

DISTRIBUTED DENIAL-OF-SERVICE ATTACKS

In 2000, major e-commerce sites from Amazon to America Online became inaccessible due to an unexpected volume of web traffic directed their way. This attack, cleaned up after a few hours, was traced to a teenager in Montreal (also known as MafiaBoy), who learned how to craft malformed packets that interacted badly with transmission control protocol/Internet protocol (TCP/IP) and then send them out in flows large enough to tie up very large systems. Thus was born the first widely reported manifestation of the DDOS attack.

In April 2007, a DDOS attack was carried out that radically darkened how people viewed such attacks. Earlier that month, Estonia had decided to relocate a statue of a Soviet soldier from downtown Tallinn to a military cemetery. Riots ensued (resulting in one death and many injuries), but what caught the world’s attention was that Estonia was bombarded by a DDOS attack, which peaked at 4 billion bytes per minute. The attack, directed against Estonian government sites, banks, and other infrastructure, made life difficult in a country that had so enthusiastically embraced the Internet that it called itself “E-stonia.” After a few days, Estonia cut its international connections, thereby cutting off most of the traffic. This allowed local access to local sites, but it also prevented overseas Estonians (notably guest workers in other parts of Europe) from accessing sites (such as their bank accounts). After waves of attacks stretching over days and weeks, matters quieted down. Estonia rerouted its Internet traffic with the assistance of router company Cisco and content distribution company Akamai. The option of blocking traffic from just Russia would not have helped much, since the attacks came from all over the world. By one estimate, one packet in six was from the United States.⁴¹ It is unclear whether the attacks were instigated by the Russian state, Russian citizens, ethnic Russians in Estonia, or some mix of them.⁴²

Nevertheless, someone in Moscow must have liked the results well enough, because something similar happened in August 2008 against Georgia. That the attacks started just before Russia’s troops moved south into the Georgian lands of South Ossetia suggested some tipoff between the attackers on the ground and those in cyberspace. Because Georgia was not nearly so wired as Estonia, the harm was far less. The primary effect was to complicate efforts by the government of Georgia to communicate its perspective on the Russian invasion to the rest of the world. After a brief interruption, many of Georgia’s websites were rehosted on U.S. servers owned by Google and by Tulip, a U.S. firm that employed some Georgian nationals. Unconfirmed rumors allege that the DDOS attacks affected Georgia’s ability to command and control its armed forces.

Some DDOS attacks (those on Georgia and many carried out by Anonymous) generate volume by mobilizing like-minded computer owners to bombard a nominated site (also known as a low-orbit ion cannon). But the big attacks find and reprogram other people’s computers—thereafter known as bots or zombies—to flood selected websites with traffic.

A common way to recruit bots is to corrupt popular websites (often via their advertisements, which are handled by third parties), wait until they are accessed, and then have the sites download malware onto the machines of the unwary. Bot-herders are indifferent as to who is infected. Theirs is solely a numbers game. For this reason, bot-herders rarely bother with zero-day exploits because they do not need to. Bot-herders might threaten a DDOS attack to shut down a site that expects a high volume of lucrative traffic at a particular time: for example, gambling sites during or just before a major sporting event. Sometimes such sites have to be hit in order to put weight behind such threats. They are also used to knock dissident sites offline or to distribute malware.⁴³ Botnets also serve other purposes such as spamming or running pump-and-dump schemes that manipulate stock prices. Harvesting personal data or distributing banking malware (for example, GameOver Zeus) are other uses. The true attacker need not own a botnet since many are available for rent through one of many black markets.⁴⁴

The September and December 2012 DDOS attacks allegedly carried out by Iran against U.S. banks managed to subvert insufficiently protected WordPress blogging software servers rather than individual users to generate large floods.⁴⁵ This stands in contrast to most other botnets whose bots were created by subverting thousands or sometimes millions of computers belonging to less savvy users—those who do not patch their machines and may not even notice that their broadband-connected (and sometimes always-on) machines are spewing out a profusion of bytes. Even if they noticed, it is not clear that they would care much as long as their machines did not sputter.

The 2013 DDOS attack on anti-spam site Spamhaus was large enough to have clogged service to sites that had the bad luck to sit on the routes preferred by the bots.⁴⁶ If enough of the wrong type of traffic can be thrown against certain routers, they can crash (and be knocked offline), and then nothing gets through. As of the writing of this book, the largest DDOS attacks were carried out against independent news sites that organized mock elections for Hong Kong’s chief executive: one at 500 gigabits per second and an early 2016 attack that reportedly exceeded 600 gigabits per second,⁴⁷ both up several-fold from the record 90 billion bits per second of 2007.⁴⁸ The attack on GitHub was a very large-scale DDOS attack over a few days in April 2015 that was, in all likelihood, hosted on the backbone of China Unicom, which is not only a major service provider but also a host of parts of the Great Firewall.⁴⁹

By one estimate, as many as 100 million computers were considered to be bots. Some of the larger botnets, such as Mariposa or Conficker, have 5 million to 10 million computers.⁵⁰ Up to one in ten packets over the Internet had been considered part of some bot attack.⁵¹ Several years ago, the Internet passed the point where more than half of all e-mail traffic was generated by spam-bots. Fortunately for users, commercial e-mail providers have become quite good at filtering spam. But DDOS traffic still wastes bandwidth. Although bot-herders tend to come out of Russia or eastern Europe, the servers that host the command and control apparatus are commonly American.
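
To put these figures side by side, here is a rough back-of-envelope sketch (an editorial illustration, not the book's) of what a flood the size of the 600-gigabit-per-second attack mentioned above implies about botnet size; the per-bot upload rate is an assumed value chosen only for illustration.

```python
# Back-of-envelope: how many bots would a ~600 Gbps flood require?
# The per-bot upload rate is an assumption for illustration, not a measured figure.

ATTACK_GBPS = 600                   # reported size of the early-2016 attack cited above
PER_BOT_MBPS = 1.0                  # assumed average upload per compromised machine
ESTIMATED_BOTS_WORLDWIDE = 100e6    # the "as many as 100 million" estimate cited above

bots_needed = ATTACK_GBPS * 1000 / PER_BOT_MBPS          # convert Gbps to Mbps, then divide
share_of_pool = bots_needed / ESTIMATED_BOTS_WORLDWIDE

print(f"Bots needed at {PER_BOT_MBPS} Mbps each: {bots_needed:,.0f}")     # roughly 600,000
print(f"Share of the estimated worldwide bot pool: {share_of_pool:.2%}")  # roughly 0.60%
```

Under these assumptions, even a record-setting flood needs well under 1 percent of the estimated bot population, consistent with the observation above that attackers can simply rent the capacity they need.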

Could Internet traffic be kept clean by installing scrubbing devices at key nodes and separating the wheat from the chaff? Unfortunately, nothing obvious distinguishes bot packets from legitimate ones. Examining similarities between the individual pieces of bot traffic within a broader pattern of daily traffic (for example, sites getting too many packets of too similar a nature, especially from previously identified bots) might provide hints of how bot traffic may be differentiated from normal traffic and thereby tossed from the system—but with costs. For instance, dropping traffic from a source that has never communicated with a site may eliminate most attack traffic but also may keep newcomers out. Filtering DDOS traffic requires accepting all that traffic in the first place and must therefore be done upstream of the targeted site.⁵²
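
To make the heuristics sketched above concrete, the fragment below is a minimal, illustrative rate-based filter (not taken from the book): sources that have corresponded with the site before always pass, while unknown sources are dropped once they exceed a packet budget. The window length, budget, and addresses are arbitrary assumptions.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10           # assumed sliding-window length
MAX_PACKETS_PER_WINDOW = 50   # assumed per-source budget for unknown sources

known_sources = {"198.51.100.7"}   # prior legitimate correspondents (illustrative)
recent = defaultdict(deque)        # source -> timestamps of its recent packets

def accept(source_ip, now=None):
    """Return True if a packet from source_ip should be forwarded."""
    now = time.time() if now is None else now
    if source_ip in known_sources:
        return True                            # never rate-limit prior correspondents
    window = recent[source_ip]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()                       # discard timestamps outside the window
    return len(window) <= MAX_PACKETS_PER_WINDOW  # drop newcomers that exceed the budget

# Example: an unknown source sending 200 packets in two seconds gets its
# first 50 through and the remaining 150 dropped.
dropped = sum(not accept("203.0.113.99", now=t * 0.01) for t in range(200))
```

Even this toy version shows the trade-offs the text describes: genuine newcomers look exactly like low-rate bots, and the filter has to receive the flood before it can discard it, which is why such scrubbing sits upstream of the targeted site.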

Sometimes bot traffic can be squelched by looking at infected computers and seeing where they get their (invariably encrypted) commands from. However, if that technique starts to become a serious problem for bot-herders, they can adopt peer-to-peer command and control networks. Their doing so would greatly complicate the process of figuring out who gave which command.

Nevertheless, botnets have been taken down. Cooperation between the FBI and the governments of Spain and Slovenia dismembered the Mariposa botnet.⁵³ Court action against McColo, an Internet service provider (ISP) that became too chummy with some bot-herders, led to a drastic (but temporary) reduction in spam.⁵⁴ Microsoft has taken it upon itself to go after botnets and can claim some success against the Rustock botnet and others.⁵⁵ Scott Charney, a Microsoft security official, uses a public health analogy to argue for exactly such interventions: users suffer little from being part of botnets, but their victims can suffer a great deal—hence the argument for herd immunity.

Finally, can a DDOS attack take down the Internet by taking down its Domain Name System (DNS), the service that converts the names used in websites and e-mail addresses into machine addresses? The largest such attack, in February 2007, had only a limited effect on the DNS, thanks to engineering fixes installed after the previous large attack, in October 2002.⁵⁶

A close cousin of the DDOS attack (in that the victims are also largely blameless) is one that leverages the Border Gateway Protocol (BGP). This protocol determines the route a packet takes to its destination by allowing ISPs to declare to the world that a given site is best reached through their gateways. If an ISP so chooses (or is hacked), it can deliberately misroute traffic by declaring itself part of the shortest route between two points, even if both are on the other side of the world.⁵⁷ Mistakes can create the same effect; indeed, distinguishing attacks from mistakes is not trivial. In 2008, YouTube became unreachable for practically all Internet users after a Pakistani ISP altered a route in a ham-fisted attempt to block the service in just that country.⁵⁸ Several years later, an Indonesian ISP took out Google for thirty minutes.⁵⁹ In 2010, a large percentage of all U.S. traffic wended its way through China for eighteen minutes.⁶⁰ The next year, a large chunk of Facebook’s traffic was also diverted to China in an incident that one security expert called an accident and another called an attack (route hijacking).⁶¹ China was itself on the receiving end of such an incident in early 2014: “A large portion of Internet traffic in China on Tuesday was redirected to servers run by a small U.S. company. The company, which publicly opposes China’s efforts to control Internet content, says it wasn’t at fault.”⁶² In a more suspicious incident, traffic from a British manufacturer of nuclear components was routed through Ukraine before returning to Britain along the same route.⁶³
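The mechanics can be sketched with a toy routing table. Real BGP route selection weighs many attributes, but longest-prefix matching alone shows why announcing a more specific route captures traffic; the prefixes below are patterned on public accounts of the 2008 YouTube incident, and the next-hop labels are illustrative.

# Sketch of longest-prefix-match forwarding (a simplification of how announced routes
# are used). Prefixes are patterned on public accounts of the 2008 YouTube incident;
# the next-hop labels are illustrative.
import ipaddress

routing_table = {
    ipaddress.ip_network("208.65.152.0/22"): "legitimate provider",  # the original announcement
    ipaddress.ip_network("208.65.153.0/24"): "hijacking ISP",        # the more specific announcement
}

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)  # the most specific prefix wins
    return routing_table[best]

print(next_hop("208.65.153.238"))  # "hijacking ISP" -- traffic bound for the more
                                   # specific block goes to whoever announced it

Because routers prefer the most specific matching prefix regardless of who announced it, a single errant or malicious announcement can divert traffic worldwide within minutes.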

The weaknesses in BGP arise from the assumption that ISPs are trustworthy and that hijacking is rare enough to justify not having them digitally sign their routing declarations. The more the world’s traffic is encrypted, the smaller the loss from route diversion (for example, because it is pointless to divert traffic that cannot be read). So far, these assumptions have more or less held. But this leaves the possibility of an ISP going completely rogue, in large part because its country has as well. A full-scale attack using BGP could seriously bedevil the entire Internet until the offending country is taken off the Internet map—a process that may take hours or longer.

STUXNET AND OTHER DESTRUCTIVE ATTACKS

In 2007, DHS and the Idaho National Laboratory ran an experiment named Aurora in which a generator of the sort that powered the Alaskan oil pipeline was fed errant instructions and went into self-destruct mode, eventually shaking itself to death in a cloud of smoke. People learned from this that cyberwar could have kinetic effects (even if the particular flaw was quickly fixed).

Three years later, researchers discovered the first worm designed to break machinery: Stuxnet, which targeted the centrifuges Iran used to enrich uranium at Natanz. Even now, Stuxnet stands out for its sophistication and daring. No one before (or since) has succeeded in so penetrating computers connected to neither the Internet nor a phone system. The malware crossed from whatever open networks were near the centrifuges (in the sense of being networks of suppliers to Natanz⁶⁴) to the closed network that hosted the computers that could manipulate the programmable logic controllers (PLCs) governing the centrifuges in ways that would ultimately destroy them. The greater the number of infected computers around Natanz, the more opportunities for such a transfer—hence the need for a broad propagation mechanism to infect as many computers, and hence as many USB sticks, as possible (although some reports suggest the owner of the USB stick was witting).⁶⁵ For most computers, the infection would have next to no effect apart from transferring itself to other computers.

How an infected USB stick can compromise a computer into which it is inserted is worth noting. Prior to 2008, Windows computers by default ran programs named in a small autorun file placed at the root of removable media as soon as the device was inserted or opened. Early versions of Stuxnet relied on that mechanism. When Microsoft awakened to its problematic nature and Windows stopped automatically running such programs, the hackers found a flaw in the routine that told the computer what to do when it read the directory of those devices (the flaw lay in how Windows rendered the icons of shortcut files). This trick was not widely known; hence, it was a zero-day vulnerability.⁶⁶ Stuxnet exploited three other zero-day vulnerabilities as well; they helped the introduced program escalate its privileges so that it could spread widely and quickly. Never before had four zero-day vulnerabilities been found in one piece of malware.⁶⁷ Finally, Stuxnet also used certificates stolen from two reputable companies (that seemed to share a parking lot) so that computers would recognize its rogue code as legitimately sourced.⁶⁸
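For illustration, the pre-2008 AutoRun behavior was driven by a plain-text file at the root of the removable drive. The file names below are hypothetical, and exactly how aggressively a given version of Windows acted on the file varied by version and device type.

[autorun]
; A file like this at the root of a removable drive told pre-2008 Windows what to run,
; or offer to run, when the drive was inserted or opened (file names are hypothetical).
open=setup.exe
icon=setup.ico
action=Install drivers

Once Windows stopped honoring such files for USB drives, malware authors needed a different hook, which is why the shortcut-icon flaw mattered.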

The last step was going from the network infection to a reprogrammed centrifuge. It was initially thought that the centrifuges’ PLCs were infected while on the floor. More likely, the PLCs were infected while being programmed on computers running the PCS7/WinCC software developed by Siemens, the same company that supplied the PLCs. Thus, the worm did not affect all centrifuges, only those being programmed prior to use (or reprogrammed after being pulled offline).⁶⁹ The older centrifuges were not subject to real-time control and thus were not affected by Stuxnet. More broadly, machinery that cannot be reprogrammed in situ and that does not need programming tends to resist being fed arbitrary instructions (although built-in instructions might be invoked to bad ends by insiders or by hijacking authentication). Machinery subject to real-time control, by contrast, can have its controls usurped. Normally, such controllers require a password before they can be reprogrammed, but every PLC of that type shipped with the same password, which users could not change. The creators of Stuxnet merely had to visit hacker bulletin boards to find out what the password was.
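A minimal sketch of why a fixed, unchangeable credential is so damaging follows; the password string and the interface are invented for illustration and do not describe Siemens’ actual code.

# Hypothetical illustration of a hard-coded, unchangeable credential.
# The constant and the function are invented; they are not Siemens' implementation.
HARDCODED_PASSWORD = "factory-default"  # identical on every unit and not user-changeable

def authorize_reprogramming(supplied_password: str) -> bool:
    # Once the constant leaks (for example, onto a hacker bulletin board),
    # every deployed controller of this type is open to reprogramming.
    return supplied_password == HARDCODED_PASSWORD

Because the secret is identical everywhere and cannot be rotated, a single disclosure compromises the entire installed base.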

These centrifuges, having been commanded to execute rapid changes in operating speeds, died over the subsequent weeks and months. Why did the Iranians not suspect that their programming had been corrupted? Perhaps they knew they were dealing with black- and grey-market parts of unpredictable quality. Meanwhile, they were getting no help from Siemens, which never knowingly authorized any such sale to the nuclear facility.

The Iranians knew that many problems could make their equipment fail (some of it had in fact been physically sabotaged before arriving at the loading docks), so any failure could have had one of a hundred fathers. The premature death of the centrifuges may thus have looked like an unavoidable cost of doing everything under the table. And since the facility was air-gapped, operators may have been confident that the source of failure was not a cyberattack—until they found that it was. Stuxnet, moreover, reprogrammed the very chip that controlled how the centrifuges reported on what they were doing; operators were therefore told that nothing untoward was going on.

All this points to a fundamental blunder of process control: never put a controller, which may misbehave, and a monitor, which checks for misbehavior, on the same device, because both may err from the same cause—in this case, a cyberattack. Why were the Iranians insufficiently aware of this axiom? And why were they literally deaf to unexpected changes in rotational speeds that were well within the audible range (normal speeds were 1,000 revolutions per minute [rpm] as opposed to induced speeds, which
