

IEEE Task Force Report

Blackout Experiences and Lessons,
Best Practices for System Dynamic
Performance, and the Role of New
Technologies

Final Report

May, 2007

Prepared by the
Task Force on Blackout Experience, Mitigation, and Role of New
Technologies, of the
Power System Dynamic Performance Committee, of the
Power Engineering Society, of the
Institute of Electrical and Electronics Engineers

IEEE 2007
The Institute of Electrical and Electronics Engineers, Inc.

No part of this publication may be reproduced in any form, in an electronic retrieval system or otherwise, without the

prior written permission of the publisher.


IEEE TASK FORCE ON BLACKOUT EXPERIENCE,
MITIGATION, AND ROLE OF NEW TECHNOLOGIES

CHAIRMEN: P. KUNDUR AND C. TAYLOR


SECRETARY: P. POURBEIK

MEMBERS AND CONTRIBUTORS

R. Adapa P. Gomes T. Ohno


E. Allen D. Hammerstrom B. Pal
G. Andersson N. Hatziargyriou L. Pereira
L. Angquist J. F. Hauer S. T. Phan
M. Bahrman M. Henderson W. Qiu
A. Berizzi C. Henville C. Rehtanz
N. Bhatt H. Huang W. Sattinger
S. Boroczky S. Imai V. Sermanson
R. Boyer C. Jiang M. Sforna
C. Canizares E. John C. J. Singh
Q. Chen R. Kumar A. K. Teh
J. Chow S. Larsson K. Uhlen
S. Corsi N. Martins L. Vargas
J. E. Dagle J. D. McCalley V. Vittal
A. Danell K. Morison C. Vournas
R. W. de Mello K. Moslehi R. Wilson
I. Dobson E. Muljadi W. Wong
R. Farmer V. Nolto Porto

INDEX OF AUTHORS

(LISTED IN ALPHABETICAL ORDER)

Chapter 1 Power System Dynamic Performance Aspects of
Experienced Large-Scale Blackouts
E. Allen, G. Andersson, A. Berizzi, S. Boroczky, C. Canizares, Q. Chen, S.
Corsi, J. E. Dagle, A. Danell, I. Dobson, R. Farmer, P. Gomes, N.
Hatziargyriou, J. F. Hauer, S. Imai, C. Jiang, S. Larsson, J. D. McCalley, T.
Ohno, B. Pal, P. Pourbeik (Lead Editor), W. Qiu, M. Sforna, V. Sermanson,
C. J. Singh, A. K. Teh, K. Uhlen, L. Vargas and C. Vournas

Chapter 2 Best Practices to Improve Power System Dynamic
Performance and Reduce Risk of Cascading Blackouts
R. Adapa, E. Allen, A. Berizzi, N. Bhatt, R. Boyer, J. Chow, S. Corsi, R. W. de
Mello, P. Gomes, J. Hauer, C. Henville, S. Imai, K. Morison, E. Muljadi, L.
Pereira, P. Pourbeik, C. Taylor (Lead Editor), V. Vittal and R. Wilson

Chapter 3 New Technologies to Improve Power System Dynamic
Performance
L. Angquist, M. Bahrman, J. Dagle, D. Hammerstrom, H. Huang, E. John, R.
Kumar (Lead Editor), V. Nolto Porto, K. Moslehi, P. Pourbeik, C. Rehtanz
and W. Wong

Main Editors: P. Kundur, P. Pourbeik and C. Taylor

TABLE OF CONTENTS

EXECUTIVE SUMMARY ix
Background ix
Summary of Findings, Conclusions and Recommendations ix

CHAPTER 1 LARGE-SCALE POWER SYSTEM BLACKOUTS: CAUSES, POWER SYSTEM
DYNAMIC PERFORMANCE ASPECTS AND LESSONS LEARNED
1.1 Introduction 1-1
1.2 Major US/Canadian Blackouts 1-1
1.2.1 Northeast Blackout of November 9-10, 1965 1-1
1.2.1.1 The Power System and Initial Conditions 1-2
1.2.1.2 Sequence and Significance of Events 1-3
1.2.1.3 Restoration 1-5
1.2.1.4 Types of Instability 1-6
1.2.1.5 Lessons Learned and Resulting Practices 1-6
1.2.2 North American Northeast Blackouts of 1977 and 2003 1-10
1.2.2.1 Blackout of 1977 1-10
1.2.2.2 August 14, 2003 Blackout 1-12
1.2.3 Western Interconnection System Breakups of 1996 [6, 7] 1-18
1.2.3.1 Western System Breakup of July 2, 1996 [8-10] 1-18
1.2.3.2 Western System Breakup of August 10, 1996 [11-15] 1-18
1.2.3.3 Key Lesson Learned 1-20
1.3 Major Blackouts in Europe 1-21
1.3.1 Blackout in Southern Sweden and Eastern Denmark September 23, 2003 1-21
1.3.1.1 Course of Events 1-21
1.3.1.2 Restoration 1-25
1.3.1.3 Conclusion 1-27
1.3.1.4 Determined Actions 1-27
1.3.2 Italian Blackout of September 2003 1-28
1.3.2.1 Background 1-28
1.3.2.2 Dynamic Phenomena During the Event [19-23] 1-29
1.3.2.3 Restoration 1-33
1.3.2.4 Lessons Learned 1-35
1.3.3 French Blackout of December 19, 1978 1-36
1.3.3.1 System Conditions Leading Up to the Initial Event 1-36
1.3.3.2 First Stage: The 8:26 am Incident 1-38
1.3.3.3 Second Stage: First Attempt of Service Restoration: 1-39
1.3.3.4 Third Stage: Second Event at 9:08 am 1-40
1.3.3.5 Fourth Stage: Service Restoration After 9:08 am 1-40
1.3.3.6 Summary and Comments 1-41
1.3.4 Blackouts in the Nordic Countries 1-41
1.3.4.1 Sweden 1983 1-41
1.3.4.2 Helsinki 2003 1-42
1.3.4.3 Western Norway 2004 1-43
1.3.5 Blackouts in Greece 1-43
1.3.5.1 Blackout of July 12, 2004 1-43
1.3.5.2 The Crete Power System Blackout of 25 October, 2001 1-48
1.3.6 Blackout in the Swiss Railway Electricity Supply System: 22 June 2005 1-51
1.3.6.1 Introduction 1-51
1.3.6.2 Description of the Blackout 1-52
1.3.6.3 Determined Actions 1-55
1.3.7 August 2003 Blackout in the United Kingdom 1-55
1.3.7.1 Overview of London Power System 1-55
1.3.7.2 Power Supply Arrangement of South London 1-56
1.3.7.3 Operating Arrangements on That Day and the Effect on Supply of Demand
1-56
1.3.7.4 Sequence of Events During the Incident 1-57

1.3.7.5 Root Cause Based on Subsequent Analysis 1-58
1.3.7.6 Conclusion: 1-59
1.3.8 UCTE Major Disturbance of 4 November, 2006 [39] 1-59
1.3.8.1 System Conditions Prior to the Event 1-59
1.3.8.2 The Event 1-61
1.3.8.3 Dynamic Issues During the Event 1-62
1.3.8.4 Remedial Actions Taken in the Three Subsystems: 1-63
1.3.8.5 Conclusions and Recommendations 1-64
1.4 Blackouts in South America 1-65
1.4.1 Brazilian Blackouts 1-65
1.4.1.1 Blackout of April 18th, 1984 ( 4:34 pm ) 1-65
1.4.1.2 Blackout of August 18th, 1985 ( 6:40 pm ) 1-65
1.4.1.3 Blackout of December 13th, 1994 ( 10:12 am ) 1-65
1.4.1.4 Blackout of March 26th, 1996 ( 09:18 am ) 1-66
1.4.1.5 Blackout of March 11th, 1999 ( 10:16 pm ) 1-66
1.4.1.6 Blackout of May 16th, 1999 ( 06:05 pm ) 1-67
1.4.1.7 Blackout of January 21st, 2002 ( 01:34 pm ) 1-67
1.4.1.8 Summary of Lessons Learned 1-68
1.4.2 May 1997 Chilean Blackout 1-69
1.5 Blackouts in South East Asia and Australasia 1-72
1.5.1 Blackouts in Japan: The 1987 Tokyo Blackout 1-72
1.5.1.1 Sequence of Events 1-72
1.5.1.2 Voltage Collapse 1-76
1.5.1.3 Root Causes 1-76
1.5.1.4 Lessons Learned 1-78
1.5.2 Australian Blackout of Friday 13th August 2004 [73] 1-81
1.5.3 Central-South System Collapse of the Peninsular Malaysia Grid System on 13th
January 2005 1-84
1.5.3.1 Incident Description 1-84
1.5.3.2 Lessons Learned 1-88
1.5.4 Power System Disturbance In Peninsular Malaysia Grid on Wednesday 18th
November 1998 1-89
1.5.4.1 Events Leading to the System Separation 1-89
1.5.4.2 Key Factors Causing System Separation 1-89
1.5.4.3 Widespread Loss of Generation and Load 1-90
1.5.4.4 Conclusions 1-91
1.5.4.5 Lessons Learned and Recommendations 1-91
1.6 NERC Historical Data on Blackouts and Overall Blackout Risk 1-93
1.7 Probability Models for Estimating the Probabilities of Cascading Outages 1-95
1.8 Chapter Summary 1-97
1.9 References 1-102

CHAPTER 2 BEST PRACTICES TO IMPROVE POWER SYSTEM DYNAMIC PERFORMANCE
AND REDUCE RISK OF CASCADING BLACKOUTS
2.1 Introduction 2-1
2.2 Power Plant Equipment 2-1
2.2.1 Generator controls 2-2
2.2.2 Generator protection 2-3
2.2.3 Model Validation 2-4
2.2.4 Other Issues 2-4
2.2.5 Summary of dynamic performance best practices for power plant equipment 2-5
2.3 Reactive Power Compensation and Control 2-5
2.3.1 Rotor angle stability 2-6
2.3.2 Voltage stability 2-6
2.3.2.1 Power plants 2-6
2.3.2.2 Transmission and distribution 2-7
2.3.2.3 Example of importance of voltage/reactive power 2-8
2.4 Transmission Protective Relaying 2-10
2.4.1 Equipment protection: General Observations 2-11
2.4.2 Transmission Line Protection 2-11

2.4.2.1 Overreaching Distance Protection (The Zone 3 Issue) 2-11
2.4.2.2 Hidden Failures 2-12
2.4.2.3 Sensitive Ground Overcurrent Protection 2-13
2.4.3 Substation equipment 2-13
2.4.4 Summary of dynamic performance best practices for transmission protective
relaying 2-14
2.5 Load Shedding 2-14
2.5.1 Underfrequency load shedding 2-15
2.5.2 Undervoltage load shedding 2-18
2.5.2.1 Automatic undervoltage load shedding remedial action scheme to
maintain B.C. Hydro 500-kV network voltage stability [51] 2-19
2.5.2.2 Hydro Quebec UVLS (TDST) [52, 53] 2-19
2.5.2.3 Entergy UVLS (VSHED) [54] 2-20
2.5.2.4 New Mexico UVLS (ICLSS) [55] 2-20
2.5.2.5 TEPCO UVLS 2-20
2.5.3 Direct load shedding 2-21
2.5.4 Summary of dynamic performance best practices for load shedding 2-21
2.6 Special Protection Systems 2-22
2.6.1 SPS philosophy and design aspects 2-23
2.6.2 SPS classification 2-23
2.6.3 Aspects related to reliability, security and rapidity 2-24
2.6.4 Aspects related to the performance of system protection systems 2-24
2.6.5 Phasor measurement units PMUs 2-25
2.6.6 Special protection systems and PMS 2-26
2.6.7 Wide-area controls using phasor measurements 2-26
2.6.8 Summary of special protection systems 2-28
2.6.9 Summary of dynamic performance best practices for special protection systems 2-29
2.7 HVDC Links and HVDC Controls 2-29
2.7.1 HVDC applications 2-29
2.7.2 Basic HVDC controls affecting power system dynamic performance 2-32
2.7.3 HVDC controls to enhance power system dynamic performance 2-33
2.7.4 Summary of dynamic performance best practices for HVDC applications 2-33
2.8 Dynamic Performance Monitoring 2-34
2.8.1 Organization and management of WAMS data in WECC 2-37
2.8.2 Performance monitoring and situational awareness 2-39
2.8.3 WECC requirements for monitor equipment 2-41
2.8.4 Summary of dynamic performance best practices for power system dynamic
performance monitoring 2-43
2.9 Modeling and Simulation 2-45
2.9.1 Time frame of simulation studies 2-45
2.9.2 Models for simulations 2-45
2.9.3 Power flow programs and methodology for simulations 2-46
2.9.3.1 The seams issue 2-47
2.9.3.2 Slack bus and generator participation factors 2-48
2.9.4 Dynamic programs and methodology for simulation studies 2-48
2.9.4.1 Model validation of previous blackouts and other events 2-49
2.9.4.2 Modeling validation of the WECC August 10th 1996 collapse 2-49
2.9.4.3 Generator model validation by testing 2-50
2.9.4.4 Interim dynamic load model 2-51
2.9.4.5 New thermal governor model 2-52
2.9.4.6 Frequency responsive reserves and traditional spinning reserves 2-54
2.9.4.7 Voltage stability simulations 2-54
2.9.4.8 Predicting future cascading events is infinitely more difficult than
validating previous known blackout events 2-56
2.9.5 Summary of dynamic performance best practices for power system modeling
and simulation 2-56
2.10 On-line Dynamic Security Assessment and Other Operator Aids 2-57
2.10.1 Security assessment by direct observation 2-57
2.10.1.1 Example operator aid for oscillatory stability 2-58
2.10.1.2 Example operator aid for voltage stability 2-58

2.10.2 Dynamic security assessment for potential contingences 2-60
2.10.3 Summary of best practices for on-line dynamic security assessment 2-62
2.11 Wind Generation 2-63
2.11.1 Introduction 2-63
2.11.1.1 Wind turbine generators 2-63
2.11.1.2 Wind power plants 2-64
2.11.2 Wind power plant performance 2-65
2.11.2.1 Steady-state performance 2-65
2.11.2.2 Dynamics 2-66
2.11.2.3 Grid Codes 2-67
2.11.2.4 Models 2-68
2.11.3 Summary of dynamic performance best practices for wind generation 2-69
2.12 Reliability in Market Environment 2-69
2.12.1 Reliability of generation 2-70
2.12.2 Reliability of transmission network 2-71
2.12.3 Reliability of system operation 2-72
2.12.4 Summary of dynamic performance best practices in market environments 2-73
2.13 Restoration 2-74
2.13.1 Summary of dynamic performance best practices issues for restoration 2-74
2.14 Reliability Standards 2-74
2.14.1 Summary of dynamic performance best practices for reliability standards 2-75
2.15 Conclusions and Summary of Best Practices for Power System Dynamic Performance 2-75
2.15.1 Overall summary of best practices for system dynamic performance and reducing
the risk of cascading power failures 2-75
2.16 References 2-80

CHAPTER 3 NEW AND EXISTING TECHNOLOGIES TO IMPROVE POWER SYSTEM
DYNAMIC PERFORMANCE
3.1 Introduction 3-1
3.2 Existing Modern Power Electronic Based Transmission Technologies That Can Help to
Significantly Improve System Dynamic Performance and Reduce the Risk of
Cascading [58] 3-1
3.2.1 Flexible AC Transmission Systems 3-1
3.2.2 Shunt Devices and Voltage Control 3-3
3.2.3 Series Devices 3-5
3.2.4 High-Voltage Direct-Current Transmission 3-9
3.2.5 Role of HVDC in Limiting the Scope of the August 14, 2003 Northeast
Blackout and in Helping to Restore Service Following the Event 3-12
3.2.6 Summary 3-14
3.3 Advanced Control Strategies 3-15
3.4 Wide Area Monitoring and Control Systems (WAMC) [37] 3-17
3.5 Load Control and Demand side Management 3-19
3.6 Grid Friendly™ Controllers 3-20
3.6.1 Grid Friendly Applicability and Availability 3-20
3.6.2 Grid Friendly Control Strategies 3-22
3.6.3 Ongoing Efforts in Grid Friendly Technology 3-24
3.7 Distributed Generation 3-25
3.8 Operator Tools 3-25
3.9 Cost Considerations Related to Self-Healing Grids 3-26
3.10 Summary 3-28
3.11 References 3-28

Executive Summary
Background:
In the aftermath of the major power system blackouts of 2003 (in
Sweden/Denmark, Italy, and the states of Ohio, Michigan and New York and the province of
Ontario in North America), the IEEE Power System Dynamic
Performance Committee (PSDPC) sponsored an all-day panel session on blackouts at
the Power Engineering Society General Meeting in Denver, Colorado, in 2004.
Subsequent to this panel session, the PSDPC formed this Task Force. Since 2004, the
Task Force has sponsored another panel session (at the Power Systems Conference and
Exposition in Atlanta, Georgia, in 2006) and has also sponsored the publication of a series
of invited papers on the subject of blackouts in the September/October issue of the
IEEE Power & Energy Magazine. This report constitutes the final deliverable of the
Task Force. The aim of this document is to provide a single, comprehensive source of
information to the industry covering all system dynamic performance aspects of
power system blackouts.
The document covers the following subjects in detail:
1. A comprehensive review of most major power system blackouts and large
system disturbances worldwide over the past four decades. The focus is
primarily on system dynamic performance issues; however, the
descriptions of these events highlight all of the apparent root causes and
lessons learned by the industry in their aftermath.
2. A comprehensive review of best practices for improving power system
dynamic performance and thus minimizing the risk of widespread
disturbances and subsequent blackouts.
3. A review of existing and emerging modern technologies that can help to
improve system dynamic performance and thus help to minimize the risk of
blackouts.
Each of the three chapters of the report contains a detailed summary and conclusions
section that provides the main conclusions of that chapter as well as pertinent
recommendations (most notably, a summary list of best practices is
provided in the Chapter 2 summary). The overall high-level conclusions and
recommendations of the report are presented below.
Summary of Findings, Conclusions and Recommendations:
Most of the major grid blackouts described and investigated here were initiated by a
single event (or multiple related events, such as a fault and subsequent undesirable
operation of protective relays) that gradually led to cascading outages of other lines
and equipment and eventual collapse of the entire system. Post mortem analysis (and
hindsight) often identifies a possible means of mitigating the initial event or
minimizing its impact to reduce the risk of the ensuing cascading trips of lines and
generation. Given the complexity and immensity of modern power systems, it is not
possible to totally eliminate the risk of blackouts. However, there are certainly means
of minimizing the risk based on lessons learned from the general root causes and
nature of these events.
Where proper operator or automatic control actions are not taken, the
consequences can be many. In the simplest case (e.g., many of the blackouts discussed
here) parallel transmission paths may become overloaded due to a redistribution of
power after the initial outage and thus a process of cascading transmission line
outages may ensue. At some point this will lead to dynamic performance issues such
as rotor angular instability (transient or small-signal), voltage instability or collapse,
or uncontrollable decline/rise in frequency in electrical islands formed after system
split-up. Eventually, a point of no return is reached at which time the rapid succession
of cascading events becomes unmanageable and a chain reaction of load and
generation tripping leads to the ultimate blackout.
Based on the discussions in this report, some of the clear root causes for these events
are:
• A lack of reliable real-time data.
• Thus, a lack of time to take decisive and appropriate remedial action against
  unfolding events on the system.
• Increased failure in aging equipment.
• A lack of properly automated and coordinated controls to take immediate and
  decisive action against system events in an effort to prevent cascading. In
  particular, a lack of sufficient coordination between various protection and
  controls with common spheres of influence.
• A lack of coordination and communication between transmission system
  operators in neighboring systems to help take decisive action against unfolding
  events.
• Power systems are increasingly being pushed harder, with higher levels of
  power transfers over longer distances.
• In many of the blackouts discussed here, transmission protection operations in
  the absence of any faults have played a major role in cascaded outages. This is
  because, as the equipment becomes more stressed, the boundary between
  healthy and faulted equipment becomes blurred, making it more difficult for
  the protection to discriminate.
In recent years, some of these problems may have been driven by changing priorities
for expenditures on maintenance, reinforcement of the transmission system and
operator training. Nonetheless, with proper and prudent expenditure the appropriate
technologies do exist to address these root causes. In addition, especially for market-
based systems with the breakup of vertically owned and operated utilities, it is
incumbent on policy makers to ensure that reliability standards are made mandatory
and enforceable, backed by meaningful and effective penalties for non-compliance
with the standards. Furthermore, reliability standards should be reviewed periodically,
taking into account experiences from major system incidents and evolving
technologies. At a regulatory body level, clarification should be provided on the need
for expenditure and investment for bulk system reliability and how such expenditure
will be recoverable through transmission rates. Finally, from a research perspective
thought needs to be given to the limitations of present deterministic reliability criteria,
which are primarily limited to addressing single contingencies, i.e. an N-1 criterion.
Chapter 2 is a comprehensive look at best practices for improving system dynamic
performance. Section 2.15 provides a list of the recommended best practices. While
mandatory reliability criteria and rules are valuable in reducing the risk of blackouts,
Chapter 2 goes beyond the rules in recommending best practices in more detail.
Many of the best practices comprise relatively low-cost improvement and
modernization of control, protection, and communications equipment. Other relatively
low-cost areas include EMS security assessment and training simulator software,
generator excitation equipment modernization, substation bus configuration
improvements, shunt capacitor bank additions and other reactive power equipment.
Power companies should prioritize implementation of best practices.
A concern for regulated power companies in implementing best practices is the difficulty
of cost recovery, including for increased engineering staff. Regulators should consider
these best practice strategies in cost recovery decisions, balancing the reliability
benefits to consumers against the additional cost.
Finally, in Chapter 3 a review is given of existing modern technologies that can help
to improve system dynamic performance, as well as emerging technologies with
significant potential for helping to build better and more secure power systems in the
future. A perusal of these technologies by the reader is recommended.

CHAPTER 1

Large-Scale Power System Blackouts: Causes, Power
System Dynamic Performance Aspects and Lessons
Learned
1.1 Introduction
The advancements in technology during the 20th century were unparalleled by those in
any other period of human history. This has been due to many factors. One of these
key factors is the advent of global electrification. Major national and international
electric power grids have brought about the impetus for vast advancements in
technology in every industrial sector. This phenomenon began with the arrival of
alternating current (ac) systems. No individual was more prolific in the early design
and implementation of high voltage ac generation and transmission systems than
Nikola Tesla. In 1895, the first major power generation project was constructed at
Niagara Falls in New York state. This project consisted of hydro turbines and
produced ac electricity using synchronous generators. The power was then transmitted
to supply the nearby city of Buffalo. The turbine and generator designs were based on
patents by Nikola Tesla, and the project was implemented by the Westinghouse
Company. This was the birth of the modern ac power system.
After more than a century of development some of today's power systems, such as the
North American system, span thousands of kilometers, with tens of thousands of
generating plants all interconnected by a myriad of transmission lines, transformers,
high voltage dc systems and much other complex transmission equipment. The
control and maintenance of these systems is a highly complex and involved task
requiring constant operator action, backed with continuous planning work to ensure
that customer loads are served with safe, reliable and high quality electric power.
Never is the reliance of our modern lives on this constant flow of electricity more
apparent than when a major system blackout is experienced.
As modern power systems have grown in size and complexity, the challenges for
maintaining system security have also become more involved. In the past several
decades, major system blackouts have been a constant reminder of this fact. In this
chapter, a description is given of several major blackouts in recent history, the events
that led to the blackout, their root causes and lessons learned from these events in an
effort to help reduce the risk of future events of such magnitude.
1.2 Major US/Canadian Blackouts
1.2.1 Northeast Blackout of November 9-10, 1965
At 5:16 pm on November 9, 1965 the most severe electric power industry blackout up
to that time was initiated by the operation of a single relay on a 230 kV line near
Niagara Falls. This inappropriate relay action initiated power system cascading that
resulted in power failure and a blackout of most of the customers in the shaded area of
Figure 1-1, including the cities of New York, Boston and Toronto. The affected
systems shown in Figure 1-1 were at the time referred to as the Canada-United States
Eastern Interconnection (CANUSE). The blackout covered about 80,000 square miles
and affected 30 million people. The outage time varied greatly, by location, from a
few seconds to as much as 13.5 hours. The longest and most serious outage occurred
in New York City. This resulted in thousands of people being trapped in dark
elevators and underground transit systems for hours. This event identified many
deficiencies in the design and operation of an interconnected power system. The
lessons learned from this event have made a major impact on the electric power
industry. The following practices are a direct result of this blackout [1]:
Coordinated regional power system planning and operation
Application of underfrequency load shedding
Black start capability
Up-to-date restoration procedures
Generator tripping for lost transmission
In this section a discussion is presented of this event. Note that many of the details of
the power system discussed here reflect the conditions, system topology and
organizations of that period in 1965; clearly there have been many changes to the
system and the names of utilities since that time.
1.2.1.1 The Power System and Initial Conditions
The initiating event occurred in the Niagara Falls area, where a large amount of power
is generated by hydroelectric power plants. These hydroelectric plants in the Niagara
Falls area were at the time operated by the Hydro-Electric power Commission of
Ontario (Ontario Hydro) and the Power Authority of the State of New York
(PASNY). The Ontario Hydro plants on the Canadian side of Niagara Falls were
comprised of Sir Adam Beck No. 1, Sir Adam Beck No. 2 and an associated
pumping-generating station. The combination of Beck No. 2 and the associated
pumping-generating station has a capacity of more than 1300 MW. The PASNY-
Niagara Station, located on the U.S. side of the falls, had a capacity of 2500 MW. In
addition, Ontario Hydro and PASNY had hydro generation on the St. Lawrence River
at Massena, New York (See Figure 1-2). The two plants were separately operated and
each had a capacity of 700 to 800 MW [1].
At 5:00 pm on November 9, 1335 MW was scheduled from Beck 2 to Toronto area
substations over five 230 kV lines. The Toronto area substations were Buchanan,
Burlington and Detweller (See Figure 1-3). A schedule of 300 MW was set from the
U.S. to the Toronto area through the same five 230 kV lines. In addition, there was
200 MW of inadvertent flow (clockwise circulating flow around Lake Ontario) from
the U.S. to Canada at Niagara Falls. This resulted in a total flow of 1835 MW on the
five 230 kV lines from Beck 2 to the Toronto area. This is an average of 367 MW per
line. Ontario Hydro was interconnected with Michigan but there was no power
transfer scheduled at 5:00 pm.
Power from the PASNY-Niagara station and Massena station was mainly scheduled
into New York State over two 345 kV lines and the underlying lower voltage lines,
which were part of the Niagara Mohawk and PASNY transmission systems. The lines
go eastward to Albany and then south to New York City. The 345 kV paths are shown
as heavy black lines in Figure 1-1. At 5:15 pm PASNY-Niagara was generating 2275
MW, of which 500 MW flowed into Canada and 840 MW flowed east on the two 345
kV lines. A total of 935 MW flowed east and south over the underlying lower voltage
lines. At the PASNY-Massena hydro plant 702 MW was being produced. As noted
above, there was about 200 MW of clockwise circulating flow from Ontario Hydro
into the U.S. at Massena.

The New England and Connecticut power systems were connected to the Niagara
Mohawk and PASNY system by one 345 kV line, one 230 kV line and five 115 kV
lines. The 345 kV line is shown in Figure 1-1 going east to a point between
Springfield and Hartford. The net flow from New York into New England and
Connecticut was 140 MW prior to 5:16 pm [2].
The southern end of the CANUSE system had interties with the Pennsylvania-New
Jersey-Maryland (PJM) system over 7 transmission lines ranging in voltage from
115 kV to 230 kV. The net flow into PJM prior to 5:16 pm was 68 MW [2].
The southeastern New York power system, which included Consolidated Edison
serving New York City, was connected to the Niagara Mohawk area by two 345 kV
lines and four or fewer lines at 138 kV. The region was also connected to New
England by one 345 kV line and to PJM by one 138 kV line. The flow into
southeastern New York prior to 5:16 pm was 400 MW [2].
1.2.1.2 Sequence and Significance of Events
The sequence of the events leading to the Northeast Blackout is presented here and
where appropriate the significance of the events is discussed.
5:16:11 pm
The 230 kV Line Q29BD that was transmitting about 367 MW and an unknown
quantity of vars (reactive power) was opened by a backup relay (probably mho type)
(See Figure 1-3). Mho and reactance type relays were used as backup to the primary
line protection scheme on the five 230 kV lines transmitting power from Beck 2 into
the Ontario Hydro system. To coordinate the mho type relays for fault detection and
clearing, it was necessary to set the relays such that they would initiate a trip signal
for line loading considerably below the current carrying capability of the line. The
relays were expected to trip for 375 MW and 160 MVAR at a voltage of 248 kV. At a
reduced voltage the relay would trip at a lesser power level. It is not surprising that a
relay operation occurred for 367 MW [2].
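The voltage dependence of this loadability limit can be illustrated with a short calculation. The sketch below takes the trip point quoted above (375 MW and 160 MVAR at 248 kV), converts it to an equivalent apparent-impedance reach using |Z| = V²/|S|, and shows how the MW loading that enters the relay characteristic falls with the square of the voltage. The constant power factor and the simplified impedance model are illustrative assumptions, not details of the actual relay settings on the Beck 2 circuits.

```python
import math

# Illustrative sketch only: the trip point below is taken from the text; the constant
# power factor and the simple |Z| = V^2/|S| loadability model are assumptions.
V_SET_KV = 248.0     # voltage at which the backup relay was expected to trip
P_SET_MW = 375.0     # tripping active power at that voltage
Q_SET_MVAR = 160.0   # tripping reactive power at that voltage

s_set_mva = math.hypot(P_SET_MW, Q_SET_MVAR)      # apparent power at the trip point
z_reach_ohm = V_SET_KV ** 2 / s_set_mva           # ~151 ohms apparent impedance at that point
pf = P_SET_MW / s_set_mva                         # assume the same power factor at other voltages

def trip_mw(voltage_kv: float) -> float:
    """MW loading that pulls the apparent impedance down to the relay reach at this voltage."""
    s_trip_mva = voltage_kv ** 2 / z_reach_ohm    # loadability in MVA scales with V^2
    return s_trip_mva * pf

for v_kv in (248.0, 242.0, 236.0, 230.0):
    print(f"{v_kv:5.1f} kV -> relay loadability about {trip_mw(v_kv):5.0f} MW")
# A few kV below 248 kV the threshold already falls below the roughly 367 MW the line carried.
```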
When Line Q29BD tripped the load that it carried was transferred to the other circuits
connecting Beck 2 with the Ontario Hydro system (See Figure 1-3).
5:16:11 pm + 161 cycles (2.683 s)
(Note: in the context of the North American power system, which is a 60 Hz system, one cycle is equal to 1/60 = 0.0167 seconds. The time periods here are given in cycles for the technical reader, with the equivalent time in seconds provided in parentheses.)
After Q29BD tripped, the remaining four 230 kV lines carried power that exceeded
the tripping level of the mho type backup relays. Therefore, during a 161 cycle period
these remaining four lines were opened interrupting the flow of about 1835 MW. This
power needed for loads in the Toronto area only had one remaining way to flow and
that was on a counterclockwise path around Lake Ontario.
Initially, the electric power produced by Beck 2 and PASNY-Niagara decreased due
to the increased path impedance. The mechanical power did not change quickly,
which accelerated the generator rotors and advanced the power angles of
these units relative to all other points in the system.
5:16:11 pm + 197 cycles (3.283 s)
Since the only path remaining between Beck 2 and Toronto was through Massena, the
current flow increased in the 230 kV line connecting PASNY and Ontario Hydro at
Massena. This caused the line to be tripped by a directional overcurrent relay leaving
Ontario isolated from the rest of CANUSE. The Beck 2 generators and PASNY-
Niagara generators were still connected and their acceleration was increased by loss
of the tie at Massena.
5:16:11 pm + 214 cycles (3.567 s)
Due to continued acceleration of the PASNY and Ontario Hydro generators at
Niagara Falls an out of step condition developed, causing the two 345 kV lines
between Rochester and Syracuse to be opened by distance relay action. All parallel
underlying lower voltage lines were also tripped.
At about this time all PJM interconnections with the CANUSE area also opened due
to instability, except for the 138 kV tie to the Southeast New York Pool. Separation
on this path instead occurred at the Greenwood station in Brooklyn.
5:16:11 pm + 241 cycles (4.0167 s)
The two 230 kV Adirondack lines from Massena to the two 345 kV east-west lines
were tripped by instantaneous directional distance relays. These trips were apparently
due to an out-of-step condition. A transfer trip scheme caused five of the sixteen
PASNY-Massena generators to be tripped for loss of the two 230 kV lines.
At this time (4 seconds after the initial line opening) the system had been separated
into 5 service areas. The corresponding line openings leading to separation of the
power systems between the 5 areas are shown in Figure 1-4 [1]. These areas are
shown in Figure 1-5.
System After 4 Seconds
Following is a brief description of each of the area power systems after 4 seconds.
Area 1:
The Ontario Hydro system was completely separated from New York and was badly
deficient in generation. This area divided into three subsections; 3800 MW of load
had been dropped. The Sarnia tie to Michigan remained closed and Detroit Edison
closed the tie to Windsor in about 17 minutes.
Area 2:
The northern New York region had the hydro generation sources of PASNY-Massena
and northern Niagara Mohawk and was able to carry load in the Massena, Potsdam,
Watertown, and Oswego areas.
Area 3:
The power system in the vicinity of Niagara on the U.S. side, including the New York
State Electric and Gas south central area, was separated from the remainder of
CANUSE and had a large excess of generation, leading to an overfrequency condition.
The steam plants were shut down by overspeed protection to avoid possible damage to
turbine blades. The loss of the steam units was followed in about 1.5 minutes by
tripping ten generators at Beck because of low governor oil pressure caused by
excessive governor operation. In addition, five PASNY-Niagara pumping-generating
units were shut down by overspeed protection. Due to the extensive loss of generators
the load now exceeded the available generation so that an underfrequency condition
ensued. As a result the two 230 kV lines from Beck 2 to PASNY-Niagara tripped by
underfrequency relay action when the frequency reached 58.5 Hz. This worsened the
underfrequency condition causing Area 3 to be blacked out.
Area 4:
The balance of the CANUSE power system, including a part of Upstate New York,
the New England systems, and the southeast New York system, was separated from
the rest of the CANUSE system but remained interconnected within itself.
The Collapse of Area 4 The Blackout: From Figure 1-5 it can be seen that Area 4
had the largest physical area and included the cities of New York and Boston. Just
prior to the relay operation that initiated the blackout, Area 4 was importing about
1100 MW, including 400 MW to the downstate New York area and 140 MW to New
England. During the 4 second period following the first relay operation, the upstate
New York system split with the eastern portion tied to New England and southeastern
New York, resulting in an island with an 1100 MW deficiency. To survive this
situation either load must be reduced, generation increased or ties opened such that
some smaller system or systems are formed with a close match of load and generation.
This is referred to in the industry as islanding. At that time there were no established
procedures for load shedding. Although Area 4 had sufficient spinning reserve in
steam driven generators, the actual rate of increase in power production was not
sufficient to avoid a decline in system frequency to a level around 57 Hz. At about 57
Hz, power plant auxiliary equipment ceases to perform causing the plants to be shut
down. Successful islanding requires great foresight to know which ties to open to
form an island that has a good match of load and generation. Since none of these three
remedial actions was taken, there was a drop in voltage and frequency and within
about 11 minutes all generators had tripped resulting in most of Area 4 being blacked
out.
In the process of shutting down turbine-generators during a blackout it is possible to
damage power plant equipment. This was the case for 3 Consolidated Edison plants
with a combined rating of 1500 MW. The power source for the oil pumps, which
lubricate the shaft bearings, was inadequate resulting in burned out bearings. The
units were unavailable for an extended period [2].
Area 5:
The power systems in Maine and the eastern portion of New Hampshire did not
experience significant outages. This is due to the fact that these systems are radial to
the systems that experienced large power swings. Therefore, power surges did not
travel through the systems of Maine and eastern New Hampshire but merely caused
the opening of ties north of Boston thereby isolating Maine and Eastern New
Hampshire. In addition, these systems were not interchanging significant amounts of
power with the rest of the CANUSE system so that when the disturbance occurred
they had a good match of load and generation.
1.2.1.3 Restoration
Restoration of a power system following a blackout can be a very complex procedure.
First a power source must be available to start up a power plant. In the case of a hydro
plant a relatively small source is required and can be provided by batteries. In the case
of a steam-fired plant the power requirements for start up can be as high as 5% of the
plant rating. Many steam plants in 1965 did not have Black Start Capability and
required power from an off-site source. Therefore, in a system with many steam plants
the plants needed to be started up sequentially. As each power plant is started it
cannot be loaded to its full capacity for an hour or so. Energizing and loading circuits
from the power sources requires careful coordination to match load and generation
and to control voltage.
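The sequential nature of this process can be sketched as a simple scheduling exercise: black-start-capable units (hydro or small gas turbines) are brought up first, and their output is then used as cranking power for steam plants that each need on the order of 5% of their rating to start. The plant list and ratings below are illustrative assumptions, not data from the 1965 restoration, and the sketch ignores the fact that a newly started plant takes an hour or more to reach full output.

```python
# Minimal sketch of sequential black-start scheduling; all plant data are hypothetical.
plants = [
    # (name, rating in MW, black-start capable?)
    ("Hydro unit",    400, True),    # hydro: can be started from batteries
    ("Gas turbine",    50, True),    # black-start peaking unit
    ("Steam plant 1", 600, False),   # steam: needs off-site cranking power to start
    ("Steam plant 2", 1000, False),
]
CRANKING_FRACTION = 0.05             # start-up auxiliary load of a steam plant (per the text)

available_mw = 0.0
# Start black-start-capable units first, then use their capacity to crank the steam plants.
for name, rating_mw, black_start in sorted(plants, key=lambda p: not p[2]):
    cranking_mw = 0.0 if black_start else CRANKING_FRACTION * rating_mw
    if cranking_mw <= available_mw:
        available_mw += rating_mw
        print(f"Started {name:14s} (cranking power needed: {cranking_mw:5.1f} MW)")
    else:
        print(f"Cannot yet start {name}: needs {cranking_mw:.1f} MW, "
              f"only {available_mw:.1f} MW available")
```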
The longest restoration period of 13.5 hours was in the city of New York by
Consolidated Edison. Consolidated Edison's system was divided into 42 networks
where each network was served by one substation. The utility had prepared a
restoration procedure in 1938 and had kept it up to date. The reasons for the long
restoration time were [1]:
1. The huge physical area of the city. The load apportioned to each centrally
situated substation is as large as the entire electrical load of many big
cities.
2. If the central substation is forced out, there is no alternative source from
which to supply the network area. This is generally a characteristic of
network systems.
1.2.1.4 Types of Instability
The types of power system stability (and instability) are well defined in [3]. It is
believed that three of the major types were experienced during the Northeast Blackout
of 1965. These will be discussed below.
Rotor Angle Stability
Although there was no fault involved, the loss of the five 230 kV lines beginning at
5:16:11 pm and ending 161 cycles later initiated an event that resulted in rotor angle
instability of the transient stability type. The loss of the 5 lines, which were carrying
about 1835 MW between Beck 2 and the Toronto area, increased the path impedance
drastically. The only remaining path was around the eastern end of Lake Ontario. This
caused the Beck 2 and the PASNY-Niagara generators to rapidly accelerate,
increasing their power angles relative to other points in the system. This resulted in
transient instability and opening the tie at Massena between PASNY and Ontario
Hydro 36 cycles after the five 230 kV lines were opened. Seventeen (17) cycles later
the two 345 kV lines between Rochester and Syracuse opened due to transient
instability. Twenty-seven (27) cycles after that, the two 230 kV Adirondack lines
from Massena were tripped as a result of transient instability.
Frequency Stability
Areas 1, 3, and 4 all experienced frequency instability which resulted from a large
difference in load and generation in the areas. In Area 3 there was an initial
overfrequency condition which caused large amounts of generation to be tripped
resulting in a subsequent generation deficiency and underfrequency. It was
underfrequency in the 3 areas that caused the areas to black out.
Voltage Stability
Following the system separation and generation trips, voltage collapse occurred in all
of the 3 areas that experienced blackouts. It is believed that the systems did not
experience Voltage Instability as defined in [3].
1.2.1.5 Lessons Learned and Resulting Practices
Many lessons were learned from the 1965 blackout. These had a major effect on the
electric power industry and have carried over to the present. Following is a brief
discussion of the most significant factors.

Coordinated regional planning and operation
One of the most immediate and significant conclusions from the power failure was the
need for improved coordination by the many utilities in planning and operation of the
power systems. As a result the Northeast Power Coordinating Council (NPCC),
comprising all electric utilities in the Northeast and Ontario, was formed in January
of 1966 to improve coordination in planning and operation among the utilities in the
region. NPCC became the first Regional Reliability Council in North America [4].
During the following months eight other Regional Reliability Councils were formed
across the U.S. and Canada. In 1968 the North American Electric Reliability Council
(NERC) was formed to promote the reliability of bulk electric supply in the electric
utility system of North America. NERC consists of nine Regional Reliability Councils
and one affiliate, encompassing virtually all of the electric power system in the U.S.,
Canada and the northern portion of Baja California, Mexico [4].
To this day these same councils provide the framework for planning and operating the
interconnected power systems. Some councils that were established as voluntary
organizations have since changed to the point that the council has authority to direct
the actions of its members.
Underfrequency load shedding
Analysis of the 1965 blackout made it very apparent that a significant part of the
CANUSE system could have been saved if load had been shed in a systematic
manner to arrest the frequency decay when there was a generation deficiency. As a
result, nearly every utility now has an underfrequency load shedding scheme (UFLS),
which sheds load by opening distribution feeders in steps until about 25% or more of
the utility's load is shed. The steps usually start around 59.0 Hz and shed 25% or
more of the load prior to reaching 57.0 Hz.
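A staged scheme of this kind can be sketched as a small table of frequency thresholds and load blocks; the specific step frequencies and block sizes below are illustrative assumptions, since actual settings are chosen by each utility and its regional reliability council.

```python
# Illustrative UFLS step table (assumed values), summing to about 25% of load.
UFLS_STEPS = [
    # (frequency threshold in Hz, fraction of the utility's load shed at this step)
    (59.0, 0.10),
    (58.6, 0.08),
    (58.2, 0.07),
]

def load_shed_fraction(frequency_hz: float) -> float:
    """Cumulative fraction of load shed once frequency has fallen to frequency_hz."""
    return sum(block for threshold, block in UFLS_STEPS if frequency_hz <= threshold)

for f_hz in (59.5, 58.9, 58.5, 58.0):
    print(f"{f_hz:4.1f} Hz -> {load_shed_fraction(f_hz) * 100:4.1f} % of load shed")
# All steps act well above 57.0 Hz, before power plant auxiliaries begin to fail.
```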
Black start capability
The 1965 blackout clearly showed the need for black start capability at every major
power plant. This had the immediate effect of swamping the gas turbine-generator
manufacturers with orders for units in the 50 MW range that can be started from a
battery power source. Such gas turbines are often located at the site of a major power
plant for use as a black start unit and as a peaking unit. Since hydro units can be
started quickly without a major source of power, an interconnection to a hydro plant
can be used for black start capability by steam fired power plants.
Restoration procedures
The 1965 blackout emphasized the importance of an up-to-date restoration procedure.
As a result, most utility operating departments consider the maintenance of an up-to-date
restoration procedure an important function and hope that it will never be needed.
Generator tripping for lost transmission
This practice preceded 1965 but its benefits were emphasized by the 1965 Northeast
Blackout. The blackout could have been completely avoided if, when the five 230 kV
lines from Beck 2 to the Toronto area were lost, generators at Beck 2 and PASNY-
Niagara had been tripped in conjunction with an equal amount of load
shedding around Toronto. The number of generators to be tripped could have been
determined by study.

Figure 1-1: Area of 1965 Northeast Blackout [1].

Figure 1-2: Ontario Hydro southern transmission system and Major interties to the United States at
Niagara and Massena [1].

Figure 1-3: Block diagram of the 230 kV system in the vicinity of Beck No. 2 [1].

Figure 1-4: Transmission line outages, Northeast Blackout 1965 [1].

Figure 1-5: Area separation during 1965 Northeast Blackout [1].
1.2.2 North American Northeast Blackouts of 1977 and 2003
1.2.2.1 Blackout of 1977
(Source: Con Edison Power Failure of July 13 and 14, 1977, Final Staff Report, Federal Energy Regulatory Commission, DOE/FERC-0012, June 1978. Available at http://blackout.gmu.edu/archive/a_1977.html)
On the evening of July 13, 1977, severe thunderstorms began developing on the
northern edge of the Consolidated Edison (Con Edison) system, which was serving
New York City and much of Westchester County to the north. The Con Edison system
load at the time was 6,091 MW, down from the day's peak of 7,264 MW but slowly
increasing again as the sun set and lights were being turned on. A total of 3,891 MW
of the load was being supplied by generation within Con Edison, with the remaining
2,200 MW coming from imports. Figure 1-6 shows a one-line of the area.
At 8:37:17 pm, a lightning stroke caused the trip of two parallel transmission lines
from Buchanan South to Millwood West (W97 and W98). Since Buchanan South is a
four-breaker ring bus, this event left the Indian Point #3 nuclear unit with no outlet,
and therefore it tripped as well. The unit had been generating 883 MW. Due to a
design error in the protective system, the breakers on the Ladentown-Buchanan (Y88)
line tripped and locked out at Ladentown, removing one of the tie lines carrying
imports into Con Edison. After this event, Con Edison was importing about 3,000
MW over seven remaining interconnections: three 345 kV lines, one 230 kV line (to
New Jersey), and three 138 kV lines. All lines were still within normal ratings, except
for the Pleasant Valley-Millwood (W81) line which was within its long-term
emergency (4 hour) rating.
Eighteen minutes later, at 8:55:53 pm, a second lightning stroke caused the trip of two
additional parallel transmission lines: W93/W79 (Buchanan North-Sprain Brook) and
W99/W64 (Millwood West-Sprain Brook), which are on common towers south of
Millwood West. W99/W64 reclosed after two seconds, but W93/W79 did not reclose
at Buchanan North, isolating the Ramapo-Buchanan (Y94) tie line from the west. The
resulting surge in power over the remaining tie lines triggered an immediate trip of the
W81 line to the north as a result of a bent contact on a relay. Imports into Con Edison
had been reduced to 2,600 MW as a result of some reserve pickup, but these imports
were now flowing over only one 345 kV line and four other interconnections at lower
voltage levels. Two of the lines (the 80 line from Pleasant Valley to Millwood West,
and the Linden-Goethals tie to New Jersey) were loaded above their short-term
emergency (15 minute) ratings. The Leeds-Pleasant Valley (92) line, which is in
series with the 80 line, was also loaded above its 15 minute rating.
After this second event, the New York Power Pool (NYPP) senior pool dispatcher
repeatedly called for Con Edison to shed at least 400 MW of load. The Con Edison
operator acknowledged the directive, but no load shedding took place. The Con
Edison operator did change taps on the phase angle regulator at Goethals to reduce the
Linden-Goethals flow, but this action pushed the imports onto the other remaining
ties. The Con Edison operator had called for a fast load pickup of reserve generation
eight minutes after the first event. However, the response of most of the reserve
generation was slower than the claimed capability. After the second event, the Con
Edison operator called his supervisor, who was at home, and began describing the
situation. The operator was thus not available to talk to the system operators of
neighboring systems or the NYPP.
At 9:19:11 pm, the Leeds-Pleasant Valley (92) line tripped and locked out following a
phase to ground fault. Although the exact location of the fault was not found, it is
probable that the line tripped as a result of sagging into a tree; the line current had
exceeded the short term emergency rating for over 20 minutes. Shortly afterward, at
9:19:53 pm, the 345/138 kV transformer at Pleasant Valley was tripped by an
overcurrent relay. These actions isolated the north end of the 80 line, leaving four
remaining interconnections (one at 230 kV and three at 138 kV) to carry 1,900 MW of
import power.
Less than three minutes later (9:22:11 pm), with the approval of the NYPP pool
dispatcher, Long Island opened the Valley Stream-Jamaica tie to Con Edison. The
flow on this tie, which was originally exporting over 250 MW from Con Edison to
Long Island, had now reversed and was carrying over 500 MW into Con Edison. The
line was opened because Long Island's only other tie (the Northport-Norwalk cable to
New England) was loaded well over its short-term emergency rating. With the Valley
Stream tie opened, the Long Island system was protected from collapsing with the
Con Edison system. The NYPP pool dispatcher and Long Island system operator had
been unable to contact the Con Edison operator during the previous 15 minutes.
An attempt by Con Edison to reclose the W81 line at 9:22:47 pm failed. Finally, at
9:29:41 pm, the tap-changing mechanism on the Linden-Goethals line failed, tripping the
line. The two remaining 138 kV interconnections to the north tripped immediately
afterward, islanding the Con Edison system with 5,981 MW of load and 4,282 MW of
generation. Frequency dropped to around 57.8 Hz and then began recovering after the
activation of three blocks of underfrequency load shedding. Unfortunately, as the
frequency returned to about 60 Hz, a loss of excitation relay operated at the
Ravenswood #3 unit, causing it to trip. The unit was responding to a sharp voltage rise
accompanying the load shedding. With this loss of 844 MW of generation, frequency
fell rapidly again to 57.8 Hz before slowing significantly; the fourth and final block of
underfrequency load shedding activated by this time but was insufficient to reverse
the frequency decline. Frequency continued to slowly fall at a rate of about 1
Hz/minute. When the frequency fell to 54 Hz, the Arthur Kills #3 unit tripped, leading
to a rapid sequence of trips of the remaining generators and a blackout of the islanded
Con Edison system.
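The severity of such an islanding deficit can be related to the initial rate of frequency decline through the aggregate swing equation, df/dt ≈ ΔP·f0/(2H), with ΔP expressed in per unit of the island generation. The sketch below uses the island figures quoted above (4,282 MW of generation against 5,981 MW of load); the aggregate inertia constant H is an illustrative assumption, not a figure from the 1977 event.

```python
# Sketch relating the island's generation deficit to its initial rate of frequency decline.
F0_HZ = 60.0      # nominal frequency
H_SEC = 4.0       # assumed aggregate inertia constant on the generation base (illustrative)

def initial_dfdt(gen_mw: float, load_mw: float, h_sec: float = H_SEC) -> float:
    """Initial df/dt in Hz/s from the aggregate swing equation."""
    deficit_pu = (gen_mw - load_mw) / gen_mw    # per unit on the generation base
    return deficit_pu * F0_HZ / (2.0 * h_sec)

rate = initial_dfdt(4282.0, 5981.0)
print(f"Initial frequency decline: {rate:5.2f} Hz/s")
# Roughly -3 Hz/s before any corrective action, which is why the underfrequency load
# shedding blocks had to act within seconds to arrest the decay near 57.8 Hz. The much
# slower decline of about 1 Hz/minute that followed reflects the smaller residual deficit
# after shedding, load relief at reduced frequency, and governor action.
```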
The blackout was the result of a thermal cascade following two extreme
contingencies. No voltage or stability issues prior to islanding were identified in the
postmortem analysis.
Subsequent investigation identified four causes of the blackout:
1. Two lightning strokes, each faulting two 345 kV transmission lines;
2. Equipment malfunctions which prevented the rapid reclosure of three of the lines
that were struck and also resulting in the loss of additional lines;
3. An improper relay circuit design, resulting in the additional loss of a transmission
line (Y88); and
4. A series of operator failures at Con Edison.
At least five such operator failures were enumerated:
a. Failure to recognize that Y94 was open and unavailable;
b. Failure to assure that scheduled reserve generation was available and that an
appropriate amount of reserve was being supervised by automatic control;
c. Failure to pay strict attention to short-term emergency ratings of critical facilities;
d. Failure to call for increased generation promptly after the first event; and
e. Failure to shed load.
Lack of vigilance by management, system operator training, and operating practices at
Con Edison were identified as major factors leading to the blackout. System
restoration procedures were also found to be inadequate; the restoration process was
delayed by many unexpected problems.
1.2.2.2 August 14, 2003 Blackout
(Source: Technical Analysis of the August 14, 2003, Blackout: What Happened, Why, and What Did We Learn?, North American Electric Reliability Council, July 13, 2004. Available at http://www.nerc.com/~filez/blackout.html)
August 14, 2003 was a hot summer day in the Midwest and northeastern United
States, but not unusually so. System loads in the eastern Great Lakes area exceeded
the day-ahead forecasts from the previous day but were below record levels.
FirstEnergy, the system operator for northern Ohio, was importing power into the
Cleveland area to meet the demand. As a consequence of these power imports, some
local unit and capacitor bank outages, and the heavy air-conditioning loads, voltages
in this area began sagging as time passed and the load increased. Efforts to maximize
reactive output and support the voltages caused the Eastlake 5 unit near Cleveland to
trip at 13:31. This unit trip removed 600 MW and 400 MVAr of generation from the
area, further increasing the imports to serve the load and exacerbating the voltage
situation.

Figure 1-6: System affected by the 1977 blackout.
Shortly after 14:14, the alarm and event logging software in the FirstEnergy EMS
system failed, preventing alarms of line trips from being issued to the system
operators. The operators were unaware of the computer failure and did not know that
they essentially were operating blind. Midwest ISO (MISO), the reliability
coordinator for the area, also encountered software problems with some of their
system analysis and monitoring tools. These tools did not recognize some line outages
that had occurred earlier in the day and hence did not function correctly. These
multiple, independent software failures inhibited both FirstEnergy and MISO from
fully seeing or understanding the events that unfolded over the following two hours.
At 15:05:41 (3:05:41 pm EDT), the Chamberlin-Harding 345 kV line tripped and
locked out while being loaded at only 44% of its normal rating. This line is one of
four 345 kV transmission lines connecting the Cleveland area to generation in the
Ohio River Valley to the southeast. Subsequent investigation concluded that the line
tripped because of a ground fault caused by conductors sagging into a 42 foot tall tree
growing in the line's right of way. Neither FirstEnergy nor MISO was aware that the
line tripped at this time. The trip of this line naturally increased the loading on the
three remaining 345 kV lines into Cleveland.
Just about half an hour later (at 15:32:03), a second 345 kV line (Hanna-Juniper)
southeast of Cleveland tripped. Nine minutes later, a third 345 kV line to the southeast
of Cleveland (Star-South Canton) also tripped and locked out. While more heavily
loaded than Chamberlin-Harding (relative to line ratings), these circuits were still
within emergency ratings. Like Chamberlin-Harding, the subsequent investigation
showed that these two 345 kV lines tripped following ground faults caused by contact with excessively
tall trees. At this point, only one 345 kV line (Sammis-Star) and several underlying
138 kV lines remained in service to connect the Cleveland area to the Ohio Valley,
and many of these lines were now overloaded. Over the next 25 minutes, these 138
kV lines southeast of Cleveland began sagging into underlying objects and tripping,
gradually pushing the loading on Sammis-Star 345 kV up to 120% of its emergency
rating. While all of this was going on, FirstEnergy and MISO had indications of
trouble but did not fully comprehend the seriously degraded state of the transmission
system southeast of Cleveland.
At 16:05:55, the last 138 kV line still connecting Cleveland to the Ohio Valley (Dale-
West Canton) tripped after sagging into a tree. Two seconds later, the Sammis-Star
345 kV line tripped, completely severing the transmission path into Cleveland from
the southeast. Unlike the earlier line trips, the Sammis-Star line opened because the
distance relays for the line viewed the high current as a distant fault within the zone 3
relay characteristic of the line's protection scheme. The 1400 MW of power that was
flowing on this line was now diverted both east and west over comparatively long
distances onto the two remaining lines into Cleveland that parallel the shore of Lake
Erie (see Figure 1-7). These large loop flows greatly exceeded the capability of the
transmission systems in these areas and resulted in significant overloads which soon
led to additional line tripping. The opening of Sammis-Star initiated a rapid cascade
that would have been virtually impossible to stop by human intervention [5].
The cascade progression was initially accelerated by additional distance relay actions
with protection zones (2 and 3) set much longer than the length of the line
(overreaching). Two 345 kV lines in northwest Ohio (Muskingum-Ohio Central-
Galion and East Lima-Fostoria Central) tripped around 16:09. These line trips
completed a separation of northern Ohio from central Ohio and forced the loop flow
further north into Michigan. About 90 seconds later, two 345 kV lines in southwestern
Michigan (Argenta-Battle Creek and Argenta-Tompkins) tripped. After these lines
tripped, the system became transiently unstable. In rapid succession, eastern and
western Michigan separated, Cleveland separated from Pennsylvania, eastern
Michigan and Toledo separated (leaving northern Ohio as an island), machines in
Detroit pulled out of step and lost synchronism, northwestern Ontario separated from
the rest of the province, New York and Pennsylvania separated, New York and New
England separated, eastern and western New York separated, and Ontario and New
York separated. All of this occurred in less than 15 seconds. The breakup resulted in
five major islands, three of which (northern Ohio, eastern New York, and
Ontario/eastern Michigan) soon collapsed and blacked out (see Figure 1-8). The other
two islands (New England/Maritime provinces and western New York) survived with
some load loss.
Postmortem analysis concluded that the system was still operationally secure
immediately before the Chamberlin-Harding outage at 3:05 pm; i.e., the system could
withstand the loss of any single transmission line or generator. However, after
Chamberlin-Harding tripped, operator action was needed to restore the system to a
secure state. Since the operators were unaware of the line trip, no corrective action
was taken.
The August 14, 2003 blackout was the result of a thermal cascade, which was
accelerated by overgrown trees. For one hour following the Chamberlin-Harding
outage at 3:05 pm, all of the lines that tripped did so because of contact with
underlying trees and distribution lines. After one hour, this thermal cascade eventually
culminated in additional line trips from impedance relays, declining voltages, and,

finally, transient instability. However, the blackout developed as a result of an
uncontrolled thermal cascade of the transmission system southeast of Cleveland.
The post-mortem investigation identified four root causes of the blackout:
1. A lack of understanding of the transmission system in northeastern Ohio by
FirstEnergy and the regional council ECAR,
2. Lack of situational awareness at FirstEnergy,
3. Inadequate tree trimming, and
4. Inadequate real-time diagnostic support from the reliability coordinators.
Multiple, independent software failures at both FirstEnergy and Midwest ISO were
major factors in both causes 2 and 4 above. Remarkably, MISO was using software
that was considered under development and not fully mature to monitor and secure
the power system in its role as reliability coordinator [5].
A number of other factors contributed to the spread of the blackout, including
overreaching impedance relays and a lack of coordination between generation
protection and transmission protection systems. FirstEnergy had also neglected to
study the reactive needs of their system and had adopted voltage criteria which were
insufficient to secure the system; as a result, the system was near the edge of a voltage
collapse on the afternoon of August 14. However, these factors were not causal [5].
Further discussion and analysis of the August 14 blackout is also presented in sections 2.3 and 3.2.5 of Chapters 2 and 3, respectively.

[Figure 1-7 consists of three map panels; only the panel captions are reproduced here.]
a) The transmission path from the Ohio Valley to Cleveland was gradually severed, August 14, 2003, 3:05 pm to 4:06 pm.
b) Cascade spread from Ohio to Michigan, 4:06 pm to 4:10:36 pm.
c) The separation of the transmission system across Michigan and the separation of Ohio from Pennsylvania resulted in a large power shift across Pennsylvania, New York, and Ontario (4:10:38 pm to 4:10:39 pm).
Figure 1-7: Power transfer shift from New York/Pennsylvania through Ontario into Michigan and Ohio, in the final stages of the 2003 Northeast Blackout.

[Figure 1-8 consists of two map panels; only the panel captions are reproduced here.]
a) Northern Ohio separated from Michigan and formed an island, 4:10:39 pm to 4:10:40 pm. A large island comprised of the Northeast US, Ontario, and Michigan completed separation from the Eastern Interconnection, 4:10:43 to 4:10:45 pm.
b) End result showing regions that blacked out.
Figure 1-8: Islanding of the Northeast region that eventually blacked out.

1.2.3 Western Interconnection System Breakups of 1996 [6, 7]
1.2.3.1 Western System Breakup of July 2, 1996 [8-10]
On July 2, 1996 hot weather had produced heavy loads throughout the Western
System Coordinating Council4 (WSCC). Water supplies were abundant in the north
and there were heavy power imports from Canada (about 1850 MW), through the
Bonneville Power Administration (BPA) service area, into load centers in California.
Environmental mandates had forced BPA to curtail generation on the lower Columbia
River in order to aid fish migration. This resulted in both a reduction in voltage
support and system inertia in an area from which both the California-Oregon Intertie (COI) and the Pacific HVDC Intertie (PDCI) originate. This had two consequences: first, it threatened the ability of those transmission paths to sustain heavy imports into California; second, it increased the system's exposure to the north-south inter-area mode of oscillation (Canada vs. Southern California and Arizona). The power flow also involved unusual exports from the
Pacific Northwest into Southern Idaho and Utah, with Idaho voltage support reduced
by a maintenance outage of the 250 MVA Brownlee #5 generator near Boise.
At 02:24 pm local time, a 345 kV line emanating from the Jim Bridger plant (in SW Wyoming) into SE Idaho tripped due to arcing to a tree. A relay error also tripped a parallel 345 kV line, and stability controls then tripped two 500 MW generators.
Inadequate reserves of reactive power produced sustained voltage depression in
Southern Idaho, accompanied by oscillations throughout the Pacific Northwest and
northern California. About 24 seconds after the fault, the outage cascaded through
tripping of small generators near Boise plus tripping of the 230 kV line from Western
Montana to SE Idaho. Then voltage collapsed rapidly in Southern Idaho and the north
end of COI. This was further aggravated by the false tripping of three units at McNary.
Within a few seconds, the western power system was fragmented into five islands,
with most of southern Idaho blacked out.
On the following day, the President of the United States directed the Secretary of Energy to provide a report that would commence with technical matters but work to a conclusion that "assesses the adequacy of existing North American electric reliability systems and makes recommendations for any operational or regulatory changes." The report was delivered on August 2 [8], just eight days before the even greater breakup of August 10, 1996.
1.2.3.2 Western System Breakup of August 10, 1996 [11-15]
This event is unique in that a major effort was made by WSCC members to simulate
the event and validate the study models. The event affected about 7.5 million
customers and resulted in over 30,000 MW of load being interrupted and over 25,000
MW of generating capacity being dropped from the system. Much of the load
interrupted was the result of underfrequency load shedding. The lost generators
involved 175 generating units in addition to the Northwest hydro units that were
intentionally tripped when the Pacific Intertie opened [12]. On this day, temperatures
and loads were somewhat higher than on July 2, 1996. Northwest water supplies were
still abundant (which was unusual for August) and thus imports from Canada had

4 The Western System Coordinating Council (WSCC) is now known as the Western Electricity Coordinating Council (WECC). Thus, for historic reasons, some references and older diagrams used in this report list the name of the coordinating council as WSCC, while other references throughout this document to recent publications (e.g., the coordinating council's present website) refer to WECC.

increased to about 2300 MW. The July 2 environmental constraints on lower
Columbia River generation were still in effect, reducing voltage and inertial support at
the north ends of the COI and PDCI. Heavy rain and hot weather during 1996 had
resulted in faster than normal tree growth in northern California, Oregon, and
Washington. Right-of-way clearing had not been increased accordingly, which resulted in reduced clearance between power line conductors and tree tops. This contributed to flashovers from conductors to tree tops, referred to here as "tree faults".
The initiating event occurred at 15:42:37, which is defined here as t = 0. At t = 0 the Allston-Keeler 500 kV line tripped due to a tree fault. The Keeler-Pearl 500 kV line also opened due to the breaker configuration at Keeler. The line was carrying 1300 MW. The only remaining 500 kV path for this north-to-south flow was through Hanford and the underlying 115 kV and 230 kV lines. This shift of flow significantly reduced the voltages in the area and resulted in a large increase in power flow on lower voltage transmission lines. At t = 5 minutes, the shift in power flow from the Allston-Keeler and Keeler-Pearl 500 kV lines caused the Merwin-St. Johns 115 kV line to trip by inappropriate relay action, and the Ross-Lexington 230 kV line sagged into a tree and faulted. Also at this time, 13 McNary generators tripped sequentially due to malfunctions in their exciter protection. The sequential tripping of McNary units occurred over an 80 second period. About 850 MW of generation was tripped. The
tripping of the McNary units initiated undamped inter-area oscillations in the system.
One of the critical quantities was the COI power flow. For about 40 seconds, 0.25 Hz
oscillations were sustained with essentially no damping (See Figure 1-9). As
frequency decayed and intertie power flow fell below schedules, automatic generation
control (AGC) and governors caused generation to increase at Grand Coulee, Chief
Joseph and John Day and an increase in Canadian exports to the northwest U. S.
These resulted in increased north-to-south power flow on the remaining lines. With
the loss of McNary var support and increased line loading, system voltage decayed
and COI power oscillations grew rapidly, and then power flow was interrupted at t = 6 minutes, 15 seconds. The three COI circuits were tripped by a low-voltage, high-current condition. The loss of the COI initiated a total system breakup into four islands.

Figure 1-9: Observed and Simulated California-Oregon Intertie Power Flow During McNary
Generator Tripping (From Reference [12], with permission).
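To make the damping issue concrete, the short Python sketch below estimates the damping ratio of a 0.25 Hz swing from successive oscillation peaks using the log-decrement method. The ringdown signal and all numbers are synthetic, assumed only for illustration; this is not the recorded COI flow of Figure 1-9.

# Sketch of estimating the damping of an inter-area mode from successive swing
# peaks (log-decrement method). The ringdown below is synthetic; it is not the
# recorded COI power flow, and the mode frequency and damping are assumed.
import numpy as np

f_mode = 0.25                       # assumed inter-area mode frequency, Hz
zeta   = 0.01                       # assumed (nearly zero) damping ratio
t  = np.arange(0.0, 60.0, 0.05)
wd = 2 * np.pi * f_mode
p  = 4300 + 400 * np.exp(-zeta * wd * t) * np.cos(wd * t)   # MW-like swing

# amplitudes of two peaks separated by n full cycles (peaks fall at t = k/f_mode)
n  = 8
a1 = p[np.argmin(np.abs(t - 2 / f_mode))] - 4300
a2 = p[np.argmin(np.abs(t - (2 + n) / f_mode))] - 4300

delta    = np.log(a1 / a2) / n                      # logarithmic decrement per cycle
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
print(f"estimated damping ratio ~ {zeta_est:.3f} (input was {zeta})")

Applied to measured intertie power (for example from synchrophasor records), the same simple procedure gives a quick check of whether an inter-area mode is adequately damped.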

1.2.3.3 Key Lesson Learned
One of the key lessons learned as a result of the 1996 system break-ups in the Western Region was the inadequacy of the planning models to properly duplicate the observed events (Figure 1-9 is indicative of this fact). As a result, the WSCC undertook a mandated plan of testing all generating facilities in the system, as well as continuing modeling efforts, which persist to this day, on improved modeling of loads, turbine-governors and transmission equipment by the WECC Model Validation Working Group (www.wecc.biz). In summary, the major lessons were:
- Simulation: The attempted simulation of the August 10 event made it very clear that many of the models in use were not appropriate.
- Excitation system controls: Overexcitation limiters on many generators were found to be poorly designed, which led to generator tripping at the most inappropriate time. The McNary problem was particularly critical because the generator field current protection scheme caused 13 generators to trip rather than limit the field current.
- Voltage Control on the Intertie: Depressed voltage in the area of the PDCI northern terminal caused the DC controls to respond in a way that required more var support from the AC system, thereby further depressing the AC voltage. BPA has implemented new controls, which reduce the var support taken from the AC system when the AC voltage is depressed [16]. The August 10 event showed the adverse effect of insufficient var capability. As a result, two 460 Mvar shunt capacitor banks have been installed at a point that allows the generators in the lower Columbia basin to operate at lower reactive power levels, thereby retaining reserve var capacity for extreme events such as the August 10 event.
- Power System Damping: Damping of interarea oscillations has been an important consideration in the WECC area since it was established. The major countermeasure has been the installation of Power System Stabilizers (PSS) on all generators in the WECC area. It is apparent that one of the causes of the August 10 blackout was insufficient damping of interarea oscillations at around 0.25 Hz. On August 10, a PSS was out of service at a nuclear plant in southern California. Several other plants had PSSs out of service due to noisy transducers. The nuclear plant PSS has been returned to service and an effort is being made to keep all PSSs in the WECC in service [16].
- Controlled separation: An islanding scheme for separation of the western interconnection into north and south islands was removed from service following the addition of a third 500 kV circuit from Oregon to California. Had this scheme been in service, the breakup would have been much less severe. This control has now been upgraded and reinstated.
- Generator control and protection following islanding: In the southern islands, many generators tripped undesirably following underfrequency load shedding. Better design and coordination of generator control and protection for abnormal voltage and frequency excursions are needed.

1.3 Major Blackouts in Europe
1.3.1 Blackout in Southern Sweden and Eastern Denmark, September 23, 2003
The national transmission grid in southern Sweden is built up in a meshed structure
with primarily single 400 kV lines connected to substations that are mainly equipped
with double buses. Nuclear generation is connected to the national grid in three sites
with units ranging from 500 to 1200 MW. Two 600 MW rated HVDC-links to
Germany and Poland are also connected to the 400 kV grid as well as a major oil-fired
power station. The Zealand grid in eastern Denmark is closely connected to the
Swedish grid by a double-circuit 400 kV set of submarine AC cables in parallel with
an older set of 132 kV cables. Generation comprises large coal-fired units of up to 650 MW, mixed-fuel CHP plants (Combined Heat and Power) and a considerable share of
wind power. Zealand is also connected to Germany by one 600 MW HVDC-link.
Figure 1-10 shows a diagram of the system.

Figure 1-10: Swedish power system.


1.3.1.1 Course of Events
Pre-fault Conditions
Prior to the disturbance, operating conditions were stable and well within the
constraints laid out in the operational planning and grid security assessment. The
demand in Sweden was around 15 000 MW, which was quite moderate due to the
unusually warm weather for the season (this area is typically winter peaking).
The nuclear generation in the affected area was limited due to on-going annual overhaul programs and delayed restarts for some units due to nuclear safety requirements. There was no generation in the power station of Barsebäck in the southernmost

province of Sweden due to the permanent closure of unit 1 and a delayed restart of
unit 2. Apart from the nuclear, only minor hydro and local CHP generation was in
service in southern Sweden.
The generation in Zealand was scheduled according to the spot market trade on
NordPool to an export of 400 MW to Sweden.
Two 400 kV lines in the area were out of service due to scheduled maintenance work.
Likewise, the HVDC links to Poland and Germany were taken out of operation due to
the annual inspection and some minor work.
Maintenance of grid components must be avoided during the high-load winter period
and carefully coordinated with the nuclear unit overhaul periods during the rest of the
year. That implies that some maintenance work has to be scheduled to coincide with
the nuclear outages and others must not, depending on their location in the grid.
Outages of main grid components usually lead to a reduction of the transmission
capacity. In order to sustain a consistent level of security against anticipated faults,
transmission constraints in critical grid sections are continuously adjusted to the
current operating and outage conditions.
Initial Loss of Generation.
At 12.30 hrs, unit 3 in Oskarshamn Nuclear Power Plant started to pull back by
manual control from its initial 1235 MW generation to around 800 MW due to
internal valve problems in the feedwater circuits. The attempts to solve the problems
failed and the reactor scrammed to a full shut-down and stopped after around 10
seconds.
Loss of a single 1250 MW unit occurs regularly and it is regarded as a standard
contingency. According to security standards applied within the Nordic
Interconnected Grid, real and reactive reserves as well as spare transmission capacity
shall be available to cope with this level of disturbance severity without any
subsequent supply interruptions. These conditions were well at hand at this moment
and the system could handle this outage without any immediate serious consequences.
After a normal transient in frequency and automatic activation of the spinning (momentary) reserves from hydro power in Norway, northern Sweden and Finland, the system returned to stable operating conditions within less than a minute. Voltages in
the southern part had dropped around 5 kV, but remained within the 405-409 kV
level, which is by no means critical. The frequency was automatically stabilized
slightly below the normal operating limit of 49.90 Hz. Actions were therefore initiated
to raise the frequency.
Transmission levels were still contained within the predetermined security constraints.
The power flow was, however, redistributed in the grid due to the loss of generation
on the south-eastern side. More power was flowing on the western side to supply the
demand in the south.
The Bus Fault
At 12.35 hrs, a double bus fault occurred in a 400 kV substation on the western coast
of Sweden. Two 900 MW units in the nuclear power station of Ringhals are normally
feeding their output to this substation over two radial lines, connected to separate
buses. The buses are connected to each other through interconnector bays, equipped
with circuit breakers that are designed to sectionalize and split the substation in order

to contain a fault on one bus and leave the other intact with its connected lines in
service. A fault on one bus should therefore disconnect only one of the two nuclear
units (see Figure 1-11).

Figure 1-11: Switchgear layout in Horred Substation.


The Disconnector Damage
The reason for the double bus fault was damage to a disconnector located in the bay
between the two buses that the nuclear units were connected to.
The disconnector is of a type that rises vertically from beneath to contact with the bus.
One of the mechanical joints allowing the structure to move upwards had been
dislocated as a result of overheating. The isolator was inspected in March 2003 with
respect to thermal overloads but nothing irregular was detected. Similar damage had never been observed in any of the roughly 70 sets of isolators of this type in the Swedish 400 kV grid.
The loading current of the isolator had increased from around 1000 A to some 1500 A
following the 1235 MW generation loss on the east coast. This is far below its maximum load rating of 3100 A.
With the joint broken, the vertical structure of the isolator collapsed and it fell to the
side in the direction of the other parallel bus (Figure 1-12). Falling down, the contact
with its own bus opened with the load current still flowing. This ignited an arc
initially between the bus contact and the isolator parts. Eventually the arc flashed over
to the nearest phase of the adjacent bus. The insulation distance between the buses
was reduced by the protruding parts of the fallen isolator. The wind may also have drifted the arc towards the other bus, helping it to bridge the distance.
Through the arc two different phases from the two buses were short-circuited. This
fault was directly detected by the separate bus protection devices, immediately
tripping the circuit-breakers for all incoming lines to both buses. This is a
predetermined action given by the design of the protection system.

[The figure identifies the failed disconnector between busbar A and the nearest phase of the adjacent busbar B, with the overheated joint and the point of rupture marked.]
Figure 1-12: Faulty Disconnector in Horred.


System impacts
The consequences to the power system from the disconnection of the buses were that
the two nuclear units with a total output of 1750 MW were tripped and that the grid
lost its transmission path along the west coast. Initially this triggered heavy power
oscillations in the system, very low voltages (Figure 1-13) and a further drop in
frequency down to a level slightly over 49.00 Hz, where underfrequency load-
shedding schemes start to operate.
The grid was then heavily overloaded on the remaining south-east and south-central
parts in terms of capability to sustain the voltages. This part of the grid had no major
generation connected and thus the reactive power support was weak.
[Figure 1-13 shows four recorded traces over roughly 0-140 s: 400 kV bus voltage, active power, reactive power and frequency, with the fault in Horred, the voltage collapse and the grid separation marked.]
Figure 1-13: Measured voltage, power P/Q and frequency in north Sweden.
During some 90 seconds after the bus fault the oscillations faded out and the system
seemed to stabilize. Meanwhile, the demand in the area gradually recovered from the initial reduction that had followed the voltage drop, through the action of the numerous feeder transformer tap-changers. This lowered the voltage on the 400 kV grid further, down

to critical levels. Finally the situation developed into a voltage collapse in a section of
the grid south-west of the area around the capital city of Stockholm.
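This mechanism, in which tap-changers restore load against a weakened grid until no operating point remains, can be illustrated with a minimal two-bus sketch in Python. The per-unit numbers below are assumed purely for illustration and do not represent the actual Swedish system.

# Minimal two-bus sketch of how tap-changer driven load restoration can pull
# the transmission voltage past the nose of the PV curve. The per-unit numbers
# are assumed for illustration; this is not a model of the Swedish grid.
E = 1.0          # assumed remote-source voltage, pu
X = 0.55         # assumed post-disturbance transfer reactance, pu

def receiving_voltage(P, E, X):
    """Upper solution of the lossless two-bus PV relation for a unity-pf load."""
    disc = E**4 / 4 - (X * P)**2
    if disc < 0:
        return None                                   # past the nose: no solution
    return ((E**2) / 2 + disc**0.5) ** 0.5

for step in range(6):
    P = 0.70 + 0.06 * step        # tap-changers gradually restore the load, pu
    V = receiving_voltage(P, E, X)
    if V is None:
        print(f"restored load {P:.2f} pu -> no operating point (voltage collapse)")
    else:
        print(f"restored load {P:.2f} pu -> V = {V:.3f} pu")

With the assumed reactance, the last load steps have no solution on the PV curve, which is the textbook signature of the voltage collapse described above.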
When the voltage collapsed to very low levels, circuit breakers in the critical grid section were tripped by distance protections reaching their low-impedance criteria. The grid thereby split into two parts. The southern part, comprising southern Sweden and eastern Denmark, initially remained interconnected but suffered from a massive shortage of generation (see Figure 1-14). The kinetic energy of the remaining generators in Denmark was rapidly drained as they tried to feed the demand. Within seconds, the frequency and voltage had dropped to levels at which generator and other grid protections reacted, and this entire subsystem collapsed.

[Figure 1-14 plots the power inflow (MW) versus time (seconds), with the fault in Horred and the sub-system breakdown marked.]
Figure 1-14: Inflow of real power from Denmark (Zealand).


Basically all supplies south of a geographical line between the cities of Norrköping in
the east and Varberg in the west were interrupted. Some minor hydro power stations
survived, feeding small islands around them. In total, the initial loss of supply was
approximately 4500 MW in Sweden and 1850 MW in Denmark.
North of this area the power system remained intact, including the interconnections to Norway and Finland. Supplies in the Stockholm area were essentially not interrupted. However, some sensitive equipment reacted to the low voltage level and transients, leading to a few irregularities in traffic control systems and telecommunications.
Some simultaneous loss of supply was also reported in rural areas in the central parts
of the country, but these were caused by local weather conditions and they are not
related to the major disturbance.
1.3.1.2 Restoration
The immediate base for the restoration was the intact National Grid north of the split.
The hydro power in Norway, northern Sweden and Finland was fully available to pick
up the recovery of the demand.
Following emergency restoration procedures, lines and substations were energized to
build up the grid from north towards the south. The restoration could proceed along

the eastern and central paths. On the western side difficulties appeared in energizing
one vital 400 kV line. In spite of some obstacles, the National Grid Control Centre
managed to energize the 400 kV grid down to the southernmost substations within
less than one hour (see Figure 1-15). Initially the voltages were rather volatile due to
the lack of reactive support from generators that were not yet synchronized. A serious
loss of remote control of one important substation caused delays in stabilizing the
voltage and thus to the restoration process.

[Figure 1-15 is a map of the 400 kV grid about one hour into the restoration, showing the substations involved (Hallsberg, Kimstad, Simpevarp, Horred, Barsebäck and others) and indicating line outages due to maintenance work and due to the disturbance.]
Figure 1-15: Grid restoration after 60 minutes.


The restoration in Denmark suffered from faults in the black-start facilities in two
power stations. The grid had to be energized from Sweden, which was accomplished
some 70 minutes after the grid separation. With the feeder transformers reconnected
to the 400 kV grid, the regional and local grids were restored subsequently. Some
restrictions were imposed on the recovery of supplies but they were soon alleviated as
the grid restoration gradually became more complete and generation was made
available in the area. By 19.00 hrs in the evening almost all supplies in Sweden and
Denmark were reported to have been resumed. Figure 1-16 summarizes the loss-of-
load during the entire event.

Figure 1-16: Loss-of-load in the affected area in Sweden.

During the restoration process, some idle generation in oil-fired plants was called on to synchronize as soon as possible. The ongoing maintenance work on two 400 kV lines was cancelled and preparations to return the lines to service were ordered.
1.3.1.3 Conclusion
The cause of the major blackout was a severe grid fault that occurred only a few minutes after a more ordinary, but still significant, initial fault. The probability of such a coincidence is extremely low, as the interrelationship between the faults at the two separate locations was either zero or very weak.
The initial fault (loss of a 1250 MW generation unit) can be classified to be on the N-
1 level according to generally applied grid security standards. Any subsequent single
N-1 level fault should be managed without any external consequences to the supply,
provided that 15 minutes are available for activation of stand-by reserves, if
necessary. The fact that the faults occurred only 5 minutes apart and that the grid was
crippled by losing vital parts means that the entire disturbance can be classified to be
at least an N-3 level or higher event. Furthermore, the second fault resulted in the loss
of an entire busbar with multiple elements. This is far beyond the severity degree that
the Nordic Power System is designed and operated to cope with.
1.3.1.4 Determined Actions
A number of main remedial actions have been identified and determined as follows:
- Review of the Planning and Operational Reliability Standards applied within the cooperation between the Nordic Transmission System Operators, against the background of the increasing vulnerability of modern society to blackouts.
- Mandatory Technical Requirements on Power Stations will be enforced, in particular to manage the transition to house-load operation on external grid disturbances.
- Reinforcement of the Transmission Capacity to the south of Sweden by activation of previous plans to build a new 400 kV line in an existing right-of-way of old 220/135 kV lines. Attention is also drawn to the importance of new generation to be established in the area.
- The further development of advanced system protection schemes will be considered.
- Restructuring of the switchgear in the fault-struck substation with respect to the risk of flashovers between the main buses. Review of other substations of similar importance.
- Enforced inspections of disconnectors and scheduled replacements of critical parts.
- Review of applied methodology and resources needed for a satisfactory control of the out-sourced maintenance.
- Analysis and actions to secure remote control functionalities under outage and transient conditions.
- Review of procedures to manage the information demand from media and authorities during a power outage.

1.3.2 Italian Blackout of September 2003
1.3.2.1 Background
On Sunday, September 28th, 2003, the Italian power system experienced an electric power system blackout for the first time. The incident affected about 55 million people and resulted in about 180 GWh of energy not supplied in the 24 hours after the incident.
During the year of the event, the energy price in Italy was higher than in the European
market. This justified the pressure from large industrial customers in Italy for
importing cheap power from foreign countries. During 2002, Italy imported about
16% of its total electrical energy supply. At the time of the events, the interconnection
between the Italian system and the remaining part of the Union for the Coordination
of Electricity Transmission (UCTE) grid was through six 400 kV lines5 and nine 230
kV lines (Figure 1-17). An additional 500 MW dc undersea cable between Italy and
Greece was put in service at the end of 2002. The most important tie-lines were those
to France (three 400 kV and one 230 kV lines) and to Switzerland (two 400 kV and
six 230 kV lines) systems.
The operation of the European interconnected electricity system is subject to security
and reliability standards set within the framework of the association of Transmission
System Operators (TSOs) in continental Europe, UCTE [17]. The main principle
underlying the security and reliability standards is that the system must be operated in
such a way that any single incident has to be faced without security risks, i.e., without
any technical constraint violation (N-1 state). This rule also states that in case of such
an event, the system must not only withstand the situation but also return to the N-1
secure state as soon as possible, to prevent a possible cascade of events. N-1 security
can be accomplished also in cooperation with a neighboring TSO, subject to the
previous agreement of the latter. An additional agreement was in operation among the
Swiss (ETRANS), French (RTE) and the Italian (GRTN, now called TERNA)
operators, in order to make the exchange of information easy during possible
perturbations.
On Sept. 28th, at 3:01 am, the Italian load was 27.4 GW, including 3400 MW of
pumped storage plants. Italian power plants supplied 20.3 GW, 18 GW from thermal
power plants and 1 GW from hydro power plants. Tie-lines from the northern border
supplied 6800 MW and 300 MW from Greece. Table 1-1 shows the power flows on
the different northern borders in terms of forecasted values (day-ahead security
assessment) and actual values. At 3:01, the Italian load was about 800 MW more than forecasted, resulting in a temporary import about 500 MW higher than planned. However, the situation was considered secure, as a Transmission Reliability Margin of 500 MW had previously been agreed upon among the three system operators involved.

5 Note: the Albertville-Rondissone tie-line is a double-circuit line.

Figure 1-17: Interconnection of the Italian system [18].
Although the import level was very high as a percentage of the load, the power reserve, available in different time frames, was 14100 MW, of which:
- 3300 MW of pumped-hydro pumping load;
- 3500 MW from thermal plants already in operation;
- 2500 MW from hydro power plants that could be synchronized in a few minutes;
- 4800 MW from the pumped storage generation plants that could be synchronized in about 30 minutes;
- 1000 MW of additional industrial load always available for shedding.
Table 1-1: Import of the Italian system before the incident [21].

Interface       Day-ahead schedule [MW]     Actual import at 3:01 am [MW]
Switzerland     3686                        3651
France          1996                        2334
Austria         258                         198
Slovenia        428                         676
Total           6368                        6859
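A quick arithmetic check of Table 1-1, sketched below in Python, reproduces the roughly 500 MW excess import over the day-ahead schedule and compares it with the agreed 500 MW Transmission Reliability Margin. All figures are taken directly from the table.

# Quick check of the pre-disturbance import figures in Table 1-1 against the
# agreed 500 MW Transmission Reliability Margin mentioned in the text.
schedule = {"Switzerland": 3686, "France": 1996, "Austria": 258, "Slovenia": 428}
actual   = {"Switzerland": 3651, "France": 2334, "Austria": 198, "Slovenia": 676}

over_schedule = sum(actual.values()) - sum(schedule.values())   # 6859 - 6368 MW
TRM = 500                                                       # MW, agreed margin
print(f"import above day-ahead schedule: {over_schedule} MW (TRM = {TRM} MW)")
print("within the agreed margin:", over_schedule <= TRM)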

1.3.2.2 Dynamic Phenomena During the Event [19-23]


The first three events occurred in the Swiss grid (Figure 1-18). Due to a flashover to a
tree, at 3:01:22 am the 400 kV line Mettlen-Lavorgo tripped; the automatic and the
manual re-closures attempted in the following 5 minutes failed due to the large phase
angle across the breaker (42). The trip caused the power flow in the serial tie-line
Lavorgo-Musignano to drop from 1250 MW to 550 MW and the overload of the
Swiss parallel lines Sils-Soazza (400 kV) and Mettlen-Airolo (230 kV).
Consequently, although the total Italian import was unchanged, it decreased from
Switzerland (-550 MW) and increased from France (+370 MW), Austria (+30 MW)
and Slovenia (+150 MW).

Figure 1-18: Swiss grid. (Note: this is the 2003 system configuration, the system has since changed.
www.etrans.ch)
As the Swiss grid was operated according to the corrective N-1 security criterion,
ETRANS had only 15 minutes available to relieve the overload. ETRANS operators
tried some topological countermeasures and at 3:11 called GRTN asking for a 300
MW import reduction, which took place in the following 10 minutes.
However, the mitigation attempts did not relieve the thermal overload of the Sils-
Soazza line. As a result, the continuing sag of the line resulted in tree contact at 3:25:22 am and the subsequent tripping of that line. This then caused the cascading trips of
the 230 kV line Mettlen-Airolo and of all the tie-lines in the north of Italy, except for
a weak connection to Slovenia, which was not properly tripped by its distance
protections. Figure 1-19 shows the reduction of the Italian import from the different
countries during the transient.
[Figure 1-19 plots the Italian import (MW) from 3:25:20 to about 3:26:36 am, with separate traces for France, Switzerland, Slovenia and Austria.]
Figure 1-19: Italian import during the disconnection [25].


Figure 1-20 can help in understanding the dynamic phenomena during the
disconnection of the Italian system from the UCTE system. Each perturbation seen in

the frequency plot, e.g., at 10 s and at 13 s, corresponds to the trip of the lines between
Switzerland and Italy [24]. The decrease in the frequency, although quickly
recovered, caused an increase of the electrical angle and of the impedance between
the Italian system and the UCTE system. This increasing phase shift resulted in a significant voltage drop close to the French border that triggered the distance relays, progressively disconnecting the tie-lines to France. After many seconds, the Italian
system remained connected to the rest of the UCTE grid only through a local 132/150
kV grid to Slovenia, for about 1 minute. In this period, the fact that Italy was losing
synchronism with respect to the rest of the European system is clearly demonstrated
by the oscillations, at about 1 Hz, of voltages and of the reactive power shown in
Figure 1-21. Figure 1-21 also shows the complete disconnection of the Italian system
after about one minute and thus the increasing trend of frequency in the European
system after the disconnection of Italy.

Figure 1-20: Frequency [Hz], voltage [kV] and exported power [MW] recorded in Musignano
substation (Italy), close to the Swiss border [25].

Figure 1-21: Records of some electric variables in Hungary and Germany during the separation of the
Italian system [19].
After the disconnection, Italy experienced a total power deficit of about 6800 MW
and frequency continued to decline. For this phase, several issues must be discussed,

regarding the behavior of the defense plan and of power plants [25]. In the 1980s,
ENEL developed a defense plan in order first to avoid the separation from the UCTE
system and second to shed load and prevent a blackout, in case separation occurred.
During the 2003 incident, the first goal was not attained, because the separation
mainly occurred due to events outside Italy, and therefore not under the control of
GRTN. The automatic load shedding procedure was designed to start based on either
a combination of frequency and time derivative of frequency, or on pure frequency.
First, the pumped storage (if in operation) power plants and some industrial loads are
disconnected (starting at 49.6 Hz); then, starting at 49.1 Hz, the domestic customers' automatic load shedding plan progressively takes place, with the goal of preventing
frequency from dropping below 47.5 Hz. This limit value is the threshold under which
generators are allowed, after 4 s, to disconnect from the grid, according to the grid
code. They are also allowed to disconnect immediately if frequency goes below 46
Hz. In Figure 1-20, the effect of the disconnection of pumped storage power plants (at
49.6 Hz) and the first stage of the automatic load shedding (at 49.1 Hz) are clearly
visible. Two factors prevented this strategy from being successful:
1. most importantly, the behavior of power plants during the transient was
quite poor;
2. as a consequence of the partial power plant failures, the load disconnected
by the load shedding was not enough to stop the frequency decrease also
considering the minimum national demand during those early hours of
Sunday morning.
Figure 1-22 depicts the frequency and its time derivative, proportional to the power
unbalance. Moreover, it shows the instants at which pumped storage power plants
disconnected (in blue), and at which some major thermal power plants were prematurely put out of service by their protections (in red). In particular, different types of
protections operated during this transient: electrical and turbine protections. The
figure clearly shows that when the load shedding operated, the load-generation
unbalance (pink curve) was quickly reduced and that the whole system came very close to a complete recovery. Unfortunately, this was not possible due to the poor
behavior of 24 thermal generating units that tripped at frequencies well above 47.5
Hz, making the unbalance more and more dramatic and the load shedding unable to
restrain frequency decline. At 3:28, when the frequency dropped below 47.5 Hz, the
blackout was unavoidable.
A rough estimate gives a resulting final unbalance of about 900 MW, as follows:
- Loss of generated power (total 13176 MW):
  - 6664 MW due to the separation from UCTE;
  - 1700 MW due to distributed generators tripping on distribution grids;
  - 4812 MW due to the loss of 24 out of 50 main generators.
- Control actions (total 12300 MW):
  - 3200 MW of automatic pumping shedding;
  - 7700 MW of automatic load shedding;
  - 1400 MW contribution of primary frequency control.
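The interplay between the deficit, the system inertia and staged load shedding can be illustrated with the minimal uniform-frequency sketch below, in Python. The 49.6 Hz, 49.1 Hz and 47.5 Hz thresholds are those quoted above; the inertia constant, the load base and the individual shedding block sizes are assumed for illustration, and the untimely generator trips that defeated the real scheme are deliberately not modelled.

# Minimal uniform-frequency sketch of staged load shedding acting on a large
# power deficit. The 49.6/49.1/47.5 Hz thresholds come from the text; the
# inertia constant, load base and the UFLS block sizes below 49.6 Hz are
# assumed, and the generator trips that defeated the real scheme are ignored.
f0, H, S_load = 50.0, 4.0, 27_400.0        # Hz, s (assumed inertia), MW load base
deficit = 6800.0                            # MW deficit at separation (from text)
stages  = {49.6: 3200.0,                    # pumping + industrial shedding (text)
           49.1: 2000.0, 48.9: 2000.0, 48.7: 2000.0}   # assumed UFLS blocks

f, dt, t = f0, 0.05, 0.0
while f > 47.5 and deficit > 0.0 and t < 30.0:
    f += (-deficit * f0 / (2.0 * H * S_load)) * dt     # swing equation, df/dt
    t += dt
    for thr in [x for x in stages if f <= x]:
        deficit -= stages.pop(thr)                      # shed this block once

outcome = "decline arrested" if deficit <= 0.0 else "generators start to trip"
print(f"t = {t:.2f} s, f = {f:.2f} Hz, {outcome}")

In this simplified sketch the staged shedding arrests the decline within a couple of seconds; in the actual event the loss of a further 4812 MW of generation at frequencies well above 47.5 Hz kept the deficit growing faster than load could be shed.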

[Figure 1-22 plots the frequency (Hz) and its time derivative (Hz/s) against time, annotated with the instants at which pumped storage plants were shed and at which individual thermal units (for example Monfalcone, Montalto, Trino, Livorno, La Spezia and Porto Tolle) tripped, together with the corresponding MW values.]
Figure 1-22: Records of frequency and of its time derivative during the transient [23].
It is worth noting that the whole UCTE system experienced a significant transient until 4:30 am. Over-voltages at some nodes, up to 450 kV in the 400 kV grid, were an issue, but they disappeared within a few minutes. However, the main problems were due to the abnormal frequency, which peaked at 50.24 Hz and stabilized at 50.19 Hz after one minute. This caused deviations from the schedules and the tripping of generators (a positive effect) and of pumped storage plants and industrial loads (a negative effect). To manage the abnormal frequency, TSOs had to reduce the set points of the exports to Italy or block the frequency/power control, reduce generation, or start pumps.
1.3.2.3 Restoration
The restoration after the blackout was quick in the northern areas of the Italian
system, due to the interconnections with the UCTE system. Restoration took longer in
the southern part of Italy and approximately 20 hours for Sicily [25].
The restoration can be divided into three phases. The first phase (Figure 1-23), from 3:38 am to 8:00 am, was characterized by:
- the Northern part initially being supplied by Switzerland, France and Slovenia, and by a small number of generators synchronized with the UCTE system;
- lack of information on the topology of the system, because the SCADA in the National Control Center lost visibility of part of the system from 6:31 am to 1:17 pm;
- remote control and communication problems;
- difficulty in switching and failures in the operation of some disconnectors.
The second phase (Figure 1-24), from 8:00 am to 12:00 pm, was characterized by the fact that not all thermal units were yet back in operation and that the pumped storage reservoirs were not yet completely refilled. Due to the consequent lack of energy, distributors were asked to apply
rolling blackouts on the already supplied system (from 11:00 am to 6:00 pm). The
main problems in this phase were due to the high voltage on the Adriatic East
backbone, and to the difficulty in synchronizing the grid in the Naples area.

The third phase, from 12:00 pm to 9:40 pm, showed high and dangerous power
transfers from North to South due to lack of generation in the South, difficulty in
supplying the area of Brindisi (South-East) and the substation of the cable from
Greece, which was available only at 4:50 pm due to high voltages. Restoring the island of Sicily was also delayed: the supply from the continent started at 4:38 pm, but with no more than 200 MW, in order to preserve margin for the load ramp on the mainland that was expected at 8:00 pm.

Figure 1-23: First phase of the restoration [19].

Figure 1-24: Second phase of the restoration [19].

1.3.2.4 Lessons Learned
After the blackout, several international and national investigations were carried out
in order to understand the incident and to identify possible countermeasures. The
results of such investigations can be divided into measures to be taken in advance and
measures for the real time operation.
Measures to be taken in advance:
1. It is necessary to improve the coordination between the grid operators,
especially when the presence of the market makes the system operation close
to security limits. A first goal should be to make homogeneous the planning
and the operation procedures and organize common training sessions for the
Operators. In particular, UCTE recommends:
- Harmonize the N-1 criterion.
- Define maximum time delay to return the system to N-1 security conditions.
- Include voltage stability assessment.
2. Identify unambiguous procedures and improve coordination, also by making observable to each system operator not only the interconnection lines but also the closest areas of the grids in the neighboring countries.
3. Reinforce and make redundant the data communication and remote control systems.
4. Add phase angle computation to the contingency analysis.
5. Implement reactive shunt compensation to avoid overvoltages in no-load
conditions.
6. The growth of the power import requires effective protection and automation
to handle events such as tripping of overloaded lines to prevent cascading
outages. More advanced Wide Area Protection systems (WAP) may be an
option [26, 27].
7. All the network users should be more involved in the responsibility for the
security of the transmission system. More attention should be paid to the
observance of the grid codes, in particular for the setting of protections, and
the TSOs should be responsible for monitoring.
8. During the commissioning of the power stations, a more accurate testing of
protecting devices and governors should be performed.
9. More stringent connection rules should be prepared for the minor generation
connected to the medium voltage grids.
10. The automatic defense plan procedures should be updated frequently, especially in the presence of scenarios more uncertain than in the past. This means that the TSOs should frequently perform both steady-state and dynamic studies, including studies for tuning the contingency plan.
11. Harmonize tree trimming practices.
Measures for the real time operation:

12. Monitoring of the external networks and preventive agreements with foreign
TSOs on the possible emergency controls.
13. Representation on the Italian TSO control center of part of the Swiss and
French grids with State Estimation.
1.3.3 French Blackout of December 19, 1978
The failure which occurred over a very large portion of the French Electric System, on
December 19, 1978, was initiated at 8:26 am by the tripping of the 400 kV Bezaumont-Creney line. It quickly resulted in a complete service interruption of about 29 +/- 1 GW out of a total demand of 38.5 GW, a drop representing about 75%. The corresponding non-supplied energy was estimated to be about 120 GWh.
The operation of the French network, just before the event, was characterized by large
power imports from Belgium and Germany. These imports resulted in stressed
operating conditions (low voltages near Paris and in the western part of France, and heavily loaded lines in the eastern part of the system).
The tripping of the 400 kV Bezaumont-Creney line by the over-current protection was followed by new line overloads and cascading tripping, resulting in voltage collapse and loss of synchronism. The system then separated into different sub-networks (islands) in which load and generation could not be balanced.
Four stages of the event can be summarized as:
- 1st stage: initial event at 8:26 am
- 2nd stage: first attempt to restore the service, from 8:26 to 9:08 am
- 3rd stage: second event at 9:08 am
- 4th stage: progressive service restoration after 9:08 am
1.3.3.1 System Conditions Leading Up to the Initial Event
Starting from 8 am, the demand increased from 37.2 GW to 38.5 GW just prior to the initial fault. The actions of Automatic Load Frequency Control (ALFC) were unable to keep up with the growing generation-demand imbalance; the ALFC level reached its maximum contribution of +1 (level +1 reflects a 100% contribution).
Flows from the East towards Paris increased, leading to a degradation of the operating conditions of the EHV system, evidenced by overcurrents and voltage drops (see Figure 1-25).
The following overloads are of particular interest:
- at 8:06 am, activation of the 20 min overcurrent alarm on the 400 kV Bezaumont-Creney line; in order to mitigate the overload, various operator actions were taken, without success, between 8:06 am and 8:26 am
- at 8:16 am, 20 min overcurrent protection alarm on the 400 kV Mery-Chesnoy 1 line
- at 8:17 am, 20 min overcurrent protection alarm on the 225 kV Genissiat-Vielmoulin line
- at 8:18 am, 20 min overcurrent protection alarm on the 225 kV Creney-Les Fosses line, which disappeared at 8:22 am
Concerning voltage levels, the situation had worsened significantly since 8:00 am (see
Figure 1-26).

Figure 1-25: System conditions leading to the blackout.

Figure 1-26: Worsening system voltage levels.


Eventually, generation started to trip and back down:
- at 8:08 am, at the Bugey nuclear power station, unit 2 reduced its output, initially to 120 MW due to a leak on a heat exchanger, and then to 0 MW at 8:24 am
- from 8:00 am onwards, in the Paris area, a reduction of the output of thermal generating units, totalling about 500 MW, was reported due to under-voltages on the auxiliaries of the power plants
- at 8:23 am, unit 3 at the Cheviré plant (100 MW) tripped due to excessive vibration
1.3.3.2 First Stage: The 8:26 am Incident
The tripping of the 400 kV Bezaumont-Creney line, due to overcurrent protection,
resulted in the subsequent tripping of 225 kV Creney-Laneuville, Vielmoulin-
Genissiat and Mambelin-Rolampont lines by their 20 second overcurrent protections.
These line trips were followed by the tripping of hydro units of the Revin power
plants, due to the operation of the generator over-current protection. These units
deliver power to the Mazure substation; the Mazure-Achene (Belgium) line tripped
due to the operation of the maximum power relay.

Figure 1-27: System condition at 8:26 am.


The power throw-over resulting from the latter trip caused the tripping of the 400 kV tie-line Avelin-Mercator (Belgium), by operation of the distance relay at the Mercator end.
A generalized instability then spread over the French system, with deep voltage and current oscillations. However, these disturbances were not observable in the East and South-East areas, which stayed connected to the European system.
The safeguard protection systems operated according to their expected behavior; a few relays of a different type also operated on the Spanish border lines and within the French system (see Figure 1-28: condition of the network at the beginning of the instability).
Within the areas which were severed from the European network, instability
conditions persisted (in voltage and frequency). A new balanced condition, between
generation and load, could not be achieved and the operation of protective relays on
generating units (under-voltage or underfrequency) quickly led to a complete blackout

in these regions. As a result, at the end of the first stage, the power system was
divided as depicted in Figure 1-29.
On the Eastern side, most of the network was still energized, being fed by the power
plants of the Eastern area and the Alps area, and by the tie-lines. Part of the South-
Eastern area also remained energized, but at a low voltage.

Figure 1-28: System condition at the beginning of the instability.

Figure 1-29: Formation of islands.


1.3.3.3 Second Stage: First Attempt at Service Restoration
The de-energization of the Western part unloaded the 400 kV lines St Vulbas-Bayet-Marmagne-Eguzon (see Figure 1-29), thus the voltage level recovered to 410 kV.

This allowed a quick re-energization of the 400 kV system of the South-Western part,
between 8:27 and 8:42 am.
After 8:30 am, the energization of the loop of the Paris area began from the
Marmagne substation. During this time the 225 kV system of the Western part was
progressively energized. In the re-energized areas, part of the load was reconnected
(see Figure 1-30). During this time most of the required power was transmitted from the South-Eastern area over the 400 kV lines St Vulbas-Bayet-Marmagne.

Figure 1-30: Load versus time.


The Northern area was re-energized after 8:46 am by the Eastern 400 kV network (Mery-sur-Seine, Vesle and Mazures) and then resynchronized with the Belgian
system, at 9:05 am by the Mercator-Avelin line, and at 9:11 am by the Achene-
Mazures line.
At the end of this stage, the 400 kV system was almost completely re-energized and
power plant auxiliaries in the Paris and Western areas, as well as other loads, were
progressively brought back on-line.
1.3.3.4 Third Stage: Second Event at 9:08 am
As the load supplied in the Western area increased, the voltage level dropped, since the power was mostly transmitted over the 400 kV lines St Vulbas-Bayet-Marmagne. Between 9:00 and 9:07 am, the load on the lines in the Alps area increased quickly. After 9:05 am, over-current alarms occurred on the 400 kV lines in the Rhône-Alpes area. At 9:08 am, over-current tripping of 400 kV and 225 kV lines resulted in decaying voltage on the 400 kV network of the Western system, which had previously been partly re-energized.
1.3.3.5 Fourth Stage: Service Restoration After 9:08 am
Within the Paris area, service restoration occurred once enough thermal units were
resynchronized to the system. The Parisian transmission system operation was totally
restored between 11 am and noon. In the Rhône-Alpes region, service restoration
started at 9:30 am; the full load was re-supplied by 11 am.

In the South-Western region, the network reconstitution was obtained through the 400
kV ties with Spain and with the contribution of hydro power stations; the South-West
was totally supplied at about noon.
The Western regions were re-supplied from the Paris area. Consumer service started
as soon as 10:30 am and electric service went back to normal at 1:30 pm.
The Northern region was resynchronized to the 400 kV network of the Eastern region
at 9:03 am and normal conditions were quickly reached.
Service restoration of the Seine estuary area (Basses-Seine) and Normandy region
began at 10:00 am and completed by 12:30 pm.
The service restoration of the South-Eastern region was completed by 10:30 am. The
Eastern region was not affected by this incident, and it remained energized by local
thermal and hydro units, together with the German and Swiss international ties.
The service restoration of the whole French system is depicted in Figure 1-30. At
6:00 pm, 35.7 GW was supplied for a predicted demand of 37 GW. The small loss can
be explained by difficulties encountered in the resynchronization of a few thermal
units.
1.3.3.6 Summary and Comments
o The operating conditions of the network just prior to the fault were
characterized by very low voltage levels and high power transfers in the
Northern and North-Eastern areas. The power system was thus vulnerable.
o Given these conditions the successive tripping of lines induced instability in
the system.
o In some parts of the system the voltage levels, which were already low, further
decreased which resulted in the loss of generators, by operation of under-
voltage relays. Also, this prevented the safeguard protection system to work
efficiently.
o As for the service restoration, the 9:08 am failure stresses the need for tight
control of the load restoration. Even with the encountered difficulties, most of
the load was supplied by 2 pm (about 90% of the load).
1.3.4 Blackouts in the Nordic Countries
This section includes a short description of the most recent and most extensive
blackouts in the Nordic power system (note: the blackout in Southern Sweden and Eastern Denmark of 2003 was described in section 1.3.1).
1.3.4.1 Sweden 1983
This is the oldest incident included in this discussion, but still important because it
represents the largest single blackout that has occurred in the Nordic countries [28].
The blackout happened early afternoon on 27 December 1983 and affected most of
Southern Sweden south of Interface 2 and some other local areas6. Interface 2 is the

6 We have chosen to use the word "interface" as the English translation of "snitt", which is the commonly used term in Scandinavian languages to describe a set of transmission lines (or transformers) that carry the main power transfer between two areas. Thus, "Snitt 2" in Sweden is translated to "Interface 2". Other terms that are used in the English literature are Power Transmission Corridors, Transfer Paths or Flowgate.

main corridor of power transfer from mid to south of Sweden, approximately on the
61-degree latitude. The corridor includes seven 400 kV transmission lines, and prior
to the event the power flow north to south on Interface 2 was 5600 MW, which was
well within the transfer limit at that time.
Prior to the event, at 12:20, unit 1 in Oskarshamn tripped, and 490 MW of generation
in the south was lost. This caused an increase in the transfer from north to south, but
the power flow on Interface 2 was still within its allowable limits.
The blackout was initiated at 12:57 after a breakdown of a disconnector in Hamra
transformer station (one of the main stations feeding Stockholm city). The breakdown
caused tripping of all lines connected to the station, including two of the seven 400
kV lines of Interface 2. The weakening of the grid caused overload on the remaining
lines and voltage drops in the southern parts of the network. As load recovered after
the initial voltage drops, the overloading became increasingly severe. This led to
cascading outages of the transmission lines in the interface and eventually to
separation of Southern Sweden south of Interface 2. After the separation, Southern
Sweden had lost 7000 MW of import, and the consequence was a total voltage
collapse.
The power interrupted in Sweden was 11400 MW, and the energy not supplied was
estimated to be 24000 MWh. Restoration of the power supply appears to have been
fairly efficient with an average outage time of 2.1 hours. The event also caused
outages in Eastern Denmark. Three main power plants tripped due to low voltages
before all the cables to Sweden tripped as a result of power oscillations.
Approximately 520 MW of consumption was lost, partly disconnected by the
automatic underfrequency load shedding and partly by manual disconnection. The
total energy not supplied in Eastern Denmark was estimated to be 765 MWh.
Using the classification proposed in [29], this is a critical event. It is, however,
difficult to assess the probability of an event like this. The blackout was basically
initiated by a single failure, the breakdown of a disconnector, but the series of events
(from the discovery of the problem, to the start-up of initial repair work and finally to
the disconnection of the entire station) have a much lower probability than an N-1
event. Taking into account the high focus on reliability in design of transformer
stations and switchgear since this event, it is judged to be an infrequent event
(assuming a probability of occurrence once every 20-30 years). It is interesting to
note, however, that the cause of the 2003 blackout in this same region (described in
section 1.3) and series of events have many things in common with this 1983 blackout
described here.
1.3.4.2 Helsinki 2003
This incident happened in the afternoon of Saturday 23 August. The initiating event
was a short circuit caused by a mistake during the connection of a generator that
had been out for maintenance. The primary protection failed to correctly isolate this
fault, and as a result several lines tripped, among them two of the main lines feeding
the larger Helsinki and Vantaa area.
Around 800,000 people were affected and approximately 500 MW of power was
interrupted. The main transmission grid was reconnected within 15 minutes and all
consumers had power restored within one hour.
All circumstances taken into account, this is classified as a minor event. The blackout
was a result of a short circuit fault (caused by human error) and a protection failure.
This is an N-2 event; the probability of occurrence is assumed to be once every 8-10
years.
1.3.4.3 Western Norway 2004
A large part of Western Norway, including Bergen and most of Hordaland, is
connected to the rest of the power system through one corridor in the south, the
Sauda interface, and in the north by one 300 kV line from Fardal towards Evanger.
The Sauda interface includes the two 300 kV lines Hylen-Sauda and Nesflaten-Sauda.
At 13:59 on Friday 13 February, the 300 kV Nesflaten-Sauda line parted at a line joint.
The break was at first sensed by the distance protection as a high impedance fault,
and therefore the breakers did not disconnect the line immediately. The fault was also
seen by the distance protection on the other 300 kV line Hylen-Sauda. As a
consequence when the fault current increased, both lines in the corridor were tripped
after the time delay of the relays. The remaining connection was now in the north,
where the 300 kV line Modal-Evanger experienced a 50% overload. With decreasing
voltages and increasing currents, this line also tripped and the whole area (Bergen,
larger part of Hordaland and northern parts of Rogaland) collapsed.
A little less than 500,000 people were affected and approximately 2400 MW of power
was interrupted. The energy not supplied was estimated at nearly 1200 MWh, which
means the average duration of the outage was 0.5 hour. Almost all consumers had
power restored within one hour.
According to the classification in [29], this was a moderate event. The event was
initiated by one fault, but the blackout was a result of the tripping of two lines. It is
therefore debatable whether this is an N-1 or an N-2 event. In any case, this is not a
contingency that is included as a single failure in the present operating strategy. This
suggests that there is a moderate probability of this event recurring. Since this is an
area with more frequent bottlenecks than in the Helsinki area, the probability of
occurrence has been assumed to be once every 5 years.
1.3.5 Blackouts in Greece
1.3.5.1 Blackout of July 12, 2004
Since the mid-1990s the yearly load peak of the Hellenic Interconnected System
occurs in the summer due mainly to the increasing use of air conditioning. Since then
the Hellenic system (especially during summer) is prone to voltage instability. This
phenomenon is related to the power transfer from the generating areas in the North
and West of Greece to the main load center in the Athens metropolitan area reaching
its maximum. The problem is due to the total electrical distance between generation
and load consisting of generator step-up transformers, 400 kV and 150 kV
transmission lines, 400/150 kV autotransformers and bulk power delivery
transformers (usually 150/20 kV).
The first voltage stability incident occurred in 1996 and was felt in all areas South of
Athens, with the weakest point being the Peloponnese peninsula. After a number of
system reinforcements, including the installation of new generation in the greater Athens
area, a third North-South 400 kV double circuit and two new 150 kV cables connecting
the Peloponnese to Western Greece, the loadability has increased considerably, while at
the same time the weakest part of the system has become the area of Central Greece to
the North of the metropolitan Athens area.
In view of the increasing yearly peak, as well as the deteriorating power factor of

loads, a number of further system upgrades was planned for the period 2003-2004, in
conjunction with the 2004 Athens Olympics. This included new 400/150 kV
autotransformers, new capacitor banks in medium (MV) and high voltage (HV) buses
and new 150 kV lines. Many of these planned upgrades were integrated in the system
only after the yearly peak, which in 2004 occurred on July 12 [30]. This was a major
factor leading to the blackout, which is discussed below.
Sequence of Events
At 7:08 on July 12, unit 2 (rated 300 MW) of the Lavrio power station in the Athens
area was lost due to a failure in the auxiliaries' UPS. The failure was repaired, but due to
further problems occurring during startup the unit was not synchronized until 12:01. By
that point the load had reached 9160 MW and voltages in the Athens area were
constantly dropping, reaching 90% of the nominal value. The voltage decline
stopped as soon as Lavrio-2 synchronized and started generating.
However, at 12:12 unit Lavrio-2, still in the process of reaching its technical
minimum and consequently on manual control, was lost again due to a high drum level.
This brought the system to an emergency state.
At 12:25 a load shedding of 100 MW was requested by the Hellenic Transmission
System Operator (HTSO) Control Center. At 12:30 a disconnection of 80 MW was
achieved manually. This, however, was not enough to stop the voltage decline, so a
further shedding action of 200 MW was requested at 12:35. At that time load had
reached 9320 MW.
There was no time for the second load shedding command to be executed. At 12:37
Unit 3 of Aliveri power station serving the weak area of Central Greece tripped
automatically. It is unclear which event initiated this critical tripping. At 12:38 the
remaining unit in Aliveri was manually tripped. After that, voltages were collapsing
and the system was split in two at 12:39.
The splitting of the system (shown in Figure 1-31) was initiated by the opening of the
North-South 400 kV lines, due to the switch-onto-fault function of the protection
relays. After the split, all the remaining generation in the areas of Athens and the
Peloponnese was disconnected by undervoltage protection, leading to the blackout.
The split of the system saved the North and Western parts of the Hellenic system,
which remained interconnected, even though the resulting surplus of power created a
severe disturbance in the neighboring systems of the 2nd UCTE synchronous zone.
The total excess generation was about 2000 MW, changing the flow at the northern
interconnections from an import of 900 MW to an export of 1100 MW. One of the two
interconnecting lines with the external systems to the North was overloaded and
tripped and the remaining line then carried the total flow of 1100 MW. The frequency
of the then Zone II of UCTE increased to 50.75 Hz. The dc cable to Italy continued
importing according to its program (250 MW).
The restoration of the south system started at 12:45. With the generating units of the
North and West in operation, the restoration was relatively fast. Operations proceeded
simultaneously in two directions: from the west towards the lignite plants in the
Peloponnese, and from the center down to the power plants in the area of Athens. In
half an hour (13:15) all the main substations of the south were re-energized. In one
hour and 15 min (14:00) 1900 MW had been supplied.

Figure 1-31: North-South split of July 12.
The supply to the rest of the load was restricted, again due to voltage stability
considerations, since the units involved in the blackout continued to be out of
operation and a full load restoration could initiate another collapse. The Aliveri units
were synchronized at 15:05 and Lavrio 2 unit at 22:42. All the consumers were fully
supplied by 17:30.
Initial Post Mortem Analysis
This first post-mortem analysis was based on the State Estimator (SE) solution of the
HTSO Energy Management System, as obtained through the data transfer utilities of
the on-line Voltage Security Assessment (VSA) application, which was not in operation
during the summer of 2004 and has since been re-activated [31].
The first post-mortem analysis was based on the SE solution of 12:35 that was later
found to be less accurate than previous saved snapshots. The system was simulated by
enforcing the overexcitation limits on all generators and then applying a slow
proportional ramp on the load demand. All loads are assumed connected on the
secondary of either actual or assumed LTC transformers regulating the secondary
voltage. Behind the transformers load is assumed voltage sensitive with exponents of
1.5 for real and 2.0 for reactive load.
The simulation results are presented in the form of PV curves showing the voltage of
a critical 150 kV bus as a function of total system load consumption. The tip of these
curves corresponds roughly to the maximum load that can be consumed by the
system. Load demand above this point initiates a voltage collapse.
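To make the load model used in these simulations concrete, the following minimal Python sketch (not the actual VSA/EMS code) evaluates the exponential load characteristic described above; the demand values in the usage example are purely illustrative.

def load_power(v_pu, p0_mw, q0_mvar, alpha=1.5, beta=2.0, v0_pu=1.0):
    """Exponential load model: demand behind the LTC as a function of the
    secondary voltage v_pu. p0_mw and q0_mvar are the demands at the
    reference voltage v0_pu; alpha and beta are the voltage exponents
    (1.5 for real and 2.0 for reactive load, as stated above)."""
    p = p0_mw * (v_pu / v0_pu) ** alpha
    q = q0_mvar * (v_pu / v0_pu) ** beta
    return p, q

# Illustration only: a 10% voltage sag temporarily reduces the real demand
# by about 15% and the reactive demand by about 19%, until the LTC raises
# the secondary voltage and the demand is restored.
print(load_power(0.90, p0_mw=100.0, q0_mvar=40.0))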

The PV curve for the base case scenario is shown in Figure 1-32. The initial load
reduction is due to the activation of generator overexcitation limiters. As seen the
system is barely able to withstand the enforcement of limits and can accept a very
small load increase. The system is thus in a critical condition.
[Figure 1-32 is a PV curve from the long-term simulation: the voltage (p.u.) of the critical 150 kV bus 710 plotted against the area load (MW) over the range of roughly 9050-9400 MW.]
Figure 1-32: Base case July 12, 12:35.


At this critical point the loss of Aliveri-3 was fatal. This is shown in Figure 1-33,
where the loss of this unit is simulated. As seen, load consumption cannot be restored
at the pre-disturbance level and as a result the system collapses due to voltage
instability. The subsequent disconnection of Aliveri-4 and the North-South split of the
system simply accelerated the collapse process.
[Figure 1-33 is the corresponding PV curve (voltage of bus 710 in p.u. versus area load in MW) for the simulated loss of Aliveri-3.]
Figure 1-33: Loss of Aliveri-3 (12:35, July 12).


The first post-mortem analysis was concluded in 10 days and demonstrated that the
system, with the addition of the upgrades that were put in place between July 12 and
22, could withstand without problems all the disturbances of July 12. This gave
sufficient reassurance that the system was secure to supply the Athens area during the
August 2004 Olympic games.
System Reinforcements since the Summer of 2004
The reinforcements made to the system after the blackout include: a new 150 kV
substation and a few new 150 kV lines, restoration of the 150 kV cables that were not in
service on July 12, new capacitor banks at 20 kV and 150 kV substations in Athens
and Central Greece, new distribution substations in the wider Athens area,
commissioning of a new private gas-turbine power plant just to the North of Athens,

which by special agreement is called upon by HTSO to operate in case of insufficient
generation in the southern part of the system.
Concerning emergency controls, based upon the experience gained during the
blackout, HTSO and the Public Power Corporation SA (PPC), which owns and
operates the distribution and sub-transmission network in the Athens area, decided to
better coordinate the process of load shedding in case of similar events in the future.
In particular, load shedding will be automated, so that it is applied directly from the
control center of HTSO, thus avoiding the delays associated with manual load
shedding.
In parallel, it was decided that the Voltage Security Assessment (VSA) system [31]
previously developed and tested should be applied on-line in the control Center of
HTSO, so that a constant monitoring of voltage security is available. The on-line VSA
is now in operation and provides security margin information and loadability limits
periodically, or upon request.
Data Validation and Adjustment
A more detailed analysis of the July 2004 blackout was conducted at a later date, so as
to validate the available generator and system data and allow the VSA application to
estimate more accurately the loadability of the actual system.
Generator aspects were investigated closely including reactance and saturation data,
starting from the generator short-circuit and open-circuit characteristics. After that,
the maximum allowable continuous excitation current was adjusted to a new value
based on the rated generation. This led to a better approximation of the generator
reactive production during system simulation.
Armature current limitation was also considered for the generators in the Athens and
Central Greece areas. In general, the rotor current limit is more restrictive than the
armature current limit at loading levels below the rated active power. However, the
continuous decrease in the generator terminal voltages during the incident, especially
at units located in the most affected areas, led to excessive armature currents, making
the armature current limit more restrictive than the field limit. In the Hellenic System
there is no automatic limiter to adjust the stator current, thus the model adopted for
the simulation tried to duplicate the suggested generator operator practice, in order to
avoid stator overheating and the loss of the unit.
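The interplay between the two limits can be pictured with a short Python sketch. It is a simplified illustration, not the model used in the study; the per-unit operating point and the continuous stator current limit are assumed values chosen only to show how a voltage drop can make the armature limit bind first.

import math

def armature_current(p_pu, q_pu, v_pu):
    """Per-unit stator (armature) current for a given active/reactive
    output and terminal voltage: Ia = sqrt(P^2 + Q^2) / V."""
    return math.hypot(p_pu, q_pu) / v_pu

# Assumed operating point below rated active power, and an assumed
# continuous armature current limit of 1.0 p.u. (illustrative values only).
p, q, i_limit = 0.85, 0.45, 1.0

for v in (1.00, 0.95, 0.90):
    i_a = armature_current(p, q, v)
    flag = "within limit" if i_a <= i_limit else "armature limit exceeded"
    print(f"V = {v:.2f} p.u. -> Ia = {i_a:.3f} p.u. ({flag})")

At nominal voltage the stator current stays below the assumed limit, while a 5-10% drop in terminal voltage pushes it past the continuous rating, which is the behavior described for the most affected units.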
A simulation of the system response with the corrected data was carried out. The new,
adjusted simulation starts from the 11:30 snapshot of July 12, which together with that
of 12:00 proved more reliable and consistent. A ramp was imposed on load demand
with a rate that matched the total load consumption at 12:00 with that of the
corresponding EMS snapshot. Negative reactive loads (due to capacitors included in
the measured flows) were removed from the ramp increase. All generators were
assumed to control voltage up to the point of reaching their rotor current limit. Both
rotor and stator current limits were imposed in the simulation.

Figure 1-34: Measured voltage and simulated with constant reactive generation.
In Figure 1-34 the simulated (solid line) and measured (dots) voltages of an HV bus
are shown. It is evident that the simulation and the measurement agree up to the state
estimator error for the whole period prior to the blackout. One can see that the
simulated voltage response is similar to the one exhibited by the actual system and the
general trend is followed.
1.3.5.2 The Crete Power System Blackout of 25 October, 2001
The power system of Crete is the largest isolated (island) power system in Greece
(Figure 1-35) with the highest rate of increase nation-wide in energy and power
demand (Figure 1-36). Until 1988 the annual peak load occurred in winter; since then
it has always occurred on summer evenings. One characteristic of the load profile is the
large difference between peak and off-peak hours. In 2001 the conventional
generation system consisted of two thermal power plants, one in Linoperamata and
one in Hania, located near the major load points of the island. There were 18 thermal,
oil-fired generating units with a total installed capacity of 515 MW, as follows:
o Six steam turbines in Linoperamata totaling 110 MW
o Four diesel engines in Linoperamata totaling 50 MW
o Two gas turbines in Linoperamata totaling 30 MW and five gas turbines in Chania totaling 190 MW
o One combined cycle plant in Chania of 135 MW (2 gas turbines and 1 steam turbine)
The base load is mainly supplied by the steam and diesel units. The gas turbines
normally supply the peak and have high operating costs. The transmission network
consists of 150 kV lines and two 66 kV lines. The distribution network consists of 20
kV and 15 kV lines [32]. The generation system and the transmission network are
supervised by a control center located in one of the substations in Iraklio.

Figure 1-35: One-line diagram of the Crete system.
[Figure 1-36 plots, for the years 1964-1998, the annual energy production of the Crete system (GWh, left axis, rising to about 2500 GWh) and the peak load (MW, right axis, rising to about 450 MW).]
Figure 1-36: Increase of peak load and annual energy.


Eleven wind farms, with a total capacity of 70 MW, have been installed in Crete since
1995, and there are plans for more wind generation in the coming years. The hourly wind
penetration in 2001 reached 41.2%. All the wind farms, with minor exceptions
(Moulia), have been installed in the eastern part of the island (Sitia), which presents the
most favorable wind conditions. As a result, in case of faults on some lines, the
majority of the wind farms might be disconnected.
Furthermore, the protections of the wind farms might be activated in case of
frequency or voltage fluctuations, further decreasing the dynamic stability of the
system. Extensive transient analysis studies have therefore been conducted in order to
assess the dynamic behavior of the system under various disturbances and with
different combinations of the generating units.

Description of the October 25, 2001 Blackout
The collapse of the Crete power system was due to several serious disturbances on the
system, which led to an inability to supply the load.
The total load of the island was rising from a minimum of 155 MW in the morning
(5:05 am) towards the maximum of the day, 300 MW, at 12:00 pm. At the time of the
first disturbance, the total load was 233 MW (at 8:07:44 am), i.e. 49.5% of the
maximum load of the year (471.4 MW). This load was fed by 189 MW of thermal
units and 44 MW of wind generation. The first unit that tripped was one of the two
gas turbines (GTs) of the combined cycle plant. The combined cycle plant was at 50%
of its capacity, with one of its GTs producing 40 MW. At 8:07:45 am, during startup
of the second GT, the first GT tripped due to low pressure of the supplied diesel oil.
Following this, the steam turbine producing 18 MW also tripped at 8:08:11 am due to
loss of the GT.
The loss of these two units reduced production by 58 MW in 26 seconds. As a result,
underfrequency protections rejected initially about 25 MW of load followed by a
further rejection of 33 MW. The frequency reached 48.7 Hz after the loss of the steam
turbine, and then rose to the value of 49.33 Hz.
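The staged action of the underfrequency load-shedding relays can be pictured with the following minimal Python sketch. The frequency thresholds and block sizes are illustrative assumptions, chosen only to roughly reproduce the 25 MW and 33 MW rejections mentioned above; they are not the actual Crete relay settings.

UFLS_STAGES = [        # (frequency threshold in Hz, load block in MW) - assumed values
    (49.0, 25.0),
    (48.8, 33.0),
    (48.6, 30.0),
]

def stages_tripping(frequency_hz, already_tripped):
    """Return the stages (not yet tripped) whose threshold has been reached."""
    return [stage for stage in UFLS_STAGES
            if frequency_hz <= stage[0] and stage not in already_tripped]

tripped = []
for f in (49.3, 48.9, 48.7):          # sample frequency trajectory, illustrative
    new = stages_tripping(f, tripped)
    tripped.extend(new)
    shed_now = sum(mw for _, mw in new)
    total = sum(mw for _, mw in tripped)
    print(f"f = {f:.1f} Hz -> shed {shed_now:.0f} MW (total {total:.0f} MW)")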
At 08:08:30 am the second GT of the combined cycle, with a production of 31 MW,
was tripped, followed by the outage of the wind farm production (44 MW), probably due
to voltage variations.
VOLTAGE & FREQUENCY LIMITS IN WIND FARMS (voltage deviation in %, frequency in Hz, time delay in seconds)

Limit            WP1             WP2            WP3              WP4            WP5
Overvoltage      >8.5 / 0.5 s    >10 / 0.25 s   >27.3 / 0 s      >10 / 0.2 s    >12.5 / 0.1 s
Undervoltage     <-20.5 / 0.5 s  <-10 / 0.25 s  <-36.36 / 0.3 s  <-10 / 0.2 s   <-20 / 0.1 s
Overfrequency    >51 / 2 s       >51 / 3 s      >56 / 0 s        >51 / 1.5 s    >51 / 0.1 s
Underfrequency   <47 / 2 s       <48 / 3 s      <46 / 0 s        <49 / 0.2 s    <47 / 0.2 s

After the above incidents the available production was only 100 MW compared to a load
of 130 MW. This situation provoked the collapse of the system, because of
overloading of the Linoperamata Station units. The evolution of the system frequency
is shown in Figure 1-37.

[Figure 1-37 plots the system frequency (Hz, between 48 and 50.4 Hz) against time (0-55 seconds).]
Figure 1-37: System Frequency.


Conclusions
The main reason for the system collapse was the drop of frequency at 45 seconds.
Although two major units had been lost up to 45 seconds, the activation of load
shedding kept the system stable. After the drop of frequency at 45 seconds, load
shedding was not effective in keeping the frequency within acceptable limits and the
system collapsed. From the SCADA records it appears that this drop was due to the loss
of wind power. Simulations [33] have shown that if the wind farms had not been rejected
at that time the system could have survived the previous disturbances. This conclusion
demonstrates the advantages of installing wind turbines with good tolerance of voltage
and frequency variations, as currently imposed by several European regulations, and of
providing on-line dynamic security assessment functions [34, 35].
1.3.6 Blackout in the Swiss Railway Electricity Supply System: 22 June 2005
1.3.6.1 Introduction
The electricity supply of the Swiss railway system (SBB) is a 16.7 Hz single-phase ac
system which comprises eight power plants, five frequency converters constituting
connections with the Swiss 50 Hz system, two interconnections with the electricity
supply of the German railway system (DB) (also 16.7 Hz), 63 substations and 1800
km of single phase transmission lines, see Figure 1-38. The train operation in
Switzerland is almost 100% electrified.
Most transmission lines are 132 kV, but a few 66 kV lines exist. The final power
supply to the individual locomotives is through the overhead traction lines at 15 kV.
When the railway was electrified around 100 years ago a rather low frequency had to
be used because of commutation difficulties experienced when trying to operate the
traction motors at higher frequencies. It was decided to use 16.7 Hz, i.e. 16 2/3 Hz,
which could be supplied both by designated power stations, operating at 16.7 Hz, and
through frequency converters from the 50 Hz system. Originally these frequency
converters were mechanical, 16 2/3 = 50/3, but nowadays they are mostly static, i.e.
power electronics based.

Figure 1-38: Map of the electricity supply system of the Swiss railway. (Source: SBB)
The SBB electricity system suffered a major blackout in the late afternoon on 22 June
2005 which caused a complete standstill in the train traffic in the SBB system. This
disturbance occurred during rush-hour and had a significant impact on the societal
activities. Approximately 2000 passenger trains with about 200 000 passengers were
brought to standstill by the power outage. The restoration of the system started after
90 minutes and the train traffic gradually resumed. About four hours after the
blackout the passenger train traffic was back to a normal level, and the freight trains
were back to normal operation during the next day. Here an overview of the blackout
will be provided. A detailed description of the blackout can be found in [36].
1.3.6.2 Description of the Blackout
Initial conditions
On the 22 June 2005 three generators and one frequency converter were not available
due to retrofits and maintenance work. All other generation units were available. All
transmission lines were in service except the line Amsteg/Wassen-Steinen, due to
construction work during the afternoon, and Bussigny-Geneva-Rigot due to a faulty
cable. This latter line outage had no influence on the blackout described here.

Figure 1-39: The network conditions in the Amsteg/Wassen-Steinen area in the afternoon of 22 June
2005. (Baustelle is German for construction site.) (Source: SBB)
The particular situation in the canton of Uri, i.e. where the line Amsteg/Wassen-
Steinen is located, was that in the afternoon construction work was to take place
directly under the transmission line. For safety reasons the line had to be taken
out of operation, see Figure 1-39. As a consequence the Gotthard area was only
connected to the central part of Switzerland with one transmission line, which
according to the documentation available in the control center had a capacity of 240
MW.
In the daily operation briefing, it was noted that the situation could be critical between
13:00 and 18:00 in the afternoon, but it was deemed that the situation should be
manageable. The following reasoning led to this conclusion: the maximum planned
transfer from Gotthard to the central part of Switzerland was about 170 MW, which
was well below the capacity of the line, i.e. 240 MW. Furthermore, it was agreed with
the construction company that if necessary they could stop their activities within 20
minutes notice so that the disconnected line could be put in operation again.
Course of Events
At 17:08 the line Amsteg-Rotkreuz was disconnected due to overload and the SBB
system was split into two parts. The frequency in the southern part increased and it
decreased in the northern part. The power surplus in the southern part was about 200
MW, which lead to an overfrequency. Due to this overfrequency most of the
generators were tripped within eight seconds and the southern system was without
power supply and there was a complete standstill in the train traffic. The power supply
in the northern system could be maintained through an increased import from
Germany (DB) and an increased generation in the power stations Chtelard, Vernayaz
and Etzel. These power stations were tripped due to overload at 17:35 and the
operation in the northern part was unstable. The connections to Germany were opened
and the train traffic came also here to a complete standstill. The restoration of the
system began immediately. At 21:15 the transmission system was completely
energized and the train traffic could be resumed.
Causes of the Blackout
One main reason for the blackout was that the data for the line Amsteg-Rotkreuz had
not been adequately updated in the documentation used in the control center. A
disconnector, with maximum capacity of 211 MW, was the weakest link of the

transmission line, and this piece of information had not been put in the data bank used
by the control center.
The planned maximum power level on the line Amsteg-Rotkreuz was 171 MW in
the first 15 minutes after 17:00. In the calculation of this power it was assumed that
power consumed by trains in the southern area would be 30 MW. It is claimed by
SBB that in the planning of the operation it was deemed that the system was (N-1)
secure.
The triggering event of the blackout was that shortly after 17:00 the train load,
assumed to be 30 MW at the planning stage, was actually negative, -20 MW, i.e. the
trains fed power into the system (see Figure 1-40).

Figure 1-40: The train load in the southern system. Top curve (red): The maximum load within 15
minutes. Middle curve (black): The average load (15 minute period). Bottom curve (blue): Minimum
load within 15 minutes. (Source SBB)
The negative load comes from the fact that many trains were simultaneously driving
downhill. To increase energy efficiency, modern electric locomotives have the ability
to recuperate the energy, which means that they will act as generators and feed power
into the grid when going downhill or decreasing speed.
As seen from Figure 1-40, the net load was about 50 MW lower than planned shortly
after 17:00, and the loading of the line Amsteg-Rotkreuz increased by 50 MW to about
220 MW. This was still below the documented maximum limit of 240 MW, but above the
actual limit of 211 MW. Protections tripped the line at 17:08.
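The loading arithmetic behind the trip can be summarized in a short Python sketch. The figures are taken from the description above; the simple balance is only an illustration of why the corridor flow ended up between the actual and the documented limit.

# All figures below are quoted from the text; the balance is illustrative only.
planned_flow_mw  = 171.0   # planned transfer from the Gotthard area
planned_train_mw = 30.0    # train load assumed in the southern area at the planning stage
actual_train_mw  = -20.0   # trains regenerating while running downhill

documented_limit_mw = 240.0  # capacity in the control-centre documentation
actual_limit_mw     = 211.0  # real limit imposed by the disconnector

# A lower (here negative) southern train load means the surplus has to leave
# the area over the corridor, raising the flow by the load shortfall.
actual_flow_mw = planned_flow_mw + (planned_train_mw - actual_train_mw)

print(f"corridor flow roughly {actual_flow_mw:.0f} MW")
print("within documented 240 MW limit:", actual_flow_mw <= documented_limit_mw)
print("within actual 211 MW limit:    ", actual_flow_mw <= actual_limit_mw)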
After the separation of the system a huge number of alarms were displayed to the
operators in the control center. These were not filtered and the operators were neither
fully aware of the situation in the system nor of what actions should be taken to save
the system. Even though the interconnections to Germany were overloaded at this
stage, power continued to be exported to the 50 Hz system via frequency converters.
(The power direction at this time of the year is often from the 16.7 Hz system to the
50 Hz system because of the hydrological conditions, i.e. the snow melting in the
Alps.) An obvious action to save the system would have been to reverse the power
direction to relieve the interconnections with Germany. This was not done. When

the interconnections were disconnected, the northern part of the system collapsed
leading to the blackout.
1.3.6.3 Determined Actions
A number of remedial actions have been identified and have been or are planned to be
implemented. The most important ones are:
o New routines for the management of data bases.
o Implementation of a better alarm filtering system (short-term fix).
o Specification and procurement of a new control system for the central control center (to be installed at the latest in 2010).
o Increase of the spinning reserve in the system by 50 MW.
o Introduction of a risk management system for the operation planning and actual operation.
o Training of operators, particularly concerning critical situations.
1.3.7 August 2003 Blackout in the United Kingdom
On 28th August 2003, an unfortunate sequence of network events in the National Grid
Transco (NGT) power transmission system led to a power supply interruption in South
London from 18:20 until 18:57. About 410,000 homes were completely cut off from
the supply. London Underground Ltd. (LUL) and National Rail were the main bulk
customers affected, leaving 3.5 million daily commuters stranded for hours.
1.3.7.1 Overview of London Power System
The NGT is the sole licensee to operate the high voltage electricity transmission in
England and Wales. The NGT system normally operates at 275 kV and 400 kV levels.
The NGT network connects about 72 GW of installed capacity in England and Wales
(E&W). The winter peak demand in E&W is about 55 GW, of which approximately 11
GW accounts for the peak demand in the greater London area. The summer peak in the
greater London area is about 7.5 GW [37]. Figure 1-41 shows the transmission system
around London.

Figure 1-41: Transmission system of London.

1.3.7.2 Power Supply Arrangement of South London
The NGT feeds power to the South London network through four substations at Little
Brook, Hurst, New Cross and Wimbledon, shown in Figure 1-42. Normal area
demands of around 1,100 MW are drawn by EDF Energy to supply domestic
customers and London Underground, together with supplies for other large users
including Network Rail.

Figure 1-42: Transmission system of South London.


1.3.7.3 Operating Arrangements on That Day and the Effect on Supply of Demand
In the evening of Thursday 28th August 2003, the transmission system shown in
Figure 1-42 was secure and operating in accordance with the relevant National Grid
planning standards and operating procedures. The substations were all securely
configured and supplying normal area demand of around 1,100 MW. Usually the
power flows from the Little Brook end towards Wimbledon.
The Security and Quality of Supply Standard (SQSS) adopted by the NGT in planning
and operation of the transmission systems states that "Typically, the main system
must be able to withstand the unplanned loss of a double circuit (two overhead lines
on the same transmission towers), although smaller demand groups are permitted to
be dependent on a single circuit when circuit outages are required" [38]. The
Wimbledon, New Cross and Hurst substations, which were to be affected by the
incident, were connected to the rest of the transmission system by two circuits,
ensuring that a single transmission fault would not result in a loss of supply.
The transmission system in the area was arranged with a number of circuits out of
service for scheduled maintenance.
A simplified power flow diagram of the transmission network in the affected zone is
depicted in Figure 1-43, where circuit #1 between WIMB and NEWX and
circuit #2 between HURS and LITT are shown disconnected for scheduled
maintenance. It can be seen from Figure 1-43 that circuit #2 between WIMB and
NEWX and circuit #1 between HURS and LITT were loaded to only 44% and
37% of the available capacity, respectively.

Figure1-43: Schematic of South London substations power flow in normal operating


condition.

1 - 56
The blackout was initiated when the transmission circuit #1 from LITT to HURS was
disconnected to isolate a transformer. Hence, there was no supply from LITT to
HURS and all the power supplied to EDF Energy had to come through circuit #2 from
WIMB to NEWX; the resulting flow was still below the capacity of that circuit.
1.3.7.4 Sequence of Events During the Incident
The sequence of events commenced on Thursday 28 August at 18:11 when an alarm
was received at the Electricity National Control Centre (National Control) at
Wokingham. The system configuration at the start of the incident and the power flows
are illustrated below.

Figure 1-44: Configuration of power system showing power flow [38].


At 18:11 staff at the National Control received the Buchholz alarm from a
transformer at HURS as an indication of gas accumulation within the oil inside the
transformer. At 18:17 discussions were initiated with EDF Energy for an alternative
transmission route to meet demand and take the faulty transformer out of service.
During the switching time (typically 5-10 minutes) one circuit from WIMB would
supply NEWX and HURS.
EDF Energy had disconnected a circuit breaker downstream, and the load in that
section was automatically transferred to the bus section fed from the other section of
HURS, which was fed from NEWX via two other transformers. At this point, HURS and
NEWX were supplied from the WIMB 275 kV substation and were dependent on the
single circuit #2 from WIMB to NEWX.

Figure 1-45: Configuration and power flow of the system at 18:20 [38].
Immediately following the opening of the HURS-LITT section, the automatic
protection relay operated on circuit #2 from WIMB to NEWX. This
isolated NEWX, HURS and part of WIMB from the rest of the system, resulting in a
complete loss of supply at HURS and NEWX and a partial loss of supply from WIMB
to EDF Energy at WIMB itself.
The transmission network under this situation is shown in Figure 1-46.

Figure 1-46: Transmission system and power flow immediately after the incident [38].
National Control concluded that the automatic protection equipment in circuit #2 from
WIMB to NEWX had most likely operated incorrectly. At 18:21 NGT Control and
EDF Energy discussed the loss of supply and the substations affected. At 18:22
standby engineers were called out to the affected substations to investigate and help
restore supplies.
At 18:25 the network was reconfigured to isolate circuit #2 from WIMB to NEWX.
By 18:30, further reconfiguration of the network had taken place and the first sections
of HURS and NEWX had been re-energized. By 18:40 further reconfiguration of the
network had taken place energizing further sections at NEWX and HURS.
At 18:57 the restoration of service was complete. The original Buchholz alarm
was attributable to the shunt reactor at HURS, which was isolated, and the transformer
was made available to EDF Energy Control at 19:10. The network was configured as
shown below.

Figure 1-47: Transmission system and power flow after restoration [38].
London Underground and National Rail, however, could not resume operation
immediately after restoration, as time was needed to evacuate commuters from the
tunnels and tracks before re-energizing the tracks.
1.3.7.5 Root Cause Based on Subsequent Analysis
Power flow analysis, in postmortem simulations, confirmed that the automatic action
opening the WIMB-NEWX line was due to an improper relay setting. Figure 1-48
shows a simulation of the scenario. The worst possible loading condition has been
considered: the loads at NEWX and HURS are increased to the full capacity of the
transformers. In this case, circuit #2 meets a demand of 1374 MW and is at
169% of its rating. This, however, does not exceed the circuit breaker and relay
current setting limits (200% is assumed as the minimum fault current in the line). Although
such a demand would never occur in practice, the system could still supply it for the
short term, with the protection system not acting to isolate the WIMB-NEWX line, while
operator action relieved the overload.

Figure 1-48: Schematic of South London Transmission system with 1440 MW load at
NEWX and HURS.
Investigation into the relay settings identified that the overcurrent back-up protection
for the line WIMB-NEWX operated incorrectly. The load current flowing in
the line was 1460 A, while the summer current rating of the line was 4450 A, hence the
loading was far below the line rating. The current setting multiplier (CSM) was chosen to be 0.85
from the range of 0.05 to 2.4 in steps of 0.05. For a 5 A relay the required operating
current on the secondary is therefore 4.25 A. The current transformer used was 1200/1. Thus,
4.25 A at the relay corresponds to 1200*4.25 = 5100 A on the primary. The normal
operating current due to the reconfiguration (i.e. all of the combined demand of 558 MW at
NEWX and HURS was supplied from WIMB through the WIMB-NEWX circuit #2)
was 1460 A, well below this pickup. If instead a 1 A relay is used, the operating current is 0.85 A on
the secondary side, which corresponds to 1200*0.85 = 1020 A, less than the 1460 A
load current. This is exactly what had happened: a 1 A relay had been connected in place
of a 5 A relay. This relay was commissioned in June 2001. This is a hidden failure and
was exposed by the power flow analysis and further investigation. As
described earlier, the usual direction of power flow in the greater London area is from east
to west; on this day the flow was in the reverse direction, which might explain why this
hidden error had not been found earlier.
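The relay arithmetic above can be reproduced with a short Python sketch. It simply restates the calculation in the preceding paragraph (CT ratio, current setting multiplier and relay nominal current); it is an illustration, not the actual relay logic.

CT_RATIO = 1200            # 1200/1 current transformer
CSM = 0.85                 # current setting multiplier applied to the relay
LOAD_CURRENT_A = 1460.0    # current flowing after the network reconfiguration

def primary_pickup_amps(relay_nominal_a, csm=CSM, ct_ratio=CT_RATIO):
    """Overcurrent pickup referred to the primary side of the CT."""
    return csm * relay_nominal_a * ct_ratio

for nominal in (5.0, 1.0):
    pickup = primary_pickup_amps(nominal)
    operates = LOAD_CURRENT_A > pickup
    print(f"{nominal:.0f} A relay: pickup = {pickup:.0f} A primary; "
          f"operates on the 1460 A load current: {operates}")

With the intended 5 A relay the pickup is 5100 A, far above the load current, while the 1 A relay that was actually fitted picks up at only 1020 A and therefore tripped the healthy line.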
1.3.7.6 Conclusion
The failure in the electricity supply to South London and the associated causes have been
studied. The power system network of London has been modeled to analyze the
sequence of events that occurred on that day. It was confirmed through simulation that
the incident was due to the operation of an incorrectly set relay. The transmission
network had enough capacity to cater for the outage of one circuit. National Grid's
planning and operation of the transmission system was in accordance with the
Security and Quality of Supply Standard (SQSS), which was followed on that day.
1.3.8 UCTE Major Disturbance of 4 November, 2006 [39]
1.3.8.1 System Conditions Prior to the Event
The event occurred on a Saturday evening. Due to the lower power demand on
weekends, many transmission lines were out of service for scheduled maintenance.
These included four 380 kV lines in Germany (near the region where the event
initiated) and several other 380 kV lines and a 220 kV line [39]. In addition, one of the
substations in the area, Borken 380 kV (in E.ON Netz), had two of its buses split into
two parts for construction work; this made it topologically impossible for power to
flow from east to west in this area.
Figure 1-49 shows the estimated generation (G), the sum of power exchanges on lines
that tripped during the incident (blue arrows) and flows on dc cables (orange arrows).

[Figure 1-49 shows, at 22:09, the estimated generation in each of the three areas later separated (about 182,700 MW in the western area, 62,300 MW in the north-eastern area and 29,100 MW in the south-eastern area), together with the ac exchanges on the lines that later tripped and the flows on the dc links (UK-France, Kontek, Baltic Cable, SwePol, the Danish dc cables, Italy-Greece and Spain-Morocco).]
Figure 1-49: System condition prior to event. Generation (G), ac power flow (blue arrows) and dc link flows (orange arrows). (Diagram reproduced with permission by UCTE from [39]. © UCTE 2006.)
On 18th September 2006, the shipyard (Meyerwerft) requested from the German
utility E.ON Netz that the double circuit 380 kV line Conneforde-Diele, which runs
across the Ems River, be switched out to allow safe passage of the ship "Norwegian
Pearl" via the Ems River to the North Sea at 01:00 on 5th November. This had
actually been done several times in the past. E.ON Netz analyzed the impact of this
line opening and since they did not find any N-1 criterion violations for this event, the
line switching was approved on 27th October. The analysis showed that the system
would be highly loaded, but secure, following this outage.
At 12:00 on 3rd November, the shipyard requested E.ON Netz to reschedule the line
outage for 22:00 on 4th November, three hours earlier than previously agreed to.
Having re-analyzed the event and found no N-1 criterion violations on their own
system, E.ON Netz tentatively agreed. At this stage, this was not communicated to
RWE TSO and TenneT, the neighboring Transmission System Operators (TSOs).
This late announcement meant that it was not possible to reduce the power exchange
between Germany and The Netherlands for the outage of the Conneforde-Diele line:
according to TenneT, no exchange program reduction is possible after 08:00 for the
day ahead due to the agreed auction rules (capacity is considered as firm, except in the
case of force majeure) [39].
At 19:00 on 4th November E.ON Netz informed TenneT and RWE TSO about the
new time for switching the line. TenneT and RWE TSO indicated, at around 21:30 on
the 4th, that flows between Germany and The Netherlands were high. Nonetheless,
since the system was expected to be secure, they agreed to the new switching time for
the line outage.

1.3.8.2 The Event
At 21:38, E.ON Netz switched out the first of the two 380 kV Conneforde-Diele
circuits. One minute later, the second circuit was switched out. After both circuits
were switched out, at 21:39, E.ON Netz received several warning messages for high
flows on the lines Elsen-Twistetal and Elsen-Bechterdissen. Two minutes later, at
21:41, RWE TSO called to inform E.ON Netz about the 1,795 A safety limit (90% of
the steady state thermal rating of 2,000 A) on the Landesbergen-Wehrendorf 380 kV
line that interconnects E.ON Netz and RWE TSO. At this point the limit had not yet
been reached and the RWE internal grid was N-1 secure. Moreover, the protection
settings on the two ends of this line were different (3000 A on the E.ON side, 2100 A
on the RWE side). E.ON Netz dispatchers later claimed that they were not aware of
the protection settings of the line on the RWE TSO end and thus had not taken this
into consideration. RWE TSO indicated that they had informed E.ON Netz of these
settings in the last data exchange in September 2003. In the end, the line was
automatically tripped by the distance relays in the Wehrendorf substation due to
overloading. This then led to cascading tripping of a number of other interconnecting
lines and the eventual split-up of the UCTE system at 22:10:28 into three subsystems.
Figure 1-50 shows the subsystems formed; areas 1 and 3 remained asynchronously
connected through the dc links from Italy to Greece. Figure 1-51 shows the system
frequency before, during and immediately after the split-up in the three areas.

Figure 1-50: Three sub-systems formed due to the system split-up in UCTE: Area 1 (West) and Area 3 (South-East) were left with under-frequency, Area 2 (North-East) with over-frequency. (Diagram reproduced with permission by UCTE from [39]. © UCTE 2006.)

[Figure 1-51 shows the frequency in the three zones (West, South East, North East) around the split. The annotated events are: closing of the Landesbergen busbar at 22:10:11, tripping of Landesbergen-Wehrendorf at 22:10:13, splitting of the West and North East zones at 22:10:28.7 and of the West and South East zones at 22:10:28.9; the zone frequencies then diverge, spanning roughly 49 Hz to 51.4 Hz, before settling over the following minutes.]
Figure 1-51: Frequency in the three sub-systems during and after system split-up. (Diagram reproduced with permission by UCTE from [39]. © UCTE 2006.)
1.3.8.3 Dynamic Issues During the Event
Post mortem analysis was mainly based on the following sources: (i) two snapshot data
sets of the whole interconnected system using power flow calculations, (ii) wide-area
measurement system (WAMS) recordings from several substations in Germany,
Switzerland, Slovenia, Italy, Croatia, Austria and Greece, and (iii) digital fault
recordings from protection equipment of transmission lines that tripped and
SCADA/EMS recordings of inter-area power flows, system loads and area controller
data. The detailed analysis can be found in [39]. In summary, the system dynamics
can be characterized in the following three phases:
o The highly loaded East-West corridor in northern Germany was significantly
weakened by the opening of the two Conneforde-Diele circuits. System
damping was still adequate after the opening of these circuits alone.
o The subsequent overload and tripping of the Landesbergen-Wehrendorf line
triggered cascading outages of many other interconnecting lines. Initially these
were due to shifting power flows causing overloads and subsequent tripping of
lines. However, by 22:10:28, as the system continued to weaken, stability
conditions worsened and all lines south of the Main river, in the middle of
Germany, tripped due to impedance protection, i.e. severe voltage drops at
both ends of those lines resulted in a low apparent impedance that was viewed
as a fault by the protection relays (see the sketch following this list).
o Finally there was a loss of synchronism between the three areas that resulted in
a splitting up of the system into three separate areas.
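As a rough illustration of the load-encroachment mechanism in the second phase above, the following Python sketch computes the apparent impedance seen by a distance relay for two operating points. The zone reach and the voltage/power values are assumptions chosen only to show the trend; they are not settings or measurements from any UCTE line.

ZONE_REACH_OHM = 120.0     # assumed reach of the largest (load-encroachment prone) zone

def apparent_impedance_ohm(v_kv_ll, p_mw, q_mvar):
    """Magnitude of the impedance seen by the relay, |Z| = V^2 / |S|."""
    s_mva = (p_mw ** 2 + q_mvar ** 2) ** 0.5
    return (v_kv_ll ** 2) / s_mva

cases = [
    ("moderate load, nominal voltage", 400.0, 1000.0, 200.0),   # assumed values
    ("shifted flows, depressed voltage", 340.0, 2200.0, 900.0), # assumed values
]
for label, v, p, q in cases:
    z = apparent_impedance_ohm(v, p, q)
    print(f"{label}: Z = {z:.0f} ohm, inside assumed zone: {z < ZONE_REACH_OHM}")

Heavy loading combined with depressed voltages drives the apparent impedance down into the protection zone even though no fault exists, which is how the heavily loaded lines south of the Main river tripped.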
1.3.8.4 Remedial Actions Taken in the Three Subsystems:
Western Subsystem
The Western area (designated as Area 1 in Figure 1-50) was left with a large deficit in
power. Total generation was 182.7 GW. However, there was a deficit of 8,940 MW
due to lost imports following the system split-up. This large imbalance in
generation/load, quickly lead to a frequency decline down to 49 Hz (Figure 1-51).
This resulted in a large amount of underfrequency load shedding (UFLS) and tripping
of pump storage hydro units. A total of roughly 17 GW of load and 1,600 MW of
pump storage units were tripped to finally stabilize the frequency. The reason for this
rather large amount of load shedding, as compared to the initial power deficit, was
that a large portion of wind and cogeneration units also tripped following the
frequency decline. Most of these units were connected to the distribution system and
therefore per their interconnection standards were allowed to automatically trip for
frequency excursions below 49.5 Hz. Roughly 40% of the total generation that tripped
was wind (about 4800 MW). In total, 60 % of the wind turbine generators connected
to the grid at 22:09 tripped just after the frequency drop and 30% of the cogeneration
units in operation also tripped.
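The relationship between the initial import deficit and the much larger amount of load finally shed can be shown with a simple bookkeeping sketch in Python. The figures are taken from the description above; the residual is only indicative, since primary reserves and load self-regulation are not modeled.

# All figures below are quoted from the text; the balance is illustrative only.
initial_deficit_mw = 8940.0          # imports lost when the system split
wind_tripped_mw = 4800.0             # roughly 40% of all generation that tripped
total_gen_tripped_mw = wind_tripped_mw / 0.40   # about 12 GW of lost generation

total_deficit_mw = initial_deficit_mw + total_gen_tripped_mw

ufls_mw = 17000.0                    # load shed by underfrequency relays
pumped_storage_tripped_mw = 1600.0   # pumping units disconnected
relief_mw = ufls_mw + pumped_storage_tripped_mw

print(f"deficit after distributed generation tripped: {total_deficit_mw:.0f} MW")
print(f"relief from UFLS and pump tripping:           {relief_mw:.0f} MW")
print(f"remainder covered by reserves and regulation: {total_deficit_mw - relief_mw:.0f} MW")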
North East Subsystem
In contrast to the Western and South-Eastern areas, the North-East area faced a severe
surplus of greater than 10 GW of generation (about 17% of total generation in this
area prior to the split). Thus, the frequency initially jumped up to 51.4 Hz. In this area,
wind generation also tripped, this time due to overfrequency protection. In this case,
this response was beneficial to the system since there was a surplus of generation.
However, some of the wind generators then automatically reconnected as the
frequency recovered, which further aggravated the situation, and the frequency started to
rise again at 22:13 (see Figure 1-51).
During the event there was a real danger of further splitting of UCTE power systems.
However, good cooperation between the control centers of the TSOs allowed many
overloaded lines to be quickly relieved, and areas 1 and 2 were finally resynchronized
in Germany and Austria at 22:47. This further decreased the flows in
this region to acceptable levels. At 23:30 the power systems in central Europe were
back to normal operation [39].
South-Eastern Subsystem
After the split this region had a total generation of 29.1 GW and a load of 29.88 GW;
thus there was only a small power imbalance in this area (about 770 MW). Some power flow
oscillations were observed on a number of lines, and a unit at Kakanj station in Bosnia
(BA) automatically tripped at 22:10; it was generating 210 MW. At 22:10:40 the
frequency dropped to 49.79 Hz. Since the frequency remained above the first UFLS
stage, no other automatic actions or load shedding took place. According to post
mortem analyses, area 3 was N-1 secure during the whole event.
1.3.8.5 Conclusions and Recommendations
The general performance of the UCTE system (automatic UFLS, etc.), in addition to
actions taken by the TSOs, allowed the system to be restored quickly without a
widespread blackout. The investigation [39], however, has underlined a need for
enhancement of UCTE standards and their application by TSOs. In particular, it was
recommended that European stakeholders give attention to harmonizing the legal
and regulatory frameworks across Europe. Five specific policy-related
recommendations were given in [39] by the study group.
A brief summary of the conclusions of the report [39] is as follows:
o The main causes of the incident were non-compliance with the N-1 criterion
and insufficient coordination/communication among the TSOs.
o One other contributing factor was the rapid increase of dispersed generation
(wind generation in this case) which, according to the less restrictive rules of the
distribution systems, can disconnect at frequency thresholds higher than
conventional power plants (and can also reconnect automatically without
coordination with the TSOs). This aggravated the situation.
The five key recommendations can be summarized as follows [39]:
o The need to coordinate the security assessment of adjacent power systems,
simulating also contingencies outside the TSO's own control area, and to make
automatic and regular on-line contingency analysis mandatory, together with
the identification of proper countermeasures for outages.
o TSOs should reconsider their defense plans and load shedding schemes given
that a significant amount of generation (in this case mostly wind
generation) tripped due to the large frequency excursions. Moreover, the
restoration process should be managed by the TSOs; the duties of the involved
parties need to be clarified within a national framework. (This last comment
presumably refers to the fact that many wind generators automatically
reconnected to the system following frequency recovery without directives
from the TSOs, thereby further aggravating the overfrequency conditions.)
o The need to enhance as much as possible the standards and the coordination
for the secure operation of inter-regional power systems, in terms of joint
training, data exchanges, sharing of the results of security analyses, etc.
o UCTE needs to set up an information platform to facilitate all TSOs being able
to observe in real time the actual state of the whole UCTE system.
o TSOs should have control over generation output. Generation connected to the
distribution system should behave the same for large frequency and voltage
excursions as generation connected to the transmission system. Generators (on
the transmission and distribution systems) should constantly inform the TSOs
of changes in their generation schedules.

1.4 Blackouts in South America
1.4.1 Brazilian Blackouts
1.4.1.1 Blackout of April 18th, 1984 (4:34 pm)
At 4:34 pm, the 500/345 kV transformer (T11) of Jaguara substation was shut down
while operating at its overload limit, due to the need to optimize the energy exchange
between the hydro plants located on the Paranaíba river (which were spilling water) and
the Grande river (whose reservoirs were at low levels), combined with an unusually high
and early increase in load in the load center of São Paulo as a consequence of low solar
luminosity.
At 4:43 pm, the second 500/345 kV transformer (T12) of Jaguara substation tripped
out, followed by an almost simultaneous tripping of seven circuits in the 500 kV and
345 kV transmission systems. Power oscillations were recorded between the plants on
the Paranaíba river and the rest of the interconnected system, giving rise to cascading
shutdowns.
With the injection of excessive power into the 345 kV and 440 kV networks feeding
the states of Rio de Janeiro, São Paulo and Minas Gerais, voltage collapse occurred,
resulting in load losses of around 5,520 MW. The 750 kV transmission system was
opened by out-of-step protection to avoid propagation of the oscillations to the South
region.
Main lessons learned
1. The need for improving maintenance routines.
2. The need for staff training to improve operation security.
3. The need for investments in supervision and control systems and procedures.
1.4.1.2 Blackout of August 18th, 1985 (6:40 pm)
At 6:40 pm a single-phase-to-ground short-circuit, provoked by bushfires in one of the
circuits of the Marimbondo-Araraquara 500 kV double circuit transmission line, led to
the loss of both transmission circuits due to the incorrect action of protection of the
other circuit. As a result of the meshed ring bus arrangement at Marimbondo
substation, the transmission lines to Campinas and Poços de Caldas were also tripped.
In addition, the Special Protection System at Marimbondo substation, which should have
dropped machines at this plant, failed due to inadequate settings. This provoked the
opening of the 500/440 kV transformer at Água Vermelha substation due to
overloading, thus resulting in the tripping of several transmission lines and generators.
Many islands were formed in the system and the disturbance was propagated into the
Southern region.
Main lessons learned
1. The need to install adequate devices to record frequency and oscillographs in
general.
2. The need to pay greater attention to settings in special protection systems (SPS).
3. The need for investments to improve power flow and generation capability in the Paranaíba area.
1.4.1.3 Blackout of December 13th, 1994 (10:12 am)
At 10:12 am, while performing tests at the Ibiúna HVDC Converter Station, operation of
the forced isolation scheme was provoked by human error. As a consequence, the two
HVDC bipoles were blocked, producing a shortage of 5,800 MW in the interconnected
system. With the pronounced voltage dip in the São Paulo area, oscillations were
experienced as well as loss of synchronism between the Itaipu power plant and the
Southeastern region, leading to the opening of the 750 kV transmission system in
such a way that this whole power plant remained connected to the Southern region. In
spite of the existence of an SPS for generation dropping at Itaipu, designed to prevent
overfrequency in the South system, this scheme proved to be insufficient. The
acceleration of the Itaipu units, associated with over-voltages, provoked the tripping of the
750 kV circuits between Foz and Ivaiporã, completely isolating the Itaipu plant from the
interconnected electric system. Due to the large generation deficit, the underfrequency
load-shedding schemes operated. However, due to the rapid restoration actions, the
disturbance was not considered critical.
Main lessons learned
1. That, as a consequence of the action of underfrequency load shedding schemes,
some circuits can become lightly loaded and be shut down due to the resulting
overvoltages.
2. The adjustment of the underfrequency load shedding schemes should be dynamic
because of variations in the system loading levels.
3. That SPS settings must be more selective.
1.4.1.4 Blackout of March 26th, 1996 (09:18 am)
At 09:18 am an incorrect operation (human error) of a selector switch at Furnas hydro
plant tripped the 345 kV differential bus protection systems, resulting in the plant
(1,300 MW) total shutdown. The shutdown provoked a simultaneous outage in seven
345 kV transmission lines that in turn caused the tripping of the transmission lines
interconnecting the plants located on the Grande and Paranaíba Rivers, including the
500/440 kV transformer at Água Vermelha substation. The loss of these elements
separated the system from the large generation centers located in the states of Minas
Gerais and Goiás. The interrupted load reached a total of 5,804 MW.
Main lessons learned
1. The need for adaptive bus protection.
2. The need for investment in reinforcements in the 500 and 440 kV networks in order
to increase operative flexibility.
3. The need to improve monitoring of the electric system to identify islanding
situations.
4. The need for investment in more effective training programs for operation and
maintenance staff.
1.4.1.5 Blackout of March 11th, 1999 ( 10:16 pm )
At 10:16 pm, prompted by a single-phase short-circuit at the 440 kV bus of Bauru
substation, remote transmission line protection systems were activated, since there was
no bus protection at Bauru at that time. Bus interconnections were also opened due to
the arrangement of the substation. Six 440 kV circuits were tripped, initiating an
oscillatory process that ended up with the cascading shutdown of various elements in
the neighboring system. Among them, the 750 kV transmission system tripped,
isolating Itaipu power plant from the South system; many 440 kV transmission lines
also tripped isolating Três Irmãos, Jupiá, Porto Primavera, Capivara and Taquaruçu
hydro power plants, with a total generation loss of 2,300 MW; 500 kV lines in the
Southern system were tripped, activating the SPS in the region; the Itaipu HVDC link
was lost due to voltage collapse in the São Paulo area; many 500 and 345 kV transmission
lines were lost, adversely affecting the power supply to the states of Rio de Janeiro
and Espírito Santo. In the same manner, the North/South interconnection was tripped
by its out-of-step protection. The total interrupted load reached 25,000 MW, which
affected 75 million people. Several islands remained in operation, totaling 10,000
MW. Considering the extremely severe event and the small secondary incidents which
occurred during the restoration process, recovery from the blackout has been seen as
satisfactory, mostly due to proper actions taken by the teams that dealt with the
disturbance, in spite of the delay in restoring the system in certain areas (4 h 20 min in Rio
de Janeiro and Espírito Santo).
Main lessons learned
1. The need for modifications in the physical arrangements of many EHV substations
as well as in the protection and control systems.
2. The need for greater detailing in maintenance planning to reduce the risk of
blackouts.
3. The need for improving information systems in view of the new environment of the
electric industry as a whole.
4. The need to modernize the operation control centers.
5. The need for implementation of better SPS to cope with multiple contingencies,
even those of minimal probability of occurrence.
6. The need for constant reevaluation of SPS systems.
7. The need for improving the disturbance analysis process in general.
1.4.1.6 Blackout of May 16th, 1999 ( 06:05 pm )
At 06:05 pm, during maneuvers at Itumbiara substation in order to normalize the
Itumbiara - Porto Colômbia 345 kV transmission line that had been isolated for
maintenance purposes, the bus differential protection system tripped all the circuits
connected to the Itumbiara 345 kV bus. The shutdown caused the interruption of 785
MW that normally flows into the Goiás-Brasília area including the states of Mato
Grosso and Tocantins. The interruption precipitated the collapse of the power supply
to all the above mentioned regions.
Main lessons learned
1. The need to review the settings of protection devices in order to avoid unnecessary
activation during disturbances.
2. The need to increase the list of precautions normally considered during insertion of
off-line elements coming back from maintenance or repair.
1.4.1.7 Blackout of January 21st, 2002 ( 01:34 pm )
At 01:34 pm, due to the rupture of a conductor of circuit 2 of the Ilha Solteira -
Araraquara 440 kV transmission line, approximately 1 km from the terminal at Ilha
Solteira substation, a single-phase short circuit was experienced. This fault,
considered to be of a permanent nature, was cleared by the actuation, in 58 ms, of the
main and alternate distance protection systems in its first zone, at the Ilha Solteira
terminal, and by the actuation, in 125 ms, of the tele-protection scheme by zone
acceleration of both protection systems at the Araraquara terminal. The automatic
disconnection, at the Ilha Solteira terminal, of circuit 1 of the Ilha Solteira -
Araraquara 440 kV transmission line was verified to have resulted from the non-
selective action of the primary distance protection system. As a result of the automatic
tripping of the two remaining 440 kV circuits that interconnect the Ilha Solteira bus
with the rest of the interconnected system, an oscillatory process was initiated that
forced the shutdown of the 440 kV transmission trunk. The control actions
implemented after the blackout of 1999 permitted the controlled opening of the
North/South and South/Southeast interconnections, thus avoiding the propagation of
the oscillations and minimizing the consequences of the disturbance.
Main lessons learned
1. The effectiveness of an adequate out-of-step relay protection to minimize
propagation of oscillations.
2. The need to carry out specific studies to determine additional resources for the
restoration process.
3. The need to reevaluate the criteria for unattended substations.
4. The need to implement a Defense Plan to protect against large disturbances of
smaller probability.
1.4.1.8 Summary of Lessons Learned
An analysis of the main blackouts suffered by the Brazilian electric system over the
last 20 years has revealed the following facts:
1) The traditional planning criterion used for expansion in Brazil is the widely
known single contingency criterion (N-1). However, it is apparent from the
majority of cases studied that the Brazilian blackouts were caused by multiple
contingencies or single contingencies with multiple shutdowns not foreseen in the
normal planning procedures.
2) Special protection systems are acknowledged as the best way to improve
performance of electric systems during disturbances. This makes it necessary to
use improved computational tools when conducting dynamic studies and to
enhance the communication media presently available for protection purposes.
3) It remains a difficult and laborious task to identify the root causes of large
blackouts.
4) The engineers responsible for system operation/restoration should not be
subjected to hierarchical pressures in the moments immediately following
blackout situations. A regimen of isolation should be observed similar to the
protocol used in airport control rooms, nuclear plants, and hospital surgical
centers. Exposure to high-level management and to the press should be avoided or
eliminated.
5) The processes of supervision and control of electric systems should be given
absolute priority by the industry itself and the government.
6) Training programs for system operators must be given high priority.
7) Simulations of disturbances conducted on digital computers are a good way to
ensure understanding of the blackout phenomena, with mathematical models
used as a complementary tool.
8) Clearly it is very important to have access to automatic control of the voltage
profile during the dynamic period. The protection settings against circuit over-
voltage, the automatic insertion of reactors/disconnection of shunt capacitors and
the opening of circuits are all crucial elements in this process, not only to
minimize problems but to increase the speed of the restoration of the post-
blackout system.
1.4.2 May 1997 Chilean Blackout
A voltage collapse on the Chilean Interconnected System (CIS) occurred on May 1, 1997.
This system, with about 500 buses, extends about 2,000 km in length and at the time
supplied over 3,000 MW of load. The basic structure of this system in 1997 is illustrated
in Figure 1-52 through a simplified 17-bus representation. The major load was located in
Santiago, Chile's capital, in a ring formed by three buses: Polpaico, Cerro Navia and Alto
Jahuel. The latter was the main bus receiving energy from power
sources located in southern Chile. In this representation there are 7 main power
sources; these include five hydro generating plants in the southern and central areas
(Colbún-Pehuenche, Pangue, El Toro-Antuco, Isla-Cipreses and Rapel), and two
thermal generating units in the central and northern areas (Ventanas and Guacolda).
This simplified system model is represented by a set of differential-algebraic (DAE)
equations, with loads being modeled by an exponential model [40], and generators by
a one-axis model [41]. The values used for the generator parameters are estimated by
using both widely available data and information on the actual generators. Automatic
Voltage Regulators (AVR) are modeled by a first-order transfer function connected in
series with a voltage limiter [40], which has maximum and minimum transient voltage
limits and maximum steady-state limits, as depicted in Figure 1-53. The voltage
limiter is activated once the excitation has exceeded its transient limit value for longer than
a preset time; in the simulations this delay is chosen as 10 seconds. This excitation voltage
limiter or maximum excitation limiter (MXL) plays a key role in the voltage collapse
of this system as explained in more detail below. The tap mechanism in the
transformers with On Load Tap Changers (OLTC) is simulated through a special
function that uses a tolerance band and a dead time of 1 second. Once the OLTC is
activated, a tap change is produced every 2 seconds, as long as the secondary voltage
remains outside the tolerance band.
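To make these modeling assumptions concrete, the following minimal Python sketch reproduces the two discrete mechanisms just described: a first-order AVR whose field voltage is clamped to its steady-state ceiling once it has exceeded the transient limit for more than 10 seconds (the action of the maximum excitation limiter), and an OLTC that, after a 1 second dead time, moves one tap every 2 seconds while the secondary voltage stays outside its tolerance band. Only the 10 s delay, the 1 s dead time and the 2 s tap interval come from the description above; the gains, limits, tolerance and tap step are illustrative assumptions rather than data from the actual CIS study.

# Minimal sketch of the discrete AVR/OLTC logic described for the CIS model.
# Values marked "assumed" are illustrative only.

def avr_step(efd, v, vref, t_over, dt,
             gain=50.0, T=0.5,               # assumed AVR gain and time constant
             efd_transient_max=4.0,          # assumed transient field voltage ceiling
             efd_transient_min=0.0,          # assumed transient field voltage floor
             efd_steady_max=2.5,             # assumed steady-state (thermal) ceiling
             t_limit=10.0):                  # 10 s delay quoted in the text
    """One integration step of a first-order AVR with an over-excitation limiter."""
    efd_new = efd + dt * (gain * (vref - v) - efd) / T
    efd_new = max(min(efd_new, efd_transient_max), efd_transient_min)
    # accumulate the time spent above the steady-state limit
    t_over = t_over + dt if efd_new > efd_steady_max else 0.0
    if t_over > t_limit:                     # limiter acts: clamp to the steady-state ceiling
        efd_new = efd_steady_max
    return efd_new, t_over

def oltc_step(tap, v2, v2_ref, timer, dt,
              tol=0.01,                      # assumed tolerance band in p.u.
              dead_time=1.0,                 # 1 s dead time quoted in the text
              tap_interval=2.0,              # one tap change every 2 s (from the text)
              tap_step=0.0125, tap_min=0.9, tap_max=1.1):
    """OLTC logic: after the dead time, one tap change per interval while the
    secondary voltage stays outside the tolerance band around its reference."""
    if abs(v2 - v2_ref) <= tol:
        return tap, 0.0                      # inside the band: reset the timer
    timer += dt
    if timer >= dead_time:
        tap += tap_step if v2 < v2_ref else -tap_step
        tap = min(max(tap, tap_min), tap_max)
        timer = dead_time - tap_interval     # next change one tap_interval later
    return tap, timer

In a full simulation these two functions would be evaluated at every step of the DAE integration, with the voltages taken from the network solution.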
Figure 1-52: The 1997 Chilean Interconnected System (CIS).

[Block diagram: the AVR is modeled as a first-order lag G/(1+sT) acting on the error Vref - V, with the output field voltage Efd limited between Efdmin and Efdmax by the voltage limiter.]
Figure 1-53: AVR with dynamic limits.


The voltage collapse scenario discussed here is the outage of the 154 kV line from
Itahue to Alto Jahuel, which happened in the CIS in May 1997 at 23:23 hrs. After the
collapse, the system was restored in approximately 30 minutes; nearly 80% of the
system load was lost during this event.
The voltage collapse is illustrated in Figure 1-54, where the evolution of the voltages at
the main system buses is depicted. These simulations clearly resemble the typical
voltage collapse behavior observed in other systems, i.e. a slow and steady decline in
voltage magnitude followed by a rapid decay [42]. Observe that the Alto Jahuel bus
voltages decay faster than the Charrúa bus voltage, as the latter bus is more distant, in
terms of electrical impedance, from the critical area than the other buses included in
this figure.
[Plots of bus voltage (p.u.) versus time (minutes) for the Colbún 220 kV, Rapel 13.8 kV, Ancoa 500 kV, A. Jahuel 220 kV, A. Jahuel 500 kV and Charrúa buses.]
Figure 1-54: Evolution of p.u. bus voltages. Time scale in minutes.


The longitudinal structure of the CIS is similar to other networks where voltage
collapse incidents have been experienced in the past [42-45]. Thus, in [42], the author
analyzes in detail a voltage collapse in a 4-bus network with a structure similar to the
CIS; this system has a generating center located far from the demand, with an
additional generating unit in the middle of the transmission line connecting both ends.
A similar structure can be observed in the test system used in [43] and in the real
systems of [45] and [46]. All these references show that a longitudinal system with a
generating unit located near the middle of the network is prone to voltage instabilities;
the voltage collapse occurs when a generating unit reaches excitation control limits.
The main cause of this voltage collapse in the CIS was the overexcitation limits reached
by the generating units at Colbún-Pehuenche. This can be directly associated with the
maximum loadability of the system [44], as illustrated in the PV curves depicted in
Figure 1-55. Observe that the system collapses due to a maximum Q-limit at the
Colbún-Pehuenche generating bus when the Itahue-A.Jahuel line is removed (faulted
system), as there is no power flow solution for this system at the given loading
conditions. When a 200 MVar static bank of capacitors available at the A.Jahuel bus
in Santiago is added to the faulted system or when the load is shed, the equilibrium
point is recovered. As a matter of fact, operators tried to avoid voltage collapse by
connecting the bank of capacitors at A.Jahuel; however, the system still collapsed, as
these capacitors were not connected quickly enough. A detailed analysis of the proper
timing for capacitor connection and load shedding to recover the system is presented
in [47].
Figure 1-55: Voltage profiles in p.u. with respect to total loading conditions in MVA. Continuous lines
correspond to the base system (the * marks the current loading conditions); dashed lines correspond to
the faulted system; dash-dotted lines correspond to the faulted system with 200 MVar capacitive
support added at the A.Jahuel bus.
1.5 Blackouts in South East Asia and Australasia
1.5.1 Blackouts in Japan The 1987 Tokyo Blackout
On July 23, 1987, the metropolitan Tokyo area experienced a massive blackout
caused by voltage instability. In the blackout, more than 8 GW of load was lost for
about 3.35 hours, which affected 2.8 million households. Lessons learned from the
blackout have formed the foundation of voltage and reactive power control, currently
adopted in the Tokyo Electric Power Company (TEPCO) network.
1.5.1.1 Sequence of Events
On July 23, 1987, a temperature of 35.9 °C was recorded in Tokyo, the ninth-highest
on record. In the morning of that day, TEPCO revised its demand forecast upward
from 38.5 GW to 39.0 GW, and again from 39.0 GW to 40.0 GW, in response to revisions
of the forecast temperature. This would set a new record for TEPCO at that time, but
secure and stable operation had been expected with 40.0 GW of electricity demand in
the summer operational plan.
During the lunch break on the same day, as electricity demand declined from 39.1 GW to
36.5 GW, some shunt capacitors had to be disconnected due to the upper bus voltage
limits of the tertiary side of transformers (shunt capacitors are installed mainly on the
tertiary side of transformers in the TEPCO network). After the lunch break, these shunt
capacitors were expected to be switched back on automatically by the Voltage and Reactive
Power (Q) Controller (VQC)7 as demand increased. However, since the load increase
was faster than ever experienced previously, voltage and reactive power controls by
VQC and AVR could not keep up with it, and thus the bus voltages started to decline.
In Figure 1-56, the rate of demand rise after the lunch break on that day is compared to
those of other days whose highest temperatures exceeded 33 °C.
[Scatter plot of the rate of demand rise after the lunch break (MW/min) against the bottom demand during the lunch break (GW) for 18 hot days in 1985-1987. The mean rate was 260 MW/min with a standard deviation of 50 MW/min; on July 23, 1987 the rate reached 400 MW/min from a bottom demand of 36.5 GW.]
Figure 1-56: Rate of demand rise after lunch break.


At 13:19, when the 500 kV bus voltages in the western part of TEPCO area dropped
below 400 kV, two 500 kV transmission lines tripped due to zone 4 impedance relays,
and one 500 kV transmission line tripped due to a phase comparison relay. These
impedance relays operated because the voltage drop forced the apparent impedance to
be inside the reach of the relays. The apparent impedance of power flow in Shin-Tama
500 kV line (2L) and its zone 4 impedance relay setting at the Shin-Tama substation are
shown in Figure 1-57. The phase comparison relay at the Shin-Fuji substation
operated in the following sequence. A few minutes before the tripping, the contact of
the under-voltage relay (1) in Figure 1-58 was closed due to the voltage drop.
Receiving an alarm triggered by this, substation operators of the Shin-Tama
substation manually blocked the phase comparison relay (2). This toggled the phase
comparison relay at the Shin-Fuji substation from the phase comparison mode to the
overcurrent mode, in which the contact of the phase comparison relay (3) was closed
as the Shin-Tama 500 kV line (1L) was carrying current flow over the setting value
(400 A). In order to prevent the unwanted tripping, the phase comparison relay at the
Shin-Fuji substation had to be blocked (4). However, the contact of the under-voltage
relay (5) was closed before the relay was blocked (4), and the Shin-Tama 500kV line
(1L) was tripped at the Shin-Fuji substation.
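The mechanism by which a voltage depression, rather than a fault, operates an impedance relay can be illustrated with a simple, indicative calculation: the apparent impedance seen by a line relay carrying a complex power transfer S at bus voltage V has a magnitude of approximately V^2/|S|, so with an unchanged power flow a drop of the 500 kV bus voltage to about 400 kV reduces the apparent impedance to (400/500)^2 = 0.64 of its pre-disturbance value, which can be enough to bring a heavily loaded line within the reach of a zone 4 setting. The figures here are illustrative only; the actual trajectory and setting are those shown in Figure 1-57.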

7
The VQC controls HV and LV side voltages at substations by changing tap positions and switching
shunt capacitors/reactors [S. Koishikawa et al.: Advanced Control of Reactive Power Supply
Enhancing Voltage Stability of a Bulk Power Transmission System and a New Scheme of Monitor on
Voltage Security, Session Paper CIGRE 38/39-1, 1990]
[R-X diagram (resistance and reactance in ohms): the apparent impedance trajectory, marked at 13:00, 13:15 and 13:19, moves towards the zone 4 impedance relay setting characteristic at Shin-Tama.]

Figure 1-57: Apparent impedance of power flow in Shin-Tama 500 kV line (2L).

Figure 1-58: Trip sequence of the phase comparison relay of the Shin-Tama 500 kV line (1L).
In addition to 500 kV transmission lines, four 275 kV transmission lines and four
275/66 kV transformers tripped due to zone 4 impedance relays. Note that no fault
occurred to cause these relays to operate.
In order to limit fault currents and prevent unexpected cascading events, the TEPCO
network below 275 kV has a radial structure. Thus, these tripping events cut off the
load at the end of the radial network, causing the loss of 8,168 MW, or 21 percent of
the total load. The voltage collapse stopped, avoiding further cascading events.
[One-line diagram of the TEPCO EHV (500 kV and 275 kV) network in 1987, showing the Kashiwazaki-Kariwa and Fukushima nuclear power plants, the Shin-Tama, Shin-Hadano and Shin-Fuji substations in the western part of the network, and the frequency converter (FC) ties to the 60 Hz system.]
Figure 1-59: The EHV system of the TEPCO network in 1987.


Since the voltage drop caused the apparent load to be lower in MW value, frequency
was already higher than the normal operating range before the loss of the load. Then
at 13:19, the loss of the load caused a sudden increase in frequency up to 50.74 Hz.
Due to this frequency spike, Kashima Unit 6 and Kawasaki Unit 6 tripped, and
Kashima Unit 4 was manually stopped. Frequency returned to the normal operating
range within about five minutes through load restoration and governor action of
generating units. Turbine bypass valves of nuclear units were opened from 13 to 25
percent, as designed, in order to protect their turbines, and no nuclear unit tripped
during the frequency excursion.

Figure 1-60: Frequency excursion at the collapse (chart paper copy).
1.5.1.2 Voltage Collapse
Results of the postmortem analysis clearly indicate long-term voltage instability: the
500 kV bus voltages at the western end of the TEPCO network dropped at
approximately 4 kV/min from 13:00 to 13:15, and then declined at approximately
18 kV/min from 13:15 to 13:19, before the voltage collapse. Just before the tripping of
the 500 kV transmission lines, these 500 kV bus voltages had dropped to around 370 kV.
At the time, 10,570 Mvar of shunt capacitors had been installed in the TEPCO
network. Of this, 1,280 Mvar was disconnected from the network at 13:00. These
shunt capacitors were all put back into service, automatically and manually, by 13:07.
Some shunt capacitors were switched manually, as automatic capacitor bank
switching could not keep up with the load increase at a rate of 400 MW/min.
However, the capacitor bank switching was not completed in time to maintain voltage
stability. As the shunt capacitors were put into service after the voltage drop had
progressed, they could not contribute their full capacity, since the reactive output of a
capacitor in Mvar is reduced in proportion to the square of the voltage. The power
system is estimated to have been on the lower side of the P-V curve at 13:03, as shown
in Figure 1-61. The P-V curves in the figure are derived from recorded data and
postmortem simulations, and the voltage excursion is based on recorded data.
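As a rough numerical illustration of this square-law effect (indicative only, since the capacitors are connected at the tertiary windings rather than directly at 500 kV): a bank rated Q_N at nominal voltage supplies Q = Q_N (V/V_N)^2, so at the roughly 370/500 = 0.74 p.u. voltage reached just before the collapse, each bank delivered only about 0.74^2 = 0.55, i.e. little more than half, of its rated reactive output.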
The loss of the load at 13:19 brought the power system back to the upper side of the
P-V curve, and the 500 kV bus voltages bounced back above 500 kV. The recovery was
fortunate: the zone 4 impedance relays that operated were not set to save the power
system from voltage collapse, but their operation did save the power system from a
possible total blackout by shedding 8 GW of load. The process shows the
effectiveness of under-voltage load shedding; as is often the case, the frequency
excursion also shows that underfrequency load shedding (UFLS) cannot be relied upon
to save the power system from voltage collapse.

[P-V diagram: 500 kV bus voltage (kV) versus system demand (GW), with the recorded operating points from 13:00 to 13:15 moving down towards the nose; a second curve corresponds to the case with fewer disconnected shunt capacitors.]
Figure 1-61: Voltage excursion to the collapse.


1.5.1.3 Root Causes
The root causes of the blackout can be categorized into the following factors:
Inadequate Knowledge About Load Characteristics
First of all, what contributed most to this long-term voltage instability is that
automatic capacitor bank switching could not keep up with a load increase of
400 MW/min. The VQC could have been set to react faster to counter voltage
instability, but a load pickup rate of 400 MW/min was beyond the assumptions on
which the settings were based.
Insufficient Monitoring Tools
At the time, TEPCO had three bulk power system control centers. Operators at the
control centers had a dynamic map board as a visualization tool, but the map board
did not display 500 kV bus voltages, which made them rely on CRT terminal displays
to monitor 500 kV bus voltages. Also, they could only monitor 500 kV bus voltages in
their territory and had to ask other control centers to check bus voltages in
neighboring territories.
This insufficiency of monitoring tools is believed to have delayed their recognition of
the situation. If they had been aware of the situation earlier, the blackout could have
been minimized by emergency countermeasures such as manual load shedding and
OLTC blocking.
Lack of Strategies Against Voltage Instability
TEPCO did not have a pre-defined strategy against voltage instability at that time. In
addition, some of the operators did not have an adequate comprehension about the
phenomenon and its countermeasures.
TEPCO has since developed a pre-defined strategy against voltage instability and
revises it every year. Operators at the control centers are obliged to participate
in training programs on implementing the strategy, using a full-replica training
simulator.
Unwanted Control by VQC
In the transmission system of the TEPCO network, most transformers are equipped
with an on-load tap-changer that is controlled by VQC. A sample control scheme of
VQC is shown in Figure 1-62. Following this control scheme, VQC controls tap
position and shunt capacitors/reactors connected to the tertiary side of a transformer,
trying to keep primary and secondary side voltage in the dead band.
On the course toward the voltage collapse, VQC switched off shunt reactors, switches
on shunt capacitors, and moved up tap position. Among these controls, the tap
position control has an adverse effect on the voltage stability, shifting the load
towards a constant power characteristic8. To avoid this problem, TEPCO has added a
function to lock this tap position control when all shunt capacitors are in service and
primary side voltage is lower than a certain setting.

8
Taylor, C. W.: Power System Voltage Stability, McGraw-Hill, 1993.
[Diagram of the VQC control scheme in the HV (500 kV) / LV (275 kV) voltage plane: outside the dead band, the controller moves the tap down or switches off shunt capacitors / switches on shunt reactors when voltages are high, and moves the tap up or switches off shunt reactors / switches on shunt capacitors when voltages are low.]
Figure 1-62: Control scheme of VQC installed in a 500/275 kV substation.

Uneven Distribution of Power Plants


Uneven distribution of power plants also contributed to the voltage collapse.
Distribution of power plants was concentrated in the eastern and northern end of the
TEPCO network and around Tokyo bay, which caused high power flow from east to
west in the network. Thus, the western side of the network tended to have a lower
voltage profile under heavily loaded conditions.
In addition, unavailability of two power plants contributed to the voltage collapse.
The Ohi power plant was located in central Tokyo and had played an important role in
maintaining voltage of this load concentrated area. Unfortunately, the Ohi power plant
was not available due to an oil tank fire and explosion on May 26, 1987, which
degraded the ability of the central Tokyo area to maintain voltage as the load
increased. The Higashi-Ohgishima power plant was located in the western part
of the TEPCO network and had played an important role in reducing power flow from
east to west. The Higashi-Ohgishima power plant had been available for
commissioning tests until the day before, but it was not available on the day because
of issues found during the tests.
Considering these situations, particular attention had been paid to this heavy power
flow from east to west in the stages of operation and planning, but combined with the
lack of reactive power supply, the voltage collapse occurred first in the western side
of the network. The situation has not changed much since then, and the heavy power
flow from east to west is still an issue in operation and planning in the TEPCO
network.
1.5.1.4 Lessons Learned
From lessons learned, the following countermeasures were implemented:
o Settings to keep a High and Flat Voltage Profile: Lessons learned from the
blackout indicate that it is crucial to keep a high and flat voltage profile in the
EHV network. By doing so, one can best leverage voltage support from existing
shunt capacitors and generators. Soon after the blackout, TEPCO reviewed and
changed the settings of VQCs and AVRs for heavily loaded days to keep a high
and flat voltage profile as follows:
Table 1-2: VQC and AVR settings for heavily loaded days.
                  Before the blackout    After the blackout
  VQC (500 kV)    515~525 kV             525~550 kV
  VQC (275 kV)    280~287 kV             295~300 kV
  AVR             102%                   103%
These settings are reviewed at least every year to ensure stable and
secure operation under heavy load.
o Installation of Shunt Capacitors: At the time, 10.6 Gvar of shunt capacitors
had been installed in the TEPCO network, and this had been expected to be
enough to sustain an electrical load of over 40 GW. However, some of the shunt
capacitors were disconnected from the network, due to the upper bus voltage
limits of the tertiary side of transformers, just when the power system required
reactive power support from them. Lessons learned from the blackout suggest
installing more shunt capacitors dispersed through the system, especially in the
EHV network. After the blackout, TEPCO installed 11.9 Gvar of shunt
capacitors in its network within five years and has installed 18.2 Gvar of shunt
capacitors so far. Since these shunt capacitors are under the automatic control of
VQCs, they can be considered a dynamic reactive power reserve against long-
term voltage instability.

[Chart of installed shunt capacitors (Gvar) in the TEPCO network by year, from 1955 onwards, reaching 18.2 Gvar.]

Figure 1-63: Shunt capacitors installed in the TEPCO network.


o Installation of Smoothly Controlled Dynamic Reactive Power Reserves
(PSVR, SVC): While conventional AVRs keep the generator terminal voltage
at a pre-set level, Power System Voltage Regulators (PSVR)9 keep the sending
end voltage at a pre-set level. PSVR has an advantage over AVR in improving
voltage stability, by enabling to maintain a high voltage profile at generators
even under severe contingencies. TEPCO developed it soon after the blackout,
and it is now installed on most generators that are connected to the TEPCO

9
The PSVR is essentially a fast secondary voltage control loop applied to power plants that biases the
plant AVR to maintain/regulate voltage at HV side of a step-up transformer [S. Koishikawa et al.:
Advanced Control of Reactive Power Supply Enhancing Voltage Stability of a Bulk Power
Transmission System and a New Scheme of Monitor on Voltage Security, Session Paper CIGRE
38/39-1, 1990]
EHV network. A Static Var Compensator (SVC) had been tested at the Shin-
Shinano substation and proved its capability to supply voltage support in order
to keep a high voltage profile under severe contingencies. TEPCO installed six
SVCs in its EHV network in 1988 and 1989. SVCs, as well as PSVRs, can be
considered reactive reserves against short-term voltage instability because
of their quick response to voltage drops.
[Two voltage-versus-normalized-demand diagrams comparing increased VAR support from shunt capacitors with increased VAR support from generators (PSVR): both improve voltage stability, but capacitor support leaves less voltage margin to the VQC reference voltage, whereas generator support improves the voltage margin.]

Figure 1-64: Comparison between VAR support from shunt capacitors and generators.
o Installation of UVLS: The effectiveness of UVLS against voltage instability
was verified in the blackout. TEPCO has installed UVLS as a last resort for
voltage instability caused by extreme contingencies more severe than
credible ones. The TEPCO UVLS has four central units in the western
(weaker) side of the power system, and each unit can have different timer
settings for different speeds of voltage drop. When three central units out of
four detect a voltage drop exceeding their settings, load shedding signals will
be sent to local stations to regain voltage stability. The TEPCO UVLS has
flexibility to arrest both short-term and long-term voltage instability by
adjusting its settings.
o Implementation of On-line Voltage Security Monitoring System: Lessons
learned from the blackout indicated the importance of recognizing the
operating margin on-line. TEPCO also installed an on-line voltage security
monitoring system soon after the blackout and has been updating it
periodically so that operators at the control centers can take proactive
measures such as network configuration changes, generation schedule
modifications and so on. The first version of the on-line voltage security
monitoring system of TEPCO could draw P-V curves and Q-V curves from
on-line data by incorporating state estimation. Furthermore, the monitoring
system could draw P-V curves for the peak load on the day, assuming a
generation schedule derived by solving economic load dispatch. These P-V
curves were drawn under the normal operating condition and critical single
contingencies. In addition, for the power system after a single contingency, the
monitoring system determined automatic remedial actions to provide reactive
power in order to satisfy voltage requirements, using a Jacobian matrix at each
step, and drew a P-V curve after necessary actions were taken.
o Development of Simulation Tools: In order to simulate a slow disturbance,
such as a steep rise in electrical demand, TEPCO has developed a simulation
program called VQC Simulation. Responses and controls of VQC, PSVR, and
SVC are modeled in the program, and the program is used to study the settings of
these voltage controllers, as well as to analyze voltage stability under different
conditions. In addition, TEPCO has developed the Voltage Stability
Constrained Optimal Power Flow (VSCOPF) program10. In the program, a
voltage stability constraint is defined by the gradient of the P-V curve, i.e. the
sensitivity of voltage to load at the operating point. The gradient is negative on
the upper side of a P-V curve, and the smaller its magnitude, the farther the
operating point is from the nose of the curve. Users can specify the minimum
allowable value of the gradient, and an objective function is then minimized
while maintaining the necessary margin to voltage instability. TEPCO has been
building the integrated power system analysis environment MidFielder, in
which both the VQC Simulation program and the VSCOPF program are
available.
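Stated as a formula (an indicative sketch only; the full formulation is given in the reference cited above), the VSCOPF solves

    minimize f(x, u)   subject to   g(x, u) = 0   and   dV/dP >= g_min,   with g_min < 0,

where g(x, u) = 0 are the power flow equations, dV/dP is the P-V curve gradient at the operating point, and g_min is the user-specified minimum allowable gradient. Since the slope steepens towards minus infinity as the nose of the P-V curve is approached, enforcing this constraint keeps the optimized operating point a chosen distance away from the voltage stability limit.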
1.5.2 Australian Blackout of Friday 13th August 2004 [73]
The largest power system in Australia, which at the time of this incident had a
maximum demand of about 30 GW, stretches some 5,000 km along the coasts of
Queensland, NSW, Victoria and South Australia.
On the evening of Friday 13 August 2004 at 21:41 hrs, a current transformer at
Bayswater power station, located at a large generation centre in New South Wales
some 200 km north of the major load centre of Sydney, developed an internal fault
causing it to later explode. This fault triggered an event which resulted in the loss of
six large generating units in NSW. As a result, a total of 3,087 MW of generation was
lost at a time when the demand was 22,629 MW.

10
Y. Tada et al.: Development of Voltage Stability Constrained OPF as One of the Functions of the
Integrated Power System Analysis Package, Named IMPACT, the 6th Int. Power Engineering Conf.
(IPEC 2003), vol. 1, pp. 237-242, Singapore, Nov. 2003.
Figure 1-65: Overview of National Electricity Market and Plant that contributed to the Incident.
The faulty current transformer was situated on the middle breaker in a breaker-and-a-
half scheme, where one side was fed from a generating unit and the other side was
connected via a transmission line to another power station close by. The line
protection operated correctly to clear the fault. Fifteen seconds later the line auto-
reclosed onto the fault and subsequently tripped and locked out as a result. At
this point three of the four generating units at Bayswater tripped as a result of
generator differential protection, with a consequent loss of 1,971 MW. At the same
time, another generating unit on the central coast commenced its shutdown sequence,
initiated by negative phase sequence protection. This resulted in a further loss of
424 MW.
As a result of the sudden loss of generation the frequency fell rapidly to 48.9 Hz,
where the decline was arrested by automatic underfrequency load shedding (UFLS).
However, another generating unit, producing 542 MW, tripped 27 seconds after the fault
as a result of automatic voltage regulator protection. Finally, 36 seconds after the
fault, another 150 MW was tripped by a unit at Redbank as a result of underfrequency
protection. This resulted in further UFLS operation.
Figure 1-66: System frequency response.
A total of 1,535 MW, or 6.8% of the overall demand, was shed, with Queensland and
Victoria bearing the brunt of the load shedding. As a result of the load shedding and
Frequency Control Ancillary Services (FCAS), system collapse was averted.
Subsequent investigation into the incident has revealed that:
o The generator differential protections at Bayswater were working properly
according to their design and settings. However, the differential protection
could not discriminate for the rapid succession of two severe external faults
during the auto-reclose. A review of the protection has resulted in either a
reduction of the sensitivity of the protection settings or an upgrade of the
protection technology so that it can discriminate against such faults.
o The negative phase sequence protection operated prematurely as a result of a
faulty timer circuit in the generator negative phase sequence protection.
o A faulty element in the excitation control system caused the automatic
voltage regulator protection to trip the fifth unit.
o The generator underfrequency protection at Redbank operated according to
its settings.
o Most of the UFLS operated as designed. About 420 MW of load did not shed
as expected, but this did not have an adverse effect on the UFLS performance.
However, the sharing of the UFLS between regions was not equitable, with
NSW being one of the regions least affected by UFLS.
o FCAS operated beyond its requirements. Only sufficient FCAS was allocated
to cover the worst single credible contingency. However, this event
represented a non-credible contingency and more support was actually
obtained from the in-service generation than had been allocated.
In summary, the causes of the incident can be identified as:
o The initiating event was the failure of the current transformer. Whilst failures
of this nature should be minimized, this is considered a credible contingency
and the protection operated correctly to clear the fault. The system would have
been able to sustain such an incident if not for the subsequent events.
o Inability of a protective scheme to discriminate for successive faults outside its
zone of protection.
o Faulty elements in protective equipment and control systems that resulted in
inappropriate responses.
o Protection settings that may be unnecessarily sensitive, as in the case of the
generator underfrequency protection.
It is important to note that the operation of automated UFLS was the most crucial
element in preventing system collapse. The National Electricity Code states that every
market customer with load of more than 10 MW must make at least 60% of the load
available for automatic UFLS.
Recommendations that have come from the investigations included:
o Imposition of a requirement that generating units be able to withstand faults
that are reapplied as a result of reclosure in the network.
o Review of underfrequency load shedding should be undertaken to try to ensure
equitable sharing of UFLS amongst regions.
o Generating units should be able to withstand frequency disturbances within the
limits of the frequency operating standards.
Further information about the incident together with the findings and
recommendations of subsequent investigations can be found at
http://www.nemmco.com.au/marketandsystemevents/232-0020.htm.
1.5.3 Central-South System Collapse of the Peninsular Malaysia Grid System
on 13th January 2005
1.5.3.1 Incident Description
At 12:16 on 13th January 2005, bus section circuit breaker P10 at the Port Klang
(PKLG) power station switchyard was opened for the purpose of investigation and
emergency repair. This action initiated a series of equipment trippings that led to
interruption of supply to the southern part of the Central Klang Valley and the southern
part of Peninsular Malaysia. About 6,000 MW of supply was lost and most of the
supply was restored within 3 hours.
Conditions prior to the Disturbance
The North/East areas of the Peninsular Malaysia grid are net export areas, while the
Central/South areas are net import areas. The system demand was about 11,260 MW
and the total power transfer from North/East to Central/South was about 3,113 MW
prior to the disturbance.
The North to Central transmission link comprised a 500 kV double circuit from
BTRK - KPAR, and a 275 kV line from BTRK to KL North. At KPAR, the network
integrates with PKLG power station to transfer power to the Central area via the
PKLG - CBPS, PKLG - KULW and PKLG - KPAR/KULN 275 kV double circuit lines,
while the KULN - KULE and KULE - PULU 275 kV double circuit lines completed
the power transfer to the eastern part of the Central area. The East grid system
transfers power to the Central and Southern regions via the KAWA - KULE and
KAWA - YPGE 275 kV double circuit lines.
Events leading to the System Disturbance
The opening of P10 (a 275 kV bus section circuit breaker) at PKLG caused the
adjacent S20 (a 275 kV bus section circuit breaker) to trip on overcurrent, and the
PKLG 275 kV substation split into a north section and a south section. The north section,
comprising the PKLG - KPAR/KULN 275 kV double circuit line and one PKLG -
KULW circuit, experienced heavy loading. Heavy flow was also seen on the KULN -
KULE 275 kV line. An inter-trip scheme to protect the KULN - KULE line then
operated by tripping the KULE - PULU 275 kV double circuit line. Unfortunately,
with the PKLG 275 kV substation split, the loading on the remaining PKLG - KULW
275 kV circuit caused it to trip and, as a result, power swung onto the parallel 132 kV
network from KULE - SGBT, which also tripped out.
Just after the 132 kV KL East - SGBT circuits were lost, the North/East and the
Central/South subsystems began to pull apart, as evident in the frequencies recorded
at locations within each of the respective subsystems. After the KAWA - YPGE
(East - South) 275 kV double circuit line tripped out, the systems were left
interconnected only through the four 275/132 kV transformers at Port Klang, as shown
in Figure 1-67 below.
The weak interconnection through the transformers was not able to transfer the
3,113 MW flow between the subsystems, as it exceeded the system angular
stability limit. A pole slip condition ensued, as generators in the North/East subsystem
went out of step with the generators in the Central/South subsystem. This
phenomenon was recorded by the disturbance recorder located at PKLG and is
graphically illustrated in Figure 1-68.

Figure 1-67: Connection at PKLG prior to system separation
Figure 1-68: The out-of-step phenomenon as recorded at the Port Klang 132 kV bus.
Finally, the North/East and Central/South subsystems were physically and electrically
separated when all of the 275/132 kV transformers at PKLG tripped out. This left the
subsystems with different generation-demand imbalances, which was reflected
in the recorded frequencies. The North/East subsystem, with greater
generation than demand, had its frequency rise to as high as 51.36 Hz before
settling down to slightly above 50.5 Hz. The frequency of the Central/South
subsystem, where generation was significantly less than demand, decayed rapidly.
Following a temporary recovery initiated by underfrequency load shedding (UFLS),
the frequency rapidly fell below 47.5 Hz and the Central/South subsystem collapsed when
all remaining generators tripped out. Figure 1-69 clearly shows the path the subsystem
frequencies took after the separation.

Figure 1-69: January 13th 2005 subsystem frequencies recorded after the system separation.
The frequency in the Central/South subsystem fell below 47.5 Hz, triggering all 15
stages of the UFLS scheme, set between 49.5 Hz and 48.1 Hz. The total
load shedding initiated by UFLS amounted to 4,059 MW. Theoretically this should
have been sufficient to halt the decline in frequency and allow it to recover to
a safe operating level.
This was not the case, because a number of combined cycle gas turbine generating units
in the Central/South system tripped for various reasons (not due to generator
underfrequency protection, normally set at 47.5 Hz). This premature tripping of the
generating units caused a greater load-generation imbalance (3,113 + 1,365 MW
= 4,478 MW) in the Central/South system, well beyond the design capacity of the
UFLS; this retarded the recovery of frequency, reversed its course and finally
culminated in a steep descent leading to a total collapse of the subsystem. The
system frequency of the Central/South subsystem after the system split, and the impact
of the premature generator tripping on its recovery, is shown in Figure 1-70.
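The effect of the premature generation tripping can be illustrated with a very simple single-bus frequency calculation in Python. This is not a model of the TNB system: the inertia constant, the island load and the assumption of fifteen equal UFLS stages are illustrative; only the 3,113 MW and 1,365 MW deficits, the 4,059 MW total shed and the 49.5 Hz to 48.1 Hz stage range come from the description above.

# Single-bus frequency sketch: staged UFLS can arrest the initial 3,113 MW
# deficit, but not a deficit enlarged to 4,478 MW by premature generator tripping.
F0 = 50.0          # nominal frequency, Hz
H = 4.0            # assumed aggregate inertia constant, s
S_BASE = 8000.0    # assumed MVA rating of the islanded generation
LOAD0 = 8000.0     # assumed pre-split load of the Central/South island, MW

# fifteen assumed equal UFLS stages between 49.5 Hz and 48.1 Hz, 4,059 MW in total
STAGES = [(49.5 - i * 0.1, 4059.0 / 15) for i in range(15)]

def simulate(deficit_mw, extra_trip_mw=0.0, trip_time=3.0, t_end=30.0, dt=0.01):
    """Return the minimum frequency reached for a given generation deficit."""
    f, load, gen = F0, LOAD0, LOAD0 - deficit_mw
    shed = [False] * len(STAGES)
    f_min, t = f, 0.0
    while t < t_end and f > 46.0:          # treat 46 Hz as collapse of the island
        if extra_trip_mw and t >= trip_time:
            gen -= extra_trip_mw           # premature tripping of gas turbine units
            extra_trip_mw = 0.0
        for i, (f_set, block) in enumerate(STAGES):
            if not shed[i] and f <= f_set:
                load -= block              # one UFLS stage operates
                shed[i] = True
        # aggregated swing equation: df/dt = (Pgen - Pload) * f0 / (2 H S)
        f += dt * (gen - load) * F0 / (2.0 * H * S_BASE)
        f_min = min(f_min, f)
        t += dt
    return f_min

print("3,113 MW deficit only:              f_min = %.2f Hz" % simulate(3113.0))
print("plus 1,365 MW tripping at t = 3 s:  f_min = %.2f Hz" % simulate(3113.0, 1365.0))

With the initial 3,113 MW deficit alone, the staged shedding arrests the decline; adding the 1,365 MW of tripped gas turbine output leaves a residual deficit that no remaining UFLS stage can cover, and the frequency continues to fall.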
The reason for the unexpected combined cycle gas turbine generator tripping was
that system frequency fluctuations disrupted the fuel-to-air ratio of the premix burners,
resulting in a very lean mixture of fuel and air that caused flame instability. This was
further aggravated by step changes of the variable inlet guide vanes (VIGV) when
switching over from exhaust temperature control to governor/load control, and by errors
in the kinetic power calculation relative to the power measurement, which caused the gas
turbines to de-load further during low frequency. Thus, the turbine protection acted to
trip the turbines on high vibration in the combustion chamber (due to
combustion/flame instability) during the rapid power swings, and on exhaust
temperature.

Figure 1-70: Central-South system frequency trajectory following the split.
Some of these causes are similar to those of the national grid collapse of 1996.
Various recommendations were made and carried out, including a review of
protection settings and philosophy, daily on-line security assessment covering
exceptional contingencies, a review of the process of granting transmission equipment
outages, a review of system defense plans and restoration procedures, and a review of
the process flow from development planning, design and specification through project
implementation, inspection and commissioning.
Of special concern and interest is the initiative to improve the robustness of generators
during disturbance conditions by correcting the weaknesses identified in the findings on
the combined cycle gas turbine generator performance. It should be noted that it
would be acceptable for the generators in question to respond to a small frequency dip
(< 0.5 Hz) and, for large frequency dips, not to respond but to remain connected to the
grid even at reduced output. This initiative is to be implemented through a joint
effort with the generator owners.
1.5.3.2 Lessons Learned
In conclusion, the frequency responses of the TNB grid system to major
disturbances in 1996, 1998 and 2001 are shown in Figure 1-71 for comparison
purposes. It can be seen that, in the event of a system split or major frequency
disturbance, the system survives if the gas turbine based plants in the system do
not trip out inadvertently, as was the case in 1998 and 2001. However, in 1996 and
2005, the less than desirable performance of the gas turbine based units further
exacerbated the initial problem, resulting in widespread loss of load. Thus,
the key lesson learned was to start an initiative with the generator owners to jointly
work on making the power plants more robust and less likely to trip during such
disturbances.

[Figure annotation: the 2001 event involved 922 MW of generation tripping.]
Figure 1-71: System frequency response for major events.
1.5.4 Power System Disturbance In Peninsular Malaysia Grid on Wednesday
18th November 1998
At 13:22, a major system disturbance separated the Peninsular Malaysia Grid System
into two islands, interrupting service to about 1.4 million customers for periods ranging
from several minutes to about 3 hours.
At the time of the incident, the weather was fine and the system load was 7,553 MW. Just
before the disturbance, the North/East subsystem was exporting about 1,668 MW to
the South/Central subsystem, compared with a normal transfer of about 1,200 MW.
The higher import was due to the availability of efficient units in the North/East
subsystem and the outage/unavailability of efficient generating units in the Klang
Valley (Central) and Southern areas. The load demand in the South/Central subsystem
was satisfied through import of power via three 275 kV double circuit lines, namely
the BGJH-KULN (North-Central) lines (533 MW), the KAWA-KULE (East-Central)
line (661 MW) and the KAWA-YGPN (East-South) line (474 MW). The North-
Central connection comprised the KULN-BGJH 275 kV double circuit line and two
132 kV lines.
There were no outages on any of the 275 kV transmission lines. A geographical map
and schematic diagram of the Grid System with the key substations and 275 kV
transmission lines is shown in Figure 1-72 below.
1.5.4.1 Events Leading to the System Separation
At 13:10, a circuit of the KULN-BGJH 275 kV double circuit line sagged too close to
a tree and flashed over. The fault was cleared by the distance protection relay. In the
process, the miniature circuit breaker (MCB) of the VT associated with the circuit
breaker at the BGJH end of this circuit locked out. A second flashover occurred at the
same spot for the same reason four minutes later. The distance protection on that
circuit of the KULN-BGJH line could not clear the fault.
The uncleared fault caused the adjacent KULN-BGJH 275 kV circuit to trip, and this
effectively weakened the system. The power from the North/East area started to flow
to the Central/South area through a much longer path via the KAWA-KULE (North/East-
Central) and KAWA-YGPN (North/East-South) 275 kV lines. Power oscillations
were observed on these 275 kV lines, which connected the North/East system to the
South/Central system.
The fault was finally cleared ninety seconds after the start of the second flashover, by
back-up overcurrent protection of adjacent circuit breakers. The delayed fault
clearing caused the power oscillations to grow, and the 275 kV lines connecting the
North/East to the Central/South system tripped on power swing, separating the system
into two islands: North/East and South/Central.
1.5.4.2 Key Factors Causing System Separation
The system disturbance was due to a combination of circumstances and key factors as
follows:
o Inadequate right of way maintenance caused a tree flashover.
o Protection design philosophy was not satisfactory.
o The tripping of both KULN-BGJH lines effectively increased the electrical
angle between the North/East and the South/Central subsystems. Increased
power flow in a weakened transmission network gave rise to large oscillations.
o The prolonged line fault reinforced the oscillations and caused system voltage
to decrease.
o Tripping of several generation units, especially combined cycle gas turbine
units, during this oscillation period prior to the system separation, further
reduced the voltage support of the South/Central area.
o No contingency and remedial action plan was provided to guide NLDC control
engineers in controlling power oscillations.

[Geographical map and schematic of the Tenaga Nasional Berhad National Grid of Peninsular Malaysia, showing the main substations and power stations from Perlis, Kedah, Kelantan and Terengganu in the north, through Perak, Pahang, Selangor (Klang Valley) and Negeri Sembilan, to Melaka and Johor in the south, with interconnections to Thailand and Singapore; the line that tripped with the prolonged fault is marked on the map.]
Figure 1-72: TNB system.
1.5.4.3 Widespread Loss of Generation and Load
The separation of the Peninsular Malaysia Grid resulted in approximately 1,771 MW of
automatic underfrequency load shedding, 338 MW of manual load shedding and
approximately 2,520 MW of undesired generation loss. The disturbance created two
islands from the original Peninsular Grid System:

A. North/East Island
The system frequency in the North/East island spiked to 51.68 Hz and returned to
50.5 Hz in 25 seconds, before dipping to 49.7 Hz and returning to normal in about 3
minutes. A total of 17 generating units tripped after the system separation due to reverse
power and excitation system control problems, while 3 units were taken off manually.
The total generation loss was 1,760 MW. A total of 142,000 customers in the
North/East island were affected and 338 MW of load was lost. The 275 kV system
voltages in the North/East island rose to above 330 kV and returned to the normal level of
275 kV after 1 hour 8 minutes (Figure 1-73).
B. Central/South Island
The Central/South island faced a generation deficit of 1,830 MW in an island of
5,049 MW (about 36%), and the system frequency in the island dipped to 48.1 Hz in 4
seconds, returned to 49.5 Hz in 45 seconds and returned to normal in about 3
minutes. The situation was further exacerbated by the loss of the tie-line to Singapore at
49.1 Hz. A total of 1,771 MW of load was lost due to manual and automatic
underfrequency load shedding. Two generating units (155 MW) tripped before the system
separation and another 3 units (416 MW) tripped after the separation. About 1.203
million customers were affected. The voltages in the Central area returned to 275 kV,
from a low of 235 kV, within 1 minute after islanding. A graph showing the frequencies
of both the North/East and Central/South islands is shown in Figure 1-74.
1.5.4.4 Conclusions
While poor right of way maintenance management initiated the fault, the disturbance
could have been avoided if the protection scheme had cleared the line fault within the
designated time. However, due to a weakness in the design philosophy, the fault was not
cleared, and this led to a weakened and oscillating system that was aggravated by
manual intervention in the excitation control system and by the tripping of generating
units with poor excitation control, finally resulting in the splitting of the grid system.
There were no contingency and remedial action plans to guide NLDC control
engineers in controlling the power oscillations and the subsequent break-up of the system.
In terms of generation-load imbalance, this incident was in fact more severe than the
national system collapse of the Peninsular Malaysia grid system of August 3rd, 1996.
Nevertheless, the generation deficiency in the Central/South island of about 1,830 MW,
for an island size of 5,049 MW, or 36%, did not result in a total collapse of the island.
This improvement was due to actions taken from the lessons learnt after the August 3rd,
1996 incident, in particular the prompt and adequate operation of the underfrequency
load shedding scheme, and modifications to the gas turbine control systems for severe
frequency deviations.
1.5.4.5 Lessons Learned and Recommendations
The lessons learned and recommendations arising from this incident included:
o review and correction of weaknesses in the protection design philosophy;
o improvement of the management and control of maintenance practices;
o conducting regular short-term system studies (daily, weekly, seasonal) covering
exceptional conditions and applying the results to system operations;
o improvement of generator control systems so that the units remain robust during
disturbances;
o further improvements in the implementation of the automatic underfrequency
load shedding scheme;
o strengthening of the transmission grid by expediting reinforcement projects
between the North and Central areas; and
o improvement of restoration plans.

Figure 1-73: Frequency and Voltage of the North/East Island
[Plot of island frequencies versus time in seconds after separation: the North/East ('98NE') frequency rises to 51.676 Hz, while the Central/South ('98CS') frequency dips to 48.097 Hz.]
Figure 1-74: Frequency of the North/East and Central/South islands.
1.6 NERC Historical Data on Blackouts and Overall Blackout Risk
It is clear that large blackouts occur less frequently than small blackouts, but how much
less frequently? To examine this we consider the statistics of major North American power
transmission blackouts from 1984 to 1998 obtained from the North American
Electric Reliability Council [48]. One might expect the probability distribution of
blackout sizes to fall off exponentially. That is, for the larger blackouts, doubling the
blackout size squares its probability and so, after squaring the probability many times,
the very largest blackouts have an extremely small probability. However, analyses of
the NERC data show that the probability distribution of blackout sizes does not
decrease exponentially with the size of the blackout, but rather has an approximate
power law region [49, 50]. For example, Figure 1-75 plots on a log-log scale the
empirical probability distribution of energy unserved in North American blackouts.
The fall-off with blackout size is close to a power law dependence with an exponent
between -1 and -2. (Note that a power law dependence with exponent -1 implies that
doubling the blackout size only halves the probability. A power law dependence with
exponent -1 appears as a straight line of slope -1 on a log-log plot.) Thus the NERC
data suggest that large blackouts are much more likely than would be expected from
the common probability distributions that have exponential tails. The power law
region is of course always limited in extent in a practical power system by a finite cut-
off corresponding to the largest possible blackout (i.e. loss of all load).
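The distinction between an exponential tail and a power-law tail is easy to visualize. The short Python sketch below, which uses synthetic data rather than the NERC records, plots the empirical complementary distribution of sample blackout sizes on a log-log scale: a power-law tail appears as an approximately straight line, while an exponential tail bends sharply downward.

# Illustration with synthetic data (not the NERC records) of how a power-law tail
# differs from an exponential tail on a log-log plot of the empirical distribution.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 2000
power_law = (1.0 - rng.random(n)) ** (-1.0 / 1.5)     # Pareto samples, tail exponent ~ -1.5
exponential = 1.0 + rng.exponential(scale=2.0, size=n)

def ccdf(samples):
    """Empirical complementary distribution: probability of exceeding each size."""
    x = np.sort(samples)
    p = 1.0 - np.arange(len(samples)) / len(samples)
    return x, p

for data, label in [(power_law, "power-law tail"), (exponential, "exponential tail")]:
    x, p = ccdf(data)
    plt.loglog(x, p, label=label)

plt.xlabel("blackout size (arbitrary units)")
plt.ylabel("probability of exceeding this size")
plt.legend()
plt.show()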
Figure 1-75: North American blackout size probability distribution.
A similar power law dependence has been observed in several cascading failure
models when a suitable loading or stress is assumed. These models include power
system models [51-54] and abstract models of cascading failure [55, 56]. Moreover,
historical North American line outage data [57] also shows a probability distribution
of number of lines out that has a heavy tail [58].
Blackout risk is the product of blackout probability and blackout cost. There are
considerable uncertainties about blackout costs, especially the costs of large
blackouts. Indirect costs, such as possible social breakdown [59], economic resilience,
reputation, legal and structural costs may all be significant and all are hard to
quantify. Here we make the crude assumption that blackout cost is roughly
proportional to blackout size. In the case of an exponential dependence of blackout
probability on blackout size, large blackouts become rarer much faster than blackout
costs increase so that the risk of large blackouts is negligible. However, in the case of
a power law dependence of blackout probability on blackout size, the larger blackouts
can become rarer at a similar rate as costs increase, and then the risk of large
blackouts is comparable to the risk of small blackouts [60]. For example, if the power law dependence has exponent -1, then doubling the blackout size halves the probability of the blackout and doubles its cost, and therefore the risk of the blackout remains the same. Thus power laws in blackout size distributions significantly affect
the risk of large blackouts and this risk justifies the study and analysis of large
blackouts. That is, although large blackouts are rarer than small blackouts, their costs
are higher and they become rarer slowly enough that they can have comparable risk.
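In symbols (a sketch only, with proportionality constants omitted): risk(S) = C(S) x P(S). With cost C(S) proportional to blackout size S, an exponential tail P(S) ~ exp(-S/S0) gives risk(S) ~ S exp(-S/S0), which vanishes for S much larger than S0, so the risk of very large blackouts is negligible. A power law tail P(S) ~ S^(-a) with a between 1 and 2 gives risk(S) ~ S^(1-a); for a = 1 this is independent of S, so blackouts of every size contribute comparably to the overall risk.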
Large blackouts typically become widespread by a complicated sequence of cascading
events. The events and the dependencies between events are usually unanticipated or
unlikely because the anticipated and likely events have already been accounted for in
power system design and operation. The determined efforts to analyze and mitigate
the foreseen and likely failures are usually successful and this accounts for the
historically high reliability of bulk power generation and transmission systems in
developed economies. Advances are being made in analyzing the risk of likely
sequences of failures [61]. However, it remains infeasible to explicitly analyze in
detail the huge number of possible unanticipated or unlikely events in a potential

blackout because of the combinatorial explosion of possible rare events to be
considered and the difficulty of anticipating all interactions and failure modes in an
enormously complex network. Much can be gained from tracing the deterministic
causes of individual blackouts in a detailed post mortem analysis of the blackout. In
particular, it is good practice to analyze individual blackouts in detail after they
happen so that system weaknesses can be exposed and mitigated. A complementary
approach (top-down rather than bottom-up) to describe and predict the overall risk of
blackouts must necessarily neglect much of the detail of the blackouts and be some
sort of statistical description of the blackouts.
The approximate power law region observed in the data can be qualitatively attributed
to the dependency of events in the blackout. As the blackout progresses, the power
system usually becomes more stressed, and it becomes more likely that further events
will happen. This weakening of the power system as failures occur makes it more
likely that a smaller blackout will evolve into a larger blackout. Over time, system
stress or loading tends to increase due to load growth and tends to decrease due to the
system upgrades and improvements that are the engineering responses to simulated or
real blackouts. It has been suggested that these opposing forces tend to slowly shape
the power system towards a near power law distribution of blackout size [62].
There can be tradeoffs between the frequencies of small and large blackouts. One very
extreme and uneconomic example is that operating a power system without using any
interarea tie lines would eliminate large blackouts completely and increase the
frequency of small blackouts. Another example is that a policy of never shedding load
in emergencies to solve local problems would tend to reduce small blackouts but
would tend to cause an increased frequency of larger blackouts. Moreover, after a
transmission system upgrade, the patterns of use of the network can evolve to change
the balance between maximizing use of the transmission system and reliability [62-
64]. These considerations, together with the roughly comparable risk of small and
large blackouts, suggest that engineering efforts to mitigate blackouts should aim to
manage the blackout risk by considering the joint reduction of the frequency of small,
medium and large blackouts as well as trading off the blackout risk with the economic
benefits of maximizing the use of the transmission system. Methods for estimating
and monitoring blackout risk can emerge as cascading failure becomes better
understood and cascading failure simulations improve [65-67].
1.7 Probability Models for Estimating the Probabilities of Cascading
Outages
High-order contingencies in power systems are of great interest because of their potential to cause huge losses. The term high-order here means the loss of multiple elements within a short time period. Such events are usually of lower probability than N-1 events, which involve the loss of a single element. If a power system is weakened due to the loss of more than one transmission line, what is the probability that another transmission line trips? Such a probability is difficult to calculate or estimate, but being able to predict it would be useful. For example, it would help engineers in the transmission and generation planning process, where capital investments in new facilities must be weighed against the extent to which those facilities reduce the risk associated with contingencies. It could also help system operators to estimate and evaluate network security in operations, for control-room decision-making. Here, preventive actions, which cost money and are routinely taken in anticipation of N-1 events, are not reasonable for a rare event, since the certain cost of the preventive action cannot be justified for an event that is so unlikely. Given that the number of rare events is excessively large, and that it is neither possible nor necessary to analyze all of them, prioritizing the events becomes crucial for on-line analysis. The best way to prioritize events is by risk, which by definition is the expected impact. However, considerable computation would be needed to determine the impact. Another way to prioritize is by event probability, assuming the impacts of the events are of about the same magnitude. The event with the highest probability is then analyzed next when developing operational defense procedures.
Ways to estimate the probabilities of rare power system events include fitting an existing probability model to historical data, deriving the overall probability from the system structure and the individual component probabilities, and using Monte Carlo simulation. As indicated in the prior section and in [49, 51, 62], the use of distributions with power law regions to model the occurrence of large disturbances has been proposed. Later on, a number of probability distributions, which are variants of the quasi-binomial distribution and the generalized Poisson distribution (GPD) [68], were proposed in [55, 56, 67, 69, 58]. Reference [70] presents work done using importance sampling to expose hidden failures. A new model for forecasting the probability of high-order contingencies, exponentially accelerated cascading (EAC), has also been proposed [71]. In [71] it is shown that the EAC model provides the best fit, among a number of methods, to the probability of outages in the historic record of contingencies in the US system (data from [57]). The EAC method uses two parameters: p, the probability of an N-2 event occurring given an N-1 event at the start of a cascading transmission blackout, and an acceleration factor that quantifies the rate of increase of the conditional probability. The fitted values are p = 10.3% and an acceleration factor of 1.515. That is, 10.3% of all transmission outages involve more than one line, and after the first outage each successive outage becomes more likely by a factor of 1.515 for each additional line lost; this is shown in Figure 1-76. Note that, based on this assumed fit (the EAC technique), the theory indicates that beyond an N-8 condition further consequential outages are essentially certain (100%); in practice this may not be the case.
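The behavior described above can be illustrated with a short calculation. This is a sketch of the idea only, using the two fitted values quoted above; the exact formulation in [71] may differ.

# Sketch of the EAC idea: the conditional probability of losing a further
# line grows geometrically with the number of lines already lost.
p_start = 0.103      # probability that an N-1 event develops into N-2
accel = 1.515        # acceleration factor per additional line lost

def prob_next_outage(k):
    """Probability of losing line k+1 given that k lines are already out (capped at 1)."""
    return min(1.0, p_start * accel ** (k - 1))

prob_reached = 1.0   # probability of reaching at least N-k, given the initial N-1 event
for k in range(1, 9):
    print(f"P(reach N-{k}) = {prob_reached:.4f},  "
          f"P(further outage | N-{k}) = {prob_next_outage(k):.3f}")
    prob_reached *= prob_next_outage(k)

With these values, the conditional probability of a further outage given an N-4 condition is about 0.36, consistent with the annotation in Figure 1-77, and it saturates at 1.0 within a few more line losses.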
[Bar chart: number of events (logarithmic scale, 1 to 10,000) for contingency orders N-1 through N-8.]

Figure 1-76: Historic data of the number of transmission outages at 230kV and above in the US-
Canada, 1965-1985, [71].

[Two panels: (a) actual record of the conditional probabilities Pr(K k+1 | K k) for k = 1 to 7; (b) the EAC model fit of the observed conditional probability P(K k+1 | K k) versus the number of lines in outage, k, annotated to show that given an N-4 contingency the probability of a further outage is more than 30%.]
Figure 1-77: Conditional probability for successively increasing number of line outages based on
historic data of the number of transmission outages at 230kV and above in the US-Canada, 1965-1985.
1.8 Chapter Summary
In this chapter a description has been given of many of the blackouts and major power
system disturbances of the past several decades in the world. Figure 1-78 provides a
visual summary of this (note: some of the disturbances shown in Figure 1-78 are not
described in this document). A perusal of the descriptions of all the major blackouts
discussed in this chapter we lead us to the following brief outline of the general
anatomy of a blackout [72].
Most of the major grid blackouts described here were initiated by a single event (or
multiple-related events such as a fault and subsequent malfunction of protective
relays) that gradually led to cascading outages of other lines and equipment and
eventual collapse of the entire system. Post mortem analysis (and hindsight) often
identifies a possible means of mitigating the initial event or minimizing its impact to
reduce the risk of the ensuing cascading trips of lines and generation. Given the
complexity and immensity of modern power systems, it is not possible to totally
eliminate the risk of blackouts. However, there are certainly means of minimizing the
risk based on lessons learned from the general root causes and nature of these events.
Figure 1-79 [72] provides a graphical means of visualizing the sequence of events in a
blackout. Typically, the blackout can be traced back to the outage of a single
transmission (or generation) element. The majority of these events tend to be the
result of equipment failure (aging equipment, malfunctioning protective devices etc.)

or due to environmental factors (e.g. tree faults on hot summer days). Human error
can also be a contributing factor. Most modern power systems are designed to be able
to operate safely and in a stable fashion for such single N-1 (or multiple common-
mode) outages. However, depending on the severity of the event, the system may
enter into an emergency state following the disturbance, particularly during heavy
loading conditions. Thus, if proper automatic control actions or operator intervention
is not taken decisively, the system may be susceptible to further failures and
subsequent cascading. Also, though quite rare, it is possible (see for example the
discussion on the Swedish blackout in section 1.3.1) to have a second uncorrelated
event occur while the system is in this emergency state following an N-1 event, prior
to system readjustment. This too can have devastating results.
In the case where proper operator or automatic control actions are not taken, the
consequences can be many. In the simplest case (e.g. many of the blackouts discussed
here) parallel transmission paths may become overloaded due to a redistribution of
power after the initial outage and thus a process of cascading transmission line
outages may ensue. At some point this will lead to dynamic performance issues such
as:
Transient angular instability: the events may lead to large and rapid
deviations in generator rotor angles across the system. If this is then followed
by inadequate electrical coupling between groups of generators (due to the loss
of transmission lines), it may lead to a lack of synchronizing power and thus
an inability for generators in different regions of the system to keep in
synchronism with each other. The observed response is a continuously
growing angular shift between groups of generators. As the two regions
separate in angle, the voltage in between the two regions will be depressed.
Such depressed voltage may lead to transmission line protective relays
tripping additional lines and thus possible eventual severing of all ac
transmission paths between the two regions. Out of step relays may also be
used to deliberately sever ties between the two systems in the event of
transient instability. This phenomenon occurs within a few seconds. (See for
example Figure 1-51).
Growing power oscillations (small-signal instability): in this case the
weakening of the transmission system coupled with high power transfer levels
leads to uncontrolled growing electromechanical oscillations between groups
of generators. The observed response is an uncontrolled growing oscillation in
power on transmission corridors until once again protective relays result in
further splitting up of the system. This phenomenon may take several to tens
of seconds to unfold. (Figure 1-9 shows an example of this phenomenon.)
Voltage instability or collapse: in some cases, particularly during peak system load conditions, voltage instability may occur. This may be a very fast
phenomenon due to stalling of motor loads (such as air-conditioning motors)
or may take several minutes to unfold in the case of load being restored due to
on-load tap-changer action and other automatic devices. (Figure 1-13 shows an
example of slow voltage collapse.)
Frequency instability: in the case where the system has split up into
islands (or subsystems) due to cascading outages or one of the aforementioned
dynamic issues, some of these islands may experience a severe imbalance
between generation and load. As such, if proper automatic actions are not taken to balance load and generation (sometimes this may not even be possible, depending on the total imbalance when the island is formed), there will be a rapid uncontrolled decline (for cases with generation deficiency) or rise (for cases with generation surplus) of frequency, leading to widespread
generation tripping and subsequent blackout of the island. (See Figures 1-22
and 1-70 for examples of this.)
Eventually, a point of no return is reached at which time the rapid succession of
cascading events becomes unmanageable and a chain reaction of load and generation
tripping leads to the ultimate blackout.
Data and models for overall blackout risk are emerging and sections 1.6 and 1.7
provide some insights on this challenging topic. However, when blackouts occur the
socio-economic impact is devastating. Based on the discussion in this chapter, some
of the clear root causes for these events are [72]:
A lack of reliable real-time data.
Thus, a lack of time to take decisive and appropriate remedial action against
unfolding events on the system.
Increased failure in aging equipment.
A lack of properly automated and coordinated controls to take immediate and
decisive action against system events in an effort to prevent cascading. In
particular, a lack of sufficient coordination between various protection and
controls with common spheres of influence.
A lack of coordination and communication between transmission system
operators in neighboring systems to help take decisive action against unfolding
events.
Power systems are increasingly being operated closer to their limits, with higher levels of power transfer over longer distances.
In recent years, some of these problems may have been driven by changing priorities
for expenditures on maintenance, reinforcement of the transmission system and
operator training. Nonetheless, with proper and prudent expenditure the appropriate
technologies do exist to address these root causes. In addition, it is incumbent on
policy makers to ensure that reliability standards are made mandatory and
enforceable, backed by meaningful and effective penalties for non-compliance with
the standards. Furthermore, reliability standards should be reviewed periodically,
taking into account experiences from major system incidents (such as those described
here) and evolving technologies (such as those described in sections 3.3, 3.4, 3.5
and 3.6). At a regulatory body level, clarification should be provided on the need for
expenditure and investment for bulk system reliability and how such expenditure will
be recoverable through transmission rates.
Finally, from a research perspective, thought needs to be given to the limitations of present deterministic reliability criteria, which are primarily limited to addressing single contingencies, i.e. the N-1 criterion.

[World map indicating the locations of major disturbances, including: 1965, 1977 and 2003 Northeast Blackouts; 1996 Western System Breakups; 1983 Sweden; 2003 Southern Sweden and Eastern Denmark; 2004 Western Norway; 2003 Helsinki; 2003 UK; 2005 Swiss Railway; 1978 French Blackout; 2004 Greece; 2001 Crete; 2003 Italian Blackout; 2006 UCTE; 2000 Portugal; 1987 Tokyo; 2003 Algeria; 1990 Egypt; 2001 Nigeria; 2002 Colombia; 2002 Argentina; 1997 Chilean Blackout; Brazilian Blackouts 1984, 1985, 1999 and 2002; Malaysia 1998 and 2005; 2004 Australian Blackout; Auckland Blackouts 1988 and 2006.]
Figure 1-78: Major system disturbances and blackouts of the past several decades. Not all of the
events shown are described in this report.

Figure 1-79: General trend of events leading to a blackout (reproduced with permission from [72]).

1.9 References
[1] G. D. Friedlander, The Northeast power failure: a blanket of darkness, IEEE Spectrum, Vol. 3, No. 2, Feb. 1966.
[2] Federal Power Commission's report to the President, Northeast Power Failure, dated Dec. 6, 1965, available from the United States Government Printing Office.
[3] IEEE/CIGRE Joint Task Force on Stability Terms and Definitions, Definition and Classification
of Power System Stability, IEEE Transactions on Power Systems, Vol. 19, No. 3, pp. 1387-1401,
August 2004.
[4] G. S. Vassell, Northeast Blackout of 1965, IEEE Power Engineering Review, Jan. 1991.
[5] Final Report on the August 14, 2003 Blackout in the United States and Canada: Causes and
Recommendations, U.S.-Canada Power System Outage Task Force, April 2004. Available at
https://reports.energy.gov/BlackoutFinal-Web.pdf
[6] J. F. Hauer and J. E. Dagle, Pacific Northwest National Laboratory Consortium for Electric
Reliability Technology Solutions: Grid of the Future White Paper on Review of Recent Reliability
Issues and System Events, December 1999, Prepared for the Transmission Reliability Program
Office of Power Technologies Assistant Secretary for Energy Efficiency and Renewable Energy
U.S. Department of Energy.
[7] R. G. Farmer and E. H. Allen, Power System Dynamic Performance Advancement From History
of North American Blackouts, Proceedings of the IEEE PSCE, Atlanta, GA, 2006.
[8] The Electric Power Outages in the Western United States, July 2-3, 1996. Report to the President
of the United States by the Secretary of Energy, August 2, 1996.
[9] R. Hardy, BPA Administrator, Statement Before the Subcommittee on Water and Power
Resources of the House Committee on Resources. Washington, D.C., November 7, 1996.
[10] C. W. Taylor and D. C. Erickson, "Recording and Analyzing the July 2 Cascading Outage," IEEE
Computer Applications in Power, vol. 10, No. 1, pp. 26-30, January 1997.
[11] BPA Transmission System Reactive Study Final Report of Blue Ribbon Panel. Bonneville
Power Administration, Portland, Oregon.
[12] D. N. Kosterev, C. W. Taylor, and W. A. Mittelstadt, "Model Validation For The August 10, 1996
WSCC System Outage," IEEE Trans. Power Systems, vol. 14, no. 3, pp. 967-979, August 1999.
[13] K. Morison and S. Yirga, System Disturbance Stability Studies for Western System Coordinating
Council (WSCC), EPRI Report TR-108256, September 1997.
[14] V. R. VanZandt, M. J. Landauer, W. A. Mittelstadt, and D. S. Watkins, "A Prospective Look at
Reliability with Lessons From the August 10, 1996 Western System Disturbance, Proc.
International Electric Research Exchange Workshop on Future Directions in Power System
Reliability, Palo Alto, CA, May 1-2, 1997.
[15] A Survey of the Implications to California of the August 10, 1996 Western States Power Outage.
Report of the California Energy Commission, June 1997. Available on the Internet at
http://www.energy.ca.gov/electricity/index.html#reliability
[16] C. W. Taylor, Improving Grid Behavior, IEEE Spectrum, June 1999.
[17] UCTE Operation Handbook, available at www.ucte.org.
[18] www.terna.it
[19] UCTE Report of the Investigation Committee on the 28 September 2003 Blackout in Italy
available at: http://www.ucte.org/pdf/News/20040427_UCTE_IC_Final_report.pdf
[20] SFOE, Report on the blackout in Italy on 28 September 2003, November 2003, available at
http://www.energie-
schweiz.ch/imperia/md/content/energiemrkteetrgertechniken/elektrizitt/strompanne03/4.pdf
[21] AEEG and CRE: Report on the events of September 28th, 2003 culminating in the separation of
the Italian power system from the other UCTE networks, available at :
http://www.autorita.energia.it/docs/04/061-04all.pdf

[22] Blackout del sistema elettrico italiano del 28 settembre 2003 (in Italian). Ministero delle
Attività produttive, Roma, 28 November 2003.
[23] AEEG: Resoconto dell'attività conoscitiva in ordine alla interruzione del servizio elettrico
verificatasi il giorno 28 settembre 2003 (partially in Italian). Available at:
http://www.autorita.energia.it/docs/04/083-04all.pdf, June 2004
[24] S. Corsi and C. Sabelli, General Blackout in Italy: Sunday September 28, 2003, h. 03:28:00,
Proceedings of IEEE-PES General Meeting, PSSS Panel on Recent blackouts, Denver 2004.
[25] A.Berizzi: The Italian 2003 blackout. 2004 IEEE PES General Meeting, PSSS Panel on Recent
Blackouts, Denver, June 2004.
[26] S. Corsi, Wide Area Voltage Regulation and Protection: when their coordination is simple,
Power Tech 05, St. Petersburg, June 2005.
[27] S. Corsi, G. Cappai and I. Valad, Wide Area Voltage Protection, CIGRE 2006, Paris.
[28] Vattenfall: Elavbrottet 27-12-1983. Rapport från Vattenfalls Störningskommisjon, Vattenfall
1984 (in Swedish).
[29] G. L. Doorman, K. Uhlen, G. H. Kjolle and E. S. Huse, Vulnerability Analysis of the Nordic
Power System, IEEE Transactions on Power Systems, Volume 21, Issue 1, Feb. 2006
Page(s):402 - 410
[30] C. D. Vournas, V. C. Nikolaidis and A. Tassoulis, Experience from the Athens Blackout of July
12, 2004, IEEE Power Tech, St. Petersburg, June 2005.
[31] T. Van Cutsem, J. Kabouris, G. Christoforidis and C. D. Vournas, Application of real-time
voltage security assessment to the Hellenic Interconnected system IEE Proc.-Gen. Transm.
Distrib., Vol 152, pp. 123-131, Jan. 2005.
[32] Crete system annual report PPC, 2001
[33] J. Sylignakis, D. Georgiadis and N. Hatziargyriou, Study of Collapse of the Crete System due to
Large Power Generation Loss, MED02/286, Proceedings of MedPower 2002, 4-6 November
2002, Athens, Greece
[34] N. Hatziargyriou, J.A. Pecas Lopes, J. Stefanakis, E. Karapidakis, M.H. Vasconcelos and A.
Gigantidou, Artificial Intelligence Techniques applied to Dynamic Security Assessment of
Isolated Systems with High Wind Power Penetration, 2000 Session of CIGRE, Paris, August
2000.
[35] N.D. Hatziargyriou and E.S. Karapidakis, Online preventive dynamic security of isolated power
systems using decision trees, IEEE Transactions on Power Systems, Volume: 17 Issue: 2, May
2002, pp. 297-304.
[36] The Blackout of the SBB system 22 June 2005, Report from Schweizerische Bundesbahnen SBB,
11 August 2005 (In German).
[37] Interim Great Britain Seven Year Statement, November 2004, National Grid Transco.
[38] Executive Summary - Report into London incident - August 2003, National Grid Transco.
[39] UCTE Final Report System Disturbance on 4 November 2006, available at
http://www.ucte.org/pdf/Publications/2007/Final-Report-20070130.pdf
[40] IEEE Task force on Load Representation for Dynamic Performance, Load Representation for
Dynamic Performance Analysis, IEEE Transactions on Power Systems, Vol. 8, No. 2, pp. 213-
221, May 1993.
[41] J. Arrillaga, C. P. Arnold and B. J. Harker, Computer Modeling of Electrical Power Systems, John
Wiley & Sons, 1983.
[42] T. Van Cutsem, Voltage Collapse Mechanism-A case Study, Bulk Power System Voltage
Phenomena II-Voltage Stability and Security, pp. 85-101, August 1991.
[43] C. A. Cañizares, editor, Voltage Stability Assessment: Concepts, Practices and Tools, IEEE-
PES Power Systems Stability Subcommittee Special Publication, SP101PSS, August 2002.

[44] G.K. Morrison, B. Gao, and P. Kundur, Voltage Stability Analysis Using Static and Dynamic
Approaches, IEEE Transactions on Power Systems, Vol. 8, No. 3, pp. 1159-1171, August 1993.
[45] Y. Hain and I. Schweitzer, Analysis of the Power Blackout of June 8, 1995 in the Israel Electric
Corporation, IEEE Transactions on Power Systems, Vol. 12, No. 4, pp. 1752-1757,
November 1997.
[46] R. Hirvonen and L. Pottonen, Low Voltages after a Disturbance in the Finnish 400 kV Network,
Power Tech 05, St. Petersburg, June 2005, pp. 231-239.
[47] L. S. Vargas and C. A. Cañizares, Time Dependence of Controls to Avoid Voltage Collapse,
IEEE Transactions on Power Systems, Vol. 15, No. 4, November 2000, pp. 1367-1375.
[48] Information on electric system disturbances in North America can be downloaded from the
NERC Disturbance Analysis Working Group (DAWG) website at www.nerc.com.
[49] B.A. Carreras, D.E. Newman, I. Dobson, A.B. Poole, Evidence for self-organized criticality in a
time series of electric power system blackout, IEEE Trans. Circuits and Systems, Part I. vol. 51,
no. 9, Sept. 2004.
[50] J. Chen, J.S. Thorp, and M. Parashar, Analysis of electric power system disturbance data, 34th
Hawaii Intl. Conf. on System Sciences, Maui, Hawaii, January 2001.
[51] B.A. Carreras, V.E. Lynch, I. Dobson, D.E. Newman, Critical Points and Transitions in a Power
Transmission Model, Chaos, vol. 12, no. 4, 2002, pp. 985-994.
[52] D.P. Nedic, I. Dobson, D.S. Kirschen, B.A. Carreras, V.E. Lynch, Criticality in a cascading
failure blackout model, International Journal of Electrical Power and Energy Systems, vol 28,
2006, pp 627-633.
[53] J. Chen, J.S. Thorp, I. Dobson, Cascading dynamics and mitigation assessment in power system
disturbances via a hidden failure model, International Journal of Electrical Power and Energy
Systems, vol 27, no 4, May 2005, pp. 318-326
[54] H. Liao, J. Apt, S. Talukdar, Phase transitions in the probability of cascading failures, Electricity
Transmission in Deregulated Markets, conference at Carnegie Mellon University, Pittsburgh PA
USA Dec. 2004.
[55] I. Dobson, B.A. Carreras, D.E. Newman, A loading-dependent model of probabilistic cascading
failure, Probability in the Engineering and Informational Sciences, vol. 19, no. 1, January 2005.
[56] I. Dobson, B.A. Carreras, D.E. Newman, A branching process approximation to cascading load-
dependent system failure, 37th Hawaii Intl. Conf. on System Sciences, Hawaii, 2004.
[57] R. Adler, S. Daniel, C. Heising, M. Lauby, R. Ludorf, T. White, An IEEE survey of US and
Canadian overhead transmission outages at 230 kV and above, IEEE Transactions on Power
Delivery, vol. 9, no. 1, Jan. 1994, pp. 21-39.
[58] Q. Chen and J. D. McCalley, A Cluster Distribution as a Model for Estimating High-order Event Probabilities in Power Systems, Probability in the Engineering and Informational Sciences, Volume 19, Issue 4, 2005, pp. 489-505.
[59] R. Billinton, R.N. Allan, Reliability evaluation of power systems, second edition, Chapter 13,
Plenum Press, New York, 1996.
[60] B.A. Carreras, V.E. Lynch, D.E. Newman, I. Dobson, Blackout mitigation assessment in power
transmission systems, 36th Hawaii International Conference on System Sciences, Hawaii, 2003.
[61] M. Ni, J.D. McCalley, V. Vittal, T. Tayyib, Online risk-based security assessment, IEEE
Transactions on Power Systems, vol. 18, no. 1, pp. 258-265, 2003.
[62] B.A. Carreras, V.E. Lynch, I. Dobson, D.E. Newman, Complex dynamics of blackouts in power
transmission systems, Chaos, vol. 14, no. 3, September 2004.
[63] D. Kirschen, G. Strbac, Why investments do not prevent blackouts, UMIST (now University of
Manchester) August 2003, available at
www.ksg.harvard.edu/hepg/Standard_Mkt_dsgn/Blackout_Kirschen_Strbac_082703.pdf

[64] N. D. Reppen, Increasing utilization of the transmission grid requires new reliability criteria and
comprehensive reliability assessment, Eighth International Conference on Probabilistic Methods
Applied to Power Systems, Iowa State University, Ames, Iowa, September 2004
[65] D. S. Kirschen, D. Jayaweera, D. P. Nedic, R. N. Allan, A probabilistic indicator of system stress,
IEEE Transactions on Power Systems, vol. 19, no. 3, 2004.
[66] R. C. Hardiman, M. T. Kumbale, Y. V. Makarov, An advanced tool for analyzing multiple
cascading failures, Eighth International Conference on Probabilistic Methods Applied to Power
Systems, Iowa State University, Ames, Iowa, September 2004
[67] I. Dobson, K .R. Wierzbicki, B.A. Carreras, V.E. Lynch, D.E. Newman, An estimator of
propagation of cascading failure, 39th Hawaii Intl. Conf. on System Sciences, Kauai, Hawaii,
2006.
[68] P. C. Consul, Generalized Poisson Distributions: Properties and Applications, Marcel Dekker, New York, 1989.
[69] I. Dobson, B.A. Carreras, D.E. Newman, Branching process models for the exponentially
increasing portions of cascading failure blackouts, Thirty-eighth Hawaii International
Conference on System Sciences, Hawaii, January 2005
[70] A. G. Phadke and J. S. Thorp, Expose hidden failures to prevent cascading outages in power
systems, IEEE Comput. Appl. Power, vol. 9, no. 3, pp. 20-23, Jul. 1996.
[71] Q. Chen, C. Jiang, W. Qiu and J. D. McCalley, Probability Models for Estimating the
Probabilities of Cascading Outages in High Voltage Transmission Network, IEEE Transactions
on Power Systems, vol. 21, no. 3, August 2006, pp. 1423-1431.
[72] P. Pourbeik, P. S. Kundur and C. W. Taylor, The Anatomy of a Power Grid Blackout, IEEE
Power & Energy Magazine, Vol. 4, No. 5, September/October 2006.
[73] NEMMCO report Power System Incident Report Friday 13 August 2004, 28 January, 2005,
http://www.nemmco.com.au/marketandsystemevents/232-0022.pdf.

CHAPTER 2

Best Practices to Improve Power System Dynamic Performance and Reduce Risk of Cascading Blackouts
2.1 Introduction
This chapter surveys and recommends best practices related to system dynamic
performance for the avoidance of large-scale cascading blackouts. Power system
dynamic performance encompasses rotor angle stability, voltage stability, and
frequency stability [1]. In recent years voltage stability has been the dominant concern
in many parts of power systems. Many aspects of power system design and operation
are involved including power plant equipment, various control and protection devices,
reactive power compensation, voltage profile practices, emergency controls, models
and simulation methods, reliability standards, and restructured industry effects.
Practices for maintaining stable electrical islands following controlled or uncontrolled
separations are included.
Following the dramatic blackouts in 2003 and 2004, the industry has worked
diligently to reduce the risk of cascading failures. We acknowledge and summarize
this work and make further recommendations.
Many blackouts have originated as a relatively slow progression of equipment trips
related to higher than average loadings. While addressing common factors associated
with these initial equipment trips (e.g. vegetation management) will substantially
reduce the probability of future blackouts, these issues are largely beyond the scope of
the IEEE Power System Dynamic Performance Committee. Implementation of best
practices related to dynamic performance acts as an additional layer of protection,
promoting the philosophy of defense in depth.
In today's market environment, best practice guides and enforceable standards are needed more than ever to help prevent blackouts.
Terrorist attacks on power systems have also become a concern. Best practices to
improve power system dynamic performance reliability, robustness, and resilience for
natural and accidental events also apply to improving network performance against
terrorist attacks.

2.2 Power Plant Equipment


Power system dynamic control equipment plays a crucial role in determining the degradation process leading to a blackout. These control devices are located at power plants and substations and can be classified as regulation or protection equipment. Most of them work autonomously, while some can operate in a coordinated way through a higher-level wide-area controller that also significantly affects the power system dynamics.
Power plant equipment is an integral part of the interconnected power system. Of
particular interest are two types of plant equipment:
Unit Controls. Equipment used to control the response of the turbine-generator under

normal and contingency operating conditions. Of interest are excitation controls
(including power system stabilizers and limiters), prime mover and energy supply
controls, and automatic generation control.
Unit Protection. Equipment designed to disconnect the generator from the system to
minimize potential damage, especially under contingency conditions. Examples are:
over-excitation protection, loss of excitation protection, and overspeed protection.
Protection devices, including boiler and turbine protection, have responded to external
conditions and undesirably removed generators from the power system during
disturbances [2] (see also discussion of blackouts in Chapter 1).
Critical functions in the interconnected power system dynamics include:
Voltage support, through reactive power, in the local area (generators are the
main source of dynamic voltage support in the interconnected power system),
and
Frequency support, through active power control and the inertia of generators,
for the entire interconnected network.
In order to carry out these functions effectively, the equipment must be designed,
engineered, tuned and operated to ensure satisfactory dynamic performance of the
entire interconnected system. Specifically:
Unit controls must operate such that they actively support the system voltage
and frequency requirements. Excitation limiters (overexcitation,
underexcitation, and volts/hertz) should be coordinated with protection.
Unit protection must not cause premature generator trip-outs. The goal is to
maximize equipment utilization while avoiding equipment damage. Thus,
there should be a balance between the needs to disconnect equipment to
protect it and to keep generators on-line to support the power system. For this,
equipment must be designed to ride through an acceptable range of temporary
excursions in voltage, frequency, active and reactive power.

2.2.1 Generator controls


Generators have fast dynamics. The main dynamics of interest are those of the excitation equipment with automatic voltage regulator (AVR) that controls the generator stator terminal voltage control loop (dominant time constant (DTC) of about 0.5 s). The AVR may include a power system stabilizer, under-excitation and over-excitation limiters (DTC of about 10 s), and a volts/hertz limiter. Reference 3 provides details and recommended practices, especially for angle stability. The continuous reactive power capability varies with the power factor rating and the generator cooling class.
AVR line drop compensation (LDC), with the same DTC as the AVR, moves the voltage-regulated point into the generator transformer reactance for tighter regulation of the transmission-side voltage [4]. Excessive LDC causes a positive feedback effect with parallel generators. Reactive droop compensation (reversed-polarity LDC) or cross-current compensation provides negative feedback to stabilize paralleled generators.
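As a simple illustration of these two forms of compensation, the following is a minimal sketch in per unit with illustrative values; sign conventions vary between vendors and standards, and this is not a representation of any particular AVR implementation.

# Sketch of line drop / reactive droop compensation in per unit.
# Here a positive xc moves the regulated point toward the transformer high side
# (line drop compensation); a negative xc gives reactive droop, which stabilizes
# reactive power sharing between paralleled generators.  Sign conventions differ
# between implementations.
def compensated_voltage(v_term, i_term, xc):
    """Voltage magnitude presented to the AVR setpoint comparison."""
    return abs(v_term - 1j * xc * i_term)

v_term = 1.00 + 0.00j     # terminal voltage phasor (pu)
i_term = 0.80 - 0.30j     # stator current phasor (pu), overexcited (lagging)

print(compensated_voltage(v_term, i_term, +0.05))  # LDC: AVR sees less than 1.0 pu, raises excitation
print(compensated_voltage(v_term, i_term, -0.05))  # droop: AVR sees more than 1.0 pu, backs off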
Generators are paralleled to the grid by a step-up transformer that, in some applications, is under Load Tap Changer (LTC) control: stepwise manual or automatic (DTC of about 10 s) control with hysteresis and delay. Often generator step-up transformers have only no-load taps, and a compromise between the voltage and reactive power ranges is required [5, 6]. Consideration must also be given to station service voltage

performance and to coordination of the tap positions of the generator step-up
transformers with the station service transformers.
A recent concept in power plant voltage control is High Side Voltage Control (HSVC)
allowing generators to participate in Secondary Voltage Regulation (SVR) [4, 7, 8,
121, 122]. HSVC (at the power plant level) and SVR (at the wide-area level) rapidly
(DTC of about 5 s for the proportional effect and about 50 s for the integral effect) support the
local EHV bus or the pilot node EHV area bus, respectively. Therefore they
significantly contribute to maintaining voltages during disturbances, locally or in a
wide-area framework. LDC and HSVC are discussed further in the next section.
Priority should be given to upgrading AVRs whose regulating, stabilizing and limiting
loops must operate with high cut-off frequency. Moreover the use of HSVC, possibly
operating under SVR, is suggested.
The turbine governor (speed-governor) also affects unit dynamics and stability. It is
the primary means of controlling system frequency for generators in parallel operation
in a power system [9]. This is done through feedback of the unit's speed (and sometimes also electrical power and/or valve position) to control the unit's active power, following a droop characteristic. Units can also contribute to grid secondary power/frequency regulation (automatic generation control) within an active power control band. All these conventional closed-loop controls are related to frequency regulation and angle stability. On thermal units, reheat turbine fast valving can be used to stabilize the system following large disturbances [11]. As described in Section 2.6, special
protection system (SPS) actions such as generator tripping are possible to improve
angle stability. When properly applied, SPS can also be used to improve the voltage
and thermal overload performance of the network.
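As a simple numerical illustration of the droop characteristic mentioned above (a sketch with generic values, not tied to any particular governor model):

# Steady-state primary frequency response of a unit on droop control.
def droop_response_mw(delta_f_hz, f_nom_hz, droop, p_rated_mw, headroom_mw):
    """MW change commanded by a governor with the given droop (e.g. 0.05 = 5%)."""
    delta_p = -(delta_f_hz / f_nom_hz) / droop * p_rated_mw
    return max(-headroom_mw, min(headroom_mw, delta_p))

# A 500 MW unit with 5% droop and 50 MW of headroom, for a 0.2 Hz frequency dip:
print(droop_response_mw(-0.2, 50.0, 0.05, 500.0, 50.0))   # 40.0 MW increase

Units operating at their limits, or under outer-loop megawatt control as discussed below, cannot deliver this response.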
It is preferred to avoid operating thermal power plants under outer-loop megawatt
control, which overrides the turbine-generator primary governor response and
maintains a fixed megawatt output on the unit.

2.2.2 Generator protection


Power plant protection has to be coordinated with the allowed voltage and frequency
regulating ranges. Therefore the generator over/under-voltage protection typically has thresholds of about ±10% of nominal voltage. Under/over-frequency relays generally operate within ±5% of nominal speed. Loss of excitation protection operates
outside the underexcitation limit. Moreover the overcurrent and load surge protections
have to be delayed with respect to the AVR and governor dynamics.
In general, the majority of unexpected operations of generator protection systems are
due to the lack of coordination of generator control and protection functions with
system control and protection functions. Thus, the application of new technology may
not improve the performance of protection. There are, however, some exceptions. For
example, old legacy excitation equipment still in-service in some power plants that do
not have overexcitation limiters are more prone to tripping during a severe
disturbance (see section 2.3.2.3), thus replacing legacy equipment with modern
control systems can have benefits. Also, in some cases, the characteristics of existing
protection devices can change significantly during severe frequency or voltage
fluctuations. This might be a potential opportunity for new technology to resolve such
issues. This type of limitation can only be discovered during special testing of the
generator protection, not during routine testing.

2.2.3 Model Validation
Adequate commissioning and periodic tests and tuning of plant equipment, mainly control and protection systems, is very important in verifying engineering and operational objectives, and thus for system stability and blackout prevention performance. This aspect is often neglected, even though most field problems involve misoperation of regulating and protection equipment rather than a lack of more sophisticated control functionality. Commissioning often skips tests related to large system perturbations, and the risk of inadequate control equipment tuning may be high. The use of real-time detailed simulators to validate models or other commissioning tests is rare and expensive. Such tests are often not requested by the customer, who avoids the fine-tuning costs and simply waits for problems that may appear. Again, periodic control system maintenance should be mandatory because control equipment may be inadvertently mal-adjusted or partially disabled. The consequence is that, when required by an unusual critical situation, the regulating/protection controls do not work as desired. Best practice is to require
validated dynamic models and model data as part of commissioning, periodic testing
[10], and the monitoring of actual performance for system events.
Adequate power plant control room monitoring of the operating status of the
regulating and protection functions, and of their signals and alarms, is very important.
At the equipment rack there should be additional information for maintenance and
initiating local maneuvers. Digital technology facilitates monitoring functions and is
an advantage of control and protection upgrades.
Especially in a restructured environment, the Independent System Operator or
Regional Transmission Organization, as well as the transmission company should
monitor the delivered active and reactive power to detect poor control tuning and
other abnormalities such as restricted reactive power capability. Plant models can be
verified for system events by inputting recorded voltage and frequency, and
monitoring model power output.
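One simple way to quantify such a comparison is sketched below; this is illustrative only, and the function name and tolerance used are hypothetical rather than taken from any standard.

# Sketch: compare recorded plant output for a system event with the output of a
# model driven ("played back") with the recorded voltage and frequency.
import numpy as np

def playback_mismatch(p_recorded_mw, p_simulated_mw):
    """Root-mean-square mismatch between recorded and simulated active power."""
    p_rec = np.asarray(p_recorded_mw, dtype=float)
    p_sim = np.asarray(p_simulated_mw, dtype=float)
    return float(np.sqrt(np.mean((p_rec - p_sim) ** 2)))

# Flag the model for re-validation if the mismatch exceeds an agreed tolerance,
# for example a few percent of the unit rating (the 2% used here is illustrative).
if playback_mismatch([300, 310, 295], [302, 305, 298]) > 0.02 * 350:
    print("model requires re-validation")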
As discussed in Section 2.9, plant equipment must be represented reasonably
accurately in power system dynamics studies to predict generator response and system
performance under severe contingency conditions. Study results are accurate only if
appropriate details of power plant controls (as tuned) are included in study models. If
study results are optimistic, the consequence is a false sense of security since the
system is not as robust as the study predicts. This could lead to increased incidents of
service interruptions and equipment damage. On the other hand, if study results are
pessimistic, the consequence is sub-optimum design and operation. This would also
lead to an economic penalty, with unused power system transfer capability and over-
designed and underutilized equipment.
For simulation studies, it is important to have reasonably accurate computer models of
the excitation and turbine controls. To develop and validate these models, staged
field-testing, analysis of routine system events and disturbances, vendors' design data,
and field readings of control settings should all be used.

2.2.4 Other Issues


Many generator outages during system disturbances are due to problems with
generator/turbine/energy supply system controls rather than with the generator
protection. Boiler controls sometimes have difficulty in keeping units on line during

rapidly varying frequency. Combustion turbines may also trip off line due to
inadequacies of the controls during system frequency changes. Prioritized
replacement of older controls with modern digital control is best practice.
Finally, in order for the generator to provide adequate voltage support, it has to be
able to meet the reactive power requirements of the power system. To start with, the
generator should have the inherent capability to produce reactive power, in terms of
overexcited and underexcited reactive power capability. Also, the overexcited and
underexcited capability should be demonstrated through periodic staged testing under
realistic operating conditions.
2.2.5 Summary of dynamic performance best practices for power plant
equipment
Power plant control devices play a crucial role in the degradation process leading to a
blackout. Power plant control and protection, correctly tuned and coordinated, help
prevent upsets resulting in generator unit trips. Such undesired generator trips have
been a contributing factor in many blackouts. Best practices include:
1. Modernization of equipment, utilization of new technology, and improved control
and protection functions to increase reliability and robustness. While prioritization
of upgrades mainly concerns the AVRs including power system stabilizers and
limiters, turbine and boiler control modernization should also be considered.
2. Automatic voltage regulator line drop compensation or automatic transmission-
side voltage control should be considered to provide better regulation of the
transmission network voltage and improved system stability performance. Wide-
area secondary voltage regulation may also be considered to improve voltage
stability in heavily stressed conditions while coordinating the reactive power
reserves of generators.
3. It is preferred to avoid operating thermal power plants under outer-loop megawatt
control, which overrides the turbine-generator primary governor response and
maintains a fixed megawatt output on the unit.
4. Adequate commissioning and periodic tests and tuning of plant control and
protection equipment. Commissioning and other tests should be used for model
validation. Power plant owners must provide validated model data to
interconnected power companies, and to Independent System Operators or
Regional Transmission Organizations.
5. Real-time detailed simulations of closed-loop tests (also including large
perturbations) are suggested to verify and adjust control and protection system
equipment settings.
6. Units above a designated MW rating should have their real and reactive output
and station frequency monitored continuously. The data obtained is invaluable for
many reasons, including model validation, performance tracking, off-nominal
frequency criteria compliance, and reconstructive analysis of power system
events.

2.3 Reactive Power Compensation and Control


Reactive power compensation and control affects both rotor angle and voltage
stability, and there may be tradeoffs between the two types of stability.

As described in Section 2.2, best practices for voltage and reactive power control
require modern excitation equipment at generators. Prioritized replacement of very
old equipment at generators with modern thyristor exciters and digital voltage
regulators with power system stabilizers improves generator performance and
reliability. Generator voltage regulator controls, including limiter circuits, should be
coordinated with protective relaying; a lack of coordination has contributed to the
severity of blackouts. Automatic voltage regulator line drop compensation or
automatic transmission-side voltage control should be considered for better regulation
of transmission network voltage and system stability.

2.3.1 Rotor angle stability


Best practices related to power plants are detailed in Section 2.2 and in references 3
and 12. State-of-the-art generator excitation equipment is often the most effective
means to improve stability. Although the benefit is somewhat reduced with powerful
excitation control, high pre-disturbance excitation (high internal voltage) improves
angle stability.
Mechanically switched transmission-level shunt and series compensation can aid
stability [12, Section 2.11.6]. The compensation may be fixed (typical for series
capacitors), or switched by local or by wide-area control such as the special protection
systems described in Section 2.6. Best practices for angle stability challenges often
include relatively low-cost mechanically switched capacitor/reactor banks. Reference
68 describes an application of mechanically switched series capacitor banks.
Transmission-level power electronic devices such as static var compensators or
thyristor-controlled series capacitors are also available to improve rotor angle
stability. For series compensation, TCSC (thyristor controlled series capacitors,
usually combined with fixed series compensation) has the advantage of immunity
from subsynchronous resonance. Although much more expensive than mechanically
switched compensation, power electronic devices may be cost-effective especially for
long distance interties. Concordia [13] states that series compensation is favored for
long-distance transmission from a remote power plant to a large system, or for long-
distance ties between large systems. Shunt compensation (static var compensators) is
favored for long-distance transmission from power plants to load areas with little
generating capacity. Chapter 3 describes these power electronic technologies and
devices, and their application, in more detail.

2.3.2 Voltage stability


For the last two decades, voltage collapse has been a major consideration in power
system engineering [3, 1417]. There are, however, relatively simple and low-cost
best practices that greatly improve reliability. These have been practiced by some
companies for over 50 years [18, 19]. These best practices are not always followed,
however, with the August 14, 2003 cascading failure [20] being a prime example as
described in 2.3.2.3.

2.3.2.1 Power plants


Over the years, there have been debates on the use of generators as primary reactive power sources versus the use of transmission-level shunt compensation for base reactive power requirements, with substantial generator reactive power reserves held for emergencies [16, 18, 21]. With lower excitation levels at generators (leaving more reserve capability), however, there is a tradeoff with angle stability, as mentioned above.

Another debate is use of local control at generators and substations versus wide-area
control mainly at power plants based on remote voltage measurements (i.e., secondary
voltage regulation described in Section 2.2). To some extent, best practice is dictated
by system characteristics such as highly meshed networks with short lines versus
sparser networks with longer lines.
Generator automatic voltage regulators usually include line drop compensation
capability [22]. This allows tighter control of transmission voltage and improved
stability. The only cost is the connection cost, but this capability seems to be
underutilized.
More advanced is high speed, high side voltage control [23]. The transmission side
voltage is directly controlled at the speed of the automatic voltage regulators, with a droop
characteristic for stable operation with parallel generators. Following their 1987
blackout, this control was implemented on many generators of Tokyo Electric Power
Company (see Section 1.5.1).
A recent development that has been tested at a large hydroelectric power plant with
two generators per transformer is a control that does not require high side voltage
measurement [24, 25]. Rather, the high side voltage is computed from terminal
voltage and current. This has advantages where the high side switchyard and
instrument transformers are not owned by the generation company, or are some distance
from the power plant.
For long-term voltage stability, several high side voltage controllers operating in tens
of seconds time frame have been developed [4, 7, 26]. These provide outer-loop
control of automatic voltage regulator setpoints.
For generation connected to transmission networks, a best practice is to not allow
automatic or manual power factor or reactive power override of automatic voltage
regulators [27]. All generators should be in automatic voltage regulator mode without
secondary override control.

2.3.2.2 Transmission and distribution


For heavy load, high stress conditions, the transmission network voltage profile
should be near the maximum of the allowed voltage range, and should be fairly
uniform at all locations. This high, flat voltage profile reduces losses that cause
conductor heating and sagging into trees. Extensive use of relatively low cost shunt
capacitor banks1 in both transmission and distribution allows a high and flat voltage
profile, with substantial reactive power reserves at generators for emergencies [21].
Reactive power compensation can be applied at all voltage levels to avoid excessive
reactive power losses in transformers. A high, flat voltage profile reduces reactive
power production at power plants.
In inflation-adjusted terms, transmission and distribution mechanically switched shunt
capacitor banks have become less expensive and more reliable in recent decades.
Notable are fuseless capacitor banks [28], SF6 and vacuum switchgear, and control
and communication technology. Thus there is a best-practice trend towards increased use of shunt compensation for both base reactive power requirements and for emergency switching. With 2-5 cycle circuit breaker operating times, mechanical switching is sufficiently fast for both angular stability and short-term and long-term voltage stability. Cost estimates for transmission and distribution capacitor banks are available [29]. There are new options for large capacitor banks such as CAPS (CAPacitor bank Shorting) [30].

1 In some cases power electronic rather than mechanical circuit breaker control of reactive power devices may be desired.
Mechanically switched equipment can be controlled via SCADA, local voltage relays,
or by wide-area controls [31] and special protection systems. More sophisticated local
control is possible. Following their 1987 voltage collapse and blackout, Tokyo
Electric Power Company developed a digital controller for fast switching of capacitor
banks with coordinated control of autotransformer tap changing [23]. This control is
implemented at most TEPCO transmission substations (see section 1.5.1).
For short-term voltage stability, transmission or distribution static var compensators
(SVCs) or STATCOMs may be considered [32]. Best practice is to consider methods
to reduce cost of power electronic equipment, including:
short time ratings including coupling transformer;
SVC/STATCOM control of mechanically switched capacitor/reactor banks;
autotransformer tertiary connection.
Transmission-level versus distribution-level SVC/STATCOM locations should be
evaluated. To support motor and other onerous loads, multiple small, relocatable distribution-connected devices avoid the reactive power losses of transmission-
level equipment (i.e., in coupling transformers and bulk power delivery transformers)
[32].
With industry restructuring, voltage and reactive power management is more complicated.
Rigorous standards with performance monitoring are required. Overly complex
payments for reactive power or reactive power markets should be avoided. Close to
unity power factor (within a deadband) at the points of interconnection is often
suggested, with substantial payments for excessive leaning on the interconnected
company for reactive power [33].

2.3.2.3 Example of importance of voltage/reactive power


An example of the impact of voltage/reactive power practices on system performance
is from the August 14, 2003 blackout [20]. The initial outage of the Eastlake 5
generator was related to excitation equipment problems during high reactive power
production. As an example of poor voltage/reactive power practice, Figures 2.1 and
2.2 show conditions on 8/14. Figure 2.1 shows the 345-kV voltage profile that many
engineers would consider terrible, especially considering that the load was less than 80% of peak summer load and the 13:00 voltage profile was before any outages. Figure 2.1 also shows a more desired flat voltage profile of 103% (it could be even higher; the standard voltage range is 345 kV ±5%, giving a maximum voltage of 362 kV).
Voltages at the west (left) end near Detroit are very good. Voltage at a large Ohio
River power plant on the east end is relatively low. Despite substantial reactive power
reserves in the AEP area (Figure 2.2) and a 765-kV infeed, voltage at the South
Canton bus is poor. Besides placing additional burden on the generators, the poor
voltage profile also contributed to lines sagging into trees, with heating inversely
proportional to voltage squared.
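As a quick numerical illustration of the voltage-squared effect just mentioned (a sketch with assumed per-unit values, not data from the event): for a constant-power flow, line current scales as P/V, so conductor I^2R heating scales inversely with voltage squared.

    def relative_line_heating(voltage_pu, power_pu=1.0):
        # For constant delivered power, current is power_pu / voltage_pu,
        # so I^2*R heating varies inversely with the square of the voltage.
        current_pu = power_pu / voltage_pu
        return current_pu ** 2

    print(relative_line_heating(1.00))   # 1.000 reference heating
    print(relative_line_heating(0.95))   # about 1.108, roughly 11% more heating at 95% voltage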
Figure 2.2 shows the very low reactive power reserves at power plants in the
Cleveland area. Again, the corresponding high reactive power output combined with
old excitation equipment caused the initial Eastlake 5 outage. Although the outage of
Eastlake 5 was not considered causal to the blackout, the disaster would likely have
been avoided with many more capacitor banks in the Cleveland/Akron area. The
power system would have been much more robust and resilient.


Figure 2.1: Northern Ohio voltage profile on 14 August 2003 [20].

Figure 2.2: Reactive power reserves on 14 August 2003 [20].

2.3.3 Summary of dynamic performance best practices for reactive power
compensation and control
1. Prioritized replacement of very old equipment at generators with modern static
exciters and digital voltage regulators, including power system stabilizers,
improves generator performance and reliability.
2. Automatic voltage regulator line drop compensation or automatic transmission-
side voltage control should be considered for better regulation of transmission
network voltage and improved system stability.
3. Automatic or manual power factor/reactive power control override of automatic
voltage regulators must not be allowed. All generators connected to transmission
networks should be in automatic voltage regulator mode without secondary
override control.
4. During heavy load conditions a high, flat voltage profile should be maintained to
the extent possible. Reactive power reserve should be maintained at power plants
via transmission and distribution shunt capacitor banks for base reactive power
needs.
5. The discontinuous control capability of mechanically switched series and shunt
reactive power compensation should also be exploited. These mechanically
switched capacitor/reactor banks can be switched in or out within a few cycles.
Control can be local or wide-area, ranging from voltage relays to special
protection systems, to more sophisticated methods.
6. For short-term stability problems, potential solutions that should be considered
include SVCs or STATCOMs. Short-term ratings should be considered, where
appropriate. Generally these power electronic devices should also perform
coordinated switching of nearby mechanically switched capacitor/reactor banks.
7. Transmission-level versus distribution-level power electronic devices should be
evaluated.

2.4 Transmission Protective Relaying


Protective relaying is divided into two categories:
1. Equipment protection to remove faulted equipment (especially equipment
with short circuits) from service in the shortest possible time. This is to
minimize damage to the faulted equipment and to minimize impact to the
power system due to the voltage depression.
2. System integrity protection schemes (SIPS) otherwise known as special
protection systems or emergency controls that are intended to take corrective
action to assist the system to perform adequately during emergency
conditions. SIPS include underfrequency and undervoltage load shedding, as
well as special protection systems or SPS2.
Both these classes can ameliorate or exacerbate the extent of blackouts, depending on
their performance. This section will focus on equipment protection. SIPS are
discussed in later sections.

2
Special protection systems (SPS) are also sometimes referred to as remedial action schemes (RAS).

2.4.1 Equipment protection - General Observations
Equipment protection must discriminate between healthy and faulted equipment. As
equipment becomes more stressed, the boundary between healthy and faulted
equipment becomes blurred, making it more difficult for the protection to
discriminate. In many blackouts, from the 1965 Northeast blackout to the November
2006 UCTE disturbance, transmission protection operations in the absence of any
faults have played a major role in power system cascading. Two items, discussed
below, can assist in reliable discrimination: new technology, and clarification of the
boundary between acceptable and unacceptable conditions.
New technology can improve the ability of protection to discriminate. The cost and
resource requirements of widespread application of new technology, however, prevent
system-wide replacement of old with new. The replacement work is further
complicated by the constantly evolving enhancements associated with new
technology, which make the characterization of new technology a moving target. Best
practice requires prioritization of replacements.
Even the application of new technology will not provide black and white answers.
The boundary between acceptable and unacceptable conditions should be clarified.

2.4.2 Transmission Line Protection


Several aspects of transmission line protection affect their response to system
disturbances, and the possibility of cascading outages.
1. Overreaching distance protection (phase overcurrent functions, zone 3 and
even some zone 2)
2. Hidden failures (e.g. KD relays) and phase comparison relays with faulty local
delay timers.
3. Sensitive ground overcurrent protection
4. Complexities, with associated difficulty of ensuring correct settings are
applied.
2.4.2.1 Overreaching Distance Protection (The Zone 3 Issue)
It is very difficult for transmission line distance protection to discriminate between
heavy load with depressed voltages and distant short circuits. Protection at remote
stations to sense faults on equipment beyond the local substation has in the past been
driven by the need for remote backup to breaker failure. This is the so-called zone 3
issue [34]. The need for sensitive zone 3 and other overreaching elements is reduced
by the following:
1. Redundant protection systems remove the need for remote backup of local
protection systems.
2. Local breaker failure protection (may not totally eliminate the need for remote
zone 3, but can reduce the need for very long reach settings if local infeed to
faulted lines is removed by local breaker failure protection).
3. Direct transfer trip facilities to remove remote infeed in the case of local
breaker failure. This will completely eliminate the need for zone 3 elements if
there is complete faith in local batteries.
Zone 3 relays have sometimes been applied because of the possibility of failure of
non-redundant battery equipment at a substation. There is no doubt that although
battery equipment is normally very reliable, it can fail from time to time. Since
battery failure will negate the benefits noted above of redundant protection, local
breaker failure, and direct transfer trip systems, some utilities still find a need to
apply remote backup protection through zone 3 functions. The application of
redundant battery systems together with the above noted enhancements will eliminate
the need for zone 3 functions. The cost of redundant battery systems will often only
be justified at the most critical substations on a company's backbone transmission
system.
Following the 14 August 2003 blackout in mid and northeast North America [20],
NERC has specified the conditions which must be treated as load, and North
American users have reviewed the application and setting of zone 3 and other open
zone functions to ensure compliance, or to document cases where compliance is not
needed or not applicable. By 2006 regional reliability council member utilities had
completed extensive work on line relaying:
10,901 terminals reviewed, revealing 2,192 non-conforming zone 3 applications;
1,821 terminals requiring mitigation: 1,496 setting changes, 65 functions disabled,
and 258+ equipment changes.
Power companies are also identifying additional protection elements (such as zone 2
functions or phase overcurrent functions) that are at risk of operating undesirably
under highly stressed conditions.
Zone 3 and related problems are discussed above, and are more fully discussed in
references [20, 34-39]. New technology certainly plays an important role in:
Application of special characteristics to blind distance protection functions to
load
Application of local breaker failure protection and cost effective transfer trip
facilities for tripping of remote sources during breaker failure conditions.
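As an illustration of the kind of loadability screening discussed in this subsection (a hypothetical sketch with assumed settings and loading, not the NERC procedure itself), the apparent impedance presented to a line relay by heavy load at depressed voltage can be compared against a mho zone 3 characteristic:

    import cmath, math

    def inside_mho(z_apparent, reach_ohm, mta_deg):
        # A point lies inside a mho circle whose diameter runs from the origin
        # to reach_ohm at the maximum torque angle (mta_deg).
        center = cmath.rect(reach_ohm / 2.0, math.radians(mta_deg))
        return abs(z_apparent - center) <= reach_ohm / 2.0

    # Assumed heavy-load condition: 345-kV bus depressed to 0.85 pu,
    # line loaded to 1200 MW and 400 Mvar
    v_kv = 0.85 * 345.0
    s_mva = complex(1200.0, 400.0)
    z_apparent = (v_kv ** 2) / s_mva.conjugate()   # primary ohms

    # Hypothetical zone 3 setting: 250 ohm reach at an 85 degree torque angle
    if inside_mho(z_apparent, 250.0, 85.0):
        print("Zone 3 would pick up on load -- setting not secure:", z_apparent)
    else:
        print("Load impedance stays outside zone 3:", z_apparent)

With these assumed numbers the load impedance falls inside the characteristic, illustrating why heavily loaded, low-voltage conditions can be indistinguishable from distant faults.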
2.4.2.2 Hidden Failures
Hidden failures have been addressed [40, 41]. The key is to identify the regions of
greatest vulnerability and then to investigate mitigation of hidden failures.
Modern digital relays:
Allow the application of self-checking techniques to more readily detect
hidden failures during normal system operation.
Allow the inspection of fault records after all disturbances to verify correct
performance when exposed to events that would reveal hidden failures.
Other techniques include the application of more complex protection (e.g. voting
schemes) and adaptive protection systems (switching to a more secure mode during
times of system stress).
Zone 3 or other relays applied to detect short circuits should not be used for overload
protection. Automatic transmission or transformer overload protection, if necessary,
should reflect thermal time constants and be based on overheating or line sagging.
Ambient temperature, wind speed and direction, and other phenomena affect heating.

2.4.2.3 Sensitive Ground Overcurrent Protection
Sensitive ground overcurrent protection has not been widely identified as a cause of
exacerbation of system disturbances. However, it has contributed to some disturbances
and has the potential to contribute to more. The ground overcurrent protection
responds to the unbalance current on a transmission line (either zero sequence or
negative sequence). This function is usually set to be secure against misoperation
during normal system conditions by using an arbitrary assumption as to maximum
levels of unbalance current that might exist during normal conditions. This
assumption is challenged during heavy load conditions, and where appropriate, the
settings may be adjusted to override the worst case unbalances. The concern arises
under exceptionally heavy load conditions such as may exist during disturbances and
to which the ground overcurrent functions have not previously been exposed. Under
these conditions, there is a risk that unexpected operation of sensitive ground
overcurrent protection may occur.
New technology can assist in two ways:
1. Provide a restraining signal to the ground overcurrent function to decrease its
sensitivity during very heavy load. This automatically compensates for larger
absolute levels of unbalance current during very heavy load conditions.
2. Use additional functionality to initiate an alarm when unbalance currents
approach the setting within some predefined margin (say 50-75% of the setting), as
sketched in the example below. This will allow the protection system to alert
engineering staff to the presence of large unbalance currents during abnormal
conditions. These large unbalances could approach the setting even more closely
during stressed system conditions and threaten the security of the protection system.
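A minimal sketch of the alarming idea in item 2 above, assuming a hypothetical relay interface that reports the residual (3I0) current and the ground overcurrent pickup setting; the 50% and 75% levels are the illustrative margins mentioned in the text.

    def ground_unbalance_alarm(residual_current_a, pickup_setting_a):
        # Raise an alarm as the measured residual current approaches the
        # sensitive ground overcurrent pickup setting.
        ratio = residual_current_a / pickup_setting_a
        if ratio >= 0.75:
            return "HIGH: unbalance within 25% of ground overcurrent pickup"
        if ratio >= 0.50:
            return "WARN: unbalance above 50% of ground overcurrent pickup"
        return "normal"

    # Illustrative values: 120 A residual current against a 200 A pickup
    print(ground_unbalance_alarm(120.0, 200.0))   # WARN level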
Regarding complexity, and the associated difficulty of ensuring that correct settings are
applied, new technology could have a negative impact. The increased complexity of
digital relays, controls, and communication can challenge human resources and
could result in unexpected and undesirable operations.

2.4.3 Substation equipment


Equipment protection for substation equipment has rarely been a factor during system
disturbances. Bus protection, shunt reactor protection, shunt or series capacitor
protection are usually well focused on the protected equipment and secure against
misoperation due to external disturbances.
Transformer protection systems sometimes include phase time overcurrent functions
that may operate on heavy load. Phase overcurrent functions for transformer
protection have been added to the NERC list of elements that must be capable of
remaining secure under moderate overloads [35].
The main substation protection deficiencies have been limitations in breaker failure
protection. Protection at remote stations to sense faults on equipment beyond the local
substation has in the past been driven by the need for remote backup to breaker
failure. As discussed above, this is commonly known as the zone 3 issue.
Application and settings of synchro-check (standing phase angle sensing) relays for
automatic reclosure blocking should be evaluated for scenarios that may result in
blackouts.

2.4.4 Summary of dynamic performance best practices for transmission
protective relaying
1. Prioritize protective relaying upgrades to first address locations where inadequate
performance or hidden failures will have the most impact in exacerbating
disturbances.
2. Clarify the boundary between acceptable and unacceptable system conditions
(particularly for equipment that is moderately overloaded). This is to ensure that
protective relaying can remain secure against unexpected operation during
stressed system conditions.
3. Ensure the reason for application of zone 3 elements is well understood. Check
whether alternative means of providing equivalent protection (such as local and
remote breaker failure protection) is practical.
4. Zone 3 or other relays applied to detect short circuits should not be used for
overload protection.
5. Check, and adjust if necessary, the load carrying capability of protection systems
to meet or exceed regulatory requirements.
6. Application and settings of synchro-check (standing phase angle sensing) relays
for automatic reclosure blocking should be evaluated for scenarios that may result
in blackouts.
7. Be aware of possible increases in unbalanced currents and voltages during
stressed conditions that may threaten the security of sensitive protection.
8. Use the additional functionality of modern protection systems to monitor the
health of the system under all conditions and the approach to operating
characteristics under stressed conditions.

2.5 Load Shedding


Total blackout of an area under stressed conditions may be avoided by limited load
shedding applied in an appropriate and timely manner. For example, the U.S.-Canada task
force final report [20] pointed out that shedding 1500 MW of load in northern Ohio
would have prevented the extensive power outage of August 14, 2003.
To relieve the dispatcher burden in stressed system conditions, to enable dispatchers to
focus on outage restoration, and to arrest very fast disturbances such as voltage collapse
and frequency decay, automatic load-shedding programs may be implemented. Recent
IT and microprocessor-based technology progress allows transmission owners to
realize reliable and sophisticated protection and control systems for automated load
shedding.
To avoid cascading outages and blackouts, load shedding programs for extreme
contingencies beyond the normal planning criteria (N-1 or N-2) can be economically
reasonable alternatives compared to transmission network enhancements.
Furthermore, to gain public acceptance of load shedding as an emergency action,
transmission owners should design load shedding programs so that the amount of
load to be shed is as small as possible. Designers of load shedding programs should
have a high level of expertise in power system dynamic performance as well as control,
protection and communication.
It has been pointed out that voltage effects significantly affect the load-generation
imbalance. For example, Florida Power and Light and others have observed that a
50% or greater load-generation imbalance may very well cause a voltage collapse,
which prevents frequency decay. This actually occurred during the May 17, 1985
South Florida disturbance [14].
2.5.1 Underfrequency load shedding
The frequency deviation following system islanding disturbances or the loss of a large
amount of generation is caused by the imbalance between load and generation. This effect is
most serious with excess load, since speed governing is usually effective in reducing
the generation in islands with an excess of generation.
Generators, transformers and turbines in power stations have operational frequency
limitations. Turbine capabilities, mainly determined by turbine blade fatigue limits,
are generally more restrictive than those of generators and transformers.
Operation of generators at low frequencies can cause overheating because of reduced
ventilation. Operation of generators and their associated transformers at reduced
frequencies can also result in exceeding over-excitation limits, as flux within these
devices is inversely proportional to frequency. In nuclear plants, the main effect of
frequency changes on the steam cycle is that the output of electrical pumps will vary
with system frequency, causing coolant flows to change [42].
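As a simple worked example of the over-excitation (volts/hertz) effect noted above, with illustrative numbers only: per-unit V/Hz, which is proportional to flux, is the per-unit voltage divided by the per-unit frequency.

    def volts_per_hertz_pu(voltage_pu, frequency_hz, nominal_hz=60.0):
        # Core flux is proportional to V/f, so per-unit V/Hz indicates
        # over-excitation of generators and transformers.
        return voltage_pu / (frequency_hz / nominal_hz)

    # Voltage held at 1.05 pu while frequency sags to 57 Hz (assumed values)
    print(round(volts_per_hertz_pu(1.05, 57.0), 3))   # 1.105 pu, above a typical ~1.05 pu continuous limit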
Reference 43 says that UFLS should be simple, rapid and decisive as a last-resort
expedient. It should be automatic, be distributed uniformly across the system at each
step to avoid aggravating line overloading, and be locally controlled in response to
local frequency to be independent of system splitting. Load shedding should prevent
very severe frequency excursions that may cause power plant tripping, often
unnecessarily, for control and protection reasons.
Before designing UFLS, objectives or goals have to be defined such as frequency
recovery strategy, minimum allowable frequencies for secure system operation and
the worst estimated generation deficiency. For example, the North American
Northeast Power Coordinating Council (NPCC) region has defined the objective of its
UFLS program as follows: "The objective of automatic underfrequency load shedding
relays is to return the system frequency to 58.5 Hz in 10 seconds and to 59.5 Hz in 30
seconds following a major system emergency for a generation deficiency of up to
25% of the load." To achieve the defined objective or goal, parameters such as
frequency threshold, step size, number of steps, time delay and priorities are
determined based on dynamic simulations.
Underfrequency load shedding should be designed using detailed large-scale simulations
[44]. The models capture, for instance, the effect of voltage sensitive loads on the
generation-load imbalance.
Reference 45 categorizes UFLS into three schemes: Traditional, Semi-Adaptive and
Adaptive. The Traditional scheme sheds a certain amount of load when the system
frequency falls below a certain threshold. The Semi-Adaptive scheme measures df/dt
or ROCOF (Rate of Change of Frequency) when a certain frequency threshold is
reached. The Adaptive scheme is based on the relation between the initial value of ROCOF
and the size of the disturbance, in other words the load-generation imbalance that caused
the frequency decline. Adaptive UFLS estimates the load-generation imbalance from the
initial value of ROCOF.
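The imbalance estimate behind the adaptive scheme follows from the aggregate swing equation, imbalance (pu) ≈ 2H·(df/dt)/f0. The sketch below, with assumed inertia and stage data, only illustrates the idea and is not a published scheme.

    import math

    def estimate_imbalance_pu(rocof_hz_per_s, inertia_h_s, f_nominal_hz=50.0):
        # Aggregate swing equation: dP (pu on system base) = 2*H*(df/dt)/f0
        return 2.0 * inertia_h_s * rocof_hz_per_s / f_nominal_hz

    def load_to_shed_pu(imbalance_pu, stage_size_pu=0.05):
        # Round the deficit up to whole shedding stages (e.g. 5% of load each)
        return math.ceil(abs(imbalance_pu) / stage_size_pu) * stage_size_pu

    # Frequency initially falling at 0.5 Hz/s in a 50 Hz island, H = 4 s (assumed)
    deficit = estimate_imbalance_pu(-0.5, 4.0)
    print(abs(deficit))               # 0.08 pu estimated deficit
    print(load_to_shed_pu(deficit))   # 0.10 pu shed in 5% stages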
Some abnormal frequency relays provide an element for measuring the average rate-
of-change of frequency. By including the frequency change trend, a more secure
decision can be made during contingencies [42].
Reference 46 provides dynamic simulation results with UFLS using average rate-of-
change, df/dt. Under heavily overloaded conditions, the voltage deviation caused by
load shedding is critically important. In an overloaded system, voltage-sensitive load
characteristics may provide a load-shedding effect. If, however, overvoltages occur
following load shedding or transmission unloading because of islanding, over-
excitation protection for generators may operate for overvoltage and underfrequency.
If volts/hertz (V/f) relays trip out critical generators during frequency decline, a
complete blackout may result.
Generator loss of excitation relays have also operated with disastrous consequences
during overvoltages following islanding with transmission overloading and load
shedding, with the 1977 New York City blackout and the 10 August 1996 blackout in
the southwestern U.S. being prime examples. Loss of excitation relays should instead
operate for excitation problems causing collapse of terminal voltage. We suggest
voltage supervision of loss of excitation relays to prevent operation for abnormally
high voltage [47, 48].
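A minimal sketch of the voltage supervision suggested above (hypothetical relay logic with an assumed blocking threshold): the loss-of-excitation trip is permitted only when the terminal voltage is not abnormally high, since a genuine excitation failure collapses the terminal voltage.

    def loe_trip_permitted(impedance_in_characteristic, terminal_voltage_pu,
                           voltage_block_threshold_pu=1.05):
        # Block loss-of-excitation tripping during abnormally high voltage.
        if terminal_voltage_pu >= voltage_block_threshold_pu:
            return False
        return impedance_in_characteristic

    # Overvoltage following islanding and load shedding (illustrative values)
    print(loe_trip_permitted(True, 1.12))   # False: trip blocked
    # Genuine excitation failure with depressed terminal voltage
    print(loe_trip_permitted(True, 0.85))   # True: trip permitted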
Figure 2.3 shows dynamic simulation examples for a heavily overloaded system with
32% imbalance. In case (a), loads are overshed resulting in volts/hertz relay operation.
After improving the ROCOF setting of the UFLS relays, more appropriate amounts of
load are shed to keep the system stable.

(a) Generator tripped by V/f

(b) Avoided generator tripping by improving ROCOF settings


Figure 2.3: Dynamic simulation results for power system islanding [7]

In Italy [122], in the range 50 Hz-49.1 Hz, pumped storage power plants and
interruptible loads are disconnected. The relays controlling the pumped storage plants
are triggered when a) both the frequency is lower than 49.6 Hz and its time derivative
is lower than -0.2 Hz/s (load shedding at different stages) or when b) the frequency is
lower than 49.3 Hz. At f = 49.1 Hz, automatic load shedding takes place. The relays
shedding the loads according to different stages operate based a) on the frequency and
its time derivative and b) on the frequency only. The load shedding procedure should
be complete at 47.7 Hz, which is a security threshold to avoid the undesired operation
of the protection relays in the thermal power plant (not necessarily the electric section
of the power plant) before the threshold 47.5 Hz is reached.
The load shedding scheme includes about 60% of the total Italian load, accounting for
possible failures and ensuring at least 50% of the total load can be shed. The load is
subdivided into stages (about 14) corresponding to different thresholds of frequency
and/or its time derivative. Each stage is 3%-5% of the total load, in order to avoid
severe transients (e.g. voltage instabilities); moreover, each stage is uniformly
distributed within the network. The first seven stages (about 4.5% of the total load
each) are disconnected at f = 49.1 Hz, depending on the frequency derivative (>1
Hz/s, 1 Hz/s-0.8 Hz/s, ..., 0.2 Hz/s-0.1 Hz/s, respectively). The same loads can also
be disconnected based on the frequency only, starting when f = 49.0 Hz is reached
(f = 49 Hz, f = 48.9 Hz, ..., f = 48.4 Hz, respectively). The sets from the 8th to the 14th
are shed below f = 48.4 Hz (each set about 3.5% of the total load).
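A simplified sketch of the staged logic described above (our own reading of the description, with the stage data reduced to a toy table; the derivative thresholds between the quoted endpoints are assumed, and this is not a reproduction of the actual Italian relaying): each of the first seven stages can be released either by frequency plus its derivative at 49.1 Hz, or by frequency alone at progressively lower thresholds.

    # Toy representation of the first seven shedding stages described above
    STAGES = [
        # (derivative threshold in Hz/s, frequency-only threshold in Hz)
        (-1.0, 49.0), (-0.8, 48.9), (-0.6, 48.8), (-0.5, 48.7),
        (-0.4, 48.6), (-0.3, 48.5), (-0.2, 48.4),
    ]
    STAGE_LOAD_FRACTION = 0.045   # about 4.5% of total load per stage

    def stages_to_shed(frequency_hz, rocof_hz_per_s):
        # Count how many of the first seven stages would be released for the
        # given frequency and rate of change of frequency.
        released = 0
        for df_threshold, f_only_threshold in STAGES:
            by_derivative = frequency_hz <= 49.1 and rocof_hz_per_s <= df_threshold
            by_frequency = frequency_hz <= f_only_threshold
            if by_derivative or by_frequency:
                released += 1
        return released

    n = stages_to_shed(frequency_hz=49.05, rocof_hz_per_s=-0.6)
    print(n, "stages released, about", round(n * STAGE_LOAD_FRACTION * 100, 1), "% of total load")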
2.5.2 Undervoltage load shedding
As power systems have matured, voltage problems are often more likely than islanding
with a large load-generation imbalance. Undervoltage load shedding (UVLS) is a
partial solution to voltage stability challenges analogous to underfrequency load
shedding. UVLS provides protection for unusual disturbances outside planning and
operating criteria and may also help delay the need for capital investments [49].
The UVLS designer should understand the voltage instability mechanisms. Voltage
instability phenomena can be classified in different time scales such as short-term of
several seconds and long-term of tens of seconds to many minutes.
Short-term voltage stability involves dynamics of fast acting load components such as
induction motors, electronically controlled loads, and HVDC converters. Long-term
voltage stability involves slower acting equipment such as tap-changing transformers,
thermostatically controlled loads, and generator excitation current limiters [1].

Figure 2.4: Voltage at Malin substation (Oregon-California border) on the Pacific Intertie.

Figure 2.4 describes 500-kV transmission voltage during a cascading failure that
occurred in the Western U.S. on July 2, 1996. After the triggering events in Wyoming,
the system survived for about 22 seconds with relatively small decline in voltage,
mainly in southern Idaho. After around 20 seconds, two small generators tripped
because of high field current and a critical 230-kV line tripped, leading to severe
voltage decay and system-wide cascading outages. Five generators at the McNary
hydro plant in Oregon tripped due to misoperation of the excitation protection system
(see section 1.2.3). The entire power system then broke into several islands [50]. For
the initial period, this phenomenon might be described as instability from the loss of
short-term equilibrium, meaning no intersection between the short-term load
characteristics and PV curves with (possible) overexcitation limiting (OEL) [15]. In
this case, UVLS design might be difficult using only undervoltage relays. To solve
these difficulties, some techniques have been applied to actual UVLS design such as
supervision of reactive output at critical generators and/or synchronous condensers,
rate of change of voltage, and loss of critical line detection. Some actual UVLS
applications follow.
2.5.2.1 Automatic undervoltage load shedding remedial action scheme to maintain
B.C. Hydro 500-kV network voltage stability [51]
Purpose: To prevent voltage collapse following loss of major transmission or reactive
power support facilities.
System Description: The scheme comprises two independent subsystems, one on
Vancouver Island and another on the Lower Mainland, which can shed load in their
respective areas, or jointly for more severe system problems. The System Control
Centre directs local control centers supervising each area to arm or disarm this scheme
as required for system conditions. Each subsystem monitors three key station bus
voltages and a designated group of units for its reactive power reserve, which is the
remaining VAR boost capacity of the group. If the bus voltage drops below a set
level, or if the VAR reserve drops below a set level, its sensor will key a continuous
signal to either subsystem to initiate load shedding after a time delay, until the initiating
conditions have reset.
2.5.2.2 Hydro Quebec UVLS (TDST) [52, 53]
Purpose: To prevent voltage collapse following loss of two or more 735-kV lines or
any other extreme disturbances.
System Description: TDST is a redundant scheme, which is located at two different
operating centers and always armed. The communication links between the operating
centers and the substations are also redundant. To detect voltage instability, 3 out of 5
decision-making logic is applied based on the average value of five positive sequence
bus voltages in the Montreal area. Cumulative shedding of up to 1500 MW in the first
stage depends on the voltage decline and its duration, such as less than 0.94 pu for 11
seconds, 0.92 pu for 9 seconds, or 0.90 pu for 6 seconds. Additional load shedding of
1000 MW can be initiated based on a 3-second voltage integral calculation. Figure 2.5
shows the TDST operation justifying load shedding based on the voltage integral
calculation, compared to the case with only shunt reactor tripping.
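A simplified sketch of the undervoltage-duration criterion described for the first TDST stage (illustrative code based only on the thresholds quoted above; the real scheme uses 3-out-of-5 voting on five Montreal-area bus voltages and redundant controllers):

    def first_stage_trigger(voltage_pu, seconds_below):
        # Sketch of the first-stage criterion: the deeper the voltage dip,
        # the shorter the required duration before shedding is initiated.
        thresholds = [(0.90, 6.0), (0.92, 9.0), (0.94, 11.0)]   # (pu level, seconds)
        return any(voltage_pu < v_limit and seconds_below >= t_limit
                   for v_limit, t_limit in thresholds)

    print(first_stage_trigger(0.89, 6.5))    # True: below 0.90 pu for more than 6 s
    print(first_stage_trigger(0.93, 8.0))    # False: not low enough for long enough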

Figure 2.5: TDST operation after loss of two lines

2.5.2.3 Entergy UVLS (VSHED) [54]
Purpose: To prevent voltage collapse following critical double contingencies.
System Description: This scheme uses the energy management system and SCADA
communications. It is activated automatically, when the load in the Western Region
(east Texas) exceeds 1000 MW. Voltages at four 138-kV buses are monitored. When
three of the voltages are below threshold, and either an overexcitation relay for one of two
critical generators is activated or both generators are out of service, load shedding
is executed in several seconds. During a disturbance on September 22, 1998, when the
Western Region was on the verge of voltage collapse, VSHED activated and operated
successfully, protecting the region from widespread voltage collapse.
2.5.2.4 New Mexico UVLS (ICLSS) [55]
Purpose: To prevent voltage collapse following loss of both 345-kV lines serving the
Albuquerque area during peak load periods.
System Description: This scheme is designed to deal with both short-term and long-
term voltage instability, to mitigate equipment overloads, and to allow for acceptable
reclosing angles. Load is tripped at either the sub-transmission or feeder level through
PLC and distribution SCADA. Figure 2.6 shows the block diagram for this scheme.

Figure 2.6: Albuquerque area load shedding.


2.5.2.5 TEPCO UVLS
Purpose: To prevent voltage collapse following extreme contingencies such as loss of
multiple 500-kV transmission lines during abnormally high load periods.
System Description: This scheme is composed of MJ (Monitoring and Judging) units
installed at four 500-kV substations and LS (Load Shedding) units installed at several
275- or 154/66-kV substations. Each MJ unit is connected via a microwave
communication channel, and LS units are connected to an MJ unit in a star topology via
microwave. Long-term voltage collapse is detected on the 500-kV network, because
275-kV or lower voltages are automatically regulated by tap changing on 500/275-kV
or 154-kV transformers. Therefore, the SPS requires MJ units measuring 500-kV bus
voltages. For the purpose of security, the SPS uses 3-out-of-4 decision-making logic.
MJ units detect slow types of voltage collapse, ten seconds to minutes, by using an
unusually sustained ΔV/Δt value. Fast voltage collapse can also be detected by ΔV/Δt
calculation with a one-second data window. See Figure 2.7.
Figure 2.7: TEPCO's UVLS (MJ: Monitoring and Judging unit; LS: Load Shedding unit).

2.5.3 Direct load shedding


Direct load shedding is sometimes applied to provide high-speed relief from excessive
load (such load shedding is typically part of a special protection system; see the next
section). This type of shedding monitors system topology, particularly equipment
status. This type of shedding may be appropriate where credible multiple
contingencies could result in fast cascading. For example, in the case of multiple
transmission lines sharing towers or rights of way it is conceivable that a single
common factor could result in simultaneous or near simultaneous loss of several lines.
Fast load shedding initiated directly by loss of multiple circuits may sometimes be
required to maintain system stability.
Direct load shedding may be armed or disarmed to be effective only under certain
loading conditions when the multiple contingency would result in cascading if action
were not taken. The arming is determined by system state, and rules developed during
planning studies.
Due to the use of system topology, and the planning studies that pre-determine the
load shedding requirement, intentional time delays are not necessarily required.
Therefore very high speed load shedding (in the region of a few tenths of a second)
may be achieved by direct tripping.

2.5.4 Summary of dynamic performance best practices for load shedding


1. To relieve the dispatcher burden in stressed system conditions, to enable dispatchers to
focus on outage restoration, and to arrest very fast disturbances such as voltage
collapse and frequency decay, automatic load-shedding programs should be
implemented. Recent IT and microprocessor-based technology progress allows
transmission owners to realize reliable and sophisticated protection and control
systems for automated load shedding.
2. To gain public acceptance of load shedding as an emergency action,
transmission owners should design load shedding programs so that the amount of
load to be shed is as small as possible.
3. Designers of load shedding programs should have high level of expertise in power
system dynamic performance as well as control, protection and communication. It
is vital that these schemes be fully coordinated with other control and protection
functions on a wide area basis.
4. UFLS should be simple, rapid and decisive as a last-resort expedient. It should be
automatic, be distributed uniformly across the system at each step to avoid
aggravating line overloading, and be locally controlled in response to local
frequency to be independent of system splitting. Shedding based on both
frequency level and rate of change of frequency can be considered.
5. The UVLS designer should understand voltage instability mechanisms. Voltage
instability phenomena can be classified in different time scales, such as short-term
(several seconds) and long-term (tens of seconds to many minutes). Some
techniques have been applied to actual UVLS design such as supervision of
reactive output at critical generators and/or synchronous condensers, rate of
change of voltage, and loss of critical line detection.
2.6 Special Protection Systems
The criteria normally adopted in system planning include the well-known N-1
criterion, where the contingency loss of a single system element should not cause load
supply interruption or violations of system operating limits. In other words, a power
system must be designed to support normal contingencies with no service
interruption, typically without the assistance of special protection systems.
Economic operation has caused systems nowadays to operate closer to their limits
more of the time, thus increasing the severity of system contingencies. A power
system is dynamic. Many sources of vulnerability exist, e.g., equipment failures, loss
of communication, human errors, market uncertainties, acts of vandalism, terrorist
plots, and natural disasters. Cascading events can lead to catastrophic power outages.
Thus, power systems are exposed to extreme contingencies characterized by multiple
faults, single faults with multiple outages or cascading outages of transmission
elements. So, the system must include means of avoiding system-wide extreme
contingencies.
This can be achieved by using automatic measures that must be simple, reliable and
secure for the system, and provide the most extensive possible coverage against all
extreme contingencies. In this way, defense plans have been established to deal
with extreme contingencies and thereby reduce the frequency and extent of major or
total power failures. One of the main components of any defense plan is Special
Protection Systems (SPS), also termed remedial action schemes or emergency
controls. SPS can be described as discontinuous local or wide-area stability controls
activated only during emergencies.
Special Protection Systems (SPS) are common measures to minimize the spreading of
unavoidable disturbances over a system. These blackout-prevention measures limit
the extent of the disturbances. Considering the importance of SPS performance for
the interconnected system security, careful conception and implementation are
necessary. Otherwise the overall reliability of the system can be compromised through
improper applications of SPS, especially when used in lieu of adding needed basic
infrastructure to the system.
2.6.1 SPS philosophy and design aspects
The following points highlight the key aspects of SPS:
System security must never be compromised by unintended operation of any SPS.
The use of remote actions must be minimized and a high level of security is
required.
It is preferable to shed a portion of pre-selected loads rather than to uncontrollably lose
large amounts of load or even lose all load by allowing the system to deteriorate.
Since there are a very large number of possible extreme contingencies, it is
preferable to detect the consequences to the power system rather than trying to
detect the contingency itself.
It is better to make the greatest possible use of measures based on the detection of
local variables and perform actions locally.
Simplicity must prevail over selectivity in the determination of the amplitude of
actions to be carried out.
As far as possible, the use of measures that have a direct impact on service
continuity must be limited.
2.6.2 SPS classification
The SPS can be classified by its actions and by its objectives.
SPS Classification by action, as described in various reports [3, 11, 31, 44, 57-73,
123-128]:
Generation tripping
Underfrequency load shedding (Section 2.5)
Undervoltage load shedding (Section 2.5)
Direct load tripping including pumped storage units (Section 2.5)
Automatic switching of reactors and shunt capacitors (Section 2.3)
Controlled system separation
Power plant islanding with local loads
Transformer tap-changer blocking
Dynamic braking
Fast on-off switching of reactive power stored in capacitors
HVDC rapid power ramping
Automatic Generation Control blocking
Fast valving of steam turbines, fast power reduction of other turbines
Transient excitation boosting
Transmission line automatic tripping or reclosing.
SPS Classification by objectives:
Underfrequency control
Overfrequency control
Transformer and transmission overload control
Stability control
Overvoltage control
Undervoltage control
2.6.3 Aspects related to reliability, security and rapidity
Critical SPS should have the following characteristics:
Functional redundancy (circuit breakers are not redundant so overtripping may
be necessary). Where redundancy is not possible, the possible failure of a
component of the SPS must be considered in the design (e.g., if a circuit breaker
fails to open to shed load, alternative loads must be tripped by the SPS).
Redundant telecommunication links with physically diverse routes.
Self-checking and supervision functions.
Flexible design to easily accept modifications or additions, for the adaptability
that an evolving power system requires.

2.6.4 Aspects related to the performance of system protection systems


Information is necessary so that the performance of the SPS can be evaluated. The
following performance indexes may be defined related to the number of actuations:
Index of effectiveness = N1 / (N1 + N2 + N3). This index measures the effectiveness of
the SPS in achieving its purpose.

Index of dependability = N1 / (N1 + N2). This index measures how well the SPS
achieves its conceived level of performance.

Index of unnecessary operation rate = N4 / NYEARS. This index measures the
vulnerability of the SPS to unnecessary operations that do not contribute to
disturbances in the Bulk Energy System.

where NYEARS = number of years of operation of the SPS.


These indexes are based on the number of activations recorded for an SPS. This
value may be divided into different subsets as defined below:
N1 - number of correct operations, or number of successful operations. In other
words, the number of times the SPS operation achieves the performance objective.
N2 - number of failures. An SPS fails to prevent or minimize the effect of a Bulk
Energy System (BES) disturbance in the event of a contingency of severity equal to,
or less than, that specified in its design; or an SPS operates when it should not,
resulting in, or contributing to, a BES disturbance.
N3 - number of unsuccessful operations. An SPS fails to prevent or minimize the
effect of a BES disturbance in the event of a contingency of greater severity than that
specified in its design.
N4 - number of unnecessary operations. An SPS operation that should not have
occurred (i.e., resulting from inadequate discrimination in the scheme design,
equipment malfunction, human error, etc.) and that does not result in, or contribute
to, a BES disturbance. Local service interruption or generating unit outages may
occur.
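A small helper, sketched from the definitions above, that computes the three performance indexes from recorded counts (illustrative only; the naming and structure are ours):

    def sps_performance(n1, n2, n3, n4, years_in_service):
        # Indexes defined above: N1 correct, N2 failures, N3 unsuccessful,
        # N4 unnecessary operations, over the scheme's years in service.
        return {
            "effectiveness": n1 / (n1 + n2 + n3),
            "dependability": n1 / (n1 + n2),
            "unnecessary_operations_per_year": n4 / years_in_service,
        }

    # Illustrative record for one scheme
    print(sps_performance(n1=9, n2=1, n3=2, n4=3, years_in_service=10.0))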
The performance analysis may also be complemented by including other aspects such
as:
Types of SPS actuation versus the degree of redundancy adopted;
Types of SPS actuation versus the frequency of performance evaluation;
Cost of SPS versus the cost of its incorrect actuation;
Costs of SPS implementation (or deactivation of existing SPS) versus the cost of
other alternatives (new transmission lines or equipment, reduction of generation
or interchange);
Estimated frequency of events that may cause a SPS scheme to operate.
Other important aspects are the percentage of time in which each SPS remains
activated (on-line and ready to intervene) and the predominant causes of failures or
unnecessary actuations affecting each SPS.
Of singular importance is the consequence of inadequate operation, considering
incorrect operations, operation refusals or accidental operations.
2.6.5 Phasor measurement units - PMUs
PMUs are devices for synchronized measurement of ac voltages and currents from the
power system with a common time (angle) reference. The most common time
reference is a global positioning system (GPS) signal, which has accuracy better than
1 μs. These ac quantities are acquired with high sampling rates and represent the
voltage and current wave forms. These waveforms are characterized by two
parameters: amplitude and phase angle. The waveforms are processed by
mathematical algorithms, usually based on the Fourier Transforms. These algorithms
assume the samples are synchronized and calculate the magnitude (modulus) and
phase angle of the fundamental component of the waveform. The available techniques
allow obtaining measurements with high accuracy that can be directly compared.
Besides this, if the sampling is done for the three phases of the circuit, the voltage and
current symmetrical components (positive, negative, and zero components) can be
easily computed.
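A minimal sketch of the phasor extraction just described, assuming ideal, GPS-aligned samples of exactly one cycle of a waveform; the discrete Fourier transform of the samples yields the magnitude and phase angle of the fundamental component (illustrative code, not a production PMU algorithm):

    import numpy as np

    def fundamental_phasor(samples):
        # Estimate the fundamental-frequency phasor (rms magnitude and phase
        # angle in degrees) from one cycle of synchronized samples.
        n = len(samples)
        spectrum = np.fft.rfft(samples)
        fundamental = 2.0 * spectrum[1] / n          # peak value of the fundamental
        magnitude_rms = abs(fundamental) / np.sqrt(2.0)
        angle_deg = np.degrees(np.angle(fundamental))
        return magnitude_rms, angle_deg

    # One cycle of a 60 Hz, 1.0 pu (peak) waveform lagging the time reference by 20 degrees
    fs, f0 = 1920.0, 60.0                             # 32 samples per cycle
    t = np.arange(int(fs / f0)) / fs
    v = 1.0 * np.cos(2 * np.pi * f0 * t - np.radians(20.0))
    print(fundamental_phasor(v))                      # approximately (0.707, -20.0)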
The calculated phasors at different locations are then transmitted through
communication channels to a central computer, termed a Central Data Concentrator
(CDC) or Phasor Data Concentrator (PDC). In this computer the phasors from the
different PMUs are aligned according to their time tags, allowing a global view of the
power system, with all phasors synchronized to the same time reference. Later on,
these phasors can be visualized, processed by applications (computer software)
specially designed for power system monitoring and control, transmitted to other
applications or stored for future analysis. The set of PMUs plus CDC can be referred to as a
Phasor Measurement System (PMS); the term WAMS (wide-area measurement
system) is also used.
Phasor measurements may result in more reliable and less restrictive special
protection systems. The complexity of using signals across large portions of an
interconnection, however, must be considered.
2.6.6 Special protection systems and PMS
The most common SPS take action through generation tripping and load tripping.
Generation tripping (often limited to hydro generation) is an effective and economic
option for system stabilization for loss of a generating plants outgoing lines or tie
lines in order to prevent the remaining transmission lines from overloading or
violating transfer limits. Generation tripping logic is heavily dependent on a systems
power flow patterns, type of faults and clearing time. SPS generally initiate generation
tripping by detecting transmission line outages. In the simplest type, generation
tripping is performed based on preset values, but the drawback to this method is that
the amount of generation to be dropped has to be set for the worst possible
contingency. This will typically result in the disconnection of more generation than
necessary. An improvement can be made by making decisions based both on line
status and on the type of disturbance so that the level of generation to be dropped with
a single line to ground fault is less than it would be with a multiphase fault. Several
approaches have been proposed for more accurately determining the amount of
generation to drop based on transient stability analysis. However, these methods are
difficult to apply in real-time applications due to their high computational burden.
Remote load tripping is a remedial action scheme that is conceptually similar to
generation tripping, but applied to the load portion of the system. It is also used to
reduce power mismatches in generation deficient systems. This is done to assure that
the system meets minimum operating standards, and to facilitate the quick restoration
of customer load. Remote load tripping schemes can be used to maintain system
frequency within a preset range.
The use of PMS can greatly improve traditional SPS. Several methods have been
developed for frequency stability control based on wide-area measurements.
2.6.7 Wide-area controls using phasor measurements
Many power companies and manufacturers are developing wide-area controls based
on synchronized positive sequence phasor measurements. Two examples are cited
here, one in the United States and one in France.
Bonneville Power Administration. BPA and Washington State University are
designing and testing a Wide-Area stability and voltage Control System (WACS)
providing a flexible platform for rapid implementation of generator tripping and
reactive power compensation switching for transient stability and voltage support
[25]. Features include synchronized positive sequence phasor measurements, digital
fiber optic communications from 500-kV substations, a real-time control computer
programmed in the G language running two algorithms in parallel, and output
communications for generator tripping and 500-kV capacitor/reactor bank switching.

In contrast with SPS that provide control for only pre-defined events and are burdened
by complexity and relatively high cost, WACS employs strategically placed sensors to
react to the power system response to arbitrary disturbances, in such a way that the
need for discontinuous action is determined and commanded, the power system
response is observed, and further discontinuous action such as generator tripping or
capacitor bank switching is taken as necessary. The WACS platform may also be used
for wide-area modulation control of generators and transmission-level power
electronic devices, and for control center operator alarms and monitoring.

Since March 2003 WACS has been installed in a laboratory and then at a control
center with real-time phasor measurement inputs from a PDC, and recording of
contact outputs. Based on the following, proof of concept was demonstrated, with
tuning and validation of the real-time controller hardware and software:
Large-scale simulations including playback of simulation results into real-time
code
Monitoring of real system performance over several years
Playback of archived data into real-time code, particularly for the June 14, 2004
massive generation outage event (4589 MW of generation tripped), which originated
from a short circuit near the Palo Verde Nuclear Plant west of Phoenix,
Arizona, that was not completely cleared for almost 39 seconds.
France. In France a new defense plan, better adapted to the operating conditions
beyond the year 2000, was considered by EDF for the following reasons:
to ensure a reliable, simultaneous separation of all out-of-synchronism areas;
to promote the restoration of generation/demand balances under steady-state, and
also transient, conditions immediately following the separation of out-of-
synchronism areas so as to avoid any further problems.
The first function is an improvement of the existing defense plan; it is still based on
the organization of the network into islandable areas. This improvement would be
obtained through a loss of synchronism detection based on a criterion much closer to
the actual phenomenon than voltage beats: a comparison between the voltage phases
measured within the elementary areas of the electric system. Figure 2.8 shows the
architecture best suited to this novel detection principle. Network phasor
measurements are pooled within a central computer. This computer then decides the
necessary separations and delivers the appropriate orders for line tripping.

Figure 2.8: Phasor measurements and controlled islanding in France.
The second function is more unusual. It aims at distributing load shedding over
portions of the system as necessary to speed up restoring balance after a separation.
The required load shedding algorithm (volume and distribution) implies knowledge of
the regional generation/demand imbalance state just before the incident; such
knowledge supposes a global view of the system that can be achieved within the
computer already responsible for the separation.
2.6.8 Summary of special protection systems
1. The utilization of special protection systems has been increasing all over the
world as a resource to prevent blackouts, as well as to minimize the consequences
of disturbances.
2. The restructuring of power systems has been leading to improved economical
operation of the system and higher system transfers. However, because these
higher transfers may also increase the severity of disturbances, SPS are becoming
more and more prevalent.
3. Special Protection Systems make possible the operation of power systems closer
to their limits, thus allowing the maximum exploitation of resources.
Misapplication of an SPS, however, can compromise security.
4. Special protection systems can assure system security in the eventuality of
transmission reinforcement delays, but should not be used as a replacement for
needed system infrastructure.
5. Prudent application of special protection systems can solve many operational
problems, such as equipment overloads, over and under-voltages, over and under-
frequencies, voltage and angular instability, in an economical way. However,
without careful design and with excessive reliance, SPS can overly complicate
system operation and compromise overall reliable operation of the network.
6. Adequate reliability can be achieved by redundancy and other design methods.
7. New SPS are being investigated in many countries, considering the utilization of
Phasor Measurement Units (PMUs) in order to make them more efficient.

2.6.9 Summary of dynamic performance best practices for special protection
systems
1. Special Protection Systems should be considered as an economical means to
prevent blackouts, as well as to minimize the consequences of disturbances.
2. Due to the importance of SPS for overall system performance, a thorough
reliability design is critical. Measurements, communication channels, and decision
making logic or algorithms should be given the necessary functional redundancy
and reliability.
3. Monitoring and arming capability is required at control centers. On-line security
assessment is valuable for arming SPS.
2.7 HVDC Links and HVDC Controls
High Voltage Direct Current (HVDC) power transmission is being used worldwide at
different high-voltage levels up to ±600 kV. This is an established technology that has
been used for over 50 years. Presently there are more than one hundred HVDC
systems in operation around the world. Anticipating an increased load growth ranging
from 10 to 15% per year, countries like China, India, and South Africa are considering
building ultra high voltage DC links at ±800 kV or above.
High voltage direct current links are of several types:
long distance or undersea radial lines connecting major generation centers to a
synchronous network,
long distance or undersea links within a synchronous network,
back-to-back links between asynchronous networks, and
(rarely) back-to-back within a synchronous network.
Combinations are possible, such as the multi-terminal Quebec-New England link that
can operate in parallel with the Quebec ac network while also linking the Quebec and
Eastern interconnections (Quebec and the Eastern interconnection are asynchronous
with respect to one another). The configuration influences power system dynamic
performance, with long distance HVDC in parallel within a synchronous network
being the most complex.
In this section we discuss power system dynamic performance associated with
traditional HVDC links employing line-commutated converters [3,14,74,75]. The
focus in recent years has been on developing advanced HVDC schemes with modular
designs and voltage source converter technologies using devices such as insulated
gate bipolar transistors (IGBTs) operating at a high frequency (e.g., 1350 Hz) in pulse
width modulation mode requiring small simple harmonic filters, as there are no lower
harmonics (e.g., no lower than 37th at 1350 Hz). Newer HVDC technologies can help
to improve power system dynamic performance and are described in more detail in
Chapter 3 (i.e., capacitor-commutated converters and voltage-sourced converters). For
an HVDC application, best practice involves tradeoffs between cost and performance
of the different types of converters.
2.7.1 HVDC applications
HVDC may be applied for several reasons [75]:
Economy for very long distance transmission where the lower cost of the HVDC
transmission line overcomes the high cost of terminal equipment (converters).
Life-cycle cost includes the lower losses of dc transmission lines and the higher
losses of dc terminal equipment.
Undersea links where the capacitance of cables limits ac transmission to around
40 km.
Connection between asynchronous networks, possibly of different frequency.
Most commonly, this is with back-to-back converters that are most economical
when ac transmission lines associated with the HVDC link are relatively short.
Reduction of short-circuit currents.
Higher power density and reduced visual impact.
Nearly independent poles with a bipolar link equivalent to a double circuit ac line.
Power increase on the healthy pole can compensate for outage of the faulty pole.
Though each individual transmission project will have specific reasons justifying the
choice of HVDC, the most common include the following:
Lower overall investment cost.
The potential for economical long distance transmission.
Lower losses in long distance applications. Typically, because HVDC comprises
active power flow only, it causes 20% lower losses than AC lines.
The potential for asynchronous interconnection. For example, it allows connecting
networks of 50 Hz and 60 Hz frequencies.
Higher system controllability with at least one HVDC link embedded in an ac
grid. In the deregulated environment, the controllability feature is particularly
useful where control of power flow and energy trading is needed.
Significant reduction in short circuit currents in some applications because dc
current can be rapidly controlled electronically.
Enhanced utilization of limited rights-of-way with reduced visual impact.
In addition, HVDC may offer the following benefits through judicious design of
converter controls:
Management of congestion
Frequency control following loss of generation, when linking two networks that
are otherwise isolated from each other. That is, if a frequency decline is sensed on
one side of the HVDC link, through control action the megawatt transfer over the
link can be quickly changed to increase/decrease megawatt flow to help stabilize
frequency.
Power oscillation damping can sometimes be effected through supplemental
controls on the HVDC.
Precise power transfer control between interconnected transmission areas during
certain emergencies. For example, HVDC does not overload for outage of a
parallel ac transmission path so HVDC links are not susceptible to cascading due
to thermal overloads.
Fundamental disadvantages of HVDC links include:

Difficulty and high cost of tapping a long distance link as a grid evolves.
Inherent complexity compared to ac transmission, including the addition of
modulation or other stability controls.
With parallel ac/dc and without special stability controls, there is not the inherent
support during outages provided by ac transmission. For outage of a line, power
(synchronizing power) is rerouted on parallel ac lines by physical laws. HVDC
power transfer, however, is maintained constant by the power electronic controls.
From a power system dynamic performance perspective for parallel ac/dc, an ac
voltage change or oscillation near one terminal may detrimentally be coupled to
other terminals via rapid dc voltage dynamics, with active and reactive power
injection changes at all terminals. In one event a local instability caused system-
wide oscillations via voltage-coupled dynamics on two dc links.
Conventional line-commutated converters draw high reactive current during ac
voltage disturbances. AC network faults may cause commutation failures with
temporary loss of dc power, possibly causing substantial acceleration of rectifier-
end generators.
Conventional line-commutated converters consume reactive power that is 50-60%
of the dc power. The reactive power is usually supplied by the shunt harmonic
filters (capacitive at fundamental frequency) and shunt capacitor banks. The
reactive power supplied by the filter banks drops in proportion to the ac system
voltage squared, contributing to possible voltage instability3 (a simple numerical
illustration follows this list).
Converter stations have high losses compared to ac stations. This is especially true
for the newer voltage-sourced converters.
A bipolar outage may be a very severe disturbance.
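A back-of-the-envelope illustration of the last two reactive power points (all numbers assumed, and converter reactive demand simplified to a fixed 55% of dc power): filter and capacitor output falls with the square of the ac voltage, so a voltage dip leaves a reactive deficit at the converter bus.

    def lcc_reactive_balance(p_dc_mw, v_ac_pu, q_factor=0.55):
        # Rough reactive balance at an LCC converter bus: converter demand is
        # taken as q_factor * Pdc, while the shunt filters/capacitors (sized to
        # cover that demand at 1.0 pu voltage) deliver Q proportional to V^2.
        q_converter = q_factor * p_dc_mw                  # Mvar absorbed by the converter
        q_filters = q_factor * p_dc_mw * v_ac_pu ** 2     # Mvar supplied by filters
        return q_filters - q_converter                    # net Mvar injected to the ac system

    # 2000 MW link: balanced at 1.0 pu, but a 10% voltage dip leaves a deficit
    print(lcc_reactive_balance(2000.0, 1.00))     # about 0 Mvar
    print(lcc_reactive_balance(2000.0, 0.90))     # about -209 Mvar drawn from the ac system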
As an example related to the converter behavior during ac voltage disturbances described
above, the design of the voltage dependent current order limiter on the Pacific HVDC
Intertie contributed to negative
damping during the 10 August 1996 cascading outage in western North America [76,
77].
Related to blackouts, an HVDC application often mentioned is breaking large
synchronous interconnections into smaller networks via HVDC links. This would
result in smaller interconnections that would contain the extent of the outages and
affect a smaller portion of the population. This solution could provide the flexibility
of providing aid during emergencies while preventing the propagation of disturbances
across very large areas. There are, however, many considerations in these approaches.
For example, supplemental dc controls are required to provide the mutual emergency
assistance of large synchronous interconnections.
As a possible application example following the 14 August 2003 failure, back-to-back
HVDC links between Ontario and Michigan are being considered. This would have
prevented the power surges around the Lake Erie loop and probably would have
reduced the extent of the blackout. The alternative of HVDC links to achieve smaller
interconnections or break large loops, however, is extremely expensive because of

3
This is one of the main reasons for the application of voltage dependent current order limits on the converters: to help ac voltage recovery by reducing the dc current order and thus the converter reactive power consumption.

converter cost, and is difficult in highly meshed networks with multiple ownerships.
Furthermore replacement of ac ties with dc could have the following adverse impacts
on reliability:
a. Increased dependence upon special HVDC controls (with a commensurate
reduction in reliability).
b. The severity of disturbances could be greater due to the loss of HVDC ties as
opposed to the system splitting with less loss of load.
2.7.2 Basic HVDC controls affecting power system dynamic performance
During normal operation the rectifier terminal typically controls the dc current to an
ordered value, and the inverter terminal controls dc voltage. In multi-terminal
applications, the largest inverter terminal typically controls dc voltage and the
remaining terminals control current.
As explained in references 3 and 120, there are several possibilities for dc voltage
control at the inverter, including constant extinction angle, constant dc voltage
control, and beta control. The traditional constant extinction angle (CEA) control
minimizes equipment cost and losses, but can affect power system dynamic
performance adversely. DC voltage or beta control involves operation with higher
than minimum extinction angle for increased controllability and stability. With dc
voltage control, for example, small fluctuations in ac voltage on the inverter side are
not transmitted to the rectifier end. For inverter side ac voltage depression of more
than a few percent, however, control limits are reached resulting in CEA control until
converter tap changing occurs. Moreover, converter tap changing may be detrimental
to voltage stability analogous to tap changing on bulk power delivery transformers.
The current order is typically derived from a power order by dividing the power order by the measured dc voltage. Implementation details such as dc voltage measuring time constants impact system dynamic performance. For large network disturbances that result in low ac voltage, voltage dependent current order limiters (VDCOLs) reduce the current order so as to lower the reactive power demand of the converter until ac voltage recovers. The reduction in dc current and power could, however, be detrimental to angle stability.
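As a point of reference, the relationships just described can be written compactly as below; the first-order measurement lag and the VDCOL break points are illustrative assumptions rather than any specific vendor implementation.

\[
I_{ord} = \frac{P_{ord}}{V_{dc}^{meas}}, \qquad V_{dc}^{meas} = \frac{1}{1 + sT_v}\,V_{dc}
\]
\[
I_{ord}^{lim} =
\begin{cases}
I_{ord}, & V_{dc}^{meas} \ge V_{high} \\
I_{min} + \left(I_{ord} - I_{min}\right)\dfrac{V_{dc}^{meas} - V_{low}}{V_{high} - V_{low}}, & V_{low} < V_{dc}^{meas} < V_{high} \\
I_{min}, & V_{dc}^{meas} \le V_{low}
\end{cases}
\]

Here T_v is the dc voltage measuring time constant referred to above, and V_high, V_low and I_min define the assumed VDCOL characteristic.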
In order to prevent dc link collapse for low ac voltages at the rectifier end, the dc current and dc voltage controls have a mode shift feature so that the inverter will take over current control for depressed rectifier voltage. The current order is reduced by around 10% (the current margin) during the mode shift.
Other controls such as a forced gamma ramp reduce commutation failures that affect
system dynamic performance. That is, if a commutation failure is detected, the
controls can transiently advance the inverter extinction angle (gamma) reference to
force an increase in gamma and thus minimize the possibility of subsequent
commutation failures. The voltage dependent current order limiter at the inverter will also help this process, as it reduces the current order in response to the voltage collapse during commutation failure.
With the microprocessor controls used for several decades, many variations of the
basic controls are possible. Controls are often customized for the application. A best
practice is to have detailed models suitable for large-scale transient stability
simulation and eigenanalysis. The effect on power system dynamic performance
should be integral to the evaluation of possible application of HVDC links.

2.7.3 HVDC controls to enhance power system dynamic performance
The controllability of HVDC links can be exploited to enhance system dynamic
performance. Custom-tailored controls are required for back-to-back links that are
frequently located at weak (low short circuit capacity) locations in ac networks.
Current and extinction angle control features to regulate ac voltage on both sides of
the link are relatively simple because both terminals are co-located. The dc controls
may also order switching of ac reactive power devices to aid ac voltage regulation.
HVDC links that are radial into an interconnection may have several controls to aid
system dynamic performance. These include fast HVDC power changes and
participation in the regulation of ac system frequency.
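As an illustration only, a minimal sketch of such a supplementary frequency control is given below in Python; the gain, deadband, step limit and power limits are assumed values, not those of any actual installation.

    def supplementary_frequency_control(p_order_mw, f_measured_hz,
                                        f_nominal_hz=60.0, gain_mw_per_hz=500.0,
                                        deadband_hz=0.02, max_step_mw=50.0,
                                        p_min_mw=0.0, p_max_mw=2000.0):
        """Adjust the HVDC power order to support receiving-end frequency.

        A frequency decline raises the dc power order (within limits); a
        frequency rise lowers it.  All parameter values are illustrative.
        """
        df = f_measured_hz - f_nominal_hz
        if abs(df) <= deadband_hz:                     # ignore normal small deviations
            return p_order_mw
        dp = -gain_mw_per_hz * df                      # low frequency -> more import
        dp = max(-max_step_mw, min(max_step_mw, dp))   # limit the change per control step
        return max(p_min_mw, min(p_max_mw, p_order_mw + dp))

With these assumed values, a 0.1 Hz frequency dip on the receiving system would call for a 50 MW increase in the ordered transfer, subject to the per-step and link limits.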
For long distance HVDC links in parallel with an ac network, the controls are more
complex. The simplest control may be at the inverter terminal (often the weaker load
end) to regulate ac voltage by extinction angle (gamma) control. At additional
equipment cost and losses, normal operation is at a higher than minimum extinction
angle so that ac voltage may be regulated similar to a SVC. But similar to a SVC at its
capacitive limit, the minimum gamma limit is reached for more than a few percent
depression of ac source voltage, thus limiting the ability of the converter to regulate
voltage for severe voltage dips.
Similar to ac voltage regulation, oscillation damping is possible by gamma
modulation. This essentially involves phase compensation of ac voltage regulation,
but sophisticated controls may be necessary for the various operating conditions.
Control mode shifting between voltage regulation and modulation is possible. Again,
the minimum gamma limit will be reached for large network disturbances resulting in
one-sided modulation, if the inverter operates in the CEA mode.
Discontinuous control (special protection system) for fast dc power changes following large disturbances can be fairly straightforward.
DC current modulation to damp oscillations or provide synchronizing power for
parallel ac transmission may be quite complex, particularly if one or more dc
terminals are at weak network locations. In the face of various operating conditions,
wide-area monitoring and supervision of control may be necessary. Current
modulation results in active and reactive power injections at all dc terminals. With
rectifier current modulation, ac voltage regulation may be necessary at inverter ends
connected to a weak ac network. The effect of mode shifts, communication failures,
etc. must be considered.
References 78 and 79 describe a successful current modulation implementation on the
two-terminal Pacific HVDC Intertie. Because of added complexities with additional terminals (two terminals at each end for a four-terminal link), because of a few adverse interaction incidents, and because of inadequate wide-area monitoring, modulation was decommissioned with the terminal expansion. Reference 80
describes extensive study with field tests of another modulation application; the
complexities are described and modulation was not implemented.
2.7.4 Summary of dynamic performance best practices for HVDC applications
1. Potential HVDC applications should be carefully studied for their effect on power
system dynamic performance. This requires preliminary detailed models of the
link, large scale transient stability simulation, and possibly small disturbance
analysis (e.g. eigenanalysis). Models should include dc line/cable RL or RLC

dynamic models, and converter control models applicable to the bandwidth of
electromechanical oscillations and short-term voltage stability. Companion
EMTP-type models are also desirable. In comparing dc and ac alternatives, long-
range needs as the network evolves must be considered.
2. Based on the studies, tradeoffs between higher cost and better performance must
be made. Decisions such as capability for higher than minimum extinction angle
operation, need for modulation controls, need for capacitor commutated
converters or the need to use voltage source converter technology, should be
reflected in the specification. Alternatively, the specification should define
required performance, with vendors responsible for choosing the cost effective
technology in their proposals.
3. Provision should be made in specifications to facilitate future addition of
supplemental controls to aid power system dynamic performance.
4. Specifications for HVDC links should require comprehensive models: both preliminary models and final models that are validated by simulator and commissioning tests. The cost of providing these models should be low when included as part of the overall specification with competitive procurement.
5. Wide-area monitoring and supervision requirements should be evaluated.
6. Specified commissioning tests should include tests related to power system
dynamic performance. Wide-area monitoring of response to staged tests may be
necessary.

2.8 Dynamic Performance Monitoring


Management of large power systems valued in the tens or hundreds of billions of dollars requires an information infrastructure to observe, interpret, and predict system dynamic performance. References 81-94 provide technical details, with reference 94 providing an international perspective. All of these functions require integrated use of
both measurements and models, in an overall process represented in Figure 2.9.
Dynamic performance monitoring, though chiefly based upon direct measurements,
must draw upon dynamic models to interpret observed data and to recognize
conditions of special interest. Cross validation of measurements and models is an
ongoing requirement that must be factored into the design and the operation of grid
management resources [82].

Figure 2.9: The role of measurement based information in planning and operations.
There are a variety of means by which dynamic information can be extracted from a
large power system. These include the following:
Disturbance analysis
Ambient noise measurements
- spectral and correlation analysis
- parametric modal analysis
Direct tests with
- low-level noise inputs
- mid-level inputs with special waveforms
- high-level pulse inputs
- network switching
For comprehensive results, at best cost, dynamic performance monitoring will draw
upon all of these methods in combinations that are tailored to the circumstances at
hand. A general paradigm for this is shown in Figure 2.10.

Figure 2.10: Integrated use of measurement and modeling tools.
The resources needed for performance monitoring of a large power system represent
significant investments in measurement systems, mathematical tools, and staff
expertise. Guidelines for this can be found in the collective utility experience of the Western Electricity Coordinating Council (WECC), in the western interconnection of the North American power system [81-83].
Wide area monitoring for a large power system involves the following general
functions:
Disturbance monitoring: characterized by large signals, short event records,
moderate bandwidth, and straightforward processing. The highest frequency of
interest is usually in the range of 2 Hz to perhaps 5 Hz. Operational priority tends to be very high.
Interaction monitoring: characterized by small signals, long records, higher
bandwidth, and fairly complex processing (such as correlation analysis). Highest
frequency of interest ranges to 20-25 Hz for rms quantities but may be
substantially higher for direct monitoring of phase voltages and currents.
Operational priority is variable with the application and usually less than for
disturbance monitoring.
System condition monitoring: characterized by large signals, very long records,
very low bandwidth. Usually performed with data from SCADA or other EMS
facilities. Highest frequency of interest is usually in the range of 0.1 Hz to perhaps
2 Hz. Core processing functions are simple, but associated functions such as state
estimation and dynamic or voltage security analysis can be very complex.
Operational priority tends to be very high.
Interaction monitoring is the most technically demanding of these functions, and the one most likely to provide early warnings of emerging trouble.
Experience in the western interconnection demonstrates that these monitor functions
are effectively and efficiently supported through coordinated recording of high quality
synchronous data at key locations across the grid. The resulting wide area
measurement system (WAMS) is a network of networks of about 1500 primary

signals that are continuously recorded in their raw form. These primary signals are the
basis for several thousand derived signals (e.g., MW and MVAr signals from voltage
and current phasors) that are viewed in real time, or during off-line analysis of power
system performance. Data sources are of many kinds, and they may be located
anywhere in the power system. This is also true for those who need the data, or those
who need various kinds of information extracted from the data.
2.8.1 Organization and management of WAMS data in WECC
A major test or disturbance on a large power system produces literally thousands of
data objects in the form of raw or processed measurements, modeling (simulation)
results, message traffic, staff activity logs, and reports. In many cases the signals
derived from measurements are analyzed in parallel with equivalent signals from
computer simulations.
The WAMS database for a major event on the power system may be far larger than
that for regular system operation. The analysis itself is usually much more thorough,
and produces a greater number of analysis products. Also, as insight into the event
evolves, the analysis will often extend to secondary data that are not usually
examined. It is necessary to smoothly manage the database as it expands, and to do so
in a manner that observes confidentiality agreements among data owners or system
managers. New computer technologies such as XML can be considered.
Organization of the WAMS database relies upon:
A standard dictionary for naming power system signals
A summary processing log indicating where the signals originated and how they have
been processed
Data management conventions that name and store the data objects according to the
system event
This aspect of WECC practice is built into workflow patterns that have evolved
among WAMS facility owners and various WECC technical groups. Some parts of it
have been automated into the Dynamic System Identification (DSI) Toolbox, as
indicated in Figure 2.11. The DSI Toolbox is the latest generation of software that has
supported BPA and WECC performance validation work since 1975 [87]. It is coded
in Matlab, and its core elements are distributed as freeware from WAMS websites
such as ftp://ftp.bpa.gov/pub/WAMS_Information/.

[Diagram: PMUs stream synchronized data to local and remote PDCs; archives, a real-time EMS interface, StreamReader utilities and other monitors feed integrated data to the DSI Toolbox, which produces analysis results and report materials.]
Figure 2.11: Flow of multi-source data within the WECC WAMS.


Other aspects of WAMS database management are incorporated into WAMS operation, or into the measurement system itself. A key element of this is the data source configuration file, which provides information for the following purposes (a schematic example is sketched after this list):
Converting raw data to engineering units. Includes initial corrections to known
offsets in the data.
Automatic naming of extracted signals. Includes substitution of generic names
to avoid revelation of data sources and ownership.
Logging of data source characteristics. Links to data servicing tools for repair,
adjustment, or other modifications that may be required immediately or at
some future time.
Standardizing naming of the data source. Links to dictionaries that contain
processing menus that have been customized for specific users and/or
operating environments.
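A purely hypothetical sketch of the kind of entry such a configuration file might carry is shown below in Python dictionary form; the field names and values are illustrative assumptions, not the actual WECC file format.

    # Hypothetical data-source configuration entry (illustrative assumptions only).
    signal_config = {
        "source_id": "PDC_A_PMU_07",                 # data source characteristics
        "generic_name": "BusVoltage_Site12",          # substituted name hides ownership
        "raw_to_engineering": {                       # conversion to engineering units
            "scale": 0.01,                            # counts to kV
            "offset": -0.3,                           # correction of a known offset
            "units": "kV",
        },
        "dictionary": "WECC_signal_dictionary_v3",    # standardized naming reference
        "service_notes": "CT polarity corrected 2004-06-14",
    }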
The same measurements that system operators see in real time may contain
benchmark performance information that is valuable for years into the future. Such
measurements may also be needed to determine the sequence of events for a complex
disturbance, to construct an operating case model for the disturbance, or as a basis of
comparison to evaluate the realism of power system models in general.

These observations imply two general rules for the organization of WAMS data:

Rule 1. WAMS data must remain retrievable and useable for many years after acquisition.

Rule 2. Procedures and toolsets for analysis of the WAMS database must be applicable to
simulated measurements produced by model studies.

The above two rules are actually basic requirements from which many other rules,
practices, or guidelines can be derived; e.g., from Rule 1 we can derive the following:

Rule 3. Any WAMS data or analysis product must be clearly tagged to indicate:
- The origin of the data
- The conditions under which the data were acquired
- The processing applied to the data
And, expanding upon Rule #2, for any large power system we have:
Rule 4. Procedures and toolsets for analysis of the WAMS database must
permit integrated analysis of
- Measured power system behavior as obtained from any recorder in
general use
- Simulated power system behavior as obtained from any modeling
program in general use
2.8.2 Performance monitoring and situational awareness
Some grid managers, chiefly independent system operators (ISOs, TSOs, RTOs) and
utilities engaged in long distance transmission, are developing substantial
measurement facilities. The critical path challenge is to extract essential information
from the data, and to distribute the pertinent information where and when it is needed.
Otherwise system control centers will be progressively inundated by potentially
valuable data that they are not yet able to fully utilize.

Figure 2.12: Oscillation buildup for the WSCC breakup of August 10, 1996.

Figure 2.13: Oscillation spectra for the WSCC breakup of August 10, 1996.
These issues were brought into sharp and specific focus by the massive breakup
experienced by the western interconnection on August 10, 1996. The mechanism of
failure (though perhaps not the cause) was a transient oscillation, under conditions of
high power transfer on long paths that had been progressively weakened through a
series of seemingly routine transmission line outages.
Buried within the measurements lay the information that system behavior was
abnormal, and that the system itself was vulnerable. Later analysis of monitor records,
as in Figures 2.12 and Figure 2.13, provides many indications of potential oscillation
problems [88]. Verbal accounts also suggest that less direct indications of a weakened
system were observed by system operators for some hours, but that there had been no
means for interpreting them. The final minutes before breakup represented a situation
that had not been anticipated, and for which no operational procedures had been
developed.
It is likely that standard planning models could not have predicted the August 10
breakup, even if the conditions leading up to it had been known in full detail [76, 81].
The U.S.-Canada Blackout on August 14, 2003 was immediately notable for its
extent, complexity, and impact. Among many other actions, the event triggered a
massive effort to secure and integrate regional operating records. Much of this was
done at the NERC level, through the U.S.-Canada Power System Outage Task Force
[20, 89].
Additional background information concerning the event was gathered by a group of
utilities that, collectively, had been developing a WAMS for the eastern
interconnection [90]. Like the WECC WAMS, WAMS East had a primary
backbone of synchronized PMUs that continuously stream data to phasor data
concentrators (PDCs) at central locations for integration, recording, and further
distribution.
WAMS data collected on August 14 provide a rich cross section of interarea dynamics
for the eastern interconnection. Much of this information is embedded in small ambient interactions, and is readily apparent through spectral analysis. Figure 2.14, for bus
frequency fluctuations at the American Electric Power (AEP) Kanawha River

substation, is typical of data that were collected as far away as Entergy's Waterford
substation near New Orleans.
Frequency of the spectral peaks shows a general downward trend, plus sharp
discontinuities that are associated with system events. This behavior suggests that the
swing frequencies associated with interarea modes were declining through
increasing stress and network failures on the power system [91]. Though oscillation
problems were not a significant factor in the August 14 Blackout, oscillation
signatures such as those in Figure 2.14 provide readily available information that can
be factored into situational awareness for real time operation of the overall grid.
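As a simple illustration of how a spectral history such as Figure 2.14 can be produced from recorded bus frequency, the sketch below applies Welch's method from SciPy to successive blocks of data; the file name, sampling rate and block lengths are assumptions for illustration only.

    import numpy as np
    from scipy.signal import welch

    # Hypothetical input: bus frequency samples recorded at 30 samples per second.
    fs = 30.0
    bus_freq = np.load("bus_frequency_record.npy")    # assumed file name

    block = int(10 * 60 * fs)                         # 10-minute blocks for the history
    for start in range(0, len(bus_freq) - block, block):
        segment = bus_freq[start:start + block]
        segment = segment - np.mean(segment)          # remove the dc offset
        f, pxx = welch(segment, fs=fs, nperseg=int(120 * fs))   # 2-minute sub-windows
        mask = (f > 0.1) & (f < 1.0)                  # inter-area swing-mode range
        dominant = f[mask][np.argmax(pxx[mask])]
        print(f"block at {start / fs / 60:6.1f} min: dominant peak near {dominant:.2f} Hz")

Tracking the dominant peak from block to block gives the kind of downward trend and event-related discontinuities described above.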
The August 14 Blackout provided considerable stimulus to the pre-existing Eastern
Interconnection Phasor Project (EIPP). Progress in this effort can be found at the
WAMS website http://phasors.pnl.gov/.

Data provided by Navin Bhatt, AEP

Figure 2.14: Spectral History for US-Canada Blackout of August 14, 2003: AEP Kanawha River bus
frequency, 12:00-16:10 EDT.
2.8.3 WECC requirements for monitor equipment
A monitor meets basic performance requirements if it satisfies the following technical
criteria:
Frequency response of overall data acquisition:
Is -3 dB or greater at 5 Hz.
Does not exceed -40 dB at frequencies above the Nyquist frequency (a limit of -60 dB is preferred)
Does not exceed -60 dB at frequencies that are harmonics of the actual power
system operating frequency (for design purposes, assume all frequencies in the
range of 59 Hz to 61 Hz)
Does not produce excessive ringing in records for step disturbances
Data sampling rate:

Overall frequency response requirements imply a minimum sample rate that is
4 to 5 times the -3 dB bandwidth of overall data acquisition
For compatibility with other monitors, the sample rate should be an integer
multiple of 20 or 30 samples per second (sps). A multiple of 30 sps is
preferred.
Numerical resolution and dynamic range:
Resolution of the analog-to-digital (A/D) conversion process must be 16 bits
or higher.
Scaling of signals entering the A/D conversion should assure that 12-14 bits
are actively used to represent them. Signals for which this scaling may
overload the A/D during large transients may be recorded on two channels, in
which one has less resolution but a greater dynamic range.
Measurement noise must be within the normal limits of modern instrument
technology. Noise levels for frequency transducers that are based upon zero-
crossing logic tend to be unacceptable.
Documentation for the data acquisition process:
Must be sufficiently detailed that overall quality of the acquisition system can
be assessed
Must be sufficiently detailed that acquired records can be compensated for
attenuation and phase lags introduced by the acquisition system
The monitor or monitor system stores data continuously and retains the last
240 hours (10 days) at all times without operator intervention. A monitor that
automatically erases the oldest file and stores the newest file will meet this
criterion if the buffer area is 10 days or more. If the monitor requires an
operator to remove old data to prevent storage overflow, a 60-day buffer is
required to accommodate typical practices with monitor systems.
The monitor is able to typically store event data files for 60 days without
operator intervention. Since events are inherently unpredictable, this is only a
typical value based on operating experience. If the monitor stores continuous
data, it does not have to store events.
The monitor demonstrates synchronization to Universal Time (UTC) to a 100 µs level or better. Synchronization to GPS based timing with suitable
technique is preferred. Other approaches may be acceptable.
Data access is by network, leased line, or dial-up with software for transfer,
storage, and data archiving.
Data formats are well defined and reasonable. Preferred formats for real-time
data transfer are those equivalent to or meeting IEEE standards IEEE 1344 or
PC37.118 or the PDCstream format for concentrator output. The preferred file
format is PhasorFile described in PhasorFileFormat.doc (*.dst) commonly in
use in the WECC.
Figure 2.15 represents the filtering requirements in graphical form and applies it to an
order 4 Butterworth filter that has a 12 Hz bandwidth and an output rate of 60 sps.

Figure 2.15: WECC filtering requirements and fourth order Butterworth filter response.
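As an aid in checking a candidate filter against the attenuation figures above, the following sketch evaluates the magnitude response of an analog fourth-order Butterworth filter with a 12 Hz bandwidth at a few frequencies of interest; it is only a checking aid, not part of the WECC requirement itself.

    import numpy as np
    from scipy import signal

    # Fourth-order analog Butterworth low-pass filter, -3 dB point at 12 Hz.
    b, a = signal.butter(4, 2 * np.pi * 12.0, btype="low", analog=True)

    # Evaluate at 5 Hz, at the 12 Hz cutoff, at the 30 Hz Nyquist frequency of a
    # 60 sps output stream, and at the 60 Hz fundamental.
    freqs_hz = np.array([5.0, 12.0, 30.0, 60.0])
    w, h = signal.freqs(b, a, worN=2 * np.pi * freqs_hz)
    for f_hz, mag in zip(freqs_hz, np.abs(h)):
        print(f"{f_hz:5.1f} Hz: {20 * np.log10(mag):6.1f} dB")

The printed values can be compared directly against the -3 dB, -40 dB and -60 dB figures listed above.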
These minimum requirements are indicated as sufficient for meeting WECC needs,
but they may not be seen as necessary in some cases. They are intended as quantified
guidelines for monitor evaluation, and they are deliberately stated in a simple manner.
There are many underlying assumptions, plus considerable room for engineering
judgment.
2.8.4 Summary of dynamic performance best practices for power system
dynamic performance monitoring
WAMS, as the term is used here, describes an advanced technology infrastructure that
is designed to develop and integrate measurement based information into the grid
management process. The measurement facilities augment those of conventional
SCADA, providing wide-area dynamic data at rates and resolutions that legacy
systems can rarely match. These measurements and the information extracted from them are expressly designed to enhance the operator's real-time situational awareness that is necessary for safe and reliable grid operation.
A WAMS should be engineered and operated to meet information needs of grid
managers, while observing proper concerns of data owners. Table 2.1 shows a
detailed list of specific WECC applications. Many objectives are implicit in this table,
and other electrical interconnections might state or prioritize their objectives
differently.

Table 2.1: Key Monitoring Applications.
Monitoring Application
Real time observation of system performance
Early detection of system problems
Real time determination of transmission capacities
Analysis of system behavior, especially major disturbances
Special tests and measurements, for purposes such as:
- special investigations of system dynamic performance
- validation and refinement of planning models
- commissioning or re-certification of major control systems
- calibration and refinement of measurement facilities
Refinement of planning, operation, and control processes essential to best use of
transmission assets.

Dynamic performance monitoring, though chiefly based upon direct measurements, should draw upon dynamic models to interpret observed data and to recognize conditions of special interest. Cross validation of measurements and models is an ongoing requirement. See also the following Section 2.9 on modeling and simulation.
Characterization and analysis of dynamic performance requires frequency domain
tools. Figure 2.16 shows results for what is believed to be the first system wide model
validation test in the western North American interconnection. This test established
that modeling of the 0.3 Hz north-south mode was strongly optimistic, and that 0.7 Hz
oscillations predicted by model studies were largely a modeling artifact [81].
Corrective actions were initiated, but many aspects of the modeling problem remained
through the summer of 1996 [76].
Frequency domain attributes of system oscillatory dynamics should be characterized
by the following information:
a) Mode parameters (eigenvalues). Usually characterized in terms of frequency and damping (expressed compactly in the note following this list).
b) Mode shape (eigenvectors). Usually characterized by the relative strength and
phasing of generator swings for each mode.
c) Interaction paths. The lines, buses, and controllers through which generators
exchange energy during oscillatory behavior.
d) Response to control. Modification of oscillatory behavior due to control action of
any kind.
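For item (a), the usual small-signal convention (given here only as a reminder of standard notation) is:

\[
\lambda = \sigma \pm j\omega, \qquad f = \frac{\omega}{2\pi}, \qquad \zeta = \frac{-\sigma}{\sqrt{\sigma^{2} + \omega^{2}}}
\]

so that, for example, a 0.3 Hz mode with a 5% damping ratio corresponds to \(\sigma \approx -0.094\ \mathrm{s^{-1}}\).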
Grid managers who are concerned with oscillatory dynamics should design and
operate their WAMS to provide this information, and their planning models should be
validated against it.

Figure 2.16: Modeled versus actual response of Pacific AC Intertie power to Chief Joseph 1400 MW
brake test on May 16, 1989 (0.5 second insertion).
2.9 Modeling and Simulation
Here we describe best practices in modeling and simulation. We describe current
modeling development efforts, particularly in the North American Western Electricity
Coordinating Council (WECC, formerly WSCC), to improve power system dynamic
performance and reduce the risk of cascading blackouts. This implies using improved
and validated models for studying scenarios and performing simulations in advance of
likely multi-contingency cascading events that should be anticipated and prevented.
2.9.1 Time frame of simulation studies
"In advance" could mean day-ahead evaluation or an earlier assessment, such as a critical summer assessment performed about three months in advance. An hour-ahead simulation is also an advance assessment; the data are necessarily those obtained from on-line state estimators, but the cases are often simulated off-line using the same programs, models and techniques as in day-ahead simulations.
In a day-ahead power flow or dynamic simulation, the state of the system is obviously much better known to the control area operator or reliability coordinator than in a three-month-ahead assessment. Load forecast, generator availability, forced- or
maintenance-outages, and operating reserves (including frequency-responsive
reserves) are all better known.
The month(s)-ahead evaluation facilitates planning of the coming critical operating
months, providing awareness of potential dangers. These studies determine the
approximate Operating Transfer Capability (OTC) of critical paths and operating
nomograms of simultaneous transfer limits. These evaluations, however, typically
assume that all transmission and generation will be in place except those that are
clearly known to be out of service in advance. Published OTCs therefore often add
operating margin to account for day-ahead operating limits which may include
unstudied conditions of outages and other limitations.
2.9.2 Models for simulations
Large-scale power flow and stability studies using the latest validated models and data
are performed regularly by operation and planning engineers using programs available
from vendors. Stability studies include transient and voltage stability studies. Modal
or eigenanalysis may be included.

Principal models and data used in the studies are:
Generating plant including:
Generators and excitation equipment
- Excitation equipment including power system stabilizer and limiters
- Generator backup protection including distance, overcurrent, voltage and
frequency relaying
- Synchronous condensers
Turbines and governors
- Prime movers and energy supply such as boilers; wind generation including
wind dispatch
- Governors including thermal and hydro governors, load limiters, load or MW
power controllers
Load modeling: including static loads and dynamic models of induction motors
Transmission network: including:
Transmission lines
Transformers
Capacitor banks, series and shunt, both switched and fixed
SVCs, STATCOMS
Phase shifters
HVDC (High Voltage Direct Current) links
2.9.3 Power flow programs and methodology for simulations
Power flow programs comprise 95% of the simulations by planners and operators and are the most important of the operator's tools. (Power flow programs, however, do not capture the time dependence and evolution of control actions and switching events.) In today's operating environment, it is important to recognize the part played by the market in the reliability of the system.
Programs used for power flow studies include:
Standard power flow
Optimal power flow (OPF)
Security Constrained OPF (SCOPF)
A further subdivision includes the following:
- Power flow with slack bus
- Power flow with distributed generation reference buses
- Power flow with distributed load reference buses

Special routines for post-transient power flows include:

- Power flow with special routines for individual generator pick up after a major
generation trip (used in PV and VQ voltage stability studies)
- Power flow with special routines used in the WECC adopting the principles of
the New Thermal Governor modeling for dynamic analysis, described later.
The standard power flow program determines the voltages on all buses and the flows
on all lines given the generator dispatch at all generator nodes (buses) and the loads at
all load nodes (buses). It solves a system of non-linear equations using algorithms
such as the Newton-Raphson method.
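For reference, the non-linear equations being solved are the familiar bus power balance relations; the notation below is generic and not specific to any particular program.

\[
P_i = V_i \sum_{k} V_k \left( G_{ik}\cos\theta_{ik} + B_{ik}\sin\theta_{ik} \right), \qquad
Q_i = V_i \sum_{k} V_k \left( G_{ik}\sin\theta_{ik} - B_{ik}\cos\theta_{ik} \right)
\]

The Newton-Raphson method then iterates

\[
\begin{bmatrix} \Delta\boldsymbol{\theta} \\ \Delta\mathbf{V} \end{bmatrix}
= -\,\mathbf{J}^{-1}
\begin{bmatrix} \Delta\mathbf{P} \\ \Delta\mathbf{Q} \end{bmatrix}
\]

where \(\Delta\mathbf{P}\) and \(\Delta\mathbf{Q}\) are the mismatches between scheduled and computed injections and \(\mathbf{J}\) is the Jacobian of the power balance equations, until the mismatches fall below a tolerance.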
The OPF or optimal power flow program can be a Security Constrained Economic Dispatch (SCED) or Security Constrained Optimal Power Flow (SCOPF) program that provides for the optimal dispatch of generators for the least production cost. The program uses linear programming or another algorithm to minimize the objective or cost function, considering equality and inequality constraints for MW and MVAr power balance, and voltage and loading limits of generators, lines, transformers, etc. "Security constrained" means optimization considering N-1 contingencies.
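In compact form (a generic statement of the problem, not any particular vendor's formulation), the security constrained OPF is:

\[
\min_{\mathbf{u}} \; \sum_{g} C_g(P_g)
\quad \text{subject to} \quad
\mathbf{f}(\mathbf{x}_0, \mathbf{u}) = \mathbf{0}, \quad
\mathbf{h}(\mathbf{x}_0, \mathbf{u}) \le \mathbf{0}, \quad
\mathbf{h}(\mathbf{x}_c, \mathbf{u}) \le \mathbf{0} \;\; \forall\, c \in \{\text{N-1 contingencies}\}
\]

where \(C_g\) is the production cost of generator g, \(\mathbf{f} = \mathbf{0}\) is the MW and MVAr power balance, and \(\mathbf{h} \le \mathbf{0}\) collects the voltage and loading limits in the base case (\(\mathbf{x}_0\)) and in each credible contingency (\(\mathbf{x}_c\)).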
Both dc and ac type OPF models are used in day-ahead and near real-time control of
the system. The DC OPF model is used in many of the new electricity market areas
that run on LMP (Locational Marginal Price) models, and is used primarily for its
speed and ease of solution convergence. LMP computation of the dc type, however,
inherently cannot model voltage and reactive power effects. OPF programs have been
continuously improved for robustness and solution convergence, and are now being
used for large systems. More work is required to convert dc OPF programs in market
environments to ac OPF programs.
OPF programs can utilize real-time state-estimators that can run every 5 minutes or so
to assess operating constraints. Security-constrained programs imply that they attempt
to contain system overloads (or congestion) by dispatching generators that
simultaneously provide system MW balance economically within the market island.
The SCOPF algorithm takes the next (N-1) outage within the island into account in its determination of price signals.
2.9.3.1 The seams issue
Areas using OPF (an issue also common to all bounded control areas) require access to data for adjoining areas. A failure to obtain such data presents the following "seams" issues:
Market or control area islands are isolated within the larger interconnected
network. Intertie paths or flowgates, and external system equivalents of uncertain
accuracy are modeled at the seams (boundaries) of these islands.
The next critical outage may well be in the neighboring company, or two areas away, and not internal.
Power flow cases with outages of critical heavily loaded EHV lines within the island may not converge.
The optimum economic dispatch could well take the dispatched unit to its MW
and MVAr limits, allowing no margin for critical dynamic responses during
power excursions.
A developing system collapse situation may involve a neighboring market area or

control area.4
To overcome the seams issue, near real-time or hour-ahead simulations should be for
a much larger area than the control area footprint. This has been standard practice in
many areas. An exceptionally wide area view of the system is planned to be
implemented in a few years in the entire WECC Interconnection with a real time
Western System Model (WSM) providing an integrated state-estimated model. This
solution could however be difficult or too expensive to implement in very large
systems such as the North American Eastern Interconnection. The WSM will have the
ability to archive state estimated power flow data and network topology for immediate
or later analysis. Network applications will include power flow, OPF, transient and
voltage stability analyses.
2.9.3.2 Slack bus and generator participation factors
A system slack bus is required for the mathematical solution of the non-linear
power flow problem. The slack bus generator sets the angle reference for the system
(or island). In the standard power flow, the incremental change in losses is taken up
by the slack bus during the solution. Control areas can have separate reference buses.
OPF models for LMP in market environments have also been developed with a
distributed load reference, mainly for dc OPF programs to handle losses.
Power flow programs can have either external or built-in special routines called post-
transient routines for pick up of generation for individual generators after a major
generation trip. If not provided with these special routines, the entire generation lost in
a system trip (plus the consequent redistribution of losses) will be picked up by the
slack generator bus. This routine is especially important in PV and VQ voltage
stability studies. Currently many routines, including some OPF programs, pick up
generation and losses in proportion to participation factors which are in proportion
to their rating. This is incorrect because the only mechanisms to pick up generation
after a trip are by frequency responsive governors or AGC.
A special routine is now used for post-transient power flow simulations in the
Western Electricity Coordinating Council (WECC) based on the improved simulation
of base-loaded units and load-controlled units in the new thermal turbine-governor
dynamic modeling approach described below. Generators that are unresponsive in the
dynamic solution are blocked in power flow simulation.
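The point about participation factors can be stated generically as follows (a generic droop-based allocation, not the exact WECC routine): for a generation loss \(\Delta P_{lost}\), only frequency-responsive units should pick up in the post-transient power flow,

\[
\Delta P_i = \frac{P_{rated,i}/R_i}{\sum_{j \in \text{responsive}} P_{rated,j}/R_j}\,\Delta P_{lost},
\qquad \Delta P_i = 0 \ \text{for base-loaded or load-controlled units,}
\]

where \(R_i\) is the per-unit speed droop of unit i. Allocating the pickup to all units in proportion to rating alone overstates the contribution of unresponsive units.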
2.9.4 Dynamic programs and methodology for simulation studies
Programs used for stability studies include the well-known classical programs for:
transient stability
voltage stability including PV, VQ static studies
dynamic voltage stability studies
Stability programs should have the required switching and special protection system
routines as applicable. The starting point for stability or dynamic simulation studies is
a base power flow case. The importance of the power flow simulation cannot be overstated because it is the key to operating the system within safe limits. If it is flawed and the basic generator dispatch and voltages are incorrect, the dynamic simulation that follows will also be flawed and unreliable.

4
This was apparent in the August 14, 2003 blackout in the eastern USA and Canada.

2.9.4.1 Model validation of previous blackouts and other events
The study of previous events, including large scale blackouts, provides us with a good idea of how good, or how mediocre, the current dynamic modeling and simulation
capability is. Validating the blackouts with current models gives us the opportunity to
improve modeling, and to introduce new procedures for testing and model validation
as necessary.
The three most recent major system blackouts in the North American
interconnections, i.e. July 2, 1996 and August 10, 1996 in the western interconnection
and August 14, 2003 in the Eastern Interconnection,5 have similarities in the
cascading events leading finally to fast collapse. A series of initial outages of
transmission lines and generators weakened the power system, imperceptibly at first,
but with increasing impacts later. It took just a few critical trips in one of the
blackouts, and more in the others, for the final outcome: a fast collapse.
In all three blackouts the investigations went through the phases of collecting
disturbance recordings, establishing the outage sequence from sequence of events
recorders, and the time-stamping of key events, namely trips of lines, generators and
loads. From disturbance monitoring and SCADA information, the dispatch of
generators, flows, voltages and frequency were determined.
Creating a credible base case for power flow from information from diverse sources
after the blackout is usually a major effort. If a single state-estimated solution is
obtained, e.g., from the WSM for the Western Interconnection, this effort would be
greatly facilitated. In order to achieve this goal and substantially reduce the burden of
case creation, a translation between the on-line models for real-time operations and
the off-line models for system planning needs to be developed and maintained.
2.9.4.2 Modeling validation of the WECC August 10th 1996 collapse
A case study example of blackout investigation, and model validation and
development is the simulation of the collapse of the western interconnection on
August 10, 1996. Simulation improvements occurred over many years as more
modeling information was obtained. The highly stressed western interconnection
collapsed when north-south intertie mode oscillations of 0.25-0.28 Hz involving the California-Oregon Intertie became negatively damped, resulting in growing
oscillations and cascading failure.
Initial stability modeling using the collected data and current programs failed to show
the oscillatory collapse. As more accurate data on the trips were obtained, different investigators in 1996-98 were able to obtain a collapse in the simulations by
varying different modeling parameters including AVR, PSS, and load model data.
Figure 2.15 compares recordings of the collapse with simulation after performing
model modifications including the controls of the Pacific HVDC Intertie, BPA AGC,
blocked large steam turbine governors, added high side voltage control of lower
Columbia hydro plants and modified load representation [76]. The paper discussed a
number of factors including details of the highly stressed initial system weakened by
key outages.

5
The load lost during these collapses was 11,700 MW, 30,500 MW and 62,000 MW, respectively.

Figure 2.15: Actual and simulated voltage at Malin Substation on the Oregon-California border [73].

2.9.4.3 Generator model validation by testing


Following the two 1996 collapses, the WECC developed the first comprehensive
generator model validation testing guidelines in the NERC regions [95], and required
validation of generator computer models for over 1500 generators greater than 10
MW. This included testing of the generator reactive power limits and dynamic model
validation. The dynamic modeling tests included the generator, AVR, PSS, governor,
etc. The tests for the reactive capability curve of the generator included
protective/control functions such as the over-excitation limiter (OEL) and under-
excitation limiter (UEL) settings. Figure 2.16 shows a typical test result of this type
[96]. In practice, testing at all UEL and OEL limit points has practical limitations and
this is an area requiring further improvement. There are many models that do not have
validated OEL/UEL settings. During the 1996 blackouts as well as the 2003 blackout, many generators tripped out at a critical time due to OELs operating when they should not have (Figure 2.17).

[Figure 2.16 plots reactive power (MVAr supplied and absorbed) against active power (MW), showing the overexcitation region, the safe operation region, the underexcitation region, and steady-state measurement points A, C and D on the capability curve.]
Figure 2.16: Capability curve and steady-state measurement points of a 100 MVA, 0.85 power factor
unit [96].

Figure 2.17: August 14, 2003 Blackout: generator trips before 16:10:38. Numerous generators trip
out due to overexcitation limiter operation and plant auxiliaries tripping out on low voltage supplies,
compounding the loss of active and reactive power supply to maintain voltages in critical areas.
2.9.4.4 Interim dynamic load model
The 1996 model simulations indicated that the load model had a significant influence
on the results given the highly stressed system conditions during the collapses. Four
years later, an event on August 4, 2000 renewed concerns that the Western
Interconnection was still inherently susceptible to oscillatory behavior in the 0.2 to 0.3
Hz range during peak summer stressed conditions. Figure 2.18 shows the poorly
damped system-wide oscillation that lasted for about 60 seconds when the tie line
between BC Hydro and Alberta tripped, separating Alberta from the system.
Validation studies indicated that neither the August 10, 1996 nor the August 4, 2000 event could be reproduced by simulations using the existing static load modeling in the WSCC. The studies had in common the need to include system wide dynamic
induction motor models in order to obtain a correlation between the actual recordings
and the simulations. An interim composite load model was finally implemented in
the WSCC after extensive modeling efforts. The model contained an 80% static part
and a 20% default induction motor dynamic model. The interim model was designed
primarily to capture the effects of dynamic induction motor loads for highly stressed
north to south flow conditions during summer peaks in the WSCC [97].

Figure 2.18: Oscillations in the August 4, 2000 disturbance when Alberta tie line tripped.
Extensive modeling efforts are currently underway to replace the interim dynamic
load model with a model that more accurately represents the loads in the various parts
of the WECC with particular attention to the modeling of air-conditioning loads in the
southern areas of WECC.
2.9.4.5 New thermal governor model
Simulations of the frequency excursions in the 1996 collapses and other events indicate that only about 40% of the simulated governor response actually occurs.6 Recordings of generator trip tests show this discrepancy; see Figure 2.19 [98]. The
principal reason for this large discrepancy was that a large percentage of thermal unit
governors were not properly modeled. (Thermal plants include conventionally fired
steam, nuclear steam, simple cycle gas turbine, and combined cycle gas turbine
plants.) While these units were operated in practice as base-loaded, load-limited or temperature-limited units, existing governor modeling practice incorrectly assumed that all governors respond to system frequency deviations in accordance with a 5% speed droop governor characteristic (per NERC criteria). Additionally, many
units operated with power (load) controllers, primarily set at a fixed MW loading,
were also modeled as responsive, ignoring the fact that units would return to their
MW setting value after a deviation (Figure 2.20).
The development and validation work for the new turbine-governor modeling
approach included the creation of a WECC-wide system database based on
disturbance monitoring and SCADA recordings of staged tests and several large
disturbances. Figures 2.20 and 2.21 show unit and system responses with the new
models compared with the old models. Figure 2.22 shows the validation with a system
event. The new model is currently being used in all operation and planning studies in
the WECC [98, 99].
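The distinction between a frequency-responsive unit and one whose load controller restores a fixed MW setting can be illustrated with the minimal simulation sketch below; all parameter values are illustrative assumptions and the model is far simpler than the actual WECC governor models.

    import numpy as np

    def unit_response(with_load_controller, sim_time_s=300.0, dt=0.1):
        """Unit MW output after a sustained -0.1 Hz frequency deviation.

        A 5% droop governor raises output; a slow load controller (if present)
        integrates the deviation from the MW set point and pulls output back.
        All numbers are illustrative assumptions.
        """
        p_rated, p_set = 200.0, 100.0      # MW
        droop, f_nom = 0.05, 60.0
        t_gov, t_lc = 5.0, 60.0            # governor and load-controller time constants, s
        df = -0.1                          # sustained frequency deviation, Hz

        p, lc_bias = p_set, 0.0
        for _ in np.arange(0.0, sim_time_s, dt):
            target = p_set + lc_bias - (df / f_nom) / droop * p_rated
            p += dt / t_gov * (target - p)             # governor/turbine lag
            if with_load_controller:
                lc_bias += dt / t_lc * (p_set - p)     # slowly restore the set point
        return p

    print(unit_response(False))   # settles near 106.7 MW (droop response retained)
    print(unit_response(True))    # drifts back toward the 100 MW set point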

6
This phenomenon had also been noticed in the Eastern Interconnection and other systems.

Figure 2.19: Discrepancy between existing model simulations (blue plots) and system frequency
recordings (green plots) for the SW and NW system test trips in WECC on May 18, 2001. All AGC
were switched off during the tests.

Figure 2.20: Simulations with the new governor model of a thermal unit with a fast load controller
(red plot) compared with its May 18th test SCADA recordings (green plot). Existing (old) model
simulations (blue plot).

Figure 2.21: System frequency response simulations with the new thermal governor model (red plot)
compared with May 18th test recordings (green plot) for the NW 1250 MW trip, all AGC switched off.
Existing (old) model simulations (blue plot).

Figure 2.22: Governor model verification, 2000 MW Colstrip trip in Montana on August 1, 2001.
Existing (old) model simulation (blue plot), recordings (green plot), and new thermal governor model
(red plot).
2.9.4.6 Frequency responsive reserves and traditional spinning reserves
The new thermal governor modeling concepts also indicate other market
implications that affect reliability. The everyday understanding of spinning reserves
in market terms assumes that if, for example, a 200 MW generator is scheduled at 100
MW, the spinning reserve immediately available in that unit is 100 MW. This is
obviously incorrect because a 200 MW unit picks up less than 10 MW for a 0.1 Hz
deviation of system frequency, assuming a 5% droop governor action. The remaining
part of the available spinning reserve can only be automatically obtained if an AGC signal is sent to the unit. (Note that only selected units are on AGC at a given moment; not all units carrying spinning reserve are on AGC.) Alternatively, the operator should be
instructed telephonically (with delays) to ramp up the generation.
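The arithmetic behind the 200 MW example is simply the droop relation:

\[
\Delta P = \frac{\Delta f / f_0}{R}\,P_{rated} = \frac{0.1/60}{0.05}\times 200\ \mathrm{MW} \approx 6.7\ \mathrm{MW},
\]

so the remainder of the scheduled headroom is not delivered automatically by governor action alone.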
2.9.4.7 Voltage stability simulations
Deterioration of system voltage support by reactive power is difficult for an operator to detect. An operator can easily be misled by viewing close-to-normal voltages on the control desk instrumentation as the voltage collapse progresses. As more comparatively inexpensive HV shunt capacitor banks support the voltage profile to near nominal values, operators can be lulled into a false sense of security. This is illustrated in Figure 2.23 from the July 2, 1996 Western System collapse [99].
However, as the voltage collapse progresses, line loadings increase and voltages fall. The static MVAr support decreases in proportion to the square of the voltage, with disastrous consequences (see Figure 2.24).
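The voltage sensitivity referred to here is simply

\[
Q_{shunt} = B_C\,V^{2}, \qquad \frac{Q_{shunt}(0.9\,V_{nom})}{Q_{shunt}(V_{nom})} = 0.81,
\]

so a 10% voltage depression removes roughly 19% of the reactive support from fixed capacitors and filters, precisely when it is most needed.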

Figure 2.23: Recording of July 2, 1996 collapse in the western interconnection, showing the difficulty for operators in recognizing oncoming voltage collapse.

Figure 2.24: July 2, 1996 collapse in the western interconnection: sudden collapse towards the end.
PV and VQ methodology are used for voltage stability assessment and are widely
described in the literature [100]. A report entitled "Voltage Stability Criteria, Undervoltage Load Shedding Strategy and Reactive Power Reserve Monitoring Methodology" was approved by the WECC in 1998. This document formally
established the first set of reliability criteria applicable to WECC member-systems
related to required reactive power margins for establishing operating transfer
capability limits of major EHV paths.
Many papers, mostly dealing with small systems, have compared static PV/VQ
studies, and dynamic voltage stability solutions. The practical problem for
operating/planning engineers is access to a practical and credible computer analysis
program that has been validated for consistent performance and can correlate
accurately static and dynamic voltage stability analyses in large systems. Although the
industry has made improvements in understanding the related phenomena, further
work in the areas of load modeling is needed for successful dynamic studies. Model
data should use validated test data for OELs, generator capability curves, SVCs and
STATCOM models.
Other practical devices to assist operation engineers include color displays of voltage
and generator reactive power contour maps. These will highlight the areas of
voltage depression and reactive power deficiency in generators to give the operator a
better feel of impending voltage stability situations. With additional displays of LMPs
(locational marginal prices) as well, contour maps can give operators in those markets

a better perspective of potential congestion scenarios, which often go hand-in-hand
with potential voltage instability scenarios. An AC power flow or AC OPF is necessary for this purpose.
2.9.4.8 Predicting future cascading events is infinitely more difficult than validating
previous known blackout events
In practically all large blackouts, initial outages of transmission lines and generators
weakened the power system, sometimes many hours prior to the collapse, followed by
critical outages leading to the final collapse. Often this was a combination of line and
generation outages. Planning for such a multi-contingency cascading blackout with
any degree of certainty is impossible because, in a group of 200 lines, transformers and generators in a critical area, there are nearly 20,000 possible N-2 outages. Even assuming that only 10% of these pose a real danger, we still have 2,000 possibilities for N-2, and an exponentially larger number for N-3 and beyond. Certain critical outage search routines
have been discussed in the technical literature, but we are not aware of actual usage of
these by operators. These search routines mainly involve large numbers of AC power
flow cases that can be mechanically run. The problem is in organizing the huge volume of data for intelligent analysis. Given the low probability of multi-contingencies, we have to
rely on operating/planning engineers experience for combinations of potential
cascading candidates. A list of onerous double-contingency outages is usually selected
for establishing secure operating transfer capability limits of major EHV paths. As
discussed previously, a day ahead simulation would be more accurate than a 3-
month ahead analysis. Reference 17 discusses an advanced tool for analyzing multiple
cascading failures.
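The count quoted above follows directly from the number of combinations of 200 elements taken two at a time:

\[
\binom{200}{2} = \frac{200 \times 199}{2} = 19\,900 \approx 20\,000,
\]

and the corresponding N-3 count, \(\binom{200}{3} = 1\,313\,400\), shows how quickly exhaustive screening becomes impractical.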
A new uncertainty is the growing penetration of highly variable wind generation
described in Section 2.12.
2.9.5 Summary of dynamic performance best practices for power system
modeling and simulation
1. Validation studies of past blackouts have shown that accuracy of dynamic and
power flow models and data are a key factor in operating systems within safe
limits.
2. Closer to real-time simulation capability should be developed to reduce
uncertainties and improve operating decisions. In real time simulations, it is
critical to have the correct state-estimated system topology and data for the
largest footprint possible, including adjoining control areas where the next
critical outage may be located.
3. Open, interconnection wide, exchange of reliability-related models and data is
required for both off-line and on-line simulations.
4. Models and data should be validated by staged tests and by disturbance
monitoring during system events. This should be an ongoing priority. Examples
include excitation limiters, and governor modeling including load controllers.
5. Correct representation of controls including special protection systems,
impedance relays, underfrequency and undervoltage relays should be also
included in modeling for dynamic studies.
6. Generator reactive power limits used in dynamic and power flow simulations
should be field tested and included in the modeling. These limits should be
verified and updated in close to real time as plant problems occur.

7. Power flow simulations, particularly outage simulation, should be benchmarked
against more detailed time domain simulations, and operation monitoring.
8. Simulation programs should be continually developed and validated to improve
model detail and accuracy. Vendors and users should cooperate in development.
9. Compatibility and cross-referencing of off-line and on-line power system
models and data should be developed and maintained.
10. Commissioning tests for power plants, HVDC links, SVCs and similar
equipment should require validation of dynamic models and data. On-going
performance monitoring is critical to ensure fidelity of models and data.
11. Composite load models, including dynamics, should be developed by laboratory
and field tests, and monitoring. These should be validated by monitoring during
major system disturbances.
12. Performance of major power plants should be monitored during frequency
excursions and other disturbances.
13. Simulations will never be completely accurate, but should be considered a tool
for more accurate system operations for reliability, for insight into power system
dynamic performance and modeling accuracy, and to support future
improvement in system performance and modeling.
2.10 On-line Dynamic Security Assessment and Other Operator Aids
On-line dynamic security assessment reduces the risk of cascading blackouts. In this
era of restructuring and deregulation around the world, the diversity of operating
conditions that exist creates a high degree of uncertainty with regard to the scenarios
and contingencies that need to be examined. The conventional operating guides from
off-line studies on likely operating conditions and possible contingencies do not cover
all the operating conditions and situations that operators face in real-time settings.
This situation is further exacerbated by the large number of market based transactions
that occur on the transmission network, by operation of the network closer to its limit,
and by unanticipated outages of equipment. As a result, operators are often faced with
operating situations which are not adequately addressed by the operating guides.
Hence, it is imperative to calculate system operating limits (SOL) and interconnection
reliability operating limits (IROL) [102] using near real time conditions. This would
represent the network topology and operating conditions seen by the operator in the
control center and the limits calculated using this data would more accurately
characterize the existing system conditions and also allow the operator to evaluate the
impact of certain operating decisions with changing operating conditions.
Consequently, dynamic security assessment conducted in a near real time setting
would greatly enhance operational decision making capabilities and significantly
reduce the risk of cascading blackouts. These needs are essentially the same as the
scenarios discussed in a panel session presented in 1987 [103], when restructuring was
not yet a factor.
2.10.1 Security assessment by direct observation
Operators can assess security for the current operations (but not for potential
contingencies) by observation and computer-aided analysis of measurements. For
example, the measurement and analysis methods from Section 2.8 can be applied in
an on-line, close to real-time environment.

2.10.1.1 Example operator aid for oscillatory stability
The Electric Reliability Council of Texas, Inc. (ERCOT) is a Regional Reliability
Organization (RRO) in North America. The overall generation capacity is
approximately 70,000 MW and the summer peak demand exceeded 60,000 MW in
2006. An important characteristic of ERCOT is that it is entirely located in the state of
Texas without synchronous connections to other areas.
The total circuit distance from far west Texas to far south Texas is approximately
1290 km (800 miles). The transmission system for the majority of this circuit distance
consists of multiple 345 kV circuits. Large portions of west Texas are sparsely
populated, and generation in west Texas often exceeds the load. Thus west Texas is
often a net exporter of power to the rest of ERCOT. The nearest large load center is
the Dallas and Fort Worth (DFW) area, about 580 km (360 miles) away. Under
certain contingencies and transfer levels from west Texas, slow damping of
oscillations has been observed as synchronous machines in far west Texas swing
against machines in far south Texas. The operator aid defines the maximum
stability-based transfer limit considering the two most important elements affecting
damping: the specific transmission circuits in service and the number of generator
power system stabilizers (PSS) in service. Depending on the circumstances, the actual
transfer limit could be thermal or stability based. This aid lets operators know
where they stand and, consequently, when to take action should the actual
transfer approach an unsafe level. The operator aid is based on off-line dynamic
simulations, and on operating experience and field measurements.
2.10.1.2 Example operator aid for voltage stability
It is well accepted that monitoring of voltage levels alone is insufficient for determining
voltage security. In particular, a power system can be insecure if reactive power reserves
are low, even if voltages are within range. Several control centers have developed on-line
monitors of reactive power reserves, especially automatically controlled reserves from
generators, synchronous condensers, and SVCs/STATCOMs. An example from
ERCOT follows.
The DFW area is a large load (over 20,000 MW at peak), with about 10,000 MW of
local generation that provide reactive power for the area load. Mechanically switched
capacitors have been used extensively in the area to maintain voltages at an acceptable
level. Unfortunately, that has resulted in P-V curve nose points at voltages little
different from normal operating voltages in many cases, as illustrated in Figure 2.26.

Figure 2.26: P-V curve showing voltage in operating range close to limit.
Thus, operators gain little insight in terms of how close the DFW area might be to
voltage collapse from monitoring bus voltages. In order to provide the operators with
useful information, the minimum reactive power margin that must be maintained to
survive NERC Category B and C contingencies is determined. All active generators
that consistently provide reactive power to the DFW area load are identified. Next, the
limiting contingencies and the pre-contingency reactive power capacity from the
identified generators necessary to maintain voltage stability post contingency are
determined. The operator aid is a screen that shows the required reactive power
margin needed to survive the limiting contingency and the actual reactive reserve
power available from the identified generators. The actual reactive power reserve
available is recomputed and displayed every few seconds. Figure 2.27 shows an
operator display. Thus, operators have some sense of how close they might be to a
voltage stability problem, and have the opportunity to take corrective actions before
the system reaches a non-recoverable state.
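A minimal sketch of the reserve computation behind such a display follows. It assumes the identified units' reactive output and capability are available from SCADA or the state estimator, and that the required margin has been pre-computed off-line for the limiting contingency; the function and variable names are illustrative and are not those of the actual ERCOT application.

    def reactive_reserve_status(units, required_margin_mvar):
        # units: list of (q_output_mvar, q_max_mvar) for the generators identified
        # as consistently supporting the load area.
        available = sum(q_max - q_out for q_out, q_max in units)
        return available, available >= required_margin_mvar

    # Example refresh, repeated every few seconds with new telemetry:
    # reserve, ok = reactive_reserve_status([(120.0, 300.0), (80.0, 150.0)], 200.0)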
The required reactive power reserves ideally should be determined from on-line
dynamic security assessment as next described.

Figure 2.27: Reactive power reserve monitor display.
2.10.2 Dynamic security assessment for potential contingencies
On-line dynamic security assessment requires an integrated analysis of small-disturbance
angle stability, transient stability, and voltage stability [102] for a given
operating condition and a set of probable contingencies. In addition to judging the
stability or instability behavior in each case, it is essential to derive the
corresponding operating limits, detect the presence of undesirable oscillations, or
determine the appropriate reactive power margin, depending on the type of analysis
being conducted. The final aspect of the analysis is the translation of the security
assessment into preventive control or corrective control decisions.
The small-disturbance angle stability problem is conventionally solved using an
eigenvalue approach. Commercial products apply several different eigenvalue
approaches to a state-estimation-based power flow case in an on-line setting. A
versatile on-line small-disturbance angle stability tool should include the following
capabilities:
• Computation of all modes in a system or in single-machine-infinite-bus
equivalents of all generators.
• Computation of the modes within a specified range of frequencies and/or
damping (ideal for computation of interarea modes).
• Computation of the modes associated with specified generators (ideal for
computation of local modes).
• Computation of small-signal stability indices. The computation can include the
entire spectrum, or a specific group of modes defined by frequency ranges or by
participating generators.
• Full modal characteristics (mode shapes and participation factors) for the
modes computed.
• Time and frequency response computation (ideal for control design/tuning and
model validation).
These are essential features that would significantly enhance operator decision
making in an on-line setting with regard to small-disturbance rotor angle stability.
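The following sketch, using standard numerical libraries, illustrates the core eigenvalue computation behind such tools for a small linearized model. It is not taken from any commercial product, and the state matrix A is assumed to have been built elsewhere from the dynamic model.

    import numpy as np

    def modal_analysis(A):
        # Eigen-analysis of the linearized system dx/dt = A x.
        eigvals, V = np.linalg.eig(A)
        W = np.linalg.inv(V)              # rows of W are the left eigenvectors
        modes = []
        for k, lam in enumerate(eigvals):
            freq_hz = abs(lam.imag) / (2.0 * np.pi)
            damping = -lam.real / abs(lam) if abs(lam) > 0.0 else 1.0
            participation = np.abs(V[:, k] * W[k, :])   # participation factors
            modes.append((freq_hz, damping, participation))
        return modes

    # Example screening: flag poorly damped modes between 0.1 and 2 Hz with
    # damping ratio below 5%:
    # alarms = [(f, z) for f, z, _ in modal_analysis(A) if 0.1 <= f <= 2.0 and z < 0.05]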
The other approach to on-line small-disturbance angle stability determination is via
wide area measurements [104], applying signal processing approaches to identify the
modal content and its characteristics. This approach holds great promise and has the
potential for a wide range of on-line security applications.
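As a conceptual illustration of the measurement-based approach (not a production algorithm), the sketch below estimates the frequency and damping of a single dominant mode from a recorded ringdown using peak spacing and the logarithmic decrement. Practical WAMS applications use more robust identification methods such as Prony or matrix-pencil analysis.

    import numpy as np
    from scipy.signal import detrend, find_peaks

    def ringdown_mode_estimate(y, fs):
        # Rough estimate of the dominant oscillatory mode in a clean, decaying
        # ringdown signal y sampled at fs Hz. Returns (frequency in Hz, damping ratio).
        y = detrend(np.asarray(y, dtype=float))
        peaks, _ = find_peaks(y)
        if len(peaks) < 3:
            return None
        freq_hz = fs / np.mean(np.diff(peaks))
        delta = np.mean(np.log(y[peaks][:-1] / y[peaks][1:]))   # logarithmic decrement
        zeta = delta / np.sqrt(4.0 * np.pi ** 2 + delta ** 2)
        return freq_hz, zeta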
On-line transient stability tools have had a long history of development. Currently
available commercial versions of on-line tools which are fully integrated with existing
EMS typically share the following characteristics [105]:
• The tools are based on conventional time domain analysis and are capable of
the full modeling detail available in conventional stability packages.
• Most commercially available packages have the ability to filter a large set of
contingencies using techniques that are computationally fast and select only
critical contingencies for detailed analysis using the time domain tools. This
feature is critical in meeting the stringent computational-time requirements in
an on-line setting.
• Another important feature that is essential in an on-line setting is the
availability of a multiprocessor architecture to conduct the time domain
simulations on the multiple critical cases identified by the filter.
• Depending on the phenomena prevalent in the transient stability analysis, the
simulation time for the time domain analysis will vary. This could impose a
severe burden on the computational-time requirement. Hence, a highly
desirable feature is the ability to stop a time domain simulation which is
clearly stable or unstable. This requires a conventional time domain package
to be augmented with additional capabilities to classify clearly stable and
unstable cases. In many cases a transient energy function based approach is
incorporated within the time domain package to classify the case (a simple
termination criterion is sketched below).
• A typical time domain simulation only indicates whether the scenario
considered is stable or unstable. In an operational setting, in order to guide the
operator it is important to obtain limits in terms of parameters that can be
measured. These typically include generator MW outputs or critical interface
flows. A single time domain simulation does not provide these limits. As a
result, the simulations have to be repeated on cases where the desired
parameters are varied, and the limits derived using either numerical or
analytical sensitivities of these critical parameters.
• In addition to the stability information, the on-line time domain simulation
should also have the feature of monitoring and displaying other parameters
such as voltage magnitudes, line flows, reactive power margins, etc. In many
instances reliability criteria with regard to these parameters may be violated
even if the system is stable, or before it becomes unstable.
• The on-line transient stability analysis tool should also have the capability of
allowing the operator to examine system behavior under future conditions that
will be reached given the present operating conditions. This requires the
ability to change system conditions, examine the critical contingency list
(which could have changed), and evaluate the new operating limits to
determine whether a critical state is being reached.
• If the analysis of future scenarios indicates a potential problem with regard to
security, then the on-line tool should also have the capability to provide
candidate options for preventive control and corrective control. These would
include generation rescheduling, generation tripping, or load shedding. Special
protection systems could also be armed based on the control action suggested.
These features characterize the key ingredients needed to reduce the uncertainty in the
operating conditions and scenarios analyzed in on-line dynamic security analysis, to
bring the actual system conditions into play, and to provide the operator with enhanced
decision-making capabilities as the real-time system evolves.
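The following is a minimal sketch of the kind of early-termination screening referred to in the list above. The thresholds are illustrative assumptions, and commercial tools typically rely on transient energy function methods rather than this simple angle-spread test.

    import numpy as np

    def early_termination_check(t, angles_deg, speeds_pu,
                                spread_limit_deg=360.0,
                                settle_band_deg=5.0, settle_window_s=2.0):
        # t: time axis (array); angles_deg: (n_steps x n_gen) rotor angles relative
        # to the center of inertia; speeds_pu: corresponding speed deviations.
        spread = angles_deg.max(axis=1) - angles_deg.min(axis=1)
        if spread[-1] > spread_limit_deg:
            return "unstable"      # machines have clearly pulled apart
        recent = t >= t[-1] - settle_window_s
        if np.ptp(spread[recent]) < settle_band_deg and np.abs(speeds_pu[recent]).max() < 1e-3:
            return "stable"        # swings have essentially settled
        return "continue"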
An effective on-line voltage stability analysis tool also requires specific features for
aiding decision making during an emergency. These features include:
• Ability to screen contingencies and identify critical cases.
• Ability to perform security analysis and determine if the existing on-line
system is secure for all critical contingencies, allowing all existing transactions
to take place.
In addition, voltage stability analysis requires examining the problem from several
different angles. These include:
• Voltage magnitude declines/rises
• Voltage stability: small disturbance and large disturbance
• Reactive power reserves
The analysis approach for each type of voltage problem requires power flow based
techniques, quasi-dynamic simulation, or full time domain simulation.
The limits derived from the security analysis would also have to be translated into
appropriate remedial actions if the analysis indicates that the system does not have
sufficient margin.
Several of the features identified in the on-line transient stability simulation would
also greatly enhance the voltage stability analysis.
A recent CIGRE document [106] provides a detailed account of the present state of
the art in on-line dynamic security assessment tools. Reference [107] is another good
article on this subject.
2.10.3 Summary of best practices for on-line dynamic security assessment
1. Ability to directly observe critical system measurements in an efficient manner
including, for example, reactive power reserves.
2. Ability to represent the area being analyzed and external areas in the real-time
environment, to accurately represent the dynamic characteristics of the entire
actual system.
3. Ability to analyze state estimated data with frequent updates for changing system
conditions.
4. Ability to transform bus-breaker format EMS data to bus-branch format power
flow and dynamic data to facilitate system studies in the real-time environment.
5. Operator flexibility to trigger dynamic security assessment when needed.
6. Automatic triggering of dynamic security assessment when large changes in the
system occur.
7. Automatic triggering of dynamic security assessment on fixed cycle.
8. Ability to filter critical contingencies.
9. Ability to run several contingency cases for the same operating condition in
parallel.
10. Option to conduct dynamic security assessment for change cases in order to
evaluate sensitivity to changing operating conditions and determine security limits
in terms of critical operating parameters.
11. Integrated ability to conduct transient, voltage, and oscillatory security analysis.
12. Translation of security limits in terms of system operating parameters as
determined by available reliability criteria.
13. Features for early termination of time domain simulations in situations where the
system is clearly stable or clearly unstable.
14. Automatic determination and arming of preventive control and corrective control
following occurrence of insecure contingencies.
2.11 Wind Generation

2.11.1 Introduction
Installed wind power generation capacity continues to increase at a fast pace.
Internationally, one of the primary drivers has been the Kyoto Protocol.
Environmental concerns have led to governmental policies promoting
renewable generation technologies such as wind generation. By the end of 2005, the
total world-wide installed capacity exceeded 59 GW. Approximately 70 % of this
total is installed in Europe and the rest in the U.S., primarily California, Texas and
North Central states such as Iowa. Based on current trends, the installed capacity is
projected to double within the next five years, with the U.S. and Canada emerging as
the largest markets. Wind power has already reached high penetration levels in some
areas. In Denmark, for instance, over 20% of energy produced comes from wind, and
there are periods of time where wind production exceeds the local demand. It is not
surprising that the performance of wind power plants can have a significant impact on
power system performance and reliability. A recent CIGRE report details the dynamic
behavior of wind turbine generators and how they affect bulk power system dynamic
performance [108].
2.11.1.1 Wind turbine generators
Over the last 20 years, the typical wind turbine generator (WTG) unit has gradually

increased in size from the 25 kW range to over 3 MW. WTGs for offshore
applications are typically larger than 3 MW. There are presently four major types of
wind-turbine generator technologies:
Type 1: Induction generators driven by a stall-regulated (or active-stall), fixed-
speed wind turbine
Type 2: Induction generator with a variable external rotor resistance, driven by a
variable-speed, pitch regulated wind turbine
Type 3: Doubly-fed asynchronous generators driven by a variable-speed, pitch-
regulated wind turbine
Type 4: Generators with full converter interface (back-to-back frequency
converter), driven by a variable-speed, pitch-regulated wind turbine. Conventional
generators, permanent magnet generators, or induction generators are used. The
reactive power of the generator is supplied by the generator side voltage-sourced
converter.
In addition, there are other emerging technologies that allow the use of truly
synchronous generators directly coupled to the system (similar to conventional fossil
fuel power plants), through the use of a dynamically variable gear ratio between the
turbine and generator (the so-called hydro-dynamic wind turbine generator [108]).
Figure 2.28 shows a typical WTG power curve, or output versus wind speed
characteristic. The cut-in, rated and cut-out wind speeds are typical for utility-scale
WTGs. Generally, WTGs are designed to work at maximum aerodynamic efficiency
between cut-in and rated wind speed. For wind speeds higher than rated and lower
than cut-out, blade pitching or blade stalling is used to maintain loading within the
equipment's rating. WTGs shut down for wind speeds above the cut-out wind speed
to avoid excessive mechanical stress.
[Figure: WTG electrical output in percent of rated versus wind speed (m/s), with the cut-in, rated, and cut-out wind speeds indicated]
Figure 2.28: Typical WTG power curve.
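A piecewise approximation of such a curve is sketched below purely for illustration. The cut-in, rated, and cut-out speeds and the cubic interpolation are generic assumptions; actual curves come from manufacturer data.

    def wtg_power_curve(wind_speed, cut_in=4.0, rated=13.0, cut_out=25.0):
        # Per-unit electrical output versus wind speed (m/s).
        if wind_speed < cut_in or wind_speed >= cut_out:
            return 0.0                       # below cut-in, or shut down above cut-out
        if wind_speed >= rated:
            return 1.0                       # pitch/stall holds rated output
        # roughly cubic rise between cut-in and rated (maximum aerodynamic efficiency)
        return (wind_speed ** 3 - cut_in ** 3) / (rated ** 3 - cut_in ** 3)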


2.11.1.2 Wind power plants
Utility-scale wind power plants consist of several tens to hundreds of WTGs, each
with a pad-mounted transformer, connected to the transmission network through a
medium-voltage collector network (Figure 2.29). A power transformer is used to
interface with the transmission grid. Depending on the application and type of WTG,
shunt reactive power compensation may be added at the WTG terminals, in the
collector system or at the substation interfacing to the utility grid (or at all three
locations).

[Figure: plant layout showing individual WTGs connected through overhead and/or underground feeders to a collector station, which connects to a transmission station]
Figure 2.29: Typical wind power plant layout.


For grid planning studies, most wind power plants are represented using a single-
generator equivalent model as shown in Figure 2.30. However, more detailed
modeling may be warranted in some cases [108], particularly when the wind power
plant is to be installed in the vicinity of series compensation or large power electronic
based transmission equipment (e.g., HVDC) that may cause concerns with
interactions between the WTGs and the transmission equipment.
[Figure: one-line diagram of the equivalent: system, point of interconnection (POI), main transformer, plant-level reactive compensation (static and/or dynamic), equivalent feeder, equivalent pad-mounted transformer, and turbine-level shunt capacitors (if any, stage-switched to maintain power factor)]
Figure 2.30: Single-generator equivalent representation.


2.11.2 Wind power plant performance
Wind power plants can have a significant effect on power system performance in local
areas or in regions where the installed wind capacity is relatively high.
2.11.2.1 Steady-state performance
Real power output from a wind power plant is a function of the wind resource, and
thus cannot be dispatched like conventional generating plants. A change or
perturbation in the wind resource implies a change in the real power output of each
WTG. Because wind power plants typically cover a large geographical area, the
variability of the power plant output is less than that of each individual WTG. Further,
when several large wind plants are combined over a large region, an additional
smoothing effect occurs. Still, wind power output is variable. Power system operators
cannot control the rate of real power decreases (ramps down due to falling wind
speeds). For ramping up, some manufacturers offer the option of controlling the rate
of real power increase. For these reasons, wind generation is often viewed as an
energy resource, and not a capacity resource. Increasing experience with wind
forecasting tools, however, helps in operational performance since it gives system
operators visibility of the availability of the potential wind resource in the coming

hours or day ahead. Nonetheless, forecasting is not perfect and there are still errors
between the actual and forecasted resource [108].
As wind power capacity within a balancing (control) area increases, the additional
variability can measurably impact efficiency of the unit commitment process, and
require increased reserves to meet reliability performance standards. One study
estimated regulation reserves would increase by 36 MW when adding 3,300 MW of
wind to a 34,700 MW peak demand [109]. Another study estimated 10 MW of
additional regulation reserves were required when adding 1,500 MW of wind to a 9,000
MW peak demand [110].
In areas with large amounts of wind generation, wind variability can have a
significant impact on voltage profiles. Shunt capacitor banks, shunt reactors, or power
electronic voltage control devices (e.g., SVCs and STATCOMS) may be required. If
switched shunt capacitor/reactor banks are used, the effect of switching these devices
should also be considered.
For reactive power management, several strategies exist. One common strategy is for
the wind power plant to operate at a fixed power factor when the output is above a
certain MW output level. Another strategy is reactive power control, whereby reactive
power exchange at the point of interconnection is maintained within a prescribed band
(e.g., ±10 Mvar). These strategies do not provide voltage support to the power system
during emergencies. During periods of low voltage or high voltage, the wind plant
will not aid the system by producing or absorbing more reactive power. Induction
generators absorb reactive power, and under conditions where the system voltage may
drop following a disturbance they will actually absorb more reactive power. In addition,
the reactive power produced by shunt compensation devices falls as system voltage
drops. Therefore, wind power plants operating on fixed power factor or reactive
control mode can contribute to voltage instability.
Variations on the above reactive control strategies are to switch to voltage control
when voltage drifts outside the normal range, or to have the ability to respond to a
signal to adjust reactive power from a control center, within limits. Some wind
power plants also have the ability to control/regulate voltage at or near the point of
interconnection (POI), which can be accomplished by installing a separate device
(e.g., SVC) or an external controller that adjusts the power factor of each individual
WTG until the target voltage is achieved. These strategies are good for the system
from a voltage stability point of view.
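As a purely illustrative sketch of the voltage-control variation described above, the following slow integral regulator adjusts a plant-level reactive power command to hold the POI voltage at its target. The gain, time step, and limits are assumptions; real plant controllers are vendor-specific and must be coordinated with the WTG converter controls.

    def plant_voltage_regulator(v_poi_pu, v_target_pu, q_cmd_prev_mvar,
                                q_min_mvar, q_max_mvar, gain=20.0, dt_s=1.0):
        # Integral action: raise the plant Mvar command when the POI voltage is low,
        # lower it when the voltage is high, and respect the plant reactive limits.
        q_cmd = q_cmd_prev_mvar + gain * (v_target_pu - v_poi_pu) * dt_s
        return min(max(q_cmd, q_min_mvar), q_max_mvar)

    # The resulting command would be apportioned among the WTGs and/or an SVC by a
    # separate dispatcher function (not shown).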
2.11.2.2 Dynamics
The dynamics of individual WTGs and the entire wind farm have significant impact
on the stability of local and regional power grids. Simulation studies using planning-
grade wind plant models (see Section 2.11.2.4) will help to evaluate the positive or
negative effects of wind plants on angle and voltage stability.
Angle stability is typically not an issue with land-based WTGs because most WTGs
are asynchronous units such as doubly-fed asynchronous generators (Type 3), units
fully decoupled from the system by back-to-back frequency converters (Type 4) or
induction machines (Type 1 and 2). Thus, there is no equivalent concept of rotor
angle or synchronizing and damping torque for these types of generators. Some
studies have found that bulk power system transient stability improved if wind power
plants, rather than conventional synchronous generators, are added at the same
location and at the same output level. Reference 109 states the following with respect

to 3,300 MW of wind generation (Type 3 WTGs) added in New York State:
However, unlike synchronous machines there is no physically fixed
internal angle that must be respected in order to maintain stability with
the grid, and which dictates the instantaneous power delivered by the
machine to the grid. With WTGs, the internal angle is a function of the
machine characteristics and controls, allowing a smooth and non-
oscillatory reestablishment of power delivery following disturbances.
Wind generation does impact voltage recovery after a system fault event, assuming
the WTGs remain on-line after the event. Type 1 and Type 2 WTGs are induction
machines that absorb considerable reactive power when the voltage is low (such as
during and right after a disturbance). Some Type 3 WTGs designs may crow-bar
during a fault to prevent over-voltage in the converter dc circuit [108]. During the
fault period, the WTG then acts as an induction generator, and therefore reactive power
consumption can also be high. Increased reactive power consumption can be
especially troublesome if the transmission in the area is weak. Delayed voltage
recovery can cause or exacerbate other problems in the nearby system, such as motor
stalling.
As discussed in the next section, most wind power plants are required to remain in
service for network faults and the ensuing recovery period. Techniques to achieve low
voltage ride-through during system disturbance are available from most of the major
turbine manufacturers for modern WTG designs [108] and may include control
modifications and installation of fast acting smoothly controlled external reactive
power devices such synchronous condensers, SVCs or STATCOMs. (SVCs and
STATCOMs may be augmented by fast coordinated switching of mechanically
switched capacitor banks. This, however, requires careful design and study.). A side
effect of higher fault tolerance is increased propensity to islanding and associated
overvoltages. This should be addressed by proper application of transfer-tripping
schemes or fast-acting overvoltage relays.
Protective relaying must be designed with care, taking into account the comparatively
small fault current contribution from wind power plants during faults.
2.11.2.3 Grid Codes
In the past, wind power plants were allowed (and in some cases required) to trip off
for nearby transmission faults. Until just a few years ago, utility-scale WTGs were
designed to trip off line for system disturbances that resulted in system voltage dipping
below approximately 75% of nominal. Due to the actual and projected increase in
wind power capacity, this practice is no longer appropriate. Transmission operators
and other standards development and reliability organizations have begun to capture
performance requirements for wind power plants in grid codes. These codes address,
among other things, fault tolerance, reactive power control requirements, and in some
cases, ramp rate control and frequency response.
A fault typically results in tripping of a transmission line to clear the fault, possibly
leaving the system in a stressed condition. The sudden loss of large amounts of wind
generation can cause further system stress. As a result, nearly all wind grid codes
contain requirements for fault tolerance. The first such standard was issued in
2004 by EON Netz, a German grid operator [111]. The standard required wind power
plants to stay in service if the voltage at the machine terminals did not fall below the
performance curve (Figure 2.31). Over the years, several jurisdictions have adopted

similar performance standards. The current U.S. standard, effective January 2008,
states that:
Wind generating plants are required to remain connected during
three-phase faults with normal clearing (which is defined as a time
period of approximately 4-9 cycles) and single line to ground faults
with delayed clearing, and subsequent recovery to pre-fault voltage
unless clearing the fault effectively disconnects the generator from
the system [112].
The voltage at the point of interconnection, as measured at the transmission side of the
power transformer, can be as low as zero (the interim standard specifies that the voltage
at the point of interconnection can be as low as 15% of nominal). Various approaches
are used to accomplish low voltage ride-through, and additional evolution in this regard
can be expected in the future.
[Figure: percent of rated voltage versus time (sec.), showing the minimum voltage-versus-time envelope that a wind plant must ride through]
Figure 2.31: Low voltage ride-through curve.
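A minimal compliance check against such a curve is sketched below. The envelope break points used here are placeholders only, since the applicable grid code (or interconnection agreement) defines the actual curve.

    import numpy as np

    def meets_lvrt(t_s, v_pu, envelope_t=(0.0, 0.625, 3.0), envelope_v=(0.15, 0.90, 0.90)):
        # True if the recorded post-fault voltage trace stays on or above the
        # piecewise-linear ride-through envelope (times measured from fault inception).
        v_required = np.interp(t_s, envelope_t, envelope_v)
        return bool(np.all(np.asarray(v_pu) >= v_required))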


With respect to reactive power, some grid codes prescribe the power factor range
within which the wind power plant must be capable of operating if called upon,
typically 0.95 lead and lag. In the U.S., the reactive power capability that needs to be
continuously adjustable is established through interconnection studies.
Some grid codes, such as those of EirGrid in Ireland [108], also address ramp rates and
frequency response capability for WTGs. There are two issues involved here. First,
many wind turbine generator technologies (e.g., Type 3 and 4) do not contribute to
system inertia unless the controls are designed to do so [113]. This results in a higher
rate of change (decline or rise) in system frequency following a generation/load
imbalance event; particularly in small systems with high penetration of wind
generation. Furthermore, present WTG designs do not contribute to primary
frequency regulation, due to the intermittent nature of the wind. Some demonstration
projects in Europe (such as the Horns Rev project in Denmark [108]) have illustrated
the concept of frequency regulation using WTG. This subject, however, requires more
work and study to further mature. These requirements are important in systems where
wind penetration levels may soon reach very high levels, and particularly important in
smaller interconnections such as ERCOT, Québec, Hawaii and other islands.
2.11.2.4 Models
Planning models are needed to assess the impact of wind generation additions, and to
perform routine system planning studies. These studies are a very important part of

maintaining system reliability. Unfortunately, adequate WTG models are difficult to
obtain or are available only under non-disclosure terms. Some of the models that do exist
are not validated. With an increased amount of wind generation capacity in the
system, the need for better and more accessible models is evident. Currently, the
Western Electricity Coordinating Council is leading an effort to develop standard,
non-proprietary positive-sequence models for use in planning studies.
2.11.3 Summary of dynamic performance best practices for wind generation
1. Improve access to simulation models by moving toward standard models suitable
for planning studies.
2. Install time-synchronized monitoring equipment at the POI to monitor wind power
plant performance.
3. Enforce existing requirements on voltage ride-through (fault tolerance) as per the
latest standards.
4. Evaluate requiring that next-generation wind power plants provide reactive
power support to the system and assist in regulating system voltage at the POI.
5. Perform studies to understand the mean and variance of the power produced.
6. Consider the use and application of wind forecasting tools to help in predicting the
wind resource for operational performance of the system.
7. Perform studies to understand plant capacity, the effects on angle and voltage
stability, and reactive power characteristics.
8. Carefully review interconnection agreements. Equipment should not be allowed to
connect to the transmission grid if the responsible grid operators and reliability
coordinators do not have adequate models and data to represent and understand
the behavior of such equipment.

2.12 Reliability in Market Environment


In a market environment most grid operators (also known as the Independent
System Operator, ISO; Regional Transmission Operator, RTO; or Transmission
System Operator, TSO) are responsible for the electricity market, the reliability of
system operation, and the planning of the system. Reliability requirements in a market
environment are no different than they are in an environment that lacks a competitive
component. The main difference is that in a competitive environment, payments (and
penalties) are designed to be in line with the requirements of meeting load,
maintaining reserve, etc. In a competitive environment there are economic incentives
for the suppliers of energy and ancillary services to supply those services and there is
competition among the suppliers. In many cases the market simply chooses the least
cost way to meet load and maintain reliability. The best practice for an ISO to
maintain system reliability is to use an integrated, multi-stage approach. Figure 2.32
shows such an approach using five major progressions starting from many years in the
future to real-time operation [114]. The time scales in Figure 2.32 reflect the lead time
required to ensure resource adequacy for a power system to meet its reliability
commitment.

Figure 2.32: Diagram to illustrate security functions in different time frames and power system
reliability functions.
2.12.1 Reliability of generation
In a market environment, the total available generation is based on the generator bids
into the market. In some markets, to ensure adequacy of supply, Installed Capacity
(ICAP) payments are provided so that generator owners bid in their capacities. In a well
functioning market the prices in the energy and installed capacity markets provide
adequate incentives for investments in new generation to meet future demand.
Furthermore, generator maintenance needs to be coordinated, in conjunction with
transmission maintenance, to avoid potential generation shortage during peak load
periods.
Reliability can also be enhanced by employing a multiple settlement scheme for unit
commitment. Many ISOs use a two-settlement scheme in which the day-ahead energy
(DAE) market commits most of the needed generators 12 to 24 hours ahead of time,
using unit ramp rates as constraints, and the real-time energy (RTE) market dispatches
additional generation, if needed, to balance the supply and demand. Such an approach
ensures the participation of generators with long startup and restart time and minimum
run time, and can activate demand response, if needed, on peak load days.
Several markets also provide an adjustment energy (AE) market after the closure of
the DAE market, where both the generators and loads can adjust their schedules based
on unforeseen changes.
In some cases, due to their peculiar characteristics, their location, or political reasons,
some generators are "must run" for reliable system operation, even though they are
not competitive in the electricity market. This can be due to the need for real power
reserve in some areas or more often because their reactive power is required. The
choice of these generators is, of course, very critical for the electricity market
transparency (usually, their economic treatment is defined by regulators).
In the U.S., under NERC guidelines, each ISO procures generation for regulation and
reserve equivalent to the generation of the largest unit or the two largest units in the
region, depending on whether a single contingency criterion or a double contingency
criterion is used. The reserve is categorized into several classes: (1) regulation, (2)
spinning reserve, and (3) non-spinning reserve.
A similar arrangement is defined by the Union for the Co-ordination of Transmission
of Electricity (UCTE), which is responsible for the security standards of system
operators in Europe. According to its operational handbook, the real power reserve is
classified into primary, secondary, and tertiary. Primary reserve operates within a
short time frame (50% within 15 seconds and 100% within 30 seconds). The total reserve
must be made available for a reference incident, which currently is the loss of 3,000 MW. The
secondary reserve has the goal of restoring the primary reserve and bringing the

power exchanges among areas back to the schedule within 30 seconds to 15 minutes.
Finally, tertiary reserve is a control that uses reserve available in 15 minutes in order
to restore the secondary reserve. While the primary and secondary controls are
automatic, the tertiary reserve is not.
In some European countries, this classification is adopted to define different reserves
that can be bought by an ISO in the ancillary service market and used in the real time
dispatch. The operational handbook is the basis for the interconnected operation
among almost all the areas in Europe.
It is also important to note that for regions with internal congested transmission paths,
the reserves need to be located in the appropriate regions so that tapping into the
reserves will not violate any system constraints. In some market environments,
separate markets for reserve and regulation are established to select and compensate
generators which provide these services.
2.12.2 Reliability of transmission network
The main reliability functions in the transmission network are to ensure that the
critical transmission lines are operational, all facilities are operated within pre- and
post-contingency thermal and voltage limits, adequate dynamic reactive power is
online, and the system will remain dynamically stable. Thus routine transmission line
maintenance should be scheduled at off-peak hours. Unscheduled line maintenance
should be minimized. In some markets, if a forced line outage causes congestion in
other parts of the system, the responsible transmission owner has to pay for the
congestion costs. Such a penalty provides an incentive for the transmission owner to
quickly restore the outaged equipment back to normal operation.
In some control areas, because of the constraints in the transmission system, zonal
models are used for the electricity market. In this case, the reliability of the power
system and the operation of the market are affected by the definition of zones and by
the computation of the total transfer capability (TTC). Such computation is strictly
related to the power system security and makes it necessary to take into account any
special control device. For example, in some cases, some remotely-controlled relays
are used in order to avoid congestion following any contingency: these devices
disconnect specific power stations in case of a line trip in order to avoid overloading
of some other lines. This allows the increase of the TTC. Also in this case, it is
necessary to consider that following the operation of such devices, the tripping of
some power plants causes a real power imbalance; the consequent action of the
frequency control must not cause any further congestion. Dynamic studies are also
necessary in order to avoid any undesired operation of protection devices following
this double perturbation. Reference 115 discusses such dynamic security issues for the
Italian power system.
In some markets, in order to increase investment opportunities to reinforce the
transmission grid, the possibility of building new transmission lines is also given to
private investors, who can be awarded new congestion contracts. The process of the
planning and operation of such transmission lines must be coordinated by the ISO
with the existing system to maximize the overall benefit.
In some market environments, a separate revenue stream is established for voltage
support; however, few, if any, markets have implemented a competitive mechanism
for voltage support. By and large, the existing financial incentives for suppliers of
reactive power take two forms. First are payments, generally based on demonstrated

reactive power capacity, that are meant to cover the costs of maintaining the
equipment and expertise needed for voltage regulation through the controlled
modulation of reactive power production or consumption. These payments are
generally restricted to continuously controlled reactive power sources such as
generators and synchronous condensers. The second payment, compensation for lost
opportunity, is paid to generators whose real power output must be reduced in order to
increase reactive power output. The lost opportunity payment makes the generator
indifferent to its scheduled mix of real and reactive power. With the lost opportunity
payment, the generator should be willing to reduce its real power production when
needed for voltage support because its bottom line will be unaffected. However, the
price signals that would indicate where and how much additional voltage support may
be needed are still lacking with these mechanisms in place. At best, the current
payment mechanisms guarantee that reactive power suppliers see no financial obstacle
to their continued reactive power contributions. Voltage support and reactive power
best practices for reactive power are discussed in Sections 2.2 and 2.3.
Continuously controlled reactive power supply from SVC and STATCOM can
substantially improve the reliability of a power system. In most cases, the installation
of such a Var supplier in a substation also requires the installation of switched
capacitor and reactor banks in neighboring and sometimes remote substations. In
normal operations, the switched capacitor and reactor banks are configured so that an
SVC or STATCOM operates at a small fraction (e.g., 20%) of the equipment's rated
capability. Such an operation is sometimes known as the Var reserve mode [116]. In a
system emergency, the SVC or STATCOM has most of its rated capability available
to respond to the sudden increased demand. In most power markets, transmission
network continuous reactive power supply is not paid from ancillary services [117],
but some markets compensate these suppliers with transmission congestion contracts
(TCCs) if their applications result in a higher transfer capability. An offer-based
approach [118] proposes additional incentives for investment in such dynamic
reactive power supplies. (As discussed in Section 2.3, mechanically switched
capacitor/reactor banks with discontinuous control can support system dynamic
performance, and in market environments could be treated like continuously
controlled reactive power devices.)
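A highly simplified sketch of such Var-reserve coordination logic follows. Real schemes include timers, hysteresis, and voltage checks; the deadband and bank-selection rule here are illustrative assumptions.

    def var_reserve_coordination(q_svc_mvar, q_svc_rating_mvar,
                                 capacitor_banks_mvar, banks_in_service,
                                 deadband=0.25):
        # If the SVC/STATCOM carries sustained output beyond the deadband (as a
        # fraction of its rating), switch a mechanically switched capacitor bank so
        # the fast device is unloaded back toward zero, preserving its full range
        # for emergencies. Capacitive output is positive. Returns the updated set
        # of in-service bank indices.
        banks = set(banks_in_service)
        if q_svc_mvar > deadband * q_svc_rating_mvar:
            for i, size in enumerate(capacitor_banks_mvar):
                if i not in banks and size <= q_svc_mvar:
                    banks.add(i)            # insert one available capacitor bank
                    break
        elif q_svc_mvar < -deadband * q_svc_rating_mvar and banks:
            banks.remove(max(banks))        # remove one capacitor bank when absorbing
        return banks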
Although transmission investment by outside developers is considered desirable,
many problems exist with developing a workable pricing structure to encourage
investment. In order to avert a lack of adequate transmission and/or generation to
serve future demand, many ISOs have developed a planning process which identifies
reliability deficiencies five to ten years in the future. These processes may solicit
proposals from other entities to provide solutions to the identified deficiencies. If
these proposed solutions are inadequate, the ISO and/or the utilities develop their own
set of projects to address the deficiencies, with the costs included in utility rate bases
or ISO uplift charges.
2.12.3 Reliability of system operation
As in a fully regulated power system, system operators in a deregulated electricity
market perform similar system dispatch functions to ensure reliable operation. In
addition, because of the need to perform energy balancing in real time, one or more
operators need to be dedicated to such tasks and operators must be provided with the
proper tools to continuously review and assess the state of the system and take
corrective action, if needed. It is also important that the real time dispatch software be

efficient and robust, so that operators can manage large numbers of energy
transactions, without being distracted from the system reliability functions (August
14, 2003 was a graphic demonstration of the possible consequences).
One of the tools increasingly exploited by system operators in the market environment
is interruptible load (and correspondingly, interruptible generation). It can be provided
by generators and loads in several ways: for example, different economic treatment
can be defined for operators that make their loads available for shedding in real time
(e.g., within 0.5 seconds) or upon notice (e.g., within 15 minutes). For this service, it is
important that the ISO knows in real time:
• The availability and amount of load or generation ready to be shed;
• The cost of real and reactive load to be shed;
• The sensitivities of voltages and/or currents to the real/reactive shedding;
• A priority list of loads or generating units to be shed.
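Purely as an illustration of the real-time information listed above, the following sketch defines a record for each interruptible resource and a simple priority-ordered selection. The field names and logic are assumptions, not an existing ISO data model.

    from dataclasses import dataclass

    @dataclass
    class InterruptibleResource:
        name: str
        available: bool        # ready to be shed right now
        mw: float              # real power available for shedding
        mvar: float            # reactive power available for shedding
        cost_per_mwh: float    # compensation for shedding
        notice_s: float        # e.g., 0.5 for real-time shedding, 900 for on-notice
        dv_dmw: float          # sensitivity of the monitored voltage to the shed MW
        priority: int          # lower number = shed first

    def shedding_order(resources, mw_needed):
        # Pick available resources in priority order until the MW target is met.
        picked, total = [], 0.0
        for r in sorted((r for r in resources if r.available), key=lambda r: r.priority):
            if total >= mw_needed:
                break
            picked.append(r.name)
            total += r.mw
        return picked, total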
For power system dynamic performance, fast automatic switching of lower priority or
contracted interruptible load is valuable. For severe disturbances, this is a painless
alternative to underfrequency, undervoltage, or SPS load shedding. This may require
further IT development such as internet protocol reliable communications from
control centers to energy management systems of large industrial or commercial
loads.
As discussed in the preceding section, a new issue for operators is the increasing role
of renewable energy, in particular of non-dispatchable electricity. In particular, an
issue that can significantly affect the power system operation is the fact that often
renewables receive priority in the merit order in the electricity market for political
reasons and do not pay for any imbalance they cause, and that imbalance must be
compensated by other balancing resources. This is not particularly significant in today's
power systems, but the impact is increasing as the percentage of renewable generation
increases (e.g., in Germany installed wind power is a significant percentage of the
peak load, and this can cause a significant imbalance at lower wind speeds). This issue
is currently under discussion because system operators are still trying to evaluate the
actual impact of the incentives on power system operation.
2.12.4 Summary of dynamic performance best practices in market
environments
1. Reliability should be an important long-term planning process involving the grid
operators, market participants, and public commissions.
2. Ancillary services should be compensated properly. In particular, adequate
regulation and reactive power supply should be guaranteed. Markets should
continue to consider developing policies to encourage reactive power investments.
Verification of performance in ancillary service markets is essential.
3. Unit commitment should be co-optimized with ancillary services so that reliability
is not compromised. This will provide energy price security.
4. Loads should be included as a means for enhancing physical and price security.
Practices such as dispatchable large industrial loads would be a first step in this
activity. Demand response should be automatic for system dynamic
performance; this may require further development such as internet protocol
reliable communications from control centers to large industrial or commercial

loads.
5. If alternatives to "must run" generation need to be considered, for economic or
environmental reasons, then transmission or distribution level reactive power
devices in some cases can replace the generator reactive power support.
Transmission or distribution compensation may be optimally located, and
designed for required response speed. Regulators should facilitate cost recovery.
6. ISO software and procedures for market operation should not distract from system
dynamic performance and other reliability functions.
7. Market environment procedures and standards should respect that a power system
must be designed and operated as a (dynamic) system.

2.13 Restoration
The goal of this work is to prevent instability and cascading outages resulting in
uncontrolled separation and islanding. If separation is necessary, the goal is controlled
islanding that prevents complete blackout and facilitates restoration.
If uncontrolled islanding and a widespread blackout does occur, there are many power
system dynamic performance issues and best practices related to restoration. Many are
similar to best practices described above and may be even more important for
restoration. Restoration is described in many publications [119-121, 130]. Reference
119 is a collection of many IEEE papers on restoration. A new IEEE/PES Power
System Stability Controls Subcommittee task force on Power System Restoration
Dynamics will focus on dynamic issues of restoration. A summary of restoration
practices to be considered by the new task force follows.
2.13.1 Summary of dynamic performance best practices issues for restoration
1. Planning of reactive power devices should consider possible need in system
restoration.
2. Monitoring systems such as WAMS should consider system restoration needs.
3. Capabilities and strategies to trip generators to house or local load should be
considered.
4. Standing phase angle constraints and reduction strategies should be analyzed.
5. Operator simulator training of restoration scenarios capability should be
developed.
6. Models and simulation methods with validation as described in Section 2.8 must
be adequate for restoration. On-line simulation capability to guide restoration
capability should be developed.
7. Control and protection adaptation during restoration should be investigated.
8. Restoration of large wind power plants should be coordinated between the plants
and transmission operators.
2.14 Reliability Standards
In past years, vertical integration, cooperation, and peer pressure among
interconnected power companies aided the development of best practices. Reliability
standards were often developed by individual power companies or groups of
interconnected companies, but not codified into mandatory standards or laws. Best

practices were often documented in IEEE transactions papers and other reports from
industry authors. Other industry experts often wrote formal discussions of transaction
papers. Today there are far fewer best practices papers and reports.
In today's restructured and highly competitive environment, however, very detailed
mandatory standards with enforcement and penalties for violations have evolved.
Nevertheless, standards tend to lag best practices, and often are based on least
common denominator consensus.
With separate ownership of generation, transmission, and distribution, reliability
standards help ensure at least adequate overall power system engineering.
Reliability standard development is an ongoing process that should reflect best
practices. Indeed, a purpose of describing best practices in this chapter is to stimulate
improvement of existing standards. Detailed and specific critique of existing standards
is outside the scope of this report, but recommendations are implicit in the suggested
best practices.
2.14.1 Summary of dynamic performance best practices for reliability
standards
1. Reliability standard development should have the continuing goal of reflecting
best practices.
2. Monitoring, as described in this chapter, should be applied to verify satisfactory
dynamic performance and ensure compliance with standards.
3. Regulatory agencies should allow cost recovery for implementation of reliability-
related best practices.
2.15 Conclusions and Summary of Best Practices for Power System
Dynamic Performance
Best practices are recommended at the end of each section above. All areas are
important, and an important principle is defense in depth or multiple lines of defense.
By applying best practices in multiple areas in generation, transmission, and
distribution planning and operation, the risk of cascading blackouts is greatly reduced.
While the latest mandatory reliability criteria and rules are valuable in reducing
cascading blackout risk, we go beyond the rules in recommending best practices in
more detail.
Many of the best practices comprise relatively low-cost improvement and
modernization of control, protection, and communications equipment. Other relatively
low-cost areas include EMS security assessment and training simulator software,
generator excitation equipment modernization, substation bus configuration
improvements, and shunt capacitor banks and other reactive power equipment. Power
companies should prioritize implementation of best practices.
A concern for regulated power companies in implementing best practices is difficulty
in cost recovery, including for increased engineering staff. Regulators should consider
these best practice strategies in cost recovery decisions, balancing the reliability
benefits to consumers against the additional cost.
2.15.1 Overall summary of best practices for system dynamic performance and
reducing the risk of cascading power failures
We summarize the best practices recommendations from the sections above.

1. Benefits/cost of modernization of power plant equipment, and improved control
and protection functions to increase reliability and robustness should be
evaluated. Prioritization of upgrades mainly concerns the excitation equipment
(exciter, AVR, PSS), but turbine and boiler control modernization should also be
considered.
2. At power plants, automatic voltage regulator line drop compensation or
automatic transmission-side voltage control should be considered for better
regulation of transmission network voltage and system stability. Wide-area
secondary voltage regulation can be considered.
3. Automatic or manual power factor/reactive power control override of automatic
voltage regulators must not be allowed. All generators connected to transmission
networks should be in automatic voltage regulator mode without secondary
override control.
4. It is preferred to avoid operating thermal power plants under outer-loop
megawatt control, which overrides the turbine-generator primary governor
response and maintains a fixed megawatt output on the unit.
5. During heavy load conditions a high, flat voltage profile should be maintained to
the extent possible. Reactive power reserve should be maintained at power
plants, with transmission and distribution shunt capacitor banks supplying base
reactive power needs.
6. The discontinuous control capability of mechanically switched series and shunt
reactive power compensation should be exploited. These mechanically switched
capacitor/reactor banks can be switched in or out within a few cycles. Control
can be local or wide-area, ranging from voltage relays, to special protection
systems, to more sophisticated microprocessor algorithms.
7. For short-term stability problems, SVCs or STATCOMs are one potential
solution that should be considered. Short term ratings should be considered,
where appropriate. Generally these power electronic devices should also
perform coordinated switching of nearby mechanically switched
capacitor/reactor banks.
8. Transmission-level versus distribution-level power electronic devices should be
evaluated.
9. Protective relaying upgrades should be prioritized to first address locations
where inadequate performance or hidden failures will have the most impact in
exacerbating disturbances.
10. The reason for application of zone 3 elements should be well understood.
Alternative means of providing equivalent protection (such as local and remote
breaker failure protection) should be considered. Impedance relay reach in the
load direction should be limited to prevent operation on mild overload and
voltage depression.
11. The load carrying capability of protection equipment to meet or exceed
regulatory requirements requires continued attention and vigilance.
12. Zone 3 or other relays applied to detect short circuits should not be used for
overload protection.
13. Application and settings of synchro-check (standing phase angle sensing) relays

for automatic reclosure blocking should be evaluated for scenarios that may
result in blackouts.
14. To relieve the dispatcher burden in stressed system conditions, to enable
dispatchers to focus on outage restoration, and to arrest rapid frequency decay
or voltage collapse, automatic load-shedding programs should be implemented.
Progress in IT technology allows transmission owners to realize reliable and
sophisticated protection and control systems for automated load shedding.
15. Underfrequency load shedding should be simple, rapid and decisive as a last-
resort expedient. It should be automatic, be distributed uniformly across the
system at each step to avoid aggravating line overloading, and be locally
controlled in response to local frequency to be independent of system splitting.
16. Special protection systems should be considered to prevent blackouts as well as
to minimize the consequences of disturbances. Excessive use of SPS that overly
complicates system operation, however, should be avoided.
17. Special protection systems should be considered to assure system security for
transmission reinforcement delays.
18. Special protection systems should be considered to solve operation problems
such as equipment overloads, over- and under-voltage, over- and under-frequency,
and voltage and angular instability, in an economical way.
19. Due to the importance of SPS for overall system performance, a thorough
analysis of its reliability at the time of its conception and design is required.
Measurements, communication channels, and decision-making logic or
algorithms require the necessary functional redundancy. Adequate reliability is
achieved by design.
20. Phasor Measurement Units (PMUs) and other new sensors should be considered
for advanced special protection systems.
21. Potential HVDC applications should be carefully studied for the effect on power
system dynamic performance. This requires preliminary detailed models of the
dc link, large scale transient stability simulation, and possibly small disturbance
analysis (e.g., eigenanalysis). Models should be required in HVDC
specifications. Models must be validated as part of simulator and commissioning
tests.
22. In comparing dc and ac alternatives, long-range needs as the network evolves
must be considered (e.g., future tapping of long distance lines).
23. Based on HVDC studies, tradeoffs between higher cost and better performance
must be made. Decisions such as capability for higher than minimum extinction
angle operation, modulation controls, capacitor-commutated converters, voltage
sourced converters, etc. should be reflected in the specification.
24. Wide-area monitoring and supervision for HVDC links should be evaluated and
are required when modulation controls are used.
25. Commissioning tests of HVDC and other power electronic devices should
include tests related to power system dynamic performance. Wide-area
monitoring of response to staged tests may be necessary.
26. Dynamic performance monitoring such as WAMS should be designed to
develop and integrate measurement-based information into the grid management
process, including situational awareness.
27. Dynamic performance monitoring should draw upon dynamic models to
interpret observed data and to recognize conditions of special interest. Cross
validation of measurements and models is an ongoing requirement. Large-scale
dynamic simulations should be validated by wide-area measurements.
28. Characterization and analysis of dynamic performance require frequency
domain tools.
29. Past blackouts have shown that accuracy of dynamic and power flow models
and data are a key factor in operating systems within safe limits. Validation of
models is required.
30. Models and data should be validated through staged tests and disturbance
monitoring during system events. This should be an ongoing priority. Examples
include excitation limiters, and governor modeling including load controllers.
31. Simulation capability closer to real time should be developed to reduce
uncertainties and improve operating decisions. In near real-time simulations, it is
critical to have the correct state-estimated system topology and data for the
largest footprint possible, including adjoining (and perhaps remote) control areas
where the next critical outage may be located.
32. Open, interconnection-wide, exchange of reliability-related models and data is
required for both off-line and on-line simulations.
33. Generator reactive power limits used in dynamic and power flow simulations
should be field tested and included in the modeling. These limits should be
verified and updated in close to real time as plant problems or changes occur.
34. Simulation programs should be continually developed and validated to improve
model detail and accuracy. Vendors and users should cooperate in development.
35. Compatibility and cross-referencing of off-line and on-line power system
models and data should be developed and maintained.
36. Commissioning tests for power plants, HVDC links, SVCs and comparable
equipment must require validation of dynamic models and data. Power plant
owners, including wind power plants owners, must provide validated model data
to interconnected power companies. On-going performance monitoring is
critical to ensure fidelity of models and data.
37. No equipment should be allowed to connect to the transmission grid if the
responsible transmission operators and reliability coordinators do not have
adequate models and data to represent the equipment and understand its
behavior.
38. Composite load models, including dynamics, should be developed by laboratory
and field tests, and monitoring. These should be validated by monitoring during
major system disturbances.
39. Performance of major power plants should be monitored during system
frequency and voltage excursions. Units above a designated MW rating should
have their real and reactive power output, station frequency, and POI voltage
monitored continuously (sampling rate of at least ten samples per second). The
data is invaluable for model validation, performance tracking, off-nominal
frequency criteria compliance, and reconstructive analysis of power system
events.
40. Monitoring, such as WAMS, should verify satisfactory dynamic performance
and ensure compliance with standards.
41. Critical system measurements should be observed by operators in an efficient
manner including, for example, reactive power reserves.
42. State-of-the-art dynamic security assessment should be included as part of EMS.
DSA should have integrated ability to conduct transient, voltage, and oscillatory
security analysis.
43. DSA should determine security limits in terms of system operating parameters
as determined by available reliability criteria. DSA should determine preventive
control and corrective control for simulated insecure contingencies.
44. Time-synchronized measurements at wind power plant point of interconnection
are required to monitor plant performance.
45. Wind plant voltage ride-through (fault tolerance) performance per the latest
standards is required.
46. Simulation studies to understand the effect of wind plants on angle and voltage
stability are required.
47. In market environments, ancillary services should be compensated
appropriately. In particular, adequate regulation and reactive power supply
should be guaranteed. Markets should continue to consider developing policies
to encourage reactive power investments. Verification of performance in
ancillary service markets is essential.
48. Consider the use and application of wind forecasting tools to help in predicting
the wind resource for operational performance of the system.
49. Evaluate requiring next-generation wind power plants that can provide reactive
power support to the system and assist in regulating system voltage at the POI.
50. If alternatives to must-run generation need to be considered, for economic or
environmental reasons, then transmission or distribution level reactive power
devices in some cases can replace the generator reactive power support.
Transmission or distribution compensation may be optimally located, and
designed for required response speed. Regulators should facilitate cost recovery.
51. ISO/RTO/TSO software and procedures for market operation should not distract
attention from system dynamic performance and other reliability functions.
52. Market-based demand side response capabilities should be developed for system
reliability. Demand response should be automatic for system dynamic
performance using modern information technology.
53. Market environment procedures and standards should respect that a power
system must be designed and operated as an overall (dynamic) system.
54. Reliability standard development should have the continuing goal of reflecting
best practices.
55. Regulatory authorities should allow cost recovery for implementation of
reliability-related best practices, including the cost of capital (i.e., a reasonable
return on the investment).
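
As an illustration of recommendation 15 above, the following is a minimal sketch of locally controlled, staged underfrequency load shedding. It is not taken from any particular utility practice; the frequency thresholds, time delays and shed fractions are hypothetical placeholders that would in practice be set from interconnection-wide dynamic studies and the applicable standards.

```python
# Minimal sketch of a locally controlled, staged underfrequency load-shedding (UFLS)
# relay. The pickup frequencies, delays and shed fractions are hypothetical examples
# only; actual settings come from interconnection-wide studies and standards.

from dataclasses import dataclass

@dataclass
class UflsStage:
    pickup_hz: float      # frequency threshold (Hz)
    delay_s: float        # intentional time delay before tripping (s)
    shed_fraction: float  # fraction of local feeder load to shed at this stage
    timer_s: float = 0.0
    tripped: bool = False

class UflsRelay:
    """Each relay acts only on locally measured frequency, so the same settings
    keep working in whatever islands form after a system separation."""

    def __init__(self, stages):
        self.stages = stages

    def step(self, freq_hz, dt_s):
        """Return the additional fraction of load to shed during this time step."""
        shed = 0.0
        for stage in self.stages:
            if stage.tripped:
                continue
            if freq_hz <= stage.pickup_hz:
                stage.timer_s += dt_s
                if stage.timer_s >= stage.delay_s:
                    stage.tripped = True
                    shed += stage.shed_fraction
            else:
                stage.timer_s = 0.0   # reset timer when frequency recovers
        return shed

# Example with hypothetical settings for a 60 Hz system:
relay = UflsRelay([
    UflsStage(59.3, 0.10, 0.10),
    UflsStage(59.0, 0.10, 0.10),
    UflsStage(58.7, 0.10, 0.10),
])
print(relay.step(59.1, 0.05))  # first stage timing, nothing shed yet -> 0.0
print(relay.step(59.1, 0.05))  # first stage trips after 0.10 s -> 0.1
```

Because the logic responds only to locally measured frequency and sheds a uniform fraction at each step, it retains the simplicity, speed and independence from system splitting that recommendation 15 calls for.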

2.16 References

[1] IEEE/CIGRE Joint Task Force on Stability Terms and Definitions, Definition and
Classification of Power System Stability, IEEE Transactions on Power Systems, Vol. 19, No. 2,
pp. 1387-1401, August 2004.
[2] IEEE/PES Power System Relaying Committee, Performance of Generating Protection During
Major System Disturbances, IEEE paper TPWRD-00370-2003, available: http://www.pes-
psrc.org/.
[3] P. Kundur, Power System Stability and Control, McGraw-Hill, 1994.
[4] S. Corsi, M. Pozzi, C. Sabelli, and Antonio Serrani, The Coordinated Automatic Voltage
Control of the Italian Transmission Grid - Part I: Reasons of the Choice and Overview of the
Consolidated Hierarchical System, IEEE Transactions on Power Systems, Vol. 19, No. 4, pp.
1723-1732, November 2004. Discussion and Closure in February 2006 issue.
[5] M.M Adibi, D.P. Milanicz, and T. L. Volkmann, Optimizing Generator Reactive Power
Resources, IEEE Transactions on Power Systems, Vol. 14, No. 1, pp. 319324, February 1999.
[6] IEEE C57.116-1989, IEEE Guide for Transformers Directly Connected to Generators.
[7] S. Corsi, M. Pozzi, M. Sforna, and G. Dell'Olio, The Coordinated Automatic Voltage Control
of the Italian Transmission Grid - Part II: Control Apparatuses and Field Performance of the
Consolidated Hierarchical System, IEEE Transactions on Power Systems, Vol. 19, No. 4, pp.
1733-1741, November 2004.
[8] CIGRE TF C4.602, Coordinated Voltage Control in Transmission Networks, technical brochure
310, February 2007.
[9] IEEE Task Force on Large Interconnected Power Systems Response to Generation Governing,
Interconnected Power System Response to Generation Governing: Present Practice and
Outstanding Concerns, draft final report February, 2007.
[10] IEEE Task Force on Generator Model Validation Testing, Guidelines for Generator Stability
Model Validation Testing, to be published in the Proceedings of the IEEE PES General
Meeting, Tampa, FL, 2007.
[11] N. B. Bhatt, Field Experience with Momentary Fast Turbine Valving and Other Special
Stability Controls Employed at AEP's Rockport Plant, IEEE Transactions on Power Systems,
Vol. 11, No. 1, pp. 155-161, February 1996.
[12] R. G. Farmer (editor), Power System Dynamics and Stability, Chapter 11, The Electric Power
Engineering Handbook, CRC Press/IEEE Press, second edition 2007.
[13] C. Concordia, System Planning Considerations of Sub-Synchronous Resonance, Analysis and
Control of Subsynchronous Resonance, IEEE publication 76 CH1066-00PWR, 1976.
[14] C. W. Taylor, Power System Voltage Stability, McGraw-Hill, 1994.
[15] T. Van Cutsem and C. Vournas, Voltage Stability of Electric Power Systems, Kluwer Academic
Publishers, 1998.
[16] CIGRÉ TF 38.02.12, Criteria and Countermeasures for Voltage Collapse, CIGRÉ Brochure
101, October 1995.
[17] IEEE Committee Report, Voltage Stability Assessment: Concepts, Practices and Tools, 2003.
[18] E. C. Starr and E. J. Harrington, Shunt Capacitors in Large Transmission Networks, AIEE
Transactions, pp. 11291140, December 1953.
[19] T. J. Nagel and G. S. Vassell, Basic Principles of Planning VAR Control on the American
Electric Power System, IEEE Transactions on Power Apparatus and Systems, Vol. PAS-87,
No. 2, pp. 488495, February 1968.

[20] U.S.-Canada Power System Outage Task Force, Final Report on the August 14, 2003 Blackout
in the United States and Canada: Causes and Recommendations, April 2004.
[21] P. Nedwick, A. F. Mistr, Jr., and E. B. Croasdale, Reactive Management: A Key to Survival in
the 1990s, IEEE Transactions on Power Systems, Vol. 10, No. 2, pp. 10361043, May 1995.
[22] D. Kosterev, Design, Installation, and Initial Operating Experience with Line Drop
Compensation at John Day Powerhouse, IEEE Transactions on Power Systems, Vol. 16, No. 2,
pp. 261265, May 2001.
[23] S. Koishikawa, S. Ohsaka, M. Suzuki, T. Michigami, and M. Akimoto, Adaptive Control of
Reactive Power Supply Enhancing Voltage Stability of a Bulk Power Transmission System and
a New Scheme of Monitor on Voltage Security, CIGRÉ 38/39-01, 1990.
[24] S. Noguchi, M. Shimomura, and J. Paserba, Improvement to an Advanced High Side Voltage
Control, IEEE Transactions on Power Systems, Vol. 21, No. 2, pp. 683692, May 2006.
[25] S. Noguchi, M. Shimomura, J. Paserba and C. Taylor, Field Verification of an Advanced High
Side Voltage Control at a Hydro Power Station, IEEE Transactions on Power Systems, Vol. 21,
No. 2, pp. 693701, May 2006.
[26] J. B. Davies and L. E. Midford, High Side Voltage Control at Manitoba Hydro, panel session
on Power Plant Secondary (High Side) Voltage Control, Proceedings of IEEE/PES 2000
Summer Meeting, 1620 July 2000, Seattle.
[27] J. D. Hurley, L. N. Bize, and C. R. Mummert, The Adverse Effects of Excitation System Var
and Power Factor Controllers, IEEE Transactions on Energy Conversion, Vol. 14, No. 4, pp.
16361645, December 1999.
[28] P. H. Thiel, J. E. Harder, and G. E. Taylor, Fuseless Capacitor Banks, IEEE Transactions on
Power Delivery, Vol. 7, No. 2, pp. 10091015, April 1992.
[29] S.-M. Hsu, H. J. Holley, W. M. Smith, and D. G. Piatt, Voltage Profile Improvement Project at
Alabama Power Company: A Case Study, Proceedings of the IEEE/PES 2000 Summer
Meeting, Seattle, pp. 20392044, July 2000.
[30] D. P. Bruns, G. R. Newcomb, S. A. Miske Jr., C.W. Taylor, G. E. Lee, and A.-A. Edris, Shunt
Capacitor Bank Series Group Shorting (CAPS) Design and Application, IEEE Trans. Power
Delivery, Vol. 16, No. 1, pp. 2432, January 2001
[31] C. W. Taylor, D. C. Erickson, K. E. Martin, R. E. Wilson, and V. Venkatasubramanian,
WACS - Wide-Area Stability and Voltage Control System: R&D and On-Line
Demonstration, Proceedings of the IEEE special issue on Energy Infrastructure Defense
Systems, Vol. 93, No. 5, pp. 892-906, May 2005.
[32] J. A. Diaz de Leon II, and C. W. Taylor, Understanding and Solving Short-Term Voltage
Stability Problems, Proceedings of IEEE/PES 2000 Summer Meeting.
[33] C. W. Taylor, Reactive Power and Deregulation: System Engineering, Reliability, Best
Practices, and Simplicity, Proceedings of X Symposium of Specialists in Electric Operational
and Expansion Planning (X SEPOPE), 21-25 May 2006, Florianópolis, Brazil.
[34] S. H. Horowitz and A. G. Phadke, Third Zone Revisited, IEEE Transactions on Power
Delivery, Vol. 21, No. 1, pp. 23 29, January 2006.
[35] NERC System Protection Task Force, Protection System Review Program - Beyond Zone 3,
August 2005. ftp://www.nerc.com/pub/sys/all_updl/pc/spctf/Beyondz3-8-12-05.pdf
[36] C. W. Taylor and D. C. Erickson, Recording and Analyzing the July 2 Cascading Outage,
IEEE Computer Applications in Power, Vol. 10, No. 1, pp. 2630, January 1997.
[37] IEEE/PES Power System Relaying Committee, Transmission Line Protective Systems
Loadability, March 2001, available: http://www.pes-psrc.org/.
[38] IEEE/PES Power System Relaying Committee, paper on zone 3 relays in preparation.
[39] WSCC Relay Work Group Report, Application of Zone 3 Distance Relays on Transmission
Lines, 10 September 1997, available: www.wecc.biz.
[40] A. G. Phadke and J. S. Thorp, Expose Hidden Failures to Prevent Cascading Outages, Computer
Applications in Power, Vol. 9, No. 3, July 1996.
[41] S. Tamronglak, S. H. Horowitz, A. G. Phadke, and J. S. Thorp, Anatomy of Power System
Blackouts: Preventive Relaying Strategies, IEEE Transactions on Power Delivery, Vol. 11, No.
2, pp. 708-715, April 1996.
[42] IEEE/PES Power System Relaying Committee, Draft Guide for Abnormal Frequency Protection
for Power Generating Plants.
[43] Charles Concordia, Lester H. Fink and George Poullikkas, Load Shedding on an Isolated
System, IEEE Transactions on Power Systems, Vol.10, No.3, August 1995.
[44] C. W. Taylor, F. R. Nassief, and R. L. Cresap, Northwest Power Pool Transient Stability and
Load Shedding Controls for Generation-Load Imbalances, IEEE Transactions on Power
Apparatus and Systems, Vol. PAS-100, No. 7, pp. 3486-3495, July 1981.
[45] B. Delfino, S. Massucco, A. Morini, P. Scalera and F. Silvestro, Implementation and Comparison
of Different Under Frequency Load-Shedding Schemes, 2001 IEEE.
[46] Shinichi Imai and Tadaaki Yasuda, UFLS Program to Ensure Stable Island Operation, 2004
IEEE/PES Power System Conference and Exposition.
[47] IEEE Committee Report, Loss-of-Field Relay Operation during System Disturbances, IEEE
Transactions on Power Apparatus and Systems, vol. PAS-94, No. 5, pp. 14641472,
September/October 1975.
[48] C. Concordia, discussion of reference 17 and companion papers, IEEE Transactions on Power
Apparatus and Systems, Vol. PAS-94, No. 5, p. 1481, September/October 1975.
[49] Carson Taylor, Concepts of Undervoltage Load Shedding for Voltage Stability, IEEE
Transactions on Power Delivery, Vol.7, No.2, April 1992.
[50] Charles F. Henville and Yofre Jacome, Real Consequences Follow Imaginary Power
Deficiencies, Western Protective Relaying Conference, Spokane, WA, 18 October 2004.
[51] WSCC, Voltage Stability Criteria, Undervoltage Load Shedding Strategy, and Reactive Power
Reserve Monitoring Methodology, May 1998.
[52] Daniel Lefebvre, Cedric Moors and Thierry Van Cutsem, Design of an Undervoltage Load
Shedding Scheme for the Hydro-Quebec System, 2003 IEEE.
[53] D. Lefebvre, S. Bernard and T. Van Cutsem, Undervoltage Load Shedding Scheme for the
Hydro-Quebec System, IEEE Power Engineering Society General Meeting, June 2004.
[54] S. Kolluri, Tao He, Design and Operating Experience with Fast Acting Load Shedding Scheme
in the Entergy System to Prevent Voltage Collapse, IEEE Power Engineering Society General
Meeting, June 2004.
[55] J. Mechenbier, A. Ellis, R. Curtner and S. Ranade, Design of an Under Voltage Load Shedding
Scheme, IEEE Power Engineering Society General Meeting, June 2004.
[56] C. W. Taylor, Power System Stability Controls, Chapter 11.6, The Electric Power
Engineering Handbook, CRC Press/IEEE Press, 2001. Second edition in press.
[57] IEEE Discrete Supplementary Control Task Force, A Description of Discrete Supplementary
Controls for Stability, IEEE Transactions on Power Apparatus and Systems, Vol. PAS-97, pp.
149165, January/February 1978.
[58] IEEE Special Stability Controls Working Group, Annotated Bibliography on Power System
Stability Controls: 1986-1994, IEEE Transactions on Power Systems, Vol. 11, No. 2, pp. 794-
800, August 1996.
[59] IEEE/CIGRÉ Committee Report (P. M. Anderson and B. K. LeReverend), Industry Experience
with Special Protection Schemes, IEEE Transactions on Power Systems, Vol. 11, No. 3, pp.
1166-1179, August 1996.
[60] D. Dodge, W. Doel, and S. Smith, Power System Stability Control Using Fault Tolerant
Technology, ISA Instrumentation in Power Industry, Vol. 33, 33rd Power Instrumentation
Symposium, May 21-23, 1990, paper 90-1323.
[61] D. C. Lee and P. Kundur, Advanced Excitation Controls for Power System Stability
Enhancement, CIGRÉ, paper 38-01, 1986.
[62] C. W. Taylor, J. R. Mechenbier, and C. E. Matthews, Transient Excitation Boosting at Grand
Coulee Third Power Plant, IEEE Transactions on Power Systems, Vol. 8, No. 3, pp. 1291
1298, August 1993.
[63] C. A. Stigers, C. S. Woods, J. R. Smith, and R. D. Setterstrom, The Acceleration Trend Relay
for Generator Stabilization at Colstrip, IEEE Transactions on Power Delivery, Vol. 12, No. 3,
pp. 10741081, July 1997.
[64] D. N. Kosterev, J. Esztergalyos, and C. A. Stigers, Feasibility Study of Using Synchronized
Phasor Measurements for Generator Dropping Controls in the Colstrip System, IEEE
Transactions on Power Systems, Vol. 13, No. 3, pp. 755762, August 1998.
[65] K. Matsuzawa, K. Yanagihashi, J. Tsukita, M. Sato, T. Nakamura, and A. Takeuchi, Stabilizing
Control System Preventing Loss of Synchronism from Extension and Its Actual Operating
Experience, IEEE Transactions on Power Systems, Vol. 10, No. 3, pp. 16061613, August
1995.
[66] IEEE Committee Report, Single-Pole Switching for Stability and Reliability, IEEE
Transactions on Power Systems, Vol. PWRS-1, pp. 2536, May 1986.
[67] H. Ota, Y. Kitayama, H. Ito, N. Fukushima, K. Omata, K. Morita, and Y. Kokai, Development
of Transient Stability Control System (TSC System) Based on On-Line Stability Calculation,
IEEE Transactions on Power Systems, Vol. 11 No. 3, pp. 14631472, August 1996.
[68] E. W. Kimbark, Improvement of System Stability by Switched Series Capacitors, IEEE
Transactions on Power Apparatus and Systems, Vol. PAS-85, No. 2, pp. 180188, February
1966.
[69] Y. Ohura, M. Suzuki, K. Yanagihashi, M. Yamaura, K. Omata, T. Nakamura, S. Mitamura, and
H. Watanabe, A Predictive Out-of-Step Protection System Based on Observation of the Phase
Difference Between Substations, IEEE Transactions on Power Delivery, Vol. 5, No. 4, pp.
16951704, November 1990.
[70] J. M. Haner, T. D. Laughlin, and C. W. Taylor, Experience with the R-Rdot Out-of-Step
Relay, IEEE Transactions on Power Delivery, Vol. PWRD-1, No. 2, pp. 3539, April 1986.
[71] C. W. Taylor, J. M. Haner, L. A. Hill, W. A. Mittelstadt, and R. L. Cresap, A New Out-of-Step
Relay with Rate of Change of Apparent Resistance Augmentation, IEEE Transactions on
Power Apparatus and Systems, Vol. PAS-102, No. 3, pp. 631639, March 1983.
[72] A. F. Djakov, A. Bondarenko, M. G. Portnoi, V. A. Semenov, I. Z. Gluskin, V. D. Kovalev, V.
I. Berdnikov, and V. A. Stroev, The Operation of Integrated Power Systems Close to Operating
Limits with the Help of Emergency Control Systems, CIGRÉ, paper 39-109, 1998.
[73] CIGRÉ TF 38.02.17, Advanced Angle Stability Controls, CIGRÉ Brochure No. 155, April 2000.
[74] IEEE Committee Report, HVDC Controls for System Dynamic Performance, IEEE
Transactions on Power Systems, Vol. 6, No. 2, pp. 743752, May 1991.
[75] IEEE Committee Report, AC-DC Economics and Alternatives - 1987 Panel Session Report,
IEEE Transactions on Power Delivery, Vol. 5, No. 4, pp. 1956-1976, October 1990.
[76] D. N. Kosterev, C. W. Taylor, and W. A. Mittelstadt, Model Validation for the August 10, 1996
WSCC System Outage, IEEE Transactions on Power Systems, Vol. 14, No. 3, pp. 967-979,
August 1999.
[77] R. Bunch and D. Kosterev, Design and Implementation of AC Voltage Dependent Current
Order Limiter at Pacific HVDC Intertie, IEEE Transactions on Power Delivery, Vol. 15, no. 1,
pp. 293299, January 2000.
[78] R. L. Cresap, D. N. Scott, W. A. Mittelstadt, and C. W. Taylor, Operating Experience with
Modulation of the Pacific HVDC Intertie, IEEE Transactions on Power Apparatus and
Systems, Vol. PAS-98, pp. 10531059, July/August 1978.
[79] R. L. Cresap, D. N. Scott, W. A. Mittelstadt, and C. W. Taylor, Damping of Pacific AC Intertie
Oscillations via Modulation of the Parallel Pacific HVDC Intertie, CIGRÉ, paper 14-05, 1978.
[80] C. E. Grund, J. F. Hauer, L. P. Crane, D. L. Carlson, and S. E. Wright, Square Butte HVDC
Modulation System Field Tests, IEEE Transactions on Power Delivery, Vol. 5, No. 1, pp. 351
357, January 1990.
[81] J. F. Hauer and J. R. Hunt in association with the WSCC System Oscillations Work Groups,
"Extending the Realism of Planning Models for the Western North America Power System." V
Symposium of Specialists in Electric Operational and Expansion Planning (SEPOPE), Recife
(PE) Brazil, May 19-24, 1996.
[82] J. F. Hauer, W. A. Mittelstadt, K. E. Martin, J. W. Burns, and Harry Lee in association with the
Disturbance Monitoring Work Group of the Western Electricity Coordinating Council, Direct
Analysis of Wide Area Dynamics. In Chapter 8 of The Electric Power Engineering Handbook,
L. L. Grigsby ed., CRC Press, second edition 2007.
[83] WSCC Plan for Dynamic Performance and Disturbance Monitoring, prepared by the WECC
Disturbance Monitoring Work Group, October 4, 2000. (Available at
http://www.wecc.biz/committees/JGC/DMWG/documents/)
[84] J. Hauer, D. Trudnowski, G. Rogers, B. Mittelstadt, W. Litzenberger, and J. Johnson, Keeping
an eye on power system dynamics, IEEE Computer Applications in Power, Volume: 10, No: 4,
pp. 5054, October. 1997.
[85] J. F. Hauer, W. A. Mittelstadt, W. H. Litzenberger, C. Clemans, D. Hamai, and P. Overholt,
Wide Area Measurements For Real-Time Control And Operation of Large Electric Power
Systems Evaluation And Demonstration Of Technology For The New Power System. Prepared
for U.S. Department of Energy Under BPA Contracts X5432-1, X9876-2; January 1999. This
report and associated attachments are available on compact disk.
[86] J. F. Hauer, F. J. Hughes, D. J. Trudnowski, G. J. Rogers, J. W. Pierre, L. L. Scharf, and W. H.
Litzenberger, A Dynamic Information Manager for Networked Monitoring of Large Power
Systems. EPRI Report TR-112031, May 1999.
[87] J. F. Hauer, Functionalities Provided by the BPA/PNNL DSI Toolbox: A Tabular Summary,
WAMS Working Note, August 3, 2004. Available at as
ftp://ftp.bpa.gov/pub/WAMS_Information/
[88] J. F. Hauer and J. E. Dagle, Review of Recent Reliability Issues and System Events. PNNL
technical report PNNL-13150, prepared for the U.S. Department of Energy Transmission
Reliability Program by the Consortium for Electric Reliability Solutions (CERTS), December
1999.
[89] J. E. Dagle, Data Management Issues associated with the August 14, 2003 Blackout
Investigation, Proceedings of IEEE/PES 2004 General Meeting.
[90] J. F. Hauer, N. Bhatt, K. Shah, and S. Kolluri, Performance of WAMS East in Providing
Dynamic Information for the Northeast Blackout of August 14, 2003, Proceedings of
IEEE/PES 2004 General Meeting.
[91] R. P. Schulz, Power System Dynamics in the Eastern U.S.-Canada Interconnection: Some
Experiences, AEP Internal Report, July 30, 2004. Available: http://phasors.pnl.gov/.
[92] J. F. Hauer and J. G. DeSteese, A Tutorial on Detection and Characterization of Special
Behavior in Large Electric Power Systems, PNNL Report PNNL-14655, July 2004.
[93] N. Zhou, J. Pierre, and J. F. Hauer, Initial Results in Power System Identification from Injected
Probing Signals Using a Subspace Method, IEEE Transactions on Power Systems, Vol. 21, No.
3, pp. 12961302, August 2006.
[94] CIGRE Technical Brochure on Wide Area Monitoring and Control for Transmission Capability
Enhancement, prepared by CIGRE WG C4.601, draft final report, December 2006.
[95] Test Guidelines for Synchronous Unit Dynamic Testing and Model Validation WECC Feb.
1997 -
http://www.wecc.biz/modules.php?op=modload&name=Downloads&file=index&req=viewsdo
wnload&sid=30.
[96] J. W. Feltes, J. R. Willis and C. Grande-Moran, Testing Methods - An Overview, presented at
SERC Generator Testing Workshop, Atlanta, GA, December 2000.
(http://www.serc1.org/minutes/GenTest_2000_12
[97] L. Pereira, D. Kosterev, P. Mackin, D. Davies, J. Undrill, and W. Zhu, An Interim Dynamic
Induction Motor Model for Stability Studies in the WSCC, IEEE Transactions on Power
Systems, Vol. 17, No. 4, pp. 11081115, November 2002.
[98] L. Pereira, J. Undrill, D. Kosterev, D. Davies, and S. Patterson, A New Thermal Governor
Modeling Approach in the WECC, IEEE Transactions on Power Systems, Vol. 18, No. 2, pp.
819829, May 2003.
[99] L. Pereira, D. Kosterev, D. Davies, and S. Patterson, New Thermal Governor Model Selection
and Validation in the WECC, IEEE Transactions on Power Systems, Vol. 19, No. 1, pp. 517
523, February 2004.
[100] L. Pereira, Cascade to Black, IEEE Power and Energy Magazine, May/June 2004.
[101] R. C. Hardiman, M. Kumbale, and Y. V. Makarov, An Advanced Tool for Analyzing Multiple
Cascading Failures, Probabilistic Methods Applied to Power Systems, 2004 International
Conference, 1216 Sept. 2004, pp. 629634, ISBN: 0-9761319-1-9.
[102] Standard IRO-004-0, Reliability Coordination - Operations Planning,
ftp://www.nerc.com/pub/sys/all_updl/standards/rs/IRO-004-0.pdf.
[103] Fouad, A.A. (Chair), Dynamic Security Assessment Practices in North America, IEEE
Transactions on Power System, Vol. 3, No. 3, pp. 13101321, August 1988.
[104] Hauer, J.F., Application of Prony Analysis to the Determination of Modal Content and
Equivalent Models for Measured Power System Response, IEEE Transactions on Power
Systems, Vol. 6, No. 3, pp. 1062 1068, August 1991.
[105] Vittal, V., Sauer, P.W., Meliopoulos, A.P., On-line Transient Stability Analysis Scoping Study,
PSERC Final Report, 2005, http://www.pserc.org/cgi-
pserc/getbig/publicatio/reports/2005report/vittal_pserc_report_s-21_2005.pdf.
[106] CIGRE WG C4.601, Review of On-Line Dynamic Security Assessment Tools and Techniques,
CIGRE Technical Brochure, draft final report December 2006.
[107] K. Morison, L. Wang, and P. Kundur, Power System Security Assessment, IEEE Power &
Energy Magazine, Vol. 2, No. 5, pp. 30-39, Sept/Oct 2004.
[108] CIGRE WG C4.601, Modeling and Dynamic Behavior of Wind Generation as it Relates to
Power System Control and Dynamic Performance, CIGRE Technical Brochure, draft final report
December 2006.
[109] New York State Energy Research and Development Authority (NYSERDA) The Effects of
Integrating Wind Power on Transmission System Planning, Reliability, and Operations, Phase 2,
March 2005 p 2-11 http://www.nyserda.org/publications/default.asp,.
[110] Minnesota Department of Commerce, Wind Integration Study ,Introduction, November 2004,
p 90, http://www.state.mn.us/portal/mn/jsp/home.do?agency=Commerce
[111] I. Erlich, U. Bachmann, Grid Code Requirements Concerning Connection and Operation of
Wind Turbines in Germany.
[112] Joint Report of the North American Electric Reliability Council and the American Wind Energy
Association before the United States Federal Regulatory Commission on Interconnection of
Wind Energy (Docket N0. RM05-4-000).
[113] G. Lalor, A. Mullane and M. O'Malley, Frequency Control and Wind Turbine Technologies,
IEEE Transactions on Power Systems, Vol. 20, No. 4, pp. 1905-1913, November 2005.
[114] J. H. Chow, R. W. de Mello, and K. W. Cheung, Electricity Market Design: An Integrated
Approach to Reliability Assurance, Proceedings of IEEE, Vol. 93, No. 11, pp. 19561969,
Nov. 2005.
[115] A. Berizzi and M. Sforna, Dynamic Security Issues in the Italian Deregulated Power System,
Proc. IEEE General Power Meeting, Montreal, 2006.
[116] S. Arabi, H. Hamadanizadeh, and B. Fardanesh, Convertible Static Compensator Performance
Studies on the NY State Transmission System, IEEE Transactions on Power Systems, Vol. 17,
pp. 701-706, 2002.
[117] FERC Staff Report, Principles for Efficient and Reliable Reactive Power Supply and
Consumption, Docket No. AD05-1-000, February 4, 2005.
[118] J. H. Chow, R. W. de Mello, M. R. L. Carvalho, and R. W. Waldele, Offer-Based Real-Time
Reactive Power Supply in a Restructured Power System, submitted for publication.
[119] M.M. Adibi (editor), Power System RestorationMethodologies & Implementation Strategies,
IEEE Press, 2000.
[120] R. Salvati, M. Sforna, and M. Pozzi, Restoration Project, IEEE Power & Energy Magazine,
Vol. 2, No. 1, pp. 4451, January/February 2004.
[121] IEEE Power System Relaying Committee, Protection Issues During System Restoration, IEEE
Transactions on Power Delivery, Vol. 20, No. 1, January 2005.
[122] A. Berizzi, P. Bresesti, P. Marannino, G. P. Granelli, and M. Montagna, System-Area
Operating Margin Assessment and Security Enhancement Against Voltage Collapse, IEEE
Transactions on Power Systems, Vol. 11, No. 3, pp. 1451-1462, August 1996.
[123] P. Marannino, F. Zanellini, A. Berizzi, D. Medina, M. Merlo, and M. Pozzi, Steady State and
Dynamic Approaches for the Evaluation of the Loadability Margins in the Presence of the
Secondary Voltage Regulation, IEEE Transactions on Power Systems, May 2004.
[124] P. Gomes, H. J. Chipp, J. M. O. Filho, and S. L. Sardinha, Brazilian Defense Plan Against
Extreme Contingencies, Quality and Security of Electric Power Delivery Systems,
CIGRÉ/IEEE-PES International Symposium, October 2003, Montreal, Canada.
[125] P. Gomes, S. L. Sardinha, and G. Cardoso, Brazilian Experience with SPS, CIGRÉ 2004,
Paris.
[126] P. Gomes, M. G. Santos, and A. F. C. Aquino, Operating a Power System Closer to its
Technical Limit - The Brazilian Experience, CIGRÉ 2004, Paris.
[127] P. Gomes, New Strategies to Improve Bulk Power System Security: Lessons Learned from
Large Blackouts, Panel Session on Major Grid Blackouts of 2003 in North America and Europe,
IEEE/ PES General Meeting, 2004.
[128] P. Gomes, G. Cardoso Jr., Reducing Blackout Risk by System Protection Schemes - Detection
and Mitigation of Critical System Conditions, Paper C2-201, CIGRÉ 2006.
[129] G. Trudel, S. Bernard, and G. Scott, Hydro-Québec's Plan Against Extreme Contingencies,
IEEE Transactions on Power Systems, Vol. 14, No. 3, August 1999.
[130] P. Gomes, A. C. S.Lima, and A. P. Guarini, Guidelines for Power System Restoration in the
Brazilian System, IEEE Transactions on Power Systems, Vol. 19, No. 2, May 2004.

CHAPTER 3

New and Existing Technologies to Improve Power System Dynamic Performance
3.1 Introduction
This chapter focuses on new and existing technologies. The emphasis is on a
comprehensive framework that can prevent and/or contain large-scale disturbances by
exploiting technologies that have become commercially available in recent years
or are emerging for application in the near future.
3.2 Existing Modern Power Electronic Based Transmission
Technologies That Can Help to Significantly Improve System
Dynamic Performance and Reduce the Risk of Cascading [58]
Presently there exists a wide range of established modern technologies that can assist
in improving system dynamic performance and thereby help to minimize the impact
of major disturbances and prevent widespread blackouts. Many of these technologies
are power electronics based transmission technologies, which help to increase transfer
capabilities over long transmission corridors at a fraction of the cost of building new
transmission lines. In addition, they can provide a host of other benefits. These
technologies can be categorized into two groups:
Flexible AC Transmission Systems (FACTS)
High Voltage Direct Current (HVDC) Transmission Systems
The latter of the two, HVDC, is an alternative to AC transmission that provides
certain benefits in specific applications. This section will describe the state-of-the-art
in these two groups of technologies.
3.2.1 Flexible AC Transmission Systems
The concept of Flexible ac Transmission Systems (FACTS) was first introduced in the
mid-1980s [1], [2]. In traditional power systems, at remote locations far from
generating plants, network voltage is controlled by either switching mechanically
switched shunt devices (capacitors and reactors) or by changing taps on on-load tap-
changing transformers. These methods of voltage control can be sluggish and rather
coarse (i.e. result in step changes in voltage rather than smooth continuous
regulation). Under severe contingency scenarios, particularly near load centers remote
from generation, such slow and coarse voltage control can result in an inability to
regulate voltage and possibly lead to voltage instability or voltage collapse [3-5]. One
of the causes of such voltage stability problems is the use of modern air-conditioning
[6]. As air-conditioning has become a necessary comfort of the typical household,
the on-peak summer time load in many electrical power systems has grown. To
further exacerbate the problem, air-conditioning load is characterized by the low-inertia
electric motors at the heart of air-conditioning systems (the compressors).
During major system disturbances these motors have a tendency to stall and become a
significant drain of real and reactive current, resulting in delayed voltage recovery
(sometimes tens of seconds [6]) that may, under extreme conditions, lead to voltage
collapse. To address such problems, the solution often tends to be a combination of
faster protection systems (that is, the ability to remove a faulted line from service as
quickly as possible) and the addition of fast-acting, smoothly controlled reactive
power devices.¹ With the advent of modern power electronics, shunt devices such as
static VAr compensators (SVC) and static compensators (STATCOM) can be
implemented to provide fast vernier regulation of voltage. These devices can provide
other significant operational benefits in financial and technical terms by improving
transient stability and small-signal stability [7]. Figure 3.1 shows a typical thyristor
controlled SVC device.

Figure 3.1: Thyristor valves used in SVCs (courtesy of ABB).


At the heart of FACTS devices is either the thyristor valve or gate-turn-off devices.
The thyristor valve has been around since the 1970s and is a four layered PNPN
junction semiconductor device. The thyristor is line-commutated and while its turn-on
time can be controlled, turn-off occurs only when the line current reverses. Thus,
thyristor based devices are controlled passive devices; that is, through controlled
switching of the thyristors, the effective impedance of a series or shunt device can be
quickly and smoothly changed so as to control a power system parameter. More
advanced devices use gate-turn-off thyristors (GTOs) or insulated gate bipolar
transistors (IGBTs). These devices allow controlled switching for both turn-on and
turn-off times of the device, and consequently, extremely fast switching frequencies
can be used (kilohertz) to fully control the output of the device. In this way, through
forced commutation, a truly active voltage-source device can be designed. Figure 3.2
shows an IGBT based STATCOM device.

¹ Presently, work by the Loading Modeling Task Force of the WECC Modeling and Validation
Working Group, and research being sponsored by the California Energy Commission, is looking into
other potential solutions to the delayed voltage recovery caused by air-conditioner motor load stalling,
such as fitting the actual air-conditioning units with under-voltage relays.

Figure 3.2: STATCOM IGBT Valve (courtesy of ABB).
In the following subsections, these technologies and their benefits will be described in
more detail.
3.2.2 Shunt Devices and Voltage Control
Shunt FACTS devices such as the static VAr compensators (SVCs) or the static
compensator (STATCOM) can be used to provide significant improvements in
voltage control and stability. This is particularly true in regions where old generation
assets are being retired leaving large load pockets with little or no smoothly controlled
dynamic reactive support in the immediate vicinity. References [5] and [8] provide
practical examples of the application of these devices for this purpose. The examples
are for recent applications related to the retirement of old (aging generating facilities
are often retired due to environmental concerns related to emissions) or uneconomic
generation assets. Under market-based generation dispatch, the lowest-cost generation
is the first to come on line, except in cases where a higher cost generator is required to
maintain voltage under normal or contingency conditions in order to maintain system
reliability, typically called a reliability must-run (RMR) generator. Multiple ISOs
and RTOs have reported tens of millions of dollars of annual RMR costs. Given the
magnitude of these costs and the objective of market operators to constantly improve
market efficiency, several system operators have taken specific actions to reduce
RMR costs. However, retiring generation without consideration for voltage support
can adversely impact overall system reliability. Power electronic based shunt FACTS
devices are thus a practical alternative to RMR generation.
The SVC is an impedance device that uses thyristor valves to control the effective
impedance of the shunt device. By regulating the device's impedance as a function of
measured system voltage, vernier voltage control can be effected. In addition, by
coordinating the control of other nearby mechanically switched capacitor banks
(which may already exist in the system), a fully integrated static VAr system (SVS)
can be implemented to provide much greater flexibility and control [9]. Such a design
ensures that the switching of the mechanically switched capacitor banks and the SVC
is fully coordinated and automated, thus removing the need for operator action (and
possible operator error) following a major disturbance.
Another family of static compensators (STATCOM) is based on voltage sourced
converter technology. Under certain system conditions, these devices present
additional benefits since once at their reactive limit a STATCOM is a constant current
device, while an SVC is a constant impedance device. Since a STATCOM is typically
a higher cost item than a comparable thyristor based SVC, the application should
justify the additional cost.
Figure 3.3 shows the basic thyristor-based building blocks used for SVC and the
voltage source converter, the main building block for STATCOM. The thyristor
controlled reactor (TCR) provides vernier control throughout the rating of this
building block but as a non-linear device the TCR generates harmonics that must be
mitigated with filters. For this reason, a TCR is always combined with harmonic
filters, which also provide part or all of the SVC's capacitive rating. A thyristor
switched capacitor (TSC) can also be added to a TCR and filter to increase the
operating range of an SVC. Figure 3.4 shows a typical high voltage SVC.
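
For reference, the fundamental-frequency susceptance of the thyristor controlled reactor shown in Figure 3.3 varies with the thyristor firing (delay) angle \( \alpha \) according to the standard textbook relationship (quoted here only as background; it is not specific to any particular installation):

\[
B_{TCR}(\alpha) = \frac{2(\pi - \alpha) + \sin 2\alpha}{\pi X_L}, \qquad \frac{\pi}{2} \le \alpha \le \pi
\]

where \( X_L \) is the reactance of the TCR reactor and \( \alpha \) is measured from the voltage zero crossing. Full conduction (\( \alpha = \pi/2 \)) gives \( B = 1/X_L \) and \( \alpha = \pi \) gives zero conduction, so the SVC voltage regulator continuously adjusts \( \alpha \) to obtain the susceptance needed to hold the measured bus voltage at its reference.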

Figure 3.3: Building blocks for shunt power electronic based reactive devices.

Figure 3.4: 420 kV, +/-160 MVAr SVC (courtesy of ABB)
A voltage source converter (VSC) has by its nature a symmetrical output. However, in
many cases where reactive support is needed, a greater capacitive rating of the device
is needed relative to inductive rating. For this reason, a VSC is often combined with
capacitors.
In terms of cost per MVAr, mechanically switched capacitors (MSCs) are
considerably less costly than FACTS devices. Given these economics and the fact that
MSCs work well to control steady-state voltage it is often desirable to automate the
operation of MSCs with an SVC or STATCOM. The vernier output of the FACTS
device can be used to smooth the voltage profile between MSC switching steps while
the output of the SVC or STATCOM is held within a small bandwidth close to zero,
to conserve its dynamic range for severe events [9]. However, when combining shunt
FACTS devices with mechanically switched devices such as capacitor banks, it is
crucial to confirm that the mix of the smoothly controllable reactive support under the
control of power electronics versus discrete mechanically switched reactive
components is appropriate.
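
The following is a rough sketch of such supervisory coordination logic, in which sustained SVC or STATCOM output causes mechanically switched banks to be switched so that the fast device is returned toward the middle of its range. The class name, thresholds, dwell time and bank size are hypothetical illustration values, not a vendor implementation.

```python
# Sketch of supervisory logic that keeps an SVC/STATCOM near zero steady-state
# output by switching mechanically switched capacitor (MSC) banks.
# All thresholds and the bank size are hypothetical example values.

class MscCoordinator:
    def __init__(self, bank_mvar=50.0, threshold_mvar=40.0, dwell_s=30.0):
        self.bank_mvar = bank_mvar            # size of one switchable bank (MVAr)
        self.threshold_mvar = threshold_mvar  # sustained SVC output that triggers switching
        self.dwell_s = dwell_s                # time the output must persist before acting
        self.timer_s = 0.0
        self.banks_in_service = 0

    def update(self, svc_output_mvar, dt_s):
        """svc_output_mvar > 0 means the fast device is supplying capacitive MVAr.
        Returns +1 to switch a bank in, -1 to switch one out, 0 for no action."""
        if svc_output_mvar > self.threshold_mvar:
            self.timer_s += dt_s
            if self.timer_s >= self.dwell_s:
                self.timer_s = 0.0
                self.banks_in_service += 1
                return +1   # bank in: SVC output backs down toward zero
        elif svc_output_mvar < -self.threshold_mvar and self.banks_in_service > 0:
            self.timer_s += dt_s
            if self.timer_s >= self.dwell_s:
                self.timer_s = 0.0
                self.banks_in_service -= 1
                return -1   # bank out
        else:
            self.timer_s = 0.0
        return 0
```

A deadband and dwell time of this kind are what keep the fast device's full dynamic range in reserve for the next disturbance while the MSCs carry the steady-state duty.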
Another means of reducing the cost of an SVC or STATCOM is by designing a short-
term or overload rating. The reason for having a short-term rating is that in some
applications the FACTS device may only be needed for a short time following
contingencies. However, care must be taken in applying this approach. Following a
critical contingency, it may take operators minutes or hours to establish the impact on
load and generation and re-dispatch the system appropriately. While dynamic
simulations may demonstrate that a device rated with a short-term rating of seconds is
appropriate to recover system voltage immediately following a critical contingency, it
is often advantageous for system operation to have a fully rated device. For example,
blocks of load may trip during a fault and then automatically reconnect several
seconds or minutes following the contingency. A fully rated FACTS device can be
used during this period to stabilize voltage during these types of operations while a
short-term device may be at its limit due to the initial contingency.
3.2.3 Series Devices
Series devices such as thyristor controlled and conventional series capacitors help to
improve transient stability margins on long extra-high voltage transmission corridors.

A conventional series capacitor adds reactive impedance in series with a transmission
line. As a capacitor is the dual of an inductor, a series capacitor acts to cancel part
of the impedance inherent in a transmission line. Since power flow on a transmission
line is inversely proportional to the series impedance of the line, a series capacitor can
effectively increase power flow on a transmission line if other parameters, i.e. voltage
and voltage angle difference are held constant. By corollary, the introduction of a
series capacitor can be used to hold power flow constant and reduce the angular
difference between sending and receiving end voltages. In this way, a series capacitor
can improve system stability. The principles behind series compensation are
illustrated in Figure 3.5.

Figure 3.5: Principle of series compensation.
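
The effect can be summarized by the familiar two-source power transfer relation (a textbook illustration rather than a design calculation):

\[
P = \frac{V_1 V_2 \sin\delta}{X_L (1 - k)}, \qquad k = \frac{X_C}{X_L}
\]

where \( V_1 \) and \( V_2 \) are the terminal voltage magnitudes, \( \delta \) the angle between them, \( X_L \) the line reactance, \( X_C \) the series capacitor reactance, and \( k \) the degree of compensation. With, for example, \( k = 0.5 \), roughly twice the power can be carried at a given angle, or the same power at roughly half the angle, which is the stability benefit illustrated in Figure 3.5.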


From a reactive power point of view, a transmission line produces capacitive reactive
power under light load conditions. This is due to the electric fields from the line. As
the current flow (and consequently power flow) increases on a transmission line, the
magnetic fields associated with current flow cause the transmission line to look more
inductive. At some power flow level the capacitive reactive power generated by the
electric fields is exactly equal to the inductive reactive power due to the magnetic
fields. This point where the line is neither producing nor consuming reactive power is
the surge impedance loading (SIL) of the line. It is very convenient to operate a
transmission line at this point because no additional elements i.e. reactors or
capacitors are needed to manage the system voltage.
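
In quantitative terms (standard definitions, quoted here only for orientation):

\[
\mathrm{SIL} = \frac{V^2}{Z_c}, \qquad Z_c = \sqrt{\frac{L}{C}}
\]

where \( V \) is the rated line-to-line voltage and \( Z_c \) the surge (characteristic) impedance formed from the series inductance \( L \) and shunt capacitance \( C \) per unit length. Typical overhead EHV lines have surge impedances of very roughly 230 to 300 ohms, so a 500 kV line has an SIL on the order of 900 to 1000 MW.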
It is, however, frequently advisable to operate a transmission line past the SIL as the
thermal or stability limit of a line may be hundreds of MW above SIL. Lines may also
become loaded beyond SIL under critical contingency cases where one or more major
system elements are not operational. Under either of these circumstances, capacitive
MVAr must be supplied by shunt capacitors, series capacitors, or generation to
maintain system voltage. Furthermore, the reactive demand increases as the square of
current flow on the line.
As a series element, a series capacitor will produce capacitive reactive power as the
square of the line current. As a line loads up with current, the reactive demand of the
line also increases as the square of the current. Thus, if the line has series
compensation, the series capacitor produces capacitive MVAr in proportion to the square of the
line current, making the series capacitor a self-regulating device. This principle is
shown in Figure 3.6.

Figure 3.6: Benefits of series compensation.
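
Neglecting the line's shunt charging, the self-regulating behavior follows directly from the fact that both the series reactive consumption of the line and the reactive output of the series capacitor vary with the square of the line current:

\[
\frac{Q_C}{Q_L} = \frac{I^2 X_C}{I^2 X_L} = \frac{X_C}{X_L} = k
\]

so the series capacitor automatically supplies a fixed fraction \( k \) of the line's series reactive demand at any loading, without any control action.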
One of the concerns with series capacitor application is that of subsynchronous
resonance (SSR) [10]. There are, however, many established means of addressing this
issue [10]. One effective way to address this issue is by implementing part or all of the
series capacitor as a thyristor-controlled series capacitor (TCSC). This is a power
electronic based solution. One concept of a TCSC control is described in more detail
in the literature [11-13]. One of the consequences of the control strategy described in
[12] is to cause the apparent impedance of the TCSC to be inductive at frequencies
corresponding to the various modes of torsional oscillations of the turbine generator
shafts. Thus, there is no subsynchronous electro-mechanical resonance between the
controlled series capacitor and the torsional modes of nearby turbine-generators.
Figure 3.7 shows how the TCSC can effectively mitigate torsional instability. As
shown in the figure, from one to five seconds the thyristor valves in the TCSC portion
of the series capacitor are deliberately blocked. Thus, due to SSR the 20 Hz torsional
mode of the shaft becomes unstable (the machine used is the IEEE First SSR
benchmark). Once the TCSC is released, the apparent impedance of the line at
subsynchronous frequencies changes dramatically thus eliminating the SSR problem
near 20 Hz and damping out the torsional oscillations.

Figure 3.7: TCSC application for mitigating torsional oscillations.


Other benefits of a TCSC include the ability to regulate flows between parallel
transmission paths and the ability to improve small-signal stability [11], [13]. Through
dynamic control of the effective impedance of the series device, the impedance of a
transmission path can be varied relative to other parallel paths thereby controlling
power flow. This can be particularly useful after a major disturbance when lines
become overloaded due to the re-distribution of power. Following such a disturbance,
either by operator action or through automated control the effective impedance of a
path compensated by the TCSC can be controlled in order to change the power flow
on that path to mitigate thermal overload on that or adjacent paths. More traditional
devices such as phase-shifting transformers can also be applied for controlling power
flow on parallel paths; however, these devices do not offer the significantly faster
response time associated with power electronic devices employed in a TCSC.
Emerging technologies such as the Unified Power Flow Controllers (UPFC) can also
control power flow on parallel transmission corridors, though the UPFC is yet to be
established as a commercially viable technology.
Similarly, through fast modulation of the effective impedance of a TCSC the power
oscillations on a major transmission line can be quickly damped to prevent small-
signal instability following a major disturbance [11], [13]. A practical example of
such an application [13] is described here in brief. In 1999 two major power systems
in Brazil became interconnected through a 1020 km long 500 kV ac transmission line.
The installed generation in the systems was 48 GW and 14 GW respectively and the
rated power transfer was 1300 MW. Fixed series capacitors (54% of the line
reactance) and shunt reactors (95%) were installed in five substations along the line.
The shunt reactors compensate the reactive power generated by line charging under
light-load conditions while the series capacitors ensure transient first swing stability
so that the interconnected system remains in synchronous operation following a major
disturbance.
It is well known that low frequency modes of electromechanical oscillation between
large groups of generators, called inter-area modes, occur in large interconnected
systems. In the Brazilian North-South Interconnection the oscillation frequency for
the most critical mode is 0.20 Hz. These oscillations are weakly damped and may
sometimes be unstable leading to exponentially growing power oscillations on major
interconnecting power lines. To alleviate the power oscillations a portion of the series
capacitors can be fitted with thyristor control (Thyristor Controlled Series Capacitor,
TCSC). By modulating the inserted series impedance, depending on the detected
power oscillation, active damping can be created.
In the Brazilian North-South Interconnection two TCSCs, one in each end of the
interconnecting transmission line, were installed for this purpose. The rating of each
TCSC was about one tenth of the rating of the totally inserted capacitive reactance.
The damping performance was tested at commissioning of the Interconnection by
tripping one 300 MW generator in one of the interconnected systems. During the
testing the damping in the power system was intentionally weakened to make the
system prone to oscillations. Actually, when the power oscillation damping (POD)
system in the TCSC was disabled, the system was not even stable when the generator
was tripped. In contrast, when the damping system was enabled in only one TCSC
(the northern one), the oscillation caused by the generator tripping became well
damped. Figure 3.8 below shows the test arrangement and the recorded power
oscillations without and with the TCSC POD active.

[Plot panels: line power Pline (MW) versus time (sec), without TCSC POD active and with the north TCSC POD active, following the trip of a 300 MW generator; annotations indicate the 1020 km intertie and a 500 MW transfer, and a lower trace shows the TCSC reactance modulation (XTCSC).]

Figure 3.8: Effectiveness of TCSC in damping inter-area power oscillations.
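
A power oscillation damping controller of the kind described above is typically a washout plus phase-compensation filter acting on measured line power and modulating the inserted TCSC reactance. The following is a minimal discrete-time sketch; the gain, time constants and limit are hypothetical illustration values, and the controllers actually installed in Brazil are described in the cited references.

```python
# Minimal sketch of a power oscillation damping (POD) loop for a TCSC:
# a washout (high-pass) filter plus one lead-lag stage acting on measured line
# power, producing a reactance modulation order. Gain, time constants and the
# limit below are hypothetical illustration values, not the installed settings.

import math

class PodController:
    def __init__(self, dt, gain=0.02, t_washout=10.0, t_lead=0.6, t_lag=0.2, dx_max=5.0):
        self.dt = dt              # sample time (s)
        self.gain = gain          # ohm of modulation per MW of filtered deviation
        self.t_w = t_washout      # washout time constant (s)
        self.t1 = t_lead          # lead time constant (s)
        self.t2 = t_lag           # lag time constant (s)
        self.dx_max = dx_max      # symmetric reactance modulation limit (ohm)
        self.x_w = 0.0            # washout low-pass state
        self.z = 0.0              # lead-lag low-pass state
        self.initialized = False

    def step(self, p_line_mw):
        """Return the TCSC reactance modulation order (ohm) for this sample."""
        if not self.initialized:          # start from steady state, no output kick
            self.x_w = p_line_mw
            self.initialized = True
        # Washout s*Tw/(1+s*Tw): removes the steady power level so the controller
        # responds only to oscillatory deviations (backward-Euler discretization).
        self.x_w = (self.x_w + (self.dt / self.t_w) * p_line_mw) / (1.0 + self.dt / self.t_w)
        dev = p_line_mw - self.x_w
        # Lead-lag (1+s*T1)/(1+s*T2): phase compensation at the inter-area mode.
        self.z = (self.z + (self.dt / self.t2) * dev) / (1.0 + self.dt / self.t2)
        y = (self.t1 / self.t2) * dev + (1.0 - self.t1 / self.t2) * self.z
        dx = self.gain * y
        return max(-self.dx_max, min(self.dx_max, dx))

# Demo: a hypothetical 0.2 Hz, 100 MW swing superimposed on a 1300 MW transfer.
pod = PodController(dt=0.02)
for k in range(5):
    p = 1300.0 + 100.0 * math.sin(2.0 * math.pi * 0.2 * k * 0.02)
    print(round(pod.step(p), 3))
```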


3.2.4 High-Voltage Direct-Current Transmission
High Voltage Direct Current (HVDC) transmission is widely recognized as being
advantageous for economic long-distance, bulk-power delivery, asynchronous
interconnections and long submarine cable crossings. HVDC lines and cables are less
expensive and have lower losses than those for three-phase ac transmission. Higher
power transfers are possible over longer distances with fewer lines with HVDC
transmission than with ac transmission. Higher power transfers are possible without
distance limitation on HVDC cable systems using fewer cables than with ac cable
systems whose capacity diminishes with distance due to their charging current.
Because of their controllability HVDC links offer firm capacity without limitation due
to network congestion or loop flow on parallel paths.
With HVDC transmission systems, interconnections can be made between
asynchronous networks for more economic or reliable operation. The asynchronous
interconnection allows power exchange for mutual economic benefit while providing a
physical buffer between the two systems. Often these interconnections use back-to-
back converters with no transmission line. The asynchronous links act as an effective
firewall that prevents cascading outages originating in one network from propagating into
another network. Several asynchronous interconnections exist in North America
between the eastern and western interconnected systems, between the Electric
Reliability Council of Texas (ERCOT) and its neighbors, i.e., Mexico, Southwest
Power Pool (SPP) and the Western Interconnect, and between Quebec and its
neighbors, i.e., New England and the Maritimes. The August 2003 northeast blackout
provides an example of the firewall against cascading outages provided by
asynchronous interconnections. As the outage propagated around the lower Great
Lakes and through Ontario and New York it stopped at the asynchronous interface
with Quebec. Quebec was unaffected; the weak ac interconnections between New
York and New England tripped, but the HVDC links from Quebec continued to deliver
power to New England.
Conventional HVDC transmission employs line-commutated, current-source
converters with thyristor valves. These converters require a relatively strong
synchronous voltage source in order to commutate. The conversion process demands
reactive power which is supplied by mechanically-switched ac filters or shunt
capacitor banks which are an integral part of the converter station. Any surplus or
deficit in reactive power must be accommodated by the ac system. This difference in
reactive power needs to be kept within a given band to keep the ac voltage within the
desired tolerance. The weaker the system or the further away from generation, the
tighter the reactive power exchange must be to stay within the desired voltage
tolerance.
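
In approximate terms (standard converter relations, given only for illustration), the reactive power absorbed by a line-commutated converter is

\[
Q \approx P \tan\varphi, \qquad \cos\varphi \approx \tfrac{1}{2}\left[\cos\alpha + \cos(\alpha + \mu)\right]
\]

where \( P \) is the transmitted power, \( \alpha \) the firing angle (or the corresponding extinction-related angle at an inverter) and \( \mu \) the commutation overlap angle. For typical operating angles this amounts to very roughly 50 to 60 percent of the transferred power, which is why the ac filters and shunt capacitor banks referred to above are sized as a substantial fraction of the converter rating.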
Converters with series capacitors connected between the valves and the transformers
were introduced in the late 1990s for weak-system back-to-back applications. These
converters are referred to as capacitor-commutated converters (CCC). The series
capacitor provides some of the converter reactive power compensation requirements
automatically with load current and provides part of the commutation voltage
improving voltage stability. This reduces the need for switching of shunt
compensation with load changes. The CCC configuration allows higher power ratings in
areas where the ac network is close to its voltage stability limit.
HVDC transmission using voltage-source converters (VSC) with pulse-width
modulation (PWM) was introduced in the late 1990s. These VSC-based systems are
force-commutated with insulated-gate bipolar transistor (IGBT) valves and solid-
dielectric, extruded HVDC cables.
HVDC transmission and reactive power compensation with VSC technology has
certain attributes which can be beneficial to overall system performance. VSC
converter technology can rapidly control both active and reactive power
independently of one another (Figure 3.9). Reactive power can also be controlled at
each terminal independent of the dc transmission voltage level. This control capability
gives total flexibility to place converters anywhere in the ac network since there is no
restriction on minimum network short circuit capacity. Forced commutation with VSC
even permits black start, i.e., the converter can be used to synthesize a balanced set of
three phase voltages like a virtual synchronous generator. The dynamic support of the
ac voltage at each converter terminal improves the voltage stability and increases the
transfer capability of the sending and receiving end ac systems.

[Control schematic: two converter terminals with measured dc voltage (uDC), ac voltage (uAC), active power (p) and reactive power (q); outer dc-voltage / active-power and ac-voltage / reactive-power control loops feed the internal PWM current controllers at each terminal.]

Figure 3.9: VSC-Based HVDC System Control.


Figure 3.9 depicts the range of control options possible with a VSC based HVDC link.
Being able to independently control ac voltage magnitude and phase relative to the
system voltage allows use of separate active and reactive power control loops for
HVDC system regulation. The active power control loop can be set to control either
the active power or the dc side voltage. In a dc link, one station will then be selected
to control the active power while the other must be set to control the dc side voltage.
The reactive power control loop can be set to control either the reactive power or the
ac side voltage. Either of these two modes can be selected independently at either end
of the dc link. No mechanical switching of reactive power compensation elements is

required as with conventional HVDC. The reactive power dynamic range, however,
can be biased by a mechanically switched capacitor bank similar to what is done with
static var systems.
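As a simple illustration of the mode selections described above, the following sketch (a hypothetical representation, not a vendor control implementation) captures the rule that exactly one terminal of a two-terminal VSC link regulates the dc voltage while the other follows an active power order, and that each terminal independently selects reactive power or ac voltage control.

    from dataclasses import dataclass

    # Minimal sketch (illustrative only): control-mode selection for a
    # two-terminal VSC-HVDC link as described in the text.
    ACTIVE_MODES = ("P", "UDC")        # active-power loop: power order or dc voltage
    REACTIVE_MODES = ("Q", "UAC")      # reactive-power loop: Q order or ac voltage

    @dataclass
    class TerminalControl:
        name: str
        active_mode: str        # "P" or "UDC"
        reactive_mode: str      # "Q" or "UAC"
        setpoint_active: float
        setpoint_reactive: float

    def validate_link(term1: TerminalControl, term2: TerminalControl) -> None:
        for t in (term1, term2):
            assert t.active_mode in ACTIVE_MODES and t.reactive_mode in REACTIVE_MODES
        # Exactly one terminal regulates the dc voltage; the other takes the power order.
        dc_controllers = [t for t in (term1, term2) if t.active_mode == "UDC"]
        assert len(dc_controllers) == 1, "exactly one terminal must control dc voltage"

    # Example: terminal 1 dispatches 300 MW and regulates its ac bus voltage (1.0 pu);
    # terminal 2 holds the dc voltage and operates at zero reactive power exchange.
    t1 = TerminalControl("Terminal 1", "P", "UAC", 300.0, 1.0)
    t2 = TerminalControl("Terminal 2", "UDC", "Q", 1.0, 0.0)
    validate_link(t1, t2)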
Figure 3.10 shows the reactive power demand of a conventional converter station, the
reactive power from the filters and the reactive power exchange with the ac network
as a function of power transfer. In contrast, Figure 3.11 shows the reactive capability
of a VSC based HVDC system.

Figure 3.10: Reactive Power Balance Conventional HVDC.

Figure 3.11: Typical HVDC VSC Converter Active (P) and Reactive (Q) Power Capability.

3.2.5 Role of HVDC in Limiting the Scope of the August 14, 2003 Northeast
Blackout and in Helping to Restore Service Following the Event
It is interesting to note the role that HVDC transmission played in limiting the area
affected by the August 14, 2003 Northeast blackout and in assisting with system
restoration. As the blackout spread and propagated around the lower Great Lakes,
Quebec, the Maritimes and most of New England remained unaffected. This can be
directly attributed to the asynchronous nature of the Quebec network and the
relatively weak interconnections between New York and New England.
The asynchronous boundary between Quebec and its neighbors acted as a firewall
preventing the outage from propagating through Quebec. The relatively weak
interconnections between New York and New England likely served as a de-facto
shear-pin effectively stopping the outage from cascading further into New England.
Although market proponents deplore seams, from a reliability perspective they may
serve as a natural point of system separation. Asynchronous HVDC interconnections,
either back-to-back or transmission, allow power exchange between networks while
serving as a buffer between them.
Figure 3.12 shows the situation in the Northeast 7 hours after initiation of the August
14, 2003 blackout. The asynchronous boundaries between Quebec and its neighbors
are indicated. It can be clearly seen that service to the load centers in Quebec, New
England and the Maritimes continues whereas much of New York, Ontario, Ohio and
Michigan remains in the dark. In the meantime, power exports from Quebec and the
Maritimes to New England continue to flow as indicated. It should be pointed out that
the Chateauguay HVDC link between Quebec and New York was out of service since
the receiving network (New York) was dead. With the new generation of voltage
source converters with black start capability, however, power can be fed into a passive
network without any ties to generation; an example of this is described below.

(Figure annotations: asynchronous borders indicated; Quebec unaffected and continuing
to export to New England, with flows of roughly 200 MW, 500 MW and 147 MW shown on the
interconnections; about 40,000 MW of the 62,000 MW of interrupted load still out of
service. Source: Public Power Weekly, August 25, 2003.)

Figure 3.12: Conditions 7 hours after start of blackout.

Figure 3.12 shows that most of New York City and Long Island were still without
service 7 hours after the start of the blackout. At that time the Cross Sound Cable, an
HVDC interconnection with voltage source converters across Long Island Sound, was
lying dormant, one year after commissioning, due to permitting issues in
Connecticut. Due to the extraordinary circumstances, the US Department of Energy
ordered the start-up of the Cross Sound Cable to assist with restoration of service to
New York and Long Island. Figure 3.13 shows the timeline for this sequence of
events. Figure 3.14 shows two snapshots indicating the progress of service
restoration to Long Island following the start-up of Cross Sound Cable.
(Figure annotations: Cross Sound Cable power in MW, from August 13 through
August 17; the blackout hits Long Island, a federal order permits use of the CSC,
and the CSC is energized and begins delivering power.)
Figure 3.13: Time line of Cross Sound Cable operation.


During the system restoration process, the Cross Sound Cable not only provided an
additional source of power to eastern Long Island but also provided a much needed
source of reactive power during cold load pick-up and generator start-ups. During the
time it was ordered into operation, 2275 MVAr-hours of dynamic reactive power support
were provided to Connecticut and 1841 MVAr-hours to Long Island. This power source
reduced the need for rolling blackouts that inadequate capacity would otherwise have
required during restoration, and the dynamic reactive power source provided voltage
support and stabilization throughout the restoration.

(Figure panels: Long Island seven hours after the blackout, and fourteen hours after
the blackout with the Cross Sound Cable energized for about six hours.)
Figure 3.14: Restoration of Service to Long Island, 7 and 14 hours after blackout respectively.
3.2.6 Summary
This subsection has provided a brief overview of modern transmission technologies
and how they may be effectively used to enhance system dynamic performance in
response to major system disturbances. In this way, some of the risks of potential
cascading outages and islanding of the system can be mitigated. Of course, the
application of these devices requires proper coordination and tuning of controls to
ensure robust performance. The key benefit is the ability to react quickly and
automatically in response to system disturbances thereby enhancing damping in
power oscillations, transient stability margins, and smooth voltage recovery and

regulation following major system disturbances. The table below provides a summary
of the various potential benefits afforded by each of the technologies discussed.
Table 3.1: Summary of the benefits of power electronic based technologies.

HVDC (conventional)
Potential application: Transmission of power over long distances or between asynchronous systems.
Potential benefits as countermeasures to blackouts: Ability to control power on the dc path independently of parallel ac paths; ability to isolate the two interconnected systems, creating a firewall between them; in some cases, the ability to improve damping of power oscillations on parallel ac paths through proper application of supplemental controls on the dc converters.

HVDC (voltage source converter)
Potential application: Transmission of power over long distances or between asynchronous systems.
Potential benefits as countermeasures to blackouts: Same benefits as conventional dc, with the added advantage of being able to control both real and reactive power independently at each converter, thereby providing voltage support/regulation. This type of dc link also allows black start, that is, load can be picked up with no source of power other than the dc converter.

SVC/STATCOM
Potential application: To provide local voltage support in heavy load centers remote from generation; to improve power transfer on long transmission paths by providing fast voltage support mid-line and thus improved transient stability margins.
Potential benefits as countermeasures to blackouts: These shunt devices can improve/ensure voltage stability and regulation; improve/ensure transient stability when placed appropriately on long transmission paths; and improve small-signal stability through proper tuning and application of supplemental damping controls.

TCSC
Potential application: To allow control of power flow on parallel ac paths; to mitigate subsynchronous resonance (SSR); to improve small-signal stability by allowing power oscillation damping on major transmission paths.
Potential benefits as countermeasures to blackouts: This technology can mitigate SSR in series capacitor applications. Through the application of supplemental power oscillation damping controls it can be used to enhance small-signal stability. The series capacitor itself provides significant improvements in transient stability margins.

3.3 Advanced Control Strategies


Power system control throughout history depended on exploiting the latest available
devices/technologies to implement distributed and hierarchical controls to address the
various physical phenomena. The drivers for both such distribution and hierarchy
were the financial and technical constraints of the necessary systems and devices for:
Real-time data acquisition for monitoring current system operating
conditions against the applicable limits
Analysis for assessing the viability of current operating conditions with
respect to various potential disturbances
Identifying appropriate optimal or suboptimal control actions
Implementing the identified control actions.

The above four functions have to be performed within the time scales required by the
relevant physical phenomena. Traditional examples include:
Fault clearing should be done within a few cycles to prevent damage to
equipment and potential collateral damage to property and life.
System frequency error should be brought back to zero every few minutes. To
accomplish this, a few generators in each control area are pre-assigned to
control the area interchange so as to realize the required frequency. Control signals
are issued by numerous autonomous control centers every few seconds
(typically 1 or 2 seconds). The cumulative effect of any inadvertent
deficiencies in the performance of such distributed control actions is
monitored as time error and is periodically corrected by a regionally
coordinated control action. (A minimal sketch of the underlying area control
error computation follows this list.)
Thermal and stability problems relevant to the power network are addressed
by monitoring data every few seconds (e.g., SCADA) while performing the
relevant analyses every few minutes (typically every 5 minutes for thermal
problems and every 10 to 30 minutes for stability problems). Most often the
control actions identified by these analyses are implemented through human
intervention.
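For the secondary frequency control described in the second bullet, the quantity driving the pre-assigned generators is the area control error (ACE). The sketch below is a minimal illustration with assumed bias and interchange values, not an operating calculation for any particular control area.

    # Minimal sketch (assumed values, simplified): the area control error (ACE)
    # used by automatic generation control to restore frequency and scheduled
    # interchange. Following the usual convention,
    #     ACE = (NI_actual - NI_scheduled) - 10 * B * (f - f0)
    # with the frequency bias B in MW per 0.1 Hz (negative by convention).
    def area_control_error(tie_flows_mw, scheduled_mw, freq_hz,
                           bias_mw_per_0p1hz=-50.0, nominal_hz=60.0):
        net_interchange = sum(tie_flows_mw)      # exports counted positive
        return (net_interchange - scheduled_mw) \
            - 10.0 * bias_mw_per_0p1hz * (freq_hz - nominal_hz)

    # Example: the area exports 20 MW more than scheduled while frequency is
    # 0.02 Hz low; ACE = 20 - 10*(-50)*(-0.02) = +10 MW, so AGC would lower
    # this area's generation by roughly 10 MW while leaving its
    # frequency-response contribution intact.
    print(round(area_control_error([150.0, 270.0], scheduled_mw=400.0,
                                   freq_hz=59.98), 1))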
The time constraints associated with the physical phenomena and the available speed
of communications and computations, along with the relevant costs determined the
geographical scope and execution cycle times for each of the monitoring, analytical
and control functions. When on-line analysis of the phenomena was not feasible,
hard-wired solutions based on off-line analyses were used. The inherent deficiencies
in such traditional approaches manifest themselves as small disturbances and
occasionally cascade into large-scale disturbances such as the one on August 14, 2003
in the Northeast region of North America. Capabilities of modern monitoring and
control devices as well as faster communications and computing have been
demonstrated in various projects. The relevant technologies have reached a level that
can support the realization of a self-healing power grid with a qualitatively different
behavior.
In essence, the realization of self-healing capabilities requires a high performance IT
infrastructure to respond to actual steady-state as well as transient operating
conditions in real-time and near real-time [14], [15]. Such infrastructure would
facilitate solutions more effective than conventional solutions that are generally based
on off-line analyses. The infrastructure encompasses the entire power grid at all
control levels and includes a large number of functions implemented through
autonomous intelligent agents deployed on a (potentially continental-scale) computing
network. Agents operating at different timescales ranging from milliseconds to an
hour (see Figure 3.15) corresponding to the physical phenomena of the power grid
address all functional areas. The agents are virtually distributed in various locations
and facilitate all functional areas through:
more ubiquitous use of local controls coordinated by global analysis
near-real-time tuning of control parameters
automatic arming and disarming of control actions in near real-time
coordination of the functional areas in multiple dimensions such as:
organizational hierarchy, geographical locations, and multiple timescales.

The technical feasibility of such architecture was established in [14-18] and the
financial feasibility was established in [19-22].

Figure 3.15: Temporal Coordination for a Self-Healing Power Grid.


Such agile systems can provide the necessary (but hitherto considered exotic) control
capabilities wherever and whenever required. Examples of such control capabilities
include:
Wide Area Controls to maintain voltage profiles and preserve system stability
[23-25]
Prediction of sustained oscillations and automatic tuning of relevant control
systems including arming/disarming appropriate discrete controls [26-34]
Detection of sustained oscillations and split-second decisions to take
discrete control actions [35], [36]
Controlled Separation including generation shedding to maintain system
frequency and stability
Intelligent load-shedding to maintain voltage profile and system stability
In the next subsection we will discuss in more detail one such emerging advanced
control philosophy, namely Wide Area Control systems.
3.4 Wide Area Monitoring and Control Systems (WAMC) [37]
Based on Wide Area Monitoring Systems (WAMS) [38], the next logical step is
towards Wide Area Control Systems (WAMC) [39]. On the primary component side,
several components such as generators, FACTS, and switchable compensation devices
provide the flexibility to adapt the power system in volatile and emergency operating
conditions. In order to use their capabilities in the most beneficial way it is necessary to
have automated control schemes for these situations. These control schemes have to
take system aspects into account, which means using dynamic wide-area
information. Automatic control schemes have to be robust against changing system
conditions to avoid any kind of misbehavior. Emergency situations need to be handled
and stabilized. All automatic interactions have to be well defined and transparent to
the operator to avoid unpredictable interactions [39].
The most anticipated WAMC application is wide area damping
control, or a wide area power system stabilizer [40], [41]. Coordinated reactive power
and voltage control is also a much anticipated development [42], [43]. Another
application could be wide area power flow control, which means the coordinated

control of Phase Shifting Transformers or FACTS-devices [39]. The coordination of
Phase Shifting Transformers has been evaluated to be beneficial [44], but a control
scheme based on WAMS with PMU has only been considered in planning studies so
far. A centralized wide area control demands highly reliable communication
architecture. In addition, there are high expectations on the use of WAMS as a base
for system wide protection schemes in the future. Wide Area Protection applications
can be used for general grid protection solutions. They are able to operate in wide
areas whereas special protection schemes (SPS) are specifically linked with defined
events in a given part of the network. The use of WAMS allows the timely
recognition of risky phenomena, their nature, origin and propagation. It is an
extremely difficult but fundamental task to develop effective emergency control
systems to be used in any grid.
The consequence of today's control practice is that set values are predefined for longer time
frames and are changed only on manual request. The set values must be prepared for
all expected system changes and may therefore not be optimal for the present or
expected system condition. This means that the capabilities of faster-acting devices are
not used; at best the system runs permanently sub-optimally, and at worst it
cannot react appropriately in emergency cases. WAMC possesses specific features
with respect to today's hierarchical control concept. Related to time/speed aspects, the
synchronized and frequent sampling of WAMC is close to the response time of the
primary controllers. Related to geographical coverage, WAMC can act either within a
single control area or beyond it, as in the monitoring and control of interconnected
power systems, e.g., interarea oscillations.
The requirements for the behavior of WAMC are that it must be deterministic,
transparent and easy for the operator to understand. Under all conditions such
schemes need to stabilize the system and never weaken it. If a scheme is rule based,
the control rules must be valid under all conditions; otherwise the control should fall back to
conservative conventional schemes. If a certain kind of controller is used within the
wide area control scheme it needs to be coordinated with all other controllers, but for
practical reasons a redesign of all existing controllers needs to be avoided in the case
of a system extension. In general it is necessary to have a wide area control system
which does not contain 'hidden intelligence' that may lead to the system behaving
in an unpredictable way. The operators always need to know what a wide area control
system is doing or will do. This is a key criterion for practical acceptance.
It should be noted that not all wide area control applications require a PMU-based
WAMS. For power flow control, for example, fast flow measurements on the different
parallel lines would be sufficient; time synchronization would not be necessary. In this
case the data-transfer rate and communication requirements would be the same as for
a PMU-based WAMS.
The design of WAMC has to consider the case of unavailability of the communication
link or GPS signal, which means that in any case a local backup for control and
protection actions is needed. Conservative and robust local control settings are
required and need to be considered during the design of such a system. As a main
requirement, we always have to keep in mind that WAMC should do what an operator
would do if he or she had sufficient time to analyze the situation and take the
action. All actions need to be simple and fully transparent.
There has to be a clear distinction between normal (usually continuous or stepwise)
control actions and remedial action schemes (RAS) or SPS, which apply only in
critical conditions [45], [46]. Operating closer to stability boundaries and misusing the
RAS/SPS to stabilize the system if something goes wrong is definitely undesirable.
Any kind of continuous or stepwise WAMC application for normal operation has to
be considered carefully to understand how it reacts in contingency cases. Such
applications have to be well integrated into RAS/SPS action schemes to avoid any
kind of misbehavior.
For practical applications it is expected that WAMS oscillation detection will lead
first to improved damping control schemes for PSSs or FACTS devices. The
WAMS will monitor the damping performance and update the control parameters accordingly.
For damping purposes there will be no wide area closed-loop control, because of the
performance of the communication channels with present and near-future technologies;
the communication delay has to be considered carefully [47], [48]. For voltage or
thermal limitations a closed-loop wide area control is considered possible, since
the detection of instabilities and the control actions take place within several seconds.
The actions are therefore faster than coordination via SCADA/EMS systems [49].
The implementation of Wide Area Protection applications is more complicated,
since they must be robust against all potential situations. An automatic design of
emergency rules is required but is presently mainly at the research stage.
3.5 Load Control and Demand-Side Management
Analytical techniques and operating guidelines relevant to load control and demand-
side management are lagging significantly behind the device level technologies
available for the purpose. Some examples:
Load shedding is a very effective solution for various acute underfrequency or
undervoltage conditions. However, the NERC operating requirements, as well
as most other published literature, are based on various a priori and heuristic
considerations. During the August 14, 2003 Northeast blackout the
power grid was operating with a phase angle spread in excess of 45° for about
3 minutes immediately preceding the final break-up. Had system-wide
phasor measurements, along with appropriate algorithms to take advantage of
them, been in place, system integrity could have been maintained in that event
in spite of the various monitoring and control problems in the local control
centers (a minimal sketch of such angle-spread screening follows this list).
Most existing voltage monitoring systems monitor only bus voltage
magnitudes. However, voltage stability analyses show the importance of
considering internal voltages of the generators, field current limitations, and
saturation. In addition, for voltage stability the relative flatness of the voltage
profiles (including internal voltages) is somewhat more important than the
absolute magnitudes. When reactive outputs of generators are close to their
limits, it is possible to encounter voltage instability even if all voltages are at
1 p.u. and the phase angle across the system is only 60°. There have also been
disturbances aggravated by excessive load shedding during undervoltage
conditions [50]. New analytical tools are necessary to develop improved
operating criteria and to provide an appropriate level of load-shedding
capability.
Demand-side management traditionally has been addressed from the supplier
viewpoint (e.g., modified meters, etc.). With modern technology, it is feasible
to empower the consumer to determine individual loads (e.g., air conditioner
or water heater) to be equipped with digitally controlled switches that could be
monitored by the supplier. For example, the supplier could measure the power

consumption just before shedding that particular load and then reward the
customer a commensurate amount for participating at that moment. An
example of emerging technologies associated with demand-side management
is the Grid FriendlyTM Appliance Technology, described in the next
subsection.
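Returning to the phase-angle observation in the first bullet above, the following sketch illustrates one simple way synchronized phasor measurements could be screened for an excessive and sustained system-wide angle spread. The 45° threshold echoes the observation quoted above, but the threshold, duration and reporting rate used here are illustrative assumptions, not proposed operating criteria.

    # Minimal sketch (illustrative thresholds, not an operating criterion):
    # screening synchronized bus-voltage angle measurements for an excessive and
    # sustained system-wide angle spread.
    def angle_spread(angles_deg):
        """Largest angular separation among buses, treating angles as circular."""
        a = sorted(x % 360.0 for x in angles_deg)
        gaps = [a[(i + 1) % len(a)] - a[i] for i in range(len(a))]
        gaps[-1] += 360.0                      # gap that wraps past 360 degrees
        return 360.0 - max(gaps)               # spread = full circle minus largest gap

    def sustained_alarm(snapshots, threshold_deg=45.0, min_duration_s=60.0, dt_s=0.1):
        """True if the angle spread stays above threshold_deg for min_duration_s.
        snapshots: per-scan lists of bus angles (degrees), at PMU reporting rate dt_s."""
        needed = int(min_duration_s / dt_s)
        run = 0
        for angles in snapshots:
            run = run + 1 if angle_spread(angles) > threshold_deg else 0
            if run >= needed:
                return True
        return False

    # Example with two fabricated streams of 3 s of 10 Hz data:
    calm = [[0.0, 10.0, 22.0]] * 30
    stressed = [[0.0, 25.0, 48.0]] * 30
    print(sustained_alarm(calm, min_duration_s=2.0),
          sustained_alarm(stressed, min_duration_s=2.0))   # False True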
3.6 Grid FriendlyTM Controllers
Grid FriendlyTM controllers are an emerging technology developed at the Pacific
Northwest National Laboratory (PNNL) in Richland, Washington [51]. The Grid
Friendly concept focuses on demand side management (DSM), but it goes beyond
traditional DSM technologies like underfrequency and undervoltage load shedding.
Unlike load shedding which would interrupt power supply to entire feeders, the Grid
Friendly controller focuses on interruptible load. The Grid Friendly technology
enables active load control so electrical loads can be used as an active resource
participating in stabilizing power grids, rather than just be passive components in the
power grids.
Central to the Grid Friendly technology is a small digital controller which can
continually monitor frequency or voltage of the power grid at the local point and
process this information to control electrical loads based on pre-defined control
strategies (Figure 3.16). The Grid Friendly controller can be integrated into appliances
like water heaters and refrigerators, and it is very cost-effective (batch production is
estimated to be $2-5 per controller). When the grid experiences an emergency
condition, the Grid Friendly controller would identify this situation within
milliseconds and adjust the load for a short period of time.

Figure 3.16: Grid Friendly Controller.


3.6.1 Grid Friendly Applicability and Availability
The Grid Friendly technology intends to manage load but not inconvenience
customers. Many of today's home appliances are compatible with the Grid Friendly
technology, including water heaters, air conditioning units, refrigerators, clothes
washers, clothes dryers, and dishwashers.
The Grid Friendly controller would be tailored to each appliance. A Grid Friendly
refrigerator, for example, would have the controller turn off the compressor, which
is the major load component of a refrigerator, for a few minutes while keeping the interior
light operational. A Grid Friendly clothes dryer does not shut down the whole unit,
but only turns off the heating element, letting the dryer drum tumble so clothes do
not get wrinkled. A Grid Friendly air conditioner would turn up the thermostat setting
by a few degrees so the air conditioner would cycle off for minutes without noticeable
discomfort in the room temperature. Similarly, the thermostat setting can be adjusted to a
lower temperature for a Grid Friendly water heater without interrupting the hot water
supply.
Grid frequency events are typically of short duration, and Grid Friendly responses can
occur without significantly inconveniencing the customer. This fact improves the
acceptance of the technology compared to other load shedding approaches.
Figure 3.17 shows that Grid Friendly-compatible appliances represent 18% of total
electric load, which exceeds the 13% required operating reserve. Further studies show
that deploying 49% of the Grid Friendly-compatible load would have been able to compensate the
generation deficit during the energy crisis of July 2001 in California, as shown in
Figure 3.18. Figure 3.18 also shows good Grid Friendly potential in other areas.
(Figure panels: Loads and Reserves on a Typical U.S. Peak Day, showing industrial,
commercial, non-GFA residential and GFA-compatible residential shares of load
together with the operating reserve; and Residential Energy Consumption, 1997, by end
use. Source: EIA, Residential Energy Consumption Survey.)
Figure 3.17: Grid Friendly Compatible Appliances.


(Figure content: sheddable load per one million homes, estimated from AHS 2002 and
ELCAP 1992 load-shape data. Refrigerator: in 98% of homes, about 394 W average per
unit, 385 MW sheddable; air conditioner: 54%, 356 W, 191 MW; hot water heater: 40%,
470 W, 187 MW; space heating: 31%, 1000 W, 314 MW; other sheddables: 62%, 175 W,
109 MW; total roughly 1186 MW per million homes. Scaled to California's approximately
11.5 million homes this gives about 13,627 MW of GFA-compatible load, compared with
the July 2001 California peak demand of 61,125 MW and peak capacity of 54,370 MW.
Estimated GFA potential in other areas: New England 6,292 MW; New York 8,090 MW;
Texas 8,252 MW; Florida 7,026 MW; Northwest Central 20,034 MW; South Atlantic
13,940 MW; Middle Atlantic 6,292 MW.)

Figure 3.18: Grid Friendly potential in California and other areas (note: AHS American Housing
Survey http://www.census.gov/hhes/www/ahs.html; ELCAP End-use Load Consumption Assessment
Program, a BPA/PNNL program on load monitoring and analysis in early 90s.)

3.6.2 Grid Friendly Control Strategies
Collectively, hundreds of thousands of Grid Friendly electrical loads can play an
important role in reducing the risk of cascading blackouts, aiding system recovery
after blackouts, and stabilizing power grid oscillations. With properly designed
strategies, the collective effect of Grid Friendly loads can bring traditional generator-side
control functions to the load side by mimicking spinning reserve, governor droop
control and power system stabilizer modulation control.
Mimicking spinning reserve with Grid Friendly appliances is a quite straightforward
extrapolation from the basic Grid Friendly concept. Grid Friendly appliances can
serve as spinning reserves by reducing their load when generation-load imbalance is
detected via frequency measurement. As shown in Figure 3.17, Grid Friendly
appliances can potentially offset traditional spinning and operating reserve, which
would improve asset utilization and would bring significant cost benefits for utilities.
Grid Friendly droop control can be implemented by setting different frequency
thresholds for different Grid Friendly loads. Governor droop control, illustrated in
Figure 3.19, responds to frequency deviations by adjusting mechanical power input. If
different numbers of Grid Friendly appliances respond at different frequency
thresholds according to a pre-determined statistical distribution, load power can be
adjusted in the same manner, as shown in Figure 3.20, provided the number of Grid
Friendly appliances responding to a given frequency deviation yields the appropriate
MW of load reduction (a minimal sketch of such a threshold-assignment scheme follows
Figure 3.20). Furthermore, the Grid Friendly droop function need not be a straight line,
since loads can be adjusted more flexibly than mechanical governors; it can have a
piece-wise structure or an exponential function with saturation (Figure 3.20). One
advantage of Grid Friendly droop control over governor droop control is that loads
can respond far more quickly through digital control than through governor mechanical
action. Of course, the settings and deployment of Grid Friendly appliances should be
appropriately designed so as to maintain a reasonable amount of Grid Friendly
appliances available at any given time, with a reasonable mix of load types [52].
(Figure: speed regulation curve with 5% droop, frequency in Hz versus generator output
in MW from no load to full load; generation increases when responding to a frequency
decrease and decreases when responding to a frequency increase.)

Figure 3.19: Governor droop control.

(Figure: 5% GFA droop characteristic, frequency in Hz versus the number of responding
GFAs, from all GFAs off to all GFAs on; GFAs turn off when responding to a frequency
decrease and turn on when responding to a frequency increase; piece-wise and
exponential characteristics are shown.)

Figure 3.20: Grid Friendly droop control.
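The following sketch (illustrative population size, threshold range and per-appliance demand; not taken from the PNNL studies) shows how randomly assigned under-frequency turn-off thresholds yield an aggregate droop-like load response of the kind shown in Figure 3.20.

    import random

    # Minimal sketch (assumed numbers): an aggregate droop-like response obtained
    # by assigning each GFA a random under-frequency turn-off threshold.
    # Uniformly distributed thresholds give an approximately linear aggregate
    # characteristic; other distributions give piece-wise or exponential shapes.
    def assign_thresholds(n_devices, f0=60.0, f_min=59.7, seed=1):
        rng = random.Random(seed)
        return [rng.uniform(f_min, f0) for _ in range(n_devices)]

    def shed_load_mw(freq_hz, thresholds, kw_per_device=0.4):
        """Total load (MW) turned off when the measured frequency is freq_hz."""
        n_off = sum(1 for t in thresholds if freq_hz < t)
        return n_off * kw_per_device / 1000.0

    thresholds = assign_thresholds(100_000)      # 100,000 appliances, ~0.4 kW each
    for f in (60.00, 59.95, 59.90, 59.80, 59.70):
        print(f, round(shed_load_mw(f, thresholds), 1))
    # Shed load rises roughly linearly from 0 MW at 60.00 Hz to about 40 MW at 59.70 Hz.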


Grid Friendly load modulation control can use the same control scheme as PSS,
consisting of lead-lag blocks with a gain (Figure 3.21). The input signal can be
frequency deviation measured at the Grid Friendly load location (or local voltage
signal). Time constant TW is a washout filter, TL is a low-pass filter, and T1 and T2 are
lead-lag time constants. TW and TL are adjusted to restrict the bandwidth of the
controller, and T1 and T2 are adjusted to provide the proper phasing. Classical design
techniques can be used to select the controller parameters to provide proper phasing
and gain for damping oscillations of a specific frequency.

Figure 3.21: Grid Friendly load modulation control.
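A minimal discrete-time realization of the Figure 3.21 structure is sketched below. The gain and time constants are placeholders; in practice they would be selected with the classical design techniques mentioned above to provide the proper phase and gain at the oscillation frequency of interest.

    # Minimal discrete-time sketch (placeholder parameters) of the Figure 3.21
    # structure: gain, washout sTW/(1+sTW), low-pass 1/(1+sTL) and one lead-lag
    # stage (1+sT1)/(1+sT2), discretized with simple Euler approximations.
    class GridFriendlyModulator:
        def __init__(self, gain=0.05, tw=10.0, tl=0.05, t1=0.3, t2=0.1, dt=0.02):
            self.k, self.tw, self.tl, self.t1, self.t2, self.dt = gain, tw, tl, t1, t2, dt
            self.x_wash = 0.0     # washout state (low-pass of the input)
            self.y_lp = 0.0       # low-pass filter output
            self.y_ll = 0.0       # lead-lag output
            self.u_prev = 0.0     # previous lead-lag input

        def step(self, delta_f_hz):
            # Washout: remove the steady-state component of the frequency deviation.
            self.x_wash += self.dt / self.tw * (delta_f_hz - self.x_wash)
            washed = delta_f_hz - self.x_wash
            # Low-pass: restrict the controller bandwidth.
            self.y_lp += self.dt / self.tl * (washed - self.y_lp)
            # Lead-lag: y + T2*dy/dt = u + T1*du/dt, backward Euler.
            du = (self.y_lp - self.u_prev) / self.dt
            self.y_ll = (self.y_ll * self.t2 / self.dt + self.y_lp + self.t1 * du) \
                        / (1.0 + self.t2 / self.dt)
            self.u_prev = self.y_lp
            # Per-unit load modulation command (sign convention is an assumption).
            return self.k * self.y_ll

    mod = GridFriendlyModulator()
    for n in range(5):            # a few samples of a 0.1 Hz step deviation
        print(round(mod.step(0.1), 5))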


Initial studies show that Grid Friendly load modulation control is effective in
increasing system transfer capability [52-54]. In [52], loads within the southern
California portion of a detailed model of the western North American power system
are modulated by 450 MW. The modulation damps north-south system oscillations
and enables an additional 400 MW of power transfer on a critical transmission path
into southern California (Figure 3.22). Grid Friendly load modulation is fully
distributed, as the loads are distributed across the system. Previous studies show that
distributed modulation is 30% more efficient than traditional centralized modulation
control [52].

(Figure: bus voltage magnitude in per unit versus time in seconds, comparing no
control, insufficient control, and the minimum control requirement.)

Figure 3.22: Control performance for 400 MW increase in power import into southern California.
3.6.3 Ongoing Efforts in Grid Friendly Technology
Current Grid Friendly efforts focus on transferring the technology into the actual grid
operations environment and expanding the functions of the Grid Friendly controller.
As part of the United States Department of Energy GridWiseTM program [55], [56],
PNNL, teaming with utility and industrial partners, is leading a project to
demonstrate the benefit of Grid Friendly technology in the Pacific Northwest [51].
Clothes dryers and water heaters in homes are fitted with the Grid Friendly controller,
giving them the ability to respond to power grid frequency fluctuations. The controllers'
field performance will be assessed by correlating when each load automatically turns
itself off with known fluctuations in frequency that indicate when the grid is under
stress. Customer acceptance will be assessed through a post-experiment survey
conducted by the dryer manufacturer.
Further development of the Grid Friendly controller hardware features a more
compact design and a more standard load interface, to increase the applicability and
convenience of the controller. In terms of functionality, the new controller is
expanding from frequency responsiveness to include voltage responsiveness and
price responsiveness. For the Grid Friendly controller to respond to local frequency
and voltage measurements, no communication system is required beyond the power
grid itself, as Grid Friendly controllers act autonomously. However, if communication
is available to feed the Grid Friendly controllers with other signals (e.g. a price signal),
the "smarts" are already on board the appliances to do much more sophisticated
negotiation and control. One benefit of price responsiveness is reducing peak loads.
As a result of research into these concepts, PNNL and major appliance manufacturers
have begun to define a simple appliance interface based on successful implementation
of such an interface during the Pacific Northwest GridWiseTM Demonstration [51].
This interface will guide not only load manufacturers but designers of any residential

product in developing a standard way to receive communications and respond to a
load reduction request from an advanced load management system.
3.7 Distributed Generation
In technologically advanced countries, the average duration for transmission related
service interruptions is approximately 10 minutes per year and the average number of
such interruptions is approximately one per year. Thus distributed generation (storage
systems or small generators, wind generators with storage, etc.) that can cover the
local load for about 30 minutes can add significantly to the reliability of the
transmission and distribution systems. For example, a turnkey 30 MW battery storage
device was installed in Alaska to cover an outage for the 15 minutes needed to start up
a local combustion turbine. At present such systems cost about $1M/MW.
Advances in the near future are likely to bring down the cost of distributed generation
technologies.
Assuming that such equipment would be available at a reasonable cost, the most
significant advances needed are in the area of system protection. Traditionally, most
power distribution systems are operated with one source of energy at one end of the
feeder supplying energy to passive devices along the rest of the feeder. With
distributed generation embedded within the distribution system, the protection
systems should be redesigned for multiple energy sources on both sides of a given
point on the feeder.
Most of these generation units are interfaced by power electronic converters, which
can support active and reactive power locally and even provide local black-start
functions, if appropriate regulatory and market conditions permit it. It should be
emphasized that distributed generation (DG) is generation that is connected to the
medium-voltage and low-voltage networks. However, the volatility or limitations of
the energy sources (wind, sun), the link to industrial process for co-generation and the
inability to control frequency and voltage (depending on the type of generation
technology) may often limit the contribution of the associated forms of distributed
generation in very perturbed situations. It must be noted that leading-edge
technologies (such as four-quadrant voltage-source converters), when used to connect
distributed generators to the grid, can offer benefits such as voltage control.
Moreover, frequency control is possible if islanded operation is allowed. The
potential for distributed resources to provide support at lower voltage levels needs to
be recognized in the development of interconnection standards for distributed
generation. There are, of course, significant challenges to realizing the full potential
benefits of distributed generation. First, state-of-the-art technologies need to be applied
to allow control actions that are beneficial to system reliability; and second, regulatory
and market rules must help to facilitate their integration into the bulk power grid.
3.8 Operator Tools
In the context of the self-healing grid, many of the tasks requiring operator
intervention at present would be automated [14]. This requires a comprehensive
overhaul of all existing operator tools to enable them for intelligent interaction with
agents at the lower levels (e.g., substation) and higher levels (e.g., region or
interconnection). In addition, new techniques are required to deal with distributed
analysis at different hierarchical levels and the associated seams issues. Enhanced
techniques are required for visualization of the control actions and animation of the
corresponding real-time feedback from the power system in order to improve operator

interaction with the more complex control schemes. Enhanced intelligent alarming,
including filtering of 'heartbeat' messages from crucial devices, functions, and
systems at all hierarchical levels, is essential for fail-proof performance of the IT
infrastructure and the self-healing power grid.
3.9 Cost Considerations Related to Self-Healing Grids
Another emerging concept is that of the self-healing grid. The concept is essentially
one of deploying intelligent controls system-wide, married with transmission
controllers (e.g., FACTS) and generation controls, to allow the system to
adapt and heal itself in the aftermath of a major disturbance. In [19] and [21] a
comprehensive assessment of the various costs associated with the realization of a
self-healing grid was presented. For the convenience of the reader, we provide a
summary of that assessment in Figure 3.23 to serve as an empirical model. This model
shows the various estimated costs for full scale implementation as a function of the
number of substations.

Figure 3.23: Costs for Full Scale Implementation.

Line A shows the full cost of implementation as the sum of the four cost components:
1) One-time R&D cost for the industry as a whole, 2) The one-time shake-down cost,
3) The cost of software implementation and integration and 4) The cost of hardware.
Line B shows the total cost for the first implementation after the prototype is already
demonstrated, thus eliminating the risk associated with R&D. This excludes the cost
of R&D but includes the three remaining costs. To be exact, lines A and B should be
augmented by segments showing higher costs for the shake-down at the first 5
zones/vicinities, and the first 10 substations. The shake-down cost at the area level is
just a constant because only one control area is included in these costs. However, the
resolution available in the figure is not sufficient to show these details.
Line C shows the costs for subsequent implementations, thus excluding the costs of
R&D and shakedown, but including hardware, software implementation and
integration costs.
Line D shows the costs of later implementations excluding the costs of R&D,
shakedown and hardware (assumed to become negligible with time), but including
only the software implementation and integration costs.

The cost of software R&D is the biggest entry barrier to implementation. The
estimated cost of initial development of the required software modules or intelligent
agents would be on the order of $8M for prototype development, and $27M for
production-grade development. These costs are negligible compared to the expected
benefits estimated in [19], [20] and [22]. Once the first implementation is
accomplished, the entry barrier at the control area level would be on the order of
about $3M, which is comparable to conventional control center budgets.
After the first few implementations, the entry barrier at the substation level would
be about $183k per substation, including both hardware and software. This cost may
go down to as low as $58k as the cost of hardware decreases with time according to
Moore's Law [57].
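The figures quoted above can be combined into a simple empirical model of implementation cost versus the number of substations, in the spirit of lines A through D of Figure 3.23. The way the components are combined below is an illustrative assumption and omits the zone- and substation-level shake-down refinements mentioned earlier; it is not the cost model of [19] or [21].

    # Small illustrative model (assembled from the cost figures quoted above;
    # the way they are combined here is an assumption, not taken from [19]-[22]).
    RND_COST = 27e6                  # one-time production-grade software R&D, ~$27M
    SHAKEDOWN_COST = 3e6             # first implementation at the control-area level, ~$3M
    PER_SUBSTATION_FULL = 183e3      # hardware plus software per substation
    PER_SUBSTATION_SW_ONLY = 58e3    # software only, once hardware cost becomes negligible

    def cost_line(n_substations, include_rnd, include_shakedown, hardware_included):
        per_sub = PER_SUBSTATION_FULL if hardware_included else PER_SUBSTATION_SW_ONLY
        total = n_substations * per_sub
        if include_shakedown:
            total += SHAKEDOWN_COST
        if include_rnd:
            total += RND_COST
        return total

    # Lines A-D of Figure 3.23 for a hypothetical 500-substation control area:
    n = 500
    print("A", cost_line(n, True, True, True))     # full cost including R&D
    print("B", cost_line(n, False, True, True))    # first implementation, R&D excluded
    print("C", cost_line(n, False, False, True))   # subsequent implementations
    print("D", cost_line(n, False, False, False))  # later implementations, software only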
For estimating equipment costs at individual substations, the installed costs for typical
control equipment given in Table 3.2 were used in [19]. For reasons of reliability and
cost, in general shunt devices are preferable over series devices. In the past, series
capacitors were preferred over shunt capacitors for improving transient stability limits
of long lines. At the time, one of the primary barriers against using shunt capacitors
was the lack of a reliable and economical means of providing synchronized fast
control signals to those capacitors separated by hundreds of miles. With a modern
high performance IT infrastructure, this will no longer be an impediment.

Table 3.2: Typical Costs of Control Equipment.
1. Solid-state controls (SVCs, power flow controllers): $35k-$100k per MVA
2. Adding remote feedback to existing PSS/AVR: $20k-$70k
3. Energy storage devices (batteries, fuel cells, etc.): $3.5M-$10M per MW
4. Synchronous condenser: $30k-$35k per MVAr
5. Capacitor banks (shunt): $10k-$17k per MVAr
6. Capacitor banks (series): $15k-$22k per MVAr

3.10 Summary
The focus of this chapter has been twofold: to discuss existing modern technologies
that can effectively enhance system dynamic performance, and thus reduce the
risk of blackouts, and to present some of the newly emerging
technologies. Section 3.2 provided a comprehensive discussion of existing modern
transmission technologies, primarily based on power electronics, and a few examples
were given of actual system performance where these technologies have shown their
benefit.
With regard to emerging technologies, a number of categories were briefly discussed.
These are:
1. Advanced control systems, one key example being Wide Area Controls.
2. Load control and demand-side management, one key example being the Grid FriendlyTM technology being developed at PNNL.
3. The potential benefit of distributed generation.
4. The concept of the self-healing grid.
All these areas have significant potential in helping to improve system dynamic
performance. Much research is being conducted in all these areas, and further research
and development is needed to bring these concepts to fruition and to make them
economically and technically feasible and acceptable.
3.11 References
[1] N. G. Hingorani, High Power Electronics and flexible AC Transmission System, IEEE Power
Engineering Review, Volume 8, Issue 7, July 1988, pp. 3 4.

[2] N. G. Hingorani, Flexible AC transmission, IEEE Spectrum, Volume 30, Issue 4, April 1993,
pp. 40 45.
[3] J. A. Diaz de Leon II, and C. W. Taylor, Understanding and Solving Short-Term Voltage Stability
Problems, Proceedings of IEEE/PES 2000 Summer Meeting, invited paper for panel session on
Power System Stability Controls using Power Electronic Devices.
[4] P. Pourbeik, R J Koessler and B. Ray, Addressing Voltage Stability Related Reliability
Challenges Of San Francisco Bay Area With a Comprehensive Reactive Analysis, Proceedings of
the IEEE PES General Meeting, Toronto, July 2003.
[5] E. John; A. Oskoui and A. Petersson, "Using a STATCOM to Retire Urban Generation,"
Proceedings of the IEEE PES Power Systems Conference and Exposition, New York, NY, October
10-13, 2004.
[6] G. L. Chinn, Modeling Stalled Induction Motors, Proceedings of the IEEE PES T&D, May 21-
24, 2006, pp. 1325 1328.
[7] P. Pourbeik and M. J. Gibbard, Damping and Synchronizing Torques Induced on Generators by
FACTS Stabilizers in Multimachine Power Systems, IEEE Trans. PWRS, November 1996, pp.
1920-1925.
[8] B. Ray, P. Pourbeik, A. Bostrom and M. Van Remoortere, Static VAr Compensator Installation in
the San Francisco Bay Area, Proceedings of the IEEE Power Systems Exposition and Conference,
October 2004, New York, NY.
[9] P. Pourbeik, A. Bostrm and B. Ray, Modeling and Application Studies for a Modern Static VAr
System Installation, IEEE Transactions on Power Delivery, January 2006, pp 368-377.
[10] M. Anderson and R.G. Farmer, Series Compensation of Power Systems, 1996, Encinitas,
California.
[11] L. Angquist, G. Ingestrom and H-A. Jonsson, Dynamical Performance of TCSC Schemes,
CIGRE Session 1996, Paris, France, paper 14-302.
[12] L. Angquist, Synchronous Voltage Reversal Control of Thyristor Controlled Series Capacitor, PhD
Thesis, Royal Institute of Technology, Stockholm, Sweden, 2002.
[13] C. Gama, L. Angquist, G. Ingestrom and M. Noroozian, Commissioning and Operative
Experience of TCSC for Damping Power Oscillation in the Brazilian North-South
Interconnection, CIGRE Session 2000, Paris, France, paper 14-104.
[14] Transmission Fast Simulation and Modeling (T-FSM)Functional Requirements Document,
EPRI, Palo Alto, CA: 2005. 1011666.
[15] Transmission Fast Simulation and Modeling (T-FSM), Architectural Requirements, EPRI, Palo
Alto, CA: 2005. 1011667.
[16] K. Moslehi, A.B.R. Kumar, et.al, Distributed Autonomous Real-Time System for Power System
Operations - A Conceptual Overview, Proceedings of the 2004 IEEE PES Power Systems
Conference & Exposition, October 10-13, 2004, New York, NY, USA.
[17] K. Moslehi, A.B.R. Kumar, et.al, Control Approach for Self-Healing Power Systems: A
Conceptual Overview, Presented at the Electricity Transmission in Deregulated Markets:
Challenges, Opportunities, and Necessary R&D, Carnegie Mellon University, Dec. 15-16, 2004.
[18] K. Moslehi, A.B.R. Kumar, D. Shurtleff, M. Laufenberg, A. Bose and P. Hirsch, Framework for a
Self-Healing Power Grid, presented at IEEE PES General Meeting San Francisco, June 2005.
[19] IntelligridSM Transmission Fast Simulation and Modeling (T-FSM) Business Case Analysis,
EPRI, Palo Alto, CA: 2005. 1012152.
[20] K. Moslehi, A.B.R. Kumar, P. Hirsch, Valuating Infrastructure for a Self-Healing Grid,
Presented at the Electric Power Systems: Monitoring, Sensing, Software and Its Valuation,
Carnegie Mellon University, 11-12 January, 2006.
[21] K. Moslehi, A.B.R. Kumar, P. Hirsch, Feasibility of a Self-Healing Grid Part I Methodology
and Cost Models, Presented at IEEE PES General Meeting Montreal, June 2006.

[22] K. Moslehi, A.B.R. Kumar, P. Hirsch, Feasibility of a Self-Healing Grid Part II Benefit Models
and Analysis, Presented at IEEE PES General Meeting Montreal, June 2006.
[23] C. Martinez, M. Parashar, et.al., Whitepaper on Real-Time Wide-Area Monitoring, Control and
Protection Applications Prepared for EIPP Real Time Task Team, January 26, 2005,
http://phasors.pnl.gov/resources_realtime.html.
[24] C. W. Taylor, D. C. Erickson, K. E. Martin, R. E. Wilson, V.Venkatasubramanian, WACS--
Wide-Area Stability and Voltage Control System: R&D and On-line Demonstration, Proceedings
of the IEEE, Vol.93, Issue 5, pp. 892-906, May 2005.
[25] J. Ballance, B. Bhargava, G.D.Rodriguez, Use of Synchronized Phasor Measurement System for
Enhancing AC-DC Power System Transmission Reliability and Capability, Presented at the
CIGRE Meeting, Paris-2004.
[26] Q. Chen, The Probability, Identification, and Prevention of Rare Events in Power Systems,
www.pserc.org/ecow/get/publicatio/ 2004public/qimingchen_phd_dissertation_on_cascading.pdf
[27] I. Dobson, B. A. Carreras and D. Newman, A Criticality Approach to Monitoring Cascading
Failure Risk and Failure Propagation in Transmission Systems, Presented at the Electricity
Transmission in Deregulated Markets: Challenges, Opportunities, and Necessary R&D, Carnegie
Mellon University, Dec. 15-16, 2004.
[28] D. Newman, B. A. Carreras, et.al., The Impact of Various Upgrade Strategies on the Long-Term
Dynamics and Robustness of the Transmission Grid, Presented at the Electricity Transmission in
Deregulated Markets: Challenges, Opportunities, and Necessary R&D, Carnegie Mellon
University, Dec. 15-16, 2004.
[29] H. Liao, J. Apt, S. Talukdar, Phase Transitions in the Probability of Cascading Failures,
Presented at the Electricity Transmission in Deregulated Markets: Challenges, Opportunities, and
Necessary R&D, Carnegie Mellon University, Dec. 15-16, 2004.
[30] H. Lefebvre, D. Fragnier, J. Y. Boussion, P. Mallet and M. Bulot, Secondary Coordinated Voltage
Control System: Feedback on EdF, Panel Session on Power Plant Secondary (High-Side) Voltage
Control, IEEE/PES Summer Meeting, Seattle, WA, July 2000.
[31] S. Corsi, M. Pozzi, U. Bazzi, M. Mocenigo, and P. Marannino, A Simple Real-Time and On-line
Voltage Stability Index Under Test in Italian Secondary Voltage Regulation, in Proc. CIGRE,
2000, paper 38-115.
[32] J. Van Hecke, N. Janssens, J. Deude, and F. Promel, Coordinated Voltage Control Experience in
Belgium, in Proc. CIGRE, Paris, France, 2000, paper 38-111.
[33] L. Layo, L. Martin, and M. Alvarez, Final Implementation of a Multi-level Strategy for Voltage
and Reactive Control in the Spanish Electrical Power System, in Proc. PCI Conf., Glasgow,
Scotland, U.K., 2000.
[34] G. N. Taranto, N. Martins, D. M. Falcão, A. C. B. Martins and M. G. Santos, Benefits of Supplying
Secondary Voltage Control Schemes to the Brazilian System, IEEE PES Winter Meeting, 2000,
pp.937-942.
[35] C. Rehtanz, J. Bertsch, Wide Area Measurement and Protection System for Emergency Voltage
Stability Control,
http://www.transmission.bpa.gov/orgs/opi/Power_Stability/EmergVoltStabControlRehtanz.pdf
[36] G.G. Karady, A. A. Dauod, M.A. Mohamed,On-line Transient Stability and Voltage Collapse
Prediction Using Multi-Agent Technique,
http://www.pserc.org/ecow/get/generalinf/presentati/psercsemin1/seminars20/karady_mar2002.pdf
[37] CIGRE Technical Brochure on Wide Area Monitoring and Control for Transmission Capability
Enhancement, prepared by CIGRE WG C4.601, final report, January 2006.
[38] D. Karlsson, M. Hemmingsson, S. Lindahl: "Wide Area System Monitoring and Control", IEEE
Power & Energy Magazine, Vol. 2, No. 5, September/October 2004, pp 68-76
[39] X.P. Zhang, C. Rehtanz, B. Pal: "FACTS - Modelling and Control", Chapter 8 " Non-Intrusive
System Control of FACTS ", Springer, Jan. 2007

[40] I. Kamwa, J. Béland, G. Trudel, R. Grondin, C. Lafond, D. McNabb, "Wide-Area Monitoring and
Control at Hydro-Québec: Past, Present and Future", IEEE PES General Meeting, Montréal, June
18-22, 2003
[41] X. Xie, J. Li, J. Xiao, Y. Han: "Inter-Area Damping Control of STATCOM using Wide Area
Measurements", IEEE Int. Conf. on El. Utility Deregulation, Restructuring and Power
Technologies, Hong Kong, April 2004
[42] S. Corsi: "Wide are Voltage Regulation and Protection: When Their Co-ordination is Simple"
Proceedings of Power Tech 05 - St. Petersburg, June 2005
[43] S. Corsi, G. Cappai, I. Valad: "Wide Area Voltage Protection", CIGRE Conference, 2006 Paris
[44] B. Marinescu, J.M. Coulondre: "A Coordinated Phase Shifting Control and Remuneration Method
for a Zonal Congestion Management Scheme", Proc. IEEE Power Systems Conference and
Exposition, New-York (USA), October, 2004
[45] C.W. Taylor, D.C. Erickson, K.E. Martin, R.E. Wilson, and V. Venkatasubramanian, "WACS-
Wide-Area Stability and Voltage Control System: R&D and On-Line Demonstration", Proceedings
of the IEEE, Special Issue on Energy Infrastructure Defense Systems, Vol. 93, No.5, May 2005.
[46] A.Danelli, G.Giannuzzi, M.Pozzi, R.Salvati, M.Salvetti, M.Sforna: "A DSA-integrated Shedding
System for Corrective Emergency Control", Power Systems Computation Conference, PSCC,
August 22-26, 2005, Liege, Belgium
[47] B. Chaudhuri, R. Majumder, B.C. Pal: "Wide Area Measurement Based Stabilizing Control of
Power System Considering Signal Transmission Delay", IEEE Trans. on Power Systems, vol. 19,
no. 4, pp. 1971-1979, Nov. 2004
[48] H. Wu, K.S. Tsakalis, G.T. Heydt: "Evaluation of Time Delay Effects to Wide-Area Power System
Stabilizer Design", IEEE Trans. on Power Systems, vol. 19, no. 4, pp. 1935-41, Nov. 2004
[49] D. Cirio, A. Danelli, M. Pozzi, S. Cecere, G. Giannuzzi, M. Sforna: "Wide Area Monitoring and
Control System: the Italian research and development", CIGRE Session 2006, paper C2-208, Paris,
2006
[50] North American Electric Reliability Council, Disturbance Analysis Working Group (DAWG)
Reports
[51] Grid FriendlyTM Appliances. [Online], Available:
http://gridwise.pnl.gov/technologies/transactive_controls.stm
[52] J. Dagle, D. W. Winiarski, and M. K. Donnelly, End-Use Load Control for Power System
Dynamic Stability Enhancement, Report PNNL-11488, Pacific Northwest National Laboratory,
Feb. 1997.
[53] D. Trudnowski, M. Donnelly and E. Lightner, Power-System Frequency and Stability Control
using Decentralized Intelligent Loads, PES TD 2005/2006, May 21-24, 2006 Page(s):1453
1459.
[54] L. Ning and D. J. Hammerstrom, Design Considerations for Frequency Responsive Grid
FriendlyTM Appliances, PES TD 2005/2006, May 21-24, 2006 Page(s):647 652.
[55] US Department of Energy, Electric Distribution Transformation Program description. [Online].
Available:
http://electricity.doe.gov/program/electric_rd_distribution.cfm?section=program&level2=distributi
on
[56] R. Pratt, Transforming the U.S. Electricity System, presented at the 2004 IEEE Power Systems
Conference and Exposition, New York, New York, October 10-13, 2004.
[57] G. E. Moore, Cramming More Components onto Integrated Circuits, Electronics, Vol. 38,
Number 8, April 19, 1965.
[58] P. Pourbeik, M. P. Bahrman, E. John and W. Wong, Modern Countermeasures to Blackouts,
IEEE Power and Energy Magazine, Vol. 4, No. 5, September/October 2006,pp. 36-45.
