
Event-Based Control and Signal Processing

Embedded Systems
Series Editor
Richard Zurawski
ISA Corporation, San Francisco, California, USA

Event-Based Control and Signal Processing, edited by Marek Miśkowicz

Real-Time Embedded Systems: Open-Source Operating Systems Perspective, Ivan Cibrario Bertolotti and Gabriele Manduchi

Time-Triggered Communication, edited by Roman Obermaisser

Communication Architectures for Systems-on-Chip, edited by José L. Ayala
Event-Based Control and Signal Processing

Edited by Marek Miśkowicz

Boca Raton   London   New York

CRC Press is an imprint of the Taylor & Francis Group, an informa business
MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in
this book. This book’s use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks
of a particular pedagogical approach or particular use of the MATLAB® software.

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2016 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works


Version Date: 20151022

International Standard Book Number-13: 978-1-4822-5656-7 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and
information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission
to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any
future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic,
mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or
retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact
the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides
licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment
has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation
without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
Contents

Contributors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
About the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix

I Event-Based Control 1
1 Event-Based Control: Introduction and Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Jan Lunze

2 Event-Driven Control and Optimization in Hybrid Systems . . . . . . . . . . . . . . . . . . . . . . . . . 21


Christos G. Cassandras

3 Reducing Communication by Event-Triggered Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . 37


Marek Miśkowicz

4 Event-Triggered versus Time-Triggered Real-Time Systems: A Comparative Study . . . . . . . . . . . . 59


Roman Obermaisser and Walter Lang

5 Distributed Event-Based State-Feedback Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79


Jan Lunze and Christian Stöcker

6 Periodic Event-Triggered Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105


W. P. Maurice H. Heemels, Romain Postoyan, M. C. F. (Tijs) Donkers, Andrew R. Teel, Adolfo Anta,
Paulo Tabuada, and Dragan Nešić

7 Decentralized Event-Triggered Controller Implementations . . . . . . . . . . . . . . . . . . . . . . . . . 121


Manuel Mazo Jr. and Anqi Fu

8 Event-Based Generalized Predictive Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151


Andrzej Pawlowski, José Luis Guzmán, Manuel Berenguel, and Sebastián Dormido

9 Model-Based Event-Triggered Control of Networked Systems . . . . . . . . . . . . . . . . . . . . . . . . 177


Eloy Garcia, Michael J. McCourt, and Panos J. Antsaklis

10 Self-Triggered and Team-Triggered Control of Networked Cyber-Physical Systems . . . . . . . . . . . 203


Cameron Nowzari and Jorge Cortés

11 Efficiently Attentive Event-Triggered Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221


Lichun Li and Michael D. Lemmon

12 Event-Based PID Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235


Burkhard Hensel, Joern Ploennigs, Volodymyr Vasyutynskyy, and Klaus Kabitzsch


13 Time-Periodic State Estimation with Event-Based Measurement Updates . . . . . . . . . . . . . . . . . . 261


Joris Sijs, Benjamin Noack, Mircea Lazar, and Uwe D. Hanebeck

14 Intermittent Control in Man and Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281


Peter Gawthrop, Henrik Gollee, and Ian Loram

II Event-Based Signal Processing 351


15 Event-Based Data Acquisition and Digital Signal Processing in Continuous Time . . . . . . . . . . . . 353
Yannis Tsividis, Maria Kurchuk, Pablo Martinez-Nuevo, Steven M. Nowick, Sharvil Patil, Bob Schell,
and Christos Vezyrtzis

16 Event-Based Data Acquisition and Reconstruction—Mathematical Background . . . . . . . . . . . . . . 379


Nguyen T. Thao

17 Spectral Analysis of Continuous-Time ADC and DSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409


Yu Chen, Maria Kurchuk, Nguyen T. Thao, and Yannis Tsividis

18 Concepts for Hardware-Efficient Implementation of Continuous-Time Digital Signal Processing . . . 421


Dieter Brückmann, Karsten Konrad, and Thomas Werthwein

19 Asynchronous Processing of Nonstationary Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441


Azime Can-Cimino and Luis F. Chaparro

20 Event-Based Statistical Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457


Yasin Yılmaz, George V. Moustakides, Xiaodong Wang, and Alfred O. Hero

21 Spike Event Coding Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487


Luiz Carlos Paiva Gouveia, Thomas Jacob Koickal, and Alister Hamilton

22 Digital Filtering with Nonuniformly Sampled Data: From the Algorithm to the Implementation . . . . 515
Laurent Fesquet and Brigitte Bidégaray-Fesquet

23 Reconstruction of Varying Bandwidth Signals from Event-Triggered Samples . . . . . . . . . . . . . . . 529


Dominik Rzepka, Mirosław Pawlak, Dariusz Kościelnik, and Marek Miśkowicz
Contributors

Adolfo Anta
General Electric
Munich, Germany

Panos J. Antsaklis
University of Notre Dame
Notre Dame, IN, USA

Manuel Berenguel
University of Almería
ceiA3-CIESOL, Almería, Spain

Brigitte Bidégaray-Fesquet
University Grenoble Alpes, CNRS, LJK
Grenoble, France

Dieter Brückmann
University of Wuppertal
Wuppertal, Germany

Azime Can-Cimino
University of Pittsburgh
Pittsburgh, PA, USA

Christos G. Cassandras
Boston University
Boston, MA, USA

Luis F. Chaparro
University of Pittsburgh
Pittsburgh, PA, USA

Yu Chen
Columbia University
New York, NY, USA

Jorge Cortés
University of California
San Diego, CA, USA

M.C.F. (Tijs) Donkers
Eindhoven University of Technology
Eindhoven, Netherlands

Sebastián Dormido
UNED
Madrid, Spain

Laurent Fesquet
University Grenoble Alpes, CNRS, TIMA
Grenoble, France

Anqi Fu
Delft University of Technology
Delft, Netherlands

Eloy Garcia
Infoscitex Corp.
Dayton, OH, USA

Peter Gawthrop
University of Melbourne
Melbourne, Victoria, Australia

Henrik Gollee
University of Glasgow
Glasgow, UK

Luiz Carlos Paiva Gouveia
University of Glasgow
Glasgow, UK

José Luis Guzmán
University of Almería
ceiA3-CIESOL, Almería, Spain

Alister Hamilton
University of Edinburgh
Edinburgh, UK

Uwe D. Hanebeck
Karlsruhe Institute of Technology
Karlsruhe, Germany

W.P. Maurice H. Heemels
Eindhoven University of Technology
Eindhoven, Netherlands

Burkhard Hensel
Dresden University of Technology
Dresden, Germany

Alfred O. Hero
University of Michigan
Ann Arbor, MI, USA

Klaus Kabitzsch
Dresden University of Technology
Dresden, Germany

Thomas Jacob Koickal
Beach Theory
Sasthamangalam, Trivandrum, India

Karsten Konrad
University of Wuppertal
Wuppertal, Germany

Dariusz Kościelnik
AGH University of Science and Technology
Kraków, Poland

Maria Kurchuk
Pragma Securities
New York, NY, USA

Walter Lang
University of Siegen
Siegen, Germany

Mircea Lazar
Eindhoven University of Technology
Eindhoven, Netherlands

Michael D. Lemmon
University of Notre Dame
Notre Dame, IN, USA

Lichun Li
Georgia Institute of Technology
Atlanta, GA, USA

Ian Loram
Manchester Metropolitan University
Manchester, UK

Jan Lunze
Ruhr-University Bochum
Bochum, Germany

Pablo Martinez-Nuevo
Massachusetts Institute of Technology
Cambridge, MA, USA

Manuel Mazo Jr.
Delft University of Technology
Delft, Netherlands

Michael J. McCourt
University of Florida
Shalimar, FL, USA

Marek Miśkowicz
AGH University of Science and Technology
Kraków, Poland

George V. Moustakides
University of Patras
Rio, Greece

Dragan Nešić
University of Melbourne
Parkville, Australia

Benjamin Noack
Karlsruhe Institute of Technology
Karlsruhe, Germany

Steven M. Nowick
Columbia University
New York, NY, USA

Cameron Nowzari
University of Pennsylvania
Philadelphia, PA, USA

Roman Obermaisser
University of Siegen
Siegen, Germany

Sharvil Patil
Columbia University
New York, NY, USA

Mirosław Pawlak
University of Manitoba
Winnipeg, MB, Canada

Andrzej Pawlowski
UNED
Madrid, Spain

Joern Ploennigs
IBM Research–Ireland
Dublin, Ireland

Romain Postoyan
University of Lorraine
CRAN UMR 7039, CNRS
Nancy, France

Dominik Rzepka
AGH University of Science and Technology
Kraków, Poland

Bob Schell
Analog Devices Inc.
Somerset, NJ, USA

Joris Sijs
TNO Technical Sciences
The Hague, Netherlands

Christian Stöcker
Ruhr-University Bochum
Bochum, Germany

Paulo Tabuada
University of California
Los Angeles, CA, USA

Andrew R. Teel
University of California
Santa Barbara, CA, USA

Nguyen T. Thao
City College of New York
New York, NY, USA

Yannis Tsividis
Columbia University
New York, NY, USA

Volodymyr Vasyutynskyy
SAP SE
Research & Innovation
Dresden, Germany

Christos Vezyrtzis
IBM
Yorktown Heights, NY, USA

Xiaodong Wang
Columbia University
New York, NY, USA

Thomas Werthwein
University of Wuppertal
Wuppertal, Germany

Yasin Yilmaz
University of Michigan
Ann Arbor, MI, USA
Acknowledgments

Preparation of the book is an extensive task requiring complex coordination and assistance from the authors and the publisher. There are many people I have to acknowledge and thank for their contribution to this work at various stages of its evolution.

First of all, I wish to thank Karl Johan Åström, the pioneer of modern event-based control, for his insightful Foreword to the book. Richard Zurawski, Editor of the CRC Press Embedded Systems Series, shared with me his experience of the process of publishing edited books, for which I am most obliged. Furthermore, I would like to thank Jan Lunze and Yannis Tsividis for their extended involvement in the book preparation, their constructive advice, and especially for the contribution of the survey chapters beginning Part I and Part II of the book.

I am very grateful to the authors who took the lead in the preparation of chapters: Panos J. Antsaklis, Dieter Brückmann, Christos G. Cassandras, Luis F. Chaparro, Yu Chen, Jorge Cortés, Sebastián Dormido, Laurent Fesquet, Peter Gawthrop, Alister Hamilton, Maurice Heemels, Klaus Kabitzsch, Michael D. Lemmon, Manuel Mazo, Roman Obermaisser, Joris Sijs, Nguyen T. Thao, and Yasin Yilmaz.

I am indebted to Nora Konopka, Publisher with CRC Press/Taylor & Francis, for giving me the opportunity to prepare this book. Hayley Ruggieri, Michele Smith, Jennifer Stair, and John Gandour of CRC Press/Taylor & Francis offered great assistance throughout the whole process. I also thank Viswanath Prasanna at Datapage for his patience and constructive cooperation.

Finally, I express my deep gratitude to all the authors who contributed to the book. I believe the book will stimulate research in the areas covered, as well as fruitful interaction between the event-based control and signal processing communities.

Marek Miśkowicz
Foreword

Control and signal processing are performed very differently in man and machine. In biological systems it is done by interconnecting neurons that communicate by pulses. The systems are analog bio-chemical reactions, and execution is parallel. Emulation of neurons in silicon was pioneered by Carver Mead. Some of his designs have been shown to give very efficient processing of images, particularly when the sensing information generates pulses. Engineering applications of control and signal processing were originally implemented by mechanical or electrical analog computing, which was also analog and asynchronous. The signals were typically continuous.

The advent of the digital computer was a major paradigm shift: the analog signals were sampled and converted to digital form, control and signal processing were executed by numerical algorithms, and the results were converted to analog form and applied to the physical devices by actuators. A very powerful theory was developed for analysis and design by introducing proper abstractions. Computer-based systems for control and signal processing were traditionally based on a central periodic clock, which controlled sampling, computing, and actuation. For physical processes modeled as linear time-invariant systems, the approach leads to closed-loop systems that are linear but periodic. The periodic nature can be avoided by considering the behaviour at times that are synchronised with the sampling, resulting in the stroboscopic model. The system can then be described by difference equations with constant coefficients. Traditional sampled-data theory based on the stroboscopic model is powerful and simple. It has been used extensively in practical applications of control and signal processing. Periodic sampling also matches the time-triggered model of real-time software, which is simple and easy to implement safely. The combination of computers and sampled-data theory is a very powerful tool for the design and implementation of engineered systems. Today it is the standard way of implementing control and signal processing.

In spite of the tremendous success of sampled-data theory, it has some disadvantages. There are severe difficulties when dealing with systems having multiple sampling rates and systems with distributed computing. With multi-rate sampling, the complexity of the system depends critically on the ratio of the sampling rates. For distributed systems the theory requires that the clocks are synchronised. Sampling jitter, lost samples, and delays create significant difficulties in networked, distributed, computer-controlled systems.

Event-based sampling is an alternative to periodic sampling. Signals are then sampled only when significant events occur, for example, when a measured signal exceeds a limit or when an encoder signal changes. Event-based control has many conceptual advantages. Control is not executed unless it is required (control by exception). Event-based control is also useful in situations when control actions are expensive; changes of production rates can be very costly when controlling manufacturing complexes. Acquiring information in computer systems and networks can be costly because it consumes computer resources.

Even if periodic sampling and sampled-data theory will remain a standard method to analyse and design systems for control and signal processing, there are many problems where event-based control is natural. Event-based control may also bridge the gap between biological and engineered systems. Recently there has been an increased interest in event-based control, and papers have appeared in many different fields. The theory currently available is far from what is needed for analysis and design; an equivalent of the theory of sampled-data systems is missing. The results are scattered among different publications. Being interested in event-based control, I am therefore very pleased that this book, which covers the views of different authors, is published. I hope that it will stimulate further research.

Karl Johan Åström
Lund University, Sweden
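The "sample only on a significant change" rule described above, later called send-on-delta or Lebesgue sampling in this book, can be made concrete with a small sketch. The code below is illustrative only (function name, threshold, and data are our own assumptions, not from the book): a sample is reported only when the signal has moved by more than `delta` since the last reported value.

```python
def send_on_delta(samples, delta):
    """Return (index, value) events: a new sample is reported only when the
    signal has moved by more than `delta` since the last reported value."""
    if not samples:
        return []
    events = [(0, samples[0])]          # always report the initial value
    last = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - last) > delta:       # significant change -> event
            events.append((i, x))
            last = x                    # reference is the last *reported* value
    return events

# A slowly varying signal generates no traffic; a step generates one event.
signal = [0.0, 0.05, 0.1, 0.12, 1.0, 1.02, 1.05]
print(send_on_delta(signal, delta=0.5))  # -> [(0, 0.0), (4, 1.0)]
```

Note that the comparison is against the last *transmitted* value, not the previous raw sample; that is what distinguishes send-on-delta from a simple derivative test.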
MATLAB

MATLAB® is a registered trademark of The MathWorks, Inc. For product information, please contact:
The MathWorks, Inc.
3 Apple Hill Drive
Natick, MA 01760-2098 USA
Tel: 508 647 7000
Fax: 508-647-7001
E-mail: info@mathworks.com
Web: www.mathworks.com
Introduction

Control and signal processing belong to the most prominent areas in science and engineering and have a great impact on the development of modern technology. To benefit from the continuing progress of computing in the past century, most control and signal processing systems have been designed in the discrete-time domain using digital electronics. Because of the existence of mature theory, well-established design methods, and easier implementation, classical digital control, signal processing, and electronic instrumentation are based on periodic sampling. Indeed, the analysis and design of control systems sampled equidistantly in time is relatively simple, since the closed-loop system can then be described by difference equations with constant coefficients for linear time-invariant processes. As time is a critical issue in real-time systems, the adoption of the time-driven model for sampling, estimation, control, and optimization harmonizes well with the development of time-triggered real-time software architectures. On the other hand, the recovery of signals sampled uniformly in time requires just low-pass filtering, which promotes practical implementations of classical signal processing systems.

Although synchronous architectures of circuits and systems have monopolized electrical engineering since the 1960s, they are definitely suboptimal in terms of resource utilization. Sampling the signal and transmitting the samples over the communication channel, as well as occupying the central processing unit to perform computations when the signal does not change significantly, are an evident waste of resources. Nowadays, the efficient utilization of communication, computing, and energy capabilities has become a critical issue in various application domains because many systems have become increasingly networked, wireless, and spatially distributed. The demand for the economic use of computational and communication resources becomes indispensable as the digital revolution drives the development and deployment of pervasive sensing systems producing huge amounts of data whose acquisition, storage, and transmission are a tremendous challenge. On the other hand, the postulate of providing ubiquitous wireless connectivity between smart devices with scarce energy resources poses a requirement to manage the constrained energy budget efficiently.

The response to these challenges calls for the reformulation of answers to fundamental questions concerning discrete-time systems: "when to sample," "when to update the control action," or "when to transmit information."

One of the generic answers to these questions is offered by the adoption of the event-based paradigm, a model in which resources are called upon only when it is really necessary for system operation. In event-based systems, any activity is triggered as a response to events, usually defined as a significant change of the state of relevance. Most "man-made" products, such as computer networks, complex software architectures, and production processes, are natural discrete event systems, as all activity in these systems is driven by asynchronous occurrences of events.

The event-based paradigm was at the origin of fundamental concepts of classical computer networking. In particular, the random access protocols for shared communication channels (ALOHA and CSMA) and the most successful commercial technology in local area networks (Ethernet) have been inherently based on the event-triggered model. The same applies to the CSMA/CA scheme adopted in the IEEE 802.11 standard for wireless local area network technology. Many fieldbus technologies for industrial communication are event-based. A representative example of a commercial platform designed consistently on the basis of an event-based architecture is LonWorks (Local Operating Networks, ISO/IEC 14908), where the event-driven model is involved in task scheduling, the strategy of data reporting, application programming, explicit flow control, and protocol stack design. Another well-known event-driven communication architecture is the controller area network, which is based on a message-oriented protocol of CSMA type. Perhaps the most spectacular example of a widespread modern event-based system is the Internet, the operation of which, as a global network, is dictated by events such as error detection, message transmission and reception, and segmentation and reassembly. The rigorous theory for the study of discrete event dynamic systems was developed in the 1980s [1] (see Chapter 2).

The notion of "event" is also an efficient abstraction for computer programming and data processing. Most graphical user interface (GUI) applications are designed to

be event-driven and reactive to certain kinds of events (e.g., mouse button clicks) while ignoring others (e.g., mouse movement). Many current trends in computing are event-based, for example, event stream processing, complex event processing, smart metering, or the Internet of Things. Finally, the event-based paradigm is one of the underlying concepts adapted to various aspects of human activity (e.g., event-driven business process management or event-driven marketing), as "events define our world."

The past decade has seen increasing scientific interest in the event-based paradigm applied to continuous-time systems in a wide spectrum of engineering disciplines, including control, communication, signal processing, and electronic instrumentation. The significant research effort was initiated by the publication of a few important research works independently in the context of control [2–4] and signal processing [5–9] at the turn of the century. These works released great research creativity and gave rise to a systematic development of new event-triggered approaches for continuous-time systems.

The concepts involving the event-based paradigm in control, communication, and signal processing are in general not new. Event-based systems with continuous-time signals exist in nature. In particular, the human hand control system is event driven, because hand movements are synchronized to input signals rather than to an internal clock (see also Chapter 14). Event-based control is the dominating principle in biological neurons, which process information using energy-efficient, highly parallel, event-driven architectures, as opposed to the clock-driven serial processing adopted in computers. The observation that the brain operates on event-based analog principles of neural computation that are fundamentally different from digital techniques in traditional computing was the motivation for the development of neuromorphic engineering. Event-based neuromorphic technologies inspired by the architecture of the brain (see [10]) were pioneered by the group of Mead at Caltech, which proposed to integrate biologically inspired electronic sensors with analog circuits, as well as to introduce an address-event-based asynchronous, continuous-time communication protocol (see also Chapter 21).

Early event-based systems (termed also aperiodic or asynchronous) have been designed in feedback control, data transmission, and signal processing at least since the 1950s. The first idea to adopt a significant change of the signal as a criterion for triggering the sampling of continuous-time signals, termed later Lebesgue sampling [3], is attributed to Ellis [11] and appeared in the context of control, although it does not refer explicitly to the concept of "event." In signal processing, the independent idea equivalent to Lebesgue sampling provided the ground for the invention of asynchronous delta modulation in the 1960s [12]. In a parallel line of research, the framework for the class of adaptive (time-triggered) sampling designed under event-based criteria evolved during the 1960s and 1970s (see Chapter 3). In particular, the constant absolute-difference criterion introduced in [13] is the adaptive (time-triggered) sampling equivalent of (event-triggered) Lebesgue sampling. During the past decade, adaptive sampling has been applied in the self-triggered control strategy to cope with event unexpectedness and to avoid event detection by the use of prediction (see Chapter 10). Nowadays, Lebesgue sampling, known in industrial communication also as the send-on-delta scheme [14], is used in many industrial applications, including building automation, street lighting, and smart metering, and has been extended, among others, to the spatiotemporal sampling adopted in sensor networks. The notion of the "event" in the technical context comes neither from control nor from signal processing but originated in computing in the 1970s [15].

In signal processing, an event-based scheme similar to Lebesgue sampling was rediscovered by Mark and Todd at the beginning of the 1980s and introduced under the name level-crossing sampling [16]. The inspiration to adopt level-crossings as a sampling criterion was derived not from control theory but from studies on the level-crossing problem in stochastic processes (see Chapter 3). The level-crossing rate is a function of the signal bandwidth, which for bandlimited Gaussian processes was proven by Rice in his monumental study published in the 1940s [17]. Hence, the level-crossing rate provides an estimate of local bandwidth for time-varying signals (see Chapter 23). In parallel, at least since the 1990s, event-based sampling has been used to solve problems in statistical signal processing (see Chapter 20). The adoption of the level-triggered sampling concept in hypothesis testing for distributed systems via the Sequential Probability Ratio Test (SPRT) first appeared in [18], although without any reference to event-based sampling. The potential of the level-crossing scheme for signal processing applications has been significantly extended by an observation made by Tsividis that level-crossing sampling in continuous time results in no aliasing [6,7]. This observation was the ground for the formulation of a new concept of continuous-time digital signal processing (CT-DSP), based essentially on amplitude quantization and dealing with binary signals in continuous time, which is complementary to conventional clocked DSP (see Chapters 15, 17, and 18).

In recent research studies, the concept of "event" evolves to address a wide range of design objectives and application requirements. The answer to the question "when to sample" depends on the technical context. In particular, the development of new event-based sampling criteria for state estimation focused on reducing state uncertainty (e.g., variance-based triggering or Kullback–Leibler divergence) (see Chapter 13) gives rise to an extension of the traditional concept of event originated in computing: an event now refers not only to a significant change of the state, but may also be defined by a significant change of the state uncertainty.

The primary reason for the increasing interest in event-based design of circuits and systems is its superiority in resource-constrained applications. The main benefit provided by the adoption of event-based strategies in networked sensor and control systems consists in decreasing the rate of messages transferred through the shared communication channel (see Chapter 3), which for wireless transmission allows power consumption to be minimized.

On the other hand, the reduction of communication traffic is the most effective method to minimize the network-induced imperfections on the control performance (i.e., packet delay, delay jitter, and packet losses). In this way, the adoption of the event-triggered strategy meets one of the fundamental criteria of networked control system design. This criterion aims to achieve performance close to that of the conventional point-to-point digital control system while still retaining the well-known advantages provided by networking (e.g., high flexibility, efficient reconfigurability, and reduction of installation and maintenance costs).

Another strong motivation behind the development of the event-based methodology is that networked control systems with nonzero transmission delays and packet losses are event-based. The control loops established over networks with varying transmission delays and packet losses can no longer be modeled by conventional discrete-time control theory, even if data are processed and transmitted with a fixed sampling rate. Hence, the strategies for event-based control are expected to be similar to strategies for networked control.

Finally, the applicability of event-based control is not restricted only to economizing the use of digital networks but relates also to a class of control problems in which the independent variable is different from time. For example, it is more efficient to sample the crankshaft angle position in the value domain instead of in time (see Chapter 1).

In parallel with the development of event-triggered communication and control strategies, the event-based paradigm has been adopted during the past decade in signal processing and electronic circuit design as an alternative to the conventional approach based on periodic sampling. The motivation behind event-based signal processing is that modern nanoscale VLSI technology promotes fast circuit operation but requires a low supply voltage (even below 1 V). The latter makes fine quantization of the amplitude increasingly difficult, as the voltage increment corresponding to the least significant bit may become close to the noise level. In the event-based signal processing mode, periodic sampling is substituted by discretization in amplitude (e.g., level crossings), whereas the quantization process (if any) is moved from the amplitude to the time domain. This allows the removal of the global clock from the system architecture and ensures benefits from the growing time resolution provided by the downscaling of the feature size of VLSI technology. The technique of encoding signals in time instead of in amplitude is expected to be further improved with advances in chip fabrication technology. Finally, event-based signal processing systems are in general characterized by activity-dependent power consumption and energy savings at the circuit level.

In response to the challenges of nanoscale VLSI design, the development of event-based signal processing techniques has resulted in the introduction of a new class of analog-to-digital converters and digital filters that use event-triggered sampling (see Chapter 22), as well as the novel concept of CT-DSP. According to Tsividis' classification, CT-DSP may be regarded as a "missing category" among the well-established signal processing modes: continuous-time continuous-amplitude (analog processing), discrete-time discrete-amplitude (digital processing), and discrete-time continuous-amplitude processing techniques (see Chapters 15, 17, and 18).

Although major concepts and ideas in both technical areas are similar, so far event-based control and event-based signal processing have evolved independently, being explored by separate research communities in parallel directions.

But control and signal processing are closely related. Many controllers can be viewed as a special kind of signal processor that converts an input and a feedback into a control signal. The history of engineering provides notable examples of successful interactions between communities working in the control and signal processing domains. In particular, the term "feedback," which is an ancient idea and a central concept of control, was used for the first time in the context of signal processing to express boosting the gain of an electronic amplifier [19]. On the other hand, the Kalman filter, which is now a critical component of many signal processing systems, has

been developed and enhanced by engineers involved in time quantization. In Proceedings of IEEE Interna-
the design of control systems [17]. One of main objec- tional Symposium on Asynchronous Circuits and Sys-
tives of this book is to provide interactions between tems ASYNC 2003, 2003, pp. 196–205.
both research areas because concepts and ideas migrate
between the fields. [9] F. Aeschlimann, E. Allier, L. Fesquet, M. Renaudin,
Marek Miśkowicz Asynchronous FIR filters: Towards a new digital
processing chain. In Proceedings of International Sym-
posium on Asynchronous Circuits and Systems ASYNC
2004, 2004, pp. 198–206.

[10] S. C. Liu, Event-Based Neuromorphic Systems. Wiley,


Bibliography 2015.
[1] C. G. Cassandras, S. Lafortune, Introduction to Dis- [11] P. H. Ellis, Extension of phase plane analysis to
crete Event Systems. Springer, New York, 2008. quantized systems. IRE Trans. Autom. Control,
vol. 4, pp. 43–59, 1959.
[2] K. E. Årzén, A simple event-based PID controller.
In Proceedings of IFAC World Congress, 1999, vol. 18, [12] H. Inose, T. Aoki, K. Watanabe, Asynchronous
pp. 423–428. delta-modulation system. Electron. Lett., vol. 2,
no. 3, pp. 95–96, 1966.
[3] K. J. Åstrom, B. Bernhardsson, Comparison of
periodic and event based sampling for firstorder [13] J. Mitchell, W. McDaniel, Adaptive sampling tech-
stochastic systems. In Proceedings of IFAC World nique. IEEE Trans. Autom. Control, vol. 14, no. 2,
Congress, 1999, pp. 301–306. pp. 200–201, 1969.
[4] W. P. M. H. Heemels, R. J. A. Gorter, A. van Zijl, [14] M. Miśkowicz, Send-on-delta concept: An event-
P. P. J. van den Bosch, S. Weiland, W. H. A. Hendrix, based data reporting strategy Sensors, vol. 6,
et al., Asynchronous measurement and control: pp. 49–63, 2006.
A case study on motor synchronization. Control
Eng. Pract., vol. 7, pp. 1467–1482, 1999. [15] R. E. Nance, The time and state relationships in sim-
ulation modeling. Commun. ACM, vol. 24, no. 4,
[5] N. Sayiner, H. V. Sorensen, T. R. Viswanathan, pp. 173–179, 1981.
A level-crossing sampling scheme for A/D conver-
sion. IEEE Trans. Circuits Syst. II: Analog Digital [16] J. W. Mark, T. D. Todd, A nonuniform sampling
Signal Process, vol. 43, no. 4, pp. 335–339, 1996. approach to data compression. IEEE Trans. Com-
mun., vol. 29, no. 1, pp. 24–32, 1981.
[6] Y. Tsividis, Continuous-time digital signal process-
ing. Electron. Lett., vol. 39, no. 21, pp. 1551–1552, [17] S. O. Rice, Mathematical analysis of random noise.
2003. Bell System Tech. J., vol. 24, no. 1, pp. 46–156, 1945.

[7] Y. Tsividis, Digital signal processing in continuous [18] A. M. Hussain, Multisensor distributed sequential
time: A possibility for avoiding aliasing and reduc- detection. IEEE Trans. Aerosp. Electron. Syst., vol. 30,
ing quantization error. In Proceedings of IEEE Inter- no. 3, pp. 698–708, 1994.
national Conference on Acoustics, Speech, and Signal [19] W. S. Levine, Signal processing for control. In Hand-
Processing ICASSP 2004, 2004, pp. 589–592. book of Signal Processing Systems, S. S. Bhattacharyya,
[8] E. Allier, G. Sicard, L. Fesquet, M. Renaudin, A new E. F. Deprettere, R. Leupers, J. Takala, Eds. Springer,
class of asynchronous A/D converters based on New York, 2010.
About the Book
The objective of this book is to provide an up-to-date picture of the developments in event-based control and signal processing, especially in the perspective of networked sensor and control systems.

Despite significant research efforts, event-based control and signal processing systems are far from being mature and systematically developed technical domains. Instead, they are the subject of scientific contributions at the cutting edge of modern science and engineering. The book does not pretend to be an exhaustive presentation of the state of the art in both areas. It is rather aimed to be a collection of studies that reflect major research directions. Due to the broad range of topics, the mathematical notation is not uniform throughout the book, but is rather adapted by the authors to capture the content of particular chapters in the clearest way possible.

The Introduction to the book outlines the broad range of technical fields that adopt events as an underlying concept of system design and provides a historical background of scientific and engineering developments of event-based systems. The present section includes short presentations of the book contributions.

The book contains 23 chapters divided into two parts: Event-Based Control and Event-Based Signal Processing. Over 60 authors representing leading research teams from all over the world have contributed to this work. Each part of the book begins with a survey chapter that covers the topic discussed: Chapter 1 for "Event-Based Control" and Chapter 15 for "Event-Based Signal Processing."

Most chapters included in Part I, "Event-Based Control," describe the research problems in the perspective of networked control systems and address research topics on event-driven control and optimization of hybrid systems, decentralized control, event-based proportional-integral-derivative (PID) regulators, periodic event-triggered control (PETC), event-based state estimation, self-triggered and team-triggered control, event-triggered and time-triggered real-time computing and communication architectures for embedded systems, intermittent control, model-based event-triggered control, and event-based generalized predictive control.

Chapter 1 is a tutorial survey on the main methods of analysis and design of event-based control and includes links to the other chapters in the book. It has been emphasized in particular that event-based control may be viewed as a combination of feedforward and feedback control in the sense that the feedback loop is closed only at event times while remaining open between consecutive events. Although this is the case with all discrete-time control systems, the average time interval between event-triggered control updates is in general much longer than the sampling period in the classical approach to sampled-data control. Therefore, in event-triggered control, modeling the plant behavior only at the sampling instants, as in classical sampled-data control theory, is insufficient and requires integration of the continuous-time model over the time intervals between events.

Chapter 2 focuses on the issue of how to apply the event-driven paradigm to control and optimization of hybrid systems, including both time-driven and event-driven components. The general-purpose control and optimization problem is carried out by evaluation (or estimation in a stochastic environment) of gradient information related to a given performance metric using infinitesimal perturbation analysis (IPA). The adoption of the IPA calculus to control and optimize hybrid systems results in a set of simple and computationally efficient, event-driven iterative equations and is advantageous because the size of the event space of the system model is much smaller than that of the state space.

Chapter 3 provides a survey on event-triggered sampling schemes applied both in control and signal processing systems, including threshold-based sampling, Lyapunov sampling, reference-crossing sampling, cost-aware sampling, quality-of-service-based sampling, and adaptive sampling. Furthermore, the chapter contains the derivation of analytical formulae for the mean sampling rate of threshold-based sampling schemes (send-on-delta, send-on-area, and send-on-energy). It is shown that the total variation of the sampled signal is an essential measure to approximate the number of samples produced by the send-on-delta (Lebesgue sampling) scheme. Furthermore, Chapter 3 addresses the problem of evaluating the reduction of event-based sampling rate compared with periodic sampling. As proved, the lower bound for the send-on-delta reduction of communication (guaranteed reduction) is determined
by the ratio of the peak-to-mean slope of the sampled signal and does not depend on the parameter of the sampling algorithm (the threshold size).

Chapter 4 gives a comparative analysis of event-triggered and time-triggered real-time computing and communication architectures for embedded control applications in terms of composability, real-time requirements, flexibility, and heterogeneous timing models. The attention of the reader is focused on time-triggered architectures as a complementary framework to event-triggered models in the implementation of real-time systems. Finally, the chapter concludes with a case study of a control application realized alternatively as an event-triggered and as a time-triggered system and their comparative analysis based on experimental results.

Chapter 5 extends the event-based state-feedback approach introduced in the technical literature to the distributed event-based control of physically interconnected systems. To avoid the realization of the distributed state feedback with continuous communications between subsystems, it is proposed to introduce interactions between the components of the networked controller only at the event times although the feedback of the local state information is still continuous. Two event-based communication schemes initiated by a local controller have been adopted: the transmission of local state information to, and the request for information from, the neighboring subsystems. The chapter concludes with a demonstration of the distributed event-based state-feedback approach for a thermofluid process by simulation and experimental results.

Chapter 6 focuses on the analysis of PETC systems that combine the benefits of time-triggered and event-triggered control strategies. In PETC, the sensor is sampled equidistantly and event occurrence is verified at every sampling time to decide on the transmission of a new measurement to a controller. The PETC can be referred to the concept of the globally asynchronous locally synchronous (GALS) model of computation introduced in the 1980s and adopted as the reference architecture for digital electronics, including system-on-chip design. Furthermore, the PETC suits practical implementations in embedded software architectures and is used, for example, in event-based PID controllers (see Chapter 12), the LonWorks commercial control networking platform (ISO/IEC 14908), and EnOcean energy harvesting wireless technology.

The chapter focuses on approaches to PETC that include a formal analysis framework, which applies to continuous-time plants and incorporates intersample behavior. The analysis first addresses the case of plants modeled by linear continuous-time systems and is further extended to preliminary results for plants with nonlinear dynamics. The chapter concludes with a discussion of open research problems in the area of PETC.

In Chapter 7, a decentralized wireless networked control architecture is considered where the sensors providing measurements to the controller have only a partial view of the full state of the system. The objective of this study is to minimize the energy consumption of the system not only by reducing the communication but also by decreasing the time the sensors have to listen for messages from the other nodes. Two techniques of decentralized conditions are examined. The first technique requires synchronous transmission of measurements from the sensors, while the second one proposes the use of asynchronous measurements. Both techniques are finally compared through some examples followed by a discussion of their benefits and shortcomings. By employing asynchronous updates, the sensor nodes do not need to stay listening for updates from other sensors, thus potentially bringing some additional energy savings. The chapter concludes with ideas for future and ongoing extensions of the proposed approach.

Chapter 8 focuses on the practical implementation of the event-based strategy using the generalized predictive control (GPC) scheme, which is the most popular model predictive control (MPC) algorithm in industry. To circumvent the complexity of MPC implementation, a straightforward algorithm that includes event-based capabilities in a GPC controller is provided both for sensor and actuator deadbands. The chapter concludes with a simulation study, where the event-based control schemes are evaluated for the greenhouse temperature control problem, which highlights the advantages of the proposed event-based GPC controllers.

Chapter 9 presents the model-based event-triggered (MB-ET) control framework for networked systems. The MB-ET control makes explicit use of existing knowledge of the plant dynamics built into its mathematical model to enhance the performance of the system and to use network communication resources more efficiently. The implementation of an event-based scheme in the model-based approach to networked control systems represents a very intuitive way of saving network resources and has many advantages compared with the classical periodic implementation (e.g., robustness to packet dropouts).

In Chapter 10, the extensions to event-triggered control called self-triggered and team-triggered control are presented. Unlike the event-triggered execution model that consists in taking an action when a relevant event occurs, in the self-triggered control strategy, an appropriate action is scheduled for a time when an appropriate event is predicted to happen. Unlike the event-triggered strategy, the self-triggered control does not require continuous access to some state or output
of the system in order to monitor the possibility of an event being triggered, which can be especially costly to implement over networked systems. However, the self-triggered algorithms can provide conservative decisions on when actions should take place to properly guarantee the desired stability results. The team-triggered control approach summarized in Chapter 10 combines ideas from both event-triggered and self-triggered control to help overcome these limitations.

Chapter 11 addresses the problem of conditions under which an event-triggered system is efficiently attentive in the sense that the intersampling interval gets longer when the system approaches the equilibrium. The problem of efficient attention is formulated from the perspective of networked control systems when the communication channel transmits data in discrete packets of a fixed size that are received after a finite delay. The actual bandwidth needed to transmit the sampled information is defined as a stabilizing instantaneous bit rate that equals the number of bits used to encode each sampled measurement and the maximum delay with which the sampled-data system is known to be asymptotically stable. The chapter provides a sufficient condition for event-triggered systems to be efficiently attentive. Furthermore, the study introduces a dynamic quantization policy and identifies the maximum acceptable transmission delay for which the system is assured to be locally input-to-state stable and efficiently attentive.

The objective of Chapter 12 is to give an overview of the state of the art on event-based PID control. The introduction of the event-based PID controller by Årzén in 1999 was one of the stimuli for increasing research interest in event-based control. PID regulators are the backbone of most industrial control systems, being involved in more than 90% of industrial control loops. The chapter synthesizes research on event-based PID control and provides practical guidelines on the design and tuning of event-based PID controllers that allow for an adjustable trade-off between high control performance and low message rate.

Chapter 13 refers to event-based state estimation discussed from the perspective of distributed networked systems that benefit from the reduction in resource utilization as the sensors transmit the measurements to the estimator according to the event-triggered framework (see also Chapter 20). The send-on-delta sampling, predictive sampling (send-on-delta with prediction), and matched sampling are analyzed in this chapter in terms of their applicability to state estimation. The matched sampling uses the threshold-based scheme in relation to the Kullback–Leibler divergence, which shows how much information has been lost since the most recent measurement update. Unlike the send-on-delta scheme that is aimed to detect the difference between two measurement values, the Kullback–Leibler divergence is used for comparison of two probability distributions representing state estimates. The approaches discussed in the chapter not only adopt events for state estimation but also utilize implicit information, defined as the lack of an event, to improve system performance between events. Hence, Chapter 13 belongs to a small number of works on event-based control that propose to utilize implicit information to improve system performance (see also Section 3.2.9 in Chapter 3).

Chapter 14 gives an extensive overview of the current state of the art of event-based intermittent control discussed in the context of physiological systems. Intermittent control has a long history in the physiological literature and appears in various forms in the engineering literature. In particular, intermittent control has been shown to provide an event-driven basis for human control systems associated with balance and motion control. The main motivation to use intermittent control in the context of event-based systems is the observation that the zero-order hold is inappropriate for event-based systems as the intersample interval varies with time and is determined by the event times.

The intermittent approach avoids these issues by replacing the zero-order hold by the system-matched hold (SMH), a type of generalized hold based on system states, which explicitly generates an open-loop intersample control trajectory using the underlying continuous-time closed-loop control system. Furthermore, the SMH-based intermittent controller is associated with a separation principle similar to that of the underlying continuous-time controller, which, however, is not valid for the zero-order hold. An important consequence of this separation principle is that neither the design of the SMH nor the stability of the closed-loop system is dependent on the sample interval. Hence, the SMH is particularly appropriate when sample times are unpredictable or nonuniform, possibly arising from an event-driven design. Finally, at the end of the chapter, challenging problems for relevant future research in intermittent control have been indicated.

In Part II, "Event-Based Signal Processing," the technical content focuses on topics such as continuous-time digital signal processing, its spectral properties and hardware implementations, statistical event-based signal processing in distributed detection and estimation, the asynchronous spike event coding technique, perfect signal recovery (including nonstationary signals) based on event-triggered samples, event-based digital filters, and local bandwidth estimation by level-crossing sampling for signal reconstruction at the local Nyquist rate.

Chapter 15 opens the part of the book related to event-based signal processing. This chapter discusses research
on event-based, continuous-time signal acquisition and digital signal processing presented as an alternative to the well-known classical analog (continuous both in time and in amplitude) and classical digital (discrete in time and in amplitude) signal processing modes. Continuous-time digital signal processing is characterized by no aliasing, immediate response to input changes, lower in-band quantization error, and activity-dependent power dissipation. Although level-crossing sampling has been known for decades, earlier works seem not to have recognized aliasing as its property in continuous time, perhaps because in those references the time was assumed discretized. Although the discussion in the chapter is concentrated on level-crossing sampling realized in continuous time, most of the principles presented are valid for other types of event-based sampling, and for the technique that uses discretized time. The chapter also discusses design challenges of CT-DSP systems and reports experimental results from recent test chips, operating at kilohertz (kHz) to gigahertz (GHz) signal frequencies.

In the CT-DSP mode discussed in Chapter 15, the method of signal reconstruction from the time instants of amplitude-related events by the zero-order hold is focused on the feasibility of practical implementation rather than the perfection of signal recovery. The theoretical issue of perfect signal reconstruction from the discrete events (e.g., level crossings) requires the use of advanced methods of linear algebra. The recovery of signals from event-based samples may use methods developed for the more general class of nonuniform sampling. Although a number of surveys have been published in the area of nonuniform sampling and its applications, this literature is, however, addressed more toward applied mathematicians than engineers because of its focus on advanced knowledge of functional and complex analysis. Chapter 16 tries to bridge this gap in the scientific literature and is aimed to be a tutorial on advanced mathematical issues related to the processing of nonuniformly sampled signals for engineers, while keeping a connection with the constraints of real implementations. The problems of perfect signal recovery of nonuniformly sampled signals are addressed also in Chapters 19 and 23.

Chapter 17 refers to the spectral analysis of continuous-time ADC and DSP and, thus, is closely related to Chapter 15. Chapter 17 discusses the spectral features of the quantization error introduced by continuous-time amplitude quantization. In particular, it is shown that amplitude quantization in continuous time can be analyzed as a memoryless nonlinear transformation of continuous signals, which yields spectral results that are harmonic based, rather than noise oriented. Furthermore, the chapter addresses the effects of fine quantization of the time axis by uniform sampling of piecewise constant signals needed, for example, for interface compatibility with standard data processing.

Chapter 18 contains concepts for the hardware-efficient design of CT-DSP. The most critical point in the implementation of the CT-DSP is the realization of time-continuous delay elements, which require a considerable amount of hardware. In the chapter, several new concepts developed recently for the implementation of delay elements, which enable significant hardware reduction, are discussed. In particular, the study includes the proposition of a new CT base cell for the CT delay element built of a cascade of cells that allows the chip area to be reduced by up to 90% at the cost of negligible performance losses. It is also shown that the hardware requirements can be reduced by applying asynchronous sigma-delta modulation (ASDM) for quantization. Finally, filter architectures especially well-suited for event-driven signal processing are proposed. The presented design methods help to evolve event-driven systems based on continuous-time DSPs toward commercial application.

Chapter 19 discusses the issues related to asynchronous data acquisition and processing of nonstationary signals. Nonstationary signals occur in many applications, such as biomedical systems and sensor networking. As follows from the Wold–Cramér spectrum, the distribution of the power of a nonstationary signal is a function of time and frequency; therefore, the sampling of such signals needs to be signal dependent to avoid the collection of unnecessary samples when the signal is quiescent. In the chapter, sampling and reconstruction of nonstationary signals using level-crossing sampling and ASDM are discussed (see Chapters 16 and 23, which also address the signal recovery problems). It is argued that level-crossing is a suitable sampling scheme for nonstationary signals, as it has the advantage of not requiring a bandlimiting constraint. On the other hand, the prolate spheroidal wave, or Slepian, functions are more appropriate than the classical sinc functions for reconstruction of nonstationary signals, as the former have better time and frequency localization than the latter. Unlike the level-crossing sampler, the ASDM encodes the signal only into sample times without amplitude quantization but requires the condition that the input signal must be bandlimited. To improve signal encoding and processing, the method of decomposition of a nonstationary signal using the ASDM by latticing the time–frequency plane, as well as a modified ASDM architecture, are discussed.

Chapter 20 addresses statistical signal processing problems by the use of the event-based strategy (see also Chapter 13). First, the decentralized detection problem based on statistical hypothesis testing is considered,
in which a number of distributed sensors under energy and bandwidth constraints sequentially report a summary of their discrete-time observations to a fusion center, which makes a decision satisfying certain performance constraints. Level-triggered sampling has already been used to transmit effectively the sufficient local statistics in decentralized systems for many applications (e.g., spectrum sensing and channel estimation in cognitive radio networks, target detection in wireless sensor networks, and power quality monitoring in power grids). In the context of the decentralized detection problem, the chapter discusses the event-based information transmission schemes at the sensor nodes and the sequential decision function at the fusion center. The second part of the chapter, by the use of level-triggered sampling, addresses the decentralized estimation of linear regression parameters, which is one of the common tasks in wireless sensor networks (e.g., in system identification or estimation of wireless multiple access channel coefficients). It is shown that the decentralized detection and estimation schemes based on level-triggered sampling significantly outperform the schemes based on conventional uniform sampling in terms of average stopping time.

Chapter 21 focuses on a signal processing technique named the asynchronous spike event coding (ASEC) scheme, associated with the communication method known as the address event representation (AER) and realized via an asynchronous digital channel. The ASEC consists of analog-to-spike and spike-to-analog coding schemes based on asynchronous delta modulation. The ASEC is suitable to perform a series of arithmetic operations for analog computing (i.e., negation, amplification, modulus, summation and subtraction, multiplication and division, and averaging). Although digital circuits are more robust and flexible than their analog counterparts, analog designs usually present a more efficient solution than digital circuits because they are usually smaller, consume less power, and provide a higher processing speed. The AER communication protocol originates from ideas to replicate concepts of signal processing in biological neural systems in VLSI systems. The applications of ASEC include programmable analog arrays (complementary to FPGAs) for low-power computing or remote sensing, mixed-mode architectures, and neuromorphic systems.

Chapter 22 focuses on the synthesis and design of discrete-time filters using level-crossing sampling. It presents several filtering strategies, such as finite impulse-response (FIR) and infinite impulse-response (IIR) filters. Furthermore, a novel strategy of filter synthesis in the frequency domain, based on level-crossing sampling applied to the transfer function, is introduced. This novel strategy allows for a significant reduction of the number of coefficients in filter design. Finally, the architecture for hardware implementation of the FIR filter based on an asynchronous, data-driven logic is provided. The designed circuits are event-driven and able to process data only when new samples are triggered, which results in a reduction both of system activity and of the average data rate on the filter output.

In Chapter 23, the problem of perfect recovery of bandlimited signals based on local bandwidth is considered. In this sense, the content of Chapter 23 is related to that of Chapters 16 and 19. In general, signal processing aware of local bandwidth refers to a class of signals whose local spectral content strongly varies with time. These signals, represented, for example, by FM signals, radar, EEG, or neuronal activity, do not change their values significantly during some long time intervals, followed by rapid aberrations over short time intervals. By exploiting local properties, these signals can be sampled faster when the local bandwidth becomes higher and slower in regions of lower local bandwidth, which provides the potential for more efficient resource utilization. The method used for this purpose, as discussed in Chapter 23, is level-crossing sampling, because the mean level-crossing rate depends on the power spectral density of the signal. The estimates of the local bandwidth are obtained by evaluating the number of level crossings in time windows of a finite size. The chapter develops a methodology for the estimation of local bandwidth by level-crossing sampling for signals modeled by Gaussian stochastic processes. Finally, the recovery of the signal from the level crossings, based on methods suited for irregular samples combined with time warping, is presented.

Marek Miśkowicz
Part I

Event-Based Control
1
Event-Based Control: Introduction and Survey

Jan Lunze
Ruhr-University Bochum
Bochum, Germany

CONTENTS
1.1  Introduction to Event-Based Control ..... 3
     1.1.1  Basic Event-Based Control Configuration ..... 3
     1.1.2  Event-Based versus Sampled-Data Control ..... 5
     1.1.3  Architecture of Event-Based Control Loops ..... 6
1.2  Components of Event-Based Control Loops ..... 8
     1.2.1  Event Generator ..... 8
     1.2.2  Input Signal Generator ..... 10
     1.2.3  Characteristics of the Closed-Loop System ..... 11
1.3  Approaches to Event-Based Control ..... 13
     1.3.1  Emulation-Based Design Methods ..... 13
     1.3.2  Switching among Different Sampling Frequencies ..... 13
     1.3.3  Self-Triggered Control ..... 14
     1.3.4  Simultaneous Design of the Sampler and the Controller ..... 14
     1.3.5  Event-Based State Estimation ..... 15
     1.3.6  Event-Based Control of Interconnected Systems ..... 15
1.4  Implementation Issues ..... 15
1.5  Extension of the Basic Principles and Open Problems ..... 17
Acknowledgment ..... 18
Bibliography ..... 18

ABSTRACT This chapter gives an introduction to the aims and to the analysis and design methods for event-based control, surveys the current research topics, and gives references to the following chapters of this book.

1.1 Introduction to Event-Based Control

1.1.1 Basic Event-Based Control Configuration

Event-based control is a new control method that closes the feedback loop only if an event indicates that the control error exceeds a tolerable bound and triggers a data transmission from the sensors to the controllers and the actuators. It is currently being developed as an important means for reducing the load of the digital communication networks that are used to implement feedback control. This chapter explains the main ideas of event-based control, surveys analysis and design methods, and gives an overview of the problems and solutions that are presented in the following chapters of this book.

Feedback is the main principle of control. It enables a controller

• To stabilize an unstable plant
• To attenuate disturbances
• To change the dynamical properties of a plant
• To tolerate uncertainties of the model that has been used to design the controller

This principle has been elaborated in detail for continuous-time systems and for implementing controllers in a discrete-time setting as sampled-data controllers with sufficiently high sampling rate.

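To make this idea concrete, the following minimal simulation is an illustrative sketch only (all numbers are assumed; it is not an example from the chapter). It closes the loop of a scalar unstable plant only when the deviation of the state from the last transmitted value exceeds a tolerable bound ē, and counts the resulting transmissions.

```python
import math

# Illustrative sketch (assumed numbers): a scalar plant dx/dt = a*x + b*u is
# stabilized by u = -K*x(tk), where x(tk) is the last *transmitted* state.
# The event generator transmits a new sample only when |x(t) - x(tk)| >= e_bar.
a, b, K = 1.0, 1.0, 3.0        # unstable plant and a stabilizing gain
e_bar = 0.05                   # event threshold (tolerable bound)
dt, T = 1e-3, 5.0              # integration step and simulation horizon

x, x_last = 1.0, 1.0           # plant state and last transmitted state
events = 0
for _ in range(int(T / dt)):
    if abs(x - x_last) >= e_bar:   # event condition: tolerable bound exceeded
        x_last = x                  # close the loop: transmit the current state
        events += 1
    u = -K * x_last                 # input held constant between events
    x += dt * (a * x + b * u)       # Euler integration of the plant

print(f"state after {T:.0f} s: {x:.4f}, transmissions: {events}")
```

With these numbers, the event-based loop typically needs only a few dozen transmissions where a 1 kHz clocked loop would use 5000 samples; the threshold ē trades communication load against control accuracy.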
4 Event-Based Control and Signal Processing

If the reasons for information feedback mentioned above do not exist in an application, feedforward control can be used, which does not necessitate any communication network and, moreover, leads to a quicker system response. In many situations, feedforward control is combined with feedback control in a two-degrees-of-freedom configuration to take advantage of the characteristics of both control methods. This combination shows that an information feedback is necessary only to the extent of the instability of the plant, to the magnitude of the effects of disturbances, to the necessity to change the dynamics of the plant, and to the size of uncertainties in the model used for control design. Event-based control combines feedforward and feedback control in the sense that only at the event times is the feedback loop closed, and between consecutive event times, the control is carried out in an open-loop fashion. Hence, event-based control lives in the range between both control methods. It chooses the sampling instants online in dependence on the current plant behavior and in doing so it answers the question when information feedback is necessary to reach a desired level of performance.

Basic event-based control configuration. Figure 1.1 shows the basic configuration of event-based control. A digital communication network is used to implement the data flow from the event generator toward the controller and from the controller toward the control input generator. The dashed lines indicate that this data flow occurs only at the event time instants tk (k = 0, 1, . . .). In the typical situation considered in event-based control, the communication network is not used exclusively for the control task at hand, but also for other tasks; hence, it may introduce a time delay or even loss of information into the control loop in dependence on the overall data traffic. To emphasize this situation, digital networks are drawn as clouds in the figures of this chapter.

FIGURE 1.1
Event-based control loop.

There are three components to be designed for event-based control:

• The event generator evaluates continuously the current behavior of the plant, which is expressed in the measured state x(t) or output y(t), generates events, and initiates at the event times tk the data transfer of the current state x(tk) or output y(tk) toward the controller.

• The controller acts at the time instants tk and determines the current input u(tk) of the plant in dependence on the new information x(tk) or y(tk) and the reference trajectory yref(t).

• The control input generator uses the values u(tk) of the control input received from the controller to generate the continuous-time input u(t) for the time interval t ∈ [tk, tk+1) before it receives the next input value at time tk+1.

In general, the event generator and the control input generator are, like the controller, dynamical components. They are implemented on smart sensors or smart actuators, which have enough computing capabilities to solve the equations that describe the event-generation rules or determine the progress of the input signals, respectively. In the literature, zero-order holds have been widely used as control input generators, although this simple element limits the effectiveness of event-based control considerably. For the investigation of the control loop shown in Figure 1.1, the controller is often integrated into the control input generator, and the control loop is simplified accordingly.

Motivation for event-based control. The main motivation of the huge current research interest in event-based control results from the aim to adapt the usage of all elements in a control loop to the current needs. In particular, the usage of the communication network should be considerably reduced in time intervals in which the plant is in the surroundings of a steady state and the network can be released for other tasks. Likewise, the sensors, the actuators, and the controller hardware should be used only if necessary to save energy in battery-powered equipment. All these reasons for event-based control are summarized in the popular term of a resource-aware design of the control loop.

In addition, there are several reasons why event-based control is an excellent option:

• Asynchronous communication and real-time computation: The main digital components of a control loop work in asynchronous modes and necessitate synchronization steps whenever two of them have to communicate information.

Hence, if these components are not specifically scheduled for control purposes, the standard assumptions of sampled-data control with fixed sampling instants and delay-free transmissions are not satisfied, and a more detailed analysis needs to be done by methods of event-based control. Due to such technological circumstances, field bus systems, asynchronous transfer mode networks, or wireless communication networks may not be available if needed by the control loop, and even the sampling theorem can be violated.

• Working principles of technological systems: Several technologies require adaptation of the control systems to the timing regime of the plant, for example, internal combustion engines, DC (direct current)/DC converters, or satellites. For example, the control of internal combustion engines does not use the time as the independent variable, but instead uses the crankshaft angle, the progress of which depends on the engine speed and not on the clock time. Sampling should not be invoked by a clock, but by the movement of the engine.

• Organization of real-time programming: Shared utilization of computing units in concurrent programming can be made much easier if the real-time constraints are not given by strict timing rules, but by “soft” bounds. Then, the average performance of the computing system is improved at the price of shifting the sampling instants of the controllers.

• State-dependent measurement principles: There are several measurement principles that lead to nonperiodic sensor information. For example, the positions of rotating axes are measured by equidistant markers, which appear in the sensor quicker or slower in accordance with the rotation speed. Quantization of measurement signals may lead likewise to nonperiodic sensor information.

These and further situations show that the motivation for event-based control is not restricted to economizing the use of digital networks and, hence, will not be overcome by future technological progress in communication.

1.1.2 Event-Based versus Sampled-Data Control

FIGURE 1.2
Sampled-data control loop.

FIGURE 1.3
Periodic versus event-triggered sampling.

For the digital implementation of control loops, sampled-data control with the sample-and-hold configuration shown in Figure 1.2 has been elaborated during the last decades and numerous modeling, design, and implementation methods and tools are available. As Figure 1.2 shows, this standard configuration is similar to event-based control with the event generator replaced by the sampler and the control input generator by a hold element. The main difference is in the fact that in sampled-data control, both the sampler and the hold are triggered by a clock, which determines the time instants

tk = kTs,  k = 0, 1, . . . ,  (1.1)

for a reasonably chosen sampling period Ts. The periodic sampling occurs after the elapse of a time, not in dependence upon the behavior of the plant. Therefore, in the top part of Figure 1.3, the event instants are

equidistant on the time axis, and the lines from the event markers toward the measurement values x(k) are drawn as arrows from the clock axis toward the curve showing the time evolution of the state x(t).

The usefulness of sampled-data control results from the fact that the continuous-time plant together with the sampler and the hold can be represented in a compact form by a discrete-time state-space model or, for linear systems, by a transfer function in the z-domain. This model is not an approximation but a precise representation of the input–output behavior of the plant at the time instants (1.1), which are enumerated by k (k = 0, 1, . . .). It includes the dynamics of the plant and, for a fixed sampling time Ts, the effect of the communication network on the control performance. Hence, the design of the controller can be separated from the design of the communication network, and a comprehensive sampled-data control theory has been elaborated for digital control with periodic sampling.

Under these circumstances, it is necessary to clearly determine why a new theory of event-based control is necessary and useful. The starting point for this argumentation is shown in the bottom part of Figure 1.3. In event-based control, the system itself decides when to sample next. Therefore, the dashed arrows between the measurement points and the events are oriented from the curve of x(t) toward the event markers. In the figure, two event thresholds are drawn as horizontal dashed lines. Whenever the signal crosses one of these thresholds, sampling is invoked. Obviously, the sampling frequency is adapted to the behavior of the plant with quicker sampling in time intervals with quick dynamics and slower sampling in time intervals with slow state changes. To emphasize the difference between periodic and event-based sampling, one can characterize the periodic sampling as feedforward sampling and the event-based approach as feedback sampling, where the sampling itself influences the selection of the next sampling instant.

There is a severe consequence of the step from periodic toward event-based sampling: Sampled-data control theory is no longer applicable, because its fundamental assumption of a periodic sampling is violated. As the sampling time is not predictable and the sampling frequency changes from one event to the next, there is no chance to represent the plant behavior at the sampling instants in a compact form as in sampled-data control theory, but the continuous-time model has to be integrated for the current sampling time Ts(k) = tk+1 − tk. As shown in Section 1.2.3 in more detail, the closed-loop system represents a hybrid dynamical system, and this characteristic leads to rather complex analysis and design tasks.

Event-based control leads to new theoretical problems. The event generator, the controller, and the control input generator have to be designed so as to answer the following fundamental questions:

• When should information be transmitted from the sensors via the controller toward the actuators?

• How should the control input be generated between two consecutive event instants?

If these components are chosen appropriately, the main question of event-based control can be answered:

• Which improvements can be obtained by event-based control in comparison to sampled-data control with respect to an effective operation of all components in the control loop?

The answer to this question depends, of course, on the practical circumstances of the control loop under investigation, which determine whether the usage of the communication network, the activities of the sensors or the actuators, or the complexity of the control algorithm bring about the main restrictions on the design and the implementation of the controller.

Event-based control leads to more complex online computational problems than sampled-data control and should, therefore, be used only for the reasons mentioned in Section 1.1.1. Otherwise, sampled-data control, which is easier to design and implement, should be used. A more detailed comparison of event-triggered versus time-triggered systems can be found in Chapter 3.

1.1.3 Architecture of Event-Based Control Loops

This section discusses three different architectures in which event-based control has been investigated in the past.

Single-loop event-based control. The basic structure shown in Figure 1.1 applies to systems where, with respect to the communication structure, all sensors and all actuators are lumped together in a single node each. Consequently, the current measurement data are collected at the sensor node and sent simultaneously to the controller. Analogously, the updated values of all input signals of the plant are sent together from the controller toward the actuator node. If the communication network introduces some time delay into the feedback loop, then all output data or input data are delayed simultaneously.

Multivariable event-based control. Figure 1.4 shows a multivariable control loop, where all the actuators Ai (i = 1, 2, . . . , m) and all the sensors Si (i = 1, 2, . . . , r) are drawn as single units because they represent separate nodes in the communication structure of the overall

FIGURE 1.4
Multivariable event-based control.

system. Hence, separate event generators Ei and control input generators Ii have to be placed at these units.

From the methodological point of view, this structure brings about additional problems. First, the event generators Ei have to make their decision to send or not to send data to the controller based on the measurement of only the single output that the associated sensor Si generates (decentralized event triggering rules). This output alone does not represent the state of the plant and, hence, makes it difficult to decide whether the new measurement is important for the controller and, thus, has to be communicated. This difficulty can be avoided if the sensor nodes are connected to form a sensor network, where each sensor receives the measurement information of all other sensors. This connection can be implemented by the digital network used for the communication with the controller or by a separate network [2]. Then, the event generators can find a consensus about the next event time.

Second, as the sensor nodes decide independently of each other when to send information, the communication in the overall system is no longer synchronized and the control algorithm has to cope with sensor data arriving asynchronously. In the formal representation, the event times are denoted by tki with separate counters ki (i = 1, 2, . . . , m) for the m sensor nodes.

Third, it may be reasonable to place another event generator EC at the controller node, because the separate decisions about sending measurement data may not imply the necessity to send control input data from the controller toward the actuators. Even if the sensor data have changed considerably and the event generators at the sensor nodes have invoked new data exchanges, the changes of the input data may remain below the threshold for sending new information from the controller toward the actuators.

Decentralized and distributed event-based control. The third architecture refers to interconnected plants that consist of several subsystems Σi (i = 1, 2, . . . , N), which are physically coupled via an interconnection block K (Figure 1.5). In decentralized event-based control, separate control loops around the subsystems are implemented, which are drawn in the figure with the event generator Ei and the control input generator Ii for the subsystem Σi. For simplicity of presentation, the control station Ci of this control loop is thought to be an element of the control input generator Ii and, hence, not explicitly shown. In this situation, the communication network closes the separate loops by sending the state xi(t) of the subsystem (or the subsystem output) from the event generator Ei toward the control input generator Ii of the same subsystem, which uses this new information to update its input, for example, according to the control law

ui(t) = −Ki xi(tki),  tki ≤ t < tki+1.

The design problem for the decentralized controller includes the “normal” problem of choosing separate controllers Ki for interconnected subsystems that is known from the theory of continuous decentralized control [27] and the new problem of respecting the restrictions of the event-based implementation of these decentralized control stations [30,32,43,45]. The latter

FIGURE 1.5
Decentralized event-based control.

problem includes the asynchronous activation of communication links by the event generators of the different loops, which is symbolized by the indexed counters ki for the event times.

In distributed event-based control, the overall system architecture is the same as in Figure 1.5, but now the communication network submits information from the event generator Ei not only to the control input generator Ii of the same subsystem but also to neighboring subsystems. Hence, the input to the block Ii, which is shown as a dashed arrow, includes state information xj(tkj) originating in neighboring subsystems Σj and arriving at the event times tkj, which are determined by the event generators Ej of these subsystems. Therefore, the control input of the ith subsystem may depend upon several or even upon all other subsystem states, which is shown in the example control law

ui(t) = −(Ki1, Ki2, . . . , KiN) (x1(tk1), x2(tk2), . . . , xN(tkN))T,
max{tk1, . . . , tkN} ≤ t < min{tk1+1, . . . , tkN+1},

in which ui depends upon all subsystem states [11,47–49].

In both the decentralized and the distributed event-based control structure, the asynchronous data transfer poses the main problems in the modeling of the closed-loop system and the design of the controller. In comparison to multivariable event-based control, the structural constraints of the control law lead to further difficulties. In a more general setting, this structure can be dealt with as a set of computing units that are distributed over the control loop and are connected by a packet-based network. The question to be answered is as follows: How can the control problem be solved in such a structure with as few communication activities as possible?

New methods to cope with these problems are presented in Chapter 5 for distributed event-based control and in Chapter 7 for implementing a centralized controller with decentralized event generators.

1.2 Components of Event-Based Control Loops

This section describes in more detail the components of event-based control loops and points to the main assumptions made in the literature when elaborating their working principles. This explanation is focused on the basic configuration shown in Figure 1.1.

1.2.1 Event Generator

The main task of the event generator E is to evaluate the current measurement data in order to decide at which time instant an information feedback from the sensors via the controller toward the actuators is necessary. The rule for this decision is often formulated as a condition

h(tk, x(tk), x(t), ē) = 0,  k = 0, 1, . . . ,  (1.2)

with ē being an event threshold, tk describing the last event time, x(tk) the state at the last event time, and x(t) the current state of the plant. For any time t > tk, the condition h depends only on the state x(t). If the state x(t) makes the function h vanish, the next event time t = tk+1 is reached.

From the methodological point of view, it is important to see that the time point tk+1 is only implicitly described by the condition (1.2). Even if an explicit solution of the

plant model for the state x(t) can be obtained, Equation 1.2 can usually not be transformed into an explicit expression for tk+1. In the online implementation, the event generator takes the current measurement vector x(t), evaluates the function h, and in case this function is (nearly) vanishing, invokes a data transfer.

There is an important difference between event-based signal processing described in the second part of this book and event-based control. The introductory chapter on event-based data acquisition and digital signal processing shows that level-crossing sampling adapts the sampling frequency to the properties of the signal to be transmitted. The aim is to set sampling points in such a way that the progress of the signal can be reconstructed by the receiver.

In contrast to this setting, event-based control has to focus on the behavior of the closed-loop system. The event generator should send the next signal value if this value is necessary to ensure a certain performance of the closed-loop system. Consequently, the event generation is not based on the evaluation of a single measurement, but on the evaluation of features describing the closed-loop performance with and without sending the new information.

FIGURE 1.6
Event-based control loop showing the event generation mechanism.

Event generation to ensure stability. To explain the event generation for feedback purposes in more detail, a basic idea from [1] is briefly outlined here (Figure 1.6). Consider the linear plant

Σ:  ẋ(t) = Ax(t) + Bu(t),  x(0) = x0;  y(t) = Cx(t),  (1.3)

with the n-dimensional state vector x and apply, in an event-based fashion, the state feedback

u(t) = −Kx(t),  (1.4)

which, in the continuous implementation, leads to an asymptotically stable closed-loop system

Σ̄:  ẋ(t) = (A − BK)x(t) = Āx(t),  x(0) = x0;  y(t) = Cx(t),  (1.5)

with Ā = A − BK. The stability can be proved by means of the Lyapunov function V(x(t)) = xT(t)Px(t), where P is the symmetric, positive definite solution of the Lyapunov equation

ĀT P + P Ā = −Q,

for some positive definite matrix Q.

Now, the controller should be implemented in an event-based fashion with a zero-order hold as the control input generator:

u(t) = −Kx(tk),  tk ≤ t < tk+1.  (1.6)

The event generator should supervise the behavior of the system Σ under the control (1.6) in order to ensure that the Lyapunov function V(x(t)) decays at least with the rate λ:

V(x(t)) ≤ S(t) = V(x(tk)) e^(−λ(t−tk)).

This requirement can be formulated with a tolerance of ē as an event function (1.2):

h(tk, x(tk), x(t), ē) = xT(t)Px(t) − xT(tk)Px(tk) e^(−λ(t−tk)) − ē = 0,  (1.7)

that can be tested online for the current measured state x(t). If the equality is satisfied, the next event time tk+1 is reached.

Figure 1.7 illustrates this method of the event generation. If a continuous controller were used, the temporal evolution of V(x(t)) would follow the lowest curve, which
has the largest decay rate of the three lines shown in the figure. This line is denoted as V(t) because it illustrates how the Lyapunov function decreases from its value V(x(tk)) at the event time tk over the time interval t > tk. With event-based feedback (1.6), the same decay rate cannot be required, because less information is sent around the control loop. A reasonable requirement is to claim that the Lyapunov function should at least decay as the function S(t), which is drawn as dashed curve in the figure. For the event-based controller, the Lyapunov function V has, for a time t that is only a little bit greater than tk, a similar decay rate as for a continuous controller, because the inputs u(t) generated by the continuous controller (1.4) and by the event-based controller (1.6) are nearly the same. Later on, the decay rate decreases for the event-based controller. A crossing of the middle curve showing the Lyapunov function for the event-based controller with the dashed line occurs, and this point defines the next event time tk+1. The threshold ē in Equation 1.7 acts as a bound for the tolerance that is accepted in the comparison of the two curves.

FIGURE 1.7
Determination of the next sampling instant.

The important issue is the fact that the rule of the event generator for the determination of the next event instant depends not only on a single measurement signal but also on the Lyapunov function characterizing the behavior of the closed-loop system. Furthermore, this method illustrates that typically no fixed threshold is used in the event condition but a changing border like the function S(t) in the figure. Hence, many event generators include dynamical models of the closed-loop system to determine the novelty of the measurement information and the necessity to transfer this information. This is one of the main reasons for the higher computational load of event-based control in comparison with sampled-data control.

Minimum attention control. Although there are multiple aims that are used to find appropriate methods for the event generation, the main idea is to reduce the sampling frequency whenever possible. An early attempt to find such an event generation rule goes back to 1997 [9] and tries to limit the number of changes of the input value u(tk). The notion of “minimum attention control” results from the aim to minimize the attention that the controller has to spend on the plant behavior during operation.

This idea has been developed further [15], and new results are presented in Chapter 11 on efficiently attentive systems, for which the intersampling interval gets longer when the system approaches its operating point.

1.2.2 Input Signal Generator

The input signal generator has the task to generate a continuous-time input u(t) for a given input value u(tk) over the entire time interval t ∈ [tk, tk+1) until the next event time occurs. In sampled-data control, usually zero-order holds are used to perform this task. The rationale behind this choice is, in addition to the simplicity of its implementation, the fact that sampled-data control uses a sufficiently high sampling frequency so that the piecewise constant input signals generated by a zero-order hold approximate a continuous-time input signal with sufficient accuracy.

This argument does not hold for event-based control, because over a long time interval, a constant input signal is not adequate for control [4,51]. For linear systems, the model of the overall system (e.g., Equation 1.5) shows that the typical form of the input is a sum of exponential functions like in the equation

u(t) = −K e^(Ā(t−tk)) x(tk),  (1.8)

for the continuous controller (1.4). This is the reason why in the recent literature, the focus has moved from the use of zero-order holds as input-generating elements to more involved input-generating schemes.
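The event condition (1.7) of Section 1.2.1 can likewise be tested online. A minimal scalar sketch (assumed numbers, not from the chapter) supervises the Lyapunov function under a zero-order hold (1.6) and transmits the state whenever the decay bound is violated:

```python
import math

# Sketch of the event condition (1.7) for a scalar plant (assumed numbers):
# dx/dt = a*x + b*u, with u = -K*x(tk) held between events (zero-order hold).
# With abar = a - b*K < 0, P solves the scalar Lyapunov equation 2*abar*P = -q.
a, b, K, q = 1.0, 1.0, 3.0, 1.0
abar = a - b * K
P = -q / (2.0 * abar)          # P = 0.25 > 0
lam, e_bar = 1.0, 0.01         # required decay rate and tolerance
dt, T = 1e-3, 5.0

x, xk, tk, t = 1.0, 1.0, 0.0, 0.0
events = 0
for _ in range(int(T / dt)):
    # event function (1.7): h = V(x(t)) - V(x(tk))*exp(-lam*(t - tk)) - e_bar
    h = P * x * x - P * xk * xk * math.exp(-lam * (t - tk)) - e_bar
    if h >= 0.0:               # bound reached: transmit state, restart the bound
        xk, tk = x, t
        events += 1
    x += dt * (a * x + b * (-K * xk))
    t += dt

print(f"V(x(T)) = {P * x * x:.5f}, events = {events}")
```

By construction, a transmission occurs exactly when the supervised Lyapunov function reaches the decaying border S(t) shifted by ē, so V stays close to the required decay without periodic communication.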

Minimum attention control. Although there are multiple aims that are used to find appropriate methods for event generation, the necessity to sample in the feedback loop is minimized if the model of the closed-loop system (1.5) is used to generate the input to the plant:

    I:  ẋs(t) = Ā xs(t),  xs(t0) = x(t0);  u(t) = −K xs(t).   (1.9)

In order to avoid confusion, the state of this component has been renamed to xs(t). After the state of the control input generator (1.9) has been set to the plant state x(t0) at the first event time t0, the model (1.9) precisely generates the input u(t), (t ≥ t0), given in Equation 1.8 for k = 0 for the continuous state-feedback controller (1.4). Hence, the dynamics of the event-based and the continuous feedback loop are the same. Deviations occur, in practice, due to disturbances or model uncertainties, which bring the argumentation back to the list of reasons for feedback control mentioned in Section 1.1.1. Only if disturbances or model uncertainties are considered is feedback necessary. In the structure considered here, the activation of the feedback means to reset the state xs(tk) at the next event time tk. It is surprising to see that, nevertheless, many papers on event-based control in the recent literature consider neither disturbances nor model uncertainties in their problem setting.

Chapter 14 gives a survey on intermittent control, which is a technique to generate the continuous control input between two consecutive sampling instants by means of a dynamical model. The control input generator is similar to Equation 1.9. Similarly, Chapter 9 is concerned with a dynamical control input generator for which the closed-loop system tolerates time delay and quantization effects.

1.2.3 Characteristics of the Closed-Loop System

This section summarizes properties of event-based control loops that occur due to the structure of the components or that have to be considered, tested, or proved in the analysis of event-based systems.

FIGURE 1.8
Open-loop versus closed-loop control in event-based control loops. (Left part, t ∈ [tk, tk+1): the control input generator I drives the plant in open loop with continuous signals u(t), y(t); right part, t = tk: the loop is closed and the state x(tk) is transmitted.)

Combination of open-loop and closed-loop control. An important aspect of event-based control can be found in the fact that the input signal u(t) to the plant is generated alternately in an open-loop and a closed-loop fashion (Figure 1.8). Only at the event times tk is the feedback loop closed. In the time between two consecutive event times, the control input is generated in dependence upon the data received until the last event time and, usually, by means of a dynamical plant model. This situation does not differ from sampled-data control, but here the time interval between two consecutive events should be as long as possible and is, in general, not negligible. Therefore, the analysis of the event-based control loop has to take into consideration this time interval and cannot be restricted to the event time points.

As a direct consequence of these structural differences, the plant state x(tk) has to be available for the control input generator I in order to enable this component to generate a reasonable control input in the time interval t ∈ [tk, tk+1) between two consecutive event instants. Consequently, most of the event-based control methods developed in the literature assume that the whole state vector x is measurable and transmitted at event times. If the measurement is restricted to some output y, an observer is typically used to reconstruct the state.

Hybrid systems character. The information feedback in event-based control loops usually results in state jumps in the controller or in the control input generator, or in switching among several controllers that have been designed for different sampling frequencies. Hence, the event-based control loop represents a hybrid dynamical system [28]. To illustrate this fact, consider the system

    Σd:  ẋ(t) = A x(t) + B u(t) + E d(t),  x(0) = x0;  y(t) = C x(t),   (1.10)

with disturbance d(t) together with the controller (1.9). At the event time tk, the state of the control input generator jumps, which gives the overall system the characteristics of an impulsive system. Impulsive systems are specific forms of hybrid systems, where state jumps occur but the dynamics are the same before and after the jumps [17]. Such systems can be represented by a state equation together with a jump condition:

    Impulsive system:  ẋ(t) = f(x(t), d(t))  for x(t) ∉ D;   x(t+) = Φ(x(t))  for x(t) ∈ D.

For the event-based systems considered here, the state equation results from the models (1.10) and (1.9) of the plant and the controller, the jump set D represents the event-generation mechanism, and the reset function Φ shows how the state of the control input generator is reset at event times tk.
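This flow/jump structure can be sketched in a few lines of code. The scalar parameters below (plant coefficient a, input gain b, feedback gain k, constant disturbance, and threshold ē) are illustrative assumptions, not the chapter's example; the functions f, in_jump_set, and Phi mirror the flow map, the jump set D, and the reset map Φ.

```python
# Euler sketch of the impulsive-system view of the event-based loop:
# flow map f, jump set D (the event condition), reset map Phi.
# All parameter values are illustrative assumptions.
a, b, k, e_bar = 0.5, 1.0, 2.0, 0.1

def f(z, d):
    """Flow: plant state x and generator state x_s evolve continuously."""
    x, x_s = z
    return (a * x - b * k * x_s + d,   # plant (1.10) with input u = -k * x_s
            (a - b * k) * x_s)         # internal model (1.9)

def in_jump_set(z):
    """Jump set D: the event condition |x - x_s| >= e_bar is satisfied."""
    x, x_s = z
    return abs(x - x_s) >= e_bar

def Phi(z):
    """Reset: the generator state jumps to the measured state, x_s(t+) = x."""
    x, _ = z
    return (x, x)

dt, T, d = 1e-3, 10.0, 0.2
z = (1.0, 1.0)                  # plant and generator start synchronized
events = []
for i in range(int(T / dt)):
    dx, dx_s = f(z, d)
    z = (z[0] + dt * dx, z[1] + dt * dx_s)
    if in_jump_set(z):
        z = Phi(z)
        events.append((i + 1) * dt)

print(f"{len(events)} jumps, the first at t = {events[0]:.3f} s")
```

With the constant disturbance acting on the plant only, the difference x − xs grows between events and is reset to zero at each jump, so the jumps recur at roughly equal intervals.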

As long as the state x(t) does not satisfy the event condition (e.g., Equation 1.2), the state runs according to the differential equation in the first line of the model. If the event condition is satisfied, the current state x(t) belongs to the jump set D, and the state is reset as indicated in the second line. The symbol for the time just after the reset is t+.

This reformulation of the event-based control loop as an impulsive system makes it possible to apply methods that have been specifically developed for this system class to the analysis of event-based control (cf. [12,14,41] and Chapter 6). This way of analysis and control design is important for multivariable event-based control and distributed event-based control, because in these architectures the different sensors or control loops, respectively, generate events asynchronously; hence, numerous state jumps occur in different parts of the closed-loop system at different times.

Figure 1.9 shows experimental results to illustrate this phenomenon. For two coupled reactors, decentralized event-based controllers react independently of one another for their associated subsystem. Whereas in the right part of the figure, events occur only in the first time interval of [0, 50] s, the left part shows events for the whole time horizon. Such asynchronous state jumps bring about severe problems for the analysis of the system. The "noisy" curves represent the measured liquid level or temperature, whereas the curves with the jumps show the behavior of the internal model (1.9) of the control input generator. Jumps at the event times show the resetting of the state of the control input generator.

FIGURE 1.9
Asynchronous events in distributed event-based control loops. (Left panels: level lTB in cm and temperature ϑTB in K of reactor TB; right panels: level lTS and temperature ϑTS of reactor TS; the bottom rows mark the event times over t ∈ [0, 400] s.)

Minimum interevent time. An important analysis aim is to show that there is a guaranteed minimum time span between the events in all possible situations (disturbances, model uncertainties, etc.). This aim is important for two reasons. First, the motivation for event-based control is to reduce the information exchange within a control loop, and one has to prove that with a specific event-generation scheme, this task is really fulfilled. Second, for hybrid systems, the phenomenon of Zeno behavior is known, which means that the system generates an infinite number of events within a finite time interval. Of course, in practice, this situation seems to be artificial, and the effects of technological systems like short time delays that are neglected in the plant model prevent the Zeno behavior from occurring in reality. However, Zeno behavior has also to be avoided for the model of the event-based control loop, because this phenomenon indicates that the event-generation mechanism is not selected appropriately.

The usual way to prove the suitability of the event-based control and, simultaneously, avoid Zeno behavior is to derive a lower bound for the interevent time Ts(k) = tk+1 − tk. Equation 1.7 shows that the event rules have to be carefully chosen. If in this equation ē is set to zero, the event condition h(x(t)) = 0 is satisfied trivially, because the equality x(t) = x(tk) leads to h(tk, x(tk), x(tk), 0) = 0 and, hence, an instantaneous invocation of the next data transfer. With a positive bound ē, a minimum interevent time can be proved to exist.
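For a scalar sketch of such a bound, assume error dynamics ė(t) = a e(t) + d with the error reset to zero at each event (an illustrative simplification with assumed numbers, not the chapter's general derivation). Solving this differential equation shows that the threshold ē is first reached after T = (1/a) ln(1 + a ē/|d|), which is strictly positive for every ē > 0 and collapses to zero for ē = 0:

```python
import math

# Interevent time for the scalar error dynamics e' = a*e + d with e(t_k) = 0:
# e(t) = (d/a) * (exp(a*t) - 1) reaches e_bar after
#   T = (1/a) * ln(1 + a * e_bar / |d|).
# The coefficient a and the disturbance bound d are illustrative values.
a, d = 0.5, 0.2

def interevent_time(e_bar):
    if e_bar == 0.0:
        return 0.0   # Zeno-like behavior: the next event fires instantly
    return math.log(1.0 + a * e_bar / abs(d)) / a

for e_bar in (0.0, 0.05, 0.1, 0.2):
    print(f"e_bar = {e_bar:.2f} -> minimum interevent time {interevent_time(e_bar):.3f} s")
```

The bound grows with the threshold ē, which makes the tradeoff explicit: a larger tolerance buys longer guaranteed interevent times.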

Stability of the control loop. Stability as a prerequisite for the implementation of any controller has to be tested for the event-based control loop with specific methods. In many schemes, asymptotic stability can no longer be reached, because the event-triggering mechanism includes some tolerance between the current state and the last communicated state as, for example, in the inequality

    |x(t) − x(tk)| > ē,

which represents a tolerance of the size ē. Then, the overall system does not reach its equilibrium asymptotically, although the continuous control loop would do so.

In this situation, "practical stability" is required. The controller should hold the state x(t) of the plant inside a set Ωd for all initial states x0 ∈ Ω1 and all bounded disturbances d(t):

    x(t) ∈ Ωd  ∀t ≥ T(x0),  x0 ∈ Ω1,  d(t) ∈ [dmin, dmax].

Then, the set Ωd is called robust positively invariant, and the system state is said to be "ultimately bounded."

A further property of event-based control loops, which results likewise from some tolerance that is built into the event-triggering rule, is the insensitivity to small disturbances. This property means that for sufficiently small disturbances, no information feedback is invoked.

Tolerance of event-based control with respect to network imperfections. The communication networks may introduce time delays, packet loss, and other imperfections into the control loop, and an analysis has to show which kind of such phenomena can be tolerated by a specific event-based control method [26]. This situation shows that one has to understand which information is available at which node in the network at which time. For example, if the communication of the state information x(tk) is delayed, the control input generator may use the former information x(tk−1) for generating the current input u(t), whereas the event generator assumes that the input is in accordance with the new information x(tk) that it has sent toward the control input generator. To send acknowledgement information may avoid this situation, but this leads to more data traffic and more complicated working principles of the control loop.

1.3 Approaches to Event-Based Control

Several different methods for event-based control have been proposed in recent years, which can be distinguished with respect to the answers given to these questions. Some of them have been published under different names like event-driven sampling [19], event-based sampling [4,5], event-triggered control, Lebesgue sampling [6], deadband control [36], level-crossing control [23], minimum attention control [9], self-triggered control [1], or send-on-delta control [33]. The following paragraphs classify these approaches.

FIGURE 1.10
Event-based state feedback. (The control input generator contains the plant model with state xs(tk) and generates u(t) = −K xs(t); the event generator monitors the plant state x(t) and transmits x(tk) at events; the disturbance d(t) acts on the plant.)

1.3.1 Emulation-Based Design Methods

A well-developed approach is to take a continuous feedback and to implement this controller in an event-based fashion. As the aim is to get a bounded deviation of the behavior of the event-based control loop from the behavior of the continuous loop, this strategy of event-based control is called emulation-based.

As an example of this method, consider the event-based state feedback shown in Figure 1.10 [29]. A linear plant (1.3) is combined with a state feedback (1.4) so as to get a tolerable deviation of the event-based control loop from the continuous control loop (1.5). The events are generated in dependence on the deviation of the state xs(t) of the continuous closed-loop model from the measured state x(t):

    h(tk, x(tk), x(t), ē) = |xs(t) − x(t)| − ē = 0.

At the event time tk, the difference xs(t) − x(t) vanishes, and it increases afterward due to the effect of the disturbance, which only affects the plant, not the model. The threshold ē describes the tolerable deviation of both states. It can be shown that the deviation of the behavior of the event-based control loop from the behavior of the continuous control loop for the same disturbance d(t) is bounded, and that this bound decreases if the threshold ē is chosen to be smaller. Hence, the event-based loop mimics the continuous feedback loop with adjustable precision, where, however, the price for improvement of the approximation is an increased communication effort. Many event-based control methods have been developed as emulation-based controllers [1,12,24,29,31,41,44].
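This adjustable-precision tradeoff can be illustrated numerically: a smaller threshold ē keeps the event-based loop closer to the continuous loop but triggers more events. The scalar plant, gain, and disturbance below are illustrative assumptions, not the chapter's experiment.

```python
# Scalar sketch of the emulation-based tradeoff between accuracy and
# communication effort.  All parameter values are illustrative.
a, b, k = 0.5, 1.0, 2.0    # plant x' = a x + b u + d, state feedback gain k
d = 0.2                    # constant disturbance acting on both loops
dt, T = 1e-3, 10.0

def run(e_bar):
    x = x_s = x_c = 0.0    # event-based plant, model state, continuous loop
    events, max_dev = 0, 0.0
    for _ in range(int(T / dt)):
        x   += dt * (a * x - b * k * x_s + d)   # event-based loop, u = -k*x_s
        x_s += dt * ((a - b * k) * x_s)         # internal model, undisturbed
        x_c += dt * ((a - b * k) * x_c + d)     # continuous state feedback
        if abs(x - x_s) >= e_bar:               # event: transmit x and reset
            x_s = x
            events += 1
        max_dev = max(max_dev, abs(x - x_c))
    return events, max_dev

for e_bar in (0.2, 0.1, 0.05):
    events, dev = run(e_bar)
    print(f"e_bar = {e_bar:.2f}: {events:3d} events, max deviation {dev:.4f}")
```

Shrinking ē reduces the maximum deviation from the continuous loop at the cost of more transmissions, mirroring the bounded-deviation result stated above.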

1.3.2 Switching among Different Sampling Frequencies

The second strategy of event-based control considers sampled-data control loops with different sampling frequencies. An interesting question to ask is under what condition it is possible to get better loop performance with fewer sampling events.

FIGURE 1.11
Switching between two sampling frequencies. (The controller K and a zero-order hold (ZOH) are connected to the plant over a communication network; the disturbance d(t) acts on the plant, and the sampling frequency is switched when x(tk) ∈ GF or x(tk) ∈ GS.)

To illustrate the approach, consider a sampled-data control loop with two sampling times Tslow and Tfast (Figure 1.11). The objective function

    J = ∫₀^∞ ( xᵀ(t) Q x(t) + uᵀ(t) R u(t) ) dt,

has to be brought into a discrete-time form for the sampling time used:

    J = Σ_{k=1}^∞ [ x(k); u(k) ]ᵀ N [ x(k); u(k) ] = x₀ᵀ P x₀,

where N is an appropriate matrix that depends upon the sampling time. In [2] the performance of the system with the two sampling times Tslow and Tfast is compared with a system with constant sampling time Tnominal with Tslow > Tnominal > Tfast. For a linear plant (1.3), the optimal controller is a state feedback with feedback matrix Kfast, Knominal, or Kslow, which results from the solutions Pfast, Pnominal, or Pslow of the corresponding algebraic Riccati equation.

To illustrate that it is possible to improve the performance with less sampling over the long time horizon, assume that the undisturbed plant is in the state x0 and should be brought into the equilibrium state x̄ = 0. Then, it has been shown in [2] that the controller should start with the fast sampling and switch to the slow sampling if the current state x(t) satisfies the event rule

    xᵀ(t) (Pslow − Pfast) x(t) − x₀ᵀ (Pnominal − Pfast) x₀ = 0.

The set of all states x(t) satisfying this relation is denoted by Gslow. Hence, if x(tk) ∈ Gslow holds, the controller switches from the fast toward the slow sampling. It can be shown that over the infinite time horizon, this strategy leads to a smaller value of the objective function J and to fewer sampling instances.

If a disturbance d(t) occurs, another set Gfast has to be defined such that the controller switches from the slow to the fast sampling if the current state satisfies the switching rule x(t) ∈ Gfast. Then, the controller adapts to the effect of the disturbance and switches back to the slow sampling if this effect has been sufficiently attenuated.

A similar approach is described in Chapter 8, where a set of controllers is used, which are designed offline for different sampling times. A deadband is used for the control error to select the current sampling time and, hence, the appropriate controller from the predefined set.

1.3.3 Self-Triggered Control

In self-triggered control, the plant state is not continuously supervised by the event generator, but the next event time tk+1 is determined by the event generator at the event time tk in advance [1,50]. Compared to event-based control, this approach has the advantage that the sensors can "sleep" until the predicted next sampling instant.

To find the prediction of tk+1, the event condition (1.2) has to be reformulated so as to find the time

    tk+1 = min{ t > tk : h(tk, x(tk), x(t), ē) = 0 }.

For the undisturbed plant (1.3) subject to the control input (1.6),

    ẋ(t) = A x(t) − B K x(tk),

the state trajectory x(t), (t ≥ tk), can be predicted and the time tk+1 determined. The problem lies in the situation where the plant is subject to some disturbance. As usual, the disturbance is not known in advance, and one has to consider a restricted class of disturbances when determining the next event time, for example, a set of bounded disturbances with a known magnitude bound. This way of solution often leads to rather conservative results, which means that the predicted interevent time Ts(k) = tk+1 − tk is much shorter than it is in event-based control with online evaluation of the event condition.

Chapter 10 elaborates a new method for self-triggered control of multi-agent systems, where the overall system should ensure a prescribed decay rate for the common Lyapunov function.
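The prediction step of self-triggered control can be sketched as a forward integration of the undisturbed model until the event condition would be met. The system matrices, the threshold, and the simple norm condition ‖x(t) − x(tk)‖ ≥ ē below are illustrative assumptions.

```python
import numpy as np

# Self-triggered sketch: at the event time t_k the next event time is
# predicted from the undisturbed model x' = A x - B K x(t_k), with the
# input held at its event-time value.  All values are illustrative.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.0]])
e_bar, dt = 0.1, 1e-4

def next_event_time(x_k, t_k, horizon=5.0):
    """Integrate the prediction model until ||x(t) - x(t_k)|| reaches e_bar."""
    x = x_k.copy()
    u = -(K @ x_k)                     # input frozen at its event-time value
    t = t_k
    while t < t_k + horizon:
        x = x + dt * (A @ x + B @ u)   # Euler step of the undisturbed model
        t += dt
        if np.linalg.norm(x - x_k) >= e_bar:
            return t                   # predicted event time t_{k+1}
    return t_k + horizon               # no crossing within the horizon

t1 = next_event_time(np.array([[1.0], [0.0]]), 0.0)
print(f"predicted next event at t = {t1:.4f} s")
```

The sensor can sleep until the predicted instant; with disturbances, the prediction would have to be made robust against a bounded disturbance class, which typically shortens the scheduled interevent times.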

1.3.4 Simultaneous Design of the Sampler and the Controller

One can expect to get the best possible event-based controller if the event generation strategy is designed together with the control law. This way of solution has been elaborated, for example, in [34,35] by formulating the design task as a stochastic optimization problem, where the common objective function penalizes the state and input to the plant (like in linear-quadratic regulator theory) as well as the transmission of the plant state in the feedback loop. This problem is rather complex, because it is known from stochastic linear systems that the resulting strategy is, in general, nonlinear and includes signaling. Signaling means that the event generator sends the state information x(tk) toward the controller and conveys further information, which is encoded in the decision to send or not to send the state information. On the other hand, the controller can modulate its input to the plant to send some information while using the plant as a communication channel to the event generator.

An interesting result of these investigations shows under what conditions the design problem reduces to the emulation-based approach, where first the controller is designed, and second an event generation strategy is developed for the event-based implementation of the controller. Such conditions require that the policies of the event generator and of the controller be restricted to be deterministic. Then, the overall problem can be decomposed into two problems that can be solved sequentially.

1.3.5 Event-Based State Estimation

The ideas explained so far for event-based control can also be used for state estimation, where the measured input and output signals are not transmitted to the estimator at equidistant time intervals but in dependence upon their changes. This problem differs from event-based signal processing, which is dealt with in the second part of this book, because now the input and the output of a dynamical system are processed together with a dynamical model of the system under consideration. The aim is to maintain a small deviation of the estimated state x̂(t) from the true state x(t).

Chapter 13 presents a state estimation method with event-based measurement updates, which illustrates this problem.

1.3.6 Event-Based Control of Interconnected Systems

Additional problems occur if the event-based control strategies are not considered for a single-loop system, but are considered instead for an interconnected system, where the events occur asynchronously in the subsystems. From a theoretical viewpoint, interconnected systems pose the most interesting problems to be solved. Therefore, multiloop systems, event-based control of multi-agent systems, or distributed event-based control of systems has been the subject of intensive research for many years, as shown in [11,30,47–49] and Chapter 5.

1.4 Implementation Issues

So far, the overall system was investigated as a continuous-time system in which information is exchanged at discrete time points that are fixed by an event generator. If the event generator and the controller or the control input generator should be implemented, one has to consider these components in a discrete-time version. The following outlines the problems to be solved.

Sampled-data event-based control. For the implementation, one has to go back to the sampled-data framework of control, because the components to be implemented work in a discrete-time fashion. The event-based character can be retained for the communication step, whereas the rules for the event generator and the control input generator have to be transformed into their sampled-data versions. Usually, these components can be realized with a sufficiently high sampling rate so that a quasi-continuous behavior is obtained.

Periodic event-triggered control has emerged as a topic of interest, as it results from these implementation issues. At each sampling instant, the event generator is invoked to decide whether information has to be transmitted. If not, the event generator waits for the next sampling instant; otherwise, it sends information and, by doing so, starts the control algorithm for updating the control input. The idea for periodic event-based control is not new [18,19], but Chapter 6 shows that there are still interesting problems to be solved.

Asynchronous components in an event-based feedback loop. In networked systems, clock synchronization poses a severe problem. In contrast, the theory of event-based control assumes that the timing regime is the same for all components considered. In particular, it is often taken for granted that an information packet, which is sent at time tk, arrives at the same time at the controller and that the controller can determine the control input based on precise information about the elapsed sampling time Ts(k) = tk+1 − tk. What happens if there is a time delay in the communication of the data from the event generator toward the control input generator or if the clocks used in both components show different times is illustrated in Figure 1.12. At time tk, the event generator measures the current state x(tk) and sends the state toward the control input generator. Simultaneously, it resets the state xe(tk) of its internal

model of the plant. The control input generator receives the measured information after a time delay of τk at time tk + τk, and the question arises to which value it should now reset the state xs(tk + τk) of its internal model. The aim is to reinstall the synchrony of both internal models, and this requires that the state xs does not get the new value x(tk) but the value x⁺sk, which lies on the trajectory xe(t) of the model that the event generator uses. This strategy requires finding methods for determining the right value for xs such that the equality xs(tk + τk) = xe(tk + τk) holds.

FIGURE 1.12
State reset at time tk or tk + τk. (Left: trajectory xs(t) of the control input generator with the values xk, xsk, and the reset value x⁺sk at time tk + τk; right: trajectory xe of the event generator with the value xe(tk + τk).)

Scheduling. Event-based control reduces the load of the communication network only if the scheduling of the network is taken into account. If a fixed allocation protocol is used, event-based data transmission can only decide whether a data packet should be sent or not, but the empty slot in the packet flow cannot be used by other users of the network. Consequently, one of the main advantages of event-based control compared to periodic sampling becomes effective only if online scheduling methods are applied.

The best performance can be expected if a full-level event-based processing is reached, where sensing, communication, computation, and actuation are all included in the online scheduling [46]. Then, no time delays occur when data packets are waiting to be processed, and no additional delays have to be considered when analyzing the closed-loop system. On the other hand, the more different control loops are closed over the same network, the worse is the average performance, even if event-based control methods are applied [10,22].

A detailed comparison of event-based and time-triggered control has been described in [7,8] for the ALOHA protocol. It shows that the performance of event-based control is better than that of sampled-data control as long as congestion in the communication network does not introduce long time delays.

In [38] it has been shown that event-based triggering is better than periodic control if the loss probability is smaller than 0.25 for independent and identically distributed random packet loss and level-triggered sampling.

The best situation is found in self-triggered control, where the next event time tk+1 is known well in advance and can be used to schedule the processes mentioned above.

Experimental evaluation of event-based control. Theoretical results are only valid as long as the main assumptions of the underlying theory are satisfied. It is, therefore, important that several applications of event-based control strategies have shown that the promises of this control method can, at least partly, be kept. As an example, Figure 1.13 shows the result of an experiment described in [25]. The middle part of the figure shows the tank level and the liquid temperature of a thermo-fluid process, where the "noisy" curves represent measurements, and the other curves the state of the control input generator (1.9). A typical sampling period for this process would be Ts = 10 s, which would lead to about 200 sampling instants in the time interval shown. In the bottom part of the figure, one can see that with event-based sampling only 10 events occur, which leads to a reduction of the sampling effort by a factor of 20. The experiment has been done with industry-like components and shows that event-based control can really reduce the control and communication effort. Similar results are reported in [21,40].

The application of event-based control techniques also leads to important extensions of the methodology, which are necessary for specific areas. In particular, in the majority of papers, event-based control is investigated for linear static feedback, but for set-point following or disturbance attenuation, the usual extensions with integral or other dynamical parts are necessary. Investigations along this line are reported, for example, in [25,39,40] and in Chapter 12.

Finally, it should be mentioned that experiments with periodic and event-based sampling have been the starting point of the current wave of research in event-triggered and self-triggered control. References [3] and [5] have been two of the first papers that demonstrated, with the use of interesting examples, that an adaptation of the sampling rate to the state of the plant or the effect
of the disturbance can not only save sampling efforts but even improve the performance of the closed-loop system.

FIGURE 1.13
Experimental evaluation of event-based state-feedback control. (Top: disturbance d; middle: level x1 in cm and temperature x2 in K, with the measured state x and the generator state xs; bottom: event times t0, ..., t9 over the time interval up to 2000 s.)

1.5 Extension of the Basic Principles and Open Problems

This section summarizes several extensions and open problems and shows that the field of event-based control is rich in interesting problems just waiting to be solved.

The main ideas of event-based control have been surveyed in this chapter for linear systems with static state feedback. With respect to these restrictions, the chapter does not reproduce the state-of-the-art in the expected way, because, from the beginnings of research into this topic, literature was not restricted to linear systems but has tackled many of the problems mentioned above directly for nonlinear systems. For such systems, stability is usually investigated in the sense of input-to-state stability, and the performance is described by gains between the disturbance input and the state or the output.

To assume that the state vector x(t) is measurable is, in contrast, a common assumption, which has to be relaxed for applications. Observers are the usual means for reconstructing the state, and event-based methods can also be applied as outlined in Section 1.3.5 and described in more detail in Chapter 13.

Another important extension to be investigated concerns the computational complexity of event-based control. Literature has posed the problem to find a trade-off between communication and computation in the sense that event-based control with fewer sampling instants usually requires more computation to make full use of the information included in the available data. It is, however, an open problem how the complexity of the event generation and the control input generation can be restricted (e.g., with respect to the restricted computing capabilities in the car industry).

Besides these theoretical issues, the application of the theory of event-based control is still marginal, and experimental evaluations showing where event-based control outperforms the classical periodic sampled-data implementation of continuous control are still to come.

Open problems. The following outlines three lines of research that can give new direction in the field of event-based control.

• Complete event-based control loop. Event-based control has been considered in the literature mainly in the one-degree-of-freedom structure shown in Figure 1.1 with only a feedback part in the loop. In applications, often control techniques with more degrees of freedom are used in order to cope with different control tasks for the same plant. In particular, feedback control to attenuate disturbances is combined with feedforward control for quick reference following and rejection of measurable disturbances. In multi-agent systems, further communications are used to coordinate several subsystems. In such extended structures, several components work and communicate and may do so in an event-based fashion. The question of what sampling strategies are adequate for such systems is quite open.

• Event-based control and quantization. Event- Nonlinear Control Systems, Springer-Verlag, Berlin,
based control has been developed as a means 2008, pp. 127–147.
to cope with bandwidth-restricted networks. An
alternative way is quantization that reduces the [5] K. Aström, B. Bernhardsson, Comparison of peri-
information to be sent over a channel and, hence, odic and event-based sampling for first order
the necessary bandwidth for its communication. stochastic systems, IFAC Congress, Beijing, 1999,
The question is how both methods can be com- pp. 301–306.
bined. Can a higher sampling rate compensate [6] K. Aström, B. Bernhardsson, Comparison of
for the information loss due to quantization? As Riemann and Lebesgue sampling for first order
Chapter 15 shows, level-crossing sampling with a stochastic systems, IEEE Conf. on Decision and Con-
zero-order hold can be equivalently represented as trol, Las Vegas, NV, 2002, pp. 2011–2016.
quantization without sampling. Can one use this
relation for event-based control? [7] R. Blind, F. Allgöwer, Analysis of networked
event-based control with a shared communication
• Protocol design for event-based control. Assume medium: Part I—pure ALOHA; Part II—slotted
that the network protocol can be designed for con- ALOHA, 18th IFAC World Congress, Milan, 2011,
trol purposes. What would be the ideal protocol pp. 10092–10097 and pp. 8830–8835.
for event-based control loops? This question may
seem to be irrelevant because network protocols [8] R. Blind, F. Allgöwer, On the optimal sending rate
will most probably not be designed for control for networked control systems with a shared com-
purposes, because the market is thought to be munication medium, Joint 50th IEEE Conf. on Deci-
too small. However, the answer to this question sion and Control and European Control Conf., Orlando,
can lead to new event-based control mechanisms FL, 2011, pp. 4704–4709.
or configurations, if one compares what would
be optimal for control with what control engi- [9] R. W. Brockett, Minimum attention control, Conf. on
neers get as tools or hardware components for the Decision and Control, San Diego, CA, 1997, pp. 2628–
implementation of their control loops. 2632.

[10] A. Cervin, T. Henningsson, Scheduling of event-


triggered controllers on a shared network, IEEE
Conf. on Decision and Control, Cancun, Mexico, 2008,
pp. 3601–3606.

Acknowledgment [11] C. De Persis, R. Sailer, F. Wirth, On a small-gain


approach to distributed event-triggered control.
This work has been supported by the German Research IFAC Congress, Milan, 2011.
Foundation (DFG) within the priority program, “Control
Theory of Digitally Networked Dynamic Systems.” [12] M. C. F. Donkers, Networked and Event-
Triggered Control Systems, Technische Universiteit
Eindhoven, 2011.

[13] M. C. F. Donkers, M. P. M. H. Heemels, N. van de


Wouw, L. L. Hetel, Stability analysis of networked
Bibliography control systems using a switched linear systems
[1] A. Anta, P. Tabuada, To sample or not to sample: approach, IEEE Trans. Autom. Control, vol. 56,
Self-triggered control of nonlinear systems, IEEE pp. 2101–2115, 2011.
Trans. Autom. Control, vol. 55, pp. 2030–2042, 2010.
[14] M. C. F. Donkers, M. P. M. H. Heemels, Output-
[2] J. Araujo, Design, Implementation and Validation of based event-triggered control with guaranteed
Resource-Aware and Resilient Wireless Networked Con- L∞ -gain and improved and decentralised event-
trol Systems, PhD Thesis, KTH Stockholm, 2014. triggering, IEEE Trans. Autom. Control, vol. 57, no. 6,
pp. 1362–1376, 2012.
[3] K.-E. Arzen, A simple event-based PID controller,
IFAC Congress, Beijing, 1999, pp. 423–428. [15] M. C. F. Donkers, P. Tabuada, M. P. M. H. Heemels,
Minimum attention control for linear systems:
[4] K. Aström, Event based control, In A. Astolfi a linear programming approach, Discrete Event Dyn.
and L. Marconi (Eds.), Analysis and Design of Sys., vol. 24, pp. 199–218, 2011.
Event-Based Control 19

[16] L. Grüne, S. Hirche, O. Junge, P. Koltai, D. Lehmann, J. Lunze, A. Molin, R. Sailer, M. Sigurani, C. Stöcker, F. Wirth, Event-based control, in J. Lunze (Ed.): Control Theory of Digitally Networked Dynamic Systems, Springer-Verlag, Heidelberg, Germany, 2014.

[17] W. Haddad, V. Chellaboina, S. Nersesov, Impulsive and Hybrid Dynamical Systems: Stability, Dissipativity, and Control, Princeton University Press, Princeton, NJ, 2006.

[18] W. P. M. H. Heemels, M. C. F. Donkers, Model-based periodic event-triggered control for linear systems, Automatica, vol. 49, pp. 698–711, 2013.

[19] W. P. M. H. Heemels, J. Sandee, P. P. J. Van den Bosch, Analysis of event-driven controllers for linear systems, Int. J. Control, vol. 81, pp. 571–590, 2007.

[20] W. P. M. H. Heemels, M. C. F. Donkers, A. R. Teel, Periodic event-triggered control for linear systems, IEEE Trans. Autom. Control, vol. 58, pp. 847–861, 2013.

[21] W. P. M. H. Heemels, R. Gorter, A. van Zijl, P. v.d. Bosch, S. Weiland, W. Hendrix, M. Vonder, Asynchronous measurement and control: A case study on motor synchronisation, Control Eng. Prac., vol. 7, pp. 1467–1482, 1999.

[22] T. Henningsson, E. Johannesson, A. Cervin, Sporadic event-based control of first-order linear stochastic systems, Automatica, vol. 44, pp. 2890–2895, 2008.

[23] E. Kofman, J. H. Braslavsky, Level crossing sampling in feedback stabilization under data-rate constraints, IEEE Conf. on Decision and Control, San Diego, CA, 2006, pp. 4423–4428.

[24] D. Lehmann, Event-Based State-Feedback Control, Logos-Verlag, Berlin, 2011.

[25] D. Lehmann, J. Lunze, Extension and experimental evaluation of an event-based state-feedback approach, Control Eng. Prac., vol. 19, pp. 101–112, 2011.

[26] D. Lehmann, J. Lunze, Event-based control with communication delays and packet losses, Int. J. Control, vol. 85, no. 5, pp. 563–577, 2012.

[27] J. Lunze, Feedback Control of Large Scale Systems, Prentice Hall, Upper Saddle River, NJ, 1992.

[28] J. Lunze, F. Lamnabhi-Lagarrigue (Eds.), Handbook of Hybrid Systems Control—Theory, Tools, Applications, Cambridge University Press, Cambridge, UK, 2009.

[29] J. Lunze, D. Lehmann, A state-feedback approach to event-based control, Automatica, vol. 46, pp. 211–215, 2010.

[30] M. Mazo, P. Tabuada, Decentralized event-triggered control over wireless sensor/actuator networks, IEEE Trans. Autom. Control, vol. 56, pp. 2456–2461, 2011.

[31] M. Mazo, A. Anta, P. Tabuada, An ISS self-triggered implementation of linear controllers, Automatica, vol. 46, pp. 1310–1314, 2010.

[32] M. Mazo, M. Cao, Asynchronous decentralized event-triggered control, Automatica, vol. 50, pp. 3197–3203, 2014.

[33] M. Miskowicz, Send-on-delta concept: An event-based data reporting strategy, Sensors, vol. 6, pp. 49–63, 2006.

[34] A. Molin, S. Hirche, On the optimality of certainty equivalence for event-triggered control systems, IEEE Trans. Autom. Control, vol. 58, pp. 470–474, 2013.

[35] A. Molin, S. Hirche, A bi-level approach for the design of event-triggered control systems over a shared network, Discrete Event Dyn. Sys., vol. 24, pp. 153–171, 2014.

[36] P. G. Otanez, J. G. Moyne, D. M. Tilbury, Using deadbands to reduce communication in networked control systems, American Control Conf., Anchorage, AK, 2002, pp. 3015–3020.

[37] M. Rabi, K. H. Johansson, M. Johansson, Optimal stopping for event-triggered sensing and actuation, IEEE Conf. on Decision and Control, Cancun, Mexico, 2008, pp. 3607–3612.

[38] M. Rabi, K. H. Johansson, Scheduling packets for event-triggered control, European Control Conf., Budapest, 2009.

[39] A. Ruiz, J. E. Jimenez, J. Sanchez, S. Dormido, Design of event-based PI-P controllers using interactive tools, Control Eng. Prac., vol. 32, pp. 183–202, 2014.

[40] M. Sigurani, C. Stöcker, L. Grüne, J. Lunze, Experimental evaluation of two complementary decentralized event-based control methods, Control Eng. Prac., vol. 35, pp. 22–34, 2015.
20 Event-Based Control and Signal Processing

[41] C. Stöcker, Event-Based State-Feedback Control of Physically Interconnected Systems, Logos-Verlag, Berlin, 2014.

[42] C. Stöcker, J. Lunze, Event-based control of input-output linearizable systems, IFAC World Congress, 2011, pp. 10062–10067.

[43] C. Stöcker, D. Vey, J. Lunze, Decentralized event-based control: Stability analysis and experimental evaluation, Nonlinear Anal. Hybrid Sys., vol. 10, pp. 141–155, 2013.

[44] P. Tabuada, Event-triggered real-time scheduling of stabilizing control tasks, IEEE Trans. Autom. Control, vol. 52, pp. 1680–1685, 2007.

[45] P. Tallapragada, N. Chopra, Decentralized event-triggering for control of nonlinear systems, IEEE Trans. Autom. Control, vol. 59, pp. 3312–3324, 2014.

[46] M. Velasco, P. Mari, E. Bini, Control-driven tasks: Modeling and analysis, Real-Time Systems Symposium, 2008, pp. 280–290.

[47] X. Wang, M. D. Lemmon, Event-triggered broadcasting across distributed networked control systems, American Control Conf., 2008, pp. 3139–3144.

[48] X. Wang, M. D. Lemmon, Event-triggering in distributed networked systems with data dropouts and delays, in R. Majumdar, P. Tabuada (Eds.), Hybrid Systems: Computation and Control, Springer, 2009, pp. 366–380.

[49] X. Wang, M. D. Lemmon, Event-triggering in distributed networked control systems, IEEE Trans. Autom. Control, vol. 56, pp. 586–601, 2011.

[50] X. Wang, M. Lemmon, Self-triggered feedback control systems with finite-gain L2 stability, IEEE Trans. Autom. Control, vol. 54, pp. 452–467, 2009.

[51] P. V. Zhivoglyadov, R. H. Middleton, Networked control design for linear systems, Automatica, vol. 39, pp. 743–750, 2003.
2
Event-Driven Control and Optimization in Hybrid Systems

Christos G. Cassandras
Boston University
Boston, MA, USA

CONTENTS
2.1 Introduction 21
2.2 A Control and Optimization Framework for Hybrid Systems 23
2.3 IPA: Event-Driven IPA Calculus 25
2.4 IPA Properties 28
2.4.1 Unbiasedness 28
2.4.2 Robustness to Stochastic Model Uncertainties 28
2.4.3 State Trajectory Decomposition 30
2.5 Event-Driven Optimization in Multi-Agent Systems 31
2.6 Conclusions 34
Acknowledgment 34
Bibliography 34

ABSTRACT The event-driven paradigm offers an alternative complementary approach to the time-driven paradigm for modeling, sampling, estimation, control, and optimization. This is largely a consequence of systems being increasingly networked, wireless, and consisting of distributed communicating components. The key idea is that control actions need not be dictated by time steps taken by a “clock”; rather, an action should be triggered by an “event,” which may be a well-defined condition on the system state, including the possibility of a simple time step, or a random state transition. In this chapter, the event-driven paradigm is applied to control and optimization problems encountered in the general setting of hybrid systems where controllers are parameterized and the parameters are adaptively tuned online based on observable data. We present a general approach for evaluating (or estimating in the case of a stochastic system) gradients of performance metrics with respect to various parameters based on the infinitesimal perturbation analysis (IPA) theory originally developed for discrete event systems (DESs) and now adapted to hybrid systems. This results in an “IPA calculus,” which amounts to a set of simple, event-driven iterative equations. The event-driven nature of this approach implies its scalability in the size of an event set, as opposed to the system state space. We also show how the event-based IPA calculus may be used in multi-agent systems for determining optimal agent trajectories without any detailed knowledge of environmental randomness.

2.1 Introduction

The history of modeling and analysis of dynamic systems is founded on the time-driven paradigm provided by a theoretical framework based on differential (or difference) equations. In this paradigm, time is an independent variable, and as it evolves, so does the state of the system. Conceptually, we postulate the existence of an underlying “clock,” and with every “clock tick” a state update is performed, including the case where no change in the state occurs. The methodologies developed for sampling, estimation, communication, control, and optimization of dynamic systems have also evolved based on the same time-driven principle. Advances in digital technologies that occurred in the 1970s and beyond have facilitated the implementation of this paradigm with digital clocks embedded in hardware and used to drive processes for data collection or for the actuation of devices employed for control purposes.

As systems have become increasingly networked, wireless, and distributed, the universal value of this


point of view has understandably come to question. While it is always possible to postulate an underlying clock with time steps dictating state transitions, it may not be feasible to guarantee the synchronization of all components of a distributed system to such a clock, and it is not efficient to trigger actions with every time step when such actions may be unnecessary. The event-driven paradigm offers an alternative, complementary look at modeling, control, communication, and optimization. The key idea is that a clock should not be assumed to dictate actions simply because a time step is taken; rather, an action should be triggered by an “event” specified as a well-defined condition on the system state or as a consequence of environmental uncertainties that result in random state transitions. Observing that such an event could actually be defined to be the occurrence of a “clock tick,” it follows that this framework may in fact incorporate time-driven methods as well. On the other hand, defining the proper “events” requires more sophisticated techniques compared to simply reacting to time steps.

The motivation for this alternative event-driven view is multifaceted. For starters, there are many natural DESs where the only changes in their state are dictated by event occurrences. The Internet is a prime example, where “events” are defined by packet transmissions and receptions at various nodes, causing changes in the contents of various queues. For such systems, a time-driven modeling approach may not only be inefficient, but also potentially erroneous, as it cannot deal with events designed to occur concurrently in time. The development of a rigorous theory for the study of DES in the 1980s (see, e.g., [1–5]) paved the way for event-based models of certain classes of dynamic systems and spurred new concepts and techniques for control and optimization. By the early 1990s, it became evident that many interesting dynamic systems are in fact “hybrid” in nature, i.e., at least some of their state transitions are caused by (possibly controllable) events [6–12]. This has been reinforced by technological advances through which sensing and actuating devices are embedded into systems allowing physical processes to interface with such devices which are inherently event driven. A good example is the modern automobile where an event induced by a device that senses slippery road conditions may trigger the operation of an antilock braking system, thus changing the operating dynamics of the actual vehicle. More recently, the term “cyber–physical system” [13] has emerged to describe the hybrid structure of systems where some components operate as physical processes modeled through time-driven dynamics, while other components (mostly digital devices empowered by software) operate in event-driven mode.

Moreover, many systems of interest are now networked and spatially distributed. In such settings, especially when energy-constrained wireless devices are involved, frequent communication among system components can be inefficient, unnecessary, and sometimes infeasible. Thus, rather than imposing a rigid time-driven communication mechanism, it is reasonable to seek instead to define specific events that dictate when a particular node in a network needs to exchange information with one or more other nodes. In other words, we seek to complement synchronous operating mechanisms with asynchronous ones, which can dramatically reduce communication overhead without sacrificing adherence to design specifications and desired performance objectives. When, in addition, the environment is stochastic, significant changes in the operation of a system are the result of random event occurrences, so that, once again, understanding the implications of such events and reacting to them is crucial. Besides their modeling potential, it is also important to note that event-driven approaches to fundamental processes such as sampling, estimation, and control possess important properties related to variance reduction and robustness of control policies to modeling uncertainties. These properties render them particularly attractive, compared to time-driven alternatives.

While the importance of event-driven behavior in dynamic systems was recognized as part of the development of DES and then hybrid systems, more recently there have been significant advances in applying event-driven methods (also referred to as “event-based” and “event-triggered”) to classical feedback control systems; see [14–18] and references therein. For example, in [15] a controller for a linear system is designed to update control values only when a specific error measure (e.g., for tracking or stabilization purposes) exceeds a given threshold, while refraining from any updates otherwise. It is also shown how such controllers may be tuned and how bounds may be computed in conjunction with known techniques from linear system theory. Trade-offs between interevent times and controller performance are further studied in [19]. As another example, in [18] an event-driven approach termed “self-triggered control” determines instants when the state should be sampled and control actions taken for some classes of nonlinear control systems. Benefits of event-driven mechanisms for estimation purposes are considered in [20,21]. In [20], for instance, an event-based sampling mechanism is studied where a signal is sampled only when measurements exceed a certain threshold, and it is shown that this approach outperforms a classical periodic sampling process at least in the case of some simple systems.

In distributed systems, event-driven mechanisms have the advantage of significantly reducing
Event-Driven Control and Optimization in Hybrid Systems 23

communication among networked components without affecting desired performance objectives (see [22–27]). For instance, Trimpe and D’Andrea [25] consider the problem of estimating the state of a linear system based on information communicated from spatially distributed sensors. In this case, each sensor computes the measurement prediction variance, and the event-driven process of transmitting information is defined by events such that this variance exceeds a given threshold. A scenario where sensors may be subject to malicious attacks is considered in [28], where event-driven methods are shown to lead to computationally advantageous state reconstruction techniques. It should be noted that in all such problems, one can combine event-driven and time-driven methods, as in [24] where a control scheme combining periodic (time-driven) and event-driven control is used for linear systems to update and communicate sensor and actuation data only when necessary in the sense of maintaining a satisfactory closed-loop performance. It is shown that this goal is attainable with a substantial reduction in communication over the underlying network. Along the same lines, combining event-driven and time-driven sensor information, it is shown in [29] that stability can be guaranteed where the former methods alone may fail to do so.

In multi-agent systems, on the other hand, the goal is for networked components to cooperatively maximize (or minimize) a given objective; it is shown in [23] that an event-driven scheme can still achieve the optimization objective while drastically reducing communication (hence, prolonging the lifetime of a wireless network), even when delays are present (as long as they are bounded). Event-driven approaches are also attractive in receding horizon control, where it is computationally inefficient to reevaluate a control value over small time increments as opposed to event occurrences defining appropriate planning horizons for the controller (e.g., see [30]).

In the remainder of this chapter, we limit ourselves to discussing how the event-driven paradigm is applied to control and optimization problems encountered in the general setting of hybrid systems. In particular, we consider a general-purpose control and optimization framework where controllers are parameterized and the parameters are adaptively tuned online based on observable data. One way to systematically carry out this process is through gradient information pertaining to given performance measures with respect to these parameters, so as to iteratively adjust their values. When the environment is stochastic, this entails generating gradient estimates with desirable properties such as unbiasedness. This gradient evaluation/estimation approach is based on the IPA theory [1,31] originally developed for DES and now adapted to hybrid systems, where it results in an “IPA calculus” [32], which amounts to a set of simple, event-driven iterative equations. In this approach, the gradient evaluation/estimation procedure is based on directly observable data, and it is entirely event driven. This makes it computationally efficient, since it reduces a potentially complex process to a finite number of actions. More importantly perhaps, this approach has two key benefits that address the need for scalable methods in large-scale systems and the difficulty of obtaining accurate models especially in stochastic settings. First, being event driven, it is scalable in the size of the event space and not the state space of the system model. As a rule, the former is much smaller than the latter. Second, it can be shown that the gradient information is often independent of model parameters, which may be unknown or hard to estimate. In stochastic environments, this implies that complex control and optimization problems can be solved with little or no knowledge of the noise or random processes affecting the underlying system dynamics.

This chapter is organized as follows. A general online control and optimization framework for hybrid systems is presented in Section 2.2, whose centerpiece is a methodology used for evaluating (or estimating in the stochastic case) a gradient of an objective function with respect to controllable parameters. This event-driven methodology, based on IPA, is described in Section 2.3. In Section 2.4, three key properties of IPA are presented and illustrated through examples. In Section 2.5, an application to multi-agent systems is given. In particular, we consider cooperating agents that carry out a persistent monitoring mission in simple one-dimensional environments and formulate this mission as an optimal control problem. Its solution results in agents operating as hybrid systems with parameterized trajectories. Thus, using the event-based IPA calculus, we describe how optimal trajectories can be obtained online without any detailed knowledge of environmental randomness.

2.2 A Control and Optimization Framework for Hybrid Systems

A hybrid system consists of both time-driven and event-driven components [33]. The modeling, control, and optimization of these systems is quite challenging. In particular, the performance of a stochastic hybrid system (SHS) is generally hard to estimate because of the absence of closed-form expressions capturing the dependence of interesting performance metrics on various design or control parameters. Most approaches rely on

approximations and/or using computationally taxing methods, often involving dynamic programming techniques. The inherent computational complexity of these approaches, however, makes them unsuitable for online control and optimization. Yet, in some cases, the structure of a dynamic optimization problem solution can be shown to be of parametric form, thus reducing it to a parametric optimization problem. As an example, in a linear quadratic Gaussian setting, optimal feedback policies simply depend on gain parameters to be selected subject to certain constraints. Even when this is not provably the case, one can still define parametric families of solutions which can be optimized and yield near-optimal or at least vastly improved solutions relative to ad hoc policies often adopted. For instance, it is common in solutions based on dynamic programming [34] to approximate cost-to-go functions through parameterized function families and then iterate over the parameters involved seeking near-optimal solutions for otherwise intractable problems.

With this motivation in mind, we consider a general-purpose framework as shown in Figure 2.1. The starting point is to assume that we can observe state trajectories of a given hybrid system and measure a performance (or cost) metric denoted by L(θ), where θ is a parameter vector. This vector characterizes a controller (as shown in Figure 2.1) but may also include design or model parameters. The premise here is that the system is too complex for a closed-form expression of L(θ) to be available, but that it is possible to measure it over a given time window. In the case of a stochastic environment, the observable state trajectory is a sample path of a SHS, so that L(θ) is a sample function, and performance is measured through E[L(θ)] with the expectation defined in the context of a suitable probability space. In addition to L(θ), we assume that all or part of the system state is observed, with possible noisy measurements. Thus, randomness may enter through the system process or the measurement process or both.

The next step in Figure 2.1 is the evaluation of the gradient ∇L(θ). In the stochastic case, ∇L(θ) is a random variable that serves as an estimate (obtained over a given time window) of ∇E[L(θ)]. Note that we require ∇L(θ) to be evaluated based on available data observed from a single state trajectory (or sample path) of the hybrid system. This is in contrast to standard derivative approximation or estimation methods for dL(θ)/dθ based on finite differences of the form [L(θ + Δθ) − L(θ)]/Δθ. Such methods require two state trajectories under θ and θ + Δθ, respectively, and are vulnerable to numerical problems when Δθ is selected to be small so as to increase the accuracy of the derivative approximation.

The final step then is to make use of ∇L(θ) in a gradient-based adaptation mechanism of the general form θ_{n+1} = θ_n + η_n∇L(θ_n), where n = 1, 2, . . . counts the iterations over which this process evolves, and {η_n} is a step size sequence which is appropriately selected to ensure convergence of the controllable parameter sequence {θ_n} under proper stationarity assumptions. After each iteration, the controller is adjusted, which obviously affects the behavior of the system, and the process repeats. Clearly, in a stochastic setting there is no guarantee of stationarity conditions, and this framework is simply one where the controller is perpetually seeking to improve system performance.

The cornerstone of this online framework is the evaluation of ∇L(θ) based only on data obtained from the observed state trajectory. The theory of IPA [32,35] provides the foundations for this to be possible. Moreover, in the stochastic case where ∇L(θ) becomes an estimate of ∇E[L(θ)], it is important that this estimate possess desirable properties such as unbiasedness, without which the ultimate goal of achieving optimality cannot be provably attained. As we see in the next section, it is possible to evaluate ∇L(θ) for virtually arbitrary SHS through a simple systematic event-driven procedure we refer to as the “IPA calculus.” In addition, this gradient is characterized by several attractive properties under mild technical conditions.

In order to formally apply IPA and subsequent control and optimization methods to hybrid systems, we need to establish a general modeling framework. We use a standard definition of a hybrid automaton [33].
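The numerical fragility of two-trajectory finite-difference estimates noted in this section is easy to see in a toy setting (an illustrative assumption, not a system from this chapter): take a cost L(θ) = (θ − x)² measured under noise x ~ N(3, 1), so that dE[L]/dθ = 2(θ − 3), and let every evaluation of L use an independent trajectory, as a finite-difference scheme requires.

```python
import random
import statistics

# Toy illustration (hypothetical setup, not from this chapter): each call
# to L() measures the cost on an independent noisy trajectory, as a
# two-trajectory finite-difference scheme must.  At theta = 0 the true
# derivative is dE[L]/dtheta = 2*(theta - 3) = -6.
random.seed(0)

def L(theta):
    x = 3.0 + random.gauss(0.0, 1.0)   # one independent sample path
    return (theta - x) ** 2

theta = 0.0
spread = {}
for dtheta in (1.0, 0.1, 0.01):
    est = [(L(theta + dtheta) - L(theta)) / dtheta for _ in range(2000)]
    spread[dtheta] = statistics.stdev(est)
    print(dtheta, round(spread[dtheta], 1))
# The spread of the estimate grows roughly like 1/dtheta: shrinking
# dtheta to reduce the approximation bias makes the two-trajectory
# estimate drastically noisier, exactly the trade-off described above.
```

A single-trajectory (IPA-style) estimate avoids this trade-off because no difference of two independent noisy measurements is ever formed.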

FIGURE 2.1
Online control and optimization framework for hybrid systems. [Block diagram: the controller (parameterized by θ) drives the hybrid system under noise; the observed state feeds a gradient evaluation block producing ∇L(θ); the update θ_{n+1} = θ_n + η_n∇L(θ_n) adjusts the controller; the measured performance is E[L(θ)].]
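The loop of Figure 2.1 can be written out for a toy problem (an illustrative assumption, not the chapter's system): a cost whose sample-path gradient is available from the very trajectory that produced the measurement, fed into the iteration θ_{n+1} = θ_n − η_n∇L(θ_n) (descent form, since here a cost is minimized).

```python
import random

# Toy stand-in for the loop of Figure 2.1 (illustrative assumption, not
# the chapter's system): noisy observations x hide an unknown optimum
# a = 3, the observed cost is L(theta) = (theta - x)^2, and the
# sample-path gradient dL/dtheta = 2*(theta - x) comes from the SAME
# trajectory that produced L, with no second run at theta + dtheta.
random.seed(1)
a = 3.0                              # unknown optimum buried in the noise
theta = 0.0
for n in range(1, 5001):
    x = a + random.gauss(0.0, 1.0)   # one observed sample path
    grad = 2.0 * (theta - x)         # unbiased single-trajectory gradient
    eta = 1.0 / n                    # diminishing step size sequence
    theta -= eta * grad              # theta_{n+1} = theta_n - eta_n * grad
print(round(theta, 2))               # settles near the optimum a = 3
```

The step sizes η_n = 1/n satisfy the usual Robbins–Monro conditions, which is what the text means by a step size sequence "appropriately selected to ensure convergence."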

Thus, let q ∈ Q (a countable set) denote the discrete state (or mode) and x ∈ X ⊆ R^n denote the continuous state of the hybrid system. Let υ ∈ Υ (a countable set) denote a discrete control input and u ∈ U ⊆ R^m a continuous control input. Similarly, let δ ∈ Δ (a countable set) denote a discrete disturbance input and d ∈ D ⊆ R^p a continuous disturbance input. The state evolution is determined by means of

• A vector field f : Q × X × U × D → X

• An invariant (or domain) set Inv : Q × Υ × Δ → 2^X

• A guard set Guard : Q × Q × Υ × Δ → 2^X

• A reset function r : Q × Q × X × Υ × Δ → X

A trajectory or sample path of such a system consists of a sequence of intervals of continuous evolution followed by a discrete transition. The system remains at a discrete state q as long as the continuous (time-driven) state x does not leave the set Inv(q, υ, δ). If, before reaching Inv(q, υ, δ), x reaches a set Guard(q, q′, υ, δ) for some q′ ∈ Q, a discrete transition is allowed to take place. If this transition does take place, the state instantaneously resets to (q′, x′), where x′ is determined by the reset map r(q, q′, x, υ, δ). Changes in the discrete controls υ and disturbances δ are discrete events that either enable a transition from q to q′ when x ∈ Guard(q, q′, υ, δ) or force a transition out of q by making sure x ∉ Inv(q, υ, δ). We also use E to denote the set of all events that cause discrete state transitions and will classify events in a manner that suits the purposes of perturbation analysis. In what follows, we provide an overview of the “IPA calculus” and refer the reader to [32] and [36] for more details.

2.3 IPA: Event-Driven IPA Calculus

In this section, we describe the general framework for IPA as presented in [37] and generalized in [32] and [36]. Let θ ∈ Θ ⊂ R^l be a global variable, henceforth called the control parameter, where Θ is a given compact, convex set. This may include system design parameters, parameters of an input process, or parameters that characterize a policy used in controlling this system. The disturbance input d ∈ D encompasses various random processes that affect the evolution of the state (q, x) so that, in general, we can deal with a SHS. We will assume that all such processes are defined over a common probability space, (Ω, F, P). Let us fix a particular value of the parameter θ ∈ Θ and study a resulting sample path of the SHS. Over such a sample path, let τ_k(θ), k = 1, 2, . . . , denote the occurrence times of the discrete events in increasing order, and define τ_0(θ) = 0 for convenience. We will use the notation τ_k instead of τ_k(θ) when no confusion arises. The continuous state is also generally a function of θ, as well as of t, and is thus denoted by x(θ, t). Over an interval [τ_k(θ), τ_{k+1}(θ)), the system is at some mode during which the time-driven state satisfies:

ẋ = f_k(x, θ, t),   (2.1)

where ẋ denotes ∂x/∂t. Note that we suppress the dependence of f_k on the inputs u ∈ U and d ∈ D and stress instead its dependence on the parameter θ which may generally affect either u or d or both. The purpose of perturbation analysis is to study how changes in θ influence the state x(θ, t) and the event times τ_k(θ) and, ultimately, how they influence interesting performance metrics that are generally expressed in terms of these variables. The following assumption guarantees that (2.1) has a unique solution w.p.1 for a given initial boundary condition x(θ, τ_k) at time τ_k(θ).

ASSUMPTION 2.1 W.p.1, there exists a finite set of points t_j ∈ [τ_k(θ), τ_{k+1}(θ)), j = 1, 2, . . . , which are independent of θ, such that the function f_k is continuously differentiable on R^n × Θ × ([τ_k(θ), τ_{k+1}(θ)) \ {t_1, t_2, . . .}). Moreover, there exists a random number K > 0 such that E[K] < ∞ and the norm of the first derivative of f_k on R^n × Θ × ([τ_k(θ), τ_{k+1}(θ)) \ {t_1, t_2, . . .}) is bounded from above by K.

An event occurring at time τ_{k+1}(θ) triggers a change in the mode of the system, which may also result in new dynamics represented by f_{k+1}, although this may not always be the case; for example, two modes may be distinct because the state x(θ, t) enters a new region where the system’s performance is measured differently without altering its time-driven dynamics (i.e., f_{k+1} = f_k). The event times {τ_k(θ)} play an important role in defining the interactions between the time-driven and event-driven dynamics of the system.

We now classify events that define the set E as follows:

• Exogenous: An event is exogenous if it causes a discrete state transition at time τ_k independent of the controllable vector θ and satisfies dτ_k/dθ = 0. Exogenous events typically correspond to uncontrolled random changes in input processes.

• Endogenous: An event occurring at time τ_k is endogenous if there exists a continuously differentiable function g_k : R^n × Θ → R such that

τ_k = min{t > τ_{k−1} : g_k(x(θ, t), θ) = 0}.   (2.2)
The function gk is normally associated with an invariant or a guard condition in a hybrid automaton model.

• Induced: An event at time τk is induced if it is triggered by the occurrence of another event at time τm ≤ τk. The triggering event may be exogenous, endogenous, or itself an induced event. The events that trigger induced events are identified by a subset of the event set, E_I ⊆ E.

Although this event classification is sufficiently general, recent work has shown that in some cases, it is convenient to introduce further event distinctions [38]. Moreover, it has been shown in [36] that an explicit event classification is in fact unnecessary if one is willing to appropriately extend the definition of the hybrid automaton described earlier. However, for the rest of this chapter, we only make use of the above classification.

Next, consider a performance function of the control parameter θ:

J(θ; x(θ, 0), T) = E[L(θ; x(θ, 0), T)],

where L(θ; x(θ, 0), T) is a sample function of interest evaluated in the interval [0, T] with initial conditions x(θ, 0). For simplicity, we write J(θ) and L(θ). Suppose that there are N events, with occurrence times generally dependent on θ, during the time interval [0, T], and define τ0 = 0 and τN+1 = T. Let Lk : R^n × Θ × R+ → R be a function satisfying Assumption 2.1 and define L(θ) by

L(θ) = Σ_{k=0}^{N} ∫_{τk}^{τk+1} Lk(x, θ, t) dt, (2.3)

where we reiterate that x = x(θ, t) is a function of θ and t. We also point out that the restriction of the definition of J(θ) to a finite horizon T which is independent of θ is made merely for the sake of simplicity.

Returning to Figure 2.1 and considering (for the sake of generality) the stochastic setting, the ultimate goal of the iterative process shown is to maximize Eω[L(θ, ω)], where we use ω to emphasize dependence on a sample path ω of a SHS (clearly, this is reduced to L(θ) in the deterministic case). Achieving such optimality is possible under standard ergodicity conditions imposed on the underlying stochastic processes, as well as the assumption that a single global optimum exists; otherwise, the gradient-based approach is simply continuously attempting to improve the observed performance L(θ, ω). Thus, we are interested in estimating the gradient

dJ(θ)/dθ = dEω[L(θ, ω)]/dθ,

by evaluating dL(θ, ω)/dθ based on directly observed data. We obtain θ* (under the conditions mentioned above) by optimizing J(θ) through an iterative scheme of the form

θ_{n+1} = θ_n − η_n H_n(θ_n; x(θ, 0), T, ω_n), n = 0, 1, . . . , (2.4)

where {η_n} is a step size sequence and H_n(θ_n; x(θ, 0), T, ω_n) is the estimate of dJ(θ)/dθ at θ = θ_n. In using IPA, H_n(θ_n; x(θ, 0), T, ω_n) is the sample derivative dL(θ, ω)/dθ, which is an unbiased estimate of dJ(θ)/dθ if the condition (dropping the symbol ω for simplicity)

E[dL(θ)/dθ] = (d/dθ) E[L(θ)] = dJ(θ)/dθ, (2.5)

is satisfied, which turns out to be the case under mild technical conditions to be discussed later. The conditions under which algorithms of the form (2.4) converge are well known (e.g., see [39]). Moreover, in addition to being unbiased, it can be shown that such gradient estimates are independent of the probability laws of the stochastic processes involved and require minimal information from the observed sample path.

The process through which IPA evaluates dL(θ)/dθ is based on analyzing how changes in θ influence the state x(θ, t) and the event times τk(θ). In turn, this provides information on how L(θ) is affected, because it is generally expressed in terms of these variables. Given θ = [θ1, . . . , θℓ]^T, we use the Jacobian matrix notation:

x′(θ, t) ≡ ∂x(θ, t)/∂θ,  τ′k ≡ ∂τk(θ)/∂θ,  k = 1, . . . , K,

for all state and event time derivatives. For simplicity of notation, we omit θ from the arguments of the functions above unless it is essential to stress this dependence. It is shown in [32] that x′(t) satisfies

(d/dt) x′(t) = (∂fk(t)/∂x) x′(t) + ∂fk(t)/∂θ, (2.6)

for t ∈ [τk(θ), τk+1(θ)), with boundary condition

x′(τ+k) = x′(τ−k) + [fk−1(τ−k) − fk(τ+k)] τ′k, (2.7)

for k = 0, . . . , K. We note that whereas x(t) is often continuous in t, x′(t) may be discontinuous in t at the event times τk; hence, the left and right limits above are generally different. If x(t) is not continuous in t at t = τk(θ), the value of x(τ+k) is determined by the reset function r(q, q′, x, υ, δ) discussed earlier and

x′(τ+k) = dr(q, q′, x, υ, δ)/dθ. (2.8)

Furthermore, once the initial condition x′(τ+k) is given, the linearized state trajectory {x′(t)} can be computed
in the interval t ∈ [τk(θ), τk+1(θ)) by solving (2.6) to obtain

x′(t) = e^{∫_{τk}^{t} (∂fk(u)/∂x) du} [ ∫_{τk}^{t} (∂fk(v)/∂θ) e^{−∫_{τk}^{v} (∂fk(u)/∂x) du} dv + ξk ], (2.9)

with the constant ξk determined from x′(τ+k) in either (2.7) or (2.8).

In order to complete the evaluation of x′(τ+k) in (2.7), we need to also determine τ′k. Based on the event classification above, τ′k = 0 if the event at τk(θ) is exogenous, and

τ′k = −[(∂gk/∂x) fk(τ−k)]^{−1} (∂gk/∂θ + (∂gk/∂x) x′(τ−k)), (2.10)

if the event at τk(θ) is endogenous, that is, gk(x(θ, τk), θ) = 0, defined as long as (∂gk/∂x) fk(τ−k) ≠ 0. (Details may be found in [32].) Finally, if an induced event occurs at t = τk and is triggered by an event at τm ≤ τk, the value of τ′k depends on the derivative τ′m. The event induced at τm will occur at some time τm + w(τm), where w(τm) is a (generally random) variable which is dependent on the continuous and discrete states x(τm) and q(τm), respectively. This implies the need for additional state variables, denoted by ym(θ, t), m = 1, 2, . . ., associated with events occurring at times τm, m = 1, 2, . . . . The role of each such state variable is to provide a "timer" activated when a triggering event occurs. Triggering events are identified as belonging to a set E_I ⊆ E. Letting ek denote the event occurring at τk, define Λk = {m : em ∈ E_I, m ≤ k} to be the set of all indices with corresponding triggering events up to τk. Omitting the dependence on θ for simplicity, the dynamics of ym(t) are then given by

ẏm(t) = −C(t), if τm ≤ t < τm + w(τm), m ∈ Λm; ẏm(t) = 0, otherwise, (2.11)
ym(τ+m) = y0, if ym(τ−m) = 0, m ∈ Λm; ym(τ+m) = 0, otherwise,

where y0 is an initial value for the timer ym(t), which decreases at a "clock rate" C(t) > 0 until ym(τm + w(τm)) = 0 and the associated induced event takes place. Clearly, these state variables are only used for induced events, so that ym(t) = 0 unless m ∈ Λm. The value of y0 may depend on θ or on the continuous and discrete states x(τm) and q(τm), while the clock rate C(t) may depend on x(t) and q(t) in general, and possibly θ. However, in most simple cases where we are interested in modeling an induced event to occur at time τm + w(τm), we have y0 = w(τm) and C(t) = 1—that is, the timer simply counts down for a total of w(τm) time units until the induced event takes place. Henceforth, we will consider ym(t), m = 1, 2, . . ., as part of the continuous state of the SHS, and we set

y′m(t) ≡ ∂ym(t)/∂θ, m = 1, . . . , N. (2.12)

For the common case where y0 is independent of θ and C(t) is a constant c > 0 in (2.11), Lemma 2.1 facilitates the computation of τ′k for an induced event occurring at τk. Its proof is given in [32].

Lemma 2.1

If, in (2.11), y0 is independent of θ and C(t) = c > 0 (constant), then τ′k = τ′m.

With the inclusion of the state variables ym(t), m = 1, . . . , N, the derivatives x′(t), τ′k, and y′m(t) can be evaluated through (2.6)–(2.11), and this set of equations is what we refer to as the "IPA calculus." In general, this evaluation is recursive over the event (mode switching) index k = 0, 1, . . . . In other words, the IPA estimation process is entirely event driven. For a large class of problems, the SHS of interest does not involve induced events, and the state does not experience discontinuities when a mode-switching event occurs. In this case, the IPA calculus reduces to the application of three equations:

1. Equation 2.9:

x′(t) = e^{∫_{τk}^{t} (∂fk(u)/∂x) du} [ ∫_{τk}^{t} (∂fk(v)/∂θ) e^{−∫_{τk}^{v} (∂fk(u)/∂x) du} dv + ξk ],

which describes how the state derivative x′(t) evolves over [τk(θ), τk+1(θ)).

2. Equation 2.7:

x′(τ+k) = x′(τ−k) + [fk−1(τ−k) − fk(τ+k)] τ′k,

which specifies the initial condition ξk in (2.9).

3. Either τ′k = 0 or Equation 2.10:

τ′k = −[(∂gk/∂x) fk(τ−k)]^{−1} (∂gk/∂θ + (∂gk/∂x) x′(τ−k)),

depending on the event type at τk(θ), which specifies the event time derivative present in (2.7).

From a computational standpoint, the IPA derivative evaluation process takes place iteratively at each event defining a mode transition at some time instant τk(θ). At this point in time, we have at our disposal the value of x′(τ+k−1) from the previous iteration, which specifies ξk−1
in (2.9) applied for all t ∈ [τk−1(θ), τk(θ)). Therefore, setting t = τk(θ) in (2.9), we also have at our disposal the value of x′(τ−k). Next, depending on whether the event is exogenous or endogenous, the value of τ′k can be obtained: it is either τ′k = 0 or given by (2.10), since x′(τ−k) is known. Finally, we obtain x′(τ+k) using (2.7). At this point, one can wait until the next event occurs at τk+1(θ) and repeat the process which can, therefore, be seen to be entirely event driven.

The last step in the IPA process involves using the IPA calculus in order to evaluate the IPA derivative dL/dθ. This is accomplished by taking derivatives in (2.3) with respect to θ:

dL(θ)/dθ = Σ_{k=0}^{N} (d/dθ) ∫_{τk}^{τk+1} Lk(x, θ, t) dt. (2.13)

Applying the Leibnitz rule, we obtain, for every k = 0, . . . , N,

(d/dθ) ∫_{τk}^{τk+1} Lk(x, θ, t) dt = ∫_{τk}^{τk+1} [ (∂Lk/∂x)(x, θ, t) x′(t) + (∂Lk/∂θ)(x, θ, t) ] dt + Lk(x(τk+1), θ, τk+1) τ′k+1 − Lk(x(τk), θ, τk) τ′k, (2.14)

where x′(t) and τ′k are determined through (2.6)–(2.10). What makes IPA appealing is the simple form the right-hand side in Equation 2.14 often assumes. As we will see, under certain commonly encountered conditions, this expression is further simplified by eliminating the integral term.

2.4 IPA Properties

In this section, we identify three key properties of IPA. The first one is important in ensuring that when IPA involves estimates of gradients, these estimates are unbiased under mild conditions. The second is a robustness property of IPA derivatives in the sense that they do not depend on specific probabilistic characterizations of any stochastic processes involved in the hybrid automaton model of a SHS. This property holds under certain sufficient conditions which are easy to check. Finally, under conditions pertaining to the switching function gk(x, θ), which we have used to define endogenous events, the event-driven IPA derivative evaluation or estimation process includes some events that have the property of allowing us to decompose an observed state trajectory into cycles, thus greatly simplifying the overall computational effort.

2.4.1 Unbiasedness

We begin by returning to the issue of unbiasedness of the sample derivatives dL(θ)/dθ derived using the IPA calculus described in the last section. In particular, the IPA derivative dL(θ)/dθ is an unbiased estimate of the performance (or cost) derivative dJ(θ)/dθ if the condition (2.5) holds. In a pure DES, the IPA derivative satisfies this condition for a relatively limited class of systems (see [1,31]). This has motivated the development of more sophisticated perturbation analysis methods that can still guarantee unbiasedness at the expense of additional information to be collected from the observed sample path or additional assumptions regarding the statistical properties of some of the random processes involved. However, in a SHS, the technical conditions required to guarantee the validity of (2.5) are almost always applicable.

The following result has been established in [40] regarding the unbiasedness of IPA:

Theorem 2.1

Suppose that the following conditions are in force: (1) For every θ ∈ Θ, the derivative dL(θ)/dθ exists w.p.1. (2) W.p.1, the function L(θ) is Lipschitz continuous on Θ, and the Lipschitz constant has a finite first moment. Then, for a fixed θ ∈ Θ, the derivative dJ(θ)/dθ exists, and the IPA derivative dL(θ)/dθ is unbiased.

The crucial assumption for Theorem 2.1 is the continuity of the sample function L(θ), which in many SHSs is guaranteed in a straightforward manner. Differentiability w.p.1 at a given θ ∈ Θ often follows from mild technical assumptions on the probability law underlying the system, such as the exclusion of co-occurrence of multiple events (see [41]). Lipschitz continuity of L(θ) generally follows from upper boundedness of |dL(θ)/dθ| by an absolutely integrable random variable, generally a weak assumption. In light of these observations, the proofs of unbiasedness of IPA have become standardized, and the assumptions in Theorem 2.1 can be verified fairly easily from the context of a particular problem.

2.4.2 Robustness to Stochastic Model Uncertainties

Next, we turn our attention to properties of the estimators obtained through the IPA calculus which render them, under certain conditions, particularly simple and efficient to implement with minimal information required about the underlying SHS dynamics.
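Since these estimators are built from the event-driven recursion (2.9), (2.10), and (2.7), it may help to see one step of that recursion carried out numerically. The sketch below is not from the chapter: the two-mode system (ẋ = θ in the first mode, ẋ = −1 in the second, with an endogenous switch when g(x, θ) = x − c = 0) and all names are invented for illustration.

```python
# Toy SHS (hypothetical, for illustration only):
#   mode 0: xdot = theta  =>  df/dx = 0, df/dtheta = 1
#   mode 1: xdot = -1
# Endogenous event when g(x, theta) = x - c = 0, starting from x(0) = 0.

def ipa_toy(theta, c=1.0):
    # Mode 0: (2.6) with df/dx = 0, df/dtheta = 1, and x'(0) = 0 gives
    # x'(t) = t, which is (2.9) with xi_0 = 0.
    tau1 = c / theta                      # event time: x(tau1) = c
    x_prime_minus = tau1                  # x'(tau1^-)
    # (2.10) with dg/dx = 1, dg/dtheta = 0, f_0(tau1^-) = theta:
    tau1_prime = -(1.0 / theta) * (0.0 + x_prime_minus)
    # (2.7) with f_1(tau1^+) = -1:
    x_prime_plus = x_prime_minus + (theta - (-1.0)) * tau1_prime
    return tau1_prime, x_prime_plus

# Analytically tau1 = c/theta and, in mode 1, x(theta, t) = c + c/theta - t,
# so both derivatives equal -c/theta**2; the recursion reproduces this.
tau1_prime, x_prime_plus = ipa_toy(theta=2.0, c=1.0)
assert abs(tau1_prime - (-0.25)) < 1e-12
assert abs(x_prime_plus - (-0.25)) < 1e-12
```

The same three-step pattern repeats event by event, with each x′(τ+k) seeding the constant ξk of (2.9) on the next inter-event interval.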
The first question we address is related to dL(θ)/dθ in (2.13), which, as seen in (2.14), generally depends on information accumulated over all t ∈ [τk, τk+1). It is, however, often the case that it depends only on information related to the event times τk, τk+1, resulting in an IPA estimator that is simple to implement. Using the notation L′k(x, t, θ) ≡ dLk(x, t, θ)/dθ, we can rewrite dL(θ)/dθ in (2.13) as

dL(θ)/dθ = Σ_k [ τ′k+1 · Lk(τ+k+1) − τ′k · Lk(τ+k) + ∫_{τk}^{τk+1} L′k(x, t, θ) dt ]. (2.15)

The following theorem provides two sufficient conditions under which dL(θ)/dθ involves only the event time derivatives τ′k, τ′k+1 and the "local" performance Lk(τ+k+1), Lk(τ+k), which is obviously easy to observe. The proof of this result is given in [42].

Theorem 2.2

If condition (C1) or (C2) below holds, then dL(θ)/dθ depends only on information available at event times {τk}, k = 0, 1, . . . .

(C1) Lk(x, t, θ) is independent of t over [τk, τk+1) for all k = 0, 1, . . . .

(C2) Lk(x, t, θ) is only a function of x, and the following condition holds for all t ∈ [τk, τk+1), k = 0, 1, . . .:

(d/dt)(∂Lk/∂x) = (d/dt)(∂fk/∂x) = (d/dt)(∂fk/∂θ) = 0. (2.16)

The implication of Theorem 2.2 is that (2.15), under either (C1) or (C2), reduces to

dL(θ)/dθ = Σ_k [τ′k+1 · Lk(τ+k+1) − τ′k · Lk(τ+k)],

and involves only directly observable performance sample values at event times along with event time derivatives which are either zero (for exogenous events) or given by (2.10). The conditions in Theorem 2.2 are surprisingly easy to satisfy, as the following example illustrates.

EXAMPLE 2.1 Consider a SHS whose time-driven dynamics at all modes are linear and of the form

ẋ = ak x(t) + bk uk(θ, t) + wk(t), t ∈ [τk−1(θ), τk(θ)),

where uk(θ, t) is a control used in the system mode over [τk(θ), τk+1(θ)), which depends on a parameter θ, and wk(t) is some random process for which no further information is provided. Writing fk = ak x(t) + bk uk(θ, t) + wk(t), we can immediately see that ∂fk/∂x = ak and ∂fk/∂θ = ∂uk(θ, t)/∂θ; hence, the second of the three parts of (C2) is satisfied—that is, (d/dt)(∂fk/∂x) = 0. Further, suppose that the dependence of uk(θ, t) on t is such that ∂uk(θ, t)/∂θ is also independent of t; this is true, for instance, if uk(θ, t) = uk(θ), that is, the control is fixed at that mode, or if uk(θ, t) = γ(θ)t, in which case (d/dt)(∂fk/∂θ) = 0, and the last part of (C2) is also satisfied. Finally, consider a performance metric of the form

J(θ) = E[ Σ_{k=0}^{N} ∫_{τk}^{τk+1} Lk(x, θ, t) dt ] = E[ Σ_{k=0}^{N} ∫_{τk}^{τk+1} x(t) dt ],

where we have ∂Lk/∂x = 1, thus satisfying also the first part of (C2). It is worthwhile pointing out that the IPA calculus here provides unbiased estimates of dJ(θ)/dθ without any information regarding the noise process wk(t). Although this seems surprising at first, the fact is that the effect of the noise is captured through the values of the observable event times τk(θ) and the observed performance values Lk(τ+k) at these event times only: modeling information about wk(t) is traded against observations made online at event times only. In other words, while the noise information is crucial if one is interested in the actual performance ∫_{τk}^{τk+1} Lk(x, θ, t) dt over an interval [τk−1(θ), τk(θ)), such information is not always required to estimate the sensitivity of the performance ∫_{τk}^{τk+1} Lk(x, θ, t) dt with respect to θ.

We refer to the property reflected by Theorem 2.2 as "robustness" of IPA derivative estimators with respect to any noise process affecting the time-driven dynamics of the system. Clearly, that would not be the case if, for instance, the performance metric involved x²(t) instead of x(t); then, ∂Lk/∂x = 2x(t), and the integral term in (2.15) would have to be included in the evaluation of dL(θ)/dθ. Although this increases the computational burden of the IPA evaluation procedure and requires the collection of sample data for wk(t), note that it still requires no prior modeling information regarding this random process. Thus, one need not have a detailed model (captured by fk−1) to describe the state behavior through ẋ = fk−1(x, θ, t), t ∈ [τk−1, τk), in order to estimate the effect of θ on this behavior. This explains why simple abstractions of a complex stochastic system are often adequate to perform sensitivity analysis and optimization, as long as the event times corresponding to discrete state transitions are accurately observed and the local system behavior at these event times, for example, x′(τ+k) in (2.7), can also be measured or calculated.
2.4.3 State Trajectory Decomposition

The final IPA property we discuss is related to the discontinuity in x′(t) at event times, described in (2.7). This happens when endogenous events occur, since for exogenous events we have τ′k = 0. The next theorem identifies a simple condition under which x′(τ+k) is independent of the dynamics f before the event at τk. This implies that we can evaluate the sensitivity of the state with respect to θ without any knowledge of the state trajectory in the interval [τk−1, τk) prior to this event. Moreover, under an additional condition, we obtain x′(τ+k) = 0, implying that the effect of θ is "forgotten," and one can reset the perturbation process. This allows us to decompose an observed state trajectory (or sample path) into "reset cycles," greatly simplifying the IPA process. The proof of the next result is also given in [42].

Theorem 2.3

Suppose an endogenous event occurs at τk(θ) with a switching function g(x, θ). If fk(τ+k) = 0, then x′(τ+k) is independent of fk−1. If, in addition, ∂g/∂θ = 0, then x′(τ+k) = 0.

The condition fk(τ+k) = 0 typically indicates a saturation effect or the state reaching a boundary that cannot be crossed, for example, when the state is constrained to be nonnegative. This often arises in stochastic flow systems used to model how parts are processed in manufacturing systems or how packets are transmitted and received through a communication network [43,44]. In such cases, the conditions of both Theorems 2.1 and 2.2 are frequently satisfied, since (1) common performance metrics such as workload or overflow rates satisfy (2.16), and (2) flow systems involve nonnegative continuous states and are constrained by capacities that give rise to dynamics of the form ẋ = 0. This class of SHS is also referred to as stochastic flow models, and the simplicity of the IPA derivatives in this case has been thoroughly analyzed, for example, see [35,45]. We present an illustrative example below.

FIGURE 2.2
A simple fluid queue system for Example 2.2.

FIGURE 2.3
A typical sample path of the system in Figure 2.2.

EXAMPLE 2.2 Consider the fluid single-queue system shown in Figure 2.2, where the arrival-rate process {α(t)} and the service-rate process {β(t)} are random processes (possibly correlated) defined on a common probability space. The queue has a finite buffer, {x(t)} denotes the buffer workload (amount of fluid in the buffer), and {γ(t)} denotes the overflow of excess fluid when the buffer is full. Let the controllable parameter θ be the buffer size, and consider the sample performance function to be the loss volume during a given horizon interval [0, T], namely,

L(θ) = ∫_0^T γ(θ, t) dt. (2.17)

We assume that α(t) and β(t) are independent of θ, and note that the buffer workload and overflow processes certainly depend upon θ; hence, they are denoted by {x(θ, t)} and {γ(θ, t)}, respectively. The only other assumptions we make on the arrival process and service process are that, w.p.1, α(t) and β(t) are piecewise continuously differentiable in t (but need not be continuous), and the terms ∫_0^T α(t) dt and ∫_0^T β(t) dt have finite first moments. In addition, to satisfy the first condition of Theorem 2.1, we assume that w.p.1 no two events can occur at the same time (unless one induces the other), thus ensuring the existence of dL(θ)/dθ.

The time-driven dynamics in this SHS are given by

ẋ(θ, t) = 0, if x(θ, t) = 0 and α(t) ≤ β(t);
ẋ(θ, t) = 0, if x(θ, t) = θ and α(t) ≥ β(t);
ẋ(θ, t) = α(t) − β(t), otherwise. (2.18)

A typical sample path of the process {x(θ, t)} is shown in Figure 2.3. Observe that there are two endogenous events in this system: the first is when x(θ, t) increases and reaches the value x(θ, t) = θ (as happens at time τk in Figure 2.3), and the second is when x(θ, t) decreases and reaches the value x(θ, t) = 0. Thus, we see that the sample path is partitioned into intervals over which x(θ, t) = 0, termed empty periods (EPs) since the fluid queue in Figure 2.2 is empty, and intervals over which x(θ, t) > 0, termed nonempty periods (NEPs).

We can immediately see that Theorem 2.3 applies here for endogenous events with g(x, θ) = x, which occur when an EP starts at some event time τk. Since ∂g/∂θ = 0 and fk(τ+k) = 0 from (2.18), it follows that x′(τ+k) = 0
and remains at this value throughout every EP. Therefore, the effect of the parameter θ in this case need only be analyzed over NEPs.

Next, observe that γ(θ, t) > 0 only when x(θ, t) = θ. We refer to any such interval as a full period (FP), since the fluid queue in Figure 2.2 is full, and note that we can write L(θ) in (2.17) as

L(θ) = Σ_{k∈ΨT} ∫_{τk}^{τk+1} [α(t) − β(t)] dt,

where ΨT = {k : x(θ, t) = θ for all t ∈ [τk(θ), τk+1(θ))} is the set of all FPs in the observed sample path over [0, T]. It follows that

dL(θ)/dθ = Σ_{k∈ΨT} [α(τ−k+1) − β(τ−k+1)] τ′k+1 − Σ_{k∈ΨT} [α(τ+k) − β(τ+k)] τ′k, (2.19)

and this is a case where condition (C2) of Theorem 2.2 holds: (d/dt)(∂Lk/∂x) = (d/dt)[α(t) − β(t)] = 0 and (d/dt)(∂fk/∂x) = (d/dt)(∂fk/∂θ) = 0, since fk = α(t) − β(t) from (2.18). Thus, the evaluation of dL(θ)/dθ reduces to the evaluation of τ′k+1 and τ′k at the end and start, respectively, of every FP. Observing that τ′k+1 = 0, since the end of a FP is an exogenous event depending only on a change in sign of [α(t) − β(t)] from nonnegative to strictly negative, it only remains to use the IPA calculus to evaluate τ′k for every endogenous event such that g(x(θ, τk), θ) = x − θ. Applying (2.10) gives:

τ′k = (1 − x′(τ−k)) / (α(τ−k) − β(τ−k)).

The value of x′(τ−k) is obtained using (2.9) over the interval [τk−1(θ), τk(θ)):

x′(τ−k) = e^{∫_{τk−1}^{τ−k} (∂fk(u)/∂x) du} [ ∫_{τk−1}^{τ−k} (∂fk(v)/∂θ) e^{−∫_{τk−1}^{v} (∂fk(u)/∂x) du} dv + x′(τ+k−1) ],

where ∂fk(u)/∂x = 0 and ∂fk(v)/∂θ = 0. Moreover, using (2.7) at t = τk−1, we have x′(τ+k−1) = x′(τ−k−1) + [fk−1(τ−k−1) − fk(τ+k−1)] τ′k−1 = 0, since the start of a NEP is an exogenous event, so that τ′k−1 = 0 and x′(τ−k−1) = 0 as explained earlier. Thus, x′(τ−k) = 0, yielding

τ′k = 1 / (α(τ−k) − β(τ−k)).

Recalling our assumption that w.p.1 no two events can occur at the same time, α(t) and β(t) can experience no discontinuities (exogenous events) at t = τk when the endogenous event x(θ, t) = θ takes place, that is, α(τ−k) − β(τ−k) = α(τ+k) − β(τ+k) = α(τk) − β(τk). Then, returning to (2.19), we get

dL(θ)/dθ = − Σ_{k∈ΨT} [α(τk) − β(τk)] / [α(τk) − β(τk)] = −|ΨT|,

where |ΨT| is simply the number of observed NEPs that include a "lossy" interval over which x(θ, t) = θ. Observe that this expression for dL(θ)/dθ does not depend in any functional way on the details of the arrival or service rate processes. Furthermore, it is simple to compute, and in fact amounts to a simple counting process.

2.5 Event-Driven Optimization in Multi-Agent Systems

Multi-agent systems are commonly modeled as hybrid systems with time-driven dynamics describing the motion of the agents or the evolution of physical processes in a given environment, while event-driven behavior characterizes events that may occur randomly (e.g., an agent failure) or in accordance with control policies (e.g., an agent stopping to sense the environment or to change directions). As such, a multi-agent system can be studied in the context of Figure 2.1 with parameterized controllers aiming to meet certain specifications or to optimize a given performance metric. In some cases, the solution of a multi-agent dynamic optimization problem is reduced to a policy that is naturally parametric. Therefore, the adaptive scheme in Figure 2.1 provides a solution that is (at least locally) optimal. In this section, we present a problem known as "persistent monitoring," which commonly arises in multi-agent systems and where the event-driven approach we have described can be used.

Persistent monitoring tasks arise when agents must monitor a dynamically changing environment that cannot be fully covered by a stationary team of available agents. Thus, all areas of a given mission space must be visited infinitely often. The main challenge in designing control strategies in this case is in balancing the presence of agents in the changing environment so that it is covered over time optimally (in some well-defined sense) while still satisfying sensing and motion constraints. Examples of persistent monitoring missions include surveillance, patrol missions with unmanned vehicles, and environmental applications where routine sampling of an area is involved. Control and motion
32 Event-Based Control and Signal Processing

planning for this problem have been studied in the lit- we model the dynamics of Ri (t), i = 1, . . . , M, as follows:
erature, for example, see [46–49]. We limit ourselves 
here to reviewing the optimal control formulation in [49] Ṙi (t) = 0 if Ri (t) = 0, Ai ≤ BPi (s(t))
,
for a simple one-dimensional mission space taken to be A i − BP i ( s ( t )) otherwise
an interval [0, L] ⊂ R. Assuming N mobile agents, let (2.23)
their positions at time t be sn (t) ∈ [0, L], n = 1, . . . , N,
where we assume that initial conditions Ri (0), i =
following the dynamics
1, . . . , M, are given and that B > Ai > 0 (thus, the uncer-
tainty strictly decreases when there is perfect sensing
ṡn (t) = gn (sn ) + bn un (t), (2.20)
Pi (s(t)) = 1.) Note that Ai represents the rate at which
uncertainty increases at αi which may be random. We
where un (t) is the controllable agent speed constrained
will start with the assumption that the value of Ai is
by |un (t)| ≤ 1, n = 1, . . . , N, that is, we assume that the
known and will see how the robustness property of the
agent can control its direction and speed. Without loss
IPA calculus (Theorem 2.2) allows us to easily general-
of generality, after some rescaling with the size of the
ize the analysis to random processes { Ai (t)} describ-
mission space L, we further assume that the speed is
ing uncertainty levels at different points in the mission
constrained by |un (t)| ≤ 1, n = 1, . . . , N. For the sake of
space.
generality, we include the additional constraint,

\[
a \leq s(t) \leq b, \quad a \geq 0, \quad b \leq L, \tag{2.21}
\]

over all t to allow for mission spaces where the agents may not reach the endpoints of [0, L], possibly due to the presence of obstacles. We associate with every point x ∈ [0, L] a function pn(x, sn) that measures the probability that an event at location x is detected by agent n. We also assume that pn(x, sn) = 1 if x = sn, and that pn(x, sn) is monotonically nonincreasing in the distance |x − sn| between x and sn, thus capturing the reduced effectiveness of a sensor over its range, which we consider to be finite and denote by rn. Therefore, we set pn(x, sn) = 0 when |x − sn| > rn.

Next, consider a partition of [0, L] into M intervals whose center points are αi = (2i − 1)L/2M, i = 1, . . . , M. We associate a time-varying measure of uncertainty with each point αi, which we denote by Ri(t). Without loss of generality, we assume 0 ≤ α1 ≤ · · · ≤ αM ≤ L and, to simplify notation, we set pn(x, sn(t)) = pn,i(sn(t)) for all x ∈ [αi − L/2M, αi + L/2M]. Therefore, the joint probability of detecting an event at location x ∈ [αi − L/2M, αi + L/2M] by all the N agents simultaneously (assuming detection independence) is

\[
P_i(s(t)) = 1 - \prod_{n=1}^{N} \left[ 1 - p_{n,i}(s_n(t)) \right], \tag{2.22}
\]

where we set s(t) = [s1(t), . . . , sN(t)]^T. We define uncertainty functions Ri(t) associated with the intervals [αi − L/2M, αi + L/2M], i = 1, . . . , M, so that they have the following properties: (1) Ri(t) increases with a prespecified rate Ai if Pi(s(t)) = 0, (2) Ri(t) decreases with a fixed rate B if Pi(s(t)) = 1, and (3) Ri(t) ≥ 0 for all t. It is then natural to model uncertainty so that its decrease is proportional to the probability of detection. In particular,

\[
\dot{R}_i(t) = \begin{cases} 0, & \text{if } R_i(t) = 0,\ A_i \leq B P_i(s(t)) \\ A_i - B P_i(s(t)), & \text{otherwise.} \end{cases} \tag{2.23}
\]

The goal of the optimal persistent monitoring problem is to control the movement of the N agents through un(t) in (2.20) so that the cumulative uncertainty over all sensing points {αi}, i = 1, . . . , M, is minimized over a fixed time horizon T. Thus, setting u(t) = [u1(t), . . . , uN(t)], we aim to solve the following optimal control problem:

\[
\min_{u(t)} \; J = \frac{1}{T} \int_0^T \sum_{i=1}^{M} R_i(t)\, dt, \tag{2.24}
\]

subject to the agent dynamics (2.20), the uncertainty dynamics (2.23), the control constraint |un(t)| ≤ 1, t ∈ [0, T], and the state constraints (2.21), t ∈ [0, T].

Using a standard calculus of variations analysis, it is shown in [49] that the optimal trajectory of each agent n is to move at full speed, that is, |un(t)| = 1, until it reaches some switching point, dwell on the switching point for some time (possibly zero), and then switch directions. Consequently, each agent's optimal trajectory is fully described by a vector of switching points θn = [θn,1, . . . , θn,Γn]^T and a vector of waiting times wn = [wn,1, . . . , wn,Γn]^T, where θn,ξ is the ξth control switching point and wn,ξ is the waiting time for this agent at the ξth switching point. Note that Γn is generally not known a priori and depends on the time horizon T. It follows that the behavior of the agents operating under optimal control is fully described by hybrid dynamics, and the problem is reduced to a parametric optimization one, where θn and wn need to be optimized for all n = 1, . . . , N. This enables the use of the IPA calculus and, in particular, the use of the three equations, (2.9), (2.7), and (2.10), which ultimately leads to an evaluation of the gradient ∇J(θ, w), with J(θ, w) in (2.24) now viewed as a function of the parameter vectors θ, w.

In order to apply IPA to this hybrid system, we begin by identifying the events that cause discrete state transitions from one operating mode of an agent to another. Looking at the uncertainty dynamics (2.23), we define an
Event-Driven Control and Optimization in Hybrid Systems 33

event at time τk such that Ṙi(t) switches from Ṙi(t) = 0 to Ṙi(t) = Ai − BPi(s(t)), or an event such that Ṙi(t) switches from Ṙi(t) = Ai − BPi(s(t)) to Ṙi(t) = 0. In addition, since an optimal agent trajectory experiences switches of its control un(t) from ±1 to 0 (the agent comes to rest before changing direction) or from 0 to ±1, we define events associated with each such action that affects the dynamics in (2.20). Denoting by τk(θ, w) the occurrence time of any of these events, it is easy to obtain from (2.24):

\[
\nabla J(\theta, w) = \frac{1}{T} \sum_{i=1}^{M} \sum_{k=0}^{K} \int_{\tau_k(\theta, w)}^{\tau_{k+1}(\theta, w)} \nabla R_i(t)\, dt,
\]

which depends entirely on ∇Ri(t). Let us define the function

\[
G_{n,i}(t) = B \prod_{d \neq n} \left( 1 - p_i(s_d(t)) \right) \frac{\partial p_i(s_n)}{\partial s_n} \, (t - \tau_k), \tag{2.25}
\]

for all t ∈ [τk(θ, w), τk+1(θ, w)), and observe that it depends only on the sensing model pi(sn(t)) and the uncertainty model parameter B. Applying the IPA calculus (details are provided in [49]), we can then obtain

\[
\frac{\partial R_i}{\partial \theta_{n,\xi}}(t) = \frac{\partial R_i(\tau_k^+)}{\partial \theta_{n,\xi}} -
\begin{cases}
0, & \text{if } R_i(t) = 0,\ A_i < B P_i(s(t)) \\[4pt]
G_{n,i}(t)\, \dfrac{\partial s_n(\tau_k^+)}{\partial \theta_{n,\xi}}, & \text{otherwise}
\end{cases} \tag{2.26}
\]

and

\[
\frac{\partial R_i}{\partial w_{n,\xi}}(t) = \frac{\partial R_i(\tau_k^+)}{\partial w_{n,\xi}} -
\begin{cases}
0, & \text{if } R_i(t) = 0,\ A_i < B P_i(s(t)) \\[4pt]
G_{n,i}(t)\, \dfrac{\partial s_n(\tau_k^+)}{\partial w_{n,\xi}}, & \text{otherwise}
\end{cases} \tag{2.27}
\]

for all n = 1, . . . , N and ξ = 1, . . . , Γn. It remains to derive event-driven iterative expressions for ∂Ri(τk+)/∂θn,ξ, ∂Ri(τk+)/∂wn,ξ and ∂sn(τk+)/∂θn,ξ, ∂sn(τk+)/∂wn,ξ. These are given as follows (see [49] for details):

1. If an event at time τk is such that Ṙi(t) switches from Ṙi(t) = 0 to Ṙi(t) = Ai − BPi(s(t)), then ∇sn(τk+) = ∇sn(τk−) and ∇Ri(τk+) = ∇Ri(τk−) for all n = 1, . . . , N.

2. If an event at time τk is such that Ṙi(t) switches from Ṙi(t) = Ai − BPi(s(t)) to Ṙi(t) = 0 (i.e., Ri(τk) becomes zero), then ∇sn(τk+) = ∇sn(τk−) and ∇Ri(τk+) = 0.

3. If an event at time τk is such that un(t) switches from ±1 to 0, or from 0 to ±1, we need the components of ∇sn(τk+) in (2.26) and (2.27), which are obtained as follows. First, for ∂sn(τk+)/∂θn,ξ: if an event at time τk is such that un(t) switches from ±1 to 0, then ∂sn(τk+)/∂θn,ξ = 1 and

\[
\frac{\partial s_n(\tau_k^+)}{\partial \theta_{n,j}} = \begin{cases} 0, & \text{if } j < \xi \\ 1, & \text{if } j = \xi \end{cases}, \qquad j \leq \xi.
\]

If, on the other hand, un(t) switches from 0 to ±1, then ∂τk/∂θn,ξ = −sgn(un(τk+)) and

\[
\frac{\partial s_n(\tau_k^+)}{\partial \theta_{n,j}} = \begin{cases}
\dfrac{\partial s_n(\tau_k^-)}{\partial \theta_{n,j}} + 2, & \text{if } u_n(\tau_k^+) = 1,\ j \text{ even, or } u_n(\tau_k^+) = -1,\ j \text{ odd} \\[6pt]
\dfrac{\partial s_n(\tau_k^-)}{\partial \theta_{n,j}} - 2, & \text{if } u_n(\tau_k^+) = 1,\ j \text{ odd, or } u_n(\tau_k^+) = -1,\ j \text{ even}
\end{cases}, \qquad j < \xi.
\]

Finally, for ∂sn(τk+)/∂wn,ξ, we have

\[
\frac{\partial s_n(\tau_k^+)}{\partial w_{n,j}} = \begin{cases}
0, & \text{if } u_n(\tau_k^-) = \pm 1,\ u_n(\tau_k^+) = 0 \\
\mp 1, & \text{if } u_n(\tau_k^-) = 0,\ u_n(\tau_k^+) = \pm 1.
\end{cases}
\]

In summary, this provides an event-driven procedure for evaluating ∇J(θ, w) and proceeding with a gradient-based algorithm as shown in Figure 2.1 to determine optimal agent trajectories online, or at least to improve on current ones.

Furthermore, let us return to the case of stochastic environmental uncertainties manifested through random processes {Ai(t)} in (2.23). Observe that the evaluation of ∇Ri(t), hence ∇J(θ, w), is independent of Ai, i = 1, . . . , M; in particular, note that Ai does not appear in the function Gn,i(t) in (2.25) or in any of the expressions for ∂sn(τk+)/∂θn,j, ∂sn(τk+)/∂wn,j. In fact, the dependence of ∇Ri(t) on Ai manifests itself through the event times τk, k = 1, . . . , K, which do affect this evaluation, but they, unlike Ai which may be unknown, are directly observable during the gradient evaluation process. This, once again, is an example of the IPA robustness property discussed in Section 4.2. Extensive numerical examples of how agent trajectories are adjusted online for the persistent monitoring problem may be found in [49].
34 Event-Based Control and Signal Processing
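To make the hybrid structure above concrete, the following is a minimal single-agent sketch: it builds the trajectory implied by a vector of switching points θ and waiting times w, propagates the uncertainty states Ri(t) per the dynamics (2.23) by forward Euler integration, and evaluates the cost J in (2.24). All numerical values (mission space length L, number of sampling points M, rates A and B, sensing range r, horizon T, step dt) are illustrative choices, not values from [49], and the triangular finite-range sensing model is just one admissible instance of the class defined above. Only the forward simulation is shown; the event-driven IPA gradient computation of (2.26) and (2.27) is omitted.

```python
import numpy as np

def sensing_prob(x, s, r=0.4):
    """Triangular sensing model: p(x, s) = 1 at x = s, decaying to 0 at range r."""
    return max(0.0, 1.0 - abs(x - s) / r)

def agent_position(t, s0, theta, w):
    """Position at time t of an agent that starts at s0, moves at unit speed to
    each switching point theta[k] in turn, and dwells there for w[k] time units."""
    s, tau = s0, 0.0
    for th, dwell in zip(theta, w):
        travel = abs(th - s)
        if t < tau + travel:                 # still en route to th
            return s + np.sign(th - s) * (t - tau)
        tau += travel
        if t < tau + dwell:                  # dwelling at th
            return th
        tau += dwell
        s = th
    return s                                 # past the last switching point

def cost_J(theta, w, L=1.0, M=10, A=0.5, B=3.0, T=10.0, dt=1e-3):
    """J in (2.24): time-averaged total uncertainty over the M sampling points,
    with each R_i propagated per the uncertainty dynamics (2.23), here N = 1."""
    alpha = (2 * np.arange(1, M + 1) - 1) * L / (2 * M)    # interval centers
    R = np.zeros(M)
    J = 0.0
    for t in np.arange(0.0, T, dt):
        s = agent_position(t, 0.0, theta, w)
        P = np.array([sensing_prob(a, s) for a in alpha])  # P_i(s(t)), (2.22) with N = 1
        R = np.maximum(R + (A - B * P) * dt, 0.0)          # Euler step of (2.23), R_i >= 0
        J += R.sum() * dt
    return J / T

# Illustrative patrol: sweep between 0.1 and 0.9 with a short dwell at each end.
theta = [0.9, 0.1, 0.9, 0.1, 0.9]
w = [0.2] * 5
print(f"J = {cost_J(theta, w):.3f}")
```

Wrapping cost_J in a gradient loop over (θ, w), with gradients obtained either by the IPA expressions above or by finite differences, reproduces the structure of the gradient-based scheme of Figure 2.1.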

Extending this analysis from one-dimensional to two-dimensional mission spaces no longer yields optimal trajectories which are parametric in nature, as shown in [50]. However, one can represent an agent trajectory in terms of general function families characterized by a set of parameters that may be optimized based on an objective function such as (2.24) extended to two-dimensional environments. In particular, we may view each agent's trajectory as represented by the parametric equations

\[
s_n^x(t) = f(\Upsilon_n, \rho_n(t)), \qquad s_n^y(t) = g(\Upsilon_n, \rho_n(t)), \tag{2.28}
\]

for all agents n = 1, . . . , N. Here, Υn = [Υn1, Υn2, . . . , ΥnΓ]^T is the vector of parameters through which we control the shapes and locations of the nth agent trajectory, and Γ is this vector's dimension. The agent position over time is controlled by a function ρn(t) dependent on the agent dynamics. We can then formulate a problem such as

\[
\min_{\Upsilon_n,\, n = 1, \ldots, N} \; J = \int_0^T \sum_{i=1}^{M} R_i(\Upsilon_1, \ldots, \Upsilon_N, t)\, dt,
\]

which involves optimization over the controllable parameter vectors Υn, n = 1, . . . , N, characterizing each agent trajectory and placing the problem once again in the general framework of Figure 2.1.

2.6 Conclusions

Glancing into the future of systems and control theory, the main challenges one sees involve larger and ever more distributed wirelessly networked structures in application areas spanning cooperative multi-agent systems, energy allocation and management, and transportation, among many others. Barring any unexpected dramatic developments in battery technology, limited energy resources in wireless settings will have to largely dictate how control strategies are designed and implemented so as to carefully manage this limitation. Taking this point of view, the event-driven paradigm offers an alternative to the time-driven paradigm for modeling, sampling, estimation, control, and optimization, not to supplant it but rather to complement it. In hybrid systems, this approach is consistent with the event-driven nature of IPA, which offers a general-purpose process for evaluating or estimating (in the case of stochastic systems) gradients of performance metrics. Such information can then be used online so as to maintain a desirable system performance and, under appropriate conditions, lead to the solution of optimization problems in applications ranging from multi-agent systems to resource allocation in manufacturing, computer networks, and transportation systems.

Acknowledgment

The author's work was supported in part by the National Science Foundation (NSF) under grants CNS-12339021 and IIP-1430145, by the Air Force Office of Scientific Research (AFOSR) under grant FA9550-12-1-0113, by the Office of Naval Research (ONR) under grant N00014-09-1-1051, and by the Army Research Office (ARO) under grant W911NF-11-1-0227.

Bibliography

[1] C. G. Cassandras, S. Lafortune, Introduction to Discrete Event Systems, 2nd ed., Springer, New York, 2008.

[2] Y. C. Ho, C. G. Cassandras, A new approach to the analysis of discrete event dynamic systems, Automatica, vol. 19, pp. 149–167, 1983.

[3] P. J. Ramadge, W. M. Wonham, The control of discrete event systems, Proceedings of the IEEE, vol. 77, no. 1, pp. 81–98, 1989.

[4] F. Baccelli, G. Cohen, G. J. Olsder, J. P. Quadrat, Synchronization and Linearity. Wiley, New York, 1992.

[5] J. O. Moody, P. J. Antsaklis, Supervisory Control of Discrete Event Systems Using Petri Nets. Kluwer Academic, Dordrecht, the Netherlands, 1998.

[6] M. S. Branicky, V. S. Borkar, S. K. Mitter, A unified framework for hybrid control: Model and optimal control theory, IEEE Trans. Autom. Control, vol. 43, no. 1, pp. 31–45, 1998.

[7] P. Antsaklis, W. Kohn, M. Lemmon, A. Nerode, S. Sastry (Eds.), Hybrid Systems. Springer-Verlag, New York, 1998.

[8] C. G. Cassandras, D. L. Pepyne, Y. Wardi, Optimal control of a class of hybrid systems, IEEE Trans. Autom. Control, vol. 46, no. 3, pp. 398–415, 2001.

[9] P. Zhang, C. G. Cassandras, An improved forward algorithm for optimal control of a class of hybrid systems, IEEE Trans. Autom. Control, vol. 47, no. 10, pp. 1735–1739, 2002.

[10] M. Lemmon, K. X. He, I. Markovsky, Supervisory hybrid systems, IEEE Control Systems Magazine, vol. 19, no. 4, pp. 42–55, 1999.
[11] H. J. Sussmann, A maximum principle for hybrid optimal control problems, in Proceedings of 38th IEEE Conf. on Decision and Control, December 1999, pp. 425–430.

[12] R. Alur, T. A. Henzinger, E. D. Sontag (Eds.), Hybrid Systems. Springer-Verlag, New York, 1996.

[13] Special issue on goals and challenges in cyber physical systems research, IEEE Trans. Autom. Control, vol. 59, no. 12, pp. 3117–3379, 2014.

[14] K. E. Arzen, A simple event based PID controller, in Proceedings of 14th IFAC World Congress, 1999, pp. 423–428.

[15] W. P. Heemels, J. H. Sandee, P. P. Bosch, Analysis of event-driven controllers for linear systems, Int. J. Control, vol. 81, no. 4, p. 571, 2008.

[16] J. Lunze, D. Lehmann, A state-feedback approach to event-based control, Automatica, vol. 46, no. 1, pp. 211–215, 2010.

[17] P. Tabuada, Event-triggered real-time scheduling of stabilizing control tasks, IEEE Trans. Autom. Control, vol. 52, no. 9, pp. 1680–1685, 2007.

[18] A. Anta, P. Tabuada, To sample or not to sample: Self-triggered control for nonlinear systems, IEEE Trans. Autom. Control, vol. 55, no. 9, pp. 2030–2042, 2010.

[19] V. S. Dolk, D. P. Borgers, W. P. M. H. Heemels, Dynamic event-triggered control: Tradeoffs between transmission intervals and performance, in Proceedings of 53rd IEEE Conf. on Decision and Control, 2014, pp. 2765–2770.

[20] K. J. Astrom, B. M. Bernhardsson, Comparison of Riemann and Lebesgue sampling for first order stochastic systems, in Proceedings of 41st IEEE Conf. on Decision and Control, 2002, pp. 2011–2016.

[21] T. Shima, S. Rasmussen, P. Chandler, UAV team decision and control using efficient collaborative estimation, ASME J. Dyn. Syst. Meas. Cont., vol. 129, no. 5, pp. 609–619, 2007.

[22] X. Wang, M. Lemmon, Event-triggering in distributed networked control systems, IEEE Trans. Autom. Control, vol. 56, no. 3, pp. 586–601, 2011.

[23] M. Zhong, C. G. Cassandras, Asynchronous distributed optimization with event-driven communication, IEEE Trans. Autom. Control, vol. 55, no. 12, pp. 2735–2750, 2010.

[24] W. P. M. H. Heemels, M. C. F. Donkers, A. R. Teel, Periodic event-triggered control for linear systems, IEEE Trans. Autom. Control, vol. 58, no. 4, pp. 847–861, April 2013.

[25] S. Trimpe, R. D'Andrea, Event-based state estimation with variance-based triggering, IEEE Trans. Autom. Control, vol. 59, no. 12, pp. 3266–3281, 2014.

[26] E. Garcia, P. J. Antsaklis, Event-triggered output feedback stabilization of networked systems with external disturbance, in Proceedings of 53rd IEEE Conf. on Decision and Control, 2014, pp. 3572–3577.

[27] T. Liu, M. Cao, D. J. Hill, Distributed event-triggered control for output synchronization of dynamical networks with non-identical nodes, in Proceedings of 53rd IEEE Conf. on Decision and Control, 2014, pp. 3554–3559.

[28] Y. Shoukry, P. Tabuada, Event-triggered projected Luenberger observer for linear systems under sparse sensor attacks, in Proceedings of 53rd IEEE Conf. on Decision and Control, 2014, pp. 3548–3553.

[29] S. Trimpe, Stability analysis of distributed event-based state estimation, in Proceedings of 53rd IEEE Conf. on Decision and Control, 2014, pp. 2013–2018.

[30] Y. Khazaeni, C. G. Cassandras, A new event-driven cooperative receding horizon controller for multi-agent systems in uncertain environments, in Proceedings of 53rd IEEE Conf. on Decision and Control, 2014, pp. 2770–2775.

[31] Y. C. Ho, X. Cao, Perturbation Analysis of Discrete Event Dynamic Systems. Kluwer Academic, Dordrecht, the Netherlands, 1991.

[32] C. G. Cassandras, Y. Wardi, C. G. Panayiotou, C. Yao, Perturbation analysis and optimization of stochastic hybrid systems, Eur. J. Control, vol. 16, no. 6, pp. 642–664, 2010.

[33] C. G. Cassandras, J. Lygeros (Eds.), Stochastic Hybrid Systems. CRC Press, Boca Raton, FL, 2007.

[34] W. B. Powell, Approximate Dynamic Programming, 2nd ed. John Wiley and Sons, New York, 2011.

[35] C. G. Cassandras, Y. Wardi, B. Melamed, G. Sun, C. G. Panayiotou, Perturbation analysis for on-line control and optimization of stochastic fluid models, IEEE Trans. Autom. Control, vol. 47, no. 8, pp. 1234–1248, 2002.

[36] A. Kebarighotbi, C. G. Cassandras, A general framework for modeling and online optimization of stochastic hybrid systems, in Proceedings of 4th IFAC Conference on Analysis and Design of Hybrid Systems, 2012.

[37] Y. Wardi, R. Adams, B. Melamed, A unified approach to infinitesimal perturbation analysis in stochastic flow models: The single-stage case, IEEE Trans. Autom. Control, vol. 55, no. 1, pp. 89–103, 2010.

[38] Y. Wardi, A. Giua, C. Seatzu, IPA for continuous stochastic marked graphs, Automatica, vol. 49, no. 5, pp. 1204–1215, 2013.

[39] H. J. Kushner, G. G. Yin, Stochastic Approximation Algorithms and Applications. Springer-Verlag, New York, 1997.

[40] R. Rubinstein, Monte Carlo Optimization, Simulation and Sensitivity of Queueing Networks. John Wiley and Sons, New York, 1986.

[41] C. Yao, C. G. Cassandras, Perturbation analysis and optimization of multiclass multiobjective stochastic flow models, J. Discrete Event Dyn. Syst., vol. 21, no. 2, pp. 219–256, 2011.

[42] C. Yao, C. G. Cassandras, Perturbation analysis of stochastic hybrid systems and applications to resource contention games, Frontiers of Electrical and Electronic Engineering in China, vol. 6, no. 3, pp. 453–467, 2011.

[43] H. Yu, C. G. Cassandras, Perturbation analysis for production control and optimization of manufacturing systems, Automatica, vol. 40, pp. 945–956, 2004.

[44] C. G. Cassandras, Stochastic flow systems: Modeling and sensitivity analysis, in Stochastic Hybrid Systems (C. G. Cassandras, J. Lygeros, Eds.). Taylor and Francis, 2006, pp. 139–167.

[45] G. Sun, C. G. Cassandras, C. G. Panayiotou, Perturbation analysis of multiclass stochastic fluid models, J. Discrete Event Dyn. Syst.: Theory Appl., vol. 14, no. 3, pp. 267–307, 2004.

[46] I. Rekleitis, V. Lee-Shue, A. New, H. Choset, Limited communication, multi-robot team based coverage, in IEEE Int. Conf. on Robotics and Automation, vol. 4, 2004, pp. 3462–3468.

[47] N. Nigam, I. Kroo, Persistent surveillance using multiple unmanned air vehicles, in IEEE Aerospace Conf., 2008, pp. 1–14.

[48] S. L. Smith, M. Schwager, D. Rus, Persistent monitoring of changing environments using a robot with limited range sensing, in Proc. of IEEE Conf. on Robotics and Automation, 2011, pp. 5448–5455.

[49] C. G. Cassandras, X. Lin, X. C. Ding, An optimal control approach to the multi-agent persistent monitoring problem, IEEE Trans. on Automatic Control, vol. 58, no. 4, pp. 947–961, 2013.

[50] X. Lin, C. G. Cassandras, Trajectory optimization for multi-agent persistent monitoring in two-dimensional spaces, in Proc. of 53rd IEEE Conf. Decision and Control, 2014, pp. 3719–3724.
3
Reducing Communication by Event-Triggered Sampling

Marek Miśkowicz
AGH University of Science and Technology
Kraków, Poland

CONTENTS
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.1.1 Event-Triggered Communication in Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.1.2 Event-Based Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.2 Event-Based Sampling Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.2.1 Origins of Concept of Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.2.2 Threshold-Based Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.2.3 Threshold-Based Criteria for State Estimation Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.2.4 Lyapunov Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.2.5 Reference-Crossing Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.2.5.1 Level-Crossing Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.2.5.2 Extremum Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2.5.3 Sampling Based on Timing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2.6 Cost-Aware Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.2.7 QoS-Based Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.2.8 Adaptive Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.2.9 Implicit Information on Intersample Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.2.10 Event Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.3 Send-on-Delta Data Reporting Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.3.1 Definition of Send-on-Delta Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.3.2 Illustrative Comparison of Number of Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.4 Send-on-Delta Bandwidth Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.4.1 Total Variation of Sampled Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.4.2 Total Signal Variation within Send-on-Delta Sampling Interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.4.3 Mean Rate of Send-on-Delta Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.4.4 Mean Send-on-Delta Rate for Gaussian Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.4.5 Reducing Sampling Rate by Send-on-Delta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.5 Extensions of Send-on-Delta Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.5.1 Send-on-Area Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.5.2 Send-on-Energy Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.6 Lyapunov Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.6.1 Stable Lyapunov Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.6.2 Less-Conservative Lyapunov Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Acknowledgment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

ABSTRACT Event-based sampling is at the core of event-triggered control and signal processing of continuous-time signals. By flexible definition of the triggering conditions, the sampling released by events can address a wide range of design objectives and application requirements. The present study is a survey of event-triggered sampling schemes both in control and signal processing aimed to emphasize the motivation and


wide technical context of event-based sampling criteria design.

3.1 Introduction

The convergence of computing, communication, and control enables integration of computational resources with the physical world under the concept of cyber-physical systems and is a major catalyst for future technology development. The core of cyber-physical systems is networking that enables coordination between smart sensing devices and computational entities. Due to high flexibility, easier mobility, and network expansion, cyber-physical systems benefit from ubiquitous wireless connectivity. The postulate of providing wireless communication of limited bandwidth between smart devices creates new challenges for networked sensor and control system design, as wireless transmission is energy expensive, while interconnected devices are often battery powered. The development of networked systems under constrained communication and energy resources has pushed research interest toward concepts of establishing interactions between system components that are as rare as possible. This has fostered the adoption in system design of the event-based paradigm, represented by a model that calls for resources only if it is really necessary.

The event paradigm in the context of control, communication, and signal processing is not new, as "probably one of the earliest issues to confront with respect to the control system design is when to sample the system so as to reduce the data needing to be transported over the network" [1]. Early event-based systems (termed also aperiodic or asynchronous) have been known in feedback control, data transmission, and signal processing at least since the 1950s; see [2,3]. Due to easier implementation and the existence of a mature theory of digital systems based on periodic sampling, the event-based design strategy failed to compete with the synchronous architectures that have monopolized electrical engineering since the early 1960s. In subsequent decades, the development of event-driven systems attracted rather limited attention from the research community, although questions related to periodic versus aperiodic control, uniform versus nonuniform sampling in signal processing, or synchronous versus asynchronous circuit implementations have appeared since the origins of computer technology. The renaissance of interest in exploring the event-based paradigm was evoked more than a decade ago by the publication of a few important research works independently in the disciplines of control [4–6] and signal processing [7–11]. These works released great research creativity and gave rise to a systematic development of new event-triggered approaches (see [2,12], and other chapters in this book).

3.1.1 Event-Triggered Communication in Control

In the context of modern networked sensor and control systems, the event-based paradigm is aimed first at avoiding the transmission of redundant data, thereby reducing the rate of messages transferred through the shared communication channel. The promise of saving communication bandwidth and energy resources is the primary motivation that stimulates research interest in event-triggered control strategies in networked systems. The other strong motivation behind the development of the event-based methodology is that control loops established over networks with varying transmission delays and packet losses can no longer be modeled by conventional discrete-time control theory, even if data are processed and transmitted at a fixed sampling rate [5,12,42,64]. Thus, progress in event-triggered control strategies is driven by the convergence of control and networking. The formulation of the event-based control methodology provides the fundamentals to model the operation of networked control systems with nonzero transmission delays and packet losses. Perhaps a more important benefit is that the development of approaches to event-triggered control allows the control strategy to be relaxed from equidistant sampling, so that benefits can be gained from sporadic interactions between sensors and actuators at a lower rate.

Finally, reducing communication by the event-triggered data reporting model is the most effective method to minimize the network-induced imperfections on the control performance (i.e., packet delay, delay jitter, and packet losses) [13]. In this way, the adoption of the event-triggered strategy meets one of the fundamental criteria of networked control system design: to achieve performance close to that of the conventional point-to-point digital control system. At the same time, control over networks provides well-known advantages over classical control (e.g., high flexibility and resource utilization, efficient reconfigurability, and reduction of installation and maintenance costs) and has been identified as one of the key directions for future developments of control systems [14].

3.1.2 Event-Based Signal Processing

During the last decade, in parallel to the development of event-triggered communication and control strategies, the event-based paradigm has been adopted in signal processing and electronic circuitry design as an

alternative to the conventional approach based on periodic sampling. The strong motivation behind this alternative is that modern nanoscale very large scale integration (VLSI) technology promotes fast circuit operation but requires low supply voltage (even below 1 V), which makes fine quantization of the amplitude increasingly difficult. In the event-based signal processing mode, periodic sampling is substituted by discretization in the amplitude domain (e.g., by level-crossings), and the quantization process is moved from the amplitude to the time domain. This allows benefit to be gained from the growing time resolution provided by the downscaling of the feature size of VLSI technology. The other reason for the increasing interest in event-based signal processing is the promise of achieving activity-dependent power consumption and, thereby, saving energy at the circuit level [15].

In response to the challenges of nanoscale VLSI design, the development of event-based signal processing techniques has resulted in the introduction of a new class of analog-to-digital converters and digital filters that use event-triggered sampling [10,11,16], as well as a novel concept of continuous-time digital signal processing (CT-DSP) [8,9,17].

The CT-DSP, based essentially on amplitude quantization and processing of binary signals in continuous time, is a complementary approach to conventional clocked DSP. Furthermore, the CT-DSP may be regarded as a "missing category" among well-established signal processing modes: continuous-time continuous-amplitude (analog processing), discrete-time discrete-amplitude (digital processing), and discrete-time continuous-amplitude processing techniques. Due to the lack of time discretization, the CT-DSP is characterized by no aliasing. The other significant advantages of this signal processing mode are lower electromagnetic interference and minimized power consumption when the input does not contain novel information [17]. The application profile of event-based signal processing techniques is related to battery-powered portable devices with a low energy budget designed for wireless sensor networking and the Internet of Things.

Despite different technical contexts and methodologies, both event-triggered control and signal processing essentially adopt the same sampling techniques, which are signal-dependent instead of based on the progression of time. Furthermore, event-driven sampling specifies criteria for interactions between distributed system components, which is a critical issue in wireless communication systems. By flexible definition of the triggering conditions, event-based sampling can address a wide range of design objectives and application requirements.

The fundamental postulate for the formulation of event-based sampling criteria for monitoring and control systems is to achieve satisfactory system performance at a low rate of events. Wireless transmission costs much more energy than information processing. Therefore, it is critical to maximize the usefulness of every bit transmitted or received [18], even if this requires much computational effort. In other words, it is reasonable to develop advanced event-triggered algorithms that provide a discrete-time representation of the signal at low average sampling rates, even if these algorithms are complex.

On the other hand, in signal processing, another objective of event-based sampling is to eliminate the clock from the system architecture, to benefit from the increasing time resolution provided by nanoscale VLSI technology, or to avoid aliasing. The present study is a survey of event-triggered sampling schemes in control and signal processing aimed to emphasize the motivation and wide technical context of event-based sampling criteria design.

3.2 Event-Based Sampling Classes

3.2.1 Origins of Concept of Event

The notion of the event originates from the computing domain and was formulated in the 1970s. An overview of early definitions of the event in the context of discrete-event digital simulation is summarized in [19]. Most concepts refer to the event as a change of the state of an object occurring at an instant. An event is said to be determined if the condition on event occurrence can be expressed strictly as a function of time. Otherwise, the event is regarded as contingent. According to the classical definition widely adopted in real-time computer and communication systems in the 1990s, "an event is a significant change of a state of a system" [20,21].

Control theory was perhaps the first discipline that adopted the concept of event to define a criterion for triggering the sampling of continuous-time signals, although without an explicit reference to the notion of the event. In 1959, Ellis noticed that "the most suitable sampling is by transmission of only significant data, as the new value obtained when the signal is changed by a given increment". Consequently, "it is not necessary to sample periodically, but only when quantized data
3.2 Event-Based Sampling Classes
change from one possible value to the next” [22]. As
Event-based sampling is a core of event-triggered con- seen, Ellis defined the triggering condition for sampling
trol and signal processing of continuous-time signals. as the occurrence of the significant change of the signal,
which is consistent with the definitions of the event adopted later in the computing domain. In the seminal work, Åström proposed for the criterion specified by Ellis the name Lebesgue sampling due to a clear connotation to the Lebesgue method of integration [5]. Using the same convention of terminology, the equidistant sampling has been called Riemann sampling [5]. Depending on the context, Lebesgue sampling is also referred to as the send-on-delta concept [23], deadbands [13], level-crossing sampling with hysteresis [24], or the constant amplitude difference sampling criterion [25]. The variety of existing terminology shows that it is really a generic concept adapted to a broad spectrum of technologies and applications.

3.2.2 Threshold-Based Sampling

A class of event-based sampling schemes that may be referred literally to the concept of the event as a significant change of a particular signal parameter can be termed threshold-based sampling, which in the context of communication is known as the send-on-delta scheme [23]. According to the threshold-based (Lebesgue sampling, send-on-delta) criterion, an input signal is sampled when it deviates by a certain confidence interval (additive increment or decrement) (Figure 3.1). The size of the threshold (delta) defines a trade-off between system performance and the rate of samples. The maximum time interval between consecutive samples is theoretically unbounded.

The send-on-delta communication strategy is used in many industrial applications, for example, in LonWorks (ISO/IEC 14908), the commercial platform for networked control systems applied in building automation, street lighting, and smart metering [26], as well as in EnOcean energy harvesting wireless technology. Furthermore, the send-on-delta paradigm has been extended to spatiotemporal sampling, for example, as the basis of the Collaborative Event Driven Energy Efficient Protocol for sensor networks [27].

In the basic version, the deviation is referred to the value of the most recent sample (zero-order hold) (Figure 3.1). In an advanced version, the threshold is related to the predicted value of the sampled signal (first-order or higher-order hold) [28–31]. The latter allows samples to be taken more rarely, but the sampling device requires more information about the signal (i.e., its time-derivatives) at the instant of the last sample.

Several extensions of the send-on-delta criterion have been proposed: the integral sampling (Figure 3.2) [32–34] (also called area-triggered sampling, or the send-on-area scheme), or the sampling triggered by error energy changes (send-on-energy) [35,36]. Due to the integration of the intersampling error, both the send-on-area and send-on-energy algorithms provide better conditions for effective sample triggering and help to avoid a lack of samples during long time intervals.

In the generalized version of threshold-based sampling, a plant is sampled when the difference between the real process and its model is greater than a prespecified threshold [37].

3.2.3 Threshold-Based Criteria for State Estimation Uncertainty

The threshold-based sampling criteria (send-on-delta, send-on-area) are also used in distributed state estimation when the state of a dynamic system is not measured directly by discrete values but is estimated with probability density functions. The objective of the estimator is to keep the estimation uncertainty bounded. Although the use of threshold-based sampling reduces the transmission of messages between the sensor and the estimator compared to periodic updates, these transmissions do not necessarily occur “when it is required.” A demand to keep the difference between the actual measurement and its prediction bounded does not directly address the estimator’s performance. The latter is based primarily on the estimation uncertainty. Recently, several novel event-based sampling criteria for state estimators have been proposed. These sampling schemes link the decision “when to transmit” to its contribution to the estimator performance [38–41].

FIGURE 3.1 Send-on-delta sampling criterion.

FIGURE 3.2 Send-on-area sampling criterion.
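The send-on-delta and send-on-area triggering rules described above can be sketched in a few lines of code. The following Python fragment is a minimal illustration only (the test signal, threshold values, and function names are this sketch's own choices, not the chapter's): it scans a densely sampled waveform, standing in for the continuous-time input, and reports the indices at which each criterion would trigger a sample.

```python
import math

def send_on_delta(x, delta):
    """Indices at which x deviates from the previously reported
    value by at least delta (threshold-based / Lebesgue sampling)."""
    events = [0]                  # the initial value is always reported
    last = x[0]
    for i in range(1, len(x)):
        if abs(x[i] - last) >= delta:   # significant change detected
            events.append(i)
            last = x[i]
    return events

def send_on_area(x, dt, area):
    """Indices at which the integral of the absolute error between x
    and the last reported value reaches the threshold 'area'
    (integral / send-on-area criterion)."""
    events = [0]
    last, acc = x[0], 0.0
    for i in range(1, len(x)):
        acc += abs(x[i] - last) * dt    # accumulated intersample error
        if acc >= area:
            events.append(i)
            last, acc = x[i], 0.0
    return events

# A densely sampled sine wave stands in for the continuous-time signal.
x = [math.sin(2 * math.pi * k / 1000) for k in range(2000)]

print(len(send_on_delta(x, delta=0.1)))
print(len(send_on_area(x, dt=1e-3, area=0.005)))
```

Both detectors report far fewer samples than the 2000 underlying points; shrinking delta (or area) trades a higher sample rate for a smaller intersample error, as discussed above.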
Reducing Communication by Event-Triggered Sampling 41

According to threshold-based sampling focused not on the sampled signal but on the estimation uncertainty, a new measurement is not communicated as long as the state estimation confidence level is “good enough.” The sampling is triggered only if the estimation uncertainty exceeds a tolerable upper bound. In the matched sampling, the sensor uses the Kullback–Leibler divergence DKL for triggering events [38,41]. The DKL is a measure of the difference between two probability distributions: p1(x) as a “true” reference distribution of the state x and p2(x) as the distribution related to the predicted state. More specifically, in the context of state estimation, the DKL divergence shows how much information is lost when the predicted state defined by p2(x) is adopted instead of the “true” reference distribution p1(x). The Kullback–Leibler divergence DKL(t) is a function of time due to the temporal evolution of p1(x) and p2(x). As soon as DKL(t) crosses the prespecified threshold, a new sample is sent to the estimator, and the state of the system is updated in order to reduce the state uncertainty.

In [39,40], with the variance-based sampling criterion, new measurements transmitted from the sensor to the estimator are referred to the prediction variance that grows in time when no measurement is available. A new measurement is broadcast if the prediction variance exceeds a tolerable bound, which indicates that the uncertainty of predicting the state is too large. As soon as the predefined threshold is exceeded, a new measurement is transmitted and the estimator variance drops. A comprehensive discussion of event-based state estimation is included in [39,41].

It is worth noting that the development of new sampling criteria based on the state estimator uncertainty has given rise to an extension of the traditional concept of the event originated in computing. Depending on the context, an event now refers not only to a significant change of the state of an object but may also define a significant change of the state uncertainty.

3.2.4 Lyapunov Sampling

In the context of control systems, the class of threshold-based sampling represents event-triggered sampling algorithms oriented to update the control law when needed to achieve a desired performance. A scheme related to the threshold-based schemes is Lyapunov sampling, introduced in [44] and explored further in [45–47], which consists of enforcing a control update when required from the stability point of view.

The aim of the use of Lyapunov sampling is to drive the system to a neighborhood of the equilibrium. To guarantee control system stability, Lyapunov sampling refers to variations of the Lyapunov function. The plant is sampled when the system trajectory sufficiently changes. Thus, a discretization is defined not in the amplitude but in the energy space domain, because the particular values of the Lyapunov function constitute a set of contour curves of constant energy in the state space. Each curve forms a closed region around the equilibrium point, and sampling is triggered when the system trajectory crosses contour curves from outside to inside, that is, when it migrates toward a decrease of system energy (Figure 3.3).

The next sample that triggers the control update is captured when the value of the Lyapunov function decreases with a scaling coefficient η compared to the actual sample, where 0 < η < 1.

The tunable design parameter of Lyapunov sampling, η, represents the system energy decay ratio and can be referred to the threshold (delta) in send-on-delta sampling. The difference is that the threshold in the send-on-delta scheme (Δ) is an additive parameter. Instead, the energy decay η in Lyapunov sampling is defined as a multiplicative scaling coefficient, and samples are triggered only if the energy of the system decreases (η < 1). The parameter η determines the trade-off between the average sampling rate and control performance. For small values of η, the next sample is taken only when the Lyapunov function has decreased more significantly. Conversely, closer contour curves triggering events and more frequent sampling occur for values of η close to one.

If the parameter η is constrained such that 0 < η < 1, the system energy decreases with every new sample, and the trajectory successively traverses contour curves until the equilibrium point is reached. This is equivalent to stability in the discrete Lyapunov sense. However, the requirement 0 < η < 1 does not guarantee that the continuous-time trajectory is also stable, because a new sample might never be captured (i.e., the Lyapunov function may not decrease sufficiently). The latter may happen if the system energy increases before the trajectory reaches the next contour curve [44]. Intuitively, the sampling intervals cannot be too long if stability of the continuous-time system is to be preserved. As proved in [44], the stability of the continuous-time system trajectory is guaranteed if η is higher than a certain value defined by the shape of the Lyapunov function: 0 < η∗ < η < 1. Thus, η∗ is regarded as the minimum value of η that guarantees system stability. Further discussion on Lyapunov sampling is provided in Section 3.6. A recent experimental investigation of Lyapunov sampling for adaptive tracking controllers in robot manipulators is reported in [47].

3.2.5 Reference-Crossing Sampling

In signal processing, event-based sampling is considered mainly in the perspective of signal reconstruction.
FIGURE 3.3 Lyapunov sampling as a system trajectory-crossings scheme.

In order to recover the original signal perfectly from samples irregularly distributed in time, the mean sampling rate must exceed the Nyquist rate, that is, twice the highest frequency component in the signal spectrum (B) [48–50]. A survey of the mathematical background of signal recovery based on nonuniform sampling, to which event-triggered sampling belongs, is included in [51].

A class of event-based criteria relevant to many applications in event-based signal processing is reference-crossing sampling. According to this criterion, the signal is sampled when it crosses a prespecified reference function. This sampling criterion is thus not referred to an explicit (additive or multiplicative) signal change but is dedicated to matching a certain reference. An example is the sine-wave crossing sampling (Figure 3.4) [52,53].

FIGURE 3.4 Sine-wave crossing sampling.

3.2.5.1 Level-Crossing Sampling

In the simplest version of reference-crossing sampling, the reference function can be defined as a single reference level disposed in the amplitude domain. The study of the level-crossing problem for stochastic processes is one of the classical research topics in mathematics, with many applications to mechanics and signal processing [54]. For example, in pioneering work published in 1945, Rice provided the formula for the mean rate of level-crossings for Gaussian processes [55].

The rate of crossings of a single level is usually lower than the Nyquist rate (2B). For example, the mean rate of zero-crossings for a Gaussian process bandlimited to the bandwidth B (Hz) equals 1.15B, and the rate of crossings of a level L ≠ 0 is even lower [56]. Therefore, for level-crossing sampling, multiple reference levels are used in practice (Figure 3.5) [7–11,56–63]. The work by Mark and Todd was the first study on level-crossing sampling with multiple reference levels addressed to the context of signal processing [57]. Level-crossing sampling is the most popular sampling scheme in event-based signal processing, adopted in the design of asynchronous analog-to-digital converters [7,10,60,61] and digital filters [11,16], as well as in
FIGURE 3.5 Level-crossing sampling with multiple reference levels.

FIGURE 3.6 Extremum sampling.

CT-DSP [8,9,15,17,43,65]. In most scenarios of level-crossing sampling, the reference levels are uniformly distributed in the amplitude domain. However, approaches to adaptive [60] or optimal [61] distribution of levels have also been proposed.

If hysteresis is adopted in triggering level-crossings (i.e., repeated crossings of the same level do not trigger new sampling operations), then such a level-crossing criterion matches the threshold-based sampling (send-on-delta scheme). However, sometimes both terms are used interchangeably.

The mean rate of level-crossing sampling is a function of the power spectral density of the sampled signal and depends on the signal bandwidth. This feature is intuitively comprehensible, as the rate of level-crossings is higher when the signal varies quickly and lower when it changes slowly. For bandlimited signals modeled by a stationary Gaussian process, the mean rate of level-crossing sampling is directly proportional to the signal bandwidth. In other words, the level-crossing scheme provides faster sampling when the local bandwidth becomes higher, and respectively slower sampling in regions of lower local bandwidth. In this way, level-crossing sampling provides dynamic estimation of the time-varying local signal bandwidth. The approach to local bandwidth estimation and signal reconstruction by the use of level-crossing sampling is presented in [56] in this book.

3.2.5.2 Extremum Sampling

By extending the concept of level-crossings to transforms of the input signal, new sampling criteria might be formulated. For example, the zero-crossings of the first derivative define the extremum sampling (mini–max or peak sampling), which relies on capturing signal values at its local extrema (Figure 3.6) [66–68]. Extremum sampling allows estimation of dynamically time-varying signal bandwidths similarly to level-crossings. However, unlike level-crossing, extremum sampling enables recovery of the original signal at only half of the Nyquist rate [68], because it is two-channel sampling—that is, the samples provide information both on the signal value and on the zeros of its first time-derivative. For recovery of the signal based on irregular derivative sampling, see [69].

3.2.5.3 Sampling Based on Timing

Reference-crossing sampling provides the framework for time encoding [70–72], also called sampling based on timing [73], a class of discrete-time signal representation complementary to classical sampling triggered by a timer. The concept of sampling by timing consists in recording the time at which the signal takes on a preset value, instead of recording the value of the signal at a preset time instant. The objective of sampling based on timing is to represent continuous-time signals fully in the time domain by a sequence of time intervals that include information on the signal values. In this sense, level-crossings do not allow signals to be encoded only in the time domain, because the information about which level has been crossed is also required [72].

Sampling by timing is an invertible signal transformation, provided that the density of time intervals in general exceeds the Nyquist rate. The perfect recovery of the input signal from the time-encoded output based on frame theory has been formulated in [70] and addressed in several later research studies.

As mentioned before, the motivation behind time encoding of signals is a desire to benefit from the increasingly better time resolution of nanoscale VLSI circuit technology, while the coding of signals in the amplitude domain becomes more difficult due to the necessity of reducing the supply voltage. Sampling by timing appears in nature and in biologically inspired circuit architectures (e.g., in integrate-and-fire neurons) that are usually based on integration. In various neuron models, the state transition (sampling) is triggered
when an integral of the input signal reaches a prespecified reference level [3,74–76]. In this sense, the time encoding realized in the integrate-and-fire neurons is based on level-crossings of the input signal integral. The time encoding referred to circuit-based integration has been adopted in time-based analog-to-digital conversion, see, for example, [77–80].

3.2.6 Cost-Aware Sampling

The measures related to the input signal may not be the only factor that determines the conditions for triggering the sampling operations. By extension of the concept of the event, the sampling criteria may address a combination of the control performance and the sampling cost. The control performance is defined by the threshold size or the density of reference levels. The sampling cost may address various application requirements, for example, the allocated fraction of communication bandwidth, energy consumption, or output buffer length. The fraction of bandwidth required for transmission of the samples is represented by the mean sampling rate, which is proportional to the energy consumption for wireless communication.

The first work on cost-aware sampling was published by Dormido et al. in the 1970s in the context of adaptive sampling [81]. The cost-aware sampling criteria may balance the control performance and resource utilization, for example, by “discouraging” the triggering of a new sample when the previous sample was taken not long ago, even if the object has changed its state significantly [81,82].

3.2.7 QoS-Based Sampling

Another class of sampling schemes, proposed for use in sensor and networked control systems, is focused on the quality of service (QoS) of the network [83]. QoS performance metrics typically include time delays, throughput, and packet loss. Given that the bandwidth usage and the control performance are linked together in networked control systems, a sampling rate that is too fast can, due to its contribution to the total network traffic, significantly increase the transmission delays and packet losses, and thereby degrade the control performance. Unlike in classical point-to-point control systems, the performance of control loops in networked control systems could paradoxically be increased with slower sampling, because the traffic reduction improves the QoS metrics [13].

The idea of adapting the sampling interval to varying network conditions (e.g., congestion, length of delays, and wireless channel quality) is presented in [83]. In the QoS-based sampling scheme, a sampling interval is extended if the network is overloaded, and vice versa. The actual sampling rate can depend on the current state of the QoS metrics, or on a weighted mixture of the network conditions and closed-loop control performance. The QoS-based sampling scheme should be classified into the adaptive sampling category, as the samples are triggered by a timer, but the adaptation mechanism is event-triggered and related to the state of the QoS performance metrics.

3.2.8 Adaptive Sampling

One of the principles of the event-based paradigm is that the events in event-triggered architectures occur unexpectedly—that is, they are contingent according to the classification provided in [19,21]. The attribute of unexpectedness of event occurrences is a great challenge for event-based systems design, because it precludes a priori allocation of suitable resources to handle them.

The well-known idea to cope with event unexpectedness relies on a prediction of a contingent event and its conversion to a determined event, since the latter is predictable and can be expressed strictly as a function of time. The instant of occurrence of a determined event is an approximation of the arrival time of a corresponding contingent event. In the context of the sampling strategy, this idea comes down to a conversion of event-based to adaptive sampling under the same criterion. More specifically, the current sampling interval in adaptive sampling is assigned according to the belief that the sampled signal will meet the assumed criterion (e.g., a change by delta) at the moment of the next sampling instant. The adaptive sampling system does not need to use event detectors, because a new sample is not captured when an event is detected; instead, the sampling instant is predicted based on the expansion of the signal at the actual sampling instant. For example, according to the adaptive sampling based on the constant absolute-difference criterion (send-on-delta), the next sample is taken after the time Ti+1 = Δ/|ẋ(ti)|, where ẋ(ti) is the value of the first time-derivative at the instant ti of the actual sample, and Δ is the sampling threshold (Figure 3.1) [84]. For the adaptive sampling based on the send-on-area criterion (integral-difference law), the next sampling time is Ti+1 = √(2μ/|ẋ(ti)|), where μ is the assumed sampling threshold (Figure 3.2) [85]. The difference between the event-triggered and adaptive sampling on the example of the send-on-delta criterion is illustrated in Figure 3.7. If the signal is convex (second time-derivative is negative), then the adaptive sample (t′i+1) is triggered earlier than the corresponding event-based sample (ti+1) (Figure 3.7), and vice versa.

To summarize, the essential difference between the adaptive sampling and the event-based sampling is that the former is based on the time-triggered strategy
FIGURE 3.7 Comparison of event-triggered sampling and adaptive sampling under the send-on-delta criterion.

where the sampling instants are controlled by the timer, while the latter belongs to the event-triggered systems, where the sampling operations are determined only by signal amplitude variations rather than by the progression of time.

The criteria for adaptive sampling [84–89] correspond to those for event-based sampling [5,22,23,32–36] and were formulated in the 1960s and 1970s. In particular, Mitchell and McDaniel introduced the constant absolute-difference criterion, which is the (adaptive sampling) equivalent of the pure (event-based) level-crossing/send-on-delta algorithm [84]. Dorf et al. proposed the constant integral-difference sampling law [85], which corresponds to the integral criterion (send-on-area) [32–34]. A unified approach to the design of adaptive sampling laws based on the same integral-difference criterion with different thresholds has been derived and proposed in [87]. In subsequent work, generalized criteria for adaptive sampling were introduced [88]. These criteria combine a sampling resolution with a sample cost modeled by the proposed cost functions, and [88] is the first work on cost-aware sampling design.

Adaptive sampling has been adopted in the self-triggered control strategy proposed in [89] and extensively investigated in the technical literature, see [90]. In self-triggered control, the time of the next update is precomputed at the instant of the actual control update based on predictions using previously received data and knowledge of the plant dynamics. Thus, the controller simultaneously computes both the actual control input and the sampling period for the next update. The difference is that the prediction of the sampling period realized in early adaptive control systems was based on continuous estimation of the state of the plant, while it is computed in discrete time by software in modern self-triggered control systems.

Among other applications, adaptive sampling is a useful technique to determine the sleeping time of wireless sensor nodes between sampling instants in order to decrease energy consumption [91].

3.2.9 Implicit Information on Intersample Behavior

All the event-based sampling algorithms produce samples irregularly in time. However, compared to nonuniform sampling, the use of event-triggered schemes provides some implicit information on the behavior of the signal between sampling instants [3], because “as long as no event is triggered, the actual measurement does not fulfill the event-sampling criterion” [94], or, rephrased equivalently, “it is not true that there is zero relevant information between threshold crossings; on the contrary, there is a valuable piece of information, namely that the output has not crossed a threshold” [92]. Furthermore, in [93], the implicit information carried by the lack of an event occurrence is termed the “negative information.” In particular, in the send-on-delta scheme, the implicit information means that the sampled signal does not vary between sampling instants by more than the threshold (delta) [5,22,23]. In the extremum sampling, it is known implicitly that the first time-derivative of the signal does not change its sign between sampling instants [66–68]. This extra information on intersample signal behavior might be additionally exploited to improve system performance. However, in the common strategy adopted in the existing research studies on event-based systems, the updates are restricted to event occurrences, for example, when a state of the plant varies by the threshold. The implicit information on intersample signal behavior has been used to improve system performance in the context of control only in a few works [92–94], see also [41] in this book. In these works, the implicit information is used to improve the state estimation by continuing updates between threshold crossings based on the known fact that the output lies within the threshold.

In the context of event-based signal processing, to the best of the author’s knowledge, the implicit information on intersample behavior has not been used. The adoption of extra knowledge of the signal between event-triggered sampling instants to improve system performance seems to be one of the major unused potentials of the event-based paradigm.

3.2.10 Event Detection

In signal processing, and in many approaches to event-based control, the events that trigger sampling are detected in continuous time. The continuous verification
FIGURE 3.8 Continuous-time versus discrete-time event detection.

of an event-triggering condition is the basis of continuous event-triggered control (CETC) [95] and requires the use of analog or mixed-signal (analog and digital) instrumentation (Figure 3.8a).

The alternative technique of event detection combines the event- and time-triggered strategies, where the events are detected in discrete time on the basis of equidistant samples (Figure 3.8b). The implementation of event-triggered control on top of periodically sampled data is referred to as the periodic event-triggered control (PETC) strategy [95]. In PETC, the sensor is sampled equidistantly, and event occurrence is verified at every sampling time in order to decide whether or not to transmit a new measurement to a controller. Thus, in PETC, the periodicity of control system components is relaxed in order to establish event-driven communication between them.

The idea to build the event-based strategy on top of time-triggered architectures has been proposed and adopted in several approaches in control and real-time applications (e.g., in [4,96]). The PETC strategy can be referred to the globally asynchronous locally synchronous (GALS) model of computation introduced in the 1980s [97] and adopted as the reference architecture for digital electronics, including system-on-chip design [98]. The PETC is convenient for practical applications. In particular, the implementation of event-triggered communication and processing activities on top of periodically sampled data has been applied in the send-on-delta scheme in the LonWorks networked control systems platform (ISO/IEC 14908) [26] and in EnOcean energy harvesting wireless technology.

3.3 Send-on-Delta Data Reporting Strategy

Most event-based sampling schemes applied to control, communication, and signal processing involve triggering of the sampling operations by temporal variations of the signal value.

3.3.1 Definition of Send-on-Delta Scheme

In the send-on-delta sampling scheme (Lebesgue sampling), the most generic sampling scheme, a continuous-time signal x(t) is sampled when its value deviates from the value included in the most recent sample by an interval of confidence ±Δ [5,22,23]. The parameter Δ > 0 is called the threshold, or delta (Figures 3.1 and 3.9b):

|x(ti+1) − x(ti)| = Δ,   (3.1)

where x(ti) is the ith signal sample. The size of the threshold (delta) defines a trade-off between system performance and the rate of samples: the smaller the threshold Δ, the higher the sampling rate.

By comparison, in periodic sampling, the signal is sampled equidistantly in time. The sampling period T is usually adjusted to the fastest expected change of the signal, as follows (Figure 3.9a):

T = Δ / |ẋ(t)|max,   (3.2)

where |ẋ(t)|max is the maximum slope of the sampled signal, defined as the maximum of the absolute value of the first signal derivative with respect to time, and Δ is the maximum difference between values in consecutive samples acceptable by the application. In [28], the send-on-delta scheme with prediction has been introduced as an enhanced version of the pure send-on-delta principle. This allows a further reduction of the number of samples. In the prediction-based send-on-delta scheme, x(t) is sampled when its value deviates from the value predicted on the basis of the most recent sample by a threshold ±Δ (Figure 3.9c and d):

|x(ti+1) − x̂(ti+1)| = Δ,   (3.3)

where x̂(ti+1) denotes the value predicted for the instant ti+1 on the basis of the time-derivative(s) by the use of the truncated Taylor series expanded at ti as follows:

x̂(ti+1) = x(ti) + ẋ(ti)(ti+1 − ti) + (ẍ(ti)/2)(ti+1 − ti)² + · · · ,   (3.4)

where ẋ(ti) and ẍ(ti) are, respectively, the first and second time-derivatives of the signal at ti. The send-on-delta scheme with zero-order prediction corresponds to the pure send-on-delta algorithm (Figure 3.9b):

x̂0(ti+1) = x(ti).   (3.5)

3.3.2 Illustrative Comparison of Number of Samples

It is intuitively comprehensible that the send-on-delta scheme reduces the mean sampling rate compared to the
Reducing Communication by Event-Triggered Sampling 47
FIGURE 3.9
Step response of the first-order control system sampled: (a) periodically (71 samples), (b) according to send-on-delta scheme (28 samples),
(c) with send-on-delta scheme and linear prediction (6 samples), and (d) with send-on-delta scheme and quadratic prediction (4 samples).
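The comparison shown in Figure 3.9 is easy to reproduce in outline. The sketch below is illustrative only, not the authors' implementation: continuous time is approximated by a dense grid, and the predictor's derivative is estimated from the two most recent samples, which is an assumption. The resulting counts depend on the grid and on the settling-time convention, so they match the figure only approximately.

```python
import math

def send_on_delta(t, x, delta):
    """Indices of samples triggered when |x(t) - x(t_i)| reaches delta."""
    idx = [0]
    for k in range(1, len(x)):
        if abs(x[k] - x[idx[-1]]) >= delta:
            idx.append(k)
    return idx

def send_on_delta_linear_pred(t, x, delta):
    """Send-on-delta with linear prediction, cf. (3.3) and (3.4); the
    derivative at t_i is estimated from the two most recent samples
    (an implementation assumption, not taken from [28])."""
    idx = [0]
    slope = 0.0
    for k in range(1, len(x)):
        i = idx[-1]
        x_hat = x[i] + slope * (t[k] - t[i])       # first-order prediction
        if abs(x[k] - x_hat) >= delta:
            slope = (x[k] - x[i]) / (t[k] - t[i])  # refresh the slope estimate
            idx.append(k)
    return idx

# Step response of a first-order system (tau = 0.42 s) observed over 1 s
tau, delta = 0.42, 0.028
t = [k * 1e-5 for k in range(100001)]
x = [1.0 - math.exp(-ti / tau) for ti in t]

n_per = 71   # periodic sampling every 14 ms over the settling time, as in the text
n_sod = len(send_on_delta(t, x, delta))
n_pred = len(send_on_delta_linear_pred(t, x, delta))
print(n_per, n_sod, n_pred)
```

Running the script shows the expected ordering: periodic sampling takes the most samples, pure send-on-delta considerably fewer, and the linear predictor fewer still.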

periodic sampling with the same maximum sampling error. The problem of comparative analysis of event-based sampling algorithms has been studied in several papers [99–102].

Figure 3.9a through d illustrates a comparison of the number of samples between periodic sampling and send-on-delta schemes. The step response of the first-order control system is used as the sampled signal. In order to make the comparison illustrative, the maximum sampling error (delta) is set to 0.028, and the time constant of the system is 0.42 s. In uniform sampling, the sampling period of 14 ms is selected in order to keep the difference between values of the consecutive samples not higher than 0.028. The number of samples is evaluated during the settling time of the step response (i.e., as long as the step response reaches 90% of its final value).

As expected, the number of samples in the send-on-delta scheme (28 samples, Figure 3.9b) is lower compared to the classical periodic sampling (71 samples, Figure 3.9a). The use of linear and quadratic prediction contributes to further reduction of the number of samples (6 samples with linear prediction and just 4 samples with quadratic prediction). The reduction of the number of samples is higher than one order of magnitude in the prediction-based send-on-delta algorithms compared to equidistant sampling. However, this reduction is obtained at the price of a necessity to detect changes of the signal and also to sample its time derivatives.

3.4 Send-on-Delta Bandwidth Evaluation

Since the send-on-delta sampling is triggered by asynchronous events, the time is a dependent variable in system modeling. The average requirements of networked system communication bandwidth for a single control loop are defined by the mean number of send-on-delta samples per unit of time.

3.4.1 Total Variation of Sampled Signal

As will be shown, the signal measure called the total variation of the sampled signal is essential to estimate the number of samples produced by the send-on-delta scheme.

In mathematics, the total variation of a function x(t) on the time interval [a, b], denoted by V_a^b(x), is formally defined as the supremum of the following sum [103]:

V_a^b(x) = sup Σ_{j=1}^{k} |x(t_j) − x(t_{j−1})|,   (3.6)

for every partition of the interval [a, b] into subintervals (t_{j−1}, t_j); j = 1, 2, . . . , k; and t_0 = a, t_k = b.

If x(t) is continuous, the total variation V_a^b(x) might be interpreted as the vertical component (ordinate) of the arc-length of the x(t) graph, or alternatively, as the sum of all consecutive peak-to-valley differences.

For continuous signals, the total variation V_a^b(x) of x(t) on the time interval [a, b] is

V_a^b(x) = ∫_a^b |ẋ(t)| dt.   (3.7)

The graphical interpretation of the total variation is illustrated in Figure 3.10. In the presented scenario, the total variation V_{t_i}^{t_{i+7}}(x) between instants t_i and t_{i+7} is a sum V_{t_i}^{t_{i+7}}(x) = V_{t_i}^{t_a}(x) + V_{t_a}^{t_b}(x) + V_{t_b}^{t_c}(x) + V_{t_c}^{t_d}(x) + V_{t_d}^{t_{i+7}}(x). Each component of this sum is represented by a corresponding arrow in Figure 3.10.

3.4.2 Total Signal Variation within Send-on-Delta Sampling Interval

Let us consider the total variation of the sampled signal between sampling instants in the send-on-delta scheme (Figure 3.10). If the signal x(t) does not contain any local extrema between consecutive send-on-delta instants t_i and t_{i+1}, then the derivative ẋ(t) does not change its sign within (t_i, t_{i+1}). Then, the total variation V_{t_i}^{t_{i+1}}(x) = Δ, because

V_{t_i}^{t_{i+1}}(x) = ∫_{t_i}^{t_{i+1}} |ẋ(t)| dt = |∫_{t_i}^{t_{i+1}} ẋ(t) dt| = Δ.   (3.8)

If the sampled signal x(t) contains local extrema between consecutive send-on-delta instants, which in Figure 3.10 occurs for (t_{i+1}, t_{i+2}), then ẋ(t) changes its sign, and

V_{t_{i+1}}^{t_{i+2}}(x) > Δ,   (3.9)

because

∫_{t_{i+1}}^{t_{i+2}} |ẋ(t)| dt > |∫_{t_{i+1}}^{t_{i+2}} ẋ(t) dt|.   (3.10)

In particular, if one local extremum occurs within (t_{i+1}, t_{i+2}), then [104]

Δ < V_{t_{i+1}}^{t_{i+2}}(x) < 3Δ.   (3.11)

Note that, referring to Figure 3.10, it is evident that V_{t_{i+1}}^{t_{i+2}}(x) ≅ 3Δ and V_{t_{i+3}}^{t_{i+4}}(x) ≅ Δ, which illustrates lower
FIGURE 3.10
Total variation of a continuous-time signal.
and upper bounds for signal variation in the send-on-delta sampling interval with a single local extremum. If the send-on-delta sampling interval (t_k, t_{k+1}) contains a number of m local extrema, then

Δ < V_{t_k}^{t_{k+1}}(x) < (2m + 1)Δ.   (3.12)

To summarize, the total variation during a number of n send-on-delta intervals (t_i, t_{i+n}) is lower bounded because V_{t_i}^{t_{i+n}}(x) ≥ nΔ. Therefore, the number of samples triggered during a certain time interval [a, b] according to the send-on-delta scheme is not higher than V_a^b(x)/Δ. In particular, if the signal x(t) is a monotone function of time in [a, b], then the number of samples n is exactly equal to V_a^b(x)/Δ:

n = V_a^b(x)/Δ,   (3.13)

provided that the fractional part of this number is neglected.

Finally, in a general case, the expression V_a^b(x)/Δ might be regarded as an upper bound on the number n of send-on-delta samples for the continuous signal x(t) during [a, b]:

n ≤ V_a^b(x)/Δ.   (3.14)

Taking into account asymptotic conditions, it could be easily shown that the number n of send-on-delta samples approaches V_a^b(x)/Δ if the threshold Δ becomes small [104]:

lim_{Δ→0} n = V_a^b(x)/Δ.   (3.15)

EXAMPLE 3.1 When the set point is changed by x_0 in the first-order control system, then the step response is given by x(t) = x_0(1 − e^{−t/τ}), which is a monotone signal presented in Figure 3.9a through d. On the basis of (3.7), it is clear that V_0^∞(x) = x_0, and the number of send-on-delta samples during the time [0, ∞), when the control system migrates to the new set point, according to (3.13) is

n = x_0/Δ.   (3.16)

3.4.3 Mean Rate of Send-on-Delta Sampling

Taking into account (3.7) and (3.14), the mean send-on-delta sampling rate λ during the time interval [a, b], defined as λ = n/(b − a), is constrained by λ̄:

λ ≤ |ẋ(t)|/Δ = λ̄,   (3.17)

where |ẋ(t)| is the mean slope of the signal—that is, the mean of the absolute value of the first time derivative of x(t) in the time interval [a, b]:

|ẋ(t)| = (1/(b − a)) ∫_a^b |ẋ(t)| dt,   (3.18)

or alternatively,

|ẋ(t)| = V_a^b(x)/(b − a).   (3.19)

For small Δ, the mean send-on-delta sampling rate λ approaches |ẋ(t)|/Δ, as follows from (3.15):

lim_{Δ→0} λ = |ẋ(t)|/Δ = λ̄.   (3.20)

Since the mean slope of the signal can be calculated on the basis of (3.18) analytically or numerically, either for deterministic or stochastic signals, the formula (3.17) provides a simple expression for the evaluation of the upper bound for the mean sampling rate produced by the send-on-delta scheme. In the context of a networked control system, this allows estimation of the bandwidth for transferring messages in a single control loop.

3.4.4 Mean Send-on-Delta Rate for Gaussian Processes

If x(t) is a random process with a root-mean-square value σ_x, bandlimited to ω_c = 2πf_c, and with Gaussian amplitude probability density function f(x) given by

f(x) = (1/(√(2π) σ_x)) exp(−x²/(2σ_x²)),   (3.21)

the mean slope of x(t) is [105]

|ẋ(t)| = √(8π/3) f_c σ_x.   (3.22)

On the basis of (3.17), the upper bound for the mean send-on-delta sampling rate for the bandlimited Gaussian random process x(t) is

λ ≤ √(8π/3) f_c σ̂_x,   (3.23)

where σ̂_x = σ_x/Δ is the root-mean-square value normalized to the sampling threshold Δ.

Let us introduce the measure of spectral density of the send-on-delta sampling rate as the ratio of the mean sampling rate λ and the Gaussian input process cutoff frequency f_c:

λ̂ = λ/f_c.   (3.24)
The measure λ̂ has an intuitive interpretation as the number of samples generated in unit of time by a sensor using a send-on-delta scheme for the Gaussian process of unit bandwidth 1 Hz. On the basis of (3.23), the spectral density of the send-on-delta sampling rate of the bandlimited Gaussian process is upper bounded as follows [105]:

λ̂ ≤ 2.89 σ̂_x.   (3.25)

EXAMPLE 3.2 For Δ = 0.5σ_x: λ̂ ≤ 5.78, which means that for a Gaussian bandlimited process with the cutoff frequency f_c = 1 Hz, the send-on-delta scheme produces no more than 5.78 samples per second on average.

3.4.5 Reducing Sampling Rate by Send-on-Delta

As illustrated in Figure 3.9a through d, the number of samples produced by the send-on-delta scheme is much lower compared to the periodic sampling. The analytical evaluation of sampling rate reduction has been studied in [23]. The reduction of the sampling rate might be defined as the ratio of the sampling frequency in the periodic sampling λ_T = 1/T and the mean sampling rate λ in the send-on-delta scheme for the same maximum sampling error (Δ):

p = λ_T/λ.   (3.26)

According to (3.2), the frequency of periodic sampling is λ_T = |ẋ(t)|_max/Δ. Since λ is upper bounded by |ẋ(t)|/Δ according to (3.17), the reduction of the sampling rate is limited by the following constraint:

p ≥ |ẋ(t)|_max/|ẋ(t)|.   (3.27)

The expression (3.27) becomes an equality on the basis of (3.13) if the sampled signal x(t) is a monotone function of time. Moreover, for small threshold Δ, the reduction of the sampling rate reaches the minimum:

p_min = lim_{Δ→0} p = |ẋ(t)|_max/|ẋ(t)|.   (3.28)

By giving the lower limit, the formula shown in (3.28) provides a conservative evaluation of sampling rate decrease and might be interpreted as the guaranteed reduction p_min. An interesting point is that p_min is only a function of the sampled signal x(t) and does not depend on the parameter of the sampling algorithm (Δ).

Reference [23] contains the analytical formulae for the guaranteed reduction computed for time responses of the first- and the second-order control systems. For example, the guaranteed effectiveness for the pure harmonic signal is π/2 ≅ 1.57. In [104], it is shown that the guaranteed reduction for a bandlimited Gaussian random process x(t) equals p_min = √(4.5π) ≅ 3.76, and is independent of the process bandwidth and the root-mean-square value.

EXAMPLE 3.3 The reduction p of the sampling rate for the step response of the first-order control system given by x(t) = x_0(1 − e^{−t/τ}) during the time interval (0, b), evaluated according to (3.27), equals [23]

p = b/[τ(1 − e^{−b/τ})].   (3.29)

The relevant step response x(t) = x_0(1 − e^{−t/τ}) is presented in Figure 3.9a through 3.9d. The reduction of the sampling rate on the basis of simulation results illustrated in Figure 3.9a and b is 71/28 ≅ 2.54. The reduction p calculated according to (3.29) with τ = 0.42 s and b = 1 s is ≅ 2.62, which agrees with simulation results despite rounding errors (±1 sample). Note that (3.29) represents not only guaranteed but even exact sampling reduction, because the relevant step response is monotone in time.

3.5 Extensions of Send-on-Delta Scheme

Both the level-crossing sampling and send-on-delta schemes are based on triggering sampling operations when the signal reaches certain values in the amplitude domain or exceeds an assumed linear sampling error. As mentioned, several extensions of the send-on-delta criterion have been proposed to the other signal measures: the integral sampling [32–34] (area-triggered sampling, or send-on-area scheme), or the sampling triggered by error energy changes (send-on-energy) [35,36].

3.5.1 Send-on-Area Criterion

A signal x(t) is sampled according to the uniform integral criterion (send-on-area) if the integral of the absolute error (IAE)—that is, the integral of the absolute difference between the current signal value x(t) and the value included in the most recent sample x(t_i), accumulated over an actual sampling interval—reaches a threshold δ > 0 [32,33]:

∫_{t_i}^{t_{i+1}} |x(t) − x(t_i)| dt = δ.   (3.30)
In the basic version, the zero-order hold is used to keep the value of the most recent sample between sampling instants, but the prediction may also be adopted to further reduce the number of samples, as carried out in the predictive send-on-delta scheme; see [28–30].

There are a few reasons to use an event-based integral criterion in control systems. First, the integrated absolute sampling error is a useful measure to trigger a control update, because a summation of the absolute temporal sampling errors counteracts mutual compensation of positive and negative errors. Second, the performance of control systems is usually defined as the integral of the absolute value of the error (IAE), which directly corresponds to the integral sampling criterion definition.

The mean sampling rate m in the integral sampling scheme with threshold δ is approximated as [32,33]

m ≅ (1/√(2δ)) √|ẋ(t)|,   (3.31)

where √|ẋ(t)| is the mean of the square root of the signal derivative absolute value:

√|ẋ(t)| = (∫_a^b √|ẋ(t)| dt)/(b − a).   (3.32)

The approximation shown in (3.31) is more accurate if the sampling threshold δ is low.

3.5.2 Send-on-Energy Criterion

A further extension of send-on-delta and send-on-area schemes is the event-triggered criterion based on the energy of the sampling error, which may be called the send-on-energy paradigm [35,36]. A signal x(t) is sampled at the instant t_{i+1} according to the error energy criterion if the energy of the difference between the signal value and the value included in the most recent sample x(t_i), accumulated during the actual sampling interval, reaches a threshold ζ > 0:

∫_{t_i}^{t_{i+1}} [x(t) − x(t_i)]² dt = ζ.   (3.33)

The difference between send-on-area and send-on-energy is that the successive sampling error values are squared before integration in the error energy–based sampling criterion. Compared to the integral sampling, the error energy criterion gives more weight to extreme sampling error values. The mean sampling rate s in the send-on-energy sampling scheme with the threshold ζ is [35,36]

s ≅ (1/∛(3ζ)) ∛([ẋ(t)]²),   (3.34)

where ∛([ẋ(t)]²) denotes the mean of the cubic root of the signal derivative square in the time interval [a, b]:

∛([ẋ(t)]²) = (∫_a^b ∛([ẋ(t)]²) dt)/(b − a).   (3.35)

The asymptotic reductions (i.e., reductions for small thresholds) of the sampling rate in send-on-area or send-on-energy compared to the periodic sampling, derived analytically in [23,33,36], are given by an expression:

χ_∞(α) = [ẋ(t)_max]^α / [ẋ(t)]^α,   (3.36)

where the denominator denotes the time average of [ẋ(t)]^α over [a, b], and where α = 1/2 for the send-on-area and α = 2/3 for the send-on-energy scheme (α = 1 recovers p_min for send-on-delta). It may be shown that the asymptotic reduction of the sampling rate for send-on-area is lower than for send-on-energy, and both are lower than the guaranteed reduction for send-on-delta:

χ_∞(α = 1/2) ≤ χ_∞(α = 2/3) ≤ p_min.   (3.37)

As the price of higher sampling reduction, the quality of approximation of the sampled signal by the send-on-delta is lower than for the send-on-area and send-on-energy criteria. References [33] and [36] include the analytical formulae of asymptotic reductions for the send-on-area or send-on-energy schemes for step responses of the first- and second-order control systems.

3.6 Lyapunov Sampling

As introduced in Section 3.2.4, the event-triggered criterion suggested to drive the control system to the neighborhood of the equilibrium is Lyapunov sampling based on the trajectory-crossings scheme. The Lyapunov sampling is stable provided that the energy decay ratio meets the following constraint: 0 < η* < η < 1. In this section, the exemplified scenario of Lyapunov sampling will be analyzed based on [44] and [46].

3.6.1 Stable Lyapunov Sampling

Following [44], consider a continuous-time control system:

ẋ(t) = f(x(t), u(t)),   (3.38)
with the control law

u(t) = k(x(t_i)) for t ∈ (t_i, t_{i+1}),

updated by a feedback controller k at instants t_0, t_1, . . . , t_i, . . . and with the control signal kept constant during each sampling interval. Taking into account (3.38), the closed-loop system is

ẋ(t) = f(x(t), u(t_i)).   (3.39)

Let V be a Lyapunov function of the control system and t_i be the instant of taking the current sample. The next sample that triggers the control update is captured at the time instant t_{i+1} when the following condition is met:

V(x(t_{i+1})) = ηV(x(t_i)), where 0 < η < 1.   (3.40)

The tunable design parameter of Lyapunov sampling, η, represents the system energy decay ratio. Intuitively, the sampling intervals cannot be too long if the stability of the continuous-time system is to be preserved. As proved in [44], the stability of the continuous system trajectory is preserved if η is lower bounded by η*:

0 < η* < η < 1,   (3.41)

where η* is calculated as

η* = max_{x_0} V*(x_0)/V(x_0),   (3.42)

and

V*(x_0) = min_t V(x(t, x_0)); t ≥ 0.   (3.43)

The Lyapunov sampling with η such that η* < η < 1 is called the stable Lyapunov sampling algorithm in the literature [44]. The setup of the control system with Lyapunov sampling will be illustrated by an example of a double integrator system analyzed originally in [44] and [46]:

ẋ(t) = Ax(t) + Bu(t),   (3.44)
y(t) = Cx(t),

with

A = [0 1; 0 0] and B = [0; 1],   (3.45)

for the initial conditions

x_0 = [0; −3],   (3.46)

with the control law

u(t) = −Kx(t).   (3.47)

The Lyapunov function for the system is

V(x) = x^T(t) P x(t),   (3.48)

where P is a positive definite matrix that makes the system stable. In the double integrator example,

P = [1.1455 0.1; 0.1 0.0545] and K = [10 11].   (3.49)

For the considered double integrator system, the value η* guaranteeing the stability of Lyapunov sampling equals 0.7818 [44]. On the basis of the results of the reproduced experiment reported originally in [44,46], double integrator system trajectories are shown for stable and unstable Lyapunov sampling in Figure 3.11a and b, respectively.

Figure 3.11a presents a system trajectory in the plane (x_1, x_2) with Lyapunov sampling for η = 0.8. The system is stable in the Lyapunov sense, because its energy decreases with each sample. The continuous control system is also stable since the system trajectory tends to the origin. As reported in [46], the performance of the double integrator system with Lyapunov sampling is similar to that with periodic sampling, although the number of samples is reduced by around 91%.

Figure 3.11b illustrates the system trajectory with Lyapunov sampling for η = 0.65. The lower value of η promises to take fewer samples. However, the system is unstable. After a capture of seven samples, the triggering condition is no longer satisfied, and a new sample is never taken (see Figure 3.11b). The trajectory diverges because a nonupdated control signal is unable to drive the system to equilibrium.

3.6.2 Less-Conservative Lyapunov Sampling

The lower bound η* stated by (3.42) for the energy decay ratio η in Lyapunov sampling is a sufficient but not a necessary condition for system stability. Moreover, the condition (3.41) is quite conservative, and the use of η values close to one enforces relatively frequent sampling that drives the discrete system into a quasi-periodic regime. For many initial conditions, the dynamics of the system is stable even for η < η* [46]. Furthermore, as argued in [46], a calculation of η* is computationally complex, especially for nonlinear systems, and the existence of η* is not always guaranteed.

To address these limitations, the less-conservative Lyapunov sampling with the gain factor η varying dynamically during the evolution of the system is studied in [46]. The parameter η is decreased ad hoc (even below η*) when the system is stable, and is increased as soon as it becomes unstable. This allows the number of samples to be reduced almost twice compared with
FIGURE 3.11
(a) System trajectory in plane ( x1 , x2 ) for η = 0.8 with stable Lyapunov sampling. (b) System trajectory in plane ( x1 , x2 ) for η = 0.65 with
unstable Lyapunov sampling.
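The stable case of Figure 3.11a can be reproduced approximately with a few lines of simulation. The sketch below uses forward Euler integration with a fixed step—an implementation choice of this illustration, not taken from [44] or [46]—for the double integrator (3.44) through (3.49) with η = 0.8, holding the control between Lyapunov-triggered updates.

```python
import math

# Double integrator (3.44)-(3.47) with Lyapunov sampling (3.40);
# P and K from (3.49), eta = 0.8 (the stable case of Figure 3.11a).
K = (10.0, 11.0)
P = ((1.1455, 0.1), (0.1, 0.0545))
eta = 0.8

def V(x1, x2):
    """Lyapunov function (3.48) for the symmetric matrix P."""
    return P[0][0] * x1 * x1 + 2 * P[0][1] * x1 * x2 + P[1][1] * x2 * x2

dt, steps = 1e-4, 60000             # 6 s of simulated time
x1, x2 = 0.0, -3.0                  # initial condition (3.46)
s1, s2 = x1, x2                     # most recent sample
samples = 1
for _ in range(steps):
    u = -(K[0] * s1 + K[1] * s2)        # control held between samples
    x1, x2 = x1 + dt * x2, x2 + dt * u  # forward Euler for x' = Ax + Bu
    if V(x1, x2) <= eta * V(s1, s2):    # event condition (3.40)
        s1, s2 = x1, x2                 # take a sample, update the control
        samples += 1

nrm = math.hypot(x1, x2)
print(samples, nrm)
```

With η = 0.8 > η* the condition (3.40) keeps firing and the state converges toward the origin; setting η = 0.65 instead lets the triggering stall after a few samples, as in Figure 3.11b.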

original stable Lyapunov sampling for double integrator systems, while offering similar performance. Alternatively, to preserve system stability, the Lyapunov sampling has been proposed to be augmented with “safety sampling” that enforces a sudden control update as soon as the system energy increases [46].

3.7 Conclusions

Event-based sampling and communication are emerging research topics driven mainly by the fast development of event-triggered monitoring and control and
its implementation in networked sensor and control systems, especially those based on a wireless link. The fundamental postulate for formulation of event-based sampling criteria for control is to achieve satisfactory system performance at a low rate of events. The overview of research studies shows that in many application scenarios, the event-based sampling schemes in control can provide around one order of magnitude of traffic reduction compared to conventional periodic sampling, while still keeping similar system performance. On the other hand, in signal processing, the objective of event-based sampling is to eliminate the clock from system architecture, benefit from increasing time resolution provided by nanoscale VLSI technology, or avoid aliasing. The present study is a survey of event-triggered sampling schemes, both in control and signal processing, aimed at emphasizing the motivation and wide technical context of event-based sampling criteria design.

Acknowledgment

This work was supported by the Polish National Center of Science under grant DEC-2012/05/E/ST7/01143.

Bibliography

[1] K. J. Åstrom, P. R. Kumar, Control: A perspective, Automatica, vol. 50, no. 1, pp. 3–43, 2014.

[2] W. P. M. H. Heemels, K. H. Johansson, P. Tabuada, An introduction to event-triggered and self-triggered control, Proceedings of the IEEE Conf. on Decision and Control CDC 2012, 2012, pp. 3270–3285.

[3] K. J. Åstrom, Event-based control, in Analysis and Design of Nonlinear Control Systems (A. Astolfi, L. Marconi, Eds.), Springer, New York 2008, pp. 127–147.

[4] K. E. Arzén, A simple event-based PID controller, Proceedings of IFAC World Congress, vol. 18, 1999, pp. 423–428.

[5] K. J. Åstrom, B. Bernhardsson, Comparison of periodic and event based sampling for first-order stochastic systems, Proceedings of IFAC World Congress, 1999, pp. 301–306.

[6] W. P. M. H. Heemels, R. J. A. Gorter, A. van Zijl, P. P. J. van den Bosch, S. Weiland, W. H. A. Hendrix, et al., Asynchronous measurement and control: A case study on motor synchronization, Control Eng. Pract., vol. 7, pp. 1467–1482, 1999.

[7] N. Sayiner, H. V. Sorensen, T. R. Viswanathan, A level-crossing sampling scheme for A/D conversion, IEEE Trans. Circuits Syst. II: Analog Digital Signal Process., vol. 43, no. 4, pp. 335–339, 1996.

[8] Y. Tsividis, Continuous-time digital signal processing, Electron. Lett., vol. 39, no. 21, pp. 1551–1552, 2003.

[9] Y. Tsividis, Digital signal processing in continuous time: A possibility for avoiding aliasing and reducing quantization error, Proceedings of IEEE International Conf. on Acoustics, Speech, and Signal Processing ICASSP 2004, 2004, pp. 589–592.

[10] E. Allier, G. Sicard, L. Fesquet, M. Renaudin, A new class of asynchronous A/D converters based on time quantization, Proceedings of IEEE International Symposium on Asynchronous Circuits and Systems ASYNC 2003, 2003, pp. 196–205.

[11] F. Aeschlimann, E. Allier, L. Fesquet, M. Renaudin, Asynchronous FIR filters: Towards a new digital processing chain, Proceedings of International Symposium on Asynchronous Circuits and Systems ASYNC 2004, 2004, pp. 198–206.

[12] L. Grune, S. Hirche, O. Junge, P. Koltai, D. Lehmann, J. Lunze, et al., Event-based control, in Control Theory of Digitally Networked Dynamic Systems (J. Lunze, Ed.), Springer, New York 2014, pp. 169–261.

[13] P. G. Otanez, J. R. Moyne, D. M. Tilbury, Using deadbands to reduce communication in networked control systems, Proceedings of American Control Conf. ACC 2002, vol. 4, 2002, pp. 3015–3020.

[14] R. M. Murray, K. J. Astrom, S. P. Boyd, R. W. Brockett, G. Stein, Control in an information rich world, IEEE Control Systems Magazine, vol. 23, no. 2, pp. 20–33, 2003.

[15] Y. Tsividis, Event-driven data acquisition and digital signal processing: A tutorial, IEEE Trans. Circuits Syst. II: Express Briefs, vol. 57, no. 8, pp. 577–581, 2010.

[16] L. Fesquet, B. Bidégaray-Fesquet, Digital filtering with non-uniformly sampled data: From the algorithm to the implementation, in Event-Based
Control and Signal Processing (M. Miskowicz, Ed.), CRC Press, Boca Raton, FL 2015, pp. 515–528.

[17] Y. Tsividis, M. Kurchuk, P. Martinez-Nuevo, S. M. Nowick, S. Patil, B. Schell, C. Vezyrtzis, Event-based data acquisition and digital signal processing in continuous time, in Event-Based Control and Signal Processing (M. Miskowicz, Ed.), CRC Press, Boca Raton, FL 2015, pp. 353–378.

[18] J. Elson, D. Estrin, An address-free architecture for dynamic sensor networks. Technical Report 00-724. University of Southern California, Computer Science Department, Los Angeles, CA 2000.

[19] R. E. Nance, The time and state relationships in simulation modeling, Commun. ACM, vol. 24, no. 4, pp. 173–179, 1981.

[20] H. Kopetz, Real-time systems. Design principles for distributed embedded applications, Springer, New York 2011.

[21] R. Obermaisser, Event-triggered and time-triggered control paradigms, Springer, New York 2004.

[22] P. H. Ellis, Extension of phase plane analysis to quantized systems, IRE Trans. Autom. Control, vol. 4, pp. 43–59, 1959.

[23] M. Miskowicz, Send-on-delta concept: An event-based data reporting strategy, Sensors, vol. 6, pp. 49–63, 2006.

[24] E. Kofman, J. H. Braslavsky, Level crossing sampling in feedback stabilization under data-rate constraints, Proceedings of IEEE Conf. on Decision and Control CDC 2006, 2006, pp. 4423–4428.

[25] M. de la Sen, J. C. Soto, A. Ibeas, Stability and limit oscillations of a control event-based sampling criterion, J. Appl. Math., 2012, pp. 1–25.

[26] M. Miskowicz, R. Golański, LON technology in wireless sensor networking applications, Sensors, vol. 6, no. 1, pp. 30–48, 2006.

[27] M. Andelic, S. Berber, A. Swain, Collaborative Event Driven Energy Efficient Protocol (CEDEEP), IEEE Wireless Commun. Lett., vol. 2, pp. 231–234, 2013.

[28] Y. S. Suh, Send-on-delta sensor data transmission with a linear predictor, Sensors, vol. 7, pp. 537–547, 2007.

[29] D. Bernardini, A. Bemporad, Energy-aware robust model predictive control based on wireless sensor feedback, Proceedings of IEEE Conf. on Decision and Control CDC 2008, 2008, pp. 3342–3347.

[30] K. Staszek, S. Koryciak, M. Miskowicz, Performance of send-on-delta sampling schemes with prediction, Proceedings of IEEE International Symposium on Industrial Electronics ISIE 2011, 2011, pp. 2037–2042.

[31] F. Xia, Z. Xu, L. Yao, W. Sun, M. Li, Prediction-based data transmission for energy conservation in wireless body sensors, Proceedings of Wireless Internet Conf. WICON 2010, 2010, pp. 1–9.

[32] M. Miskowicz, The event-triggered integral criterion for sensor sampling, Proceedings of IEEE International Symposium on Industrial Electronics ISIE 2005, vol. 3, 2005, pp. 1061–1066.

[33] M. Miskowicz, Asymptotic effectiveness of the event-based sampling according to the integral criterion, Sensors, vol. 7, pp. 16–37, 2007.

[34] V. H. Nguyen, Y. S. Suh, Networked estimation with an area-triggered transmission method, Sensors, vol. 8, pp. 897–909, 2008.

[35] M. Miskowicz, Sampling of signals in energy domain, Proceedings of IEEE Conf. on Emerging Technologies and Factory Automation ETFA 2005, vol. 1, 2005, pp. 263–266.

[36] M. Miskowicz, Efficiency of event-based sampling according to error energy criterion, Sensors, vol. 10, no. 3, pp. 2242–2261, 2010.

[37] J. Sánchez, A. Visioli, S. Dormido, Event-based PID control, in PID Control in the Third Millennium. Advances in Industrial Control (R. Villanova, A. Visioli, Eds.), Springer, New York 2012, pp. 495–526.

[38] J. W. Marck, J. Sijs, Relevant sampling applied to event-based state-estimation, Proceedings of the International Conf. on Sensor Technologies and Applications SENSORCOMM 2010, 2010, pp. 618–624.

[39] S. Trimpe, Distributed and Event-Based State Estimation and Control, PhD Thesis, ETH Zürich, Switzerland 2013.

[40] S. Trimpe, R. D'Andrea, Event-based state estimation with variance-based triggering, IEEE Trans. Autom. Control, vol. 59, no. 12, pp. 3266–3281, 2014.

[41] J. Sijs, B. Noack, M. Lazar, U. D. Hanebeck, Time-periodic state estimation with event-based measurement updates, in Event-Based Control and Signal Processing (M. Miskowicz, Ed.), CRC Press, Boca Raton, FL 2015, pp. 261–280.
[42] J. Lunze, Event-based control: Introduction and survey, in Event-Based Control and Signal Processing (M. Miskowicz, Ed.), CRC Press, Boca Raton, FL 2015, pp. 3–20.

[43] Y. Chen, M. Kurchuk, N. T. Thao, Y. Tsividis, Spectral analysis of continuous-time ADC and DSP, in Event-Based Control and Signal Processing (M. Miskowicz, Ed.), CRC Press, Boca Raton, FL 2015, pp. 409–420.

[44] M. Velasco, P. Martí, E. Bini, On Lyapunov sampling for event-driven controllers, Proceedings of Joint IEEE Conf. on Decision and Control and Chinese Control Conf. CDC/CCC 2009, 2009, pp. 6238–6243.

[45] J. Yépez, C. Lozoya, M. Velasco, P. Marti, J. M. Fuertes, Preliminary approach to Lyapunov sampling in CAN-based networked control systems, Proceedings of Annual Conf. of IEEE Industrial Electronics IECON 2009, 2009, pp. 3033–3038.

[46] S. Durand, N. Marchand, J. F. Guerrero Castellanos, Simple Lyapunov sampling for event-driven control, Proceedings of the IFAC World Congress, 2011, pp. 8724–8730.

[47] P. Tallapragada, N. Chopra, Lyapunov based sampling for adaptive tracking control in robot manipulators: An experimental comparison, in Experimental Robotics, Springer Tracts in Advanced Robotics, vol. 88, Springer, New York 2013, pp. 683–698.

[48] J. Yen, On nonuniform sampling of bandwidth-limited signals, IRE Trans. Circuit Theory, vol. 3, no. 4, pp. 251–257, 1956.

[49] F. J. Beutler, Error-free recovery of signals from irregularly spaced samples, Siam Rev., vol. 8, no. 3, pp. 328–335, 1966.

[50] H. G. Feichtinger, K. Gröchenig, Theory and practice of irregular sampling, in Wavelets: Mathe-

[53] J. Selva, Efficient sampling of band-limited signals from sine wave crossings, IEEE Trans. Signal Process., vol. 60, no. 1, pp. 503–508, 2012.

[54] I. F. Blake, W. C. Lindsey, Level-crossing problems for random processes, IEEE Trans. Inf. Theory, vol. 3, pp. 295–315, 1973.

[55] S. O. Rice, Mathematical analysis of random noise, Bell System Tech. J., vol. 24, no. 1, pp. 46–156, 1945.

[56] D. Rzepka, M. Pawlak, D. Kościelnik, M. Miskowicz, Reconstruction of varying bandwidth signals from event-triggered samples, in Event-Based Control and Signal Processing (M. Miskowicz, Ed.), CRC Press, Boca Raton, FL 2015, pp. 529–546.

[57] J. W. Mark, T. D. Todd, A nonuniform sampling approach to data compression, IEEE Trans. Commun., vol. 29, no. 1, pp. 24–32, 1981.

[58] C. Vezyrtzis, Y. Tsividis, Processing of signals using level-crossing sampling, Proceedings of IEEE International Symposium on Circuits and Systems ISCAS 2009, 2009, pp. 2293–2296.

[59] S. Senay, L. F. Chaparro, M. Sun, R. J. Sclabassi, Adaptive level-crossing sampling and reconstruction, Proceedings of European Signal Processing Conf. EUSIPCO 2010, 2010, pp. 196–1300.

[60] K. M. Guan, S. S. Kozat, A. C. Singer, Adaptive reference levels in a level-crossing analog-to-digital converter, EURASIP J. Adv. Signal Process., Article No. 183, 2008.

[61] S. S. Kozat, K. M. Guan, A. C. Singer, Tracking the best level set in a level-crossing analog-to-digital converter, Digital Signal Process., vol. 23, no. 1, pp. 478–487, 2013.

[62] M. Greitans, R. Shavelis, Speech sampling by level-crossing and its reconstruction using spline-
matics and Applications (J. J. Benedetto, M. W. based filtering, Proceedings of International Work-
Frazier, Eds.), CRC Press, Boca Raton, FL 1994, shop on Systems, Signals and Image Processing 2007,
pp. 305–363. 2007, pp. 292–295.

[51] N. T. Thao, Event-based data acquisition and [63] A. Can-Cimino, L. F. Chaparro, Asynchronous
reconstruction—Mathematical background, in processing of nonstationary signals, in Event-Based
Event-Based Control and Signal Processing Control and Signal Processing (M. Miskowicz, Ed.),
(M. Miskowicz, Ed.), CRC Press, Boca Raton, CRC Press, Boca Raton, FL 2015, pp. 441–456.
FL 2015, pp. 379–408.
[64] C. G. Cassandras, Event-driven control and opti-
[52] I. Bar-David, An implicit sampling theorem for mization in hybrid systems, in Event-Based Control
bounded band-limited functions, Inf. Control, and Signal Processing (M. Miskowicz, Ed.), CRC
vol. 24, pp. 36–44, 1974. Press, Boca Raton, FL 2015, pp. 21–36.
Reducing Communication by Event-Triggered Sampling 57

[65] D. Brückmann, K. Konrad, T. Werthwein, Con- replacement, Proceedings of IEEE International Sym-
cepts for hardware efficient implementation posium on Circuits and Systems ISCAS 2011, 2011,
of continuous time digital signal processing, pp. 2421–2424.
in Event-Based Control and Signal Processing
(M. Miskowicz, Ed.), CRC Press, Boca Raton, FL [76] A. S. Alvarado, M. Rastogi, J. G. Harris, J. C.
2015, pp. 421–440. Principe, The integrate-and-fire sampler: A special
type of asynchronous Σ-Δ modulator, Proceedings
[66] M. Greitans, R. Shavelis, L. Fesquet, of IEEE International Symposium on Circuits and
T. Beyrouthy, Combined peak and level-crossing Systems ISCAS 2011, 2011, pp. 2031–2034.
sampling scheme, Proceedings of International Conf.
on Sampling Theory and Applications SampTA 2011, [77] J. Daniels, W. Dehaene, M. S. J. Steyaert,
pp. 1–4, 2011. A. Wiesbauer, A/D conversion using asyn-
chronous Delta-Sigma modulation and time-to-
[67] I. Homjakovs, M. Hashimoto, T. Hirose, digital conversion, IEEE Trans. Circuits Syst. II:
T. Onoye, Signal-dependent analog-to-digital Express Briefs, vol. 57, no. 9, pp. 2404–2412, 2010.
conversion based on MINIMAX sampling, IEICE
Trans. Fundam. Electron. Commun. Comput. Sci., [78] D. Kościelnik, M. Miskowicz, Asynchronous
vol. E96-A, no. 2, pp. 459–468, 2013. Sigma-Delta analog-to-digital converter based on
the charge pump integrator, Analog Integr. Circuits
[68] D. Rzepka, M. Miskowicz, Recovery of varying- Signal Process, vol. 55, pp. 223–238, 2008.
bandwidth signal from samples of its extrema,
Signal Processing: Algorithms, Architectures, Ar- [79] L. Hernandez, E. Prefasi, Analog-to-digital con-
rangements, and Applications SPA 2013, 2013, pp. version using noise shaping and time encoding,
143–148. IEEE Trans. Circuits Syst. I: Regular Papers, vol. 55,
no. 7, pp. 2026–2037, 2008.
[69] D. Rzepka, M. Miskowicz, A. Gryboś, D.
Kościelnik, Recovery of bandlimited signal [80] J. Daniels, W. Dehaene, M. S. J. Steyaert,
based on nonuniform derivative sampling, Pro- A. Wiesbauer, A/D conversion using asyn-
ceedings of International Conf. on Sampling Theory chronous Delta-Sigma modulation and time-to-
and Applications SampTA 2013, pp. 1–5, 2013. digital conversion, IEEE Trans. Circuits Syst. II:
Express Briefs, vol. 57, no. 9, pp. 2404–2412, 2010.
[70] A. A. Lazar, L. T. Tóth, Perfect recovery and sen-
sitivity analysis of time encoded bandlimited sig- [81] S. Dormido, M. de la Sen, M. Mellado, Criterios
nals, IEEE Trans. Circuits Syst. I: Regular Papers, generales de determinacion de leyes de muestreo
vol. 52, no. 10, pp. 2060–2073, 2005. adaptive (General criteria for determination of
[71] A. A. Lazar, E. A. Pnevmatikakis, Video time adaptive sampling laws), Revista de Informatica y
encoding machines, IEEE Trans. Neural Networks, Automatica, vol. 38, pp. 13–29, 1978.
vol. 22, no. 3, pp. 461–473, 2011.
[82] M. Miskowicz, The event-triggered sampling opti-
[72] A. A. Lazar, E. K. Simonyi, L. T. Toth, Time mization criterion for distributed networked mon-
encoding of bandlimited signals, an overview, itoring and control systems, Proceedings of IEEE
Proceedings of the Conf. on Telecommunication Sys- International Conf. on Industrial Technology ICIT
tems, Modeling and Analysis, November 2005. 2003, vol. 2, 2003, pp. 1083–1088.

[73] G. David, M. Vetterli, Sampling based on timing: [83] J. Colandairaj, G. W. Irwin, W. G. Scanlon, Wireless
Time encoding machines on shift-invariant sub- networked control systems with QoS-based sam-
spaces, Appl. Comput. Harmon. Anal., vol. 36, no. 1, pling, IET Control Theory Appl., vol. 1, no. 1, pp.
pp. 63–78, 2014. 430–438, 2007.

[74] L. C. Gouveia, T. Koickal, A. Hamilton, Spike [84] J. Mitchell, W. McDaniel, Adaptive sampling tech-
event coding scheme, in Event-Based Control and nique, IEEE Trans. Autom. Control, vol. 14, no. 2,
Signal Processing (M. Miskowicz, Ed.), CRC Press, pp. 200–201, 1969.
Boca Raton, FL 2015, pp. 487–514.
[85] R. C. Dorf, M. C. Farren, C. A. Phillips, Adaptive
[75] M. Rastogi, A. S. Alvarado, J. G. Harris, J. C. sampling for sampled-data control systems, IEEE
Principe, Integrate and fire circuit as an ADC Trans. Autom. Control, vol. 7, no. 1, pp. 38–47, 1962.
58 Event-Based Control and Signal Processing

[86] T. C. Hsia, Comparisons of adaptive sampling [97] D. M. Chapiro, Globally-asynchronous locally-


control laws, IEEE Trans. Autom. Control, vol. 17, synchronous systems, PhD Thesis, Stanford
no. 6, pp. 830–831, 1974. University, CA 1984.

[87] T. C. Hsia, Analytic design of adaptive sampling [98] M. Krstić, E. Grass, F. K. Gurkaynak, P. Vivet,
control laws, IEEE Trans. Autom. Control, vol. 19, Globally asynchronous, locally synchronous cir-
no. 1, pp. 39–42, 1974. cuits: Overview and outlook, IEEE Design Test,
vol. 24, no. 5, pp. 430–441, 2007.
[88] W. P. M. H. Heemels, R. Postoyan, M. C. F. (Tijs)
Donkers, A. R. Teel, A. Anta, P. Tabuada, D. Nešić, [99] J. Ploennigs, V. Vasyutynskyy, K. Kabitzsch, Com-
Periodic Event-Triggered Control, in Event-Based parison of energy-efficient sampling methods for
Control and Signal Processing (M. Miskowicz, Ed.), WSNs in building automation scenarios, Proceed-
CRC Press, Boca Raton, FL 2015, pp. 203–220. ings of IEEE International Conf. on Emerging Tech-
nologies and Factory Automation ETFA 2009, 2009,
[89] M. Velasco, P. Marti, J. M. Fuertes, The self- pp. 1–8.
triggered task model for real-time control systems,
Proceedings of 24th IEEE Real-Time Systems Sympo- [100] J. Sánchez, M. A. Guarnes, S. Dormido, On
sium RTSS 2003, pp. 1–4, 2003. the application of different event-based sampling
strategies to the control of a simple industrial
[90] C. Nowzari, J. Cortés, Self-triggered and team- process, Sensors, vol. 9, pp. 6795–6818, 2009.
triggered control of networked cyber-physical sys-
tems, in Event-Based Control and Signal Processing [101] V. Vasyutynskyy, K. Kabitzsch, Towards compar-
(M. Miskowicz, Ed.), CRC Press, Boca Raton, FL ison of deadband sampling types, Proceedings of
2015, pp. 203–220. IEEE International Symposium on Industrial Electron-
ics ISIE 2007, 2007, pp. 2899–2904.
[91] J. Ploennigs, V. Vasyutynskyy, K. Kabitzsch,
Comparative study of energy-efficient sampling [102] M. Miskowicz, Event-based sampling strategies
approaches for wireless control networks, IEEE in networked control systems, Proceedings of IEEE
Trans. Ind. Inf., vol. 6, no. 3, pp. 416–424, 2010. International Workshop on Factory Communication
Systems WFCS 2014, 2014, pp. 1–10.
[92] M. G. Cea, G. C. Goodwin, Event based sampling
in non-linear filtering, Control Eng. Pract., vol. 20, [103] F. Riesz, B. Nagy, Functional Analysis, Dover, New
no. 10, pp. 963–971, 2012. York 1990.

[93] J. Sijs, M. Lazar, On event based state estima- [104] M. Miskowicz, Bandwidth requirements for
tion, in Hybrid Systems: Computation and Control, event-driven observations of continuous-time
Springer, New York 2009, pp. 336–350. variable, Proceedings of IFAC Workshop on Discrete
Event Systems, 2004, pp. 475–480.
[94] J. Sijs, B. Noack, U. D. Hanebeck, Event-based
state estimation with negative information, Pro- [105] M. Miskowicz, Efficiency of level-crossing sam-
ceedings of IEEE International Conf. on Information pling for bandlimited Gaussian random processes,
Fusion—FUSION 2013, 2013, pp. 2192–2199. Proceedings of IEEE International Workshop on Fac-
tory Communication Systems WFCS 2006, 2006,
[95] W. P. M. H. Heemels, M. C. F. Donkers, Model- pp. 137–142.
based periodic event-triggered control for linear
systems, Automatica, vol. 49, pp. 698–711, 2013.

[96] F. de Paoli, F. Tisato, On the complementary nature


of event-driven and time-driven models, Control
Eng. Pract., vol. 4, no. 6, pp. 847–854, 1996.
4
Event-Triggered versus Time-Triggered Real-Time Systems:
A Comparative Study

Roman Obermaisser
University of Siegen
Siegen, Germany

Walter Lang
University of Siegen
Siegen, Germany

CONTENTS
4.1 Event-Triggered and Time-Triggered Control
4.2 Requirements of Real-Time Architectures in Control Applications
    4.2.1 Composability
    4.2.2 Real-Time Requirements
    4.2.3 Flexibility
    4.2.4 Heterogeneous Timing Models
    4.2.5 Diagnosis
    4.2.6 Clock Synchronization
    4.2.7 Fault Tolerance
4.3 Communication Protocols
    4.3.1 Time-Triggered Network-on-a-Chip
    4.3.2 Æthereal
    4.3.3 Time-Triggered Protocol
    4.3.4 Time-Triggered Ethernet
    4.3.5 Time-Triggered Controller Area Network
4.4 Example of a Control Application Based on Time-Triggered and Event-Triggered Control
Bibliography

ABSTRACT This chapter compares event-triggered and time-triggered systems, both of which can be used for the realization of an embedded control application. Inherent properties of time-triggered systems such as composability, real-time support, and diagnosis are well-suited for safety-relevant control applications. Therefore, we provide an overview of time-triggered architectures at chip level and at cluster level. The chapter finishes with an example of a control application, which is realized as an event-triggered system and as a time-triggered system in order to experimentally compare the respective behaviors.

4.1 Event-Triggered and Time-Triggered Control

Based on the source of temporal control signals, one can distinguish time-triggered and event-triggered systems. Time-triggered systems are designed for periodic activities, where all computational and communication activities are initiated at predetermined global points in time. The temporal behavior of a time-triggered communication network is controlled solely by the progression of time. Likewise, time-triggered operating systems perform scheduling decisions at predefined points in time according to schedule tables. Event-triggered systems, on the other hand, initiate activities whenever significant events occur. Any state change of relevance for a given application (e.g., external interrupt, software exception) can trigger a communication or computational activity.

In time-triggered systems, contention is resolved using the temporal coordination of resource access based on time division multiple access (TDMA). TDMA statically divides the capacity of a resource into a number of slots and assigns unique slots to every component.
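The static slot arithmetic behind TDMA can be sketched in a few lines. This is a minimal illustration, not part of any protocol discussed in this chapter; the slot length, component count, and microsecond timescale are illustrative assumptions.

```python
# Sketch of static TDMA media access: slot ownership follows solely from
# the progression of global time, with no arbitration at run time.
# SLOT_LEN_US and N_COMPONENTS are hypothetical example values.

SLOT_LEN_US = 500    # assumed duration of one sending slot (microseconds)
N_COMPONENTS = 4     # components in the ensemble; one TDMA round = 4 slots


def sending_component(global_time_us: int) -> int:
    """Return the component allowed to send at the given global time."""
    slot_index = global_time_us // SLOT_LEN_US
    return slot_index % N_COMPONENTS


def round_number(global_time_us: int) -> int:
    """Return the TDMA round that the given instant falls into."""
    return global_time_us // (SLOT_LEN_US * N_COMPONENTS)


# Within one round (here 2000 us), every component sends exactly once:
owners = [sending_component(t) for t in range(0, 2000, 500)]
# owners == [0, 1, 2, 3]
```

Because the mapping from time to sender is fixed at design time, every component can derive the full schedule locally, which is what makes the conflict-free, arbitration-free access described above possible.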
In case of time-triggered networks, the communication activities of every component are controlled by a time-triggered communication schedule. The schedule specifies the temporal pattern of message transmissions (i.e., at what points in time components send and receive messages). A sequence of sending slots, which allows every component in an ensemble of n components to send exactly once, is called a TDMA round. The sequence of the different TDMA rounds forms the cluster cycle and determines the periodicity of the time-triggered communication.

The a priori knowledge about the times of message exchanges enables the communication network to operate autonomously. Hence, the correct temporal behavior of the communication network is independent of the temporal behavior of the application software and can be verified in isolation.

Likewise, time-triggered operating systems and hypervisors use TDMA to manage access to processors. For example, XtratuM [1], LynxOS [2], and VxWorks [3] support fixed cyclic scheduling of partitions according to ARINC-653 [4].

Time-triggered systems typically employ implicit synchronization [5]. Components agree a priori on the global points in time when resources are accessed, thereby avoiding race conditions, interference, and inconsistency. In case of networks, a component's ability to handle received messages can be ensured at design time by fixing the points in time of communication activities. This implicit flow control [5] does not require acknowledgment mechanisms and is well-suited for multicast interactions, because a unidirectional data flow involves only a unidirectional control flow [6].

4.2 Requirements of Real-Time Architectures in Control Applications

This section discusses the fundamental requirements of an architecture for control applications. These requirements include composability, real-time support, flexibility, support for heterogeneous timing models, diagnosis, clock synchronization, and fault tolerance.

4.2.1 Composability

In many engineering disciplines, large systems are built from prefabricated components with known and validated properties. Components are connected via stable, understandable, and standardized interfaces. The system engineer has knowledge about the global properties of the components as they relate to the system functions and of the detailed specification of the component interfaces. Knowledge about the internal design and implementation of the components is neither needed nor available in many cases. Composability deals with all issues that relate to the component-based design of large systems.

Composability is a concept that relates to the ease of building systems out of subsystems [7]. We assume that subsystems have certain properties that have been validated at the subsystem level. A system, that is, a composition of subsystems, is considered composable with respect to a certain property, if this property, given that it has been established at the subsystem level, is not invalidated by the integration. Examples of such properties are timeliness or certification.

Composability is defined as the stability of component properties across integration. Composability is necessary for correctness-by-construction of component-based systems [8]. Temporal composability is an instantiation of the general notion of composability. A system is temporally composable, if temporal correctness is not refuted by the system integration [9]. For example, temporal correctness is essential to preserve the stability of a control application on the integration of additional components.

A necessary condition for temporal composability is that if n components are already integrated, the integration of component n + 1 will not disturb the correct operation of the n already integrated components. This condition guarantees that the integration activity is linear and not circular. It has stringent implications for the management of the resources. Time-triggered systems satisfy this condition, because the allocation of resources occurs at design time (i.e., prior to system integration). In an event-triggered system, however, which manages the resources dynamically, it must be ensured that even at the critical instant, that is, when all components request the resources at the same instant, the specified timeliness of all resource requests can be satisfied. Otherwise, failures will occur sporadically with a failure rate that is increasing with the number of integrated components. When resources are multiplexed between components, each newly integrated component will affect the temporal properties (e.g., network latencies, processor execution time) of already integrated components.

Time-triggered systems support invariant temporal properties during an incremental integration process. The resources are statically assigned to each component, and each newly integrated component simply exploits a time slot that has already been reserved at design time via a static schedule.

4.2.2 Real-Time Requirements

Many control applications are real-time systems, where the achievement of control stability and safety depends
on the completion of activities (like reading sensor values, performing computations, conducting communication activities, implementing actuator control) in bounded time. In real-time systems, missed deadlines represent system failures with the potential for consequences as serious as in the case of providing incorrect results.

In the case of networks, important performance attributes are the bandwidth, the network delay, and the variability of the network delay (i.e., communication jitter). The bandwidth is a measure of the available communication resources expressed in bits/second. The bandwidth is an important parameter as it determines the types of functions that can be handled and the number of messages and components that can be handled by the communication network.

The network delay denotes the time difference between the production of a message at a sending component and the reception of the last bit of the message at a receiving component. Depending on whether the communication protocol is time triggered or event triggered, the network delay exhibits different characteristics. In a time-triggered system, the send instants of all components are periodically recurring instants, which are globally planned in the system and defined with respect to the global time base. The network delay is independent of the traffic from other components and depends solely on the relationship between the transmission request instant of a message and the preplanned send instants according to the time-triggered schedule. Furthermore, since the next send instant of every component is known a priori, a component can synchronize locally the production of periodic messages with the send instant and thus minimize the network delay of a message.

In an event-triggered system, the network delay of a message depends on the state of the communication system at the transmission request instant of a message. If the communication network is idle, the message transmission can start immediately, leading to a minimal transport delay. If the channel is busy, then the transport delay depends on the media access strategy implemented in the communication protocol. For example, in the Carrier Sense Multiple Access/Collision Detection (CSMA/CD) protocol of the Ethernet [10], components wait for a random delay before attempting transmission again. In the Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) protocol of the Controller Area Network (CAN) [11], the network delay of a message depends on its priority relative to the priorities of other pending messages. Hence, in an event-triggered network, the network delay of a message is a global property that depends on the traffic patterns of all components.

A bounded network delay with a minimum variability is important in many embedded applications. Achievement of control stability in real-time applications depends on the completion of actions such as reading sensor values, performing computations, and performing actuator control within bounded time. Hard real-time systems must guarantee bounded response times even in the case of peak load and fault scenarios. For example, in drive-by-wire applications, the dynamics for steered wheels in closed control loops enforce computer delays of less than 2 ms [12]. Taking the vehicle dynamics into account, a transient outage-time of the steering system must not exceed 50 ms [12].

While control algorithms can compensate known delays, delay jitter (i.e., the difference between the maximum and minimum value of delay) brings an additional uncertainty into a control loop, which has an adverse effect on the quality of control [5]. Delay jitter represents an uncertainty about the instant a physical entity was observed and can be expressed as an additional error in the value domain. In case of low jitter or a global time base with good precision, state estimation techniques allow compensation for a known delay between the time of observation and the time of use of a real-time image. State estimation introduces models of physical quantities to compute the probable state of a variable at a future point in time.

4.2.3 Flexibility

Many systems are faced with the need to incorporate new application functionalities using new or changed components, for example, upgrade of a factory automation system with new Computerized Numerical Control (CNC) machines. Likewise, control applications need to cope with changes in processors and network technologies, which is also known as the challenge of technology obsolescence.

Event-triggered systems provide more flexibility in coping with new components and technological changes. For example, an additional component sending messages on a CAN bus does not require any changes in already existing nodes. In contrast, time-triggered systems require the computation of new schedules and TDMA schemes for the entire system. However, the increased flexibility of event-triggered systems does not prevent undesirable implications such as missed deadlines after upgrades to the system. Although a change is functionally transparent to existing components, the resulting effects on extrafunctional properties and real-time performance can induce considerable efforts for revalidation and recertification.

4.2.4 Heterogeneous Timing Models

Due to the respective advantages of event-triggered and time-triggered systems, support for both control
paradigms is desirable. The rationale of these integrated architectures is the effective covering of mixed-criticality systems, in which a safety-critical subsystem exploits time-triggered services, and a nonsafety-critical subsystem can exploit event-triggered services. Thereby, the system can support different, possibly contradicting requirements from different application subsystems. For this reason, several communication protocols integrating event-triggered and time-triggered control have been developed [13–16]. These protocols differ at the level at which the integration takes place (i.e., either at the media access control [MAC] layer or above), and the basic operational principles for separating event-triggered and time-triggered traffic.

Communication protocols for the integration of event-triggered and time-triggered control can be classified, depending on whether the integration occurs on the MAC layer or through an overlay network.

In case of MAC layer integration, the start and end instants of the periodic time-triggered message transmissions, as well as the sending components, are specified at design time. For this class of messages, contention is resolved statically. All time-triggered message transmissions follow an a priori defined schedule, which repeats itself with a fixed round length, which is the least common multiple of all time-triggered message periods. Within each round, the time intervals that are not consumed by time-triggered message exchanges are available for event-triggered communication. Consequently, time is divided into two types of slots: event-triggered and time-triggered. The time-triggered slots are used for periodic preplanned message transmissions, while event-triggered slots are located in between the time-triggered slots. In event-triggered slots, message exchanges depend on external control, and the start instants of message transmissions can vary. Furthermore, event-triggered slots can be assigned to multiple (or all) components of the system. For this reason, the MAC layer needs to support the dynamic resolving of contention when more than one component intends to transmit a message. During event-triggered slots, a subprotocol (e.g., CSMA/CA, CSMA/CD) takes over that is not required during time-triggered slots, in which contention is prevented by design.

Event-triggered overlay networks based on a time-triggered communication protocol are another solution for the integration of the two control paradigms by layering event-triggered communication on top of time-triggered communication. The MAC protocol is TDMA—that is, time is divided into slots and each slot is statically assigned to a component that exclusively sends messages during this slot. A subset of the slots in each communication round is used for the construction of event-triggered overlay networks. Event-triggered overlay networks have been established for different event-triggered protocols [15,17].

4.2.5 Diagnosis

In order to achieve the required level of safety, many control applications require that errors in the system be reliably detected and isolated in bounded time.

In general, error detection mechanisms can be implemented at the architectural level, or within the application itself. The advantage of implementations at the architectural level is that they are built in a generic way and, thus, can be verified and certified. Furthermore, they can be implemented directly in hardware (e.g., within the communication controller), which can reduce the error detection latency and relieves the host CPU from computational overhead for error detection. Nevertheless, according to Saltzer's end-to-end argument [18], every safety-critical system must contain additional end-to-end error detection mechanisms at the application level, in order to cover the entire controlled process.

Error detection mechanisms can be realized based on syntactic checks and semantic checks, which are implemented at the protocol level in many time-triggered protocols, and on error detection by active redundancy, which is usually implemented at a higher level.

The syntactic checks are targeted at the syntax of the received messages. Protocols usually check for the satisfaction of specific constraints defined in the message format. Examples are start and end sequences of the entire message or cyclic redundancy checks (CRCs).

Semantic checks can be implemented efficiently in time-triggered protocols, due to the a priori knowledge of communication patterns. Examples of semantic errors that can be detected by using this knowledge are omission failures, invalid sender identifiers, and violation of slot boundaries.

Value errors in a message's payload cannot be detected by the above-mentioned mechanisms, if the payload CRC and the message timing are valid. Such value errors can be systematically detected by active redundancy, which means that the sending component is replicated, and the values generated by the redundant senders are compared at the receivers.

4.2.6 Clock Synchronization

Due to clock drifts, the clock times in an ensemble of clocks will drift apart, if clocks are not periodically resynchronized. Clock synchronization is concerned with bringing the values of clocks in close relation with respect to each other. A measure for the quality of synchronization is the precision.
Event-Triggered versus Time-Triggered Real-Time Systems 63
are synchronized to each other with a specified precision offers a global time. The global time of the ensemble is represented by the local time of each clock, which serves as an approximation of the global time.

If the counter value of the global time is read by a component at a particular point in time, it is guaranteed that this value can only differ by one tick from the value that is read at any other correct component at the same point in time. This property is also known as the reasonableness of the global time, which states that the precision of the local clocks at the components is lower than the granularity of the global time base. Optionally, the global time base can be synchronized with an external time source (e.g., GPS or the global time base of another integration level). In this case, the accuracy denotes the maximum deviation of any local clock from the external reference time.

The global time base allows the temporal coordination of distributed activities (e.g., synchronized messages), as well as the interrelation of timestamps assigned at different components.

4.2.7 Fault Tolerance

Fault detection, fault containment, and fault masking are three fault-tolerance techniques that build consecutively upon each other. These techniques differ in the incurred overhead and in their utility for realizing a reliable system. For example, fault detection allows the application to react to faults using an application-specific strategy (e.g., entering a safe state).

A fault containment region (FCR) is the boundary of the immediate impact of a fault. In conformance with the fault-error-failure chain introduced by Laprie [19], one can distinguish between faults that cause the failure of an FCR (e.g., a design fault in the hardware or software of the FCR, or an operational fault of the FCR) and faults at the system level. The latter type of fault is a failure of an FCR, which could propagate to other FCRs through a sent message that deviates from the specification. If the transmission instant of the message violates the specification, we speak of a message timing failure. A message value failure means that the data structure contained in a message is incorrect.

Such a failure of an FCR can be tolerated by distributed fault-tolerance mechanisms. The masking of failures of FCRs is denoted as error containment [20], because it avoids error propagation by the flow of erroneous messages. The error detection mechanisms must be part of different FCRs than the message sender [21]. Otherwise, the error detection mechanism may be impacted by the same fault that caused the message failure.

For example, a common approach for masking component failures is N-modular redundancy (NMR) [22]. The N replicas receive the same requests and provide the same service. The output of all replicas is provided to a voting mechanism, which selects one of the results (e.g., based on majority) or transforms the results into a single one (average voter). The most frequently used N-modular configuration is triple modular redundancy (TMR). We denote three replicas in a TMR configuration as a fault-tolerant unit (FTU). In addition, we consider the voter at the input of a component and the component itself as a self-contained unit, which receives the replicated inputs and performs voting by itself without relying on an external voter. We call this behavior incoming voting.

An important prerequisite for error containment is independent FCRs. Common-mode failures are failures of multiple FCRs, which are correlated and occur due to a common cause. Common-mode failures occur when the assumption of the independence of FCRs is compromised. They can result from replicated design faults or from common operational faults such as a massive transient disturbance. Common-mode failures of the replicas in an FTU must be avoided, because any correlation in the instants of the failures of FCRs significantly decreases the reliability improvements that can be achieved by NMR [23].

Replica determinism has to be supported by the architecture to ensure that the replicas of an FTU produce the same outputs in defined time intervals. A time-triggered communication system addresses key issues of replica determinism. In particular, a time-triggered communication system supports replica determinism by exploiting the global time base in conjunction with preplanned communication and computational schedules. Computational activities are triggered after the last message of a set of input messages has been received by all replicas of an FTU. This instant is a priori known due to the predefined time-triggered schedules. Thus, each replica wakes up at the same global tick and operates on the same set of input messages. The alignment of communication and computational activities on the global time base ensures temporal predictability and avoids race conditions.

4.3 Communication Protocols

This section gives an overview of time-triggered protocols at the chip level and in distributed systems. In addition, we discuss how the requirements of control applications are addressed in the protocols (cf. overview in Table 4.1).
TABLE 4.1
Time-Triggered Protocols

           Protocol   Composability    Real-Time (Bounded   Timing Models        Diagnosis    Global  Fault        Error
                                       Latency and Jitter)                                    Time    Containment  Containment
On-chip    TTNoC      Yes              Yes                  Periodic and         Local error  Yes     Yes          Yes
                                                            sporadic             detection
           AEthereal  Yes              Yes                  Periodic and         Local error  No      Yes          No
                                                            sporadic             detection
Off-chip   TTP        Yes              Yes                  Periodic             Membership   Yes     Yes          Yes
           TTE        Yes (time-       Yes (time-           Periodic, sporadic,  Local error  Yes     Yes          Yes
                      triggered        triggered            aperiodic            detection
                      traffic)         traffic)
           TTCAN      Yes (time-       Yes (time-           Periodic, sporadic,  Local error  Yes     Yes          Yes
                      triggered        triggered            aperiodic            detection
                      traffic)         traffic)
FIGURE 4.1
GENESYS MPSoC with example NoC topology: microcomponents (each a host with a network interface), together with the trusted network authority and the resource management authority, are attached via routers to the time-triggered network-on-a-chip.
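All protocols summarized above rely on the same TDMA principle: time is divided into periodic, conflict-free sending slots that are statically assigned to components. A minimal sketch of such a static slot schedule follows; the slot count, slot duration, and component identifiers are illustrative assumptions:

```c
#include <stdint.h>

#define SLOTS_PER_ROUND 4u     /* slots in one TDMA round (illustrative) */
#define SLOT_LEN_US     250u   /* slot duration in microseconds          */

/* Static slot-to-sender assignment, fixed at design time. */
static const uint8_t slot_owner[SLOTS_PER_ROUND] = { 0, 1, 2, 3 };

/* Map the global time (in microseconds) to the component that owns the
 * current slot. Contention is excluded by construction, because each
 * slot has exactly one statically assigned sender. */
static uint8_t current_sender(uint64_t global_time_us)
{
    uint32_t slot =
        (uint32_t)((global_time_us / SLOT_LEN_US) % SLOTS_PER_ROUND);
    return slot_owner[slot];
}
```

At global time 600 µs, for example, the round is in slot 2, so component 2 may send; every other component stays silent until its own slot recurs one round (here 1 ms) later. This design-time assignment is why no runtime contention resolution is needed in time-triggered slots.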
4.3.1 Time-Triggered Network-on-a-Chip

The time-triggered network-on-a-chip (TTNoC) is part of the GENESYS Multi-Processor Systems-on-a-Chip (MPSoC) architecture [24]. This network-on-a-chip (NoC) interconnects multiple, possibly heterogeneous IP blocks called microcomponents. The MPSoC introduces a trusted subsystem, which ensures that a fault (e.g., a software fault) within the host of a microcomponent cannot lead to a violation of the microcomponent's temporal interface specification in a way that the communication between other microcomponents would be disrupted. For this reason, the trusted subsystem prevents a faulty microcomponent from sending messages during the sending slots of any other microcomponent. Figure 4.1 shows a GENESYS MPSoC with a mesh-based NoC topology.

Furthermore, the GENESYS MPSoC architecture supports integrated resource management. For this purpose, dedicated architectural elements called the Trusted Network Authority (TNA) and the Resource Management Authority (RMA) accept resource allocation requests from the microcomponents and reconfigure the MPSoC, for example, by dynamically updating the time-triggered communication schedule of the NoC and switching between power modes.

The TTNoC interconnects the microcomponents of an MPSoC. The purposes of the TTNoC encompass clock synchronization for the establishment of a global time base, as well as the predictable transport of periodic and sporadic messages.

Clock Synchronization. The TTNoC performs clock synchronization in order to provide a global time base for all microcomponents despite the existence of multiple clock domains. The TTNoC is based on a uniform time format for all configurations, which has been
standardized by the Object Management Group (OMG) in the smart transducer interface standard [25].

The time format of the NoC is a binary time format that is based on the physical second. Fractions of a second are represented as 32 negative powers of two (down to about 232 ps), and full seconds are represented in 32 positive powers of two (up to about 136 years). This time format is closely related to the time format of the Global Positioning System (GPS) and takes the epoch from GPS. In case there is no external synchronization [26], the epoch starts with the power-up instant.

Predictable Transport of Messages. Using TDMA, the available bandwidth of the NoC is divided into periodic conflict-free sending slots. We distinguish between two utilizations of a periodic time-triggered sending slot by a microcomponent. A sending slot can be used for the periodic transmission of messages or the sporadic transmission of messages. In the latter case, a message is sent only if the sender must transmit a new event to the receiver.

The allocation of sending slots to microcomponents occurs using a communication primitive called a pulsed data stream [27]. A pulsed data stream is a time-triggered periodic unidirectional data stream that transports data in pulses with a defined length from one sender to n a priori identified receivers at a specified phase of every cycle of a periodic control system.

The pulsed behavior of the communication network enables the efficient transmission of large data in applications requiring a temporal alignment of sender and receiver. Temporal alignment of sender and receiver is required in applications where a short latency between sender and receiver is demanded. This is typical for many real-time systems.

For example, consider a control loop realized by three microcomponents performing sensor data acquisition (A), processing of the control algorithm (C), and actuator operation (E), as is schematically depicted in Figure 4.2. In this application, temporal alignment between the sensor data transmission (B) and the start of the processing of the control algorithm (cf. instant 3 in Figure 4.2), as well as between the transmission of the control value (D) and the start of the actuator output (cf. instant 5), is vital to reduce the end-to-end latency of the control loop, which is an important quality characteristic. By specifying two pulsed data streams corresponding to (B) and (D) in Figure 4.2, efficient, temporally aligned data transmission can be achieved.

Contrary to the on-chip communication system of the introduced MPSoC architecture, many existing NoCs provide only a guaranteed bandwidth to the individual senders without support for temporal alignment. The resulting consequences are as follows:

• The short latency cannot be guaranteed.

• A high bandwidth has to be granted to the sender throughout the entire period of the control cycle, although it is only required for a short interval.

• The communication system has to be periodically reconfigured in order to free and reallocate the nonused communication resources.

Similarly, in a fault-tolerant system that masks failures by TMR, a high-bandwidth communication service is required for short intervals to exchange and vote on the state data of the replicated channels. A real-time communication network should consider these pulsed communication requirements and provide appropriate services.

Furthermore, the GENESYS MPSoC provides extended functionality with respect to message ordering. The GENESYS MPSoC guarantees consistent delivery order across multiple channels. Consistent delivery order means that any two microcomponents will see the same sequence of reception events within the intersection of the two sets of messages that are received by the two microcomponents. Consistent delivery ordering is a prerequisite for replica-deterministic behavior of microcomponents, which is required for the transparent masking of hardware errors by TMR [28].

4.3.2 Æthereal

CompSOC [29] is an architectural approach for mixed-criticality integration on a multicore chip. The main interest of CompSOC is to establish composability and predictability. Hardware resources, such as processor tiles, NoC, and memory tiles, are virtualized to ensure temporal and spatial isolation. The result is a set of virtual platforms connected via virtual wires based on the

Period

A Observation B Transmission C Processing of D Transmission of E Output at the A Observation


of sensor input of input data control algorithm output data actuator of sensor input

1 2 3 4 5 6 1 Real time

FIGURE 4.2
Cyclic time-triggered control activities.
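The cyclic activities of Figure 4.2 amount to a static, time-triggered dispatch table in which each phase is released at a fixed offset within the period. The period and offsets below are illustrative assumptions; the chapter does not give concrete values:

```c
#include <stdint.h>

typedef enum { SENSE, TX_INPUT, CONTROL, TX_OUTPUT, ACTUATE } phase_t;

typedef struct {
    uint32_t offset_us;   /* release instant within the period */
    phase_t  phase;       /* activity started at that instant  */
} sched_entry_t;

#define PERIOD_US 10000u  /* control period (illustrative)     */

/* Instants 1-5 of Figure 4.2: observation, input transmission,
 * control algorithm, output transmission, actuator output.     */
static const sched_entry_t schedule[] = {
    {    0u, SENSE     },
    { 1000u, TX_INPUT  },
    { 2500u, CONTROL   },
    { 6000u, TX_OUTPUT },
    { 7500u, ACTUATE   },
};

/* Return the phase released at the given global time, or -1 if the
 * instant is not a release point of the static schedule.           */
static int due_phase(uint64_t global_time_us)
{
    uint32_t t = (uint32_t)(global_time_us % PERIOD_US);
    for (unsigned i = 0; i < sizeof schedule / sizeof schedule[0]; i++)
        if (schedule[i].offset_us == t)
            return (int)schedule[i].phase;
    return -1;
}
```

Because sender and receiver share the global time base, the offsets can be chosen so that each transmission completes just before the activity that consumes its data starts, which is exactly the temporal alignment that keeps the end-to-end latency of the loop small.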
Æthereal NoC [30]. A static TDMA scheme with preemption is used for processor and NoC scheduling, and shared resources have a bounded worst-case execution time (WCET) to ensure determinism. CompSOC is independent from the model of computation used by the application, but a reconfiguration of hardware resources at runtime is not supported.

Analogous to the time-triggered NoC, the TDMA scheme is employed to avoid conflicts on the shared interconnects and to provide encapsulation and timing guarantees for the individual communication channels. The objective of Æthereal is to establish resource guarantees with respect to bandwidth and latency. The Æthereal architecture [31] combines guaranteed services (such as guaranteed throughput and bounded latency) with best-effort services. Guaranteed services aid in the compositional design of robust SoCs, while best-effort services increase the resource efficiency by exploiting the NoC capacity that is left unused by the guaranteed services.

The constituting elements of the architecture are network interfaces (NIs) and routers, which are interconnected by links. The NIs translate the protocol of the attached IP core to the NoC-internal packet-based protocol. The routers transport messages between the NIs. Æthereal offers a shared-memory abstraction to the attached IP modules [32] and employs a transaction-based master/slave protocol. The transaction-based model was chosen to ensure backward compatibility with existing on-chip network protocols like AXI [33] or OCP [34].

In the Æthereal NoC, the signals of an IP core with a standardized interface (e.g., AXI or OCP) are sequentialized into request and response messages, which are transported by the NoC in the form of packets. The communication services are provided via so-called connections that are composed of multiple unidirectional peer-to-peer channels (e.g., a typical peer-to-peer OCP connection would consist of a request channel and a reply channel; a multicast or narrow-cast connection can be implemented by a collection of peer-to-peer connections, one for each master–slave pair).

Channels offer two types of service classes: guaranteed throughput (GT) and best effort (BE). GT channels give guarantees on minimum bandwidth and maximum latency by using a TDMA scheme. The TDMA scheme is based on a table with a given number of time slots (e.g., 128 slots). In each slot, a network interface can read and write at most one block of data. Given the duration of a slot, the size of a block that can be transferred within one slot, and the number of slots in the table, a slot corresponds to a given bandwidth B. Therefore, reserving N slots for a channel results in a total bandwidth of N ∗ B. The granularity with which the bandwidth of a channel can be reserved equals 1/S of the maximum channel bandwidth, where S is the number of slots in the TDMA table.

Within a single channel, temporal ordering of messages is guaranteed. This means that all messages are received in the same order as they were sent. Since the NI treats different channels as different entities, ordering guarantees are provided only within a single channel. Across different channels, message reordering is possible.

Æthereal provides a shared-memory abstraction to the attached IP cores via a transaction-based master/slave protocol as used in OCP or AXI. These protocols define low-level signals like address signals, data signals, interrupt signals, reset signals, or clock signals. Examples where such protocols are typically employed are at the interfaces of processors, memory subsystems, or bus bridges.

4.3.3 Time-Triggered Protocol

The Time-Triggered Protocol (TTP) is a communication protocol for distributed fault-tolerant real-time systems. It is designed for applications with stringent requirements concerning safety and availability, such as avionic, automotive, industrial control, and railway systems. TTP was initially named TTP/C and later renamed to TTP. The initial name of the communication protocol originated from the classification of communication protocols of the Society of Automotive Engineers (SAE), which distinguishes four classes of in-vehicle networks based on performance. TTP/C satisfies the highest performance requirements in this classification of in-vehicle networks and is suitable for network classes C and above.

TTP provides a consistent distributed computing base [35] in order to ease the construction of reliable distributed applications. Given the assumptions of the fault hypothesis, TTP guarantees that all correct nodes perceive messages consistently in the value and time domains. In addition, TTP provides consistent information about the operational state of all nodes in the cluster. For example, in the automotive domain these properties would reduce the effort for the realization of a safety-critical brake-by-wire application with four braking nodes. Given the consistent information about inputs and node failures, each of the nodes can adjust the braking force to compensate for the failure of other braking nodes. In contrast, the design of distributed algorithms becomes more complex [36] if nodes cannot be certain that every other node works on the same data. In such a case, the agreement problem has to be solved at the application level.
The communication services of TTP support predictable message transport with a small variability of the latency. The smallest unit of transmission and MAC on the TTP network is a TDMA slot. A TDMA slot is a time interval with a fixed duration that can be used by a node to broadcast a message to all other nodes. A sequence of TDMA slots is called a TDMA round. The cluster cycle defines a pattern of periodically recurring TDMA rounds.

The time of the TDMA slots and the allocation of TDMA slots to nodes are specified in a static data structure called the Message Descriptor List (MEDL). The MEDL is the central configuration data structure in TTP. Each node possesses its own MEDL, which reflects the node's communication actions (e.g., sending of messages, clock synchronization) and parameters (e.g., delays to other nodes). At design time, TTP development tools are used to temporally align the MEDLs of the different nodes with respect to the global time base. For example, the period and phase of a message transmission are aligned with the respective message receptions, taking into account propagation delays and jitter.

The fault-tolerant clock synchronization maintains a specified precision and accuracy of the global time base, which is initially established by the restart and startup services when transiting from asynchronous to synchronous operation. The fault-tolerant average (FTA) algorithm [26] is used for clock synchronization in TTP. The FTA algorithm computes the convergence function for the clock synchronization within a single TDMA round. It is designed to tolerate k Byzantine faults in a system with N nodes. Therefore, the FTA algorithm bounds the error that can be introduced by arbitrarily faulty nodes. These nodes can provide inconsistent information to the other nodes.

The diagnostic services provide the application with feedback about the operational state of the nodes and the network using a consistent membership vector. A node A considers another node B as operational if node A has correctly received the message that was sent by node B prior to the membership point. In case redundant communication channels are used, the reception on one of the channels is sufficient in order to consider a sender to be operational. The diagnostic services, in conjunction with the a priori knowledge about the permitted behavior of nodes, are the basis for the fault isolation services of TTP.

TTP was designed to isolate and tolerate an arbitrary failure of a single node during synchronized operation [37]. After the error detection and the isolation of the node, a consecutive failure can be handled. Given fast error detection and isolation mechanisms, such a single-fault hypothesis is considered to be suitable in many safety-critical systems [38]. The fault hypothesis assumes an arbitrary failure mode of a single node.

In order to tolerate timing failures, a TTP cluster uses local or central bus guardians. In addition, the bus guardian protects the cluster against slightly off-specification faults [39], which can lead to ambiguous results at the receiver nodes.

A local bus guardian is associated with a single TTP node and can be physically implemented as a separate device or within the TTP node (e.g., on the silicon die of the TTP communication controller or as a separate chip). The local bus guardian uses the a priori knowledge about the time-triggered communication schedule in order to ensure fail-silence of the respective node. If the node intends to send outside the preassigned transmission slot in the TDMA scheme, the local bus guardian cuts off the node from the network. In order to avoid common-mode failures of the guardian and the node, TTP suggests the provision of an independent external clock source for the local bus guardian.

The central bus guardian is always implemented as a separate device, which protects the TDMA slots of all attached TTP nodes. Advantages compared to the local bus guardians are the higher resilience against spatial proximity faults and the ability to handle slightly off-specification faults.

4.3.4 Time-Triggered Ethernet

A Time-Triggered Ethernet (TTEthernet) [40] network consists of a set of nodes and switches, which are interconnected using bidirectional communication links. TTEthernet combines different types of communication on the same network. A service layer is built on top of IEEE 802.3, thereby complementing layer two of the Open Systems Interconnection model [40].

TTEthernet supports synchronous communication using so-called time-triggered messages. Each participant of the system is configured offline with preassigned time slots based on a global time base. This network-access method based on TDMA offers predictable transmission behavior without queuing in the switches and achieves low latency and low jitter.

While time-triggered periodic messages are always sent with a corresponding period and phase, time-triggered sporadic messages are assigned a periodic time slot but only transmitted if the host actually requests a transmission [41]. These messages can be used to transmit high-priority information about occurring events, but are transmitted only if the event really happens. If sporadic messages are not sent, the unused bandwidth of the communication slot can be used for other message types. Transmission request times for sporadic messages are unknown, but it is known that a
minimum time interval exists between successive transmission requests. The knowledge about this minimum time interval can be used to compute the minimum period of the periodic communication slot.

The bandwidth that is either not assigned to time-triggered messages or assigned but not used is free for asynchronous message transmissions. TTEthernet defines two types of asynchronous messages: rate constrained and best effort. Rate-constrained messages are based on the AFDX protocol and intended for the transmission of data with less stringent real-time requirements [42]. Rate-constrained messages support bounded latencies but incur higher jitter compared to time-triggered messages. Best-effort messages are based on standard Ethernet and provide no real-time guarantees.

The different types of messages are associated with priorities in TTEthernet. Time-triggered messages have the highest priority, whereas best-effort messages are assigned the lowest priority. Using these priorities, TTEthernet supports three mechanisms to resolve collisions between the different types of messages [40,43]:

• Shuffling. If a low-priority message is being transmitted while a high-priority message arrives, the high-priority message will wait until the low-priority message is finished. That means that the jitter for the high-priority message is increased by the maximum transmission delay of a low-priority message. Shuffling is resource efficient but results in a degradation of the real-time quality.

• Timely block. According to the time-triggered schedule, the switch knows in advance the transmission times of the time-triggered messages. Timely block means that the switch reserves guarding windows before every transmission time of a time-triggered message. This guarding window has a duration that is equal to the maximum transmission time of a lower-priority message. In the guarding window, the switch will not start the transmission of a lower-priority message, to ensure that time-triggered messages are not delayed. The jitter for high-priority messages will be close to zero. Timely block ensures high real-time quality with a near-constant delay. However, resource inefficiency occurs when the maximum size of low-priority messages is high or unknown [41].

• Preemption. If a high-priority message arrives while a low-priority message is being relayed by a switch, the switch stops the transmission of the low-priority message and relays the high-priority message. That means that the switch introduces an almost constant and a priori known latency for high-priority messages. However, the truncation of messages is resource inefficient and results in a low network utilization. Also, corrupt messages result from the truncation, which can be indistinguishable from the consequences of hardware faults. The consequence is a diagnostic deficiency.

The TTEthernet message format is fully compliant with the Ethernet message format. However, the destination address field in TTEthernet is interpreted differently depending on the traffic type. In best-effort traffic, the format for destination addresses as standardized in IEEE 802.3 is used. In time-triggered and rate-constrained traffic, the destination address is subdivided into a constant 32-bit field and a 16-bit field called the virtual-link identifier. TTEthernet communication is structured into virtual links, each of which offers a unidirectional connection from one node to one or more destination nodes. The constant field can be defined by the user but should be fixed for all time-triggered and rate-constrained traffic. This constant field is also denoted as the CT marker [42]. The two least significant bits of the first octet of the constant field must be equal to one, since rate-constrained and time-triggered messages are multicast messages.

4.3.5 Time-Triggered Controller Area Network

The Time-Triggered CAN (TTCAN) [44] protocol is an example of time-triggered services that are layered on top of an event-triggered protocol. TTCAN is based on the CAN data link layer as specified in ISO 11898 [11]. TTCAN is a master/slave protocol that uses a Time Master (TM) for initiating communication rounds. TTCAN employs multiple time masters for supporting fault tolerance. The time master periodically sends a reference message, which is recognized by clients via its identifier. The period between two successive reference messages is called the basic cycle. The reference message contains information about the current time of the time master as well as the number of the cycle.

In order to improve the precision of the global time base, slave nodes measure the duration of the basic cycle and compare this measured duration with the values contained in the reference messages. The difference between the measured duration and the nominal duration (as indicated by the time master) is used for drift correction.

The basic cycle can consist of three types of windows, namely, exclusive time windows, free time windows, and arbitrating time windows.

1. An exclusive time window is dedicated to a single periodic message. A statically defined node sends the periodic message within an exclusive time window. An offline design tool ensures that no collisions occur.
2. The arbitrating time windows are dedicated to spontaneous messages. Multiple nodes can compete for transmitting a message within an arbitrating time window. The bitwise arbitration mechanism of CAN decides which message actually succeeds.

3. Free time windows are reserved for further extensions. When required, free time windows can be reconfigured as exclusive or arbitrating time windows.

Retransmissions are prohibited in all three types of windows.

In order to tolerate the failure of a single-channel bus, channel redundancy has been proposed for TTCAN [45]. This solution uses gateway nodes, which are connected to multiple channel busses, in order to synchronize the global time of the different busses and maintain a consistent schedule. However, no analysis of the dependencies between the redundant busses has been performed. For example, due to the absence of bus guardians, a babbling-idiot failure [46] of a gateway node has the potential to disrupt communication across multiple redundant busses.

The CAN protocol, which forms the basis for TTCAN, has built-in error detection mechanisms, such as CRC codes to detect faults on the communication channel, and error counters.

In addition, the a priori knowledge of the time-triggered send and receive instants can be used to establish a membership service. Although such a service is not mandatory in TTCAN, [47] describes the design and implementation of a membership service on top of TTCAN.

4.4 Example of a Control Application Based on Time-Triggered and Event-Triggered Control

This section examines a rotary inverted pendulum controlled with several microcontrollers that communicate over a CAN bus. We compare the differences between time-triggered and event-triggered CAN communication with and without load. On every microcontroller, the operating system FreeRTOS with tasks and queues is used for the implementation of the control system. The section is structured as follows: first, a short description of the rotary inverted pendulum model, followed by a graph that shows the temporal behavior of a simple swing-up algorithm and of an algorithm that holds the pendulum in an upright position. The following sections describe the printed circuit board (PCB) for control (cf. control PCB in Figure 4.3) with the connections between the four microcontrollers, the sensors, the human–machine interface, and the motor controls with the help of a block diagram. A short description of the used TTCAN library and its configuration on all microcontrollers follows, together with the monitored signals on the CAN bus. Finally, the behavior of the arm and pendulum angles over time for small disruptions, as well as the overall behavior of the control system with TTCAN communication, is discussed.

Figure 4.3 shows the inverted rotary pendulum (IRP) model from a top-left point of view. The microcontroller-based control PCB is located at the bottom left border of the picture on the ground plate of the model foundation. There are two digital rotary encoders for the measurement of the pendulum bar angle (pendulum sensor) and the arm angle (arm sensor) with the rod for

Balance weight Rod

Sensor PCB Pendulum


sensor
Arm frame

Toothed belt Arm sensor


transmission
Bar
Motor and
gearbox Mass M

Foundation
Control PCB

FIGURE 4.3
Inverted rotary pendulum.
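The two digital rotary encoders deliver count values that the control software must convert into angles. A minimal sketch of that conversion step follows; the chapter does not state the encoder resolution, so the 2000 counts-per-revolution figure below is a placeholder assumption.

```c
#include <stdint.h>
#include <assert.h>

/* Placeholder resolution - the actual resolution of the IRP model's
 * encoders is not given in the chapter. */
#define COUNTS_PER_REV 2000.0

/* Convert a signed quadrature-encoder count into an angle in degrees. */
double counts_to_degrees(int32_t counts)
{
    return (double)counts * (360.0 / COUNTS_PER_REV);
}
```

With this scaling, half a revolution (1000 counts) maps to 180 degrees, the upright position of the pendulum.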
70 Event-Based Control and Signal Processing

mounting the pendulum bar with mass M at the end. The rotary encoder (pendulum sensor) for the pendulum bar angle sends its signals, with the help of the sensor PCB, over a flat ribbon cable to the control PCB. The measurement of the arm angle is also done with a rotary encoder (arm sensor) that is located below the toothed belt (called the toothed belt transmission) in the foundation of the IRP model. This encoder is directly connected with a flat ribbon cable to the control PCB, because its location is fixed in the foundation. The arm frame is driven by a DC motor over a first built-in gearbox (cf. motor and gearbox in Figure 4.3) in the motor case and an additional toothed belt transmission. The balance weight on the rod of the pendulum axis helps to balance the arm frame.

The graph in Figure 4.4 shows, on the horizontal axis (time line), the start of the pendulum movement with a simple swing-up algorithm 13 s after power-on. Near 23 s, the pendulum is held in the upright position with an appropriate algorithm derived from the state-space method of control theory. This is the typical behavior of the IRP model in combination with a well-designed control algorithm without disruptions. The stability of the pendulum angle is remarkable, manifested by a short swing-up time to a nearly constant 180 degrees as well as the small arm angle variability (less than ±5 degrees) while holding the pendulum in the upright position. The control algorithm is based on four states of the system:

• Pendulum angle
• Pendulum angle velocity
• Arm angle
• Arm angle velocity

These states are determined by a feedback system realized in the form of the distributed control algorithm based on the control PCB (see Figure 4.3) with the four microcontrollers and the TTCAN communication. The output signal of this feedback system is sent back to the IRP model to drive the DC-motor movement via an H-bridge driver and pulse-width modulation.

Figure 4.5 shows the block diagram of the control PCB. On the left side is the first microcontroller, where the angle sensors are connected and the pendulum and arm angles are calculated. These values are sent over the CAN bus driver (cf. angle box in the communication plan for TTCAN in Figure 4.6) to all other microcontrollers. Both values are the base for building the differential values over time of the arm and pendulum angles and, in a final step, the velocity of both angles. These four states (measured and calculated values) are the input signals of the feedback system in the form of the distributed control algorithm described in the last section. Microcontroller 2 transforms the binary values into a human-readable form (ASCII) and shows the measured values on a small liquid-crystal display (LCD). It is also responsible for generating the first CAN message (reference message) as the time master. Its crystal-stabilized local time is sent to all other CAN bus participants, which synchronize their own local time according to this time-master message. Microcontroller 3 reads the push-button matrix and sends its information over the CAN bus for controlling the sequence of the control

[Figure 4.4 (plot): pendulum angle and arm angle in degrees versus time t (s), approximately 13.5 to 39.5 s.]

FIGURE 4.4
Pendulum swing-up and hold in upright position.
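The hold algorithm derived from the state-space method is, in its simplest form, a linear state feedback u = -Kx over the four states listed above. A sketch of that computation with purely illustrative gain values (the chapter does not publish the actual controller gains):

```c
#include <assert.h>

/* Illustrative gains for u = -K x; the real gains of the IRP hold
 * controller are not given in the chapter. */
static const double K[4] = { 25.0, 3.0, 1.5, 0.8 };

/* x[0]: pendulum angle,  x[1]: pendulum angle velocity,
 * x[2]: arm angle,       x[3]: arm angle velocity.       */
double state_feedback(const double x[4])
{
    double u = 0.0;
    for (int i = 0; i < 4; i++)
        u -= K[i] * x[i];        /* u = -K x */
    return u;
}
```

At the upright equilibrium (all four states zero), the feedback output is zero, so the motor is only driven when the state deviates.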
Event-Triggered versus Time-Triggered Real-Time Systems 71

[Figure 4.5 (block diagram): four microcontrollers on the control PCB, uC1 (compute angle values), uC2 (info on LCD, HMI; time master), uC3 (control algorithm), and uC4 (compute PWM values), each connected to the CAN bus through its own CAN-bus driver; attached peripherals are the angle sensors, LCD, test-port, push-buttons, LEDs, and the DC-motor with H-bridge.]

FIGURE 4.5
Block diagram of control PCB.

[Figure 4.6 (schedule diagram): two TTCAN cycles of six slots each, indexed 0 to 5. Upper plan, without using all possible slots: Time master, Angle, Empty slot, PWM, Empty slot, Empty slot. Lower plan, with usage of all six slots: Time master, Angle, AL2, PWM, AL4, AL5. One TTCAN cycle spans slots 0 through 5.]

FIGURE 4.6
Communication plan for TTCAN.
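The lower plan of Figure 4.6 amounts to a static table with one entry per slot, advanced once per tick. A simplified sketch of such a schedule table; the action encoding and function name are illustrative and not taken from the TTCAN library used in the chapter:

```c
#include <stdint.h>
#include <assert.h>

#define TTCAN_SLOTS 6u

/* Illustrative slot actions for the fully used plan of Figure 4.6:
 * slot 0: time master, slot 1: angles, slot 2: AL2,
 * slot 3: PWM, slot 4: AL4, slot 5: AL5.               */
typedef enum {
    ACT_TIME_MASTER,   /* reference message, CAN ID 0x000 */
    ACT_SEND_ANGLES,   /* pendulum and arm angle message  */
    ACT_SEND_PWM,      /* motor command message           */
    ACT_SEND_LOAD      /* artificial load (AL2, AL4, AL5) */
} slot_action_t;

static const slot_action_t schedule[TTCAN_SLOTS] = {
    ACT_TIME_MASTER, ACT_SEND_ANGLES, ACT_SEND_LOAD,
    ACT_SEND_PWM,    ACT_SEND_LOAD,   ACT_SEND_LOAD
};

static uint8_t slot = 0u;

/* Invoked once per 0.5 ms tick: advance the slot index (wrapping after
 * six slots) and return the action scheduled for the new slot. */
slot_action_t ttcan_tick(void)
{
    slot = (uint8_t)((slot + 1u) % TTCAN_SLOTS);
    return schedule[slot];
}
```

In the real system this lookup runs inside a highest-priority RTOS task triggered by the tick interrupt, as described in the text.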

algorithm (i.e., swing-up part, hold in upright position, and stop), and it realizes the control algorithm of the feedback system for the movement of the DC-motor in the IRP model. This information is also sent via the CAN bus [cf. Pulse-Width Modulation (PWM) box in the communication plan for TTCAN in Figure 4.6] for driving the DC-motor and showing the values on the small LCD. The last microcontroller (uC4) controls the revolution speed of the DC-motor with the help of a pulse-width modulation algorithm. The software of all microcontrollers is based on FreeRTOS for controlling the time sequence of the different tasks. FreeRTOS is also used to execute the sequence of activities according to the TTCAN communication plan (see Figure 4.6), such as sending CAN messages, receiving values via CAN messages with specific IDs, starting tasks, and synchronizing the local timers.

Figure 4.6 shows the TTCAN communication plan of the algorithm for controlling the IRP model. The upper part shows the first implementation, with empty slots, of the TTCAN communication cycle with a duration of six slots. The implementation of the feedback component, with its many matrix operations on floating-point values, needs a lot of computation time on the used 8-bit microcontroller architecture (AT90CAN128 from Atmel). The implementation of the TTCAN management functions on every microcontroller (e.g., the high-priority TTCAN scheduler task), the RTOS computations for the message queues, and the static arrays for the TTCAN plan also require a significant amount of computation time. Nevertheless, the TTCAN plan results in a very stable control system, as one can see in Figure 4.4.

The plan shows a TTCAN cycle with six slots indexed from zero to five. In slot number zero, the reference message with the local time of the time-master microcontroller (uC2) is sent over the CAN bus with the hexadecimal value 0x000 as the CAN bus ID. All receiving microcontrollers synchronize their own FreeRTOS timers to the time in the reference message. This synchronized time is needed in order to see a constant time offset (see Figure 4.7) between the reference message and all other messages in the TTCAN
[Figure 4.7 (oscilloscope screenshot): CAN bus voltage (V) versus time, from -0.5 to 4.0 ms, with the serial decoding of the messages shown below the trace.]

FIGURE 4.7
TTCAN signal and serial decoding of CAN messages.
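Every node resynchronizes its local timer when the reference message arrives, which is what produces the constant offset visible in Figure 4.7. A reduced sketch of that step; the actual library code is not reproduced in the chapter, and the tick-counter representation of time is an assumption:

```c
#include <stdint.h>
#include <assert.h>

static uint32_t local_ticks;  /* local FreeRTOS-style tick counter */

/* Called when the reference message (CAN ID 0x000) is received:
 * adopt the time carried in the message from the time master (uC2). */
void on_reference_message(uint32_t master_ticks)
{
    local_ticks = master_ticks;
}

uint32_t local_time(void)
{
    return local_ticks;
}
```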

communication. The configuration of FreeRTOS establishes on all microcontrollers a ticker period of half a millisecond. This ticker period determines the invocation of the FreeRTOS scheduler within the timer interrupt service routine. The task TTCAN-Scheduler, with the highest priority, is started every tick and increments the slot number (i.e., the index of the TTCAN schedule array). Then, the array with the TTCAN schedule is checked for starting time-triggered activities, like sending the angles via the CAN bus. The lower part of Figure 4.6 shows a TTCAN schedule where all possible slots are used for sending CAN messages. In principle, it is the same plan as in the upper part, but some artificial loads (AL2, AL4, and AL5) are added in the empty slots in order to evaluate the temporal composability.

Figure 4.7 shows an example of a received signal at a microcontroller RXCAN pin of the CAN bus interface. The marked D box at the time 0.0 ms under the CAN bus signal shows the time master message. All of the following messages have an offset of approximately 90 μs, as indicated by the vertical rulers. The microcontrollers need computation time for synchronizing their local clocks, and therefore, all CAN messages after the time master message have a constant time delay in all slots except slot zero. The signal graph from the RXCAN pin shows that every 0.5 ms a CAN message is placed on the bus. The TTCAN cycle with six messages ends at 3.0 ms with the next time master message. The lower part of Figure 4.7 shows the decoded values of every TTCAN message. The sequence begins with a time master message with CAN-ID 0x000 (see row 2 in the lower part of Figure 4.7). This message is followed by a sensor angle CAN message with ID 0x001, push-button events, and a PWM value for the motor with ID 0x002. All messages with ID 0x030 are artificial load messages for composability evaluations; they transport in the data section of the CAN message only the sequence index of the TTCAN plan. In row 8, the next TTCAN cycle starts with a time master message that is easily identifiable in the ID column through the ID value 0x000.

Figure 4.8 shows the alteration of both angles caused by a disruption while the pendulum is in the upright position. The disruption is similar to a finger snip force against the mass M at the upper end of the pendulum. You can see the effect near the time-line values of 2.05 and 2.47 s as a local buckling at the axis of the pendulum angle. In contrast to the angle graph over time in Figure 4.4, the value domain of the arm angle is now in the range between +20 and -8 degrees.

The final part of this section is a short comparison to the behavior of the same model (IRP) with event-triggered CAN communication. The measured angles for the pendulum axis and the arm axis are sent with the same period (every 3 ms) as in the time-triggered CAN communication. The time point of all following CAN messages depends on this start-up message, except for the artificial loads that follow later.

The graph in Figure 4.9 shows, on the horizontal axis, the start of pendulum movement with a simple swing-up algorithm 13 s after power-on. To guarantee a soft change from the swing-up movement to the hold in upright position, the motor is now driven with less
[Figure 4.8 (plot): pendulum angle and arm angle in degrees versus time t (s), from 2.0 to 2.8 s.]

FIGURE 4.8
Angle behavior on a minor disruption.

[Figure 4.9 (plot): pendulum angle and arm angle in degrees versus time t (s), approximately 13.5 to 33 s.]

FIGURE 4.9
Swing-up and hold with event-triggered CAN communication.

power during the swing-up phase. In comparison with Figure 4.4 (10 s swing-up duration), it now needs a longer duration of approximately 12 s. The next interesting thing to see is the value domain of the arm angle. With the use of TTCAN communication, it is less than ±5 degrees; now, with the asynchronous CAN communication, one sees peaks up to 45 degrees, while the pendulum angle varies within ±15 degrees. In general, the model behavior is near the limit of stability, and small disruptions produce angle values greater than the defined boundaries. The result is a system shutdown.

Figure 4.10 shows a received signal at a microcontroller RXCAN pin of the CAN bus interface (the dashed signal CAN Low, A) with event-triggered CAN communication. The marked D box at the time 0.0 ms under the blue CAN bus signal shows the angle message (ID 0x11) from the sensor microcontroller. At the right, at the first vertical ruler, a CAN message with the ID 0x22 is placed, which includes the most recent output from the control algorithm. The level change at channel B (solid signal curve in Figure 4.10) at the second vertical ruler marks the amount of time needed for receiving the CAN message with the event-triggered CAN library. The difference between the first and second rulers is around 0.6 ms. Additional information is given by the absence of a level change after the second CAN message, which also contains
[Figure 4.10 (oscilloscope screenshot): CAN bus voltage (V) versus time, from -2.0 to 6.0 ms.]

FIGURE 4.10
Event-triggered CAN signal without artificial loads.
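The channel-B marker visible in Figure 4.10 is produced by the ID-comparison logic described in the text. A simplified sketch in plain C, where a variable stands in for the AVR register PORTF of the AT90CAN128:

```c
#include <stdint.h>
#include <assert.h>

static uint8_t test_port_f = 0x00u;  /* stand-in for the AVR PORTF register */

/* Called for every received CAN frame: if the ID matches 0x22 (the
 * defined ID for motor control messages), toggle pin 6 of test port F. */
void mark_motor_message(uint16_t can_id)
{
    if (can_id == 0x22u)
        test_port_f ^= 0x40u;  /* XOR pin 6 with logical 1 (binary 01000000) */
}

uint8_t test_port(void)
{
    return test_port_f;
}
```

Each received motor-control message thus flips the pin, so a missing level change on channel B indicates a missed (or absent) motor-control message, as discussed in the text.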

[Figure 4.11 (plot): pendulum angle and arm angle in degrees versus time t (s), approximately 21 to 51 s.]

FIGURE 4.11
Reaching the arm boundary with a small disruption.

output for controlling the motor. This is a typical miss of information in event-triggered communication. The code for generating the signal of channel B is based on the comparison of the ID of the last received CAN message against the hexadecimal value 0x22 (the defined ID for motor control messages). If the IDs match, pin 6 of the 8-bit test port F is XORed with a logical 1 (binary 01000000). This leads to the level change that is shown in Figure 4.10 below the CAN signal.

The last part shows the behavior of the model and the signals on the CAN bus with artificial load in the case of event-triggered CAN communication. In Figure 4.11, we can see the swing-up of the pendulum followed by a barely stable behavior of the IRP model. Near 42 s on the time line, there is an event (the same finger snip disruption as in Figure 4.8). The control algorithm reacts so strongly that the arm angle moves outside its boundaries. The boundaries of the arm angle are calibrated to ±108 degrees for mechanical reasons. When the arm angle exceeds these values, the IRP model is stopped for safety reasons, in order not to harm the cables of the pendulum angle sensor. The rest of the graph shows the slow swing-down of the pendulum.

Figure 4.12 depicts the event-triggered communication on the CAN bus. Here, we can see the artificial load messages and the IRP model data messages combined on the bus. Artificial loads can be identified by IDs greater than the value 0x100, for example, ID 0x401
[Figure 4.12 (oscilloscope screenshot): CAN bus voltage (V) versus time, from 0.0 to 3.0 ms, with the serial decoding listed below.]

No. | Packet | ID  | DLC | Data bytes       | ACK Slot | Start time | End time | Packet time
 1  | Data   | 011 | 6   | 009A0000FCB2     | 0        | -39,62 ns  | 194,2 μs | 194,2 μs
 2  | Data   | 402 | 8   | 0402ABCDEFABCDEF | 0        | 976 μs     | 1,202 ms | 226,2 μs
 3  | Data   | 102 | 8   | 0102ABCDEFABCDEF | 0        | 1,212 ms   | 1,44 ms  | 228,2 μs
 4  | Data   | 401 | 8   | 0401AABBCCDDEEFF | 0        | 2,002 ms   | 2,226 ms | 224,2 μs
 5  | Data   | 022 | 8   | 0000000000264300 | 0        | 2,496 ms   | 2,732 ms | 236,1 μs
 6  | Data   | 101 | 8   | 0101AABBCCDDEEFF | 0        | 2,742 ms   | 2,968 ms | 226,3 μs
 7  | Data   | 011 | 6   | 009A0000FCB2     | 0        | 3,012 ms   | 3,206 ms | 194,1 μs

FIGURE 4.12
Event-triggered CAN signal with artificial loads and serial decoding.
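The ID convention visible in Figure 4.12 (artificial loads above 0x100, model data below) can be captured in a one-line classifier; the function name is illustrative:

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Artificial load messages are identified by CAN IDs greater than 0x100;
 * IRP model data (e.g., angles 0x011, motor response 0x022) lies below. */
bool is_artificial_load(uint16_t can_id)
{
    return can_id > 0x100u;
}
```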

before the marked box with the grey background in the table below the measured CAN signal. The model data, such as the measured angles (cf. Nos. 1 and 7 with ID 0x011) and the response of the control algorithm (cf. No. 5 with ID 0x022), are sent every 3 ms in case the CAN bus is idle at that moment.

In summary, the example system includes a complex control algorithm for controlling an unstable system (IRP) with multiple microcontrollers, and there are no problems with the control function when we use TTCAN communication between the several microcontrollers, with or without additional loads. The same model realized with event-triggered CAN communication is less stable and leads to an unstable system behavior with a load comparable to that used in the TTCAN communication.

Bibliography

[1] A. Crespo, I. Ripoll, M. Masmano, Partitioned embedded architecture based on hypervisor: The XtratuM approach, in EDCC, IEEE Computer Society, 2010, pp. 67–72.

[2] Lynuxworks, LynxOS User's Guide, Release 4.0, DOC-0453-02, 2005.

[3] P. Parkinson, L. Kinnan, Safety-Critical Software Development for Integrated Modular Avionics, Whitepaper, Wind River Systems, 2007.

[4] Aeronautical Radio, Inc., ARINC Specification 653-2: Avionics Application Software Standard Interface. Aeronautical Radio, Inc., Annapolis, MD, 2007.

[5] H. Kopetz, Real-Time Systems—Design Principles for Distributed Embedded Applications (2nd ed.). Springer, New York, 2011.

[6] H. Kopetz, Elementary versus composite interfaces in distributed real-time systems, in Proc. of the International Symposium on Autonomous Decentralized Systems (Tokyo, Japan), March 1999.

[7] ARTEMIS Final Report on Reference Designs and Architectures—Constraints and Requirements. ARTEMIS (Advanced Research and Technology for EMbedded Intelligence and Systems) Strategic Research Agenda, 2006. https://artemis-ia.eu/publication/download/639-sra-reference-designs-and-architectures-2.

[8] J. Sifakis, A framework for component-based construction, in Proc. of Third IEEE International Conf. on Software Engineering and Formal Methods (SEFM05), September 2005, pp. 293–300.

[9] H. Kopetz, R. Obermaisser, Temporal composability, Comput. Control Eng. J., vol. 13, pp. 156–162, August 2002.

[10] IEEE, IEEE Standard 802.3—Carrier sense multiple access with collision detect (CSMA/CD) access method and physical layer, tech. rep., IEEE, 2000.

[11] International Standardization Organisation (ISO), Road Vehicles—Interchange of Digital Information—Controller Area Network (CAN) for High-Speed Communication, ISO 11898, 1993.

[12] G. Heiner, T. Thurner, Time-triggered architecture for safety-related distributed real-time systems in transportation systems, in Proceedings of the Twenty-Eighth Annual International Symposium on Fault-Tolerant Computing, June 1998, pp. 402–407.

[13] FlexRay Consortium (BMW AG, DaimlerChrysler AG, General Motors Corporation, Freescale GmbH, Philips GmbH, Robert Bosch GmbH, Volkswagen AG), FlexRay Communications System Protocol Specification Version 2.1, May 2005.

[14] P. Pedreiras, L. Almeida, Combining event-triggered and time-triggered traffic in FTT-CAN: Analysis of the asynchronous messaging system, in Proceedings of Third IEEE International Workshop on Factory Communication Systems, pp. 67–75, September 2000.

[15] R. Obermaisser, CAN emulation in a time-triggered environment, in Proceedings of the 2002 IEEE International Symposium on Industrial Electronics (ISIE), vol. 1, 2002, pp. 270–275.

[16] H. Kopetz, A. Ademaj, P. Grillinger, K. Steinhammer, The time-triggered ethernet (TTE) design, in Proceedings of Eighth IEEE International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC), pp. 22–33, May 2005.

[17] R. Benesch, TCP für die time-triggered architecture, Master's thesis, Technische Universität Wien, Institut für Technische Informatik, Vienna, Austria, June 2004.

[18] J. Saltzer, D. Reed, D. Clark, End-to-end arguments in system design, ACM Trans. Comput. Syst. (TOCS), vol. 2, 1984, pp. 277–288.

[19] A. Avizienis, J. Laprie, B. Randell, Fundamental concepts of dependability, Research Report 01-145, LAAS-CNRS, Toulouse, France, April 2001.

[20] J. Lala, R. Harper, Architectural principles for safety-critical real-time applications, in Proceedings of the IEEE, vol. 82, pp. 25–40, January 1994.

[21] H. Kopetz, Fault containment and error detection in the time-triggered architecture, in Proceedings of the Sixth International Symposium on Autonomous Decentralized Systems, pp. 139–146, April 2003.

[22] P. Lee, T. Anderson, Fault Tolerance Principles and Practice, vol. 3 of Dependable Computing and Fault-Tolerant Systems. Springer-Verlag, New York, 1990.

[23] M. Hecht, D. Tang, H. Hecht, Quantitative reliability and availability assessment for critical systems including software, in Proceedings of the 12th Annual Conf. on Computer Assurance (Gaithersburg, Maryland), pp. 147–158, June 1997.

[24] R. Obermaisser, H. Kopetz, C. Paukovits, A cross-domain multi-processor system-on-a-chip for embedded real-time systems, IEEE Trans. Ind. Inf., vol. 6, no. 4, pp. 548–567, 2010.

[25] OMG, Smart Transducers Interface, Specification ptc/2002-05-01, Object Management Group, May 2002. Available at http://www.omg.org/.

[26] H. Kopetz, W. Ochsenreiter, Clock synchronization in distributed real-time systems, IEEE Trans. Comput., vol. 36, no. 8, pp. 933–940, 1987.

[27] H. Kopetz, Pulsed data streams, in IFIP TC 10 Working Conf. on Distributed and Parallel Embedded Systems (DIPES 2006), (Braga, Portugal), Springer, New York, October 2006, pp. 105–124.

[28] S. Poledna, Replica determinism in distributed real-time systems: A brief survey, Real-Time Systems, vol. 6, 1994, pp. 289–316.

[29] K. Goossens, A. Azevedo, K. Chandrasekar, M. D. Gomony, S. Goossens, M. Koedam, Y. Li, D. Mirzoyan, A. Molnos, A. B. Nejad, A. Nelson, S. Sinha, Virtual execution platforms for mixed-time-criticality systems: The CompSOC architecture and design flow, SIGBED Rev., vol. 10, October 2013, pp. 23–34.

[30] K. Goossens, A. Hansson, The aethereal network on chip after ten years: Goals, evolution, lessons, and future, in Proceedings of the 47th Design Automation Conf., DAC'10, (New York, NY), ACM, 2010, pp. 306–311.

[31] K. Goossens, J. Dielissen, A. Radulescu, The aethereal network on chip: Concepts, architectures, and implementations, IEEE Des. Test Comput., vol. 22, no. 5, pp. 414–421, 2005.

[32] A. Radulescu, J. Dielissen, K. Goossens, E. Rijpkema, P. Wielage, An efficient on-chip network interface offering guaranteed services, shared-memory abstraction, and flexible network configuration, in Proceedings of Design, Automation and Test in Europe Conference and Exhibition, vol. 2, pp. 878–883, 2004.

[33] ARM, AXI Protocol Specification, 2004.

[34] OCP-IP Association, Open Core Protocol Specification 2.1, 2005.

[35] H. Kopetz, Time-triggered real-time computing, Ann. Rev. Control, vol. 27, no. 1, pp. 3–13, 2003.

[36] L. Lamport, R. Shostak, M. Pease, The byzantine generals problem, ACM Trans. Program. Lang. Syst. (TOPLAS), vol. 4, no. 3, pp. 382–401, 1982.

[37] H. Kopetz, G. Bauer, S. Poledna, Tolerating arbitrary node failures in the time-triggered architecture, in Proceedings of the SAE 2001 World Congress (Detroit, MI), March 2001.

[38] R. Obermaisser, P. Peti, A fault hypothesis for integrated architectures, in Proceedings of the Fourth International Workshop on Intelligent Solutions in Embedded Systems, June 2006.

[39] A. Ademaj, Slightly-off-specification failures in the time-triggered architecture, in Proceedings of the Seventh IEEE International High-Level Design Validation and Test Workshop (Washington, DC), IEEE Computer Society, 2002, p. 7.

[40] R. Obermaisser, Time-Triggered Communication. Taylor and Francis, Boca Raton, FL, 2011.

[41] K. Steinhammer, Design of an FPGA-Based Time-Triggered Ethernet System. PhD thesis, Technische Universität Wien, Austria, 2006.

[42] AS-6802—Time-Triggered Ethernet, November 2011.

[43] W. Steiner, G. Bauer, B. Hall, M. Paulitsch, S. Varadarajan, TTEthernet dataflow concept, in Proceedings of the 8th IEEE International Symposium on Network Computing and Applications, pp. 319–322, 2009.

[44] T. Führer, B. Müller, W. Dieterle, F. Hartwich, R. Hugel, Time-triggered CAN - TTCAN: Time-triggered communication on CAN, in Proceedings of the Sixth International CAN Conf. (ICC6), (Torino, Italy), 2000.

[45] B. Müller, T. Führer, F. Hartwich, R. Hugel, H. Weiler, Fault tolerant TTCAN networks, in Proceedings of the Eighth International CAN Conf. (iCC), 2002.

[46] C. Temple, Avoiding the babbling-idiot failure in a time-triggered communication system, in Proceedings of the Symposium on Fault-Tolerant Computing, 1998, pp. 218–227.

[47] C. Bergenhem, J. Karlsson, C. Archer, A. Sjöblom, Implementation results of a configurable membership protocol for active safety systems, in Proceedings of the 12th Pacific Rim International Symposium on Dependable Computing (PRDC'06), pp. 387–388.
5
Distributed Event-Based State-Feedback Control

Jan Lunze
Ruhr-University Bochum
Bochum, Germany

Christian Stöcker
Ruhr-University Bochum
Bochum, Germany

CONTENTS
5.1 Introduction  79
5.2 Preliminaries  80
  5.2.1 Notation  80
  5.2.2 Nonnegative Matrices and M-Matrices  81
  5.2.3 Models  81
  5.2.4 Comparison Systems  83
  5.2.5 Practical Stability  83
5.3 State-Feedback Approach to Event-Based Control  84
  5.3.1 Basic Idea  84
  5.3.2 Components of the Event-Based Controller  85
  5.3.3 Main Properties of the Event-Based State Feedback  86
5.4 Disturbance Rejection with Event-Based Communication Using Information Requests  87
  5.4.1 Approximate Model and Extended Subsystem Model  87
  5.4.2 A Distributed State-Feedback Design Method  89
  5.4.3 Event-Based Implementation of a Distributed State-Feedback Controller  92
  5.4.4 Analysis of the Distributed Event-Based Control  95
5.5 Example: Distributed Event-Based Control of the Thermofluid Process  99
  5.5.1 Demonstration Process  99
  5.5.2 Design of the Distributed Event-Based Controller  101
  5.5.3 Experimental Results  102
Bibliography  103

ABSTRACT This chapter presents an approach to distributed control that combines continuous and event-based state feedback and aims at the suppression of the disturbance propagation through interconnected systems. First, a new method for the design of distributed state-feedback controllers is proposed, and second, the implementation of the distributed state-feedback law in an event-based manner is presented. At the event times, the event-based controllers transmit information to or request information from the neighboring subsystems, which is a new communication pattern. The distributed event-based state-feedback approach is tested on a thermofluid process through an experiment.

5.1 Introduction

In modern control systems, the feedback is increasingly closed through a digital communication network. Such systems, where the information exchange between sensors, controllers, and actuators occurs via a shared communication medium, are referred to as networked control systems [10,11,24]. The communication network allows a flexible distribution of information and, thus, opens the possibility of new control structures, which can hardly be realized in a conventional control system with fixed wiring. On the other hand, real networks have limited bandwidth, and in order to avoid network


[Figure 5.1 (block diagram): subsystems Σ1, Σ2, . . . , ΣN with disturbances di(t), inputs ui(t), and states xi(t), coupled through the interconnection block L via the signals zi(t) and si(t); each Σi is controlled by its control unit Fi, and the control units F1, . . . , FN, which form the overall controller F, are connected to the communication network.]

FIGURE 5.1
Event-based control system: The plant Σ is composed of the interconnected subsystems Σi (i = 1, . . . , N) that are controlled by the event-based control units Fi (i = 1, . . . , N), which communicate over a network at the event times.

congestion, the communication should be kept to the In this way, the existing literature on distributed event-
minimum required to solve the respective control task, based control is extended.
which has motivated the investigation of event-based
control methods.
This section presents a new approach to the dis-
tributed event-based control of interconnected linear systems. The investigated control system is illustrated in Figure 5.1. The overall plant is composed of N subsystems Σi (i = 1, . . . , N), which are physically interconnected, and the subsystem Σi is controlled by the local control unit Fi. The overall event-based controller consists of the N control units Fi (i = 1, . . . , N) and the communication network. The main control goal is disturbance attenuation and the suppression of the propagation of the disturbance throughout the network.

Distributed event-based control has been previously investigated in [3,25,27] for nonlinear systems and in [2,7] for linear systems. Extensions of these approaches to imperfect communication between the control units, considering delays and packet losses, have been studied in [8] for linear systems and in [26] for nonlinear systems. In all these references, the stability of the overall system is ensured using small-gain arguments that require the subsystems to be weakly connected in some sense.

In contrast to the existing literature, the proposed control approach combines continuous and event-based feedback. It is assumed that the control unit Fi continuously measures the local state information xi(t) but has access to the state information xj(t) of some subsystem Σj (j ≠ i) through the network at discrete time instants only. The novelty of this control approach is the use of two kinds of events that trigger either the transmission or the request of information, which leads to a new type of event-based control. Moreover, it is shown that the event-based control approach tolerates arbitrarily strong interconnections between neighboring subsystems and still guarantees the stability of the overall control system.

5.2 Preliminaries

5.2.1 Notation

IR and IR+ denote the set of real numbers and the set of positive real numbers, respectively. IN is the set of natural numbers, and IN0 = IN ∪ {0}. Throughout this chapter, scalars are denoted by italic letters (s ∈ IR), vectors by bold italic letters (x ∈ IRn), and matrices by uppercase bold italic letters (A ∈ IRn×n).

If x is a signal, the value of x at time t ∈ IR+ is represented by x(t). Moreover,

    x(t+) = lim_{s↓t} x(s),

represents the limit of x(t) taken from above.

The transpose of a vector x or a matrix A is denoted by x⊤ or A⊤, respectively. In denotes the identity matrix of size n. On×m is the zero matrix with n rows and m columns, and 0n represents the zero vector of dimension n. The dimensions of these matrices and vectors are omitted if they are clear from the context.

Consider the matrices A1, . . . , AN. The notation

    A = diag(A1, . . . , AN),

is used to denote the block diagonal matrix that has the blocks A1, . . . , AN on its main diagonal and zero blocks elsewhere.

The ith eigenvalue of a square matrix A ∈ IRn×n is denoted by λi(A). The matrix A is called Hurwitz
Distributed Event-Based State-Feedback Control 81

(or stable) if Re λi(A) < 0 holds for all i = 1, . . . , n, where Re(·) denotes the real part of the indicated number.

Consider a time-dependent matrix G(t) and vector u(t). The asterisk (∗) is used to denote the convolution operator, for example,

    G ∗ u = ∫_0^t G(t − τ) u(τ) dτ.

The inverse of a square matrix H ∈ IRn×n is symbolized by H−1. For a nonsquare matrix H ∈ IRn×m that has full rank, H+ is the pseudoinverse that is defined by

    H+ = (H⊤H)−1 H⊤,  for n > m,
    H+ = H⊤(HH⊤)−1,  for m > n.

For n > m, H+ denotes the left inverse (H+H = Im), and for m > n, H+ is the right inverse (HH+ = In) [1].

For two vectors v, w ∈ IRn, the relation v > w (v ≥ w) holds element-wise; that is, vi > wi (vi ≥ wi) is true for all i = 1, . . . , n, where vi and wi refer to the ith element of the vectors v and w, respectively. Accordingly, for two matrices V, W ∈ IRn×m, where V = (vij) and W = (wij) are composed of the elements vij and wij for i = 1, . . . , n and j = 1, . . . , m, the relation V > W (V ≥ W) refers to vij > wij (vij ≥ wij). For a scalar s, |s| denotes the absolute value. For a vector x ∈ IRn or a matrix A = (aij) ∈ IRn×m, the |·|-operator holds element-wise, that is,

    |x| = (|x1| . . . |xn|)⊤,  |A| = (|aij|).

Furthermore, ∥x∥ and ∥A∥ denote an arbitrary vector norm and the induced matrix norm, for example, according to

    ∥x∥p := (∑_{i=1}^{n} |xi|^p)^{1/p},  ∥A∥p := max_{x ≠ 0} ∥Ax∥p / ∥x∥p,

with the real number p ≥ 1. Sets are denoted by calligraphic letters (A ⊂ IRn). For the compact set A ⊂ IRn,

    ∥x∥A := inf { ∥x − z∥ | z ∈ A },

denotes the point-to-set distance from x ∈ IRn to A [18].

5.2.2 Nonnegative Matrices and M-Matrices

DEFINITION 5.1 A matrix A = (aij) is called nonnegative (A ≥ O) if all elements of A are real and nonnegative (aij ≥ 0).

DEFINITION 5.2 A matrix A ∈ IRn×n is called reducible if there exists a permutation matrix P that transforms A to a block triangular matrix:

    PAP⊤ = [ Ã11  O
             Ã21  Ã22 ],

where Ã11 and Ã22 are square matrices. Otherwise, A is called irreducible.

Theorem 5.1: Theorem A1.1 in [15], Perron–Frobenius theorem

Every irreducible nonnegative matrix A ∈ IR+n×n has a positive eigenvalue λP(A) that is not exceeded by any other eigenvalue λi(A):

    λP(A) ≥ |λi(A)|.

λP(A) is called the Perron root of A.

DEFINITION 5.3 A matrix P = (pij), P ∈ IRn×n, is said to be an M-matrix if pij ≤ 0 holds for all i ≠ j and all eigenvalues of P have positive real part.

Theorem 5.2: Theorem A1.3 in [15]

A matrix P = (pij), P ∈ IRn×n, with pij ≤ 0 for all i ≠ j is an M-matrix if and only if P is nonsingular and P−1 is nonnegative.

The next theorem establishes a relationship between nonnegative matrices and M-matrices.

Theorem 5.3: Theorem A1.5 in [15]

If A ∈ IR+n×n is a nonnegative matrix, then the matrix P = μIn − A with μ ∈ IR+ is an M-matrix if and only if the relation

    μ > λP(A),

holds.

5.2.3 Models

Plant. The overall plant is represented by the linear state-space model:

    Σ:  ẋ(t) = Ax(t) + Bu(t) + Ed(t),  x(0) = x0,  (5.1)

where x ∈ IRn, u ∈ IRm, and d ∈ D denote the state, the control input, and the disturbance, respectively.
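The matrix conditions of Theorems 5.1 through 5.3 are easy to check numerically for a concrete matrix. The following sketch (Python with NumPy; the example matrix is an arbitrary, made-up irreducible nonnegative matrix) computes the Perron root and verifies the M-matrix characterizations of Theorems 5.2 and 5.3:

```python
import numpy as np

# Arbitrary irreducible nonnegative example matrix
A = np.array([[0.0, 2.0],
              [1.0, 1.0]])

# Perron root lambda_P(A): the eigenvalue of largest magnitude,
# which is real and positive for an irreducible nonnegative matrix
eigvals = np.linalg.eigvals(A)
perron = max(eigvals, key=abs).real          # here lambda_P(A) = 2

# Theorem 5.3: P = mu*I_n - A is an M-matrix if and only if mu > lambda_P(A)
mu = perron + 0.5
P = mu * np.eye(2) - A

# Theorem 5.2: an M-matrix is nonsingular with nonnegative inverse
P_inv = np.linalg.inv(P)
print(np.isclose(perron, 2.0), np.all(P_inv >= 0))
```

Choosing mu below the Perron root instead would make P singular or give P−1 negative entries, in line with the "only if" direction of Theorem 5.3.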
82 Event-Based Control and Signal Processing

FIGURE 5.2
Structure of the overall system with (a) a central sensor unit and an actuator unit and (b) a sensor unit and an actuator unit for each subsystem.

Concerning the properties of the system (5.1), the following assumptions are made, unless otherwise stated:

A 5.1 The plant dynamics are accurately known.

A 5.2 The state x(t) is measurable.

A 5.3 The disturbance d(t) is bounded according to

    d(t) ∈ D := { d ∈ IRp | |d| ≤ d̄ },  ∀ t ≥ 0,  (5.2)

where d̄ ∈ IR+p denotes the bound on the magnitude of the disturbance d.

Equation 5.1 is referred to as the unstructured model [15], because it barely gives an insight into the interconnections and the dynamics of the subsystems that form the overall plant. The model (5.1) is useful if the overall system has a central sensor unit that collects all measurements and a centralized actuator unit that drives all actuators of the plant (Figure 5.2a). The next section introduces a model that reflects the internal structure of the overall plant in more detail.

Interconnected subsystems. In this chapter, the overall plant is considered to be composed of N physically interconnected subsystems. The subsystem i ∈ N = {1, . . . , N} is represented by the linear state-space model:

    Σi:  ẋi(t) = Ai xi(t) + Bi ui(t) + Ei di(t) + Esi si(t),  xi(0) = x0i,
         zi(t) = Czi xi(t),  (5.3)

where xi ∈ IRni, ui ∈ IRmi, di ∈ Di, si ∈ IRqi, and zi ∈ IRri denote the state, the control input, the disturbance, the coupling input, and the coupling output, respectively.

The subsystem Σi is interconnected with the remaining subsystems according to the relation

    si(t) = ∑_{j=1}^{N} Lij zj(t),  (5.4)

where the matrix Lij ∈ IRqi×rj represents the couplings from some subsystem Σj to subsystem Σi. The models (5.3) and (5.4) are called the interconnection-oriented model [15]. Concerning this model, the following assumptions are made:

A 5.4 The pair (Ai, Bi) is controllable for each i ∈ N.

A 5.5 Lii = O holds for all i ∈ N; that is, the coupling input si(t) does not directly depend on the coupling output zi(t).

The last assumption is weak and can always be fulfilled by modeling all internal dynamics in the matrix Ai for all i ∈ N. Note that the assumptions A 5.1 to A 5.3 imply that

• The dynamics of the subsystems are accurately known.

• The subsystem state xi(t) is measurable for each i ∈ N.

• The local disturbance di(t) is bounded according to

    di(t) ∈ Di := { di ∈ IRpi | |di| ≤ d̄i },  ∀ t ≥ 0,  (5.5)

with d̄i ∈ IR+pi representing the bound on the local disturbance di(t).
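The interconnection relation (5.4) is a purely static map from the coupling outputs to the coupling inputs. A minimal numerical sketch (Python/NumPy; all coupling blocks and output values are made-up numbers) for N = 3 scalar subsystems, with Lii = O as required by A 5.5:

```python
import numpy as np

# Hypothetical coupling blocks L[i][j] for N = 3 subsystems (L_ii = O, A 5.5)
L = [[np.zeros((1, 1)), np.array([[0.4]]), np.zeros((1, 1))],
     [np.array([[0.2]]), np.zeros((1, 1)), np.array([[0.5]])],
     [np.zeros((1, 1)), np.array([[0.3]]), np.zeros((1, 1))]]

# Hypothetical coupling outputs z_j(t) at some fixed time t
z = [np.array([1.0]), np.array([2.0]), np.array([-1.0])]

# Coupling inputs per (5.4): s_i = sum_j L_ij z_j
s = [sum(L[i][j] @ z[j] for j in range(3)) for i in range(3)]
print(s)   # [array([0.8]), array([-0.3]), array([0.6])]
```

Because Lii = O, each si depends only on the outputs of the other subsystems, which is exactly what the coupling structure in Figure 5.2b expresses.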
The unstructured model (5.1) can be determined from the interconnection-oriented models (5.3) and (5.4) by taking

    A = diag(A1, . . . , AN) + diag(Es1, . . . , EsN) L diag(Cz1, . . . , CzN),  (5.6)
    B = diag(B1, . . . , BN),  (5.7)
    E = diag(E1, . . . , EN),  (5.8)

with the overall interconnection matrix

    L = [ O    L12  . . .  L1N
          L21  O    . . .  L2N
          ...  ...  ...    ...
          LN1  LN2  . . .  O ],  (5.9)

and by assembling the signal vectors according to

    x(t) = (x1⊤(t) . . . xN⊤(t))⊤,
    u(t) = (u1⊤(t) . . . uN⊤(t))⊤,
    d(t) = (d1⊤(t) . . . dN⊤(t))⊤.

Communication network. Throughout this chapter, the communication network is assumed to be ideal in the following sense:

A 5.6 The transmission of information over the communication network happens instantaneously, without delays and packet losses. Multiple transmitters can simultaneously send information without collisions occurring.

The communication network is considered to transmit information much faster compared to the dynamical behavior of the control system, which from a control-theoretic perspective justifies the assumption A 5.6.

5.2.4 Comparison Systems

The stability analysis in this section makes use of comparison systems that yield upper bounds on the signals of the respective subsystems.

Consider the isolated subsystem (5.3) (with si(t) ≡ 0):

    ẋi(t) = Ai xi(t) + Bi ui(t) + Ei di(t),  xi(0) = x0i,  (5.10)

that has the state trajectory

    xi(t) = e^{Ai t} x0i + Gxui ∗ ui + Gxdi ∗ di,  ∀ t ≥ 0,

where

    Gxui(t) = e^{Ai t} Bi,  (5.11)
    Gxdi(t) = e^{Ai t} Ei,  (5.12)

denote the impulse-response matrices describing the impact of the control input ui(t) or the disturbance di(t), respectively, on the state xi(t).

DEFINITION 5.4 The system

    rxi(t) = F̄i(t) |x0i| + Ḡxui ∗ |ui| + Ḡxdi ∗ |di|,  (5.13)

with rxi ∈ IRni is called a comparison system of subsystem (5.10) if it satisfies the inequality

    rxi(t) ≥ |xi(t)|,  ∀ t ≥ 0,

for arbitrary but bounded inputs ui(t) and di(t).

A method for finding a comparison system is given in the following lemma.

Lemma 5.1: Lemma 8.3 in [15]

The system (5.13) is a comparison system of the system (5.10) if and only if the matrix F̄i(t) and the impulse-response matrices Ḡxui(t) and Ḡxdi(t) satisfy the relations

    F̄i(t) ≥ |e^{Ai t}|,  Ḡxui(t) ≥ |Gxui(t)|,  Ḡxdi(t) ≥ |Gxdi(t)|,  ∀ t ≥ 0,

where Gxui(t) and Gxdi(t) are defined in Equations 5.11 and 5.12.

5.2.5 Practical Stability

Event-based control systems are hybrid dynamical systems and, more specifically, belong to the class of impulsive systems [5,22]. The notion of stability that is used here for these kinds of systems is the stability with respect to compact sets [4,6].

DEFINITION 5.5 Consider the system (5.1) and a compact set A ⊂ IRn.

• The set A is stable for the system (5.1) if for each ε > 0 there exists δ > 0 such that ∥x(0)∥A ≤ δ implies ∥x(t)∥A ≤ ε for all t ≥ 0.

• The set A is globally attractive for the system (5.1) with (5.2) if

    lim_{t→∞} ∥x(t)∥A = 0,

holds for all x(0) ∈ IRn.

• The set A is globally asymptotically stable for the system (5.1) if it is stable and globally attractive.
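The assembly (5.6) through (5.9) of the overall matrix A from the subsystem data can be reproduced directly in code. A sketch (Python/NumPy; two scalar subsystems with made-up numbers) is:

```python
import numpy as np

# Two scalar subsystems (hypothetical numbers) assembled per (5.6)-(5.9)
A1, A2   = np.array([[-1.0]]), np.array([[-2.0]])
Es1, Es2 = np.array([[1.0]]),  np.array([[1.0]])
Cz1, Cz2 = np.array([[1.0]]),  np.array([[1.0]])
L12, L21 = np.array([[0.5]]),  np.array([[0.3]])

def blkdiag(*Ms):
    """Block diagonal matrix diag(M1, ..., MN) as used in (5.6)-(5.8)."""
    n = sum(M.shape[0] for M in Ms)
    m = sum(M.shape[1] for M in Ms)
    out = np.zeros((n, m))
    r = c = 0
    for M in Ms:
        out[r:r + M.shape[0], c:c + M.shape[1]] = M
        r += M.shape[0]
        c += M.shape[1]
    return out

# Overall interconnection matrix (5.9) with zero diagonal blocks (L_ii = O)
L = np.block([[np.zeros((1, 1)), L12],
              [L21, np.zeros((1, 1))]])

# Overall plant matrix (5.6)
A = blkdiag(A1, A2) + blkdiag(Es1, Es2) @ L @ blkdiag(Cz1, Cz2)
print(A)   # [[-1.   0.5] [ 0.3 -2. ]]
```

The off-diagonal entries of A are exactly the coupling gains Esi Lij Czj, so the strength of the physical interconnection is directly visible in the unstructured model.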
FIGURE 5.3
Practical stability with respect to the set A.

This definition claims that for any bounded initial condition x(0), the state trajectory x(t) remains bounded for all t ≥ 0 and eventually converges to the set A for t → ∞.

DEFINITION 5.6 If A is a globally asymptotically stable set for the system (5.1), then (5.1) is said to be practically stable with respect to the set A.

A graphical interpretation of this stability notion is given in Figure 5.3, which shows the trajectory x(t) of a system of the form (5.1) in the two-dimensional state space. The trajectory x(t) converges to the set A for t → ∞. The size of A depends on the disturbance magnitude d̄ and, in the case of event-based state feedback, also on some parameters of the event-based controller, as will be explained in more detail later.

5.3 State-Feedback Approach to Event-Based Control

A model-based approach to event-based state feedback was first presented in [17]. This approach shows in a plain way the basic idea of how a dynamic model can be used for the control input generation and the determination of the event times in event-based control. For this purpose, the basic approach [17] is summarized in the following, before this idea is developed further in Section 5.4 to the distributed event-based state-feedback control of physically interconnected systems.

5.3.1 Basic Idea

The state-feedback approach to event-based control published in [17] is grounded on the consideration that feedback control, as opposed to feedforward control, is necessary in three situations:

• An unstable plant needs to be stabilized.

• The plant is inaccurately known, so that the controller needs to react to model uncertainties.

• Unknown disturbances need to be attenuated.

In order to answer the question of at which time instants a control loop needs to be closed, the focus in [17] is on disturbance attenuation, while the plant model is assumed to be known with negligible uncertainties. Hence, the main reason for a feedback communication is an intolerable effect of the disturbance d(t) on the system performance. Further publications that have followed up on [17] have extended the event-based state-feedback approach to deal with model uncertainties [13], delays and packet losses in the feedback communication [14], and nonlinear plant dynamics [19–21].

The structure of the event-based control loop investigated in [17] is illustrated in Figure 5.4. The plant is represented by the model (5.1), assuming that it provides a central sensor unit and a central actuator unit. Regarding the plant, the assumptions A 5.1–A 5.3 are made. The event generator E determines the event times tk (k = 0, 1, 2, . . .), at which the current state x(tk) is transmitted to the control input generator C that uses the received information to update the generation of the control input u(t). The solid arrows represent continuous information links, whereas the dashed line symbolizes a communication that occurs at the event times only. The feedback communication via the network is assumed to be ideal as specified in A 5.6.

The basic idea of the event-based state-feedback approach is to design the event generator E and the control input generator C such that the event-based control loop imitates the disturbance behavior of a continuous state-feedback loop with adjustable approximation

FIGURE 5.4
Event-based state-feedback loop.
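The point-to-set distance ∥x∥A used in Definitions 5.5 and 5.6 takes a particularly simple form for the ball-shaped sets that appear later in this chapter (cf. (5.15)). A small sketch (Python/NumPy; the vectors and radius are arbitrary example values):

```python
import numpy as np

# For the ball A = {x : ||x|| <= b}, the point-to-set distance
# ||x||_A = inf{||x - z|| : z in A} is ||x|| - b outside A and 0 inside A.
def dist_to_ball(x, b):
    return max(0.0, np.linalg.norm(x) - b)

print(dist_to_ball(np.array([3.0, 4.0]), 2.0))   # 3.0  (||x|| = 5, b = 2)
print(dist_to_ball(np.array([0.5, 0.0]), 2.0))   # 0.0  (x already lies in A)
```

Practical stability with respect to such a ball thus means that this distance tends to zero along every trajectory, while points inside the ball are not required to converge further.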
error. In the following, this continuous control system is referred to as the reference system, which is represented by the model

    Σr:  ẋr(t) = (A − BK) xr(t) + Ed(t),  xr(0) = x0,  (5.14)

with the abbreviation Ā := A − BK and the state xr ∈ IRn. The state of the reference system is marked by the subscript r in order to distinguish it from the state of the event-based control system introduced later. The state-feedback gain K in (5.14) is assumed to be designed such that the system Σr has a desired disturbance behavior; hence, the state xr(t) of the reference system (5.14) is bounded.

Proposition 5.1

Given that the disturbance d(t) is bounded as in (5.2), the reference system (5.14) is practically stable with respect to the set

    Ar := { xr ∈ IRn | ∥xr∥ ≤ br },  (5.15)

with the bound

    br = ∥d̄∥ · ∫_0^∞ ∥e^{Āτ} E∥ dτ.  (5.16)

PROOF Equation 5.14 yields

    xr(t) = e^{Āt} x0 + ∫_0^t e^{Ā(t−τ)} Ed(τ) dτ.

For an arbitrarily large but bounded initial state x0 and the maximum disturbance magnitude d̄, the norm of the state ∥xr(t)∥ is bounded by

    lim sup_{t→∞} ∥xr(t)∥ ≤ ∥d̄∥ · ∫_0^∞ ∥e^{Āτ} E∥ dτ =: br,

for t → ∞.

5.3.2 Components of the Event-Based Controller

This section describes the method for the design of the event-based controller proposed in [17]. The event-based controller consists of the following components:

• The event generator E that determines what information is transmitted via the feedback link at which time instants tk.

• The control input generator C that generates the control input u(t) in a feedforward manner in between consecutive event times tk−1 and tk, and it updates the determination of the input signal u(t) using the information received at the event times tk.

The communication network is also considered to be part of the event-based controller. However, the network is assumed to be ideal as stated in A 5.6; thus, the transmission of data is not particularly considered.

Control input generator C. The control input generator C is described by

    C:  Σs: ẋs(t) = Ā xs(t) + E d̂k,  xs(tk+) = x(tk),
        u(t) = −K xs(t).  (5.17)

It includes a model Σs of the reference system (5.14) with the model state xs ∈ IRn, which is reset at the event times tk to the current plant state x(tk). Further, d̂k denotes an estimate of the disturbance d(t), which is determined at the event time tk. A method for the disturbance estimation is presented in Section 5.3.2. The control input u(t) is generated using the model state xs(t) and is applied to the plant.

Event generator E. The event generator E also includes a model Σs of the reference system (5.14), and it determines the event times tk as follows:

    E:  Σs: ẋs(t) = Ā xs(t) + E d̂k,  xs(tk+) = x(tk),
        t0 = 0,
        tk+1 := inf { t > tk | ∥x(t) − xs(t)∥ = ē }.  (5.18)

Note that the state xs(t) of the model Σs that is used by both the control input generator C and the event generator E is a prediction of the plant state x(t). Given that assumption A 5.1 holds true and that xs(tk+) = x(tk), a deviation between x(t) and xs(t) for t > tk is only caused by a difference between the actual disturbance d(t) and its estimate d̂k. The event generator continuously monitors the difference state

    xΔ(t) := x(t) − xs(t),  (5.19)

and triggers an event whenever the norm of this difference attains some event threshold ē ∈ IR+. The initial event at time t0 = 0 is generated regardless of the triggering condition. At the event time tk (k = 0, 1, 2, . . .), the event generator E transmits the current plant state x(tk) and a new disturbance estimate d̂k over the communication network to the control input generator C. The state information x(tk) is used in both components at the event time tk in order to reset the model state xs(t), as indicated in (5.17) and (5.18).
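The interplay of the components (5.1), (5.17), and (5.18) can be illustrated by a simple simulation. The following sketch (Python/NumPy; a scalar plant with made-up numbers and the trivial estimate d̂k ≡ 0) implements the model-based input generation and the triggering rule with a forward-Euler integration:

```python
import numpy as np

# Scalar sketch of the loop (5.1), (5.17), (5.18); all numbers hypothetical
a, b, e = 1.0, 1.0, 1.0      # unstable plant: dx/dt = a x + b u + e d
k = 3.0                      # feedback gain, so A_bar = a - b k = -2
e_bar = 0.1                  # event threshold
d = 0.5                      # constant (unknown) disturbance, d_hat = 0

dt, T = 1e-4, 5.0
x, xs = 0.0, 0.0
event_times = []
for step in range(int(T / dt)):
    u = -k * xs                          # control input generator C (5.17)
    x += dt * (a * x + b * u + e * d)    # plant (5.1)
    xs += dt * (a - b * k) * xs          # model Sigma_s with d_hat = 0
    if abs(x - xs) >= e_bar:             # triggering condition (5.18)
        xs = x                           # reset x_s(t_k+) = x(t_k)
        event_times.append(step * dt)

# Between events the difference state obeys dx_Delta/dt = a x_Delta + e d,
# so events recur roughly every ln(1 + a*e_bar/(e*d))/a ≈ 0.18 s here.
print(len(event_times))
```

Raising ē in this sketch lengthens the time between events but lets x(t) drift further from the model prediction, which is exactly the trade-off formalized in Theorems 5.4 and 5.5 below.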
Disturbance estimation. The state-feedback approach to event-based control [17] works with any disturbance estimation method that yields bounded estimates d̂k, including the trivial estimation d̂k ≡ 0 for all k ∈ IN0. This section presents an estimation method, which is based on the assumption that the disturbance d(t) is constant in the time interval t ∈ [tk−1, tk):

    d(t) = dc,  for t ∈ [tk−1, tk),

where dc ∈ D denotes the disturbance magnitude. The idea of this estimation method is to determine the disturbance magnitude dc at the event time tk and to use this value as the disturbance estimate d̂k for the time t ≥ tk until the next event occurs.

The disturbance estimation method is given by the following recursion:

    d̂0 = 0,  (5.20)
    d̂k = d̂k−1 + (A−1 (e^{A(tk − tk−1)} − In) E)+ xΔ(tk),  (5.21)

where the initial estimate d̂0 is chosen to be zero if no information about the disturbance is available. The pseudoinverse in (5.21) exists if p ≤ n holds, that is, if the dimension p of the disturbance vector d(t) does not exceed the dimension n of the state vector x(t).

5.3.3 Main Properties of the Event-Based State Feedback

Deviation between the behavior of the reference system and the event-based control loop. The following result shows that the event-based state-feedback approach [17] has the ability to approximate the disturbance behavior of the continuous state-feedback system (5.14) with arbitrary precision.

Theorem 5.4: Theorem 1 in [17]

The approximation error e(t) = x(t) − xr(t) between the behavior of the event-based state-feedback loop (5.1), (5.17), and (5.18) and the reference system (5.14) is bounded from above by

    ∥e(t)∥ ≤ ē · ∫_0^∞ ∥e^{Āt} BK∥ dt =: emax,  (5.22)

for all t ≥ 0.

Theorem 5.4 shows that the deviation between the disturbance rejection behavior of the event-based control system (5.1), (5.17), and (5.18) and the reference system (5.14) depends linearly on the event threshold ē. Hence, the behavior of the reference system is better approximated by the event-based state-feedback loop if the threshold ē is reduced, and vice versa. Theorem 5.4 can also be regarded as a proof of stability for the event-based state-feedback loop, since the stability of the reference system (5.14) together with the bound (5.22) implies the boundedness of the state x(t) of the event-based control loop.

In [12] it has been shown that the event-based state-feedback loop (5.1), (5.17), and (5.18) is practically stable with respect to the set

    A := { x ∈ IRn | ∥x∥ ≤ b },

with the bound

    b = ē · ∫_0^∞ ∥e^{Āτ} BK∥ dτ + ∥d̄∥ · ∫_0^∞ ∥e^{Āτ} E∥ dτ.  (5.23)

A comparison of (5.23) with the bound br for the reference system given in (5.15) implies that b = br + emax holds. Hence, the bound br on the set Ar for the reference system is extended by the bound (5.22) if event-based state feedback is applied instead of continuous state feedback.

Minimum interevent time. The minimum time that elapses in between two consecutive events is referred to as the minimum interevent time Tmin. The event-based state-feedback approach [17] ensures that the minimum interevent time is bounded from below according to Tmin ≥ T̄.

Theorem 5.5: Theorem 2 in [17]

For any bounded disturbance, the minimum interevent time Tmin of the event-based state-feedback loop (5.1), (5.17), and (5.18) is bounded from below by

    T̄ = arg min_t { ∫_0^t ∥e^{Aτ} E∥ dτ = ē / d̄Δ },  (5.24)

where d̄Δ ≥ ∥d(t) − d̂k∥ denotes the maximum deviation between the disturbance d(t) and the estimates d̂k for all t ≥ 0 and all k ∈ IN0.

A direct inference of Theorem 5.5 is that no Zeno behavior can occur in the event-based state-feedback loop; that is, the number of events does not accumulate to infinity in finite time [16]. This result also shows that the minimum interevent time gets smaller if the event threshold ē is reduced, and vice versa. This conclusion is in accordance with the expectation that a more precise approximation of the reference system's behavior by the event-based state feedback comes at the cost of a more frequent communication.
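The estimation recursion (5.20) and (5.21) is straightforward to implement. The following sketch (Python/NumPy; the plant matrices and disturbance magnitude are hypothetical) also verifies the key property of the method: for a disturbance that is constant between two events, a single update recovers the true magnitude exactly, because the difference state satisfies xΔ(tk) = A−1(e^{A(tk−tk−1)} − In)E (dc − d̂k−1):

```python
import numpy as np

# Hypothetical example data: stable 2-state plant, scalar disturbance
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
E = np.array([[1.0],
              [0.5]])

def mexp(M, t):
    """Matrix exponential e^{Mt} via eigendecomposition (M diagonalizable)."""
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real

def estimate_update(d_prev, x_delta, dt):
    """One step of (5.21): d_k = d_{k-1} + (A^{-1}(e^{A dt} - I)E)^+ x_delta."""
    M = np.linalg.inv(A) @ (mexp(A, dt) - np.eye(A.shape[0])) @ E
    return d_prev + np.linalg.pinv(M) @ x_delta

d_c = np.array([0.7])                    # unknown constant magnitude d_c
dt = 0.4                                 # interevent time t_k - t_{k-1}
d_hat0 = np.zeros(1)                     # initial estimate (5.20)

# Difference state caused by the estimation error d_c - d_hat0
x_delta = np.linalg.inv(A) @ (mexp(A, dt) - np.eye(2)) @ E @ (d_c - d_hat0)
d_hat1 = estimate_update(d_hat0, x_delta, dt)
print(d_hat1)    # ≈ [0.7]
```

Here p = 1 ≤ n = 2, so the pseudoinverse is the left inverse of a full-column-rank matrix and the update is exact; for time-varying disturbances the same recursion yields an approximation instead.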
triggers either the transmission of information to, or the request of information from, the neighboring subsystems. Section 5.5 presents the results of an experiment that investigates the practical application of the distributed event-based state feedback to a thermofluid process.

5.4 Disturbance Rejection with Event-Based Communication Using Information Requests

This section extends the event-based state-feedback approach [17] to the distributed event-based control of physically interconnected systems. The motivation for this extension is twofold:

1. The event-based state-feedback approach [17] requires the plant to have a centralized sensor unit that provides the overall state vector x(t), and a centralized actuator unit that updates all control inputs synchronously. Such centralized sensor and actuator units can generally not be presumed to exist, particularly not in large-scale systems which are composed of interconnected systems (cf. Figure 5.2). From this consideration emerges the need for a decentralized controller structure.

2. For many large-scale systems, a stabilizing decentralized controller can only be found if the interconnections between the subsystems are sufficiently weak [9,23]. Moreover, there are applications where the control performance that can be achieved by means of decentralized control does not meet the requirements. In these cases, a distributed controller, where the control units use not only local information but also information from neighboring subsystems, might be preferred in order to accomplish the control task.

This chapter presents a novel method for the implementation of a distributed state-feedback law in an event-based manner.

Section 5.4.1 introduces the notion of the approximate model and the extended subsystem model. These models are used in Section 5.4.2, which proposes a method for the design of a continuous distributed state feedback. The realization of the resulting distributed state feedback requires continuous communication among several subsystems, which would lead to a heavy load on the network and, thus, is undesired. A solution to this problem is presented in Section 5.4.3 in the form of a method for the implementation of the distributed state feedback in an event-based fashion, where the feedback of the local state information is continuous, and the communication between the components of the networked controller occurs at the event times only. The approximation of the behavior of the control system with distributed continuous state feedback by the event-based control is accomplished by a novel triggering mechanism that

5.4.1 Approximate Model and Extended Subsystem Model

The underlying idea of the subsequently proposed modeling approach is to obtain a model that not only describes the behavior of a single subsystem Σi but also includes the interaction of Σi with its neighboring subsystems and their controllers. The behavior of the neighboring subsystems can generally not be described precisely; hence, it is approximated by an approximate model Σai. The subsystem Σi augmented with the approximate model Σai then forms the extended subsystem Σei, which is used later for the design of a distributed state-feedback controller.

Approximate model. Regarding the interconnected subsystems (5.3) and (5.4), the following definitions are made:

DEFINITION 5.7 Subsystem Σj is called the predecessor of subsystem Σi if ∥Lij∥ > 0 holds. In the following,

    Pi := { j ∈ N \ {i} | ∥Lij∥ > 0 },

denotes the set of those subsystems Σj which directly affect Σi through the coupling input si(t).

DEFINITION 5.8 Subsystem Σj is called the successor of subsystem Σi if ∥Lji∥ > 0 holds. Hereafter,

    Si := { j ∈ N \ {i} | ∥Lji∥ > 0 },

denotes the set of subsystems Σj which the subsystem Σi directly affects through the coupling output zi(t).

To understand the notion of the approximate model, consider Figure 5.5. From Equation 5.4, it follows that the coupling input si(t) aggregates the influence of the remaining subsystems together with their controllers on Σi (Figure 5.5a). Assume that, from the viewpoint of subsystem Σi, the relation between the coupling output
zi(t) and the coupling input si(t) is approximately described by the approximate model:

    Σai:  ẋai(t) = Aai xai(t) + Bai zi(t) + Eai dai(t) + Fai fi(t),  xai(0) = xa0i,
          si(t) = Cai xai(t),
          vi(t) = Hai xai(t) + Dai zi(t),  (5.25)

where xai ∈ IRnai, dai ∈ IRpai, fi ∈ IRνi, and vi ∈ IRμi denote the state, the disturbance, the residual output, and the residual input, respectively (Figure 5.5b). The disturbance dai(t) is assumed to be bounded by

    |dai(t)| ≤ d̄ai,  ∀ t ≥ 0.  (5.26)

In the following, the state xai(t) of the approximate model Σai is considered to be directly related to the states xj(t) (j ∈ Pi) of the predecessors of Σi in the sense that the approximate model Σai can be obtained by the linear transformation

    xai(t) = ∑_{j ∈ Pi} Tij xj(t),  ∀ t ≥ 0,  (5.27)

with Tij ∈ IRnai×nj. Moreover, the relation between the subsystem states xj(t) (j ∈ Pi) and the approximate model state xai(t) is assumed to be bijective. Hence, for each j ∈ Pi there exists a matrix Cji ∈ IRnj×nai that satisfies

    xj(t) = Cji xai(t),  ∀ t ≥ 0.  (5.28)

Note that Equations 5.27 and 5.28 imply

    Cji Tip = { Inj,  if j = p,
                O,    if j ≠ p.

The model Σai approximately describes the behavior of the predecessor subsystems Σj (j ∈ Pi) together with their controllers. This consideration leads to the following assumption:

A 5.7 The matrix Aai is Hurwitz for all i ∈ N.

The mismatch between the behavior of the overall control system, except for subsystem Σi, and the approximate model (5.25) is expressed by the residual model:

    Σfi:  fi(t) = Gfdi ∗ dfi + Gfvi ∗ vi,  (5.29)

where dfi ∈ IRpfi denotes the disturbance that is assumed to be bounded by

    |dfi(t)| ≤ d̄fi,  ∀ t ≥ 0.  (5.30)

Note that the approximate model Σai together with the residual model Σfi represent the behavior of the remaining subsystems and their controllers (Figure 5.5b). Hereafter, the residual model Σfi is not assumed to be known exactly but instead is described by some upper bounds Ḡfdi(t) and Ḡfvi(t) that satisfy the relations

    Ḡfdi(t) ≥ |Gfdi(t)|,  Ḡfvi(t) ≥ |Gfvi(t)|,  (5.31)

for all t ≥ 0.

Extended subsystem model. Consider the subsystem Σi augmented with the approximate model Σai, which yields the extended subsystem:

    Σei:  ẋei(t) = Aei xei(t) + Bei ui(t) + Eei dei(t) + Fei fi(t),
          xei(0) = xe0i = (x0i⊤ xa0i⊤)⊤,
          vi(t) = Hei xei(t),  (5.32)

with the state xei = (xi⊤ xai⊤)⊤ ∈ IRni+nai, the composite disturbance vector dei = (di⊤ dai⊤)⊤, and the matrices

    Aei = [ Ai       Esi Cai
            Bai Czi  Aai     ],   Bei = [ Bi
                                          O  ],
    Eei = [ Ei  O
            O   Eai ],   Fei = [ O
                                 Fai ],  (5.33)
    Hei = [ Dai Czi  Hai ].  (5.34)

The boundedness of the disturbances di(t) and dai(t) implies the boundedness of dei(t). With Equations 5.5 and 5.26, the bound

    |dei(t)| ≤ (d̄i⊤ d̄ai⊤)⊤ =: d̄ei,  ∀ t ≥ 0,  (5.35)

on the composite disturbance dei(t) follows.

FIGURE 5.5
Interconnection of subsystem Σi and the remaining control loops: (a) general structure and (b) decomposition of the remaining controlled subsystems into approximate model Σai and residual model Σfi.
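The block structure (5.33) and (5.34) of the extended subsystem can be assembled mechanically from the subsystem and approximate-model matrices. A sketch (Python/NumPy; all blocks are hypothetical scalar values chosen only to show the shapes) is:

```python
import numpy as np

# Hypothetical scalar-block data for one subsystem and its approximate model
Ai, Bi, Ei    = np.array([[-1.0]]), np.array([[1.0]]), np.array([[0.5]])
Esi, Czi      = np.array([[1.0]]),  np.array([[1.0]])
Aai, Bai      = np.array([[-3.0]]), np.array([[0.8]])
Eai, Fai      = np.array([[1.0]]),  np.array([[1.0]])
Cai, Hai, Dai = np.array([[1.0]]),  np.array([[0.6]]), np.array([[0.2]])

# Extended-subsystem matrices per (5.33) and (5.34)
Aei = np.block([[Ai,        Esi @ Cai],
                [Bai @ Czi, Aai      ]])
Bei = np.vstack([Bi, np.zeros((1, 1))])
Eei = np.block([[Ei, np.zeros((1, 1))],
                [np.zeros((1, 1)), Eai]])
Fei = np.vstack([np.zeros((1, 1)), Fai])
Hei = np.hstack([Dai @ Czi, Hai])
print(Aei)   # [[-1.   1. ] [ 0.8 -3. ]]
```

The off-diagonal blocks of Aei show how the coupling output of Σi drives the approximate model (via Bai Czi) and how the approximate model feeds back into Σi (via Esi Cai), which is the interconnection depicted in Figure 5.5b.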
Distributed Event-Based State-Feedback Control 89

5.4.2 A Distributed State-Feedback Design Method derives a condition under which the design of the dis-
tributed state-feedback controller for the extended sub-
This section proposes a new approach to the design of
system with f i (t) ≡ 0 ensures the stability of the overall
a distributed continuous state-feedback law for physi-
control system.
cally interconnected systems, which is summarized in
Algorithm 5.1. The main feature of the proposed design
Stability of the overall control system. This section
method is that the distributed state feedback can be
derives a condition to check the stability of the extended
designed for each subsystem Σi without having exact
subsystem (5.32) with the distributed state feedback
knowledge about the overall system. The stability of the overall control system is tested by means of a condition that is presented in Theorem 5.7.

Distributed state-feedback design. The following proposes an approach to the design of a distributed state-feedback controller, which is carried out for each subsystem Σi of the interconnected subsystems (5.3) and (5.4) separately. This design approach requires the extended subsystem model (5.32) to be given, in order to determine the state feedback

ui(t) = −Kei xei(t) = −( Kdi  Kai ) ( xi(t) ; xai(t) ). (5.36)

The state-feedback gain Kei is assumed to be chosen for the extended subsystem (5.32) with fi(t) ≡ 0 such that the closed-loop system

Σ̄ei :  ẋei(t) = Āei xei(t) + Eei dei(t),  xei(0) = ( x0i ; xa0i ),
       vi(t) = Hei xei(t), (5.37)

has a desired behavior that includes the minimum requirement that the matrix

Āei := Aei − Bei Kei = ( Ai − Bi Kdi   Esi Cai − Bi Kai ; Bai Czi   Aai ), (5.38)

is stable. With the transformation (5.27), the control law (5.36) can be rewritten as

ui(t) = −Kdi xi(t) − ∑j∈Pi Kij xj(t), (5.39)

with

Kij := Kai Tij.

This reveals that (5.36) is a distributed state feedback, where the control input ui(t) is a function of the local state xi(t) as well as of the states xj(t) (j ∈ Pi) of the predecessor subsystems of Σi.

Note that the proposed design approach neglects the impact of the residual model (5.29), since fi(t) ≡ 0 is presumed. Hence, the design of the distributed state feedback only guarantees the stability of the isolated controlled extended subsystems. The next section analyzes the stability of the extended subsystem (5.32) with the distributed state feedback (5.39), taking the interconnection to the residual models (5.29) and (5.31) into account. The result is then extended to a method for the stability analysis of the overall control system.

The stability analysis method that is presented in this section makes use of the comparison system

rxi(t) = Vx0i(t) |xe0i| + Vxdi ∗ |dei| + Vxfi ∗ |fi| ≥ |xei(t)|, (5.40)
rvi(t) = Vv0i(t) |xe0i| + Vvdi ∗ |dei| + Vvfi ∗ |fi| ≥ |vi(t)|, (5.41)

for the extended subsystem (5.32) with the distributed controller (5.39), where

Vx0i(t) = |e^(Āei t)|,  Vxdi(t) = |e^(Āei t) Eei|,  Vxfi(t) = |e^(Āei t) Fei|,
Vv0i(t) = |Hei e^(Āei t)|,  Vvdi(t) = |Hei e^(Āei t) Eei|,  Vvfi(t) = |Hei e^(Āei t) Fei|. (5.42)

The following theorem gives a condition that can be used to test the stability of the interconnection of the extended subsystem (5.32) with the distributed state feedback (5.39) and the residual models (5.29) and (5.31).

Theorem 5.6

Consider the interconnection of the extended subsystem (5.32) with the distributed state feedback (5.39) together with the residual model Σfi as defined in (5.29) with the bounds (5.31). The interconnection of these systems is practically stable if the condition

λP( ∫0^∞ Ḡfvi(t) dt · ∫0^∞ Vvfi(t) dt ) < 1, (5.43)

is satisfied.
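The condition (5.43) can be evaluated numerically. The sketch below uses made-up matrices Āei, Hei, Fei and an assumed value for the residual-model integral ∫0^∞ Ḡfvi(t) dt (none of them taken from this chapter); it truncates the integral of Vvfi(t) = |Hei e^(Āei t) Fei|, evaluates it by the trapezoidal rule, and takes λP as the spectral radius of the resulting nonnegative matrix.

```python
# Hypothetical numbers throughout; this only illustrates how (5.43) can be checked.
import numpy as np
from scipy.linalg import expm

A_ei = np.array([[-2.0, 0.5],
                 [0.0, -3.0]])      # assumed Hurwitz matrix Abar_ei, cf. (5.38)
H_ei = np.eye(2)                    # assumed output matrix of the extended subsystem
F_ei = np.eye(2)                    # assumed input matrix of the residual coupling

# Truncated integral of V_vfi(t) = |H_ei e^{Abar_ei t} F_ei|, trapezoidal rule
ts = np.linspace(0.0, 20.0, 2001)
dt = ts[1] - ts[0]
V = np.array([np.abs(H_ei @ expm(A_ei * t) @ F_ei) for t in ts])
int_V = (V[:-1] + V[1:]).sum(axis=0) * dt / 2.0

int_G = 0.1 * np.ones((2, 2))       # assumed integral of the residual impulse-response bound

perron_root = max(abs(np.linalg.eigvals(int_G @ int_V)))
print(perron_root < 1.0)            # True: condition (5.43) holds for this example
```

Since both integral factors are elementwise nonnegative, the Perron root coincides with the spectral radius, which is what the last line computes.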
90 Event-Based Control and Signal Processing
PROOF The proposed analysis method makes use of comparison systems, which are explained in Section 5.2.4. The following equations

rxi(t) = Vx0i(t) |xe0i| + Vxdi ∗ |dei| + Vxfi ∗ |fi| ≥ |xei(t)|, (5.44)
rvi(t) = Vv0i(t) |xe0i| + Vvdi ∗ |dei| + Vvfi ∗ |fi| ≥ |vi(t)|, (5.45)

with

Vx0i(t) = |e^(Āei t)|,  Vxdi(t) = |e^(Āei t) Eei|,  Vxfi(t) = |e^(Āei t) Fei|,
Vv0i(t) = |Hei e^(Āei t)|,  Vvdi(t) = |Hei e^(Āei t) Eei|,  Vvfi(t) = |Hei e^(Āei t) Fei|,

represent a comparison system for the extended subsystem (5.32) with the distributed state feedback (5.39). In the following analysis, the assumption xe0i = 0 is made, which is no loss of generality for the presented stability criterion but simplifies the derivation. Consider the interconnection of the comparison system (5.44) and the residual systems (5.29) and (5.31), which yields

rfi(t) = Ḡfvi ∗ Vvdi ∗ |dei| + Ḡfdi ∗ |dfi| + Ḡfvi ∗ Vvfi ∗ |fi| ≥ |fi(t)|. (5.46)

Equation 5.46 is an implicit bound on the signal |fi(t)|. An explicit bound on |fi(t)| in terms of the signal rvi(t) can be obtained by means of the comparison principle [15], where the basic idea is to find an impulse-response matrix Gi(t) that describes the input/output behavior of the gray highlighted block in Figure 5.6:

rfi(t) = Gi ∗ ( Ḡfvi ∗ Vvdi ∗ |dei| + Ḡfdi ∗ |dfi| ) ≥ |fi(t)|,  ∀ t ≥ 0. (5.47)

The comparison principle says that the impulse-response matrix

Gi(t) = δ(t) I + ( Ḡfvi ∗ Vvfi ∗ Gi )(t), (5.48)

exists, which means the inequality

∫0^∞ Gi(t) dt = I + ∫0^∞ ( Ḡfvi ∗ Vvfi ∗ Gi )(t) dt = I + ∫0^∞ Ḡfvi(t) dt ∫0^∞ Vvfi(t) dt ∫0^∞ Gi(t) dt < ∞,

holds true if the condition

λP( ∫0^∞ Ḡfvi(t) dt · ∫0^∞ Vvfi(t) dt ) < 1, (5.49)

is satisfied. On condition that relation (5.49) is fulfilled, the signal rfi(t) remains bounded for all t ≥ 0, which implies that fi(t) is bounded, as well. Given that the disturbance inputs dfi(t) and dei(t) are bounded by (5.30) or (5.35), respectively, and that the matrix Āei in (5.42) is Hurwitz by definition, the output rxi(t) of the comparison system (5.40) is bounded for all t ≥ 0. Since rxi(t) represents a bound on the subsystem state xi(t), the stability of the overall comparison system implies the stability of (5.32) with the distributed state feedback (5.39) together with the residual model (5.29).

FIGURE 5.6
Block diagram representing Equation 5.46.

The relation (5.43) is interpreted as follows: The condition (5.43) can be considered as a small-gain theorem that claims that the extended subsystem (5.32) and the residual model (5.29) are weakly coupled. On the other hand, the condition (5.43) does not impose any restrictions on the interconnection of subsystem Σi and its neighboring subsystems Σj (j ∈ Ni). These systems do not have to satisfy any small-gain condition but can be strongly interconnected.

Note that the residual model (5.29) together with the extended subsystem (5.32) with the distributed state feedback (5.39) describe the behavior of the overall control system from the perspective of subsystem Σi. Hence, the practical stability of the overall control system is implied by the practical stability of the interconnection of the residual model (5.29) and the extended subsystem (5.32) with the controller (5.39) for all i ∈ N. Theorem 5.7 makes the result on the practical stability of the overall control system more precise and determines the set Ari to which the state xi(t) is bounded for t → ∞. Hereafter, it is assumed that the condition (5.43) is satisfied, which implies that the matrix

∫0^∞ Gi(t) dt := ( I − ∫0^∞ Ḡfvi(t) dt ∫0^∞ Vvfi(t) dt )^(−1),

exists and is nonnegative.
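The existence and nonnegativity claimed here can be illustrated numerically: for any elementwise nonnegative matrix M with Perron root below 1 (M below is an arbitrary test matrix, not one from the chapter), the inverse (I − M)⁻¹ equals the Neumann series ∑k M^k and is itself nonnegative.

```python
import numpy as np

M = np.array([[0.2, 0.1],
              [0.05, 0.3]])                    # nonnegative, Perron root < 1
assert max(abs(np.linalg.eigvals(M))) < 1.0

inv = np.linalg.inv(np.eye(2) - M)             # plays the role of int_0^inf G_i(t) dt

# Partial Neumann series sum_{k=0}^{60} M^k converges to the same matrix
S, P = np.zeros((2, 2)), np.eye(2)
for _ in range(61):
    S += P
    P = P @ M
print(np.allclose(S, inv), bool((inv >= 0.0).all()))   # True True
```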
Distributed Event-Based State-Feedback Control 91
Theorem 5.7

Consider the interconnection of the controlled extended subsystems (5.32) and (5.39) with the residual models (5.29) and (5.31) and assume that the condition (5.43) is satisfied for all i ∈ N. Then, the subsystem Σi (i ∈ N) is practically stable with respect to the set

Ari := { xi ∈ IRni : |xi| ≤ bri }, (5.50)

with the ultimate bound

bri = ( Ini  O ) ( ∫0^∞ Vxdi(t) dt · d̄ei + ∫0^∞ Vxfi(t) dt · f̄i ), (5.51)

where

f̄i = ∫0^∞ Gi(t) dt ( ∫0^∞ Ḡfvi(t) dt ∫0^∞ Vvdi(t) dt · d̄ei + ∫0^∞ Ḡfdi(t) dt · d̄fi ),

is the bound on the maximum residual output fi(t) for all t ≥ 0.

PROOF To begin with the proof, observe that the state xei(t) of each controlled extended subsystem (5.32) and (5.39) is bounded if the disturbance dei(t) as well as the signal fi(t) are bounded. While the former is bounded by definition (cf. Equation 5.35), the latter is known to be bounded if the condition (5.43) holds, which is fulfilled by assumption.

The next analysis derives the asymptotically stable set Ari for the subsystems Σi. Consider Equation 5.44 from which

|xi(t)| = ( Ini  O ) |xei(t)| ≤ ( Ini  O ) ( Vx0i(t) |xe0i| + Vxdi ∗ d̄ei + Vxfi ∗ |fi| ),

follows, where the disturbance dei has been replaced by the bound given in (5.35). Note that for t → ∞, the term that depends on the initial state xe0i vanishes. Hence, from the previous relation, the bound

lim sup_{t→∞} |xi(t)| ≤ ( Ini  O ) ( ∫0^∞ Vxdi(t) dt · d̄ei + ∫0^∞ Vxfi(t) dt · f̄i ) =: bri, (5.52)

follows with

lim sup_{t→∞} |fi(t)| ≤ f̄i.

From Equation 5.52, it can be inferred that subsystem Σi is practically stable with respect to the set

Ari := { xi ∈ IRni : |xi| ≤ bri }. (5.53)

The aim of the following analysis is to specify the bound f̄i. Therefore, consider Equations 5.29 and 5.45, which yield

rfi(t) = Ḡfvi ∗ ( Vv0i |xe0i| + Vvdi ∗ |dei| + Vvfi ∗ |fi| ) + Ḡfdi ∗ |dfi| ≥ |fi(t)|.

Using the comparison principle, this bound can be restated in the explicit form

rfi(t) = Gi ∗ ( Ḡfvi ∗ ( Vv0i |xe0i| + Vvdi ∗ |dei| ) + Ḡfdi ∗ |dfi| ) ≥ |fi(t)|,

with the impulse-response matrix Gi given in (5.48). From the last relation, the bound

lim sup_{t→∞} rfi(t) = ∫0^∞ Gi(t) dt ( ∫0^∞ Ḡfvi(t) dt ∫0^∞ Vvdi(t) dt · d̄ei + ∫0^∞ Ḡfdi(t) dt · d̄fi ) =: f̄i, (5.54)

follows. In summary, for t → ∞, the state xi(t) asymptotically converges to the set Ari defined in (5.53) with the ultimate bound bri given in (5.52). The bound bri is a function of the disturbance bound d̄ei according to (5.35) and of the bound f̄i given in (5.54). This completes the proof of Theorem 5.7.

Theorem 5.7 shows that the size of the set Ari depends on the disturbance bounds d̄ei and d̄fi, which together represent a bound on the overall disturbance vector d(t).

Design algorithm. The method for the design of the distributed continuous state-feedback controller is summarized in the following algorithm.

Algorithm 5.1: Design of a distributed continuous state-feedback controller

Given: Interconnected systems (5.3) and (5.4).

Do for each i ∈ N:

1. Identify the set Pi and determine an approximate model (5.25) and the corresponding residual models (5.29) and (5.31). Compose the subsystem model (5.3) and the approximate model (5.25) to form the extended subsystem model (5.32).

2. Determine the state-feedback gain Kei as in (5.36) such that the controlled extended subsystem has a desired behavior, implying that the matrix Āei given in (5.38) is Hurwitz.

3. Check whether the condition (5.43) is satisfied. If (5.43) does not hold, stop (the stability of the overall control system cannot be guaranteed).

Result: A state-feedback gain Kei for each i ∈ N that can be implemented in a distributed manner as shown in (5.39). The subsystem Σi is practically stable with respect to the set Ari given in (5.50) with the ultimate bound (5.51).
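A minimal numerical sketch of Algorithm 5.1 for one subsystem, with made-up matrices for the extended subsystem (5.32) and an assumed residual-model bound in step 3; pole placement stands in here for whatever method is used in step 2 to obtain a desired behavior.

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import place_poles

# Step 1 (assumed done): hypothetical extended subsystem matrices
A_ei = np.array([[0.5, 1.0],
                 [0.0, -1.0]])
B_ei = np.array([[0.0],
                 [1.0]])

# Step 2: choose K_ei so that Abar_ei = A_ei - B_ei K_ei is Hurwitz
K_ei = place_poles(A_ei, B_ei, [-2.0, -3.0]).gain_matrix
Abar_ei = A_ei - B_ei @ K_ei
assert np.linalg.eigvals(Abar_ei).real.max() < 0.0

# Step 3: stability condition (5.43) with assumed H_ei, F_ei and residual bound
H_ei, F_ei = np.eye(2), np.eye(2)
ts = np.linspace(0.0, 20.0, 2001)
dt = ts[1] - ts[0]
V = np.array([np.abs(H_ei @ expm(Abar_ei * t) @ F_ei) for t in ts])
int_V = (V[:-1] + V[1:]).sum(axis=0) * dt / 2.0
int_G = 0.05 * np.ones((2, 2))      # assumed coarse residual information (5.31)

ok = max(abs(np.linalg.eigvals(int_G @ int_V))) < 1.0
print(ok)                           # True: the design is accepted
```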

Note that the design and the stability analysis of the used to determine the control input ui (t) according to
distributed state feedback can be carried out for each
Σi (i ∈ N ) without the knowledge of the overall sys- ui (t) = −Kdi xi (t) − ∑ Kij x̃ ij (t). (5.55)
tem model. For the design, the subsystem model Σi and j ∈P i
the approximate model Σai are required only. In order
to analyze the stability of the control system, the coarse This state-feedback law is adapted from (5.39) by replac-
model information (5.31) of the residual model Σfi is ing the state x j (t) by its prediction x̃ij (t). Note that the
required. proposed approach to the distributed event-based con-
The implementation of the distributed state feedback trol allows for different local control units Fi and Fl to
as proposed in (5.39) requires continuous information determine different predictions x̃ ij (t) or x̃lj (t), respec-
links from all subsystems Σ j ( j ∈ Pi ) to subsystem Σi . tively, of the same state x j (t) of subsystem Σ j . These
However, continuous communication between several signals are distinguished in the following by means of
subsystems is in many technical systems neither real- the superscript.
izable nor desirable, due to high installation effort and It will be shown that this implementation yields a
maintenance costs. The following section proposes a stable overall control system if the difference
method for the implementation of the distributed state-    
feedback law (5.39) in an event-based fashion, where a  i   
 xΔj (t) =  x j (t) − x̃ ij (t) ,
communication between subsystem Σi and its neighbor-
ing subsystems Σ j ( j ∈ Pi ) is induced at the event time is bounded for all t ≥ 0 and all i, j ∈ N . This difference,
instants only. however, can neither be monitored by the controller Fi
nor by the controller Fj , because the former knows only
the prediction x̃ij (t), and the latter has access to the
5.4.3 Event-Based Implementation of a Distributed
State-Feedback Controller
subsystem state x j (t), only. This problem is solved in
the proposed implementation method by a new event-
This section proposes a method for the implementation triggering mechanism in the controllers Fi . The event
of a distributed control law of the form (5.39) in an event- generator Ei in Fi generates two different kinds of events:
based manner, where information is exchanged between The first one leads to a transmission of the current sub-
subsystems at event time instants only. This implemen- system state xi (t) to the controllers Fj of the neighboring
tation method obviates the need for a continuous com- subsystems Σ j , whereas the second one induces the
munication among subsystems, and it guarantees that request, denoted by R, of the current state information
the behavior of the event-based control systems approx- x j (t) from all neighboring subsystems Σ j . In this way,
imates the behavior of the control system with dis- the proposed implementation method introduces a new
tributed continuous state feedback. Following the idea kind of event-based control, where information is not
of [17], it will be shown that the deviation between the only sent but also requested by the event generators.
behavior of these two systems can be made arbitrarily
small. Reference system. Assume that for the interconnected
The main result of this section is a new approach to subsystems (5.3) and (5.4), a distributed state-feedback
the distributed event-based state-feedback control that controller is given, which has been designed according
can be made to approximate the behavior of the control to Algorithm 5.1. The subsystem Σi together with the
system with continuous distributed state feedback with distributed continuous state feedback (5.39) is described
arbitrary precision. This feature is guaranteed by means by the state-space model:
of a new kind of event-based control where two types of ⎧
events are triggered: Events that are triggered due to the ⎪
⎪ ẋri (t) = Āi xri (t) − Bi ∑ Kij xrj (t) + Ei di (t)

⎪ j ∈P i
transmit-condition induce the transmission of local state ⎪

information xi from controller Fi to the controllers Fj of Σri : + Esi sri (t)
the neighboring subsystems. In addition to this, a second ⎪

⎪ xri (0) = x0i

event condition, referred to as request-condition, leads to ⎪

the request of information from the controllers Fj of the zri (t) = Czi xri (t)
neighboring subsystems Σ j by the controller Fi . (5.56)

where Āi := ( Ai − Bi Kdi ). Hereafter, Σri is referred to as


Basic idea. The main idea of the proposed implemen- the reference subsystem. The signals are indicated with
tation method is as follows: The controller Fi generates r to distinguish them from the corresponding signals in
a prediction x̃ ij (t) of the state x j (t) for all j ∈ Pi which is the event-based control system that is investigated later.
The interconnection of the reference subsystems Σri is given by

sri(t) = ∑_{j=1}^{N} Lij zrj(t), (5.57)

according to Equation 5.4. Equations 5.56 and 5.57 yield the overall reference system

Σr :  ẋr(t) = ( A − B (Kd + K̄) ) xr(t) + E d(t),  xr(0) = x0, (5.58)

with Ā := A − B (Kd + K̄), the matrices A, B, E given in (5.6), and the state-feedback gains

Kd = diag (Kd1, ..., KdN), (5.59)

and

K̄ = ( O  K12  ...  K1N ; K21  O  ...  K2N ; ... ; KN1  KN2  ...  O ), (5.60)

where Kij = O holds if j ∉ Pi. The overall control system (5.58) is stable since the distributed state-feedback gain (5.59) and (5.60) is the result of Algorithm 5.1, which includes the validation of the stability condition (5.43). Consequently, the subsystems (5.56) are practically stable with respect to the set Ari given in (5.50) and (5.51).

The aim of the event-based controller that is to be designed subsequently is to approximate the behavior of the reference systems (5.56) and (5.57) with adjustable precision.

Information transmissions and requests. The event-based control approach that is investigated in this section works with a triggering mechanism that distinguishes between two kinds of events that lead to either the transmission of local state information or the request of information from neighboring subsystems. The event conditions, based on which these events are triggered, are called transmit-condition and request-condition. This event triggering scheme contrasts with the one of the event-based control methods that are proposed in the previous chapters and, thus, necessitates an extended notion of the triggering time instants.

According to the subsequently presented event-based control approach, the local control unit Fi sends and receives information. Both the sending and the reception of information can occur in two situations, which are explained next:

• The local control unit Fi transmits the current state xi(tki) if at time tki its transmit-condition is fulfilled. On the other hand, Fi sends the state xi(trj(i)) at time trj(i) when the controller Fj requests this information from Fi.

• If at time tri(j) the request-condition of the local control unit Fi is satisfied, it requests (and consequently receives) the information xj(tri(j)) from the controller Fj. The controller Fi also receives information if the controller Fj decides at time tkj to transmit the state xj(tkj) to Fi.

In the following, ri(j) denotes the counter for the events at which the controller Fi requests information from the controller Fj. From the viewpoint of the controller Fi, the transmission and the reception of information are either induced by its own triggering conditions (at the times tki or tri(j)) or enforced by the neighboring controllers (at the times trj(i) or tkj).

Networked controller. The networked controller consists of the local components Fi and the communication network as shown in Figure 5.1. Each control unit Fi includes a control input generator Ci that generates the control input ui(t) and an event generator Ei that determines the time instants tki and tri(j) at which the current local state xi(tki) is communicated to the controllers Fj of the successor subsystems Σj (j ∈ Si) or at which the current state xj(tri(j)) is requested from the controllers Fj of the predecessor subsystems Σj (j ∈ Pi), respectively. Figure 5.7 depicts the structure of the local control unit Fi. R denotes a message that initiates the request of information, and t⋆ represents different event times that are specified later. The components Ci and Ei are explained in the following.

The control input generator Ci determines the control input ui(t) for subsystem Σi using the model

Ci :  Σ̃ai :  (d/dt) x̃ai(t) = Aai x̃ai(t) + Bai zi(t),
             x̃ai(t⋆+) = ∑_{p∈Pi\{j}} Tip Cpi x̃ai(t⋆) + Tij xj(t⋆),  j ∈ Pi,
      ui(t) = −Kdi xi(t) − Kai x̃ai(t). (5.61)
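A scalar toy example of the reinitialization in (5.61), with made-up dynamics, disturbance, and communication times: the prediction integrates the disturbance-free model and is reset to the transmitted state at each communication instant, so the prediction error vanishes there and then grows again under the unknown disturbance.

```python
a, d = -1.0, 0.3          # true dynamics x_j' = a*x_j + d; d is unknown to the predictor
dt, T = 0.001, 3.0
events = {1.0, 2.0}       # assumed communication times t*

x, x_pred = 1.0, 1.0      # true state and its prediction
errors_after_reset = []
t = 0.0
while t < T:
    x += dt * (a * x + d)
    x_pred += dt * (a * x_pred)        # prediction runs the disturbance-free model
    t = round(t + dt, 6)
    if t in events:
        x_pred = x                     # reinitialization at the event time
        errors_after_reset.append(abs(x - x_pred))
print(errors_after_reset)              # [0.0, 0.0]
```

After the last reset the prediction error grows again, which is exactly what the event conditions introduced next are designed to keep bounded.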
FIGURE 5.7
Component Fi of the networked controller.

FIGURE 5.8
Control input generator Ci of the control unit Fi.

The structure of this generator is illustrated in Figure 5.8. The state x̃ai(t) is a prediction of the actual approximate model state xai(t) that is generated by means of the model Σ̃ai. Note that Σ̃ai differs from the approximate model Σai in that the disturbance dai(t) and the residual output fi(t) are omitted, since these signals are unknown to the local controller Fi. The generation of the control input ui(t) is equivalent with Equation 5.55, since

Kai x̃ai(t) = ∑j∈Pi Kai Tij x̃ij(t) = ∑j∈Pi Kij x̃ij(t).

The state x̃ai is reinitialized at the event times t⋆, where the ⋆-symbol is a placeholder either for kj or for ri(j). That is, x̃ai is reinitialized whenever Fi receives current state information xj(t⋆) from Fj (j ∈ Pi), which occurs in two situations:

• At time t⋆ = tkj, the event generator Ej decides to transmit the current state xj(tkj) to all controllers Fi of the successor subsystems of Σj, that is, to all Fi with i ∈ Sj.

• At time t⋆ = tri(j), Ei requests current state information from the controller Fj of the predecessor subsystem Σj, that is, from Fj with j ∈ Pi.

The two event times tkj and tri(j) are determined based on conditions that are defined in the next paragraph.

The task of the event generator Ei is to bound the deviation

|xiΔj(t)| = |xj(t) − Cji x̃ai(t)| = |xj(t) − x̃ij(t)|, (5.62)

between the actual subsystem state xj(t) and the prediction Cji x̃ai(t) = x̃ij(t), applied for the control input generation (5.61). However, the difference (5.62) cannot be monitored by any of the event generators Ei or Ej, which exclusively have access to the prediction x̃ai(t) or to the subsystem state xj(t), respectively. This consideration gives rise to the idea that the boundedness of (5.62) can be accomplished by a cooperation of the event generators Ei and Ej, which is explained in the following.

Consider that both Ei and Ej include the model

Σcj :  ẋcj(t) = Āj xcj(t),  xcj(t⋆+) = xj(t⋆)  (if t⋆ = tkj or t⋆ = tri(j)), (5.63)

with the matrix Āj = (Aj − Bj Kdj). The state xcj(t) is used in both event generators Ei and Ej as a comparison signal as follows:

• Event generator Ej triggers an event at time tkj whenever the transmit-condition

|xj(t) − xcj(t)| ≥ αj ēj, (5.64)

is satisfied, where ēj ∈ IR+^nj denotes the event threshold vector and αj ∈ (0, 1) is a weighting factor.

• The event generator Ei triggers an event at time tri(j), which induces the request of the state xj(tri(j)) from Fj, whenever the request-condition

|Cji x̃ai(t) − xcj(t)| ≥ (1 − αj) ēj,  j ∈ Pi, (5.65)

is fulfilled, with the same event threshold vector ēj ∈ IR+^nj as in (5.64). That means, at time tri(j), the event generator Ei sends a request R to the controller Fj in order to call this controller to transmit the current state xj(tri(j)).
The event generator Ej of the controller Fj responds to the triggering of the events caused by the transmit-condition (5.64) or by the request-condition (5.65) with the same action. It sends the current state xj(t⋆) (with t⋆ = tkj or t⋆ = tri(j) if the transmit-condition (5.64) or the request-condition (5.65), respectively, has led to the event triggering) to all Fp with p ∈ Sj (that is, to all controllers of the successor subsystems Σp of subsystem Σj, which includes Fi). The received information is then used to reinitialize the state of the models (5.61) in all Cp with p ∈ Sj, as well as the state of the models (5.63) in Ej and in all Ep with p ∈ Sj. Due to the event triggering and the state resets, the relations

|xi(t) − xci(t)| ≤ αi ēi, (5.66)
|Cji x̃ai(t) − xcj(t)| ≤ (1 − αj) ēj, (5.67)

hold for all i ∈ N and all j ∈ Pi. By virtue of (5.66) and (5.67), the state difference (5.62) is bounded by

|xiΔj(t)| = |xj(t) − xcj(t) + xcj(t) − Cji x̃ai(t)|
          ≤ |xj(t) − xcj(t)| + |Cji x̃ai(t) − xcj(t)|
          ≤ αj ēj + (1 − αj) ēj
          = ēj. (5.68)

In summary, the event generator Ei is represented by

Ei :  Σcj as described in (5.63), for all j ∈ Pi ∪ {i},
      tki+1 := inf { t > t̂Txi : |xi(t) − xci(t)| ≥ αi ēi },
      tri(j)+1 := inf { t > t̂Rxi(j) : |Cji x̃ai(t) − xcj(t)| ≥ (1 − αj) ēj },  ∀ j ∈ Pi. (5.69)

In (5.69) the time

t̂Txi := max { tki , max_{j∈Si} trj(i) },

denotes the last time at which Ei has transmitted the state xi to the controllers Fj of the successor subsystems Σj, caused by either the violation of the transmit-condition or the request of information from one of the controllers Fj with j ∈ Si. Accordingly, the time

t̂Rxi(j) := max { tri(j) , tkj },

denotes the last time at which Ei has received information about the state xj either by request or due to the triggering of the transmit-condition in Ej.

Figure 5.9 illustrates the structure of the event generator Ei. Note that the logic that determines the time instants at which the current state xi(t⋆) (with t⋆ = tki or t⋆ = trj(i)) is sent to the successor subsystems is decoupled from the logic that decides when to request state information from the predecessor subsystems. The latter is represented in the gray block for the example of requesting information from some subsystem Σj (j ∈ Pi). Note that this logic, highlighted in the gray block, is implemented in Ei for each j ∈ Pi.

The communication policy leads to a topology for the information exchange that can be characterized as follows: The event generator Ei of the controller Fi transmits the state information xi only to the controllers Fj of the successor subsystems Σj (j ∈ Si) of Σi, whereas Fi receives the state information xj only from the controllers Fj of the predecessor subsystems Σj (j ∈ Pi) of Σi.

FIGURE 5.9
Event generator Ei of the control unit Fi.

5.4.4 Analysis of the Distributed Event-Based Control

Approximation of the reference system behavior. This section shows that the deviation between the behavior of the reference systems (5.56) and (5.57) and the control systems (5.3) and (5.4) with the distributed event-based state feedback (5.61) and (5.69) is bounded, and that the maximum deviation can be adjusted by means of an appropriate choice of the event threshold vectors ēj for all j ∈ N. This result is used to determine the asymptotically stable set Ai for each subsystem Σi.

The following theorem gives an upper bound for the deviation of the behavior of the event-based control systems (5.3), (5.4), (5.61), and (5.69) and the reference systems (5.56) and (5.57).
Theorem 5.8

Consider the event-based control systems (5.3), (5.4), (5.61), and (5.69) and the reference systems (5.56) and (5.57) with the plant states x(t) and xr(t), respectively. The deviation e(t) = x(t) − xr(t) is bounded from above by

|e(t)| ≤ emax := ∫0^∞ |e^(Āt) B| dt · |K̄| ( ē1 ; ... ; ēN ), (5.70)

for all t ≥ 0, with the matrix K̄ given in (5.60).

PROOF The subsystem Σi represented by (5.3) with the control input that is generated by the control input generator (5.61) is described by the state-space model:

ẋi(t) = Āi xi(t) − Bi ∑j∈Pi Kij x̃ij(t) + Ei di(t) + Esi si(t),  xi(0) = x0i,
zi(t) = Czi xi(t),

with the matrix Āi = (Ai − Bi Kdi). The state xi(t) depends on the difference states xiΔj(t) = xj(t) − x̃ij(t) for all j ∈ Pi, which can be seen by reformulating the previous model as

ẋi(t) = Āi xi(t) − Bi ∑j∈Pi Kij xj(t) + Bi ∑j∈Pi Kij ( xj(t) − x̃ij(t) ) + Ei di(t) + Esi si(t), (5.71)
xi(0) = x0i, (5.72)
zi(t) = Czi xi(t). (5.73)

The interconnection of the models (5.71) according to the relation (5.4) yields the overall control system model:

ẋ(t) = Ā x(t) + E d(t) + B ( ∑j∈P1 K1j x1Δj(t) ; ... ; ∑j∈PN KNj xNΔj(t) ),  x(0) = x0, (5.74)

where Ā = A − B (Kd + K̄) with the state-feedback gains Kd and K̄ as given in (5.59) and (5.60), respectively, and the matrices A, B, E according to Equation 5.6.

Now consider the deviation e(t) = x(t) − xr(t). With Equations 5.58 and 5.74, the deviation e(t) is described by the model

ė(t) = Ā e(t) + B ( ∑j∈P1 K1j x1Δj(t) ; ... ; ∑j∈PN KNj xNΔj(t) ),  e(0) = 0,

which yields

e(t) = ∫0^t e^(Ā(t−τ)) B ( ∑j∈P1 K1j x1Δj(τ) ; ... ; ∑j∈PN KNj xNΔj(τ) ) dτ.

The last equation is used to derive a bound on the maximum deviation |e(t)| by means of the following estimation:

|e(t)| ≤ ∫0^t |e^(Ā(t−τ)) B| ( ∑j∈P1 |K1j| |x1Δj(τ)| ; ... ; ∑j∈PN |KNj| |xNΔj(τ)| ) dτ
       ≤ ∫0^∞ |e^(Āt) B| dt ( ∑j∈P1 |K1j| ēj ; ... ; ∑j∈PN |KNj| ēj ) =: emax, (5.75)

where the relation (5.68) (for each j ∈ N) has been applied. Consequently, Equation 5.75 can be reformulated as (5.70), which completes the proof.

Theorem 5.8 shows that the systems (5.3) and (5.4) with the distributed event-based state feedback (5.61) and (5.69) can be made to approximate the behavior of the reference systems (5.56) and (5.57) with adjustable accuracy, by appropriately setting the event thresholds ēi for all i ∈ N. Interestingly, the weighting factors αi (i ∈ N) that balance the event threshold vector ēi in the transmit- and request-conditions (5.64) and (5.65) do not affect the maximum deviation (5.70) in any way.

Note that the bound (5.70) holds element-wise and, thus, implicitly expresses also a bound on the deviation

ei(t) = xi(t) − xri(t),

between the subsystem states of the event-based control system and the corresponding reference subsystem. From (5.70), the bound

|ei(t)| ≤ emax,i = ( Oni×n1  ...  Oni×ni−1  Ini  Oni×ni+1  ...  Oni×nN ) emax, (5.76)

follows.
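For a scalar overall system the bound (5.70) reduces to emax = ∫0^∞ |e^(Āt) B| dt · |K̄| ē, which can be checked against the closed-form value (1/|Ā|) · |K̄| ē; all constants below are made up.

```python
import math

Abar, B = -3.0, 1.0
Kbar, ebar = 0.4, 0.2

# Riemann-sum approximation of int_0^inf |e^{Abar t} B| dt
dt, T = 1e-4, 10.0
integral = sum(abs(math.exp(Abar * k * dt) * B) * dt for k in range(int(T / dt)))
e_max = integral * abs(Kbar) * ebar

print(abs(e_max - abs(Kbar) * ebar / 3.0) < 1e-4)   # True: matches (1/|Abar|)*|Kbar|*ebar
```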

The result (5.76) can be combined with the results where the time T̄TCi is given by
(5.50) and (5.51) on the asymptotically stable set Ari  
for the reference systems (5.56) in order to infer the t   " #
 Āi τ 
asymptotically stable set Ai for the subsystems Σi with T̄TCi := arg min e  dτ ∑  Bi Kij  b j + ē j
t 0 j ∈P i
distributed event-based state feedback.  ,
 
 
+ ∑ Esi Lij Czj b j + | Ei | di = αi ēi , (5.80)
¯
Corollary 5.1
j ∈P i

Consider the event-based control systems (5.3), (5.4), with the bounds b j given in (5.78).
(5.61), and (5.69). Each subsystem Σi is practically stable
with respect to the set PROOF Consider the transmit-condition (5.64) and
observe that the difference
$ 
ni 
%
Ai := xi ∈ IR |xi | ≤ bi , (5.77)
δci (t) := xi (t) − xci (t),

with the bound evolves for t ≥ t̂Txi according to the state-space model

d
bi = bri + emax i , (5.78) δ (t) = Āi δci (t) − ∑ Bi Kij x̃ ij (t) + Ei di (t) + Esi si (t),
dt ci j ∈P i

where bri , given in (5.51), is the bound for the refer- δci (t̂Txi ) = 0, (5.81)
ence subsystem (5.56) and emax i , given in (5.76), denotes
the maximum deviation between the behavior of subsys- With t̂TXi = tk i , Equation 5.81 yields
tem Σi subject to the distributed event-based controller  t
(5.61) and (5.69) and the reference subsystem (5.56). δci (t) = e Āi (t − τ)
tk
i 
Corollary 5.1 can be interpreted as follows: Given that
the vector emax can be freely manipulated by the choice × − ∑ Bi Kij x̃ j (τ) + Ei di (τ) + Esi si (τ) dτ.
i
j ∈P i
of the event threshold vectors ēi for all i ∈ N , the dif-
ference between the sets Ari and Ai can be arbitrarily With Equation 5.4 and the relation z (t) = C x (t), the
j zj j
adjusted, as well. previous equation can be restated as
Minimum interevent times. This section investigates  t 
the minimum interevent times for events that are trig- δci (t) = e Ā i ( t − τ ) − ∑ Bi Kij x̃ij (τ) + Ei di (τ)
tk j ∈P i
gered by the transmit-condition (5.64) and the request- i
condition (5.65). The following analysis is based on 
the assumption that the state xi (t) of subsystem Σi is + ∑ si ij zj j
E L C x ( τ ) dτ.
j ∈P i
contained within the set Ai for all i ∈ N .
The following theorem states that the minimum
In order to analyze the minimum time that elapses
interevent time
in between the two consecutive events k i and k i + 1,
" # assume that no information is requested from Fi after
Tmin,TCi := min tk i +1 − tk i , ∀ k i ∈ IN0 , (5.79) tk until the next event at time tk +1 occurs. Recall
ki i i
that an event is triggered according to the transmit-
condition (5.64) whenever |δci (t)| = | xi (t) − xci (t)| =
for two consecutive events that are triggered according αi ēi holds. Hence, the minimum interevent time Tmin,TCi
to the transmit-condition (5.64) by the controller Fi is is the minimum time t for which
bounded from below by some time T̄TCi .  t 
 Āi (t − τ) −
 e ∑ Bi Kij x̃ij (τ) + Ei di (τ)
Theorem 5.9  0
j ∈P i
 

The minimum interevent time Tmin,TCi defined in (5.79) + ∑ Esi Lij Czj x j (τ) dτ = αi ē, (5.82)
is bounded from below by j ∈P i

is satisfied. The following analysis derives a bound on


Tmin,TCi ≥ T̄TCi , ∀ t ≥ 0, that minimum interevent time. To this end, it is assumed
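For a scalar subsystem the time T̄TCi can be computed directly: lump the bracketed bound terms of (5.80) into one constant c and find the smallest t at which ∫0^t e^(Āi τ) dτ · c reaches αi ēi. The sketch below does this by bisection, with all constants made up.

```python
import math

Abar, c = -2.0, 1.0          # stable scalar dynamics; c lumps the bound terms in (5.80)
alpha_ebar = 0.1             # threshold alpha_i * ebar_i

def lhs(t):                  # int_0^t e^{Abar*tau} dtau * c, in closed form
    return (1.0 - math.exp(Abar * t)) / (-Abar) * c

lo, hi = 0.0, 10.0           # lhs is monotone, so bisect for lhs(t) = alpha_ebar
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if lhs(mid) < alpha_ebar:
        lo = mid
    else:
        hi = mid

T_TC = 0.5 * (lo + hi)
print(abs(T_TC - math.log(1.25) / 2.0) < 1e-9)   # True: analytic solution of lhs(t) = 0.1
```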
PROOF Consider the transmit-condition (5.64) and observe that the difference

δci(t) := xi(t) − xci(t),

evolves for t ≥ t̂Txi according to the state-space model

(d/dt) δci(t) = Āi δci(t) − ∑j∈Pi Bi Kij x̃ij(t) + Ei di(t) + Esi si(t),  δci(t̂Txi) = 0. (5.81)

With t̂Txi = tki, Equation 5.81 yields

δci(t) = ∫_{tki}^{t} e^(Āi(t−τ)) ( −∑j∈Pi Bi Kij x̃ij(τ) + Ei di(τ) + Esi si(τ) ) dτ.

With Equation 5.4 and the relation zj(t) = Czj xj(t), the previous equation can be restated as

δci(t) = ∫_{tki}^{t} e^(Āi(t−τ)) ( −∑j∈Pi Bi Kij x̃ij(τ) + Ei di(τ) + ∑j∈Pi Esi Lij Czj xj(τ) ) dτ.

In order to analyze the minimum time that elapses in between the two consecutive events ki and ki + 1, assume that no information is requested from Fi after tki until the next event at time tki+1 occurs. Recall that an event is triggered according to the transmit-condition (5.64) whenever |δci(t)| = |xi(t) − xci(t)| = αi ēi holds. Hence, the minimum interevent time Tmin,TCi is the minimum time t for which

| ∫0^t e^(Āi(t−τ)) ( −∑j∈Pi Bi Kij x̃ij(τ) + Ei di(τ) + ∑j∈Pi Esi Lij Czj xj(τ) ) dτ | = αi ēi, (5.82)

is satisfied. The following analysis derives a bound on that minimum interevent time. To this end, it is assumed that the state xi(t) is bounded to the set Ai given in (5.77) for all i ∈ N, which implies that

|xi(t)| ≤ bi, (5.83)

holds, where the bound bi is given in (5.78). By virtue of the event-triggering mechanism and the state reset, the prediction x̃ij(t) always remains in a bounded surrounding of the subsystem state xj(t). Hence, from Equation 5.68 the relation

|x̃ij(t)| ≤ |xj(t)| + ēj ≤ bj + ēj, (5.84)

follows. Taking account of the bound (5.5) on the disturbance di(t), the left-hand side of Equation 5.82 is bounded by

| ∫0^t e^(Āi(t−τ)) ( −∑j∈Pi Bi Kij x̃ij(τ) + Ei di(τ) + ∑j∈Pi Esi Lij Czj xj(τ) ) dτ |
  ≤ ∫0^t |e^(Āi τ)| dτ ( ∑j∈Pi |Bi Kij| (bj + ēj) + ∑j∈Pi |Esi Lij Czj| bj + |Ei| d̄i ).

Consequently, for the minimum interevent time Tmin,TCi the relation

T̄TCi ≤ Tmin,TCi,

holds, where

T̄TCi := arg min_t { ∫0^t |e^(Āi τ)| dτ ( ∑j∈Pi |Bi Kij| (bj + ēj) + ∑j∈Pi |Esi Lij Czj| bj + |Ei| d̄i ) = αi ēi },

which completes the proof.

The next theorem presents a bound on the minimum interevent time

Timin,RCj := min_{ri(j)} ( tri(j)+1 − tri(j) ),

for two consecutive information requests from controller Fj, triggered according to the request-condition (5.65) by the controller Fi.

Theorem 5.10

The minimum time Timin,RCj that elapses in between two consecutive events at which the controller Fi requests information from controller Fj is bounded from below by

Timin,RCj ≥ T̄iRCj,  ∀ t ≥ 0.

The time T̄iRCj is given by

T̄iRCj := arg min_t { ∫0^t |e^(Āj τ)| dτ ( ∑_{p∈Pi\{j}} |Cji Aai Tip| (bp + ēp) + |Cji Bai Czi| bi ) = (1 − αj) ēj }, (5.85)

with the bounds bi as in (5.78).

PROOF In order to investigate the difference

δicj(t) := Cji x̃ai(t) − xcj(t), (5.86)

that is monitored in the request-condition (5.65), consider the state-space model

(d/dt) Cji x̃ai(t) = Cji Aai x̃ai(t) + Cji Bai Czi xi(t),  Cji x̃ai(t̂Rxi(j)) = xj(t̂Rxi(j)),

which follows from (5.61). With Equation 5.27, this model can be restated as

(d/dt) Cji x̃ai(t) = Āj x̃ij(t) + ∑_{p∈Pi\{j}} Cji Aai Tip x̃ip(t) + Cji Bai Czi xi(t),
Cji x̃ai(t̂Rxi(j)) = xj(t̂Rxi(j)),

where the assumption is made that the matrices Tij and Cji are chosen such that

Āj = Cji Aai Tij,

holds. This model together with Equation 5.63 yields the state-space model

(d/dt) δicj(t) = Āj δicj(t) + ∑_{p∈Pi\{j}} Cji Aai Tip x̃ip(t) + Cji Bai Czi xi(t),  δicj(t̂Rxi(j)) = 0, (5.87)

that describes the behavior of the difference (5.86) for t ≥ t̂Rxi(j). With t̂Rxi(j) = tri(j), Equation 5.87 results in

δicj(t) = ∫_{tri(j)}^{t} e^(Āj(t−τ)) ( ∑_{p∈Pi\{j}} Cji Aai Tip x̃ip(τ) + Cji Bai Czi xi(τ) ) dτ.
Distributed Event-Based State-Feedback Control 99
In the following, it is assumed that the controller Fj does not trigger an event according to its transmit-condition (5.64) in between the times $t_{r_i(j)}$ and $t_{r_i(j)+1}$ in order to identify a minimum time that elapses in between two consecutive events that trigger the request of information from Fj by the controller Fi. Note that this information request is triggered whenever the equality $\bigl|\delta^i_{cj}(t)\bigr| = \bigl|C_{ji}\,\tilde{x}_{ai}(t) - x_{cj}(t)\bigr| = (1 - \alpha_j)\,\bar{e}_j$ is satisfied in at least one element. The minimum interevent time $T^i_{\min,\mathrm{RC}j}$ is given by the minimum time $t$ for which

\[
\left|\int_0^t \mathrm{e}^{\bar{A}_j(t-\tau)}\Bigl(\sum_{p\in\mathcal{P}_i\setminus\{j\}} C_{ji} A_{ai} T_{ip}\,\tilde{x}_p^i(\tau) + C_{ji} B_{ai} C_{zi}\, x_i(\tau)\Bigr)\mathrm{d}\tau\right| = (1 - \alpha_j)\,\bar{e}_j,
\]

holds. Following the same arguments as in the previous proof, a bound $\bar{T}^i_{\mathrm{RC}j} \leq T^i_{\min,\mathrm{RC}j}$ on that minimum interevent time is given by

\[
\bar{T}^i_{\mathrm{RC}j} := \arg\min_t \left\{\int_0^t \bigl|\mathrm{e}^{\bar{A}_j \tau}\bigr|\,\mathrm{d}\tau \left(\sum_{p\in\mathcal{P}_i\setminus\{j\}} \bigl|C_{ji} A_{ai} T_{ip}\bigr|\,(b_p + \bar{e}_p) + \bigl|C_{ji} B_{ai} C_{zi}\bigr|\, b_i\right) = (1 - \alpha_j)\,\bar{e}_j\right\},
\]

which concludes the proof.

5.5 Example: Distributed Event-Based Control of the Thermofluid Process

This section demonstrates, using the example of a thermofluid process, how the distributed state feedback is designed according to Algorithm 5.1 and how the obtained control law is implemented in an event-based fashion. Moreover, the results of a simulation and an experiment illustrate the behavior of the distributed event-based state-feedback approach.

5.5.1 Demonstration Process

The event-based control approaches presented in this chapter are tested, and the analytical results are quantitatively evaluated, through simulations and experiments on a demonstration process realized at the pilot plant at the Institute of Automation and Computer Control at Ruhr-University Bochum, Germany (Figure 5.10). The plant includes four cylindrical storage tanks, three batch reactors, and a buffer tank, which are connected over a complex pipe system, and it is constructed with standard industrial components including more than 70 sensors and 80 actuators.

Process description. The experimental setup of the process is illustrated in Figure 5.11. The main components are the two reactors TB and TS used to realize continuous flow processes. Reactor TB is connected to the storage tank T1, from where the inflow can be controlled by means of the valve angle uT1(t). Via the pump

FIGURE 5.10
Pilot plant, where the reactors that are used for the considered process are highlighted.
PB a part of the outflow is pumped out into the buffer tank TW (and is not used further in the process), while the remaining outflow is conducted to the reactor TS. The temperature ϑB(t) of the water in reactor TB is influenced by the cooling unit (CU) using the input uCU(t) or by the heating rods that are driven by the signal dH(t). The inflow from the storage tank T3 to the reactor TS can be adjusted by means of the opening angle uT3(t). Reactor TS is additionally fed by the fresh water supply (FW), from where the inflow is controlled by means of the valve angle dF(t). Equivalently to reactor TB, the outflow of reactor TS is split, and one part is conveyed via the pump PS to TW, and the other part is pumped to the reactor TB. The temperature ϑS(t) of the liquid in reactor TS can be increased by the heating rods that are controlled by the signal uH(t). The signals dH(t) and dF(t) are disturbance inputs that are used to define test scenarios with reproducible disturbance characteristics, whereas the signals uT1(t), uCU(t), uT3(t), and uH(t) are control inputs. Both reactor TB and reactor TS are equipped with sensors that continuously measure the level lB(t) or lS(t) and the temperature ϑB(t) or ϑS(t), respectively.

The two reactors are coupled by the flow from reactor TB to reactor TS, and vice versa, where the coupling strength can be adjusted by means of the valve angles uBS and uSB. The ratio of the volume that is used for the coupling of the systems and the outflow to TW is set by the valve angles uBW and uSW.

FIGURE 5.11
Experimental setup of the continuous flow process.

Model. The linearized overall system is subdivided into four scalar subsystems with the states

\[
x_1(t) = l_B(t), \quad x_2(t) = \vartheta_B(t), \quad x_3(t) = l_S(t), \quad x_4(t) = \vartheta_S(t).
\]

The overall system is composed of the subsystems (5.3) with the following parameters:

Parameters of Σ1:
\[
A_1 = -10^{-3}\cdot 5.74, \quad B_1 = 10^{-3}\cdot 2.30, \quad E_1 = 0, \quad E_{s1} = 10^{-3}\cdot 2.42, \quad C_{z1} = 1. \qquad (5.88)
\]

Parameters of Σ2:
\[
A_2 = -10^{-3}\cdot 8.58, \quad B_2 = -10^{-3}\cdot 38.9, \quad E_2 = 0.169, \quad E_{s2} = 10^{-3}\begin{bmatrix} -34.5 & 43.9 & 5.44 \end{bmatrix}, \quad C_{z2} = 1. \qquad (5.89)
\]

Parameters of Σ3:
\[
A_3 = -10^{-3}\cdot 5.00, \quad B_3 = 10^{-3}\cdot 2.59, \quad E_3 = 10^{-3}\cdot 1.16, \quad E_{s3} = 10^{-3}\cdot 2.85, \quad C_{z3} = 1. \qquad (5.90)
\]

Parameters of Σ4:
\[
A_4 = -10^{-3}\cdot 5.58, \quad B_4 = 10^{-3}\cdot 35.0, \quad E_4 = -10^{-3}\cdot 20.7, \quad E_{s4} = 10^{-3}\begin{bmatrix} -46.5 & 5.58 & 39.2 \end{bmatrix}, \quad C_{z4} = 1. \qquad (5.91)
\]

The interconnection between the subsystems is represented by Equation 5.4 with

\[
\begin{pmatrix} s_1(t) \\ s_2(t) \\ s_3(t) \\ s_4(t) \end{pmatrix}
= \underbrace{\begin{pmatrix}
0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0
\end{pmatrix}}_{=L}
\begin{pmatrix} z_1(t) \\ z_2(t) \\ z_3(t) \\ z_4(t) \end{pmatrix}, \qquad (5.92)
\]

where s1(t) and s3(t) are scalar, while s2(t) and s4(t) are three-dimensional, in accordance with the dimensions of Es2 and Es4.
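To make the model data concrete, the subsystem matrices (5.88) through (5.91) and the interconnection relation (5.92) can be assembled and checked numerically. The following Python sketch is illustrative only and is not part of the design procedure; the stacking of L assumes the coupling structure described above (s1 driven by z3, s3 by z1, s2 by z1, z3, z4, and s4 by z1, z2, z3):

```python
import numpy as np

# Subsystem dynamics (5.3): x_i' = A_i x_i + B_i u_i + E_i d_i + E_si s_i,  z_i = C_zi x_i
A = np.diag([-5.74e-3, -8.58e-3, -5.00e-3, -5.58e-3])

# Block-diagonal coupling-input matrix built from E_s1..E_s4 (4 x 8):
# s_1 and s_3 are scalar, s_2 and s_4 are three-dimensional.
Es = np.zeros((4, 8))
Es[0, 0] = 2.42e-3
Es[1, 1:4] = [-34.5e-3, 43.9e-3, 5.44e-3]
Es[2, 4] = 2.85e-3
Es[3, 5:8] = [-46.5e-3, 5.58e-3, 39.2e-3]

# Interconnection matrix L (8 x 4) of (5.92): each row selects the z_j
# that drives one component of s = (s_1, s_2, s_3, s_4).
L = np.array([[0, 0, 1, 0],   # s_1 <- z_3
              [1, 0, 0, 0],   # s_2 <- z_1, z_3, z_4
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],   # s_3 <- z_1
              [1, 0, 0, 0],   # s_4 <- z_1, z_2, z_3
              [0, 1, 0, 0],
              [0, 0, 1, 0]])
Cz = np.eye(4)                # C_zi = 1 for all subsystems

# Open-loop overall dynamics including the couplings: x' = (A + Es L Cz) x
A_coupled = A + Es @ L @ Cz
print(np.linalg.eigvals(A_coupled).real.max())  # negative: the coupled open-loop plant is stable
```

The eigenvalue check confirms that, for these linearized parameters, the physical couplings alone do not destabilize the plant; the distributed controller is needed for disturbance attenuation rather than for open-loop stabilization.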
5.5.2 Design of the Distributed Event-Based Controller

This section shows how a state-feedback law is determined first, which is afterward implemented in an event-based manner.

Distributed state-feedback design. According to step 1 in Algorithm 5.1, an approximate model Σa i together with the bounds Ḡfvi(t) and Ḡfdi(t) on the impulse-response matrices for the residual models Σfi must be determined for each subsystem Σi (i = 1, ..., 4). In this example, the approximate models are designed under the assumption that subsystem Σi is controlled by a continuous state feedback, whereas, in fact, the control inputs are determined according to a distributed state feedback. The mismatch between the approximate model Σai and the model that describes the behavior of the neighboring subsystems of Σi is aggregated in the residual model Σfi. For subsystem Σ1, the approximate model with the corresponding residual model is presented by the state-space model

\[
\Sigma_{a1}:
\begin{cases}
\dot{x}_{a1}(t) = (A_3 - B_3 K_{\mathrm{d}3})\, x_{a1}(t) + E_3 d_{a1}(t) + E_{s3} L_{31} C_{z1}\, z_1(t) + 1\cdot f_1(t) \\
s_1(t) = x_{a1}(t) \\
v_1(t) = z_1(t),
\end{cases}
\]

where $d_{a1}(t) = d_F(t)$ holds, and the residual model Σf1 is represented by

\[
\Sigma_{\mathrm{f}1}: \quad f_1(t) = -B_3 K_{a3} \cdot v_1(t),
\]

where $K_{a3} = K_{31}$ holds. The remaining approximate models and residual models are determined accordingly.

In the second step, the state-feedback gains Kei are to be designed such that the controlled extended subsystems have a desired behavior. In this example, these matrices are chosen as follows:

\[
K_{\mathrm{e}1} = \begin{bmatrix} 3.00 & 1.05 \end{bmatrix}, \quad
K_{\mathrm{e}2} = \begin{bmatrix} -0.70 & 0.89 & -1.13 & -0.14 \end{bmatrix}, \quad
K_{\mathrm{e}3} = \begin{bmatrix} 1.00 & 1.10 \end{bmatrix}, \quad
K_{\mathrm{e}4} = \begin{bmatrix} 0.60 & -1.33 & 0.16 & 1.12 \end{bmatrix}. \qquad (5.93)
\]

The structure of the distributed state-feedback law is better conveyed by restating these matrices in the form used in Equation 5.39, which yields

\[
\begin{aligned}
&K_{\mathrm{d}1} = 3.00, \quad K_{13} = 1.05, &&(5.94)\\
&K_{21} = 0.89, \quad K_{\mathrm{d}2} = -0.70, \quad K_{23} = -1.13, \quad K_{24} = -0.14, &&(5.95)\\
&K_{31} = 1.10, \quad K_{\mathrm{d}3} = 1.00, &&(5.96)\\
&K_{41} = -1.33, \quad K_{42} = 0.16, \quad K_{43} = 1.12, \quad K_{\mathrm{d}4} = 0.60. &&(5.97)
\end{aligned}
\]

Note that the structure of the overall state-feedback gain is implicitly predetermined in the proposed design method by the interconnections of the subsystems. For the considered system, this design procedure leads to the fact that the controllers F2 and F4 of subsystems Σ2 or Σ4, respectively, use state information of all remaining subsystems.

In the final step, the stability of the overall control system is analyzed by means of the condition (5.43). The analysis results are summarized as follows:

\[
\begin{aligned}
\lambda_P\left(\int_0^\infty \bar{G}_{\mathrm{fv}1}(t)\,\mathrm{d}t \int_0^\infty \bigl|H_{\mathrm{e}1}\,\mathrm{e}^{\bar{A}_{\mathrm{e}1} t} F_{\mathrm{e}1}\bigr|\,\mathrm{d}t\right) &= 1.48\cdot 10^{-4} < 1, \\
\lambda_P\left(\int_0^\infty \bar{G}_{\mathrm{fv}2}(t)\,\mathrm{d}t \int_0^\infty \bigl|H_{\mathrm{e}2}\,\mathrm{e}^{\bar{A}_{\mathrm{e}2} t} F_{\mathrm{e}2}\bigr|\,\mathrm{d}t\right) &= 0.37 < 1, \\
\lambda_P\left(\int_0^\infty \bar{G}_{\mathrm{fv}3}(t)\,\mathrm{d}t \int_0^\infty \bigl|H_{\mathrm{e}3}\,\mathrm{e}^{\bar{A}_{\mathrm{e}3} t} F_{\mathrm{e}3}\bigr|\,\mathrm{d}t\right) &= 2.51\cdot 10^{-5} < 1, \\
\lambda_P\left(\int_0^\infty \bar{G}_{\mathrm{fv}4}(t)\,\mathrm{d}t \int_0^\infty \bigl|H_{\mathrm{e}4}\,\mathrm{e}^{\bar{A}_{\mathrm{e}4} t} F_{\mathrm{e}4}\bigr|\,\mathrm{d}t\right) &= 0.37 < 1.
\end{aligned}
\]

Consequently, the distributed state-feedback controller (5.94) guarantees the stability of the overall control system. According to Theorem 5.7, each subsystem is practically stable with respect to the set Ari defined in (5.50) with the bounds

\[
b_{\mathrm{r}1} = 0.00, \quad b_{\mathrm{r}2} = 1.20, \quad b_{\mathrm{r}3} = 6.11, \quad b_{\mathrm{r}4} = 0.31. \qquad (5.98)
\]

The result $b_{\mathrm{r}1} = 0$ shows that in the distributed continuous state-feedback system, the disturbance and couplings do not affect state x1(t); thus, this state eventually converges to the origin.

Event-based implementation of the distributed state feedback. For the implementation of the derived state feedback in an event-based manner, the event thresholds ēi and the weighting factors αi for i = 1, ..., 4 are chosen as follows:

\[
\bar{e}_1 = \bar{e}_3 = 0.035, \quad \alpha_1 = \alpha_3 = 0.85, \qquad
\bar{e}_2 = \bar{e}_4 = 3.2, \quad \alpha_2 = \alpha_4 = 0.94. \qquad (5.99)
\]

According to Theorem 5.8, the maximum deviation between the distributed event-based state-feedback system and the continuous reference system results in

\[
e_{\max} = \begin{bmatrix} 0.67 & 0.51 & 1.31 & 0.79 \end{bmatrix}^\top. \qquad (5.100)
\]

By virtue of Corollary 5.1, this result together with the bounds (5.98) implies the practical stability of the
subsystems Σi with respect to the sets Ai given in (5.77) with

\[
b_1 = 0.67, \quad b_2 = 1.71, \quad b_3 = 7.43, \quad b_4 = 1.10. \qquad (5.101)
\]

The following table summarizes the results on the bounds $\bar{T}_{\mathrm{TC}i}$ and $\bar{T}^i_{\mathrm{RC}j}$ on the minimum interevent times for events triggered due to the transmit-condition (5.64) or the request-condition (5.65). These times are obtained according to Theorems 5.9 and 5.10. The '—' symbolizes that Σj is not a predecessor of Σi; hence, Fi never requests information from Fj.

TABLE 5.1
Minimum Interevent Times (MIET)

        MIET T̄TCi in s       MIET T̄ⁱRCj in s for Requests of Information from Fj
  Fi    for Transmissions     j = 1     j = 2     j = 3     j = 4
  F1    0.83                  —         —         2.78      —
  F2    4.21                  0.29      —         2.64      0.58
  F3    6.97                  0.29      —         —         —
  F4    4.62                  0.29      5.97      0.65      —

The presented analysis results are evaluated by means of a simulation and an experiment in the next section.

5.5.3 Experimental Results

The following presents the results of an experiment, demonstrating the disturbance behavior of the distributed event-based state-feedback approach.

The experiment investigates the scenario where the overall system is perturbed in the time intervals t ∈ [200, 600] s by the disturbance dH(t) and in t ∈ [1000, 1400] s by the disturbance dF(t). The behavior of the overall plant with the distributed event-based controller subject to these disturbances is illustrated in Figure 5.12. It shows the disturbance behavior of each subsystem, where from top to bottom the following signals are depicted: the disturbance dH(t) or dF(t), the subsystem state xi(t), the control input ui(t), and the time instants where information is received (Rxi) and transmitted (Txi), indicated by stems. Regarding the reception of information, the amplitude of the stems refers to the subsystem that has transmitted the information.
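For the scalar subsystems considered here, the arg min expressions of Theorems 5.9 and 5.10 reduce to one-dimensional root-finding problems, since the integral of $|\mathrm{e}^{\bar{A}_i\tau}|$ has a closed form. As a hedged illustration, the following sketch recomputes the bound $\bar{T}_{\mathrm{TC}1}$ for subsystem Σ1 (predecessor set P1 = {3}, with L13 = 1 from (5.92)) from the values in (5.88), (5.94), (5.99), and (5.101); the bisection is an implementation choice for this sketch, not part of the original derivation:

```python
import math

# Parameters of Sigma_1 from (5.88), gains from (5.94), thresholds
# from (5.99); b_3 is the bound (5.101) on |x_3(t)|.
A1, B1, Es1, E1 = -5.74e-3, 2.30e-3, 2.42e-3, 0.0
Kd1, K13 = 3.00, 1.05
alpha1, ebar1 = 0.85, 0.035
ebar3, b3 = 0.035, 7.43
L13, Cz3 = 1.0, 1.0
d1bar = 0.0                  # E_1 = 0, so the disturbance term vanishes

Abar1 = A1 - B1 * Kd1        # scalar closed-loop dynamics of Sigma_1

# Constant factor in Theorem 5.9 for P_1 = {3}
c = abs(B1 * K13) * (b3 + ebar3) + abs(Es1 * L13 * Cz3) * b3 + abs(E1) * d1bar

def integral(t):
    # closed form of int_0^t |exp(Abar1 * tau)| dtau for scalar Abar1 < 0
    return (1.0 - math.exp(Abar1 * t)) / (-Abar1)

# Smallest t with integral(t) * c = alpha1 * ebar1, found by bisection
lo, hi = 0.0, 1.0e3
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if integral(mid) * c < alpha1 * ebar1:
        lo = mid
    else:
        hi = mid

print(round(hi, 2))  # 0.83 s, matching the entry for F1 in Table 5.1
```

That the sketch reproduces the tabulated value 0.83 s indicates how conservative the guaranteed bound is compared with the interevent times actually observed in the experiment below.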
FIGURE 5.12
Experimental results for the disturbance behavior of the distributed event-based state feedback. (Four panels, one per subsystem Σ1, ..., Σ4; each panel shows, over t ∈ [0, 2000] s, the disturbance dH or dF, the state xi in cm or K, and the stem plots Rxi and Txi of received and transmitted information.)
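The qualitative triggering pattern of the experiment, with events only while a disturbance acts and during the transient after it vanishes, can be reproduced with a strongly simplified scalar simulation. In the sketch below, the parameter magnitudes, the disturbance pulse, and the simplification that the controller-side signal x_c is held constant between events (instead of being generated by the model (5.63)) are all illustrative assumptions, not the process model of Section 5.5.1:

```python
# Scalar stand-in for one subsystem: x' = Abar*x + E*d(t); the signal
# x_c used by the controller is held at the last transmitted value.
Abar, E = -12.64e-3, 1.16e-3     # illustrative closed-loop/disturbance gains
alpha, ebar = 0.85, 0.035        # threshold parameters as in (5.99)
dt, T = 0.01, 2000.0

x, xc = 0.0, 0.0
events = []
for k in range(int(T / dt)):
    t = k * dt
    d = 1.0 if 200.0 <= t <= 600.0 else 0.0   # hypothetical disturbance pulse
    x += dt * (Abar * x + E * d)              # Euler step of the subsystem
    if abs(x - xc) >= alpha * ebar:           # transmit-condition (5.64)
        xc = x                                # state reset at the event time
        events.append(t)

print(len(events))  # events cluster in [200, 600] s and during the decay
```

Even this crude model shows the characteristic behavior discussed in the text: no events occur before the disturbance acts, a few events are triggered while it is active, and a final burst accompanies the decay after the disturbance vanishes.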
Distributed Event-Based State-Feedback Control 103
FIGURE 5.13
Trajectory of the overall system state in state space in the experiment. (Two panels: x2 in K over x1 in cm for reactor TB, and x4 in K over x3 in cm for reactor TS; the sets A are marked.)

For both Rxi and Txi, a stem with a circle denotes that the request-condition (5.65) or the transmit-condition (5.64) of the controller Fi is satisfied and, thus, leads to the triggering of the respective event. For the receptions (Rxi), a stem with a diamond indicates that another control unit has transmitted information, because its local transmit-condition was satisfied or its state was requested by a third controller. For the transmissions (Txi), a stem with a diamond shows that the controller Fi was requested to transmit the local state by another controller.

The experiment shows how the communication is adapted to the behavior of the system. No event is triggered when the overall system is undisturbed, except for the times immediately after the disturbances vanish. In this experiment, the transmit-condition (5.64) is only satisfied in the controller F3, whereas the request of information is only induced by the controllers F2 and F3. In this investigation, the minimum interevent times for events that are triggered according to the transmit-condition (5.64) or the request-condition (5.65) are

\[
T_{\min,\mathrm{TC}3} \approx 90\ \mathrm{s} > 6.97\ \mathrm{s} = \bar{T}_{\mathrm{TC}3}, \qquad
T^2_{\min,\mathrm{RC}4} \approx 0.7\ \mathrm{s} > 0.58\ \mathrm{s} = \bar{T}^2_{\mathrm{RC}4}, \qquad
T^3_{\min,\mathrm{RC}1} \approx 49\ \mathrm{s} > 0.29\ \mathrm{s} = \bar{T}^3_{\mathrm{RC}1}.
\]

For these interevent times, the bounds that are listed in Table 5.1 hold true. In total, 43 events are triggered, which cause a transmission of information within 2000 s. Hence, from the communication aspect, the distributed event-based controller outperforms a conventional discrete-time controller (with appropriately chosen sampling time h = 10 s). Moreover, the experimental results also demonstrate that the proposed control approach is robust with respect to model uncertainties.

Figure 5.13 evaluates to what extent the analysis results, presented in the previous section, are valid for trajectories that are measured in the experiment. It shows that the bounds b1 = 0.67 and b2 = 1.71 do not hold, whereas the bounds b3 = 7.43 and b4 = 1.10 are not exceeded by the states x3(t) or x4(t), respectively. The reason why x2(t) grows much larger than the bound b2 is again the saturation of the control input u2(t). The exceeding of the state x1(t) above the bound b1 can be ascribed to unmodeled nonlinear dynamics of various components of the plant.

In summary, the experiment has shown that events are only triggered by the distributed event-based state-feedback controller whenever the plant is substantially affected by an exogenous disturbance, and no feedback communication is required in the unperturbed case. This observation implies that the effect of the couplings between the subsystems has a minor impact on the triggering of events, since these dynamics are taken into account by using approximate models in the design of the local control units. In consequence, the proposed distributed event-based state-feedback method is suitable for the application to interconnected systems where neighboring subsystems are strongly coupled.

Bibliography

[1] D. S. Bernstein, Matrix Mathematics: Theory, Facts and Formulas. Princeton University Press, Princeton, NJ, 2009.

[2] C. De Persis, R. Sailer, F. Wirth, On inter-sampling times for event-triggered large-scale linear systems. In Proceedings IEEE Conf. on Decision and Control, 2013, pp. 5301–5306.

[3] C. De Persis, R. Sailer, F. Wirth, Parsimonious event-triggered distributed control: A Zeno free approach. Automatica, vol. 49, no. 7, pp. 2116–2124, 2013.
[4] M. C. F. Donkers, Networked and Event-Triggered Control Systems. PhD thesis, Eindhoven University of Technology, 2011.

[5] M. C. F. Donkers, W. P. M. H. Heemels, Output-based event-triggered control with guaranteed L∞-gain and improved and decentralised event-triggering. IEEE Trans. Autom. Control, vol. 57, no. 6, pp. 1362–1376, 2012.

[6] R. Goebel, R. G. Sanfelice, A. R. Teel, Hybrid dynamical systems. IEEE Control Syst. Magazine, vol. 29, no. 2, pp. 28–93, 2009.

[7] M. Guinaldo, D. V. Dimarogonas, K. H. Johansson, J. Sánchez, S. Dormido, Distributed event-based control for interconnected linear systems. In Proceedings IEEE Conf. on Decision and Control, 2011, pp. 2553–2558.

[8] M. Guinaldo, D. Lehmann, J. Sánchez, S. Dormido, K. H. Johansson, Distributed event-triggered control with network delays and packet losses. In Proceedings IEEE Conf. on Decision and Control, 2012, pp. 1–6.

[9] W. P. M. H. Heemels, M. C. F. Donkers, Model-based periodic event-triggered control for linear systems. Automatica, vol. 49, no. 3, pp. 698–711, 2013.

[10] J. P. Hespanha, P. Naghshtabrizi, Y. Xu, A survey of recent results in networked control systems. Proceedings of the IEEE, vol. 95, no. 1, pp. 138–162, 2007.

[11] K.-D. Kim, P. R. Kumar, Cyber-physical systems: A perspective at the centennial. Proceedings of the IEEE, vol. 100, pp. 1287–1308, 2012.

[12] D. Lehmann, Event-Based State-Feedback Control. Logos-Verlag, Berlin, 2011.

[13] D. Lehmann, J. Lunze, Extension and experimental evaluation of an event-based state-feedback approach. Contr. Eng. Practice, vol. 19, no. 2, pp. 101–112, 2011.

[14] D. Lehmann, J. Lunze, Event-based control with communication delays and packet losses. Int. J. Control, vol. 85, no. 5, pp. 563–577, 2012.

[15] J. Lunze, Feedback Control of Large-Scale Systems. Prentice Hall, London, 1992.

[16] J. Lunze, F. Lamnabhi-Lagarrigue (Eds.), Handbook of Hybrid Systems Control—Theory, Tools, Applications. Cambridge University Press, Cambridge, 2009.

[17] J. Lunze, D. Lehmann, A state-feedback approach to event-based control. Automatica, vol. 46, no. 1, pp. 211–215, 2010.

[18] E. D. Sontag, Y. Wang, New characterizations of input-to-state stability. IEEE Trans. Autom. Control, vol. 41, no. 9, pp. 1283–1294, 1996.

[19] C. Stöcker, J. Lunze, Event-based control of input-output linearizable systems. In Proceedings 18th IFAC World Congress, 2011, pp. 10062–10067.

[20] C. Stöcker, J. Lunze, Event-based control of nonlinear systems: An input-output linearization approach. In Proceedings Joint IEEE Conf. on Decision and Control and European Control Conf., 2011, pp. 2541–2546.

[21] C. Stöcker, J. Lunze, Event-based feedback control of disturbed input-affine systems. J. Appl. Math. Mech., vol. 94, no. 4, pp. 290–302, 2014.

[22] C. Stöcker, J. Lunze, Input-to-state stability of event-based state-feedback control. In Proceedings of the 13th European Control Conference, 2013, pp. 49–54.

[23] C. Stöcker, D. Vey, J. Lunze, Decentralized event-based control: Stability analysis and experimental evaluation. Nonlinear Anal. Hybrid Syst., vol. 10, pp. 141–155, 2013.

[24] Y. Tipsuwan, M.-Y. Chow, Control methodologies in networked control systems. Contr. Eng. Practice, vol. 11, no. 10, pp. 1099–1111, 2003.

[25] X. Wang, M. D. Lemmon, Event-triggered broadcasting across distributed networked control systems. In Proceedings of the American Control Conference, 2008, pp. 3139–3144.

[26] X. Wang, M. D. Lemmon, Event-triggering in distributed networked systems with data dropouts and delays. In Hybrid Systems: Computation and Control (R. Majumdar, P. Tabuada, Eds.), Springer, Berlin Heidelberg, 2009, pp. 366–380.

[27] X. Wang, M. D. Lemmon, Event-triggering in distributed networked control systems. IEEE Trans. Autom. Control, vol. 56, no. 3, pp. 586–601, 2011.
6
Periodic Event-Triggered Control

W. P. Maurice H. Heemels
Eindhoven University of Technology, Eindhoven, Netherlands

Romain Postoyan
University of Lorraine, CRAN UMR 7039, CNRS, Nancy, France

M. C. F. (Tijs) Donkers
Eindhoven University of Technology, Eindhoven, Netherlands

Andrew R. Teel
University of California, Santa Barbara, CA, USA

Adolfo Anta
General Electric, Munich, Germany

Paulo Tabuada
University of California, Los Angeles, CA, USA

Dragan Nešić
University of Melbourne, Parkville, Australia

CONTENTS
6.1 Introduction ... 106
6.2 Periodic ETC Systems ... 107
6.3 Impulsive System Formulation ... 108
6.4 Analysis for Linear Systems ... 108
    6.4.1 Problem Statement ... 109
    6.4.2 Stability and Performance of the Linear Impulsive System ... 109
    6.4.3 A Piecewise Linear System Approach to Stability Analysis ... 110
    6.4.4 Numerical Example ... 111
6.5 Design and Analysis for Nonlinear Systems ... 112
    6.5.1 Problem Statement ... 112
    6.5.2 Emulation ... 113
    6.5.3 Redesign ... 114
    6.5.4 Numerical Example ... 116
6.6 Conclusions, Open Problems, and Outlook ... 117
Acknowledgment ... 117
Bibliography ... 117
ABSTRACT Recent developments in computer and communication technologies are leading to an increasingly networked and wireless world. This raises new challenging questions in the context of networked control systems, especially when the computation, communication, and energy resources of the system are limited. To efficiently use the available resources, it is desirable to limit the control actions to instances when the system really needs attention. Unfortunately, the classical time-triggered control paradigm is based on performing sensing and actuation actions periodically in time (irrespective of the state of the system) rather than when the system needs attention. Therefore, it is of interest to consider event-triggered control (ETC) as an alternative paradigm, as it is more natural to trigger control actions based on the system state, output, or other available information. ETC can thus be seen as the introduction of feedback in the sensing, communication, and actuation processes. To facilitate an easy implementation of ETC, we propose to combine the principles and particularly the benefits of ETC and classical periodic time-triggered control. The idea is to periodically evaluate the triggering condition and to decide, at every sampling instant, whether the feedback loop needs to be closed. This leads to periodic event-triggered control (PETC) systems. In this chapter, we discuss PETC strategies, their benefits, and two analysis and design frameworks for linear and nonlinear plants, respectively.

6.1 Introduction

In many digital control applications, the control task consists of sampling the outputs of the plant and computing and implementing new actuator signals. Typically, the control task is executed periodically, since this allows the closed-loop system to be analyzed and the controller to be designed using the well-developed theory on sampled-data systems. Although periodic sampling is preferred from an analysis and design point of view, it is sometimes less appropriate from a resource utilization point of view. Namely, executing the control task at times when no disturbances are acting on the system and the system is operating desirably is clearly a waste of resources. This is especially disadvantageous in applications where the measured outputs and/or the actuator signals have to be transmitted over a shared (and possibly wireless) network with limited bandwidth and energy-constrained wireless links. To mitigate the unnecessary waste of communication resources, it is of interest to consider an alternative control paradigm, namely, event-triggered control (ETC), which was proposed in the late 1990s; see [1–5] and [6] for a recent overview. Various ETC strategies have been proposed since then; see, for example, [7–18]. In ETC, the control task is executed after the occurrence of an event, generated by some well-designed event-triggering condition, rather than the elapse of a certain fixed period of time, as in conventional periodic sampled-data control. This can be seen as bringing feedback to the sensing, communication, and actuation processes, as opposed to "open-loop" sensing and actuation as in time-triggered periodic control. By using feedback principles, ETC is capable of significantly reducing the number of control task executions, while retaining a satisfactory closed-loop performance.

The main difference between the aforecited papers [1–5,7–18] and the ETC strategy that will be discussed in this chapter is that in the former, the event-triggering condition has to be monitored continuously, while in the latter, the event-triggering condition is evaluated only periodically, and at every sampling instant it is decided whether or not to transmit new measurements and control signals. The resulting control strategy aims at striking a balance between periodic time-triggered control on the one hand and event-triggered control on the other hand; therefore, we coined the term periodic event-triggered control (PETC) in [19,20] for this class of ETC. For the existing approaches that require monitoring of the event-triggering conditions continuously, we will use the term continuous event-triggered control (CETC). By mixing ideas from ETC and periodic sampled-data control, the benefits of reduced resource utilization are preserved in PETC, as transmissions and controller computations are not performed periodically, even though the event-triggering conditions are evaluated periodically. The latter aspect leads to several benefits, including a guaranteed minimum interevent time of (at least) the sampling interval of the event-triggering condition. Furthermore, as already mentioned, the event-triggering condition has to be verified only at periodic sampling instants, making PETC better suited for practical implementations, as it can be implemented in more standard time-sliced embedded software architectures. In fact, in many cases CETC will typically be implemented using a discretized version based on a sufficiently small sampling period, resulting in a PETC strategy (the results of [21] may be applied in this case to analyze stability of the resulting closed-loop system). This fact provides further motivation for a more direct analysis and design of PETC, instead of obtaining them in a final implementation stage as a discretized approximation of a CETC strategy.

Initial work in the direction of PETC was taken in [2,7,8,22], which focused on restricted classes of systems, controllers, and/or (different) event-triggering conditions without providing a general analysis framework. Recently, the interest in what we call here PETC is growing; see, for example, [20,23–26]
and [27, Sec. 4.5], although these approaches start from a discrete-time plant model instead of a continuous-time plant, as we do here. In this chapter, the focus is on approaches to PETC that include a formal analysis framework, which, moreover, apply for continuous-time plants and incorporate intersample behavior in the analysis. We first address the case of plants modeled by linear continuous-time systems. Afterward, we present preliminary results in the case where the plant dynamics is nonlinear. The presented results are a summary of our works in [19] and in [28], in which the interested reader will find all the proofs as well as further developments.

The chapter is organized as follows. We first introduce the PETC paradigm in Section 6.2. We then model PETC systems as impulsive systems in Section 6.3. Results for linear plants are presented in Section 6.4, and the case of nonlinear systems is addressed in Section 6.5. Section 6.6 concludes the chapter with a summary as well as a list of open problems.

Nomenclature

Let $\mathbb{R} := (-\infty, \infty)$, $\mathbb{R}_+ := [0, \infty)$, $\mathbb{N} := \{1, 2, \ldots\}$, and $\mathbb{N}_0 := \{0, 1, 2, \ldots\}$. For a vector $x \in \mathbb{R}^n$, we denote by $\|x\| := \sqrt{x^\top x}$ its 2-norm. The distance of a vector $x$ to a set $\mathcal{A} \subset \mathbb{R}^n$ is denoted by $\|x\|_{\mathcal{A}} := \inf\{\|x - y\| \mid y \in \mathcal{A}\}$. For a real symmetric matrix $A \in \mathbb{R}^{n\times n}$, $\lambda_{\max}(A)$ denotes the maximum eigenvalue of $A$. For a matrix $A \in \mathbb{R}^{n\times m}$, we denote by $A^\top \in \mathbb{R}^{m\times n}$ the transpose of $A$, and by $\|A\| := \sqrt{\lambda_{\max}(A^\top A)}$ its induced 2-norm. For the sake of brevity, we sometimes write symmetric matrices of the form $\bigl[\begin{smallmatrix} A & B \\ B^\top & C \end{smallmatrix}\bigr]$ as $\bigl[\begin{smallmatrix} A & B \\ \star & C \end{smallmatrix}\bigr]$. We call a matrix $P \in \mathbb{R}^{n\times n}$ positive definite, and write $P \succ 0$, if $P$ is symmetric and $x^\top P x > 0$ for all $x \neq 0$. Similarly, we use $P \succeq 0$, $P \prec 0$, and $P \preceq 0$ to denote that $P$ is positive semidefinite, negative definite, and negative semidefinite, respectively. The notations $I$ and $0$ respectively stand for the identity matrix and the null matrix, whose

6.2 Periodic ETC Systems

Consider a plant given by

\[
\frac{\mathrm{d}}{\mathrm{d}t} x = f(x, u, w), \qquad (6.1)
\]

where $x \in \mathbb{R}^{n_x}$ denotes the state of the plant, $u \in \mathbb{R}^{n_u}$ is the input applied to the plant, and $w \in \mathbb{R}^{n_w}$ is an unknown disturbance.

In a conventional sampled-data state-feedback setting, the input $u$ is given by

\[
u(t) = K(x(t_k)), \quad \text{for } t \in (t_k, t_{k+1}], \qquad (6.2)
\]

where $t_k$, $k \in \mathbb{N}$, are the sampling instants, which are periodic in the sense that $t_k = kh$, $k \in \mathbb{N}$, for some properly chosen sampling interval $h > 0$. Hence, at each sampling instant, the state measurement is sent to the controller, which computes a new control input that is immediately applied to the plant.

The setup is different in PETC. In PETC, the sampled state measurement $x(t_k)$ is used to evaluate a criterion at each $t_k = kh$, $k \in \mathbb{N}$, for some $h > 0$, based on which it is decided (typically at the smart sensor) whether the feedback loop needs to be closed. In that way, a new control input is not necessarily periodically applied to the plant as in traditional sampled-data settings, even though the state is sampled at every $t_k$, $k \in \mathbb{N}$. This has the advantage of reducing the usage of the communication channel and of the controller computation resources, as well as the number of control input updates. The latter allows limiting the actuators' wear and reducing the actuators' energy consumption in some applications. As a consequence, the controller in PETC is given by

\[
u(t) = K(\hat{x}(t)), \quad \text{for } t \in \mathbb{R}_+, \qquad (6.3)
\]

where $\hat{x}$ is a left-continuous signal given for $t \in (t_k, t_{k+1}]$, $k \in \mathbb{N}$, by

\[
\hat{x}(t) = \begin{cases} x(t_k), & \text{when } \mathcal{C}(x(t_k), \hat{x}(t_k)) > 0 \\ \hat{x}(t_k), & \text{when } \mathcal{C}(x(t_k), \hat{x}(t_k)) \leq 0 \end{cases}, \qquad (6.4)
\]
dimensions depend on the context. For a locally inte- and some initial value for x̂(0). Considering the config-
grable signal w : R+ → Rn , we denote by wL2 := uration in Figure 6.1, the value x̂ (t) can be interpreted
∞
( 0 w(t)2 dt)1/2 its L2 -norm, provided the integral is as the most recently transmitted measurement of the
finite. Furthermore, we define the set of all locally inte- state x to the controller at time t. Whether or not new
grable signals with a finite L2 -norm as L2 . For a signal state measurements are transmitted to the controller is
w : R+ → Rn , we denote the limit from below at time based on the event-triggering criterion C : Rnξ → R
t ∈ R+ by w+ (t) := lims↑t w(s). The solution z of a time- with nξ := 2n x . In particular, if at time tk it holds that
invariant dynamical system at time t  0 starting with C( x (tk ), x̂(tk )) > 0, the state x (tk ) is transmitted over the
the initial condition z(0) = z0 will be denoted z(t, z0 ) network to the controller, and x̂ and the control value
or simply z(t) when the initial state is clear from the u are updated accordingly. In case C( x (tk ), x̂ (tk ))  0,
context. The notation · stands for the floor function. no new state information is sent to the controller, in
which case the input u is not updated and kept the
same for (at least) another sampling interval, imply-
6.2 Periodic ETC Systems ing that no control computations are needed and no
In this section, we introduce the PETC paradigm. To ∗ A signal x : R → Rn is called left-continuous, if for all t > 0,
+
do so, let us consider a plant whose dynamics is lims↑ t x ( s ) = x ( t).
108 Event-Based Control and Signal Processing

new state measurements and control values have to be transmitted.

FIGURE 6.1
Periodic event-triggered control schematic. (Block diagram: the plant with input u and state x; an event-triggering condition decides when the sampled state is forwarded to the controller.)

Contrary to CETC, we see that the triggering condition is evaluated only every h units of time (and not continuously for all time t ∈ R+). Intuitively, we might want to design the criterion C as in CETC and to select the sampling period h sufficiently small to obtain a PETC strategy which (approximately) preserves the properties ensured in CETC. Indeed, we know from [21] that if a disturbance-free CETC system is such that its origin (or, more generally, a compact set) is uniformly globally asymptotically stable, then the corresponding emulated PETC system preserves this property semiglobally and practically with fast sampling, under mild conditions, as we will recall in Section 6.5.2. This way of addressing PETC may exhibit some limitations, as it may require very fast sampling of the state, which may not be possible to achieve because of the limited hardware capacities. Furthermore, we might want to work with "non-small" sampling periods in order to reduce the usage of the computation and communication resources. As such, there is a strong need for systematic methods to construct PETC strategies that appropriately take into account the features of the paradigm. The objective of this chapter is to address this challenge. We present in the next sections analysis and design results for systems (6.1), (6.3), and (6.4) such that desired stability or performance guarantees are satisfied, while the number of transmissions between the plant and the controller is kept small.

6.3 Impulsive System Formulation

The system described in the previous section combines continuous-time dynamics (6.1) with discrete-time phenomena (6.3) and (6.4). It is therefore natural to model PETC systems as impulsive systems (see [29]). An impulsive system is a system that combines the "flow" of the continuous dynamics with the discrete "jumps" occurring at each sampling instant.

We define ξ := [x⊤ x̂⊤]⊤ ∈ R^{n_ξ}, with n_ξ = 2n_x, and

  g(ξ, w) := [f(x, K(x̂), w); 0],  J1 := [I 0; I 0],  J2 := [I 0; 0 I],   (6.5)

to arrive at an impulsive system given by

  d/dt [ξ; τ] = [g(ξ, w); 1], when τ ∈ [0, h],   (6.6)

  [ξ⁺; τ⁺] = { [J1ξ; 0], when C(ξ) > 0, τ = h,
             { [J2ξ; 0], when C(ξ) ≤ 0, τ = h,   (6.7)

where the state τ keeps track of the time elapsed since the last sampling instant. Between two successive sampling instants, ξ and τ are given by the (standard) solutions to the ordinary differential equation (6.6), and these experience a jump dictated by (6.7) at every sampling instant. When the event-based condition is not satisfied, only τ is reset to 0, while x and x̂ are unchanged. In the other case, x̂ and τ are, respectively, reset to x and 0, which corresponds to a new control input being applied to the plant.

In what follows, we use the impulsive model to analyze the PETC system for both the case that the plant and controller are linear (Section 6.4) and the case that they are nonlinear (Section 6.5).

6.4 Analysis for Linear Systems

In this section, we analyze stability and performance of the PETC systems with linear dynamics. Hence, the plant model is given by (6.1) with f(x, u, w) = A_p x + B_p u + B_w w and the feedback law by (6.2) with K(x) = Kx, where A_p, B_p, B_w, and K are matrices of appropriate dimensions. This leads to the PETC system (6.6)–(6.7) with

  g(ξ, w) = Āξ + B̄w, with Ā := [A_p B_pK; 0 0], B̄ := [B_w; 0].   (6.8)

Moreover, we focus on quadratic event-triggering conditions, i.e., C in (6.4) and (6.7) is defined as

  C(ξ(t_k)) = ξ⊤(t_k) Q ξ(t_k),   (6.9)

for some symmetric matrix Q ∈ R^{n_ξ×n_ξ}. This choice is justified by the fact that various existing event-triggering
conditions, including the ones in [11,12,16,30–33], that have been applied in the context of CETC, can be written as quadratic event-triggering conditions for PETC as in (6.9) (see [19] for more details).

We now make precise what we mean by stability and performance. Subsequently, we present two different approaches: (1) direct analysis of the impulsive system and (2) indirect analysis of the impulsive system using a discretization.

6.4.1 Problem Statement

Let us define the notions of global exponential stability and L2-performance, where the latter definition is adopted from [34].

DEFINITION 6.1 The PETC system, given by (6.1), (6.2), and (6.3), is said to be globally exponentially stable (GES), if there exist c > 0 and ρ > 0 such that for any initial condition ξ(0) = ξ0 ∈ R^{n_ξ}, all corresponding solutions to (6.6)–(6.7) with τ(0) ∈ [0, h] and w = 0 satisfy ‖ξ(t)‖ ≤ c e^{−ρt}‖ξ0‖ for all t ∈ R+ and some (lower bound on the) decay rate ρ.

Let us now define the L2-gain of a system, for which we introduce a performance variable z ∈ R^{n_z} given by

  z = C̄ξ + D̄w,   (6.10)

where C̄ and D̄ are appropriately chosen matrices given by the considered problem. For instance, when C̄ = [I_{n_x} 0_{n_x×n_x}] and D̄ is equal to 0_{n_x×n_w}, we simply have that z = x.

DEFINITION 6.2 The PETC system, given by (6.1), (6.2), (6.3), and (6.10), is said to have an L2-gain from w to z smaller than or equal to γ, where γ ∈ R+, if there is a function δ : R^{n_ξ} → R+ such that for any w ∈ L2, any initial state ξ(0) = ξ0 ∈ R^{n_ξ} and τ(0) ∈ [0, h], the corresponding solution to (6.6), (6.7), and (6.10) satisfies

  ‖z‖_{L2} ≤ δ(ξ0) + γ‖w‖_{L2}.   (6.11)

Equation 6.11 is a robustness property, and the gain γ serves as a measure of the system's ability to attenuate the effect of the disturbance w on z. Loosely speaking, a small γ indicates a small impact of w on z.

6.4.2 Stability and Performance of the Linear Impulsive System

We analyze the stability and the L2-gain of the impulsive system model (6.6)–(6.7) using techniques from Lyapunov stability analysis [34]. In short, the theory states that if an energy function (a so-called Lyapunov or storage function) can be found that satisfies certain properties, stability and a certain L2-gain can be guaranteed. In particular, we consider a Lyapunov function of the form

  V(ξ, τ) := ξ⊤P(τ)ξ,   (6.12)

for ξ ∈ R^{n_ξ} and τ ∈ [0, h], where P : [0, h] → R^{n_ξ×n_ξ} with P(τ) ≻ 0 for τ ∈ [0, h]. This function proves stability and a certain L2-gain from w to z if it satisfies

  d/dt V ≤ −2ρV − γ^{−2}‖z‖² + ‖w‖²,   (6.13)

during the flow (6.6), and

  V(J1ξ, 0) ≤ V(ξ, h), for all ξ with ξ⊤Qξ > 0,   (6.14)

  V(J2ξ, 0) ≤ V(ξ, h), for all ξ with ξ⊤Qξ ≤ 0,   (6.15)

during the jumps (6.7) of the impulsive system (6.6)–(6.7). Equation 6.13 indicates that along the solutions, the energy of the system decreases up to the perturbing term w during the flow, and (6.14)–(6.15) indicate that the energy in the system does not increase during the jumps.

The main result presented below will provide a computable condition in the form of a linear matrix inequality (LMI) to verify if a function (6.12) exists that satisfies (6.13)–(6.15). Note that LMIs can be efficiently tested using optimization software, such as Yalmip [35]. We introduce the Hamiltonian matrix, given by

  H := [ Ā + ρI + B̄D̄⊤LC̄    γ²B̄(γ²I − D̄⊤D̄)^{−1}B̄⊤
         −C̄⊤LC̄              −(Ā + ρI + B̄D̄⊤LC̄)⊤ ],   (6.16)

with L := (γ²I − D̄D̄⊤)^{−1}. The matrix L has to be positive definite, which can be guaranteed by taking γ > √(λmax(D̄⊤D̄)). In addition, we introduce the matrix exponential

  F(τ) := e^{−Hτ} = [ F11(τ)  F12(τ)
                      F21(τ)  F22(τ) ].   (6.17)

Besides this, we need the following technical assumption.

ASSUMPTION 6.1 F11(τ) is invertible for all τ ∈ [0, h].

Assumption 6.1 is always satisfied for a sufficiently small sampling period h. Namely, F(τ) = e^{−Hτ} is a continuous function, and we have that F11(0) = I. Let us also introduce the notation F̄11 := F11(h), F̄12 := F12(h), F̄21 := F21(h), and F̄22 := F22(h), and a matrix S̄ that satisfies S̄S̄⊤ := −F̄11^{−1}F̄12. Such a matrix S̄ exists under Assumption 6.1 because this assumption ensures that the matrix −F̄11^{−1}F̄12 is positive semidefinite.
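To make these quantities concrete, the following self-contained Python sketch (not from the chapter) assembles H as in (6.16) for a hypothetical scalar plant with D̄ = 0 (so that L = γ^{−2}I and the (1,2) block of H reduces to B̄B̄⊤), approximates F(h) = e^{−Hh} of (6.17) by a truncated Taylor series, and checks that F̄11 is invertible and that −F̄11^{−1}F̄12 is symmetric positive semidefinite. The plant data, ρ, γ, and h are illustrative numbers only.

```python
# Sketch: build H of (6.16) for a hypothetical scalar plant, compute
# F(h) = e^{-H h} per (6.17), and check Assumption 6.1 numerically.
# Assumes D-bar = 0, so L = I / gamma^2.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_scale(A, s):
    return [[s * a for a in row] for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def eye(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def expm(M, terms=30):
    # Truncated Taylor series; adequate here because ||M|| is small.
    S, T = eye(len(M)), eye(len(M))
    for k in range(1, terms + 1):
        T = mat_scale(mat_mul(T, M), 1.0 / k)
        S = mat_add(S, T)
    return S

# Hypothetical data: plant dx/dt = x + u + w, controller u = -2 x-hat,
# performance output z = x (so C-bar = [1 0], D-bar = 0, n_xi = 2).
Abar = [[1.0, -2.0], [0.0, 0.0]]             # [A_p, B_p K; 0, 0] of (6.8)
R    = [[1.0, 0.0], [0.0, 0.0]]              # B-bar B-bar^T
rho, gamma, h = 0.1, 2.0, 0.05
Qh   = [[1.0 / gamma**2, 0.0], [0.0, 0.0]]   # gamma^{-2} C-bar^T C-bar

A1 = mat_add(Abar, mat_scale(eye(2), rho))   # A-bar + rho I
# Assemble H = [A1, R; -Qh, -A1^T] (4 x 4).
H = [A1[0] + R[0],
     A1[1] + R[1],
     [-Qh[0][0], -Qh[0][1], -A1[0][0], -A1[1][0]],
     [-Qh[1][0], -Qh[1][1], -A1[0][1], -A1[1][1]]]

F = expm(mat_scale(H, -h))                   # F(h) = e^{-H h}
F11 = [row[:2] for row in F[:2]]
F12 = [row[2:] for row in F[:2]]

det_F11 = F11[0][0] * F11[1][1] - F11[0][1] * F11[1][0]
inv_F11 = [[ F11[1][1] / det_F11, -F11[0][1] / det_F11],
           [-F11[1][0] / det_F11,  F11[0][0] / det_F11]]
M = mat_scale(mat_mul(inv_F11, F12), -1.0)   # -F11^{-1} F12 = S-bar S-bar^T
# Smallest eigenvalue of the symmetric 2x2 matrix M (closed form).
eig_min = 0.5 * (M[0][0] + M[1][1]) - (
    (0.5 * (M[0][0] - M[1][1]))**2 + M[0][1] * M[1][0]) ** 0.5
```

A production implementation would use a dedicated matrix-exponential routine (e.g., scipy.linalg.expm) rather than the plain Taylor sum, which suffices here only because ‖Hh‖ is small for the chosen h.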
Theorem 6.1

Consider the impulsive system (6.6)–(6.7) and let ρ > 0, γ > √(λmax(D̄⊤D̄)), and Assumption 6.1 hold. Suppose that there exist a matrix P ≻ 0 and scalars μ_i ≥ 0, i ∈ {1, 2}, such that for i ∈ {1, 2},

  [ P + (−1)^i μ_i Q    J_i⊤F̄11^{−⊤}PS̄    J_i⊤(F̄11^{−⊤}PF̄11^{−1} + F̄21⊤F̄11^{−1})
    ⋆                   I − S̄⊤PS̄          0
    ⋆                   ⋆                  F̄11^{−⊤}PF̄11^{−1} + F̄21⊤F̄11^{−1} ] ≻ 0.   (6.18)

Then, the PETC system (6.6)–(6.7) is GES with decay rate ρ (when w = 0) and has an L2-gain from w to z smaller than or equal to γ.

The results of Theorem 6.1 guarantee both GES (for w = 0) and an upper bound on the L2-gain.

REMARK 6.1 Recently, extensions to the above results were provided in [36]. Instead of adopting timer-dependent quadratic storage functions V(ξ, τ) = ξ⊤P(τ)ξ, in [36] more versatile storage functions were used of the piecewise quadratic form V(ξ, τ, ω) = ξ⊤P_i(τ)ξ, i ∈ {1, . . . , N}, where i is determined by the region Ω_i, i ∈ {1, . . . , N}, in which the state ξ is after h − τ time units (i.e., at the next jump time), which depends on the disturbance signal ω. The regions Ω_1, . . . , Ω_N form a partition of the state-space R^{n_ξ}. As such, the value of the storage function depends on future disturbance values, see [36] for more details. This approach leads to less conservative LMI conditions than the ones presented above.

6.4.3 A Piecewise Linear System Approach to Stability Analysis

In case disturbances are absent (i.e., w = 0), less conservative conditions for GES can be obtained than by using Theorem 6.1. These conditions can be obtained by discretizing the impulsive system (6.6)–(6.7) at the sampling instants t_k = kh, k ∈ N, where we take* τ(0) = h and w = 0, resulting in a discrete-time piecewise-linear (PWL) model. By defining the state variable ξ_k := ξ(t_k) (and assuming ξ to be left-continuous), the discretization leads to the bimodal PWL model

  ξ_{k+1} = { e^{Āh}J1ξ_k, when ξ_k⊤Qξ_k > 0,
            { e^{Āh}J2ξ_k, when ξ_k⊤Qξ_k ≤ 0.   (6.19)

*Note that τ(0) is allowed to take any value in [0, h] in the stability definition (Definition 6.1), while in the discretization we take τ(0) = h. Due to the linearity of the flow dynamics (6.6) and the fact that τ(0) lies in a bounded set, it is straightforward to see that GES for initial conditions with τ(0) = h implies GES for all initial conditions with τ(0) ∈ [0, h].

Using the PWL model (6.19) and a piecewise quadratic (PWQ) Lyapunov function of the form

  V(ξ) = { ξ⊤P1ξ, when ξ⊤Qξ > 0,
         { ξ⊤P2ξ, when ξ⊤Qξ ≤ 0,   (6.20)

we can guarantee GES of the PETC system given by (6.1), (6.3), (6.4), and (6.8) under the conditions given next.

Theorem 6.2

The PETC system (6.6)–(6.7) is GES with decay rate ρ, if there exist matrices P1, P2, and scalars α_{ij} ≥ 0, β_{ij} ≥ 0, and κ_i ≥ 0, i, j ∈ {1, 2}, satisfying

  e^{−2ρh}P_i − (e^{Āh}J_i)⊤P_j e^{Āh}J_i + (−1)^i α_{ij} Q + (−1)^j β_{ij} (e^{Āh}J_i)⊤Q e^{Āh}J_i ⪰ 0,   (6.21)

for all i, j ∈ {1, 2}, and

  P_i + (−1)^i κ_i Q ≻ 0,   (6.22)

for all i ∈ {1, 2}.

When comparing the two different analysis approaches, two observations can be made. The first is that the direct analysis of the impulsive system allows us to analyze the L2-gain from w to z, contrary to the indirect analysis using the PWL system. Second, the indirect analysis approach using the PWL system is relevant since, when comparing it to the direct analysis of the impulsive system, we can show that for stability analysis (when w = 0), the PWL system approach never yields more conservative results than the impulsive system approach, as is formally proven in [19].

REMARK 6.2 This section is devoted to the analysis of the stability and the performance of linear PETC systems. The results can also be used to design the controllers as well as the triggering condition and the sampling period. The interested reader can consult Section IV in [19] for detailed explanations.

REMARK 6.3 In [7], PETC closed-loop systems were analyzed with C(x(t_k), x̂(t_k)) = ‖x(t_k)‖ − δ, with δ > 0 some absolute threshold. Hence, the control value u is updated to Kx(t_k) only when ‖x(t_k)‖ > δ, while in a region close to the origin, i.e., when ‖x(t_k)‖ ≤ δ, no updates of the control value take place at the sampling instants t_k = kh, k ∈ N. For linear systems with bounded disturbances, techniques were presented in [7] to prove ultimate boundedness/practical stability, and calculate
the ultimate bound Π to which eventually all state trajectories converge (irrespective of the disturbance signal).

REMARK 6.4 Extensions of the above analysis framework to output-based PETC with decentralized event triggering (instead of state-based PETC with centralized triggering conditions) can be found in [20]. Model-based (state-based and output-based) PETC controllers are considered in [19]. Model-based PETC controllers exploit model knowledge to obtain better predictions x̂ of the true state x in between sensor-to-controller communications than just holding the previous value as in (6.4). This can further enhance communication savings between the sensor and the controller. Similar techniques can also be applied to reduce the number of communications between the controller and the actuator.

REMARK 6.5 For some networked control systems (NCSs), it is natural to periodically switch between time-triggered sampling and PETC. Examples include NCSs with FlexRay (see [37]). FlexRay is a communication protocol developed by the automotive industry, which has the feature of switching between static and dynamic segments, during which the transmissions are, respectively, time triggered and event triggered. While the implementation and therefore the model differ in this case, the results of this section can be applied to analyze stability.

6.4.4 Numerical Example

We illustrate the presented theory using a numerical example. Let us consider the example from [12] with plant dynamics (6.1) given by

  d/dt x = [0 1; −2 3]x + [0; 1]u + [1; 0]w,   (6.23)

and state-feedback controller (6.3), where we take K(x) = [1 −4]x and t_k = kh, k ∈ N, with sampling interval h = 0.05. We consider the event-triggering conditions given by

  C(x, x̂) = ‖Kx̂ − Kx‖ − σ‖Kx‖,   (6.24)

for some value σ > 0. This can be equivalently written in the form of (6.9) by choosing

  Q = [ (1 − σ²)K⊤K   −K⊤K
        −K⊤K          K⊤K ].   (6.25)

For this PETC system, we will apply both approaches for stability analysis (for w = 0), and the impulsive system approach for performance analysis. We aim at constructing the largest value of σ in (6.24) such that GES or a certain L2-gain can be guaranteed. The reason for striving for large values of σ is that then large (minimum) interevent times are obtained, due to the form of (6.24).

For the event-triggering condition (6.24), the PWL system approach yields a maximum value for σ of σ_PWL = 0.2550 (using Theorem 6.2), while still guaranteeing GES of the PETC system. The impulsive system approach gives a maximum value of σ_IS = 0.2532. Hence, as expected based on the discussion at the end of Section 6.4.3, indicating that the PWL system approach is less conservative than the impulsive system approach, see [19], we see that σ_IS ≤ σ_PWL, although the values are rather close.

When analyzing the L2-gain from the disturbance w to the output variable z as in (6.10), where z = [0 1 0 0]ξ, we obtain Figure 6.2a, in which the smallest upper bound on the L2-gain that can be guaranteed on the basis of Theorem 6.1 is given as a function of σ. This figure clearly demonstrates that better guarantees on the control performance (i.e., smaller γ) necessitate more updates (i.e., smaller σ), allowing us to make trade-offs between these two competing objectives, see also the discussion regarding Figure 6.2d. An important design observation is related to the fact that for σ → 0, we recover the L2-gain of the periodic sampled-data system, given by (6.1) with the controller (6.2), sampling interval h = 0.05, and t_k = kh, k ∈ N. Hence, this indicates that an emulation-based design can be obtained by first synthesizing a state-feedback gain K in a periodic time-triggered implementation of the feedback control given by u(t_k) = Kx(t_k), k ∈ N (related to σ = 0), resulting in a small L2-gain of the closed-loop sampled-data control loop (using the techniques in, e.g., [38]). Next, the PETC controller values of σ > 0 can be selected to reduce the number of communications and updates of the control input, while still guaranteeing a small value of the guaranteed L2-gain according to Figure 6.2a and d.

Figure 6.2b shows the response of the performance output z of the PETC system with σ = 0.2, initial condition ξ0 = [1 0 0 0]⊤, and a disturbance w as also depicted in Figure 6.2b. For the same situation, Figure 6.2c shows the evolution of the interevent times. We see interevent times ranging from h = 0.05 up to 0.85 (17 times the sampling interval h), indicating a significant reduction in the number of transmissions. To more clearly illustrate this reduction, Figure 6.2d depicts the number of transmissions for this given initial condition and disturbance, as a function of σ. Using this figure and Figure 6.2a, it can be shown that the increase of the guaranteed L2-gain, through an increased σ, leads to fewer transmissions, which demonstrates the trade-off between the closed-loop performance and the number of transmissions that have to be made. Conclusively, using the PETC instead of the periodic sampled-data controller for this example
FIGURE 6.2
Figures corresponding to the numerical example of Section 6.4.4. (a) Upper-bound L2-gain γ as a function of σ. (b) The evolution of the disturbances w and the output z as a function of time for σ = 0.2. (c) The interevent times as a function of time for σ = 0.2. (d) Number of events as a function of σ.
yields a significant reduction in the number of transmissions/controller computations, while still preserving closed-loop stability and performance to some degree.

6.5 Design and Analysis for Nonlinear Systems

Let us now address the case where the plant dynamics is described by a nonlinear ordinary differential equation, and we ignore the possible presence of the external disturbance w for simplicity. As a consequence, (6.1) becomes

  d/dt x = f(x, u, 0),   (6.26)

where u is given by (6.3)–(6.4).

6.5.1 Problem Statement

The generalization of the results of Section 6.4 to nonlinear systems is a difficult task, and we will a priori not be able to derive similar, easily computable criteria to verify the stability properties of the corresponding PETC systems. We therefore address the design of the sampling interval h and of the triggering condition C from a different angle compared to Section 6.4.

We start by assuming that we have already designed a continuous event-triggered controller, and our objective is to design h and C to preserve the properties of CETC. We thus assume that we know a mapping K : R^{n_x} → R^{n_u} and a criterion C̃ : R^{2n_x} → R, which are used to generate the control input. The corresponding transmission
instants are denoted by t̃_k, k ∈ N0, and are defined by

  t̃_{k+1} = inf{t > t̃_k | C̃(x(t), x(t̃_k)) ≥ 0}, t̃_0 = 0.   (6.27)

The control input is thus given by

  u(t) = K(x̃(t)), for t ∈ R+,   (6.28)

where* x̃(t) = x(t̃_k) for t ∈ (t̃_k, t̃_{k+1}]. Note that C̃ is evaluated at any t ∈ R+ in (6.27), contrary to (6.4). The continuous event-triggered controller guarantees that, for all time† t ≥ 0,

  C̃(ξ̃(t)) ≤ 0,   (6.29)

where ξ̃ := [x⊤ x̃⊤]⊤.

*We use the notation x̃ instead of x̂ to avoid any confusion with the PETC setup.
†We assume that the solutions to the corresponding system are defined for all positive time, for any initial condition.

In the following, we first apply the results of [21] to show that, if (6.29) implies the global asymptotic stability of a given compact set, then this property is semiglobally and practically preserved in the context of PETC, where the adjustable parameter is the sampling period h, under mild regularity conditions. We then present an alternative approach, which consists in redesigning the continuous event-triggering condition C̃ for PETC in order to recover the same properties as for CETC. In this case, we provide an explicit bound on the sampling period h.

6.5.2 Emulation

We model the overall CETC system as an impulsive system (like in Section 6.3):

  d/dt ξ̃ = g(ξ̃), when C̃(ξ̃) ≤ 0,
  ξ̃⁺ = J1ξ̃, when C̃(ξ̃) ≥ 0,   (6.30)

where g(ξ̃) = [f(x, K(x̃), 0)⊤, 0⊤]⊤ (with some abuse of notation with respect to (6.5)) and J1 is defined in Section 6.3. We do not use strict inequalities in (6.30) to define the regions of the state space where the system flows and jumps, contrary to (6.6)–(6.7). This is justified by the fact that we want to work with a flow set, {ξ̃ | C̃(ξ̃) ≤ 0}, and a jump set, {ξ̃ | C̃(ξ̃) ≥ 0}, which are closed, in order to apply the results of [21]. (We assume below that C̃ is continuous for this purpose.) When the state is in the intersection of the flow set and the jump set, the corresponding solution can either jump or flow, if flowing keeps the solution in the flow set. We make the following assumptions on the system (6.30).

ASSUMPTION 6.2 The solutions to (6.30) do not undergo two consecutive jumps, i.e., t̃_k < t̃_{k+1} for any k ∈ N0, and are defined for all positive time.

Assumption 6.2 can be relaxed by allowing two consecutive jumps, or even the Zeno phenomenon, for the CETC system. In this case, a different concept of solutions is required, as defined in [29]; see [21] for more detail.

ASSUMPTION 6.3 The vector field g and the scalar field C̃ are continuous.

We suppose that (6.29) ensures the global asymptotic stability of a given compact set A ⊂ R^{2n_x}, as formalized below.

ASSUMPTION 6.4 The following holds for the CETC system.

(1) For each ε > 0, there exists δ > 0 such that each solution ξ̃ starting at ξ̃_0 ∈ A + δB, where B is the unit ball of R^{2n_x}, satisfies ‖ξ̃(t)‖_A ≤ ε for all t ≥ 0.

(2) There exists μ > 0 such that any solution starting in A + μB satisfies ‖ξ̃(t)‖_A → 0 as t → ∞.

Set stability extends the classical notion of stability of an equilibrium point to a set. Essentially, a set is stable if a solution which starts close to it (in terms of the distance to this set) remains close to it [see item (1) of Assumption 6.4]; it is attractive if any solution converges toward this set [see item (2) of Assumption 6.4]. A set is asymptotically stable if it satisfies both properties. Set stability is fundamental in many control theoretic problems, see, for example, Chapter 3.1 in [29], or [39,40]. Many existing continuous event-triggered controllers satisfy Assumption 6.4, as shown in [18]. Examples include the techniques in [1,2,12–14], to mention a few.

We need to slightly modify the impulsive model (6.6)–(6.7) of the PETC system as follows, in order to apply the results of [21]:

  d/dt [ξ; τ] = [g(ξ); 1], when τ ∈ [0, h],

  [ξ⁺; τ⁺] ∈ { {[J1ξ; 0]},            when C̃(ξ) > 0, τ = h,
             { {[J2ξ; 0]},            when C̃(ξ) < 0, τ = h,
             { {[J1ξ; 0], [J2ξ; 0]},  when C̃(ξ) = 0, τ = h.   (6.31)

The difference with (6.6)–(6.7) is that, when τ = h and C̃(ξ) = 0, we can either have a transmission (i.e., ξ is
reset to [x⊤ x⊤]⊤) or not (i.e., ξ remains unchanged). Hence, system (6.31) generates more solutions than (6.6)–(6.7). However, the results presented in Section 6.4 also apply when (6.31) (with a linear plant) is used instead of (6.6)–(6.7). Furthermore, the jump map of (6.31) is outer semicontinuous (see Definition 5.9 in [29]), which is essential to apply [21]. Proposition 6.1 below follows from Theorem 5.2 in [21].

Proposition 6.1

Consider the PETC system (6.31) and suppose Assumptions 6.2 through 6.4 hold. For any compact set Δ ⊂ R^{2n_x} and any ε > 0, there exists h* such that for any h ∈ (0, h*) and any solution [ξ⊤, τ]⊤ with ξ(0) ∈ Δ, there exists T ≥ 0 such that ξ(t) ∈ A + εB for all t ≥ T.

Proposition 6.1 shows that the global asymptotic stability of A in Assumption 6.4 is semiglobally and practically preserved for the emulated PETC system, by adjusting the sampling period. It is possible to derive stronger stability guarantees for the PETC system, such as the (semi)global asymptotic stability of A, for instance, under additional assumptions on the CETC system.

6.5.3 Redesign

The results above are general, but they do not provide an explicit bound on the sampling period h, which is important in practice. Furthermore, in some cases, we would like to exactly (and not approximately) preserve the properties of CETC, which may not necessarily be those stated in Assumption 6.4, but may be some performance guarantees, for instance. We present an alternative approach to design PETC controllers for nonlinear systems for this purpose. Contrary to Section 6.5.2, we do not emulate the CETC controller, but we redesign the triggering criterion (but not the feedback law), and we provide an upper bound on the sampling period h to guarantee that C̃ remains nonpositive for the PETC system.

We suppose that inequality (6.29) ensures the satisfaction of a desired stability or performance property. Consider the following example to be more concrete. In [12], a continuous event-triggering law of the form β(‖x − x̃‖) ≥ σα(‖x‖), with* β, α ∈ K∞ and σ ∈ (0, 1), is designed. We obtain C̃(x, x̃) = β(‖x − x̃‖) − σα(‖x‖) in this case. This triggering law is shown to ensure the global asymptotic stability of the origin of the nonlinear systems (6.26), (6.27), and (6.28) in [12], under some conditions on f, K, β, α. In other words, C̃ nonpositive along the system solutions implies that the origin of the closed-loop system is globally asymptotically stable. Similarly, the conditions of the form ‖x − x̃‖ ≤ ε used in [1,2,13,14] and ‖x − x̃‖ ≤ δ‖x‖ + ε in [16] to practically stabilize the origin of the corresponding CETC system give C̃(x, x̃) = ‖x − x̃‖ − ε and C̃(x, x̃) = ‖x − x̃‖ − δ‖x‖ − ε, respectively. By reducing the properties of CETC to the satisfaction of (6.29), we cover a range of situations in a unified way.

*A function β : R+ → R+ is of class K∞ if it is continuous, zero at zero, strictly increasing, and unbounded.

We make the following assumption on the CETC system (which was not needed in Section 6.5.2).

ASSUMPTION 6.5 Consider the CETC system (6.30); it holds that

  T := inf{t > 0 | C̃(ξ̃(t, [x0⊤ x0⊤]⊤)) ≥ 0, [x0⊤ x0⊤]⊤ ∈ Ω} > 0,   (6.32)

where ξ̃(t, [x0⊤ x0⊤]⊤) is the solution to d/dt ξ̃ = g(ξ̃) at time t initialized at [x0⊤ x0⊤]⊤, and Ω ⊆ R^{n_ξ} is bounded and forward invariant† for the CETC system (6.30).

†The set Ω is forward invariant for the CETC system if ξ̃_0 ∈ Ω implies that the corresponding solution ξ̃, with ξ̃(t_0) = ξ̃_0 and t_0 ∈ R+, lies in Ω for all time larger than t_0.

Assumption 6.5 means that there exists a uniform minimum intertransmission time for the CETC system in the set Ω. This condition is reasonable, as most available event-triggering schemes of the literature ensure the existence of a uniform minimum amount of time between two transmissions over a given operating set Ω, see [18]. The set Ω can be determined using the level set of some Lyapunov function when investigating stabilization problems, for example.

We have seen that under the PETC strategy, the input can be updated only whenever the triggering condition is evaluated, i.e., every h units of time. Hence, it is reasonable to select the sampling interval to be less than the minimum intertransmission time of the CETC system (which does exist in view of Assumption 6.5). In that way, after a jump, we know that C̃ will remain nonpositive at least until the next sampling instant. Therefore, we select h such that

  0 < h < T,   (6.33)

where T is defined in (6.32). Estimates of T are generally given in the analysis of the CETC system to prove the existence of a positive minimal interevent time.

We aim at guaranteeing that C̃ remains nonpositive along the solutions to the PETC system. Hence, we would like to verify at t_k, k ∈ N0, whether the condition C̃(ξ(t)) > 0 may be satisfied for t ∈ [t_k, t_{k+1}]
Periodic Event-Triggered Control 115

(recall that ξ = [x⊤ x̂⊤]⊤). Should C̃(ξ(t)) be positive at some t ∈ [tk, tk+1] (without updating the control action), a jump must occur at t = tk in order to guarantee C̃(ξ(t)) ≤ 0 for all t ∈ [tk, tk+1]. To determine at time tk, k ∈ N, whether the condition C̃(ξ(t)) ≤ 0 may be violated for some t ∈ [tk, tk+1], the evolution of the triggering function C̃ along the solutions to (d/dt)ξ = g(ξ) needs to be analyzed. This point is addressed by resorting to similar techniques as in [41]. We make the following assumption for this purpose, which is stronger than Assumption 6.3.

ASSUMPTION 6.6 The functions g and C̃ are p-times continuously differentiable, where p ∈ N, and the real numbers c, ςj for j ∈ {0, 1, . . . , p − 1} satisfy

    L_g^p C̃(ξ) ≤ ∑_{j=0}^{p−1} ςj L_g^j C̃(ξ) + c,    (6.34)

for any ξ ∈ Ω, where we have denoted the jth Lie derivative of C̃ along the closed-loop dynamics g as L_g^j C̃, with L_g^0 C̃ = C̃, (L_g C̃)(ξ) = (∂C̃/∂ξ) g(ξ), and L_g^j C̃ = L_g(L_g^{j−1} C̃) for j ≥ 1.

Inequality (6.34) always holds when g and C̃ are p-times continuously differentiable, as it suffices to take c = max_{ξ∈Ω} L_g^p C̃(ξ) and ςj = 0 for j ∈ {0, 1, . . . , p − 1} to ensure (6.34) (recall that Ω is bounded in view of Assumption 6.5). However, this particular choice may lead to conservative results, as explained below.

Assumption 6.6 allows us to bound the evolution of C̃ by a linear differential equation for which the analytical solution can be computed, as stated in the lemma below, which directly follows from Lemma V.2 in [41].

Lemma 6.1

Under Assumption 6.6, for all solutions to (d/dt)ξ = g(ξ) with initial condition ξ0 ∈ Ω such that ξ(t, ξ0) ∈ Ω for any t ∈ [0, h], it holds that C̃(ξ(t, ξ0)) ≤ y1(t, y0) for any t ∈ [0, h], where y1 is the first component of the solution to the linear differential equation

    (d/dt) yj = yj+1,  j ∈ {1, 2, . . . , p − 1}
    (d/dt) yp = ∑_{j=0}^{p−1} ςj yj+1 + yp+1    (6.35)
    (d/dt) yp+1 = 0,

with y0 = (C̃(ξ0), L_g C̃(ξ0), . . . , L_g^{p−1} C̃(ξ0), c)⊤.

In that way, for a given state ξ0 ∈ Ω and t ∈ [0, h], if y1(t, y0) is positive, then Lemma 6.1 implies that C̃(ξ(t, ξ0)) may be positive. On the other hand, if y1(t, y0) is nonpositive, Lemma 6.1 ensures that C̃(ξ(t, ξ0)) is nonpositive. We can therefore evaluate online y1(t, y0) for t ∈ [0, h] and verify whether it takes a positive value, in which case a transmission occurs at tk; otherwise, that is not necessary. The analytic expression of y1(t, y0) is given by

    y1(t, ξ0) := Cp e^(Ap t) (C̃(ξ0), L_g C̃(ξ0), . . . , L_g^{p−1} C̃(ξ0), c)⊤,    (6.36)

with

    Cp := [1 0 . . . 0]

and

    Ap := [ 0    1    0   . . .  0     0     0
            0    0    1   . . .  0     0     0
            .                                .
            0    0    0   . . .  1     0     0
            0    0    0   . . .  0     1     0     (6.37)
            ς0   ς1   ς2  . . .  ςp−2  ςp−1  1
            0    0    0   . . .  0     0     0 ].

Hence, we define C(ξ) for any ξ ∈ Ω as

    C(ξ) := max_{t∈[0,h]} y1(t, ξ).    (6.38)

Every h units of time, the current state ξ is measured, and we verify whether C(ξ) is positive, in which case the control input is updated. Conversely, if C(ξ) is nonpositive, then the control input is not updated. It has to be noticed that we do not need to verify the triggering condition for the next ⌊T/h⌋ sampling instants following a control input update according to Assumption 6.5, which allows us to further reduce computations.

REMARK 6.6 The evaluation of y1(t, ξ) for any t ∈ [0, h] in (6.38) involves an infinite number of conditions, which may be computationally infeasible. This shortcoming can be avoided by using convex overapproximation techniques, see [42]. The idea is to overapproximate y1(t, ξ) for t ∈ [0, h]. In that way, the control input is updated whenever the derived upper bound is positive; otherwise, no update is needed. Note that these bounds can get as close as we want to y1(t, ξ), at the price of more computation at each sampling instant; see [42] for more detail.

The proposition below states that choosing h such that (6.33) holds and C as in (6.38) ensures that C̃ will be nonpositive along the solutions to (6.6)–(6.7), as desired.
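The triggering check just described amounts to propagating a small linear system. The following Python sketch (an illustration only, not part of the chapter: NumPy/SciPy are assumed, the function names are ours, the vector y0 of Lie derivatives is assumed to be computed elsewhere from the measured state, and the grid-based maximum merely approximates (6.38), cf. Remark 6.6) builds Ap as in (6.37) and evaluates y1(t, ξ0) and C(ξ):

```python
import numpy as np
from scipy.linalg import expm

def build_Ap(sigmas):
    """Matrix Ap of (6.37) for sigmas = [ς_0, ..., ς_{p-1}]: rows 1..p-1
    shift the state, row p applies the ς_j plus a 1 feeding the constant
    last state y_{p+1} = c, and the last row is zero."""
    p = len(sigmas)
    Ap = np.zeros((p + 1, p + 1))
    for j in range(p - 1):            # d/dt y_j = y_{j+1}, j = 1, ..., p-1
        Ap[j, j + 1] = 1.0
    Ap[p - 1, :p] = sigmas            # d/dt y_p = sum_j ς_j y_{j+1} + y_{p+1}
    Ap[p - 1, p] = 1.0
    return Ap

def y1(t, y0, Ap):
    """First component of e^{Ap t} y0, i.e., y1(t, ξ0) of (6.36); here
    y0 = (C̃(ξ0), L_g C̃(ξ0), ..., L_g^{p-1} C̃(ξ0), c) is assumed given."""
    return (expm(Ap * t) @ np.asarray(y0, dtype=float))[0]

def C_value(y0, Ap, h, n_grid=200):
    """Grid approximation of C(ξ) = max over t in [0, h] of y1(t, ξ),
    cf. (6.38); a control update is triggered when this value is positive."""
    return max(y1(t, y0, Ap) for t in np.linspace(0.0, h, n_grid))
```

For instance, with the values reported in Section 6.5.4 (p = 3, ς0 = −748.4986, ς1 = −1.0008, ς2 = 4.3166, c = 0), build_Ap([-748.4986, -1.0008, 4.3166]) yields the 4 × 4 matrix used in the check.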
116 Event-Based Control and Signal Processing

Proposition 6.2

Consider system (6.6)–(6.7) with h which satisfies (6.33) and C defined in (6.38), and suppose Assumptions 6.5 and 6.6 hold. Then for any solution [ξ⊤(t) τ(t)]⊤ for which (ξ0, τ0) ∈ Ω × R+, C̃(ξ(t)) ≤ 0 for any t ∈ R+.

6.5.4 Numerical Example

We consider the rigid body previously studied in [43]. The model is given by

    (d/dt) x1 = u1
    (d/dt) x2 = u2    (6.39)
    (d/dt) x3 = x1 x2,

and we consider the controller synthesized in [43] in order to stabilize the origin, which is given by

    u1 = −x1 x2 − 2x2 x3 − x1 − x3
    u2 = 2x1 x2 x3 + 3x3² − x2.    (6.40)

The implementation of the controller on a digital platform leads to the following closed-loop system:

    (d/dt) x1 = −x̂1 x̂2 − 2x̂2 x̂3 − x̂1 − x̂3
    (d/dt) x2 = 2x̂1 x̂2 x̂3 + 3x̂3² − x̂2    (6.41)
    (d/dt) x3 = x1 x2.

In order to stabilize the origin of (6.41), we take the triggering condition as in [41,44]:

    C̃(x, x̂) = |x̂ − x|² − 0.79²σ²|x|²,    (6.42)

with σ = 0.8, which is obtained using the Lyapunov function

    V(x) = ½(x1 + x3)² + ½(x2 − x3²)² + x3².    (6.43)

We design the PETC strategy by following the procedure in Section 6.5.3. Assumption 6.5 is satisfied with T = 0.08 (which has been determined numerically) for Ω = {(x, x̂) | V(x) ≤ 5}\{0}. Regarding Assumption 6.6, we note that the system vector fields and the triggering condition are smooth. In addition, (6.34) is verified with∗ p = 3, ς0 = −748.4986, ς1 = −1.0008, ς2 = 4.3166, and c = 0. We can thus apply the method presented in Section 6.5.3, as all the conditions of Proposition 6.2 are ensured. We have selected h < T. Table 6.1 provides the average intertransmission times for 100 points in Ω whose x-components are equally spaced along the sphere centered at 0 and of radius 1, with x̂(0) = x(0).

∗ SOSTools [45] was used to compute ςi and χi for i ∈ {1, 2, 3}.

TABLE 6.1
Average Intertransmission Times for 100 Points

    CETC      PETC (h = 0.01)    PETC (h = 0.02)
    0.3488    0.3440             0.3376

PETC generates intertransmission times that are smaller than in CETC, as expected. Moreover, we expect the average intertransmission time to increase when the sampling interval h decreases, as suggested by Table 6.1.

Assumption 6.4 is verified with A = {0} (in view of [18]) and Assumption 6.3 is also guaranteed. We can therefore also apply the emulation results of Section 6.5.2.

FIGURE 6.3
Evolution of C̃. (The figure plots the event-triggering condition C̃(ξ) against time t (s) for three cases: CETC, PETC with the redesigned C, and PETC with C = C̃.)

To compare the strategies obtained by Sections 6.5.2 and 6.5.3, we plotted the evolution of C̃ in both cases in Figure 6.3 with h = 0.079 and the

same initial conditions. We see that C̃ remains nonpositive all the time with the redesigned triggering condition, which implies that the periodic event-triggered controller ensures the same specification as the event-triggered controller, while C̃ often reaches positive values with the emulated triggering law.

6.6 Conclusions, Open Problems, and Outlook

In this chapter, we discussed PETC as a class of ETC strategies that combines the benefits of periodic time-triggered control and event-triggered control. The PETC strategy is based on the idea of having an event-triggering condition that is verified only periodically, instead of continuously as in most existing ETC schemes. The periodic verification allows for a straightforward implementation in standard time-sliced embedded system architectures. Moreover, the strategy has an inherently guaranteed minimum interevent time of (at least) one sampling interval of the event-triggering condition, which is easy to tune directly.

We presented an analysis and design framework for linear systems and controllers, as well as preliminary results for nonlinear systems. Although we focused in the first case on static state-feedback controllers and centralized event-triggering conditions, extensions exist to dynamic output-feedback controllers and decentralized event generators, see [19]. Also, model-based versions that can further enhance communication savings are available, see [20]. A distinctive difference between the linear and nonlinear results is that an emulation-based design for the former requires a well-designed time-triggered periodic controller (with a small L2-gain, e.g., synthesized using the tools in [38]), while the nonlinear part uses a well-designed continuous event-triggered controller as a starting point.

Several problems are still open in the area of PETC. First, obtaining tighter estimates for stability boundaries and performance guarantees (e.g., L2-gains), minimal interevent times, and average interevent times is needed. These are hard problems in general, as we have shown in this chapter that PETC strategies result in closed-loop systems that are inherently of a hybrid nature, and it is hard to obtain nonconservative analysis and design tools in this context. One recent example providing improvements for the linear PETC framework regarding the determination of L2-gains is [36], see Remark 6.1. Moreover, in [46] a new lifting-based perspective is taken on the characterization of the L2-gain of the closed-loop PETC system, and, in fact, it is shown that the L2-gain of (6.6)–(6.7) is smaller than one (and the system is internally stable) if and only if the ℓ2-gain of a corresponding discrete-time piecewise linear system is smaller than one (and the system is internally stable). This new perspective on the PETC analysis yields an exact characterization of the L2-gain (and stability) that leads to significantly less conservative conditions.

Second, in the linear context, extensions to the case of output-feedback and decentralized triggering exist, see [19], and for the nonlinear context these extensions are mostly open. Also, the consideration of PETC strategies for nonlinear systems with disturbances requires attention. Given these (and many other) open problems, it is fair to say that the system theory for ETC is far from being mature, certainly compared to the vast literature on time-triggered (periodic) sampled-data control. This calls for further theoretical research on ETC in general and PETC in particular.

Given the potential of ETC in saving valuable system resources (computational time, communication bandwidth, battery power, etc.), while still preserving important closed-loop properties, as demonstrated through various numerical examples in the literature (including the two in this chapter), it is rather striking that the number of experimental and industrial applications is still rather small. To foster the further development of ETC in the future, it is therefore important to validate these strategies in practice. Getting feedback from industry will certainly raise new important theoretical questions. As such, many challenges are ahead of us, both in theory and practice, in this fledgling field of research.

Acknowledgment

This work was supported by the Innovational Research Incentives Scheme under the VICI grant “Wireless control systems: A new frontier in automation” (No. 11382) awarded by NWO (The Netherlands Organization for Scientific Research) and STW (Dutch Science Foundation), the ANR under the grant COMPACS (ANR-13-BS03-0004-02), NSF grant ECCS-1232035, AFOSR grant FA9550-12-1-0127, NSF award 1239085, and the Australian Research Council under the Discovery Projects and Future Fellowship schemes.

Bibliography

[1] K. J. Åström and B. M. Bernhardsson. Comparison of periodic and event based sampling for first order stochastic systems. In Proceedings of the IFAC World Congress, Beijing, China, pages 301–306, July 5–9, 1999.

[2] K.-E. Årzén. A simple event-based PID controller. In Preprints IFAC World Conference, Beijing, China, volume 18, pages 423–428, July 5–9, 1999.

[3] W. P. M. H. Heemels, R. J. A. Gorter, A. van Zijl, P. P. J. van den Bosch, S. Weiland, W. H. A. Hendrix, and M. R. Vonder. Asynchronous measurement and control: A case study on motor synchronisation. Control Engineering Practice, 7:1467–1482, 1999.

[4] E. Hendricks, M. Jensen, A. Chevalier, and T. Vesterholm. Problems in event based engine control. In Proceedings of the American Control Conference, Baltimore, volume 2, pages 1585–1587, 1994.

[5] W. H. Kwon, Y. H. Kim, S. J. Lee, and K.-N. Paek. Event-based modeling and control for the burn-through point in sintering processes. IEEE Transactions on Control Systems Technology, 7:31–41, 1999.

[6] W. P. M. H. Heemels, K. H. Johansson, and P. Tabuada. An introduction to event-triggered and self-triggered control. In 51st IEEE Conference on Decision and Control (CDC), Hawaii, pages 3270–3285, December 2012.

[7] W. P. M. H. Heemels, J. H. Sandee, and P. P. J. van den Bosch. Analysis of event-driven controllers for linear systems. International Journal of Control, 81:571–590, 2008.

[8] T. Henningsson, E. Johannesson, and A. Cervin. Sporadic event-based control of first-order linear stochastic systems. Automatica, 44:2890–2895, 2008.

[9] J. Lunze and D. Lehmann. A state-feedback approach to event-based control. Automatica, 46:211–215, 2010.

[10] P. J. Gawthrop and L. B. Wang. Event-driven intermittent control. International Journal of Control, 82:2235–2248, 2009.

[11] X. Wang and M. D. Lemmon. Event-triggering in distributed networked systems with data dropouts and delays. IEEE Transactions on Automatic Control, 56:586–601, 2011.

[12] P. Tabuada. Event-triggered real-time scheduling of stabilizing control tasks. IEEE Transactions on Automatic Control, 52:1680–1685, 2007.

[13] P. G. Otanez, J. R. Moyne, and D. M. Tilbury. Using deadbands to reduce communication in networked control systems. In Proceedings of the American Control Conference, Anchorage, pages 3015–3020, May 8–10, 2002.

[14] M. Miskowicz. Send-on-delta concept: An event-based data-reporting strategy. Sensors, 6:49–63, 2006.

[15] E. Kofman and J. H. Braslavsky. Level crossing sampling in feedback stabilization under data-rate constraints. In Proceedings of the IEEE Conference on Decision & Control, pages 4423–4428, San Diego, CA, December 13–15, 2006.

[16] M. C. F. Donkers and W. P. M. H. Heemels. Output-based event-triggered control with guaranteed L∞-gain and improved and decentralised event-triggering. IEEE Transactions on Automatic Control, 57(6):1362–1376, 2012.

[17] D. Lehmann and J. Lunze. Extension and experimental evaluation of an event-based state-feedback approach. Control Engineering Practice, 19:101–112, 2011.

[18] R. Postoyan, P. Tabuada, D. Nešić, and A. Anta. A framework for the event-triggered stabilization of nonlinear systems. IEEE Transactions on Automatic Control, 60(4):982–996, 2015.

[19] W. P. M. H. Heemels, M. C. F. Donkers, and A. R. Teel. Periodic event-triggered control for linear systems. IEEE Transactions on Automatic Control, 58(4):847–861, 2013.

[20] W. P. M. H. Heemels and M. C. F. Donkers. Model-based periodic event-triggered control for linear systems. Automatica, 49(3):698–711, 2013.

[21] R. G. Sanfelice and A. R. Teel. Lyapunov analysis of sampled-and-hold hybrid feedbacks. In IEEE Conference on Decision and Control, pages 4879–4884, San Diego, CA, 2006.

[22] J. K. Yook, D. M. Tilbury, and N. R. Soparkar. Trading computation for bandwidth: Reducing communication in distributed control systems using state estimators. IEEE Transactions on Control Systems Technology, 10(4):503–518, 2002.

[23] A. Eqtami, V. Dimarogonas, and K. J. Kyriakopoulos. Event-triggered control for discrete-time systems. In Proceedings of the American Control Conference (ACC), pages 4719–4724, Baltimore, MD, 2010.

[24] R. Cogill. Event-based control using quadratic approximate value functions. In Joint IEEE Conference on Decision and Control and Chinese Control Conference, pages 5883–5888, Shanghai, China, December 15–18, 2009.

[25] L. Li and M. Lemmon. Weakly coupled event triggered output feedback system in wireless networked control systems. In Allerton Conference on Communication, Control and Computing, Urbana, IL, pages 572–579, 2011.

[26] A. Molin and S. Hirche. Structural characterization of optimal event-based controllers for linear stochastic systems. In Proceedings of the IEEE Conference on Decision and Control, Atlanta, pages 3227–3233, December 15–17, 2010.

[27] D. Lehmann. Event-Based State-Feedback Control. Logos Verlag, Berlin, 2011.

[28] R. Postoyan, A. Anta, W. P. M. H. Heemels, P. Tabuada, and D. Nešić. Periodic event-triggered control for nonlinear systems. In IEEE Conference on Decision and Control (CDC), Florence, Italy, pages 7397–7402, December 10–13, 2013.

[29] R. Goebel, R. G. Sanfelice, and A. R. Teel. Hybrid Dynamical Systems. Princeton University Press, Princeton, NJ, 2012.

[30] X. Wang and M. Lemmon. Self-triggered feedback control systems with finite-gain L2 stability. IEEE Transactions on Automatic Control, 45:452–467, 2009.

[31] M. Velasco, P. Marti, and E. Bini. On Lyapunov sampling for event-driven controllers. In Proceedings of the IEEE Conference on Decision & Control, Shanghai, China, pages 6238–6243, December 15–18, 2009.

[32] X. Wang and M. D. Lemmon. Event design in event-triggered feedback control systems. In Proceedings of the IEEE Conference on Decision & Control, Cancun, Mexico, pages 2105–2110, December 9–11, 2008.

[33] M. Mazo Jr., A. Anta, and P. Tabuada. An ISS self-triggered implementation of linear controllers. Automatica, 46:1310–1314, 2010.

[34] A. van der Schaft. L2-Gain and Passivity Techniques in Nonlinear Control. Springer-Verlag, Berlin Heidelberg, 2000.

[35] J. Löfberg. YALMIP: A toolbox for modeling and optimization in MATLAB. In Proceedings of the CACSD Conference, Taipei, Taiwan, pages 284–289, 2004.

[36] S. J. L. M. van Loon, W. P. M. H. Heemels, and A. R. Teel. Improved L2-gain analysis for a class of hybrid systems with applications to reset and event-triggered control. In Proceedings of the IEEE Conference on Decision and Control, LA, pages 1221–1226, December 15–17, 2014.

[37] FlexRay Consortium. FlexRay communications system protocol specification, Version 2.1, 2005.

[38] T. Chen and B. A. Francis. Optimal Sampled-Data Control Systems. Springer-Verlag, 1995.

[39] A. R. Teel and L. Praly. A smooth Lyapunov function from a class-KL estimate involving two positive semidefinite functions. ESAIM: Control, Optimisation and Calculus of Variations, 5(1):313–367, 2000.

[40] Y. Lin, E. D. Sontag, and Y. Wang. A smooth converse Lyapunov theorem for robust stability. SIAM Journal on Control and Optimization, 34(1):124–160, 1996.

[41] A. Anta and P. Tabuada. Exploiting isochrony in self-triggered control. IEEE Transactions on Automatic Control, 57(4):950–962, 2012.

[42] W. P. M. H. Heemels, N. van de Wouw, R. Gielen, M. C. F. Donkers, L. Hetel, S. Olaru, M. Lazar, J. Daafouz, and S. I. Niculescu. Comparison of overapproximation methods for stability analysis of networked control systems. In Proceedings of the Conference on Hybrid Systems: Computation and Control, Stockholm, Sweden, pages 181–190, April 12–16, 2010.

[43] C. I. Byrnes and A. Isidori. New results and examples in nonlinear feedback stabilization. Systems & Control Letters, 12(5):437–442, 1989.

[44] A. Anta and P. Tabuada. To sample or not to sample: Self-triggered control for nonlinear systems. IEEE Transactions on Automatic Control, 55(9):2030–2042, 2010.

[45] S. Prajna, A. Papachristodoulou, P. Seiler, and P. A. Parrilo. SOSTOOLS: Sum of squares optimization toolbox for MATLAB. http://www.cds.caltech.edu/sostools, 2004.

[46] W. P. M. H. Heemels, G. Dullerud, and A. R. Teel. A lifting approach to L2-gain analysis of periodic event-triggered and switching sampled-data control systems. In Proceedings of the IEEE Conference on Decision and Control (CDC), Osaka, Japan, 2015.
7
Decentralized Event-Triggered Controller Implementations

Manuel Mazo Jr.


Delft University of Technology
Delft, Netherlands

Anqi Fu
Delft University of Technology
Delft, Netherlands

CONTENTS
7.1 Introduction 121
7.2 Preliminaries 122
7.3 System Architecture and Problem Statement 124
7.4 Decentralized Triggering of Synchronous Updates 125
    7.4.1 Adaption Rules 126
    7.4.2 Listening Time Reduction and Delays 129
7.5 Decentralized Triggering of Asynchronous Updates 130
    7.5.1 Time between Actuation Updates and Delays 135
7.6 Energy Consumption Simulations 136
    7.6.1 TDMA Schemes 137
        7.6.1.1 Synchronous Event Triggering 137
        7.6.1.2 Asynchronous Event Triggering 139
    7.6.2 Controller Implementations 140
        7.6.2.1 Synchronous Event-Triggered Controller 140
        7.6.2.2 Asynchronous Event-Triggered Controller 141
    7.6.3 Results 141
        7.6.3.1 Parameters from TI CC2420 142
        7.6.3.2 Parameters from TI CC2530 143
7.7 Discussion 143
Bibliography 149

ABSTRACT The triggering conditions found in the event-triggered control literature usually depend on a relation between the whole vector of states and the last measurements employed in the control. However, in networked implementations of control systems, it is often the case that sensors are not collocated with each other, therefore impeding the application of such event-triggered control techniques. In this chapter, motivated by the use of wireless communication networks in control systems’ implementations, we present two alternative solutions to this problem. We propose two types of decentralized conditions. The first technique requires synchronous transmission of measurements from all sensors, while the second proposes the use of asynchronous measurements. Furthermore, we introduce some naive communication protocols that are used, together with a realistic model of energy consumption on wireless sensor networks, to assess the energy usage of these two types of implementations.

7.1 Introduction

In the current chapter, we address the implementation of controllers in which the updates of the control action are event triggered. We consider architectures in which the sensors providing measurements to the controller are not collocated (neither among them nor with


the controller). Most event-triggered implementations For a function f : Rn → Rn , we denote by f i : Rn → R


of controllers rely on triggering conditions that depend the function whose image is the projection of f on its
on the whole state vector, and thus, in the scenario, ith coordinate, that is, f i ( x ) = Πi ( f ( x )). Consequently,
we consider monitoring the triggering conditions is given a Lipschitz continuous function f , we also denote
either unfeasible or requires a prohibitive amount of by L f i the Lipschitz constant of f i . A function γ : [0, a[
communication exchange between the different sensors → R0+ , is of class K if it is continuous, strictly increas-
and the controller. ing, and γ(0) = 0; if furthermore a = ∞ and γ(s) → ∞
The study of the proposed scenarios is mainly moti- as s → ∞, then γ is said to be of class K∞ . A continu-
vated by networked implementations of control loops, ous function β : R0+ × R0+ → R0+ is of class KL if β(·, τ)
in which a number of different sensors must pro- is of class K for each fixed τ ≥ 0 and for each fixed s ≥ 0,
vide measurements to recompute the control action. In β(s, τ) is decreasing with respect to τ and β(s, τ) → 0
such scenarios, in order to allow more control loops to for τ → ∞. Given an essentially bounded function δ :
share the same resources, access to the communication R0+ → Rm , we denote by δ∞ the L∞ norm, that is,
medium (regardless of the use of a wired or wireless δ∞ = ess supt∈R+ {|δ(t)|}. We also use the shorthand
0
physical layer) should be reduced. Furthermore, if the
V̇ ( x, u) to denote the Lie derivative ∇V · f ( x, u), and ◦ to
physical layer of the communication medium is a wire-
denote function composition, that is, f ◦ g(t) = f ( g(t)).
less channel, reducing its use has a direct impact on
The notion of Input-to-State stability (ISS) [3] will be
the energy consumption of each of the different nodes
central to our discussion:
involved: sensors, actuators, and computation nodes
(controllers). It is important to remark that in such wire-
DEFINITION 7.1: Input-to-state stability A control
less implementations, it is equally (if not more) impor-
system ξ̇ = f (ξ, υ) is said to be (uniformly globally)
tant to reduce the amount of time employed in listening
input-to-state stable (ISS) with respect to υ if there exist
for messages in the channel as in transmitting messages.
β ∈ KL, γ ∈ K∞ such that for any t0 ∈ R0+ the following
Following this line of reasoning, after formalizing the
holds:
system models and communication architecture under
consideration, we introduce a decentralized strategy
∀ξ(t0 ) ∈ Rn , υ∞ < ∞,
aimed at the reduction of message transmissions in the
network. We discuss briefly strategies that could be |ξ(t)| ≤ β(|ξ(t0 )|, t − t0 ) + γ(υ∞ ), ∀t ≥ t0 .
employed to reduce the amount of time that the differ-
ent nodes of the implementation need to spend listening Rather than using this definition, we use an alternative
for messages from other nodes. Next, we present an characterization of ISS systems by virtue of the follow-
approach that addresses directly the reduction of lis- ing fact: A system is ISS if and only if there exists an ISS
tening time from nodes. Finally, we compare the two Lyapunov function [3]:
techniques through some examples, discuss the benefits
and shortcomings of the two techniques, and conclude DEFINITION 7.2: ISS Lyapunov function A continu-
the chapter with some ideas for future and ongoing ously differentiable function V : Rn → R0+ is said to be
extensions of these techniques. an ISS Lyapunov function for the closed-loop system
ξ̇ = f (ξ, υ) if there exist class K∞ functions α, α, αv , and
αe such that for all x ∈ Rn and u ∈ Rm the following is
satisfied:

7.2 Preliminaries α(| x |) ≤ V ( x ) ≤ α(| x |),

We denote the positive real numbers by R+ and by R0+ =


∇V · f ( x, u) ≤ −αv ◦ V ( x ) + αe (|u|). (7.1)
R+ ∪ {0}. We use N0 to denote the natural numbers
including zero and N+ = N0 \{0}. The usual Euclidean Finally, we also employ the following Lemma in some
of our arguments:
(l2 ) vector norm is represented by | · |. When applied
to a matrix | · | denotes the l2 induced matrix norm.
Lemma 7.1: [16]
A symmetric matrix P ∈ Rn×n is said to be positive
definite, denoted by P > 0, whenever x T Px > 0 for all
x = 0, x ∈ Rn . By λm ( P), λ M ( P) we denote the min- Given two K∞ functions α1 and α2 , there exists some
imum and maximum eigenvalues of P, respectively. constant L < ∞ such that
A function f : Rn → Rm is said to be locally Lipschitz
if for every compact set S ⊂ Rn there exists a constant α1 ( s )
lim sup ≤ L,
L ∈ R0+ such that | f ( x ) − f (y)| ≤ L| x − y|, ∀ x, y ∈ S. s →0 α2 ( s )

if and only if for all S < ∞ there exists a positive κ < ∞ such that

    ∀s ∈ ]0, S], α1(s) ≤ κα2(s).

PROOF The necessity side of the equivalence is trivial; thus, we concentrate on the sufficiency part. By assumption, we know that the limit superior of the ratio of the functions tends to L as s → 0, and therefore, ∀ε > 0, ∃δ > 0 such that α1(s)/α2(s) < L + ε for all s ∈ ]0, δ[. As α1, α2 ∈ K∞, we know that in any compact set excluding the origin, the function α1(s)/α2(s) is continuous and therefore attains a maximum, implying that there exists a positive M ∈ R+ such that α1(s)/α2(s) < M, ∀s ∈ [δ, S], 0 < δ < S. Putting these two results together, we have that ∀s ∈ ]0, S], S < ∞, α1(s) ≤ κα2(s), where κ = max{L + ε, M}. □

Let us quickly revisit the event-triggering mechanism proposed in [21], which serves as a starting point for the techniques we propose in what follows. Consider a control system:

    ξ̇(t) = f(ξ(t), υ(t)), ∀t ∈ R0+,    (7.2)

where ξ : R0+ → Rn and υ : R0+ → Rm. Assume the following:

ASSUMPTION 7.1: Lipschitz continuity The functions f and k, defining the dynamics and controller of the system, are locally Lipschitz.

This assumption guarantees the (not necessarily global) existence and uniqueness of solutions of the closed-loop system; and:

ASSUMPTION 7.2: ISS w.r.t. measurement errors There exists a controller k : Rn → Rm for system (7.2) such that the closed-loop system

    ξ̇(t) = f(ξ(t), k(ξ(t) + ε(t))), ∀t ∈ R0+,    (7.3)

is ISS with respect to measurement errors ε. Furthermore, assume the knowledge of an ISS-Lyapunov function V certifying that the system is ISS, that is, satisfying (7.1).

Let the control law k(ξ) be applied in a sample-and-hold fashion:

    ξ̇(t) = f(ξ(t), υ(t)),
    υ(t) = k(ξ(tk)), t ∈ [tk, tk+1[,    (7.4)

where {tk}k∈N0+ is a divergent sequence of update times. The stability of the resulting hybrid system depends now on the selection of the sequence of update times {tk}k∈N0+. In traditional time-triggered implementations, the sequence is simply defined by tk = kT, where T is a predefined sampling time. In contrast, an event-triggered controller implementation defines the sequence implicitly as the time instants at which some condition is violated, which in general results in aperiodic schemes of execution. The following is a typical scheme to design triggering conditions resulting in asymptotic stability of the event-triggered closed loop.

Define an auxiliary signal ε : R0+ → Rn as ε(t) = ξ(tk) − ξ(t) for t ∈ [tk, tk+1[, and regard it as a measurement error. By doing so, one can rewrite (7.4) for t ∈ [tk, tk+1[ as

    ξ̇(t) = f(ξ(t), k(ξ(t) + ε(t))),
    ε̇(t) = −f(ξ(t), k(ξ(t) + ε(t))), ε(tk) = 0.

Hence, as (7.3) is ISS with respect to measurement errors ε, by enforcing

    γ(|ε(t)|) ≤ ρα(|ξ(t)|), ∀t > 0, ρ ∈ ]0, 1[,    (7.5)

one can guarantee that

    (∂V/∂x) f(x, k(x + e)) ≤ −(1 − ρ)α(|x|), for all x, e ∈ Rn with γ(|e|) ≤ ρα(|x|),

and asymptotic stability of the closed loop follows. Further assume that:

ASSUMPTION 7.3 The operation of system (7.4) is confined to some compact set S ⊆ Rn, and α−1 and γ are Lipschitz continuous on S.

Then, the inequality (7.5) can be replaced by the simpler inequality |ε(t)|² ≤ σ|ξ(t)|², for a suitably chosen σ > 0. We refer the interested reader to [21] for details on how to select σ. Hence, if the sequence of update times {tk}k∈N0+ is such that

    |ε(t)|² ≤ σ|ξ(t)|², t ∈ [tk, tk+1[,    (7.6)

the sample-and-hold implementation (7.4) is guaranteed to render the closed-loop system asymptotically stable. Note that enforcing this condition is achieved by simply closing the loop, that is, resetting to zero ε(tk) = ξ(tk) − ξ(tk) = 0, at the instants of time:

    tk = min{t > tk−1 | |ε(t)|² ≥ σ|ξ(t)|²}.    (7.7)

It was also shown in [21] (Theorem 3.1) that there is a non-zero minimum time that must always elapse after a triggering event until the next event is triggered, and that lower bounds can be explicitly computed. We denote a computable lower bound for such minimum time by τmin. Furthermore, delays between the event
124 Event-Based Control and Signal Processing

generation and the application of an updated control signal can be accommodated by making the triggering conditions more conservative. We reproduce here Corollary IV.1 (with adjusted notation) from [21], which summarizes these results in the case of linear systems∗:

Corollary 7.1: [21]

Let ξ̇ = Aξ + Bυ be a linear control system, let υ = Kξ be a linear control law rendering the closed-loop system globally asymptotically stable, and assume that Δτ = 0. For any initial condition in Rn, the interevent times {tk+1 − tk}k∈N implicitly defined by the execution rule |ε| = √σ |ξ| are lower bounded by the time τmin satisfying

    φ(τmin, 0) = √σ,

where φ(t, φ0) is the solution of

    φ̇ = |A + BK| + (|A + BK| + |BK|)φ + |BK|φ²,

satisfying φ(0, φ0) = φ0. Furthermore, for Δτ > 0 and for any desired σ̌ > 0, the execution rule |ε| = √σ̌ |ξ| with

    Δτ|[A + BK | BK]|(√σ + 1) / (1 − Δτ|[A + BK | BK]|(√σ + 1)) ≤ √σ̌ ≤ φ(−Δτ, √σ),

enforces for any k ∈ N and for any t ∈ [tk + Δτ, tk+1 + Δτ[ the following inequality:

    |ε(t)| ≤ √σ |ξ(t)|,

with interexecution times bounded by τmin = Δτ + τ, where time τ satisfies

    φ( τ, Δτ|[A + BK | BK]|(√σ + 1) / (1 − Δτ|[A + BK | BK]|(√σ + 1)) ) = √σ̌.

For completeness, we also provide the solution to the differential Riccati equation for φ:

    φ(t, φ0) = −(1/(2α2)) (α1 − Θ tan(Θ/2 (t + C))),

    C = (2/Θ) arctan((2α2φ0 + α1)/Θ), Θ = √(4α0α2 − α1²),

where α0 = |A + BK|, α1 = |A + BK| + |BK|, and α2 = |BK|.

∗Similar results can be obtained for nonlinear input affine systems under mild assumptions on the flow (e.g., being norm bounded from above). For details see, for example, [6].

7.3 System Architecture and Problem Statement

We consider systems consisting of three types of nodes interconnected through a shared communications channel: sensors, actuators, and controllers. Sensors and actuators are in direct contact with the plant to be controlled, the former to acquire measurements of the state, and the latter to set the values of the controllable inputs of the plant. We consider only centralized control computing situations, that is, each control loop has actions that are computed at a single controller node that collects data from all sensors and sends actions to all actuators. Additionally, a network may contain relay nodes employed to enlarge the area covered by the network. A graphical representation of one such generic network is given in Figure 7.1, in which sensor and actuator nodes are indicated by circles and diamonds, respectively, next to which their respective sensed or actuated signal is indicated, a central computing node is indicated with a box, and the remaining nodes are simple relay nodes to maintain connectivity. We have in mind, thus, tree-shaped networks in which sensors and actuators are leaf nodes, the controller sits at the root, and all other intermediate nodes can only be relay nodes. The reason for this last restriction is to allow sensor (and possibly actuator) nodes to go to an idle state, in which they do not listen or transmit, more often than other nodes in the network without disrupting the network connectivity. This assumption is related to the energy constraints that we describe in the next two paragraphs.

[Figure 7.1: a tree-shaped network with sensor nodes ξ1, . . . , ξ5, actuator nodes υ1, υ2, a central controller node, and relay nodes maintaining connectivity.]
FIGURE 7.1
A generic control network, with a centralized computing (controller) node.

The goal of the techniques that will be described in the remainder of the chapter is to reduce the amount of communication required between the different nodes of the network to stabilize a plant. Attaining such a goal is of interest to reduce congestion on the shared network, thus freeing it up for other communication tasks. Additionally, we consider situations in which the shared network is constructed over a wireless channel. In this case, the reduction of communication
between nodes could be exploited to also reduce energy consumption on the network nodes, some of which may be battery powered, and thus extend the network's lifetime.

We consider that actuation nodes, controller nodes, and possible relay nodes are not as energy constrained as sensor nodes, and thus concentrate our efforts on energy savings at the sensor nodes. This is justified by the following considerations (taken as architectural assumptions): sensor nodes are often located in hard-to-reach areas, which limits their possible size and thus the size of the batteries supplying energy to them; actuators, while they can also have a restrictive placing, need larger energy reserves to act on the plant, which usually renders any communication consumption negligible; other nodes of the network necessary to relay packets can be equipped with sufficient energy reserves or even wired to a power source.

An important remark is due at this point: the techniques that we present in the following sections are limited to implementations of state-feedback controllers, and thus the following is one of the limiting assumptions of both techniques we present:

ASSUMPTION 7.4: State measurements Each (and all) of the states of the system are directly measurable by some sensor.

Note that this does not restrict each sensor node to sense only a single entry of the state vector.

Our objective is to propose controller implementations for systems of the form (7.2) satisfying Assumptions 7.4, 7.2, and 7.1. In particular, we are interested in finding stabilizing sample-and-hold implementations of a controller υ(t) = k(ξ(t)) such that updates can be performed transmitting aperiodic measurements from sensors, and if possible, we would like to do so while reducing the amount of necessary transmissions. This problem can be formalized as follows:

PROBLEM 7.1 Given system (7.2) and a controller k : Rn → Rm satisfying Assumptions 7.4, 7.2, and 7.1, find sequences of update times {t^i_{r_i}}, r_i ∈ N0, for each sensor i = 1, . . . , n such that a sample-and-hold controller implementation:

    υj(t) = kj(ξ̂(t)),    (7.8)
    ξ̂i(t) = ξi(t^i_{r_i}), t ∈ [t^i_{r_i}, t^i_{r_i+1}[, ∀i = 1, . . . , n,    (7.9)

renders the closed-loop system uniformly globally asymptotically stable (UGAS), that is, such that there exists β ∈ KL satisfying for any t0 ≥ 0:

    ∀ξ(t0) ∈ Rn, |ξ(t)| ≤ β(|ξ(t0)|, t − t0), ∀t ≥ t0.

Note that a solution to this problem is already provided by the technique reviewed in Section 7.2, with the additional Assumption 7.3. However, such a solution requires that some node of the network has access to the full state vector in order to check the condition (7.7), which establishes when the controller needs to be updated with fresh measurements. In the architectures we are considering, the main challenge is therefore how to "decentralize" the decision of triggering new updates among the different sensors of the network, which only have a partial view of the full state of the system. Another similar problem, somewhat complementary to the one we address here, is to coordinate the triggering of different subsystems weakly coupled with each other. On that topic we refer the interested reader to the work of Wang and Lemmon in [24] and references therein.

Let us summarize this discussion by introducing the following additional assumption and a concrete reformulation of the problem we solve in the remainder.

ASSUMPTION 7.5: Decentralized sensing There is no single sensor in the network capable of measuring the whole state vector of the system (7.2).

PROBLEM 7.2 Solve Problem 7.1 under the additional Assumption 7.5.

In the remainder of this chapter, we provide two alternative solutions to the problem we just described: in Section 7.4 a solution first introduced in [17] is presented in which {t^i_{r_i}} = {t^j_{r_j}} for all i, j ∈ [1, n], that is, measurements from all different sensors are acquired synchronized in time; Section 7.5 provides a solution, originally described in [16], in which this synchronicity requirement for the state vector employed in the controller computation is removed.

7.4 Decentralized Triggering of Synchronous Updates

For simplicity of presentation, we will consider in what follows a scenario in which each state variable is measured by a different sensor. Nonetheless, we will discuss at the end of the section how to apply these same ideas to more generic decentralized scenarios.

As discussed in the previous section, because of Assumption 7.5 no sensor can evaluate condition (7.6), since it requires knowledge of the full state vector ξ(t). The solution we propose is to employ a set of simple conditions that each sensor can check locally to decide when to trigger a controller update.
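The sample-and-hold structure of (7.8) and (7.9), a controller that holds the last received value of each state entry with its own per-sensor update times, can be sketched in a few lines. The following is a minimal illustration, not from the chapter; the class and method names are hypothetical, and a linear gain stands in for a general k:

```python
# Minimal sketch of the per-sensor hold (7.9): the controller stores, for each
# sensor i, the last received sample xhat[i], and evaluates the feedback law
# on this (possibly asynchronously updated) vector, as in (7.8).
class SampleAndHoldController:
    def __init__(self, n, gains):
        self.xhat = [0.0] * n   # held state estimate, one entry per sensor
        self.gains = gains      # rows of a linear gain K, for illustration

    def receive(self, i, value):
        # called when sensor i transmits at one of its update times t^i_{r_i}
        self.xhat[i] = value

    def output(self):
        # u_j(t) = k_j(xhat(t)); here k is linear for simplicity
        return [sum(k_ji * x_i for k_ji, x_i in zip(row, self.xhat))
                for row in self.gains]

ctrl = SampleAndHoldController(2, [[1.0, -4.0]])
ctrl.receive(0, 0.5)    # sensor 0 transmits
ctrl.receive(1, -0.2)   # sensor 1 transmits, possibly at a different time
out = ctrl.output()
print(out)
```

In the synchronous scheme of Section 7.4 all entries of `xhat` are refreshed at the same instants; in the asynchronous scheme of Section 7.5 each `receive` call may arrive at a different time.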
Then, whenever such a triggering event happens at one sensor, the transmission of fresh measurements from all the sensors to the controller is initiated.

To show how to arrive at the local triggering conditions, let us start by introducing a set of parameters θ1, θ2, . . . , θn ∈ R such that ∑_{i=1}^{n} θi = 0. Employing these parameters, one can rewrite inequality (7.6) as

    ∑_{i=1}^{n} (ε_i²(t) − σξ_i²(t)) ≤ 0 = ∑_{i=1}^{n} θi,

where εi and ξi denote the ith coordinates of ε and ξ, respectively. Then, observing the trivial implication

    ⋀_{i=1}^{n} [ε_i²(t) − σξ_i²(t) ≤ θi] ⇒ |ε(t)|² ≤ σ|ξ(t)|²,    (7.10)

suggests the use of

    ε_i²(t) − σξ_i²(t) ≤ θi,    (7.11)

as local event-triggering conditions.

Employing these local conditions (7.11), whenever one of them reaches equality the controller is recomputed. If the time elapsed between two events generated by the proposed scheme is smaller than the minimum time τmin between updates of the centralized event-triggered implementation (see the end of Section 7.2), the second event is discarded and the controller update is scheduled τmin units of time after the previous update. Waiting until τmin time has elapsed is justified because our implementation is merely trying to mimic a centralized triggering generation, but as will be discussed, the decentralization is achieved at the cost of being conservative.

This decentralization approach is in general conservative, meaning that times between updates will be shorter than in the centralized case. This conservativeness stems from the fact that (7.10) is not an equivalence but only an implication. The reader might wonder what the purpose of introducing the vector of parameters θ = [θ1 θ2 . . . θn]ᵀ is. In fact, the main purpose of these parameters is to aid in reducing the mentioned conservatism and thus reducing utilization of the communication network. To achieve this reduction, we allow the vector θ to change every time the control input is updated. To reflect the time-varying nature of the parameters, from here on we show explicitly this time dependence of θ by writing θ(k) to denote its value between the update instants tk and tk+1. Note that, regardless of this time-varying behavior, as long as θ satisfies ∑_{i=1}^{n} θi(k) = 0, the stability of the closed loop is guaranteed independently of the specific value that θ takes and the rules used to update θ. Some possible rules for the adaption of θ aiming at reducing the conservativeness of the decentralization are discussed in the next subsection.

The previous discussion is now summarized in the following proposition:

Proposition 7.1: [17]

Let Assumptions 7.1, 7.2, and 7.3 hold. For any choice of θ satisfying

    ∑_{i=1}^{n} θi(k) = 0, ∀k ∈ N0+,

the sequence of update times {tk}k∈N0+ given by

    tk+1 = tk + max{τmin, min_{i=1,...,n} τi(ξ(tk))},
    τi(ξ(tk)) = min{τ ∈ R0+ | ε_i²(tk + τ) − σξ_i²(tk + τ) = θi(k)},

renders the system (7.4) asymptotically stable.

7.4.1 Adaption Rules

With the exception of some very special types of systems [7],∗ finding a value of θ that for a given initial condition maximizes the time until the next event is generated is a nontrivial problem. Thus, rather than providing any optimal solution, we suggest a family of heuristics to adjust the vector θ whenever the control input is updated.

∗In that paper, the optimal values of θ are computed for diagonalized linear systems for the purpose of self-triggered control; see, for example, [6,15] for an introduction to that related topic.

We need to introduce a new concept:

DEFINITION 7.3: Decision gap Let the decision gap at sensor i at time t ∈ [tk, tk+1[ be defined as

    Gi(t) = ε_i²(t) − σξ_i²(t) − θi(k).

The family of heuristics is parametrized by an equalization time te and an approximation order q, and we attempt to equalize the decision gaps at time te.

For the equalization time te : N0 → R+, we suggest the use of one of the following two choices:

• Constant and equal to the minimum time between controller updates: te(k) = τmin
• Previous time between updates: te(k) = tk − tk−1

The approximation order is the order of the Taylor expansion that is used to estimate the decision gap at
the equalization time te:

    Ĝi(tk + te) = ε̂_i²(tk + te) − σξ̂_i²(tk + te) − θi(k),

where for t ∈ [tk, tk+1[

    ξ̂i(t) = ξi(tk) + ξ̇i(tk)(t − tk) + (1/2)ξ̈i(tk)(t − tk)² + · · · + (1/q!)ξ_i^{(q)}(tk)(t − tk)^q,
    ε̂i(t) = 0 − ξ̇i(tk)(t − tk) − (1/2)ξ̈i(tk)(t − tk)² − · · · − (1/q!)ξ_i^{(q)}(tk)(t − tk)^q,

using the fact that ε̇ = −ξ̇ and ε(tk) = 0.

Once a choice of equalization time te and approximation order q is selected, one can compute the vector θ(k) ∈ Rn to satisfy the following conditions:

    Ĝi(tk + te) = Ĝj(tk + te), ∀i, j ∈ {1, 2, . . . , n},
    ∑_{i=1}^{n} θi(k) = 0.    (7.12)

Note that solving for θ, once the estimates ξ̂ and ε̂ have been computed, only requires solving a system of n linear equations:

    ⎡ 1 −1  0  0 · · ·  0 ⎤ ⎡ θ1(k)   ⎤   ⎡ δ12(tk + te)     ⎤
    ⎢ 0  1 −1  0 · · ·  0 ⎥ ⎢ θ2(k)   ⎥   ⎢ δ23(tk + te)     ⎥
    ⎢ ⋮       ⋱         ⋮ ⎥ ⎢ ⋮       ⎥ = ⎢ ⋮                ⎥ ,    (7.13)
    ⎢ 0  0  0 · · ·  1 −1 ⎥ ⎢ θn−1(k) ⎥   ⎢ δ(n−1)n(tk + te) ⎥
    ⎣ 1  1  1 · · ·  1  1 ⎦ ⎣ θn(k)   ⎦   ⎣ 0                ⎦

    δij(t) = (ε̂_i²(t) − σξ̂_i²(t)) − (ε̂_j²(t) − σξ̂_j²(t)).

A solution θ to these equations could result in −σξ_i²(tk) > θi(k) holding for some sensor i. Employing such a θ results in an immediate violation of the triggering condition at t = tk, that is, τi(ξ(tk)) would be zero. Thus, in practice, when one encounters this situation, θ is reset to some default value such as the zero vector. Note that we propose that the computation of θ be made by the controller node for convenience, as it has access to ξ(tk). Nonetheless, if any other node of the network also had access to the whole set of sensor measurements at each controller update, such a node could also compute the updates of θ.

[Figure 7.2: decision-gap trajectories g1(t) = ε1²(t) − σξ1²(t) = G1(t) + θ1(k) and g2(t) = ε2²(t) − σξ2²(t) = G2(t) + θ2(k) over the update times t0, t1, t2, with thresholds θ1(0), θ2(0), θ1(1), θ2(1), equalization target t′2 = t1 + te = t1 + (t1 − t0), and ĝ1(t′2) − θ1(1) = ĝ2(t′2) − θ2(1).]
FIGURE 7.2
θ adaption.

Figure 7.2 shows a cartoon illustration of the operation of this heuristic algorithm. In the figure, θ is initialized to a zero value, and after t1 − t0 seconds, the second sensor has reached equality on its triggering condition. To compute the next values of θ, the controller computes the estimates ε̂_i²(t) − σξ̂_i²(t), i = 1, 2, at time t′2 = 2t1 − t0, that is, with te equal to the last time between updates t1 − t0. The estimates are computed through a first-order Taylor approximation, that is, q = 1, as represented by the dash-dotted lines approximating the actual trajectories of the decision gaps. Then, the new values of θ1 and θ2 are computed so that the estimates of the decision gap at t′2 are equal. For the case in which the value θ = 0 is not updated, a vertical dashed line indicates the time at which the next update would be triggered; this can be seen to occur much earlier than the time t2 at which the next update is triggered when using the adapted θ values.

The choice of te and q has a great impact on the amount of actuation required: equalizing at times tk + te as close as possible to the ideal next update time tk+1 (the one that would result from a centralized event-triggered implementation) provides larger times between updates; but a large te leads, in general, to poor estimates of the state of the plant at time tk + te and thus degrades the equalization of the gaps. Thus, employing a small te and simultaneously keeping tk + te close to the ideal tk+1 can be an impossible task, namely, when the time between controller updates is large. The effect of the order of approximation q depends heavily on te, and in order to get an improvement of the estimates one might need to use prohibitively large q values. A heuristic requiring relatively low computational effort (as long as q is not too large) that provides good results in several case studies performed by the authors is given by Algorithm 7.1.
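Before stating the algorithm, it is worth noting that, because of its bidiagonal structure, (7.13) needs no general linear solver: writing ĝi = ε̂i² − σξ̂i², the equal-gap conditions together with ∑θi(k) = 0 are satisfied exactly by centering the ĝi around their mean. A minimal sketch (illustrative numbers only, not from the chapter):

```python
# Closed-form solution of (7.13): with g_i = ehat_i^2 - sigma * xhat_i^2
# evaluated at t_k + t_e, equal decision gaps Ghat_i = g_i - theta_i(k)
# together with sum_i theta_i(k) = 0 give theta_i(k) = g_i - mean(g).
def solve_theta(ehat, xhat, sigma):
    g = [e * e - sigma * x * x for e, x in zip(ehat, xhat)]
    mean = sum(g) / len(g)
    return g, [gi - mean for gi in g]

# made-up predicted values for a 3-sensor example
g, theta = solve_theta(ehat=[0.3, 0.1, 0.2], xhat=[1.0, 2.0, 0.5], sigma=0.05)
print(theta)
```

One can verify that this θ satisfies every row of (7.13): each difference θi − θi+1 equals δ_{i,i+1} = ĝi − ĝi+1, and the last row enforces the zero-sum constraint.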
Algorithm 7.1: The θ-adaptation heuristic algorithm.

    Input: q, tk−1, tk, τmin, ξ(tk)
    Output: θ(k)
    te := tk − tk−1;
    Compute θ(k) according to Equation 7.12;
    if ∃ i ∈ {1, 2, . . . , n} such that −σξ_i²(tk) > θi(k) then
        te := τmin;
        Compute θ(k) according to Equation 7.12;
        if ∃ i ∈ {1, 2, . . . , n} such that −σξ_i²(tk) > θi(k) then
            θ(k) := 0;
        end
    end

We assumed at the beginning of the section that each node measures a single state of the system. In practice, one may encounter scenarios in which one sensor has access to several (but not all) states of the plant. Then, one can apply the same approach with a slight modification to the local triggering rules as follows:

    |ε̄i(t)|² − σ|ξ̄i(t)|² ≤ θi,

where ξ̄i(t) is now the vector of states sensed at node i, ε̄i(t) is its corresponding error vector, and θi is a scalar.

Let us illustrate the effect of the proposed decentralization of the event-triggering mechanism with an example. We only provide a brief description of the system and the results; for more details, we refer the reader to the original sources of this example in [12] and [17].

EXAMPLE 7.1 Consider the quadruple-tank system as described in [12]. This is a multi-input multi-output nonlinear system consisting of four water tanks as shown in Figure 7.3. The state of the plant is composed of the water levels of the tanks: ξ1, ξ2, ξ3, and ξ4. Two inputs are available: υ1 and υ2, the input flows to the tanks. The fixed parameters γ1 and γ2 control how the flow is divided into the four tanks. The goal is to stabilize the levels ξ1 and ξ2 of the lower tanks at some specified values x1∗ and x2∗.

[Figure 7.3: two upper tanks (levels ξ3, ξ4) draining into two lower tanks (levels ξ1, ξ2), with input flows υ1, υ2 split by the valve parameters γ1, γ2.]
FIGURE 7.3
The quadruple-tank system.

The system dynamics are given by the equation

    ξ̇(t) = f(ξ(t)) + gc υ,

with

    f(x) = [ −(a1/A1)√(2gx1) + (a3/A1)√(2gx3) ;
             −(a2/A2)√(2gx2) + (a4/A2)√(2gx4) ;
             −(a3/A3)√(2gx3) ;
             −(a4/A4)√(2gx4) ],

    gc = [ γ1/A1, 0 ; 0, γ2/A2 ; 0, (1 − γ2)/A3 ; (1 − γ1)/A4, 0 ],

and g denoting gravity's acceleration and Ai and ai denoting the cross sections of the ith tank and of its outlet hole, respectively.

The controller design from [12] extends the dynamics with two additional auxiliary state variables ξ5 and ξ6. These are nonlinear integrators that the controller employs to achieve zero steady-state offset, and they evolve according to

    ξ̇5(t) = kI1 a1 √(2g) (√(ξ1(t)) − √(x1∗)),
    ξ̇6(t) = kI2 a2 √(2g) (√(ξ2(t)) − √(x2∗)),

where kI1 and kI2 are design parameters of the controller. With these additional dynamics, stabilizing the extended system implies that in steady state, ξ1 and ξ2 converge to the desired values x1∗ and x2∗. Assuming that the sensors measuring ξ1 and ξ2 also compute ξ5 and ξ6 locally and sufficiently fast, we can consider ξ5 and ξ6 as regular state variables.

Then, the controller from [12] results in the following state-feedback law:

    υ(t) = −K(ξ(t) − x∗) + u∗,    (7.14)

with

    u∗ = [ γ1, 1 − γ2 ; 1 − γ1, γ2 ]⁻¹ [ a1√(2gx1∗) ; a2√(2gx2∗) ],
and K = QP, where Q is a positive definite matrix and P is given by

    P = [ γ1k1, (1 − γ1)k2, 0, (1 − γ1)k4, γ1k1, (1 − γ1)k2 ;
          (1 − γ2)k1, γ2k2, (1 − γ2)k3, 0, (1 − γ2)k1, γ2k2 ],

with k1, k2, k3, and k4 being design parameters of the controller.

Consider the following function:

    Hd(x) = (1/2)(x − x∗)ᵀ PᵀQP (x − x∗) − u∗ᵀPx
            + ∑_{i=1}^{4} (2/3) ki ai x_i^{3/2} √(2g) + k1 a1 x5 √(2gx1∗) + k2 a2 x6 √(2gx2∗),    (7.15)

which is positive definite and has a global minimum at x∗, as a candidate ISS Lyapunov function with respect to ε for the system with the selected controller. The following bound shows that indeed Hd is an ISS Lyapunov function:

    (d/dt) Hd(ξ) ≤ −λm(R)|∇Hd(ξ)|² + |∇Hd(ξ)| |gc K| |ε|.

Furthermore, from that equation, one can design the following as a triggering condition:

    |∇Hd(ξ)| |gc K| |ε| ≤ ρλm(R)|∇Hd(ξ)|², ρ ∈ ]0, 1[.

And finally, assuming that the system is confined to a compact set containing a neighborhood of x∗, |∇Hd(ξ)| can be bounded as |∇Hd(ξ)| ≥ ρm|ξ − x∗| to arrive at a triggering rule, ensuring asymptotic stability, of the desired form

    |ε(t)|² ≤ σ|ξ(t) − x∗|², σ = (ρm ρ λm(R)/|gc K|)² > 0.

In what follows we show some simulations of this plant and controller, for an initial state (13, 12, 5, 9) and x1∗ = 15 and x2∗ = 13, with the implementation described in this section. Employing the same parameters of the plant and the controller as in [12], and assuming that the system operates in the compact set S = {x ∈ R6 | 1 ≤ xi ≤ 20, i = 1, . . . , 4; 0 ≤ xi ≤ 20, i = 5, 6}, one can take ρm = 0.14, which for a choice of ρ = 0.25 results in σ = 0.00542. A bound for the minimum time between controller updates is given by τmin = 0.1 ms. In the implementation we employed Algorithm 7.1 with q = 1 and employed a combined single triggering condition for the pairs ξ1, ξ5 and ξ2, ξ6. Figure 7.4 shows a comparison of a centralized implementation (first row), our proposal (second row), and a decentralized implementation without adaption, with θ(k) = 0 for all k ∈ N (last row). In the first, second, and third columns of the figure, the time between controller updates, the evolution of the ratio |ε|/|ξ| versus √σ, and the state trajectories are shown, respectively.

Figure 7.5 illustrates the evolution of the adaptation vector θ for the adaptive decentralized event-triggered implementation.

Although the three implementations produce almost indistinguishable state trajectories, their efficiency presents quite some differences. As expected, a centralized event-triggered implementation employs far fewer updates and produces larger times between updates than a decentralized event-triggered implementation without adaption. However, although Algorithm 7.1 does not completely recover the performance of the centralized event-triggered implementation, it produces results fairly close in terms of number of updates and time between them.

7.4.2 Listening Time Reduction and Delays

It is clear, especially by looking at Example 7.1, that the main benefit of employing event-triggered controllers, reducing transmissions of measurements, is retained by the proposed decentralization of the triggering conditions. However, while reducing the amount of information that needs to be transmitted from sensors to actuators, the proposed technique requires that sensor nodes continuously listen for events triggered at other nodes, or at least for requests for measurements arriving from the controller (once this receives a triggering signal). This is a shortcoming of the current technique, as the power drawn by radios listening is usually as large as (if not larger than) that consumed while transmitting. In practice, wireless sensor nodes usually have their radio modules asleep most of the time and are periodically awakened according to a time division medium access (TDMA) protocol in order to address this problem. Current proposals of protocols for control over wireless networks, like WirelessHART, are also typically based on TDMA in order to provide noninterference and strict delay guarantees. Our technique can be adjusted to be implemented over a TDMA schedule forcing the radios of the sensor nodes to be asleep in certain slots. One can then regard the effect of this energy-saving mechanism as a bounded and known delay between the generation of an event and the corresponding effect in the control signal. Then, the same approach to reduce σ from Corollary 7.1 can be applied to accommodate these medium-access-induced (and any other possible) delays.
[Figure 7.4: three rows of plots over 0–200 s. Columns: state trajectories (states 1–4), event intervals (s, ×10−3 scale), and the trigger-condition ratio |e|/|x| against √σ. Rows: centralized event-triggered implementation (583 controller updates), decentralized with adaptation (589 controller updates), and decentralized without adaptation (11300 controller updates).]

FIGURE 7.4
Evolution of the states, times between updates, and evolution of the triggering condition for the centralized event-triggering implementation (first row), decentralized event-triggering implementation with adaptation (second row), and decentralized event-triggering implementation without adaptation (third row).
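The qualitative behavior summarized in Figure 7.4 can be reproduced on a much simpler system. The sketch below is not the quadruple tank: it uses a hypothetical 2-state linear plant and counts controller updates under the centralized rule (7.7) and under the local rules (7.11) with θ = 0, illustrating the conservativeness of the non-adapted decentralized scheme:

```python
import math

# Toy comparison in the spirit of Figure 7.4 (NOT the quadruple tank): a
# made-up 2-state linear plant, centralized rule (7.7) vs. local rules (7.11)
# with theta = 0 and a tau_min spacing between decentralized updates.
A = [[0.0, 1.0], [-2.0, 3.0]]
B = [0.0, 1.0]
K = [1.0, -4.0]       # A + B*K = [[0, 1], [-1, -1]], a stable closed loop
sigma, dt, tau_min, T = 1e-4, 1e-4, 1e-3, 10.0

def step(x, xhat):
    u = K[0] * xhat[0] + K[1] * xhat[1]   # feedback uses the held state
    return [x[0] + dt * (A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u),
            x[1] + dt * (A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u)]

def simulate(decentralized):
    x, xhat, updates, since = [1.0, 0.0], [1.0, 0.0], 0, 0.0
    for _ in range(int(T / dt)):
        since += dt
        e = [xhat[0] - x[0], xhat[1] - x[1]]
        if decentralized:
            # an event at ANY sensor triggers a full synchronous update,
            # postponed to tau_min after the previous one if it comes earlier
            fire = any(e[i] ** 2 - sigma * x[i] ** 2 > 0.0 for i in (0, 1))
            fire = fire and since >= tau_min
        else:
            fire = e[0] ** 2 + e[1] ** 2 >= sigma * (x[0] ** 2 + x[1] ** 2)
        if fire:
            xhat, updates, since = list(x), updates + 1, 0.0
        x = step(x, xhat)
    return updates, math.hypot(x[0], x[1])

n_cen, norm_cen = simulate(decentralized=False)
n_dec, norm_dec = simulate(decentralized=True)
print(n_cen, n_dec)
```

Both runs converge, but the non-adapted decentralized rule fires far more often, mirroring the gap between the first and third rows of Figure 7.4; the θ-adaptation of Algorithm 7.1 is precisely what closes that gap.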

[Figure 7.5: adaptation parameters θ (×10−5 scale, range ±5) versus time, 0–200 s.]

FIGURE 7.5
Adaptation parameter vector evolution for the adaptive decentralized event-triggered implementation.

7.5 Decentralized Triggering of Asynchronous Updates

We investigate now the possibility of implementing decentralized triggering in a manner in which the updates of the controller are performed with asynchronous measurements, that is, {t^i_{r_i}} does not need to be identical to {t^j_{r_j}} for j ≠ i. This means that the controller is updated with measurements of the state vector containing entries not synchronized in time, thus the "asynchronous" name. By employing this asynchronous update, sensor nodes do not need to stay listening for updates from other sensors, as in the technique of Section 7.4, thus potentially bringing some additional energy savings.
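The idea can be previewed with a small simulation. The sketch below is a hypothetical 2-state linear example, not from the chapter: each sensor transmits only its own entry when its local error exceeds a per-sensor threshold, and the controller periodically scales all thresholds down by a factor μ so that the state is driven to the origin:

```python
import math

# Asynchronous decentralized triggering on a made-up stable 2-state system.
# Taking A = 0, B = I, K = F, the closed loop is xdot = F*xhat, i.e., the
# hold error enters additively: xdot = F x + F (xhat - x).
F = [[0.0, 1.0], [-1.0, -1.0]]
omega2 = [0.5, 0.5]              # omega_i^2, summing to 1 so that |omega| = 1
eta, mu, tau_c = 0.05, 0.8, 0.5  # initial threshold, decay factor, period
dt, T = 1e-4, 20.0

x, xhat = [1.0, 0.0], [1.0, 0.0]
counts, since_c = [0, 0], 0.0
for _ in range(int(T / dt)):
    since_c += dt
    if since_c >= tau_c:         # controller-to-sensor: shrink all thresholds
        eta, since_c = mu * eta, 0.0
    for i in (0, 1):             # sensor-to-controller: asynchronous updates
        if (xhat[i] - x[i]) ** 2 >= omega2[i] * eta * eta:
            xhat[i] = x[i]       # only sensor i transmits its own entry
            counts[i] += 1
    x = [x[0] + dt * (F[0][0] * xhat[0] + F[0][1] * xhat[1]),
         x[1] + dt * (F[1][0] * xhat[0] + F[1][1] * xhat[1])]

print(counts, math.hypot(*x))
```

Each sensor accumulates its own event count, and the geometric decay of the thresholds makes the ultimate bound on |x| shrink over time, which is the mechanism formalized in the remainder of this section.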
We consider event-triggering rules defining implicitly, and independently, the sequences of update times {t^i_{r_i}} for each sensor i. In particular, we begin by considering simple triggering conditions such as

    t^i_{r_i} := min{t > t^i_{r_i−1} | ε_i²(t) = ηi},    (7.16)

where ηi > 0 are design parameters. As before, the effect of the sample-and-hold mechanism is represented as a measurement error at each sensor, for all i = 1, . . . , n, as

    εi(t) = ξi(t^i_{r_i}) − ξi(t), t ∈ [t^i_{r_i}, t^i_{r_i+1}[, r_i ∈ N0.

Let us introduce the variable η ∈ R as

    η = √( ∑_{i=1}^{n} ηi ),    (7.17)

which can be considered as a design parameter that once specified restricts the choices of ηi to be used at each sensor. Then, the local parameters ηi are defined through an appropriate scaling:

    ηi := ω_i² η², |ω| = 1,    (7.18)

with the ωi as design constants introduced for analysis purposes only. Using this newly defined variable, the update rule (7.16) implies that |ε(t)| ≤ η (with equality attained only when all sensors trigger simultaneously). The following lemma will be central in the rest of this section:

Lemma 7.2: [16] Intertransmission times bound

If Assumptions 7.1 and 7.2 hold, for any η > 0, a lower bound for the minimum time between transmissions of sensor i, for i ∈ {1, . . . , n} and for all time t ≥ t0, is given by

    τ_i∗ := L_{f_i}^{−1} ωi η / (η + α⁻¹(max{V(ξ(t0)), α_v⁻¹ ∘ αe(η)})),    (7.19)

where L_{f_i} denotes the Lipschitz constant of the function f_i(x, k(x + e)) for |x| ≤ α⁻¹(max{V(ξ(t0)), α_v⁻¹ ∘ αe(η)}) and |e| ≤ η.

PROOF Let us denote in what follows S(y, z) = {(x, e) ∈ Rn×n | V(x) ≤ y, |e| ≤ z} and f̄_i(y, z) = max_{(x,e)∈S(y,z)} |f_i(x, k(x + e))|. From Assumption 7.2, we have that |e| ≤ η, V(x) ≥ α_v⁻¹ ∘ αe(η) ⇒ V̇(x, e) ≤ 0, and thus S̃ := S(max{V(ξ(t0)), α_v⁻¹ ∘ αe(η)}, η) is forward invariant. Recall that the minimum time between events at a sensor is given by the time it takes for |εi| to evolve from the value |εi(t^i_k)| = 0 to |εi(t^{i−}_{k+1})| = √ηi, and thus,∗ τi ≥ √ηi (max_{S̃} (d/dt)|εi|)⁻¹. Therefore, all that needs to be proved is the existence of an upper bound on the rate of change of |εi|. One can trivially bound the evolution of |εi| as (d/dt)|εi| ≤ |ε̇i| = |f_i(ξ, k(ξ + ε))|, and the maximum rate of change of |εi| in S̃ by f̄_i(max{V(ξ(t0)), α_v⁻¹ ∘ αe(η)}, η). Note that the existence of such a maximum is guaranteed by the continuity of the maps f and k and the compactness of the set S̃. Assumption 7.1 implies that f_i(x, k(x + e)) is also locally Lipschitz, and thus one can further bound f̄_i(max{V(ξ(t0)), α_v⁻¹ ∘ αe(η)}, η) ≤ L_{f_i} [α⁻¹(max{V(ξ(t0)), α_v⁻¹ ∘ αe(η)}) + η]. Finally, recalling that ηi = ω_i²η², a lower bound for the intertransmission times is given by (7.19), which proves the statement.

∗The time derivative of |εi| is defined for almost all t (excluding the instants {t^i_k}), which is sufficient to bound the time between events.

Employing a constant threshold value η establishes a trade-off between the size of the intertransmission times and the size of the set to which the system converges. In order to achieve asymptotic stability, we allow the parameter η to change over time and converge to zero. In particular, we consider an update policy for η given by

    η(t) = η(t^c_{r_c}), t ∈ [t^c_{r_c}, t^c_{r_c+1}[,
    η(t^c_{r_c+1}) = μη(t^c_{r_c}),    (7.20)

for some μ ∈ ]0.5, 1[ and with {t^c_{r_c}} being a divergent sequence of times (to be defined later) with t^c_0 = t0. The local update rules to be used are adjusted to incorporate these time-varying thresholds:

    t^i_{r_i} := min{t > t^i_{r_i−1} | ε_i²(t) = ηi(t)},
    ηi(t) := ω_i² η(t)², |ω| = 1.    (7.21)

Given the update policy (7.20), one can also design an event-triggered policy to decide the sequence of times {t^c_{r_c}} such that the system is rendered asymptotically stable. The resulting overall implementation that we propose contains two independent triggering mechanisms to activate:

• Sensor-to-controller communication: Sensors send measurements to the controller whenever the local condition (7.21) is violated. The update of the control commands is done with the measurements as they arrive, in an asynchronous fashion.

• Controller-to-sensor communication: The controller orders the sensors to reduce the threshold used in their triggering condition, according to (7.20), when the system has "slowed down"
132 Event-Based Control and Signal Processing

enough to guarantee that the intersample times remain bounded from below. The controller checks this condition only in a periodic fashion, with some period τ_c (a design parameter), and therefore the sensors only need to listen at those time instants.

One of the features of our proposal, as we show later in this section, is that it enables implementations requiring only the exchange of one bit of information between a sensor and a controller, and vice versa, with the exception of the transmission of the initial state of the system at t_0.

The mechanism to trigger sensor-to-controller communication is already provided by (7.21). We focus now on designing an appropriate triggering mechanism for the communication from controller to sensors. The following lemma establishes some requirements to construct this new triggering mechanism.

Lemma 7.3: [16]

The closed-loop system (7.2), (7.8), (7.20), (7.21) is UGAS if Assumptions 7.1 and 7.2 are satisfied and the following two conditions hold:

• {η(t^c_{r_c})} is a monotonically decreasing sequence with lim_{r_c→∞} η(t^c_{r_c}) → 0
• There exists κ > 0 such that for all t^c_{r_c}:
  α^{−1}(max{V(ξ(t^c_{r_c})), α_v^{−1} ∘ α_e(η(t^c_{r_c}))}) / η(t^c_{r_c}) ≤ κ < ∞

PROOF In view of Lemma 7.2, the second condition of this lemma guarantees that there exists a minimum time between events at each sensor when both events fall in an open time interval ]t^c_{r_c}, t^c_{r_c+1}[, that is, t^i_{r_i+1} − t^i_{r_i} > τ*_i for all i = 1, . . . , n and t^i_{r_i}, t^i_{r_i+1} ∈ ]t^c_{r_c}, t^c_{r_c+1}[. It could happen, however, that some sensor update coincides with an update of the thresholds, that is, t^i_{r_i+1} = t^c_{r_c+1}, which could lead to two arbitrarily close events of sensor i. Similarly, events from two different sensors could be generated arbitrarily close to each other. Nonetheless, as the sequence {t^c_{r_c}} is divergent (by assumption), and there is a finite number of sensors, none of these two effects can lead to Zeno executions.

The second condition of this lemma also implies that at t^c_{r_c} either V(ξ(t^c_{r_c})) ≤ α_v^{−1} ∘ α_e(η(t^c_{r_c})) or V(ξ(t^c_{r_c})) ≤ ᾱ(κη(t^c_{r_c})). From Assumption 7.2, we have that for all t ∈ [t^c_{r_c}, t^c_{r_c+1}[ the following bound holds:

V(ξ(t)) ≤ max{V(ξ(t^c_{r_c})), α_v^{−1} ∘ α_e(η(t^c_{r_c}))}
        ≤ max{ᾱ(κη(t^c_{r_c})), α_v^{−1} ∘ α_e(η(t^c_{r_c}))}.

Thus, using definition (7.20) results in V(ξ(t)) ≤ γ_V(η(t)), ∀t ≥ t_0, where γ_V ∈ K_∞ is the function γ_V(s) = max{α_v^{−1} ∘ α_e(s), ᾱ(κs)}.

Next, we notice that the first condition of this lemma implies that ∃β_η ∈ KL such that η(t) ≤ β_η(η(t^c_0), t − t^c_0) for all t ≥ t^c_0. Putting together these last two bounds, and assuming that the initial threshold is selected as η(t^c_0) = κ_0 V(ξ(t^c_0)), for some constant κ_0 ∈ ]0, ∞[, one can conclude that

V(ξ(t)) ≤ γ_V(β_η(κ_0 V(ξ(t^c_0)), t − t^c_0)), ∀t ≥ t^c_0.

Finally, this last bound guarantees that

|ξ(t)| ≤ α^{−1}(γ_V(β_η(κ_0 ᾱ(|ξ(t^c_0)|), t − t^c_0)))  (7.22)
      =: β(|ξ(t^c_0)|, t − t^c_0), ∀t ≥ t^c_0,          (7.23)

with β ∈ KL, which finalizes the proof.

REMARK 7.1 Lemma 7.3 rules out the occurrence of Zeno behavior but does not establish a minimum time between transmissions of the same sensor. These bounds are provided later in Proposition 7.2. Similarly, the occurrence of arbitrarily close transmissions from different sensors is also not addressed by this lemma. A solution to this is discussed in Section 7.5.1.

Note that the first of the conditions established by this lemma is already guaranteed by employing the threshold update rule (7.20). The following assumption will help us devise a strategy satisfying the second condition. This is achieved at the cost of restricting the type of ISS controllers amenable to our proposed implementation.

ASSUMPTION 7.6 For some λ > 1, the ISS closed-loop system (7.3) satisfies the following property:

lim sup_{s→0} α^{−1} ∘ ᾱ(α^{−1}(λ α_v^{−1} ∘ α_e(s)) + 2s) · s^{−1} < ∞.  (7.24)

REMARK 7.2 This assumption, as well as Assumptions 7.1 and 7.2, are automatically satisfied by linear systems with a stabilizing linear state-feedback controller and the usual (ISS) quadratic Lyapunov function.

Let us introduce a simple illustrative example to provide some intuition about how this condition looks in practice. Note that we employ a scalar system as an example, and thus this is not really representative of the benefits in a decentralized setting. More illustrative examples of the overall performance are provided later.

EXAMPLE 7.2 Consider the system:

ξ̇(t) = sat(υ(t)),

with a controller affected by measurement errors: υ(t) = −ξ(t) − ε(t), where the function sat(s) is the saturation function, saturating at values |s| > 1. In [20] it
Decentralized Event-Triggered Controller Implementations 133
is shown that V(x) = |x|³/3 + |x|²/2 is an ISS Lyapunov function for the system, with α_e(s) = 2s² and α_v(s) = α_x ∘ ᾱ^{−1}(s), where α_x(s) = s²/2. Furthermore, we can also set α(s) = ᾱ(s) = s³/3 + s²/2. Then, noting that λᾱ(s) < ᾱ(λs), ∀λ > 1,

α^{−1} ∘ ᾱ(α^{−1}(λ α_v^{−1} ∘ α_e(s)) + 2s) < λ α_x^{−1} ∘ α_e(s) + 2s = (2λ + 2)s,

and we can conclude that any ρ > 4 will guarantee asymptotic stability to the origin. Furthermore, selecting, for example, ρ = 4.1, μ = 0.82, and τ_c = 1 s, results in a minimum time between sensor transmissions of 0.04 s, after accounting for the effect of an aggregated (communication/actuation) delay of 0.002 s. Figure 7.6 shows a simulation in which it can be seen how the system is stabilized while respecting the lower bound for the intertransmission times.

Remember now that ξ̂, defined in (7.9), is the vector formed by the asynchronous measurements of the state entries that the controller is using to compute the input to the system. One can thus compute the following upper bound at the controller (which has access to all the latest measurements transmitted in the network):

|ξ̄|(t) := |ξ̂(t)| + η(t^c_{r_c}) ≥ |ξ̂(t) − ε(t)| = |ξ(t)|, ∀t ∈ [t^c_{r_c}, t^c_{r_c+1}[,  (7.25)

which also satisfies the bound |ξ̄|(t) ≤ |ξ(t)| + 2η(t^c_{r_c}).

Making use of this bound, the following theorem proposes a condition to trigger the update of sensor thresholds guaranteeing UGAS of the closed-loop system:

Theorem 7.2: [16] UGAS

Consider the closed-loop system (7.2), (7.8), (7.21) with the threshold update rule (7.20) and satisfying Assumptions 7.1, 7.2, and 7.6. Let τ_c > 0 be a design parameter. The sequence of threshold update times {t^c_{r_c}} implicitly defined by

t^c_{r_c+1} := min{t = t^c_{r_c} + rτ_c | r ∈ N_+, |ξ̄|(t) ≤ α^{−1} ∘ ᾱ(ρη(t^c_{r_c}))},  (7.26)

with any ρ < ∞ satisfying

α^{−1} ∘ ᾱ(α^{−1}(λ α_v^{−1} ∘ α_e(s)) + 2s) ≤ ρs,  (7.27)

for all s ∈ ]0, η(t_0)] and some λ > 1 renders the closed-loop system UGAS.

PROOF We use Lemma 7.3 to show the desired result. The first itemized condition of the lemma is satisfied by the employment of the update rule (7.20) with a constant μ ∈ ]0, 1[ if we can show that the sequence {t^c_{r_c}}

FIGURE 7.6
State trajectory, Lyapunov function evolution, events generated at the sensor, and evolution of the threshold.
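The interplay between the local triggering rule (7.21) and the threshold decay (7.20) can be illustrated with a minimal single-sensor sketch of Example 7.2. This is not the implementation behind Figure 7.6: the Euler step, the initial state, and the purely periodic threshold decrease (used here in place of the norm-estimator test introduced in Theorem 7.2) are simplifying assumptions.

```python
def simulate_example_7_2(xi0=-8.0, eta0=1.0, mu=0.82, tau_c=1.0,
                         dt=1e-4, t_end=20.0):
    """Euler sketch of Example 7.2: xi' = sat(u) with u = -xi_hat, where
    xi_hat is the last transmitted sample.  A new sample is sent when the
    local condition eps^2 >= eta^2 fires (cf. (7.21) with a single sensor,
    omega_1 = 1), and eta is scaled by mu every tau_c seconds (cf. (7.20),
    here on a purely periodic schedule)."""
    sat = lambda s: max(-1.0, min(1.0, s))
    xi, eta, xi_hat = xi0, eta0, xi0
    t, next_update, events = 0.0, tau_c, [0.0]
    while t < t_end:
        eps = xi_hat - xi                  # sampling-induced error
        if eps * eps >= eta * eta:         # local triggering condition
            xi_hat = xi                    # transmit a fresh measurement
            events.append(t)
        if t >= next_update:               # controller-to-sensor: shrink eta
            eta *= mu
            next_update += tau_c
        xi += dt * sat(-xi_hat)            # plant driven by the held input
        t += dt
    return xi, eta, events
```

With these numbers the state settles into a ball around the origin whose size shrinks with η, while the number of events remains finite on every bounded interval.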

is divergent. Thus, we must show that this sequence is divergent and that the second itemized condition in the lemma also holds.

First, we show that starting from some time t^c_{r_c} there always exists some time T ≥ t^c_{r_c} such that for all t ≥ T, |ξ̄|(t) ≤ α^{−1} ∘ ᾱ(ρη(t^c_{r_c})). Showing this guarantees that {t^c_{r_c}} is a divergent sequence. From Assumption 7.2, we know that for every ε̄ > 0 there exists some T ≥ t^c_{r_c} such that α(|ξ(t)|) ≤ V(ξ(t)) ≤ α_v^{−1} ∘ α_e(η(t^c_{r_c})) + ε̄ for every t ≥ T. Let ε̄ = (λ − 1)α_v^{−1} ∘ α_e(η(t^c_{r_c})), for some λ > 1; then

|ξ(t)| ≤ α^{−1}(λ α_v^{−1} ∘ α_e(η(t^c_{r_c}))), ∀t > T,

and thus, we have that the proposed norm estimator (7.25) satisfies the bound:

|ξ̄|(t) ≤ α^{−1}(λ α_v^{−1} ∘ α_e(η(t^c_{r_c}))) + 2η(t^c_{r_c}), ∀t > T.

Therefore, if there exists a ρ > 0 satisfying (7.27) for some λ > 1 and for all s ∈ ]0, η(t_0)], a triggering event will eventually happen. Finally, from Assumption 7.6 and Lemma 7.1, one can conclude that such a ρ < ∞ exists.

The second condition of Lemma 7.3 is easier to prove. We start by remarking that with ρ so that (7.27) holds, as μ < 1, λ > 1 and α^{−1} ∘ ᾱ(s) ≥ s for all s, we also have

(ρ/μ)s ≥ α^{−1}(λ α_v^{−1} ∘ α_e(s)),  (7.28)

for all s ∈ ]0, η(t_0)]. Thus, (7.28) and the triggering condition (7.26), as V(ξ(t^c_{r_c})) ≤ ᾱ(|ξ̄|(t^c_{r_c})), guarantee that the following holds:

α^{−1}(max{V(ξ(t^c_{r_c})), α_v^{−1} ∘ α_e(η(t^c_{r_c}))}) ≤ (ρ/μ)η(t^c_{r_c}),

at all times t^c_{r_c} > t^c_0. Therefore,

κ := max{ρ/μ, α^{−1} ∘ V(ξ(t^c_0)) / η(t^c_0)},  (7.29)

guarantees

κ ≥ α^{−1}(max{V(ξ(t^c_{r_c})), α_v^{−1} ∘ α_e(η(t^c_{r_c}))}) / η(t^c_{r_c}),

for all t^c_{r_c}, which concludes the proof.

REMARK 7.3 Assumption 7.6 is the most restrictive of the assumptions introduced this far. In general, it may be hard to find a controller that satisfies the property; however, in practice one can also disregard this condition. If one is only interested in stabilizing the system up to a certain precision, one can compute the Lyapunov level set to which converging guarantees such precision. By knowing that level set, one can compute the value of η guaranteeing convergence to that level set [according to (7.1)]. If one denotes that value of the minimum threshold by η_m, then it is enough to guarantee (7.24) for s ∈ [η_m, η(t^c_0)], η_m > 0. This can always be satisfied, since for every α ∈ K_∞ there always exists κ < ∞ such that α(s) ≤ κs for all s ∈ [η_m, η(t^c_0)]. An implementation for this type of "practical stability" would thus start with a larger threshold value than η_m and keep decreasing it as in the proposed implementation, but once the minimum level η_m is reached, it would stop updating the threshold. To attain the same level of stabilization precision, one could also just employ η_m as the threshold from the beginning, but if the system has faster dynamics when it is farther from the origin (as happens with many nonlinear systems), that would result in much shorter intertransmission times.

The implementation we just described requires only the exchange of one bit after any event:

• To recover the value of a sensor after a threshold crossing, it is necessary only to know the previous value of the sensor and the sign of the error ε_i when it crossed the threshold:

ξ̂_i(t^i_{r_i}) = ξ̂_i(t^i_{r_i−1}) + sign(ε_i(t^i_{r_i})) √(η_i(t^c_{r_c})).  (7.30)

• Similarly, messages from the controller to the sensors, commanding a reduction of the thresholds, can be indicated with a single bit.

If one employs this one-bit implementation, bounds for the time between updates of a sensor valid globally (not only between threshold updates, but also across such updates) can be obtained as follows:

Proposition 7.2: [16] Intertransmission time bounds

The controller implementation from Theorem 7.2 with controller updates (7.30), τ_c ≥ max_{i∈[1,n]} μ L_{f_i}^{−1} ω_i / (μ + ρ), and

η(t^c_0) ≥ (μ/ρ) α^{−1} ∘ V(ξ(t^c_0)),  (7.31)

guarantees that a minimum time between events at each sensor is given, for all t ≥ t^c_0, by

t^i_{r_i} − t^i_{r_i−1} ≥ τ*_i ≥ (2μ − 1) L_{f_i}^{−1} ω_i / (μ + ρ) > 0,  (7.32)

where L_{f_i} is the Lipschitz constant of the function f_i(x, k(x + e)) for |x| ≤ α^{−1}(max{V(ξ(t^c_0)), α_v^{−1} ∘ α_e(η(t^c_0))}) and |e| ≤ η(t^c_0).

PROOF Theorem 7.2, by means of Lemma 7.3, guarantees that

t^i_{r_i} − t^i_{r_i−1} ≥ τ^i_b ≥ L_{f_i}^{−1} ω_i / (1 + κ), ∀t^i_{r_i}, t^i_{r_i−1} ∈ ]t^c_{r_c}, t^c_{r_c+1}[,

with (see proof of Theorem 7.2) κ := ρ/μ when (7.31) holds. However, it can happen that some sensors automatically violate their triggering condition when their local threshold is reduced, that is, some t^i_{r_i} = t^c_{r_c}. This can lead to two possible problematic situations: that t^i_{r_i} − t^i_{r_i−1} < τ^i_b and/or that t^i_{r_i+1} − t^i_{r_i} < τ^i_b. In the first case, one can always bound t^i_{r_i} − t^i_{r_i−1} ≥ μτ^i_b following the reasoning in the proof of Lemma 7.2, with the same bound for the system speed but reaching a threshold |ε_i(t^{i−}_{r_i})| = μ√(η_i(t^c_{r_c−1})), as (7.26) guarantees that no more than one threshold update can occur simultaneously. Note that by employing τ_c > max_i{τ^i_b}, one also guarantees that threshold updates do not trigger sensor updates closer than τ^i_b. In the second case, the source of the problem is the update of ξ̂ following (7.30). When |ε_i(t^{i−}_{r_i})| > √(η_i(t^c_{r_c})), the controller is updated with a value

ξ̂_i(t^i_{r_i}) = ξ̂_i(t^i_{r_i−1}) + sign(ε_i(t^i_{r_i})) √(η_i(t^c_{r_c})).  (7.33)

Thus, updating the local error accordingly as ε_i(t^i_{r_i}) := ξ̂_i(t^i_{r_i}) − ξ_i(t^i_{r_i}) results in an error satisfying |ε_i(t^i_{r_i})| ≤ (1/μ − 1)√(η_i(t^c_{r_c})), and not necessarily equal to zero. Reasoning again as in the proof of Lemma 7.2, but now computing the time it takes |ε_i| to go from a value of (1/μ − 1)√(η_i(t^c_{r_c})) to √(η_i(t^c_{r_c})), one can show that t^i_{r_i+1} − t^i_{r_i} ≥ (2 − 1/μ)τ^i_b whenever t^i_{r_i} = t^c_{r_c}. Finally, realizing that μ > 2 − 1/μ ≥ 0 for all μ ∈ ]0.5, 1[ concludes the proof.

7.5.1 Time between Actuation Updates and Delays

The effects of transmission and computation speed limitations introduce delays in any control loop implementation. Additionally, in the presented asynchronous implementation, two controller updates could be required arbitrarily close to each other when they are generated by different sensors. To address this problem, one can consider two complementary solutions at the cost of introducing an additional delay: (1) use a slotted time-division multiple access communication between sensors and controller; automatically, this means that controller updates can only occur at the frequency imposed by the size of the slots; (2) on top of this, one can impose a periodic update of the actuators, that is, allow changes of the controller inputs only at instants t = kτ_ua, for some period τ_ua satisfying τ_ua < min_i{τ*_i}.

Thus, it is important to briefly discuss how delays can be accommodated by the asynchronous implementation we just described. Similar to the synchronous case, the approach is to make the triggering conditions more conservative. Consider delays occurring between the event generation at the sensors and its effect being reflected in the control inputs applied to the system. Event-triggered techniques deal with delays of this type by controlling the magnitude of the virtual error ε due to sampling. As has been illustrated across this chapter, as long as the magnitude of this error signal is successfully kept within certain margins, the controller implementation is stable. Looking carefully at the definitions, one can notice that this error signal is defined at the plant side. Therefore, in the presence of delays, while the sensors send new measurements trying to keep |ε_i(t)| = |ξ_i(t^i_{r_i}) − ξ_i(t)| ≤ √η_i, the value of the error at the plant side ε̂_i(t) might be different. This error ε̂_i(t) can be defined as

ε̂_i(t) = ξ_i(t^i_{r_i−1}) − ξ_i(t),  t ∈ [t^i_{r_i}, t^i_{r_i} + Δτ^i_{r_i}[,  (7.34)
ε̂_i(t) = ε_i(t),                   t ∈ [t^i_{r_i} + Δτ^i_{r_i}, t^i_{r_i+1}[,  (7.35)

where Δτ^i_{r_i} denotes the delay between the time t^i_{r_i}, at which a measurement is transmitted, and the time t^i_{r_i} + Δτ^i_{r_i}, at which the controller is updated with that new measurement. Thus, to retain asymptotic stability, |ε̂(t)| needs to be kept below the threshold η. Looking at the proof of Lemma 7.2, we know that the maximum speed v̄_{e_i} of the error signal is always bounded from above by v̄_{e_i} ≤ L_{f_i}(κ + 1)η, with L_{f_i} as in Proposition 7.2. Assume now that the delays for every sensor are bounded from above by some known quantity Δτ, that is, Δτ^i_{r_i} ≤ Δτ. To appropriately accommodate the delays, one needs to select a new threshold η̄ := r_η η < η in such a way that

v̄_{e_i} Δτ + √η̄_i ≤ √η_i,
v̄_{e_i} Δτ ≤ √η̄_i.  (7.36)

The first condition makes sure that, by keeping |ε(t)| ≤ η̄ for t ∈ [t^i_{r_i−1}, t^i_{r_i}], the error at the plant side remains appropriately bounded: |ε̂(t)| ≤ η in the interval t ∈ [t^i_{r_i}, t^i_{r_i} + Δτ]. The second condition makes sure that after one transmission is triggered, the next transmission is not triggered before the actuation is updated with the measurement from the first transmission.* From the two conditions (7.36), one can easily verify that r_η must satisfy

v̄_{e_i} Δτ / √η_i ≤ r_η ≤ 1 − v̄_{e_i} Δτ / √η_i.  (7.37)

*If one assumes that measurements are time-tagged and the controller discards old measurements, which may arrive later than more recent ones due to the variable delays, this second condition can be ignored, resulting in less restrictive conditions on the admissible delays.

From this last relation, one can also deduce that the maximum allowable delay needs to satisfy

Δτ ≤ (1/2) √η_i / v̄_{e_i}.  (7.38)

A sufficient condition for (7.38) to hold is given by

Δτ ≤ (1/2) L_{f_i}^{−1} ω_i / (1 + κ),  (7.39)

and similarly for (7.37) to hold:

Δτ (1 + κ) / (L_{f_i}^{−1} ω_i) ≤ r_η ≤ 1 − Δτ (1 + κ) / (L_{f_i}^{−1} ω_i).  (7.40)

Remember that κ := ρ/μ when (7.31) holds, and thus one can also use condition (7.39) to design the implementation so that a certain amount of delay can be accommodated. Note also that the more conservative the estimates of κ and L_{f_i} that are employed, the more conservative the values of tolerable delays will be. Finally, as the size of the thresholds is reduced, the minimum interevent times will also reduce proportionally, resulting in new bounds:

t^i_{r_i} − t^i_{r_i−1} ≥ τ*_i ≥ r_η (2μ − 1) L_{f_i}^{−1} ω_i / (μ + ρ) > 0.  (7.41)

We provide now an example (borrowed from [16]) that illustrates a typical execution of the proposed controller implementation, showing the asynchrony of measurements and the evolution of the thresholds.

EXAMPLE 7.3 Consider a nonlinear system of the form

ξ̇(t) = Aξ + B(g(ξ(t)) + υ(t)),

where g is a nonlinear locally Lipschitz function. Consider a controller affected by measurement errors:

υ(t) = −g(ξ(t) + ε(t)) − K(ξ(t) + ε(t)),

with K such that A_c = A − BK is Hurwitz.

Let V(x) = x^T Px, where PA_c + A_c^T P = −I, be the candidate ISS-Lyapunov function for the system. In this case, one can employ the following functions: ᾱ(s) = λ_M(P)s², α(s) = λ_m(P)s², α_x(s) = a_x s², and α_v(s) = α_x ∘ ᾱ^{−1}(s) with α_e(s) = a_e s² to obtain the necessary ISS bounds. Observe that λᾱ(s) ≤ ᾱ(λs), ∀λ > 1; then

α^{−1}(λ α_v^{−1} ∘ α_e(s)) + 2s < √(λ λ_M(P)/λ_m(P)) α_x^{−1} ∘ α_e(s) + 2s.

This shows that Assumption 7.6 is satisfied, and we get the condition

ρ > (λ_M(P)/λ_m(P)) √(a_e/a_x) + 2 √(λ_M(P)/λ_m(P)),

where a_x = 1/2 and a_e = 2(|PBK| + L_g|PB|)², with L_g the Lipschitz constant of g in the compact set determined by |x| ≤ α^{−1}(max{V(ξ(t_0)), α_v^{−1} ∘ α_e(η(t_0))}) + η(t_0). In this case, L_{f_i} can be taken as L_{f_i} = max{|A_c|, |BK| + |B|L_g}.

In the simulation results that we show, the system is defined by

A = [  1.5   0    7   −5
      −0.5  −4    0    0.5
       1     4   −6    6
       0     4    1   −2 ],    B = [ 0    0
                                     5    0
                                     1   −3
                                     1    0 ],

g(x) = [ x_2²
         sin(x_3) ],    K = [ 0.1  −0.2  0  −0.2
                              1.5  −0.2  0   0 ].

Figure 7.7 shows the results when μ = 0.9, ρ = 3106, τ_c = 0.1 s, ω = [0.58 0.33 0.67 0.30]^T, ξ(0) = [2 1.6 0.8 1.2]^T, ε(0) = 0, and η(0) = 2.5 × 10^{−3}. Having initial conditions in |ξ(t_0)| ≤ 3 imposes ρ > 3075. Then, assuming the maximum delay introduced is Δτ = 6 μs results in a minimum time between transmissions τ*_i ≥ 17 μs. These bounds are clearly conservative, as can be seen in the simulation results, in which the minimum intertransmission time observed is 68 μs. Furthermore, the average intertransmission time during the 5 seconds of the simulation is one order of magnitude larger.

7.6 Energy Consumption Simulations

We have discussed in the introduction and in Section 7.4.2 the effect of listening time on the power consumption of the sensor nodes. In general, it is very hard to analytically compare the two alternative implementations we present in this chapter. This is mainly due to the fact that predicting the pattern of triggering is a tough problem, and the energy consumption is tightly connected to those triggering patterns. However, we argue that an asynchronous implementation generally results in less time spent by

FIGURE 7.7
State trajectory, evolution of the thresholds, and events generated at the sensors.
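The one-bit reconstruction rule (7.30) underlying the implementation of Theorem 7.2 can be mimicked in a few lines. The ramp signal and the constant threshold below are assumptions for illustration; the sign convention used here encodes the direction in which the state left the threshold band around ξ̂_i.

```python
import math

def one_bit_link(xi_samples, eta_i):
    """Track a densely sampled scalar signal with one bit per event: when
    the sampling error eps = xi_hat - xi reaches sqrt(eta_i) in magnitude,
    move the controller-side copy xi_hat one step of size sqrt(eta_i)
    toward the state (cf. (7.30); here the transmitted bit encodes
    sign(xi - xi_hat) at the crossing)."""
    step = math.sqrt(eta_i)
    xi_hat = xi_samples[0]          # full state transmitted once at t0
    bits = 0
    for xi in xi_samples[1:]:
        eps = xi_hat - xi
        if eps * eps >= eta_i:      # local triggering condition
            xi_hat -= math.copysign(step, eps)
            bits += 1
    return xi_hat, bits
```

Tracking a slow ramp with η_i = 0.01 keeps ξ̂ within √η_i of the state while sending roughly one bit per 0.1 units of drift.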

the sensors listening for messages, and thus, one would hope that it also results in a lower energy consumption.

To check this intuitive hypothesis, we construct a simulator capable of tracking the energy consumed by the sensors in a simulation depending on three possible states of the sensor: transmitting, listening, or idle (neither transmitting nor listening). Using this simulator, we perform some experiments to compare a decentralized synchronous implementation with an asynchronous one providing a similar performance (in terms of decay rate of the Lyapunov function). In order for this simulation to be somehow realistic, we need to first come up with a TDMA scheme for each of the implementations. These are described in the following section.

7.6.1 TDMA Schemes

Two different simplified TDMA schemes are followed in each of the two proposed implementations. These schemes are far from being actually implementable, as they ignore, for example, overheads needed for synchronization and error correction. Furthermore, we do not consider the necessary modifications in the case of a multi-hop network. Thus, the type of network we are considering is essentially a star network with every sensor and actuator capable of direct communication with the controller, which is at the center of the network; see Figure 7.8.

Nonetheless, these simplified scenarios give us an abstraction sufficient to explore the type of energy consumption that each of the proposed schemes will exhibit in reality, if appropriate custom-made protocols are constructed. We briefly describe the two schemes as follows.

7.6.1.1 Synchronous Event Triggering

Figure 7.9 shows the type of periodic hyperframes employed in the synchronous implementation. Two kinds of hyperframes are considered, depending on whether an event is triggered or not. The hyperframes are defined by the following subframes, slots, and respective parameters:

• Notification (NOT) slot (length τ_NOT). In this slot, all sensors can notify the controller of an event. When there is an event, the sensor node that triggered the event switches to transmission (TX) mode to notify the controller node. The node is switched to idle mode immediately after to save

FIGURE 7.8
A star topology control network with a centralized computing (controller) node.
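The simulator mentioned above only needs to account for the time each node spends in its three radio states. A minimal version of that bookkeeping might look as follows; the power figures are placeholder assumptions, not values from the text.

```python
# Placeholder per-state power draws in watts (illustrative assumptions).
POWER_W = {"tx": 0.06, "rx": 0.06, "idle": 1e-6}

class NodeEnergy:
    """Accumulate the energy a sensor node consumes while transmitting,
    listening, or idling, mirroring the three-state model used for the
    synchronous/asynchronous comparison."""
    def __init__(self, power=POWER_W):
        self.power = power
        self.energy_j = {state: 0.0 for state in power}

    def spend(self, state, seconds):
        """Charge `seconds` spent in `state` against the node's budget."""
        self.energy_j[state] += self.power[state] * seconds

    def total_j(self):
        return sum(self.energy_j.values())
```

A node that listens for 2 ms per frame and transmits for 0.1 ms accumulates energy dominated by the listening term, which is why reducing listening time is the main lever in this comparison.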


FIGURE 7.9
Synchronous event-triggering hyperframes.
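Under the slot structure of Figure 7.9, a sensor's average listening duty can be written down directly: it listens in every REQ slot and, when an event occurred, in its θ_i slot of the INT frame; in the asynchronous scheme it listens only in the η slot of every rth frame. The functions below are a sketch of that comparison; the event-probability argument is an assumed modeling parameter.

```python
def sync_listen_fraction(tau_req, tau_theta, hyperframe, p_event):
    """Fraction of time a sensor spends listening in the synchronous
    scheme: the REQ slot of every hyperframe plus its theta slot in
    hyperframes that contain an event (p_event is an assumed probability
    that a hyperframe contains an event)."""
    return (tau_req + p_event * tau_theta) / hyperframe

def async_listen_fraction(tau_eta, frame, r):
    """Asynchronous scheme: listening happens only in the eta slot of
    every r-th frame, i.e., once per tau_c = r * frame."""
    return tau_eta / (r * frame)
```

With illustrative numbers (τ_REQ = 4 μs, τ_θ = 64 μs, a 1 ms hyperframe, events in 20% of hyperframes, and r = 10), the synchronous listening fraction is 1.68%, versus 0.04% for the asynchronous scheme, which is the qualitative gap the energy simulations explore.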

energy. Meanwhile, other nodes remain idle if they do not trigger an event.

• Request (REQ) slot (length τ_REQ). In this slot, all sensor nodes switch to listening (RX) mode to be able to receive a request of measurements from the controller node, if an event was notified in the NOT slot.

• Intercommunication (INT) frame (length τ_INT). During this period, the sensors switch to TX mode to send their current measurements to the controller one at a time. Next, the actuators receive the new control commands u_i, while the sensors are idle. Finally, all the sensor nodes switch to RX mode to receive the newest local parameters θ_i.

• x_i slot (length τ_{x_i}). Slot employed to transmit measurements of sensor i to the controller.

• u_i slot (length τ_{u_i}). Slot employed to transmit controller-computed inputs to actuator i.

• θ_i slot (length τ_{θ_i}). Slot employed to transmit updated thresholds to sensor i.

• Calculation (CAL) slot (length τ_CAL). Free slot to allow the computation of updated u and θ values.

If a request is transmitted during the request (REQ) slot, the communication continues with an intercommunication (INT) frame; otherwise, the system skips that frame and a new hyperframe starts again with a notification period.

In terms of energy consumption, the sensor nodes are in listening mode both at the REQ slot and, if an event occurred, in their respective θ_i slot of the INT frame. Otherwise, sensors are idle, except when they trigger an event, in which case they transmit for some amount of time in the NOT slot.

The length of the notification slot must satisfy the following condition: τ_NOT ≥ τ_delay, where τ_delay is the transmission time associated with 1 bit, which is determined by the rate k of the radio chip. We take as reference radio systems based on the IEEE 802.15.4 standard, which provides a maximum speed of 250 kbps. Using this rate value results in τ_delay = 4 × 10^{−6} s. Similarly, 1 bit is enough to perform a request for measurements (REQ). τ_INT is determined by the number of bits employed to quantify the measurements and θ, and the amount of time τ_CAL. We assume the controller to be

FIGURE 7.10
Synchronous event-triggering maximum delay.

sufficiently powerful so that τ_CAL can be neglected. We assume the use of analog-to-digital converters (ADC) employing 16 bits: B_AD = 16. Similarly, we assume θ and u are also codified with 16 bits: B_θ = 16, B_u = 16. In a system with n sensors and m inputs, this results in τ_INT = (B_AD × n)/k + (B_θ × n)/k + (B_u × m)/k = 1.28 × 10^{−4} n + 6.4 × 10^{−5} m s.

Assuming sufficiently good synchronization and given a pre-fixed schedule, the identity of the sensors could be inferred from the slot they employed to transmit, which allows us to reduce any additional addressing overheads. Thus, under all these considerations and the TDMA scheme described, the maximum delay that can appear between the generation of an event and the update of the controller is

Δτ = 2τ_REQ + τ_NOT + ∑_{i=1}^n τ_{x_i} + ∑_{i=1}^m τ_{u_i} + τ_CAL + τ_delay.

Note that this holds for as long as τ_NOT + τ_REQ + τ_INT < τ_min, which ensures that no events can be generated during the transmissions of u and θ. This maximum delay is computed considering a worst-case scenario in which an event takes place τ_delay seconds in advance of the end of the notification period, meaning it cannot complete the notification in time. Thus, the event has to wait until the next NOT slot, and the updated control action cannot be computed until all measurements are collected. An illustration of this situation is shown in Figure 7.10.

A remark is needed at this point. The sensor that generates the event also sends the measurement taken at the request moment instead of at the actual time the event was triggered. Otherwise, this measurement would not be synchronized with the rest of the measurements. Note that this does not affect any of the previous delay analyses, as essentially this means that part of the delay is not harmful, since the measurement employed is even more recent than it would have been otherwise.

FIGURE 7.11
Asynchronous event-triggering frame.

7.6.1.2 Asynchronous Event Triggering

Figure 7.11 shows the type of periodic frame employed in the asynchronous implementation. In this case, the frames employed are simpler and essentially contain a sequence of slots reserved for communication from each sensor to the controller, and slots for the communication from the controller to each of the actuators, followed by a broadcast slot to indicate if an update of the thresholds is needed. Note that additionally an initialization frame would be used in practice each time a change of the initial condition is made (e.g., because of a change of a reference command at a higher control level) to transmit with a number of bits (e.g., 16) the whole state of the system. We do not discuss such a frame in detail, as it is not relevant for the overall energy consumption comparison or to compute the delays.

The type of frames we consider are defined by the following slots and respective parameters:

• x_i slot (length τ_{x_i}). Slot employed to notify the controller of an event at sensor i and the sign of the deviation with respect to the last stored value, as employed in Equation 7.30.

• u_i slot (length τ_{u_i}). Slot employed to transmit controller-computed inputs to actuator i.

• Calculation (CAL) slot (length τ_CAL). Free slot to allow the computation of updated u and to check if the thresholds need to be updated.

• η slot (length τ_η). Slot reserved to broadcast signals commanding the update of η.

All τ_{x_i}, τ_{u_i}, and τ_η are design parameters that should be greater than τ_delay.

In this communication scheme, sensor nodes only need to listen during the slot reserved for the broadcast of η. The rest of the time the sensor nodes remain idle, except for the times when they need to transmit an event in their transmission slot. In fact, sensors do not even need to listen to the controller in each of the

FIGURE 7.12
Asynchronous event-triggering maximum delay.

η broadcast slots. The sensors only need to listen to the controller every τ_c units of time, which is selected as a multiple of the complete frame size, that is, τ_c = rτ_f for some r ∈ N, where

τ_f := ∑_{i=1}^n τ_{x_i} + ∑_{i=1}^m τ_{u_i} + τ_η + τ_CAL.

That means that, given r, the sensors only need to listen in the η slot of every rth frame.

The maximum delay that this scheme introduces is given by

Δτ = 2τ_f − τ_η,

as illustrated in Figure 7.12 for a worst-case scenario in which an event is generated right at the time at which the slot reserved to the triggering sensor begins. This forces the sensor to wait until the next frame to send its notification, which is finally executed in the actuator right before the next η update slot.

7.6.2 Controller Implementations

We employ the benchmark linear system originally coming from a batch reactor model [23]:

ξ̇ = Aξ + Bυ,

with the linear controller

u = Kx,

where

A = [  1.38  −0.20   6.71  −5.67
      −0.58  −4.29   0      0.67
       1.06   4.27  −6.65   5.89
       0.04   4.27   1.34  −2.10 ],

B = [ 0      0
      5.67   0
      1.13  −3.14
      1.13   0 ],

K = [ 0.1006  −0.2469  −0.0952  −0.2447
      1.4099  −0.1966   0.0139   0.0823 ].

For Q = I, the resulting P defining the Lyapunov function V(x) = x^T Px is given by

P = [  0.5781  −0.0267   0.3803  −0.3948
      −0.0267   0.2809   0.0629   0.2011
       0.3803   0.0629   0.4024  −0.2279
      −0.3948   0.2011  −0.2279   0.5780 ].

Note that even though this system is supposed to represent a batch reactor, which usually exhibits rather slow dynamics, because of the state-feedback controller employed, the stabilization speed is unusually fast. Furthermore, we do not investigate whether the employed initial conditions are realistic or not. Thus, one should see this example as a purely academic one, and not think of it in terms of a physically realistic system.

For the two controller implementations described in the previous sections, we designed their relevant parameters as detailed in the following.

7.6.2.1 Synchronous Event-Triggered Controller

We start by finding the fastest synchronous event-triggered controller (SETC) implementation possible, that is, the minimum possible σ, given the TDMA scheme described previously. We consider two different combinations of TDMA parameters, resulting also in different maximum delays, as summarized in Table 7.1. Given this maximum delay, one can now compute the minimum value of σ so that a smaller σ' can be found to compensate for the delay Δτ, that is, so that the constraints imposed in Corollary 7.1 can be satisfied. We find this lower bound on σ through an iterative fixed-point algorithm, described in Algorithm 7.2, where φ is the solution to the differential equation provided at the end of Section 7.2. An upper bound on σ can also be computed following the results in [21]. In order to study the effect of enlarging the idle time of sensors in the SETC implementation, we also computed the range of σ for the case when τ_NOT = 0.0005 s. Finally, in all the SETC implementations, a Taylor approximation of order q = 2 is employed. The resulting range of possible σ values is shown in Table 7.2 for the two different TDMA designs considered.
Decentralized Event-Triggered Controller Implementations 141
TABLE 7.1
TDMA Parameters (μs)

Method     τNOT   τREQ   τINT   τxi   τui   τθi   τη    τf     Δτ
SETC(1)    100    4      640    64    64    64    —     744    496
SETC(2)    500    4      640    64    64    64    —     1144   896
AETC(1)    —      —      —      4     64    —     4     148    292
AETC(2)    —      —      —      4     64    —     4     148    292
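The AETC rows above can be cross-checked against the frame-length and worst-case-delay formulas of Section 7.6.1.2. A short sketch (assuming τCAL = 0, since the calibration slot is not listed in the table):

```python
# Cross-check of the AETC rows of Table 7.1 (all times in seconds).
# Assumption: the calibration slot tau_CAL is zero, as it is not listed.
n, m = 4, 2                                  # state sensors and control inputs
tau_x, tau_u, tau_eta, tau_cal = 4e-6, 64e-6, 4e-6, 0.0

tau_f = n * tau_x + m * tau_u + tau_eta + tau_cal   # frame length
delta_tau = 2 * tau_f - tau_eta                     # worst-case delay

# tau_c must be an integer multiple of tau_f (tau_c = r * tau_f, r in N);
# the listening periods reported in Table 7.2 correspond to integer r.
r_aetc1 = 0.111 / tau_f
r_aetc2 = 0.222 / tau_f
```

The listening periods τc reported in Table 7.2 indeed come out as integer multiples of the frame length (r = 750 and r = 1500), as required by τc = rτf.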

TABLE 7.2
Control Parameters

Method     σ                    ρ                      τc (s)   μ
SETC(1)    ]0.0138, 0.1831[     —                      —        —
SETC(2)    ]0.0252, 0.1831[     —                      —        —
AETC(1)    —                    ]92.7915, 98.1302[     0.111    0.9
AETC(2)    —                    ]92.7915, 98.1302[     0.222    0.9
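The AETC upper limit on ρ listed above can be reproduced from the delay restriction (7.42) derived later in this section, using the Lipschitz constants Lfi and weights ωi given there, μ = 0.9, and the AETC delay Δτ = 292 μs from Table 7.1. A numerical sketch:

```python
# Reproducing the AETC upper bound on rho from the delay restriction
# rho <= mu * (1/(2*L*delta_tau) - 1), Equation (7.42).
Lf = [8.8948, 5.7603, 10.3322, 4.8082]   # Lipschitz constants L_fi
w = [0.5716, 0.3702, 0.6639, 0.3090]     # design parameters omega_i

ratios = [lf / wi for lf, wi in zip(Lf, w)]
L = sum(ratios) / len(ratios)            # common value L = L_fi / omega_i

mu = 0.9
delta_tau = 292e-6                       # AETC worst-case delay (s), Table 7.1
rho_max = mu * (1.0 / (2.0 * L * delta_tau) - 1.0)   # ~98.13
```

The small residual difference from the tabulated 98.1302 is attributable to the four-digit rounding of the published constants.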

Algorithm 7.2: SETC parameter selection algorithm.

Input: A, B, K, Δτ, precision
Output: σ
i = 0;
σ(i) = 0;
while σ(i) − σ(i − 1) > precision do
    √σ' = Δτ |[A + BK | BK]| (√σ(i) + 1) / (1 − Δτ |[A + BK | BK]| (√σ(i) + 1));
    tσ' = min{ t | φ(t, 0) = σ' };
    tσ = tσ' + Δτ;
    i = i + 1;
    σ(i) = φ(tσ, 0);
end

7.6.2.2 Asynchronous Event-Triggered Controller

Similar to the SETC design, we first find the fastest asynchronous event-triggered controller (AETC) implementation once the maximum delay imposed by the chosen TDMA parameters is fixed, as summarized in Table 7.1. In the AETC case, the design of the parameters is a bit more complex, given that we need to design ρ, μ, τc, and η(0) simultaneously. We split this decision into several steps: first, we fixed the values of μ and τc; and next we assume that η(t0) ≥ (μ/ρ) α⁻¹ ∘ V(ξ(0)), and thus κ = μ/ρ. These two decisions reduce the design decision to choosing values of ρ satisfying the condition in Theorem 7.2 and the restriction imposed by the maximum delay as given by (7.39) by means of κ. Assume that the ω parameters are found so that Lfi ωi⁻¹ = Lfj ωj⁻¹ = L for all i, j = 1, . . . , n. Then, (7.39) can be equivalently rewritten, by replacing κ = μ/ρ, as

    ρ ≤ μ (1/(2LΔτ) − 1).    (7.42)

To compute the lower bound, we employed Equation 7.39 with its free parameter set to 1, optimizing over different ISS bounds computed by tuning the parameters a and b satisfying ab = |PBK|:

    V̇(x) = ẋᵀPx + xᵀPẋ = −xᵀQx + eᵀ(BK)ᵀPx + xᵀPBKe
          ≤ −λmin(Q)|x|² + 2ab|x||e|
          ≤ −((λmin(Q) − a²)/λmax(P)) V(x) + b²|e|².

The resulting limits for the choices of ρ are shown in Table 7.2. In general, for larger values of the threshold parameter ρ, updates are triggered more frequently, thus resulting in a faster system, but the range of delays that can be tolerated is reduced, and the energy consumption is likely to be higher. We test both extremes in our simulations.

One can easily compute the Lipschitz constants of the closed-loop dynamics of each state variable from A, B, and K, resulting in Lf1 = 8.8948, Lf2 = 5.7603, Lf3 = 10.3322, and Lf4 = 4.8082. The design parameters ωi making Lfi ωi⁻¹ = Lfj ωj⁻¹ for all i, j = 1, . . . , n result in ω = [0.5716 0.3702 0.6639 0.3090]ᵀ.

7.6.3 Results

We perform simulations on four different implementation designs: (1) SETC(1) with σ = 0.014; (2) SETC(2) with σ = 0.18; (3) AETC(1) with ρ = 98; and (4) AETC(2) with ρ = 93.

Two different types of energy consumption profiles are employed in the simulations: CC2420 [1] and CC2530 [2], whose most relevant parameters are presented in Table 7.3.
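The fixed-point iteration of Algorithm 7.2 above can be transcribed as follows. This is only a sketch: φ (the solution of the differential equation from Section 7.2, not reproduced here) must be supplied by the caller, the φ(t) = t used in the demonstration call is purely hypothetical, and the Frobenius norm is used as a computable stand-in for the induced norm |[A + BK | BK]|; consequently, the demonstration value does not reproduce the σ bounds of Table 7.2.

```python
from math import sqrt

# System matrices from Section 7.6.2.
A = [[1.38, -0.20, 6.71, -5.67],
     [-0.58, -4.29, 0.0, 0.67],
     [1.06, 4.27, -6.65, 5.89],
     [0.04, 4.27, 1.34, -2.10]]
B = [[0.0, 0.0], [5.67, 0.0], [1.13, -3.14], [1.13, 0.0]]
K = [[0.1006, -0.2469, -0.0952, -0.2447],
     [1.4099, -0.1966, 0.0139, 0.0823]]

BK = [[sum(B[i][k] * K[k][j] for k in range(2)) for j in range(4)]
      for i in range(4)]
# Frobenius norm of the 4x8 block matrix [A+BK | BK] (stand-in for |.|)
M = sqrt(sum((A[i][j] + BK[i][j]) ** 2 + BK[i][j] ** 2
             for i in range(4) for j in range(4)))

def sigma_lower_bound(delta_tau, phi, phi_inv, precision=1e-9, max_iter=1000):
    """Algorithm 7.2: iterate until sigma(i) - sigma(i-1) <= precision.

    phi(t) stands for phi(t, 0); phi_inv(s) = min{t : phi(t, 0) = s}.
    """
    sigma = 0.0
    for _ in range(max_iter):
        c = delta_tau * M * (sqrt(sigma) + 1.0)
        sigma_dash = (c / (1.0 - c)) ** 2        # sqrt(sigma') = c / (1 - c)
        t_sigma = phi_inv(sigma_dash) + delta_tau
        sigma_new = phi(t_sigma)
        if sigma_new - sigma <= precision:
            return sigma_new
        sigma = sigma_new
    return sigma

# Demonstration with a hypothetical phi(t) = t (so phi_inv(s) = s),
# using the SETC(1) delay of 496 microseconds.
sigma_demo = sigma_lower_bound(496e-6, phi=lambda t: t, phi_inv=lambda s: s)
```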
TABLE 7.3
CC2420 and CC2530 Parameters
Operation CC2420 CC2530 Unit
Idle 0.426 0.2 mA
Listening to channel 18.8 24.3 mA
Receive 18.8 24.3 mA
Transmit 17.4 (P=0 dBm) 28.7 (P=1 dBm) mA
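The current draws in Table 7.3 translate into the accumulated-discharge curves of the following figures simply as current × time. A sketch (with made-up durations, not simulation output) illustrating why time spent listening dominates the budget on the CC2420:

```python
# Discharge (mAh) accumulated by drawing current_mA for a given time.
def discharge_mAh(current_mA, seconds):
    return current_mA * seconds / 3600.0

# Hypothetical example: 5 s spent listening versus 5 s idle on a CC2420.
listen_5s = discharge_mAh(18.8, 5.0)   # ~0.026 mAh
idle_5s = discharge_mAh(0.426, 5.0)    # roughly 44x less
```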

[Figure: states, inter-event intervals, threshold θ, and accumulated discharge (mAh) over 0–5 s.]

FIGURE 7.13
SETC(1), CC2420, x(0) = x_a.

Finally, we also perform simulations with the following two different initial conditions: x(0) = x_a = [12 15 10 8]ᵀ, with η(0) = (μ/ρ)√(V(0)/λmin(P)) = 0.5285; and x(0) = x_b = [18 2 14 3]ᵀ, with η(0) = 0.6605.

7.6.3.1 Parameters from TI CC2420

Employing the CC2420 chipset, we first compare what should be our fastest SETC and AETC implementations: SETC(1) and AETC(1). We see in Figures 7.13 and 7.14 that indeed both implementations stabilize the plant at a similar rate; see also Figure 7.15, showing a comparison of the evolution of the Lyapunov functions. However, we can clearly observe how the energy consumption of the asynchronous implementation is considerably lower. We also include in Figure 7.16 a simulation result of the centralized event-triggered controller with the same delay and value of σ as the decentralized SETC, to verify that the decentralized implementation operates very closely to the centralized one.

In order to show that these results are not coincidental, we tried several initial conditions in which similar patterns could be observed. For completeness, we provide in Figures 7.17 through 7.19 the results when x(0) = x_b.

In order to reduce the energy consumption in the synchronous implementation, we performed two modifications: we enlarged the NOT slot to reduce listening times, and we reduced the speed of the implementation by increasing σ. These modifications resulted in implementation SETC(2). As can be seen in Figures 7.20 and 7.21, the energy consumption was greatly reduced, as expected, at the cost of some slight performance degradation.
[Figure: states, inter-event intervals, threshold η, and accumulated discharge (mAh) over 0–5 s.]

FIGURE 7.14
AETC(1), CC2420, x(0) = x_a.

Finally, we simulated the slower asynchronous implementation AETC(2) to see how much further energy consumption could be reduced in the asynchronous implementation without losing much performance. The result of this simulation can be found in Figure 7.22.

7.6.3.2 Parameters from TI CC2530

To investigate further the effect of the cost of listening, we repeated the experiments with SETC(1) and AETC(1) employing the radio chip CC2530, which consumes less while idle and more both when transmitting and listening. As expected, the asynchronous implementation managed to gain more benefit from this change, in terms of energy consumption, than the synchronous implementation. The results of this comparison are shown in Figures 7.23 and 7.24.

7.7 Discussion

We presented two different schemes to decentralize the process of triggering events in event-triggered controllers so that no simultaneous centralized monitoring of all sensor measurements is needed. These strategies are thus well suited for systems where several sensors are not collocated with each other. Driven by the fact that wireless sensor nodes consume a large portion of their energy reserves when listening for transmissions from other nodes in the network, we investigated and proposed solutions that reduce the amount of time sensor nodes stay in a listening mode.

The first technique we proposed triggers synchronous transmissions of measurements and forces the radios of the sensors to go idle periodically in order to reduce listening times. This is done at the cost of introducing an artificial delay that can be accounted for by triggering conservatively, which usually produces more frequent updates. The second technique relies on allowing updates of the control with measurements from different sensors not synchronized in time. This allows sensors to operate without having to listen to other nodes very frequently, thus automatically reducing the energy consumed by listening. However, by the nature of this latter implementation, in general it can tolerate smaller delays than the first implementation technique.
[Figure: Lyapunov function evolution for SETC and AETC over 0–5 s, and their difference (SETC−AETC).]

FIGURE 7.15
Lyapunov evolution for SETC(1) versus AETC(1). CC2420, x(0) = x_a.

[Figure: states and inter-event intervals over 0–5 s.]

FIGURE 7.16
Centralized event-triggered controller, x(0) = x_a.

Alternative implementations could also be devised if one considers the use of wake-up radios [5,11,22], that is, radio chips that have a very low power consumption while in a stand-by mode in which, while they cannot receive messages, they can be remotely awakened. Such hardware would allow synchronous implementations with very small delays due to channel access scheduling, while consuming much less energy than in the form we presented. Similarly, wake-up radios could reduce even further the energy consumption of the
[Figure: states, inter-event intervals, threshold θ, and accumulated discharge (mAh) over 0–5 s.]

FIGURE 7.17
SETC(1), CC2420, x(0) = x_b.

[Figure: states, inter-event intervals, threshold η, and accumulated discharge (mAh) over 0–5 s.]

FIGURE 7.18
AETC(1), CC2420, x(0) = x_b.
[Figure: Lyapunov function evolution for SETC and AETC over 0–5 s, and their difference (SETC−AETC).]

FIGURE 7.19
Lyapunov evolution for SETC(1) versus AETC(1). CC2420, x(0) = x_b.

[Figure: states, inter-event intervals, threshold θ, and accumulated discharge (mAh) over 0–5 s.]

FIGURE 7.20
SETC(2), CC2420, x(0) = x_a.
[Figure: Lyapunov function evolution for SETC and AETC over 0–5 s, and their difference (SETC−AETC).]

FIGURE 7.21
Lyapunov evolution for SETC(2) versus AETC(1). CC2420, x(0) = x_a.

[Figure: states, inter-event intervals, threshold η, and accumulated discharge (mAh) over 0–5 s.]

FIGURE 7.22
AETC(2), CC2420, x(0) = x_a.
[Figure: states, inter-event intervals, threshold θ, and accumulated discharge (mAh) over 0–5 s.]

FIGURE 7.23
SETC(1), CC2530, x(0) = x_a.

[Figure: states, inter-event intervals, threshold η, and accumulated discharge (mAh) over 0–5 s.]

FIGURE 7.24
AETC(1), CC2530, x(0) = x_a.
asynchronous implementations. Whether with or without making use of wake-up radios, it would be interesting to see actual energy consumption comparisons based on real experimental data. To this end, appropriate new communication protocols that can exploit the benefits of the proposed implementations need to be designed.

Further work also needs to be done to address some of the limitations of the current proposals. The main limiting assumption of the proposed techniques when dealing with nonlinear plants is the requirement of having a controller rendering the system ISS on compacts with respect to measurement errors. It is important to remark that this property does not need to be globally satisfied but only on the compact of interest for the system operation (as our results are semiglobal in nature), which relaxes the requirement drastically and makes it easier to satisfy [10]. Nonetheless, investigating controller designs satisfying such a property, as well as the more involved one from Assumption 7.6, is an interesting problem in its own right. One of the main drawbacks of the proposed implementations, even in the linear case, is that they can only be applied to state-feedback controllers. It would be interesting to combine these ideas with some of the available work addressing event-triggered output-feedback controllers, as, for example, [9]. As indicated earlier, the proposed implementations are complementary to implementations proposed for more distributed systems with loosely interacting local control loops [24]. Once more, it would be interesting to see both techniques combined in a real application. As with most of the work on event-triggered control, the co-design of controller and implementation is largely an open issue, with some recent exceptions in the case of linear controllers, for example, [4]. Investigating such co-design, as well as numerical methods, for example employing LMIs [8], to further improve the triggering conditions or parameters of the implementations, is another promising avenue of future work. Additionally, further energy savings can be envisaged by combining these event-triggered strategies with self-triggered approaches [6]. A promising approach along these lines, inspired by [18], can be found in another chapter of this book.

Finally, we believe it is worth investigating in more detail the possibilities the asynchronous approach brings by reducing the size of the exchanged messages to a single bit. In the present chapter, we have already remarked on the benefits in terms of energy savings that a smaller payload brings, an already well-known fact [25]. The careful reader may have also noticed the beneficial impact reduced payloads have in reducing the medium access-induced delays. We believe that by using this type of one-bit strategy (heavily inspired by previous work on level sampling [13], dead-band control [19], and dynamic quantization [14]), it may be possible to produce more reliable communication protocols while keeping transmission delays within reasonable limits for control applications. This can be achieved by, for example, introducing redundancy in the transmitted messages in a smart fashion.

Bibliography

[1] CC2420 Data Sheet. http://www.ti.com/lit/ds/symlink/cc2420.pdf.

[2] CC2530 Data Sheet. http://www.ti.com/lit/ds/symlink/cc2530.pdf.

[3] A. Agrachev, A. S. Morse, E. Sontag, H. Sussmann, and V. Utkin. Input to state stability: Basic concepts and results. In Nonlinear and Optimal Control Theory, volume 1932 of Lecture Notes in Mathematics (P. Nistri and G. Stefani, Eds.), pages 163–220. Springer, Berlin, 2008.

[4] S. Al-Areqi, D. Gorges, S. Reimann, and S. Liu. Event-based control and scheduling codesign of networked embedded control systems. In American Control Conference (ACC), 2013, pages 5299–5304, June 2013.

[5] J. Ansari, D. Pankin, and P. Mähönen. Radio-triggered wake-ups with addressing capabilities for extremely low power sensor network applications. International Journal of Wireless Information Networks, 16(3):118–130, 2009.

[6] A. Anta and P. Tabuada. To sample or not to sample: Self-triggered control for nonlinear systems. IEEE Transactions on Automatic Control, 55:2030–2042, 2010.

[7] J. Araújo, H. Fawzi, M. Mazo Jr., P. Tabuada, and K. H. Johansson. An improved self-triggered implementation for linear controllers. In 3rd IFAC Workshop on Distributed Estimation and Control in Networked Systems, volume 3(1) of Estimation and Control of Networked Systems (F. Bullo, J. Cortes, J. Hespanha, and P. Tabuada, Eds.), pages 37–42. International Federation of Automatic Control, NY, 2012.

[8] M. Donkers, W. Heemels, N. van de Wouw, and L. Hetel. Stability analysis of networked control systems using a switched linear systems approach. IEEE Transactions on Automatic Control, 56(9):2101–2115, 2011.

[9] M. C. F. Donkers and W. Heemels. Output-based event-triggered control with guaranteed L∞-gain and improved and decentralized event-triggering. IEEE Transactions on Automatic Control, 57(6):1362–1376, 2012.

[10] R. Freeman. Global internal stabilizability does not imply global external stabilizability for small sensor disturbances. IEEE Transactions on Automatic Control, 40(12):2119–2122, 1995.

[11] L. Gu and J. A. Stankovic. Radio-triggered wake-up capability for sensor networks. In Proceedings of RTAS, pages 27–36, May 25–28, 2004.

[12] J. Johnsen and F. Allgöwer. Interconnection and damping assignment passivity-based control of a four-tank system. In Lagrangian and Hamiltonian Methods for Nonlinear Control 2006 (F. Bullo and K. Fujimoto, Eds.), pages 111–122. Springer-Verlag, Berlin, Heidelberg, 2007.

[13] E. Kofman and J. H. Braslavsky. Level crossing sampling in feedback stabilization under data-rate constraints. In Decision and Control, 2006. 45th IEEE Conference on, pages 4423–4428, December 2006.

[14] D. Liberzon. Hybrid feedback stabilization of systems with quantized signals. Automatica, 39(9):1543–1554, 2003.

[15] M. Mazo Jr., A. Anta, and P. Tabuada. An ISS self-triggered implementation of linear controllers. Automatica, 46:1310–1314, 2010.

[16] M. Mazo Jr. and M. Cao. Asynchronous decentralized event-triggered control. Automatica, 50(12):3197–3203, 2014. doi: 10.1016/j.automatica.2014.10.029.

[17] M. Mazo Jr. and P. Tabuada. Decentralized event-triggered control over wireless sensor/actuator networks. IEEE Transactions on Automatic Control, Special Issue on Wireless Sensor Actuator Networks, 56(10):2456–2461, 2011.

[18] C. Nowzari and J. Cortés. Robust team-triggered coordination of networked cyberphysical systems. In Control of Cyber-Physical Systems (D. C. Tarraf, Ed.), pages 317–336. Springer, Switzerland, 2013.

[19] P. G. Otanez, J. R. Moyne, and D. M. Tilbury. Using deadbands to reduce communication in networked control systems. In American Control Conference, 2002. Proceedings of the 2002, volume 4, pages 3015–3020, 2002.

[20] E. Sontag. State-space and I/O stability for nonlinear systems. In Feedback Control, Nonlinear Systems, and Complexity (B. A. Francis and A. R. Tannenbaum, Eds.), pages 215–235. Springer-Verlag, Berlin, Heidelberg, 1995.

[21] P. Tabuada. Event-triggered real-time scheduling of stabilizing control tasks. IEEE Transactions on Automatic Control, 52(9):1680–1685, 2007.

[22] B. Van der Doorn, W. Kavelaars, and K. Langendoen. A prototype low-cost wakeup radio for the 868 MHz band. International Journal of Sensor Networks, 5(1):22–32, 2009.

[23] G. C. Walsh and H. Ye. Scheduling of networked control systems. IEEE Control Systems, 21(1):57–65, 2001.

[24] X. Wang and M. D. Lemmon. Event-triggering in distributed networked control systems. IEEE Transactions on Automatic Control, 56(3):586–601, 2011.

[25] W. Ye, J. Heidemann, and D. Estrin. An energy-efficient MAC protocol for wireless sensor networks. In INFOCOM 2002. Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings. IEEE, volume 3, pages 1567–1576, 2002.
8
Event-Based Generalized Predictive Control

Andrzej Pawlowski
UNED
Madrid, Spain

José Luis Guzmán


University of Almería
ceiA3-CIESOL, Almería, Spain

Manuel Berenguel
University of Almería
ceiA3-CIESOL, Almería, Spain

Sebastián Dormido
UNED
Madrid, Spain

CONTENTS
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
8.2 Event-Based Control System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.3 Sensor Deadband Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.3.1 Signal Sampling for Event-Based Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
8.3.2 Sensor Deadband Approach for Event-Based GPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
8.3.3 GPC Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
8.3.4 Signal Sampling and Resampling Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
8.3.4.1 Resampling of Process Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
8.3.4.2 Reconstruction of Past Control Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
8.3.5 Tuning Method for Event-Based GPC Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
8.4 Actuator Deadband Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
8.4.1 Actuator Deadband Approach for Event-Based GPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
8.4.2 Classical GPC with Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
8.4.2.1 Actuator Deadband: Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
8.4.3 Formulation of the Actuator Deadband Using a Hybrid Dynamical Systems Framework . . . . . . 161
8.4.4 MIQP-Based Design for Control Signal Deadband . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
8.5 Complete Event-Based GPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
8.6 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
8.6.1 Greenhouse Process Description and System Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
8.6.2 Performance Indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
8.6.3 Sensor Deadband Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
8.6.4 Actuator Deadband Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
8.6.5 Complete Event-Based GPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
8.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174


ABSTRACT This chapter describes a predictive event-based control algorithm focused on practical issues. The generalized predictive control scheme is used as a predictive controller, and sensor and actuator deadbands are included in the design procedure to provide event-based capabilities. The first configuration adapts a control structure to deal with asynchronous measurements; the second approach is used for asynchronous process updates. The presented ideas, for an event-based control scheme, conserve all of the well-known advantages of the adaptive sampling techniques applied to process sampling and updating. The objective of this combination is to reduce the control effort, whereas the control system precision is maintained at an acceptable level. The presented algorithm allows us to obtain a direct trade-off between performance and number of actuator events. The diurnal greenhouse temperature control problem is used as the benchmark to evaluate the different control algorithm configurations.

8.1 Introduction

Most industrial control loops are implemented as computer-based controlled systems, which are governed by a periodic sampling time. The main reason for using this solution is that the sampled-data theory in computer-based control is well established and is simple to implement [6]. However, there are many situations in the real world where a periodic sampling time does not make sense. That is, it is not always necessary to check the process and to compute the control law continuously, since changes in real systems do not follow any periodic pattern (although there are some exceptions for purely periodic systems). This is the case with biological systems, energy-based systems, or networked systems. Those processes are characterized by being in an equilibrium state, and system disturbances come in a sporadic way because of electrical stimulation, demand for energy, requests via the network, etc. [4]. For these processes, event-based control and event-based sampling are presented as ideal solutions, and for that reason they have become popular in the control community over the past few years [3,5,12,15,22–24,28,29,32,48,49,51,52,59,60]. With these approaches, the samples, and thus the computation of the control law, are calculated in an aperiodic way, where the control strategy is now governed by events associated with relevant changes in the process.

The main advantages of event-based control are quite remarkable from a practical point of view. One advantage is the reduction of resource utilization, for instance, actuator wear in mechanical or electromechanical systems. When the controller is event triggered, the control actions will be applied to the process in an asynchronous way and only when it is really necessary. However, with classical control systems, new control actions are produced every sampling instant, and even for very small errors, constant movements (position changes) of the actuator are required. Additionally, the application of event-based control in distributed control systems (which are very common in industry) can reduce the information exchange between different control system agents [31,44]. This fact is even more important if the communication is performed through computer networks, where the network structure is shared with other tasks [42].

Another important benefit that characterizes event-based control systems is the relaxed requirements on the sampling period. This problem is very important in all computer-implemented control systems [2]. The correct selection of the best sampling period for a digital control system is a compromise between many design aspects [1]. Higher sampling rates are usually characterized by their elevated cost; a slower sampling time directly reduces the hardware requirements and the overall cost of process automatization. A slower sample rate makes it possible for a slower computer to achieve a given control function, or provides greater capability from a given computer. Factors that may require an increase in the sample rate are command signal tracking effectiveness, sensitivity to plant disturbance rejection, and sensitivity to plant parameter variations.

Therefore, it seems that the event-based control idea has more common sense than the traditional solution based on periodic sampling time—one must do something only if it is really necessary. So why are event-based approaches not being widely used in industry? The main reason is that it is necessary to propose new event-based control approaches or to adapt the current control algorithms to the event-based framework. Note that with an event-based strategy, the sampling time is not periodic; thus, it is necessary to study and to analyze how the control techniques should be modified to account for that [13]. On the other hand, its main drawback currently is that there is no well-supported theory as for computer-based controller systems, and there are still many open research problems regarding tuning and stability issues [4]. For that reason, the main contributions in this field recently have been focused on the adaptation, or modification, of well-known control strategies to be used within an event-based framework. See, for instance, [51], where a typical two-degrees-of-freedom control scheme based on a PI controller was modified to work based on events; [3], where a simple event-based PID control was proposed; or [29], where a new version of the state-feedback approach was presented for event-based control.
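As a minimal illustration of the event-based sampling idea discussed above (a generic send-on-delta/deadband trigger with made-up numbers, not any specific scheme from this chapter): a new sample is transmitted only when the measured value has moved by at least δ since the last transmitted value.

```python
# Generic send-on-delta trigger: transmit only on "relevant changes".
def send_on_delta(samples, delta):
    last = samples[0]           # initial value, assumed known to the receiver
    events = []
    for v in samples[1:]:
        if abs(v - last) >= delta:
            events.append(v)    # an event: the deadband around `last` is left
            last = v
    return events

signal = [0.0, 0.1, 0.2, 0.9, 1.0, 1.05, 2.0]   # periodic measurements
events = send_on_delta(signal, delta=0.5)        # only 2 of 6 samples are sent
```

A periodic controller would process all six new measurements; the event-based trigger transmits only the two that represent a relevant change, which is exactly the resource saving discussed in this section.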
Event-Based Generalized Predictive Control 153

There are many active research topics related to event-based control, where model predictive control (MPC) techniques are frequently used [8,18,25,36,46,54]. Most attention has been given to networked control systems, where event-based MPC controllers have been used to reduce the amount of information exchanged through a network of control system nodes [7,26,30,47]. Moreover, the MPC properties are used to compensate for typical phenomena related to computer networks, such as variable delay, packet loss, and so on.

As is well known, in event-based control systems, the signal samples are triggered in an asynchronous way due to their own signal dynamics as well as asynchronous and delayed measurements. Usually, special requirements, or limitations on sampling-rate issues, appear in the distributed control system. The reason is that typical implementations of modern control systems involve communication networks connecting the different control system agents [11,14,25,44,57,58]. In some cases, a dedicated control network is required to achieve the necessary communication bandwidth and reliability. However, for economical reasons, a common network structure is desired for multiple services. In these cases, however, more complex algorithms for networked control systems (NCS) must be used to compensate for the typical issues of controlling over networks. NCS with compensation for random transmission delays using networked predictive control was studied in [25]. Control systems based on this idea generally assume that future control sequences for all possible time delays are generated on the controller side and are afterward transmitted to the process side. The new assumption presented in that work is that the control signal is selected according to the plant output rather than the time-delay measurement. The obtained results show improvements in control performance when compared with the classical predictive controller. The authors in [47] study a predictive control architecture for plants with random disturbances, where a network is placed between the control system agents. The network imposes a constraint on the expected bit-rate and is affected by random independent dropouts. The controller explicitly incorporates bit-rate and network constraints by sending a quantized finite-horizon plant input sequence. In [26], an experimental study of an event-based input–output control scheme is presented. The sporadic control approach [24] is adopted and implemented in a networked environment. The main reported advantage is the reduction in the control rate, allowing a less saturated communication network. The main results were based on the real implementation, and experimental tests, in a networked system for DC motor control, using a distributed control scheme with a TCP/IP protocol for the communication link. Other benefits of event-based control system implementation, considering design and experimental MPC validation for a hybrid dynamical process with wireless sensors, are presented in [7]. The control architecture is based on the reference governor approach, where the process is actuated by a local controller. A hybrid MPC algorithm runs on the remote base station, which sends optimal setpoints. The experimental results show that the proposed framework satisfies all requirements and presents good performance, evaluated for a hybrid system in a networked control system.

All previously presented event-based and networked control systems are focused on the events and communications between the sensor and controller nodes. As a consequence of this assumption, the communication rates between the controller and the actuator nodes are determined by the controller-input events. However, the issue related to actuators has received less attention. Recently, some problems related to controller–actuator links were considered in certain research works. The control algorithm described in [19] takes the communication medium-access constraint into account and analyzes the actuator assignment problem. Medium-access constraints arise when the network involves a large number of sensors and actuators. In that work, the medium access of actuators is event driven, and only a subset of the actuators is assigned at each time step to receive control signals from the controller with a bounded time-varying transmission delay. A resource-aware distributed control system for spatially distributed processes is presented in [61]. The developed framework aims to enforce closed-loop stability with minimal information transfer over the network. The number of transmissions is limited by the use of models to compute the necessary control actions and system states using observers. The robustness of the control system against packet dropouts is described in [34], applying a packetized control paradigm, where each transmitted control packet contains tentative future plant input values. The main objective of that work is to derive sparse control signals for efficient encoding, whereas compressed sensing aims at decoding sparse signals.

Therefore, in the literature, there are several solutions dealing with event-based MPC. However, most of them are presented for specific applications such as NCS, to solve changing delay problems, or are based on complicated modifications of the classical MPC algorithm. Moreover, most available solutions are quite difficult to implement, as they require clock synchronization for each control system node. Thus, this chapter presents a new solution to the problem, where a straightforward algorithm to include event-based features in an MPC controller is provided, both for sensor and actuator deadbands.
The event-based control system is built using the generalized predictive control (GPC) algorithm, since it is considered the most popular MPC algorithm in industry [10].

8.2 Event-Based Control System

The event-based control structure is shown in Figure 8.1, where C represents an event-based controller and P(s) the controlled process. In this configuration, two types of events can be generated, from u-based and y-based conditions. In the developed application, the actuator possesses a zero-order hold (ZOH), so that the current control action is maintained until the arrival of a new one.

The u-based criterion is used to trigger the input-side event, Eu, consisting of the transmission of a new control action, u(tk), if it differs enough (by more than βu) from the previous control action. In turn, the y-based condition will trigger the output-side event, Ey, when the difference between the reference w(t) and the process output is outside the limit βy. Deadband sampling/updating (based on the level-crossing technique) is used for the y-based and u-based conditions. Since the u-based condition is related to the actuator and the y-based condition to the sensor, two approaches to perform event-based control strategies can be established:

• Sensor deadband approach

• Actuator deadband approach

Both configurations can be combined to compose an event-based control system with the desired characteristics. Both approaches are presented in detail, maintaining a universal formulation that permits them to be used with a wide range of control techniques. The main ideas of these approaches are detailed one after the other.

8.3 Sensor Deadband Approach

When events are coming only from the sensor, an event-based controller consists of two parts: an event detector and a controller [4]. The event detector indicates to the controller when a new control signal has to be calculated, caused by a new event. The complete event-based control structure for this approach, including the process, the actuator, the controller, and the event generator, is shown in Figure 8.2. In this configuration, the event-based controller is composed of a set of controllers, in such a way that one of them will be selected according to the time instant when a new event is detected, as described below. This control scheme operates using the following ideas:

• The process is sampled using a constant sampling time Tbase at the event generator block, whereas the control action is computed and applied to the process using a variable sampling time Tf, which is determined by an event occurrence.

• Tf is a multiple of Tbase (Tf = f·Tbase, f ∈ [1, nmax]) and verifies Tf ≤ Tmax, with Tmax = nmax·Tbase being the maximum sampling time value. This maximum sampling time will be chosen to maintain minimum performance and stability margins.

• Tbase and Tmax are defined by considering process data and closed-loop specifications, following classical methods for the sampling time choice.
FIGURE 8.1
Event-based control approach.

FIGURE 8.2
Proposed event-based control scheme.
• After applying a control action at time t, the process output is monitored by the event generator block at each base sampling time, Tbase. This information is used by the event detector block, which verifies whether the process output satisfies certain specific conditions (the y-based condition and Tf ≤ Tmax). If one of those conditions is met, an event is generated with a sampling period Tf, and a new control action is computed. Otherwise, the control action is only computed on a timing event, at tk = ts + Tmax, where tk is the current discrete time instant and ts is the discrete time instant corresponding to the last acquired sample in event-based sampling.

• It has to be pointed out that, according to the previous description, the control actions will be computed based on a variable sampling time, Tf. For that reason, a set of controllers is used, where each controller is designed for a specific sampling time Tf = f·Tbase, f ∈ [1, nmax]. Thus, when an event is detected, the controller associated with the sampling time of the current event occurrence will be selected. Alternatively, the controller can be designed in continuous time, using a tuning procedure corresponding to the control technique used, and the obtained continuous-time controller is then discretized for each sampling period Tf.

The following sections describe in detail the fundamental issues of signal sampling for the proposed strategy.

8.3.1 Signal Sampling for Event-Based Control

As discussed above, in an event-based control system with a sensor deadband, the control actions are executed in an asynchronous way. The sampling period is governed by system events, and these events are determined according to the event-based sampling techniques described in [33,43].

As can be seen in Figure 8.2, this event-based sampling is governed by the event generator block. This block continuously checks several conditions on the process variables to detect possible events and thus determines the current closed-loop sampling time.

The event generator block includes two different kinds of conditions in order to generate new events. When one of those conditions becomes true, a new event is generated, and the current signal values of the process variables are transmitted to the controller block, which uses them to compute a new control action.

The first kind of condition is focused on checking the process variables. These conditions are based on the level-crossing sampling technique [33,43]; that is to say, a new event is considered when the absolute value of the difference between two signals is higher than a specific limit βy. For instance, in the case of setpoint tracking, the condition would be as follows:

|w(k) − y(k)| > βy,   (8.1)

trying to detect whether the process output, y(k), is tracking the reference, w(k), within a specific tolerance βy.

The second condition is a time condition, used for stability and performance reasons. This condition establishes that the maximum period of time between two control signal computations, and thus between two consecutive events, is given by Tmax. Hence, this condition is represented as

tk − ts ≥ Tmax.   (8.2)

These conditions are checked at the smallest sampling rate Tbase, so the detection of an event will be given within a variable sampling time Tf = f·Tbase, f ∈ [1, nmax]. It has to be highlighted that this variable sampling period determines the current closed-loop sampling time to be used in the computation of the new control action.

Figure 8.3 shows several examples of how events are generated in the proposed control structure, according to the conditions described above. As can be seen, the continuous-time signal y(t) is checked every Tbase, and new events are generated if the current signal value is outside the established sensitivity band w ± βy. For instance, the events Etk, Etk+1, Etk+2, and Etk+5 are generated from an error condition such as that proposed in (8.1). In contrast, the events Etk+3 and Etk+4 correspond to the time limit condition described by (8.2); in this case, the event is governed by the time condition, since there is no other event within that time period.

Note that events are generated at time instants that are multiples of Tbase, and the effective sampling time for the control loop is Tf, computed as the time between two consecutive events. This shows that during rapid and large transients (during setpoint changes and when load disturbances affect the process output), the control action is executed with a small sampling time, near Tbase, allowing rapid closed-loop responses. Conversely, in steady-state conditions, the control action is updated with the maximal sampling time Tmax, minimizing energy and actuator movements but allowing the controller to correct small deviations from the setpoint. Therefore, when the changes in the monitored signals are relatively small and slow, the number of computed control actions is significantly smaller than in a periodic sampling scheme. This is the principal advantage of the proposed event-based control scheme.

It has to be pointed out that the transmission time, TT, between the control system blocks is assumed to be negligible, TT ≪ Tbase. Furthermore, for distributed event-based control systems, no information dropout is considered.
FIGURE 8.3
Example of event-based sampling.

8.3.2 Sensor Deadband Approach for Event-Based GPC

The core idea of this section consists of applying the sensor deadband approach presented previously to MPC algorithms [37,39]. The main steps of the algorithm can be summarized as follows: a sampling rate range is defined, which delimits when an event can be detected. Then, different GPC controllers are designed, one for each sampling period in that set. Finally, the proposed algorithm is focused on selecting the adequate GPC controller when an event is generated, trying to maintain a trade-off between control performance and control signal changes [38].

In this approach, event detection is based on the level-crossing sampling technique with additional time limits. With this idea, the control actions are only calculated when the process output is outside a certain band around the setpoint, as described above. Thus, while the process output stays within this band, there are no new events, and therefore new control actions are not calculated. Nevertheless, additional event logic is included for stability reasons, in such a way that if the temporal difference between two consecutive events is larger than a maximum time period, a new event is generated. First, the typical benefits of any event-based control scheme are achieved, such as an important reduction in actuations. This can be useful for the life of mechanical and electromechanical actuators, the reduction of transmissions in NCS, or economic savings (energy consumption due to changes in the actuators) [37]. Moreover, the proposed control scheme allows one to use standard GPC controllers without modifying the original control algorithm; only external changes are required in the control scheme, which is valuable from a practical point of view.

Therefore, this section is devoted to presenting the proposed event-based GPC algorithm, which uses a variable sampling period to take into account the occurrence of events. The main ideas of the algorithm and the controller structure are first described, and then the event detection and resampling procedures are detailed in sequence.

8.3.3 GPC Algorithm

As pointed out above, the proposed control structure is based on the use of the GPC algorithm as the feedback controller. Specifically, a set of GPC controllers is used to implement the proposed strategy, one for each sampling time Tf, f ∈ [1, nmax], following the ideas from the previous section. Each individual controller in that set is implemented as a classical GPC algorithm. The GPC controller consists of applying a control sequence that minimizes a multistage cost function of the form [10]:

Jf = ∑_{j=N1^f}^{N2^f} δf [ŷf(k + j|k) − w(k + j)]² + ∑_{j=1}^{Nu^f} λf [Δuf(k + j − 1)]²,   (8.3)

where ŷf(k + j|k) is an optimum j-step-ahead prediction of the system output on data up to discrete time k, Δuf(k + j − 1) are the future control increments, and w(k + j) is the future reference trajectory, considering all signals with a sampling time Tf (k = kTf, k ∈ Z⁺). Moreover, the tuning parameters are the minimum and maximum prediction horizons, N1^f and N2^f; the control horizon, Nu^f; and the future error and control weighting factors, δf and λf, respectively [10]. The objective of GPC is to compute the future control sequence uf(k), uf(k + 1), ..., uf(k + Nu^f − 1) in such a way that the future plant output yf(k + j) is driven close to w(k + j). This is accomplished by minimizing Jf.
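To make this minimization concrete, the sketch below computes, in pure Python, the well-known unconstrained solution of a cost of this form — the gain row Kf of Equations 8.5 and 8.6 derived next — for a toy first-order plant. The plant, horizons, and weighting are illustrative assumptions, not values from the chapter:

```python
def solve(A, rhs):
    """Solve A x = rhs by Gaussian elimination with partial pivoting."""
    n = len(A)
    aug = [row[:] + [r] for row, r in zip(A, rhs)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            factor = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= factor * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        tail = sum(aug[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (aug[r][n] - tail) / aug[r][r]
    return x

# Toy first-order plant y(k+1) = a*y(k) + b*u(k); its step-response
# coefficients g_1, g_2, ... fill the lower-triangular matrix G.
a, b = 0.8, 0.2
N, Nu, lam = 5, 3, 0.8                    # horizons and weighting (illustrative)
g = [b * (1 - a ** i) / (1 - a) for i in range(1, N + 1)]
G = [[g[i - j] if i >= j else 0.0 for j in range(Nu)] for i in range(N)]

# Normal-equations matrix M = G^T G + lam*I; Kf is the first row of M^{-1} G^T.
M = [[sum(G[k][i] * G[k][j] for k in range(N)) + (lam if i == j else 0.0)
      for j in range(Nu)] for i in range(Nu)]
e1 = [1.0] + [0.0] * (Nu - 1)
z = solve(M, e1)                          # M symmetric -> z = first column of M^{-1}
K = [sum(G[i][j] * z[j] for j in range(Nu)) for i in range(N)]

# One receding-horizon move: Delta_u(k) = K . (w - f), a unit future error here.
err = [1.0] * N
du = sum(K[i] * err[i] for i in range(N))
```

As expected for a positive future error, the computed increment `du` is positive, and a larger `lam` would shrink it, trading control effort against tracking speed.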
In order to optimize the cost function, the optimal prediction of yf(k + j) for j = N1^f ... N2^f is obtained using a single-input single-output (SISO) linear model of the plant, described by

Af(z⁻¹) yf(k) = z^{−df} Bf(z⁻¹) uf(k) + ε(k)/Δ,   (8.4)

where ε(k) is zero-mean white noise, Δ = 1 − z⁻¹, df is the discrete delay of the model, and Af and Bf are the following polynomials in the backward shift operator z⁻¹:

Af(z⁻¹) = 1 + a1^f z⁻¹ + a2^f z⁻² + ... + ana^f z^{−na},
Bf(z⁻¹) = b0^f + b1^f z⁻¹ + b2^f z⁻² + ... + bnb^f z^{−nb}.

Using this model, it is possible to obtain a relationship between the vector of predictions yf = [ŷf(k + N1^f|k), ŷf(k + N1^f + 1|k), ..., ŷf(k + N2^f|k)]ᵀ, the vector of future control actions uf = [Δuf(k), Δuf(k + 1), ..., Δuf(k + Nu^f − 1)]ᵀ, and the free response of the process ff [10]:

yf = G uf + ff,

where G is the step-response matrix of the system. In this case, following the idea presented in [35], the prediction horizons are defined as N1^f = df + 1 and N2^f = df + N^f, and the weighting factor is δf = 1. Thus, the control tuning parameters are N^f, Nu^f, and λf. The minimum of Jf, assuming that there are no constraints on the control and output signals, can be found by making the gradient of Jf equal to zero, which leads to

uf = (Gᵀ G + λf I)⁻¹ Gᵀ (w − ff),   (8.5)

where I is the identity matrix and w is the vector of future references.

Notice that, according to the receding horizon approach, the control signal that is actually sent to the process is the first element of vector uf, given by

Δuf(k) = Kf (w − ff),   (8.6)

where Kf is the first row of the matrix (Gᵀ G + λf I)⁻¹ Gᵀ. This has a clear meaning if there are no future predicted errors; that is, if w − ff = 0, then there is no control move, since the objective will be fulfilled with the free evolution of the process. In the other case, there will be an increment in the control action proportional (with the factor Kf) to that future error. The detailed formulation of classical GPC is given in [10]. When constraints are taken into account, an optimization problem must be solved using a quadratic cost function with linear inequality and equality constraints [10].

8.3.4 Signal Sampling and Resampling Techniques

As described above, the computation of a new control action in the proposed event-based algorithm is made with a variable sampling period Tf. Thus, the past values of the process variables and of the control signals have to be available sampled with that sampling period, Tf. Therefore, a resampling of the corresponding signals is required.

The problem of updating the control law when there are changes in the sampling frequency can be solved by swapping between controllers working in parallel, each with a different sampling period. This is a simple task that implements swapping with signal reconstruction techniques [16]. However, using multiple controllers in parallel is computationally inefficient. In the approach proposed in this chapter, the algorithm shifts from one controller to another only when the sampling period changes and does not permit the controllers to operate in parallel. This requires a change in the controller parameters as well as in the past values of the output and control variables required to compute the next control signal, according to the new sampling period. The control law is precomputed offline, as explained in Section 8.3.3, generating the GPC gain Kf for each sampling time Tf. Then the past control and output signals for that sampling period Tf are calculated by using the procedure described in the following subsections.

8.3.4.1 Resampling of Process Output

As discussed previously, the controller block only receives the new state of the process output when a new event is generated. This information is stored in the controller block and is resampled to generate a vector yb including the past values of the process output with Tbase samples. The resampling of the process output is performed by using a linear approximation between two consecutive events; afterward, this linear approximation is sampled with the Tbase sampling period, resulting in yb(k) with k = 0, Tbase, 2Tbase, 3Tbase, .... Then, this resampled process output signal, yb, is used to calculate the necessary past information to compute the new control action, as described in the following steps:

• Suppose that the last value of the process output was received at time instant ts = k and is stored in yb(k) with a sampling time of Tbase. Then, a new event is detected and the sampling period changes to Tf = f·Tbase. Thus, the new time instant is given by tk = k + f, and a new value of the process output is received as yb(k + f) = yTf.

• Using this information, a linear approximation is performed between these two samples, yb(k) and yb(k + f), and the resulting linear equation is sampled with Tbase in order to update the process output variable yb as follows:

yb(k + i) = yb(k) − [(yb(k) − yb(k + f)) / f] · i,

where i = 1, ..., f.
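This interpolation step can be sketched as follows — a toy helper in which the function name and the buffer handling are illustrative, not the chapter's implementation:

```python
def resample_output(y_b, k, f, y_new):
    """Fill the base-rate buffer y_b between two events by linear
    interpolation: y_b[k] holds the last transmitted sample, and the
    new event brings y_new = y(k + f) after f base periods (T_f = f*T_base)."""
    y_b.extend([0.0] * (k + f + 1 - len(y_b)))   # grow the buffer if needed
    y_b[k + f] = y_new
    for i in range(1, f):                        # the rule yb(k+i) above
        y_b[k + i] = y_b[k] - (y_b[k] - y_b[k + f]) / f * i
    return y_b

# Example: last sample 1.0 at index k = 3; an event arrives f = 4 base
# periods later with value 3.0, so the intermediate base-rate samples
# become 1.5, 2.0, and 2.5.
y_b = [0.0, 0.2, 0.6, 1.0]
resample_output(y_b, 3, 4, 3.0)
```

The resulting buffer then feeds the backward decimation that extracts the past values at the new period Tf.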
• Once the process output signal is resampled, the required past information must be obtained according to the new sampling time Tf, resulting in a new signal, yp^f, with the past information of the process output at every Tf sample. Thus, this signal is calculated by taking values from the resampled process output signal, yb, moving backward in that vector with decrements of Tf = f·Tbase:

yp^f(i) = yb(k − j·f),   (8.7)

with i = Py, ..., 1 and j = Py − i, where Py is the required number of past values.

• When there is not enough past information, the last available value is taken to fill the signal yp^f.

Hence, the vector yp^f is obtained as a result, which contains the past process information with the new sampling period Tf to be used in the calculation of the current control action [39].

8.3.4.2 Reconstruction of Past Control Signals

The procedure is similar to that described for the resampling of the process output. There is a control signal, ub, which is always used to store the control signal values every Tbase samples. This is done by reconstructing the control signal calculated between two consecutive events. However, in this case, there is no linear approximation between two consecutive samples, since a ZOH is being used. Thus, the signal ub will be obtained by keeping the control values constant between two consecutive events. Nevertheless, the procedure for the control signal is done in the opposite way than for the process output: first, the required past information is calculated, and afterward the signal ub is updated:

• Let's consider that a new event is generated, which results in a new sampling period Tf = f·Tbase. Now, the past information for the new sampling period, Tf, is first calculated from the past values in ub and stored in a variable called up^f. Afterward, this information, together with the past process output data given by (8.7), will be used to calculate the new control action, uTf.

• The required past information at every Tf samples is calculated from the values of the ub signal. As the ZOH is considered for the control signal calculation, the past values of the new control signal, up^f, must be reconstructed according to this fact. That is, the reconstructed past values must be kept constant between samples. However, since the values in ub are sampled with Tbase, the past values of up^f are reconstructed using the average of the past f values in ub, as follows:

up^f(i) = (1/f) ∑_{l=0}^{f−1} ub(j − l),   (8.8)

for i = Pu, ..., 1, with j = k − 1 − (Pu − i)·f, and with up^f and Pu being the past values of uf and the number of required past values, respectively. Note that, with this procedure, the new reconstructed control signal and the original base signal have the same value of the integral of the control signal over the interval.

• Then, the actual control action for the new sampling time, uTf, is calculated from the GPC control law (8.6) by using the reconstructed past control signals and the resampled past process outputs given by (8.8) and (8.7), respectively.

• Once the new control action, uTf, has been calculated, the ub signal is updated by keeping the values constant between the two consecutive events. Therefore,

ub(j) = uTf,

for j = k, ..., k + f.

8.3.5 Tuning Method for Event-Based GPC Controllers

The set of GPC controllers available at the controller block has to be properly tuned so that the control system satisfies all the design and performance requirements independently of the event-based framework. First, the maximum sampling time, Tmax, has to be selected. This parameter is tuned according to the desired closed-loop time constant, τcl, ensuring the control system stability in the discretization process, where [6]

Tmax ≤ τcl / 4.   (8.9)

Then, the nmax design parameter is established, which determines the number of GPC controllers to be considered and the fastest sampling period as Tbase = Tmax/nmax, with nmax ∈ Z⁺. In this way, the current sampling period Tf will be given as Tf = f·Tbase, f ∈ [1, nmax], as discussed above.
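These first tuning choices can be sketched numerically as follows. Taking rule (8.9) with equality and rounding the prediction-horizon rule (8.10, given next) to whole samples are assumptions of this sketch, and all numbers are illustrative:

```python
def sampling_grid(tau_cl, n_max):
    """T_max from rule (8.9) taken with equality, then the base period and
    the admissible closed-loop sampling periods T_f = f * T_base."""
    T_max = tau_cl / 4.0
    T_base = T_max / n_max
    return T_base, T_max, [f * T_base for f in range(1, n_max + 1)]

T_base, T_max, T_grid = sampling_grid(8.0, 4)   # tau_cl = 8 s, n_max = 4

# Prediction horizon per controller: the continuous-time horizon N2c
# (chosen to capture the process settling time) divided by each T_f,
# rounded here to whole samples.
N2c = 40.0                                       # seconds, illustrative
N2 = [max(1, round(N2c / Tf)) for Tf in T_grid]
```

The fastest controller thus gets the longest horizon in samples, while all controllers share the same control horizon, as stated in (8.11).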
The next step consists of tuning the GPC parameters, λf, N2^f, and Nu^f. The λf parameter is designed to achieve the desired closed-loop performance for the fastest GPC controller, with a Tbase sampling period, and this same value is used for the rest of the GPC controllers (notice that the algorithm is focused on using the fastest GPC controller when events are detected, and thus the tuning rules have to be mainly focused on that controller). The prediction horizon is calculated in continuous time, N2^c, according to the typical GPC tuning rules (capturing the settling time of the process) [10]. Then, the prediction horizon for each GPC controller, N2^f, is calculated in samples by dividing the continuous-time value by the corresponding sampling period Tf:

N2^f = N2^c / Tf,   f ∈ [1, nmax].   (8.10)

The control horizon is set in samples for the slowest sampling period Tmax according to typical rules [10]. Then, this same value is used for the rest of the GPC controllers:

Nu^f = Nu^{nmax},   f ∈ [1, nmax].   (8.11)

The reason to keep the same control horizon for all the GPC controllers is that having large control horizons does not improve the control performance. Besides, large control horizon values increase the computational burden considerably, which must be considered during the MPC controller design stage [50].

8.4 Actuator Deadband Approach

The main idea of this approach is to develop a control structure where the control signal is updated in an asynchronous manner [36]. The main goal is to reduce the number of control signal updates, saving system resources, while retaining good control performance. In the sensor deadband approach presented in the previous section, a controller event always produces a new control action, which is transmitted to the actuator node. Most event-based control systems are focused on this idea, where the sampling events are generated by the event generator as a function of the process output. In this configuration, the control action is transmitted to the actuator even if its change is small and the transmission is not justified. Therefore, this section will focus on the actuator deadband approach, which tries to face these drawbacks regarding control signal changes. The actuator deadband can be understood as a constraint on the control signal increments Δu(k):

|Δu(k)| = |u(k − 1) − u(k)| ≥ βu,   (8.12)

where βu is the proposed deadband.

The idea of this approach consists in including this virtual deadband on the control signal (the input signal for the actuator) in the control system. The introduced deadband, βu, will be used as an additional tuning parameter for the control system design, to adjust the number of actuator events (transmissions from controller to actuator). If the deadband is set to a small value, the controller becomes more sensitive and produces more actuator events. Alternatively, when βu is set higher, the number of events is reduced, which is expected to result in worse control performance. In this way, it is possible to establish a desired trade-off between control performance and the number of actuator events/transmissions.

8.4.1 Actuator Deadband Approach for Event-Based GPC

So far, and as described in the introduction of this chapter, research efforts in event-based MPC have been focused on compensation for network issues and not on the control algorithm itself, which can greatly influence the amount of information exchanged throughout the network. Motivated by this fact, a methodology that allows a reduction of the number of actuator events is developed. It can be included in the GPC by adding a virtual actuator deadband to constrain the number of changes in the control signal over the control horizon. This solution is considered at the control synthesis stage and influences many previously discussed aspects of network limitations, such as transmission delay, quantization effects, and multipacket transmission. At the same time, using the introduced actuator deadband, event-based controllers can reduce the number of actuator events.

The idea presented in this section consists of applying virtual constraints on the control signal. The formulated constraints introduce a deadband that disallows small control signal changes. When the required control signal increment is insignificant (smaller than the established band), it is suppressed, because its effect on the controlled variable is not important. However, when the increment in the control signal is significant (bigger than the deadband), its optimal value is transmitted to the actuator and applied to the controlled process. In this way, the introduced virtual deadband is included in the optimization technique, which provides an optimal solution for the control law.
160 Event-Based Control and Signal Processing
solution for the control law. The presence of the deadband in the optimization procedure can significantly reduce the number of events affecting the actuator, at the same time guaranteeing the optimality of the solution. The virtual deadband is modeled and included in the developed algorithm exploiting a hybrid system design framework. The resulting optimization function contains continuous and discrete variables, and thus mixed integer quadratic programming (MIQP) has to be used to solve the formulated optimization problem. The presented idea is tested by simulation for a set of common industrial systems, and it is shown that a large reduction in the number of control events can be obtained. This can significantly improve the actuator's life span as well as reduce the number of communications between the controller and the actuator, which is important from an NCS point of view.

8.4.2 Classical GPC with Constraints

The minimum of the cost function J (of the form referred to in Equation 8.3), assuming there are no constraints on the control signals, can be found by making the gradient of J equal to zero. Nevertheless, most physical processes are subject to constraints, and the optimal solution can then be obtained by minimizing the quadratic function:

J(u) = (Gu + f − w)ᵀ(Gu + f − w) + λuᵀu,    (8.13)

where G is a matrix containing the coefficients of the input–output step response, f is the free response of the system, w is the future reference trajectory, and u = [Δu(t), . . . , Δu(t + N − 1)].

Equation 8.13 can be written in quadratic form:

J(u) = (1/2) uᵀHu + bᵀu + f0,    (8.14)

where H = 2(GᵀG + λI), bᵀ = 2(f − w)ᵀG, and f0 = (f − w)ᵀ(f − w). The obtained quadratic function is minimized subject to the system constraints, and a classical quadratic programming (QP) problem must be solved. The constraints acting on a process can originate from amplitude limits in the control signal, slew rate limits of the actuator, and limits on the output signals, and can be expressed in the shortened form Ru ≤ r [10]. Many types of constraints can be imposed on the controlled process variable and can be expressed in a similar manner. Those constraints are used to force the response of the process with the goal of obtaining certain characteristics [20]. In the following sections, the deadband formulation and the modification of the QP problem into a MIQP optimization problem are considered.

8.4.2.1 Actuator Deadband: Problem Statement

Taking advantage of the constraint-handling mechanisms in MPC controllers described above, the actuator deadband

|Δu(k)| ≥ βu,

can be included in the optimization procedure. Compensation techniques for actuator issues were explored in previous works on predictive control. Some examples can be found in [55], where actuator nonlinearity compensation was analyzed based on real experiments on gantry crane control problems. In that study, an external pre-compensator for the process input nonlinearity was studied, using the inverse function combined with multivariable MPC. The proposed control architecture allows one to reduce oscillations in the controlled variable caused by the actuator nonlinearity, but it introduces a permanent offset in the reference tracking. On the other hand, the MPC algorithm was also used to compensate for quantization effects in control signal values. This topic was investigated in [45], where the control problem for a particular discrete-time linear system subject to quantization of the control set was analyzed. The main idea consists of applying a finite quantized set for the control signal, where the optimal solution is obtained by solving an online optimization problem. The authors consider the design of stabilizing control laws, which optimize a given cost index based on the state and input evolution on a finite receding horizon in a hybrid control framework.

The presence of the actuator deadband defines three regions for possible solutions of the control law, and the following nonlinear function of the actuator deadband can be used:

       ⎧ Δu(k) :  Δu(k) ≥ βu
x(k) = ⎨ 0     : −βu ≤ Δu(k) ≤ βu     (8.15)
       ⎩ Δu(k) :  Δu(k) ≤ −βu

Most actuators in industry exhibit backlash and other types of nonlinearities. However, controllers are normally designed without taking the actuator nonlinearities into account. Because of the predictive nature of GPC, which will be used for the developed system, the actuator deadband can be dealt with. The deadband can be treated by imposing constraints on the controller in order to generate control signals outside the deadband. Although GPC can handle different types of linear constraints, the proposed system input nonlinearity cannot be introduced directly into the optimization procedure, because the feasible region generated by this type of constraint is nonconvex and requires special techniques to solve it. The detailed description of input constraint modeling and its integration into the optimization procedure is given in the next sections.
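The three-region map in (8.15) can be written directly as a small function; a minimal Python sketch (function and variable names are mine, not from the chapter):

```python
def deadband(du, beta_u):
    """Actuator deadband nonlinearity of Eq. (8.15): increments inside
    the band [-beta_u, beta_u] are suppressed, others pass through."""
    if du >= beta_u or du <= -beta_u:
        return du        # significant increment: apply it unchanged
    return 0.0           # insignificant increment: hold the actuator

print(deadband(0.3, 1.0))    # -> 0.0
print(deadband(1.5, 1.0))    # -> 1.5
print(deadband(-2.0, 1.0))   # -> -2.0
```

Note that this piecewise map is exactly what the mixed-integer formulation of the following sections reproduces inside the optimizer.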
Event-Based Generalized Predictive Control 161
In this study, it is assumed that the process output is sampled in synchronous mode with a constant sampling time, and the controlled system is updated in an asynchronous way, making use of the actuator deadband. This assumption is made to highlight the influence of the control signal deadband on the controlled variable. However, the presented idea can be combined with many MPC algorithms where nonuniform sampling techniques are used to monitor the controlled variable [7,8,25,26,30,47,54].

8.4.3 Formulation of the Actuator Deadband Using a Hybrid Dynamical Systems Framework

The methodology presented in this section consists of including the actuator virtual deadband in the GPC design framework. The deadband nonlinearity can be handled together with other constraints on the controlled process. This approach is reasonable because the deadband nonlinearity has a nonsmooth nature and is characterized by discrete dynamics. Therefore, it can be expressed mathematically by a series of if-then-else rules [62]. The hybrid design framework developed by [9] allows one to translate those if-then-else rules into a set of linear logical constraints. The resulting formulation consists of a system containing continuous and discrete components, which is known as a mixed logical dynamic (MLD) system.

In accordance with (8.15), the virtual deadband forces the control signal increment to be outside the band; in the opposite case (−βu ≤ Δu(k) ≤ βu), the control signal is kept unchanged, Δu(k) = 0. This formulation has the typical form used to describe constraints in the GPC algorithm, which are imposed on the decision variable. To incorporate a deadzone within the GPC design framework, the input constraints have to be reconfigured. Let us introduce two logical variables, φ1 and φ2, to determine a condition on the control signal increments, Δu(k). These logical variables are used to describe the different states of the control signal with respect to the deadband:

[Δu(k) ≥ βu] → [φ2 = 1],    (8.16)
[Δu(k) < βu] → [φ2 = 0],    (8.17)
[Δu(k) > −βu] → [φ1 = 0],    (8.18)
[Δu(k) ≤ −βu] → [φ1 = 1].    (8.19)

To make this solution more general, minimal m and maximal M values for the control signal increments are included in the control system design procedure, resulting in

M = max{Δu(k)},  and  m = min{Δu(k)}.

In this way, it is possible to determine the solution region based on binary variables. Figure 8.4 shows the virtual deadband in graphical form, where each region can be distinguished.

FIGURE 8.4
Control signal increments with deadband. (The plot shows x(k) versus Δu(k) over [m, M], with the band (−βu, βu) mapped to zero and the outer regions labeled by φ1 = 1, φ1 = 0, φ2 = 0, φ2 = 1.)

The introduced auxiliary logical variables, φ1 and φ2, define three sets of possible solutions depending on their values:

       ⎧ Δu(k) : Δu(k) ≥ βu   ⟺ φ2 = 1
x(k) = ⎪ 0     : Δu(k) < βu   ⟺ φ2 = 0     (8.20)
       ⎪ 0     : Δu(k) > −βu  ⟺ φ1 = 0
       ⎩ Δu(k) : Δu(k) ≤ −βu  ⟺ φ1 = 1

For the overall performance of the optimization process, it is important to model the deadband with few auxiliary variables to reduce the number of logical conditions that must be verified. Thus, the proposed logic determined by (8.20) can be translated into a set of mixed-integer linear inequalities involving both continuous variables, Δu ∈ R, and logical variables, φi ∈ {0, 1}. Finally, the set of mixed-integer linear inequality constraints for the actuator deadband is established as

Δu − (m − βu)(1 − φ2) ≥ βu,    (8.21)
Δu − (M − βu)φ2 ≤ βu,    (8.22)
Δu − (m + βu)φ1 ≥ −βu,    (8.23)
Δu − (M + βu)(1 − φ1) ≤ −βu,    (8.24)
Δu ≥ mφ1,    (8.25)
Δu ≤ Mφ2.    (8.26)
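The big-M inequalities above can be validated numerically: for each admissible pair (φ1, φ2) they should carve out exactly one of the three regions of (8.15). A brute-force check (the values of m, M, and βu are illustrative, not from the chapter):

```python
def feasible(du, p1, p2, m=-20.0, M=20.0, b=2.0):
    """Evaluate the mixed-integer deadband constraints (8.21)-(8.26)
    for fixed binaries p1 = phi1, p2 = phi2 and increment du."""
    return (du - (m - b) * (1 - p2) >= b and   # (8.21)
            du - (M - b) * p2 <= b and         # (8.22)
            du - (m + b) * p1 >= -b and        # (8.23)
            du - (M + b) * (1 - p1) <= -b and  # (8.24)
            du >= m * p1 and                   # (8.25)
            du <= M * p2)                      # (8.26)

# With phi1 + phi2 <= 1 (Eq. 8.27), each binary choice isolates one region:
regions = {(0, 1): [], (1, 0): [], (0, 0): []}
for k in range(-250, 251):
    du = k / 10.0                              # grid over [-25, 25]
    for p in regions:
        if feasible(du, *p):
            regions[p].append(du)

print(min(regions[(0, 1)]), max(regions[(0, 1)]))  # -> 2.0 20.0   (beta_u..M)
print(min(regions[(1, 0)]), max(regions[(1, 0)]))  # -> -20.0 -2.0 (m..-beta_u)
print(regions[(0, 0)])                             # -> [0.0]
```

The last line confirms that (8.25) and (8.26) together force Δu = 0 when both logical variables are zero.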
To guarantee a unique solution of the optimization procedure, an additional condition must be considered. Its objective is the separation of the solution regions, which confines the optimal solution to one region at each time instant:

φ1 + φ2 ≤ 1.    (8.27)

With this restriction, the three zones that cover all possible solutions for the control signal increments can easily be determined: {(φ1 = 1 ∧ φ2 = 0), (φ1 = 0 ∧ φ2 = 1), (φ1 = 0 ∧ φ2 = 0)} (see Figure 8.4 for details). These different configurations of the auxiliary variables limit the solution area, assuring a unique solution. The particular solutions can be expressed as constraints on the control signal increments:

φ1 = 1 ∧ φ2 = 0    φ1 = 0 ∧ φ2 = 1    φ1 = 0 ∧ φ2 = 0
Δu ≥ m             Δu ≥ βu            Δu ≥ m
Δu ≤ βu            Δu ≤ M             Δu ≤ βu
Δu ≥ m             Δu ≥ −βu           Δu ≥ −βu
Δu ≤ −βu           Δu ≤ M             Δu ≤ M
Δu ≥ m             Δu ≥ 0             Δu ≥ 0
Δu ≤ 0             Δu ≤ M             Δu ≤ 0

Hence, to obtain an optimal solution, the quadratic objective function, subject to the designed input constraints (Equations 8.21 through 8.27), has to be minimized. In this way, for each time step, new control actions are obtained, avoiding control signal increments smaller than βu in absolute value.

8.4.4 MIQP-Based Design for Control Signal Deadband

In this section, the reformulated hybrid input constraints presented in the previous section are integrated into the GPC optimization problem, and the resulting formulation is an MIQP optimization problem. First, for simplicity's sake, the case of control horizon Nu = 1 is introduced; subsequently, the extension to any value of the control horizon is shown. The previous set of constraints (Equations 8.21 through 8.27) can be rewritten in the following form:

Δu − (M − βu)φ2 ≤ βu,
Δu + (M + βu)φ1 ≤ M,
Δu − Mφ2 ≤ 0,
−Δu + (m + βu)φ1 ≤ βu,
−Δu − (m − βu)φ2 ≤ −m,
−Δu + mφ1 ≤ 0,
φ1 + φ2 ≤ 1,

and these constraints can be written in matrix form for the control horizon Nu = 1 as follows:

⎡  1     0         −(M − βu) ⎤             ⎡  βu ⎤
⎢  1    (M + βu)    0        ⎥  ⎡ Δu ⎤     ⎢  M  ⎥
⎢  1     0         −M        ⎥  ⎢ φ1 ⎥  ≤  ⎢  0  ⎥ .    (8.28)
⎢ −1    (m + βu)    0        ⎥  ⎣ φ2 ⎦     ⎢  βu ⎥
⎢ −1     0         −(m − βu) ⎥             ⎢ −m  ⎥
⎢ −1     m          0        ⎥             ⎢  0  ⎥
⎣  0     1          1        ⎦             ⎣  1  ⎦

In the case where the control horizon is Nu > 1, the corresponding matrices become

    ⎡  1 DI     0 DI          −(M − βu) DI ⎤        ⎡  βu dI ⎤
    ⎢  1 DI    (M + βu) DI     0 DI        ⎥        ⎢  M dI  ⎥
    ⎢  1 DI     0 DI          −M DI        ⎥        ⎢  0 dI  ⎥
C = ⎢ −1 DI    (m + βu) DI     0 DI        ⎥ , ρ =  ⎢  βu dI ⎥ ,
    ⎢ −1 DI     0 DI          −(m − βu) DI ⎥        ⎢ −m dI  ⎥
    ⎢ −1 DI     m DI           0 DI        ⎥        ⎢  0 dI  ⎥
    ⎣  0 DI     1 DI           1 DI        ⎦        ⎣  1 dI  ⎦

where DI is a diagonal matrix (Nu × Nu) of ones and dI is a vector of ones with size (Nu × 1). The previous matrices, which contain the linear inequality constraints, can be expressed in the general form

Cx ≤ ρ,    (8.29)

with x = [xc, xd]ᵀ, where xc represents the continuous variables Δu, and xd are those of the logical
variables φi. Introducing the matrices Q (3Nu × 3Nu) and l (3Nu × 1), defined as

    ⎡ H  O  O ⎤        ⎡ b ⎤
Q = ⎢ O  O  O ⎥ ;  l = ⎢ Ō ⎥ ,    (8.30)
    ⎣ O  O  O ⎦        ⎣ Ō ⎦

where O is an Nu × Nu matrix of zeros and Ō an Nu × 1 vector of zeros, the GPC optimization problem is expressed as

min_x  (1/2) xᵀQx + lᵀx,    (8.31)

subject to (8.29), where

x = [xc; xd],  xc ∈ R^nc,  xd ∈ {0, 1}^nd,

which is a MIQP optimization problem [9]. The optimization problem involves a quadratic objective function and a set of mixed linear inequalities. The logical variables appear linearly with the optimization variables; there is no product between the logical variables φi and the optimization variable Δu.

The classical set of constraints, represented by Ru ≤ r (see Section 8.4.2), can also be included in the optimization procedure. To fulfill this goal, it is necessary to introduce an auxiliary matrix R̆ of the form

R̆ = [R  O  O],

where O is a matrix of zeros with the same dimensions as R. Finally, all constraints that must be considered in the optimization procedure are grouped in

⎡ C ⎤      ⎡ ρ ⎤
⎣ R̆ ⎦ x ≤ ⎣ r ⎦ .
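The block structure of C and ρ in (8.28)–(8.29) is just the Nu = 1 pattern replicated through DI and dI, which makes it convenient to build with Kronecker products; and for very small horizons the MIQP (8.31) can be solved exactly by enumeration, which is useful for validating the formulation. A NumPy sketch (the values of m, M, βu, H, and l are illustrative, and the brute-force loop is a validation aid, not the solver used in the chapter):

```python
import itertools
import numpy as np

def deadband_constraints(Nu, m, M, b):
    """C and rho of Eq. (8.29): C @ [du; phi1; phi2] <= rho encodes the
    actuator deadband constraints (8.21)-(8.27) over the horizon Nu."""
    pattern = np.array([[ 1.0, 0.0,      -(M - b)],
                        [ 1.0, (M + b),   0.0    ],
                        [ 1.0, 0.0,      -M      ],
                        [-1.0, (m + b),   0.0    ],
                        [-1.0, 0.0,      -(m - b)],
                        [-1.0, m,         0.0    ],
                        [ 0.0, 1.0,       1.0    ]])
    rhs = np.array([b, M, 0.0, b, -m, 0.0, 1.0])
    # kron with I_Nu replicates each scalar entry as the D_I block of the
    # text; the right-hand side is stacked with d_I = ones(Nu).
    return np.kron(pattern, np.eye(Nu)), np.kron(rhs, np.ones(Nu))

def solve_by_enumeration(H, l, C, rho, Nu, grid):
    """Brute-force MIQP (8.31) for small Nu: enumerate the binaries and
    scan du over a grid, keeping the feasible minimizer."""
    best, best_cost = None, np.inf
    for phi in itertools.product([0.0, 1.0], repeat=2 * Nu):
        for du in itertools.product(grid, repeat=Nu):
            x = np.concatenate([du, phi])
            if np.all(C @ x <= rho + 1e-9):
                cost = 0.5 * x[:Nu] @ H @ x[:Nu] + l[:Nu] @ x[:Nu]
                if cost < best_cost:
                    best, best_cost = x[:Nu], cost
    return best

C, rho = deadband_constraints(Nu=1, m=-20.0, M=20.0, b=2.0)
grid = np.arange(-20.0, 20.5, 0.5)
# Unconstrained optimum du = 0.5 lies inside the band -> snapped to 0:
du = solve_by_enumeration(np.array([[2.0]]), np.array([-1.0]), C, rho, 1, grid)
# Unconstrained optimum du = 4.0 lies outside the band -> kept:
du2 = solve_by_enumeration(np.array([[2.0]]), np.array([-8.0]), C, rho, 1, grid)
print(du, du2)  # -> [0.] [4.]
```

This reproduces the intended behavior: increments inside (−βu, βu) are rejected by the binaries, while significant increments keep their optimal value.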
8.5 Complete Event-Based GPC

The complete event-based GPC control scheme considers both the sensor and actuator deadbands at the same time. To realize such an event-based control structure, it is necessary to build the control structure introduced for the sensor deadband approach. Additionally, the developed controllers consider the actuator deadband in the optimization procedure. In this way, an event-based GPC controller manages two sources of events, related to the process output and input, respectively. Thus, the complete event-based GPC control structure has two additional tuning parameters, βy and βu, which determine the deadbands for the sensor and the actuator, respectively. Each of these tuning parameters can be used to obtain the desired trade-off between control performance and the number of events for the sensor and the actuator, independently. In this configuration, the process output is sampled using the intelligent sensor, where the deadband sampling logic is implemented. When one of the conditions becomes true, the event generator transmits the current process output to the controller node. The usage of the sensor deadband allows one to reduce the process output events Ey. The received information in the event-based controller node triggers the event detector to calculate the time elapsed since the last event. The obtained time value is used as the current sampling time, and the corresponding controller is selected to calculate a new control signal. Because the virtual actuator deadband is also used in this configuration, the corresponding constraints on the control signal are active for all controllers from the set. In this way, the obtained control signal takes the deadband into account and makes the reduction of process input events Eu possible.

The complete event-based GPC merges the advantages of both previously introduced methods. In the resulting configuration, the process input and output events can be tuned independently using the actuator or the sensor deadband, respectively. The developed event-based control structure can therefore satisfy the particular requirements considered during the algorithm development procedure.

8.6 Applications

This section presents a simulation study of the analyzed event-based GPC algorithm, applied to the greenhouse temperature control problem. The application scenario under consideration allows us to highlight the most important features and properties of the event-based control strategy.

8.6.1 Greenhouse Process Description and System Models

Control problems in greenhouses are mainly focused on fertigation and climate systems. The fertigation control problem is usually solved by providing the amount of water and fertilizers required by the crop. The climate control problem consists in keeping the greenhouse temperature and humidity in specific ranges despite disturbances. Adaptive and feedforward controllers are commonly used for the climate control problem. Therefore, fertigation and climate systems can be represented as event-based control problems where control actions
FIGURE 8.5
Greenhouse process model with disturbances for diurnal temperature control. (Block diagram: the measurable disturbances soil temperature v1(k), solar radiation v2(k), wind velocity v3(k), and outside temperature v4(k) enter through blocks D1–D4, the vents opening u(k) through block B, and their summed effects give the inside temperature y(k).)
will be calculated and performed when required by the system, for instance, when water is required by the crop or when ventilation must be closed due to changes in outside weather conditions. Furthermore, as discussed above, with event-based control systems a new control signal is generated only when a change is detected in the system. That is, control signal commutations are produced only when events occur. This fact is very important for the actuator life and from an economical point of view (reducing the use of electricity or fuel), especially in greenhouses, where actuators are commonly composed of mechanical devices controlled by relays [17,21,27,53,56].

The experimental data used in this work were obtained from a greenhouse located at the Experimental Station of the CAJAMAR Foundation "Las Palmerillas" in Almería, Spain.∗ It has an average height of 3.6 m and a covered surface area of 877 m², and it is provided with natural ventilation. Additionally, it is equipped with a measurement system and software adapted to carry out process identification experiments and to implement different climatic control strategies. All greenhouse climatic variables were measured with a sampling period of 1 minute.

∗ Hierarchic Predictive Control of Processes in Semicontinuous Operation (Project DPI2004-07444-C04-04), Universidad de Almeria, Almeria, Spain. http://aer.ual.es/CJPROS/engindex.php.

The greenhouse interior temperature problem was considered as a MISO (multi-input single-output) system, where soil temperature, v1(k), solar radiation, v2(k), wind velocity, v3(k), outside temperature, v4(k), and vents-opening percentage, u(k), were the input variables, and the inside temperature, y(k), was the output variable (see Figure 8.5). It has to be pointed out that only the vents-opening variable can be controlled, whereas the rest of the variables are considered measurable disturbances. Additionally, wind velocity is characterized by rapid changes in its dynamics, whereas solar radiation is a combination of smooth (solar cycle) and fast dynamics (caused by passing clouds). Soil and outside temperatures were characterized by slow changes.

Considering all of the abovementioned process properties, the CARIMA model can be expressed as follows:

A(z−1)y(k) = z−dB(z−1)u(k − 1) + Σi=1..4 z−dDi Di(z−1)vi(k) + ε(k)/Δ.    (8.32)

Many experiments were carried out over several days, where a combination of PRBS and step-based input signals was applied at different operating points. It was observed that the autoregressive model with external input (ARX), using Akaike's Information Criterion (AIC), presented the best dynamic behavior adjustment to the real system. This fact was confirmed by cross-correlation and residual analysis, obtaining the
model's best fit of 92.53%. The following discrete-time polynomials were obtained as the estimation results around 25°C (see Figure 8.5) [40,41]:

A(z−1) = 1 − 0.3682z−1 + 0.0001z−2,
B(z−1)z−d = (−0.0402 − 0.0027z−1)z−1,
D1(z−1)z−dD1 = (0.1989 + 0.0924z−1 + 0.1614z−2)z−2,
D2(z−1)z−dD2 = (0.0001 + 0.0067z−1 + 0.0002z−2)z−1,
D3(z−1)z−dD3 = (−0.0002 − 0.3618z−1 + 0.0175z−2)z−1,
D4(z−1)z−dD4 = (0.0525 + 0.3306z−1 + 0.0058z−2)z−1.

For the controller configurations that do not consider measurable disturbances, a typical input–output configuration based on the previous model is used, with the following structure:

A(z−1)y(k) = z−dB(z−1)u(k − 1) + ε(k)/Δ.    (8.33)

8.6.2 Performance Indexes

To compare the classical GPC and event-based GPC approaches, specific performance indexes for this type of control strategy were considered [46]. As a first measure, the integrated absolute error (IAE) is used to evaluate the control accuracy:

IAE = ∫0∞ |e(t)| dt,

where e(t) is the difference between the reference r(t) and the process output y(t). The IAEP compares the event-based control with the time-based control used as a reference:

IAEP = ∫0∞ |yEB(t) − yTB(t)| dt,

where yTB(t) is the response of the time-based classical GPC. An efficiency measure for event-based control systems can then be defined as

NE = IAEP / IAE.

8.6.3 Sensor Deadband Approach

The event-based GPC control structure with sensor deadband was implemented with Tbase = 1 minute, Tmax = 4, nmax = 4, and thus Tf ∈ [1, 2, 3, 4]. The control horizon was set to Nunmax = 5 samples for all GPC controllers, as described by (8.11). The prediction horizon was set to N2c = 20 minutes in continuous time, and rule (8.10) was used to calculate the equivalent prediction horizon for each controller with a different sampling period. Finally, the control weighting factor was adjusted to λf = 1 to achieve the desired closed-loop dynamics for the fastest GPC controller, with T1 = Tbase = 1. Due to the physical limitation of the actuator, the event-based GPC controller considers constraints on the control signal, 0 < u(k) < 100%. Taking into account all design parameters, the control structure is composed of four controllers, one for each possible sampling frequency.

Furthermore, the effect of the βy parameter was evaluated for different values, βy = [0.1, 0.2, 0.5, 0.75, 1]. A periodic GPC with a sampling time of 1 minute was also simulated for comparison. The simulation was made for 19 days, and the numerical results are summarized in Table 8.1. It can be seen how the proposed algorithm achieves performance similar to the periodic GPC, but with an important reduction in events (changes in the control signal, which, in this case, are vent movements), especially for βy = 0.1 and βy = 0.2. Since a graphical result for 19 days would not allow one to see the results properly, representative days have been selected, which are shown in Figures 8.6 and 8.7, respectively.

These figures show the obtained results for the event-based GPC control structure with different values of βy and for the periodic GPC controller with a sampling time of 1 minute. As observed in Table 8.1, the classical implementation obtains the best control performance. However, the event-based GPC control structure with βy = 0.1 and βy = 0.2 presents almost the same performance results as the classical configuration, but with an important reduction in actuations.
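The identified polynomials of Section 8.6.1 can be exercised directly as a one-step-ahead predictor for the deterministic part of (8.32); a minimal sketch (the history-indexing convention is mine, and the noise term ε(k)/Δ is omitted):

```python
# Coefficients identified around 25 degC for the greenhouse model (Eq. 8.32).
A = (0.3682, -0.0001)                 # -a1, -a2 moved to the right-hand side
B = (-0.0402, -0.0027)                # acts on u(k-2), u(k-3), since d = 1
D = (((0.1989, 0.0924, 0.1614), 2),   # (numerator, delay d_Di) for v1..v4
     ((0.0001, 0.0067, 0.0002), 1),
     ((-0.0002, -0.3618, 0.0175), 1),
     ((0.0525, 0.3306, 0.0058), 1))

def predict(y, u, v):
    """One-step prediction of y(k): y and u hold past samples with
    y[-1] = y(k-1) and u[-1] = u(k-1); v holds the four disturbance
    histories with v[i][-1] = v_i(k)."""
    yk = A[0] * y[-1] + A[1] * y[-2]
    yk += B[0] * u[-2] + B[1] * u[-3]
    for (coef, d), vi in zip(D, v):
        for j, c in enumerate(coef):
            yk += c * vi[-1 - d - j]  # contribution c * v_i(k - d - j)
    return yk
```

With all inputs at zero and y(k−1) = 10, the prediction reduces to 0.3682 · 10, which is a quick sanity check on the autoregressive part.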
TABLE 8.1
Summary of the Performance Indexes for Event-Based GPC with Sensor Deadband Approach

            IAE    IAEP   Ey     NE     ΔIAE [%]   ΔEy [%]
TB          2275   —      5173   —      —          —
βy = 0.1    2292   715    4383   0.31   0.7        −15.3
βy = 0.2    2343   727    3829   0.31   3.0        −26.0
βy = 0.5    2799   1013   2656   0.36   23.0       −48.7
βy = 0.75   3283   1457   2161   0.44   44.3       −58.0
βy = 1      3391   1648   1828   0.49   49.1       −64.7
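The indexes of Section 8.6.2 and the relative-change columns of Table 8.1 reduce to elementary computations on the logged signals; a sketch (rectangular integration over the 1-minute samples is assumed):

```python
def iae(ref, y, dt=1.0):
    """Integrated absolute error (rectangular approximation of the integral)."""
    return sum(abs(r - yk) for r, yk in zip(ref, y)) * dt

def iaep(y_eb, y_tb, dt=1.0):
    """Integrated absolute deviation of the event-based response from
    the time-based GPC response used as reference."""
    return sum(abs(a - b) for a, b in zip(y_eb, y_tb)) * dt

def rel_change(value, baseline):
    """Percentage columns of Table 8.1 (Delta IAE, Delta Ey)."""
    return round((value - baseline) / baseline * 100, 1)

# Reproducing two entries of Table 8.1 from its IAE and Ey columns:
print(rel_change(2292, 2275))   # Delta IAE for beta_y = 0.1 -> 0.7
print(rel_change(4383, 5173))   # Delta Ey  for beta_y = 0.1 -> -15.3
# NE is IAEP/IAE, e.g. 715/2292 = 0.31 for beta_y = 0.1.
```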
FIGURE 8.6
Control results for the second day for event-based GPC with sensor deadband. (Panels over 10–18 hours: inside temperature (°C) with setpoint; vents opening (%); event occurrence for TB and βy = 0.1, 0.2, 0.5, 0.75, 1.)
The event-based control structure with βy = 0.5 presents a promising trade-off between control performance and the number of actuations for this process. The evolution of events is shown in the bottom graphs, where the signal amplitude corresponds to the controller that calculates the control signal for each event. From the obtained results for the event-based GPC control structure with βy = 0.75 and βy = 1, it can be deduced that for processes where some tolerance in setpoint tracking is permitted, an important reduction of actuations can be obtained. In the case of the greenhouse interior temperature control process, this reduction can be translated into economic savings, reducing electromechanical actuator wear and saving electrical energy.
FIGURE 8.7
Control results for the seventh day for event-based GPC with sensor deadband. (Panels over 11–16 hours: inside temperature (°C) with setpoint; vents opening (%); event occurrence for T = 1 and βy = 0.1, 0.2, 0.5, 0.75, 1.)
8.6.4 Actuator Deadband Approach

For this event-based GPC configuration, simulations were performed with the following system parameters: the prediction horizon was set to N2 = 10, the control horizon to Nu = 5, and the control weighting factor was adjusted to λ = 1 to achieve the desired closed-loop dynamics. The minimum and maximum control signal increments of the vents-opening percentage were set to m = −20% and M = 20%. Additionally, due to the physical limitation of the actuator, the event-based GPC controller considered constraints on the control signal, 0 < u(k) < 100%. In this configuration, the greenhouse process controlled variable is sampled with a fixed nominal sampling time of 1 minute, and the process is updated in event-based mode
FIGURE 8.8
Simulation results for the eighth day for event-based GPC with actuator deadband. (Panels over 10–15 hours: inside temperature (°C) with setpoint; vents opening (%); event occurrence for βu = 0.1, 0.5, 1, 2.)
taking into account the actuator deadband in the MIQP optimization. The actuator virtual deadband was set to βu = [0.1, 0.5, 1, 2] to check its influence on the control performance.

Figures 8.8 and 8.9 show control results for a clear day and for a day with passing clouds, respectively. It can be observed that for both days, the event-based controller with small values of the actuator deadband, βu = 0.1 and βu = 0.5, obtained almost the same performance as the time-based GPC. Control results for the eighth day, where the deadband was set to βu = 2, are characterized by poor control performance, due to the response offset introduced by the actuator deadband and the low variability of the controlled variable. Control results for the 15th day with the same deadband show a performance comparable with the other deadband configurations, despite the disturbance dynamics. The bottom graphs of Figures 8.8 and 8.9 show event occurrence for different values of βu. It can be observed that the number of events depends on the actuator deadband value and on the disturbance dynamics that affect the greenhouse interior temperature. For the eighth day, in the central part of the day between 12 and 14 hours, the number of events is reduced for all deadband values. Due to process inertia in this time range, a process controlled by an event-based GPC with an actuator deadband needs only a few control signal updates to achieve the desired setpoint. However, a time-based GPC process is updated every sampling instant, even if the error is very small. The same characteristic can be observed in Figure 8.9, where between 14 and 16 hours the event-based controllers produce fewer process updates, and the resulting control performance is kept at a good level.

Table 8.2 collects the performance indexes for the event-based GPC with an actuator deadband. As can be seen, even a small actuator deadband, βu = 0.1, results in an important reduction of process input events Eu, where savings of 9.8% are achieved compared with the time-based GPC. In this case, the IAE increases by only 0.1%,
FIGURE 8.9
Simulation results for the 15th day for event-based GPC with actuator deadband. (Panels over 11–17 hours: inside temperature (°C) with setpoint; vents opening (%); event occurrence for βu = 0.1, 0.5, 1, 2.)
TABLE 8.2
Summary of the Performance Indexes for Event-Based GPC with Actuator Deadband Approach

            IAE    IAEP   Eu     NE     ΔIAE [%]   ΔEu [%]
TB          2275   —      5173   —      —          —
βu = 0.1    2277   12     4665   0.01   0.1        −9.8
βu = 0.5    2352   300    2694   0.13   3.4        −47.9
βu = 1      2637   845    1517   0.32   15.9       −74.5
βu = 2      4457   2894   453    0.65   95.9       −91.0
and the IAEP values for individual days are close to zero, which means that the obtained control performance is the same as with the time-based technique. Interesting results are obtained for βu = 1, where the IAE increases by about 15.9%, but an event reduction of the order of 74% is obtained. It is observed that a direct relationship can be established between the actuator deadband value and the control performance: when the deadband increases, the control performance decreases. A similar relationship can be found between the number of process input events and the deadband value. However, for βu = 2, the obtained control results are characterized by an important precision loss, and for this reason the optimal deadband value depends on the individual process characteristics. This fact allows one to use βu as an additional tuning parameter to establish a trade-off between control performance and the number of system events. Overall, good control performance, even for a large value of βu, is achieved because the actuator deadband is considered in the optimization procedure. This is verified by the NE measurement. On the contrary, the disturbance dynamics have a strong influence on the control performance and on the number of events obtained for individual simulation days.
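The actuator-side event counting used throughout this section can be summarized in a short loop: the optimizer proposes increments, and only increments outside the deadband are transmitted and counted as Eu events. A sketch (names and sequence values are illustrative):

```python
def run_actuator_events(du_sequence, beta_u):
    """Apply a sequence of proposed increments; transmit to the actuator
    (one E_u event) only when the increment is outside the deadband."""
    u, events = 0.0, 0
    for du in du_sequence:
        if abs(du) >= beta_u:   # significant change: update the actuator
            u += du
            events += 1
        # otherwise hold the previous control value (no transmission)
    return u, events

u, eu = run_actuator_events([0.05, 2.0, -0.3, 1.5, 0.0], beta_u=1.0)
print(u, eu)  # -> 3.5 2
```

Only two of the five samples trigger an actuator update, which is the mechanism behind the ΔEu savings reported in Table 8.2.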
FIGURE 8.10
Control results for the 14th day for the complete event-based GPC scheme. (Panels over 10–17 hours: inside temperature (°C) with setpoint; vents opening (%); Eu and Ey event occurrence for the combinations of βu = 0.1, 0.5, 1 with βy = 0.2, 0.5, compared with TB.)
8.6.5 Complete Event-Based GPC

The last analyzed configuration considers the actuator and sensor deadbands at the same time and forms the complete event-based GPC control structure. In this case, the control system configuration is as follows: Tbase = 1 minute, Tmax = 4, nmax = 4, and thus Tf ∈ [1, 2, 3, 4]; the control horizon was set to Nunmax = 5 samples, the prediction horizon to N2c = 20, and the control signal weighting factor to λf = 1. Taking into account all the design requirements, the control structure is composed of four controllers, one for each possible sampling frequency. Additionally, each controller considers the actuator deadband constraints in
Event-Based Generalized Predictive Control 171
[Figure: two time plots (inside temperature in °C and vents opening in %, from 12 to 16 hours) followed by actuator (Eu) and sensor (Ey) event-occurrence plots for each deadband combination βu ∈ {0.1, 0.5, 1} and βy ∈ {0.2, 0.5}; the legend also includes the setpoint and the time-based (TB) response.]

FIGURE 8.11
Control results for the 16th day for the complete event-based GPC scheme.

MIQP optimization. In this configuration, the actuator virtual deadband was set to βu = [0.1, 0.5, 1], and the sensor deadband to βy = [0.2, 0.5]. The selected deadband values were chosen considering previous simulation results, in such a way that only those with good control performance were selected to check their influence on the control performance for the complete event-based GPC.

Six different simulations have been performed for all combinations of sensor and actuator deadbands.
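The sampling-period selection implied by this structure can be sketched as follows (a hedged reconstruction under assumed trigger rules, not the chapter's exact algorithm): the output is checked every Tbase, and an event fires either when the output leaves the βy band around the last transmitted value or when Tmax is reached; the elapsed time then indicates which of the four controllers should act.

```python
# Assumed sensor-deadband event logic for Tbase = 1 min and Tmax = 4 min, so
# the effective sampling period Tf is one of 1, 2, 3, or 4 minutes.
def check_event(y_now, y_last_event, elapsed, beta_y, t_max=4):
    """Return (event, f): f selects the controller tuned for period f*Tbase."""
    if abs(y_now - y_last_event) > beta_y or elapsed >= t_max:
        return True, elapsed
    return False, None

print(check_event(25.6, 25.0, 2, beta_y=0.5))  # (True, 2): deadband crossed
print(check_event(25.1, 25.0, 2, beta_y=0.5))  # (False, None): keep waiting
print(check_event(25.1, 25.0, 4, beta_y=0.5))  # (True, 4): Tmax time event
```

On an event with f = 2, for instance, the controller tuned for the 2-minute sampling period would be the one applied.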

TABLE 8.3
Performance Indexes for the Complete Event-Based GPC Scheme

Columns: Day; TB (IAE, Ey = Eu); then IAE, Ey, and Eu for each (βu, βy) pair, in the order (0.1, 0.2), (0.1, 0.5), (0.5, 0.2), (0.5, 0.5), (1, 0.2), (1, 0.5).
1 140 137 135 166 166 136 158 158 153 166 165 137 158 157 122 166 136 164 137 133
2 78 240 94 186 169 100 121 112 108 175 136 101 121 103 104 197 76 142 121 86
3 96 272 103 224 214 115 134 134 107 195 154 112 136 125 144 285 89 138 128 100
4 106 272 110 229 228 120 163 161 114 214 176 120 160 145 118 246 100 151 136 107
5 16 81 20 80 69 22 62 57 21 77 55 21 64 57 21 77 44 21 62 48
6 207 424 197 370 357 211 265 262 204 370 300 212 261 245 185 381 158 224 219 173
7 146 326 141 298 292 163 195 192 137 294 230 160 194 183 163 331 123 175 152 129
8 84 176 91 186 186 97 119 118 101 172 145 97 121 116 100 196 92 131 107 97
9 277 385 250 352 338 260 266 255 261 343 286 259 264 237 234 353 196 273 247 201
10 103 321 109 244 236 123 157 154 114 231 173 124 154 132 127 267 92 150 138 99
11 98 311 105 257 249 117 144 141 101 221 172 115 141 129 157 331 88 141 130 102
12 53 181 68 159 156 79 88 87 76 146 115 80 89 80 80 164 62 125 98 69
13 107 311 112 265 263 127 154 154 110 231 182 125 153 142 151 312 101 152 132 104
14 131 361 129 332 330 163 175 174 114 271 209 161 171 162 198 382 104 176 136 118
15 152 211 150 213 212 153 165 164 159 210 191 154 165 156 148 207 148 192 159 143
16 91 161 102 143 137 104 113 109 119 142 117 104 113 98 107 145 86 145 112 88
17 117 271 119 242 240 124 170 169 128 239 196 122 168 163 119 248 108 149 140 115
18 83 281 95 216 206 103 133 129 107 214 166 103 134 116 103 230 71 146 129 89
19 189 451 178 368 363 210 267 261 182 360 267 208 268 246 215 439 154 203 210 164
∑ 2275 5173 2307 4530 4411 2525 3049 2991 2416 4271 3435 2516 3035 2792 2597 4957 2028 2998 2693 2165
Δ [%] 1.4 −12.4 −14.7 11.0 −41.1 −42.2 6.2 −17.4 −33.6 10.6 −41.3 −46.0 14.2 −4.2 −60.8 31.8 −47.9 −58.1
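Under this reading of the table, the Δ [%] row is the change of each ∑ total relative to the time-based (TB) totals; the snippet below reproduces the entries quoted in the text for βu = 0.5, βy = 0.5 (values taken from the table above):

```python
# Recompute selected Δ [%] entries of Table 8.3 from the ∑ totals, using the
# time-based (TB) column as the baseline.
TB_IAE, TB_EVENTS = 2275, 5173        # TB totals: IAE and Ey = Eu

def delta_pct(value, baseline):
    """Relative change with respect to the TB baseline, in percent."""
    return round(100.0 * (value - baseline) / baseline, 1)

iae, ey, eu = 2516, 3035, 2792        # ∑ row for βu = 0.5, βy = 0.5
print(delta_pct(iae, TB_IAE))         # 10.6
print(delta_pct(ey, TB_EVENTS))       # -41.3
print(delta_pct(eu, TB_EVENTS))       # -46.0
```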

The obtained results for selected days and the analyzed event-based GPC configuration are shown in Figures 8.10 and 8.11. As can be observed, of all the combinations, the complete event-based GPC obtains good control performance, comparable with the time-based (TB) GPC controller. For the 14th day, characterized by disturbances with low changing dynamics, it can be observed that the application of sensor and actuator deadbands allows one to achieve the desired trade-off between control accuracy and the number of system events. The bottom graphs in Figure 8.10 show the event occurrence, where actuator Eu and sensor Ey events are shown. It can be observed that deadband values have an important influence on the number of events, as in the previous configurations. However, in this case, the number of input and output events is reduced simultaneously. The same characteristic is obtained for the second analyzed day. On this day, special attention should be given to the time period between 13 and 14 hours, when transients in solar radiation occur. Analyzing the event occurrence, it can be observed that the event-based GPC controller works with the fastest sampling, and for each output event the controller produces a new control signal value.

This behavior is the same for all deadband values used in the event-based GPC structure, due to an important error between the inside temperature and the reference. The opposite effect can be found between 14 and 15 hours, when the process reaches a steady state due to low disturbance variation. In this time period, the process output is monitored with slow sampling, and the majority of the event-based controllers use the Tmax sampling period. The number of system update events Eu depends on the actuator deadband value, and for this time period it is considerably reduced. The event-based GPC controller configuration with βu = 1 and βy = 0.5 produces only four system updates and keeps the control performance close to the TB configuration.

These results are confirmed by the performance indexes used for evaluation of the complete event-based GPC controller, presented in Table 8.3. The analyzed configuration is characterized by relatively good control performance, obtaining an IAE between 1.4% and 33.6% higher than for the TB configuration. The best trade-off between control accuracy and the number of events is obtained for βy = 0.5 with βu = 0.5. For this case, the IAE increases by about 10.6%, whereas output Ey and input Eu events are reduced by about 41.3% and 46%, respectively. As in the previous event-based GPC configurations, deadband values determine the overall control performance, and their selection should be carried out individually for each controlled process.

8.7 Summary

This chapter presented a straightforward algorithm to implement a GPC controller with event-based capabilities. The core idea of the proposed control scheme was focused on the sensor and actuator deadbands. Three configurations for an event-based GPC control structure were developed, considering a sensor deadband approach, an actuator deadband approach, as well as the complete event-based scheme. In the developed event-based GPC configurations, the deadband values were used as additional tuning parameters to achieve the desired performance.

The first strategy considers the sensor deadband approach, where the resulting control algorithm allows one to use standard GPC controllers in such a way that only resampling techniques have to be used to calculate the past process output and control signal values when events are detected. Then, this resampled information is used to calculate the control law for the current sampling period. Simple tuning rules were also proposed for GPC controllers to be used within this event-based framework, and such tuning rules were tested for an important number of industrial process models. Furthermore, it was shown how it is possible to achieve an important reduction in the control signal changes without losing too much performance quality.

The second approach takes into account the actuator deadband, in such a way that only control signal changes higher than an established deadband limit are considered. The resulting control algorithm uses a MIQP optimization to compute control actions for the controlled process, and small control changes are suppressed to zero in accordance with the established deadband. The developed event-based control system realizes process updates only when an important change is detected in the controlled process. Otherwise, the control signal values remain unchanged. The advantages presented by this approach were obtained at the cost of a small control performance deterioration and of solving a computationally costly online optimization algorithm. Furthermore, this proposal can be applied to control any system that can be controlled with a classical GPC algorithm. The last configuration for the event-based GPC controller considered the sensor deadband and the actuator deadband simultaneously. The resulting complete event-based GPC control scheme was characterized by a good trade-off between the control performance and the number of events. For this configuration, both deadband values were adjusted independently in order to obtain the desired performance and the number of events for the input and output of the process.

Finally, the chapter concluded with a simulation study, where the event-based control schemes were evaluated for the greenhouse temperature control problem. In this case, the simulations were carried out considering real data, where it was possible to see the advantages of the proposed event-based GPC controllers. For each evaluated event-based GPC configuration, it was possible to reduce the control signal changes, which in this case were associated with actuator wastage and economic costs, while maintaining acceptable performance results.

Acknowledgments

This work has been partially funded by the following projects and institutions: DPI2011-27818-C02-01/02 (financed by the Spanish Ministry of Economy and Competitiveness and ERDF funds); the UNED through a postdoctoral scholarship; and supported by the Cajamar Foundation.

Bibliography

[1] P. Albertos, M. Vallés, and A. Valera. Controller transfer under sampling rate dynamic changes. In Proceedings of the European Control Conference, Cambridge, UK, 2003.

[2] A. Anta and P. Tabuada. To sample or not to sample: Self-triggered control for nonlinear systems. IEEE Transactions on Automatic Control, 55(9):2030–2042, 2010.

[3] K. E. Årzén. A simple event-based PID controller. In Proceedings of the 14th IFAC World Congress, Beijing, China, 1999.

[4] K. J. Åström. Event based control. In Analysis and Design of Nonlinear Control Systems (A. Astolfi and L. Marconi, Eds.), pp. 127–148. Springer-Verlag, Berlin, Germany, 2007.

[5] K. J. Åström and B. M. Bernhardsson. Comparison of Riemann and Lebesgue sampling for first order stochastic systems. In Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, NV, 2002.

[6] K. J. Åström and B. Wittenmark. Computer Controlled Systems: Theory and Design. Prentice Hall, Englewood Cliffs, NJ, 1997.

[7] A. Bemporad, S. Di Cairano, E. Henriksson, and K. H. Johansson. Hybrid model predictive control based on wireless sensor feedback: An experimental study. International Journal of Robust and Nonlinear Control, 20(2):209–225, 2010.

[8] A. Bemporad, S. Di Cairano, and J. Júlvez. Event-based model predictive control and verification of integral continuous hybrid automata. In Hybrid Systems: Computation and Control (J. Hespanha and A. Tiwari, Eds.), Lecture Notes in Computer Science. Springer-Verlag, Berlin, Germany, 2006.

[9] A. Bemporad and M. Morari. Control of systems integrating logic, dynamics, and constraints. Automatica, 35(3):407–427, 1999.

[10] E. F. Camacho and C. Bordóns. Model Predictive Control. Springer-Verlag, London, 2007.

[11] Y. Can, Z. Shan-an, K. Wan-zeng, and L. Li-ming. Application of generalized predictive control in networked control system. Journal of Zhejiang University, 7(2):225–233, 2006.

[12] A. Cervin and K. J. Åström. On limit cycles in event-based control systems. In Proceedings of the 46th IEEE Conference on Decision and Control, New Orleans, LA, 2007.

[13] A. Cervin, D. Henriksson, B. Lincoln, J. Eker, and K. E. Årzén. How does control timing affect performance? IEEE Control Systems Magazine, 23(3):16–30, 2003.

[14] D. Chen, N. Xi, Y. Wang, H. Li, and X. Tang. Event based predictive control strategy for teleoperation via Internet. In Proceedings of the Conference on Advanced Intelligent Mechatronics, Xi'an, China, 2008.

[15] S. Durand and N. Marchand. Further results on event-based PID controller. In Proceedings of the European Control Conference, Budapest, Hungary, 2009.

[16] M. S. Fadali and A. Visioli. Digital Control Engineering—Analysis and Design. Academic Press, Burlington, VT, 2009.

[17] I. Farkas. Modelling and control in agricultural processes. Computers and Electronics in Agriculture, 49(3):315–316, 2005.

[18] P. J. Gawthrop and L. Wang. Intermittent model predictive control. Journal of Systems and Control Engineering, 221(7):1007–1018, 2007.

[19] G. Guo. Linear systems with medium-access constraint and Markov actuator assignment. IEEE Transactions on Circuits and Systems, 57(11):2999–3010, 2010.
[20] J. L. Guzmán, M. Berenguel, and S. Dormido. Interactive teaching of constrained generalized predictive control. IEEE Control Systems Magazine, 25(2):52–66, 2005.

[21] J. L. Guzmán, F. Rodríguez, M. Berenguel, and S. Dormido. Virtual lab for teaching greenhouse climatic control. In Proceedings of the 16th IFAC World Congress, Prague, Czech Republic, 2005.

[22] W. P. M. H. Heemels, J. H. Sandee, and P. P. J. van den Bosch. Analysis of event-driven controllers for linear systems. International Journal of Control, 81(4):571–590, 2008.

[23] T. Henningsson and A. Cervin. Comparison of LTI and event-based control for a moving cart with quantized position measurements. In Proceedings of the European Control Conference, Budapest, Hungary, 2009.

[24] T. Henningsson, E. Johannesson, and A. Cervin. Sporadic event-based control of first-order linear stochastic systems. Automatica, 44(11):2890–2895, 2008.

[25] W. Hu, G. Liu, and D. Rees. Event-driven networked predictive control. IEEE Transactions on Industrial Electronics, 54(3):1603–1613, 2007.

[26] J. Jugo and M. Eguiraun. Experimental implementation of a networked input-output sporadic control system. In Proceedings of the IEEE International Conference on Control Applications, Yokohama, Japan, 2010.

[27] R. King and N. Sigrimis. Computational intelligence in crop production. Computers and Electronics in Agriculture—Special Issue on Intelligent Systems in Crop Production, 31(1):1–3, 2000.

[28] E. Kofman and J. Braslavsky. Level crossing sampling in feedback stabilization under data rate constraints. In Proceedings of the 45th IEEE Conference on Decision and Control, San Diego, CA, 2006.

[29] D. Lehmann and J. Lunze. Event-based control: A state-feedback approach. In Proceedings of the European Control Conference, Budapest, Hungary, 2009.

[30] G. Liu, Y. Xia, J. Chen, D. Rees, and W. Hu. Networked predictive control of systems with random network delays in both forward and feedback channels. IEEE Transactions on Industrial Electronics, 54(3):1603–1613, 2007.

[31] J. Liu, D. Muñoz de la Peña, and P. D. Christofides. Distributed model predictive control of nonlinear systems subject to asynchronous and delayed measurements. Automatica, 46(1):52–61, 2010.

[32] N. Marchand. Stabilization of Lebesgue sampled systems with bounded controls: The chain of integrators case. In Proceedings of the 17th IFAC World Congress, Seoul, Korea, 2008.

[33] M. Miskowicz. Send-on-delta concept: An event-based data reporting strategy. Sensors, 6(1):49–63, 2006.

[34] M. Nagahara and D. E. Quevedo. Sparse representations for packetized predictive networked control. In Proceedings of the 18th IFAC World Congress, Milan, Italy, 2011.

[35] J. E. Normey-Rico and E. F. Camacho. Control of Dead-Time Processes. Springer-Verlag, London, 2007.

[36] A. Pawlowski, A. Cervin, J. L. Guzmán, and M. Berenguel. Generalized predictive control with actuator deadband for event-based approaches. IEEE Transactions on Industrial Informatics, 10(1):523–537, 2014.

[37] A. Pawlowski, I. Fernández, J. L. Guzmán, M. Berenguel, F. G. Acién, and J. E. Normey-Rico. Event-based predictive control of pH in tubular photobioreactors. Computers and Chemical Engineering, 65:28–39, 2014.

[38] A. Pawlowski, J. L. Guzmán, J. E. Normey-Rico, and M. Berenguel. Improving feedforward disturbance compensation capabilities in generalized predictive control. Journal of Process Control, 22(3):527–539, 2012.

[39] A. Pawlowski, J. L. Guzmán, J. E. Normey-Rico, and M. Berenguel. A practical approach for generalized predictive control within an event-based framework. Computers and Chemical Engineering, 41(6):52–66, 2012.

[40] A. Pawlowski, J. L. Guzmán, F. Rodríguez, M. Berenguel, and J. E. Normey-Rico. Predictive control with disturbance forecasting for greenhouse diurnal temperature control. In Proceedings of the 18th IFAC World Congress, Milan, Italy, 2011.

[41] A. Pawlowski, J. L. Guzmán, F. Rodríguez, M. Berenguel, and J. Sánchez. Application of time-series methods to disturbance estimation in predictive control problems. In Proceedings of the IEEE Symposium on Industrial Electronics, Bari, Italy, 2010.
[42] A. Pawlowski, J. L. Guzmán, F. Rodríguez, M. Berenguel, J. Sánchez, and S. Dormido. Event-based control and wireless sensor network for greenhouse diurnal temperature control: A simulated case study. In Proceedings of the 13th IEEE Conference on Emerging Technologies and Factory Automation, Hamburg, Germany, 2008.

[43] A. Pawlowski, J. L. Guzmán, F. Rodríguez, M. Berenguel, J. Sánchez, and S. Dormido. The influence of event-based sampling techniques on data transmission and control performance. In Proceedings of the 14th IEEE International Conference on Emerging Technologies and Factory Automation, Mallorca, Spain, 2009.

[44] A. Pawlowski, J. L. Guzmán, F. Rodríguez, M. Berenguel, J. Sánchez, and S. Dormido. Study of event-based sampling techniques and their influence on greenhouse climate control with wireless sensor networks. In Factory Automation (J. Silvestre-Blanes, Ed.), pages 289–312. InTech, Vukovar, Croatia, 2010.

[45] B. Picasso, S. Pancanti, A. Bemporad, and A. Bicchi. Receding-horizon control of LTI systems with quantized inputs. In Analysis and Design of Hybrid Systems (S. Engell and J. Zaytoon, Eds.), pages 259–264. Elsevier, Oxford, UK, 2003.

[46] J. Ploennigs, V. Vasyutynskyy, and K. Kabitzsch. Comparative study of energy-efficient sampling approaches for wireless control networks. IEEE Transactions on Industrial Informatics, 6(3):416–424, 2010.

[47] D. E. Quevedo, J. Østergaard, and D. Nešić. Packetized predictive control of stochastic systems over bit-rate limited channels with packet loss. IEEE Transactions on Automatic Control, 56(12):2855–2868, 2011.

[48] M. Rabi and J. S. Baras. Level-triggered control of a scalar linear system. In Proceedings of the 15th IEEE Mediterranean Conference on Control and Automation, Athens, Greece, 2007.

[49] M. Rabi and K. H. Johansson. Event-triggered strategies for industrial control over wireless networks. In Proceedings of the 4th Annual International Conference on Wireless Internet, Maui, HI, 2008.

[50] J. A. Rossiter. Model Based Predictive Control: A Practical Approach. CRC Press, Boca Raton, FL, 2003.

[51] J. Sánchez, A. Visioli, and S. Dormido. A two-degree-of-freedom PI controller based on events. Journal of Process Control, 21(4):639–651, 2011.

[52] J. H. Sandee, W. P. M. H. Heemels, and P. P. J. van den Bosch. Event-driven control as an opportunity in the multidisciplinary development of embedded controllers. In Proceedings of the American Control Conference, Portland, OR, 2005.

[53] N. Sigrimis, P. Antsaklis, and P. Groumpos. Control advances in agriculture and the environment. IEEE Control Systems Magazine, 21(5):8–12, 2001.

[54] M. Srinivasarao, S. C. Patwardhan, and R. D. Gudi. Nonlinear predictive control of irregularly sampled multirate systems using blackbox observers. Journal of Process Control, 17(1):17–35, 2007.

[55] S. W. Su, H. Nguyen, and R. Jarman. Model predictive control of gantry crane with input nonlinearity compensation. International Journal of Aerospace and Mechanical Engineering, 4(1):34–38, 2010.

[56] G. van Straten. What can systems and control theory do for agriculture? In Proceedings of the 2nd IFAC International Conference AGRICONTROL, Osijek, Croatia, 2007.

[57] P. Varutti, T. Faulwasser, B. Kern, M. Kogel, and R. Findeisen. Event-based reduced attention predictive control for nonlinear uncertain systems. In Proceedings of the IEEE International Symposium on Computer-Aided Control System Design, Yokohama, Japan, 2010.

[58] P. Varutti, B. Kern, T. Faulwasser, and R. Findeisen. Event-based model predictive control for networked control systems. In Proceedings of the 48th IEEE Conference on Decision and Control, Shanghai, China, 2009.

[59] V. Vasyutynskyy and K. Kabitzsch. Simple PID control algorithm adapted to deadband sampling. In Proceedings of the 12th IEEE Conference on Emerging Technologies and Factory Automation, Patras, Greece, 2007.

[60] V. Vasyutynskyy, A. Luntovskyy, and K. Kabitzsch. Limit cycles in PI control loops with absolute deadband sampling. In Proceedings of the 18th Crimean Conference on Microwave and Telecommunication Technology, Sevastopol, Crimea, 2008.

[61] Z. Yao, Y. Sun, and N. H. El-Farra. Resource-aware scheduled control of distributed process systems over wireless sensor network. In Proceedings of the American Control Conference, Baltimore, MD, 2010.

[62] H. Zabiri and Y. Samyudia. A hybrid formulation and design of model predictive control for system under actuator saturation and backlash. Journal of Process Control, 16(7):693–709, 2006.
9
Model-Based Event-Triggered Control of Networked Systems

Eloy Garcia
Infoscitex Corp.
Dayton, OH, USA

Michael J. McCourt
University of Florida
Shalimar, FL, USA

Panos J. Antsaklis
University of Notre Dame
Notre Dame, IN, USA

CONTENTS
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
9.2 The Model-Based Approach for Networked Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
9.2.1 Model-Based Networked Control Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
9.2.2 Model-Based Event-Triggered Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
9.3 Linear Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
9.3.1 MB-ET Control of Uncertain Linear Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
9.3.2 MB-ET Control of Uncertain Linear Systems over Uncertain Networks . . . . . . . . . . . . . . . . . . . . . . . . . . 182
9.3.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
9.4 Nonlinear Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
9.4.1 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
9.4.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
9.5 MB-ET Control of Discrete-Time Nonlinear Systems Using Dissipativity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
9.5.1 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
9.5.2 Dissipativity Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
9.5.3 Output Boundedness Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
9.5.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
9.6 Additional Topics on Model-Based Event-Triggered Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
9.6.1 Parameter Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
9.6.2 Optimal MB-ET Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
9.6.3 Distributed Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
9.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
ABSTRACT The model-based event-triggered (MB-ET) control framework is presented in this chapter. This framework makes explicit use of existing knowledge of the plant dynamics to enhance the performance of the system; it also makes use of event-based communication strategies to use network communication resources more wisely. This chapter presents extensive results that consider uncertain continuous-time linear and nonlinear systems. Particular emphasis is placed on uncertain nonlinear discrete-time systems subject to external disturbances where only output measurements are available. Additional topics concerning the MB-ET control framework are also described. Multiple examples are shown throughout this chapter in order to illustrate the advantages and functionality of this approach.

9.1 Introduction

In control systems, the success of control methodologies in stabilizing and providing desired performance,

in the presence of parameter uncertainties and external It turns out that stability margins, controller robust-
disturbances, has been mainly due to the use of contin- ness, and other stability and performance measures may
uous feedback information. Closed-loop feedback con- be significantly improved when knowledge of the plant
trollers have the property to change the response of dynamics is explicitly used to predict plant behavior.
dynamical systems and, when properly designed, are Note that the plant model is always used to design
able to provide desired system behavior under a class controllers in standard control design. The difference
of uncertainties and disturbances. here is that the plant model is used explicitly in the
Robustness to parameter uncertainties and external controller implementation to great advantage. This is
disturbances is deteriorated when continuous feed- possible today because existing inexpensive computa-
back is not available. The main reason to implement a tion power allows the simulation of the model of the
feedback loop and provide sensor measurements to the plant in real time.
controller is to obtain some information about a dynam- The MB-ET control framework represents an exten-
ical system, which can never be perfectly described and sion of a class of networked systems called model-based
its dynamics can never be fully known. In the absence networked control systems (MB-NCSs). In MB-NCSs, a
of uncertainties and disturbances, there will be no need nominal model of the plant dynamics is used to esti-
for feedback information (assuming the initial condi- mate the state of the plant during the intervals of time
tions are known or after the first transmitted feedback that feedback measurements are unavailable to the con-
measurement) since an open-loop, or feed-forward, troller node. It has been common practice in MB-NCSs
control input that provides a desired system response to reduce network usage by implementing periodic
can be obtained based on the, known with certainty, updates [1]. However, time-varying stochastic update
plant parameters. intervals have been analyzed as well [2]. In the MB-ET
Design, analysis, and implementation of closed-loop control framework, the update instants are now decided
feedback control systems have translated into success- based on the current conditions and response of the
ful control applications in many different areas. How- underlying dynamical system.
ever, recent control applications make use of limited The contents of the present chapter are as follows. In
bandwidth communication networks to transmit infor- Section 9.2 we describe the MB-NCS architecture and the
mation, including feedback measurements from sen- conditions to stabilize a linear system under such a con-
sor nodes to controller nodes. The use of either wired figuration. The MB-ET control framework is also intro-
or wireless communication networks has been rapidly increasing in scenarios involving different subsystems distributed over large areas, such as in many industrial processes. Networked systems are also common in automobiles and aircraft, where each vehicle may contain hundreds of individual systems, sensors, and controllers exchanging information through a digital communication channel. Thus, it becomes important at a fundamental level to study the effects of uncertainties and disturbances on systems that operate without continuous feedback measurements.

In this chapter, we present the MB-ET control framework for networked control systems (NCSs). The MB-ET control framework makes explicit use of existing knowledge of the plant dynamics, encapsulated in the mathematical model of the plant, to enhance the performance of the system; it also makes use of event-based communication strategies to use network communication resources more wisely.

The performance of an NCS depends on the performance of the communication network in addition to traditional control systems performance measures. The bandwidth of the communication network used by the control system is of major concern, because other control and data acquisition systems will typically be sharing the same digital communication network.

duced in this section. Stabilization of uncertain linear systems using MB-ET control is discussed in Section 9.3. Continuous-time nonlinear systems are considered in Section 9.4. In Section 9.5, we study discrete-time nonlinear systems that are subject to parameter uncertainties and external disturbances using dissipativity tools and MB-ET control. Additional extensions to the MB-ET control framework are described in Section 9.6. Finally, concluding remarks are made in Section 9.7.

9.2 The Model-Based Approach for Networked Systems

9.2.1 Model-Based Networked Control Systems

One of the main problems to be addressed when considering an NCS is the limited bandwidth of the network. In point-to-point control systems, it is possible to send continuous measurements and control inputs. Bandwidth and dynamic responses of a plant are closely related. The faster the dynamics of the plant, the larger is its bandwidth. This usually translates to large frequency content
Model-Based Event-Triggered Control of Networked Systems 179
on the controlling signal and a continuous exchange of information between the plant and the controller. In the case of discrete-time plants, the controller acts at spaced instants of time, and transmission of continuous signals is not required. However, some discrete-time systems may have a fast internal sampling, which results in large bandwidth requirements in terms of the network characteristics. In this section, we describe the MB-NCS architecture, and we derive necessary and sufficient stabilizing conditions for linear time-invariant systems when periodic updates are implemented. In the following sections, we use similar architectures for NCSs, but we implement event-triggered control techniques to determine the transmission time instants.

We consider the control of a linear time-invariant dynamical system where the state sensors are connected to controllers/actuators via a network. The main goal is to reduce the number of transmitted messages over the network using knowledge of the plant dynamics. Specifically, the controller uses an explicit model of the plant that approximates the plant dynamics and makes possible the stabilization of the plant even under slow network conditions. Although in principle we can use the same framework to study the problem of packet dropouts in NCSs, the aim here is to purposely avoid frequent broadcasting of unnecessary sensor measurements so as to reduce traffic in the network as much as possible, which in turn reduces the presence of the problems associated with high network load, such as packet collisions and network-induced delays. The main idea is to update the state of the model using the actual state of the plant provided by the sensor. The rest of the time, the control action is based on a plant model that is incorporated in the controller/actuator and is running open loop for a period of h seconds. The control architecture is shown in Figure 9.1.

FIGURE 9.1
Model-based networked control systems architecture. [Diagram: the controller node contains the model, whose state x̂ generates the input u for the plant; the sensor measures x and sends updates over the network to reset the model state.]

In our control architecture, having knowledge of the plant at the actuator side enables us to run the plant in open loop, while the update of the model state provides the closed-loop information needed to overcome model uncertainties and plant disturbances. In this section, we provide necessary and sufficient conditions for stability that result in a maximum update time, which depends not only on the model inaccuracies but also on the designed control gain.

If all the states are available for measurement, then the sensor can send this information through the network to update the state of the model. The original plant may be open-loop unstable. We assume that the frequency at which the network updates the state in the controller is constant and that the compensated model is stable, which is typical in control systems. For simplicity, we also assume that the transportation delay is negligible, which is completely justifiable in most of the popular network standards like CAN bus or Ethernet. The case of network delays and periodic updates is addressed in [1]. In Section 9.3.2, we consider network delays when using event-based communication. The goal is to find the largest constant update period at which the network must update the model state in the controller for stability; that is, we are seeking an upper bound for the update time h. Consider the control system of Figure 9.1, where the plant and the model are described, respectively, by

ẋ = Ax + Bu,   (9.1)

x̂˙ = Âx̂ + B̂u,   (9.2)

where x, x̂ ∈ Rn and u ∈ Rm. The control input is given by u = Kx̂. The state error is defined as

e = x − x̂,   (9.3)

and represents the difference between the plant state and the model state. The modeling error matrices Ã = A − Â and B̃ = B − B̂ represent the difference between the plant and the model. The update time instants are denoted by ti, where

ti − ti−1 = h,   (9.4)

for i = 1, 2, . . . (in this section, h is a constant). The choice of h being a constant is simple to implement and also results in a simple analysis procedure, as shown below. However, event-triggered updates can potentially bring better benefits, and they will be addressed in the following sections.

Since the model state is updated at every time instant ti, then e(ti) = 0, for i = 1, 2, . . .. This resetting of the state error at every update time instant is a key characteristic of our control system. Define the augmented state vector z(t) and the augmented state matrix Λ:

z(t) = [x(t); e(t)],   Λ = [A + BK, −BK; Ã + B̃K, Â − B̃K].   (9.5)
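As a concrete illustration of the periodic update scheme in (9.1) through (9.4), the following sketch simulates a scalar plant and its mismatched model, with the model state reset every h seconds. All numerical values (a, b, â, b̂, K, h) are assumed for the illustration and do not come from the chapter.

```python
# Periodic model-based NCS, scalar illustration: between updates the model
# runs open loop and generates u = K*xhat; every h seconds the network
# transmits x and resets the model state, so e(t_i) = 0.
a, b = 1.0, 1.0            # true (open-loop unstable) plant -- assumed values
ahat, bhat = 0.9, 1.0      # slightly mismatched model
K = -3.0                   # makes the closed-loop model ahat + bhat*K = -2.1 stable
h = 0.2                    # constant update period, equation (9.4)
dt = 1e-3                  # Euler integration step

def simulate(T=10.0):
    x, xhat = 1.0, 1.0
    steps, period = int(T / dt), int(h / dt)
    for k in range(steps):
        if k > 0 and k % period == 0:
            xhat = x                            # periodic network update: e reset to 0
        u = K * xhat
        x    += dt * (a * x + b * u)            # plant (9.1)
        xhat += dt * (ahat * xhat + bhat * u)   # model (9.2), runs open loop
    return x

print(abs(simulate()))  # decays toward zero for a small enough h
```

Shrinking h speeds up convergence, while a sufficiently large h lets the model mismatch destabilize the loop; the precise boundary is what the stability condition of this section characterizes.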
Thus, for t ∈ [ti, ti+1), we have that u = Kx̂, and using the error definition (9.3), the augmented system dynamics can be described by

ż(t) = [ẋ(t); ė(t)] = [A + BK, −BK; Ã + B̃K, Â − B̃K][x(t); e(t)] = Λz(t),   (9.6)

for t ∈ [ti, ti+1), with updates given by

z(ti) = [x(ti); 0] = [x(ti⁻); 0],   (9.7)

where ti − ti−1 = h.

During the time interval t ∈ [ti, ti+1), the system response is given by

z(t) = [x(t); e(t)] = e^{Λ(t−ti)}[x(ti); 0] = e^{Λ(t−ti)}z(ti),   (9.8)

and at time instants ti we have that z(ti) = [x(ti); 0], that is, the error is reset to zero. We can represent this by

z(ti) = [I, 0; 0, 0] z(ti⁻).   (9.9)

Due to the periodicity of the state updates, the extended state z(t) can be expressed in terms of the initial condition x(t0) as follows:

z(t) = e^{Λ(t−ti)} ([I, 0; 0, 0] e^{Λh} [I, 0; 0, 0])^i z0,   (9.10)

for t ∈ [ti, ti+1), where h = ti − ti−1 and z0 = [x(t0); e(t0)].

Theorem 9.1

The system described by (9.6) with updates (9.7) is globally exponentially stable around the solution z = [x; e] = [0; 0] if and only if the eigenvalues of [I, 0; 0, 0] e^{Λh} [I, 0; 0, 0] are strictly inside the unit circle.

PROOF See [1] for the proof.

The MB-NCS architecture has been used to deal with different problems commonly encountered in NCSs. In [1], the output feedback scenario was addressed by implementing a state observer at the sensor node. Network-induced delays were also considered within the MB-NCS architecture in [1]. The case where network delays are greater than the update period h was studied in [3].

An important extension of this work considered time-varying updates [2]. In that case, two stochastic scenarios were studied: in the first, the assumption is that transmission times are identically independently distributed; in the second, transmission times are driven by a finite Markov chain. In both cases, conditions were derived for almost sure (probability 1) and mean square stability. Nonlinear systems have also been considered using the MB-NCS configuration. Different authors have provided stability conditions and stabilizing update rates for nonlinear MB-NCSs with and without uncertainties and for nonlinear MB-NCSs with time delays [4,5]. Different authors have dealt with similar problems in different types of applications. Motivated by activities that involve human operators, the authors of [6] point out that, typically, a human operator scans information intermittently and operates the controlled system continuously; the intermittent characteristic in this case refers to the same situation presented in [1], that is, a single measurement is used to update the internal model and generate the control input. For a skillful operator, the information is scanned less frequently. Between update intervals, the control input is generated the same way as in MB-NCSs; that is, an imperfect model of the system is used to generate an estimate of the state, and periodic measurements are used to update the state of this model. In the output feedback case, a stochastic estimator is implemented with the assumption that the statistical properties of the measurement noise are known. In both cases, the authors provide conditions for stability based on the length of the sampling interval. The authors of [7] also use a model that produces the input for the plant (possibly nonlinear) and consider a network on both sides of the plant. The actuator is assumed to have an embedded computer that decodes and resynchronizes the large packets sent from the controller that contain the predicted control input obtained by the model. In [8], Hespanha et al. use differential pulse-code modulation techniques together with a model of the plant dynamics to reduce the amount of bandwidth needed to stabilize the plant. Both the sensor and the controller/actuator have a model of the plant, which is assumed to be exact, and they are run simultaneously. Stabilizing conditions for MB-NCSs under different quantization schemes were presented in [9]. Typical static quantizers, such as uniform quantizers and logarithmic quantizers, were considered in [9], and the design of MB-NCSs using dynamic quantizers was also addressed in the same paper.

9.2.2 Model-Based Event-Triggered Control

The MB-ET control framework makes use of the MB-NCS architecture described in Section 9.2.1 and the event-triggered control paradigm largely discussed in this book. The main goal in the MB-ET control
framework is to adjust the update intervals based on the current state error and to send a measurement to update the state of the model only when it is necessary. This means that the update time intervals are no longer constant. The update time intervals are nonperiodic and depend on the current plant response. One key difference of the MB-ET control approach with respect to traditional event-triggered control techniques is that the transmitted measurement does not remain constant between update intervals. The transmitted measurement is used to update the state of the model x̂, and, similar to Section 9.2.1, the model state is used to compute the control input defined as u = Kx̂.

The application of event-triggered control to the MB-NCS produces many advantages compared to a periodic implementation. For instance, nonlinearities and inaccuracies that affect the system, are difficult or impossible to model, and may change over time or under different physical conditions (temperature, different load in a motor, etc.) may be handled more efficiently by tracking the state error than by updating the state at a constant rate. Also, the method presented here is robust under random packet loss. If the sensor sends data but the model state is not updated because of a packet dropout, the state error will grow rapidly above the threshold. In this case, the following event is triggered sooner, in general, than in the case when the previous measurement was successfully received. However, under a fixed transmission rate, if a packet is lost, the model will need to wait until the next update time to receive the feedback data, thus compromising the stability of the system.

The implementation of an event-based rule in an MB-NCS represents a very intuitive way to save network resources. It also considers the performance of the closed-loop real system. The implementation of the model to generate an estimate of the plant state, and the use of that estimate to control the plant, results in significant savings in bandwidth. It is also clear that the accuracy of the estimate depends on factors such as the size of the model uncertainties. One of the results in the present chapter is that these error-based updates provide more independence from model uncertainties when performing this estimation. When the error is small, which means that the state of the model is an accurate estimation of the real state, then we save energy, effort, and bandwidth by electing not to send measurements for all the time intervals in which the error remains small. The MB-ET control architecture is similar to the one shown in Figure 9.1. One difference is that copies of the nominal model parameters Â, B̂ and of the control gain K are now implemented in the sensor node, in order to obtain the model state x̂ at the sensor node and be able to compute the state error. The state error is compared to a threshold to determine if an event is triggered. When an event is triggered, the current measurement of the state of the plant is transmitted, and the sensor model is updated using the same state measurement.

The MB-ET control framework addressed in this chapter was introduced in [10]. This approach was used for the control of networked systems subject to quantization and network-induced delays [11]. In [12], updates of model states and model parameters are considered. The MB-ET approach has also been used for the stabilization of coupled systems [13], coordinated multi-agent tracking [14], and synchronization of multi-agent systems with general linear dynamics [15].

A similar model-based approach has been developed by Lunze and Lehmann [16]. In their approach, the model is assumed to match the dynamics of the system exactly; however, the system is subject to unknown input disturbances. The main idea of the approach in [16] is the same as in this chapter, that is, to use the nominal model to generate estimates of the current state of the system. Since the system is subject to an unknown disturbance and the model is executed with zero input disturbance, a difference between plant and model states is expected, and the sensor updates transmitted over a digital communication network are used to reset this difference between the states of the plant and of the model. The same authors have extended this approach to consider the output feedback, quantization, and network delay cases. The authors of [17] also discussed a model-based approach for stabilization of linear discrete-time systems with input disturbances, using periodic event-triggered control.

The work in [18] also offers an approach to the problem of reducing bandwidth utilization by making use of a plant model; here, the update of the model is event driven. The model is updated when any of the states differs from the computed value by more than a certain threshold. Some stability and performance conditions are derived as functions of the plant, the threshold, and the magnitude of the plant-model mismatch.

9.3 Linear Systems

The main advantage that the event-triggered feedback strategy offers compared with the common periodic-update implementation is that the time interval between updates can be considerably increased, especially when the state of the system is close to its equilibrium point, thus releasing the communication network for other tasks. We will assume in this section that the communication delay is negligible. The approach in this section
is to compute the norm of the state error and compare it to a positive threshold in order to decide if an update of the state of the model is needed. When the model is updated, the error is equal to zero; when it becomes greater than the threshold, the next update is sent.

We consider a state-dependent threshold similar to [19], where the norm of the state error is compared to a function of the norm of the state of the plant; in this way, the threshold value is not fixed anymore, and, in particular, it can be reduced as we approach the equilibrium point of the system, assuming that the zero state is the equilibrium of the system. Traditional event-triggered control techniques [19–21] consider systems controlled by static gains that generate piecewise constant inputs, due to the fact that the update is held constant in the controller. The main difference in this section is that we use a model-based controller (i.e., a model of the system and a static gain); the model provides an estimate of the state between updates, and the model-based controller provides a control input for the plant that does not remain constant between measurement updates.

9.3.1 MB-ET Control of Uncertain Linear Systems

Let us consider the plant and model described by (9.1) and (9.2). Let us define the state error e = x̂ − x. Using the control input u = Kx̂, we obtain

ẋ = (A + BK)x + BKe,   (9.11)

and

x̂˙ = (Â + B̂K)x̂,   (9.12)

for t ∈ [ti, ti+1). We choose a control gain K that renders the closed-loop model (9.12) globally asymptotically stable. We proceed to choose a quadratic Lyapunov function V = x^T Px, where P is symmetric positive definite and is the solution of the closed-loop model Lyapunov equation

(Â + B̂K)^T P + P(Â + B̂K) = −Q,   (9.13)

where Q is a symmetric positive definite matrix. Let us first analyze the case when B̂ = B for simplicity. Also assume that the following bound on the uncertainty holds: ‖Ã^T P + PÃ‖ ≤ Δ < q, where Ã = A − Â and q = σ(Q) is the smallest singular value of Q in the model Lyapunov equation (9.13). This bound can be seen as a measure of how close A and Â should be.

The next theorem provides conditions on the error and its threshold value so that the networked system is asymptotically stable. The error threshold is defined as a function of the norm of the state and of Δ, which is a bound on the uncertainty in the state matrix A. Similarly, the occurrence of an error event leads the sensor to send the current measurement of the state of the plant, which is used in the controller to update the state of the model.

Theorem 9.2

Consider the system (9.1) with input u = Kx̂. Let the feedback be based on error events using the following relation:

‖e‖ > (σ(q − Δ)/b)‖x‖,   (9.14)

where b = ‖K^T B̂^T P + PB̂K‖, 0 < σ < 1, and ‖Ã^T P + PÃ‖ ≤ Δ < q. Let the model be updated when (9.14) is first satisfied. Then the system is globally asymptotically stable.

PROOF Proof is given in [10].

9.3.2 MB-ET Control of Uncertain Linear Systems over Uncertain Networks

In this section, we design stabilizing thresholds taking into account the availability not of the real variables but only of quantized measurements. Additionally, we design stabilizing thresholds using the model-based event-triggered framework for networked systems affected by both quantization and time delays.

The measured variables have to be quantized in order to be represented by a finite number of bits, so as to be used in processor operations and to be transmitted through a digital communication network. For these reasons, it becomes necessary to study the effects of quantization error on networked systems and on any computer-implemented control application. In addition, we want to emphasize two important implications of quantization in event-triggered control. First, an important step in event-triggered control strategies is that the model-plant state error is set to zero at the update instants. When using quantization, this is no longer the case, because we use the quantized measurement of the plant state to update the state of the model, and this measurement is not, in general, the same as the real state of the plant. Second, in traditional event-triggered control techniques, the updates are triggered by comparing the norm of the state, which is not exactly available due to quantization errors, to the norm of the state error, which is not exactly available since it is a function of the real state of the plant. The problem in those approaches is that stability of the system is directly related to nonquantized measurements that are assumed to be known with certainty.
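To make the error-event rule of Theorem 9.2 concrete, here is a minimal scalar simulation in which the trigger |e| > c|x| plays the role of (9.14), with the constant c standing in for σ(q − Δ)/b; all numerical values are assumed, illustrative ones rather than chapter data.

```python
# Scalar MB-ET loop: the sensor transmits only when the relative-error
# condition |e| > c*|x| holds; between events the controller integrates
# its model open loop.  All numbers are assumed for illustration.
a, b = 1.0, 1.0            # true plant, open-loop unstable
ahat, bhat = 0.9, 1.0      # mismatched model
K = -3.0                   # stabilizes the model: ahat + bhat*K = -2.1
c = 0.05                   # assumed relative threshold (role of sigma*(q-Delta)/b)
dt = 1e-3

def run(T=10.0):
    x, xhat = 1.0, 1.0
    events, steps = 0, int(T / dt)
    for _ in range(steps):
        if abs(x - xhat) > c * abs(x):       # triggering condition, scalar case
            xhat = x                         # event: transmit x, reset model state
            events += 1
        u = K * xhat
        x    += dt * (a * x + b * u)         # plant (9.1)
        xhat += dt * (ahat * xhat + bhat * u)  # model (9.2)
    return x, events, steps

x_final, events, steps = run()
print(abs(x_final), events, steps)  # state decays; far fewer events than steps
```

The event count stays a small fraction of the number of integration steps because the model mismatch drives the relative error up only slowly, which is exactly the bandwidth saving the framework is after.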
FIGURE 9.2
Logarithmic quantizer function.

The aim in this section is to find triggering conditions based on the available quantized variables that ensure asymptotic stability in the presence of quantization errors. The type of quantizer that we are going to use in this section is the logarithmic quantizer. A logarithmic quantizer function is shown in Figure 9.2.

We define the logarithmic quantizer as a function q : Rn → Rn with the following property:

‖z − q(z)‖ ≤ δ‖z‖,   z ∈ Rn, δ > 0.   (9.15)

Using the logarithmic quantizer defined in (9.15), we have that at the update instants ti, i = 1, 2, . . ., the state of the model is updated using the quantized measurement, that is,

q(x(ti)) → x̂(ti).   (9.16)

Define the quantized model-plant state error:

eq(t) = x̂(t) − q(x(t)),   (9.17)

where q(x(t)) is the quantized value of x(t) at any time t ≥ 0 using the logarithmic quantizer (9.15). Note that q(x) and eq are the available variables that can be used to compute the triggering condition. Also note that eq(ti) = 0; that is, the quantized model-plant state error is set to zero at the update instants according to the update (9.16).

Consider a stable closed-loop nominal model, and define the Lyapunov function V = x^T Px, where P is a symmetric and positive definite matrix and is the solution of the closed-loop model Lyapunov equation (9.13). Let B̃ = B − B̂.

Theorem 9.3

Consider system (9.1) with control input u = Kx̂. Assume that there exists a symmetric positive definite solution P for the model Lyapunov equation (9.13) and that ‖B̃‖ ≤ β and ‖(Ã + B̃K)^T P + P(Ã + B̃K)‖ ≤ Δ < q. Consider the relation

‖eq‖ > (ση/(δ + 1))‖q(x)‖,   (9.18)

where η = (q − Δ)/b, 0 < σ < σ′ < 1, and b = 2‖PB̂K‖ + 2β‖PK‖, and let the model be updated when (9.18) holds. Then,

‖e‖ ≤ σ′η‖x‖,   (9.19)

is always satisfied, and the system is asymptotically stable when

δ ≤ (σ′ − σ)η.   (9.20)

PROOF See [11] for the proof.
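A minimal scalar implementation of a logarithmic quantizer with property (9.15) can be sketched as follows. The level spacing ρ = (1 − δ)/(1 + δ) and the base level u0 are the standard choices for this quantizer type; the index selection places z in the sector (level/(1 + δ), level/(1 − δ)], which guarantees |z − q(z)| ≤ δ|z|.

```python
import math

# Logarithmic quantizer satisfying (9.15), scalar case (apply componentwise
# for vectors).  Levels are u0 * rho**n with rho = (1 - delta)/(1 + delta);
# n is chosen so that u0*rho**n lies in [|z|(1 - delta), |z|(1 + delta)),
# hence |z - q(z)| <= delta*|z|.
def log_quantize(z, delta=0.2, u0=1.0):
    if z == 0.0:
        return 0.0
    rho = (1.0 - delta) / (1.0 + delta)
    mag = abs(z)
    n = math.floor(math.log(mag * (1.0 - delta) / u0) / math.log(rho))
    return math.copysign(u0 * rho ** n, z)

for z in (3.7, 0.01, -256.0):
    q = log_quantize(z)
    print(z, q, abs(z - q) <= 0.2 * abs(z) * (1 + 1e-9))
```

Because consecutive sectors share endpoints with ratio exactly 1/ρ, the chosen integer n is unique, so the quantizer is well defined over all of R except the origin, where it returns zero.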
We now discuss stability thresholds that consider quantization and time delays. The quantized model-plant state error was defined in (9.17). Consider also the nonquantized model-plant state error e(t) = x̂(t) − x(t). At the update instants ti, we update the model in the sensor node using the quantized measurement of the state. At this instant, we have eq(ti) = 0 at the sensor node. When considering network delays, we can reset the quantized model-plant state error only at the sensor node. The model-plant state error at the update instants is given by

e(ti) = x̂(ti) − x(ti) = q(x(ti)) − x(ti).   (9.21)

It is clear that this error cannot be set to zero at the update instants, as the quantized model-plant state error can, due to the existence of quantization errors when measuring the state of the plant. Using the logarithmic quantizer (9.15), we have that

‖e(ti)‖ = ‖q(x(ti)) − x(ti)‖ ≤ δ‖x(ti)‖.   (9.22)

Theorem 9.4 provides conditions for asymptotic stability of the control system using quantization in the presence of network-induced delays. In this case, the admissible delays are also a function of the quantization parameter δ. That is, if we are able to quantize more finely, the system is still stable in the presence of longer delays.

Theorem 9.4

Consider system (9.1) with control input u = Kx̂. The event-triggering condition is computed using quantized data and using error events according to (9.18). The model is updated using quantized measurements of the state of the plant. Assume that there exists a symmetric positive definite solution P for the model Lyapunov equation (9.13) and a small enough δ, 0 < δ < 1, such that 2δ/(1 − δ) < ση/(δ + 1). Assume also that B = B̂ and that the following bounds are satisfied: ‖Ã‖ ≤ ΔA and ‖Ã^T P + PÃ‖ ≤ Δ < q. Then there exists an ε(δ) > 0 such that for all network delays τN ∈ [0, ε], the system is asymptotically stable. Furthermore, there exists a time τ > 0 such that for any initial condition the inter-execution times {ti+1 − ti} implicitly defined by (9.18) with σ < 1 are lower bounded by τ, that is, ti+1 − ti ≥ τ, for all i = 1, 2, . . ..

PROOF See [11] for the proof.

9.3.3 Examples

EXAMPLE 9.1: MB-ET control of a linear system  In this example, we use the inverted pendulum on a moving cart dynamics (linearized dynamics) described in example 2E in [22]. The linearized dynamics can be expressed using the state vector x = [y θ ẏ θ̇]^T, where y represents the displacement of the cart with respect to some reference point, and θ represents the angle that the pendulum rod makes with respect to the vertical. The matrices corresponding to the state-space representation (9.1) are given by

A = [0, 0, 1, 0; 0, 0, 0, 1; 0, −mg/M, 0, 0; 0, (M + m)g/Ml, 0, 0],   B = [0; 0; 1/M; −1/Ml],   (9.23)

where the nominal parameters of the model are given by m̂ = 0.1, M̂ = 1, and l̂ = 1. The real parameters represent values close to the nominal parameters but not exactly the same, due to uncertainties in the measurements and specification of these parameters. The physical parameter values are given by m = 0.09978, M = 1.0016, and l = 0.9994. Also, ĝ = g = 9.8. The input u represents the external force applied to the cart. The open-loop plant and model dynamics are unstable. In this example, we use the following gain: K = [0.5379 25.0942 1.4200 7.4812].

Figure 9.3 shows the position y(t) and the angle θ(t); the second subplot shows the corresponding velocities. The norm of the state converging to the origin and the threshold used to trigger events are shown in Figure 9.4, while the second subplot shows the time instants where events are triggered and communication is established.

EXAMPLE 9.2: MB-ET control of a linear system with delays and quantization  In this example, we consider the instrument servo (DC motor driving an inertial load) dynamics from example 6A in [22]:

[ė; ω̇] = [0, 1; 0, −α][e; ω] + [0; β]u,   (9.24)

where e = θ − θr represents the error between the current position θ and the desired position θr (the desired position is assumed to be constant), ω is the angular velocity, and u is the applied voltage. The parameters α and β represent constants that depend on the physical parameters of the motor and load.
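The servo loop (9.24) can be simulated under a simplified MB-ET rule. For brevity, this sketch omits quantization and network delay and uses an assumed fixed relative threshold c in place of (9.18), so it illustrates the architecture rather than reproducing the example's exact results; the plant and model parameters and initial conditions are those of Example 9.2.

```python
# Sketch of the servo (9.24) under MB-ET control.  Plant: alpha = 1.14,
# beta = 2.8; model: alphahat = 1, betahat = 3; gain K = [-0.33, -0.33].
# The relative threshold c is an assumed stand-in for the quantized rule.
alpha, beta = 1.14, 2.8
alphahat, betahat = 1.0, 3.0
K = (-0.33, -0.33)
c = 0.05
dt = 1e-3

def norm(v):
    return (v[0] ** 2 + v[1] ** 2) ** 0.5

def run(T=20.0):
    x = [1.2, 0.65]          # plant state [e, omega], from Example 9.2
    xhat = [0.0, 0.0]        # model state, initialized to the zero vector
    events, steps = 0, int(T / dt)
    for _ in range(steps):
        err = (x[0] - xhat[0], x[1] - xhat[1])
        if norm(err) > c * norm(x):
            xhat = [x[0], x[1]]          # event: transmit x, update the model
            events += 1
        u = K[0] * xhat[0] + K[1] * xhat[1]
        x = [x[0] + dt * x[1],
             x[1] + dt * (-alpha * x[1] + beta * u)]
        xhat = [xhat[0] + dt * xhat[1],
                xhat[1] + dt * (-alphahat * xhat[1] + betahat * u)]
    return x, events, steps

x_final, events, steps = run()
print(norm(x_final), events, steps)
```

Since the model starts at zero while the plant does not, the very first comparison already triggers a transmission, after which events occur only when the parameter mismatch has driven the relative error back above c.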
FIGURE 9.3
Positions and velocities of the inverted pendulum on the moving cart example using the MB-ET control framework.

FIGURE 9.4
Error and state norms. Corresponding communication instants triggered by the MB-ET control framework.
FIGURE 9.5
Quantized versions of the plant states and model states available in the controller node.

parameters are used for controller design; in this case, the control gain is given by K = [−0.33  −0.33]. The parameters used in the MB-ET framework with quantization and delays are σ = 0.6, σ′ = 0.8, and στ = 0.99. The quantization parameter is δ = 0.08. With these parameters, the admissible delays are bounded by ε = 0.468. The system's initial conditions are x(0) = [1.2 0.65]^T; the model state is initialized to the zero vector.

Figure 9.5 shows the quantized plant states and the model states at the controller node. The norm of the state converging to the origin and the thresholds used to trigger events are shown in Figure 9.6, where yq = ση/(δ + 1) and y′ = σ′η. The third subplot shows the time instants where events are triggered and communication is established. Because of time delays, the model in the controller node is updated after some time-varying delay 0 ≤ τN ≤ 0.468. This feature can be seen by looking at the second subplot of Figure 9.5 and at the third subplot of Figure 9.6, which show that the updates of the model at the controller node occur after the triggering time instants but not exactly at those instants.

9.4 Nonlinear Systems

In this section, we present results concerning the MB-ET control framework for nonlinear systems. The model of the system is also nonlinear. This section considers continuous-time systems with state feedback. Stability of the networked system using MB-ET control is based on Lyapunov analysis, and a lower bound on the interevent time intervals is obtained using the Gronwall–Bellman inequality. The next section considers discrete-time nonlinear systems with external disturbance and output feedback. Dissipativity methods are used for stability analysis in that case.

9.4.1 Analysis

Let us consider continuous-time nonlinear systems represented by

ẋ = f(x) + fu(u).   (9.25)

The nonlinear model of the plant dynamics is given by

x̂˙ = f̂(x̂) + f̂u(u),   (9.26)

where x, x̂ ∈ Rn and u ∈ Rm. The controller and the state error are given by u = h(x̂) and e = x − x̂, respectively. Thus, we have that

ẋ = f(x) + fu(h(x̂)) = f(x) + g(x̂),
x̂˙ = f̂(x̂) + f̂u(h(x̂)) = f̂(x̂) + ĝ(x̂).   (9.27)

Let us consider plant-model uncertainties that can be characterized as follows:

f̂(ξ) = f(ξ) + δf(ξ),
ĝ(ξ) = g(ξ) + δg(ξ).   (9.28)
FIGURE 9.6
Norms of nonquantized and quantized error and plant state. Communication time instants.

Then, the model dynamics can be expressed as follows:

x̂˙ = f(x̂) + δf(x̂) + g(x̂) + δg(x̂) = f(x̂) + g(x̂) + δ(x̂).   (9.29)

Assume that zero is the equilibrium point of the system ẋ = f(x) and that g(0) = 0 and δ(0) = 0. Also assume that both f and δ are locally Lipschitz with constants Kf and Kδ, respectively; that is,

‖f(x) − f(y)‖ ≤ Kf‖x − y‖,
‖δ(x) − δ(y)‖ ≤ Kδ‖x − y‖,   (9.30)

for x, y ∈ Ω ⊆ Rn.

Also assume that the non-networked closed-loop model is locally exponentially stable:

‖x̂(t)‖ ≤ β̂‖x̂(t0)‖e^{−α̂(t−t0)},   (9.31)

for α̂, β̂ > 0.

Let us consider the following assumptions on the non-networked closed-loop system dynamics.

Assumption  There exist positive constants α, β and class K functions α1, α2 such that

α1(‖x‖) ≤ V(x) ≤ α2(‖x‖),
(∂V(x)/∂x) f(x, g(x + e)) ≤ −α‖x‖ + β‖e‖.   (9.32)

Theorem 9.5

The nonlinear model-based event-triggered control system (9.25) with u = h(x̂) is asymptotically stable if the events are triggered when

‖e‖ > (σα/β)‖x‖,   (9.33)

where 0 < σ < 1. Furthermore, there exists a constant τ > 0 such that the interevent time intervals are lower bounded by τ; that is, ti+1 − ti ≥ τ.

PROOF Note that at the event time instants, the model state is updated using the plant state, so we have that e(ti) = 0. Also, because of the threshold (9.33), the error satisfies ‖e(t)‖ ≤ (σα/β)‖x‖, for t ≥ 0. Then, we can write

V̇ ≤ −α‖x‖ + β(σα/β)‖x‖ ≤ (σ − 1)α‖x‖,   (9.34)

and the nonlinear system (9.25) is asymptotically stable.

In order to establish a bound on the interevent time intervals, let us analyze the state error dynamics:

ė = ẋ − x̂˙ = f(x) − f(x̂) − δ(x̂).   (9.35)
Thus, the response of the state error during the interval t ∈ [ti, ti+1) is given by

e(t) = e(ti) + ∫_{ti}^{t} [f(x(s)) − f(x̂(s)) − δ(x̂(s))] ds,   (9.36)

where e(ti) = 0, since an update has taken place at time ti and the error is reset to zero at that time instant. Then, we can write

‖e(t)‖ ≤ ∫_{ti}^{t} ‖f(x(s)) − f(x̂(s))‖ ds + ∫_{ti}^{t} ‖δ(x̂(s))‖ ds
  ≤ Kf ∫_{ti}^{t} ‖x(s) − x̂(s)‖ ds + Kδ ∫_{ti}^{t} ‖x̂(s)‖ ds
  ≤ Kf ∫_{ti}^{t} ‖e(s)‖ ds + Kδβ̂ ∫_{ti}^{t} ‖x̂(ti)‖ e^{−α̂(s−ti)} ds
  ≤ Kf ∫_{ti}^{t} ‖e(s)‖ ds + (Kδβ̂‖x̂(ti)‖/α̂)(1 − e^{−α̂(t−ti)}),   (9.37)

for t ∈ [ti, ti+1). We now make use of the Gronwall–Bellman inequality [23] to solve the remaining integral in (9.37). We obtain

‖e(t)‖ ≤ (KδKfβ̂‖x̂(ti)‖/α̂) ∫_{ti}^{t} (1 − e^{−α̂(s−ti)}) e^{Kf(t−s)} ds + (Kδβ̂‖x̂(ti)‖/α̂)(1 − e^{−α̂(t−ti)})
  ≤ (Kδβ̂‖x̂(ti)‖/α̂) [ Kf ∫_{ti}^{t} (e^{Kf(t−s)} − e^{Kf(t−s)−α̂(s−ti)}) ds + 1 − e^{−α̂(t−ti)} ]
  ≤ (Kδβ̂‖x̂(ti)‖/α̂) [ e^{Kf(t−ti)} − 1 + (Kf/(Kf + α̂))(e^{−α̂(t−ti)} − e^{Kf(t−ti)}) + 1 − e^{−α̂(t−ti)} ]
  ≤ (Kδβ̂‖x̂(ti)‖/α̂) (1 − Kf/(Kf + α̂)) (e^{Kf(t−ti)} − e^{−α̂(t−ti)})
  ≤ (Kδβ̂‖x̂(ti)‖/(Kf + α̂)) (e^{Kf(t−ti)} − e^{−α̂(t−ti)}),   (9.38)

for t ∈ [ti, ti+1). Let τ = t − ti. The time τ > 0 that it takes for the last expression to be equal to (σα/β)‖x‖ is less than or equal to the time it takes the norm of the error to grow from zero at time ti and reach the value (σα/β)‖x‖. That is, by establishing the relationship

(Kδβ̂‖x̂(ti)‖/(Kf + α̂)) (e^{Kfτ} − e^{−α̂τ}) = (σα/β)‖x(t)‖,   (9.39)

9.4.2 Examples

EXAMPLE 9.3  Consider the continuous-time nonlinear system described by

ẋ = ax² − x³ + bu,   (9.41)

where a (|a| < 1) and b are the unknown system parameters. The corresponding nonlinear model is given by

x̂˙ = âx̂² − x̂³ + b̂u.   (9.42)

Let us consider the following parameter values: a = 0.9, b = 1, â = 0.2, and b̂ = 0.4. Selecting the control input u = −2x̂ and the Lyapunov function V = ½x², the following parameters can be obtained [24]: α1(s) = α2(s) = ½s², α(s) = 0.84s, and β(s) = 2.66s². Results of simulations are shown in Figure 9.7 for σ = 0.8. The top part of the figure shows the norms of the state and the error; the bottom part shows the communication time instants.

EXAMPLE 9.4  Consider the following system [25]:

ẋ1 = b1u1,
ẋ2 = b2u2,
ẋ3 = ax1x2,   (9.43)

and the corresponding model is given by

x̂˙1 = b̂1u1,
x̂˙2 = b̂2u2,
x̂˙3 = âx̂1x̂2.   (9.44)

The system and model parameters are as follows: a = 1, b1 = 1, b2 = 1, â = 0.8, b̂1 = 1.5, and b̂2 = 1.4. Let σ = 0.6. The control inputs are given by

u1 = −x̂1x̂2 − 2x̂2x̂3 − x̂1 − x̂3,
u2 = 2x̂1x̂2x̂3 + 3x̂3² − x̂2.   (9.45)

Using the Lyapunov function V(x) = ½(x1 + x3)² +
e − e−α̂τ =  x (t) , (9.39)
K f + α̂ β 2 ( x2 − x3 ) + x3 , the following parameters can be
1 2 2 2

we guarantee that obtained: α(s) = 91446s2, β(s) = 147190s2. Figure 9.8


shows the states of the system, the norms of the error
σα
 e(t) ≤ x , (9.40) and state, and the communication time instants.
β
which ensures that no event is generated before ti + τ,
where ti represents the time instant corresponding to the
latest event. Note that the solution τ of (9.39) is posi-
tive for any x (t) = 0. Also, if we have that x (ti )=0 for
9.5 MB-ET Control of Discrete-Time Nonlinear
some ti > 0, then an event is triggered at t = ti , and
we have that x̂ (ti ) = x (ti ) = 0 and e(t) = 0 for t ≥ ti , Systems Using Dissipativity
because δ(0) = 0 and g(0) = 0. Thus, the comparison Common approaches for control and analysis of NCSs
(9.33) becomes 0 > σα β · 0, which does not hold, and it often focus on state-based methods for both linear and
is not necessary to generate any further event. nonlinear systems. These approaches ignore the strong
Model-Based Event-Triggered Control of Networked Systems 189
[Figure 9.7 shows two panels over t = 0–5 s: the norm of state and error, plotting |e|² against σα|x|/β, and the communication instants.]

FIGURE 9.7
Error and state norms in Example 9.3. Corresponding communication time instants.
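A minimal Euler-integration sketch of the loop behind Figure 9.7. The initial condition, step size, the stabilizing control sign u = −2x̂, and the trigger form comparing |e|² with σ(α/β)|x| (the two quantities plotted above) are assumptions of this sketch:

```python
# Sketch of Example 9.3's model-based event-triggered loop (Euler integration).
# Assumed: x(0) = 0.5, dt = 1e-3, control u = -2*xhat, and trigger
# e^2 >= sigma*(alpha/beta)*|x| with alpha = 0.84, beta = 2.66, sigma = 0.8.

a, b = 0.9, 1.0            # plant parameters (unknown to the controller)
a_hat, b_hat = 0.2, 0.4    # model parameters
alpha, beta, sigma = 0.84, 2.66, 0.8
dt, T = 1e-3, 5.0

x = xhat = 0.5             # model initialized from the plant state
events = []
t = 0.0
while t < T:
    e = xhat - x
    if e * e >= sigma * (alpha / beta) * abs(x):  # triggering condition
        xhat = x                                  # update model: error reset to zero
        events.append(t)
    u = -2.0 * xhat                               # control computed from the model
    x += dt * (a * x**2 - x**3 + b * u)           # plant (9.41)
    xhat += dt * (a_hat * xhat**2 - xhat**3 + b_hat * u)  # model (9.42)
    t += dt
```

Between events the loop runs open loop on the model; the events list collects the communication instants, in the spirit of the bottom panel of Figure 9.7.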
[Figure 9.8 shows three panels over t = 0–25 s: the plant states x1, x2, x3; the norm of state and error, plotting |e|² against σα|x|²/β; and the communication instants.]

FIGURE 9.8
Top: states of the system in Example 9.4. Middle: error and state norms. Bottom: communication time instants.
tradition in nonlinear control to use output feedback methods to analyze feedback systems. These methods include the passivity theorem and the small-gain theorem, among others. While these analysis methods are not directly applicable to systems in the model-based framework, they are applicable to a modified version. Specifically, a signal-equivalent feedback system is derived for analyzing systems using output feedback. While passivity and finite-gain analysis are attractive options, this section focuses on the more general framework of dissipativity theory.

Dissipativity, in general, can be applied to continuous-time and to discrete-time nonlinear systems. However, the case of continuous-time systems with output feedback in the model-based framework would require a state observer in order to update the state of the model. On the other hand, discrete-time systems with output feedback can be implemented using the model-based approach without the need for a state observer by using an input–output representation of the system dynamics, as shown in this section.

This section considers the control of a discrete-time nonlinear dissipative system over a limited-bandwidth network. It is assumed that an appropriate controller has been designed for the system ignoring the effects of the network. At this point, an event-triggered aperiodic network communication law is used that takes advantage of the model-based framework to reduce communication. With the aperiodic communication, it is important to consider robustness to both model uncertainties and external disturbances due to the absence of feedback measurements at every time k. The goal of this section is to present a boundedness result that provides a constructive bound on the system output of a nonlinear system with output feedback. The system is subject to both external disturbances and model uncertainties; in addition, there is a lack of feedback measurements for extended intervals of time in order to reduce network usage. More details on this approach can be found in [35].

9.5.1 Problem Formulation

The systems of interest are single-input single-output (SISO) nonlinear discrete-time systems. The dynamics of these plants can be captured by the output model:

y(k) = f_io(y(k−1), …, y(k−n), u(k), …, u(k−m)),   (9.46)

where y(k) is the current output and u(k) is the current input. The current system output is a function of the n previous outputs and the m previous inputs. For a given plant, a model can be developed for use in the model-based framework according to the dynamics

ŷ(k) = f̂_io(ŷ(k−1), …, ŷ(k−n), u(k), …, u(k−m)),   (9.47)

where the nonlinear function f̂_io(·) represents the available model of the system function f_io(·). While the systems are controlled using only output feedback, internal models may be used for analysis purposes. These models are particularly useful for demonstrating dissipativity for a given system. As with the output models, these state-space models need not perfectly represent the system dynamics. The state-space dynamics of the plant may be given by

x(k+1) = f(x(k), u(k))
(9.48)
y(k) = h(x(k), u(k)),

where x(k) ∈ Rⁿ, u(k) ∈ Rᵐ, and y(k) ∈ Rᵖ. The plant may be modeled by the state-space model:

x̂(k+1) = f̂(x̂(k), u(k))
(9.49)
ŷ(k) = ĥ(x̂(k), u(k)).

The approach in this section makes use of the MB-ET control framework, but the approach acts on the system output instead of on the state. More details on event-triggered control for output feedback can be found in [26–30]. An intelligent sensor is implemented to compare the current system output measurement y(k) to the estimated output value ŷ(k) and only transmit the new output value when a triggering condition is satisfied. In this case, the triggering condition is when the error e(k) between the current output and the estimated output,

e(k) = ŷ(k) − y(k),   (9.50)

grows above some positive threshold, |e(k)| > α > 0. At this point, the current value and previous n values of the output, based on the dimension of the system (9.46), are sent across the network. The model output ŷ(k) is updated to equal y(k), and the output error (9.50) is zero. This approach requires a copy of the model to be present at the sensor node to have continuous access to ŷ(k). This additional computation is incurred in order to reduce the total network communication. Assuming no delay in updating the output, the error is always bounded:

|e(k)| ≤ α.   (9.51)
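The sensor-side logic just described (run a copy of the output model, compare ŷ(k) with y(k), and transmit only when |e(k)| > α) can be sketched as follows. The first-order output maps below stand in for f_io and f̂_io and are purely illustrative:

```python
# Sketch of the intelligent sensor of Section 9.5.1. The plant and model
# output maps are hypothetical first-order examples, not from the chapter;
# only the trigger/reset logic follows the text.

def f_io(y_prev, u):        # assumed "true" plant:  y(k) = 0.75 y(k-1) + 0.6 u(k)
    return 0.75 * y_prev + 0.6 * u

def fhat_io(yhat_prev, u):  # assumed model:  yhat(k) = 0.8 yhat(k-1) + 0.5 u(k)
    return 0.8 * yhat_prev + 0.5 * u

def run_sensor(alpha=0.05, steps=100):
    y_hist, yhat_hist = [0.0], [0.0]
    transmissions = []
    for k in range(1, steps):
        u = 1.0                                  # constant test input
        y = f_io(y_hist[-1], u)                  # measurement y(k)
        yhat = fhat_io(yhat_hist[-1], u)         # model estimate yhat(k)
        if abs(yhat - y) > alpha:                # triggering condition on (9.50)
            yhat = y                             # model output updated: error reset
            transmissions.append(k)              # y(k), ..., y(k-n) sent over network
        y_hist.append(y)
        yhat_hist.append(yhat)
    return transmissions, y_hist, yhat_hist

transmissions, y, yhat = run_sensor()
# with no update delay, the stored error never exceeds alpha, as in (9.51)
```

The model copy at the sensor is what makes ŷ(k) available there without any network traffic.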
It should be noted that the reduction in network traffic is significant compared to the case in which a measurement of y(k) is sent at every sampling instant. This is true even when the order of the system n is large compared to the interupdate intervals. In this case, nearly every sample of the output is transmitted eventually. The average data rates are still significantly reduced when considering that bandwidth can be lost due to packet overhead and the minimum size of payload for each packet. As the minimum payload in a packet is typically much larger than a single measurement, by saving measurements and sending them all at the same time, a larger portion of the payload can be utilized, similar to the approaches in [31,32].

Feedback systems with periodic feedback typically have a low sensitivity to disturbances and unmodeled dynamics. This property is not guaranteed when considering aperiodic communication. This approach explicitly considers a plant-input disturbance w1(k) and a controller-input disturbance w2(k). Both signals are assumed to have bounded magnitude for all time k, but this magnitude may not go to zero. These signals can capture unmodeled dynamics as well as error introduced by discretization. These disturbances are unknown but have magnitude bounded by

|w1(k)| ≤ W1(k) + c1,   (9.52)

where the signal W1(k) > 0 is an ℓ2 signal, and c1 ≥ 0 is a constant. Likewise, for ℓ2 signal W2(k) > 0 and positive constant c2,

|w2(k)| ≤ W2(k) + c2.   (9.53)

An example of such a disturbance and the appropriate bounds is given in Figure 9.9.

With all components of the problem formulation provided, the complete feedback system can be given in Figure 9.10. The switch indicates the aperiodic communication over the network. The input to the controller uc is the estimated output ŷ with an added disturbance in w2. The output of the controller yc is the calculated control effort. The actual control applied to the plant up has some additional noise.

One of the novel components of this approach is in reformulating the MB framework into a traditional feedback problem. This is done by representing the output

[Figure 9.10 shows the loop: w1 enters at the plant input up, the plant output y feeds the model through the network switch, and the model output ŷ plus w2 forms the controller input uc; the controller output yc closes the loop.]

FIGURE 9.10
The feedback of the plant and combined controller/model. The two disturbance signals are w1 and w2, and the switch indicates the aperiodic communication over the network.

[Figure 9.9 plots |wi(k)| together with the bounding signal Wi(k) + ci over time.]

FIGURE 9.9
An example of an allowable disturbance w(k) that is bounded by the sum of an ℓ2 signal W(k) > 0 and a constant c > 0.
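The decomposition in (9.52) and (9.53), a decaying ℓ2 envelope plus a constant floor, is easy to exhibit concretely. The disturbance and bounding pair below are illustrative, not the signals of Figure 9.9:

```python
import math

# Hypothetical disturbance: a decaying oscillation plus a persistent ripple.
def w(k):
    return 8.0 * math.exp(-0.1 * k) * math.cos(k) + 0.5 * math.sin(0.3 * k)

# Assumed bounding pair for (9.52): W(k) = 8 exp(-0.1 k) and c = 0.5.
def W(k):
    return 8.0 * math.exp(-0.1 * k)

c = 0.5

# |w(k)| <= W(k) + c holds pointwise ...
bound_ok = all(abs(w(k)) <= W(k) + c for k in range(1000))

# ... and W is an ell_2 signal: its energy is a convergent geometric series,
# sum_k W(k)^2 = 64 / (1 - exp(-0.2))
energy = sum(W(k) ** 2 for k in range(1000))
```

The constant c is what makes the disturbance "nonvanishing": no ℓ2 signal alone could bound it.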
[Figure 9.11 shows the signal-equivalent loop: the plant output y plus the error e gives ŷ, which with w2 forms the controller input uc; the controller output yc with w1 forms the plant input up.]

FIGURE 9.11
The feedback of the plant and the controller. While the model is not present, it is implicitly considered through the input of the model error e.

of the model ŷ(k) as the sum of the plant output y(k) and the error e(k), which is now treated as an external input. In this signal-equivalent representation, the plant mapping from input up(k) to output y(k) and controller mapping from input uc(k) to output yc(k) are directly interconnected as in Figure 9.11.

In the absence of disturbances, the input to the plant is the output of the controller, and the input to the controller is the predicted plant output ŷ. This is consistent with the MB-NCS framework presented earlier in this chapter and consistent with the definition of the output error, ŷ(k) = e(k) + y(k). The error e(k) is still present in the absence of external disturbances due to model uncertainties. While this forms a standard negative feedback interconnection, it is not useful to show ℓ2 stability, as the error signal e(k) is not an ℓ2 signal.

9.5.2 Dissipativity Theory

The current approach uses the quadratic form of dissipativity that is typically referred to as QSR dissipativity. This form is named after the parameters (matrices Q, S, and R) that define the relative importance of weighting the system input and output. In order to use dissipativity theory, some preliminaries will need to be defined.

The space of all finite energy discrete-time signals is referred to as ℓ2. The corresponding ℓ2 norm is given by

‖w(k)‖_ℓ2 = √( Σ_{k=0}^{∞} wᵀ(k) w(k) ).   (9.54)

A function w(k) is in ℓ2 if it has finite ℓ2 norm. The related signal space is the superset that includes all signals with finite energy on any finite time interval. This is referred to as the extended ℓ2 space, or ℓ2e. The truncated ℓ2 norm is defined by the following:

‖w_K(k)‖_ℓ2 = √( Σ_{k=0}^{K−1} wᵀ(k) w(k) ).   (9.55)

Signals in ℓ2e have finite ℓ2 norm for all K < ∞. The systems of interest in this section map input signals u(k) ∈ ℓ2e to signals y(k) ∈ ℓ2e. This assumption disallows inputs with finite escape time as well as systems that produce outputs with finite escape time. A system is ℓ2 stable if u(k) ∈ ℓ2 implies that y(k) ∈ ℓ2 for all u(k) ∈ U. An important special case of this stability is finite-gain ℓ2 stability, where the size of the system output can be bounded by an expression involving the size of the input. Specifically, a system is finite-gain ℓ2 stable if there exist γ and β such that

‖y_K(k)‖_ℓ2 ≤ γ ‖u_K(k)‖_ℓ2 + β,   (9.56)

∀K > 0 and ∀u(k) ∈ U. The ℓ2 gain of the system is the smallest γ such that there exists a β to satisfy the inequality.

Dissipativity is an energy-based property of dynamical systems. This property relates energy stored in a system to the energy supplied to the system. The energy stored in the system is defined by an energy storage function V(x). As a notion of energy, this function must be positive definite—that is, it must satisfy V(x) > 0 for x ≠ 0 and V(0) = 0. The supplied energy is captured by an energy supply rate ω(u, y). A system is dissipative if it only stores and dissipates energy, with respect to the specific energy supply rate, and does not generate energy on its own.

DEFINITION 9.1 A nonlinear discrete-time system (9.48) is dissipative with energy supply rate ω(u, y) if there exists a positive definite energy storage function V(x) such that the following inequality holds:

V(x(k2 + 1)) − V(x(k1)) ≤ Σ_{k=k1}^{k2} ω(u(k), y(k)),   (9.57)

for all times k1 and k2, such that k1 ≤ k2.

A particularly useful form of dissipativity with additional structure is the quadratic form, QSR dissipativity.

DEFINITION 9.2 A discrete-time system (9.48) is QSR dissipative if it is dissipative with respect to the supply rate

ω(u, y) = [y; u]ᵀ [Q  S; Sᵀ  R] [y; u],   (9.58)

where Q = Qᵀ and R = Rᵀ.

The QSR dissipative framework generalizes many areas of nonlinear system analysis. The property of passivity can be captured when Q = R = 0 and S = (1/2)I,
where I is the identity matrix. Systems that are finite-gain ℓ2 stable can be represented by S = 0, Q = −(1/γ)I, and R = γI, where γ is the gain of the system. The following theorems give stability results for QSR dissipative systems and dissipative systems in feedback.

Theorem 9.6

A discrete-time system is finite-gain ℓ2 stable if it is QSR dissipative with Q < 0.

PROOF The system being QSR dissipative implies that there exists a positive definite storage function V(x) such that

V(x(k2 + 1)) − V(x(k1)) ≤ Σ_{k=k1}^{k2} [ yᵀQy + 2yᵀSu + uᵀRu ],

for all k2 ≥ k1. The substitutions k1 = 0 and k2 = K will be made. As Q < 0, there exists a real number q > 0 such that Q ≤ −qI. Similarly, the matrices S and R can be bounded by their largest singular values (s and r), S ≤ sI and R ≤ rI, which gives

V(x(K + 1)) − V(x(0)) ≤ Σ_{k=0}^{K} [ −q‖y‖² + 2s‖y‖‖u‖ + r‖u‖² ].

Rearranging terms and completing the square to remove the cross term yields

V(x(K + 1)) − V(x(0)) ≤ Σ_{k=0}^{K} [ −(1/(2q)) (2s‖u‖ − q‖y‖)² − (q/2)‖y‖² + ((4s² + 2qr)/(2q)) ‖u‖² ].

The squared term and V(x(K + 1)) can be removed without violating the inequality. The remaining terms can be rearranged to give

Σ_{k=0}^{K} ‖y‖² ≤ Σ_{k=0}^{K} ((4s² + 2qr)/q²) ‖u‖² + (2/q) V(x(0)).

Letting K → ∞ and evaluating the summations on the norms yields the ℓ2 norms:

‖y‖²_ℓ2 ≤ ((4s² + 2qr)/q²) ‖u‖²_ℓ2 + (2/q) V(x(0)).

The square root of this expression can be taken, and the fact that √(a² + b²) ≤ |a| + |b| can be used to show

‖y‖_ℓ2 ≤ (√(4s² + 2qr)/q) ‖u‖_ℓ2 + √((2/q) V(x(0))).

This shows ℓ2 stability with γ = √(4s² + 2qr)/q and β = √((2/q) V(x(0))).

[Figure 9.12 shows the loop: u1 = r1 − y2 enters G1, producing y1; u2 = r2 + y1 enters G2, producing y2.]

FIGURE 9.12
The feedback interconnection of systems G1 and G2.

Theorem 9.7

Consider the feedback interconnection of two QSR dissipative systems (Figure 9.12). System G1 is dissipative with respect to Q1, S1, R1 and system G2 with respect to Q2, S2, R2. The feedback interconnection is ℓ2 stable if there exists a positive constant a such that the following matrix is negative definite:

Q̃ = [ Q1 + aR2    −S1 + aS2ᵀ
      −S1ᵀ + aS2   R1 + aQ2 ] < 0.   (9.59)

PROOF Each system being QSR dissipative implies the existence of positive definite storage functions V1 and V2 that satisfy

V1(x1(k2 + 1)) − V1(x1(k1)) ≤ Σ_{k=k1}^{k2} [y1; u1]ᵀ [Q1 S1; S1ᵀ R1] [y1; u1],

V2(x2(k2 + 1)) − V2(x2(k1)) ≤ Σ_{k=k1}^{k2} [y2; u2]ᵀ [Q2 S2; S2ᵀ R2] [y2; u2],

where xi, ui, and yi are the state, input, and output of the ith system, respectively. The signal relationships in the feedback loop can be given by

u1 = r1 − y2,
u2 = r2 + y1.

A total energy storage function for the loop can be defined as

V(x) = V1(x1) + aV2(x2),

for a positive constant a > 0, where

x = [x1; x2],  u = [u1; u2],  and  y = [y1; y2].
Looking at the change in V(x) over a time interval yields

V(x(k2 + 1)) − V(x(k1)) ≤ Σ_{k=k1}^{k2} [y1; u1]ᵀ [Q1 S1; S1ᵀ R1] [y1; u1] + a Σ_{k=k1}^{k2} [y2; u2]ᵀ [Q2 S2; S2ᵀ R2] [y2; u2].

The signal relationships can be substituted and the expression simplified to yield

V(x(k2 + 1)) − V(x(k1)) ≤ Σ_{k=k1}^{k2} [y; u]ᵀ [Q̃ S̃; S̃ᵀ R̃] [y; u],

where

Q̃ = [ Q1 + aR2    aS2ᵀ − S1
      aS2 − S1ᵀ   R1 + aQ2 ],
S̃ = [ S1    aR2
      −R1   aS2 ],  and  R̃ = [ R1  0
                                0   aR2 ].

This shows that the feedback of two QSR dissipative systems is again QSR dissipative. If Q̃ < 0, Theorem 9.6 can be applied to demonstrate ℓ2 stability.

QSR dissipativity can be used to assess the stability of a single system as well as systems in feedback. From a control design perspective, the QSR parameters of a given plant can be determined and used to find bounds on stabilizing QSR parameters of a potential controller. More details about general dissipativity can be found in [33], while the case of QSR dissipativity can be found in [34].

9.5.3 Output Boundedness Result

The main result in this approach is a tool that allows the size of the system output to be bounded when operating nonlinear discrete-time systems in the network configuration described in the previous section. As discussed previously, there are two issues with traditional stability for this network setup. The first is due to the aperiodic control updates. Between update events, the feedback system is temporarily operating as an open loop. With even small model mismatch between the actual plant and the model, the outputs between the two can drift significantly over time. Typically, the system output does not go to zero and, thus, cannot be bounded as in finite-gain ℓ2 stability. The second issue with traditional notions of stability is that this work allows nonvanishing input disturbances. Traditional dissipativity theory shows stability for disturbances that are in ℓ2 (i.e., the disturbance must converge to zero asymptotically). This approach generalizes existing results to disturbances that may not go to zero but do have an ultimate bound.

While notions of asymptotic stability or finite-gain ℓ2 stability are appealing, they are simply not achievable in this framework. Instead this is relaxed to a boundedness result. As this work considers systems described by an input–output relationship, the notion of ℓ2 stability is relaxed to a bound on the output as time goes to infinity. With output error and disturbances that are nonvanishing, the output may fluctuate over a large range. Due to fluctuations in the system input, it may be difficult to find an ultimate bound on the size of the output for all time. Instead, this framework considers an average bound on the squared system output.

DEFINITION 9.3 A nonlinear system is average output squared bounded if after time k̄, there exists a constant b such that the following bound on the output holds for all times k1 and k2 larger than k̄ (k̄ ≤ k1 < k2):

(1/(k2 − k1)) Σ_{k=k1}^{k2−1} yᵀ(k) y(k) ≤ b.   (9.60)

This form of boundedness is a practical form of stability on the system output. While the output does not necessarily converge to zero, it is bounded on average with a known bound as time goes to infinity. It is important to note that this concept is not useful for an arbitrarily large bound b. However, the concept is informative for a small, known bound. The notion should be restricted to being used in the case when the bound is constructive and preferably when the bound can be made arbitrarily small by adjusting system parameters.

The following boundedness theorem can be applied to the analysis of a plant and a controller in the model-based framework. The plant and the model of the plant must be QSR dissipative with respect to parameters QP, SP, and RP. Although the plant dynamics are not known exactly, sufficient testing can be done to verify that the dissipative rate bounds the actual dissipative behavior of the system. The model-stabilizing QSR dissipative controller has been designed with parameters QC, SC, and RC.

Theorem 9.8

Consider a plant and controller in the MB-NCS framework where model mismatch may exist between the plant and the model. The network structure contains event-triggered, aperiodic updates and nonvanishing disturbances. This feedback system is average output squared bounded if there exists a positive constant a such that the following matrix is negative definite:

Q̃ = [ QP + aRC    aSCᵀ − SP
      aSC − SPᵀ   RP + aQC ] < 0.   (9.61)
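Condition (9.61) (and likewise (9.59)) is just a negative-definiteness test on a 2×2 matrix in the scalar SISO case, so it can be checked directly. A sketch using the QSR values that appear in the example of Section 9.5.4:

```python
# Check of the feedback condition (9.59)/(9.61) for scalar QSR parameters.
# The values below are those of the example in Section 9.5.4:
# (QP, SP, RP) = (0.1, 0.15, 0), (QC, SC, RC) = (-0.2, 0.15, -0.3).

def feedback_Q_tilde(Qp, Sp, Rp, Qc, Sc, Rc, a):
    """Assemble the matrix of condition (9.61)."""
    return [[Qp + a * Rc, a * Sc - Sp],
            [a * Sc - Sp, Rp + a * Qc]]

def is_negative_definite_2x2(M):
    # a symmetric 2x2 matrix is negative definite iff its (1,1) entry is
    # negative and its determinant is positive
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return M[0][0] < 0 and det > 0

Qt = feedback_Q_tilde(0.1, 0.15, 0.0, -0.2, 0.15, -0.3, a=1.0)
print(is_negative_definite_2x2(Qt))  # True: Qt is approximately diag(-0.2, -0.2)
```

A search over the free scalar a > 0 turns the theorem into a one-parameter feasibility test.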
PROOF The plant and controller being QSR dissipative implies the existence of positive storage functions VP and VC, such that

ΔVP(xP) ≤ [y; uP]ᵀ [QP SP; SPᵀ RP] [y; uP],

and a similar bound on ΔVC. A total energy storage function can be defined, V(x) = VP(xP) + aVC(xC), where x = [xPᵀ xCᵀ]ᵀ. The total energy storage function has the dissipative property

ΔV(x) ≤ [y; yC; w1; (w2 + e)]ᵀ [Q̃ S̃; S̃ᵀ R̃] [y; yC; w1; (w2 + e)],

where

Q̃ = [ QP + aRC    aSCᵀ − SP
      aSC − SPᵀ   RP + aQC ],
S̃ = [ SP    aRC
      −RP   aSC ],  and  R̃ = [ RP  0
                                0   aRC ].

Due to Q̃ < 0 (9.61), there exists a constant q > 0 such that Q̃ ≤ −qI. As S̃ and R̃ are constant matrices, the largest singular values, s and r, respectively, may be used to show the following bound on ΔV:

ΔV(x) ≤ −q[yᵀy + yCᵀyC] + 2s[yᵀw1 + yCᵀ(w2 + e)] + r[w1ᵀw1 + (w2 + e)ᵀ(w2 + e)].

Completing the square can be used to remove the cross terms:

ΔV(x) ≤ −(q/2)[yᵀy + yCᵀyC] + ((4s² + 2qr)/(2q))[w1ᵀw1 + w2ᵀw2 + eᵀe].

Summing this inequality from k1 to k2 yields the following:

V(x(k2)) ≤ V(x(k1)) − (q/2) Σ_{k=k1}^{k2−1} [yᵀy + yCᵀyC] + ((4s² + 2qr)/(2q)) Σ_{k=k1}^{k2−1} [w1ᵀw1 + w2ᵀw2 + eᵀe].

The effect of the nonvanishing disturbances w1 and w2 can be bounded by constants ε1 and ε2 after some time k̄, |wi(k)| ≤ εi, for k ≥ k̄. Additionally, |e(k)| < α for all k. A single bound can be defined ε² = ε1² + ε2² + α². At this point, either the average output squared is bounded by the following:

(1/(k2 − k1)) Σ_{k=k1}^{k2−1} yᵀy ≤ (4s² + 2qr) ε² / (q²(1 − δ)),   (9.62)

where 0 < δ < 1, or not bounded by it,

(1/(k2 − k1)) Σ_{k=k1}^{k2−1} yᵀy > (4s² + 2qr) ε² / (q²(1 − δ)).   (9.63)

When it is larger than this quantity, it is possible to show

V(x(k2)) ≤ V(x(k1)) − (qδ/2) Σ_{k=k1}^{k2−1} yᵀy − ((4s² + 2qr)/(2q)) Σ_{k=k1}^{k2−1} [ε² − w1ᵀw1 − w2ᵀw2 − eᵀe].

This can be used to show a bound on y:

Σ_{k=k1}^{k2−1} yᵀy ≤ (2/(qδ)) V(x(k1)).   (9.64)

As the sum of yᵀy is bounded, the average is also bounded. Either (9.62) or (9.64) holds, which shows that the average squared system output is bounded, satisfying Definition 9.3. Furthermore, the parameter δ can be adjusted to vary (and potentially lower) the relative size of the two bounds. As (9.64) is a fixed bound, the average output will be continually shrinking. Asymptotically, bound (9.62) will hold as δ → 0.

One important takeaway is that the bound on the system output is constructive. The bounds can be made smaller by adjusting the values of the controller, which changes q, s, and r. The bounds also depend on the value of the output error threshold α, which can be made small. The effect of the nonvanishing disturbances may be significant depending on ε1 and ε2. When these disturbances are vanishing, the bound on the output depends mainly on α, which may be made arbitrarily small. The value of α may be chosen to trade off decreased communication with reduced output error.

9.5.4 Example

The following example was chosen to be LTI for ease of following, but the results apply to nonlinear systems as well. The QSR parameters for each system were found using state-space models, but the NCS is simulated using the equivalent input–output representation for the plant, controller, and model. The system to be
controlled is unstable and uncertain with the model given by

Â = [ 1.05 + δ1   −1
      0           0.85 + δ2 ],  B̂ = [ 1
                                       0 ],
Ĉ = [ 0.5 + δ3   1 ],  D̂ = 1.   (9.65)

The model can be shown QSR dissipative (QP = 0.1, SP = 0.15, and RP = 0) by using the storage function:

V̂(x̂) = x̂ᵀ [ 0.12  0.36
             0.36  30 ] x̂.   (9.66)

An example of a stabilizing controller is given by AC = 0.3, BC = 0.8, CC = 0.7, and DC = 1. This controller is QSR dissipative (QC = −0.2, SC = 0.15, and RC = −0.3), which can be shown using the storage function Vc(xc) = 0.23xc². The controller can be shown to stabilize the model by evaluating (9.61) with a = 1:

Q̃ = [ −0.2   0
       0    −0.2 ] < 0.   (9.67)

As discussed earlier, the actual plant is dissipative with respect to the same QSR parameters (QP, SP, and RP). The plant is assumed to be unknown but similar to model (9.65) where δ1 = 0.09, δ2 = −0.07, and δ3 = 0.18, which gives dynamics

A = [ 1.14  −1
      0     0.78 ],  B = [ 1
                           0 ],  C = [ 0.68  1 ],  D = 1.   (9.68)

The QSR parameters can be verified for the plant using the same storage function.

By assumption, the controller also stabilizes the plant and satisfies the inequality for boundedness. This MB-NCS was simulated with input–output models for the plant, model, and controller. The external disturbances w1(k) and w2(k) for this example are shown in Figure 9.13.

The disturbances are bounded by 0.1 after time k = 50. With this magnitude of disturbance, it is not possible to guarantee that the output error stays less than 0.1. The threshold for the output error was chosen to be 0.2. These systems were simulated, and the system outputs are shown in Figure 9.14. The evolution of the output error over time is shown in the first subplot of Figure 9.15. This plot shows the error after each update takes place, that is, when the error is reset to zero. As a result, the error is always bounded as stated in (9.51). The second subplot shows the time instants at which output measurements are sent from the sensor node to the controller node. The rest of the time the networked system operates in an open loop.

[Figure 9.13 plots the disturbances w1(k) and w2(k) over 200 s.]

FIGURE 9.13
The nonvanishing disturbances w1(k) and w2(k).
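The dissipativity claims above reduce to a pointwise inequality: with storage V(x) = xᵀPx, Definition 9.2 applied over a single step requires QP·y² + 2SP·yu + RP·u² − (V(x(k+1)) − V(x(k))) ≥ 0 for every (x, u). A sketch that spot-checks this by random sampling (a sanity check rather than a proof; the exact test is a small matrix inequality):

```python
import random

# Spot-check that the storage function (9.66) certifies QSR dissipativity
# of the model (9.65) (deltas = 0) and of the plant (9.68), with
# (QP, SP, RP) = (0.1, 0.15, 0).

P = [[0.12, 0.36], [0.36, 30.0]]   # storage matrix from (9.66)
QP, SP, RP = 0.1, 0.15, 0.0

def V(x):
    return P[0][0]*x[0]*x[0] + 2*P[0][1]*x[0]*x[1] + P[1][1]*x[1]*x[1]

def margin(A, B, C, D, x, u):
    # supply rate minus storage increment for one step of x+ = Ax+Bu, y = Cx+Du
    y = C[0]*x[0] + C[1]*x[1] + D*u
    xp = [A[0][0]*x[0] + A[0][1]*x[1] + B[0]*u,
          A[1][0]*x[0] + A[1][1]*x[1] + B[1]*u]
    return QP*y*y + 2*SP*y*u + RP*u*u - (V(xp) - V(x))

model = ([[1.05, -1.0], [0.0, 0.85]], [1.0, 0.0], [0.5, 1.0], 1.0)
plant = ([[1.14, -1.0], [0.0, 0.78]], [1.0, 0.0], [0.68, 1.0], 1.0)

random.seed(0)
ok = all(margin(*sys, [random.uniform(-5, 5), random.uniform(-5, 5)],
                random.uniform(-5, 5)) >= -1e-9
         for sys in (model, plant) for _ in range(10000))
print(ok)  # True: the same storage function works for model and plant
```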
[Figure 9.14 shows two panels over 200 s: (a) the plant output and (b) the model output.]

FIGURE 9.14
The output of the plant (a) and the model (b). The model tracks the plant closely within the update threshold.

[Figure 9.15 shows two panels over 200 s: (a) the output error after updates, staying within ±0.2, and (b) the network communication instants.]

FIGURE 9.15
The output error (a) and communication instants (b) indicated by asterisks. The communication instants align with the pretransmission error growing to above 0.2.
Figure 9.14 shows that the outputs of the MB-NCS are bounded, as expected. Clearly, the outputs do not converge to zero, but the model output tracks the plant output closely. In this example, communication is relatively constant initially as the system responds to the large disturbances. After this point, the communication rate drops significantly. While the error stays bounded by 0.2, the communication rate is reduced by 86.5%. For comparison, a simulation was run with the output being transmitted at every time instant. For this case, after k = 50, the output error was as large as 0.165. The periodic output feedback provides a small reduction in output error, with an average data transmission rate that is approximately 7.4 times higher.

9.6 Additional Topics on Model-Based Event-Triggered Control

This section briefly describes additional problems and applications where the MB-ET control framework has been implemented.

9.6.1 Parameter Estimation

The first topic discussed here is concerned with the implementation of parameter estimation algorithms. The main idea is to estimate the current parameters of the real system, because in many problems, the dynamics of the system may change over time due to age, use, or nature of the physical plant. In many other situations, the initial given model is simply outdated or inaccurate. Better knowledge of the plant dynamics will provide an improvement in the control action over the network (i.e., we can achieve longer periods of time without need for feedback). Improved estimates of the plant parameters also lead to the update of the controller gain, so it can better respond to the dynamics of the real plant being controlled.

Let us consider the model-based networked architecture in Figure 9.1, and let the plant and model dynamics be represented in state-space form:

x(k+1) = Ax(k) + Bu(k),   (9.69)
x̂(k+1) = Âx̂(k) + B̂u(k),   (9.70)

where x, x̂ ∈ Rⁿ, u ∈ Rᵐ. The control input is defined as u(k) = Kx̂(k). The state of the model is updated at times ki, when an event is triggered and a measurement transmitted from the sensor node to the controller node. The parameters of the model may or may not be updated at the same time instants depending on whether new estimated parameters are available.

For discrete-time linear systems of the form (9.69), it is possible to estimate the elements of the matrices A and B using a linear Kalman filter. In order to show this simple idea, let us focus on second-order autonomous systems with unknown time-invariant parameters aij. (The idea can be easily extended to higher-order systems with deterministic inputs.) The system can be written as follows:

[x1(k+1); x2(k+1)] = [a11 a12; a21 a22] [x1(k); x2(k)].   (9.71)

We do not know the values of the parameters, and we only receive measurements of the states x(0), …, x(k). At any given step, due to the iterative nature of the Kalman filter, we only need x(k) and x(k−1). Now we rewrite (9.71) as

[x̄1(k); x̄2(k)] = [x1(k−1) x2(k−1) 0 0; 0 0 x1(k−1) x2(k−1)] [â11(k); â12(k); â21(k); â22(k)] = Ĉ(k) â(k),   (9.72)

where âij represents the estimated values of the real parameters aij, and x̄i(k) are the estimates of the state based on the estimated parameters and on measurements of the real plant state xi(k−1). Equation 9.72 represents the output equation of our filter. The state equation is described by

[â11(k+1); â12(k+1); â21(k+1); â22(k+1)] = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1] [â11(k); â12(k); â21(k); â22(k)] = I â(k).   (9.73)

It can be seen that the systems (9.73) and (9.72) are linear time-varying ones. Thus, we can use a linear filter to obtain estimates of the parameters aij. Details related to convergence of the estimated variables and to the stability of the adaptive networked system using the MB-ET control framework can be found in [12].

The case of stochastic systems described by

x(k+1) = Ax(k) + Bu(k) + w(k)
(9.74)
y(k) = x(k) + v(k),

was also considered in [12], where the input and measurement noises, w(k) and v(k), are white, Gaussian, uncorrelated, and zero-mean, and have known covariance matrices Q and R, respectively. The model of the system is still given by (9.70) and since we only measure y(k), the error is now given by e(k) = x̂(k) − y(k). In this case, a nonlinear Kalman filter, such as the extended
Model-Based Event-Triggered Control of Networked Systems 199
Kalman filter (EKF), can be used for the estimation of states and parameters. The challenges and difficulties of using the EKF for parameter estimation and its implementation in the MB-ET control architecture have been documented in [12].

9.6.2 Optimal MB-ET Control

Event-triggered control techniques have been used to reduce communication between sensor and controller nodes. In general, the objective of the event-triggered control system is to achieve stability in the absence of continuous feedback measurements. Optimal control problems in NCS can be formulated in order to optimize network usage, in addition to system response and control effort.

The MB-ET control framework can be used to address the optimal control-scheduling problem. Here, we consider the design of optimal control laws and optimal thresholds for communication in the presence of plant-model mismatch by appropriately weighting the system performance, the control effort, and the communication cost. An approximate solution to this challenging problem is to optimize the performance of the nominal control system, which can be unstable in general, and to ensure robust stability for a given class of model uncertainties and for lack of feedback for extended intervals of time. The optimal scheduling problem is approximated by minimizing the state error and also considering the cost that needs to be paid every time we decide to send a measurement to update the model and reset the state error.

Let us consider system (9.69) and model (9.70). Assume that B̂ = B, that is, plant-model mismatch is only present in the state matrix A. The state error is defined as e(k) = x̂(k) − x(k). The following cost function weights the performance of the system, the control effort, and the price to transmit measurements from the sensor node to the controller node:

    min_{u,β} J = xᵀ(N)Q_N x(N) + Σ_{k=0}^{N−1} [xᵀ(k)Qx(k) + uᵀ(k)Ru(k) + Sβ(k)],      (9.75)

where Q and Q_N are real, symmetric, and positive semidefinite matrices, and R is a real, symmetric, and positive definite matrix. The weight S penalizes network communication, and β(k) is a binary decision variable defined as follows:

    β(k) = { 0, sensor does not send a measurement at time k
           { 1, sensor sends a measurement at time k.            (9.76)

The approximate solution to this problem offered in [36] consists of designing the optimal controller by solving the nominal linear quadratic regulator problem—that is, the control input is given by

    u*(N − i) = −[BᵀP(i − 1)B + R]⁻¹ BᵀP(i − 1)Âx(N − i),      (9.77)

where P(i) is recursively computed using the following:

    P(i) = [Â + BK(N − i)]ᵀP(i − 1)[Â + BK(N − i)] + Q + Kᵀ(N − i)RK(N − i).      (9.78)

Now, the optimal scheduling problem can be seen as the minimization of the deviation of the system performance from the nominal performance by also considering the cost that needs to be paid by updating the model and resetting the state error. The error dynamics are given by

    e(k + 1) = x̂(k + 1) − x(k + 1) = Âe(k) − Ãx(k),      (9.79)

where Ã = A − Â. Since the error matrix Ã is not known, we use the nominal error dynamics:

    e(k + 1) = Âe(k).

Furthermore, when the sensor decides to send a measurement update, which makes β(k) = 1, we reset the error to zero. Then the complete nominal error dynamics can be represented by the following equation:

    e(k + 1) = [1 − β(k)]Âe(k).      (9.80)

It is clear that in the nominal case, once we update the model, the state error is equal to zero for the remaining time instants. However, in a real problem, the state-error dynamics are disturbed by the state of the real system, which is propagated by means of the model uncertainties as expressed in (9.79). Then, using the available model dynamics, we implement the optimal control input and the optimal scheduler that result from the following optimization problem:

    min_β J = eᵀ(N)Q_N e(N) + Σ_{k=0}^{N−1} [eᵀ(k)Qe(k) + Sβ(k)]
    subject to:
        e(k + 1) = [1 − β(k)]Âe(k),
        β(k) ∈ {0, 1}.                                        (9.81)

In order to solve the problem (9.81) we can use dynamic programming in the form of lookup tables. The main reason for using dynamic programming is that although the error will be finely quantized, the decision variable β(k) takes only two possible values, which reduces
200 Event-Based Control and Signal Processing
the amount of computations performed by the dynamic programming algorithm. The sensor operations at time k are reduced to measure the real state, compute and quantize the state error, and determine if the current measurement needs to be transmitted by looking at the corresponding table entries that are computed offline. The table size depends only on the horizon N and the error quantization levels.

9.6.3 Distributed Systems

Many control applications consider the interactions of different subsystems or agents. The increased use of communication networks has made possible the transmission of information among different subsystems in order to apply improved control strategies using information from distant subsystem nodes. The MB-ET control architecture has been extended to consider this type of control system, see [13]. In the control of distributed coupled systems, the improvement in the use of network resources is obtained by implementing models of other subsystems within each local controller.

The dynamics of a set of N coupled subsystems, i = 1, . . . , N, can be represented by

    ẋi = Ai xi + Bi ui + Σ_{j=1, j≠i}^{N} Aij xj,      (9.82)

and the nominal models are described by

    x̂˙i = Âi x̂i + B̂i ûi + Σ_{j=1, j≠i}^{N} Âij x̂j,      (9.83)

where xi, x̂i ∈ R^{ni}, and ui, ûi ∈ R^{mi}. In this framework, each node or subsystem implements copies of the models of all subsystems, including the model corresponding to its own local dynamics, in order to generate estimates of the states of all subsystems in the network. Note that the subsystems can have different dynamics and different dimensions as well. The dimensions mi and ni can be all different in general. Note also that each node has access to its local state xi at all times, which is used to compute the local subsystem control input defined by

    ui = Ki xi + Σ_{j=1, j≠i}^{N} Kij x̂j,      (9.84)

while the model control inputs, ûi, are given by

    ûi = Ki x̂i + Σ_{j=1, j≠i}^{N} Kij x̂j,      (9.85)

where Ki and Kij are the stabilizing control gains to be designed. The local state is also used to compute the local state error, which is given by ei = x̂i − xi. By measuring its local error, each subsystem is able to decide the appropriate times ti at which it should broadcast the current measured state xi to all other subsystems so that they can update the state of their local models, x̂i, corresponding to the state xi. At the same time, the subsystem that transmitted its state needs to update its own local model corresponding to xi, which makes the local state error equal to zero, that is, e(ti) = 0.

A decentralized MB-ET control technique to achieve asymptotic stability of the overall uncertain system was presented in [13]. This approach is decentralized since each subsystem only needs its own local state and its own local state error to decide when to broadcast information to the rest of the agents.

9.7 Conclusions

This chapter offered a particular version of the event-triggered control method. Here, a nominal model of the system is implemented in the controller node to estimate the plant state or the plant output between event time instants. Results concerning the MB-ET control of linear systems were summarized in this chapter. Then, a more detailed analysis of two MB-ET control techniques for nonlinear systems was presented. The first case considered the event-triggered control of continuous-time nonlinear systems. The second case addressed discrete-time nonlinear systems with external disturbance using output feedback control and dissipativity control theory.

Bibliography

[1] L. A. Montestruque and P. J. Antsaklis. On the model-based control of networked systems. Automatica, 39(10):1837–1843, 2003.

[2] L. A. Montestruque and P. J. Antsaklis. Stability of model-based control of networked systems with time varying transmission times. IEEE Transactions on Automatic Control, 49(9):1562–1572, 2004.

[3] E. Garcia and P. J. Antsaklis. Model-based control of continuous-time and discrete-time systems with large network induced delays. In Proceedings of the 20th Mediterranean Conference on Control and Automation, IEEE Publ., Piscataway, NJ, pp. 1129–1134, 2012.

[4] I. G. Polushin, P. X. Liu, and C. H. Lung. On the model-based approach to nonlinear networked control systems. Automatica, 44(9):2409–2414, 2008.
[5] X. Liu. A model based approach to nonlinear networked control systems. PhD thesis, University of Alberta, 2009.

[6] K. Furuta, M. Iwase, and S. Hatakeyama. Internal model and saturating actuation in human operation from human adaptive mechatronics. IEEE Transactions on Industrial Electronics, 52(5):1236–1245, 2005.

[7] A. Chaillet and A. Bicchi. Delay compensation in packet-switching networked controlled systems. In Proceedings of the 47th IEEE Conference on Decision and Control, IEEE Publ., Piscataway, NJ, pp. 3620–3625, 2008.

[8] J. P. Hespanha, A. Ortega, and L. Vasudevan. Towards the control of linear systems with minimum bit-rate. In Proceedings of the International Symposium on the Mathematical Theory of Networks and Systems, pp. 1–15, 2002.

[9] L. A. Montestruque and P. J. Antsaklis. Static and dynamic quantization in model-based networked control systems. International Journal of Control, 80(1):87–101, 2007.

[10] E. Garcia and P. J. Antsaklis. Model-based event-triggered control with time-varying network delays. In Proceedings of the 50th IEEE Conference on Decision and Control – European Control Conference, IEEE Publ., Piscataway, NJ, pp. 1650–1655, 2011.

[11] E. Garcia and P. J. Antsaklis. Model-based event-triggered control for systems with quantization and time-varying network delays. IEEE Transactions on Automatic Control, 58(2):422–434, 2013.

[12] E. Garcia and P. J. Antsaklis. Parameter estimation and adaptive stabilization in time-triggered and event-triggered model-based control of uncertain systems. International Journal of Control, 85(9):1327–1342, 2012.

[13] E. Garcia and P. J. Antsaklis. Decentralized model-based event-triggered control of networked systems. In Proceedings of the American Control Conference, IEEE Publ., Piscataway, NJ, pp. 6485–6490, 2012.

[14] E. Garcia, Y. Cao, and D. W. Casbeer. Model-based event-triggered multi-vehicle coordinated tracking using reduced-order models. Journal of the Franklin Institute, 351(8):4271–4286, 2014.

[15] E. Garcia, Y. Cao, and D. W. Casbeer. Cooperative control with general linear dynamics and limited communication: Centralized and decentralized event-triggered control strategies. In Proceedings of the American Control Conference, IEEE Publ., Piscataway, NJ, pp. 159–164, 2014.

[16] J. Lunze and D. Lehmann. A state-feedback approach to event-based control. Automatica, 46(1):211–215, 2010.

[17] W. P. M. H. Heemels and M. C. F. Donkers. Model-based periodic event-triggered control for linear systems. Automatica, 49(3):698–711, 2013.

[18] J. K. Yook, D. M. Tilbury, and N. R. Soparkar. Trading computation for bandwidth: Reducing communication in distributed control systems using state estimators. IEEE Transactions on Control Systems Technology, 10(4):503–518, 2002.

[19] P. Tabuada. Event-triggered real-time scheduling of stabilizing control tasks. IEEE Transactions on Automatic Control, 52(9):1680–1685, 2007.

[20] P. Tabuada and X. Wang. Preliminary results on state-triggered scheduling of stabilizing control tasks. In Proceedings of the 45th IEEE Conference on Decision and Control, IEEE Publ., Piscataway, NJ, pp. 282–287, 2006.

[21] X. Wang and M. D. Lemmon. Event-triggering in distributed networked control systems. IEEE Transactions on Automatic Control, 56:586–601, 2011.

[22] B. Friedland. Control System Design: An Introduction to State-Space Methods. Dover Publications, New York, 2005.

[23] H. K. Khalil. Nonlinear Systems, 3rd edition. Prentice Hall, Upper Saddle River, NJ, 2002.

[24] R. Postoyan, A. Anta, D. Nesic, and P. Tabuada. A unifying Lyapunov-based framework for the event-triggered control of nonlinear systems. In Proceedings of the 50th IEEE Conference on Decision and Control, IEEE Publ., Piscataway, NJ, pp. 2559–2564, 2011.

[25] A. Anta and P. Tabuada. To sample or not to sample: Self-triggered control for nonlinear systems. IEEE Transactions on Automatic Control, 55(9):2030–2042, 2010.

[26] D. Lehmann and J. Lunze. Event-based output-feedback control. In Proceedings of the 19th Mediterranean Conference on Control and Automation, IEEE Publ., Piscataway, NJ, pp. 982–987, 2011.

[27] H. Yu and P. J. Antsaklis. Event-triggered output feedback control for networked control systems using passivity: Time-varying network induced
delays. In Proceedings of the 50th IEEE Conference on Decision and Control, IEEE Publ., Piscataway, NJ, pp. 205–210, 2011.

[28] L. Li and M. D. Lemmon. Event-triggered output feedback control of finite horizon discrete-time multidimensional linear processes. In Proceedings of the 49th IEEE Conference on Decision and Control, IEEE Publ., Piscataway, NJ, pp. 3221–3226, 2010.

[29] E. Garcia and P. J. Antsaklis. Output feedback model-based control of uncertain discrete-time systems with network induced delays. In Proceedings of the 51st IEEE Conference on Decision and Control, IEEE Publ., Piscataway, NJ, pp. 6647–6652, 2012.

[30] M. C. F. Donkers and W. P. M. H. Heemels. Output-based event-triggered control with guaranteed L∞-gain and improved and decentralized event-triggering. IEEE Transactions on Automatic Control, 57(6):1362–1376, 2012.

[31] D. Georgiev and D. M. Tilbury. Packet-based control. In Proceedings of the American Control Conference, IEEE Publ., Piscataway, NJ, pp. 329–336, 2004.

[32] D. E. Quevedo, E. I. Silva, and G. C. Goodwin. Packetized predictive control over erasure channels. In Proceedings of the American Control Conference, IEEE Publ., Piscataway, NJ, pp. 1003–1008, 2007.

[33] J. C. Willems. Dissipative dynamical systems, Part I: General theory. Archive for Rational Mechanics and Analysis, 45(5):321–351, 1972.

[34] D. J. Hill and P. J. Moylan. Stability results of nonlinear feedback systems. Automatica, 13:377–382, 1977.

[35] M. McCourt, E. Garcia, and P. J. Antsaklis. Model-based event-triggered control of nonlinear dissipative systems. In Proceedings of the American Control Conference, IEEE Publ., Piscataway, NJ, pp. 5355–5360, 2014.

[36] E. Garcia and P. J. Antsaklis. Optimal model-based control with limited communication. In The 19th World Congress of the International Federation of Automatic Control, pp. 10908–10913, 2014.
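The offline lookup-table construction that Section 9.6.2 prescribes for the scheduling problem (9.81) can be sketched in a few lines of code. The following minimal prototype treats a scalar nominal error e(k + 1) = (1 − β(k))âe(k); the horizon N = 10, the weights q, qN, S, the model parameter â = 1.2, and the quantization grid are all illustrative assumptions, not values from the chapter.

```python
# Offline dynamic programming for the scheduling problem (9.81), scalar case:
#   minimize  sum_{k=0}^{N-1} [ q*e(k)^2 + S*beta(k) ] + qN*e(N)^2
#   subject to e(k+1) = (1 - beta(k)) * a_hat * e(k),  beta(k) in {0, 1}.
# The value function and the decision are tabulated on a quantized error grid,
# so the table size depends only on the horizon N and the quantization levels.

def build_tables(a_hat, q, qN, S, N, grid):
    def nearest(x):                       # quantize an error value onto the grid
        return min(range(len(grid)), key=lambda i: abs(grid[i] - x))

    V = [qN * g * g for g in grid]        # terminal cost V_N(e) = qN * e^2
    policy = []                           # policy[k][i]: optimal beta at stage k, grid point i
    for _ in range(N):                    # backward recursion k = N-1, ..., 0
        Vk, pk = [], []
        for g in grid:
            stay = q * g * g + V[nearest(a_hat * g)]   # beta = 0: error propagates
            send = q * g * g + S + V[nearest(0.0)]     # beta = 1: error reset to zero
            pk.append(1 if send < stay else 0)
            Vk.append(min(stay, send))
        V, policy = Vk, [pk] + policy
    return V, policy, nearest

# Illustrative (made-up) numbers: unstable nominal error dynamics, a_hat = 1.2.
grid = [i / 10.0 for i in range(-30, 31)]             # quantized error in [-3, 3]
V0, policy, nearest = build_tables(a_hat=1.2, q=1.0, qN=1.0, S=0.5, N=10, grid=grid)

# Online, the sensor only quantizes the current error and reads the table entry.
e, sent = 1.0, []
for k in range(10):
    beta = policy[k][nearest(e)]
    sent.append(beta)
    e = (1 - beta) * 1.2 * e
```

With these made-up weights, a unit error immediately justifies paying the communication price S (one transmission resets the nominal error to zero), after which no further transmissions are scheduled, matching the nominal behavior discussed after (9.80).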
10
Self-Triggered and Team-Triggered Control of Networked
Cyber-Physical Systems
Cameron Nowzari
University of Pennsylvania
Philadelphia, PA, USA
Jorge Cortés
University of California
San Diego, CA, USA
CONTENTS
10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
10.1.1 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
10.1.2 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
10.2 Self-Triggered Control of a Single Plant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
10.2.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
10.2.2 Self-Triggered Control Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
10.3 Distributed Self-Triggered Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
10.3.1 Network Model and Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
10.3.2 Event-Triggered Communication and Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
10.3.3 Self-Triggered Communication and Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
10.4 Team-Triggered Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
10.4.1 Promise Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
10.4.2 Self-Triggered Communication and Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
10.4.3 Event-Triggered Information Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
10.4.4 Convergence Analysis of the Team-Triggered Coordination Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
10.5 Illustrative Scenario: Optimal Robot Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
10.5.1 Self-Triggered Deployment Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
10.5.2 Team-Triggered Deployment Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
10.5.3 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
10.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
ABSTRACT This chapter presents extensions to event-triggered control called self-triggered and team-triggered control. Unlike event-triggered controllers that prescribe an action when an appropriate “event” has occurred, self-triggered controllers prescribe a time in the future at which an action should take place. The benefit of this is that continuous or periodic monitoring of the event is no longer required. This is especially useful in networked scenarios where the events of subsystems may depend on the state of other subsystems in the network because persistent communication is not required. However, self-triggered algorithms can generally yield conservative decisions on when actions should take place to properly guarantee the desired stability results. Consequently, we also introduce the team-triggered control approach that combines ideas from both event- and self-triggered control to help overcome these limitations.

10.1 Introduction

This chapter describes triggered control approaches for the coordination of networked cyber-physical systems. Given the coverage of the other chapters of this
book, our focus is on self-triggered control and a novel approach we term team-triggered control.

The basic idea behind triggered approaches for controlled dynamical systems is to opportunistically select when to execute certain actions (e.g., update the actuation signal, sense some data, communicate some information) in order to efficiently perform various control tasks. Such approaches trade computation for sensing, actuation, or communication effort and give rise to real-time controllers that do not need to perform these actions continuously, or even periodically, in order for the system to function according to a desired level of performance. Triggered approaches for control become even more relevant in the context of networked cyber-physical systems, where computation, actuation, sensing, and communication capabilities might not be collocated. The successful operation of networked systems critically relies on the acquisition and transmission of information across different subsystems, which adds an additional layer of complexity for controller design and analysis. In fact, a given triggered controller might be implementable over some networks and not over others, depending on the requirements imposed by the triggers on the information flow across the individual subsystems.

In event-triggered control, the focus is on detecting events during the system execution that are relevant from the point of view of task completion in order to trigger appropriate actions. Event-triggered strategies result in good performance but require continuous access to some state or output to be able to monitor the possibility of an event being triggered at any time. This can be especially costly to implement when executed over networked systems (for instance, if the availability of information relies on continuous agent-to-agent communication). Other chapters in this book alleviate this issue by considering periodic or data-sampled event-triggered control, where the triggers are only checked when information is available, but this may still be inefficient if the events are not triggered often.

In self-triggered control, the emphasis is instead on developing tests that rely only on current information available to the decision maker to schedule future actions (e.g., when to measure the state again, when to recompute the control input and update the actuator signals, or when to communicate information). To do so, this approach relies critically on abstractions about how the system might behave in the future given the information available now. Self-triggered strategies can be made robust to uncertainties (since they can naturally be accounted for in the abstractions) and are more easily amenable to distributed implementation. However, they often result in conservative executions because of the inherent overapproximation about the state of the system associated with the given abstractions. Intuitively, it makes sense to believe that the more accurately a given abstraction describes the system behavior, the less conservative the resulting self-triggered controller implementation. This is especially so in the context of networked systems, where individual agents need to maintain various abstractions about the state of their neighbors, the environment, or even the network.

The team-triggered approach builds on the strengths of event- and self-triggered control to synthesize a unified approach for controlling networked systems in real time, which combines the best of both worlds. The approach is based on agents making promises to one another about their future states and being responsible for warning each other if they later decide to break them. This is reminiscent of event-triggered implementations. Promises can be broad, from tight state trajectories to loose descriptions of reachability sets. With the information provided by promises, individual agents can autonomously determine what time in the future fresh information is needed to maintain a desired level of performance. This is reminiscent of self-triggered implementations. A benefit of the strategy is that because of the availability of the promises, agents do not require continuous state information about their neighbors. Also, because of the extra information provided by promises about what other agents plan to do, agents can operate more efficiently and less conservatively compared to self-triggered strategies where worst-case conditions must always be considered.

10.1.1 Related Work

The other chapters of this book provide a broad panoramic view of the relevant literature on general event- and self-triggered control of dynamical systems [1–3], so we focus here on networked systems understood in a broad sense. The predominant paradigm is that of a single plant that is stabilized through a decentralized triggered controller over a sensor–actuator network (see, e.g., [4–6]). Fewer works have considered scenarios where multiple plants or agents together are the subject of the overall control design, where the synthesis of appropriate triggers raises novel challenges due to the lack of centralized decision making and the local agent-to-agent interactions. Exceptions include consensus via event-triggered [7–9] or self-triggered control [7], rendezvous [10], collision avoidance while performing point-to-point reconfiguration [11], distributed optimization [12–14], model predictive control [15], and model-based event-triggered control [16,17]. The works in [7,18] implement self-triggered communication schemes to perform distributed control
Self-Triggered and Team-Triggered Control of Networked Cyber-Physical Systems 205
where agents assume worst-case conditions for other agents when deciding when new information should be obtained. Distributed strategies based on event-triggered communication and control are explored in [19], where each agent has an a priori determined local error tolerance, and once it violates it, the agent broadcasts its updated state to its neighbors. The same event-triggered approach is taken in [20] to implement gradient control laws that achieve distributed optimization. In the interconnected system considered in [16], each subsystem helps neighboring subsystems by monitoring their estimates and ensuring that they stay within some performance bounds. The approach requires different subsystems to have synchronized estimates of one another even though they do not communicate at all times. In [21,22], agents do not have continuous availability of information from neighbors and instead decide when to broadcast new information to them, which is more in line with the work we present here.

10.1.2 Notation

Here we collect some basic notation used throughout the paper. We let R, R≥0, and Z≥0 be the sets of real, nonnegative real, and nonnegative integer numbers, respectively. We use ‖·‖ to denote the Euclidean distance. Given p and q ∈ Rd, [p, q] ⊂ Rd is the closed line segment with extreme points p and q, and B(p, r) = {q ∈ Rd | ‖q − p‖ ≤ r} is the closed ball centered at p with radius r ∈ R≥0. Given v ∈ Rd \ {0}, unit(v) is the unit vector in the direction of v. A function α : R≥0 → R≥0 belongs to class K if it is continuous, strictly increasing, and α(0) = 0. A function β : R≥0 × R≥0 → R≥0 belongs to class KL if for each fixed r, β(r, s) is nonincreasing with respect to s and lim_{s→∞} β(r, s) = 0, and for each fixed s, β(·, s) ∈ K. For a set S, we let ∂S denote its boundary, and Pcc(S) denote the collection of compact and connected subsets of S.

10.2 Self-Triggered Control of a Single Plant

In this section, we review the main ideas behind the self-triggered approach for the control of a single plant. Our starting point is the availability of a controller that, when implemented in continuous time, asymptotically stabilizes the system and is certified by a Lyapunov function. For a constant input, the decision maker can compute the future evolution of the system given the current state (using abstractions of the system behavior), and determine what time in the future the controller should be updated. For simplicity, we present the discussion for a linear control system, although the results are valid for more general scenarios, such as nonlinear systems for which a controller is available that makes the closed-loop system input-to-state stable. The exposition closely follows [23].

10.2.1 Problem Statement

Consider a linear control system

    ẋ = Ax + Bu,      (10.1)

with x ∈ Rn and u ∈ Rm, for which there exists a linear feedback controller u = Kx such that the closed-loop system

    ẋ = (A + BK)x,

is asymptotically stable. Given a positive definite matrix Q ∈ Rn×n, let P ∈ Rn×n be the unique solution to the Lyapunov equation (A + BK)ᵀP + P(A + BK) = −Q. Then, the evolution of the Lyapunov function Vc(x) = xᵀPx along the trajectories of the closed-loop system is

    V̇c = xᵀ((A + BK)ᵀP + P(A + BK))x = −xᵀQx.

Consider now a sample-and-hold implementation of the controller, where the input is not updated continuously, but instead at a sequence of to-be-determined times {tℓ}ℓ∈Z≥0 ⊂ R≥0:

    u(t) = Kx(tℓ),  t ∈ [tℓ, tℓ+1).      (10.2)

Such an implementation makes sense in practical scenarios given the inherent nature of digital systems. Under this controller implementation, the closed-loop system can be written as

    ẋ = (A + BK)x + BKe,

where e(t) = x(tℓ) − x(t), t ∈ [tℓ, tℓ+1), is the state error. Then, the objective is to determine the sequence of times {tℓ}ℓ∈Z≥0 to guarantee some desired level of performance for the resulting system. To make this concrete, define the function

    V(t, x0) = x(t)ᵀPx(t),

for a given initial condition x(0) = x0 (here, t ↦ x(t) denotes the evolution of the closed-loop system using (10.2)). We define the performance of the system by means of a function S : R≥0 × Rn → R≥0 that upper bounds the evolution of V. Then, the sequence of times {tℓ} can be implicitly defined as the times at which

    V(t, x0) ≤ S(t, x0),      (10.3)

is not satisfied. As stated, this is an event-triggered condition that updates the actuator signal whenever V(tℓ, x0) = S(tℓ, x0). Assuming solutions are well
Assuming solutions are well defined, it is not difficult to see that if the performance function satisfies S(t, x0) ≤ β(t, |x0|), for some β ∈ KL, then the closed-loop system is globally uniformly asymptotically stable. Moreover, if β is an exponential function, the system is globally uniformly exponentially stable.

Therefore, one only needs to guarantee the lack of Zeno behavior. We do this by choosing the performance function S so that the interevent times t_{ℓ+1} − t_ℓ are lower bounded by some constant positive quantity. This can be done in a number of ways. For the linear system (10.1), it turns out that it is sufficient to select S satisfying V̇(t_ℓ) < Ṡ(t_ℓ) at the event times t_ℓ. (This fact is formally stated in Theorem 10.1.) To do so, choose R ∈ R^{n×n} positive definite such that Q − R is also positive definite. Then, there exists a Hurwitz matrix A_s ∈ R^{n×n} such that the Lyapunov equation

A_s^T P + P A_s = −R,

holds. Consider the hybrid system

ẋ_s = A_s x_s,  t ∈ [t_ℓ, t_{ℓ+1}),
x_s(t_ℓ) = x(t_ℓ),

whose trajectories we denote by t ↦ x_s(t), and define the performance function S by

S(t) = x_s^T(t) P x_s(t).

10.2.2 Self-Triggered Control Policy

Under the event-triggered controller implementation described above, the decision maker needs access to the exact state at all times in order to monitor the condition (10.3). Here we illustrate how the self-triggered approach, instead, proceeds to construct an abstraction that captures the system behavior given the available information and uses it to determine what time in the future the condition might be violated (eliminating the need to continuously monitor the state).

For the particular case considered here, where the dynamics (10.1) are linear and deterministic, and no disturbances are present, it is possible to exactly compute, at any given time t_ℓ, the future evolution of the state, and hence, how much time t_{ℓ+1} − t_ℓ will elapse until the condition (10.3) is enabled again. In general, this is not possible, and one instead builds an abstraction that over-approximates this evolution, resulting in a trigger that generates interevent times that lower bound the interevent times generated by (10.3). This explains why the self-triggered approach generally results in more conservative implementations than the event-triggered approach, but comes with the benefit of not requiring continuous state information.

Letting y = [x^T, e^T]^T ∈ R^n × R^n, we write the continuous-time dynamics as

ẏ = Fy,  t ∈ [t_ℓ, t_{ℓ+1}),

where

F = [  A + BK    BK
      −A − BK   −BK ].

With a slight abuse of notation, we let y_ℓ = [x^T(t_ℓ), 0^T]^T be the state y at time t_ℓ. Note that e(t_ℓ) = 0, for all ℓ ∈ Z≥0, by definition of the update times. With this notation, we can rewrite

S(t) = (C e^{F_s(t−t_ℓ)} y_ℓ)^T P (C e^{F_s(t−t_ℓ)} y_ℓ),
V(t) = (C e^{F(t−t_ℓ)} y_ℓ)^T P (C e^{F(t−t_ℓ)} y_ℓ),

where

F_s = [ A_s  0
         0   0 ],   C = [ I  0 ].

The condition (10.3) can then be rewritten as

f(t, y_ℓ) = y_ℓ^T ( e^{F^T(t−t_ℓ)} C^T P C e^{F(t−t_ℓ)} − e^{F_s^T(t−t_ℓ)} C^T P C e^{F_s(t−t_ℓ)} ) y_ℓ ≤ 0.

What is interesting about this expression is that it clearly reveals the important fact that, with the information available at time t_ℓ, the decision maker can determine the next time t_{ℓ+1} at which (10.3) is violated by computing t_{ℓ+1} = h(x(t_ℓ)) as the time for which

f(h(x(t_ℓ)), y_ℓ) = 0. (10.4)

The following result from [23] provides a uniform lower bound t_min on the interevent times {t_{ℓ+1} − t_ℓ}_{ℓ∈Z≥0}.

Theorem 10.1: Lower bound on interevent times for self-triggered approach

Given the system (10.1) with controller (10.2) and controller updates given by the self-triggered policy (10.4), the interevent times are lower bounded by

t_min = min{t ∈ R>0 | det(M(t)) = 0} > 0,

where

M(t) = [ I  0 ] ( e^{F^T t} C^T P C e^{F t} − e^{F_s^T t} C^T P C e^{F_s t} ) [ I  0 ]^T.

Note that the above result can also be interpreted in the context of a periodic controller implementation: any period less than or equal to t_min results in a closed-loop system with asymptotic stability guarantees.
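The self-triggered computation of the next update time can be made concrete with a small numerical sketch. The example data below are our own (a double integrator with K = [−1 −2], Q = I, R = ½I); the matrices F, F_s, and C follow the definitions above, and the first zero crossing of f in (10.4) is found by simple time gridding, which Theorem 10.1 guarantees is bounded away from zero.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Example data of our own choosing: double-integrator plant standing in for (10.1)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -2.0]])               # A + BK is Hurwitz (eigenvalues -1, -1)
Acl = A + B @ K

n = 2
Q = np.eye(n)
P = solve_continuous_lyapunov(Acl.T, -Q)   # Acl^T P + P Acl = -Q
R = 0.5 * np.eye(n)                        # Q - R positive definite
As = -0.5 * np.linalg.solve(P, R)          # Hurwitz, with As^T P + P As = -R

F = np.block([[Acl, B @ K], [-Acl, -B @ K]])
Fs = np.block([[As, np.zeros((n, n))], [np.zeros((n, n)), np.zeros((n, n))]])
C = np.hstack([np.eye(n), np.zeros((n, n))])

def f(t, y):
    """Trigger function of (10.4): nonpositive while V(t) <= S(t)."""
    eF, eFs = expm(F * t), expm(Fs * t)
    return y @ (eF.T @ C.T @ P @ C @ eF - eFs.T @ C.T @ P @ C @ eFs) @ y

def next_update_time(x, t_max=10.0, dt=1e-3):
    """Self-triggered rule: first t > 0 at which f(t, y_l) crosses zero."""
    y = np.concatenate([x, np.zeros(n)])   # y_l = [x(t_l)^T, 0^T]^T
    t = dt
    while t < t_max and f(t, y) <= 0.0:
        t += dt
    return t

tau = next_update_time(np.array([1.0, 0.0]))
print(f"next controller update scheduled {tau:.3f} s ahead")
```

Computing the uniform bound t_min itself amounts to the analogous search for the first positive zero of det(M(t)).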
Self-Triggered and Team-Triggered Control of Networked Cyber-Physical Systems 207
REMARK 10.1: Triggered sampling versus triggered control. We must make an important distinction here between triggered control and triggered sampling (or, more generally, any type of data acquisition). Due to the deterministic nature of the general linear system discussed in this section, the answers to the questions

• When should a sample of the state be taken?

• When should the control signal to the system be updated?

are identical. In general, if there are disturbances in the system dynamics, the answers to these questions will not coincide. This might be further complicated in cases when the sensor and the actuator are not collocated, and the sharing of information between them is done via communication. The simplest implementation in this case would be to update the control signal each time a sample is taken; however, in cases where computing or applying a control signal is expensive, it may be more beneficial to use a hybrid strategy combining self-triggered sampling with event-triggered control, as in [24].

10.3 Distributed Self-Triggered Control

In this section, we expand our previous discussion with a focus on networked cyber-physical systems. Our main objective is to explain and deal with the issues that come forward when pushing the triggered approach from controlling a single plant, as considered in Section 10.2, to coordinating multiple cooperating agents. We consider networked systems whose interaction topology is modeled by a graph that captures how information is transmitted (via, for instance, sensing or communication). In such scenarios, each individual agent does not have access to all the network information at any given time, but instead interacts with a set of neighboring agents. In our forthcoming discussion, we pay attention to the following challenges specific to these networked problems. First, the trigger designs now need to be made for individuals, as opposed to a single centralized decision maker. Triggers such as those of (10.3) or (10.4) rely on global information about the network state, which the agents do not possess in general. Therefore, the challenge is finding ways in which such conditions can be allocated among the individuals. This brings to the forefront the issue of access to information and the need for useful abstractions about the behavior of other agents. Second, the design of independent triggers that each agent can evaluate with the information available to it naturally gives rise to asynchronous network executions, in which events are triggered by different agents at different times. This "active" form of asynchronism poses challenges for the correctness and convergence analysis of the coordination algorithms, and further complicates ruling out the presence of Zeno behavior. Before getting into the description of triggered control approaches, we provide a formal description of the model for networked cyber-physical systems and the overall team objective.

10.3.1 Network Model and Problem Statement

We begin by describing the model for the network of agents. In order to simplify the general discussion, we make a number of simplifying assumptions, such as linear agent dynamics or time-invariant interaction topology. However, when considering more specific classes of problems, it is often possible to drop some of these assumptions.

Consider N agents whose interaction topology is described by an undirected graph G. The fact that (i, j) belongs to the edge set E models the ability of agents i and j to communicate with one another. The set of all agents that i can communicate with is given by its set of neighbors N(i) in the graph G. The state of agent i ∈ {1, . . . , N}, denoted xi, belongs to a closed set Xi ⊂ R^{ni}. We assume that each agent has access to its own state at all times. According to our model, agent i can access x_N^i = (xi, {xj}_{j∈N(i)}) when it communicates with its neighbors. The network state x = (x1, . . . , xN) belongs to X = ∏_{i=1}^N Xi. We consider linear dynamics for each i ∈ {1, . . . , N}:

ẋi = Ai xi + Bi ui, (10.5)

with block diagonal Ai ∈ R^{ni×ni}, Bi ∈ R^{ni×mi}, and ui ∈ Ui. Here, Ui ⊂ R^{mi} is a closed set of allowable controls for agent i. We assume that the pair (Ai, Bi) is controllable with controls taking values in Ui. Letting u = (u1, . . . , uN) ∈ ∏_{i=1}^N Ui, the dynamics of the entire network are described by

ẋ = Ax + Bu, (10.6)

with A = diag(A1, . . . , AN) ∈ R^{n×n} and B = diag(B1, . . . , BN) ∈ R^{n×m}, where n = ∑_{i=1}^N ni and m = ∑_{i=1}^N mi.

The goal of the network is to drive the agents' states to some desired closed set of configurations D ⊂ X and ensure that they stay there. Depending on how the set D is defined, this goal can capture different coordination tasks, including deployment, rendezvous, and formation control (see, e.g., [25]). Our objective here is to design provably correct triggered coordination strategies that agents can implement to achieve this goal.
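As a concrete illustration of this model, the following sketch uses hypothetical data of our own (three single-integrator agents on a line graph, with a consensus-style rule standing in for a distributed u∗) to assemble the network matrices of (10.6) from the agent matrices of (10.5):

```python
import numpy as np
from scipy.linalg import block_diag

# Hypothetical network: N = 3 single-integrator agents (A_i = 0, B_i = 1)
# on the line graph 0 - 1 - 2.
agents = [{"A": np.zeros((1, 1)), "B": np.eye(1)} for _ in range(3)]
edges = {(0, 1), (1, 2)}                       # undirected edge set E of G

def neighbors(i):
    """N(i): the agents that i can communicate with."""
    return {j for e in edges if i in e for j in e if j != i}

# Block-diagonal network matrices A = diag(A_1,...,A_N), B = diag(B_1,...,B_N)
A = block_diag(*(ag["A"] for ag in agents))
B = block_diag(*(ag["B"] for ag in agents))

# A consensus-style rule, used purely as an example of a distributed law whose
# ith component depends only on x_N^i = (x_i, {x_j : j in N(i)})
def u_star(x):
    return np.array([-sum(x[i] - x[j] for j in neighbors(i)) for i in range(len(x))])

def V(x):  # disagreement function; D = {x : x_1 = ... = x_N} minimizes it
    return 0.5 * sum((x[i] - x[j]) ** 2 for (i, j) in edges)

x = np.array([0.0, 1.0, 3.0])
x_next = x + 0.01 * (A @ x + B @ u_star(x))    # one Euler step of (10.6)
print(f"V before: {V(x):.3f}, after: {V(x_next):.3f}")
```

The drop in V along the step is consistent with the dissipation property (10.8) required of the nominal controller.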
With this in mind, given the agent dynamics, the communication graph G, and the set D, our starting point is the availability of a distributed, continuous control law that drives the system asymptotically to D. Formally, we assume that a continuous map u∗ : X → R^m and a continuously differentiable function V : X → R, bounded from below, exist such that D is the set of minimizers of V, and for all x ∉ D,

∇i V(x) (Ai xi + Bi u∗_i(x)) ≤ 0,  i ∈ {1, . . . , N}, (10.7)

∑_{i=1}^N ∇i V(x) (Ai xi + Bi u∗_i(x)) < 0. (10.8)

We assume that both the control law u∗ and the gradient ∇V are distributed over G. By this, we mean that for each i ∈ {1, . . . , N}, the ith component of each of these objects only depends on x_N^i, rather than on the full network state x. For simplicity, and with a slight abuse of notation, we write u∗_i(x_N^i) and ∇i V(x_N^i) to emphasize this fact when convenient. This property has the important consequence that agent i can compute u∗_i and ∇i V with the exact information it can obtain through communication on G. We note that here we only consider the goal of achieving asymptotic stability of the system, with no specification on the performance S as we did in Section 10.2. For specific problems, one could reason with similar performance function requirements.

From an implementation viewpoint, the controller u∗ requires continuous agent-to-agent communication and continuous updates of the actuator signals. The triggered coordination strategies that we review next seek to select the time instants at which information should be acquired in an opportunistic way, such that the resulting implementation still enjoys certificates on correctness and performance similar to those of the original controller. The basic general idea is to design implementations that guarantee that the time derivative of the Lyapunov function V introduced in Section 10.3.1 along the solutions of the network dynamics (10.6) is less than or equal to 0 at all times, even though the information available to the agents is partial and possibly inexact.

REMARK 10.2: Triggered sampling, communication, and control. In light of Remark 10.1, the consideration of a networked cyber-physical system adds a third independent question that each agent of the network must answer:

• When should I sample the (local) state of the system?

• When should I update the (local) control input to the system?

• When should I communicate with my neighbors?

To answer these questions, the decision maker has now changed from a single agent with complete information on a single plant to multiple agents with partial information about the network. For simplicity, and to emphasize the network aspect, we assume that agents have continuous access to their own state and that control signals are updated each time communications with neighbors occur. We refer the interested reader to [16,20], where these questions are considered independently in specific classes of systems.

10.3.2 Event-Triggered Communication and Control

In this section, we see how an event-triggered implementation of the continuous control law u∗ might be realized using (10.7).

Let t_ℓ for some ℓ ∈ Z≥0 be the last time at which all agents have received information from their neighbors. We are then interested in determining at what time t_{ℓ+1} the agents should communicate again. In between updates, the simplest estimate that an agent i maintains about a neighbor j ∈ N(i) is the zero-order hold given by

x̂_j^i(t) = x_j(t_ℓ),  t ∈ [t_ℓ, t_{ℓ+1}). (10.9)

Recall that agent i always has access to its own state. Therefore, x̂_N^i(t) = (x_i(t), {x̂_j^i(t)}_{j∈N(i)}) is the information available to agent i at time t. The implemented controller is then given by

u_i^event(t) = u∗_i(x̂_N^i(t_ℓ)),

for t ∈ [t_ℓ, t_{ℓ+1}). We only consider the zero-order hold here for simplicity, although one could also implement more elaborate schemes for estimating the state of neighboring agents given the sampled information available, and use this for the controller implementation.

The time t_{ℓ+1} at which an update becomes necessary is determined by the first time after t_ℓ that the time derivative of V along the trajectory of (10.6) with u = u^event is no longer negative. Formally, the event for when agents should request updated information is

d/dt V(x(t_{ℓ+1})) = ∑_{i=1}^N ∇i V(x(t_{ℓ+1})) (Ai xi(t_{ℓ+1}) + Bi u_i^event(t_ℓ)) = 0. (10.10)

Two reasons may cause (10.10) to be satisfied. One reason is that the zero-order hold control u^event(t) for t ∈ [t_ℓ, t_{ℓ+1}) has become too outdated, causing an update at time t_{ℓ+1}. Until (10.10) is satisfied, it is not necessary to update state information because inequality (10.8) implies that d/dt V(x(t)) < 0
for t ∈ [t_ℓ, t_{ℓ+1}) if x(t_ℓ) ∉ D, given that all agents have exact information at t_ℓ and d/dt V(x(t_ℓ)) < 0 by continuity of d/dt V(x). The other reason is simply that x(t_{ℓ+1}) ∈ D, and thus the system has reached its desired configuration.

Unfortunately, (10.10) cannot be checked in a distributed way because it requires global information. Instead, one can define a local event that implicitly defines when a single agent i ∈ {1, . . . , N} should update its information. Letting t^i_ℓ be some time at which agent i receives updated information, t^i_{ℓ+1} ≥ t^i_ℓ is the first time such that

∇i V(x(t^i_{ℓ+1})) (Ai xi(t^i_{ℓ+1}) + Bi u_i^event(t^i_ℓ)) = 0. (10.11)

This means that as long as each agent i can ensure the local event (10.11) has not yet occurred, it is guaranteed that (10.10) has also not yet occurred. Note that this is a sufficient condition for (10.10) not being satisfied, but it is not necessary. There may be other ways of distributing the global trigger (10.10) for more specific classes of problems. Although this can now be monitored in a distributed way, the problem with this approach is that each agent i ∈ {1, . . . , N} needs to have continuous access to information about the state of its neighbors N(i) in order to evaluate ∇i V(x) = ∇i V(x_N^i) and check condition (10.11). This requirement may make the event-triggered approach impractical when this information is only available through communication. Note that this does not mean it is impossible to perform event-triggered communication strategies in a distributed way, but it requires a way for agents to use only their own information to monitor some meaningful event. An example problem where this can be done is provided in [26,27], where agents use event-triggered broadcasting to solve the consensus problem.

10.3.3 Self-Triggered Communication and Control

In this section, we design a self-triggered communication and control strategy of the controller u∗ by building on the discussion in Section 10.3.2. To achieve this, the basic idea is to remove the requirement on continuous availability of information to check the test (10.11) by providing agents with possibly inexact information about the state of their neighbors.

To do so, we begin by introducing the notion of reachability sets. Given y ∈ Xi, let Ri(s, y) be the set of reachable points under (10.5) starting from y in s seconds:

Ri(s, y) = { z ∈ Xi | ∃ ui : [0, s] → Ui such that z = e^{Ai s} y + ∫_0^s e^{Ai(s−τ)} Bi ui(τ) dτ }.

Assuming here that agents have exact knowledge about the dynamics and control sets of their neighboring agents, each time an agent receives state information, it can construct sets that are guaranteed to contain their neighbors' states in the future. Formally, if t_ℓ is the time at which agent i receives state information x_j(t_ℓ) from its neighbor j ∈ N(i), then

X_j^i(t, x_j(t_ℓ)) = R_j(t − t_ℓ, x_j(t_ℓ)) ⊂ X_j, (10.12)

is guaranteed to contain x_j(t) for all t ≥ t_ℓ. We refer to these sets as guaranteed sets. These capture the notion of abstractions of the system behavior that we discussed in Section 10.2 when synthesizing a self-triggered control policy for a single plant. For simplicity, we let X_j^i(t) = X_j^i(t, x_j(t_ℓ)) when the starting state x_j(t_ℓ) and time t_ℓ do not need to be emphasized. We denote by X_N^i(t) = (x_i(t), {X_j^i(t)}_{j∈N(i)}) the information available to an agent i at time t.

With the guaranteed sets in place, we can now provide a test using inexact information to determine when new, up-to-date information is required. Let t^i_ℓ be the last time at which agent i received updated information from its neighbors. Until the next time t^i_{ℓ+1} information is obtained, agent i uses the zero-order hold estimate and control:

x̂_j^i(t) = x_j(t^i_ℓ),  u_i^self(t) = u∗_i(x̂_N^i(t^i_ℓ)),

for t ∈ [t^i_ℓ, t^i_{ℓ+1}) and j ∈ N(i). As noted before, the consideration of zero-order hold is only made to simplify the exposition. At time t^i_ℓ, agent i computes the next time t^i_{ℓ+1} ≥ t^i_ℓ at which information should be acquired via

sup_{y_N ∈ X_N^i(t^i_{ℓ+1})} ∇i V(y_N) (Ai xi(t^i_{ℓ+1}) + Bi u_i^self(t^i_ℓ)) = 0. (10.13)

By (10.7) and the fact that X_j^i(t^i_ℓ) = {x_j(t^i_ℓ)}, at time t^i_ℓ we have

sup_{y_N ∈ X_N^i(t^i_ℓ)} ∇i V(y_N) (Ai xi(t^i_ℓ) + Bi u_i^self(t^i_ℓ)) = ∇i V(x_N^i(t^i_ℓ)) (Ai xi(t^i_ℓ) + Bi u_i^self(t^i_ℓ)) ≤ 0.

If all agents use this triggering criterion for updating information, it is guaranteed that d/dt V(x(t)) ≤ 0 at all times, because for each i ∈ {1, . . . , N}, the true state x_j(t) is guaranteed to be in X_j^i(t) for all j ∈ N(i) and t ≥ t^i_ℓ.

The condition (10.13) is appealing because it can be solved by agent i with the information it possesses at time t^i_ℓ.
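The guaranteed-set construction simplifies drastically for particular dynamics. As an illustration (our own toy instance, not from the chapter): for a scalar single integrator with bounded controls, R_j(s, y) is an interval, and the sets (10.12) contain any trajectory that respects the control bound:

```python
import numpy as np

# Hypothetical scalar neighbor with dynamics xdot_j = u_j and allowable
# controls U_j = [-u_max, u_max]: the reachable set R_j(s, y) is the interval
# [y - u_max*s, y + u_max*s], which serves as the guaranteed set (10.12).
u_max = 1.0

def guaranteed_set(s, y):
    """Everything neighbor j could reach s seconds after reporting state y."""
    return (y - u_max * s, y + u_max * s)

# Any trajectory driven by admissible controls stays inside the sets.
rng = np.random.default_rng(0)
y0, dt = 2.0, 1e-2
x, ok = y0, True
for step in range(1, 101):
    x += dt * rng.uniform(-u_max, u_max)   # some admissible control signal
    lo, hi = guaranteed_set(step * dt, y0)
    ok = ok and (lo - 1e-9 <= x <= hi + 1e-9)
print("trajectory contained in guaranteed sets:", ok)
```

The containment holds by construction, but the interval grows linearly in time, which is exactly the conservatism the team-triggered approach later reduces.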
Once determined, agent i schedules that, at time t^i_{ℓ+1}, it will request updated information from its neighbors. The term self-triggered captures the fact that each agent is now responsible for deciding when it requires new information. We refer to t^i_{ℓ+1} − t^i_ℓ as the self-triggered request time of agent i ∈ {1, . . . , N}. Due to the conservative way in which t^i_{ℓ+1} is determined, it is possible that t^i_{ℓ+1} = t^i_ℓ for some i, which would mean that continuous information updates are necessary. (It should be noted that this cannot happen for all i ∈ {1, . . . , N} unless the network state is already in D.) For a given specific task (target set D, Lyapunov function V, and controller u∗), one might be able to discard this issue. In other cases, this can be dealt with by introducing a dwell time such that a minimum amount of time must pass before an agent can request new information. We do not enter into the details of this here, but instead show how it can be done for a specific example in Section 10.5.

The main convergence result for the distributed self-triggered approach is provided below. We defer the sketch of the proof to Section 10.4, as it becomes a direct consequence of Proposition 10.2.

Proposition 10.1

For the network model and problem setup described in Section 10.3.1, if each agent executes the self-triggered algorithm presented above, the state of the entire network asymptotically converges to the desired set D for all Zeno-free executions.

Note that the result is formulated for non-Zeno executions. As discussed above, dwell times can be used to rule out Zeno behavior, although we do not go into details here. The problem with the distributed self-triggered approach is that the resulting times are often conservative because the guaranteed sets can grow large quickly as they capture all possible trajectories of neighboring agents. It is conceivable that improvements can be made from tuning the guaranteed sets based on what neighboring agents plan to do rather than what they can do. This observation is at the core of the team-triggered approach proposed next.

10.4 Team-Triggered Control

Here we describe an approach that combines ideas from both event- and self-triggered control for the coordination of networked cyber-physical systems. The team-triggered strategy incorporates the reactive nature of event-triggered approaches and, at the same time, endows individual agents with the autonomy characteristics of self-triggered approaches to determine when and what information is needed. Agents make promises to their neighbors about their future states and inform them if these promises are violated later (hence, the connection with event-triggered control). These promises function as more accurate abstractions of the behavior of the other entities than the guaranteed sets introduced in Section 10.3.3. This extra information allows each agent to compute the next time that an update is required (which in general is longer than in the purely self-triggered implementation described in Section 10.3) and request information from its neighbors to guarantee the monotonicity of the Lyapunov function (hence, the connection with self-triggered control). Our exposition here follows [28].

10.4.1 Promise Sets

Promises allow agents to predict the evolution of their neighbors more accurately than guaranteed sets, which in turn affects the overall network behavior. For our purposes here, a (control) promise that agent j makes to agent i at some time t′ is a subset U_j^i ⊂ U_j of its allowable control set. This promise conveys the meaning that agent j will only use controls u(t) ∈ U_j^i for all t ≥ t′. Given the dynamics (10.5) of agent j and state x_j(t′) at time t′, agent i can compute the state promise set

X_j^i(t) = { z ∈ X_j | ∃ u_j : [t′, t] → U_j^i s.t. z = e^{A_j(t−t′)} x_j(t′) + ∫_{t′}^t e^{A_j(t−τ)} B_j u_j(τ) dτ }, (10.14)

for t ≥ t′. This means that as long as agent j keeps its promise at all times (i.e., u_j(t) ∈ U_j^i for all t ≥ t′), then x_j(t) ∈ X_j^i(t) for all t ≥ t′. We denote by X_N^i(t) = (x_i(t), {X_j^i(t)}_{j∈N(i)}) the information available to an agent i at time t. Since the promise U_j^i is a subset of agent j's allowable control set U_j, it follows that the promise set (10.14) is contained in the guaranteed set (10.12) for all t ≥ t′ as well. This allows agent i to operate with better information about agent j than it would by computing the guaranteed sets (10.12) defined in Section 10.3.3. Promises can be generated in different ways depending on the desired task. For simplicity, we only consider static promises U_j^i here, although one could also consider more complex time-varying promises [28].
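To see how promises tighten the abstraction, consider again a scalar single integrator (hypothetical data of our own): a promised control interval yields a promise set (10.14) that is an interval nested inside the guaranteed set (10.12).

```python
# Promise versus guaranteed sets for a hypothetical scalar agent with
# dynamics xdot_j = u_j, allowable controls U_j = [-1, 1], and a (static)
# promise U_j^i = [0.2, 0.5] made with reported state x_j = 2.0.
U_j = (-1.0, 1.0)
U_ji = (0.2, 0.5)                 # promised subset of U_j
x_reported = 2.0

def interval_set(s, y, u_range):
    """States reachable s seconds ahead using controls confined to u_range."""
    return (y + u_range[0] * s, y + u_range[1] * s)

s = 3.0
promise_set = interval_set(s, x_reported, U_ji)      # X_j^i(t) of (10.14)
guaranteed = interval_set(s, x_reported, U_j)        # X_j^i(t) of (10.12)
contained = (guaranteed[0] <= promise_set[0] and
             promise_set[1] <= guaranteed[1])
print(f"promise set {promise_set} inside guaranteed set {guaranteed}: {contained}")
```

The tighter the promised interval, the smaller the promise set, and hence the longer the trigger test can be certified without fresh communication.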
In general, tight promises correspond to agents having good information about their neighbors, which at the same time may result in an increased communication effort (since the promises cannot be kept for long periods of time). Loose promises correspond to agents having to use more conservative controls due to the lack of information, while at the same time potentially being able to operate for longer periods of time without communicating (because promises are not violated). These advantages rely on the assumption that promises hold throughout the evolution. As the state of the network changes and the level of task completion evolves, agents might decide to break former promises and make new ones. We discuss this process in Section 10.4.3.

REMARK 10.3: Controllers that operate on set-valued information. With the extra information provided by promise sets, it is conceivable to design controllers that operate on set-valued information rather than point-valued information and go beyond zero-order state holds. For simplicity, we consider only controllers that operate on points in our general exposition, but illustrate how set-valued controllers can be defined in Section 10.5. We refer the interested reader to [28] for further details.

10.4.2 Self-Triggered Communication and Control

With the promise sets in place, we can now provide a test using the available information to determine when new, up-to-date information is required.

Let t^i_ℓ be the last time at which agent i received updated information from its neighbors. Until the next time t^i_{ℓ+1} information is obtained, agent i uses the zero-order hold estimate and control:

x̂_j^i(t) = x_j(t^i_ℓ),  u_i^team(t) = u∗_i(x̂_N^i(t^i_ℓ)),

for t ∈ [t^i_ℓ, t^i_{ℓ+1}) and j ∈ N(i). At time t^i_ℓ, agent i computes the next time t^i_{ℓ+1} ≥ t^i_ℓ at which information should be acquired via

sup_{y_N ∈ X_N^i(t^i_{ℓ+1})} ∇i V(y_N) (Ai xi(t^i_{ℓ+1}) + Bi u_i^team(t^i_ℓ)) = 0. (10.15)

By (10.7) and the fact that X_j^i(t^i_ℓ) = {x_j(t^i_ℓ)}, at time t^i_ℓ we have

sup_{y_N ∈ X_N^i(t^i_ℓ)} ∇i V(y_N) (Ai xi(t^i_ℓ) + Bi u_i^team(t^i_ℓ)) = ∇i V(x_N^i(t^i_ℓ)) (Ai xi(t^i_ℓ) + Bi u_i^team(t^i_ℓ)) ≤ 0.

If all agents use this triggering criterion for updating information, and all promises among agents are kept at all times, it is guaranteed that d/dt V(x(t)) ≤ 0 at all times, because for each i ∈ {1, . . . , N}, the true state x_j(t) is guaranteed to be in X_j^i(t) for all j ∈ N(i) and t ≥ t^i_ℓ.

As long as (10.15) has not yet occurred for all agents i ∈ {1, . . . , N} for some time t, and the promises have not been broken, assumption (10.7) and the continuity of the left-hand side of (10.15) in time guarantee

d/dt V(x(t)) ≤ ∑_{i=1}^N L_i V^sup(X_N^i(t)) < 0. (10.16)

The condition (10.15) is again appealing because it can be solved by agent i with the information it possesses at time t^i_ℓ. In general, it gives rise to a longer interevent time than the condition (10.13), because the agents employ more accurate information about the future behavior of their neighbors provided by the promises. However, as in the self-triggered method described in Section 10.3.3, and depending on the specific task, it is possible that t^i_{ℓ+1} = t^i_ℓ for some agent i, which means this agent would need continuous information from its neighbors. If this is the case, the issue can be dealt with by introducing a dwell time such that a minimum amount of time must pass before an agent can request new information. We do not enter into the details of this here, but instead refer to [28] for a detailed discussion. Finally, we note that the conclusions laid out above only hold if promises among agents are kept at all times, which we address next.

10.4.3 Event-Triggered Information Updates

Agent promises may need to be broken for a variety of reasons. For instance, an agent might receive new information from its neighbors, causing it to change its former plans. Another example is given by an agent that made a promise that it is not able to keep for as long as it anticipated. Consider an agent i ∈ {1, . . . , N} that has sent a promise U_i^j to a neighboring agent j at some time t_last, resulting in the state promise set x_i(t) ∈ X_i^j(t). If agent i ends up breaking its promise at time t∗ > t_last, then it is responsible for sending a new promise U_i^j to agent j at time t_next = t∗. This implies that agent i must keep track of promises made to its neighbors and monitor them in case they are broken. This mechanism is implementable because each agent only needs information about its own state and the promises it has made to determine whether the trigger is satisfied. We also note that different notions of "broken promise" are acceptable: this might mean that u_i(t∗) ∉ U_i^j or the more flexible x_i(t∗) ∉ X_i^j(t∗). The latter has an advantage in that it is not absolutely required that agent i use controls only
j
in Ui , as long as the promise set that agent j is operating or was received from some neighbor due to the breaking
with is still valid. of a promise.
Reminiscent of the discussion on dwell times in Sec- It is worth mentioning that the self-triggered approach
tion 10.4.2, it may be possible for promises to broken described in Section 10.3.3 is a particular case of the
arbitrarily fast, resulting in Zeno executions. As before, team-triggered approach, where the promises are simply
it is possible to implement a dwell time such that a the guaranteed (or reachable) sets described in (10.12).
minimum amount of time must pass before an agent In this scenario, promises can never be broken so the
can generate a new promise. However, in this case, event-triggered information updates never occur. There-
some additional consideration must be given to the fact fore, the class of team-triggered strategies contains the
that agents might not always be operating with correct class of self-triggered strategies.
information in this case. For instance, if an agent j has The following statement provides the main conver-
broken a promise to agent i but is not allowed to send gence result regarding distributed controller implemen-
a new one yet due to the dwell time, agent i might be tations synthesized via the team-triggered approach.
operating with false information. Such a problem can
be addressed using small warning messages that alert Proposition 10.2
agents of this situation. Another issue that is important
when considering the transmission of messages is that For the network model and problem setup described in
communication might be unreliable, with delays, packet drops, or noise. We refer to [28] for details on how these dwell times can be implemented while ensuring stability of the entire system in the presence of unreliable communication.

10.4.4 Convergence Analysis of the Team-Triggered Coordination Policy

The combination of the self-triggered communication and control policy with the event-triggered information updates gives rise to the team-triggered coordination policy, which is formally presented in Algorithm 10.1. The self-triggered information request in this strategy is executed by an agent anytime new information is received, whether it was actively requested by the agent or not.

Algorithm 10.1: Team-triggered coordination policy

(Self-triggered communication and control)
At any time t agent i ∈ {1, . . . , N} receives new promise(s) U_j^i and position information x_j(t) from neighbor(s) j ∈ N(i), agent i performs:
1: update x̂_j^i(t′) = x_j(t) for t′ ≥ t
2: compute and maintain promise set X_j^i(t′) for t′ ≥ t
3: compute own control u_i^team(t′) = u_i^*(X_{N(i)}^i(t′)) for t′ ≥ t
4: compute first time t^* ≥ t such that (10.15) is satisfied
5: schedule information request to neighbors in t^* − t seconds

(Respond to information request)
At any time t a neighbor j ∈ N(i) requests information, agent i performs:
1: send current state information x_i(t) to agent j
2: send new promise U_i^j to agent j

(Event-triggered information update)
At all times t, agent i performs:
1: if there exists j ∈ N(i) such that x_i(t) ∉ X_i^j(t) then
2:   send current state information x_i(t) to agent j
3:   send new promise U_i^j to agent j
4: end if

Section 10.3.1, if each agent executes the team-triggered coordination policy described in Algorithm 10.1, the state of the entire network asymptotically converges to the desired set D for all Zeno-free executions.

Note that the result is only formulated for non-Zeno executions for simplicity of presentation. As discussed, the absence of Zeno behavior can be guaranteed via dwell times for the self- and event-triggered updates. In the following, we provide a proof sketch of this result that highlights its key components. We refer the interested reader to our work [28] that provides a detailed technical discussion that also accounts for the presence of dwell times in the event- and self-triggered updates.

Our first observation is that the evolution of the Lyapunov function V along the trajectories of the team-triggered coordination policy is nonincreasing by design. This is a consequence of the analysis of Section 10.4.2, which holds under the assumption that promises are never broken, together with the event-triggered updates described in Section 10.4.3, which guarantee that the assumption holds. Then, the main challenge to establish the asymptotic convergence properties of the team-triggered coordination policy is in dealing with its discontinuous nature. More precisely, since the information possessed by any given agent consists of sets for each of its neighbors, promises shrink to singletons when updated information is received. To make this point precise, let us formally denote by

S_i = P_cc(X_1) × · · · × P_cc(X_{i−1}) × X_i × P_cc(X_{i+1}) × · · · × P_cc(X_N),

the state space where the variables that agent i ∈ {1, . . . , N} possesses live. Note that this set allows us to capture the fact that each agent i has perfect information about itself.
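For ball-shaped promise sets such as those introduced later in (10.22), the event-triggered information update of Algorithm 10.1 reduces to a distance test. The sketch below is our own illustration; the function name and the ball representation of the promise set are assumptions, not code from the chapter:

```python
import math

def breaks_promise(x_i, center, radius):
    """Event-triggered update test of Algorithm 10.1: agent i must send its
    current state and a new promise to neighbor j as soon as its state x_i
    leaves the set it promised to j (modeled here as a closed ball)."""
    return math.dist(x_i, center) > radius

# Agent i promised neighbor j to stay within 0.5 of the origin.
print(breaks_promise((0.3, 0.0), (0.0, 0.0), 0.5))  # False: promise kept
print(breaks_promise((0.6, 0.1), (0.0, 0.0), 0.5))  # True: promise broken
```

As long as this test returns False for every neighbor, the neighbors can keep operating on the promised sets without any communication taking place.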
Self-Triggered and Team-Triggered Control of Networked Cyber-Physical Systems 213
Although agents only have information about their neighbors, the above space considers agents having promise information about all other agents to facilitate the analysis. This is only done to allow for a simpler technical presentation and does not impact the validity of the arguments. The space that the state of the entire network lives in is then S = ∏_{i=1}^N S_i. The information possessed by all agents of the network at some time t is collected in

(X^1(t), . . . , X^N(t)) ∈ S,

where X^i(t) = (X_1^i(t), . . . , X_N^i(t)) ∈ S_i.

The team-triggered coordination policy corresponds to a discontinuous map of the form S × Z_{≥0} → S × Z_{≥0}. The discontinuity makes it difficult to use standard stability methods to analyze the convergence properties of the network. Our approach to this problem consists of defining a set-valued map M : S × Z_{≥0} ⇒ S × Z_{≥0} with good continuity properties so that the trajectories of

(Z(t_{ℓ+1}), ℓ + 1) ∈ M(Z(t_ℓ), ℓ),

contain the trajectories of the team-triggered coordination policy. Although this "overapproximation procedure" enlarges the set of trajectories to consider in the analysis, the gained benefit is that of having a set-valued map with suitable continuity properties that are amenable to set-valued stability analysis.

We conclude this section by giving an idea of how to define the set-valued map M, which is essentially guided by the way information updates can happen under the team-triggered coordination policy. Given (Z, ℓ) ∈ S × Z_{≥0}, we define the (N + 1)th component of all the elements in M(Z, ℓ) to be ℓ + 1. (This corresponds to evolution in time.) The ith component of the elements in M(Z, ℓ) is the set given by one of the following possibilities: (1) the case when agent i does not receive any new information in this step, in which case the promise sets available to agent i are simply propagated forward with respect to the promises made to it; (2) the case when agent i has received information from at least one neighbor j, in which case the promise set X_j^i becomes an exact point {x_j} and all other promise sets are propagated forward as in (1). Lumping together in a set the two possibilities for each agent gives rise to a set-valued map with good continuity properties: formally, M is closed. (A set-valued map T : X ⇒ Y is closed if x_k → x, y_k → y, and y_k ∈ T(x_k) imply that y ∈ T(x).) The proof then builds on this construction, combined with the monotonic evolution of the Lyapunov function along the trajectories of the team-triggered coordination policy, and employs a set-valued version of the LaSalle Invariance Principle, see [25], to establish the asymptotic convergence result stated in Proposition 10.2.

10.5 Illustrative Scenario: Optimal Robot Deployment

In this section, we illustrate the above discussion on self- and team-triggered design of distributed coordination algorithms in a scenario involving the optimal deployment of a robotic sensor network. We start by presenting the original problem formulation from [29], where the authors provide continuous-time and periodic distributed algorithms to solve the problem. The interested reader is referred to [25] for additional references and a general discussion on coverage problems.

Consider a group of N agents moving in a convex polygon S ⊂ R² with positions P = (p_1, . . . , p_N). For simplicity, consider single-integrator dynamics

ṗ_i = u_i,  (10.17)

where ‖u_i‖ ≤ v_max for all i ∈ {1, . . . , N} for some v_max > 0. The network objective is to achieve optimal deployment as measured by the following locational optimization function H. The agent performance at a point q of agent i degrades with ‖q − p_i‖². Assume a density φ : S → R is available, with φ(q) reflecting the likelihood of an event happening at q. Consider then the minimization of the function

H(P) = E_φ [ min_{i∈{1,...,N}} ‖q − p_i‖² ].  (10.18)

This formulation is applicable in scenarios where the agent closest to an event of interest is the one responsible for it. Examples include servicing tasks, spatial sampling of random fields, resource allocation, and event detection.

With regard to the problem statement in Section 10.3.1, H plays the role of the function V, and the set D corresponds to the set of critical points of H. Let us next describe the distributed continuous-time controller u^*. In order to do so, we need to recall some notions from computational geometry. A partition of S is a collection of N polygons W = {W_1, . . . , W_N} with disjoint interiors whose union is S. The Voronoi partition V(P) = {V_1, . . . , V_N} of S generated by the points P is

V_i = {q ∈ S | ‖q − p_i‖ ≤ ‖q − p_j‖, ∀ j ≠ i}.

When the Voronoi regions V_i and V_j share an edge, p_i and p_j are (Voronoi) neighbors. We denote the neighbors of agent i by N(i). For Q ⊂ S, the mass and center of mass of Q with respect to φ are

M_Q = ∫_Q φ(q) dq,   C_Q = (1/M_Q) ∫_Q q φ(q) dq.

A tuple P = (p_1, . . . , p_N) is a centroidal Voronoi configuration if p_i = C_{V_i}, for all i ∈ {1, . . . , N}.
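To make the locational optimization concrete, the following sketch (our own illustration, not code from [29]) assigns points to Voronoi cells by the nearest-site rule and approximates H by a Riemann sum over a grid:

```python
import math

def nearest_site(q, P):
    """Voronoi assignment: the index i minimizing ||q - p_i||, so q lies in V_i."""
    return min(range(len(P)), key=lambda i: math.dist(q, P[i]))

def locational_cost(P, phi, grid, cell_area):
    """Grid approximation of H(P) = sum_i int_{V_i} ||q - p_i||^2 phi(q) dq."""
    return sum(min(math.dist(q, p) ** 2 for p in P) * phi(q) * cell_area
               for q in grid)

# Uniform density on the unit square: well-spread sites beat clustered ones.
h = 0.05
grid = [((a + 0.5) * h, (b + 0.5) * h) for a in range(20) for b in range(20)]
spread = locational_cost([(0.25, 0.5), (0.75, 0.5)], lambda q: 1.0, grid, h * h)
corners = locational_cost([(0.1, 0.1), (0.9, 0.9)], lambda q: 1.0, grid, h * h)
print(spread < corners)  # True
```

Here (0.25, 0.5) and (0.75, 0.5) form a centroidal Voronoi configuration for the uniform density on the unit square, which is why they achieve the lower cost.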
The controller u^* then simply corresponds to the gradient descent of the function H, whose ith component is given by

−∂H/∂p_i = 2 M_{V_i} (C_{V_i} − p_i).

Remarkably, this continuous-time coordination strategy admits a periodic implementation without the need of determining any step-size in closed form. This design is based on the observation that the function H can be rewritten in terms of the Voronoi partition as

H(P) = ∑_{i=1}^N ∫_{V_i} ‖q − p_i‖² φ(q) dq,

which suggests the generalization of H given by

H(P, W) = ∑_{i=1}^N ∫_{W_i} ‖q − p_i‖² φ(q) dq,  (10.19)

where W is a partition of S, and the ith agent is responsible of the "dominance region" W_i. The function H in (10.19) is then to be minimized with respect to the locations P and the dominance regions W. The key observations now are that (1) for a fixed network configuration, the optimal partition is the Voronoi partition, that is,

H(P, V(P)) ≤ H(P, W),

for any partition W of S, and (2) for a fixed partition, the optimal agent positions are the centroids, in fact,

H(P′, W) ≤ H(P, W),

for any P′, P such that ‖p′_i − C_{W_i}‖ ≤ ‖p_i − C_{W_i}‖, i ∈ {1, . . . , N}. These two facts together naturally lead to a periodic implementation of the coordination policy u^*, where agents repeatedly and synchronously acquire position information from their neighbors, update their own Voronoi cell, and move toward their centroid. Both the continuous-time and periodic implementations are guaranteed to make the network state asymptotically converge to the set of centroidal Voronoi configurations.

10.5.1 Self-Triggered Deployment Algorithm

Here we describe a self-triggered implementation of the distributed coordination algorithm for deployment described above. There are two key elements in our exposition. The first is the notion of abstraction employed by individual agents about the behavior of their neighbors. In this particular scenario, this gives rise to agents operating with spatial partitions under uncertain information, which leads us to the second important element: the design of a controller that operates on set-valued information rather than point-valued information. Our exposition here follows [18].

Abstraction via maximum velocity bound: The notion of abstraction employed is based on the maximum velocity bound for (10.17). In fact, an agent knows that any other neighbor cannot travel farther than v_max Δt from its current position in Δt seconds. Consequently, the reachable set of an agent i ∈ {1, . . . , N} is

R_i(s, p_i) = B(p_i, s v_max) ∩ S.

If t′ denotes the time at which agent i has just received position information p_j(t′) from a neighboring agent j ∈ N(i), then the guaranteed set that agent i maintains about agent j is

X_j^i(t) = B(p_j(t′), (t − t′) v_max) ∩ S,

which has the property that p_j(t) ∈ X_j^i(t) for t ≥ t′. Figure 10.1a shows an illustration of this concept. Agent i stores the data in D^i(t) = (X_1^i(t), . . . , X_N^i(t)) ⊂ S^N (if agent i does not have information about some agent j, we set X_j^i(t) = ∅) and the entire memory of the network is D = (D^1, . . . , D^N) ⊂ S^{N²}.

Design of controller operating on set-valued information: Agents cannot compute the Voronoi partition exactly with the data structure described above. However, they can still determine lower and upper bounds on their exact Voronoi cell, as we describe next. Given a set of regions D_1, . . . , D_N ⊂ S, each containing a site p_i ∈ D_i, the guaranteed Voronoi diagram of S generated by

FIGURE 10.1
(a) Guaranteed sets and (b) promise sets kept by agent i. In (b), dark regions correspond to the promise sets given the promises U_j^i.
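The guaranteed sets of Figure 10.1a admit a one-line membership test. The sketch below is our own (names assumed), and for simplicity it omits the intersection with the polygon S:

```python
import math

def in_guaranteed_set(q, p_j_last, t, t_last, v_max):
    """Membership in the guaranteed set B(p_j(t'), (t - t')*v_max): agent j
    moves no faster than v_max, so at time t it lies within (t - t_last)*v_max
    of the position p_j_last it reported at time t_last."""
    return math.dist(q, p_j_last) <= (t - t_last) * v_max

# One second after hearing that j was at the origin, with v_max = 1 m/s:
print(in_guaranteed_set((0.8, 0.0), (0.0, 0.0), 1.0, 0.0, 1.0))  # True
print(in_guaranteed_set((2.0, 0.0), (0.0, 0.0), 1.0, 0.0, 1.0))  # False
```

The radius, and hence the uncertainty agent i carries about agent j, grows linearly until fresh information arrives.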
FIGURE 10.2
(a) Guaranteed and (b) dual guaranteed Voronoi diagrams.
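When every uncertainty region is a ball, D_j = B(p_j, r_j), the extreme distances appearing in the definitions of the guaranteed and dual guaranteed cells have closed forms, which makes both membership tests easy to sketch (our own illustration; function names are assumptions):

```python
import math

def dist_max(q, p, r):
    """max over x in B(p, r) of ||q - x||."""
    return math.dist(q, p) + r

def dist_min(q, p, r):
    """min over y in B(p, r) of ||q - y||."""
    return max(math.dist(q, p) - r, 0.0)

def in_gv(q, i, sites, radii):
    """q in gV_i: the farthest candidate position in D_i is still closer to q
    than the nearest candidate position in every other D_j."""
    return all(dist_max(q, sites[i], radii[i]) <= dist_min(q, sites[j], radii[j])
               for j in range(len(sites)) if j != i)

def in_dgv(q, i, sites, radii):
    """q in dgV_i: the dual test, with min and max exchanged."""
    return all(dist_min(q, sites[i], radii[i]) <= dist_max(q, sites[j], radii[j])
               for j in range(len(sites)) if j != i)

sites, radii = [(0.0, 0.0), (4.0, 0.0)], [0.5, 0.5]
print(in_gv((1.0, 0.0), 0, sites, radii))   # True:  1.5 <= 2.5
print(in_gv((1.9, 0.0), 0, sites, radii))   # False: 2.4 >  1.6
print(in_dgv((1.9, 0.0), 0, sites, radii))  # True:  1.4 <= 2.6
```

The point (1.9, 0) illustrates the sandwich gV_i ⊂ V_i ⊂ dgV_i: it lies in the exact Voronoi cell of the first site and in its dual guaranteed cell, but not in its guaranteed cell.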
D = (D_1, . . . , D_N) is the collection gV(D_1, . . . , D_N) = {gV_1, . . . , gV_N},

gV_i = {q ∈ S | max_{x∈D_i} ‖q − x‖ ≤ min_{y∈D_j} ‖q − y‖ for all j ≠ i}.

The guaranteed Voronoi diagram is not a partition of S, see Figure 10.2a. The important fact for our purposes is that, for any collection of points p_i ∈ D_i, i ∈ {1, . . . , N}, the guaranteed Voronoi diagram is contained in the Voronoi partition, that is, gV_i ⊂ V_i, i ∈ {1, . . . , N}. Each point in the boundary of gV_i belongs to the set

Δ_ij^g = {q ∈ S | max_{x∈D_i} ‖q − x‖ = min_{y∈D_j} ‖q − y‖},  (10.20)

for some j ≠ i. Note that Δ_ij^g ≠ Δ_ji^g. Agent p_j is a guaranteed Voronoi neighbor of p_i if Δ_ij^g ∩ ∂gV_i is not empty nor a singleton. The set of guaranteed Voronoi neighbors of agent i is gN_i(D). In our scenario, the "uncertain" regions for each agent correspond to the guaranteed sets.

One can also introduce the concept of a dual guaranteed Voronoi diagram of S generated by D_1, . . . , D_N, which is the collection of sets dgV(D_1, . . . , D_N) = {dgV_1, . . . , dgV_N} defined by

dgV_i = {q ∈ S | min_{x∈D_i} ‖q − x‖ ≤ max_{y∈D_j} ‖q − y‖ for all j ≠ i}.

A similar discussion can be carried out for this notion, cf. [18]. The most relevant fact for our design is that, for any collection of points p_i ∈ D_i, i ∈ {1, . . . , N}, the dual guaranteed Voronoi diagram contains the Voronoi partition, that is, V_i ⊂ dgV_i, i ∈ {1, . . . , N}.

The notions of guaranteed and dual guaranteed Voronoi cells allow us to synthesize a controller building on the observation that, if

‖p_i − C_{gV_i}‖ > bnd_i ≡ bnd(gV_i, dgV_i) = 2 cr(dgV_i) (1 − M_{gV_i}/M_{dgV_i}),  (10.21)

then moving toward C_{gV_i} from p_i strictly decreases the distance ‖p_i − C_{V_i}‖ (here, cr(Q) denotes the circumradius of Q). With this in place, an agent can actually move closer to the centroid of its Voronoi cell (which guarantees the decrease of H), even when it does not know the exact locations of its neighbors. Informally, each agent moves as follows:

[Informal description]: The agent moves toward the centroid of its guaranteed Voronoi cell until it is within distance bnd_i of it.

Note that, as time elapses without new information, the bound bnd_i grows until it becomes impossible to satisfy (10.21).
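The bound in (10.21) only requires the circumradius of the dual guaranteed cell and the masses of the two cells, so an agent can evaluate it locally. A minimal sketch (our own, with the geometric quantities passed in as scalars):

```python
def bnd(cr_dgv, mass_gv, mass_dgv):
    """Bound (10.21): bnd(gV_i, dgV_i) = 2 * cr(dgV_i) * (1 - M_gV / M_dgV).
    It is zero when the guaranteed and dual guaranteed cells carry the same
    mass (perfect information) and grows as the two cells drift apart."""
    return 2.0 * cr_dgv * (1.0 - mass_gv / mass_dgv)

print(bnd(1.0, 1.0, 1.0))  # 0.0: cells coincide
print(bnd(1.0, 0.5, 1.0))  # 1.0: only half of the mass is guaranteed
```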
This gives a natural triggering condition for when agent i needs updated information from its neighbors, which we describe informally as follows:

[Informal description]: The agent decides that up-to-date location information is required if its computed bound in (10.21) is larger than a design parameter ε and the distance to the centroid of its guaranteed cell.

The role of ε is guaranteeing the lack of Zeno behavior: when an agent i gets close to C_{gV_i}, this requires bnd_i to be small. The introduction of the design parameter is critical to ensure Zeno behavior does not occur. Note that each agent can compute the future evolution of bnd_i without updated information about its neighbors. This means that each agent can schedule exactly when the condition above will be satisfied, corresponding to a self-triggered implementation.

The overall algorithm is the result of combining the motion control law and the update decision policy with a procedure to acquire up-to-date information about other agents. We do not enter into the details of the latter here and instead refer the reader to [18]. The self-triggered deployment algorithm is formally presented in Algorithm 10.2.

Algorithm 10.2: Self-triggered deployment algorithm

Agent i ∈ {1, . . . , N} performs:
1: set D = D^i
2: compute L = gV_i(D) and U = dgV_i(D)
3: compute q = C_L and r = bnd(L, U)
(Update decision)
1: if r ≥ max{‖q − p_i‖, ε} then
2:   reset D^i by acquiring updated location information from Voronoi neighbors
3:   set D = D^i
4:   set L = gV(D) and U = dgV(D)
5:   set q = C_L and r = bnd(L, U)
6: end if
(Motion control)
1: if ‖p_i − q‖ > r then
2:   set u_i = v_max unit(q − p_i)
3: else
4:   set u_i = 0
5: end if

One can establish the same convergence guarantees for this algorithm as in the periodic or continuous case. In fact, for ε ∈ [0, diam(S)), the agents' positions starting from any initial network configuration in S^N converge to the set of centroidal Voronoi configurations.

10.5.2 Team-Triggered Deployment Algorithm

Here we describe a team-triggered implementation of the distributed coordination algorithm for optimal deployment. Given our previous discussion, the only new element that we need to specify with respect to the exposition of the self-triggered implementation in Section 10.5.1 is the notion of promises employed by the agents. For each j ∈ {1, . . . , N}, given the deployment controller u_j^* : ∏_{i∈N(j)∪{j}} P_cc(X_i) → U_j and δ_j > 0, the ball-radius control promise generated by agent j for agent i at time t is

U_j^i(t′) = B(u_j^*(X_{N(j)}^j(t)), δ_j) ∩ U_j,  t′ ≥ t,  (10.22)

for some δ_j > 0. This promise is a ball of radius δ_j in the control space U_j centered at the control signal used at time t. Here we consider constant δ_j for simplicity, but one can consider time-varying functions instead, cf. [28]. Agent i stores the information provided by the promise sets in D^i(t) = (X_1^i(t), . . . , X_N^i(t)) ⊂ S^N. The event-triggered information updates, by which agents inform other agents if they decide to break their promises, take care of making sure that this information is always accurate; that is, p_i(t) ∈ X_i^j(t) for all i ∈ {1, . . . , N} and j ∈ N(i). The team-triggered deployment algorithm is formally presented in Algorithm 10.3 and has the same motion control law, update decision policy, and procedure to acquire up-to-date information about other agents as the self-triggered coordination one.

Algorithm 10.3: Team-triggered deployment algorithm

Agent i ∈ {1, . . . , N} performs:
1: set D = D^i
2: compute L = gV_i(D) and U = dgV_i(D)
3: compute q = C_L and r = bnd(L, U)
(Self-triggered update decision)
1: if r ≥ max{‖q − p_i‖, ε} then
2:   reset D^i by acquiring updated location information from Voronoi neighbors
3:   set D = D^i
4:   set L = gV(D) and U = dgV(D)
5:   set q = C_L and r = bnd(L, U)
6: end if
(Event-triggered update decision)
1: if p_i ∉ X_i^j for any j ∈ N(i) then
2:   send updated position information p_i to agent j
3:   send new promise U_i^j to agent j
4: end if
(Motion control)
1: if ‖p_i − q‖ > r then
2:   set u_i = v_max unit(q − p_i)
3: else
4:   set u_i = 0
5: end if

10.5.3 Simulations

Here we provide several simulations to illustrate the performance of the self-triggered and team-triggered deployment algorithms. We consider a network of N = 8 agents, moving in a 4 m × 4 m square, with a maximum velocity v_max = 1 m/s. The density φ is a sum of two Gaussian functions:

φ(x) = 1.2 e^{−‖x − q_1‖²} + e^{−‖x − q_2‖²},

with q_1 = (1, 1.5) and q_2 = (2.5, 3).
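One sweep of the update decision and motion control steps shared by Algorithms 10.2 and 10.3 can be sketched as follows. This is our own schematic: the geometric computations (gV_i, dgV_i, their centroid, and bnd) are abstracted into the inputs q and r:

```python
import math

def agent_step(p_i, q, r, eps, v_max):
    """One pass for agent i, given q = C_L (centroid of the guaranteed cell)
    and r = bnd(L, U). Returns (request_update, u_i)."""
    request_update = r >= max(math.dist(p_i, q), eps)  # update decision
    d = math.dist(p_i, q)
    if d > r:  # motion control: full speed toward the guaranteed centroid
        u_i = tuple(v_max * (qk - pk) / d for pk, qk in zip(p_i, q))
    else:
        u_i = tuple(0.0 for _ in p_i)
    return request_update, u_i

# Far from the centroid with a small bound: keep moving, no update needed.
print(agent_step((0.0, 0.0), (3.0, 4.0), 1.0, 0.1, 1.0))
# -> (False, (0.6, 0.8))
```

Because bnd_i is computable without fresh neighbor data, an agent can also solve for the first time at which request_update becomes True, which is exactly the self-triggered scheduling described above.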
We evaluate the total power consumed by communication among the agents during the execution of each algorithm. To do this, we adopt the following model [30] for quantifying the total power P_i used by agent i ∈ {1, . . . , 8} to communicate, in dBmW power units:

P_i = 10 log_10 ( ∑_{j∈{1,...,N}, j≠i} β 10^{0.1 P_{i→j} + α‖p_i − p_j‖^n} ),  (10.23)

where α > 0 and β > 0 depend on the characteristics of the wireless medium, and P_{i→j} is the power received by j of the signal transmitted by i in units of dBmW. In our simulations, these values are set to 1.

For the promises made among agents in the team-triggered strategy, we use δ_i = 2λv_max in (10.22), where λ ∈ [0, 1] is a parameter that captures the "tightness" of promises. Note that λ = 0 corresponds to exact promises, meaning trajectories can be computed exactly, and λ = 1 corresponds to no promises at all (i.e., recovers the self-triggered algorithm because promises become the entire allowable control set).

Figures 10.3 and 10.4 compare the execution of the periodic, self-triggered, and team-triggered deployment strategies (the latter for two different tightnesses of promises, λ = 0.25 and λ = 0.5) starting from the same initial condition.

FIGURE 10.3
Plot (a) is an execution of the communicate-at-all-times strategy [29], plot (b) is an execution of the self-triggered strategy, and plot (c) is an execution of the team-triggered strategy with λ = 0.5. Black dots correspond to initial positions, and light gray dots correspond to final positions.

FIGURE 10.4
Plot (a) shows the total communication energy E_comm in Joules of executions of the communicate-at-all-times strategy, the self-triggered strategy, and the team-triggered strategy (for two different tightnesses of promises). Plot (b) shows the evolution of the objective function H. All simulations have the same initial condition.
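The power model (10.23) is straightforward to evaluate; the sketch below (our own, with the simulation's unit parameters as defaults) returns agent i's communication power in dBmW:

```python
import math

def comm_power_dbmw(i, positions, alpha=1.0, beta=1.0, n=1.0, p_rx_dbmw=1.0):
    """Model (10.23):
    P_i = 10*log10( sum_{j != i} beta * 10^(0.1*P_{i->j} + alpha*||p_i - p_j||^n) ),
    where alpha, beta are wireless-medium constants and P_{i->j} is the power
    received by j; the simulations set all of these values to 1."""
    total = sum(beta * 10 ** (0.1 * p_rx_dbmw
                              + alpha * math.dist(positions[i], positions[j]) ** n)
                for j in range(len(positions)) if j != i)
    return 10.0 * math.log10(total)

# Two agents 1 m apart, all parameters equal to 1: exponent 0.1 + 1 = 1.1,
# so P_0 = 10 * 1.1 = 11 dBmW (up to floating-point rounding).
print(comm_power_dbmw(0, [(0.0, 0.0), (1.0, 0.0)]))
```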
FIGURE 10.5
Average (a) communication power network consumption P_avg and (b) time to convergence T_con for the team-triggered deployment strategy with varying tightness of promise. Time to convergence is the time to reach 99% of the final convergence value of the objective function. For small λ, the amount of communication energy required is substantially reduced, while the time to convergence only increases slightly.
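The tightness parameter λ enters only through the promise radius δ_i = 2λv_max used in (10.22); a one-line sketch (our own) makes the two endpoints of the trade-off explicit:

```python
def promise_radius(lam, v_max=1.0):
    """Promise tightness delta_i = 2 * lambda * v_max from (10.22).
    lam = 0: exact promises, so neighbors can propagate trajectories exactly;
    lam = 1: the promise ball covers the whole control set ||u|| <= v_max
    (diameter 2*v_max), which recovers the self-triggered algorithm."""
    return 2.0 * lam * v_max

print(promise_radius(0.0))   # 0.0
print(promise_radius(0.25))  # 0.5
print(promise_radius(1.0))   # 2.0
```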
Figure 10.3 shows the algorithm trajectories, while Figure 10.4a shows the total communication energy over time, and Figure 10.4b shows the evolution of the objective function H. These plots reveal the important role that the tightness of promises has on the algorithm execution. For instance, in Figure 10.4, one can see that for λ = 0.25, the communication energy is reduced by half compared to the periodic strategy without compromising the network performance.

In Figure 10.5, we explore what happens for various different tightnesses λ for the team-triggered strategy and compare them against the periodic and the self-triggered strategies. Figure 10.5a shows the average power consumption in terms of communication energy by the entire network for varying levels of λ. Figure 10.5b shows the time it takes each execution to reach 99% of the final convergence value of the objective function H. As we can see from these plots, for small λ > 0, we are able to drastically reduce the amount of required communication while only slightly increasing the time to convergence.

10.6 Conclusions

This chapter has covered the design and analysis of triggered strategies for networked cyber-physical systems, with an emphasis on distributed interactions. At a fundamental level, our exposition aims to trade computation for less sensing, actuation, or communication effort by answering basic questions such as when to take state samples, when to update control signals, or when to transmit information. We have focused on self-triggered approaches to controller design which build on the notion of abstraction of the system behavior to synthesize triggers that rely only on the current information available to the decision maker in scheduling future actions. The role of the notion of abstraction acquires special relevance in the context of networked systems, given the limited access to information about the network state possessed by individual agents. The often conservative nature of self-triggered implementations has motivated us to introduce the team-triggered approach, which allows us to combine the best components of event- and self-triggered control. The basic idea is to have agents cooperate with each other in providing better, more accurate abstractions to each other via promise sets. Such sets can then be used by individual agents to improve the overall network performance. The team-triggered approach opens numerous venues for future research. Among them, we highlight the robustness under disturbances in the dynamics and sensor noise, more general models for individual agents, methods that tailor the generation of agent promises to the specific task at hand, methods for the systematic design of controllers that operate on set-valued information models, analytic guarantees on performance improvements with respect to self-triggered strategies (as observed in Section 10.5.3), the impact of evolving topologies on the generation of agent promises, and the application to the synthesis of distributed strategies to other coordination problems.

Acknowledgments

This research was partially supported by NSF Award CNS-1329619.
Bibliography

[1] M. Velasco, P. Marti, and J. M. Fuertes. The self triggered task model for real-time control systems. In Proceedings of the 24th IEEE Real-Time Systems Symposium, pp. 67–70, Cancun, Mexico, December 3–5, 2003.

[2] R. Subramanian and F. Fekri. Sleep scheduling and lifetime maximization in sensor networks. In Symposium on Information Processing of Sensor Networks, pp. 218–225, Nashville, TN, April 19–21, 2006.

[3] W. P. M. H. Heemels, K. H. Johansson, and P. Tabuada. An introduction to event-triggered and self-triggered control. In IEEE Conference on Decision and Control, pp. 3270–3285, Maui, HI, 2012.

[4] M. Mazo Jr. and P. Tabuada. Decentralized event-triggered control over wireless sensor/actuator networks. IEEE Transactions on Automatic Control, 56(10):2456–2461, 2011.

[5] X. Wang and N. Hovakimyan. L1 adaptive control of event-triggered networked systems. In American Control Conference, pp. 2458–2463, Baltimore, MD, 2010.

[6] M. C. F. Donkers and W. P. M. H. Heemels. Output-based event-triggered control with guaranteed L∞-gain and improved and decentralised event-triggering. IEEE Transactions on Automatic Control, 57(6):1362–1376, 2012.

[7] D. V. Dimarogonas, E. Frazzoli, and K. H. Johansson. Distributed event-triggered control for multi-agent systems. IEEE Transactions on Automatic Control, 57(5):1291–1297, 2012.

[8] G. Shi and K. H. Johansson. Multi-agent robust consensus-part II: Application to event-triggered coordination. In IEEE Conference on Decision and Control, pp. 5738–5743, Orlando, FL, December 2011.

[9] X. Meng and T. Chen. Event based agreement protocols for multi-agent networks. Automatica, 49(7):2125–2132, 2013.

[10] Y. Fan, G. Feng, Y. Wang, and C. Song. Distributed event-triggered control of multi-agent systems with combinational measurements. Automatica, 49(2):671–675, 2013.

[11] K. Kang, J. Yan, and R. R. Bitmead. Cross-estimator design for coordinated systems: Constraints, covariance, and communications resource assignment. Automatica, 44(5):1394–1401, 2008.

[12] P. Wan and M. D. Lemmon. Event-triggered distributed optimization in sensor networks. In Symposium on Information Processing of Sensor Networks, pp. 49–60, San Francisco, CA, April 13–16, 2009.

[13] S. S. Kia, J. Cortés, and S. Martínez. Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication. Automatica, 55:254–264, 2015.

[14] D. Richert and J. Cortés. Distributed linear programming with event-triggered communication. SIAM Journal on Control and Optimization, 2014. Available at http://arxiv.org/abs/1405.0535.

[15] A. Eqtami, D. V. Dimarogonas, and K. J. Kyriakopoulos. Event-triggered strategies for decentralized model predictive controllers. In IFAC World Congress, pp. 10068–10073, Milano, Italy, August 2011.

[16] E. Garcia and P. J. Antsaklis. Model-based event-triggered control for systems with quantization and time-varying network delays. IEEE Transactions on Automatic Control, 58(2):422–434, 2013.

[17] W. P. M. H. Heemels and M. C. F. Donkers. Model-based periodic event-triggered control for linear systems. Automatica, 49(3):698–711, 2013.

[18] C. Nowzari and J. Cortés. Self-triggered coordination of robotic networks for optimal deployment. Automatica, 48(6):1077–1087, 2012.

[19] X. Wang and M. D. Lemmon. Event-triggered broadcasting across distributed networked control systems. In American Control Conference, pp. 3139–3144, Seattle, WA, June 2008.

[20] M. Zhong and C. G. Cassandras. Asynchronous distributed optimization with event-driven communication. IEEE Transactions on Automatic Control, 55(12):2735–2750, 2010.

[21] X. Wang and M. D. Lemmon. Event-triggering in distributed networked control systems. IEEE Transactions on Automatic Control, 56(3):586–601, 2011.

[22] G. S. Seyboth, D. V. Dimarogonas, and K. H. Johansson. Event-based broadcasting for multi-agent average consensus. Automatica, 49(1):245–252, 2013.

[23] M. Mazo Jr., A. Anta, and P. Tabuada. On self-triggered control for linear systems: Guarantees and complexity. In European Control Conference, pp. 3767–3772, Budapest, Hungary, August 2009.
[24] C. Nowzari and J. Cortés. Self-triggered optimal servicing in dynamic environments with acyclic structure. IEEE Transactions on Automatic Control, 58(5):1236–1249, 2013.

[25] F. Bullo, J. Cortés, and S. Martínez. Distributed Control of Robotic Networks. Applied Mathematics Series, Princeton University Press, 2009. http://coordinationbook.info.

[26] E. Garcia, Y. Cao, H. Yu, P. Antsaklis, and D. Casbeer. Decentralised event-triggered cooperative control with limited communication. International Journal of Control, 86(9):1479–1488, 2013.

[27] C. Nowzari and J. Cortés. Zeno-free, distributed event-triggered communication and control for multi-agent average consensus. In American Control Conference, pp. 2148–2153, Portland, OR, 2014.

[28] C. Nowzari and J. Cortés. Team-triggered coordination for real-time control of networked cyberphysical systems. IEEE Transactions on Automatic Control, 2013. Available at http://arxiv.org/abs/1410.2298.

[29] J. Cortés, S. Martínez, T. Karatas, and F. Bullo. Coverage control for mobile sensing networks. IEEE Transactions on Robotics and Automation, 20(2):243–255, 2004.

[30] S. Firouzabadi. Jointly optimal placement and power allocation in wireless networks. Master's thesis, University of Maryland at College Park, 2007.
11
Efficiently Attentive Event-Triggered Systems
Lichun Li
Georgia Institute of Technology
Atlanta, GA, USA

Michael D. Lemmon
University of Notre Dame
Notre Dame, IN, USA
CONTENTS
11.1 Introduction 221
11.2 Preliminaries 222
11.3 Problem Statement 223
11.4 Event-Trigger Design 224
11.5 Quantizer Design 225
11.6 Efficient Attentiveness without Delay 227
11.6.1 Local Input-to-State Stability 227
11.6.2 Efficiently Attentive Intersampling Interval 228
11.6.3 Zeno Behavior Free 229
11.7 Efficient Attentiveness with Delay 230
11.8 An Example: A Rotating Rigid Spacecraft 230
11.9 Conclusion 232
11.10 Appendix 232
11.10.1 Proof of Admissibility of the Sampling and Arrival Sequences 232
11.10.2 Proof of Local ISS 232
11.10.3 Proof of No Zeno Behavior 232
11.10.4 Proof of Efficient Attentiveness 233
Bibliography 233
ABSTRACT Event-triggered systems sample the system's outputs and transmit those outputs to a controller or an actuator when the "novelty" in the sample exceeds a specified threshold. Well-designed event-triggered systems are efficiently attentive, which means that the intersampling intervals get longer as the system state approaches the control system's equilibrium point. When the system regulates its state to remain in the neighborhood of an equilibrium point, efficiently attentive systems end up using fewer computational and communication resources than comparable time-triggered systems. This chapter provides sufficient conditions under which a bandwidth-limited event-triggered system is efficiently attentive. Bandwidth-limited systems transmit sampled data with a finite number of bits, which are then received at the destination with a finite delay. This chapter designs a dynamic quantizer with state-dependent upper bounds on the maximum acceptable delay under which a bandwidth-limited event-triggered system is assured to be efficiently attentive.

11.1 Introduction

Event triggering is a recent approach to sampled-data control in which sampling occurs only if a measure of data novelty exceeds a specified threshold. Early interest in event triggering was driven by simulation results suggesting that constant event-triggering thresholds could greatly increase the average sampling period over periodically sampled systems with comparable performance levels [2,3,14]. Similar experimental results were reported in [11] for state-dependent event triggers enforcing input-to-state stability and in [12] for L2 stability. It is, however, relatively easy to construct

221
222 Event-Based Control and Signal Processing

examples where an event trigger actually decreases the average sampling period of the system. Consider, for example, a cubic system

  ẋ = x³ + u;  x(0) = x₀,  (11.1)

where u = −3x̂ₖ³ for t ∈ [sₖ, sₖ₊₁), where sₖ is the kth consecutive sampling instant, and x̂ₖ = x(sₖ). Let us now consider two different event triggers. The trigger E1 generates a sampling instant sₖ₊₁ when |x(t) − x̂ₖ| = 0.5|x(t)|, and the second trigger E2 generates a sampling instant when |x(t) − x̂ₖ| = 0.5|x⁴(t)|. For both event triggers, the system is locally asymptotically stable for all |x₀| ≤ 0.6. But if one examines the intersampling intervals in the two plots of Figure 11.1, it should be apparent that the intersampling intervals generated by trigger E1 get longer as the system approaches its equilibrium. By contrast, trigger E2 results in intersampling intervals that get shorter as the system approaches the origin. It is obvious that trigger E1 conserves communication resources, while trigger E2 does not. The preceding examples raise an important question: namely, what are the essential characteristics of an event-triggered system that conserve its communication resources?

[Figure 11.1: two plots of intersampling interval (s) versus time (s) over 0–25 s, the upper for trigger E1 and the lower for trigger E2, both without quantization.]

FIGURE 11.1
Intersampling intervals of event-triggered systems (11.1).

The event-triggered system with trigger E1 conserves communication resources, in the sense that as the system state approaches the equilibrium, the intersampling intervals become longer. This is a desirable property of an event-triggered system because one would conjecture that the "interesting" feedback information necessary for system stabilization should be greater when the system's state is perturbed away from the equilibrium point. We call this property efficient attentiveness because the control system becomes more efficient in its use of the channel as it asymptotically approaches its equilibrium point.

The intersampling interval, however, is not necessarily the best way of characterizing the information needed to stabilize an event-triggered system. This interval characterizes the rate at which sampled information is transmitted across a channel, but that information is the sampled state without any quantization error. In reality, the system's communication channel transmits information in discrete packets of a fixed size. This means that the sampled state must also be quantized with a finite number of bits, which are then transmitted as a single packet across the channel. Furthermore, that packet is not instantaneously received at the other end of the channel. Transmitted packets are successfully received and decoded after a finite delay. One may, therefore, characterize the actual bandwidth needed to transmit the sampled information as a stabilizing instantaneous bit-rate, which equals the number of bits used to encode each sampled measurement divided by the maximum delay with which the sampled-data system is known to be asymptotically stable. Therefore, to properly study efficiently attentive systems, one must determine conditions under which the intersampling interval increases, and the stabilizing instantaneous bit-rate decreases, as the system's state asymptotically approaches the equilibrium point.

This chapter provides a sufficient condition for event-triggered systems to be efficiently attentive in the sense that the intersampling interval gets longer, and the system's instantaneous bit-rate gets smaller, as the system state approaches the origin. In particular, we develop a dynamic quantization policy and identify a maximum acceptable transmission delay for which the system is assured to be locally ISS and efficiently attentive. The results consolidate earlier work in [5,6,13]. This chapter is organized as follows. Sections 11.2 and 11.3 present mathematical preliminaries and the problem statement, respectively. Sections 11.4 and 11.5 state how to properly design the threshold function and the quantization map, and the main results are presented in Sections 11.6 and 11.7 for the case without and with delay, respectively. An example to demonstrate the results is provided in Section 11.8, and a summary is given in Section 11.9 to conclude the chapter.

11.2 Preliminaries

Let Rⁿ and R⁺ denote the n-dimensional Euclidean space and the set of nonnegative reals, respectively. The infinity norm in Rⁿ is denoted as ‖·‖. The L-infinity norm of a function x(·): R⁺ → Rⁿ is defined as ‖x‖_L∞ = ess sup_{t≥0} ‖x(t)‖. This function is said to be essentially bounded if ‖x‖_L∞ < ∞.

Let Ω be a domain (open and connected subset) of Rⁿ, and B_r ⊂ Rⁿ be an open ball centered at the origin with
Efficiently Attentive Event-Triggered Systems 223

radius r. We say f(·): Ω → Rⁿ is locally Lipschitz on Ω if each point of Ω has an open neighborhood Ω₀ such that for any x, y ∈ Ω₀, ‖f(x) − f(y)‖ ≤ L₀‖x − y‖ with some constant L₀ ≥ 0.

A function α(·): R⁺ → R⁺ is of class K if it is continuous, strictly increasing, and α(0) = 0. The function α is of class N if it is continuous, increasing, and α(s) = ∞ ⇒ s = ∞. It is of class M if it is continuous, decreasing, and α(s) = 0 ⇒ s = ∞. A function β: R⁺ × R⁺ → R⁺ is of class KL if β(·, t) is of class K for each fixed t ≥ 0, and β(r, t) decreases to 0 as t → ∞ for each fixed r ≥ 0.

Lemma 11.1

Let g: (0, ∞) → R⁺ be a continuous function satisfying lim_{s→0} g(s) < ∞. There must exist a class N function h such that g(s) ≤ h(s) for all s ≥ 0.

PROOF See the proof of Lemma 4.3 in [4].

Consider a dynamical system whose state x: R⁺ → Rⁿ satisfies the differential equation

  ẋ(t) = f(x(t), w(t)),  x(0) = x₀,  (11.2)

in which x₀ ∈ Rⁿ, w: R⁺ → Rᵐ is essentially bounded, and f: Rⁿ × Rᵐ → Rⁿ is locally Lipschitz in x. A point x ∈ Rⁿ is an equilibrium point for system (11.2) if and only if f(x, 0) = 0. The system (11.2) is locally input-to-state stable (ISS) if there exist ρ₁, ρ₂ > 0, γ ∈ K, and β ∈ KL, such that for all ‖x₀‖ ≤ ρ₁ and ‖w‖_L∞ ≤ ρ₂, the system's state trajectory x satisfies ‖x(t)‖ ≤ β(‖x₀‖, t) + γ(‖w‖_L∞) for all t ≥ 0.

This chapter's analysis will make use of the following theorem, which provides a Lyapunov-like condition for system (11.2) to be locally ISS.

Theorem 11.1

Let Ω ⊂ Rⁿ be a domain containing the origin, and V: Ω → R⁺ be continuously differentiable such that

  α₁(‖x‖) ≤ V(x) ≤ α₂(‖x‖),  ∀x ∈ Ω,
  (∂V/∂x) f(x, w) ≤ −α₃(‖x‖) + γ₁(‖w‖_L∞),  ∀x ∈ Ω,

where α₁, α₂, α₃, γ₁ ∈ K. Take r > 0 such that B_r ⊂ Ω, and suppose

  ‖w(t)‖_L∞ < γ₁⁻¹(ε α₃(α₂⁻¹(α₁(r)))),  for some ε ∈ (0, 1).

Then, for every initial state x(t₀) satisfying ‖x(t₀)‖ < α₂⁻¹(α₁(r)), there exist functions β ∈ KL and γ ∈ K such that the solution of (11.2) satisfies ‖x(t)‖ ≤ β(‖x(t₀)‖, t − t₀) + γ(‖w(t)‖_L∞) (i.e., the system (11.2) is locally ISS).

PROOF It can be shown that ‖x‖ ≥ α₃⁻¹(γ₁(‖w‖_L∞)/ε) ⇒ (∂V/∂x) f(x, w) ≤ −(1 − ε)α₃(‖x‖). According to Theorem 4.18 of [4], we have Theorem 11.1.

11.3 Problem Statement

Consider a quantized event-triggered system, as shown in Figure 11.2, in which an event detector and a quantizer are co-located within the system's sensor with direct access to the plant's state, x. The system controller is a separate subsystem whose control output, u, is directly applied to the plant. The sensor samples the system state at discrete time instants, {sₖ}, and quantizes the sampled state so it can be transmitted across a bandwidth-limited communication channel. The plant is an input-to-state system satisfying the following set of differential equations:

  ẋ(t) = f(x(t), u(t), w(t)),  x(0) = x₀,  (11.3)
  u(t) = K(x̂ₖ) ≜ uₖ,  ∀t ∈ [aₖ, aₖ₊₁),  k = 0, 1, …,  (11.4)

where x(·): R⁺ → Rⁿ is the system state, w(·): R⁺ → R^q is an L∞ disturbance with ‖w‖_L∞ ≤ w̄ ≤ c₁‖w‖_L∞ for some constant c₁ ≥ 1, and f: Rⁿ × Rᵐ × R^q → Rⁿ is locally Lipschitz in all three variables with f(0, 0, 0) = 0. The control input, u(·): R⁺ → Rᵐ, is computed by the controller as a function of the quantized sampled states, {x̂ₖ}. This quantized state, x̂ₖ, is a function of the true system state, x(sₖ), at a sampling instant, sₖ, for k = 1, 2, …, ∞. The sampled state, x(sₖ), is quantized so it can be transmitted across the channel with a finite number of bits. The precise nature of the quantization map is discussed below. The sensor transmits the quantized sampled state at the sampling instant sₖ,

[Figure 11.2: block diagram. The sensor contains the event detector and the quantizer; at sample time sₖ it quantizes xₖ = x(sₖ) to x̂ₖ = Q(xₖ) and sends it over the channel, which delivers x̂ₖ to the controller at arrival time aₖ; the controller applies u(t) to the plant, which is driven by the disturbance w(t).]

FIGURE 11.2
Event-triggered control system with quantization.
and that transmitted information is received by the controller at time instant aₖ. The delay associated with this transmission is denoted dₖ = aₖ − sₖ, and we define the kth intersampling interval τₖ = sₖ₊₁ − sₖ. We say the sampling sequence, {sₖ}, and arrival sequence, {aₖ}, are admissible if sₖ ≤ aₖ ≤ sₖ₊₁ for k = 0, 1, …, ∞.

The sampling instants, {sₖ}, can be generated in a number of ways. If the intersampling time is constant, then we obtain a traditional quantized sampled-data system with a so-called time-triggered sampling strategy. An alternative sampling strategy generates a sampling instant sₖ when the difference between the current system state and the past sampled state, eₖ₋₁(t) = x(t) − x̂ₖ₋₁, exceeds a specified threshold. This chapter considers a sampling rule in which the kth consecutive sampling instant, sₖ, is triggered when

  ‖eₖ₋₁(t)‖ ≥ θ(‖x̂ₖ₋₁‖, w̄) ≜ θₖ₋₁,  (11.5)

where eₖ₋₁(t) = x(t) − x̂ₖ₋₁. The sampling strategy described in Equation 11.5 is called a state-dependent event trigger. The trigger is state dependent because it is a function of the last sampled state.

The sampled state information, x(sₖ), must be transmitted across a bandwidth-limited channel. This means that the sampled state must be encoded with a finite number of bits before it is transmitted across the channel. Let Nₖ denote the number of bits used to encode the kth consecutive sampled state, x(sₖ), and assume that these bits are transmitted in a single packet at time sₖ from the sensor and received at time aₖ by the controller. Let x̂ₖ denote the real vector decoded by the controller from the received packet. We assume that the quantization error δₖ = ‖x̂ₖ − x(sₖ)‖ is bounded as

  ‖x̂ₖ − x(sₖ)‖ ≤ Δ(‖x̂ₖ₋₁‖, w̄) ≜ Δₖ.  (11.6)

The bound in (11.6) is quite general in the sense that Δₖ is a function of the system state. This allows one to consider both static and dynamic quantization strategies.

Let us now define the concept of stabilizing instantaneous bit-rate for the proposed event-triggered system. In particular, let the sampled state at time sₖ (k = 0, 1, 2, …, ∞) be quantized with Nₖ bits, and assume that the closed-loop system is asymptotically stable provided the transmitted information is received with a delay dₖ = aₖ − sₖ ≤ Dₖ for all k = 0, 1, 2, …, ∞. Then the stabilizing instantaneous bit-rate is

  rₖ = Nₖ / Dₖ.

We are interested in characterizing the asymptotic properties of the instantaneous bit-rate, rₖ, for the event-triggered system described above.

For event-triggered systems, the quantization error and the stabilizing delay, Dₖ, will generally both be a function of the system state. This means that the stabilizing instantaneous bit-rate is a function of the system state. Since an important motivation for studying event-triggered systems is the claim that they use fewer computational and communication resources than comparable time-triggered systems, it would be valuable to identify conditions under which that claim is actually valid. As shown in the preceding example, it is quite easy to generate event-triggered systems that are not efficient in their use of communication resources, in that the stabilizing instantaneous bit-rate gets larger as one approaches the system's equilibrium point. This motivates the following definition.

DEFINITION 11.1 An event-triggered system is said to be efficiently attentive if there exist functions h₁ ∈ M and h₂ ∈ N, such that

1. The intersampling interval τₖ satisfies τₖ ≥ h₁(‖x̂ₖ₋₁‖) or h₁(‖x̂ₖ‖).
2. The stabilizing instantaneous bit-rate rₖ satisfies rₖ ≤ h₂(‖x̂ₖ₋₁‖) or h₂(‖x̂ₖ‖).

To be efficiently attentive, therefore, requires the intersampling interval to be increasing and the instantaneous bit-rate to be decreasing as the system state approaches the equilibrium point. This is because with a bandwidth-limited channel, one wants the system to access the channel less often as it gets closer to its equilibrium point, and one also wants it to consume fewer channel resources for each access. If one can guarantee that the system is efficiently attentive, then it will certainly be more efficient in its use of channel resources than comparable time-triggered systems, and one will have established formal conditions under which the claim that event triggering is "more efficient" in its use of system resources can be verified.

11.4 Event-Trigger Design

The event-triggering function, θ(‖x̂ₖ₋₁‖, w̄), in Equation 11.5 is chosen to render the closed-loop system in Equations 11.3 and 11.4 locally ISS. Such a triggering function exists when the following assumption is satisfied. This assumption asserts that there exists a state-feedback controller K for which the system is locally ISS with respect to the state-feedback error e(t) = x̂ₖ − x(t) for all t and the external disturbance w.

ASSUMPTION 11.1 Let Ω, Ωₑ ⊂ Rⁿ and Ω_w ⊂ R^q be domains containing the origin, and let e(t), w(t) be essentially bounded signals. For the system

  ẋ(t) = f(x(t), K(x(t) + e(t)), w(t)),  (11.7)
there exists a continuously differentiable function V: Ω → R satisfying

  α₁(‖x‖) ≤ V(x) ≤ α₂(‖x‖),  ∀x ∈ Ω,  (11.8)
  (∂V/∂x) f(x, w) ≤ −α₃(‖x‖) + γ₁(‖e‖) + γ₂(‖w‖),  (11.9)
    ∀x ∈ Ω, e ∈ Ωₑ, w ∈ Ω_w,  (11.10)

where α₁, α₂, α₃, γ₁, γ₂ ∈ K.

Let e(t) = x̂ₖ − x(t) denote the gap between the current state, x(t), and the last sampled state, x̂ₖ. Now assume that the gap satisfies

  ‖e(t)‖ ≤ ξ(‖x(t)‖) ≜ γ₁⁻¹[c α₃(‖x(t)‖) + γ₃(w̄)],  (11.11)

for some c ∈ (0, 1) and any γ₃ ∈ K. Inserting this relation into Equation 11.9 yields

  (∂V/∂x) f(x, w) ≤ −(1 − c)α₃(‖x‖) + γ(w̄),

where γ(w̄) = γ₂(w̄) + γ₃(w̄) is class K. This relation shows that the sampled-data system is locally ISS with respect to the external disturbance w, provided the gap e(t) satisfies the inequality in (11.11). One obvious way of enforcing this assumption is to require that the system sample the state when (11.11) is about to be violated. Inequality (11.11), therefore, represents a state-dependent event-triggering condition whose satisfaction assures the closed-loop system is locally ISS with respect to the input disturbances, w.

The event trigger in Equation 11.11 is a function of the system state x(t) at time instant t. In actual implementations, it is more convenient to make this threshold dependent only on the past sampled state, x̂ₖ. This is particularly important when we consider the impact that transmission delays and quantization have on the performance of the event-triggered system. In order to obtain practical event triggers that are only a function of the past sampled state, x̂ₖ, one must assume that the event trigger ξ(s, t) is locally Lipschitz with respect to the first argument. This observation leads to Assumption 11.2.

ASSUMPTION 11.2 Let B_r̄ be the smallest ball including Ω. The function ξ(s, t) defined in (11.11) is locally Lipschitz with respect to its first argument s ∈ (0, r̄). Let Lₖ^ξ be the Lipschitz constant of ξ(s, t) with respect to s during the time interval [sₖ, aₖ₊₁].

This assumption requires that the function ξ(‖x(t)‖, w̄) does not change too quickly with respect to ‖x(t)‖ in a neighborhood of the origin. Examples of functions that satisfy Assumption 11.2 are polynomial functions with degree greater than one.

Let us then define a function ξₖ as

  ξₖ(s, t) = ξ(s, t) / (Lₖ^ξ + 1),  (11.12)

where ξ is defined in Equation 11.11. Lemma 11.2 establishes the relationship between the thresholds ξₖ(s, t) and ξ(s, t).

Lemma 11.2

With Assumption 11.2,

  ‖eₖ(t)‖ ≤ ξₖ(‖x̂ₖ‖, w̄) ⇒ ‖eₖ(t)‖ ≤ ξ(‖x(t)‖, w̄),  ∀t ∈ [sₖ, aₖ₊₁).

PROOF It is easy to see that ξ(s, t) is strictly increasing with respect to s. Under Assumption 11.2, we have

  ‖eₖ(t)‖ ≤ ξ(‖x̂ₖ‖, w̄) / (Lₖ^ξ + 1)
    ≤ ξ(‖x(t)‖ + ‖eₖ(t)‖, w̄) / (Lₖ^ξ + 1)
    ≤ ξ(‖x(t)‖, w̄) / (Lₖ^ξ + 1) + (Lₖ^ξ / (Lₖ^ξ + 1)) ‖eₖ(t)‖
  ⇒ ‖eₖ(t)‖ ≤ ξ(‖x(t)‖, w̄).

The second inequality holds because ‖x̂ₖ‖ = ‖x(t) − eₖ(t)‖ ≤ ‖x(t)‖ + ‖eₖ(t)‖, and the third inequality holds because ξ(s, t) is locally Lipschitz over s with Lipschitz constant Lₖ^ξ.

With Assumption 11.2, one can then define an event trigger θ(·, ·) that is functionally dependent on x̂ₖ and still assures the closed-loop system is locally ISS. This event trigger is

  θ(‖x̂ₖ‖, w̄) ≜ ρ_θ ξₖ(‖x̂ₖ‖, w̄) = (ρ_θ / (1 + Lₖ^ξ)) ξ(‖x̂ₖ‖, w̄) ≜ θₖ,  (11.13)

where ρ_θ is a constant in (0, 1), and the functions ξ and ξₖ are defined in (11.11) and (11.12), respectively. Throughout the remainder of this chapter, the event trigger in Equation 11.13 will be used.

11.5 Quantizer Design

The event trigger derived in the preceding section guarantees that the closed-loop system is locally ISS provided the infinite-precision sampled state is used at the
controller. When this information is transmitted across a bandwidth-limited channel, it must be quantized with a finite number of bits, so there exists additional error between the current state and the last sampled state.

One concern regarding quantization is that it may cause Zeno behavior, in which the system broadcasts an infinite number of times in a finite time interval. This can occur in systems where the quantization level is larger than the state-dependent threshold function. An example of this behavior is seen in Figure 11.3, where a static uniform quantizer with a quantization level of 0.15 is used on the event-triggered example in Equation 11.1. The intersampling intervals for this simulation are shown in Figure 11.3 using event trigger E1, in which Zeno behavior occurs at 21 s into the simulation.

[Figure 11.3: plot of sampling interval versus time (s); the intervals collapse toward zero near t = 21 s.]

FIGURE 11.3
Event-triggered control system from (11.1) using a static uniform quantization level of 0.15. Note the occurrence of Zeno behavior 21 s into the simulation.

To avoid the Zeno behavior shown in Figure 11.3, one needs to guarantee that the quantization error is always less than the event trigger in Equation 11.11. The quantization level Δₖ for the kth sample is therefore defined so that

  δₖ ≤ Δₖ ≜ ρ_Δ θ(|‖x̂ₖ₋₁‖ − θₖ₋₁|, w̄),  (11.14)

where ρ_Δ ∈ (0, 1) and θ(s, t) is the threshold given in Equation 11.13. Notice that Δₖ depends on x̂ₖ₋₁ instead of x̂ₖ. This is because one needs Δₖ to perform the kth quantization, and x̂ₖ, of course, is not yet available.

With quantization level Δₖ, let us uniformly quantize the uncertainty set {x: ‖x − x̂ₖ₋₁‖ = θₖ₋₁} of x(sₖ). The uncertainty set of x(sₖ) is the surface of, instead of all the area contained in, the hypercube centered at x̂ₖ₋₁ with edge length 2θₖ₋₁. When the event ‖eₖ₋₁(t)‖ ≥ θₖ₋₁ is triggered at sₖ, the state must satisfy ‖eₖ₋₁(sₖ)‖ = ‖x(sₖ) − x̂ₖ₋₁‖ = θₖ₋₁.

Given the uncertainty set of x(sₖ) to be {x: ‖x − x̂ₖ₋₁‖ = θₖ₋₁}, and the quantization level Δₖ as in (11.14), one denotes the quantizer as a map Q: Rⁿ → Rⁿ. Take a three-dimensional system as an example (see Figure 11.4). We first determine which facet x(sₖ) lies in, then uniformly divide each dimension of that facet into Jₖ = ⌈θₖ₋₁/Δₖ⌉ mutually disjoint cells, and finally quantize x(sₖ) as the center of the cell that contains x(sₖ). Based on this idea, the quantization map Q takes the form

  x̂ₖⁱ = Qⁱ(x) = xⁱ(sₖ),  if i = Iₖ;
  x̂ₖⁱ = Qⁱ(x) = ⌊(xⁱ(sₖ) − (x̂ₖ₋₁ⁱ − θₖ₋₁)) / (2θₖ₋₁/Jₖ)⌋ (2θₖ₋₁/Jₖ) + (x̂ₖ₋₁ⁱ − θₖ₋₁) + θₖ₋₁/Jₖ,  otherwise,  (11.15)

where x̂ₖⁱ, Qⁱ, and xⁱ indicate the ith dimension of the corresponding vectors, and Iₖ is the smallest index i such that |xⁱ(sₖ) − x̂ₖ₋₁ⁱ| = θₖ₋₁.

[Figure 11.4: a facet of the hypercube of edge length 2θₖ₋₁ centered at x̂ₖ₋₁; each dimension of the facet is divided into Jₖ cells of width 2θₖ₋₁/Jₖ, and x(sₖ) is quantized to the center of its cell.]

FIGURE 11.4
Quantization map.

The quantization map Q designed in (11.15) guarantees that the infinity norm of the quantization error δₖ = ‖x(sₖ) − x̂ₖ‖ = ‖eₖ(sₖ)‖ is always less than the threshold function θₖ for any possible state trajectory. This assertion is formally stated and proven in the following lemma.

Lemma 11.3

δₖ < θₖ, for all possible state trajectories.

PROOF We first show that δₖ ≤ Δₖ, and then show that Δₖ < θₖ.

From Equation 11.15, we see that δₖ ≤ θₖ₋₁/Jₖ = θₖ₋₁/⌈θₖ₋₁/Δₖ⌉ ≤ Δₖ.

Since we quantize the uncertainty set {x: ‖x − x̂ₖ₋₁‖ = θₖ₋₁}, x̂ₖ satisfies ‖x̂ₖ − x̂ₖ₋₁‖ = θₖ₋₁; hence, it is true that ‖x̂ₖ‖ ≥ |‖x̂ₖ₋₁‖ − θₖ₋₁|. According to Equation 11.14, we have

  Δₖ = ρ_Δ θ(|‖x̂ₖ₋₁‖ − θₖ₋₁|, w̄)
    = ρ_Δ (ρ_θ / (1 + Lₖ^ξ)) ξ(|‖x̂ₖ₋₁‖ − θₖ₋₁|, w̄)
    ≤ ρ_Δ (ρ_θ / (1 + Lₖ^ξ)) ξ(‖x̂ₖ‖, w̄) = ρ_Δ θₖ < θₖ.

The second and the third equalities are derived from Equation 11.13, and the first inequality holds because
ξ(·, ·) is increasing in its first argument according to Equation 11.11, and the last inequality holds because ρ_Δ < 1.

The quantization map Q in (11.15) generates Nₖ bits at the kth sampling. Since an n-dimensional box has 2n facets, and each facet has n − 1 dimensions, the whole surface is divided into 2n⌈θₖ₋₁/Δₖ⌉ⁿ⁻¹ parts. Therefore, the number of bits Nₖ transmitted at the kth sampling is

  Nₖ = ⌈log₂(2n ⌈θₖ₋₁/Δₖ⌉ⁿ⁻¹)⌉.  (11.16)

11.6 Efficient Attentiveness without Delay

While overly simplistic, it is easier to establish sufficient conditions for efficient attentiveness when transmissions are received without delay. The methods used to derive these conditions are then easily extended to handle the case when there are communication delays. This section, therefore, derives sufficient conditions for an event-triggered system to be efficiently attentive when there are zero transmission delays.

Using the approach in this section, the next section will study the case when there is delay, provide an acceptable delay, and analyze the required instantaneous bit-rate.

This section is organized as follows: Section 11.6.1 establishes that the proposed event-triggered system is locally ISS. Efficient attentiveness is then studied in Section 11.6.2. Zeno behavior is discussed in Section 11.6.3.

Assumption 11.3 is essential in guaranteeing the efficient attentiveness of an event-triggered system.

ASSUMPTION 11.3 There exists a continuous function f̄_c(·) such that

  ‖f(x, K(x), 0)‖ ≤ f̄_c(‖x‖),  ∀x ∈ Ω, and  (11.17)
  lim_{s→0} f̄_c(s)/θ(s, 0) < ∞.  (11.18)

Assumption 11.3 requires that the threshold function θ(·, 0) decrease more slowly than f̄_c(·), an upper bound on the dynamics of the closed-loop system without disturbances. Take the system (11.1) as an example. The closed-loop dynamic is cubic. Event trigger E1 has a linear threshold function, which decreases more slowly than the closed-loop dynamic as x gets close to 0; hence, the system state takes more and more time to hit the threshold function as the state gets closer and closer to the origin. Event trigger E2 has a quartic threshold function that decreases faster than the closed-loop dynamic as x gets close to 0 and, hence, is triggered more and more often as the state gets closer and closer to the origin.

Suppose f̄_c(·) is a polynomial function whose smallest degree of any term is p. To be efficiently attentive, the threshold function should have its smallest degree of any term no larger than p. For example, if the closed-loop dynamic system is linear, then the threshold function needs to have at least one term that is linear or constant.

11.6.1 Local Input-to-State Stability

Input-to-state stability and local input-to-state stability of event-triggered systems without delay have already been studied [1,7,9–11] without Assumption 11.3. So Assumption 11.3 is not essential to establish stability. Although many works have discussed the stability issue in event-triggered systems, to be self-contained, we still give Lemma 11.4 followed by a brief proof.

Lemma 11.4

Suppose there is no delay. Under Assumptions 11.1 and 11.2, the event-triggered system (11.3–11.5) with the threshold function (11.13) and the dynamic quantization level (11.14) is locally ISS.

PROOF Without delay, we have sₖ = aₖ for all k = 0, 1, ….

Applying Equation 11.4 to Equation 11.3 and comparing the result with Equation 11.7, we have, for the event-triggered system (11.3–11.5),

  e(t) = eₖ(t),  ∀t ∈ [aₖ, aₖ₊₁),  (11.19)

where eₖ(t), according to the event trigger (11.5) and the threshold function (11.13), satisfies

  ‖eₖ(t)‖ ≤ θₖ = ρ_θ ξₖ(‖x̂ₖ‖, w̄),  ∀t ∈ [aₖ, aₖ₊₁).

According to Lemma 11.2,

  ‖eₖ(t)‖ ≤ ξ(‖x(t)‖, w̄),  ∀t ∈ [aₖ, aₖ₊₁).

Therefore, according to Equation 11.19, for all t ≥ 0, along the state trajectory of the system of Equations 11.3 through 11.5,

  ‖e(t)‖ ≤ ξ(‖x(t)‖, w̄),

where ξ(·, ·) is defined as in (11.11).
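The implication of Lemma 11.2 that was just invoked can be spot-checked numerically. The sketch below is illustrative only and is not from the chapter: it uses a scalar instance with ξ(s, w̄) = s² in the disturbance-free case, a Lipschitz bound L^ξ = 2 valid on (0, 1], and a grid of my own choosing; it confirms that any gap with ‖e‖ ≤ ξ(‖x̂‖)/(L^ξ + 1), as allowed by (11.12), also satisfies ‖e‖ ≤ ξ(‖x‖) with x = x̂ − e.

```python
L_xi = 2.0                               # Lipschitz bound of xi(s) = s^2 on (0, 1]

def xi(s):
    # scalar stand-in for the threshold shape of (11.11), disturbance term omitted
    return s * s

def xi_k(s):
    # scaled threshold of Equation (11.12)
    return xi(s) / (L_xi + 1.0)

# Lemma 11.2 on a grid: ||e|| <= xi_k(||xhat||)  implies  ||e|| <= xi(||x||).
lemma_11_2_ok = all(
    abs(e) <= xi(abs(xhat - e)) + 1e-12  # x = xhat - e, since e = xhat - x
    for xhat in [i / 100.0 for i in range(1, 101)]
    for e in (-xi_k(xhat), xi_k(xhat))   # extreme gaps permitted by (11.12)
)
```

Only the extreme admissible gaps are tested: for positive gaps the extreme value is the binding case, and negative gaps only enlarge the right-hand side.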
Applying the bound ‖e(t)‖ ≤ ξ(‖x(t)‖, w̄) to Equation 11.9, we have

  V̇ ≤ −α₃(‖x‖) + γ₁(ξ(‖x‖, w̄)) + γ₂(‖w‖),  ∀x ∈ Ω, w ∈ Ω_w
    = −(1 − c)α₃(‖x‖) + γ₂(‖w‖) + γ₃(w̄),  ∀x ∈ Ω, w ∈ Ω_w
    ≤ −(1 − c)α₃(‖x‖) + γ₂(‖w‖_L∞) + γ₃(c₁‖w‖_L∞),  ∀x ∈ Ω,

where c is the same c as defined in Equation 11.11, and c₁ is a constant that satisfies w̄ ≤ c₁‖w‖_L∞. Together with Equation 11.8, according to Theorem 11.1, the event-triggered system (11.3–11.5) with the threshold function (11.13) and the quantization level (11.14) is locally ISS.

11.6.2 Efficiently Attentive Intersampling Interval

The preceding section focused on the performance level (local ISS) of the event-triggered system (11.3–11.5). This section concentrates on the communication resources needed to assure that performance level. Since the delay is supposed to be zero in this section, we only study the intersampling interval.

Let us first study the dynamic behavior of the gap eₖ(t) = x(t) − x̂ₖ. Let ũ ∈ Rᵐ be a constant control input applied to the system (11.3–11.5) during the interval [t₁, t_l), for some t₁ and t_l satisfying sₖ ≤ t₁ < t_l. The gap eₖ(t) satisfies

  ėₖ(t) = f(x̂ₖ + eₖ(t), ũ, w(t)),  ∀t ∈ [t₁, t_l).  (11.20)

Next, we would like to analyze the dynamic behavior of ‖eₖ(t)‖. Although ‖eₖ(t)‖ may not be differentiable at every time instant during [t₁, t_l], it is continuous and piecewise differentiable. Let eₖⁱ be the ith element of eₖ. There must exist a sequence of time instants t₁ < t₂ < ⋯ < t_{l−1} < t_l such that for each small interval [t_ℓ, t_{ℓ+1}), ‖eₖ(t)‖ = |eₖⁱ(t)| and eₖⁱ(t) keeps the same sign for some i = 1, …, n. The infinity norm ‖eₖ‖ of eₖ is differentiable during this interval [t_ℓ, t_{ℓ+1}] and satisfies

  d‖eₖ(t)‖/dt = d|eₖⁱ(t)|/dt ≤ |ėₖⁱ(t)| = |fⁱ(x̂ₖ + eₖ(t), ũ, w(t))|
    ≤ ‖f(x̂ₖ + eₖ(t), ũ, w(t))‖ ≤ f̄(x̂ₖ, ũ, w̄) + Lₖ^x ‖eₖ(t)‖,  (11.21)

where

  f̄(x̂ₖ, ũ, w̄) = ‖f(x̂ₖ, ũ, 0)‖ + Lₖ^w w̄.  (11.22)

Let Lₖ^x and Lₖ^w be the Lipschitz constants of f with respect to x and w, respectively, during the interval [sₖ, aₖ₊₁]. The first inequality holds because d|eₖⁱ(t)|/dt = d√(eₖⁱ(t)²)/dt = (eₖⁱ(t)/|eₖⁱ(t)|) ėₖⁱ(t) ≤ |ėₖⁱ(t)|, and the last inequality holds because f(·, ·, ·) is locally Lipschitz in its first argument with Lipschitz constant Lₖ^x during the time interval [sₖ, aₖ₊₁).

According to the comparison principle, for all t ∈ [t_ℓ, t_{ℓ+1}],

  ‖eₖ(t)‖ ≤ (f̄(x̂ₖ, ũ, w̄)/Lₖ^x)(exp(Lₖ^x(t − t_ℓ)) − 1) + ‖eₖ(t_ℓ)‖ exp(Lₖ^x(t − t_ℓ)).  (11.23)

Since the gap eₖ(t) is continuous (because all the inputs of (11.20) are bounded), so is its infinity norm ‖eₖ(t)‖. Therefore, the upper bound on ‖eₖ(t_{ℓ+1})‖ computed from (11.23) can be used as the upper bound on the initial state of ‖eₖ(t)‖ for the next interval [t_{ℓ+1}, t_{ℓ+2}], and for all t ∈ [t_{ℓ+1}, t_{ℓ+2}], ‖eₖ(t)‖ satisfies

  ‖eₖ(t)‖ ≤ (f̄(x̂ₖ, ũ, w̄)/Lₖ^x)(exp(Lₖ^x(t − t_ℓ)) − 1) + ‖eₖ(t_ℓ)‖ exp(Lₖ^x(t − t_ℓ)).

By mathematical induction, it is easy to show that for all t ∈ [t₁, t_l],

  ‖eₖ(t)‖ ≤ (f̄(x̂ₖ, ũ, w̄)/Lₖ^x)(exp(Lₖ^x(t − t₁)) − 1) + ‖eₖ(t₁)‖ exp(Lₖ^x(t − t₁)).  (11.24)

With Equation 11.24, we are ready to find a lower bound on the intersampling interval of the event-triggered system (11.3–11.5).

Lemma 11.5

Suppose there is no delay. Under Assumptions 11.1 and 11.2, the event-triggered system (11.3–11.5) with the threshold function (11.13) and quantization level (11.14) has its intersampling interval τₖ satisfying

  τₖ ≥ (1/Lₖ^x) ln(1 + Lₖ^x(θₖ − Δₖ)/(f̄(x̂ₖ, uₖ, w̄) + Lₖ^x Δₖ)),  ∀k = 1, 2, ….  (11.25)

PROOF If there is no delay, the control input during the interval [sₖ, sₖ₊₁) is a constant and equals uₖ. According to Equation 11.24, during the interval [sₖ, sₖ₊₁), ‖eₖ(sₖ₊₁)‖ satisfies

  ‖eₖ(sₖ₊₁)‖ ≤ (f̄(x̂ₖ, uₖ, w̄)/Lₖ^x)(exp(Lₖ^x τₖ) − 1) + ‖eₖ(sₖ)‖ exp(Lₖ^x τₖ)
    ≤ (f̄(x̂ₖ, uₖ, w̄)/Lₖ^x)(exp(Lₖ^x τₖ) − 1) + Δₖ exp(Lₖ^x τₖ),

where the second inequality holds because ‖eₖ(sₖ)‖ is the quantization error of the kth quantized state, and hence satisfies ‖eₖ(sₖ)‖ ≤ Δₖ.
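The comparison-principle bound (11.24) used here can be checked by simulation. The sketch below is illustrative only and is not from the chapter: it takes the disturbance-free cubic example (11.1) with a freshly sampled, unquantized state (so the gap starts at zero), and the horizon, step size, and region bound μ are my own choices; it confirms that the simulated gap |x(t) − x̂| stays below (f̄/L^x)(exp(L^x(t − t₁)) − 1).

```python
import math

xhat, mu = 0.5, 0.6                   # sampled state; the run stays inside |x| <= mu
u = -3.0 * xhat ** 3                  # held control input of example (11.1)
f_bar = abs(xhat ** 3 + u)            # ||f(xhat, u, 0)||, i.e., (11.22) with w = 0
L_x = 3.0 * mu ** 2                   # Lipschitz constant of x^3 on |x| <= mu

dt, x = 1e-4, xhat
bound_holds = True
for step in range(1, 10001):          # integrate one second with forward Euler
    x += dt * (x ** 3 + u)            # closed-loop dynamics with held input
    t = step * dt
    bound = f_bar / L_x * (math.exp(L_x * t) - 1.0)   # right-hand side of (11.24)
    bound_holds = bound_holds and abs(x - xhat) <= bound + 1e-9
```

The margin between the simulated gap and the bound widens with time, which is consistent with (11.21) over-approximating the growth rate of ‖eₖ(t)‖.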
Efficiently Attentive Event-Triggered Systems 229

According to the event trigger (11.5), the (k + 1)th sampling occurs when ‖ek(t)‖ ≥ θk. Therefore, we have

\[
\frac{\bar f(\hat x_k, u_k, \bar w)}{L_k^x}\left(e^{L_k^x \tau_k} - 1\right) + \Delta_k\, e^{L_k^x \tau_k} \ge \theta_k
\;\Rightarrow\;
\tau_k \ge \frac{1}{L_k^x}\,\ln\!\left(1 + \frac{L_k^x(\theta_k - \Delta_k)}{\bar f(\hat x_k, u_k, \bar w) + L_k^x \Delta_k}\right).
\]

REMARK 11.1 The intersampling intervals are strictly positive. From Lemma 11.3, we have Δk < θk. According to Equation 11.25, it is easy to conclude that the intersampling interval is strictly positive.

The lower bound (11.25) on τk indicates that the behavior of τk is strongly related with the ratio of the closed-loop dynamic f̄(x̂k, uk, w̄) to the threshold function θk. This relation becomes obvious if we ignore the quantization level Δk. Assuming Δk = 0, Equation 11.25 is simplified as

\[
\tau_k \ge \frac{1}{L_k^x}\,\ln\!\left(1 + \frac{L_k^x}{\bar f(\hat x_k, u_k, \bar w)/\theta_k}\right), \quad \forall k = 1, 2, \ldots.
\]

If we want the lower bound to be decreasing with respect to the state, then we should require the ratio f̄/θk to be increasing with respect to the state. But this requirement may be hard to meet during the design process. Hence, Assumption 11.3 only places a requirement on the asymptotic behavior of the ratio f̄c/θk, which assures the existence of an increasing function bounding the ratio from above, according to Lemma 11.1.

Lemma 11.6

Suppose there is no delay. On Assumptions 11.1 through 11.3, the event-triggered system (11.3–11.5) with threshold function (11.13) and quantization level (11.14) has efficiently attentive intersampling intervals—that is, there exists a class M function h(·) such that

\[
\tau_k \ge h(\|\hat x_k\|).
\]

PROOF First, we derive a more conservative lower bound on τk which is in a simpler form:

\[
\tau_k \ge \frac{1}{L_k^x}\,\ln\!\left(1 + \frac{L_k^x(\theta_k - \Delta_k)}{\bar f(\hat x_k, u_k, \bar w) + L_k^x \Delta_k}\right)
\ge \frac{\theta_k - \Delta_k}{\bar f(\hat x_k, u_k, \bar w) + L_k^x \theta_k}
= \frac{1 - \Delta_k/\theta_k}{\bar f(\hat x_k, u_k, \bar w)/\theta_k + L_k^x}
\ge \frac{1 - \rho_\Delta}{\bar f(\hat x_k, u_k, \bar w)/\theta_k + L_k^x}
\ge \frac{1 - \rho_\Delta}{(\bar f_c(\hat x_k) + L_k^w \bar w)/\theta_k + L_k^x}.
\]

The second inequality holds because ln(1 + x) ≥ x/(1 + x); the third inequality is derived from Lemma 11.3; and the last inequality is from Equation 11.17.

Next, we find that there are constants L̄w and L̄x such that Lkw ≤ L̄w and Lkx ≤ L̄x. Since the system is locally ISS, there must exist a constant μ such that ‖x(t)‖ ≤ μ. Let L̄w and L̄x be the Lipschitz constants of f over w and x for all ‖w‖ ≤ w̄ and ‖x‖ ≤ μ, respectively. We have Lkw ≤ L̄w and Lkx ≤ L̄x, and the intersampling interval satisfies

\[
\tau_k \ge \frac{1 - \rho_\Delta}{(\bar f_c(\hat x_k) + \bar L^w \bar w)/\theta_k + \bar L^x}.
\]

Finally, Assumption 11.3 implies that

\[
\lim_{\|\hat x_k\| \to 0} \frac{\bar f_c(\hat x_k) + \bar L^w \bar w}{\theta_k} < \infty.
\]

According to Lemma 11.1, there exists a class N function h1(·) such that

\[
\frac{\bar f_c(\hat x_k) + \bar L^w \bar w}{\theta_k} \le h_1(\|\hat x_k\|).
\]

Therefore, the intersampling interval satisfies

\[
\tau_k \ge \frac{1 - \rho_\Delta}{h_1(\|\hat x_k\|) + \bar L^x} \doteq h(\|\hat x_k\|).
\]

Since h1(‖x̂k‖) is of class N, h(‖x̂k‖) is a class M function.

11.6.3 Zeno Behavior Free

To avoid Zeno behavior, on one hand, we need to guarantee that immediately after each sampling, the event (11.5) is not triggered, that is, ‖ek(sk)‖ < θk for all k. On the other hand, we need to make sure that as the system state approaches the origin, the intersampling interval does not approach 0. The first requirement actually requires that the quantization level be smaller than the threshold for the next sampling, which is proved in Lemmas 11.3 and 11.6. The second requirement is fulfilled automatically if efficient attentiveness is guaranteed. Therefore, under the same conditions as Lemma 11.6, we show the event-triggered system is Zeno behavior free.

Corollary 11.1

Suppose there is no delay. On Assumptions 11.1 through 11.3, the event-triggered system (11.3–11.5) with threshold function (11.13) and quantization level (11.14) is Zeno behavior free, that is, there exists a constant τ0 > 0 such that τk ≥ τ0 for all k = 1, 2, . . . .

PROOF To show there is no Zeno behavior, we need to show that there exists a strictly positive constant that is a lower bound on τk for all k = 1, 2, . . . . According to Lemma 11.4, the system state x(t) must be bounded, and so is x̂k. Thus, there must exist a finite constant μ > 0 such that ‖x̂‖ ≤ μ. According to Lemma 11.6, τk ≥ h(‖x̂k‖), where h(·) is a class M function (continuous, decreasing, h(s) = 0 ⇒ s = ∞). Therefore, τk ≥ h(‖x̂k‖) ≥ h(μ) > 0. The second inequality holds because h(·) is a decreasing function, and the third inequality holds because h(s) > 0 for all finite s.

11.7 Efficient Attentiveness with Delay

Let us now assume there is a positive transmission delay. With delay, the arguments used to establish efficient attentiveness are essentially unchanged. The impact of the delay is mainly on the performance level. In order to preserve local ISS, we need to bound the delay. The maximum acceptable delay is provided in Theorem 11.2. The proof of the main theorem, though more complicated, is based on the same ideas that were used when there is no delay. Interested readers can check the Appendix for the proof.

Theorem 11.2

Suppose the delay dk for the kth sampling satisfies

\[
d_k \le \min\{\bar d_k, T_k\} \doteq D_k,
\tag{11.26}
\]

where

\[
\bar d_k = \frac{1}{L_{k-1}^x}\,\ln\!\left(1 + \frac{L_{k-1}^x\left(\xi_{k-1}(\|\hat x_{k-1}\|, \bar w) - \theta_{k-1}\right)}{\bar f(\hat x_{k-1}, u_{k-1}, \bar w) + L_{k-1}^x\,\theta_{k-1}}\right),
\qquad
T_k = \frac{1}{L_k^x}\,\ln\!\left(1 + \frac{L_k^x(\theta_k - \Delta_k)}{\bar f(\hat x_k, u_{k-1}, \bar w) + L_k^x \Delta_k}\right).
\]

The sampling and arrival sequences are admissible, that is, dk ≤ τk for all k = 0, 1, · · · . Moreover, under Assumptions 11.1 through 11.3, the event-triggered system (11.3–11.5) with the threshold function (11.13) and the dynamic quantization level (11.14) is locally ISS, Zeno behavior free, and efficiently attentive.

REMARK 11.2 The acceptable delay Dk is strictly positive for all k = 0, 1, · · · . It is easy to see from (11.13) that θk−1 < ξk−1(‖x̂k−1‖, w̄); hence, d̄k > 0. Since Δk is designed to guarantee Δk < θk as discussed in Lemma 11.3, we have Tk > 0. Therefore, the acceptable delay Dk = min{d̄k, Tk} is strictly positive.

The acceptable delay defined in (11.26) is state dependent, and is long if the system state is close to the origin. Most prior work studied constant or constant bounded delay in event-triggered systems. Theorem 11.2 suggests that the acceptable delay is state dependent and hence time varying. Moreover, from the proof of efficient attentiveness, we see that once the system is efficiently attentive, the acceptable delay is also "efficiently attentive"; that is, there exists a class M function bounding the acceptable delay from below. Therefore, the acceptable delay can be long if the system state is close to the origin.

Efficiently attentive event-triggered systems can share their bandwidth with other non-real-time transmission tasks. To guarantee local ISS, the communication channel should have larger bandwidth than the required instantaneous bit-rate rk for any x̂k, x̂k−1 ∈ Ω, where Ω is a domain defined in Assumption 11.1. Although the required channel capacity may be large if the normal operation region Ω is large, because of efficient attentiveness, the event-triggered system (11.3–11.5) may use only a portion of the channel bandwidth in an infrequent way when the system state is close to its equilibrium, and hence allows other non-real-time transmission tasks to use the channel.

11.8 An Example: A Rotating Rigid Spacecraft

We apply the previous technique to a rotating rigid spacecraft whose model is borrowed from [8]. Euler equations for a rotating rigid spacecraft are given by

\[
\dot x_1 = -x_2 x_3 + u_1 + w_1, \qquad
\dot x_2 = x_1 x_3 + u_2 + w_2, \qquad
\dot x_3 = x_1 x_2 + u_3 + w_3,
\]

where ‖w(t)‖L∞ ≤ 0.005 and x(0) = [0.6 0.5 0.4]^T.

A nonlinear feedback law u_i = −x_i^3 can render the system to be ISS, as proved by the ISS-Lyapunov function V(x) = x_1^2 + (1/2)x_2^2 + (1/2)x_3^2. If we apply the event-triggered sampling strategy and the quantizer into the system, the feedback law is u_i(t) = −x̂_{i,k}^3, for t ∈ [ak, ak+1). The derivative of V along the trajectories of the quantized event-triggered closed-loop system satisfies

\[
\dot V \le 2\left(-x_1^4 + 3x_1^3 e_1 + x_1 e_1^3 + x_1 w_1\right) + \sum_{i=2}^{3}\left(-x_i^4 + 3x_i^3 e_i + x_i e_i^3 + x_i w_i\right).
\]
FIGURE 11.5
Simulation results: the system state x1, x2, x3 and ‖x‖ (top), the intersampling intervals τ in seconds (middle), and the instantaneous bit-rate rk in bit/s (bottom), each plotted over 0–250 s.
If ‖e‖ ≤ (1/16)‖x‖ + 0.1w̄, we have

\[
\dot V \le -0.249\|x\|^4 + 5.2047\bar w + 0.0075\bar w^2 + 0.004\bar w^3, \quad \forall \|x\| \le 1.
\]

Therefore, the system is locally ISS with ξ(s) = (1/16)s + 0.1w̄, whose Lipschitz constant Lξk = 1/16 for all k.

According to Equations 11.12 through 11.14, we design our threshold and quantization level as

\[
\theta_k \doteq \theta(\|\hat x_k\|, \bar w) = 0.9412\,\rho_\theta\left(\tfrac{1}{16}\|\hat x_k\| + 0.1\bar w\right),
\]
\[
\Delta_k \doteq \Delta(\|\hat x_{k-1}\|, \bar w) = 0.9412\,\rho_\Delta\,\rho_\theta\left(\tfrac{1}{16}\left(\|\hat x_{k-1}\| - \theta_{k-1}\right) + 0.1\bar w\right),
\]

where ρθ = 0.53, and ρΔ = 0.1.

Now, let us check whether Assumptions 11.1 through 11.3 all hold. During our design of the event trigger, we have shown that Assumptions 11.1 and 11.2 hold. For Assumption 11.3, we find that f̄c(s) = 2s² with Ω = {x : ‖x‖ < 1}. It is easy to see that lim_{s→0} f̄c(s)/θ(s, 0) = 0, and hence Assumption 11.3 is true. According to Theorem 11.2, as long as the delay dk ≤ Dk, where the acceptable delay Dk is defined as in (11.26), our quantized event-triggered control system is locally ISS, Zeno behavior free, and efficiently attentive.

We applied the event trigger and the quantizer into the rotating rigid spacecraft system with the delay dk for the kth packet to be the acceptable delay Dk, and ran the system for 250 s. The simulation results are given in Figure 11.5.

The top plot of Figure 11.5 shows the system state with respect to time. From this plot, we see that the system is locally ISS. The middle plot in Figure 11.5 presents the intersampling intervals with respect to time. The smallest intersampling interval is 0.1 s, which is two times the simulation step size, so we can say that there is no Zeno behavior. Besides, we also find that the intersampling intervals oscillate over time. As the system state approaches the origin, the peak value of the intersampling intervals grows larger, and the sampling frequency during the trough becomes lower. Generally speaking, the intersampling interval is growing long, and the intersampling intervals are efficiently attentive. The bottom plot shows the instantaneous bit-rate with respect to time. We can see that as the system state goes to the origin, the required instantaneous bit-rate drops from about 360 bit/s to about 50 bit/s, and the curve of the instantaneous bit-rate has a similar shape as the curve of ‖x(t)‖ (the solid line in the top plot). Thus, it is obvious that the required instantaneous bit-rate is also efficiently attentive.

11.9 Conclusion

As prior work showed, event-triggered systems have the potential to save communication resources while maintaining system performance, and roughly speaking, the intersampling interval is longer as the system state gets closer to the origin. This property is called efficient attentiveness. However, not all event-triggered systems are efficiently attentive. Moreover, with capacity-limited communication channels where the effects of quantization and delay are taken into account, event-triggered systems may have Zeno behavior or become unstable. To overcome the Zeno behavior, and to guarantee local input-to-state stability and efficient attentiveness, this chapter studies how to properly design the event trigger and the quantizer, and provides a state-dependent acceptable delay. The simulation results demonstrate the main results.

11.10 Appendix

11.10.1 Proof of Admissibility of the Sampling and Arrival Sequences

First, we realize that d0 = 0 ≤ τ0. Next, let us assume that dk−1 ≤ τk−1 holds; that is, ak−1 ≤ sk. If dk > τk > 0, then we have ak−1 ≤ sk ≤ sk+1 < ak. For interval [sk, sk+1], the control input ũ = uk−1, and the initial state ek(sk) satisfies ‖ek(sk)‖ ≤ Δk. From Equation 11.24, we have

\[
\|e_k(s_{k+1})\| \le \frac{\bar f(\hat x_k, u_{k-1}, \bar w)}{L_k^x}\left(e^{L_k^x \tau_k} - 1\right) + \Delta_k\, e^{L_k^x \tau_k}.
\]

Since ‖ek(sk+1)‖ = θk, together with the equation above, we have τk ≥ Tk. From Equation 11.26, we further have τk ≥ dk. This contradicts the assumption dk > τk. Therefore, dk ≤ τk for all k = 0, 1, . . . , ∞.

11.10.2 Proof of Local ISS

We first show that ‖e(t)‖ ≤ ξ(‖x(t)‖, w̄), where e(t) = ek(t), ∀t ∈ [ak, ak+1); that is, along the trajectories of the system (11.3–11.5), ‖e‖ ≤ ξ(‖x‖, w̄). Local ISS is, then, established using Assumption 11.1.

For all t ∈ [sk, sk+1], we have ‖ek(t)‖ ≤ θk < ξk(‖x̂k‖, w̄). During interval [sk+1, ak+1), we know ‖ek(sk+1)‖ = θk. Because of the admissibility, the control input during this interval is uk. From Equation 11.24, we have

\[
\|e_k(t)\| \le \frac{\bar f(\hat x_k, u_k, \bar w)}{L_k^x}\left(e^{L_k^x (t - s_{k+1})} - 1\right) + \theta_k\, e^{L_k^x (t - s_{k+1})}.
\]

Since t − sk+1 ≤ dk+1 ≤ d̄k+1 (11.26), we have ‖ek(t)‖ ≤ ξk(‖x̂k‖, w̄), for all t ∈ [sk+1, ak+1). Therefore, for all t ∈ [ak, ak+1), and all k = 0, 1, . . ., ‖ek(t)‖ ≤ ξk(‖x̂k‖, w̄). According to Lemma 11.2, ‖ek(t)‖ ≤ ξ(‖x(t)‖, w̄), ∀k = 0, 1, . . . . Let e(t) = ek(t), ∀t ∈ [ak, ak+1). We have ‖e(t)‖ ≤ ξ(‖x(t)‖, w̄). In other words, along the trajectories of system (11.3–11.5), ‖e‖ ≤ ξ(‖x‖, w̄).

Next, we show local ISS. Since ‖e‖ ≤ ξ(‖x‖, w̄) along the trajectories of system (11.3–11.5), according to Assumption 11.1 of system (11.3–11.5), the derivative of the ISS-Lyapunov function V along the trajectories satisfies

\[
\dot V \le -\alpha_3(\|x\|) + \gamma_1(\xi(\|x\|, \bar w)) + \gamma_2(\|w\|), \quad \forall x \in \Omega,\; w \in \Omega_w
\]
\[
= -(1 - c)\,\alpha_3(\|x\|) + \gamma_2(\|w\|) + \gamma_3(\bar w), \quad \forall x \in \Omega,\; w \in \Omega_w
\]
\[
\le -(1 - c)\,\alpha_3(\|x\|) + \gamma_2(\|w\|_{L_\infty}) + \gamma_3(c_1\|w\|_{L_\infty}), \quad \forall x \in \Omega,
\]

where c is the same c as defined in Equation 11.11, and c1 is a constant that satisfies w̄ ≤ c1‖w‖L∞. Therefore, the system (11.3–11.5) is locally ISS according to Theorem 11.1.

11.10.3 Proof of No Zeno Behavior

We first show that Dk ≤ τk for all k = 0, 1, . . . . From Section 11.10.1, we know that for any dk ∈ [0, Dk], we have dk ≤ τk. Choose dk = Dk, and we have Dk ≤ τk. Next, we show that there exists a constant c1 > 0 such that

\[
\tau_k \ge D_k \ge c_1, \quad \forall k = 0, 1, \ldots.
\tag{11.27}
\]

To show τk ≥ c1, we need the following lemma.

Lemma 11.7

\[
\lim_{s \to 0} \frac{\theta_{k-1}}{\Delta_k} < \infty.
\]

PROOF If w̄ ≠ 0, it is easy to show lim_{s→0} θk−1/Δk < ∞. If w̄ = 0, we first notice that θ(s, 0) < s. Indeed,

\[
\theta(s, \bar w) = \frac{\rho_\theta}{L^\xi + 1}\,\xi(s, \bar w).
\]

Let Lθ = ρθ Lξ/(Lξ + 1). Notice that Lθ < 1. With Assumption 11.2 and Equation 11.13, we have |θ(s, w̄) − θ(t, w̄)| ≤ Lθ|s − t|; hence, θ(s, 0) ≤ Lθ s < s.
Let s indicate ‖x̂k−1‖. Since θ(s, 0) < s, we have

\[
\frac{\theta_{k-1}}{\Delta_k} = \frac{\theta(s, 0)}{\rho_\Delta\,\theta(s - \theta(s, 0), 0)}
\le \frac{\theta(s, 0)/\rho_\Delta}{\theta(s, 0) - L_\theta\,\theta(s, 0)}
= \frac{1/\rho_\Delta}{1 - L_\theta} < \infty.
\]

The first inequality is derived from θ(s, 0) − θ(s − θ(s, 0), 0) ≤ Lθ θ(s, 0); the second inequality is derived from Lθ < 1.

First, we show that there exists a positive constant cd such that d̄k ≥ cd for all k = 0, 1, . . .. From the fact that ln(1 + x) ≥ x/(1 + x), we have

\[
\bar d_k \ge \frac{\xi_{k-1}(\|\hat x_{k-1}\|, \bar w) - \theta_{k-1}}{\bar f(\hat x_{k-1}, u_{k-1}, \bar w) + L_{k-1}^x\,\xi_{k-1}(\|\hat x_{k-1}\|, \bar w)}.
\]

Since the system is locally ISS, there must exist a constant μ such that ‖x(t)‖ ≤ μ, and the Lipschitz constant of f with respect to x must satisfy Lkx ≤ L̄x, for all k. Together with Equations 11.22 and 11.17,

\[
\bar d_k \ge \frac{1/\rho_\theta - 1}{(\bar f_c(\hat x_{k-1}) + \bar L^w \bar w)/\theta_{k-1} + \bar L^x}.
\]

With Assumption 11.3, we have lim_{‖x̂k−1‖→0} (f̄c(x̂k−1) + L̄w w̄)/θk−1 < ∞. For any finite ‖x̂k−1‖, (f̄c(x̂k−1) + L̄w w̄)/θk−1 < ∞. According to Lemma 11.1, there must exist a class N function h1 such that h1(s) ≥ (f̄c(s) + L̄w w̄)/θk−1; hence,

\[
\bar d_k \ge \frac{1/\rho_\theta - 1}{h_1(\|\hat x_{k-1}\|) + \bar L^x} \ge \frac{1/\rho_\theta - 1}{h_1(\mu) + \bar L^x} \doteq c_d > 0.
\tag{11.28}
\]

Second, we show there exists a positive constant ct such that Tk ≥ ct for all k. Since ln(1 + x) ≥ x/(1 + x) and Δk/θk ≤ ρΔ (Remark 11.2),

\[
T_k \ge \frac{\theta_k - \Delta_k}{\bar f(\hat x_k, u_{k-1}, \bar w) + L_k^x \theta_k}
\ge \frac{1 - \rho_\Delta}{\bar f(\hat x_k, u_{k-1}, \bar w)/\theta_k + \bar L^x}.
\]

Let us first look at the term f̄(x̂k, uk−1, w̄)/θk:

\[
\frac{\bar f(\hat x_k, u_{k-1}, \bar w)}{\theta_k}
\le \frac{\bar f(\hat x_{k-1}, u_{k-1}, \bar w) + L_k^x \|\hat x_k - \hat x_{k-1}\|}{\theta_k}
\le \frac{\bar f_c(\hat x_{k-1}) + \bar L^w \bar w + \bar L^x \theta_{k-1}}{\Delta_k}
= \frac{\bar f_c(\hat x_{k-1}) + \bar L^w \bar w}{\theta_{k-1}} \cdot \frac{\theta_{k-1}}{\Delta_k} + \bar L^x\,\frac{\theta_{k-1}}{\Delta_k}.
\]

The first inequality is derived from the locally Lipschitz property of f. The second inequality is derived from (11.17) and the fact that L̄x ≥ Lkx and L̄w ≥ Lkw. From the previous discussion about the function (f̄c(x̂k−1) + L̄w w̄)/θk−1 and Lemma 11.7, we know there must exist a class N function h2 such that h2(‖x̂k−1‖) ≥ f̄(x̂k, uk−1, w̄)/θk. Therefore,

\[
T_k \ge \frac{1 - \rho_\Delta}{h_2(\|\hat x_{k-1}\|) + \bar L^x} \ge \frac{1 - \rho_\Delta}{h_2(\mu) + \bar L^x} \doteq c_t > 0.
\tag{11.29}
\]

Therefore, τk ≥ Dk ≥ min{cd, ct} > 0 for all k.

11.10.4 Proof of Efficient Attentiveness

From Equations 11.28 and 11.29, we have τk ≥ Dk ≥ h̄(‖x̂k−1‖), where

\[
\bar h(\|\hat x_{k-1}\|) = \min\left\{ \frac{1/\rho_\theta - 1}{h_1(\|\hat x_{k-1}\|) + \bar L^x},\; \frac{1 - \rho_\Delta}{h_2(\|\hat x_{k-1}\|) + \bar L^x} \right\}.
\tag{11.30}
\]

Since h1 and h2 are class N functions, h̄ is a class M function. Therefore, part 1 of efficient attentiveness (Definition 11.1) is proved.

From Equation 11.16, we have

\[
N_k \le 1 + \log_2 2n + (n - 1)\log_2\!\left(1 + \frac{\theta_{k-1}}{\Delta_k}\right).
\]

Lemma 11.7 shows that lim_{‖x̂k−1‖→0} θk−1/Δk < ∞. Together with the fact that a finite ‖x̂k−1‖ indicates a finite θk−1/Δk, according to Lemma 11.1, there exists a class N function h3 such that θk−1/Δk ≤ h3(‖x̂k−1‖); hence, we have

\[
N_k \le 1 + \log_2 2n + (n - 1)\log_2\left(1 + h_3(\|\hat x_{k-1}\|)\right).
\]

The instantaneous bit-rate rk = Nk/Dk thus satisfies

\[
r_k \le \frac{1 + \log_2 2n + (n - 1)\log_2\left(1 + h_3(\|\hat x_{k-1}\|)\right)}{\bar h(\|\hat x_{k-1}\|)} \doteq h(\|\hat x_{k-1}\|),
\]

where h̄(·) ∈ M is defined as in (11.30). Since h3(·) is a class N function and h̄(·) is a class M function, it is easy to see that h(·) is a class N function. This completes the proof of part 2 of efficient attentiveness.

Bibliography

[1] A. Anta and P. Tabuada. To sample or not to sample: Self-triggered control for nonlinear systems. IEEE Transactions on Automatic Control, 55:2030–2042, 2010.
[2] K.-E. Årzén. A simple event-based PID controller. In Proceedings of the 14th IFAC World Congress, volume 18, pages 423–428, 1999.

[3] K. J. Åström and B. M. Bernhardsson. Comparison of Riemann and Lebesgue sampling for first order stochastic systems. In Decision and Control, 2002, Proceedings of the 41st IEEE Conference on, volume 2, pages 2011–2016, IEEE, 2002.

[4] H. K. Khalil and J. W. Grizzle. Nonlinear Systems. Volume 3. Prentice Hall, New Jersey, 1992.

[5] L. Li, X. Wang, and M. Lemmon. Stabilizing bit-rates in disturbed event triggered control systems. In The 4th IFAC Conference on Analysis and Design of Hybrid Systems, pages 70–75, 2012.

[6] L. Li, X. Wang, and M. Lemmon. Stabilizing bit-rates in quantized event triggered control systems. In Hybrid Systems: Computation and Control, pages 245–254, ACM, 2012.

[7] D. Liberzon, D. Nešić, and A. R. Teel. Lyapunov-based small-gain theorems for hybrid systems. IEEE Transactions on Automatic Control, 59:1395–1410, 2014.

[8] W. R. Perkins and J. B. Cruz. Engineering of Dynamic Systems. Wiley, New York, 1969.

[9] R. Postoyan, A. Anta, D. Nešić, and P. Tabuada. A unifying Lyapunov-based framework for the event-triggered control of nonlinear systems. In Decision and Control and European Control Conference (CDC-ECC), 2011 50th IEEE Conference on, pages 2559–2564, IEEE, 2011.

[10] A. Seuret and C. Prieur. Event-triggered sampling algorithms based on a Lyapunov function. In Decision and Control and European Control Conference (CDC-ECC), 2011 50th IEEE Conference on, pages 6128–6133, IEEE, 2011.

[11] P. Tabuada. Event-triggered real-time scheduling of stabilizing control tasks. IEEE Transactions on Automatic Control, 52:1680–1685, 2007.

[12] X. Wang and M. D. Lemmon. Self-triggered feedback control systems with finite-gain L2 stability. IEEE Transactions on Automatic Control, 54:452–467, 2009.

[13] X. Wang and M. Lemmon. Minimum attention controllers for event-triggered feedback systems. In The 50th IEEE Conference on Decision and Control-European Control Conference (CDC-ECC), pages 4698–4703, IEEE, 2011.

[14] J. K. Yook, D. M. Tilbury, and N. R. Soparkar. Trading computation for bandwidth: Reducing communication in distributed control systems using state estimators. IEEE Transactions on Control Systems Technology, 10(4):503–518, 2002.
12
Event-Based PID Control

Burkhard Hensel
Dresden University of Technology
Dresden, Germany

Joern Ploennigs
IBM Research–Ireland
Dublin, Ireland

Volodymyr Vasyutynskyy
SAP SE, Research & Innovation
Dresden, Germany

Klaus Kabitzsch
Dresden University of Technology
Dresden, Germany

CONTENTS
12.1 Introduction ......... 236
12.2 Basics ......... 236
  12.2.1 Sampled Signals ......... 236
  12.2.2 Event-Based Sampling in Control Loops ......... 237
  12.2.3 PID Controller ......... 238
12.3 Performance Measures ......... 238
  12.3.1 Basic Performance Measures ......... 238
  12.3.2 Combination of Performance Measures ......... 239
12.4 Phenomena of Event-Based PID Controllers ......... 240
12.5 Solutions ......... 241
  12.5.1 Comparison of Sampling Schemes for PID Control ......... 242
  12.5.2 Adjustment of the Sampling Parameters ......... 243
  12.5.3 PID Algorithms ......... 244
12.6 Special Results for Send-on-Delta Sampling ......... 246
  12.6.1 Stability Analyses for PID Controllers with Send-on-Delta Sampling ......... 246
  12.6.2 Tuning Rules for PI Controllers with Send-on-Delta Sampling ......... 248
12.7 Conclusions and Open Questions ......... 255
Bibliography ......... 256
ABSTRACT The most well-known controller, especially in industry, is the PID controller. A lot of PID controller design methods are available, but most of them assume either a continuous-time control loop or equidistant sampling. When applying a PID controller in an event-based control loop without changing the control algorithm, several phenomena arise that degrade both control performance and message rate. This chapter explains these phenomena and presents the state-of-the-art solutions to reduce or avoid these problems, including sampling scheme selection, sampling parameter adjustment, and PID algorithm optimization. Finally, tuning rules for PID controllers that are especially designed for event-based sampling are presented, allowing an adjustable trade-off between high control performance and low message rate. The focus of this chapter is on send-on-delta sampling as the most known event-based sampling scheme, but many results are valid for any event-based sampling scheme.
12.1 Introduction

PID controllers are very popular in automation due to their simplicity, easy tuning, and robustness [5,45]. More than 90% of all control loops in industry use PID controllers [5,34]. PID controls are quite robust to changes of plant parameters, allowing long operation times without maintenance. They are suitable for many process types, including stable, integrating, and unstable plants, as well as plants with saturation or time delay. Practitioners can choose from a lot of tuning rules [34,71,72].

The theory behind PID controllers is well researched [43,71]. Due to the success of digital controllers, a fundamental assumption is that all devices in the control loop operate at the same equidistant and synchronized time instances. In practice, an increasing amount of easily installable wired and wireless network protocols favor event-based sampling approaches because they require less network bandwidth and energy. However, event-based sampling approaches also induce new phenomena that cannot be explained with the classical theory due to the violation of the periodic sampling assumption. As a result, practitioners need new approaches for evaluating stability and deriving parameters, advanced PID algorithms, and tuning rules.

Research in event-based PID control is a relatively young discipline. In 1999, Årzén presented an event-based version of the PID controller [4]. Other authors continued his work, such as Otanez et al. in 2002 [35], Sandee et al. in 2005 [49], and Vasyutynskyy et al. in 2006 [61]. Since then, an increasing number of publications mirrors the growing interest in event-based PID control.

The goal of this chapter is to synthesize this research into some practical guidelines to design and tune event-based PID controls. Section 12.2 introduces the relevant foundations of event-based sampling and control loops, as well as of the classic PID controller. For evaluating control loops, performance measures are discussed in Section 12.3. Section 12.4 throws light on typical problems in event-based PID controls. Possible solutions are discussed in Section 12.5. Send-on-delta sampling is a very common approach; we therefore present some results with a focus on send-on-delta sampling in Section 12.6. Final conclusions are drawn in Section 12.7.

12.2 Basics

12.2.1 Sampled Signals

Before we dive into event-based control approaches, we shortly define our notation for periodic or event-based signals in the control loop.

All signals are defined as time series S with a series of observations sz = s(tz) and the corresponding sample times tz ≥ 0 for z ∈ N. If the intersample interval Tz = tz − tz−1 is a positive constant, we speak of uniform sampling or a periodic signal.

The intersample interval Tz is a random variable in embedded systems. The kind and parameters of the distribution depend on the implementation of the sampling approach [42]. Some devices sample as fast as possible. In this case, the intersample interval equals the operation cycle of the embedded device. This is common, for example, for Programmable Logic Controllers (PLC) and line-powered embedded devices such as devices using the building automation network LON [42].

Other devices use timers to sample at predefined intervals. This is very common in wireless sensor networks because it allows the nodes to go into a low-power sleep mode in-between sampling events. However, clocks in embedded systems have a variance and a drift [52], which results again in a randomly distributed intersample interval Tz.

For control design it is usually acceptable to simplify both cases. If the intersample intervals are very small (Tz → 0), then the time series S becomes similar to a continuous-time signal s(t). If the clock variance and drift are significantly smaller than the sampling period, then the intersample interval can be assumed to be constant.

Most event-based sampling approaches are based on such a periodic sampling and decide for each observation if it should be transmitted. The difference of the approaches lies primarily in the filter conditions applied to the original time series. They are often based on the current sample sz and the last transmitted sample sl (transmitted at time tl). Table 12.1 compares some event-based sampling approaches and their filter conditions [3,36,37,40,48]. Most common is send-on-delta (SOD) sampling, as it is used in EnOcean and LON networks [33]. Other approaches are integral sampling [30] and its variation, called gradient-based sampling [41]. Both use a filter criterion based on the sampling error, while the latter also adapts the wake-up times. Model-based approaches use predictors in sender and receiver to improve the signal reconstruction in the receiver [24,50,57].

These sampling criteria may be modified with additional temporal constraints on the intertransmission interval to avoid oversampling or undersampling. A min-send-time Tmin is used to limit the traffic load caused by oversampling if a signal changes rapidly or if the sensor is faulty. This is particularly useful when nodes sample as fast as possible, such as in LON networks. As the min-send-time Tmin defines the lower limit of the interval between messages, it has to fulfill the Nyquist–Shannon sampling theorem. The min-send-time loses its meaning in wireless sensor networks as
TABLE 12.1
Well-Known Sampling Approaches and Their Filter Conditions

Approach        Condition cond(sz)                                Description
Periodic        (tz − tl) ≥ TA                                    Transmits signal with period TA
Send-on-delta   |sz − sl| ≥ ΔS                                    Transmits signal on significant change since last transmission sl [33]
Integral        ∑_{j=l+1}^{z} |(sj + sj−1)/2 − sl| · Tj ≥ ΔI      Accumulates sampling error since last transmission sl and avoids steady-state errors [30,41]
Model-based     |ŝz − sz| ≥ ΔP                                    Uses a signal model ŝz in sender and receiver to compress and reconstruct the signal [24,50,57]
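The filter conditions of Table 12.1 translate directly into code. The sketch below (function names and the example signal are illustrative, not taken from the chapter) implements the periodic, send-on-delta, and integral conditions:

```python
def periodic(t_z, t_l, T_A):
    # Transmit with period T_A.
    return (t_z - t_l) >= T_A

def send_on_delta(s_z, s_l, delta_S):
    # Transmit on a significant change since the last transmitted sample s_l.
    return abs(s_z - s_l) >= delta_S

def integral(samples, times, s_l, delta_I):
    # Accumulate the sampling error since the last transmission:
    # sum_j |(s_j + s_{j-1}) / 2 - s_l| * T_j >= delta_I
    acc = 0.0
    for j in range(1, len(samples)):
        acc += abs((samples[j] + samples[j - 1]) / 2.0 - s_l) * (times[j] - times[j - 1])
    return acc >= delta_I

# A small send-on-delta run over a made-up signal with threshold 0.1:
signal = [0.00, 0.02, 0.05, 0.11, 0.12, 0.30]
sent, last = [], signal[0]
for s in signal:
    if send_on_delta(s, last, 0.1):
        last = s
        sent.append(s)
print(sent)  # only the samples that changed by at least 0.1 are transmitted
```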
the underlying sampling period Tz should already be as large as possible for the purpose of energy efficiency. A max-send-time Tmax defines an upper limit for the intertransmission interval and ensures that a sample is transmitted if more than Tmax has passed since the last message was sent. It is primarily used to implement a periodic "watchdog signal" in the sensor that allows monitoring its correct operation. It can also be used to reduce the negative effects of event-based controls as shown in Section 12.5.2. Finally, in some cases, the max-send-timer may improve the energy efficiency of the overall communication system [21].
The resulting filtered time series S̃ is a subset of the original time series S, depending on the sampling condition cond(sz) as well as the min-send-time and the max-send-time. The intertransmission interval between two subsequent elements of the new filtered time series S̃ is denoted as Tk.

12.2.2 Event-Based Sampling in Control Loops

Figure 12.1 shows a basic control loop with sensor, controller, actuator, and the plant under control. A user interface may be an optional element in the control loop. The sensor observes the plant and samples the time series of plant output yk (controlled variable) where k is the index of the time series. The user interface or supervisory control system provides a setpoint value rk (reference variable). The control error ek is the difference between setpoint and plant output (ek = rk − yk). Based on the plant output and the setpoint value, the controller computes a control value uk (manipulated variable) that is converted by the actuator to a physical action. The user interface potentially also visualizes the plant output yk or the control value uk. In practice, several modules in the control loop are often combined into one device to reduce the number of devices. For example, controllers are often integrated with the actuators and the sensors are often combined with the user interface into one device [68].

FIGURE 12.1
Control loop elements with optional user interface. (Block diagram: the user interface provides the setpoint rk; the sensor measures the plant output yk; the controller computes the control value uk from the control error ek = rk − yk and drives the actuator acting on the plant.)

The arrows in Figure 12.1 represent the data flow of the signals yk, rk, ek, and uk. The optional signal transmissions to the user interface are marked as dotted lines. It is possible that all these signals are transmitted in an event-based manner. In many applications, the setpoint changes infrequently compared to the other signals in the control loop as it is defined by the user. Therefore, event-based transmission is advantageous [16]. It is also recommendable to transmit sensor values in an event-based manner to reduce network traffic [33,40]. The actuator value may be sampled event-based to reduce, along with the number of messages, also the number of actuation events, as this decreases the deterioration of the actuator and can help saving control energy [36,55]. Also an event-based transmission of the control error can be found in the literature since it can be beneficial to change the triggering scheme depending on the control error [3], and it allows some theoretic investigations [6,44] as the time instance of zero control error is detected.
Independent of the signal transmission, controllers may be operated periodically or event-based [68]. In the first case, the code is run periodically, whether or not the input has changed. In the latter case, the code is only executed if the control error changes. This saves the processor load of the embedded device [4]. An event-based operation of the actuator (independent of the signal transmission scheme of the control value) may save energy.
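As a concrete sketch of how a sampling condition is combined with the min- and max-send-time, the following function filters a periodically sampled series with send-on-delta logic (a minimal illustration; names and structure are this author's, not from the cited works):

```python
def filter_series(samples, Tz, delta_s, T_min, T_max):
    """Apply send-on-delta filtering with min- and max-send-time to a
    periodically sampled series (sampling period Tz). Returns the
    (time, value) pairs of the transmitted samples."""
    sent = [(0.0, samples[0])]          # first sample is always transmitted
    for i, s in enumerate(samples[1:], start=1):
        t = i * Tz
        elapsed = t - sent[-1][0]
        if elapsed < T_min:
            continue                    # suppress: min-send-time not reached
        if abs(s - sent[-1][1]) >= delta_s or elapsed >= T_max:
            sent.append((t, s))         # threshold event or watchdog sample
    return sent
```

A larger T_min suppresses bursts of closely spaced events, while T_max forces a periodic watchdog transmission even when the signal stays inside the ΔS band.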
238 Event-Based Control and Signal Processing

12.2.3 PID Controller

The behavior of an event-based PID controller can be approximated from its continuous-time counterpart. The classic continuous-time PID controller is defined in the time domain as

u(t) = KP · ( e(t) + (1/TI) · ∫0^t e(ξ) dξ + TD · de/dt ),  (12.1)

where KP, TI, and TD are proportional gain, reset time, and rate time, respectively (names according to [1]), and in the complex frequency domain (after Laplace transform) as

U(s)/E(s) = KP · ( 1 + 1/(TI·s) + TD·s ).  (12.2)

This definition does not allow immediate changes of the control error because then the derivative action would be infinite. Therefore, the derivative part is usually filtered by a first order low-pass filter, resulting in the transfer function

U(s)/E(s) = KP · ( 1 + 1/(TI·s) + TD·s / (1 + (TD/ND)·s) ),  (12.3)

where ND denotes the derivative filter coefficient.
There are several approaches to approximate the ideal (continuous-time) PID controller by a discrete-time controller [48]. The following equations give one example for the proportional (P), integrative (I), and derivative (D) action, taken from [66]:

uk = Pk + Ik + Dk,  (12.4)
Pk = KP · ek,  (12.5)
Ik = Ik−1 + (KP/TI) · Tk · ek−1,  (12.6)
Dk = TD/(TD + ND·Tk) · Dk−1 + (KP·TD·ND)/(TD + ND·Tk) · (ek − ek−1).  (12.7)

The start conditions are typically D0 = 0 and I0 = 0, if no techniques for bumpless transfer are used [58]. Section 12.5.3 will discuss several possible improvements of this algorithm.

12.3 Performance Measures

The main challenge of event-based controller design is the optimization of the sampling and controller parameters. For that purpose, it must be defined how an event-based control loop should be evaluated. The usual goal of continuous-time or periodic control loops is good control performance. For event-based sampling, energy criteria are also important as they consider the biggest advantage of these sampling schemes.
The criteria presented below can be used not only for evaluating PID controllers but also for other types of controllers. We show the continuous-time formulation of the control performance measures because these present the ideas of the measures more intuitively.

12.3.1 Basic Performance Measures

Different performance criteria have been used in the existing literature for evaluating PID controllers with event-based sampling. They can mainly be divided into control performance measures and energy consumption measures. In addition, other aspects should also be paid attention to when designing a controller, such as energy consumption of the actuator and plant, processor load, and economic considerations [19]. However, these aspects are not discussed in this chapter.

Control performance measures

Control performance usually evaluates the control error e(t). As this signal changes over time, there are several possibilities for aggregating the signal into a single scalar value. The most commonly used performance measurements are:

• The L1 norm (integral of absolute error, IAE)

JIAE = ∫0^Tend |e(t)| dt,  (12.8)

where (0, Tend) is the time range in which the error signal is evaluated, is the most often used index for evaluating control performance with event-based PID controllers [3,20,35,37,39–41,44,46,47,62–69]. The normalized version of this performance measure is the mean absolute error

ē = JIAE / Tend,  (12.9)

and can be very well understood even by practitioners with less theoretical background.

• The L2 norm (integral of squared error, ISE)

JISE = ∫0^Tend e²(t) dt,  (12.10)

has been used in the context of event-based PID control only in [20,22] although it is a common goal for controller optimization [34]. From the authors' point of view, ISE is better suited than IAE, because
Event-Based PID Control 239

for typical applications where event-based sampling is reasonable, small errors are intentionally accepted for the advantage of reduced message rate—and ISE punishes small errors (compared with large errors) less than IAE.

• The "biased" IAE is particularly adapted to the phenomena of event-based control

JIAEΔ = ∫0^Tend f(e) dt,  (12.11)

f(e) = |e| if |e| > emax,  0 if |e| ≤ emax.  (12.12)

This measure tolerates any value of the control error within a defined tolerance belt emax for respecting that many event-based sampling schemes result in limit cycles, that is, small oscillations around the setpoint as shown in Section 12.4. This avoids the accumulation of long-time steady-state errors during performance evaluation. It has been used several times [6,20,60,67].

• The difference between the controlled variable y(t) of the event-based control loop and the controlled variable yref of a periodically sampled control loop with same process and controller settings has also often been used, especially integrated as IAEP [8,37,39,46,47,60,62,64,67,68]:

JIAEP = ∫0^Tend |yref(t) − y(t)| dt.  (12.13)

This is a helpful measure if it is desired that the event-based control loop hardly differs from a continuous-time or discrete-time control loop. However, this is not necessarily suited for finding a reasonable trade-off between message rate and control performance as shown in Section 12.6.2. Also, the sampling efficiency, that is, the quotient of IAEP and IAE, has been used several times [37,46,47,62,64,68].

• There are some well-known performance criteria which are only applicable for evaluating step responses. Examples are settling time, overshoot, control rise time, ITAE, ITSE, or steady-state error [1]. As these measures assume setpoint steps without disturbances, their use is limited in comparison to the performance measures mentioned before. Further, limit cycles (oscillations around the setpoint, see Section 12.4) make the application difficult. For example, the settling time can only be determined with send-on-delta sampling, if the final band around the setpoint is larger than the threshold ΔS. Nevertheless, they can be helpful for some tasks, especially the overshoot. Regarding event-based PID controllers, the settling time has been evaluated in [3,6,8,11,44,67], the rise time in [3,67], and the overshoot in [3,6,8,11,20,22,23,39,41,67].

• Other performance measures are defined in [8,11,20,28,44,47] and robustness measures have been considered in [8,11].

Message rate and energy consumption measures

For event-based sampling, the message rate is also important because reducing the message rate and thus the energy consumption is the main target of event-based sampling and the reason for accepting the increased design effort and reduced control performance.
The message rate has rarely been used directly for evaluating control loops with event-based PID controllers [20,60]. More frequently, the overall number of messages N which have been transmitted during one simulation has been used [3,6,8,37,41,44,65,66]. Also, the mean intertransmission interval

T̄it = Tend / N,  (12.14)

between two messages (the inverse of the message rate) has been used [20,39,41] because this interval allows comparison with the period of a periodically sampled control loop.
Several authors recommend estimation of the overall energy consumption using a weighted sum of messages and controller invocations for exact energy considerations [37,39,46], or other energy-oriented measures such as power consumption or electric current [39,40]. Also the ratio between the message number of the event-based control loop and the message number of the periodic control loop has been used [62–64,67,68].

12.3.2 Combination of Performance Measures

Several performance measures should be taken into account for optimizing an event-based control loop. In particular, as both the control performance and the message rate form a trade-off in many practical scenarios, at least one criterion of each category should be considered when designing event-based controllers.
The most generic type of multicriteria optimization is Pareto optimization. A solution is called Pareto optimal, if no other solution can be found which improves at least one of the performance measures without degrading any other measure. Therefore, Pareto optimization delivers a set of optimal solutions and it is up to the user to decide, which of the solutions should be realized. However, the set of Pareto optimal solutions is

much smaller than the original design space, making the controller design easier.
If the control designer is interested in getting only one single solution then different approaches are suitable. One way is to optimize only one basic measure while considering constraints for all other relevant performance measures. Another possibility is the algebraic combination of the performance measures. Most performance measures mentioned before are positive real and should be minimized. For these measures, the algebraic combination can mainly be realized in two different ways:

1. Weighted sum of basic performance criteria [35,37,46,64]

Jsum = ∑i wi · Ji.  (12.15)

The reasonable selection of the weighting factors wi may be difficult, but allows much freedom for application-specific priorities.

2. Product of basic performance criteria

JProd = ∏i Ji,  (12.16)

in particular [20,22,23]

JProd = N · JISE.  (12.17)

The advantage compared with the weighted sum is that this performance measure is optimal if none of the basic performance measures can be improved by a specific ratio without degrading the other (or product of the others for more than two used indices) by more than that ratio [22]. The disadvantage is the more difficult weighting of the performance measures (if this is necessary) as this cannot be done by simple factors. Additionally, there is a problem if one of the basic performance measures is zero or near to zero, because then the other ones have no significant influence on the comparison to other solutions.

The optimal solutions according to both strategies are always Pareto optimal. That can simply be proven by contradiction.

12.4 Phenomena of Event-Based PID Controllers

The simple PID control algorithm (12.4) along with event-based sampling leads to the issues presented in Figure 12.2 and described in the subsequent paragraphs [40]. The following overview explains these issues. Solutions are given in Section 12.5.

Approximation errors

The control error is known by the control algorithm only in the time instances of sampling. While digital PID algorithms such as (12.4) assume usually a constant signal between two samples (zero order hold), the real controlled variable (and hence the control error, too) changes in between, thus being a source of uncertainty in the controller. In event-based sampling, the reduced amount of samples increases the uncertainty. For example, if the control error is sampled with send-on-delta
FIGURE 12.2
Step response of the control loop with PID algorithm (12.4) and send-on-delta sampling with Tz = Tmin = 0.06 and Tmax → ∞. The emerging issues (e.g., sticking and limit cycles) are visible. (Plot of setpoint r(t), reference loop yref(t), yref,k, and SOD loop ysod(t), ysod,k over time in seconds.)
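The setup of Figure 12.2 can be reproduced qualitatively in a few lines. The sketch below implements the PID update (12.4) through (12.7) and closes a send-on-delta loop around a first-order plant; the plant model, tuning, and threshold values are invented for illustration and do not match the figure's setup:

```python
def pid_step(state, e, Tk, Kp=1.0, Ti=0.5, Td=0.05, Nd=8.0):
    # One event-driven PID update following (12.4)-(12.7): P on the current
    # error, I holding the previous error over Tk, filtered D on the change.
    I_prev, D_prev, e_prev = state
    P = Kp * e                                             # (12.5)
    I = I_prev + Kp / Ti * Tk * e_prev                     # (12.6)
    D = (Td / (Td + Nd * Tk) * D_prev
         + Kp * Td * Nd / (Td + Nd * Tk) * (e - e_prev))   # (12.7)
    return P + I + D, (I, D, e)                            # (12.4)

# Send-on-delta loop around a first-order plant dy/dt = (u - y)/tau.
dt, tau, r, delta_s = 0.01, 0.5, 1.0, 0.1
y = y_last = t_last = 0.0
events = 1
u, state = pid_step((0.0, 0.0, r - y), r - y, dt)          # initial transmission
for n in range(1, 1001):                                   # simulate 10 s
    t = n * dt
    y += dt * (u - y) / tau                                # Euler plant step
    if abs(y - y_last) >= delta_s:                         # send-on-delta event
        u, state = pid_step(state, r - y, t - t_last)
        y_last, t_last = y, t
        events += 1
```

Between events the control value is simply held, so once the output stays inside the ±ΔS band, neither sensor nor controller acts (the sticking effect) until the output drifts across the threshold again; the loop transmits only a small fraction of the 1000 plant steps while still settling near the setpoint.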

sampling, the uncertainty of the PID controller's integral action is limited by ±ΔS·Tk (if Tz ≪ Tk and Tmin ≪ Tk), that is, proportional to the intertransmission interval. Integral sampling reduces the uncertainty as also small errors accumulate and can thus cause a new event. For reaching the same mean intertransmission interval, all sampling schemes have time slots with more messages as well as with less messages than the other schemes. Therefore, all sampling schemes have advantages and disadvantages regarding accuracy of control. In general, very large intertransmission intervals (Tk ≫ Tz, characteristic for control loops with send-on-delta sampling in the "steady state") cause large approximation errors. Although small intertransmission intervals improve the approximation of the error e(t), very small intertransmission intervals can cause large differences of the derivative action compared with a periodic control loop, if the period TA of the periodic control loop is greater than the sensor sampling time Tz [60]. It depends on the controller implementation and the controller and sampling parameters, how severely the approximation errors degrade the control performance as discussed in Section 12.5.

Triggering of events and sticking

Another example of event-triggering issues is that samples are not triggered, if the signal (plant output or control error) is staying inside the deadband defined by the sampling scheme such as ΔS in send-on-delta sampling. Therefore, if not specially adapted for this case, an event-based controller is not able to control the signal within this deadband since the controller is not invoked as long as no message arrives. This can have the effect visible in Figure 12.2 at 1.2 < t < 1.8, at the peak of the overshoot. There the event-based sensor does not produce a new message because the plant output changes just a little. The controller is then not invoked since no event arrives. The control loop achieves a temporary equilibrium, also called sticking [61], when both sensor and controller do not send any messages. The system resides in this state until some event such as a significant plant output change, load disturbance, noise, or expiry of the Tmax timer forces the sensor to send a message. Such sticking degrades the control loop performance and is critical if the control error is large, for example, in case of large overshoots, since in that case the control error remains large for a long time.
Sticking leads further to integral action "windup." If after the sticking phase a new message arrives, the change in the integral part is

ΔIk = (KP/TI) · Tk · ek−1,  (12.18)

and thus directly proportional to the duration Tk of the sticking phase. A sudden, extensive change of the integral action happens at this time instance. This abrupt behavior of the integral action causes a control loop behavior that is strongly different from a continuous-time or periodic control loop. This is clearly visible in Figure 12.2 at t ≈ 1.8.

Quantization effects and limit cycles

Event-based sampling limits the accuracy of the sampled signal and that results in non-linear quantization effects [70].
One possible effect of the quantization is a steady-state error which can be interpreted as an infinitely long sticking effect. Another possible quantization effect is limit cycles, that is, permanent oscillations around the setpoint, which appear under proper conditions in "steady-state situations" when the reference variable and the disturbances are constant. Limit cycles have already been detected by Årzén in the first publication on event-based PID control [4]. Their magnitude is mainly defined by the deadband threshold of the sampling scheme. Besides event-based sampling, another precondition for the existence of limit cycles is that the controller or the plant contains integral action [44].
In many applications, limit cycles are undesired since they increase the event rates without improving the control performance. Limit cycles may be more acceptable than a constant steady-state error because the expected value of the mean error is usually smaller.
In some applications, there are external disturbances that are practically never constant, unknown, and larger than ΔS. For example, in room temperature control loops, outdoor air temperature variations, solar radiation, and occupant influences are so severe and temporary that level crossings are generated regularly and cannot be avoided [20]. In such cases, the messages due to limit cycles can often be neglected, depending on the controller settings.

12.5 Solutions

The different issues related to event-based PID control need to be considered during the control design. This starts with the selection of an adequate sampling approach according to the specific design goals of the control loop as discussed in Section 12.5.1. Section 12.5.2 will discuss the selection of the sampling parameters. Special adaptations of the PID algorithms allow avoiding the critical issues. That will be explained in

Section 12.5.3. Section 12.6.2 will finally present tuning rules for the PID parameters.

12.5.1 Comparison of Sampling Schemes for PID Control

The first step for improving the control loop behavior is the selection of a suitable strategy for triggering events. Multiple simulation and experimental studies investigated the influence of different sampling approaches on event-based control, with the result that all sampling schemes have advantages and disadvantages.
Pawlowski et al. analyzed a greenhouse temperature control loop in a simulative study [36,37]. They explored the message rate and control loop performance of different sampling approaches (cf. Table 12.1 in Section 12.2.1). They concluded that event-based sampling allows saving about 80 % of the messages in comparison to periodic sampling. However, the parametrization of event-based sampling approaches has a large influence on the resulting control performance.
Sánchez et al. [46] analyzed the level control of a tank as an example for an industrial control process. They investigated event-based and periodically sampled PI controls in an experimental setup. They concluded that event-based approaches are convenient control strategies when the number of messages should be reduced, but that the parameter selection is an open issue. Araújo [3] realized in his thesis a simulation study of several standard process types in combination with several sampling schemes and PI control variants. The best results emerged from combining a modified PIDPLUS controller with send-on-delta sampling on the controlled variable.
The study in [40] analyzes a room temperature and an illumination control loop as two typical building automation control scenarios with different signal dynamics. The temperature control is a slow process with large dead times. On the contrary, the illumination control is a very fast process with no significant delay in comparison to the delay inflicted by the network communication and the sampling time. The study compared the event-based sampling approaches given in Table 12.1 using different performance criteria. In extension to other works, it also explores the influence of measurement noise, message losses, and transmission delays on the event-based controls. Depending on the relevance of the investigated criteria, an adequate sampling approach can be selected for a specific scenario. The criteria are defined as follows, and the results are visually presented in Figure 12.3.

FIGURE 12.3
Evaluation of the adaptive sampling approaches from Table 12.1 (Rating: 4 – best suited; 3 – well suited; 2 – mean; 1 – disadvantageous). (Bar chart rating periodic, send-on-delta, integral, gradient-based, and model-based sampling with respect to control loop performance, energy consumption, and robustness to transmission errors, measurement noise, and transmission delays.)

Control loop performance

Control loop performance is related to the sampling error of event-based sampling as explained in Section 12.3. The control loop performance usually forms a trade-off with the energy consumption. A smaller sampling period Tz and threshold (ΔS, ΔI, or ΔP) leads to a lower sampling error and improved control loop performance but requires more message transmissions and thus energy. In [40], JIAE has been used for evaluating control performance. According to these simulations, periodic sampling is the best sampling approach regarding control loop performance, followed by integral sampling and gradient-based sampling at the cost of a higher message rate.

Message rate and energy consumption

Wireless sensor nodes are often powered by batteries or energy harvesting. The energy consumption for transmitting is in many cases much higher than that for sampling [17,21,26], such that the amount of transmitted messages should be reduced to a minimum. All event-based sampling approaches can save about 80 % of the messages compared with periodic sampling without significant losses in control performance [36,37,40]. The lowest energy consumption can be reached with model-based sampling, but only if the signal is slowly changing, free of noise, and the model and algorithm are perfectly parametrized. In all other cases, integral and gradient-based sampling provide good all-around solutions, where particularly the latter is quite robust with regard to parametrizations.

Robustness to transmission errors

Wireless communication is error-prone and easily disturbed by other wireless networks, electromagnetic sources, moving people, or objects [73]. Therefore,

the PID controller works on outdated information and cannot react to changes in the plant. This effect is increased for model-based sampling as the estimator model wrongly estimates the plant behavior. Periodic and gradient-based sampling are the most robust approaches because they usually transmit more samples than the other approaches.

Robustness to measurement noise

Measurements carry small noise added by the process disturbances or sensor hardware. This may require a filtering of the sampled signal. In the case of periodic sampling, the noise is often tolerated because it does not influence the mean sampling rate. However, in case of event-based sampling, noise may trigger the transmission condition unnecessarily and result in higher power consumption. Especially, integral and gradient-based sampling as well as model-based approaches are sensitive to noise. In contrast, for send-on-delta sampling, the threshold ΔS allows filtering noise within its tolerance belt.

Robustness to transmission delays

Transmission delays are problematic for the control task if they are not negligibly small compared with the dominating time constants of the control loop. Transmission delays are usually below 100 milliseconds in wired networks [38] but can be larger in wireless networks due to message caching in multihop transmissions if hops have different sleep cycles. Periodic sampling shows the worst response to large transmission delays because the numerous messages created by periodic sampling are disarranged when the delay's jitter is larger than the sampling period. As a result, the messages are not received in the correct order [54]. This adds noise at the controller's input that is boosted by the proportional and derivative part of the PID controller.

Alternatives

Besides considering these different triggering criteria, there are also sampling-related solutions that may help in specific cases. For example, the selection of the signal to be transmitted should be regarded. Triggering based on the control error allows for switching off the signal generation if the control error is smaller than a deadband. That can reduce the message rate in steady state [3]. On the contrary, evaluating the control error requires knowledge of the setpoint whose transmission needs additional energy in some scenarios [3,23].
If only the message rate is important and not the computational load, it is possible to invoke the controller periodically without synchronization to the transmission events [20]. This avoids sticking because the integral action is updated regularly even if no message arrives.
The performance of the event-based sampling approaches depends strongly on the parametrization and the controller. Therefore, Section 12.5.2 will investigate the sampling parametrization and Section 12.5.3 the control algorithm details.

12.5.2 Adjustment of the Sampling Parameters

The problems presented in Section 12.4 can be reduced by appropriate choice of the sampling parameters, in particular Tmax and the thresholds ΔS, ΔI, or ΔP. This approach has the advantage that the PID control algorithm need not be changed.

Min-send-time

The setting of Tmin should always be chosen according to the sampling theorem, that is, about 6 to 20 times smaller than the inverse of the Nyquist frequency [61]. Lower settings increase the message rate without improving the control performance significantly; higher settings endanger stability. Tmin is only needed if the intersample interval Tz is shorter than Tmin, see Section 12.2.1.

Max-send-time

Approximation errors and sticking are a consequence of long intertransmission intervals. Hence, reducing Tmax reduces these problems, at the cost of an increased message rate. The differences between the event-based and continuous-time (or periodic for Tz > 0 or Tmin > 0) control loops diminish with decreasing max-send-time.
In [66] a rule for setting the maximum timer Tmax has been proposed in the case of send-on-delta sampling. The max-send-time should lie between several bounds which are defined by the process time constants, the limit cycles, user-defined specifications about maximum approximation errors, and the life sign:

max{Teq, TLC} < Tmax < min{T̄max, Twatchdog},  (12.19)

where Teq is defined by

Teq = ΔS / (dyref(t)/dt),  (12.20)

with the denominator being the mean value of the slope dyref/dt of the periodic reference plant output during the rise time of the step response. TLC is the period of the limit cycles in steady state, T̄max is designed to limit the approximation errors and sticking (see [66] for details), and Twatchdog is the

maximum setting to give a watchdog signal to the actu- a PI-P controller which has two parameters, the send-
ator. The watchdog time could be set to about 1 to 2 times on-delta threshold ΔS and an aggressiveness-related
the settling time of the closed loop [63]. parameter α, where ΔS is designed for getting a desired
If the sensor is located in another device than the message rate and α for shaping the trade-off between
controller, it is advantageous to use different max-send- control performance and actuator action or robustness
timers in both devices. The max-send-time in the sensor to modeling uncertainties.
should be as large as possible for reducing network
load—as long as safety is not compromised (watch-
12.5.3 PID Algorithms
dog function) or the energy efficiency of the receiver is
degraded due to timing problems [21]. The additional Not only the sampling schemes and their parameters
max-send-timer is used to invoke the controller locally but also the PID control algorithm and its parameters
if no message arrived before this timer expires. It should can be improved to approach the problems of the sim-
be as small as possible for reducing sticking and approx- ple algorithm presented by Equation 12.4. Improved
imation errors—limited by acceptable processor load. As algorithms are presented in this section, and improved
the controller is invoked more frequently than the sen- controller parameters are discussed in Section 12.6.2.
sor, it has to reuse the last received sensor value several A comparison of different advanced PID control algo-
times [22,67]. That leads to another kind of approxima- rithms regarding several performance criteria is made
tion error, but the sampling error is limited according to in [67]. These simulations confirm that all explored algo-
the sampling threshold. rithms have their advantages and disadvantages, and
A comparable strategy is to force a new sensor mes- the application decides as to which strategy fits best.
sage if sticking is detected [61,63]. This increases the
message rate in such situations but avoids computations Integral action
with outdated data. Especially the computation of integral action has been
considered often in literature [48] because the integral
For send-on-delta sampling, the threshold should be small enough that typical step responses (with step size Δr) get a suitable resolution, that is, ΔS = 0.05 … 0.15 · Δr [63]. Furthermore, it should be smaller than the tolerated steady-state error Δe,max, or better still, ΔS ≤ Δe,max/2 [63]. However, it should not be set too small, because otherwise high-frequency noise could trigger many messages. Vasyutynskyy and Kabitzsch [62] noticed from simulations that the event rate jumps up if the standard deviation of the noise is larger than ΔS/2. Thus, ΔS should lie between two times the noise band and half of Δr [44]. Several other guidelines are discussed in [61].

Beschi et al. [8] found for send-on-delta sampling that both the mean intertransmission interval and the control performance (measured as the maximum difference between the outputs of the event-based control loop and a periodic control loop) are roughly proportional to ΔS.

For integral sampling, similar rules can be used. When assuming a linear evolution of the transmitted variable between the samples, a nearly equal intertransmission interval T̄is can be reached [39] by setting the threshold to

    ΔI = (1/2) · ΔS · T̄is.    (12.21)

Recently, Ruiz et al. [44] presented an alternative approach for optimizing the threshold. They introduced

Threshold action is the main reason for deviations between periodic and event-based control. In the simple PID algorithm defined in equation (12.4), the integral part has been computed from the last control error using a zero-order hold with

    Ik = Ik−1 + (KP/TI) · Tk · ek−1.    (12.22)

This inherently assumes that the control error remained constant at this value ek−1. One could intuitively assume that the implementation

    Ik = Ik−1 + (KP/TI) · Tk · ek,    (12.23)

is at least equally well suited; it assumes that the control error changed instantly after the last transmission to its current value, that is, ek. The advantage is that the controller reacts immediately to the current control error. Unfortunately, this version is sensitive to large control errors in combination with long intertransmission intervals. These occur in event-based sampling, for example, on setpoint changes after long steady-state periods. The controller then assumes that the control error has been that large for the full intertransmission interval, which leads to large approximation errors. A similar case is the original implementation of Årzén [4],

    Ik = Ik−1 + (KP/TI) · Tk−1 · ek−1,    (12.24)

which allows a faster computation but has the same problem regarding the impact of approximation errors.
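The practical difference between the three integral updates shows up exactly in the situation described above: a large error arriving after a long intertransmission interval. The following sketch applies Equations 12.22 through 12.24 to such a sequence; the gains and event data are illustrative values, not taken from the chapter.

```python
# Sketch: the three integral-action updates of Equations 12.22-12.24
# applied to one event sequence. KP, TI, and the event data are
# illustrative values, not taken from the chapter.

KP, TI = 2.0, 5.0

# (T_k, e_k): intertransmission interval and control error at event k.
# A long quiet phase (T_k = 10 s) is followed by a setpoint change
# that produces a large error e_k = 1.
EVENTS = [(0.5, 0.1), (10.0, 1.0)]

def integral_zoh(events, kp, ti):
    """Equation 12.22: hold the previous error e_{k-1} over T_k."""
    i_term, e_prev = 0.0, 0.0
    for t_k, e_k in events:
        i_term += kp / ti * t_k * e_prev
        e_prev = e_k
    return i_term

def integral_current(events, kp, ti):
    """Equation 12.23: apply the current error e_k over T_k."""
    i_term = 0.0
    for t_k, e_k in events:
        i_term += kp / ti * t_k * e_k
    return i_term

def integral_arzen(events, kp, ti):
    """Equation 12.24: previous error over the previous interval."""
    i_term, t_prev, e_prev = 0.0, 0.0, 0.0
    for t_k, e_k in events:
        i_term += kp / ti * t_prev * e_prev
        t_prev, e_prev = t_k, e_k
    return i_term

print(integral_zoh(EVENTS, KP, TI))      # ≈ 0.40
print(integral_current(EVENTS, KP, TI))  # ≈ 4.02: the error is booked
                                         # for the full 10 s interval
print(integral_arzen(EVENTS, KP, TI))    # ≈ 0.02
```

The jump of (12.23) by roughly KP/TI · 10 · 1 illustrates why that variant is sensitive to setpoint changes after long steady-state phases.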
Event-Based PID Control 245
Alternatively, the control error can be estimated with a first-order hold. The corresponding equation is

    Ik = Ik−1 + (KP/TI) · Tk · (ek + ek−1)/2.    (12.25)

This piecewise linear approximation improves the interpolation for short intertransmission intervals. However, it is also vulnerable to large control errors and long intertransmission intervals, as it also contains the current control error ek. Therefore, the version (12.22) is the best-suited approximation. A comparison of more interpolation methods can be found in [63].

Durand and Marchand presented three other integral algorithms for send-on-delta sampling [15], repeated in the more comprehensive overview [48]. The first algorithm (already published in [14]) uses the knowledge that the signal must have been in the deadband ±ΔS around the setpoint up to Tz before the message, because otherwise a message would have been transmitted earlier. This results in

    Ik = Ik−1 + (KP/TI) · Ẽk,    (12.26)

with

    Ẽk = (Tk − Tz) · ΔS + Tz · ek.    (12.27)

This approximation is of course not true if |ek−1| > ΔS, like in sticking situations. But in that case it is advantageous, too, as it reduces the "windup" after the next message due to the smaller change of the integral action. The second approach uses an exponential forgetting function to limit the integral action:

    Ik = Ik−1 + (KP/TI) · Tk · exp{Tz − Tk} · ek.    (12.28)

The best simulations regarding control performance and message rate resulted from the combination of both approaches by using

    Ẽk = (Tk · exp{Tz − Tk} − Tz) · ΔS + Tz · ek,    (12.29)

in Equation 12.26. It is also computationally the most expensive. Durand and Marchand suggest using a look-up table if the exponential function is too computationally expensive for an industrial controller.

Other examples of limited integral action can be found in [3,61,63].

Setpoint weighting and antireset windup

Setpoint weighting (two degrees of freedom) can often be found in the PID literature. Its goal is to avoid sudden actuator reactions to stepwise setpoint changes, as these reduce the actuator lifetime. Hence, setpoint weighting is not specific for event-based PID control [5,58,72]. For example, Årzén [4] and Sánchez et al. [48] derived their event-based PID algorithm starting from the continuous-time PID controller

    U(s) = KP · [β · R(s) − Y(s) + (1/(TI · s)) · (R(s) − Y(s)) + (TD · s/(1 + (TD/ND) · s)) · (γ · R(s) − Y(s))],    (12.30)

and Vasyutynskyy et al. [63,69] used the derivative action

    Dk = (TD/(TD + ND · Tk)) · Dk−1 + (KP · TD · ND/(TD + ND · Tk)) · (yk − yk−1).    (12.31)

As this is not specific for event-based control, common textbooks about PID control can be referenced for details.

Also, antireset windup [4,23], useful for improving the integral action in the presence of actuator saturation, is independent of the sampling scheme and can be found in the standard PID control literature [72].

Observer-based algorithm

Another way to decrease the approximation errors as well as to avoid sticking and limit cycles is the use of a process model with an observer in the controller, which periodically estimates the current process output [65] (i.e., also without getting a message from the sensor). Based on this estimation, the controller computes the manipulated variable. If a message from the sensor containing the current process output arrives, the state of the model is updated via the observer. This allows a smaller approximation error than simply using the last transmitted value, provided that the model is accurate and disturbances change relatively seldom. A first-order observer brings acceptable control results while keeping the processing efforts in the controller small [65,67].

Other algorithmic extensions against limit cycles

For avoiding limit cycles, besides the observer approach, several other algorithmic solutions have been proposed. Some authors proposed to use intentionally "biased" data (e.g., ek = 0, although the true control error is not exactly zero) [3,48,59,60,63]. Another possibility is to filter the control signal [3,60]. The basis of all these algorithms is a limit cycle detection strategy.
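A short sketch makes the damping effect of the limited integral action (Equations 12.26 through 12.29) concrete; the controller constants and the event below are illustrative values, not from the chapter.

```python
import math

# Sketch of the limited integral action of Durand and Marchand
# (Equations 12.26-12.29). KP, TI, Delta_S, Tz, and the event values
# are illustrative.

KP, TI, DELTA_S, TZ = 2.0, 5.0, 0.05, 0.1

def e_tilde_deadband(t_k, e_k):
    """Equation 12.27: before T_z the signal was inside the deadband,
    so only the last T_z of the interval sees the full error e_k."""
    return (t_k - TZ) * DELTA_S + TZ * e_k

def e_tilde_combined(t_k, e_k):
    """Equation 12.29: deadband knowledge plus exponential forgetting."""
    return (t_k * math.exp(TZ - t_k) - TZ) * DELTA_S + TZ * e_k

def integral_update(i_prev, e_tilde):
    """Equation 12.26: I_k = I_{k-1} + (KP/TI) * E~_k."""
    return i_prev + KP / TI * e_tilde

# Setpoint change after a long quiet period: T_k = 10 s, e_k = 1.
# The naive update (12.23) would add KP/TI * 10 * 1 = 4 to the
# integral; both limited versions stay far below that.
print(integral_update(0.0, e_tilde_deadband(10.0, 1.0)))  # ≈ 0.238
print(integral_update(0.0, e_tilde_combined(10.0, 1.0)))  # ≈ 0.038
```

The combined variant (12.29) suppresses the update the most, matching its role as the best trade-off between performance and message rate reported above.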
12.6 Special Results for Send-on-Delta Sampling

Although all sampling schemes have their own advantages and disadvantages, send-on-delta sampling [31], also called level-crossing sampling [22,27] or absolute deadband sampling [62,70], is the most commonly used approach in practice as it is standard in LON and EnOcean networks, see also [44]. Send-on-delta sampling is very easy to implement, and the only parameter ΔS is easy to understand. Therefore, it also has the strongest focus in research. For that reason, this section will present some results which have been developed only for send-on-delta sampling.

It is necessary to distinguish between ideal and real send-on-delta sampling. Real send-on-delta sampling fits the description above, especially containing a minimum duration between two subsequent messages due to the internal sampling period Tz of the sensor or the min-send-time Tmin. In contrast, ideal send-on-delta sampling means that the messages are sent immediately when a level has been crossed, that is, Tz → 0. Although this cannot be realized technically and would additionally be more energy-demanding, as the signal to be sampled must be evaluated continuously, this assumption has been used for a lot of theoretical analyses; some of them will be discussed in this chapter.

Furthermore, many of the publications referenced in the following assume symmetric send-on-delta sampling (SSOD). SSOD is a subtype of ideal send-on-delta sampling (and hence requires that the sensor is always active). Its basic idea is that the allowed values of the event-based signals are multiples of a given threshold, including zero. Hence, it is symmetric around zero. SSOD is defined as follows [10]: The normalized symmetric send-on-delta map nssod : ℝ → ℤ is a nonlinear dynamical system where the input v(t) and the output v*(t) are related by the expression

    v*(t) = nssod(v(t)) =
        (i + 1)   if v(t) > (i + 1) and v*(t−) = i,
        i         if v(t) ∈ [(i − 1), (i + 1)] and v*(t−) = i,
        (i − 1)   if v(t) < (i − 1) and v*(t−) = i,
    (12.32)

and v*(t0) = sgn(v(t0)) · ⌊|v(t0)|⌋, where t0 is the starting time instant, i is an integer that denotes the current sampling system output, and ⌊s⌋ denotes the integer part of s.

The symmetric send-on-delta map ssod : ℝ → ℝ is a nonlinear dynamical system with an input v(t) and an output v*(t) that are related by the following expression:

    v*(t) = ssod(v(t); ΔS, β) = ΔS · β · nssod(v(t)/ΔS).    (12.33)

The parameter β allows the generalization of the results, but is usually set to 1 and can be ignored.

12.6.1 Stability Analyses for PID Controllers with Send-on-Delta Sampling

The basic requirement for tuning a control loop is stability, because unstable control loops are useless and often lead to damage of equipment or even compromise the safety of people. Stability analyses are hence at the center of control theory and are discussed in this section. Event-based sampling makes such considerations much more difficult than for continuous-time or periodic control, but there are already some important results.

The stability of control loops with send-on-delta sampling of the plant state and full state feedback was considered by Otanez et al. in 2002 [35]. Their result is that it can be proven using the Lyapunov criterion that the event-based system is stable if the continuous-time system with the same controller settings is stable.

A partial stability analysis for a PID and a modified PIDPLUS controller (a PID controller with limited integral action), together with a first-order (or, by an additional filter, second-order) process without time delay, has been given by Araújo in 2011 [3]. He analyzed the maximum (constant) sampling period, which is much higher for the PIDPLUS controller than for the simple PID controller.

Tiberi et al. presented a stability analysis for a control loop consisting of an event-based PIDPLUS controller, a first-order process without time delay, and a more complex send-on-delta triggering rule ("PI-based triggering rule") [59]. Chacón et al. performed stability analysis for a PI controller with a time-delay-free process and send-on-delta sampling based on investigations regarding limit cycles [13].

Ruiz et al. gave a stability analysis for a so-called PI-P controller (based on the Smith predictor) with a first-order plus time-delay process with SSOD sampling on the control error [44]. They also formulated a general concept for analyzing stability of control loops with event-based sampling: "System stability is characterized by the absence of limit cycles and the presence of an equilibrium point around the setpoint." In conclusion, a stability proof should contain two elements: "the demonstration of the existence of the equilibrium point and determination of its reach." However, other authors showed that there are also other possible approaches for investigating system stability, depending on the underlying stability definition.
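The SSOD maps can be written down directly from Equations 12.32 and 12.33. The sketch below implements them as a stateful quantizer, with the initial output sgn(v(t0))⌊|v(t0)|⌋; the class name and the test signal are illustrative choices, not from the chapter.

```python
import math

# Sketch of the normalized SSOD map "nssod" of Equation 12.32 and the
# scaled map "ssod" of Equation 12.33. The map is stateful: the output
# i only moves when the input leaves the band [(i - 1), (i + 1)].

class Nssod:
    def __init__(self, v0):
        # Initial output: sgn(v(t0)) * floor(|v(t0)|)
        self.i = int(math.copysign(math.floor(abs(v0)), v0))

    def step(self, v):
        if v > self.i + 1:
            self.i += 1
        elif v < self.i - 1:
            self.i -= 1
        return self.i

def ssod(samples, delta_s, beta=1.0):
    """Equation 12.33: v*(t) = Delta_S * beta * nssod(v(t)/Delta_S)."""
    q = Nssod(samples[0] / delta_s)
    return [delta_s * beta * q.step(v / delta_s) for v in samples]

# A slow ramp: the output climbs one level at a time and always stays
# within Delta_S of the input (the hysteresis band of SSOD).
ramp = [k * 0.01 for k in range(400)]   # 0.00 ... 3.99
out = ssod(ramp, delta_s=1.0)
print(sorted(set(out)))                 # the quantized levels
print(max(abs(v - w) for v, w in zip(ramp, out)))
```

Note that the output values are multiples of ΔS including zero, which is exactly the symmetry property stated above.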
Beschi et al. published several works regarding the stability of control loops with SSOD. They developed their stability analyses in a series of publications [6–8,10], of which [10] is the most generic, even if not limited to PID controllers. A variation of this approach is explained in the following paragraphs.

The process is assumed to be linear but may have any order and a time delay τ. Several architectures with SSOD at different positions can be unified by combining controller and process into a single system, such that the control loop consists of a linear system and the SSOD unit. This can be formulated as follows:

    ẋ = A x(t) + B v*(t) + Bo o(t),
    w(t) = C x(t) + Do o(t),
    v(t) = −w(t − τ),
    v*(t) = ssod(v(t); ΔS, β) = ΔS · β · nssod(v(t)/ΔS),    (12.34)

with the initial conditions

    x(t0) = x0,
    v*(ξ) = υ*(ξ), with ξ ∈ [t0 − τ, t0),    (12.35)

where x(t) ∈ ℝ^n is the process state; A ∈ ℝ^(n×n) is the system matrix; B ∈ ℝ^n, Bo ∈ ℝ^(n×m), C ∈ ℝ^(1×n), and Do ∈ ℝ^(1×m) are the other matrices and vectors describing the system properties; w(t) ∈ ℝ, v(t) ∈ ℝ, v*(t) ∈ ℝ, and υ*(t) ∈ ℝ are signals whose physical meaning depends on which signal in the control loop is sampled; and x0 ∈ ℝ^n is the initial state. The external signal o(t) ∈ ℝ^m can originate from the reference variable, disturbances, or feedforward action.

Applying the transformations

    ṽ(t) := v(t)/ΔS,   ṽ*(t) := v*(t)/(ΔS β),   x̃(t) := x(t)/ΔS,
    w̃(t) := w(t)/ΔS,   õ(t) := o(t)/ΔS,   υ̃*(t) := υ*(t)/(ΔS β),
    x̃0 := x0/ΔS,    (12.36)

on the event-based closed-loop system, one can obtain

    x̃˙ = A x̃(t) + B β ṽ*(t) + Bo õ(t),
    w̃(t) = C x̃(t) + Do õ(t),
    ṽ(t) = −w̃(t − τ),
    ṽ*(t) = nssod(ṽ(t)),    (12.37)

with the initial conditions

    x̃(t0) = x̃0,
    ṽ*(ξ) = υ̃*(ξ), with ξ ∈ [t0 − τ, t0).    (12.38)

This system does not depend on ΔS but has the same stability properties (as long as ΔS is finite). Hence, it is possible to choose ΔS according to the desired trade-off between message rate and precision without influencing the stability of the overall system.

For SSOD (as a subtype of ideal send-on-delta sampling), the difference q(t) between v(t) and v*(t) is limited to ±ΔS, see also [6]. It is possible to replace (12.34) by

    v*(t) = { v(t) + q(t)   if t ≥ t0,
            { υ*(t)         otherwise,
          = v(t) + q̂(t),

with

    q̂(t) = { q(t)            if t ≥ t0,
           { υ*(t) − v(t)    otherwise.    (12.39)

So one can obtain

    ẋ = A x(t) + B v*(t) + Bo o(t),
    w(t) = C x(t) + Do o(t),
    v(t) = −w(t − τ),
    v*(t) = v(t) + q̂(t),    (12.40)

with the initial conditions (12.35). This system can be transformed to

    ẋ = A x(t) + B v(t) + B q̂(t) + Bo o(t),
    w(t) = C x(t) + Do o(t),
    v(t) = −w(t − τ),    (12.41)

and further, by ô(t) = (o(t)^T q̂(t))^T and B̂o = (Bo B), to

    ẋ = A x(t) + B v(t) + B̂o ô(t),
    w(t) = C x(t) + Do o(t),
    v(t) = −w(t − τ).    (12.42)

Classic continuous-time stability analysis methods can be applied to this equivalent system. If this continuous-time system is stable for any bounded external signal ô(t), then the event-based system is stable, too. If the event-based system is unstable, then the continuous-time system is unstable as well. Thus, symmetric send-on-delta sampling does not degrade the system stability compared with continuous-time control.

This result may sound surprising, as the Nyquist–Shannon sampling theorem [25,28] does not hold for event-based sampling. It is also surprising because of the limit cycles, which look like impending instability. In fact, limit cycles may reduce the control performance, but they always have a finite amplitude and never bring the system to instability if the equivalent continuous-time control loop is stable. Anyhow, as limit cycles are undesirable in most cases, there are also a lot of theoretical studies on conditions for the existence of limit cycles [6–8,10,12,13,70].
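The key step in this argument is that the SSOD quantization error q(t) = v*(t) − v(t) is bounded by ±ΔS, so the event-based loop equals a continuous-time loop with a bounded extra input. This can be checked numerically with a minimal SSOD tracker; the test signal and threshold below are arbitrary illustrative choices.

```python
import math

# Numerical check of the key step above: for SSOD, the quantization
# error q(t) = v*(t) - v(t) stays within +/-Delta_S, so v*(t) can be
# written as v(t) + q(t) with q bounded. The signal and Delta_S are
# illustrative.

DELTA_S = 0.2

def ssod_track(samples, delta_s):
    """Minimal SSOD map (Equation 12.33 with beta = 1), for a slowly
    varying input that never skips a level between samples."""
    i = int(math.copysign(math.floor(abs(samples[0] / delta_s)), samples[0]))
    out = []
    for v in samples:
        vn = v / delta_s
        if vn > i + 1:
            i += 1
        elif vn < i - 1:
            i -= 1
        out.append(i * delta_s)
    return out

# A slowly varying test signal (small steps so no level is skipped).
t = [k * 0.001 for k in range(20000)]
v = [math.sin(2 * math.pi * 0.2 * tk) + 0.5 * math.sin(2 * math.pi * 0.07 * tk)
     for tk in t]
q = [vs - vk for vk, vs in zip(v, ssod_track(v, DELTA_S))]
print(max(abs(x) for x in q))  # never exceeds Delta_S
```

Since the bound holds regardless of how the input evolves, treating q(t) as an extra bounded external signal, as done in (12.40) through (12.42), is justified.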
The main conclusions of this section are:

1. It is possible to use continuous-time controller design methods to find controller parameters that guarantee stability of the event-based closed-loop system, at least when assuming SSOD.

2. The threshold ΔS does not influence the stability of the system and can thus be chosen flexibly to define the trade-off between control performance and message rate.

Nevertheless, it must be noted here that the presented analysis is based on ideal send-on-delta sampling, while for real send-on-delta sampling the used assumptions do not hold. Further, the controller is modeled as a continuous-time system, which does not fit digital implementations and attenuates practical issues such as sticking and approximation errors. To the authors' knowledge, up to now there are no publications about stability analyses with real send-on-delta sampling.

12.6.2 Tuning Rules for PI Controllers with Send-on-Delta Sampling

In the first publications on event-based control, beginning with Årzén in 1999 [4], it was common to presuppose a well-tuned discrete-time (periodic) or continuous-time control loop and replace only the sampling scheme without changing the controller parameters. The main challenge of these publications was to identify and solve the new problems which arise from the event-based sampling, such as limit cycles and sticking. This often resulted in evaluating the quality of an event-based control loop by comparing it with the discrete-time or continuous-time counterpart and trying to minimize the differences (in terms of control performance and trajectories).

Appropriate tuning of the PID parameters, KP, TI, and TD, can help improve the trade-off between the mean message rate and control performance as well as avoid limit cycles. While stability proofs and analyses on limit cycles are very important for controller design, they state nothing about how to tune the parameters inside the allowed range.

Vasyutynskyy suggested changing the parameter settings in "steady state" to avoid limit cycles [60,63], as the probability of limit cycles and sticking (and thus reduced control performance and higher message rate) rises with growing aggressiveness of the controller tuning. Araújo gave an example where limit cycles disappeared after reducing the proportional coefficient KP, leading to improved control performance and message rate [3]. The problem of systematic PID parameter tuning for event-based sampling has been explored in parallel by Beschi et al. [8,11] and the authors [22,23]. Later, Ruiz et al. [44] proposed a tuning rule for a PI-P controller based on the Smith predictor.

This section assumes that the send-on-delta sampling is applied to the process output. The tuning rule does not consider the optimization of the message rate for the other signals occurring in the control loop.

Generic concept for evaluating the sensor energy efficiency of event-based control loops

In [22], a generic approach for estimating the message number N of a closed-loop step response after a setpoint change has been presented. This approach is the basis for the subsequent controller tuning rules. It is based on ideal send-on-delta sampling (Tz → 0) and the assumption that ΔS is sufficiently small in comparison to the setpoint change Δr (the difference between the setpoint before and after the step) that causes the step response (ΔS ≪ Δr).

The message rate in "steady state" is mainly determined by limit cycles and the type of disturbances. It is therefore more complex to analyze. For simplicity of the derived equations, it is assumed here that the number of messages created due to limit cycles is negligible compared with the messages created during the step response. Tuning proposals with respect to disturbance rejection and limit cycles can be found in [11,23].

If the step response is free of overshoot and does not oscillate (typical for sluggish step responses), the message number of the step response can simply be approximated by

    N ≈ Δr/ΔS.    (12.43)

The exact value of N depends on the position of the levels relative to the setpoint values before and after the step and is bounded by

    ⌊Δr/ΔS⌋ ≤ N ≤ ⌊Δr/ΔS⌋ + 1.    (12.44)

Thus, the smaller the threshold ΔS compared with the setpoint change Δr, the more exact the approximation (12.43).

As shown in Figure 12.4, it is possible to divide a typical step response with oscillations (with overshoot) into different phases: first the rising edge P0 until the setpoint is reached, and then the "half cycles" Pi with i ∈ ℕ₊ (i > 0). For P0 the number of messages can simply be estimated by

    N(P0) ≈ Δr/ΔS.    (12.45)

The other parts can be approximated by

    N(Pi) ≈ 2 · |Δha,i|/ΔS,    (12.46)
FIGURE 12.4
Closed-loop step response, divided into parts between the crossings with the setpoint.
where Δha,i = yextr,i − rnew is the absolute overshoot (the distance of each extremum yextr,i from the new setpoint and steady-state value rnew).

For simplicity, it is assumed here that the decay ratio

    δdr = −Δha,(i+1)/Δha,i,    (12.47)

is independent of i. Thus,

    Δha,i = Δha,1 · (−δdr)^(i−1).    (12.48)

For PI controllers with long reset time (i.e., much proportional action and little integral action), the oscillations are usually not symmetrical around the new setpoint rnew, but the approach can be applied there, too, by modeling the step response as the superposition of a "virtual" nonoscillating step response with (12.43) and the oscillations relative to that step response with (12.46). The relative overshoot can be defined as

    hr = Δha,1/Δr.    (12.49)

Since for stable control loops

    0 < δdr < 1,    (12.50)

holds, it is possible to approximate the overall number of messages for the step response for ΔS ≪ Δr as

    N = Σ_{i=0}^{∞} N(Pi)    (12.51)
      ≈ Δr/ΔS + Σ_{i=1}^{∞} 2 · (Δha,1/ΔS) · δdr^(i−1)
      = (Δr/ΔS) · ν(hr, δdr),    (12.52)

with the introduction of the function

    ν(hr, δdr) := 1 + 2 · hr/(1 − δdr),    (12.53)

which does not depend on ΔS and Δr.

It is important to note that the number of generated messages rises with growing overshoot and more slowly decaying oscillations (i.e., larger δdr), leading to the controller design goal that overshoot (and oscillations in general) should be as small as possible for reducing the message rate. This result is not limited to PID controllers but can be used for any type of controller using send-on-delta sampling.

For real send-on-delta sampling (Tz > 0), the message number is usually lower due to the minimum duration between two subsequent messages, especially in the rising edge P0. Thus, ideally, a step response should have a very steep flank at the beginning, but no oscillations. This would optimize both message rate and control performance.

First tuning rule

In the following section, a tuning rule will be developed. It will use only proportional and integral action, for several reasons. Although derivative action can improve the overall control performance of many control loops, as shown by Ziegler and Nichols [74], derivative action is not often used in industry [5]. Two reasons for this fact are that derivative action amplifies high-frequency noise and that abrupt changes of the control error (e.g., after setpoint changes) lead to large peaks of the control signal, which in many cases speeds up the wear of actuators. Both problems can be reduced (at the cost of lower effectiveness of the derivative action) by introducing a filter on the control error such as in
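The message-count estimates of Equations 12.43, 12.44, and 12.51 through 12.53 can be cross-checked with a short numerical sketch; the response signals, threshold, and helper function below are illustrative, not from the chapter.

```python
import math

# Sketch checking the message-count estimates: Equation 12.44 for a
# monotone step response and Equation 12.53 for an oscillating one.
# Signals, thresholds, and helper names are illustrative.

def count_messages(y, delta_s):
    """Send-on-delta message count: transmit whenever the signal has
    moved by at least delta_s since the last transmitted value."""
    last, n = y[0], 0
    for yk in y[1:]:
        if abs(yk - last) >= delta_s:
            last, n = yk, n + 1
    return n

def nu(h_r, delta_dr):
    """Equation 12.53: nu = 1 + 2*h_r / (1 - delta_dr)."""
    return 1.0 + 2.0 * h_r / (1.0 - delta_dr)

DELTA_R, DELTA_S = 1.0, 0.03

# Overshoot-free first-order step response: N is close to Delta_r/Delta_S.
y_mono = [DELTA_R * (1.0 - math.exp(-0.001 * k)) for k in range(30000)]
n_mono = count_messages(y_mono, DELTA_S)
print(n_mono, DELTA_R / DELTA_S)

# Oscillating response with 20 % overshoot and decay ratio 0.5:
# Equation 12.52 predicts N ~ (Delta_r/Delta_S) * nu.
h_r, delta_dr = 0.2, 0.5
print(DELTA_R / DELTA_S * nu(h_r, delta_dr))   # ≈ 60 messages
```

The oscillating case needs almost twice as many messages as the monotone one for the same ΔS, illustrating the design goal of avoiding overshoot.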
Equation 12.3. The large peaks after setpoint changes can also be avoided by two-degree-of-freedom approaches (i.e., setpoint weighting, see Section 12.5.3). Furthermore, derivative action increases the design effort, as there is one more parameter to adjust. That makes the space of possible controller settings three-dimensional. Besides the reasons mentioned before, an additional drawback of derivative action in the case of event-based sampling is that the frequency of limit cycles rises, and hence the message rate.

Assumptions of the tuning rule

For the development of the tuning rule, a process model of the form

    G(s) = Km/(1 + Tm · s) · e^(−sτ),    (12.54)

is assumed, with proportional action coefficient Km, first-order time constant Tm, and dead time (or time delay) τ. This is a quite typical assumption [6,33,34,44]. The process type (12.54) is called FOPDT (first order plus dead time) [6], FOPTD or FOTD (first order plus time delay) [3,13], FOLPD (first order lag plus time delay) [34], or KLT model [3]. The quotient

    η = τ/Tm,    (12.55)

is called "degree of difficulty" [23,51], "normalized time delay" [43], or "normalized dead time" [8]. For this process structure, there exist simple parameter identification methods [72] and a lot of PI and PID controller tuning rules [34], optimized for continuous-time control or periodic sampling.

A continuous-time PI controller R(s) is used for the first analyses, because this allows abstracting from the complex event-based effects. The continuous-time results will be transferred afterward to event-based controllers. For this combination of process model and controller type, there are a lot of PI controller tuning rules [34] of the form

    KP = λ · Tm/(Km · τ),    (12.56)
    TI = ε · Tm,    (12.57)

with the process-independent constants λ and ε, which are given by the particular tuning rule. Since KP ∝ λ and TI ∝ ε, Equations 12.56 and 12.57 can be interpreted as a normalization of KP and TI, respectively. According to this interpretation, each tuning rule specifies only the selection of λ and ε.

The open-loop transfer function becomes

    Go(s) = R(s) · G(s)
          = KP · (1 + 1/(TI · s)) · Km/(1 + Tm · s) · e^(−sτ)
          = λ · (Tm/(Km · τ)) · (1 + 1/(ε · Tm · s)) · Km/(1 + Tm · s) · e^(−sτ)
          = [λ · (1 + ε · (τ/η) · s)] / [τ · s · ε · (1 + (τ/η) · s)] · e^(−sτ).

This result contains (for constant λ, ε, and η) only one time constant, τ, which can be replaced by Tm or TI. Changing that time constant leads only to a proportional change of the time constants of the closed-loop system, but not to a qualitative change of the system evolution in terms of overshoot, oscillations, gain or phase margins, etc. Thus, for qualitative analyses, it is enough to examine the control loop properties as a function of η and the controller factors ε and λ, while it is not necessary to consider the other process parameters (Km and Tm).

This common approach in continuous-time systems is also applied to develop a tuning rule for event-based systems. Without loss of generality, the tuning rule which is developed below optimizes λ and ε, as this allows more generic results, because their optimal setting depends only on η.

Characteristics of the PI controller parameters

Figure 12.5 shows the principal influence of the PI parameters on the closed-loop step response of a continuous-time control loop with an FOLPD process. Summarizing, with increasing λ (i.e., KP) and decreasing ε (i.e., TI), the step responses become steeper and get more significant oscillations, up to instability. The main difference between both is that large integral action (small ε) leads to smoother signal evolutions than large proportional action. For controller tuning, it is helpful to divide the controller settings into those giving a sluggish response without overshoot, a response with overshoot, and an unstable response. Furthermore, there is also a relatively small area of settings for which the control loop is oscillating without overshoot, typical for nearly pure proportional control (see TI = 5 in Figure 12.5b).

These typical characteristics are qualitatively shown in Figure 12.6 as a contour plot. The borders of each area depend on η. Spinner et al. [56] used comparable diagrams to explain iterative retuning approaches for PI controllers.

As concluded from the estimation of the number of messages in Equation 12.53, the settings with no or only little oscillation can be expected to result in a lower message rate. If it is desired to maximize the control performance without getting oscillations into the step
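The normalization of Equations 12.55 through 12.57 can be captured in a few lines: a tuning rule only has to supply the pair (λ, ε), and the process parameters do the rest. The numeric values below are purely illustrative.

```python
# Sketch of the normalization in Equations 12.55-12.57: a tuning rule
# only supplies (lambda, eps); the process parameters give the gains.
# All numeric values here are illustrative.

def eta(T_m, tau):
    """Equation 12.55: normalized dead time tau / T_m."""
    return tau / T_m

def pi_from_normalized(lam, eps, K_m, T_m, tau):
    """Equations 12.56 and 12.57: K_P = lam*T_m/(K_m*tau), T_I = eps*T_m."""
    return lam * T_m / (K_m * tau), eps * T_m

# Illustrative FOPDT process: K_m = 2, T_m = 20 s, tau = 4 s.
K_m, T_m, tau = 2.0, 20.0, 4.0
K_P, T_I = pi_from_normalized(lam=0.5, eps=1.0, K_m=K_m, T_m=T_m, tau=tau)
print(eta(T_m, tau))  # 0.2
print(K_P, T_I)       # 1.25 20.0
```

Because the step-response shape depends only on (λ, ε, η), the same (λ, ε) pair yields qualitatively the same closed loop for any process with the same η.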
FIGURE 12.5
Comparison of closed-loop step responses for different settings of KP and TI, respectively. The controlled variable y(t) and the reference variable r(t) are shown. The process parameters are Km = 1, Tm = 1, τ = 0.25. (a) Different KP for TI = 1. (b) Different TI and setpoint r(t) for KP = 1.84.
response, the optimum must lie somewhere near the border between sluggish and oscillating step responses. If the control performance is evaluated using JISE, the optimum will be near the setting with least JISE under the constraint of no oscillations. This optimum is shown in each subfigure of Figure 12.6 in the form of a triangle.

If using the product JProd of message number N and JISE as the optimization goal, as defined in (12.17), the message number must be replaced with an estimation, since the simulation uses a continuous-time controller. From (12.52) one can take over the approximation

    N ≈ (Δr/ΔS) · ν.    (12.58)

This does not change the location of the optimum with respect to λ and η, since

    Jν = ν · JISE ≈ (ΔS/Δr) · JProd ∝ JProd.    (12.59)

For the computation of the normalized message number ν, there are two main possibilities: First, it can be approximated by (12.53), or it can be approximated for ΔS ≪ Δr by

    ν ≈ (1/Δr) · ∫₀^Tend |dy(ξ)/dξ| dξ,    (12.60)

which is more exact, as it depends on fewer assumptions. When using (12.60), the optimum regarding Jν is shown in Figure 12.6 as a square for each η.

As stated in Section 12.3.2, the Pareto optimal controller settings are very interesting, because for all other settings there is at least one Pareto optimal setting which improves one of both performance indices (ν or JISE) without degrading the other one. If the Pareto optimal points are known for a given process, all other settings are not interesting. Figure 12.6 shows the Pareto optimal region for several η. The region starts at settings with λ = 0, as there is no controller action and thus also no change of the controlled variable, resulting in ν = 0 (i.e., N = 0). These settings are practically not reasonable, just like the sluggish settings in the upper left corner with little λ and large ε, which did not reach the setpoint in the simulated time and thus produced only a few messages but gave better control performance than the settings with λ = 0. These solutions would not be Pareto optimal if the simulated time were infinitely long, because then the step responses would reach the setpoint, and so the settings would have no advantage regarding JISE and ν compared with the settings with faster step responses without overshoot. The practically interesting region begins at the border to settings with overshoot. Beginning there, the relevant Pareto optimal points lie on a nearly linear area beginning at oscillation-free settings (with low message rate) and ending at the point of minimum JISE (marked with a plus symbol) for larger λ and ε, at the cost of more oscillations and thus messages.

It is possible to store the Pareto optimal settings for each η in a look-up table and use that for controller design in a computer-aided controller design system. Since rules of thumb are very popular in practice, we give a simple linear approximation for the region 0.1 ≤ η ≤ 0.3, which is interesting for many practical processes (e.g., for room temperature control [51]). For that purpose, a weighting factor 0 ≤ ψ ≤ 1 is introduced which
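Since the integral in Equation 12.60 is just the total variation of the step response divided by Δr, both ways of computing ν can be compared directly on the sequence of extrema of a decaying oscillation; the values below are illustrative, not a simulated control loop.

```python
# Sketch comparing the two ways to compute the normalized message
# number nu: the closed form (12.53) versus the total-variation
# integral (12.60), here evaluated exactly on the extrema of an
# illustrative decaying oscillation.

def nu_closed_form(h_r, delta_dr):
    """Equation 12.53."""
    return 1.0 + 2.0 * h_r / (1.0 - delta_dr)

def nu_total_variation(y, delta_r):
    """Equation 12.60 in discrete form: the integral of |dy/dt| is the
    total variation of y, i.e., the summed absolute increments."""
    return sum(abs(b - a) for a, b in zip(y, y[1:])) / delta_r

h_r, delta_dr, delta_r = 0.2, 0.5, 1.0

# Extrema of the response: start at 0, overshoot to 1 + h_r, then
# alternate around the new setpoint 1 per Equation 12.48, finally
# settling at 1. Monotone segments between extrema add nothing new
# to the total variation, so the extrema suffice.
extrema = [0.0] + [1.0 + h_r * (-delta_dr) ** i for i in range(60)] + [1.0]

print(nu_closed_form(h_r, delta_dr))         # 1.8
print(nu_total_variation(extrema, delta_r))  # ≈ 1.8
```

For a response that fits the geometric decay model exactly, the two estimates agree; (12.60) additionally covers responses that do not.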
FIGURE 12.6
Characteristic areas as a function of λ and ε for an FOLPD process. Marked are the Pareto optimal settings (dots), the point of lowest ISE without oscillations (triangle), the optimal point according to Jν (square), the optimal point regarding JISE (also with oscillations), the linear tuning rule (12.61), and the optimum according to the simple tuning rule (12.63) (circle). Analyzed by simulations with step width 0.02 and 0.04 for λ and ε, respectively. (a) η = 0.1. (b) η = 0.3. (c) η = 0.5, and (d) η = 1.

allows a stepless tuning between low message rate (ψ = 0) and high control performance JISE (ψ = 1). The set of reasonable Pareto optimum points between them can then roughly be approximated by the following rule of thumb:

λ = 0.5 + 0.3 · ψ,
ε = 1.1 + 0.4 · ψ.   (12.61)

These principal characteristics hold for discrete-time PI controllers, too. For event-based PI controllers, these characteristics are in principle similar if ΔS ≪ Δr.

Simplified tuning rule

A simpler tuning rule has been published in [22]. It is based on classic tuning rules which are targeted on little overshoot and usually set ε to 1 [22,34], resulting in pole-zero cancellation.

Event-Based PID Control 253

The open-loop transfer function is then

Go(s) = R(s) · G(s)
      = KP (1 + 1/(sTI)) · Km/(1 + sTm) · e^(−sτ)
      = (λ/(τs)) · e^(−sτ).

Again, τ influences only the time constants of the signal evolution but not qualitative properties such as stability, overshoot, or gain and phase margin. Here, the control loop is even independent of η. Different performance measures can thus be analyzed only as a function of λ. This is shown in Figure 12.7 for continuous-time simulations of closed-loop unit step responses without disturbances.

All settings between 0.38 ≤ λ ≤ 0.74 are Pareto optimal regarding ν and JISE; other settings make no practical sense, as settings with λ < 0.38 are very sluggish and settings with λ > 0.74 are very aggressive (oscillatory). Comparable to (12.61), by introducing a weighting factor 0 ≤ ψ ≤ 1, the Pareto optimal points are in the area of

λ = 0.38 + 0.36 · ψ,
ε = 1.   (12.62)

The minimum regarding Jν is reached with λopt ≈ 0.468, showing an overshoot of hr ≈ 2.17%. In comparison, the minimum of the ISE is reached with λ ≈ 0.74 with 24.37% overshoot. That shows that the optimum for control performance (the primary goal in discrete-time or continuous-time control) does not result in a low message rate. In particular, the message rate can be reduced by 36% if one accepts a control performance loss of only 6.8%. This case results in the simplified tuning rule for optimizing Jν:

λ = 0.468,
ε = 1.   (12.63)

This setting is also indicated in each subfigure of Figure 12.6 as a circle. These figures show that this is not a Pareto optimal solution, although it is Pareto optimal among all solutions with η = 1.

Additional simulation results of unit step responses for event-based control loops with different settings of ΔS using that rule are shown in [22]. The differences between the continuous-time approximation and the event-based simulation depend on ΔS and the simulation conditions. They especially depend on the duration of the simulated "steady state" phase after the step, as this phase contains limit cycles which are not modeled by the continuous-time approximation.

Also, simulations for real send-on-delta sampling (Tz > 0) can be found in [22]. If either actuator saturation is likely to appear or the frequency of limit cycles should be diminished, KP should be reduced [22,23].

Beschi et al. developed a comparable tuning rule for SSOD in parallel [8,11]. The goal of their tuning rule is the minimization of the settling time of the system step response. Also, a PI controller is used and a FOPDT process is assumed. After transforming their results to the notation of this chapter, one can obtain

λ(η) = K1(η) · η,
ε(η) = K1(η)/K2(η),   (12.64)

where K1(η) and K2(η) are polynomials whose coefficients have been found by interpolation.

The results for η > 0.1 are quite similar to the simple tuning rule (12.63), especially for 0.1 ≤ η ≤ 0.3, see Figure 12.8. In fact, minimizing the settling time results
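The rules (12.61) through (12.63) are simple enough to sketch directly in code. The following Python sketch implements them; the back-transformation from (λ, ε) to the PI parameters (KP, TI) is an assumption of this sketch (it uses TI = ε·Tm and KP = λ·Tm/(Km·τ), which reproduces the pole-zero-cancellation case ε = 1 above), since the chapter's defining relation (12.56) lies outside this excerpt.

```python
# Sketch of the tuning rules (12.61)-(12.63) for an FOLPD process.
# psi in [0, 1] trades message rate (psi = 0) against control
# performance JISE (psi = 1).

def linear_tuning_rule(psi: float) -> tuple[float, float]:
    """Rule (12.61): approximation of the Pareto optimal region."""
    if not 0.0 <= psi <= 1.0:
        raise ValueError("psi must lie in [0, 1]")
    return 0.5 + 0.3 * psi, 1.1 + 0.4 * psi

def continuous_time_rule(psi: float) -> tuple[float, float]:
    """Rule (12.62): Pareto optimal settings for continuous-time control."""
    return 0.38 + 0.36 * psi, 1.0

def simple_tuning_rule() -> tuple[float, float]:
    """Rule (12.63): fixed setting optimizing J_nu."""
    return 0.468, 1.0

def pi_gains(lam: float, eps: float, Km: float, Tm: float, tau: float):
    """ASSUMED back-transformation to PI parameters (KP, TI):
    TI = eps * Tm and KP = lam * Tm / (Km * tau), consistent with the
    open-loop gain lambda/(tau*s) in the pole-zero-cancellation case."""
    return lam * Tm / (Km * tau), eps * Tm
```

For example, `pi_gains(*simple_tuning_rule(), Km=1.0, Tm=1.0, tau=1.0)` yields KP = 0.468 and TI = 1 for the unit process of Figure 12.7.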

[Figure 12.7: curves of ν and JISE over 0 ≤ λ ≤ 1.4.]

FIGURE 12.7
Properties of the closed-loop unit step response as a function of λ for Km = Tm = τ = 1 and continuous-time control.
254 Event-Based Control and Signal Processing
[Figure 12.8: λ(η) and ε(η) plotted over 0 ≤ η ≤ 3.]

FIGURE 12.8
Coefficients λ(η) and ε(η) computed from the tuning rule in [8].

[Figure 12.9: panels (a) and (b) showing u(t), r(t), and y(t) over 0 to 10 s for the second-order process and its FOLPD model.]

FIGURE 12.9
Open-loop and closed-loop step responses of a second-order process (c = 1) and its FOLPD model. Closed-loop step responses with λ = 0.468
and ε = 1. (a) Open loop step responses. (b) Closed loop step responses.

in fast step responses without large overshoot, as it is also when JProd is minimized.

Beschi et al. detected furthermore that the message number is more or less independent from η but depends on the overshoot. That fits to (12.53).

Higher order processes

A second order, nonoscillating process with time delay (SOPDT) can be described with the transfer function

G2(s) = Km2/((1 + sT2)(1 + scT2)) · e^(−sτ2),   (12.65)

with 0 ≤ c ≤ 1 (T2 is assumed to be the larger of both time constants). There are several possibilities to approximate a SOPDT process by a FOPDT process [29,53]:

Ĝ2(s) = Km/(1 + sTm) · e^(−sτ).   (12.66)

However, it is not the same controlling a FOPDT process or a SOPDT process, even if the latter has a FOPDT approximation with equal parameters. This has its origin in the different signal evolution of the step responses due to the different frequency response. A SOPDT process will in general result in more oscillations. If both time constants of the SOPDT process are equal (c = 1), the difference is most obvious (see Figure 12.9), while if one time constant of both is much smaller than the other (c ≪ 1), the difference disappears. Figure 12.10 shows

[Figure 12.10: map of the (λ, ε) plane with the regions "Unstable," "No oscillations," "Oscillations, no overshoot," and "With overshoot," using the legend of Figure 12.6.]

FIGURE 12.10
Characteristic areas as a function of λ and ε for an SOLPD process (with two equal time constants, i.e., c = 1), which can be approximated by an FOLPD process with η = 0.3. For details, see caption of Figure 12.6.

the Pareto optimal solutions for a SOPDT process if the controller is tuned using (12.56) with the parameters from the FOPDT approximation according to [29]. It is evident that the Pareto front is significantly distanced from that of the "real" FOPDT with the same parameters (shown in Figure 12.6b). Both λ and ε should be lower than for a "real" FOPDT process.

If the real process order is known, this problem can be solved by using different tuning rules for different process orders. Unfortunately, under noisy conditions, the estimation of the process order is error-prone. If the process order or c is not known, the tuning rules (12.61) and (12.62) lose their meaning because the tuning parameter ψ will not balance between minimum message rate and optimum control performance. In that case, the simplified tuning rule (12.63) can be used, as it delivers acceptable results both for FOPDT and SOPDT processes.

12.7 Conclusions and Open Questions

This chapter gave an overview of the current knowledge about event-based PID control. PID control is very well accepted in industry and is thus worth exploring in event-based control. We presented typical problems of such control loops (approximation errors, sticking, and limit cycles) with their currently most accepted solutions, a comparison of several event types, parametrization, and finally some stability and tuning considerations with regard to send-on-delta sampling.

As the research in event-based PID control is a relatively young discipline, there are many unsolved questions. From a theoretical point of view, stability analyses should become more generalized. From a practical point of view, the most important problem is that message losses cannot be detected reliably before the max-send-timer expires, especially in the case of model uncertainties and unknown disturbances. Up to now, this lack of knowledge increases the risk of an unstable control loop [20], which is not acceptable in hard real-time applications. Sánchez et al. mention the lack of specific design or formal analysis methods for a standardized event-based PID control law as a reason why event-based PID control is not yet a useful alternative in some specific applications [48]. Also, the lack of results about SIMO and MIMO processes is mentioned there. As PID controllers are often used for purposes with reduced performance requirements but hard cost limits, automatic tuning procedures are of interest. First approaches in this direction can be found in [9,20,23].

For applications where disturbances and setpoints are constant over a long period, avoidance of limit cycles is of great interest, and solutions for that should become both more practical and reliable. Finally, system identification with nonuniform samples is needed for controller tuning if the sampling scheme cannot be changed for process identification. In contrast to available methods for nonuniform sampling that use only the available samples [2,18,32], the implicit knowledge about the possible range of the signal between the samples due to the sampling threshold (ΔS, ΔI, or ΔP) should be used to reduce the uncertainty about the parameters.

Bibliography

[1] IEC 60050-351 International electrotechnical vocabulary—Part 351: Control Technology, 2006.

[2] S. Ahmed, B. Huang, and S. L. Shah. Process parameter and delay estimation from nonuniformly sampled data. In Identification of Continuous-Time Models from Sampled Data (H. Garnier and L. Wang, Eds.), pages 313–337. Springer, London, 2008.

[3] J. Araújo. Design and Implementation of Resource-Aware Wireless Networked Control Systems. Licentiate thesis, KTH Electrical Engineering, Stockholm, Sweden, 2011.

[4] K. E. Årzén. A simple event-based PID controller. In Proceedings of the 14th IFAC World Congress, volume 18, pages 423–428, Beijing, China, July 1999.

[5] K. J. Åström and T. Hägglund. The future of PID control. Control Engineering Practice, 9:1163–1175, 2001.

[6] M. Beschi, S. Dormido, J. Sánchez, and A. Visioli. Characterization of symmetric send-on-delta PI controllers. Journal of Process Control, 22:1930–1945, 2012.

[7] M. Beschi, S. Dormido, J. Sánchez, and A. Visioli. On the stability of an event-based PI controller for FOPDT processes. In Proceedings of the IFAC Conference on Advances in PID Control, pages 436–441, Brescia, Italy, March 2012.

[8] M. Beschi, S. Dormido, J. Sánchez, and A. Visioli. Tuning rules for event-based SSOD-PI controllers. In Proceedings of the 20th Mediterranean Conference on Control & Automation (MED 2012), pages 1073–1078, Barcelona, Spain, July 2012.

[9] M. Beschi, S. Dormido, J. Sánchez, and A. Visioli. An automatic tuning procedure for an event-based PI controller. In Proceedings of the 52nd IEEE Conference on Decision and Control, pages 7437–7442, Florence, Italy, December 2013.

[10] M. Beschi, S. Dormido, J. Sánchez, and A. Visioli. Stability analysis of symmetric send-on-delta event-based control systems. In Proceedings of the 2013 American Control Conference (ACC 2013), pages 1771–1776, Washington, DC, June 2013.

[11] M. Beschi, S. Dormido, J. Sánchez, and A. Visioli. Tuning of symmetric send-on-delta proportional-integral controllers. IET Control Theory and Applications, 8(4):248–259, 2014.

[12] M. Beschi, A. Visioli, S. Dormido, and J. Sánchez. On the presence of equilibrium points in PI control systems with send-on-delta sampling. In Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC 2011), pages 7843–7848, Orlando, FL, December 2011.

[13] J. Chacón, J. Sánchez, A. Visioli, L. Yebra, and S. Dormido. Characterization of limit cycles for self-regulating and integral processes with PI control and send-on-delta sampling. Journal of Process Control, 23:826–838, 2013.

[14] S. Durand and N. Marchand. An event-based PID controller with low computational cost. In Proceedings of the 8th International Conference on Sampling Theory and Applications (L. Fesquet and B. Torrésani, Eds.), Marseille, France, 2009.

[15] S. Durand and N. Marchand. Further results on event-based PID controller. In Proceedings of the 10th European Control Conference, pages 1979–1984, Budapest, Hungary, August 2009.

[16] EnOcean. EnOcean Equipment Profiles EEP 2.6, June 2014.

[17] EnOcean GmbH, Oberhaching, Germany. Dolphin Core Description V1.0, December 2012.

[18] J. Gillberg and L. Ljung. Frequency Domain Identification of Continuous-Time ARMA Models: Interpolation and Non-Uniform Sampling. Technical report, Linköpings Universitet, Linköping, Sweden, 2004.

[19] B. Hensel, A. Dementjev, H.-D. Ribbecke, and K. Kabitzsch. Economic and technical influences on feedback controller design. In Proceedings of the 39th Annual Conference of the IEEE Industrial Electronics Society (IECON), pages 3498–3504, Vienna, Austria, November 2013.

[20] B. Hensel and K. Kabitzsch. Adaptive controllers for level-crossing sampling: Conditions and comparison. In Proceedings of the 39th Annual Conference of the IEEE Industrial Electronics Society (IECON), pages 3870–3876, Vienna, Austria, November 2013.

[21] B. Hensel and K. Kabitzsch. The energy benefit of level-crossing sampling including the actuator's energy consumption. In Proceedings of the 17th IEEE Design, Automation & Test in Europe (DATE 2014) Conference, pages 1–4, Dresden, Germany, March 2014.

[22] B. Hensel, J. Ploennigs, V. Vasyutynskyy, and K. Kabitzsch. A simple PI controller tuning rule for sensor energy efficiency with level-crossing sampling. In Proceedings of the 9th International Multi-Conference on Systems, Signals & Devices, pages 1–6, Chemnitz, Germany, March 2012.

[23] B. Hensel, V. Vasyutynskyy, J. Ploennigs, and K. Kabitzsch. An adaptive PI controller for room temperature control with level-crossing sampling. In Proceedings of the UKACC International Conference on Control, pages 197–204, Cardiff, UK, September 2012.

[24] A. Jain and E. Y. Chang. Adaptive sampling for sensor networks. In Proceedings of the 1st International Workshop on Data Management for Sensor Networks (DMSN 2010), pages 10–16, Toronto, Canada, August 2004. ACM.

[25] A. J. Jerri. The Shannon sampling theorem—Its various extensions and applications: A tutorial review. Proceedings of the IEEE, 65(11):1565–1596, 1977.

[26] K. Klues, V. Handziski, C. Lu, A. Wolisz, D. Culler, D. Gay, and P. Levis. Integrating concurrency control and energy management in device drivers. In Proceedings of the 21st ACM SIGOPS Symposium on Operating System Principles (SOSP 2007), pages 251–264, ACM, Stevenson, WA, 2007.

[27] E. Kofman and J. H. Braslavsky. Level crossing sampling in feedback stabilization under data-rate constraints. In Proceedings of the 45th IEEE Conference on Decision and Control, pages 4423–4428, San Diego, CA, December 2006.

[28] W. S. Levine, Ed. Control System Fundamentals. CRC Press, Boca Raton, 2000.

[29] H. Lutz and W. Wendt. Taschenbuch der Regelungstechnik, 8th edition. Harri Deutsch, Frankfurt am Main, 2010.

[30] M. Miskowicz. Sampling of signals in energy domain. In Proceedings of the 10th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2005), volume 1, pages 263–266, Catania, Italy, September 2005.

[31] M. Miskowicz. Send-on-delta concept: An event-based data reporting strategy. Sensors, 6:49–63, 2006.

[32] E. Müller, H. Nobach, and C. Tropea. Model parameter estimation from non-equidistant sampled data sets at low data rates. Measurement Science and Technology, 9:435–441, 1998.

[33] M. Neugebauer and K. Kabitzsch. Sensor lifetime using SendOnDelta. In Proceedings of INFORMATIK 2004—Informatik verbindet, Beiträge der 34. Jahrestagung der Gesellschaft für Informatik e.V. (GI), volume 2, pages 360–364, Ulm, Germany, September 2004.

[34] A. O'Dwyer. Handbook of PI and PID Controller Tuning Rules, 2nd edition. Imperial College Press, London, 2006.

[35] P. G. Otanez, J. R. Moyne, and D. M. Tilbury. Using deadbands to reduce communication in networked control systems. In Proceedings of the American Control Conference 2002, volume 4, pages 3015–3020, Anchorage, Alaska, 2002.

[36] A. Pawlowski, J. L. Guzmán, F. Rodríguez, M. Berenguel, J. Sánchez, and S. Dormido. Simulation of greenhouse climate monitoring and control with wireless sensor network and event-based control. Sensors, 9(1):232–252, 2009.

[37] A. Pawlowski, J. L. Guzmán, F. Rodríguez, M. Berenguel, J. Sánchez, and S. Dormido. Study of event-based sampling techniques and their influence on greenhouse climate control with wireless sensor networks. In Factory Automation (Javier Silvestre-Blanes, Ed.), pages 289–312. InTech, Rijeka, Croatia, 2010.

[38] J. Ploennigs, M. Neugebauer, and K. Kabitzsch. Diagnosis and consulting for control network performance engineering of CSMA-based networks. IEEE Transactions on Industrial Informatics, 4(2):71–79, 2008.

[39] J. Ploennigs, V. Vasyutynskyy, and K. Kabitzsch. Comparison of energy-efficient sampling methods for WSNs in building automation scenarios. In Proceedings of the 14th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2009), pages 1–8, Mallorca, Spain, 2009.

[40] J. Ploennigs, V. Vasyutynskyy, and K. Kabitzsch. Comparative study of energy-efficient sampling approaches for wireless control networks. IEEE Transactions on Industrial Informatics, 6(3):416–424, 2010.

[41] J. Ploennigs, V. Vasyutynskyy, M. Neugebauer, and K. Kabitzsch. Gradient-based integral sampling for WSNs in building automation. In Proceedings of the 6th European Conference on Wireless Sensor Networks (EWSN 2009), Cork, Ireland, February 2009.

[42] J. Plönnigs, M. Neugebauer, and K. Kabitzsch. A traffic model for networked devices in the building automation. In Proceedings of the 5th IEEE International Workshop on Factory Communication Systems, pages 137–145, Vienna, Austria, September 2004.

[43] K. J. Åström and T. Hägglund. Advanced PID Control. ISA—The Instrumentation, Systems, and Automation Society, Research Triangle Park, NC, 2006.

[44] A. Ruiz, J. E. Jiménez, J. Sánchez, and S. Dormido. A practical tuning methodology for event-based PI control. Journal of Process Control, 24:278–295, 2014.

[45] T. I. Salsbury. A survey of control technologies in the building automation industry. In Proceedings of the 16th Triennial World Congress, pages 90–100, Prague, Czech Republic, July 2005.

[46] J. Sánchez, M. Á. Guarnes, and S. Dormido. On the application of different event-based sampling strategies to the control of a simple industrial process. Sensors, 9:6795–6818, 2009.

[47] J. Sánchez, A. Visioli, and S. Dormido. A two-degree-of-freedom PI controller based on events. Journal of Process Control, 21:639–651, 2011.

[48] J. Sánchez, A. Visioli, and S. Dormido. Event-based PID control. In PID Control in the Third Millennium—Lessons Learned and New Approaches (R. Vilanova and A. Visioli, Eds.), pages 495–526. Advances in Industrial Control. Springer, London, 2012.

[49] J. H. Sandee, W. P. M. H. Heemels, and P. P. J. van den Bosch. Event-driven control as an opportunity in the multidisciplinary development of embedded controllers. In Proceedings of the American Control Conference 2005, volume 3, pages 1776–1781, Portland, OR, June 2005.

[50] S. Santini and K. Römer. An adaptive strategy for quality-based data reduction in wireless sensor networks. In Proceedings of the 3rd International Conference on Networked Sensing Systems (INSS 2006), pages 29–36, Chicago, IL, May–June 2006.

[51] Siemens Switzerland Ltd, Building Technology Group. Control technology. Technical brochure, reference number 0-91913-en.

[52] F. Sivrikaya and B. Yener. Time synchronization in sensor networks: A survey. IEEE Network, 18(4):45–50, 2004.

[53] S. Skogestad. Simple analytic rules for model reduction and PID controller tuning. Journal of Process Control, 13:291–309, 2003.

[54] S. Soucek and T. Sauter. Quality of service concerns in IP-based control systems. IEEE Transactions on Industrial Electronics, 51(6):1249–1258, 2004.

[55] Spartan Peripheral Devices. Energy Harvesting Self Powered ME8430 Globe Terminal Unit Control Valve. Application sheet, Spartan Peripheral Devices, Vaudreuil, Quebec, 2014.

[56] T. Spinner, B. Srinivasan, and R. Rengaswamy. Data-based automated diagnosis and iterative retuning of proportional-integral (PI) controllers. Control Engineering Practice, 29:23–41, 2014.

[57] Y. S. Suh. Send-on-delta sensor data transmission with a linear predictor. Sensors, 7:537–547, 2007.

[58] K. K. Tan, Q.-G. Wang, C. C. Hang, and T. J. Hägglund. Advances in PID Control. Advances in Industrial Control. Springer, London, 1999.

[59] U. Tiberi, J. Araújo, and K. H. Johansson. On event-based PI control of first-order processes. In Proceedings of the IFAC Conference on Advances in PID Control, pages 448–453, Brescia, Italy, March 2012.

[60] V. Vasyutynskyy. Send-on-Delta-Abtastung in PID-Regelungen [Send-on-Delta Sampling in PID Controls]. PhD thesis, Vogt Verlag, Dresden, 246 pp., 2009.

[61] V. Vasyutynskyy and K. Kabitzsch. Implementation of PID controller with send-on-delta sampling. In Proceedings of the International Control Conference 2006, Glasgow, August/September 2006.

[62] V. Vasyutynskyy and K. Kabitzsch. Deadband sampling in PID control. In Proceedings of the 5th IEEE International Conference on Industrial Informatics 2007 (INDIN 2007), pages 45–50, Vienna, Austria, July 2007.

[63] V. Vasyutynskyy and K. Kabitzsch. Simple PID control algorithm adapted to deadband sampling. In Proceedings of the 12th IEEE Conference on Emerging Technologies and Factory Automation (ETFA 2007), pages 932–940, Patras, Greece, September 2007.

[64] V. Vasyutynskyy and K. Kabitzsch. Towards comparison of deadband sampling types. In Proceedings of the IEEE International Symposium on Industrial Electronics, pages 2899–2904, Vigo, Spain, June 2007.

[65] V. Vasyutynskyy and K. Kabitzsch. First order observers in event-based PID controllers. In Proceedings of the 14th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2009), pages 1–8, Mallorca, Spain, September 2009.

[66] V. Vasyutynskyy and K. Kabitzsch. Time constraints in PID controls with send-on-delta. In Proceedings of the 8th IFAC International Conference on Fieldbus Systems and their Applications (FeT 2009), pages 48–55, Ansan, Korea, May 2009.

[67] V. Vasyutynskyy and K. Kabitzsch. A comparative study of PID control algorithms adapted to send-on-delta sampling. In Proceedings of the IEEE International Symposium on Industrial Electronics (ISIE 2010), pages 3373–3379, Bari, Italy, July 2010.

[68] V. Vasyutynskyy and K. Kabitzsch. Event-based control: Overview and generic model. In Proceedings of the 8th IEEE International Workshop on Factory Communication Systems (WFCS 2010), pages 271–279, Nancy, France, May 2010.

[69] V. Vasyutynskyy, A. Luntovskyy, and K. Kabitzsch. Two types of adaptive sampling in networked PID control: Time-variant periodic and deadband sampling. In Proceedings of the 17th International Crimean Conference "Microwave & Telecommunication Technology" (CriMiCo 2007), pages 330–331, Sevastopol, Crimea, Ukraine, September 2007.

[70] V. Vasyutynskyy, A. Luntovskyy, and K. Kabitzsch. Limit cycles in PI control loops with absolute deadband sampling. In Proceedings of the 18th IEEE International Conference on Microwave & Telecommunication Technology (CRIMICO 2008), pages 362–363, Sevastopol, Ukraine, September 2008.

[71] R. Vilanova and A. Visioli, Eds. PID Control in the Third Millennium. Lessons Learned and New Approaches. Advances in Industrial Control. Springer, London, 2012.

[72] A. Visioli. Practical PID Control. Springer, London, 2006.

[73] A. Willig, K. Matheus, and A. Wolisz. Wireless technology in industrial networks. Proceedings of the IEEE, 93(6):1130–1151, 2005.

[74] J. G. Ziegler and N. B. Nichols. Optimum settings for automatic controllers. Transactions of the American Society of Mechanical Engineers, 64:759–768, 1942.
13
Time-Periodic State Estimation with Event-Based
Measurement Updates

Joris Sijs
TNO Technical Sciences
The Hague, Netherlands

Benjamin Noack
Karlsruhe Institute of Technology
Karlsruhe, Germany

Mircea Lazar
Eindhoven University of Technology
Eindhoven, Netherlands

Uwe D. Hanebeck
Karlsruhe Institute of Technology
Karlsruhe, Germany

CONTENTS
13.1 Introduction ... 262
13.2 Preliminaries ... 263
13.3 A Hybrid System Description: Event Measurements into Time-Periodic Estimates ... 263
13.4 Event-Based Sampling ... 264
13.4.1 Illustrative Examples ... 264
13.4.2 A Sensor Value Property ... 265
13.5 Existing State Estimators ... 265
13.5.1 Stochastic Representations ... 266
13.5.2 Deterministic Representations ... 268
13.6 A Hybrid State Estimator ... 270
13.6.1 Implied Measurements ... 270
13.6.2 Estimation Formulas ... 271
13.6.3 Asymptotic Analysis ... 272
13.7 Illustrative Case Study ... 273
13.8 Conclusions ... 276
Appendix A Proof of Theorem 13.3 ... 276
Appendix B Proof of Theorem 13.4 ... 277
Bibliography ... 278

ABSTRACT To reduce the amount of data transfers in networked systems, measurements can be taken at an event on the sensor value rather than periodically in time. Yet, this could lead to a divergence of estimation results when only the received measurement values are exploited in a state estimation procedure. A solution to this issue has been found by developing estimators that perform a state update at both the event instants as well as periodically in time: when an event occurs, the estimated state is updated using the measurement received, while at periodic instants the update is based on knowledge that the sensor value lies within a bounded subset of the measurement space. Several solutions for event-based state estimation will be presented in this chapter, either based on stochastic representations of random vectors, on deterministic representations of random vectors, or on a mixture of the two. All solutions aim to limit the required computational resources by deriving explicit solutions for computing estimation results. Yet, the main achievement for each estimation solution is that stability of the estimation results is (not directly) dependent on the employed event sampling strategy. As such, changing the event sampling strategy does not imply changing the event-based estimator as well. This aspect is also illustrated in a case study of tracking the distribution of a chemical compound affected by wind via a wireless sensor network.

13.1 Introduction

Many people in our society manage their daily activities based on knowledge and information about weather conditions, traffic jams, pollution levels, energy consumption, stock exchange, and so on. Sensor measurements are the main source of information when monitoring these surrounding processes, and a trend is to increase their amount, as they have become smaller, cheaper, and easier to use. See, for example, detailed explanations on the design of "wireless sensor networks" and their applications in [7]. As many more sensor measurements are becoming available, resource limitations of the overall system are gradually becoming the bottleneck for processing measurements automatically. The main reason for this issue is that classical signal processing solutions were developed in a time when sensor measurements were scarce and when systems were deployed with sufficient resources for running the sophisticated algorithms developed. Nowadays, these aspects have shifted, that is, there are far too many sensor measurements available for the limited resources deployed in the overall system and, moreover, it is typically not even necessary to process all sensor information available to achieve a desired performance. An example of resource-aware signal processing is presented in this chapter, where several solutions for state estimation will be presented that all aim to reduce the amount of measurement samples.

More precisely, this chapter presents a recent overview and outlook on state estimation approaches that do not require periodic measurement samples but instead were designed for exploiting a-periodic measurement samples. Some well-known a-periodic sampling strategies, known as event-based sampling or event-triggered sampling, are "send-on-delta" (or Lebesgue sampling) and "integral sampling" proposed in [3,10,22,23]. When such strategies are used in estimation (and control), as will be the case in this chapter, one typically starts with the setup depicted in Figure 13.1. Note that the sensor may require access to estimation results of the estimator, for example, in case the event sampling strategy "matched sampling" of [20] is employed.

Typically, the estimator is part of a larger networked system where the output of the estimator is used by a control algorithm for computing stabilizing control actions. Advances in control theory provided several solutions for coping with the event-sampled measurements directly by the controller. See, for example, the solutions on event-based control proposed in [5,8–10,12,14,16,33,37]. Yet, most of the deployed controllers used in current practice run periodically in time. Revising those periodic controllers into event-based ones is not favorable, mainly because the infrastructure of the control system is inflexible, or because of fear of any down-time of the production facility when new controllers are tested.

The event-based estimators addressed in this chapter can serve as a solution to this problem:

• Event-based sensor measurements are sent to the estimator.

• Estimation results are computed at events and periodically in time.

• The time-periodic estimation results are sent to the controller.

• Stabilizing control actions are computed periodically in time.

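The four bullets above can be mimicked in a toy simulation loop (a minimal, hypothetical Python sketch: the scalar plant, the noise levels, the naive update rule, and the feedback gain are illustrative assumptions, not the chapter's algorithms):

```python
import random

# Hypothetical scalar plant x(k+1) = a*x(k) + w(k); every value below is
# illustrative and not taken from the chapter.
a, delta, horizon = 0.95, 0.5, 50
random.seed(1)

x, x_hat = 5.0, 0.0      # true state and estimator mean
y_last_sent = None       # last measurement transmitted by the sensor system
events = 0

for k in range(horizon):
    x = a * x + random.gauss(0.0, 0.05)   # plant evolves
    y = x + random.gauss(0.0, 0.01)       # noisy sensor reading

    # Sensor system: send-on-delta triggering criterion.
    if y_last_sent is None or abs(y - y_last_sent) > delta:
        y_last_sent = y
        events += 1
        x_hat = y          # event instant: update with the received sample
    else:
        x_hat = a * x_hat  # periodic instant: prediction only, nothing sent

    u = -0.1 * x_hat       # stabilizing action computed periodically in time
```

Even this naive scheme transmits far fewer samples than a periodic sensor would, which is the resource argument made above.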
[Figure 13.1 block diagram: a physical process/plant is measured by a sensor; a sensor system applies the triggering criteria and forwards either the sensor value or ∅ to the event-based state estimator, whose estimates feed a time-periodic controller driving the actuator.]

FIGURE 13.1
Schematic setup of event-based state estimation (and control). Therein, a sensor system monitors the sensor signal for events in line with predefined "event triggering criteria." In case of an event, the sensor signal is sampled and the corresponding measurement is sent to the estimator. In case of no event, no measurement is sent (represented by an empty set ∅ at the input of the estimator). The estimator exploits the event-triggered sensor measurements to compute time-periodic estimation results, which are possibly used by a time-periodic controller for computing stabilizing control actions.
Time-Periodic State Estimation with Event-Based Measurement Updates 263
The advantage of such an approach is twofold. First, the design of a (practical) stabilizing controller becomes independent from the employed event sampling strategy, and existing control implementations might even be kept in the loop when changing from time-periodic to event-based measurements. Secondly, one can select a suitable control approach from the extensive set of time-periodic control algorithms. The purpose of this chapter is to assess existing event-based state estimators for time-periodic observations and apply them in an environmental monitoring application.

13.2 Preliminaries

R, R+, Z, and Z+ define the sets of real numbers, nonnegative real numbers, integer numbers, and nonnegative integer numbers, respectively, while ZC := Z ∩ C, for some C ⊂ R. An ellipsoidal set L(μ,Σ) ⊂ Rn centered at μ ∈ Rn is characterized as L(μ,Σ) := {x ∈ Rn | (x − μ)ᵀΣ⁻¹(x − μ) ≤ 1}, for some positive definite Σ ∈ Rn×n. The Minkowski sum of two sets C1, C2 ⊂ Rn is denoted as C1 ⊕ C2 := {x + y | x ∈ C1, y ∈ C2}. The null-matrix and the identity-matrix of suitable dimensions (clear from the context) are denoted as 0 and I, respectively. For a continuous-time signal x(t), where tk ∈ R+ denotes the time of the k-th sample, let us define x[k] := x(tk) and x(t0:k) := (x(t0), x(t1), ..., x(tk)). The q-th element of a vector x ∈ Rn is denoted as {x}q, while {A}qr denotes the element of a matrix A ∈ Rm×n in the q-th row and r-th column. The transpose, determinant, inverse, and trace of a matrix A ∈ Rn×n are denoted as Aᵀ, |A|, A⁻¹, and tr(A), respectively. The minimum and maximum eigenvalues of a square matrix A are denoted as λmin(A) and λmax(A), respectively. The p-norm of a vector x ∈ Rn is denoted as ‖x‖p.

The Delta-function δ : Rn → R+ of a vector x ∈ Rn vanishes at all values x ≠ 0, while it is infinity when x = 0 and, moreover, ∫_{−∞}^{∞} δ(x) dx = 1.

The probability density function (PDF) of a random vector x ∈ Rn is denoted as p(x), while x ∼ G(μ, Σ) is a short notation for stating that the PDF of x is a Gaussian function with mean μ ∈ Rn and covariance Σ ∈ Rn×n, that is, p(x) = G(x, μ, Σ) and

G(x, μ, Σ) := 1/√((2π)ⁿ |Σ|) · e^{−0.5 (x−μ)ᵀ Σ⁻¹ (x−μ)}.

Further, for a bounded Borel set C ⊂ Rn [1], the corresponding uniform PDF ΠC(x) : Rn → R+ is characterized by ΠC(x) = 0 if x ∉ C and ΠC(x) = ν⁻¹ if x ∈ C, with ν ∈ R defined as the Lebesgue measure of C [15].

13.3 A Hybrid System Description: Event Measurements into Time-Periodic Estimates

Event-based state estimation with time-periodic estimation results deals with the setup depicted in Figure 13.1, that is, a sensor system forwarding sampled measurements to a state estimator that is possibly connected to a control implementation. Important prerequisites for the event-based state estimator in the schematic set-up of Figure 13.1 are the sampling strategies employed and the physical process considered.

Event-triggered sampling: The sensor system employs an event triggering criterion to generate a next measurement y ∈ Rm at the event instants te ∈ R+, where e ∈ Z+ denotes the e-th event sample. To that extent, let Te ⊂ R+ be the collection of time instants of all events, that is,

Te := {te ∈ R+ | e ∈ Z+}.

Further, let us introduce τe := te − te−1 as the sampling interval between two consecutive events e and e − 1. A definition of the event instants te is given in the next section.

Time-periodic sampling: The control implementation employs a time-periodic sampling strategy characterized by a constant sampling interval τs ∈ R>0. To that extent, let Tp ⊂ R+ be the collection of periodic time instants, that is,

Tp := {nτs | n ∈ Z+}.

Notice that event instants could coincide with time-periodic instants, that is, Te ∩ Tp might be non-empty.

Physical process: Let us consider an autonomous, linear process discretized in time for various sampling intervals τ ∈ R>0:

x(t) = Aτ x(t − τ) + Bτ w(t),   (13.1)
y(t) = Cx(t) + v(t).   (13.2)

Basically, the above description could be perceived as a discretized version of a continuous-time state-space model ẋ(t) = Fx(t) + Ew(t), where Aτ := e^{Fτ} and Bτ := (∫_0^τ e^{Fη} dη) E. The state vector is denoted as x ∈ Rn, and both the process noise w ∈ Rl and the measurement noise v ∈ Rm are characterized with a deterministic part and a stochastic part, that is,

w(t) := wd(t) + ws(t) and v(t) := vd(t) + vs(t).

Herein, wd(t) ∈ W and vd(t) ∈ V denote "deterministic," yet unknown, vectors taking values in the
sets W ⊂ Rl and V ⊂ Rm. Further, ws(t) ∈ Rl and vs(t) ∈ Rm denote "stochastic" random vectors taking values according to the PDFs p(ws(t)) and p(vs(t)). Exact characterizations of W, V, p(ws), and p(vs) vary per estimation approach and will be discussed in the succeeding sections. Still, it is important to note that unbiased noises w(t) and v(t) imply that both W and V have their center-of-mass collocated with the origin and that both ws(t) and vs(t) have an expected value of zero.

The purpose of the state estimator is to exploit the event measurements y to compute an estimate of the state x. In line with the above noise characterizations, estimation results of the event-based state estimator are also represented with a deterministic and a stochastic part, that is,

x(t) := x̂(t) + x̃d(t) + x̃s(t).

Herein, x̂ ∈ Rn is the estimated mean of x and both x̃d, x̃s ∈ Rn are estimation errors, that is, x̃d + x̃s = x − x̂. More precisely, x̃d(t) ∈ X is a deterministic, yet unknown, vector taking values in the error-set X(t) ⊂ Rn, and x̃s(t) is a stochastic random vector following the error distribution p(x̃s(t)).

The challenges addressed in event-based state estimation are as follows:

• Providing stable estimation results periodically in time when new measurements are triggered by an event sampling strategy.

• Ensuring that stability criteria of the feedback control loop do not depend directly on the employed sampling strategy.

Such an estimator allows the sensor system to adopt different event sampling strategies depending on, for example, the expected lifetime of its battery. Moreover, a computationally efficient algorithm of the event-based state estimator is desired, to attain applicable solutions.

REMARK 13.1 Stability of estimation results, wherein x̃d and x̃s denote the estimation errors, implies a nondivergent error-set X(tk) and a nondivergent error-covariance P(tk). More precisely, there exists a constant value for both limits limk→∞ X(tk) < ∞ and limk→∞ λmax(P(tk)) < ∞.

After introducing some examples and properties of event-based sampling, the chapter continues with an overview of existing estimators picking either stochastic or deterministic representations for estimating x. In addition, an outlook is presented toward a hybrid state estimator computing the deterministic and the stochastic parts of estimation results simultaneously.

13.4 Event-Based Sampling

13.4.1 Illustrative Examples

Event-based sampling is an a-periodic sampling strategy where events are not triggered periodically in time but at instants of predefined events. Three examples are "send-on-delta," "predictive sampling," and "matched sampling," as proposed in [3,10,20,22,29]. These strategies define that triggering the next event sample e ∈ Z+ depends on the current measurement y(t) and the previously sampled y(te−n), for one or more n ∈ Z≥1. More precisely, for some design parameter Δ(t) and for a predicted measurement value ŷ(t), they define the instant of a next event te as follows:

• Send-on-delta:
  te = inf{t > te−1 | ‖y(t) − y(te−1)‖ > Δ(t)}.

• Predictive sampling:
  te = inf{t > te−1 | ‖y(t) − ŷ(t)‖ > Δ(t)}.

• Matched sampling:
  te = inf{t > te−1 | DKL(p1(x(t)) ‖ p2(x(t))) > Δ(t)}.

An illustrative impression of the above event sampling strategies is depicted in Figure 13.2. Matched sampling uses the Kullback–Leibler divergence for triggering new events. This divergence, denoted as DKL(p1(x) ‖ p2(x)) ∈ R+, is a nonsymmetric measure for the difference of p2(x) relative to p1(x). Therein, p1(x) is considered to be the updated (true) PDF of x and p2(x) is a prediction (model) of p1(x). In line with this reasoning, let p2(x(t)) denote the prediction of x(t) based on the results at te−1, while p1(x(t)) is the update of p2(x(t)) with the sensor value y(t). Then, their Kullback–Leibler divergence can be regarded as a measure for the relevance of y(t) to the estimation results.

REMARK 13.2 Let the estimation result at te−1 be the Gaussian p(x(te−1)) = G(x(te−1), x̂(te−1), P(te−1)). Further, let the Kullback–Leibler divergence DKL(p1(x(t)) ‖ p2(x(t))) correspond to p1(x(t)) = G(x(t), x̂1(t), P1(t)) and p2(x(t)) = G(x(t), x̂2(t), P2(t)). Then, the prediction p2(x(t)) and the update p1(x(t)) can be determined with the process model of (13.1) and an a-periodic Kalman filter, that is,

P2(t) = Aτ P(te−1) Aτᵀ + Bτ cov(ws(t)) Bτᵀ;
x̂2(t) = Aτ x̂(te−1);
P1(t) = ( P2⁻¹(t) + Cᵀ (cov(vs(t)))⁻¹ C )⁻¹;
x̂1(t) = P1(t) ( P2⁻¹(t) x̂2(t) + Cᵀ (cov(vs(t)))⁻¹ y(t) ).
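For a scalar process, the prediction and update steps of Remark 13.2 can be sketched as follows (an illustrative pure-Python sketch; the function names and the numeric values are assumptions, not taken from the chapter):

```python
# Scalar sketch of the a-periodic Kalman filter of Remark 13.2; a_tau, b_tau,
# c, q, and r stand in for A_tau, B_tau, C, cov(w_s(t)), and cov(v_s(t)).
def predict(x_prev, p_prev, a_tau, b_tau, q):
    """Prediction p2(x(t)) from the estimation result at the last event."""
    x2 = a_tau * x_prev
    p2 = a_tau * p_prev * a_tau + b_tau * q * b_tau
    return x2, p2

def update(x2, p2, y, c, r):
    """Update p1(x(t)) of the prediction with the sensor value y(t)."""
    p1 = 1.0 / (1.0 / p2 + c * (1.0 / r) * c)
    x1 = p1 * (x2 / p2 + c * (1.0 / r) * y)
    return x1, p1

x2, p2 = predict(x_prev=1.0, p_prev=0.2, a_tau=0.9, b_tau=1.0, q=0.01)
x1, p1 = update(x2, p2, y=1.1, c=1.0, r=0.05)
assert p1 < p2   # incorporating a measurement shrinks the error-covariance
```

The update is written in information form, matching the inverse-covariance structure of the equations above.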
[Figure 13.2 plots: three panels (send-on-delta, predictive sampling, matched sampling), each showing a sensor signal with the triggered samples y(t1), y(t2), y(t3); the predictive-sampling panel additionally shows the prediction ŷ, and the matched-sampling panel the divergence DKL.]

FIGURE 13.2
Illustrative impression for triggering event samples via send-on-delta, predictive sampling, and matched sampling. The latter one employs a Kullback–Leibler divergence denoted as DKL.
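Matched sampling needs the divergence DKL between a predicted and an updated Gaussian PDF, which is available in closed form. A scalar sketch (the threshold value and the helper name `kl_gaussian` are illustrative assumptions):

```python
import math

def kl_gaussian(m1, p1, m2, p2):
    """D_KL(p1 || p2) for scalar Gaussians N(m1, p1) and N(m2, p2): the
    scalar case of the closed-form divergence used by matched sampling."""
    return 0.5 * (math.log(p2 / p1) + p1 / p2 - 1.0 + (m1 - m2) ** 2 / p2)

# Matched sampling (illustrative threshold): an event is triggered once the
# updated PDF p1 deviates from the predicted PDF p2 by more than Delta.
Delta = 0.1
assert kl_gaussian(1.0, 0.2, 1.0, 0.2) == 0.0    # identical PDFs: no event
assert kl_gaussian(1.5, 0.2, 1.0, 0.2) > Delta   # mean shift: event
```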
Furthermore, the Kullback–Leibler divergence then yields:

DKL(p1(x(t)) ‖ p2(x(t))) := ½ ( log(|P2(t)| |P1(t)|⁻¹) + tr(P2⁻¹(t) P1(t)) − n )
                          + ½ (x̂1(t) − x̂2(t))ᵀ P2⁻¹(t) (x̂1(t) − x̂2(t)).

The above examples will be used to derive a property of event sampling that shall be exploited in several estimation approaches presented later.

13.4.2 A Sensor Value Property

As new measurement samples are triggered at the instants of well-designed events, not receiving a new measurement sample at the estimator implies that no event has been triggered.* Note that this situation still contains valuable information on the sensor value, as is derived next.

It was already shown in [30] that the triggering criteria of many existing event sampling approaches can be generalized into a set-criterion. To that extent, let us introduce H(e, t) ⊂ Rm as a set in the measurement space collecting all the sensor values that y(t) is allowed to take so that no event will be triggered. Then, a generalization of event sampling has the following definition for triggering a next event te, that is,

te = min{t > te−1 | y(t) ∉ H(e, t)}.   (13.3)

Then, the sensor value property derived from the above generalization yields

Proposition 13.1

Let y(t) be sampled with an event strategy similar to (13.3). Then, y(t) ∈ H(e, t) holds for any t ∈ [te−1, te).

* Under the assumption that no package loss can occur.

With send-on-delta, the triggering set H(e, t) is an m-dimensional ball of radius Δ centered at y(te−1), for all te−1 < t < te, while for predictive sampling the triggering set H(e, t) is the same ball but then centered at ŷ(t). Similarly, the triggering set H(e, t) for matched sampling is the ellipsoidal shaped set H(e, t) = L(μ(t),Σ(t)). Values for the center μ(t) ∈ Rm and the covariance Σ(t) ∈ Rm×m defining the boundary of the ellipsoid are found in [20].

Proposition 13.1 formalizes the inherent measurement knowledge of event sampling, that is, not receiving a new measurement for any t ∈ [te−1, te) implies that y(t) is included in H(e, t). A characterization of H(e, t) can be derived prior to the event instant te, as the triggering condition is available. It is exactly this set-membership property of event sampling that gives the additional measurement information for updating estimation results.

13.5 Existing State Estimators

The challenge in event-based state estimation is an unknown time horizon until the next event occurs, if it even occurs at all. Solutions with a-periodic estimators, for example, [4,19,35], perform a prediction of the state x periodically in time when no measurement is received. It was shown in [31] that this leads to a diverging behavior of the error-covariance cov(x̃s) (unless the triggering condition depends on the error-covariance, as is shown in [34–36]). The diverging property was proven by assuming a Gaussian noise representation and no deterministic noise or state-error representations, that is, W = ∅, V = ∅, and X = ∅. To curtail the runaway error-covariance, some alternative a-periodic estimators were proposed in [13,17], focusing on when to send new measurements so as to minimize the estimation error. Criteria developed for sending a new measurement are set to guarantee stability, implying that
[Figure 13.3 diagram: a sensor feeds the sensor system; at event instants t ∈ Te the sample y(t) = y(te) is forwarded to the event-based state estimator (positive information), while at the remaining instants t ∈ {Tp \ Te} only the inclusion y(t) ∈ H(e, t) is implied (negative information).]

FIGURE 13.3
Illustrative setup of a typical event-based state estimator: a new measurement sample y(t) = y(te) arrives at the instants of an event t ∈ Te, while the measurement information y(t) ∈ H(e, t) is implied periodically in time when no event occurs at t ∈ {Tp \ Te}.
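The two kinds of measurement information in Figure 13.3 can be encoded as a tagged value-or-set, sketched below for a scalar send-on-delta sensor (`measurement_info` and its tuple encoding are hypothetical, not the chapter's notation):

```python
# Illustrative scalar sketch of the hybrid measurement information in
# Figure 13.3: positive information is the sample itself, negative
# information only a bounded set, here a send-on-delta interval.
def measurement_info(y_event, y_last, delta, is_event):
    if is_event:
        return ("value", y_event)                      # Y(tk) = {y(te)}
    return ("set", (y_last - delta, y_last + delta))   # Y(tk) = H(e, tk)

kind, info = measurement_info(None, y_last=2.0, delta=0.5, is_event=False)
assert kind == "set" and info == (1.5, 2.5)
kind, info = measurement_info(2.7, y_last=2.0, delta=0.5, is_event=True)
assert kind == "value" and info == 2.7
```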
the stability of these estimators directly depends on the employed event triggering condition.

In the considered problem, a sensor is not limited to one event sampling strategy but employed strategies can be replaced depending on the situation at hand. Therefore, guaranteeing stable estimation results under these circumstances calls for a redesign of existing estimators. Key in this redesign is the additional knowledge on the sensor value that becomes available when no new measurement was sampled, that is, Proposition 13.1. An early solution able to exploit this idea was proposed in [25], where the results of Proposition 13.1 are used to perform a state update periodically in time when no event was triggered. However, the setup therein is restricted to the sampling strategy "send-on-delta" and no proof of stability was derived. The remaining three estimators of this overview do give a proof of stability and are not restricted to one specific event sampling strategy.

An overall summary of the three estimation approaches is presented first, before continuing with more details per approach. Assumptions on the characterization of process noise, measurement noise, and estimation results, that is, purely stochastic, purely deterministic, or a combination of the two, form the basis for distinguishing the three event-based estimation approaches. Apart from that, all three approaches have an estimation setup as depicted in Figure 13.3. The estimator is able to exploit measurements from any event sampling strategy, so that updated estimation results are computed at least periodically in time. An aspect that differs is that the deterministic estimation approach checks the event criteria periodically in time, implying that Te ⊂ Tp and allowing the deterministic estimator to run periodically as well. With the stochastic and the combined estimation approaches, the event instants can occur at any time, implying that Te and Tp will have (almost) no overlapping time instants, enforcing these two estimators to run at both types of instants, that is, Te ∪ Tp. The main challenge in all three estimators is to cope with the hybrid nature of the measurement information available:

• Positive information: At the instants t ∈ Te of an event, a new measurement value y(t) = y(te) is received.

• Negative information: At time-periodic instants t ∈ {Tp \ Te} that are not an event, the measurement information is the property that the sensor value lies within a bounded subset, that is, y(t) ∈ H(e, t).

13.5.1 Stochastic Representations

This section summarizes the stochastic event-based state estimator (sEBSE) originally developed in [30] and later applied for a target tracking application in videos in [24]. The estimator was further extended in [28] toward multiple sensor systems, each having its own triggering criterion. As mentioned, the sEBSE computes updated estimation results at both event and periodic instants, that is, for all tk ∈ T = Te ∪ Tp. It is important to note that the sEBSE proposed in [30] does not consider deterministic noises or state errors, that is, W, V, and X are empty sets. Further, the stochastic process and measurement noise are Gaussian distributed*:

ws(tk) ∼ G(0, W) and vs(tk) ∼ G(0, V), ∀tk ∈ T.   (13.4)

* Recall that x ∼ G(μ, Σ) is a short notation for p(x) = G(x, μ, Σ).

Similarly, the estimation result x(tk) = x̂(tk) + x̃s(tk) of this estimator is Gaussian as well, where x̂(tk) is the estimated mean and the error-covariance P(tk) ∈ Rn×n characterizes x̃s(tk) ∼ G(0, P(tk)). Explicit formulas for finding an approximation of x̂(tk) and P(tk) are summarized next.

Let us assume that e − 1 events were triggered until tk. Then, the new measurement information at tk is either the received measurement value y(tk) = y(te), when tk ∈ Te is an event instant, or it is the inherent knowledge that y(tk) ∈ H(e, tk), when tk ∈ {Tp \ Te} is not an event but a time-periodic instant. This measurement information can be rewritten by introducing the bounded Borel set Y(tk) ⊂ Rm as follows:

Y(tk) := H(e, tk) if tk ∈ {Tp \ Te}, and Y(tk) := {y(te)} if tk ∈ Te,
⇒ y(tk) ∈ Y(tk), ∀tk ∈ T.   (13.5)

With the above result one can derive that the PDF of x(tk), as it is to be determined by this sEBSE,
yields p(x(tk) | y(t0) ∈ Y(t0), ..., y(tk) ∈ Y(tk)), which is denoted as p(x(tk) | Y(t0:k)) for brevity. An exact solution for this PDF is found by applying Bayes' rule, see [21] for more details, that is,

p(x(tk) | Y(t0:k)) = p(x(tk) | Y(t0:k−1)) p(y(tk) ∈ Y(tk) | x(tk)) / ∫_{Rn} p(x(tk) | Y(t0:k−1)) p(y(tk) ∈ Y(tk) | x(tk)) dx(tk).   (13.6)

The sEBSE developed in [30] finds a single Gaussian approximation of the above equation in three steps.

Step 1: Compute the prediction p(x(tk) | Y(t0:k−1)) of (13.6) from the process model in (13.1) and the estimation result at tk−1, that is, p(x(tk−1) | Y(t0:k−1)). Note that the process model is linear and that p(x(tk−1) | Y(t0:k−1)) ≈ G(x(tk−1), x̂(tk−1), P(tk−1)) is Gaussian. Hence, standard (a-periodic) Kalman filtering equations can be used to find the predicted PDF p(x(tk) | Y(t0:k−1)), for some sampling time τk := tk − tk−1, that is*,

p(x(tk) | Y(t0:k−1)) := G(x(tk), x̂(tk−), P(tk−)),   (13.7)

where

x̂(tk−) := Aτk x̂(tk−1),
P(tk−) := Aτk P(tk−1) Aτkᵀ + Bτk W Bτkᵀ.

* The notation tk− is used to emphasize the predictive character of a variable at tk.

Step 2: Formulate the likelihood p(y(tk) ∈ Y(tk) | x(tk)) as a summation of N Gaussians and employ the sum-of-Gaussians approach proposed in [32] to solve p(x(tk) | Y(t0:k)) in (13.6). More details on this likelihood are presented later but, for now, let us point out that this solution will result in a summation of N Gaussians characterized by N normalized weights αq ∈ R+, N means θ̂q ∈ Rn, and N covariances Θq ∈ Rn×n:

p(x(tk) | Y(t0:k)) ≈ Σ_{q∈Z[1,N]} αq(tk) G(x(tk), θ̂q(tk), Θ(tk)).   (13.8)

The variables of this formula have the following expressions:

Θ(tk) = ( P⁻¹(tk−) + Cᵀ R⁻¹(tk) C )⁻¹,
θ̂q(tk) = Θ(tk) ( P⁻¹(tk−) x̂(tk−) + Cᵀ R⁻¹(tk) ŷq(tk) ),
ωq(tk) = e^{−0.5 (ŷq(tk) − Cx̂(tk−))ᵀ (CP(tk−)Cᵀ + R(tk))⁻¹ (ŷq(tk) − Cx̂(tk−))},
αq(tk) = ωq(tk) ( Σ_{q∈Z[1,N]} ωq(tk) )⁻¹.

Herein, ωq(tk) ∈ R+, ŷq(tk) ∈ Rm, and R(tk) ∈ Rm×m are obtained from the likelihood function, which is modeled with the following weighted summation of N Gaussians:

p(y(tk) ∈ Y(tk) | x(tk)) ≈ Σ_{q∈Z[1,N]} ωq(tk) G(ŷq(tk), Cx(tk), R(tk)).   (13.9)

Step 3: Approximate the result of Step 2 from a sum of N Gaussians into the desired single Gaussian p(x(tk)) = G(x(tk), x̂(tk), P(tk)). The mean x̂(tk) and error-covariance P(tk) should correspond to the mean and covariance of p(x(tk) | Y(t0:k)) in (13.8), yielding

x̂(tk) = Σ_{q∈Z[1,N]} αq(tk) θ̂q(tk),

and

P(tk) = Σ_{q∈Z[1,N]} αq(tk) ( Θ(tk) + (x̂(tk) − θ̂q(tk)) (x̂(tk) − θ̂q(tk))ᵀ ).

Stability of the above stochastic estimator sEBSE has been derived in [30].

Theorem 13.1

Let H(e, tk) be a given bounded Borel set for all k ∈ Z+ and let (Aτs, C) be an observable pair. Then, the sEBSE results in a stable estimate, that is, limk→∞ λmax(P(tk)) exists and is bounded.

A key aspect of the sEBSE is to turn the set inclusion y(tk) ∈ H(e, tk) into a stochastic likelihood characterized by N Gaussians (see Equation 13.9). Open questions on how to come up with such a characterization are discussed in the next example, which finalizes the sEBSE.

EXAMPLE 13.1: From set inclusion to likelihood
This example finds a solution for the likelihood p(y(tk) ∈ Y(tk) | x(tk)) in (13.9) by starting from the inclusion y(tk) ∈ Y(tk). Normally, that is, when y(tk) is an actual measurement, a likelihood is of the form p(y(tk) | x(tk)). The process model in (13.2) together with
the noise assumptions on vd and vs in (13.4) then give that such a standard likelihood is Gaussian, that is,

p(y(tk) | x(tk)) = G(y(tk), Cx(tk), V).   (13.10)

In the above estimator, y(tk) has been generalized into a set inclusion y(tk) ∈ Y(tk), which can be regarded as a quantized measurement. Results in [18] point out that the corresponding likelihood of this inclusion is found by a convolution of p(y(tk) | x(tk)) in (13.10) over all possible measurement values y(tk) ∈ Y(tk). In a fully stochastic description, this is similar to a convolution of the PDF in (13.10) with a uniform PDF pY(tk)(y(tk)) being constant (> 0) for all y(tk) ∈ Y(tk) and 0 otherwise. The likelihood of such a quantized measurement y(tk) ∈ Y(tk) is then found via

p(y(tk) ∈ Y(tk) | x(tk)) = ∫_{Rm} p(y(tk) | x(tk)) pY(tk)(y(tk)) dy(tk).   (13.11)

This uniform PDF has a hybrid expression in line with Y(tk) in (13.5). At instants of an event, the set is the actual measurement Y(tk) := {y(te)} and the PDF pY(tk)(y(tk)) is described with a delta function at y(te). Periodically in time, the set is defined by the triggering criterion Y(tk) = H(e, tk), at which pY(tk)(y(tk)) will be approximated by a summation of N Gaussians. As such, the following characterization is introduced:

pY(tk)(y(tk)) ≈ (1/N) Σ_{q=1}^{N} G(y(tk), ŷq(tk), U(tk)) if tk ∈ {Tp \ Te},
pY(tk)(y(tk)) ≈ δ(y(tk) − y(te)) if tk ∈ Te.

Values of ŷq(tk) ∈ Rm, for all q ∈ Z[1,N], are retrieved by taking N equidistant samples in the event triggering set Y(tk) = H(e, tk). Each sample represents the mean of a Gaussian G(y(tk), ŷq(tk), U(tk)), where U ∈ Rm×m is a constant for each Gaussian (see Figure 13.4).

Then, substituting the above approximation and p(y(tk) | x(tk)) of (13.10) into the expression of the "set-inclusion" likelihood in (13.11) gives the result for p(y(tk) ∈ Y(tk) | x(tk)) already pointed out in (13.9), where

• N = 1, ŷ1(tk) = y(te), and R(tk) = V when tk ∈ Te is an event.

• N ≥ 1, ŷq(tk) are the equidistantly sampled values of H(e, tk), and R(tk) = U(tk) + V when tk ∈ {Tp \ Te} is a periodic instant.

13.5.2 Deterministic Representations

This section reviews the deterministic event-based state estimator (dEBSE) developed in [27]. The main difference with respect to the previous approach is that the dEBSE operates on time-periodic instants only, that is, tk ∈ T := Tp and Te ⊆ Tp. As such, the sensor value is periodically checked on violation of the event triggering criterion, to generate new measurements. This simplifies the a-periodic process model in (13.1), as the sampling time is the constant τs, that is, Ā := Aτs and B̄ := Bτs, resulting in:

x(tk) = Āx(tk−1) + B̄w(tk−1);   (13.12)
y(tk) = Cx(tk) + v(tk).   (13.13)

It is important to note that the dEBSE does not assume stochastic noise terms. Further, the deterministic process and measurement noise are in line with the problem formulation presented in Section 13.3:

wd(tk) ∈ W and vd(tk) ∈ V, ∀tk ∈ Tp,   (13.14)

where W ⊂ Rl and V ⊂ Rm are unbiased. In line with these noise representations, the dEBSE assumes a deterministic representation of its estimation results, that is, x̃s(tk) = 0 for all tk ∈ T. The deterministic estimation
[Figure 13.4 plots: two panels comparing the real uniform PDF p(y(tk)) on [−2, 2] with its sum-of-Gaussians approximation, for N = 3 (left) and N = 8 (right).]

FIGURE 13.4
Two approximations of the uniform PDF p[−2,2](y(tk)) by a summation of either 3 or 8 Gaussians having equidistantly sampled means regarding their measurement value ŷq(tk). The covariance matrix U(tk) of each individual Gaussian function is computed with the heuristic expression U(tk) = c (0.25 − 0.05e^{−4(N−1)/15} − 0.08e^{−4(N−1)/180}), where the positive scalar c is equal to the Euclidean distance between two neighboring means.
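The N-Gaussian approximation of the uniform PDF in Figure 13.4, including the quoted variance heuristic, can be sketched as follows (pure Python; the placement of the equidistant means at the cell midpoints of the interval is an illustrative choice):

```python
import math

def gaussian(y, mean, var):
    return math.exp(-0.5 * (y - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def uniform_approx(y, lo, hi, N):
    """Approximate the uniform PDF on [lo, hi] by an equal-weight mixture of
    N Gaussians with equidistant means, using the variance heuristic quoted
    in the caption of Figure 13.4."""
    step = (hi - lo) / N
    means = [lo + (q + 0.5) * step for q in range(N)]
    c = step   # distance between two neighboring means
    u = c * (0.25 - 0.05 * math.exp(-4.0 * (N - 1) / 15.0)
                  - 0.08 * math.exp(-4.0 * (N - 1) / 180.0))
    return sum(gaussian(y, m, u) for m in means) / N

# More mixture components flatten the approximation toward the true
# uniform density 1/4 on [-2, 2].
p3 = uniform_approx(0.0, -2.0, 2.0, 3)
p8 = uniform_approx(0.0, -2.0, 2.0, 8)
```

With N = 8 the mixture evaluated at the interval center already comes very close to the uniform density, matching the visual impression of the right panel.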
error is approximated as a circular set based on some p-norm, that is,

‖x̂(tk) − x(tk)‖p ≤ γ(tk), ∀tk ∈ Tp and some γ(tk) > 0.

The event triggering criterion presented next follows a similar approach, for which it is assumed that e − 1 events were triggered until tk. Then, for some known measurement value ŷcond conditioned on the prior estimation result, the event triggering criterion of the dEBSE adopts a p-norm criterion, that is,

an event is triggered at tk if ‖y(tk) − ŷcond(tk)‖p > Δ,
no event is triggered at tk if ‖y(tk) − ŷcond(tk)‖p ≤ Δ.

The new measurement information at tk, which is denoted with the implied measurement z(tk), is either the received measurement y(tk), when tk ∈ Te is an event instant, or it is the knowledge that ‖y(tk) − ŷcond(tk)‖p ≤ Δ, when tk ∈ {Tp \ Te} is not an event instant. The latter measurement information is treated as a measurement realization via z(tk) = ŷcond(tk) + δ(tk), where δ(tk) is bounded by a circular set of known radius.

The implied measurement values z(tk) available at any sample instant tk ∈ T are used to compute updated values of the estimated mean x̂(tk). A value for the error bound γ(tk) is not actively tracked. Instead, it is proven that there exists a bound on γ(tk) for all tk ∈ T. Yet, before presenting this bound, let us start with the estimation procedure adopted by the dEBSE, which is based on the moving horizon approach presented in [2].

This moving horizon approach focuses on estimating the state at tk−N given all measurements from the current time instant tk back to tk−N.* As such, the method focuses on estimating x̂(tk−N) given z(tk−N:k). The estimation result at the current instant tk is then a forward prediction of x̂(tk−N). Let us denote this forward prediction as x̂−(tk), to point out that it is still a prediction, which will receive its final estimation result at tk+N. Then, the forward prediction yields

x̂−(tk) = Ā^N x̂(tk−N).

* For clarity of expression, details on the estimation approach directly after initialization and until the N-th sample instant are not presented. The interested reader is referred to [27] for more details.

A solution for computing x̂(tk−N) requires the selection of several design parameters, such as α ∈ R+ and WN ∈ Rn×Nm. Then, the estimated mean x̂(tk−N) based on its prior result x̂(tk−N−1) yields

x̂(tk−N) = (αI + KNᵀ KN)⁻¹ ( α Ā x̂(tk−N−1) + KNᵀ WN [ z(tk−N) ; ··· ; z(tk) ] ),   (13.15)

where

KN = WN [ C ; CĀ ; ··· ; CĀ^{N+1} ].   (13.16)

The employed values for α and WN are an indication of the confidence in the measurements. A suitable value for WN, depending on a positive scalar β, is found via a singular value decomposition USVᵀ := [ C ; CĀ ; ··· ; CĀ^{N+1} ] of the observability matrix. Then, since some singular values are likely zero, one can construct a weighted "pseudo-inverse" of this observability matrix, that is,

WN = √β V S⁺ Uᵀ.   (13.17)

The advantage of the above weight matrix selection is that KN = [ 0·I_{n1} , 0 ; 0 , √β·I_{n2} ], where n1 is the number of unobservable state elements and n2 := n − n1 is the number of observable state elements. With this result one can make the following generalization, which is instrumental for the stability result presented afterwards:

(αI + KNᵀ KN)⁻¹ α Ā = [ Ā11 , Ā12 ; 0 , (α/(α+β)) Ā22 ].

Theorem 13.2

Let the pair (Ā, C) be detectable, let WN follow (13.17), and let α > 0 and β be chosen such that |λmax( α(α+β)⁻¹ Ā22 )| < 1. Then, the dEBSE is a stable observer for the process in (13.12), that is, ‖x̂(tk) − x(tk)‖ < ε for all tk ∈ Tp if ‖x̂(t0) − x(t0)‖ < η, for some bounded ε, η > 0.

A proof of the above theorem is found in [27].

Note that the estimators in this section address either stochastic or deterministic noise representations. Yet, more realistic scenarios favor estimators that can address both types of noises. In most practical cases, the process and measurement noise are represented by Gaussian distributions with no deterministic part. Yet, the implied measurement information when exploiting event sampling strategies is typically modeled as additive deterministic noise on the sensor measurement y. An estimator able to cope with both types of noise representations is proposed in the next section.
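The structure of the moving-horizon update (13.15), a prediction-regularized least-squares fit to the implied measurements, can be sketched for a scalar process (an illustrative sketch; the cost weights and horizon length are assumptions, not the chapter's exact design of WN):

```python
# Scalar sketch of a moving-horizon update in the spirit of (13.15): the
# smoothed state minimizes the prediction-regularized least-squares cost
#   alpha*(x - a*x_prev)**2 + sum_i (w*(z[i] - c*a**i*x))**2,
# where z holds the implied measurements over the horizon.
def mhe_update(x_prev, z, a, c, alpha, w):
    K = [w * c * a ** i for i in range(len(z))]   # weighted observability
    num = alpha * a * x_prev + sum(k * w * zi for k, zi in zip(K, z))
    den = alpha + sum(k * k for k in K)
    return num / den

# With implied measurements consistent with x = 1 (a = c = 1), the update
# pulls any prior estimate toward 1; alpha sets the confidence in the prior.
x_hat = mhe_update(x_prev=0.0, z=[1.0, 1.0, 1.0], a=1.0, c=1.0,
                   alpha=0.5, w=1.0)
assert 0.8 < x_hat < 1.0
```

The regularization term plays the same stabilizing role as the αI term in (13.15): it keeps the update well posed even when the measurements alone do not determine the state.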
13.6 A Hybrid State Estimator

This section presents a hybrid event-based state estimator (hEBSE) allowing both stochastic and deterministic representations of noise and estimation results. Yet, for clarity of exposition, the presented estimator does not consider deterministic process noises, that is, W = ∅. The hEBSE is based on existing estimators combining stochastic and deterministic measurement noises, as they were proposed in [26] and to some extent in [11]. In line with the previous sEBSE, updated estimation results are computed at both event instants as well as at periodic instants, that is, for all tk ∈ T = Te ∪ Ts. It is important to note that the proposed estimator assumes a Gaussian distribution of the stochastic noises and an ellipsoidal set-inclusion for the deterministic noise parts.∗ More precisely, let us introduce the following noise characteristics in line with the problem formulation of Section 13.3:

    ws(tk) ∼ G(0, W),  vd(tk) ∈ L(0,D)  and  vs(tk) ∼ G(0, V),  ∀tk ∈ T.     (13.18)

Herein, W ∈ R^{l×l} is a positive definite matrix characterizing process noise, while D, V ∈ R^{m×m} are positive definite matrices defining the ellipsoidal shaped set L(0,D) and the Gaussian function G(0, V) for characterizing measurement noise. The hEBSE further defines x(tk) = x̂(tk) + x̃d(tk) + x̃s(tk) for representing its estimation results, consisting of a stochastic and a deterministic part as introduced in Section 13.3. The estimation result has a mean x̂(tk) and its estimation errors are characterized as follows:

    x̃d(tk) ∈ L(0,X(tk)),  x̃s(tk) ∼ G(0, P(tk)),  ∀tk ∈ T.     (13.19)

Herein, X, P ∈ R^{n×n} are positive definite matrices defining the ellipsoidal shaped set L(0,X(tk)) and the Gaussian G(0, P(tk)), respectively.

∗ Recall that x ∼ G(μ, Σ) is a short notation for p(x) = G(x, μ, Σ) and that Lμ,Σ ⊂ R^q is an ellipsoidal shaped set defined by Lμ,Σ := {x ∈ R^q | (x − μ)ᵀ Σ⁻¹ (x − μ) ≤ 1}.
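The footnote's two notations are easy to make concrete. The sketch below (all names and numbers are illustrative, not taken from the chapter) implements the membership test of an ellipsoidal set L(μ,Σ) in R²:

```python
# Membership test for an ellipsoidal set L(mu, Sigma) in R^2, i.e.,
# L(mu, Sigma) = { x : (x - mu)^T Sigma^{-1} (x - mu) <= 1 }.
# Sigma is a 2x2 positive definite shape matrix; names are illustrative.

def inv2(S):
    """Inverse of a 2x2 matrix given as [[a, b], [c, d]]."""
    (a, b), (c, d) = S
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def in_ellipsoid(x, mu, Sigma):
    """True iff x lies in the ellipsoidal set L(mu, Sigma)."""
    r = [x[0] - mu[0], x[1] - mu[1]]
    Si = inv2(Sigma)
    # quadratic form r^T Sigma^{-1} r
    q = (r[0] * (Si[0][0] * r[0] + Si[0][1] * r[1])
         + r[1] * (Si[1][0] * r[0] + Si[1][1] * r[1]))
    return q <= 1.0

# Axis-aligned example: semi-axes 2 and 1 correspond to Sigma = diag(4, 1).
Sigma = [[4.0, 0.0], [0.0, 1.0]]
print(in_ellipsoid([1.9, 0.0], [0.0, 0.0], Sigma))  # True
print(in_ellipsoid([0.0, 1.1], [0.0, 0.0], Sigma))  # False
```

Note that the shape matrix enters through its inverse, so a set with semi-axis Δ along every direction has the shape matrix Δ²I rather than ΔI.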
REMARK 13.3 The matrices D and X(tk) are referred to as shape matrices, whereas W, V, and P(tk) are covariance matrices. Moreover, since x̃d and x̃s denote an estimation error, let us refer to X(tk) and P(tk) as the error-shape matrix and the error-covariance matrix, respectively.

Explicit formulas for finding values of x̂(tk), P(tk), and X(tk) are presented next. To that extent, it is assumed that the process model in (13.1) is available, along with the values for W, V, and D. The proposed hEBSE is presented in two stages. Firstly, the implied measurement information resulting from event triggering criteria is integrated with the deterministic part of the measurement noise vd. Secondly, the state estimation formulas of the hEBSE are presented, which are based on the combined stochastic and set-membership estimator proposed in [26].

13.6.1 Implied Measurements

The measurement information available at any sample instant, that is, event or time-periodic, has already been derived in Sections 13.4.2 and 13.5.1. To summarize those results, let us assume that e − 1 events were triggered until tk. Then, the measurement information at tk is either a received measurement value y(tk) = y(te), when tk ∈ Te is an event instant, or it is a set-inclusion y(tk) ∈ H(e, tk), when tk ∈ Tp \ Te is not an event but a time-periodic instant. This measurement information can be rewritten as follows:

    y(tk) ∈  H(e, tk)   if tk ∈ Tp \ Te,
             {y(te)}    if tk ∈ Te.                    (13.20)

Now, let us assume that the event triggering set H(e, tk) is (or can be approximated by) the ellipsoidal set L(ŷe(tk),He(tk)), for example,

    [ŷe(tk), He(tk)] := arg min_{ŷ ∈ R^m, H ≻ 0} tr(H),
    subject to: L(ŷ,H) ⊇ H(e, tk).                     (13.21)

Similarly, the remaining set {y(te)} of the measurement information in (13.20) can be approximated by the ellipsoidal set lim_{ε↓0} L(y(te),εI). Substituting these values into (13.20) and considering ε ↓ 0 gives

    y(tk) ∈  L(ŷe(tk),He(tk))   if tk ∈ Tp \ Te,
             L(y(te),εI)        if tk ∈ Te.

One can turn the above set-membership into an equality by introducing the unbiased noise term e(tk), such that e(tk) ∈ L(0,He(tk)) if tk ∈ Tp \ Te and e(tk) ∈ L(0,εI) if tk ∈ Te, resulting in the realization

    y(tk) + e(tk) =  ŷe(tk)   if tk ∈ Tp \ Te,
                     y(te)    if tk ∈ Te.              (13.22)

Since y(tk) = Cx(tk) + vs(tk) + vd(tk) already contains a deterministic noise term vd ∈ L(0,D), one can introduce ṽd(tk) := vd(tk) + e(tk) satisfying the following set-inclusion, for some ς ∈ (0, 1), that is,
Time-Periodic State Estimation with Event-Based Measurement Updates 271
[Figure 13.5: two panels, "Send-on-delta" (left) and "Matched sampling" (right), each showing the nested triggering sets H(e, te−1 + 0.1) through H(e, te−1 + 0.4) in the plane.]

FIGURE 13.5
Illustrative examples of the event triggering sets H(e, t) ⊂ R² that result from send-on-delta and matched sampling for the periodic sample instants te−1 + 0.1 until te−1 + 0.4 seconds. The initial measurement value is y(te−1) = (0 0)ᵀ for both strategies.

    ṽd(tk) ∈  L(0,D) ⊕ L(0,He(tk)) ⊆ L(0,(1+ς⁻¹)D+(1+ς)He(tk))   if tk ∈ Tp \ Te,
              L(0,D) ⊕ L(0,εI) = L(0,D)                          if tk ∈ Te,  ε ↓ 0.

A suitable value for ς minimizes the trace of (1 + ς⁻¹)D + (1 + ς)He(tk).
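For the trace criterion this minimization has a closed form: tr((1 + ς⁻¹)D + (1 + ς)He) is minimized at ς = sqrt(tr(D)/tr(He)), which follows by setting the derivative −tr(D)/ς² + tr(He) to zero. The chapter does not state this formula explicitly; the sketch below checks it against a coarse grid search for illustrative values:

```python
# Trace of the outer ellipsoidal bound for a Minkowski sum of two sets
# with shape matrices D and He: tr((1 + 1/s) D + (1 + s) He).
import math

def bound_trace(s, trD, trH):
    """Trace of the outer bound as a function of the weight s > 0."""
    return (1.0 + 1.0 / s) * trD + (1.0 + s) * trH

trD, trH = 2.0, 8.0                     # illustrative traces of D and He
s_opt = math.sqrt(trD / trH)            # closed-form minimizer, here 0.5
grid = [0.01 * i for i in range(1, 300)]
s_grid = min(grid, key=lambda s: bound_trace(s, trD, trH))
print(round(s_opt, 6), round(bound_trace(s_opt, trD, trH), 6))  # 0.5 18.0
```

The grid search never beats the closed-form value, which is a convenient sanity check when implementing the bound.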
Now, combining the measurement information in (13.22) with the above noise term ṽd and the process model in (13.1) results in a new (implied) measurement z(tk) having the following measurement model and measurement realizations, for a received y(te) and a computed ŷe(tk):

    model:        z(tk) = Cx(tk) + vs(tk) + ṽd(tk),            (13.23)
    realization:  z(tk) =  ŷe(tk)   if tk ∈ Tp \ Te,
                           y(te)    if tk ∈ Te.                 (13.24)

The stochastic measurement noise still follows vs(tk) ∼ G(0, V), while the deterministic measurement noise ṽd = vd + e follows ṽd(tk) ∈ L(0,E(tk)) and

    E(tk) :=  (1 + ς⁻¹)D + (1 + ς)He(tk)   if tk ∈ Tp \ Te,
              D                            if tk ∈ Te.

EXAMPLE 13.2: Ellipsoidal sets for event sampling strategies One can derive that the event sampling strategies send-on-delta and matched sampling, as presented in Section 13.4.1, will result in an ellipsoidal shaped triggering set H(e, tk). The center and shape matrix of this set directly define values for ŷe(tk) and He(tk) in (13.21), yielding

• Send-on-delta: ŷe(tk) = y(te−1) and He(tk) = Δ²I.

• Matched sampling: In case the sensor employs the asynchronous Kalman filter of Remark 13.2 for computing the divergence, with cov(ws) = W and cov(vs) = V, then ŷe(tk) = Cx̂2(tk⁻) and He(tk) = 2(Δ − α)⁻¹(V⁻¹CP1(tk)P2⁻¹(tk)P1(tk)CᵀV⁻¹)⁻¹, where α = 0.5(log(|P2(tk)| |P1(tk)|⁻¹) + tr(P2⁻¹(tk)P1(tk)) − n).

• Some illustrative examples of the ellipsoidal sets for send-on-delta and matched sampling are found in Figure 13.5.

13.6.2 Estimation Formulas

The hEBSE exploits the (implied) measurement values of z(tk) as proposed in (13.23), for all tk ∈ T. The hEBSE aims to solve the estimation problem for the state representation x(tk) = x̂(tk) + x̃d(tk) + x̃s(tk). The underlying idea is to compute a state estimate x̂ that minimizes the (maximum possible) mean squared error (MSE) of x̂ − x in the presence of both stochastic and deterministic noises (or uncertainties). For clarity, let us point out that the error associated with the state estimate x̂(t) is composed of a stochastic and a deterministic part, that is,

    x̂(tk) − x(tk) = x̃s(tk) + x̃d(tk),  where  x̃s(tk) ∼ G(0, P(tk))  and  x̃d(tk) ∈ L(0,X(tk)).

The advantage of assuming an ellipsoidal set-inclusion for x̃d and a Gaussian distribution for x̃s is that estimation errors are characterized by P ≻ 0 and X ≻ 0. Since the deterministic error is nonstochastic and independent from stochastic errors, note that the considered MSE then yields

    E[(x̂(tk) − x(tk))ᵀ(x̂(tk) − x(tk))] = E[x̃s(tk)ᵀ x̃s(tk)] + x̃d(tk)ᵀ x̃d(tk)
                                        ≤ tr(P(tk)) + tr(X(tk))               (13.25)
                                        = tr(P(tk) + X(tk)),

where the first term equals tr(P(tk)) and the second term is bounded by tr(X(tk)).

Thus, the MSE is bounded by the trace of P(tk) + X(tk). The estimator proposed in [26] forms the basis of the hEBSE, as it minimizes exactly this bound. Additional to standard Kalman filtering, the estimate x̂ is associated not only with an error-covariance P but also with an error-shape matrix X. The values of these matrices and of the state estimate x̂ are computed in line with standard estimation approaches, that is, with a prediction step and a measurement update.

Step 1 Compute the predicted values∗ x̂(tk⁻), P(tk⁻), and X(tk⁻) given their prior results at k − 1. Note that the process model in (13.1) is linear and that ws(tk) is unbiased and characterized by W ≻ 0. Moreover, the estimation error of x̂(tk) is characterized by positive definite matrices P(tk⁻) and X(tk⁻) as well. These prerequisites are in line with standard Kalman filtering (extended with shape matrices), and it was shown in [26] that a similar prediction step can be employed here as well, that is,

    x̂(tk⁻) = Aτk x̂(tk−1),
    P(tk⁻) = Aτk P(tk−1) Aτkᵀ + Bτk W Bτkᵀ,       (13.26)
    X(tk⁻) = Aτk X(tk−1) Aτkᵀ.

Recall that τk := tk − tk−1 is the sampling time. Evidently, this prediction can be computed in closed form and only differs from a Kalman filter by the additional third expression determining X(tk⁻).

∗ The notation tk⁻ is used to emphasize the predictive character of a variable at tk.
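Under the stated assumptions, the prediction step (13.26) is a direct matrix computation. The sketch below applies it to the double-integrator model used later in Section 13.7, with plain nested-list helpers so the example stays self-contained (no third-party libraries):

```python
# Sketch of the hEBSE prediction step (13.26) for a double-integrator
# model: A = [[1, tau], [0, 1]], B = [[tau^2/2], [tau]], scalar W.

def matmul(A, B):
    """Product of two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def predict(xh, P, X, A, B, W):
    """(13.26): propagate the mean, error covariance, and error shape."""
    xh_pred = matmul(A, xh)
    P_pred = madd(matmul(matmul(A, P), transpose(A)),
                  matmul(matmul(B, [[W]]), transpose(B)))
    X_pred = matmul(matmul(A, X), transpose(A))
    return xh_pred, P_pred, X_pred

tau, W = 0.1, 1.1                        # values from the case study
A = [[1.0, tau], [0.0, 1.0]]
B = [[tau ** 2 / 2.0], [tau]]
xh = [[0.1], [0.1]]                      # initial mean from the case study
P = [[0.01, 0.0], [0.0, 0.01]]           # P(0) = 0.01 * I
X = [[0.0, 0.0], [0.0, 0.0]]             # X(0) = 0
xh_p, P_p, X_p = predict(xh, P, X, A, B, W)
print(round(xh_p[0][0], 6), round(P_p[1][1], 6))  # 0.11 0.021
```

As in (13.26), the only difference from a standard Kalman filter prediction is the third returned quantity, the propagated error-shape matrix.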
Step 2 Compute the updated values x̂(tk), P(tk), and X(tk), given their prediction results from Step 1 and the measurement z(tk) of (13.23). Again, the process model in (13.1) is linear, and the unbiased measurement noises ṽd(tk) and vs(tk) are characterized by E(tk) ≻ 0 and V ≻ 0, respectively, where ṽd is a substitute of vd to include the event triggering set. As such, one can employ an unbiased update expression in line with any linear estimator, for some gain K(tk) ∈ R^{n×m}, that is,

    x̂(tk) = (I − K(tk)C) x̂(tk⁻) + K(tk) z(tk).     (13.27)

In line with the above expression, the updated error-covariance and error-shape matrices also follow standard Kalman filtering expressions, yielding

    P(tk) = (I − K(tk)C) P(tk⁻) (I − K(tk)C)ᵀ + K(tk) V K(tk)ᵀ,     (13.28)

    X(tk) = (1/(1 − ω(tk))) (I − K(tk)C) X(tk⁻) (I − K(tk)C)ᵀ
            + (1/ω(tk)) K(tk) E(tk) K(tk)ᵀ.                         (13.29)

The parameter ω(tk) ∈ (0, 1) in (13.29) guarantees that the shape matrix X(tk) corresponds to an outer ellipsoidal approximation of two ellipsoidal sets, the sets being a weighted prediction error and a measurement error, that is, L(0,(I−K(tk)C)X(tk⁻)(I−K(tk)C)ᵀ) and L(0,K(tk)E(tk)K(tk)ᵀ).

Results in [26] point out that the gain K(tk) is given by

    K(tk) = ( P(tk⁻)Cᵀ + (1/(1 − ω(tk))) X(tk⁻)Cᵀ )
            · ( C P(tk⁻)Cᵀ + (1/(1 − ω(tk))) C X(tk⁻)Cᵀ + V + (1/ω(tk)) E(tk) )⁻¹.   (13.30)

A one-dimensional convex optimization problem for ωopt ∈ (0, 1) that minimizes the posterior MSE bound in (13.25) remains to be solved, for example, with the aid of Brent's method.

REMARK 13.4 The derived gain (13.30) embodies a systematic and consistent generalization of the standard Kalman filter for additional unknown but bounded uncertainties. Accordingly, K in (13.30) reduces to the standard Kalman gain in the absence of set-membership errors, that is, Q = 0 and E(tk) = 0, implying that X(tk) = 0 for all tk ∈ T. In the opposite case of vanishing error-covariance matrices, the hEBSE yields a deterministic estimator of intersecting ellipsoidal sets.

REMARK 13.5 The sum of the error matrices is expressed via

    P(tk) + X(tk) = ω(tk) { [ω(tk)P(tk⁻) + X(tk⁻)]⁻¹
                    + (1 − ω(tk)) Cᵀ [(1 − ω(tk))V + E(tk)]⁻¹ C }⁻¹,   (13.31)

which can be utilized to determine ωopt that minimizes the right-hand-side bound in (13.25). The special cases ωopt = 0 and ωopt = 1 will not be considered.

This completes the hybrid EBSE, resulting in a correct description of the estimation results when including the event triggering set H(e, tk). Before continuing with an observation case study, let us first point out the stability of estimation errors.
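The measurement update of Step 2 can be sketched in scalar form (n = m = 1), with a coarse grid search over ω standing in for Brent's method; all numeric values are illustrative:

```python
# Scalar sketch of the hEBSE update (13.27)-(13.30): compute the gain for
# a candidate omega, update P and X, and pick the omega that minimizes
# the posterior bound P + X. A grid search stands in for Brent's method.

def update(P, X, C, V, E, omega):
    """One scalar hEBSE measurement update; returns (K, P_new, X_new)."""
    S = P + X / (1.0 - omega)                   # weighted prior uncertainty
    K = S * C / (C * S * C + V + E / omega)     # gain (13.30), scalar form
    P_new = (1.0 - K * C) ** 2 * P + K ** 2 * V                        # (13.28)
    X_new = (1.0 - K * C) ** 2 * X / (1.0 - omega) + K ** 2 * E / omega  # (13.29)
    return K, P_new, X_new

P, X, C, V, E = 1.0, 0.5, 1.0, 0.1, 0.2        # illustrative predicted values
omegas = [i / 100.0 for i in range(1, 100)]
best = min(omegas, key=lambda w: sum(update(P, X, C, V, E, w)[1:]))
K, P_up, X_up = update(P, X, C, V, E, best)
print(0.0 < best < 1.0, P_up + X_up < P + X)  # True True
```

Setting X = 0 and E = 0 reproduces the standard scalar Kalman gain PC/(C²P + V), in line with Remark 13.4.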

13.6.3 Asymptotic Analysis

The hybrid EBSE is said to be stable iff both P(tk) and X(tk) have finite eigenvalues for tk → ∞. Proving stability is done in two steps:

1. Introduce a Γ(tk), such that P(tk) + X(tk) ⪯ Γ(tk) holds for all tk ∈ T.

2. Show that Γ(tk) ⪯ Σ(tk), for some Σ(tk) being the result of a standard, periodic Kalman filter.

Then, the proposed estimator enjoys the same stability conditions as a (standard) periodic Kalman filter would have, that is, depending on detectability and observability properties. For clarity of the presented results, it is assumed that the scalar weight ω is constant at all sampling instants, that is, the hybrid EBSE employs ω(tk) = ω in (13.28), (13.29), (13.30), and (13.31) for all tk ∈ T, though the results can be generalized to weights varying in time.

Let us start with the first step, where P(tk) + X(tk) ⪯ Γ(tk). After this step, one can guarantee that iff Γ(tk) ≻ 0 is asymptotically stable, then both P(tk) and X(tk) shall be asymptotically stable as well. In line with the results of (13.26) and of (13.31), let us introduce the following update equation for Γ(tk), that is,

    Γ(tk⁻) = Aτk Γ(tk−1) Aτkᵀ + Bτk W Bτkᵀ,
    Γ(tk) = ω { [Γ(tk⁻)]⁻¹ + (1 − ω) Cᵀ [(1 − ω)V + E(tk)]⁻¹ C }⁻¹.   (13.32)

Theorem 13.3

Consider P(tk) and X(tk) of the hybrid EBSE, for some constant ω ∈ (0, 1), and consider Γ(tk) in (13.32). Further, let Γ(0) := P(0) + X(0). Then, P(tk) + X(tk) ≺ Γ(tk) holds for all tk ∈ T.

The proof of this theorem is found in the appendix. Let us continue with the second step of this asymptotic analysis, for which we will introduce Σ(tn) computed via an update equation similar to a time-periodic Kalman filter, that is, for all time-periodic instants tn ∈ Tp,

    Σ(tn⁻) = Ā Σ(tn−1) Āᵀ + B̄ W B̄ᵀ,
    Σ(tn) = ω^(−κ+1) { [Σ(tn⁻)]⁻¹ + (1 − ω) Cᵀ [(1 − ω)V + E(tn)]⁻¹ C }⁻¹.   (13.33)

The scalar κ ∈ Z⁺ is an upper bound on the number of events that can occur between two consecutive periodic sample instants tn and tn−1. The constant system matrices above are defined as Ā := Aτs and B̄ := Bτs.

Theorem 13.4

Consider Γ(tk) in (13.32) and Σ(tn) in (13.33) and let Σ(0) := Γ(0). Then, Γ(tk) ≺ Σ(tn) holds for all tk = tn and tn ∈ Tp.

The proof of this theorem is found in the appendix. Moreover, the results in Theorem 13.4 guarantee that the proposed hybrid EBSE has asymptotically stable estimation results in case (Ā, C) is detectable.

13.7 Illustrative Case Study

Results of the stochastic and hybrid event-based state estimators are studied here in terms of estimation errors for tracking a 1D object. The process model in line with (13.1) is a double integrator, that is,

    x(t) = [1 τ; 0 1] x(t − τ) + [τ²/2; τ] a(t − τ),
    y(t) = [1 0] x(t) + vs(t).

The state vector x(t) combines the object's position and speed. Further, a(t) = (1/30) t · cos((1/10) t) denotes the object's acceleration, while only the position is measured in y(t). Since the acceleration is assumed unknown, the process model in (13.1) is characterized with a process noise w(t) := a(t). During the simulation, the acceleration is bounded by |a(t)| ≤ 0.9, due to which a suitable covariance in line with [6] is cov(a(t)) = 1.1, resulting in an unbiased distribution p(w(t)) with covariance W = 1.1. Further, the sampling time is τs = 0.1 seconds and the sensor noise covariance is set to V = 2 · 10⁻³. The object's true position, speed, and acceleration are depicted in Figure 13.6.

[Figure 13.6: two panels over 0–30 s, plotting position (m) up to about 15 and speed (m/s) between −2 and 2.]

FIGURE 13.6
The position, speed, and acceleration of the tracked object.

The hEBSE of Section 13.6, combining stochastic and set-membership measurement information, is compared to the sEBSE presented in Section 13.5.1, limited to stochastic representations. Both estimators start with the initial estimation results x̂(0) = [0.1 0.1]ᵀ and P(0) = 0.01 · I, while X(0) = 0 is chosen as the initial ellipsoidal shape matrix for the hEBSE.
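The truth model of this case study can be reproduced in a few lines. The sketch below propagates the double integrator with the stated acceleration and applies a send-on-delta trigger to the noise-free position; the threshold Δ is an assumed illustrative value, since the chapter does not state the one used in its simulations:

```python
# Truth model of Section 13.7 plus a send-on-delta trigger on the
# position. Delta = 0.1 is an assumption for illustration only.
import math

tau_s, Delta = 0.1, 0.1
steps = 250                   # 25 seconds of simulation at tau_s = 0.1 s
pos, vel = 0.0, 0.0
last_sent = pos
events = 0
for k in range(steps):
    t = k * tau_s
    a = (1.0 / 30.0) * t * math.cos(t / 10.0)   # bounded by |a(t)| <= 0.9
    pos += tau_s * vel + 0.5 * tau_s ** 2 * a   # double-integrator step
    vel += tau_s * a
    if abs(pos - last_sent) > Delta:            # send-on-delta criterion
        events += 1
        last_sent = pos
print(steps, events)
```

The event count scales inversely with Δ, which is exactly the communication/accuracy trade-off studied in this section.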

Next, the measurement information of both EBSEs is characterized.

hEBSE
Measurement information of the hEBSE is represented by the implied measurement z(t) = Cx(t) + vs(t) + ṽd(t). Note that the original measurement is only affected by stochastic noise and that vd(t) ∈ ∅, due to which ṽd(t) = e(t) is characterized by an ellipsoidal approximation e(t) ∈ L(0,E) of the event triggering set. In case of an event instant t ∈ Te, the measurement y(te) is received and one obtains that z(t) = y(te), that is, e(te) ∈ ∅ and E(t) = 0. At periodic time instants t ∈ Tp, one has the information that y(t) ∈ H(e, t). This ellipsoidal set H(e, t) can be characterized with a "mass" center, yielding an estimate of y(t) and an ellipsoidal error set, resulting in a characterization of e(t) ∈ L(0,E(t)) via E(t) = He(t). Suitable realizations of z(t) and E(t) for the two employed event strategies were already given in Example 13.2, where Φ(t) := 2(Δ − α)⁻¹(V⁻¹CP1(t)P2⁻¹(t)P1(t)CᵀV⁻¹)⁻¹, that is,

    Send-on-delta:     z(t) = y(te),               E(t) = 0,      ∀t ∈ Te,
                       z(t) = y(te−1),             E(t) = Δ²,     ∀t ∈ Tp,
    Matched sampling:  z(t) = y(te),               E(t) = 0,      ∀t ∈ Te,
                       z(t) = C At−te−1 x̂(te−1),   E(t) = Φ(t),   ∀t ∈ Tp.

sEBSE
Measurement information of the sEBSE is represented by a single Gaussian PDF, that is, p(y(t)) = G(y(t), ŷ(t), R(t)) for some (estimated) measurement value ŷ(t) and covariance matrix R(t) = V(t) + U(t). Herein, V(t) is the covariance of the stochastic measurement noise vs(t), while U(t) is a covariance due to any implied measurement information, as it is treated as an additional stochastic noise. In case of an event instant t ∈ Te, the measurement y(te) is received and one obtains that ŷ(t) = y(te) and U(t) = 0. At periodic time instants t ∈ Tp, one has the information that y(t) ∈ H(e, t), which is then turned into a particular value for ŷ(t) and U(t). A suitable characterization of ŷ(t) and U(t) for the two employed event sampling strategies was already given in Example 13.2, where Φ(t) := 2(Δ − α)⁻¹(V⁻¹CP1(t)P2⁻¹(t)P1(t)CᵀV⁻¹)⁻¹, that is,

    Send-on-delta:     ŷ(t) = y(te),               U(t) = 0,          ∀t ∈ Te,
                       ŷ(t) = y(te−1),             U(t) = (3/4)Δ²,    ∀t ∈ Tp,
    Matched sampling:  ŷ(t) = y(te),               U(t) = 0,          ∀t ∈ Te,
                       ŷ(t) = C At−te−1 x̂(te−1),   U(t) = (1/4)Φ(t),  ∀t ∈ Tp.

Figures 13.7 through 13.10 depict the actual squared estimation error, that is, ‖x̂(t) − x(t)‖₂², in comparison to the modeled estimation error, that is, tr(P(t)) for the sEBSE and tr(P(t)) + tr(X(t)) for the hEBSE. The results depicted were obtained after averaging the outcome of 1000 runs of the considered simulation case study.

Figures 13.7 and 13.8 depict the estimation results of the hEBSE and the sEBSE, respectively, when matched sampling is employed as the event sampling strategy.

[Figure 13.7: plot titled "Squared error of the hEBSE with MS", curves "Real" and "Modeled" over 0–25 s.]

FIGURE 13.7
Simulation results for matched sampling (MS) in combination with the hEBSE allowing stochastic and set-membership representations. The real squared estimation error ‖x̂(t) − x(t)‖₂² is depicted versus the modeled bound of the estimation error tr(P(t)) + tr(X(t)).

[Figure 13.8: plot titled "Squared error of the sEBSE with MS", curves "Real" and "Modeled" over 0–25 s.]

FIGURE 13.8
Simulation results for matched sampling (MS) in combination with the sEBSE limited to stochastic representations. The real squared estimation error ‖x̂(t) − x(t)‖₂² is depicted versus the modeled bound of the estimation error tr(P(t)).
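For send-on-delta, the two measurement characterizations above reduce to a small interface that either forwards the received sample or substitutes the implied information. Function names and the threshold value are illustrative:

```python
# Send-on-delta measurement interfaces of the two estimators: the hEBSE
# receives (z, E) with E a squared set radius, the sEBSE receives
# (yhat, U) with U an extra covariance. Delta is illustrative.
Delta = 0.1

def sod_hebse(is_event, y_event, y_prev):
    """Implied measurement (z, E) for the hEBSE, per Example 13.2."""
    if is_event:
        return y_event, 0.0             # exact sample received, E(t) = 0
    return y_prev, Delta ** 2           # implied set turned into E(t)

def sod_sebse(is_event, y_event, y_prev):
    """Implied measurement (yhat, U) for the sEBSE."""
    if is_event:
        return y_event, 0.0
    return y_prev, 0.75 * Delta ** 2    # implied info as extra covariance

print(sod_hebse(True, 0.7, 1.2), sod_sebse(False, None, 1.2))
```

The only difference between the two interfaces at periodic instants is how the implied set enters: as a deterministic shape E(t) for the hEBSE versus as an additional stochastic covariance U(t) for the sEBSE.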

Although it is not pointed out in the figures, it is worth mentioning that the hEBSE triggered a total amount of 31 events (on average), while the sEBSE triggered 40 events (on average). Further, the real squared estimation error of both EBSEs considered is comparable. Hence, the hEBSE attains similar estimation results with fewer events triggered, due to which fewer measurement samples are required, saving communication resources. Yet, the main advantage of the hEBSE is the modeled bound on the estimation error. Figure 13.7 indicates that this modeled bound is conservative when the hEBSE is employed, which is not the case for the sEBSE depicted in Figure 13.8. This means that the hEBSE gives a better guarantee that the real estimation error stays within the bound as it is computed by the estimator. Such a property is important when estimation results are used for control purposes.

Figures 13.9 and 13.10 depict the estimation results of the hEBSE and the sEBSE, respectively, when send-on-delta is employed as the event sampling strategy.

[Figure 13.9: plot titled "Squared error of the hEBSE with SoD", curves "Real" and "Modeled" over 0–25 s.]

FIGURE 13.9
Simulation results for send-on-delta (SoD) in combination with the hEBSE allowing stochastic and set-membership representations. The real squared estimation error ‖x̂(t) − x(t)‖₂² is depicted versus the modeled bound of the estimation error tr(P(t)) + tr(X(t)).

[Figure 13.10: plot titled "Squared error of the sEBSE with SoD", curves "Real" and "Modeled" over 0–25 s.]

FIGURE 13.10
Simulation results for send-on-delta (SoD) in combination with the sEBSE limited to stochastic representations. The real squared estimation error ‖x̂(t) − x(t)‖₂² is depicted versus the modeled bound of the estimation error tr(P(t)).

Since this event sampling strategy does not depend on previous estimation results but merely on the previous measurement sample, both estimators received the events at the same time instants, giving a total of 115 events. Note that this is an increase of events by a factor of 3 to 4 when compared to the EBSEs in combination with matched sampling. Yet, this increase of events, and thus of measurement samples, is not reflected in a corresponding

decrease of estimation errors. Further, similar conclusions can be drawn from the estimation results with send-on-delta when comparing Figures 13.9 and 13.10. Again, the squared estimation error of the two considered EBSEs is comparable, and the main advantage of the hEBSE is in the improved bound of the modeled estimation error.

Therefore, a fair conclusion is that the hEBSE achieves similar estimation errors when compared to the sEBSE, although the modeled estimation error of the hEBSE is a far better bound on real estimation errors. Similar results are expected when comparing the hEBSE with the deterministic dEBSE, as in the latter EBSE noises should be represented as a set-membership. As such, the hEBSE is advantageous in networked control systems where estimation results are being used by a (stabilizing) controller.

13.8 Conclusions

In networked systems, high measurement frequencies may rapidly exhaust communication bandwidth and power resources when sensor data must be transmitted periodically to the state estimator. The transmission rate can be reduced significantly if an event-based strategy is employed for sampling sensor data. "Send-on-delta" and "matched sampling" have been discussed as examples of such strategies. This chapter discussed several ideas to process these event-sampled measurements. Typical estimation approaches perform a measurement update whenever an event is triggered, that is, at the event instants when a new measurement is received. However, observation and automation systems are mainly designed to rely on time-periodic estimation results. The time gap between events and periodic instants can simply be bridged by prediction steps, but additional knowledge then remains untapped: as long as no event is triggered, the actual sensor value does not fulfill the event-sampling criterion, thereby implying that it did not cross the edge of a particular closed set. Recent estimators do exploit this implied measurement information, and three of them were discussed here: one restricted to stochastic noise representations, one restricted to set-membership noise representations, and one hybrid solution allowing both stochastic and set-membership representations. The latter one is more advantageous, as measurement and process noise are typically characterized as a stochastic random vector, while the implied information results in a set-membership property on the sensor value. Prospective research focuses also on unreliable networks, where delays and packet losses have to be taken into account.

Appendix A Proof of Theorem 13.3

Let us introduce R(tk) := (1 − ω)Cᵀ[(1 − ω)V + E(tk)]⁻¹C. Then, the update formulas of Γ(tk) in (13.32) yield

    Γ(tk⁻) = Aτk Γ(tk−1) Aτkᵀ + Bτk W Bτkᵀ,
    Γ(tk) = ω { [Γ(tk⁻)]⁻¹ + R(tk) }⁻¹,  ∀tk ∈ T.     (13.34)

Similarly, let us derive the update equation for P(t) + X(t) in line with the results in (13.26) and (13.31), that is,

    P(tk⁻) + X(tk⁻) = Aτk (P(tk−1) + X(tk−1)) Aτkᵀ + Bτk W Bτkᵀ,
    P(tk) + X(tk) = ω { [ωP(tk⁻) + X(tk⁻)]⁻¹ + R(tk) }⁻¹,  ∀tk ∈ T.   (13.35)

The inequality P(tk) + X(tk) ≺ Γ(tk), for all tk ∈ T, is proven by induction: first show that P(t1) + X(t1) ≺ Γ(t1) when Γ(0) = P(0) + X(0), followed by a proof of P(tk) + X(tk) ≺ Γ(tk) iff P(tk−1) + X(tk−1) ≺ Γ(tk−1).

The first step of induction starts from Γ(0) = P(0) + X(0). The prediction Γ(t1⁻) in (13.34) and P(t1⁻) + X(t1⁻) in (13.35) give that Γ(t1⁻) = P(t1⁻) + X(t1⁻). Substituting this result in the update equation of Γ(t1) in (13.34) yields

    Γ(t1) = ω { [P(t1⁻) + X(t1⁻)]⁻¹ + R(t1) }⁻¹.

Since ω ∈ (0, 1), this latter equality further implies that

    Γ(t1) ≻ ω { [ωP(t1⁻) + X(t1⁻)]⁻¹ + R(t1) }⁻¹     (13.36)
          = P(t1) + X(t1),                            (13.37)

which proves the first step of induction.

The second step starts from P(tk−1) + X(tk−1) ≺ Γ(tk−1). The prediction Γ(tk⁻) in (13.34) and P(tk⁻) + X(tk⁻) in (13.35) give that P(tk⁻) + X(tk⁻) ≺ Γ(tk⁻). Substituting this result in the update equation of Γ(tk) in (13.34) yields

    Γ(tk) ≻ ω { [P(tk⁻) + X(tk⁻)]⁻¹ + R(tk) }⁻¹.

Since ω ∈ (0, 1), this latter inequality further implies that

    Γ(tk) ≻ ω { [ωP(tk⁻) + X(tk⁻)]⁻¹ + R(tk) }⁻¹     (13.38)
          = P(tk) + X(tk),                            (13.39)

which proves the second step of induction and thereby Theorem 13.3.
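The key comparison in the two induction steps can be checked numerically in the scalar case: ω ∈ (0, 1) implies ωP(tk⁻) + X(tk⁻) ≤ P(tk⁻) + X(tk⁻), so the Γ update (13.34) dominates the combined update (13.35) when both start from the same prediction. Numbers are illustrative:

```python
# Scalar check of the induction step in Appendix A: with identical
# predictions, Gamma's update (13.34) upper-bounds P + X from (13.35).
omega, R = 0.6, 2.0
P_pred, X_pred = 0.8, 0.3

gamma = omega / (1.0 / (P_pred + X_pred) + R)             # (13.34)
p_plus_x = omega / (1.0 / (omega * P_pred + X_pred) + R)  # (13.35)
print(gamma > p_plus_x)  # True
```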

Appendix B Proof of Theorem 13.4

Let us introduce R(t) := (1 − ω)Cᵀ[(1 − ω)V + E(t)]⁻¹C. Then, the update formulas of Γ(tk) in (13.32) yield

    Γ(tk⁻) = Aτk Γ(tk−1) Aτkᵀ + Bτk W Bτkᵀ,
    Γ(tk) = ω { [Γ(tk⁻)]⁻¹ + R(tk) }⁻¹,  ∀tk ∈ T.     (13.40)

Similarly, the update formulas of Σ(tn) in (13.33) are as follows:

    Σ(tn⁻) = Ā Σ(tn−1) Āᵀ + B̄ W B̄ᵀ,
    Σ(tn) = ω^(−κ+1) { [Σ(tn⁻)]⁻¹ + R(tn) }⁻¹,  ∀tn ∈ Tp.     (13.41)

The following result is instrumental for proving Theorem 13.4. To that extent, let κ ∈ Z⁺ be the amount of event instants in between the two consecutive time-periodic instants t − τs and t, or differently, t − τs = tk−κ−1 < tk−κ < tk−κ+1 < · · · < tk−1 < tk = t. Further, let us introduce the time instant t ∈ Tp, such that tk = t and tn = t for some k, n ∈ Z⁺. As τs ∈ R⁺ is the sampling time, one has that tn−1 = t − τs and tk−κ−1 = t − τs, that is, tn−1 = tk−κ−1.

Lemma 13.1

Let us consider Γ(t) characterized by (13.40) and Σ(t) by (13.41), for some t ∈ Tp, while satisfying Γ(t − τs) ⪯ Σ(t − τs). Then, Γ(t⁻) ⪯ ω^(−κ) Σ(t⁻) holds for any suitable k, n such that t = tk and t = tn.

PROOF The proof of this lemma starts with the inequality

    Γ(tk) ⪯ ω⁻¹ Γ(tk⁻),  ∀tk ∈ T, see (13.40).     (13.42)

When substituting this result in the prediction step of (13.40), one can further derive that

    Γ(tk⁻) = Aτk−1 Γ(tk−1) Aτk−1ᵀ + Bτk−1 W Bτk−1ᵀ
           ⪯ ω⁻¹ ( Aτk−1 Γ(tk−1⁻) Aτk−1ᵀ + Bτk−1 W Bτk−1ᵀ )
           ⪯ ω⁻² ( Aτk−1 Aτk−2 Γ(tk−2⁻) Aτk−2ᵀ Aτk−1ᵀ
                   + Aτk−1 Bτk−2 W Bτk−2ᵀ Aτk−1ᵀ + Bτk−1 W Bτk−1ᵀ ).

From the definition of Aτ and Bτ in Section 13.3, one obtains that Aτi+τi−1 = Aτi Aτi−1 and Bτi+τi−1 = Aτi Bτi−1 + Bτi for any bounded τi > 0 and τi−1 > 0. Substituting this result in the above inequality thus results in

    Γ(tk⁻) ⪯ ω⁻² ( Aτk−1+τk−2 Γ(tk−2⁻) Aτk−1+τk−2ᵀ + Bτk−1+τk−2 W Bτk−1+τk−2ᵀ )
           ⋮
           ⪯ ω^(−κ) ( A_{∑ᵢ₌₁^κ τk−i} Γ(tk−κ⁻) A_{∑ᵢ₌₁^κ τk−i}ᵀ + B_{∑ᵢ₌₁^κ τk−i} W B_{∑ᵢ₌₁^κ τk−i}ᵀ ).

Note that Γ(tk−κ⁻) = Aτk−κ−1 Γ(tk−κ−1) Aτk−κ−1ᵀ + Bτk−κ−1 W Bτk−κ−1ᵀ, which after substituting in the above inequality gives that

    Γ(tk⁻) ⪯ ω^(−κ) ( A_{δk,κ−1} Γ(tk−κ−1) A_{δk,κ−1}ᵀ + B_{δk,κ−1} W B_{δk,κ−1}ᵀ ),     (13.43)

where δk,κ−1 := ∑ᵢ₌₁^{κ+1} τk−i. The lemma considers time-periodic instants, that is, tk = t ∈ Tp and tk−κ−1 = t − τs. For those instants, one obtains δk,κ−1 = τs and thus A_{δk,κ−1} = Aτs = Ā and B_{δk,κ−1} = Bτs = B̄. Substituting these results into (13.43) further implies that

    Γ(t⁻) ⪯ ω^(−κ) ( Ā Γ(t − τs) Āᵀ + B̄ W B̄ᵀ ).

From the fact that Σ(t⁻) = Ā Σ(t − τs) Āᵀ + B̄ W B̄ᵀ, in combination with the assumption Γ(t − τs) ⪯ Σ(t − τs), one can then obtain that Γ(t⁻) ⪯ ω^(−κ) Σ(t⁻), which completes the proof of this lemma.

Next, let us continue with the result of Theorem 13.4, which is proven by induction. The first step is to verify that Γ(τs) ⪯ Σ(τs) when Γ(0) = Σ(0). Substituting the time-periodic instant t = τs into Lemma 13.1 gives that the predicted covariance matrices Γ(τs⁻) and Σ(τs⁻) satisfy Γ(τs⁻) ⪯ ω^(−κ) Σ(τs⁻). The result of this latter inequality implies that, after the update equations of (13.40) and (13.41), one has that Γ(τs) ⪯ Σ(τs), which completes the first step.

The second step is to show that Γ(tk) ⪯ Σ(tn) holds for any tk = tn ∈ Tp, when Γ(tk − τs) ⪯ Σ(tn − τs) holds. Since tk = tn is a time-periodic instant and Γ(tk − τs) ⪯ Σ(tn − τs) holds, one can employ the results of Lemma 13.1 by considering tk = t and tn = t. This lemma then states that Γ(tk⁻) and Σ(tn⁻) satisfy Γ(tk⁻) ⪯ ω^(−κ) Σ(tn⁻). Substituting this result in the update equations of (13.40) and (13.41) further implies that Γ(tk) ⪯ Σ(tn), which completes the second step of induction and thereby the proof of this theorem.
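Inequality (13.42) can likewise be checked in the scalar case; the update (13.40) actually satisfies the tighter bound Γ(tk) ⪯ ωΓ(tk⁻), which implies the stated bound since ω < 1. Values are illustrative:

```python
# Scalar check of (13.42): Gamma(tk) = omega / (1/Gamma(tk-) + R) is
# bounded by omega * Gamma(tk-), and hence by Gamma(tk-) / omega.
omega, R = 0.6, 2.0
for gamma_pred in (0.1, 1.0, 10.0):
    gamma = omega / (1.0 / gamma_pred + R)   # update (13.40), scalar form
    assert gamma <= omega * gamma_pred + 1e-15
    assert gamma <= gamma_pred / omega + 1e-15   # the bound in (13.42)
print("ok")
```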

Bibliography

[1] L. Aggoun and R. Elliot. Measure Theory and Filtering. Cambridge, UK: Cambridge University Press, 2004.

[2] A. Alessandri, M. Baglietto, and G. Battistelli. Receding-horizon estimation for discrete-time linear systems. IEEE Transactions on Automatic Control, 48(3):473–478, 2003.

[3] K. J. Åström and B. M. Bernhardsson. Comparison of Riemann and Lebesgue sampling for first order stochastic systems. In Proceedings of the 41st IEEE Conference on Decision and Control, pages 2011–2016, Las Vegas, NV, 2002.

[4] F. L. Chernousko. State Estimation for Dynamic Systems. Boca Raton, FL: CRC Press, 1994.

[5] R. Cogill. Event-based control using quadratic approximate value functions. In 48th IEEE Conference on Decision and Control, pages 5883–5888, Shanghai, China, 2009.

[6] R. E. Curry. Estimation and Control with Quantized Measurements. Cambridge, MA: MIT Press, 1970.

[7] W. Dargie and C. Poellabauer. Fundamentals of Wireless Sensor Networks: Theory and Practice. Wiley, 2010.

[8] D. V. Dimarogonas and K. H. Johansson. Event-triggered control for multi-agent systems. In Proceedings of the 48th IEEE Conference on Decision and Control, pages 7131–7136, Shanghai, China, December 16–18, 2009.

[9] M. C. F. Donkers. Networked and Event-Triggered Control Systems. PhD thesis, Eindhoven University of Technology, 2012.

[10] W. P. M. H. Heemels, R. J. A. Gorter, A. van Zijl, P. P. J. van den Bosch, S. Weiland, W. H. A. Hendrix, and M. R. Vonder. Asynchronous measurement and control: A case study on motor synchronization. Control Engineering Practice, 7:1467–1482, 1999.

[11] T. Henningsson. Recursive state estimation for linear systems with mixed stochastic and set-bounded disturbances. In Proceedings of the 47th IEEE Conference on Decision and Control (CDC08), pages 678–683, Cancun, Mexico, December 9–11, 2008.

[12] T. Henningsson, E. Johannesson, and A. Cervin. Sporadic event-based control of first-order linear stochastic systems. Automatica, 44(11):2890–2895, 2008.

[13] O. C. Imer and T. Basar. Optimal estimation with limited measurements. In Proceedings of the 44th IEEE Conference on Decision and Control, pages 1029–1034, Seville, Spain, December 13–15, 2005.

[14] O. C. Imer and T. Basar. Optimal control with limited controls. In American Control Conference, pages 298–303, Minneapolis, MN, June 14–16, 2006.

[15] H. L. Lebesgue. Intégrale, longueur, aire. PhD thesis, University of Nancy, 1902.

[16] D. Lehmann and J. Lunze. A state-feedback approach to event-based control. Automatica, 46:211–215, 2010.

[17] G. V. Moustakides, M. Rabi, and J. S. Baras. Multiple sampling for estimation on a finite horizon. In Proceedings of the 45th IEEE Conference on Decision and Control, pages 1351–1357, San Diego, CA, December 13–15, 2006.

[18] R. Mahler. General Bayes filtering of quantized measurements. In Proceedings of the 14th International Conference on Information Fusion, pages 346–352, Chicago, IL, July 5–8, 2011.

[19] M. Mallick, S. Coraluppi, and C. Carthel. Advances in asynchronous and decentralized estimation. In Proceedings of the 2001 Aerospace Conference, Big Sky, MT, March 10–17, 2001.

[20] J. W. Marck and J. Sijs. Relevant sampling applied to event-based state-estimation. In Proceedings of the 4th International Conference on Sensor Technologies and Applications, pages 618–624, Venice, Italy, July 18–25, 2010.

[21] K. V. Mardia, J. T. Kent, and J. M. Bibby. Multivariate Analysis. London: Academic Press, 1979.

[22] M. Miskowicz. Send-on-delta concept: An event-based data-reporting strategy. Sensors, 6:49–63, 2006.

[23] M. Miskowicz. Asymptotic effectiveness of the event-based sampling according to the integral criterion. Sensors, 7:16–37, 2007.

[24] P. Morerio, M. Pompei, L. Marcenaro, and C. S. Regazzoni. Exploiting an event based state estimator in presence of sparse measurements in video analytics. In Proceedings of the 2014 International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1871–1875, Florence, Italy, May 4–9, 2014.

[25] V. H. Nguyen and Y. S. Suh. Improving estimation performance in networked control systems applying the send-on-delta transmission method. Sensors, 7:2128–2138, 2007.

[26] B. Noack, F. Pfaff, and U. D. Hanebeck. Optimal Kalman gains for combined stochastic and set-

[32] H. W. Sorenson and D. L. Alspach. Recursive Bayesian estimation using Gaussian sums. Automatica, 7:465–479, 1971.

[33] C. Stocker. Event-Based State-Feedback Control of Physically Interconnected Systems. Berlin: Logos Verlag Berlin GmbH, 2014.
membership state estimation. In Proceedings of the
51st IEEE Conference on Decision and Control (CDC [34] S. Trimpe and R. D’Andrea. An experimental
2012), pages 4035–4040, Maui, HI, December 10–13, demonstration of a distributed and event-based
2012. state estimation algorithm. In Proceedings of the 18th
IFAC World Congress, pages 8811–8818, Milan, Italy,
[27] B. Saltik. Output feedback control of linear systems August 28–September 2, 2011.
with event-triggered measurements. Master thesis,
University of Technology, Eindhoven, 2013. [35] S. Trimpe and R. D’Andrea. Event-based state esti-
mation with variance-based triggering. In Proceed-
[28] D. Shi, T. Chen, and L. Shi. An event-triggered ings of the 51st IEEE Conference on Decision and
approach to state estimation with multiple Control (CDC 2012), pages 6583–6590, Maui, HI,
point- and set-valued measurements. Automatica, December 10–13, 2012.
50(6):1641–1648, 2014.
[36] J. Wu, K. H. Johansson, and L. Shi. Event-based sen-
[29] J. Sijs. State estimation in networked systems. sor data scheduling: Trade-off between communi-
PhD thesis, Eindhoven University of Technology, cation rate and estimation quality. IEEE Transactions
2012. on Automatic Control, 58(4):1041–1046, 2013.

[30] J. Sijs and M. Lazar. Event based state estimation [37] Y. Xu and J. P. Hespanha. Optimal communication
with time synchronous updates. IEEE Transactions logics for networked control systems. In Proceedings
on Automatic Control, 57(10):2650–2655, 2012. of the 43rd IEEE Conference on Decision and Con-
trol, pages 3527–3532, Paradise Island, Bahamas,
[31] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, December 14–17, 2004.
M. Jordan, and S. Sastry. Kalman filter with inter-
mittent observations. IEEE Transactions on Auto-
matic Control, 49:1453–1464, 2004.
14
Intermittent Control in Man and Machine

Peter Gawthrop
University of Melbourne
Melbourne, Victoria, Australia

Henrik Gollee
University of Glasgow
Glasgow, UK

Ian Loram
Manchester Metropolitan University
Manchester, UK
CONTENTS
14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
14.2 Continuous Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
14.2.1 Observer Design and Sensor Fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
14.2.2 Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
14.2.3 Controller Design and Motor Synergies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
14.2.4 Steady-State Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
14.3 Intermittent Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
14.3.1 Time Frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
14.3.2 System-Matched Hold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
14.3.3 Intermittent Observer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
14.3.4 Intermittent Predictor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
14.3.5 State Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
14.3.6 Event Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
14.3.7 The Intermittent-Equivalent Setpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
14.3.8 The Intermittent Separation Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
14.4 Examples: Basic Properties of Intermittent Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
14.4.1 Elementary Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
14.4.2 The Psychological Refractory Period and Intermittent-Equivalent Setpoint . . . . . . . . . . . . . . . . . . . . . 294
14.4.3 The Amplitude Transition Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
14.5 Constrained Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
14.5.1 Constrained Steady-State Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
14.5.2 Constrained Dynamical Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
14.5.2.1 Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
14.5.2.2 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
14.6 Example: Constrained Control of Mass–Spring System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
14.7 Examples: Human Standing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
14.7.1 A Three-Segment Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
14.7.2 Muscle Model and Hierarchical Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
14.7.3 Quiet Standing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
14.7.4 Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
14.7.5 Disturbance Rejection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
14.8 Intermittency Induces Variability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
14.8.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
14.8.2 Identification of the Linear Time-Invariable (LTI) Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
14.8.3 Identification of the Remnant Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
14.8.3.1 Variability by Adding Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
14.8.3.2 Variability by Intermittency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
14.8.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
14.9 Identification of Intermittent Control: The Underlying Continuous System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
14.9.1 Closed-Loop Frequency Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
14.9.1.1 Predictive Continuous Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
14.9.1.2 Intermittent Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
14.9.2 System Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
14.9.2.1 System Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
14.9.2.2 Nonparametric Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
14.9.2.3 Parametric Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
14.9.3 Illustrative Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
14.9.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
14.10 Identification of Intermittent Control: Detecting Intermittency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
14.10.1 Outline of Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
14.10.1.1 Stage 1: Reconstruction of the Set-Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
14.10.1.2 Stage 2: Statistical Analysis of Delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
14.10.1.3 Stage 3: Model-Based Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
14.10.2 Refractoriness in Sustained Manual Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
14.10.3 Refractoriness in Whole Body Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
14.11 Adaptive Intermittent Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
14.11.1 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
14.11.2 Continuous-Time Parameter Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
14.11.3 Intermittent Parameter Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
14.12 Examples: Adaptive Human Balance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
14.13 Examples: Adaptive Human Reaching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
14.14 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
ABSTRACT It is now over 70 years since Kenneth J. Craik postulated that human control systems behave in an intermittent, rather than a continuous, fashion. This chapter provides a mathematical model of event-driven intermittent control, examines how this model explains some phenomena related to human motion control, and presents some experimental evidence for intermittency. Some new material related to constrained multivariable intermittent control is presented in the context of human standing, and some new material related to adaptive intermittent control is presented in the context of human balance and reaching.

We believe that the ideas presented here in a physiological context will also prove to be useful in an engineering context.

14.1 Introduction

Conventional sampled-data control uses a zero-order hold (ZOH), which produces a piecewise constant control signal (Franklin, Powell, and Emami-Naeini, 1994), and can be used to give a sampled-data implementation, which approximates a previously designed continuous-time controller. In contrast to conventional sampled-data control, intermittent control (Gawthrop and Wang, 2007) explicitly embeds the underlying continuous-time closed-loop system in a generalized hold. A number of versions of the generalized hold are available; this chapter focuses on the system-matched hold (SMH) (Gawthrop and Wang, 2011), which explicitly generates an open-loop intersample control trajectory based on the underlying continuous-time closed-loop control system. Other versions of the generalized hold include Laguerre function based holds (Gawthrop and Wang, 2007) and a “tapping” hold (Gawthrop and Gollee, 2012).

There are three areas where intermittent control has been used:

1. Continuous-time model-based predictive control (MPC), where the intermittency is associated with online optimization (Ronco, Arsan, and Gawthrop, 1999; Gawthrop and Wang, 2009a, 2010).

2. Event-driven control systems, where the intersample interval is time varying and
determined by the event times (Gawthrop and Wang, 2009b, 2011).

3. Physiological control systems which, in some cases, have an event-driven intermittent character (Loram and Lakie, 2002; Gawthrop, Loram, Lakie, and Gollee, 2011). This intermittency may be due to the “computation” in the central nervous system (CNS). Although this chapter is orientated toward physiological control systems, we believe that it is more widely applicable.

Intermittent control has a long history in the physiological literature (e.g., Craik, 1947a,b; Vince, 1948; Navas and Stark, 1968; Neilson, Neilson, and O’Dwyer, 1988; Miall, Weir, and Stein, 1993a; Bhushan and Shadmehr, 1999; Loram and Lakie, 2002; Loram, Gollee, Lakie, and Gawthrop, 2011; Gawthrop et al., 2011). There is strong experimental evidence that some human control systems are intermittent (Craik, 1947a; Vince, 1948; Navas and Stark, 1968; Bottaro, Casadio, Morasso, and Sanguineti, 2005; Loram, van de Kamp, Gollee, and Gawthrop, 2012; van de Kamp, Gawthrop, Gollee, and Loram, 2013b), and it has been suggested that this intermittency arises in the CNS (van de Kamp, Gawthrop, Gollee, Lakie, and Loram, 2013a). For this reason, computational models of intermittent control are important and, as discussed below, a number of versions with various characteristics have appeared in the literature. Intermittent control has also appeared in various forms in the engineering literature (Ronco et al., 1999; Zhivoglyadov and Middleton, 2003; Montestruque and Antsaklis, 2003; Insperger, 2006; Astrom, 2008; Gawthrop and Wang, 2007, 2009b; Gawthrop, Neild, and Wagg, 2012).

Intermittent control action may be initiated at regular intervals determined by a clock, or at irregular intervals determined by events; an event is typically triggered by an error signal crossing a threshold. Clock-driven control is discussed by Neilson et al. (1988) and Gawthrop and Wang (2007) and analysed in the frequency domain by Gawthrop (2009). Event-driven control is used by Bottaro et al. (2005); Bottaro, Yasutake, Nomura, Casadio, and Morasso (2008); Astrom (2008); Asai, Tasaka, Nomura, Nomura, Casadio, and Morasso (2009); Gawthrop and Wang (2009b); and Kowalczyk, Glendinning, Brown, Medrano-Cerda, Dallali, and Shapiro (2012). Gawthrop et al. (2011, Section 4) discuss event-driven control but with a lower limit Δmin on the time interval between events; this gives a range of behaviors including continuous, timed, and event-driven control. Thus, for example, threshold-based event-driven control becomes effectively clock driven with interval Δmin if the threshold is small compared to errors caused by relatively large disturbances. There is evidence that human control systems are, in fact, event driven (Navas and Stark, 1968; Loram et al., 2012; van de Kamp et al., 2013a; Loram, van de Kamp, Lakie, Gollee, and Gawthrop, 2014). For this reason, this chapter focuses on event-driven control.

As mentioned previously, intermittent control is based on an underlying continuous-time design method; in particular, the classical state-space approach is the basis of the intermittent control of Gawthrop et al. (2011). There are two relevant versions of this approach: state feedback and output feedback. State-feedback control requires that the current system state (e.g., angular position and velocity of an inverted pendulum) is available for feedback. In contrast, output feedback requires a measurement of the system output (e.g., angular position of an inverted pendulum). The classical approach to output feedback in a state-space context (Kwakernaak and Sivan, 1972; Goodwin, Graebe, and Salgado, 2001) is to use an observer (or the optimal version, a Kalman filter) to deduce the state from the system output.

Human control systems are associated with time delays. In engineering terms, it is well known that a predictor can be used to overcome time delay (Smith, 1959; Kleinman, 1969; Gawthrop, 1982). As discussed by many authors (Kleinman, Baron, and Levison, 1970; Baron, Kleinman, and Levison, 1970; McRuer, 1980; Miall, Weir, Wolpert, and Stein, 1993b; Wolpert, Miall, and Kawato, 1998; Bhushan and Shadmehr, 1999; Van Der Kooij, Jacobs, Koopman, and Van Der Helm, 2001; Gawthrop, Lakie, and Loram, 2008; Gawthrop, Loram, and Lakie, 2009; Gawthrop et al., 2011; Loram et al., 2012), it is plausible that physiological control systems have built-in model-based prediction. Following Gawthrop et al. (2011), this chapter bases the intermittent controller (IC) on an underlying predictive design.

The use of networked control systems leads to the “sampling period jitter problem” (Sala, 2007), where uncertainties in transmission time lead to unpredictable nonuniform sampling and stability issues (Cloosterman, van de Wouw, Heemels, and Nijmeijer, 2009). A number of authors have suggested that performance may be improved by replacing the standard ZOH by a generalized hold (Sala, 2005, 2007) or using a dynamical model of the system between samples (Zhivoglyadov and Middleton, 2003; Montestruque and Antsaklis, 2003). Similarly, event-driven control (Heemels, Sandee, and Bosch, 2008; Astrom, 2008), where sampling is determined by events rather than a clock, also leads to unpredictable nonuniform sampling. Hence, strategies for event-driven control would be expected to be similar to strategies for networked control. One particular form of event-driven control where events correspond
to the system state moving beyond a fixed boundary has been called Lebesgue sampling, in contrast to the so-called Riemann sampling of fixed-interval sampling (Astrom and Bernhardsson, 2002, 2003). In particular, Astrom (2008) uses a “control signal generator”: essentially a dynamical model of the system between samples, as advocated by Zhivoglyadov and Middleton (2003) for the networked control case.

As discussed previously, intermittent control has an interpretation which contains a generalized hold (Gawthrop and Wang, 2007). One particular form of hold is based on the closed-loop system dynamics of an underlying continuous control design: this will be called the SMH in this chapter. Insofar as this special case of intermittent control uses a dynamical model of the controlled system to generate the (open-loop) control between sample intervals, it is related to the strategies of both Zhivoglyadov and Middleton (2003) and Astrom (2008). However, as shown in this chapter, intermittent control provides a framework within which to analyze and design a range of control systems with unpredictable nonuniform sampling possibly arising from an event-driven design. In particular, it is shown by Gawthrop and Wang (2011) that the SMH-based IC is associated with a separation principle similar to that of the underlying continuous-time controller: the closed-loop poles of the intermittent control system consist of the control system poles and the observer system poles, and interpolation using the system-matched hold does not change the closed-loop poles. As discussed by Gawthrop and Wang (2011), this separation principle is only valid when using the SMH. For example, intermittent control based on the standard ZOH does not lead to such a separation principle, and therefore closed-loop stability is compromised when the sample interval is not fixed.

Human movement is characterized by low-dimensional goals achieved using high-dimensional muscle input (Shadmehr and Wise, 2005); in control system terms, the system has redundant actuators. As pointed out by Latash (2012), the abundance of actuators is an advantage rather than a problem. One approach to redundancy is by using the concept of synergies (Neilson and Neilson, 2005): groups of muscles which act in concert to give a desired action. It has been shown that such synergies arise naturally in the context of optimal control (Todorov, 2004; Todorov and Jordan, 2002), and experimental work has verified the existence of synergies in vivo (Ting, 2007; Safavynia and Ting, 2012). Synergies may be arranged in hierarchies. For example, in the context of posture, there is a natural three-level hierarchy with increasing dimension comprising task space, joint space, and muscle space. Thus, for example, a balanced posture could be a task requirement achievable by a range of possible joint torques, each of which in turn corresponds to a range of possible muscle activations. This chapter focuses on the task space–joint space hierarchy previously examined in the context of robotics (Khatib, 1987).

In a similar way, humans have an abundance of measurements available; in control system terms, the system has redundant sensors. As discussed by Van Der Kooij, Jacobs, Koopman, and Grootenboer (1999) and Van Der Kooij et al. (2001), such sensors are utilized with appropriate sensor integration. In control system terms, sensor redundancy can be incorporated into state-space control using observers or Kalman–Bucy filters (Kwakernaak and Sivan, 1972; Goodwin et al., 2001); this is the dual of the optimal control problem. Again, sensors can be arranged in a hierarchical fashion. Hence, optimal control and filtering provide the basis for a continuous-time control system that simultaneously applies sensor fusion to utilize sensor redundancy and optimal control to utilize actuator redundancy.

For these reasons, this chapter extends the single-input single-output IC of Gawthrop et al. (2011) to the multivariable case. As the formulation of Gawthrop et al. (2011) is set in the state space, this extension is quite straightforward. Crucially, the generalized hold, and in particular the SMH, remains at the heart of multivariable intermittent control.

The particular mathematical model of intermittent control proposed by Gawthrop et al. (2011) combines event-driven control action, based on estimates of the controlled system state (position, velocity, etc.) obtained using a standard continuous-time state observer, with continuous measurement of the system outputs. This model of intermittent control can be summarized as “continuous attention with intermittent action.” However, the state estimate is only used at the event-driven sample time; hence, it would seem that it is not necessary for the state observer to monitor the controlled system all of the time. Moreover, the experimental results of Osborne (2013) suggest that humans can perform well even when vision is intermittently occluded. This chapter proposes an intermittent control model where a continuous-time observer monitors the controlled system intermittently: periods of monitoring the system measurements are interleaved with periods where the measurement is occluded. This model of intermittent control can be summarized as “intermittent attention with intermittent action.”

This chapter has two main parts:

1. Sections 14.2–14.4 give basic ideas about intermittent control.

2. Sections 14.5–14.13 explore more advanced topics and applications.
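The event-driven triggering described in this introduction — an error threshold combined with a lower limit Δmin on the interval between events — can be sketched in a few lines of simulation. The sketch below is an illustrative toy, not the SMH-based controller developed in this chapter: it controls a scalar integrator with a zero-order hold between events, and all numerical values (gain, threshold, disturbance) are arbitrary assumptions.

```python
import math

def simulate(threshold, delta_min, dt=0.001, t_end=10.0):
    """Event-driven control of the scalar system dx/dt = u + d.

    The control u is recomputed only at events and held constant in
    between (a ZOH, for simplicity).  An event fires when the state has
    moved more than `threshold` since the last event, but never sooner
    than `delta_min` after it (the lower limit on the intersample
    interval discussed above).
    """
    x, u = 0.0, 0.0
    x_event, t_event = 0.0, -1e9
    events = []
    for k in range(int(t_end / dt)):
        t = k * dt
        d = 0.5 * math.sin(2.0 * t)              # input disturbance
        if abs(x - x_event) > threshold and t - t_event >= delta_min:
            u = -5.0 * x                         # feedback, updated at events only
            x_event, t_event = x, t
            events.append(t)
        x += (u + d) * dt                        # Euler integration step
    return x, events

x_final, events = simulate(threshold=0.05, delta_min=0.25)
intervals = [b - a for a, b in zip(events, events[1:])]
```

With a small threshold relative to the disturbance, the intervals cluster near Δmin and the controller behaves much like a clock-driven one; raising the threshold spreads the events out, which is the continuous/timed/event-driven range of behaviors noted above.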
14.2 Continuous Control

Intermittent control is based on an underlying design method which, in this chapter, is taken to be conventional state space–based observer/state-feedback control (Kwakernaak and Sivan, 1972; Goodwin et al., 2001) with the addition of a state predictor (Fuller, 1968; Kleinman, 1969; Sage and Melsa, 1971; Gawthrop, 1976). Other control design approaches have been used in this context, including pole-placement (Gawthrop and Ronco, 2002) and cascade control (Gawthrop, Lee, Halaki, and O’Dwyer, 2013b). It is also noted that many control designs can be embedded in LQ design (Maciejowski, 2007; Foo and Weyer, 2011) and thence used as a basis for intermittent control (Gawthrop and Wang, 2010).

Gawthrop et al. (2011) consider a single-input single-output formulation of intermittent control; this chapter considers a multi-input multi-output formulation. As in the single-input single-output case, this chapter considers linear time invariant systems with an n × 1 vector state x. As discussed by Gawthrop et al. (2011), the system, neuromuscular (NMS) and disturbance subsystems can be combined into a state-space model. For simplicity, the measurement noise signal vy will be omitted in this chapter except where needed. In contrast, however, this chapter is based on a multiple input, multiple output formulation. Thus, the corresponding state-space system has multiple outputs represented by the ny × 1 vector y and the no × 1 vector yo, multiple control inputs represented by the nu × 1 vector u, and multiple unknown disturbance inputs represented by the nu × 1 vector d, where:

    dx/dt(t) = Ax(t) + Bu(t) + Bd d(t)
    y(t) = Cx(t)                                 (14.1)
    yo(t) = Co x(t)

A is an n × n matrix, B and Bd are n × nu matrices, C is an ny × n matrix, and Co is an no × n matrix. The n × 1 column vector x is the system state. In the multivariable context, there is a distinction between the ny × 1 task vector y and the no × 1 observed vector yo: the former corresponds to control objectives, whereas the latter corresponds to system sensors and so provides information to the observer. Equation 14.1 is identical to Gawthrop et al. (2011, Equation 5) except that the scalar output y is replaced by the vector outputs y and yo, the scalar input u is replaced by the vector input u, and the scalar input disturbance d is replaced by the vector input disturbance d. Following standard practice (Kwakernaak and Sivan, 1972; Goodwin et al., 2001), it is assumed that A and B are such that the system (14.1) is controllable with respect to u and that A and Co are such that the system (14.1) is observable with respect to yo.

As described previously (Gawthrop et al., 2011), Equation 14.1 subsumes a number of subsystems including the neuromuscular (actuator dynamics in the engineering context) and disturbance subsystems of Figure 14.1.

14.2.1 Observer Design and Sensor Fusion

The system states x of Equation 14.1 are rarely available directly due to sensor placement or sensor noise. As discussed in the textbooks (Kwakernaak and Sivan, 1972; Goodwin et al., 2001), an observer can be designed based on the system model (14.1) to approximately deduce the system states x from the measured signals encapsulated in the vector yo. In particular, the observer is given by

    dxo/dt(t) = Ao xo(t) + Bu(t) + L[yo(t) − vy(t)],    (14.2)

where

    Ao = A − LCo,                                (14.3)

and where the signal vy(t) is the measurement noise. The n × no matrix L is the observer gain matrix. As discussed by, for example, Kwakernaak and Sivan (1972) and Goodwin et al. (2001), it is straightforward to design L using a number of approaches including pole-placement and the linear-quadratic optimization approach. The latter is used here and thus

    L = Lo,                                      (14.4)

where Lo is the observer gain matrix obtained using linear-quadratic optimization.

The observer deduces system states from the no observed signals contained in yo; it is thus a particular form of sensor fusion with properties determined by the n × no matrix L.

As discussed by Gawthrop et al. (2011), because the system (14.1) contains the disturbance dynamics of Figures 14.1 and 14.2, the corresponding observer deduces not only the state of the blocks labeled “System” and “NMS” in Figures 14.1 and 14.2, but also the state of the block labeled “Dist.”; thus, it acts as a disturbance observer (Goodwin et al., 2001, Chap. 14). A simple example appears in Section 14.4.1.

14.2.2 Prediction

Systems and controllers may contain pure time delays. Time delays are traditionally overcome using a predictor. The predictor of Smith (1959) [discussed by Astrom (1977)] was an early attempt at predictor design which,
286 Event-Based Control and Signal Processing

FIGURE 14.1
The Observer, Predictor, State-feedback (OPF) model. The block labeled "NMS" is a linear model of the neuromuscular dynamics with input u(t); in the engineering context, this would represent actuator dynamics. "System" is the linear external controlled system driven by the externally observed control signal ue and disturbance d, and with output y and associated measurement noise vy. The input disturbance vu is modeled as the output of the block labeled "Dist." and driven by the external signal v. The block labeled "Delay" is a pure time-delay of Δ which accounts for the various delays in the human controller. The block labeled "Observer" gives an estimate xo of the state x of the composite "NMS" and "System" (and, optionally, the "Dist.") blocks. The predictor provides an estimate of the future state error xp(t), the delayed version of which is multiplied by the feedback gain vector k (block "State FB") to give the feedback control signal u. This figure is based on Gawthrop et al. (2011, Fig. 1) which is in turn based on Kleinman (1969, Fig. 2).

FIGURE 14.2
Intermittent control. This diagram has blocks in common with those of the OPF of Figure 14.1: "NMS", "Dist.", "System", "Observer", "Predictor", and "State FB", which have the same function; the continuous-time "Predictor" block of Figure 14.1 is replaced by the much simpler intermittent version here. There are three new elements: a sampling element which samples xw at discrete times ti; the block labeled "Hold", the system-matched hold, which provides the continuous-time input to the "State FB" block; and the event detector block labeled "Trig.", which provides the trigger for the sampling times ti. The dashed lines represent sampled signals defined only at the sample instants ti. This figure is based on Gawthrop et al. (2011, Fig. 2).
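The observer of Section 14.2.1 can be sketched numerically. The Python fragment below is a minimal illustration of (14.2)–(14.3), not the chapter's implementation: the double-integrator plant and the hand-placed gain L are assumptions for the sketch (the chapter obtains L = Lo by linear-quadratic optimization), and u and vy are set to zero.

```python
import numpy as np

# Illustrative plant: double integrator with position measurement.
# These matrices and the gain L are assumptions for this sketch.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

# Hand-picked gain placing both observer-error poles at s = -1.
L = np.array([[2.0],
              [1.0]])

def simulate(x0, xo0, t_end=10.0, dt=1e-3):
    """Euler-integrate the plant and the observer (14.2) with u = 0, vy = 0:
    dxo/dt = A xo + L (y - C xo), i.e. Ao xo + L yo with Ao = A - L C."""
    x, xo = x0.copy(), xo0.copy()
    for _ in range(int(t_end / dt)):
        y = C @ x                                    # measured output yo
        x = x + dt * (A @ x)                         # plant state
        xo = xo + dt * (A @ xo + L @ (y - C @ xo))   # observer state
    return x, xo

x0 = np.array([[1.0], [0.5]])
xo0 = np.zeros((2, 1))
x, xo = simulate(x0, xo0)
err = float(np.linalg.norm(x - xo))   # estimation error after 10 s
```

With both observer-error poles at s = −1, the estimation error decays essentially as e^{−t} and is negligible after 10 s.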

however, cannot be used when the controlled system is unstable. State space–based predictors have been developed and used by a number of authors including Fuller (1968), Kleinman (1969), Sage and Melsa (1971), and Gawthrop (1976). In particular, following Kleinman (1969), a state predictor is given by

    xp(t + Δ) = e^{AΔ} xo(t) + ∫_0^Δ e^{At'} B u(t − t') dt'.   (14.5)
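The convolution integral in (14.5) can be approximated numerically. The sketch below (Python; the double-integrator matrices and constant past input are assumptions, not the chapter's system) checks a midpoint-rule evaluation of (14.5) against the closed form available for this simple A.

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via Taylor series (adequate for small matrices)."""
    E, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

# Illustrative double-integrator matrices (assumptions for this sketch).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

def predict(xo, u_past, Delta, n=1000):
    """Midpoint-rule evaluation of the predictor (14.5):
    xp(t + Delta) = e^{A Delta} xo(t) + integral_0^Delta e^{A t'} B u(t - t') dt'.
    u_past(s) is the input applied s seconds before the current time t."""
    dt = Delta / n
    xp = expm(A * Delta) @ xo
    for k in range(n):
        tp = (k + 0.5) * dt
        xp = xp + dt * u_past(tp) * (expm(A * tp) @ B)
    return xp

xo = np.array([[1.0], [0.0]])
Delta = 0.5
xp = predict(xo, lambda s: 1.0, Delta)   # constant input over the delay

# Closed form for this A (nilpotent) and u = 1:
# e^{A Delta} = [[1, Delta], [0, 1]]; integral term = [Delta^2/2; Delta].
xp_exact = np.array([[1.0 + Delta**2 / 2], [Delta]])
```

For this nilpotent A the exponential series terminates, so the check is exact up to rounding; Section 14.3.4 shows how intermittent control avoids this convolution altogether.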
Intermittent Control in Man and Machine 287

Again, apart from the scalar u being replaced by the vector u and B becoming an n × nu matrix, Equation 14.5 is the same as in the single-input (nu = 1) case.

14.2.3 Controller Design and Motor Synergies

As described in the textbooks, for example, Kwakernaak and Sivan (1972) and Goodwin et al. (2001), the LQ controller problem involves minimization of

    ∫_0^t1 [x^T(t) Qc x(t) + u^T(t) Rc u(t)] dt,   (14.6)

and letting t1 → ∞. Qc is the n × n state-weighting matrix and Rc is the nu × nu control-weighting matrix. Qc and Rc are used as design parameters in the rest of this chapter. As discussed previously (Gawthrop et al., 2011), the resultant state-feedback gain k (nu × n) may be combined with the predictor equation (14.5) to give the control signal u

    u(t) = −k xw(t),   (14.7)

where

    xw = xp(t) − xss w(t).   (14.8)

As discussed by Kleinman (1969), the use of the state predictor gives a closed-loop system with no feedback delay and dynamics determined by the delay-free closed-loop system matrix Ac given by

    Ac = A − Bk.   (14.9)

As mentioned by Todorov and Jordan (2002) and Todorov (2004), control synergies arise naturally from optimal control and are defined by the elements of the nu × n matrix k.

A key result of state-space design in the delay-free case is the separation principle [see Kwakernaak and Sivan (1972, section 5.3) and Goodwin et al. (2001, section 18.4)] whereby the observer and the controller can be designed separately.

14.2.4 Steady-State Design

As discussed in the single-input, single-output case by Gawthrop et al. (2011), there are many ways to include the setpoint in the feedback controller and one way is to compute the steady-state state xss and control signal uss corresponding to the equilibrium of the ODE (14.1):

    dx/dt = 0_{n×1},   (14.10)
    yss = C xss,   (14.11)

corresponding to a given constant value of output yss. As discussed by Gawthrop et al. (2011), the scalars xss and uss are uniquely determined by yss. In contrast, the multivariable case has additional flexibility; this section takes advantage of this flexibility by extending the equilibrium design in various ways.

In particular, Equation 14.11 is replaced by

    yss = Css xss,   (14.12)

where yss is a constant nss × mss matrix, xss is a constant n × mss matrix, and Css is an nss × n matrix.

Typically, the equilibrium space defined by yss corresponds to the task space so that, with reference to Equation 14.1, each column of yss is a steady-state value of y (e.g., yss = I_{ny×ny}) and Css = C. Further, assume that the disturbance d(t) of (14.1) has mss alternative constant values that form the columns of the nu × mss matrix dss.

Substituting the steady-state condition of Equation 14.10 into Equation 14.1 and combining with Equation 14.12 gives

    S [xss; uss] = [−Bd dss; yss],   (14.13)

where

    S = [A B; Css 0_{nss×nu}].   (14.14)

The matrix S has n + nss rows and n + nu columns; thus there are three possibilities:

nss = nu: If S is full rank, Equation 14.13 has a unique solution for xss and uss.

nss < nu: Equation 14.13 has many solutions corresponding to a low-dimensional manifold in a high-dimensional space. A particular solution may be chosen to satisfy an additional criterion such as a minimum-norm solution. An example is given in Section 14.7.4.

nss > nu: Equation 14.13 is over-determined; a least-squares solution is possible. This case is considered in more detail in Section 14.5 and an example is given in Section 14.7.5.

Having obtained a solution for xss, each of the mss columns of the n × mss steady-state matrix xss can be associated with an element of a mss × 1 weighting vector w(t). The error signal xw(t) is then defined as the difference between the estimated state xo(t) and the weighted columns of xss as

    xw(t) = xo(t) − xss w(t).   (14.15)

Following Gawthrop et al. (2011), xw(t) replaces xo in the predictor equation (14.5) and the state-feedback controller remains Equation 14.7.
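For the square case nss = nu, the steady-state design (14.13)–(14.14) reduces to a single linear solve. A Python sketch with an illustrative double integrator (an assumption for the sketch, not the chapter's example system):

```python
import numpy as np

# Illustrative double-integrator system; task space = position.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Bd = B.copy()
Css = np.array([[1.0, 0.0]])

n, nu = A.shape[0], B.shape[1]
nss = Css.shape[0]

# S of (14.14): [[A, B], [Css, 0]].
S = np.block([[A, B],
              [Css, np.zeros((nss, nu))]])

dss = np.array([[0.0]])   # no steady-state disturbance
yss = np.array([[1.0]])   # desired steady-state output (position = 1)

# (14.13): S [xss; uss] = [-Bd dss; yss]; here nss == nu and S is full
# rank, so the solution is unique.
sol = np.linalg.solve(S, np.vstack([-Bd @ dss, yss]))
xss, uss = sol[:n], sol[n:]
```

For the over-determined case nss > nu, the same S can instead be passed to a least-squares routine such as numpy.linalg.lstsq.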

Remarks

1. In the single-input case (nu = 1) setting yss = 1 and dss = 0 gives the same formulation as given by Gawthrop et al. (2011) and w(t) is the setpoint.

2. Disturbances may be unknown. Thus, using this approach requires disturbances to be estimated in some way.

3. Setpoint tracking is considered in Section 14.7.4.

4. The effect of a constant disturbance is considered in Section 14.7.5.

5. Constrained solutions are considered in Section 14.5.1.

14.3 Intermittent Control

Intermittent control is based on the underlying continuous-time design of Section 14.2. The purpose is to allow control computation to be performed intermittently at discrete time points—which may be determined by time (clock-driven) or the system state (event-driven)—while retaining much of the continuous-time behavior.

A disadvantage of traditional clock-driven discrete-time control (Franklin and Powell, 1980; Kuo, 1980) based on the ZOH is that the control needs to be redesigned for each sample interval. This also means that the ZOH approach is inappropriate for event-driven control. The intermittent approach avoids these issues by replacing the ZOH by the SMH. Because the SMH is based on the system state, it turns out that it does not depend on the number of system inputs nu or outputs ny, and therefore the SMH described by Gawthrop et al. (2011) in the single-input nu = 1, single-output ny = 1 context carries over to the multi-input nu > 1, multi-output ny > 1 case.

This section is a tutorial introduction to the SMH-based IC in both clock-driven and event-driven cases. Section 14.3.1 looks at the various time frames involved, Section 14.3.2 describes the SMH, and Sections 14.3.3–14.3.5 look at the observer, predictor, and feedback control, developed in the continuous-time context in Section 14.2, in the intermittent context. Section 14.3.6 looks at the event detector used for the event-driven version of intermittent control.

14.3.1 Time Frames

As discussed by Gawthrop et al. (2011), intermittent control makes use of three time frames:

1. Continuous-time, within which the controlled system (14.1) evolves, is denoted by t.

2. Discrete-time points at which feedback occurs are indexed by i. Thus, for example, the discrete-time instants are denoted by ti and the corresponding estimated state is xoi = xo(ti). The ith intermittent interval Δol = Δi* is defined as

    Δol = Δi = ti+1 − ti.   (14.16)

This chapter distinguishes between event times ti and the corresponding sample times tsi. In particular, the model of Gawthrop et al. (2011) is extended so that sampling occurs a fixed time Δs after an event at time ti thus:

    tsi = ti + Δs.   (14.17)

Δs is called the sampling delay in the sequel.

3. Intermittent-time is a continuous-time variable, denoted by τ, restarting at each intermittent interval. Thus, within the ith intermittent interval:

    τ = t − ti.   (14.18)

Similarly, define the intermittent time τs after a sample by

    τs = t − tsi.   (14.19)

A lower bound Δmin is imposed on each intermittent interval Δi (14.16):

    Δi > Δmin > 0.   (14.20)

As discussed by Gawthrop et al. (2011) and in Section 14.4.2, Δmin is related to the Psychological Refractory Period (PRP) of Telford (1931) as discussed by Vince (1948) to explain the human response to double stimuli. As well as corresponding to the PRP explanation, the lower bound of (14.20) has two implementation advantages. Firstly, as discussed by Ronco et al. (1999), the time taken to compute the control signal (and possibly other competing tasks) can be up to Δmin; it thus provides a model for a single-processor bottleneck. Secondly, as discussed by Gawthrop et al. (2011), the predictor equations are particularly simple if the system time-delay Δ ≤ Δmin.

* Within this chapter, we will use Δol to refer to the generic concept of intermittent interval and Δi to refer to the length of the ith interval.
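The event/sample bookkeeping of this section can be sketched in a few lines of Python; the candidate event times below are illustrative assumptions.

```python
# Sketch of the Section 14.3.1 bookkeeping: sample times tsi = ti + Delta_s
# (14.17) and the lower bound on the intermittent interval (14.20).

DELTA_MIN = 0.5   # minimum intermittent interval
DELTA_S = 0.1     # sampling delay

def schedule(candidates):
    """Accept each candidate event no sooner than DELTA_MIN after the
    previously accepted event; return (event time, sample time) pairs."""
    pairs, t_last = [], float("-inf")
    for t in candidates:
        t_i = max(t, t_last + DELTA_MIN)   # delayed if within the refractory period
        pairs.append((t_i, t_i + DELTA_S))
        t_last = t_i
    return pairs

# A second stimulus at t = 1.2 falls within DELTA_MIN of the first event
# and is therefore delayed to t = 1.5.
pairs = schedule([1.0, 1.2, 3.0])
```

This delaying of closely spaced events is the mechanism behind the PRP-like behavior explored in Section 14.4.2.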

14.3.2 System-Matched Hold

The SMH is the key component of the intermittent control. As described by Gawthrop et al. (2011, Equation 23), the SMH state xh evolves in the intermittent time frame τ as

    d/dτ xh(τ) = Ah xh(τ),   (14.21)

where

    Ah = Ac,   (14.22)
    xh(0) = xp(tsi − Δ),   (14.23)

where Ac is the closed-loop system matrix (14.9) and xp is given by the predictor equation (14.5). The hold state xh replaces the predictor state xp in the controller equation (14.7). Other holds (where Ah ≠ Ac) are possible (Gawthrop and Wang, 2007; Gawthrop and Gollee, 2012).

The IC generates an open-loop control signal based on the hold state xh (14.21). At the intermittent sample times ti, the hold state is reset to the estimated system state xw generated by the observer (14.2); thus feedback occurs at the intermittent sample times ti. The sample times are constrained by (14.20) to be at least Δmin apart. But, in addition to this constraint, feedback only takes place when it is needed; the event detector discussed in Section 14.3.6 provides this information.

14.3.3 Intermittent Observer

The IC of Gawthrop et al. (2011) uses continuous observation; however, motivated by the occlusion experiments of Osborne (2013), this chapter looks at intermittent observation.

As discussed in Section 14.3.2, the predictor state xp is only sampled at discrete times ti. Further, from Equation 14.5, xp is a function of xo at these times. Thus only the observer performance at the discrete times ti is important. With this in mind, this chapter proposes, in the context of intermittent control, that the continuous observer is replaced by an intermittent observer where periods of monitoring the system measurements are interleaved with periods where the measurement is occluded. In particular, and with reference to Figure 14.3, this chapter examines the situation where observation is occluded for a time Δoo following sampling. Such occlusion is equivalent to setting the observer gain L = 0 in Equation 14.2. Setting L = 0 has two consequences: the measured signal y is ignored and the observer state evolves as the disturbance-free system.

FIGURE 14.3
Self-occlusion. Following an event, the observer is sampled at a time Δs and a new control trajectory is generated. The observer is then occluded for a further time Δoo = Δo where Δo is the internal occlusion interval. Following that time, the observer is operational and an event can be detected (14.35). The actual time between events Δi > Δs + Δoo.

With reference to Equation 14.23, the IC only makes use of the state estimate at the discrete time points t = tsi (14.17); moreover, in the event-driven case, the observer state estimate is used in Equation 14.35 to determine the event times ti and thus tsi. Hence, a good state estimate immediately after a sample at time tsi is not required and so one would expect that occlusion (L = 0) would have little effect immediately after t = tsi. For this reason, define the occlusion time Δoo as the time after t = tsi for which the observer is open-loop (L = 0). That is, the constant observer gain is replaced by the time-varying observer gain:

    L(t) = 0 for τs < Δoo;  L(t) = Lo for τs ≥ Δoo,   (14.24)

where Lo is the observer gain designed using standard techniques (Kwakernaak and Sivan, 1972; Goodwin et al., 2001) and the intermittent time τs is given by (14.19).

14.3.4 Intermittent Predictor

The continuous-time predictor of Equation 14.5 contains a convolution integral which, in general, must be approximated for real-time purposes and therefore has a speed–accuracy trade-off. This section shows that the use of intermittent control, together with the hold of Section 14.3.2, means that Equation 14.5 can be replaced by a simple exact formula.

Equation 14.5 is the solution of the differential equation (in the intermittent time τ (14.18) time frame)

    d/dτ xp(τ) = A xp(τ) + B u(τ),  xp(0) = xw(tsi),   (14.25)

evaluated at time τs = Δ, where tsi is given by Equation 14.17. However, the control signal u is not arbitrary

but rather given by the hold Equation 14.21. Combining Equations 14.21 and 14.25 gives

    d/dτ X(τ) = Aph X(τ),  X(0) = Xi,   (14.26)

where

    X(τ) = [xp(τ); xh(τ)],   (14.27)
    Xi = [xw(ti); xp(ti − Δ)],   (14.28)

and

    Aph = [A −Bk; 0_{n×n} Ah],   (14.29)

where 0 is a zero matrix of the indicated dimensions and the hold matrix Ah can be Ac (SMH) or 0 (ZOH).

Equation 14.26 has an explicit solution at time τ = Δ given by

    X(Δ) = e^{Aph Δ} Xi.   (14.30)

The prediction xp can be extracted from (14.30) to give

    xp(ti) = Epp xw(ti) + Eph xh(ti),   (14.31)

where the n × n matrices Epp and Eph are partitions of the 2n × 2n matrix E:

    E = [Epp Eph; Ehp Ehh],   (14.32)

where

    E = e^{Aph Δ}.   (14.33)

The intermittent predictor (14.31) replaces the continuous-time predictor (14.5); there is no convolution involved and the matrices Epp and Eph can be computed off-line and so do not impose a computational burden in real-time.

14.3.5 State Feedback

The "state-feedback" block of Figure 14.2 is implemented as

    u(t) = −k xh(t).   (14.34)

This is similar to the conventional state feedback of Figure 14.1 given by Equation 14.7, but the continuous predicted state xw(t) is replaced by the hold state xh(t) generated by Equation 14.21.

14.3.6 Event Detector

The purpose of the event detector is to generate the intermittent sample times ti and thus trigger feedback. Such feedback is required when the open-loop hold state xh (14.21) differs significantly from the closed-loop observer state xw (14.15), indicating the presence of disturbances. There are many ways to measure such a discrepancy; following Gawthrop et al. (2011), the one chosen here is to look for a quadratic function of the error ehp exceeding a threshold qt^2:

    E = ehp^T(t) Qt ehp(t) − qt^2 ≥ 0,   (14.35)

where

    ehp(t) = xh(t) − xw(t),   (14.36)

and where Qt is a positive semi-definite matrix.

14.3.7 The Intermittent-Equivalent Setpoint

Loram et al. (2012) introduce the concept of the equivalent setpoint for intermittent control. This section extends the concept and there are two differences:

1. The setpoint sampling occurs at ti + Δs rather than at ti and

2. The filtered setpoint wf (rather than w) is sampled.

Define the sample time tsi (as opposed to the event time ti) and the corresponding intermittent time τs by

    tsi = ti + Δs,   (14.37)
    τs = τ − Δs = t − ti − Δs = t − tsi.   (14.38)

In particular, the sampled setpoint ws becomes

    ws(t) = wf(tsi) for tsi ≤ t < tsi+1,   (14.39)

where wf is the filtered setpoint w. That is, the sampled setpoint ws is the filtered setpoint at time tsi = ti + Δs. The equivalent setpoint wic is then given by

    wic(t) = ws(t − td)   (14.40)
           = wf(tsi − td)   (14.41)
           = wf(t − τs − td) for tsi ≤ t < tsi+1.   (14.42)

This corresponds to the previous result (Loram et al., 2012) when Δs = 0 and wf(t) = w(t).

If, however, the setpoint w(t) is such that wf(tsi) ≈ w(tsi) (i.e., no second stimulus within the filter settling

time and Δs is greater than the filter settling time) then Equation 14.40 may be approximated by

    wic(t) ≈ w(tsi − td) for tsi ≤ t < tsi+1   (14.43)
           = w(t − τs − td) for tsi ≤ t < tsi+1.   (14.44)

As discussed in Section 14.10, the intermittent-equivalent setpoint is the basis for identification of intermittent control.

14.3.8 The Intermittent Separation Principle

As discussed in Section 14.3.2, the IC contains an SMH which can be viewed as a particular form of generalized hold (Gawthrop and Wang, 2007). Insofar as this special case of intermittent control uses a dynamical model of the controlled system to generate the (open-loop) control between sample intervals, it is related to the strategies of both Zhivoglyadov and Middleton (2003) and Astrom (2008). However, as shown in this chapter, intermittent control provides a framework within which to analyze and design a range of control systems with unpredictable nonuniform sampling possibly arising from an event-driven design.

In particular, it is shown by Gawthrop and Wang (2011) that the SMH-based IC is associated with a separation principle similar to that of the underlying continuous-time controller, which states that the closed-loop poles of the intermittent control system consist of the control system poles and the observer system poles, and that the interpolation using the system-matched hold does not lead to changes of the closed-loop poles. As discussed by Gawthrop and Wang (2011), this separation principle is only valid when using the SMH. For example, intermittent control based on the standard ZOH does not lead to such a separation principle and therefore closed-loop stability is compromised when the sample interval is not fixed.

As discussed by Gawthrop and Wang (2011), an important consequence of this separation principle is that neither the design of the SMH, nor the stability of the closed-loop system in the fixed sampling case, is dependent on the sample interval. It is therefore conjectured that the SMH is particularly appropriate when sample times are unpredictable or nonuniform, possibly arising from an event-driven design.

14.4 Examples: Basic Properties of Intermittent Control

This section uses simulation to illustrate key properties of intermittent control. Section 14.4.1 illustrates

• Timed and event-driven control (Section 14.3.6).
• The roles of the disturbance observer and series integrator (Section 14.2.1).
• The choice of event threshold (Section 14.3.6).
• The difference between control-delay and sampling delay (Section 14.3.1).
• The effect of low and high observer gain (Section 14.2.1) and
• The effect of occlusion (Section 14.3.4).

Sections 14.4.2 and 14.4.3 illustrate how the IC models two basic psychological phenomena: the Psychological Refractory Period and the Amplitude Transition Function (ATF).

14.4.1 Elementary Examples

This section illustrates the basic properties of intermittent control using simple examples. In all cases, the system is given by

    G0(s) = 1/(s^2 − 1) = 1/[(s − 1)(s + 1)]   Second-order unstable system,   (14.45)
    Gv(s) = 1/s   Simple integrator for disturbance observer.   (14.46)

The corresponding state-space system (14.1) is

    A = [0 1 1; 1 0 0; 0 0 0],   (14.47)
    B = Bd = [1; 0; 0],   (14.48)
    Bv = [0; 0; 1],   (14.49)
    C = [0 1 0].   (14.50)

All signals are zero except:

    w(t) = 1, t ≥ 1.1,   (14.51)
    d(t) = 0.5, t ≥ 5.1.   (14.52)

Except where stated, the intermittent control parameters are

    Δmin = 0.5   Min. intermittent interval (14.20)
    qt = 0.1   Threshold (14.35)
    Δ = 0   Control delay (14.5)
    Δs = 0   Sampling delay (14.17)

FIGURE 14.4
Elementary example: timed and event-driven. (a) Timed: y. (b) Timed: u. (c) Event-driven: y. (d) Event-driven: u.
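Between events, the control is generated open-loop by the system-matched hold; in the disturbance-free case this reproduces the underlying continuous-time closed loop exactly. A Python sketch with an illustrative double integrator and gain (assumptions for the sketch, not the system of (14.47)):

```python
import numpy as np

# Illustrative double integrator and feedback gain; Ac = A - B k is the
# closed-loop matrix (14.9) and also the hold matrix Ah of (14.22).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
k = np.array([[1.0, 2.0]])
Ac = A - B @ k

def run_interval(x0, T=2.0, dt=1e-4):
    """Drive the plant open-loop with u = -k xh, where the hold state
    evolves as d/dtau xh = Ac xh (14.21) from xh(0) = x(0)."""
    x, xh = x0.copy(), x0.copy()
    for _ in range(int(T / dt)):
        u = -k @ xh                    # state feedback from the hold (14.34)
        x = x + dt * (A @ x + B @ u)   # plant
        xh = xh + dt * (Ac @ xh)       # system-matched hold
    return x, xh

x0 = np.array([[1.0], [0.0]])
x, xh = run_interval(x0)
# With no disturbance, the plant state tracks the hold state exactly, so
# the open-loop control equals the closed-loop one.
```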

Figures 14.4–14.9 are all of the same format. The left column of figures shows the system output y together with the setpoint w and the output yc corresponding to the underlying continuous-time design; the right column shows the corresponding control signal ue together with the negative disturbance −d and the control uc. In each case, the symbol (•) corresponds to an event.

Figure 14.4 contrasts timed and event-driven control. In particular, Figure 14.4a and b corresponds to zero threshold (qt = 0) and thus timed intermittent control with fixed interval Δmin = 0.5, and Figure 14.4c and d corresponds to event-driven control. The event-driven case has two advantages: the controller responds immediately to the setpoint change at time t = 1.1, whereas the timed case has to wait until the next sample at t = 1.5, and the control is only computed when required. In particular, the initial setpoint response does not need to be corrected, but the unknown disturbance means that the observer state is different from the system state for a while and so corrections need to be made until the disturbance is correctly deduced by the observer.

The simulation of Figure 14.4 includes the disturbance observer implied by the integrator of Equation 14.46; this means that the controllers are able to asymptotically eliminate the constant disturbance d. Figure 14.5a and b shows the effect of not using the disturbance observer. The constant disturbance d is not eliminated and the IC exhibits limit-cycling behavior (analyzed further by Gawthrop (2009)). As an alternative to the disturbance observer used in the simulation of Figure 14.4, a series integrator can be used by setting:

    Gs(s) = 1/s   Series integrator for disturbance rejection.   (14.53)

The corresponding simulation is shown in Figure 14.5c and d.* Although the constant disturbance d is now asymptotically eliminated, the additional integrator increases both the system order and the system relative degree by one, giving a more difficult system to control.

The event detector behavior depends on the threshold qt (14.35); this has already been examined in

* The system dynamics are now different; the LQ design parameter is set to Qc = 100 to account for this.
FIGURE 14.5
Elementary example: no disturbance observer, with and without integrator. (a) No integrator: y. (b) No integrator: u. (c) Integrator: y. (d) Integrator: u.
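The limit cycling seen without a disturbance observer can be reproduced qualitatively with a much-reduced sketch: a first-order plant with full state measurement and no observer, predictor, or disturbance observer (Python; all values illustrative, whereas the chapter's example is second order).

```python
import numpy as np

# Scalar sketch of event-driven intermittent control: plant dx/dt = u + d.
k = 2.0              # state feedback u = -k xw; hold matrix Ac = -k
dt = 1e-3
D_MIN, QT = 0.5, 0.1

def simulate(event_driven, t_end=10.0):
    x, xh, t_last, events = 0.0, 0.0, float("-inf"), []
    for i in range(int(t_end / dt)):
        t = i * dt
        w = 1.0 if t >= 1.1 else 0.0         # setpoint step, cf. (14.51)
        d = 0.5 if t >= 5.1 else 0.0         # disturbance step, cf. (14.52)
        xw = x - w                            # error state, cf. (14.15)
        # Scalar event detector (14.35)-(14.36) with Qt = 1.
        trigger = abs(xh - xw) >= QT if event_driven else True
        if trigger and t - t_last >= D_MIN:   # minimum interval (14.20)
            xh, t_last = xw, t
            events.append(t)
        u = -k * xh                           # state feedback (14.34)
        xh += dt * (-k * xh)                  # hold dynamics (14.21)
        x += dt * (u + d)                     # plant
    return x, events

x_ev, ev = simulate(True)
x_tm, tm = simulate(False)
# Without a disturbance observer, the constant d is not eliminated: the
# event-driven loop settles into a limit cycle offset from the setpoint,
# with repeated correction events after t = 5.1.
```

The event-driven run triggers far fewer feedback instants than the timed run, and only one event occurs before the disturbance arrives.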

the simulations of Figure 14.4. Figure 14.6 shows the effect of a low (qt = 0.01) and high (qt = 1) threshold. As discussed in the context of Figure 14.4, the initial setpoint response does not need to be corrected, but the unknown disturbance generates events. The simulations of Figure 14.6 indicate the trade-off between performance and event rate determined by the choice of the threshold qt.

The simulations of Figure 14.7 compare and contrast the two delays: control delay Δ and sample delay Δs. In particular, Figure 14.7a and b corresponds to Δ = 0.4 and Δs = 0 but Figure 14.7c and d corresponds to Δ = 0 and Δs = 0.4. The response to the setpoint is identical as the prediction error is zero in this case; the response to the disturbance change is similar, but not identical as the prediction error is not zero in this case.

The state observer of Equation 14.2 is needed to deduce unknown states in general and the state corresponding to the unknown disturbance in particular. As discussed in the textbooks (Kwakernaak and Sivan, 1972; Goodwin et al., 2001), the choice of observer gain gives a trade-off between measurement noise and disturbance responses. The gain used in the simulations of Figure 14.4 can be regarded as medium; Figure 14.8 looks at low and high gains. As there is no measurement noise in this case, the low-gain observer gives a poor disturbance response while the high gain gives an improved disturbance response.

The simulations presented in Figure 14.9 investigate the intermittent observer of Section 14.3.3. In particular, the measurement of the system output y is assumed to be occluded for a period Δoo following a sample. Figure 14.9a and b shows simulation with Δoo = 0.1 and Figure 14.9c and d shows simulation with Δoo = 0.5. It can be seen that occlusion has little effect on performance for the lower value, but performance is poor for the larger value.
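The observer-gain trade-off of Figure 14.8 can be sketched directly on the observer error dynamics (Python; the scalar plant with constant disturbance, the two gains, and the horizon are illustrative assumptions).

```python
import numpy as np

# Disturbance-observer sketch: plant dx/dt = u + d with constant unknown d,
# augmented state [x; d] as in Section 14.2.1.
A = np.array([[0.0, 1.0],    # x' = d (the input u cancels in the error)
              [0.0, 0.0]])   # d' = 0
C = np.array([[1.0, 0.0]])

def error_after(L, t_end=2.0, dt=1e-4):
    """Observer error norm after t_end, starting from a unit error in the
    disturbance estimate; error dynamics de/dt = (A - L C) e, cf. (14.2)-(14.3)."""
    e = np.array([[0.0], [1.0]])
    M = A - L @ C
    for _ in range(int(t_end / dt)):
        e = e + dt * (M @ e)
    return float(np.linalg.norm(e))

L_low = np.array([[2.0], [1.0]])       # error poles at s = -1
L_high = np.array([[20.0], [100.0]])   # error poles at s = -10
err_low, err_high = error_after(L_low), error_after(L_high)
# The high gain deduces the disturbance much faster, at the price of a
# greater noise sensitivity (not modeled here).
```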
FIGURE 14.6
Elementary example: low and high threshold. (a) Low threshold: y. (b) Low threshold: u. (c) High threshold: y. (d) High threshold: u.

14.4.2 The Psychological Refractory Period and Intermittent-Equivalent Setpoint

As noted in Section 14.3.7, the intermittent sampling of the setpoint w leads to the concept of the intermittent-equivalent setpoint: the setpoint that is actually used within the IC. Moreover, as noted in Section 14.3.1, there is a minimum intermittent interval Δmin. As discussed by Gawthrop et al. (2011), Δmin is related to the psychological refractory period (Telford, 1931) which explains the experimental results of Vince (1948) where a second reaction time may be longer than the first. These ideas are explored by simulation in Figures 14.10–14.12. In all cases, the system is given by

    G0(s) = 1/s   Simple integrator.   (14.54)

The corresponding state-space system (14.1) is

    A = 0, B = C = 1.   (14.55)

All signals are zero except the signal w0, which is defined as

    w0(t) = 1 for 0.5 ≤ t ≤ 1.5, 2.0 ≤ t ≤ 2.5, 3.0 ≤ t ≤ 3.2, 4.0 ≤ t ≤ 4.1,   (14.56)

and the filtered setpoint w is obtained by passing w0 through the low-pass filter Gw(s) where

    Gw(s) = 1/(1 + sTf).   (14.57)

Except where stated, the intermittent control parameters are

    Δmin = 0.5   Min. intermittent interval (14.20)
    qt = 0.1   Threshold (14.35)
    Δ = 0   Control delay (14.5)
    Δs = 0   Sampling delay (14.17)
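The refractory behavior explored in Figures 14.10–14.12 can be sketched with the pulse setpoint (14.56) alone. The Python fragment below is an illustrative simplification, not the full intermittent controller: times are in tenths of a second so the arithmetic is exact, and the event rule reduces the detector of Section 14.3.6 to a setpoint-change test.

```python
# Event generation for the pulse setpoint (14.56) with the refractory
# period Delta_min = 0.5 s, reproducing delayed trailing-edge events.

PULSES = [(5, 15), (20, 25), (30, 32), (40, 41)]   # half-open intervals
D_MIN = 5                                          # Delta_min = 0.5 s

def w0(t):
    """Setpoint of (14.56): 1 inside a pulse, 0 otherwise."""
    return 1 if any(a <= t < b for a, b in PULSES) else 0

def event_times(t_end=60):
    """Trigger an event whenever the held setpoint differs from w0(t),
    but no sooner than D_MIN after the previous event (14.20)."""
    held, t_last, events = 0, -10**9, []
    for t in range(t_end):
        if w0(t) != held and t - t_last >= D_MIN:
            held, t_last = w0(t), t
            events.append(t)
    return events

events = event_times()
# The trailing edges at t = 3.2 s and 4.1 s fall within D_MIN of their
# leading edges and are delayed to 3.5 s and 4.5 s, so the two narrow
# pulses are held as if they were Delta_min wide.
```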
FIGURE 14.7
Elementary example: control-delay and sampling delay. (a) Control delay: y. (b) Control delay: u. (c) Sample delay: y. (d) Sample delay: u.
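The control-delay case of Figure 14.7 relies on the intermittent predictor of Section 14.3.4. A Python sketch computing E = e^{Aph Δ} (14.33) and checking the partitioned prediction (14.31) against direct integration of (14.26); the matrices, gain, and states are illustrative assumptions.

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential by Taylor series (adequate for small, mild M)."""
    E, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for j in range(1, terms):
        term = term @ M / j
        E = E + term
    return E

# Illustrative double integrator, gain k, and SMH (Ah = Ac).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
k = np.array([[1.0, 2.0]])
Ac = A - B @ k
n, Delta = 2, 0.4

# Aph of (14.29), E = e^{Aph Delta} and its partitions (14.31)-(14.32).
Aph = np.block([[A, -B @ k],
                [np.zeros((n, n)), Ac]])
E = expm(Aph * Delta)
Epp, Eph = E[:n, :n], E[:n, n:]

xw = np.array([[1.0], [0.5]])
xh = np.array([[0.8], [0.2]])
xp = Epp @ xw + Eph @ xh          # intermittent predictor (14.31)

# Cross-check: Euler-integrate (14.26) from X(0) = [xw; xh].
X, dt = np.vstack([xw, xh]), 1e-5
for _ in range(int(Delta / dt)):
    X = X + dt * (Aph @ X)
```

As the chapter notes, Epp and Eph depend only on the fixed delay Δ and can be computed off-line.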

Figure 14.10a corresponds to the unfiltered setpoint with Tf = 0 and w = w0 where w0 is given by (14.56). For the first two (wider) pulses, events (•) occur at each setpoint change; but for the second two (narrower) pulses, the trailing edges occur at a time less than Δmin = 0.5 from the leading edges and thus the events corresponding to the trailing edges are delayed until Δmin has elapsed. Thus, the second two (narrower) pulses lead to outputs as if the pulses were Δmin wide. Figure 14.10b shows the intermittent-equivalent setpoint wic superimposed on the actual setpoint w.

Figure 14.11a corresponds to the filtered setpoint with Tf = 0.01 and w0 given by (14.56). At the event times, the setpoint has not yet reached its final value and thus the initial response is too small, which is then corrected; Figure 14.11b shows the intermittent-equivalent setpoint wic superimposed on the actual setpoint w.

The unsatisfactory behavior can be improved by delaying the sample time by Δs as discussed in Section 14.3.1. Figure 14.12a corresponds to Figure 14.11a except that Δs = 0.1. Except for the short delay of Δs = 0.1, the behavior of the first three pulses is now similar to that of Figure 14.10a. The fourth (shortest) pulse gives, however, a reduced amplitude output; this is because the sample occurs on the trailing edge of the pulse. This behavior has been observed by Vince (1948) and is related to the ATF of Barrett and Glencross (1988). Figure 14.12b shows the intermittent-equivalent setpoint wic superimposed on the actual setpoint w. This phenomenon is further investigated in Section 14.4.3.

14.4.3 The Amplitude Transition Function

This section expands on the observation in Section 14.4.2, Figure 14.12, that the combination of sampling delay and a bandwidth-limited setpoint can lead to narrow pulses being "missed." It turns out that the physiological equivalent of this behavior is the so-called Amplitude Transition Function (ATF) described by Barrett and Glencross (1988). Instead of the symmetric pulse discussed in the PRP context in Section 14.4.2, the ATF concept is based on asymmetric pulses where the step down is less than the step up, leading to a nonzero final value. An example of an asymmetric pulse appears in Figure 14.13. The simulations in this section use the same
296 Event-Based Control and Signal Processing
FIGURE 14.8
Elementary example: low and high observer gain. (a) Low observer gain: y. (b) Low observer gain: u. (c) High observer gain: y. (d) High observer
gain: u.

system as in Section 14.4.2, Equations 14.54 and 14.55, but the setpoint w0 of Equation 14.56 is replaced by

w0(t) = { 0, t < 1 ; 1, 1 ≤ t ≤ 1 + Δp ; 0.5, t > 1 + Δp }, (14.58)

where Δp is the pulse-width.
The system was simulated for two pulse widths: Δp = 200 ms (Figure 14.13a) and Δp = 100 ms (Figure 14.13b). In each case, following Equation 14.58, the pulse was asymmetric going from 0 to 1 and back to 0.5.
At each pulse width, the system was simulated with event delay Δs = 90, 100, . . . , 150 ms and the control delay was set to 100 ms. Figure 14.13a shows the “usual” behavior: the 200 ms pulse is expanded to Δol = 500 ms and delayed by Δ + Δs. In contrast, Figure 14.13b shows the “Amplitude Transition Function” behavior: because the sampling is occurring on the downwards side of the pulse, the amplitude is reduced with increasing Δs. Figure 14.13a is closely related to Figure 2 of Barrett and Glencross (1988).

14.5 Constrained Design

The design approach outlined in Sections 14.2 and 14.3 assumes that system inputs and outputs can take any value. In practice, this is not always the case and so constraints on both system inputs and outputs must be taken into account. There are at least three classes of constraints of interest in the context of intermittent control:

1. Constraints on the steady-state behavior of a system. These are particularly relevant in the context of multi-input (nu > 1) and
Intermittent Control in Man and Machine 297
FIGURE 14.9
Elementary example: low and high occlusion time. (a) Low occlusion time: y. (b) Low occlusion time: u. (c) High occlusion time: y. (d) High
occlusion time: u.

FIGURE 14.10
Psychological refractory period: square setpoint. (a) Intermittent control. (b) Intermittent-equivalent setpoint.
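The event timing underlying Figure 14.10 (an event at each setpoint change, but with successive events separated by at least Δmin) can be sketched numerically. This is an illustrative fragment, not the chapter's implementation; the function name and the example change times are invented:

```python
def event_times(change_times, delta_min):
    """Map setpoint-change times to intermittent event times.

    An event is generated at each setpoint change, but an event
    occurring less than delta_min after the previous event is
    delayed until delta_min has elapsed (the PRP-like behavior
    of Figure 14.10).
    """
    events = []
    last = float("-inf")
    for t in sorted(change_times):
        e = max(t, last + delta_min)
        events.append(e)
        last = e
    return events

# Wide pulses pass through unchanged; for a narrow pulse the
# trailing-edge event is delayed until delta_min has elapsed.
ev = event_times([1.0, 2.0, 3.0, 3.2], delta_min=0.5)
```

With Δmin = 0.5, the change at t = 3.2 falls within the refractory interval of the event at t = 3 and is delayed to t = 3.5, mirroring the narrow-pulse behavior described above.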

multi-output (ny > 1) systems. This issue is discussed in Section 14.5.1 and is illustrated by an example in Section 14.7.
2. Amplitude constraints on the dynamical behavior of a system. This is a topic that is much discussed in the Model-based Predictive Control literature [e.g., Rawlings (2000); Maciejowski (2002); Wang (2009)]. In the context of intermittent control, constraints have been considered in the single-input
FIGURE 14.11
Psychological refractory period: filtered setpoint. (a) Intermittent control. (b) Intermittent-equivalent setpoint.

FIGURE 14.12
Psychological refractory period: sampling delay. (a) Intermittent control. (b) Intermittent-equivalent setpoint.

FIGURE 14.13
Amplitude transition function. (a) y&w: pulse width 200 ms. (b) y&w: pulse width 100 ms.

single-output context by Gawthrop and Wang (2009a); the corresponding multivariable case is considered in Section 14.5.2 and illustrated in Section 14.6.
3. Power constraints on the dynamical behavior of a system. This topic has been discussed by Gawthrop, Wagg, Neild, and Wang (2013c).

14.5.1 Constrained Steady-State Design

Section 14.2.4 considers the steady-state design of the continuous controller (CC) underlying intermittent control. In particular, Equation 14.13 gives a linear algebraic equation giving the steady-state system state xss and corresponding control signal uss yielding a particular steady-state output yss. Although in the single-input single-output case considered by Gawthrop et al. (2011, equation 13) the solution is unique, as discussed in Section 14.2.4 the multi-input, multi-output case gives rise to more possibilities. In particular, it is not possible to exactly solve Equation 14.13 in the over-determined case where nss > nu, but a least-squares solution exists. In the constrained case, this solution must satisfy two sets of constraints: an equality constraint ensuring that the equilibrium condition (14.10) holds and inequality constraints to reject physically impossible solutions.

In this context, the nss × nss weighting matrix Qss can be used to vary the relative importance of each element of yss. In particular, define

SQ = [ A  B ; Qss Css  0nss×nu ], (14.59)
Xss = [ xss ; uss ], (14.60)
Yss = [ −Bd dss ; Qss yss ], (14.61)

and

Ŷss = [ −Bd dss ; Qss ŷss ] = SQ X̂ss. (14.62)

This gives rise to the least-squares cost function:

Jss = (Yss − Ŷss)^T (Yss − Ŷss) = (Yss − SQ X̂ss)^T (Yss − SQ X̂ss). (14.63)

Differentiating with respect to X̂ss gives the weighted least-squares solution of (14.13):

SQ^T (Yss − SQ X̂ss) = 0, (14.64)

or

X̂ss = (SQ^T SQ)⁻¹ SQ^T Yss. (14.65)

As X̂ss corresponds to a steady-state solution corresponding to Equation 14.10, the solution of the least-squares problem is subject to the equality constraint:

[ A  B ] X̂ss = A x̂ss + B ûss = −Bd dss. (14.66)

Furthermore, suppose that the solution must be such that the components of Ŷ corresponding to yss are bounded above and below:

ŷmin ≤ ŷ = Css X̂ ≤ ŷmax. (14.67)

Inequality (14.67) can be rewritten as

[ −Css ; Css ] X̂ ≤ [ −ŷmin ; ŷmax ]. (14.68)

The quadratic cost function (14.63) together with the linear equality constraint (14.66) and the linear inequality constraint (14.68) forms a quadratic program (QP), which has well-established numerical algorithms available for its solution (Fletcher, 1987).
An example of constrained steady-state optimization is given in Section 14.7.5.

14.5.2 Constrained Dynamical Design

MPC (Rawlings, 2000; Maciejowski, 2002; Wang, 2009) combines a quadratic cost function with linear constraints to provide optimal control subject to (hard) constraints on both state and control signal; this combination of quadratic cost and linear constraints can be solved by using quadratic programming (QP) (Fletcher, 1987; Boyd and Vandenberghe, 2004). Almost all MPC algorithms have a discrete-time framework. As a move toward a continuous-time formulation of intermittent control, the intermittent approach to MPC was introduced (Ronco et al., 1999) to reduce online computational demand while retaining continuous-time like behavior (Gawthrop and Wang, 2007, 2009a; Gawthrop et al., 2011). This section introduces and illustrates this material.∗

Using the feedback control comprising the system matched hold (14.21), its initialization (14.23), and feedback (14.34) may cause state or input constraints to be violated over the intermittent interval. The key idea introduced by Chen and Gawthrop (2006) and exploited by Gawthrop and Wang (2009a) is to replace the SMH initialization (at time t = ti (14.16)) of Equation 14.23 by

xh(0) = { xp(ti − Δ) − xss w(ti)  when constraints not violated ; Ui  otherwise }. (14.69)

∗ Hard constraints on input power flow are considered by Gawthrop et al. (2013c)—these lead to quadratically-constrained quadratic programming (QCQP) (Boyd and Vandenberghe, 2004).
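The constrained steady-state problem of (14.63), (14.66), and (14.68) is a standard QP. As a minimal sketch (random illustrative matrices, not the chapter's model), the equality-constrained part can be solved directly from the KKT conditions; adding the inequality constraint (14.68) would call for a full QP solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 3, 2, 4                        # illustrative state/input/output sizes
SQ = rng.standard_normal((n + p, n + m)) # stacked matrix SQ of (14.59)
Y = rng.standard_normal(n + p)           # stacked target Yss of (14.61)
Aeq = SQ[:n, :]                          # the [A B] rows: equality constraint (14.66)
beq = Y[:n]                              # plays the role of -Bd*dss

# KKT system for: min ||Y - SQ X||^2  subject to  Aeq X = beq
H = 2 * SQ.T @ SQ
g = 2 * SQ.T @ Y
K = np.block([[H, Aeq.T],
              [Aeq, np.zeros((n, n))]])
sol = np.linalg.solve(K, np.concatenate([g, beq]))
X = sol[:n + m]                          # constrained least-squares solution
```

When the inequality constraint is inactive, this coincides with the unconstrained solution (14.65) projected onto the equality constraint.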

where Ui is the result of the online optimization to be discussed in Section 14.5.2.2.
The first step is to construct a set of equations describing the evolution of the system state x and the generalized hold state xh as a function of the initial states and assuming that disturbances are zero. The differential equation (14.26) has the explicit solution

X(τ) = E(τ) Xi, (14.70)

where

E(τ) = e^{Axu τ}, (14.71)

and where τ is the intermittent continuous-time variable based on ti.

14.5.2.1 Constraints

The vector X (14.26) contains the system state and the state of the generalized hold; Equation 14.70 explicitly gives X in terms of the system state xi(ti) and the hold state xh(ti) = Ui at time ti. Therefore, any constraint expressed at a future time τ as a linear combination of X can be re-expressed in terms of xh and Ui. In particular, if the constraint at time τ is expressed as

Γτ X(τ) ≤ γτ, (14.72)

where Γτ is a 2n-dimensional row vector and γτ a scalar, then the constraint can be re-expressed using (14.70) in terms of the intermittent control vector Ui as

Γτ Eu(τ) Ui ≤ γτ − Γτ Ex(τ) xi, (14.73)

where E has been partitioned into the two 2n × n submatrices Ex and Eu as

E(τ) = [ Ex(τ)  Eu(τ) ]. (14.74)

If there are nc such constraints, they can be combined as

Γ Ui ≤ γ − Γx xi, (14.75)

where each row of Γ is Γτ Eu(τ), each row of Γx is Γτ Ex(τ), and each (scalar) row of γ is γτ.
Following standard MPC practice, constraints beyond the intermittent interval can be included by assuming that the control strategy will be open-loop in the future.

14.5.2.2 Optimization

Following, for example, Chen and Gawthrop (2006), a modified version of the infinite-horizon LQR cost (14.6) is used:

Jic = ∫0^τ1 [ x(τ)^T Q x(τ) + u(τ)^T R u(τ) ] dτ + x(τ1)^T P x(τ1), (14.76)

where the weighting matrices Q and R are as used in (14.6) and P is the positive-definite solution of the algebraic Riccati equation (ARE):

A^T P + PA − PBR⁻¹B^T P + Q = 0. (14.77)

There are a number of differences between our approach to minimizing Jic (14.76) and the LQR approach to minimizing JLQR (14.6):

1. Following the standard MPC approach (Maciejowski, 2002), this is a receding-horizon optimization in the time frame of τ not t.
2. The integral is over a finite time τ1.
3. A terminal cost is added based on the steady-state ARE (14.77). In the discrete-time context, this idea is due to Rawlings and Muske (1993).
4. The minimization is with respect to the intermittent control vector Ui generating the control signal u (14.7) through the generalized hold (14.21).

Using X from (14.70), (14.76) can be rewritten as

Jic = ∫0^τ1 X(τ)^T Qxu X(τ) dτ + X(τ1)^T Pxu X(τ1), (14.78)

where

Qxu = [ Q  0n×n ; 0n×n  xuo R xuo^T ], (14.79)

and

Pxu = [ P  0n×n ; 0n×n  0n×n ]. (14.80)

Using (14.70), Equation 14.78 can be rewritten as

Jic = Xi^T JXX Xi, (14.81)

where

JXX = J1 + e^{Axu^T τ1} Pxu e^{Axu τ1}, (14.82)

and

J1 = ∫0^τ1 e^{Axu^T τ} Qxu e^{Axu τ} dτ. (14.83)

The 2n × 2n matrix JXX can be partitioned into four n × n matrices as

JXX = [ Jxx  JxU ; JUx  JUU ]. (14.84)
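The matrices J1 (14.83) and JXX (14.82) can be approximated by simple quadrature. The sketch below uses tiny placeholder matrices rather than the chapter's Axu, Qxu, and Pxu, so that the result can be checked against the Axu = 0 special case:

```python
import numpy as np
from scipy.linalg import expm

def cost_matrices(Axu, Qxu, Pxu, tau1, steps=401):
    """Approximate J1 of (14.83) by trapezoidal quadrature and
    assemble JXX of (14.82)."""
    taus = np.linspace(0.0, tau1, steps)
    vals = np.array([expm(Axu.T * t) @ Qxu @ expm(Axu * t) for t in taus])
    w = np.full(steps, taus[1] - taus[0])   # trapezoidal weights
    w[0] *= 0.5
    w[-1] *= 0.5
    J1 = np.tensordot(w, vals, axes=(0, 0))
    E1 = expm(Axu * tau1)
    return J1, J1 + E1.T @ Pxu @ E1

# Sanity check with Axu = 0: the integrand is then constant, so
# J1 = Qxu * tau1 and JXX = J1 + Pxu.
Axu = np.zeros((2, 2))
Qxu = np.diag([1.0, 2.0])
Pxu = np.eye(2)
J1, JXX = cost_matrices(Axu, Qxu, Pxu, tau1=1.0)
```

In practice the integral (14.83) would be evaluated once per design, since it depends only on Axu, Qxu, and τ1, not on the state.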

Lemma 14.1: Constrained optimization

The minimization of the cost function Jic of Equation 14.76 subject to the constraints (14.75) is equivalent to the solution of the QP for the optimum value of Ui:

min_{Ui} [ Ui^T JUU Ui + xi^T JUx Ui ], (14.85)

subject to Γ Ui ≤ γ − Γx xi, where JUU and JUx are given by (14.84) and Γ, Γx and γ as described in Section 14.5.2.1.

PROOF See Chen and Gawthrop (2006).

Remarks

1. This optimization is dependent on the system state x and therefore must be accomplished at every intermittent interval Δi.
2. The computation time is reflected in the time delay Δ.
3. As discussed by Chen and Gawthrop (2006), the relation between the cost function (14.85) and the LQ cost function (14.6) means that the solution of the QP is the same as the LQ solution when constraints are not violated.

14.6 Example: Constrained Control of Mass–Spring System

Figure 14.14 shows a coupled mass–spring system. The five masses m1–m5 all have unit mass and the four springs κ1–κ4 all have unit stiffness. The mass positions are denoted by y1–y5, velocities by v1–v5, and the applied forces by F1–F5. In addition, it is assumed that the five forces Fi are generated from the five control signals ui by simple integrators thus:

Ḟi = ui, i = 1, . . . , 5. (14.86)

This system has 15 states (nx = 15), 5 inputs (nu = 5), and 5 outputs (ny = 5).
To examine the effect of constraints, consider the case where it is required that the velocity of the center mass (i = 3) is constrained above by

v3 < 0.2, (14.87)

but unconstrained below. As noted in Section 14.5.2.1, the constraints are at discrete values of intersample time τ. In this case, 50 points were chosen at τ = 0.1, 0.2, . . . , 5.0. The precise choice of these points is not critical.
In addition, the system setpoint is given by

wi(t) = { 1, i = 3 and 1 ≤ t < 10 ; 0, otherwise }. (14.88)

Figure 14.15 shows the results of simulating the coupled mass–spring system of Figure 14.14 with constrained intermittent control with constraint given by (14.87) and setpoint by (14.88). Figure 14.15a shows the position of the first mass and Figure 14.15b the corresponding velocity; Figure 14.15c shows the position of the second mass and Figure 14.15d the corresponding velocity; Figure 14.15e shows the position of the third mass and Figure 14.15f the corresponding velocity. The fourth and fifth masses are not shown. In each case, the corresponding simulation result for the underlying continuous (unconstrained) simulation is also shown.
Note that on the forward motion of mass three, the velocity (Figure 14.15f) is constrained and this is reflected in the constant slope of the corresponding position (Figure 14.15e). However, the backward motion is unconstrained and closely approximates that corresponding to the unconstrained CC. The other masses (which have a zero setpoint) deviate more from zero whilst mass three is constrained, but are similar to the unconstrained case when mass three is not constrained.
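The 15-state model described above can be assembled directly. The sketch below assumes the state ordering [v; y; F], which is an illustrative choice rather than the chapter's:

```python
import numpy as np

n = 5
K = np.zeros((n, n))          # chain stiffness matrix, unit stiffnesses
for i in range(n - 1):
    K[i, i] += 1.0
    K[i + 1, i + 1] += 1.0
    K[i, i + 1] -= 1.0
    K[i + 1, i] -= 1.0

Z = np.zeros((n, n))
I = np.eye(n)
# States x = [v; y; F] (unit masses): v' = -K y + F, y' = v, F' = u (14.86)
A = np.block([[Z, -K, I],
              [I, Z, Z],
              [Z, Z, Z]])
B = np.vstack([Z, Z, I])      # the controls enter through the force integrators
C = np.hstack([Z, I, Z])      # outputs are the five positions y
```

The velocity constraint (14.87) then picks out a single row of the state, and (14.73) maps it onto the intermittent control vector Ui at the chosen intersample times.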

FIGURE 14.14
Coupled mass–spring system. The five masses m1–m5 all have unit mass and the four springs κ1–κ4 all have unit stiffness. The mass positions are denoted by y1–y5, velocities by v1–v5, and the applied forces by F1–F5.
FIGURE 14.15
Constrained control of mass–spring system. The left-hand column shows the positions of masses 1–3 and the right-hand column the correspond-
ing velocities. The gray line corresponds to the simulation of the underlying unconstrained continuous system and the black lines correspond to
intermittent control; the • correspond to the intermittent sampling times ti . (a) y1 . (b) v1 . (c) y2 . (d) v2 . (e) y3 . (f) v3 .

14.7 Examples: Human Standing

Human control strategies in the context of quiet standing have been investigated over many years by a number of authors. Early work (Peterka, 2002; Lakie, Caplan, and Loram, 2003; Bottaro et al., 2005; Loram, Maganaris, and Lakie, 2005) was based on a single inverted pendulum, single-input model of the system. More recently, it has been shown (Pinter, van Swigchem, van Soest, and Rozendaal, 2008; Günther, Grimmer, Siebert, and Blickhan, 2009; Günther, Müller, and Blickhan, 2011, 2012) that a multiple segment, multiple input model is required to model unconstrained quiet standing and this clearly has implications for the corresponding human control system. Intermittent control has been suggested as the basic algorithm by Gawthrop et al. (2011), Gawthrop et al. (2013b), and Gawthrop, Loram, Gollee, and Lakie (2014) and related algorithms have been analyzed by Insperger (2006), Stepan and Insperger (2006), Asai et al. (2009), and Kowalczyk et al. (2012).

This section uses a linear three-segment model to illustrate key features of the constrained multivariable intermittent control described in Sections 14.3 and 14.5. Section 14.7.1 describes the three-link model, Section 14.7.2 looks at a hierarchical approach to muscle-level control, Section 14.7.3 looks at an intermittent explanation of quiet standing, and Sections 14.7.4 and 14.7.5 discuss tracking and disturbance rejection, respectively.

14.7.1 A Three-Segment Model

This section uses the linearised version of the three-link, three-joint model of posture given by Alexandrov, Frolov, Horak, Carlson-Kuhta, and Park (2005). The upper, middle, and lower links are indicated by subscripts u, m, and l, respectively. The linearized equations correspond to

Mθ̈ − Gθ = NT, (14.89)

where θ is the vector of link angles given by

θ = [ θl ; θm ; θu ], (14.90)

and T the vector of joint torques.
The mass matrix M is given by

M = [ mll mlm mlu ; mml mmm mmu ; mul mum muu ], (14.91)

where

mll = ml cl² + (mm + mu) ll² + Il, (14.92)
mmm = mm cm² + mu lm² + Im, (14.93)
muu = mu cu² + Iu, (14.94)
mml = mlm = mm cm ll + mu ll lm, (14.95)
mul = mlu = mu cu ll, (14.96)
mum = mmu = mu cu lm, (14.97)

the gravity matrix G by

G = g [ gll 0 0 ; 0 gmm 0 ; 0 0 guu ], (14.98)

where

gll = ml cl + (mm + mu) ll, (14.99)
gmm = mm cm + mu lm, (14.100)
guu = mu cu, (14.101)

and the input matrix N by

N = [ 1 −1 0 ; 0 1 −1 ; 0 0 1 ]. (14.102)

The joint angles φl, . . . , φu can be written in terms of the link angles as

φl = θl, (14.103)
φm = θm − θl, (14.104)
φu = θu − θm, (14.105)

or more compactly as

φ = N^T θ, (14.106)

where

φ = [ φl ; φm ; φu ]. (14.107)

The values for the link lengths l, CoM location c, masses m, and moments of inertia (about CoM) were taken from Figure 4.1 and Table 4.1 of Winter (2009).
The model of Equation 14.89 can be rewritten as

dx0/dt = A0 x0 + B0 T, (14.108)

x0 = [ θ̇ ; θ ], (14.109)

and

A0 = [ 03×3  −M⁻¹G ; I3×3  03×3 ], (14.110)
B0 = [ M⁻¹N ; 03×3 ]. (14.111)
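The block structure of (14.110) and (14.111) is easily checked numerically. The M and G values below are placeholders (the chapter's values come from Winter (2009)); the structural point is that, for a matrix of this block form, the eigenvalues of A0 occur in ± pairs:

```python
import numpy as np

# Placeholder symmetric, positive-definite mass matrix and diagonal gravity
# matrix; NOT the values derived from Winter (2009).
M = np.array([[3.0, 1.0, 0.3],
              [1.0, 2.0, 0.5],
              [0.3, 0.5, 1.0]])
G = np.diag([30.0, 20.0, 10.0])
N = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0],
              [0.0, 0.0, 1.0]])

Z = np.zeros((3, 3))
I = np.eye(3)
Minv = np.linalg.inv(M)
A0 = np.block([[Z, -Minv @ G],   # block structure of (14.110)
               [I, Z]])
B0 = np.vstack([Minv @ N, Z])    # (14.111)

eigs = np.linalg.eigvals(A0)     # satisfy lambda^2 in eig of the (1,2) block
```

Since any eigenvalue λ of [[0, X], [I, 0]] satisfies λ² ∈ eig(X), the spectrum is symmetric about the origin, consistent with the ±2.62, ±6.54, ±20.4 pattern quoted in the text.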

The eigenvalues of A0 are ±2.62, ±6.54, and ±20.4. The positive eigenvalues indicate that this system is (without control) unstable.
More sophisticated models would include nonlinear geometric and damping effects; but this model provides the basis for illustrating the properties of constrained intermittent control.

14.7.2 Muscle Model and Hierarchical Control

As discussed by Lakie et al. (2003) and Loram et al. (2005), the single-inverted pendulum model of balance control uses a muscle model comprising a spring and a contractile element. In this context, the effect of the spring is to counteract gravity and thus effectively slow down the toppling speed of the pendulum. This toppling speed is directly related to the maximum real part of the system eigenvalues. This is important as it reduces the control bandwidth necessary to stabilize the unstable inverted pendulum system (Stein, 2003; Loram, Gawthrop, and Lakie, 2006).

The situation is more complicated in the multiple link case as, unlike the single inverted pendulum case, the joint angles are distinct from the link angles. From Equation 14.98, the gravity matrix is diagonal in link space; on the other hand, as the muscle springs act at the joints, the corresponding stiffness matrix is diagonal in joint space and therefore cannot cancel the gravity matrix in all configurations.

The spring model used here is the multi-link extension of the model of Loram et al. (2005, Figure 1) and is given by

Tk = Kφ (φ0 − φ), (14.112)

where

Kφ = [ k1 0 0 ; 0 k2 0 ; 0 0 k3 ] and φ0 = [ φl0 ; φm0 ; φu0 ], (14.113)

Tk is the vector of spring torques at each joint, φ contains the joint angles (14.107), and k1, . . . , k3 are the spring stiffnesses at each joint. It is convenient to choose the control signal u to be

u = dφ0/dt, (14.114)

and thus Equation 14.112 can be rewritten as

dTk/dt = Kφ (u − dφ/dt) (14.115)
= Kφ (u − N^T dθ/dt). (14.116)

Setting T = Tk + Td where Td is a disturbance torque, the composite system formed from the link dynamics (14.89) and the spring dynamics (14.115) is given by Equation 14.1 where

x = [ x0 ; T ] = [ θ̇ ; θ ; T ], (14.117)
y = θ, (14.118)
d = Td, (14.119)

and

A = [ A0  B0 ; −Kφ N^T  03×6 ] = [ 03×3  −M⁻¹G  M⁻¹N ; I3×3  03×3  03×3 ; −Kφ N^T  03×3  03×3 ], (14.120)
B = [ 03×3 ; 03×3 ; Kφ ], (14.121)
Bd = [ B0 ; 03×3 ] = [ M⁻¹N ; 03×3 ; 03×3 ], (14.122)
C = [ 03×3  I3×3  03×3 ]. (14.123)

There are, of course, many other state-space representations with the same input–output properties, but this particular state-space representation has two useful features: first, the velocity control input of Equation 14.114 induces an integrator in each of the three inputs and secondly the state explicitly contains the joint torque due to the springs. The former feature simplifies control design in the presence of input disturbances with constant components and the latter feature allows spring preloading (in anticipation of a disturbance) to be modeled as a state initial condition. These features are used in the example of Section 14.7.5.

It has been argued (Hogan, 1984) that humans use muscle co-activation of antagonist muscles to manipulate the passive muscle stiffness and thus Kφ. As mentioned above, the choice of Kφ in the single-link case (Loram et al., 2005, Figure 1) directly affects the toppling speed via the maximum real part of the system eigenvalues. Hence, we argue that such muscle co-activation could be used to choose the maximum real part of the system eigenvalues and thus manipulate the required closed-loop control bandwidth. However, muscle co-activation requires the flow of energy and so it makes sense to choose the minimal stiffness consistent with the required maximum real part of the system eigenvalues. Defining:

kφ = [ k1 ; k2 ; k3 ], (14.124)

this can be expressed mathematically as

min_{kφ} ||Kφ||, (14.125)

subject to

max ℜσi < σmax, (14.126)

where σi is the ith eigenvalue of A. This is a quadratic optimization with nonlinear constraints, which can be solved by sequential quadratic programming (SQP) (Fletcher, 1987).

In the single-link case, increasing spring stiffness from zero decreases the value of the positive eigenvalue until it reaches zero; after that point, the two eigenvalues form a complex-conjugate pair with zero real part. The three-link case corresponds to three eigenvalue pairs. Figure 14.16 shows how the real and imaginary parts of these six eigenvalues vary with the constraint σmax together with the spring constants kφ. Note that the spring constants and imaginary parts rise rapidly when the maximum real eigenvalue is reduced to below about 2.3.

Joint damping can be modeled by the equation:

Tc = −Cφ dφ/dt = −Cφ N^T dθ/dt, (14.127)

where

Cφ = [ c1 0 0 ; 0 c2 0 ; 0 0 c3 ]. (14.128)

Setting T = Tk + Tc, the matrix A of Equation 14.120 is replaced by

A = [ −M⁻¹N Cφ N^T  −M⁻¹G  M⁻¹N ; I3×3  03×3  03×3 ; −Kφ N^T  03×3  03×3 ]. (14.129)

14.7.3 Quiet Standing

In the case of quiet standing, there is no setpoint tracking and no constant disturbance and thus w = 0 and xss is not computed. The spring constants were computed

FIGURE 14.16
Choosing the spring constants. (a) The real parts of the nonzero eigenvalues plotted against σmax , the specified maximum real part of all
eigenvalues resulting from (14.125) and (14.126). (b) The imaginary parts corresponding to (a). (c) The spring constants kφ .
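The stiffness selection of (14.125) and (14.126) can be illustrated with an off-the-shelf SQP routine. The toy problem below is a single-link analogue with placeholder numbers (not the three-link model of the chapter); its solution is known in closed form, k = mgl − Jσmax², so the optimizer's answer can be checked:

```python
import numpy as np
from scipy.optimize import minimize

mgl, J, sigma_max = 10.0, 1.0, 2.0   # placeholder single-link parameters

def max_real_eig(k):
    # Single-link linearized dynamics with state x = [theta_dot, theta]
    A = np.array([[0.0, (mgl - k[0]) / J],
                  [1.0, 0.0]])
    return np.max(np.linalg.eigvals(A).real)

res = minimize(
    lambda k: k[0] ** 2,                 # minimize the (squared) stiffness
    x0=[9.0],
    method="SLSQP",                      # an SQP method, cf. Fletcher (1987)
    bounds=[(0.0, None)],
    constraints=[{"type": "ineq",
                  "fun": lambda k: sigma_max - max_real_eig(k)}],
)
k_opt = res.x[0]
```

As in Figure 14.16, the constraint is active at the optimum: the stiffness is reduced until the maximum real eigenvalue reaches σmax.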

as in Section 14.7.2 with σmax = 3. The corresponding nonzero eigenvalues of A are ±3, ±2.38, and ±j17.9.
The IC of Section 14.3, based on the continuous-time controller of Section 14.2.3, was simulated using the following control design parameters:

Qc = [ qv I3×3  03×3  03×3 ; 03×3  qp I3×3  03×3 ; 03×3  03×3  qT I3×3 ], (14.130)

where

qv = 0, qp = 1, qT = 10, (14.131)
Rc = Kφ². (14.132)

The corresponding closed-loop poles are −3.45 ± j18.3, −3.95 ± j3.82, −2.87 ± j0.590, −5.31, −3.28, and −2.48. The intermittent control parameters (Section 14.3), time delay Δ and minimum intermittent interval Δmin, were chosen as

Δ = 0.1 s, (14.133)
Δmin = 0.25 s. (14.134)

These parameters are used in all of the following simulations.
A multisine disturbance with standard deviation 0.01 Nm was added to the control signal at the lower (ankle) joint. With reference to Equation 14.35, the threshold was set on the three segment angles so that the threshold surface (in the 9D state space) was defined as

θ^T θ = x^T Qt x = qt², (14.135)

where

Qt = [ 03×3  03×3  03×3 ; 03×3  I3×3  03×3 ; 03×3  03×3  03×3 ]. (14.136)

Three simulations of both IC and CC were performed with event threshold qt = 0◦, qt = 0.1◦, and qt = 1◦ and the resultant link angles θ are plotted against time in Figure 14.17; the black lines show the IC simulations and the gray lines the CC simulations. The three-segment model together with the spring model has nine states. Figure 14.18 shows three cross sections through this space (by plotting segment angular velocity against segment angle) for the three thresholds.
As expected, the small threshold gives smaller displacements from vertical; but the disturbance is more apparent. The large threshold gives largely self-driven behavior. This behavior is discussed in more detail by Gawthrop et al. (2014).

14.7.4 Tracking

As discussed in Section 14.2.4, the equilibrium state xss has to be designed for tracking purposes. As there are three inputs, it is possible to satisfy up to three steady-state conditions. Three possible steady-state conditions are

1. The upper link should follow a setpoint:
θu = wu. (14.137)
2. The component of ankle torque due to gravity should be zero:
T1 = [ 1 0 0 ] N⁻¹ G θ = 0. (14.138)
3. The knee angle should follow a set point:
φm = θm − θl = wm. (14.139)

These conditions correspond to

Css = [ 0 0 0 0 0 1 0 0 0 ; 0 0 0 27.14627 22.51155 23.74238 0 0 0 ; 0 0 0 −1 1 0 0 0 0 ], (14.140)
yss = I3×3, (14.141)
w(t) = [ wu(t) ; 0 ; wm(t) ]. (14.142)

This choice is examined in Figures 14.19 and 14.20 by choosing the knee angle φm = θm − θl. Figure 14.20 shows how the link and joint angles, and the corresponding torques, vary with φm. Figure 14.19 shows a picture of the three links for three values of φm. In each case, note that the upper link and the corresponding hip torque remain constant due to the first condition and that each configuration appears balanced due to condition 2.
The simulations shown in Figure 14.21 show the tracking of a setpoint w(t) (14.142) using the three conditions for determining the steady state. In this example, the individual setpoint components of Equation 14.142 are

wu(t) = { 10◦, 0 < t ≤ 10 ; 0◦, 10 < t ≤ 15 ; 10◦, 15 < t ≤ 15.1 ; 0◦, 15.1 < t ≤ 20 ; 10◦, 20 < t ≤ 20.25 ; 0◦, t > 20.25 }, (14.143)

wm(t) = { 0◦, 0 < t ≤ 5 ; −20◦, t > 5 }. (14.144)
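The event surface of (14.135) and (14.136) amounts to triggering whenever the Euclidean norm of the three segment angles reaches qt. A minimal sketch, assuming the state ordering [θ̇; θ; T] of (14.117):

```python
import numpy as np

Z = np.zeros((3, 3))
I = np.eye(3)
# Qt of (14.136): only the segment angles (the middle block) are thresholded
Qt = np.block([[Z, Z, Z],
               [Z, I, Z],
               [Z, Z, Z]])

def event(x, qt):
    """True when the state reaches the threshold surface x' Qt x = qt^2."""
    return float(x @ Qt @ x) >= qt ** 2

x = np.zeros(9)
x[3:6] = [0.05, 0.05, 0.05]      # segment angles theta (deg)
small = event(x, qt=0.1)         # theta' theta = 0.0075 < 0.01: no event
x[3:6] = [0.2, 0.0, 0.0]
large = event(x, qt=0.1)         # theta' theta = 0.04 > 0.01: event
```

Within the intermittent controller, such a test would be evaluated continuously between samples, with qt = 0 reducing to clock-driven sampling.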
FIGURE 14.17
Quiet standing with torque disturbance at the lower joint. Each row shows the lower (θl ), middle (θm ), and upper (θu ) link angles (deg) plotted
against time t (s) for a different thresholds q t . Larger thresholds give larger and more regular sway angles. (a) q t = 0◦ : θ◦l . (b) q t = 0◦ : θ◦m .
(c) q t = 0◦ : θ◦u . (d) q t = 0.1◦ : θ◦l . (e) q t = 0.1◦ : θ◦m . (f) q t = 0.1◦ : θ◦u . (g) q t = 1◦ : θ◦l . (h) q t = 1◦ : θ◦m . (i) q t = 1◦ : θ◦u .

As a further example, only the first two conditions for determining the steady state are used; the knee is not included. These conditions correspond to:

Css = [ 0 0 0 0 0 1 0 0 0 ; 0 0 0 27.14627 22.51155 23.74238 0 0 0 ], (14.145)
yss = I2×2, (14.146)
w(t) = [ wu(t) ; 0 ]. (14.147)

The under-determined Equation 14.13 is solved using the pseudo inverse. The simulations shown in Figure 14.22 show the tracking of a setpoint w(t) (14.142) using the first two conditions for determining the steady state. In this example, the individual setpoint component wu is given by (14.143). Comparing Figure 14.22b, e, and h with Figure 14.21b, e, and h, it can be seen that the knee angle is no longer explicitly controlled.

14.7.5 Disturbance Rejection

Detailed modeling of a human lifting and holding a heavy pole would require complicated dynamical equations. This section looks at a simple approximation to the case where a heavy pole of mass mp is held at a
FIGURE 14.18
Quiet standing: phase-plane. Each plot corresponds to that of Figure 14.17 but the angular velocity is plotted against angle. Again, the increase
in sway angle and angular velocity with threshold is evident. (a) q t = 0◦ : θ◦l . (b) q t = 0◦ : θ◦m . (c) q t = 0◦ : θ◦u . (d) q t = 0.1◦ : θ◦l . (e) q t = 0.1◦ : θ◦m .
(f) q t = 0.1◦ : θ◦u . (g) q t = 1◦ : θ◦l . (h) q t = 1◦ : θ◦m . (i) q t = 1◦ : θ◦u .

fixed distance lp to the body. In particular, the effect is modeled by

1. Adding a torque disturbance Td to the upper link where
Td = g mp lp. (14.148)
2. Adding a mass mp to the upper link.

In terms of the system Equation 14.1 and the three-link model of Equation 14.89, the disturbance d is given by

d = N⁻¹ [ 0 ; 0 ; Td ] (14.149)
= [ 1 ; 1 ; 1 ] Td. (14.150)

As discussed in Section 14.7.2, it is possible to preload the joint spring to give an initial torque. In this context, this is done by initializing the system state x of Equation 14.117 as

x(0) = [ θ̇(0) ; θ(0) ; T(0) ] = [ 03×1 ; 03×1 ; κd ], (14.151)

where κ will be referred to as the spring preload and will be expressed as a percentage: thus, κ = 0.8 will be referred to as 80% preload.
There are many postures appropriate to this situation, two of which are as follows:

Upright: All joint angles are zero and the pole is balanced by appropriate joint torques (Figure 14.24a and b).
FIGURE 14.19
Equilibria: Link configuration. (a–c) In each configuration, the upper link is set at θu = 0◦ and the posture is balanced (no ankle torque); the knee
angle is set to three possible values. (d), (e), (f) as (a), (b), (c) but θu = 10◦ .
2. Balanced: All joint torques are zero and the pole is balanced by appropriate joint (and thus link) angles (Figure 14.24c and f).

In terms of Equation 14.12, the upright posture is specified by choosing:

$$C_{ss} = \begin{bmatrix} 0_{3\times3} & I_{3\times3} & 0_{3\times3} \end{bmatrix}, \qquad (14.152)$$

$$y_{ss} = 0_{3\times1}, \qquad (14.153)$$

and the balanced posture is specified by choosing:

$$C_{ss} = \begin{bmatrix} 0_{3\times3} & 0_{3\times3} & I_{3\times3} \end{bmatrix}, \qquad (14.154)$$

$$y_{ss} = 0_{3\times1}. \qquad (14.155)$$

A combination of both can be specified by choosing:

$$C_{ss} = \begin{bmatrix} 0_{3\times3} & I_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_{3\times3} \end{bmatrix}, \qquad (14.156)$$

$$y_{ss} = 0_{6\times1}, \qquad (14.157)$$

$$Q_{ss} = \begin{bmatrix} (1-\lambda)\, I_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & \dfrac{\lambda}{T_d}\, I_{3\times3} \end{bmatrix}. \qquad (14.158)$$

The parameter 0 ≤ λ ≤ 1 weights the two postures and division by Td renders the equations dimensionless.

When Css is given by (14.156), nss = 6. As nu = 3, nss > nu and so, as discussed in Section 14.2.4, the set of Equations 14.13 is overdetermined and the approach of Section 14.5 is used. Two situations are examined: unconstrained and constrained with hip angle and knee
FIGURE 14.20
Equilibria: angles and torques. (a) The equilibrium link angles θ are plotted against the fixed knee angle φm with balanced posture and the
upper link at an angle of θu = 10◦ . (b) and (c) as (a) but with joint angles φ and joint torques T, respectively. Note that the ankle torque Tl is zero
(balanced posture) and the waist torque Tu balances the fixed θu = 10◦ .
angle subject to the inequality constraints:

$$\phi_u > -0.1°, \qquad (14.159)$$

$$\phi_m < 0. \qquad (14.160)$$

In each case, the equality constraint (14.66) is imposed. Figure 14.23 shows how the equilibrium joint angle φ and torque T vary with λ for the two cases. As illustrated in Figure 14.24, the two extreme cases λ = 0 and λ = 1 correspond to the upright and balanced postures; other values give intermediate postures.

Figures 14.25 and 14.27 show simulation results for the two extreme cases of λ for the unconstrained case with mp = 5 kg and lp = 0.5 m. In each case, the initial link angles are all zero (θl = θm = θu = 0) and the disturbance torque Td = g mp lp is applied at t = 0. Apart from the equilibrium vector xss, the control parameters are the same in each case. In the case of Figure 14.25, the steady-state torques are Tl = Tm = Tu = −Td to balance Td = g mp lp; in the case of Figure 14.27, the links balance the applied torque by setting θl = θm = 0 and θu = −Δ/(g cu mu).

14.8 Intermittency Induces Variability

Variability is an important characteristic of human motor control: when repeatedly exposed to identical excitation, the response of the human operator is different for each repetition. This is illustrated in Figure 14.29: Figure 14.29a shows a periodic input disturbance (periodicity 10 s), while Figure 14.29c and e shows the corresponding output signal of a human controller for different control aims. It is clear that in both cases, the control signal is different for each 10 s period of identical disturbance.

In the frequency domain, variability is represented by the observation that, when the system is excited at a range of discrete frequencies (as shown in Figure 14.29b, the disturbance signal contains frequency components at 0.1, 0.2, . . . , 10 Hz), the output response contains information at both the excited and the nonexcited frequencies (Figure 14.29d and f). The response at the nonexcited frequencies (at which the excitation signal is zero) is termed the remnant.
FIGURE 14.21
Tracking: controlled knee joint. The equilibrium design (Section 14.2.4) sets the upper link angle θu to 10° for t < 10 and to 0° for 10 ≤ t < 15 and sets the gravity torque at the ankle joint to zero; it also sets the knee angle φm to zero for t < 5 s and to −20° for t ≥ 5 s. At time t = 15 s, a pulse of width 0.1 s is applied to the upper link angle setpoint and a pulse of width 0.25 s is applied at time t = 20 s. Note that the intermittent control response is similar in each case: this refractory behavior is due to event-driven control with a minimum intermittent interval Δmin = 0.25 (14.134). (a) qt = 0°: θu. (b) qt = 0°: φm. (c) qt = 0°: gl. (d) qt = 0.1°: θu. (e) qt = 0.1°: φm. (f) qt = 0.1°: gl. (g) qt = 1°: θu. (h) qt = 1°: φm. (i) qt = 1°: gl.
Variability is usually explained by appropriately constructed motor and observation noise, which is added to a linear continuous-time model of the human controller (signals vu and vy in Figure 14.1; Levison, Baron, and Kleinman, 1969; Kleinman et al., 1970). While this is currently the prominent model in human control, its physiological basis is not fully established. This has led to the idea that the remnant signal might be based on structure rather than randomness (Newell, Deutch, Sosnoff, and Mayer-Kress, 2006).

Intermittent control includes a sampling process, which is generally based on thresholds associated with a trigger (see Figure 14.2). This nonuniform sampling process leads to a time-varying response of the controller. It has been suggested that the remnant can be explained by event-driven intermittent control without the need for added noise (Mamma, Gollee, Gawthrop, and Loram, 2011; Gawthrop, Gollee, Mamma, Loram, and Lakie, 2013a), and that this sampling process introduces variability (Gawthrop et al., 2013b).
FIGURE 14.22
Tracking: free knee joint. This figure corresponds to Figure 14.21 except that the knee joint angle φm is not constrained. (a) qt = 0°: θu. (b) qt = 0°: φm. (c) qt = 0°: gl. (d) qt = 0.1°: θu. (e) qt = 0.1°: φm. (f) qt = 0.1°: gl. (g) qt = 1°: θu. (h) qt = 1°: φm. (i) qt = 1°: gl.
In this section, we will discuss how intermittency can provide an explanation for variability which is based on the controller structure and does not require a random process. Experimental data from a visual-manual control task will be used as an illustrative example.

14.8.1 Experimental Setup

In this section, experimental data from a visual-manual control task are used in which the participants were asked to use a sensitive, contactless, uniaxial joystick to sustain control of an unstable second-order system whose output was displayed as a dot on an oscilloscope (Loram et al., 2011). The controlled system represented an inverted pendulum with a dynamic response similar to that of a human standing [Load 2 of Table 1 in Loram, Lakie, and Gawthrop (2009)]:

$$\begin{cases}
\dfrac{dx}{dt}(t) = \begin{bmatrix} -0.0372 & 1.231 \\ 1 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 6.977 \\ 0 \end{bmatrix}\left(u(t) - d(t)\right) \\[6pt]
y(t) = \begin{bmatrix} 0 & 1 \end{bmatrix} x(t) \\[6pt]
y_o(t) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} x(t)
\end{cases} \qquad (14.161)$$
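The instability of this load can be checked directly from the state matrix in Equation 14.161. The minimal sketch below (Python is used here purely for illustration; the original analyses were carried out in MATLAB) computes the two poles from the characteristic polynomial of the companion-form matrix:

```python
import math

# Poles of the controlled load (Equation 14.161). For the companion-form
# state matrix A = [[a11, a12], [1, 0]] the characteristic polynomial is
# s^2 - a11*s - a12 = 0, solved here in closed form.
a11, a12 = -0.0372, 1.231

disc = math.sqrt(a11 ** 2 + 4.0 * a12)
poles = ((a11 + disc) / 2.0, (a11 - disc) / 2.0)

# One pole lies in the right half-plane, so the load is unstable
# (pendulum-like) and must be actively stabilized by the operator.
assert max(poles) > 0.0
```

The unstable pole at roughly +1.1 rad/s sets the time scale on which the human controller must intervene to keep the dot on screen.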
FIGURE 14.23
Equilibria. For a constant disturbance torque Td acting on the upper link, the plots show how link angle and joint torque vary with the
weighting factor λ. λ = 0 gives an upright posture (zero link and joint angles—Figure 14.24a) and λ = 1 gives a balanced posture (zero joint
torques—Figure 14.24e). λ = 0.5 gives an intermediate posture (Figure 14.24c). The left column is the unconstrained case, the right column is
the constrained case where hip angle φu > −0.1◦ and knee angle φm < 0; the former constraint becomes active as λ increases and the knee and
hip joint torques are no longer zero at λ = 1. (a) Joint angle φ (deg)—unconstrained. (b) Joint angle φ (deg)—constrained. (c) Joint torque T
(Nm)—unconstrained. (d) Joint torque T (Nm)—constrained.
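The λ-weighting between upright and balanced postures described in the caption above can be written down concretely. The NumPy sketch below reconstructs Equations 14.156 through 14.158; the placement of the λ/Td block in Qss follows my reading of the garbled equation, and the state ordering [θ̇, θ, T] is taken from Equation 14.151:

```python
import numpy as np

def posture_specification(lam, Td):
    """Combined steady-state specification (Equations 14.156-14.158):
    the first three outputs select the link angles (upright posture),
    the last three select the joint torques (balanced posture), with
    weights (1 - lambda) and lambda/Td respectively."""
    I3, Z3 = np.eye(3), np.zeros((3, 3))
    Css = np.block([[Z3, I3, Z3],
                    [Z3, Z3, I3]])                    # (14.156)
    yss = np.zeros(6)                                  # (14.157)
    Qss = np.block([[(1.0 - lam) * I3, Z3],
                    [Z3, (lam / Td) * I3]])            # (14.158)
    return Css, yss, Qss

# lam = 0 weights only the angle outputs (upright posture);
# lam = 1 weights only the torque outputs (balanced posture).
Css, yss, Qss = posture_specification(lam=0.5, Td=24.5)
```

Intermediate values of λ, as in Figure 14.23, trade off the angle and torque objectives continuously.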
The external disturbance signal, d(t), applied to the load input, was a multi-sine consisting of Nf = 100 discrete frequencies ωk, with resolution ω0 = 2πf0, f0 = 0.1 Hz (Pintelon and Schoukens, 2001):

$$d(t) = \sum_{k=1}^{N_f} a_k \cos(\omega_k t + \phi_k) \quad \text{with} \quad \omega_k = 2\pi k f_0. \qquad (14.162)$$

The signal d(t) is periodic with T0 = 1/f0 = 10 s. To obtain an unpredictable excitation, the phases φk are random values taken from a uniform distribution on the open interval (0, 2π), while ak = 1 for all k to ensure that all frequencies are equally excited.

We considered two control priorities using the instructions "keep the dot as close to the center as possible" ("cc," prioritizing position), and "while keeping the dot on screen, wait as long as possible before intervening" ("mi," minimizing intervention).
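Equation 14.162 can be realized in a few lines. The sketch below generates the disturbance and checks its 10 s periodicity; the fixed random seed is an assumption added for reproducibility and is not part of the original experiment:

```python
import math
import random

NF, F0 = 100, 0.1                  # N_f = 100 components, f0 = 0.1 Hz
rng = random.Random(42)            # fixed seed: an assumption, for reproducibility
phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(NF)]  # phi_k ~ U(0, 2*pi)
a = [1.0] * NF                     # a_k = 1: all frequencies equally excited

def d(t):
    """Multi-sine disturbance of Equation 14.162."""
    return sum(a[k - 1] * math.cos(2.0 * math.pi * k * F0 * t + phases[k - 1])
               for k in range(1, NF + 1))

# d(t) is periodic with T0 = 1/f0 = 10 s and contains power only at the
# excited frequencies 0.1, 0.2, ..., 10 Hz.
assert abs(d(2.34) - d(2.34 + 10.0)) < 1e-6
```

Because the phases are random while the amplitudes are flat, the signal is unpredictable to the participant yet excites every frequency bin equally.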
FIGURE 14.24
Equilibria: link configurations. (a), (c), (e) unconstrained; (b), (d), (f) constrained where hip angle φu > −0.1◦ and knee angle φm < 0. (a) Upright
(λ = 0)—unconstrained. (b) Upright (λ = 0)—constrained. (c) Intermediate (λ = 0.5)—unconstrained. (d) Intermediate (λ = 0.5)—constrained.
(e) Balanced (λ = 1)—unconstrained. (f) Balanced (λ = 1)—constrained.
FIGURE 14.25
Pole-lifting simulation—constrained: upright posture (λ = 0). The spring preload κ (14.151) is 80%. Note that the steady-state link angles are
zero and the steady-state torques are all − Td to balance Td = gm p l p = 24.5 Nm (14.148). The dots correspond to the sample times ti . The intervals
Δi (14.16) are irregular and greater than the minimum Δmin . (a) Lower joint: angle θl (deg). (b) Lower joint: torque Tl (Nm). (c) Middle joint:
angle θm (deg). (d) Middle joint: torque Tm (Nm). (e) Upper joint: angle θu (deg). (f) Upper joint: torque Tu (Nm).
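The torque balance quoted in the caption above can be verified numerically. This is a minimal sketch: g = 9.81 m/s² is an assumed value, while mp and lp are the values used for these simulations:

```python
# Check of the torque balance stated in the caption of Figure 14.25:
# the disturbance Td = g*m_p*l_p (Equation 14.148) must be cancelled by
# steady-state joint torques of -Td at every joint.
g = 9.81          # gravitational acceleration (m/s^2), assumed value
m_p = 5.0         # pole mass (kg), from the simulation description
l_p = 0.5         # pole distance (m), from the simulation description

Td = g * m_p * l_p                  # ~24.5 Nm, as quoted in the caption
steady_state_torques = [-Td] * 3    # T_l = T_m = T_u = -Td

net = [t + Td for t in steady_state_torques]   # residual torque per joint
assert all(abs(x) < 1e-9 for x in net)
```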
14.8.2 Identification of the Linear Time-Invariant (LTI) Response

Using previously established methods discussed in Section 14.9 and by Gollee, Mamma, Loram, and Gawthrop (2012), the design parameters (i.e., LQ design weightings and mean time-delay, Δ) for an optimal, continuous-time linear predictive controller (PC) (Figure 14.1) are identified by fitting the complex frequency response function (FRF) relating d to ue at the excited frequencies. The linear fit to the experimental data is shown in Figure 14.30a and b for the two different experimental instructions ("cc" and "mi"). Note that
FIGURE 14.26
Pole-lifting simulation—unconstrained steady state: balanced posture (λ = 1). The spring preload κ (14.151) is 80%. The steady-state joint
torques are zero. (a) Lower joint: angle θl (deg). (b) Lower joint: torque Tl (Nm). (c) Middle joint: angle θm (deg). (d) Middle joint: torque
Tm (Nm). (e) Upper joint: angle θu (deg). (f) Upper joint: torque Tu (Nm).
the PC only fits the excited frequency components; its response at the nonexcited frequencies (bottom plots) is zero.

14.8.3 Identification of the Remnant Response

The controller design parameters (i.e., the LQ design weightings) obtained when fitting the LTI response are used as the basis to model the response at the nonexcited (remnant) frequencies. First, the standard approach of adding noise to a continuous PC is demonstrated. Following this, it is shown that event-driven IC can approximate the experimental remnant response by adjusting the threshold parameters associated with the event trigger.
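The event trigger referred to here is the ellipsoidal detection surface defined later in this section (Equation 14.163). A minimal sketch of the triggering rule, using the threshold values identified later in the text:

```python
def event_triggered(e_pos, e_vel, theta_p, theta_v):
    """Ellipsoidal event-detection surface (Equation 14.163): an event is
    triggered, and a new open-loop control trajectory generated, when the
    position/velocity prediction errors leave the ellipse."""
    return (e_pos / theta_p) ** 2 + (e_vel / theta_v) ** 2 > 1.0

# Thresholds identified for the two instructions: position control ("cc")
# uses a tight position threshold; minimal intervention ("mi") does not.
cc = dict(theta_p=0.2, theta_v=2.0)
mi = dict(theta_p=0.8, theta_v=2.1)

# The same prediction error can trigger an event under "cc" but not "mi":
assert event_triggered(0.3, 0.0, **cc)
assert not event_triggered(0.3, 0.0, **mi)
```

Larger thresholds mean the error must wander further before a new trajectory is launched, which lengthens the intermittent intervals.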
FIGURE 14.27
Pole-lifting simulation—constrained steady state: balanced posture (λ = 1). The spring preload κ (14.151) is 80%. Due to the constraints, the
steady-state joint torques Tu and Tm are not zero, but the upper (hip) joint is now constrained. (a) Lower joint: angle θl (deg). (b) Lower joint:
torque Tl (Nm). (c) Middle joint: angle θm (deg). (d) Middle joint: torque Tm (Nm). (e) Upper joint: angle θu (deg). (f) Upper joint: torque Tu (Nm).
14.8.3.1 Variability by Adding Noise

For the PC, noise can be injected either as observation noise, vy, or as noise added to the input, vu. The noise spectrum is obtained by considering the measured response ue at nonexcited frequencies and, using the corresponding loop transfer function (see Section 14.9.1.1), calculating the noise input (vu or vy) required to generate this. The calculated noise signal is then interpolated at the excited frequencies.

Results for added input noise (vu) are shown in Figure 14.31. As expected, the fit at the nonexcited frequencies is nearly perfect (Figure 14.31a and b, bottom panels). Notably, the added input noise also improves the fit at the excited frequencies (Figure 14.31a and b, top panels).
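The separation of measured power into excited and nonexcited (remnant) components used throughout this analysis can be sketched as follows. This is a minimal NumPy illustration of the bin bookkeeping only, not the full experimental pipeline; it assumes the record length is an integer number of excitation periods:

```python
import numpy as np

def remnant_split(u, fs, f0=0.1, Nf=100):
    """Split the spectrum of u into power at the excited frequencies
    (multiples of f0 up to Nf*f0) and power at all other (remnant) bins."""
    n = len(u)
    U = np.fft.rfft(u) / n                          # normalized spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    df = freqs[1]                                   # frequency resolution
    excited = np.zeros(len(freqs), dtype=bool)
    for k in range(1, Nf + 1):
        idx = int(round(k * f0 / df))
        if idx < len(freqs):
            excited[idx] = True
    P = np.abs(U) ** 2
    return P[excited].sum(), P[~excited].sum()

# A signal containing only an excited frequency has (numerically) zero remnant:
fs, T = 100.0, 20.0                                 # two 10 s periods
t = np.arange(0.0, T, 1.0 / fs)
u = np.cos(2.0 * np.pi * 0.3 * t)                   # 0.3 Hz is an excited bin
pe, pr = remnant_split(u, fs)
assert pe > 100.0 * pr
```

A human operator's response, by contrast, shows substantial power in the nonexcited bins, and it is exactly this remnant that the added-noise and intermittent-control models try to reproduce.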
FIGURE 14.28
Pole-lifting simulation—constrained steady state but with 0% preload. This is the same as Figure 14.27 except that the spring preload is 0%. (a)
Lower joint: angle θl (deg). (b) Lower joint: torque Tl (Nm). (c) Middle joint: angle θm (deg). (d) Middle joint: torque Tm (Nm). (e) Upper joint:
angle θu (deg). (f) Upper joint: torque Tu (Nm).
The spectra of the input noise vu are shown in Figure 14.31c and d. It can be observed that the noise spectra are dependent on the instructions given ("cc" or "mi"), with no obvious physiological basis to explain this difference.

14.8.3.2 Variability by Intermittency

As an alternative explanation, a noise-free event-driven IC is considered (cf. Figure 14.2). The same design parameters as for the PC are used, with the time-delay
FIGURE 14.29
Experimental data showing variability during human motor control. Plots on the left side show time-domain signals, plots on the right side depict the corresponding frequency-domain signals. "cc"—keep close to center (position control), "mi"—minimal intervention (minimize control). (a) Disturbance signal. (b) Disturbance signal. (c) Control signal (cc). (d) Control signal (cc). (e) Control signal (mi). (f) Control signal (mi).
set to a minimal value of Δ = 0.1 s and a corresponding minimal intermittent interval, Δol_min = 0.1 s.

Variations in the loop delay are now the result of the event thresholds, cf. Equation 14.35. In particular, we consider the first two elements of the state prediction error ehp, corresponding to the velocity (e^v_hp) and position (e^p_hp) states, and define an ellipsoidal event detection surface given by

$$\left(\frac{e^p_{hp}}{\theta_p}\right)^2 + \left(\frac{e^v_{hp}}{\theta_v}\right)^2 > 1, \qquad (14.163)$$

where θp and θv are the thresholds associated with the corresponding states.

To find the threshold values that resulted in the simulation which best approximates the experimental remnant, both thresholds were varied between 0 (corresponding to clock-driven IC) and 3, and the threshold combination that resulted in the best least-squares fit at all frequencies (excited and nonexcited) was selected as the optimum. The resulting fit is shown in Figure 14.32a and b. For both instructions, the event-driven IC can both explain the remnant signal and improve the fit at excited frequencies.

The corresponding thresholds (for "cc": θv = 2.0, θp = 0.2; for "mi": θv = 2.1, θp = 0.8) reflect the control priorities for each instruction: for "cc" position control should be prioritized, resulting in a small value for the position threshold, while the velocity is relatively unimportant. For "mi," the control intervention should be minimal, which is associated with large thresholds on both velocity and position.

Figure 14.32c and d shows the distributions of the open-loop intervals for each condition, together with an approximation by a series of weighted Gaussian distributions (McLachlan and Peel, 2000). For
FIGURE 14.30
Example individual result for identification of LTI response for two experimental instructions. The continuous predictive controller can fit the excited frequencies [top graphs in (a) and (b)], but cannot explain the experimental response at nonexcited frequencies [bottom graphs in (a) and (b)]. (a) Control signal (instruction "cc"). (b) Control signal (instruction "mi").
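The period-averaged FRF estimate underlying fits like the one in this figure is described in detail in Section 14.9.2.2 (Equations 14.184 and 14.185). A minimal NumPy sketch, verified here on an idealized noise-free linear record:

```python
import numpy as np

def estimate_frf(d, u, Np, Nf):
    """Nonparametric FRF estimate (Equations 14.184-14.185).
    d, u: time-domain records covering Np whole periods of the excitation;
    Nf: number of excited frequency components. Each period is transformed
    separately and the per-period ratios u/d at the excited bins averaged."""
    n = len(d) // Np                            # samples per period
    T = np.zeros(Nf, dtype=complex)
    for l in range(Np):
        D = np.fft.rfft(d[l * n:(l + 1) * n])
        U = np.fft.rfft(u[l * n:(l + 1) * n])
        T += U[1:Nf + 1] / D[1:Nf + 1]          # bins k = 1..Nf (14.184)
    return T / Np                               # average over periods (14.185)

# For a noise-free linear gain u = 2*d, the estimate recovers T(jw) = 2:
rng = np.random.default_rng(0)
d_sig = np.tile(rng.standard_normal(100), 3)    # 3 identical periods
u_sig = 2.0 * d_sig
That = estimate_frf(d_sig, u_sig, Np=3, Nf=10)
assert np.allclose(That, 2.0)
```

Averaging the per-period ratios suppresses any component of u that is not phase-locked to the periodic disturbance, which is why the method is robust to the remnant.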
FIGURE 14.31
Variability as a result of colored input noise added to a predictive continuous controller. The top graphs show the resulting fit to experimental
data at excited and nonexcited frequencies. The bottom graphs show the input noise added. (a) Control signal (instruction “cc”). (b) Control
signal (instruction “mi”). (c) Added input noise vu (instruction “cc”). (d) Added input noise vu (instruction “mi”).
position control ("cc"), open-loop intervals are clustered around a modal interval of approximately 1 s, with all Δol > 0.5 s. For the minimal intervention condition ("mi"), the open-loop intervals are clustered around a modal interval of approximately 2 s, and all Δol > 1 s.

This corresponds to the expected behavior of the human operator, where more frequent updates of the intermittent control trajectory are associated with the more demanding position control instruction, while the instruction to minimize intervention results in longer intermittent intervals. Thus, the identified thresholds not only result in IC models which approximate the response at excited and nonexcited frequencies, but also reflect the underlying control aims.

14.8.4 Conclusion

The hypothesis that variability is the result of a continuous control process with added noise (PC with added noise) requires that the remnant is explained by a nonparametric input noise component. In comparison, IC introduces variability as a result of a small number of threshold parameters that are clearly related to underlying control aims.

14.9 Identification of Intermittent Control: The Underlying Continuous System

This section, together with Section 14.10, addresses the question of how intermittency can be identified when observing closed-loop control. In this section, it is discussed how intermittent control can masquerade as continuous control, and how the underlying continuous system can be identified. Section 14.10 addresses the question of how intermittency can be detected in experimental data.

System identification provides one approach to hypothesis testing and has been used by Johansson, Magnusson, and Akesson (1988) and Peterka (2002) to test the nonpredictive hypothesis and by Gawthrop et al. (2009) to test the nonpredictive and predictive hypotheses. Given time-domain data from a sustained
FIGURE 14.32
Variability resulting from event-driven IC. The top graphs show the resulting fit to experimental data at excited and nonexcited frequencies. The bottom graphs show the distribution of intermittent intervals, together with the optimal threshold values. (a) Control signal (instruction "cc"). (b) Control signal (instruction "mi"). (c) Δol (instruction "cc"). (d) Δol (instruction "mi").
control task which is excited by an external disturbance signal, a two-stage approach to controller estimation can be used in order to perform the parameter estimation in the frequency domain: firstly, the FRF is estimated from measured data, and secondly, a parametric model is fitted to the frequency response using nonlinear optimization (Pintelon and Schoukens, 2001; Pintelon, Schoukens, and Rolain, 2008). This approach has two advantages: firstly, computationally expensive analysis of long time-domain data sets can be reduced by estimation in the frequency domain, and secondly, advantageous properties of a periodic input signal (as advocated by Pintelon et al. (2008)) can be exploited.

In this section, first the derivation of the underlying frequency responses for a predictive continuous-time controller and for the intermittent, clock-driven controller (i.e., Δol = const) is discussed. The method is limited to clock-driven IC since frequency analysis tools are readily available only for this case (Gawthrop, 2009). The two-stage identification procedure is then outlined, followed by example results from a visual-manual control task.
The material in this section is partially based on Gollee et al. (2012).

14.9.1 Closed-Loop Frequency Response

As a prerequisite for system identification in the frequency domain, this section looks at the frequency response of the closed-loop system corresponding to the underlying predictive continuous design method as well as that of the IC with a fixed intermittent interval (Gawthrop, 2009).

14.9.1.1 Predictive Continuous Control

The system equations (14.1) can be rewritten in transfer function form as

$$y_o(s) = G(s)u(s) \quad \text{with} \quad G(s) = C[sI - A]^{-1}B, \qquad (14.164)$$

where I is the n × n unit matrix and s denotes the complex Laplace operator.

Transforming Equations 14.2 and 14.5 into the Laplace domain, assuming that disturbances vu, vy, and w are zero:

$$x_o(s) = (sI - A_o)^{-1}\left(Bu(s) + Ly_o(s)\right) \quad \text{(Observer)}, \qquad (14.165)$$

$$x_p(s) = e^{A\Delta}\hat{x}(s) + (sI - A)^{-1}\left(I - e^{-(sI - A)\Delta}\right)Bu(s) \quad \text{(Predictor)}, \qquad (14.166)$$

$$u(s) = -k\, e^{-s\Delta}\, x_p(s) \quad \text{(Controller)}, \qquad (14.167)$$

where I is the n × n unit matrix.

Equations 14.165 through 14.167 can be rewritten as

$$u(s) = -e^{-s\Delta}\left[H_y(s)y(s) + \left(H_1(s) + H_2(s)\right)u(s)\right], \qquad (14.168)$$

where

$$H_y(s) = k\, e^{A\Delta}(sI - A_o)^{-1}L, \qquad (14.169)$$

$$H_1(s) = k\, e^{A\Delta}(sI - A_o)^{-1}B, \qquad (14.170)$$

and

$$H_2(s) = k\,(sI - A)^{-1}\left(I - e^{-(sI - A)\Delta}\right)B\, e^{s\Delta}. \qquad (14.171)$$

It follows that the controller transfer function H(s) is given by

$$H(s) = \frac{H_y(s)}{1 + H_1(s) + H_2(s)}, \qquad (14.172)$$

where

$$\frac{u(s)}{y_o(s)} = -e^{-s\Delta}H(s). \qquad (14.173)$$

With Equations 14.164 and 14.173, the system loop-gain L(s) and closed-loop transfer function T are given by

$$L(s) = e^{-s\Delta}G(s)H(s), \qquad (14.174)$$

$$T(s) = \frac{u(s)}{d(s)} = \frac{L(s)}{1 + L(s)}. \qquad (14.175)$$

Equation 14.175 gives a parameterized expression relating u(s) and d(s).

14.9.1.2 Intermittent Control

The sampling operation in Figure 14.2 makes it harder to derive a (continuous-time) frequency response and so the details are omitted here. For the case where the intermittent interval is assumed to be constant, the basic results derived by Gawthrop (2009) apply and can be encapsulated as the following theorem*:

Theorem

The continuous-time system (14.1) controlled by an IC with generalized hold gives a closed-loop system where the Fourier transform U of the control signal u(t) is given in terms of the Fourier transform Xd(jω) by

$$U = F(j\omega, \theta)\left[X_d(j\omega)\right]^s, \qquad (14.176)$$

where

$$F(j\omega, \theta) = \mathcal{H}(j\omega)\, S_z(e^{j\omega}), \qquad (14.177)$$

$$\mathcal{H}(j\omega) = \frac{1}{\Delta_{ol}}\, k\,[j\omega I - A_c]^{-1}\left(I - e^{-(j\omega I - A_c)\Delta_{ol}}\right), \qquad (14.178)$$

$$S_z(e^{j\omega}) = \left[I + G_z(e^{j\omega})\right]^{-1}, \qquad (14.179)$$

$$G_z(e^{j\omega}) = \left[e^{j\omega}I - A_x\right]^{-1}B_x, \qquad (14.180)$$

$$X_d(j\omega) = G(j\omega)\, d(j\omega), \qquad (14.181)$$

$$G(j\omega) = [j\omega I - A]^{-1}B. \qquad (14.182)$$

The sampling operator is defined as

$$\left[X_d(j\omega)\right]^s = \sum_{k=-\infty}^{\infty} X_d(j\omega - kj\omega_{ol}), \qquad (14.183)$$

*This is a simplified version of (Gawthrop, 2009, Theorem 1) for the special case considered in this section.
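The closed-loop relations (14.174) and (14.175) can be checked numerically for a scalar example. The plant and controller below are illustrative placeholders, not the identified human controller; only the algebraic relations between G, H, L, and T are taken from the text:

```python
import cmath

# Scalar evaluation of Equations 14.174-14.175 at a single frequency:
# L(s) = e^{-s*Delta} * G(s) * H(s),  T(s) = L(s) / (1 + L(s)).
Delta = 0.1                        # assumed loop time-delay (s)
s = 1j * 2.0 * cmath.pi * 0.5      # evaluate at 0.5 Hz

G = 1.0 / (s * (s + 1.0))          # illustrative plant (assumption)
H = 5.0 + 2.0 * s                  # illustrative controller (assumption)

L = cmath.exp(-s * Delta) * G * H  # loop gain (14.174)
T = L / (1.0 + L)                  # closed-loop transfer d -> u (14.175)

# The pure delay changes only the phase of the loop gain, not its magnitude:
assert abs(abs(L) - abs(G * H)) < 1e-12
```

Sweeping s = jω over the excited frequencies in this way produces the parametric FRF that is fitted to the data in Section 14.9.2.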
where the intermittent sampling-frequency is given by ωol = 2π/Δol.

As discussed in Gawthrop (2009), the presence of the sampling operator [Xd(jω)]^s means that the interpretation of F(jω, θ) is not quite the same as that of the closed-loop transfer function T(s) of (14.175), as the sample process generates an infinite number of frequencies which can lead to aliasing. As shown in Gawthrop (2009), the (bandwidth limited) observer acts as an anti-aliasing filter, which limits the effect of [Xd(jω)]^s to higher frequencies and makes F(jω, θ) a valid approximation of U. F(jω, θ) will therefore be treated as equivalent to T(jω) in the rest of this section.

14.9.2 System Identification

The aim of the identification procedure is to derive an estimate for the closed-loop transfer function of the system. Our approach follows the two-stage procedure of Pintelon and Schoukens (2001) and Pintelon et al. (2008). In the first step, the frequency response transfer function is estimated based on measured input–output data, resulting in a nonparametric estimate. In the second step, a parametric model of the system is fitted to the estimated frequency response using an optimization procedure.

14.9.2.1 System Setup

To illustrate the approach, we consider the visual-manual control task described in Section 14.8.1, where the subject is asked to sustain control of an unstable second-order load using a joystick, with the instruction to keep the load as close to the center as possible ("cc").

14.9.2.2 Nonparametric Estimation

In the first step, a nonparametric estimate of the closed-loop FRF is derived, based on observed input–output data. The system was excited by a multi-sine disturbance signal (Equation 14.162). The output u(t) of a linear system which is excited by d(t) then only contains information at the same discrete frequencies ωk as the input signal. If the system is nonlinear or noise is added, the output will contain a remnant component at nonexcited frequencies; as discussed in Section 14.8, several periods of d(t) were used.

The time-domain signals d(t) and u(t) over one period T0 of the excitation signal were transformed into the frequency domain. If the input signal has been applied over Np periods, then the frequency-domain data for the lth period can be denoted as d^[l](jωk) and u^[l](jωk), respectively, and the FRF can be estimated as

$$\hat{T}^{[l]}(j\omega_k) = \frac{u^{[l]}(j\omega_k)}{d^{[l]}(j\omega_k)}, \quad k = 1, 2, \ldots, N_f, \qquad (14.184)$$

where Nf denotes the number of frequency components in the excitation signal. An estimate of the FRF over all Np periods is obtained by averaging,

$$\hat{T}(j\omega_k) = \frac{1}{N_p}\sum_{l=1}^{N_p} \hat{T}^{[l]}(j\omega_k), \quad k = 1, 2, \ldots, N_f. \qquad (14.185)$$

This approach ensures that only the periodic (deterministic) features related to the disturbance signal are used in the identification, and that the identification is robust with respect to remnant components.

14.9.2.3 Parametric Optimization

In the second stage of the identification procedure, a parametric description, T̃(jωk, θ), is fitted to the estimated FRF of Equation 14.185. The parametric FRF approximates the closed-loop transfer function (Equation 14.175), which depends, in the case of predictive control, on the loop transfer function L(jωk, θ), Equation 14.174, parameterized by the vector θ, while for the IC this is approximated by F(jω, θ), Equation 14.176:

$$\tilde{T}(j\omega_k, \theta) = \begin{cases} \dfrac{L(j\omega_k, \theta)}{1 + L(j\omega_k, \theta)} & \text{for PC} \\[8pt] F(j\omega_k, \theta) & \text{for IC} \end{cases}. \qquad (14.186)$$

We use an indirect approach to parameterize the controller, where the controller and observer gains are derived from optimized design parameters using the standard LQR approach of Equation 14.6. This allows the specification of boundaries for the design parameters, which guarantee a nominally stable closed-loop system. As described in Section 14.2.3, the feedback gain vector k can then be obtained by choosing the elements of the matrices Qc and Rc in (14.6), and nominal stability can be guaranteed if these matrices are positive definite. As the system model is second order, we choose to parameterize the design using two positive scalars, qv and qp,

$$R_c = 1, \quad Q_c = \begin{bmatrix} q_v & 0 \\ 0 & q_p \end{bmatrix}, \quad \text{with } q_v, q_p > 0, \qquad (14.187)$$

related to relative weightings of the velocity (qv) and position (qp) states.

The observer gain vector L is obtained by applying the same approach to the dual system [A^T, C^T, B^T]. It was found that the results are relatively insensitive to
Intermittent Control in Man and Machine 325

observer properties; the observer was therefore parameterized by a single positive variable, qo:

    Ro = 1,   Qo = qo BBᵀ,   with qo > 0,        (14.188)

where Ro and Qo correspond to Rc and Qc in Equation 14.6 for the dual system.

The controller can then be fully specified by the positive parameter vector θ = [qv, qp, qo, Δ] (augmented by Δol for intermittent control).

The optimization criterion J is defined as the mean squared difference between the estimated FRF and its parametric fit:

    J(θ) = (1/Nf) Σ_{k=1}^{Nf} [T̂(jωk) − T̃(jωk, θ)]².        (14.189)

This criterion favors lower-frequency data since |T(jω)| tends to be larger in this range.

The parameter vector is separated into two parts: time-delay parameters,

    θΔ = [Δ]        for PC,
    θΔ = [Δ, Δol]   for IC,        (14.190)

and controller design parameters,

    θc = [qv, qp, qo],        (14.191)

such that θ = [θΔ, θc]. The time-delay parameters are varied over a predefined range, with the restriction that Δol > Δ for IC. For each given set of time-delay parameters, a corresponding set of optimal controller design parameters θc is found which solves the constrained optimization problem

    θc* = arg min over θc of J([θΔ, θc]),   θc > 0,        (14.192)

which was solved using the SQP algorithm (Nocedal and Wright, 2006) (MATLAB Optimization Toolbox, The MathWorks, Inc., Natick, MA).

The optimal cost function for each set of time-delay parameters, J*(θΔ), was calculated, and the overall optimum, J*, determined. For analysis, the time-delay parameters corresponding to the optimal cost are determined, with Δ and Δol combined for the IC to give the effective time-delay

    Δe = Δ + 0.5 Δol.        (14.193)

14.9.3 Illustrative Example

Results from identifying the experimental data from one subject are used to illustrate the approach.

An extract of the time-domain data is shown in Figure 14.33b. The top plot shows the multi-sine

FIGURE 14.33
Illustrative experimental results. (a) The optimization cost (Equation 14.189) as a function of the time delay for predictive continuous control (PC)
and intermittent control (IC), together with the value of the intermittent interval corresponding to the smallest cost (Δ∗ol ). (b)–(d) Comparisons
between predictive continuous control (PC), intermittent control (IC), and experimental data (Exp). Plots for PC and IC in (c) and (d) are derived
analytically (Equation 14.186). (e) Comparison of the analytical FRF for the IC with the FRF derived from time-domain simulation data. (a) Cost
function. (b) Time-domain signals. (Continued)

FIGURE 14.33 (Continued)


Illustrative experimental results. (a) The optimization cost (Equation 14.189) as a function of the time delay for predictive continuous control (PC)
and intermittent control (IC), together with the value of the intermittent interval corresponding to the smallest cost (Δ∗ol ). (b)–(d) Comparisons
between predictive continuous control (PC), intermittent control (IC), and experimental data (Exp). Plots for PC and IC in (c) and (d) are derived
analytically (Equation 14.186). (e) Comparison of the analytical FRF for the IC with the FRF derived from time-domain simulation data. (c)
Nyquist plot of the FRF. (d) Bode magnitude plot of the FRF. (e) Bode magnitude plot of FRF for IC.
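The nonparametric stage of the identification (Section 14.9.2.2) can be sketched numerically: a periodic multi-sine excitation is applied, spectra are averaged over whole periods after discarding the transient, and the FRF is formed only at the excited frequency bins. The following is a minimal sketch, with a stand-in first-order discrete-time system in place of the real closed loop; the sample rate, period, and excited bins are illustrative assumptions, not values from the experiment.

```python
import numpy as np

# Multi-sine excitation: energy only at selected DFT bins of one period.
fs = 100.0                # sample rate (Hz), assumed
T0 = 10.0                 # multi-sine period (s), assumed
Np = 4                    # number of periods applied
n_per = int(T0 * fs)      # samples per period
k_exc = np.array([1, 3, 7, 15, 31])          # excited bins, assumed
t = np.arange(Np * n_per) / fs
d = sum(np.sin(2 * np.pi * (k / T0) * t + 0.7 * k) for k in k_exc)

# Stand-in "closed-loop system": a first-order low-pass response to d.
u = np.zeros_like(d)
for i in range(1, len(d)):
    u[i] = 0.9 * u[i - 1] + 0.1 * d[i - 1]

# Discard the first period (transient), average spectra over the rest,
# and estimate the FRF at the excited bins only.
D = np.mean([np.fft.rfft(d[p * n_per:(p + 1) * n_per])
             for p in range(1, Np)], axis=0)
U = np.mean([np.fft.rfft(u[p * n_per:(p + 1) * n_per])
             for p in range(1, Np)], axis=0)
T_hat = U[k_exc] / D[k_exc]   # nonparametric FRF estimate
```

Because the excitation is periodic and the transient has decayed, the estimate at the excited bins matches the system's true frequency response; at nonexcited bins only remnant (nonlinearity or noise) would appear.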

disturbance input over two 10 s periods, and the bottom plot depicts the corresponding measured control signal response (thin dark line). From this response, the experimental FRF was estimated in stage 1 of the identification (dark solid lines in Figure 14.33c–e).

Stage 2 of the procedure aimed to find the controller design parameters which resulted in the best fit to the experimental FRF. The corresponding cost functions for the predictive continuous and for the IC are shown in Figure 14.33a, with the minima indicated by solid markers. The estimated FRF (Equation 14.185) and their parametric fits (Equation 14.186) are shown in Figure 14.33c.

It is clear that both the PC and IC are able to fit the experimental FRF equally well. The resulting controller parameters (summarized in Table 14.1) are very similar for both control architectures. This is confirmed by time-domain simulations using the estimated controllers (Figure 14.33b), where the PC and IC responses are difficult to distinguish.

Although the Nyquist plot of Figure 14.33c suggests that the PC and IC responses are virtually identical, further analysis shows that this is only the case at lower frequencies (at which most of the signal power lies). The Bode plot of the frequency response (Figure 14.33d)
TABLE 14.1
Estimated Controller Design Parameters

          PC        IC
  qp      0.99      1.07
  qv      0.00      0.00
  qo      258.83    226.30
  Δ       180 ms    95 ms
  Δol     —         170 ms
  Δe      —         180 ms

shows that the PC and IC are indistinguishable only for frequencies up to around 2–3 Hz. This is also the frequency range up to which the PC and IC provide a good approximation to the experimental FRF.

The controller frequency responses shown in Figure 14.33c and d are based on the analytically derived expressions. For the IC, the sampling operator means that the theoretical response is only a valid approximation at lower frequencies, with aliasing evident at higher frequencies. A comparison of the analytical response with the response derived from simulated time-domain data (Figure 14.33e) shows that the simulated frequency response of the IC at higher frequency is in fact closer to the experimental data than the analytical response.

14.9.4 Conclusions

The results illustrate that continuous, predictive, and intermittent controllers can be equally valid descriptions of a sustained control task. Both approaches allow fitting the estimated nonparametric frequency responses with comparable quality. This implies that experimental data can be equally well explained using the PC and the IC hypotheses. This result is particularly interesting as it means that experimental results showing good fit for continuous predictive control models, dating back to at least those of Kleinman et al. (1970), do not rule out an intermittent explanation. A theoretical explanation for this result is given in Gawthrop et al. (2011, Section 4.3), where the masquerading property of intermittent control is discussed: as shown there (and illustrated in the results here), the frequency response of an intermittent controller and that of the corresponding PC are indistinguishable at lower frequency and only diverge at higher frequencies where aliasing occurs. Thus, the responses of the predictive and the intermittent controllers are difficult to distinguish, and therefore both explanations appear to be equally valid.

14.10 Identification of Intermittent Control: Detecting Intermittency

As discussed in Section 14.3 and by Gawthrop et al. (2011), the key feature distinguishing intermittent control from continuous control is the open-loop interval Δol of Equations 14.16 and 14.20. As noted in Sections 14.3.1 and 14.4.2, the open-loop interval provides an explanation of the PRP of Telford (1931) as discussed by Vince (1948) to explain the human response to double stimuli. Thus, “intermittency” and “refractoriness” are intimately related. Within this interval, the control trajectory is open loop but is continuously time varying according to the basis of the generalized hold. The length of the intermittent interval gives a trade-off between continuous control (zero intermittent interval) and intermittency. Continuous control maximizes the frequency bandwidth and stability margins at the cost of reduced flexibility, whereas intermittent control provides time in the loop for optimization and selection (van de Kamp et al., 2013a; Loram et al., 2014) at the cost of reduced frequency bandwidth and reduced stability margins. The rationale for intermittent control is that it confers online flexibility and adaptability. This rationale has caused many investigators to consider whether intermittent control is an appropriate paradigm for understanding biological motor control (Craik, 1947a,b; Vince, 1948; Bekey, 1962; Navas and Stark, 1968; Neilson et al., 1988; Miall et al., 1993a; Hanneton, Berthoz, Droulez, and Slotine, 1997; Neilson and Neilson, 2005; Loram and Lakie, 2002). However, even though intermittent control was first proposed in the physiological literature in 1947, there has not been an adequate methodology to discriminate intermittent from continuous control and to identify key parameters such as the open-loop interval Δol. Within the biological literature, four historic planks of evidence (discontinuities, frequency constancy, coherence limit, and PRF) have provided evidence of intermittency in human motor control (Loram et al., 2014).

1. The existence of discontinuities within the control signal has been interpreted as sub-movements or serially planned control sequences (Navas and Stark, 1968; Poulton, 1974; Miall et al., 1993a; Miall, Weir, and Stein, 1986; Hanneton et al., 1997; Loram and Lakie, 2002).

2. Constancy in the modal rate of discontinuities, typically around 2–3 per second, has been interpreted as evidence for a central process with a well-defined timescale (Navas and

Stark, 1968; Poulton, 1974; Lakie and Loram, 2006; Loram et al., 2006).

3. The fact that coherence between unpredicted disturbance or set-point and control signal is limited to a low maximum frequency, typically of 1–2 Hz, below the mechanical bandwidth of the feedback loop has been interpreted as evidence of sampling (Navas and Stark, 1968; Loram et al., 2009, 2011).

4. The PRF has provided direct evidence of open-loop intervals but only for discrete movements and serial reaction time (e.g., push button) tasks and has not been demonstrated for sustained sensori-motor control (Vince, 1948; Pashler and Johnston, 1998; Hardwick, Rottschy, Miall, and Eickhoff, 2013).

Since these features can be reproduced by a CC with tuned parameters and filtered additive noise (Levison et al., 1969; Loram et al., 2012), this evidence is circumstantial. Furthermore, there is no theoretical requirement for regular sampling nor for discontinuities in control trajectory. Indeed, as historically observed by Craik (1947a,b), humans tend to smoothly join control trajectories following practice. Therefore, the key methodological problem is to demonstrate that on-going control is sequentially open loop even when the control trajectory is smooth and when frequency analysis shows no evidence of regular sampling.

14.10.1 Outline of Method

Using the intermittent-equivalent setpoint of Sections 14.3.7 and 14.4.2, we summarize a method to distinguish intermittent from continuous control (Loram et al., 2012). The identification experiment uses a specially designed paired-step set-point sequence. The corresponding data analysis uses a conventional ARMA model to relate the theoretically derived equivalent set-point (of Section 14.3.7) to the control signal. The method sequentially and iteratively adjusts the timing of the steps of this equivalent set-point to optimize the linear time invariant fit. The method has been verified using realistic simulation data and was found to robustly distinguish not only between continuous and intermittent control but also between event-driven intermittent and clock-driven intermittent control (Loram et al., 2012). This identification method is applicable for machine and biological applications. For application to humans, the set-point sequence should be unpredictable in the timing and direction of steps. This method proceeds in three stages. Stages 1 and 2 are independent of model assumptions and quantify refractoriness, the key feature discriminating intermittent from continuous control.

14.10.1.1 Stage 1: Reconstruction of the Set-Point

With reference to Figure 14.34, this stage takes the known set-point and control output signals and reconstructs the set-point step times to form that sequence with a linear-time invariant response which best matches the control output. This is implemented as an optimization process in which the fit of a general linear time series model (zero-delay ARMA) is maximized by adjusting the trial set of step times. The practical algorithmic steps are stated by Loram et al. (2012). The output from stage 1 is an estimate of the time delay for each step stimulus.

14.10.1.2 Stage 2: Statistical Analysis of Delays

Delays are classified according to step (1 or 2, named reaction-time∗ 1 (RT1) and reaction-time 2 (RT2), respectively) and interstep interval (ISI). A significant difference in delay, RT2 vs. RT1, is not explained by a linear-time-invariant model. The reaction time properties, or refractoriness, are quantified by

1. The size of ISI for which RT2 > RT1. This indicates the temporal separation required to eliminate interference between successive steps.

2. The difference in delay (RT2 − RT1).

14.10.1.3 Stage 3: Model-Based Interpretation

For controllers following the generalized continuous (Figure 14.1) and intermittent (Figure 14.2) structures, the probability of a response occurring, the mean delay, and the range of delays can be predicted for each interstep interval (Figure 14.35 and Appendix C of Loram et al. (2012)). For a CC (Figure 14.1), all delays equal the model delay (Δ). Intermittent control is distinguished from continuous control by increased delays for RT2 vs. RT1 for ISIs less than the open-loop interval (Δol). Clock (zero threshold) triggered intermittent control is distinguished from threshold triggered intermittent control by the range of delays for RT1 and RT2 and by the increased mean delay for ISIs greater than the open-loop interval (Δol) (Figure 3). If the results of Stage 1–2 analysis conform to these patterns (Figure 14.35), the open-loop interval (Δol) can be estimated. Simulation also shows that the sampling delay (Δs) can be identified from the ISI at which the delay RT2 is maximal (Figure 14.37) (van de Kamp et al., 2013a,b). Following verification by simulation (Loram et al., 2012), the method has been applied to human visually guided pursuit tracking and to whole

∗ In the physiological literature, “delay” is synonymous with “reaction time.”




FIGURE 14.34
Reconstruction of the set-point. Example responses to a set-point step sequence. (a–c) Solid: two paired steps (long and short interstep interval)
are applied to the set-point of each of three models: continuous linear time invariant, threshold triggered intermittent control (unit threshold),
and clock triggered (zero threshold) intermittent control (cols 1–3, respectively). Dashed: Set-point adjusted: time of each step follows preceding
trigger by one model time delay (Δ). (d–f) Solid: Control output (ue). Gray vertical dashed: event trigger times. (g–i) Solid: control output (ue).
Dash-dotted: ARMA (LTI) fit to set-point (solid in a–c). Dashed: ARMA (LTI) fit to adjusted set-point (dashed in a–c). (From I. D. Loram, et al.,
Journal of the Royal Society Interface, 9(74):2070–2084, 2012. With permission.)
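The essence of stage 1 can be illustrated with a toy reconstruction: shift each set-point step in time until a fixed LTI model best matches the observed control output; the recovered shifts are the per-step response delays. This is only a sketch of the idea — it replaces the ARMA fit of Loram et al. (2012) with an assumed first-order model and a coordinate-wise grid search, and all numerical values are illustrative.

```python
import numpy as np

def lti(w, a=0.9, b=0.1):
    """Fixed first-order LTI response y[k] = a*y[k-1] + b*w[k-1]."""
    y = np.zeros_like(w)
    for k in range(1, len(w)):
        y[k] = a * y[k - 1] + b * w[k - 1]
    return y

n = 400
steps = [50, 200]           # commanded step times (samples)
true_delays = [8, 25]       # per-step response delays to recover

def setpoint(times):
    """Unit steps accumulated from the given step times."""
    w = np.zeros(n)
    for t0 in times:
        w[t0:] += 1.0
    return w

# "Observed" control output: each step acted on after its own delay.
u_obs = lti(setpoint([s + d for s, d in zip(steps, true_delays)]))

# Coordinate-wise search: shift one step at a time to minimize the
# squared error between the model response and the observed output.
est = [0, 0]
for _ in range(3):          # a few sweeps suffice in this toy case
    for i in range(len(steps)):
        errs = []
        for dly in range(40):
            trial = [s + (dly if j == i else est[j])
                     for j, s in enumerate(steps)]
            errs.append(np.sum((u_obs - lti(setpoint(trial))) ** 2))
        est[i] = int(np.argmin(errs))
print(est)   # recovers the true per-step delays [8, 25]
```

The recovered delays play the role of RT1 and RT2 in the subsequent statistical analysis (stage 2).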


FIGURE 14.35
Reconstruction of the set-point. Predicted delays for varying interstep intervals. For three models, continuous linear time invariant, threshold
triggered intermittent control, and clock triggered intermittent control (cols 1–3, respectively), the following is shown as a function of interstep
interval (ISI): (a–c) The predicted probability of response. (d–f) The mean response delay. (g–i) The range of response delays. Responses 1 and
2 (R1, R2) are denoted by solid and dashed lines, respectively. For these calculations, the open-loop interval (Δol ) is 0.35 s (vertical dashed line)
and feedback time-delay (td) is 0.14 s. (From I. D. Loram, et al., Journal of the Royal Society Interface, 9(74):2070–2084, 2012. With permission.)
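For the deterministic, noise-free case, the stage-3 prediction for a threshold-triggered intermittent controller can be written down directly: the second trigger cannot occur within Δol of the first, so RT2 is inflated for short ISIs. A minimal sketch, using the Δol = 0.35 s and 0.14 s delay values quoted in the Figure 14.35 caption (illustrative only):

```python
# RT1/RT2 predictions for threshold-triggered intermittent control:
# the second trigger is blocked until d_ol after the first, so the
# second response is delayed whenever ISI < d_ol.

def reaction_times(isi, delay=0.14, d_ol=0.35):
    rt1 = delay                       # first response: feedback delay only
    t_trigger2 = max(isi, d_ol)       # earliest admissible second trigger
    rt2 = (t_trigger2 + delay) - isi  # delay of response 2 after step 2
    return rt1, rt2

for isi in (0.1, 0.2, 0.35, 0.6):
    rt1, rt2 = reaction_times(isi)
    print(f"ISI={isi:.2f} s: RT1={rt1:.2f} s, RT2={rt2:.2f} s")
# RT2 exceeds RT1 exactly when ISI < d_ol (refractoriness); for
# ISI >= d_ol both reaction times equal the feedback delay.
```

A continuous LTI controller, by contrast, predicts RT2 = RT1 = Δ at every ISI, which is the discriminating pattern exploited by the method.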


FIGURE 14.36
Reconstruction of the set-point. Representative stage 1 analysis. (a) Solid: set-point sequence containing eight interstep interval (ISI) pairs with
random direction (first 30 s). Dashed: adjusted set-point from step 1 analysis. After a double unidirectional step, set-point returns to zero before
next pair. (b) Solid: control output (ue). Dash-dotted: ARMA (LTI) fit to set-point. Dashed: ARMA (LTI) fit to adjusted set-point. (c and d)
Response times (RT1, RT2), respectively, from each of three models vs ISI. Joined square: continuous LTI. Joined dot: threshold intermittent
control. Isolated dot: clock intermittent control. The system is zero order. The open-loop interval (Δol ) is 0.35 s and feedback time-delay (Δ) is
0.14 s. (From I. D. Loram, et al., Journal of the Royal Society Interface, 9(74):2070–2084, 2012. With permission.)

body pursuit tracking. In both cases, control has been shown to be intermittent rather than continuous.

14.10.2 Refractoriness in Sustained Manual Control

Using a uni-axial, sensitive, contactless joystick, participants were asked to control four external systems (zero, first, second-order stable, second-order unstable) using visual feedback to track as fast and accurately as possible the target, which changes position discretely and unpredictably in time and direction (Figure 14.38a and b). For the zero-, first-, and second-order systems, joystick position determines system output position, velocity, and acceleration, respectively. The unstable second-order system had a time-constant equivalent to a standing human. Since the zero-order system has no dynamics requiring ongoing control, step changes in target produce discrete responses, that is, sharp responses clearly separated from periods of no response. The first- and second-order systems require sustained ongoing control of the system output position: thus, the step stimuli test responsiveness during ongoing control. The thirteen participants showed evidence of a substantial open-loop interval (refractoriness) which increased with system order (0.2 to 0.5 s, Figure 14.38c). For first- and second-order systems, participants showed evidence of a sampling delay (0.2 to 0.25 s, Figure 14.38c). This evidence of refractoriness discriminates against continuous control.

14.10.3 Refractoriness in Whole Body Control

Control of the hand muscles may be more refined, specialized, and more intentional than control of the muscles serving the legs and trunk (van de Kamp et al., 2013a). Using online visual feedback (<100 ms delay) of a marker on the head, participants were asked to track as fast and accurately as possible a target that changes position discretely and unpredictably in time and direction (Figure 14.39a). This required head movements of 2 cm along the anterior–posterior axis; while participants were instructed not to move their feet, no other constraints or strategies were requested. The eight participants showed evidence of a substantial open-loop interval (refractoriness) (0.5 s) and a sampling delay (0.3 s) (Figure 14.39b). This result extends the evidence of intermittent control from sustained manual control to integrated intentional control of the whole body.

14.11 Adaptive Intermittent Control

The purpose of feedback control is to reduce uncertainty (Horowitz, 1963; Jacobs, 1974). Feedback control using fixed controllers can be very successful in this role as long as sufficient bandwidth is available and the system does not contain time delays or other nonminimum phase elements (Horowitz, 1963; Goodwin et al., 2001;

Skogestad and Postlethwaite, 1996). However, when the system and actuators do contain such elements, an adaptive controller can be used to reduce uncertainty and thus, in time, improve controller performance. By its nature, intermittent control is a low-bandwidth controller and so adaptation is particularly appropriate in this context. Conversely, intermittent control frees computing resources that can be used for this purpose.

FIGURE 14.37
Model-based interpretation (stage 3). Parameter variants from the generalized IC model of Figure 14.2 showing several possible relationships between RT2 and inter-step interval (ISI) indicative of serial ballistic (intermittent) and continuous control behavior. The simulated system is zero order. The open-loop interval (Δol) is 0.55 s and feedback time delay (Δ) is 0.25 s. For four models: (a) continuous LTI (Δol = 0), (b) externally triggered intermittent control with a prediction error threshold, (c) internally triggered intermittent control (with zero prediction error threshold, triggered to saturation), and (d) externally triggered intermittent control supplemented with a sampling delay of 0.25 s, which is associated with the ISI at the maximum delay for RT2. The joined circles represent the theoretical delays as a function of ISI, which are confirmed by the model simulations. (From C. van de Kamp, et al., Frontiers in Computational Neuroscience, 7(55), 2013a. With permission.)

As discussed in the textbooks by Goodwin and Sin (1984) and by Åström and Wittenmark (1989), adaptive control of engineering systems is well-established. Perhaps the simplest approach is to combine real-time recursive parameter estimation with a simple controller design method to give a so-called self-tuning strategy (Åström and Wittenmark, 1973; Clarke and Gawthrop, 1975, 1979, 1981; Åström and Wittenmark, 1989). However, such an approach ignores two things: the controller is based initially on incorrect parameter estimates, and the quality of the parameter estimation is dependent on the controller properties. This can be formalized using the concepts of caution, whereby the adaptive controller takes account of parameter uncertainty, and probing, whereby the controller explicitly excites the system to improve parameter estimation (Jacobs and Patchell, 1972; Bar-Shalom, 1981). Adaptive controllers that explicitly and jointly optimize controller performance and parameter estimation have been called dual controllers (Feldbaum, 1960; Bar-Shalom and Tse, 1974). Except in simple cases, for example, that of Åström and Helmersson (1986), the solution to the dual control problem is impractically complex.

Since Wiener (1965) developed the idea of cybernetics, there has been a strong interest in applying both biologically-inspired and engineering-inspired ideas to adaptive control in both humans and machines; ideas arising from the biological and engineering inspired fields have been combined in various ways.

One such thread is reinforcement learning (Sutton and Barto, 1998), which continues to be developed both theoretically and through applications (Khan, Herrmann, Lewis, Pipe, and Melhuish, 2012). It can be argued that “reinforcement learning is direct adaptive optimal control” (Sutton, Barto, and Williams, 1992). Artificial neural networks (ANN) have been applied to engineering control systems: see the survey of Hunt, Żbikowski, Sbarbaro, and Gawthrop (1992) and numerous textbooks (Miller, Sutton, and Werbos, 1990; Żbikowski and Hunt, 1996; Kalkkuhl, Hunt, Żbikowski, and Dzieliński, 1997). Recent work is described by Vrabie and Lewis (2009). Again, there are links between ANN methods such as back-propagation and engineering parameter estimation (Gawthrop and Sbarbaro, 1990).

Field robotics makes use of the concept of Simultaneous Location and Mapping (SLAM) (Durrant-Whyte and Bailey, 2006; Bailey and Durrant-Whyte, 2006). Roughly speaking, location corresponds to state estimation and mapping to parameter estimation, and therefore concepts and techniques from SLAM are appropriate to adaptive (intermittent) control. In particular, the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF) (Julier, Uhlmann, and Durrant-Whyte, 2000; Schiff, 2012) provide the basis for SLAM and hence adaptive control.

In this section, the simplest continuous-time self-tuning approach (Gawthrop, 1982, 1987, 1990) is used. As indicated in the examples of Sections 14.12 and 14.13, this simple approach has interesting behaviors; nevertheless, it would be interesting to investigate more sophisticated approaches based on, for example, the EKF and UKF.
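The self-tuning idea — recursive parameter estimation combined with on-line controller redesign — can be sketched in a few lines. The following is a toy discrete-time sketch, not the continuous-time algorithm developed below: the plant values (a = 0.8, b = 0.5), the noise-free setting, and the dead-beat redesign are all illustrative assumptions.

```python
import numpy as np

# Toy certainty-equivalence self-tuner: recursive least squares (RLS)
# identifies y[k] = a*y[k-1] + b*u[k-1] while the control law is
# redesigned from the current estimates at every step.
a_true, b_true = 0.8, 0.5     # assumed plant (unknown to the controller)
theta = np.zeros(2)           # parameter estimates [a_hat, b_hat]
P = 100.0 * np.eye(2)         # estimation covariance
y_prev, u_prev = 0.0, 1.0     # initial probing input
r = 1.0                       # set-point
for k in range(200):
    y = a_true * y_prev + b_true * u_prev     # plant response
    phi = np.array([y_prev, u_prev])          # regressor
    K = P @ phi / (1.0 + phi @ P @ phi)       # RLS gain
    theta = theta + K * (y - phi @ theta)     # parameter update
    P = P - np.outer(K, phi @ P)              # covariance update
    a_hat, b_hat = theta
    # one-step (dead-beat) control law from the current estimates
    u = (r - a_hat * y) / b_hat if abs(b_hat) > 1e-6 else 1.0
    y_prev, u_prev = y, u
print(theta, y)   # estimates approach (a_true, b_true); output approaches r
```

Even this naive scheme illustrates the two issues raised above: the early control moves are based on wrong estimates, and once the loop settles the regressor stops exciting the system, so estimation quality depends on the controller.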
[Figure 14.38a and b: schematic of the task display and control loop. The external load P was zero order (P = 1), first order (P = 6.977/(s + 0.03721)), or second order (P = 6.977/(s² + 0.03721 s)); panel (c) plots mean response times against ISI.]

FIGURE 14.38
Refractoriness in sustained manual control. (a) Task setup. An oscilloscope showed real-time system output position as a small focused dot
with negligible delay. Participants provided input to the system using a sensitive, uniaxial, contactless joystick. The system ran in Simulink
Real-Time Windows Target within the MATLAB environment (The MathWorks, Inc., Natick, MA). (b) Control system and experimental setup.
Participants were provided with a tracking target in addition to system output. The tracking signal was constructed from four possible patterns
of step sequence (uni- and reversed directional step to the left or to the right). First and second stimuli are separated by an unpredictable inter
step interval (ISI), and patterns are separated by an unpredictable approximate recovery period (ARP). The participant was only aware of an
unpredictable sequence of steps. (c) Group results: The four panels: zero order, first order, second order, and second-order unstable show the
interparticipant mean first (RT1, black) and second (RT2, gray) response times against inter step intervals (ISIs), and p-values of the ANOVA’s
post hoc test are displayed above each ISI level (dark if significant, light if not). (From C. van de Kamp, et al., PLOS Computational Biology,
9(1):e1002843, 2013b. With permission.)


FIGURE 14.39
Refractoriness in whole body control. (a) The participant receives visual feedback of the anterior–posterior head position through a dot pre-
sented on an LCD screen mounted on a trolley. Without moving their feet, participants were asked to track the position of a second dot displayed
on the screen. The four possible step sequence combinations (uni- and reversed-directional step up or down) of the pursuit target are illustrated
by the solid line. First and second stimuli are separated by an inter-step interval (ISI). The participant experiences an unpredictable sequence of
steps. (b) Group results. Figure shows the interparticipant mean RT1 (black) and RT2 (gray) against ISI combined across the eight participants.
The p-values of the ANOVA’s post hoc test are displayed above each ISI level (black if <0.05, gray if not). The dotted line shows the mean
RT1, the dashed line shows the regression linear fit between (interfered) RT2 and ISIs. (From C. van de Kamp, et al., Frontiers in Computational
Neuroscience, 7(55), 2013. With permission.)

14.11.1 System Model

Parameter estimation is much simplified if the system can be transformed into linear-in-the-parameters form; the resultant model can be viewed as a non-minimal state-space (NMSS) representation of the system. The NMSS approach is given in discrete-time form by Young, Behzadi, Wang, and Chotai (1987) and in continuous-time form by Taylor, Chotai, and Young (1998). Although a

purely state-space approach to the NMSS representa- modeled using Equations 14.203 and 14.204 and
tion is possible (Gawthrop, Wang, and Young, 2007),
ξ ( t ) = d ξ δ( t ) , (14.206)
a polynomial approach is simpler and is presented here.
The linear-time invariant system considered in this and
chapter is given in Laplace transform terms by
  ξ̄u (s) = dξ , (14.207)
b( s) bξ (s) d
( s)
ȳ(s) = ū(s) + ξ̄u (s) + , (14.194) where δ(t − tk ) is the Dirac delta function.
a( s) aξ ( s ) a ( s ) aξ ( s )
Equation 14.202 then becomes
where ȳ (s), ū(s), and ξ̄u (s) are the Laplace transformed sa(s) sb(s) d ( s)
system output, control input, and input disturbance, ȳ (s) = ū(s) + , (14.208)
c( s) c( s) c( s)
b( s )
respectively. a(s) is the transfer function relating ȳ(s)
b (s) where
and ū(s), and aξ (s) provides a transfer function model
d ( s) = d
( s) + dξ b( s).
ξ
of the input disturbance. It is assumed that both transfer (14.209)
functions are strictly proper. The overall system initial Equation 14.205 can be rewritten in nonminimal state-
conditions are represented by the polynomial d
(s).∗ space form as
The polynomials a(s), b(s), and d
(s) are of the form:
d
n −1
φ y ( t ) = A s φ y ( t ) − Bs y ( t ) , (14.210)
a ( s ) = a0 s + a1 s
n
+ · · · + an , (14.195) dt
d
b(s) = b1 sn−1 + · · · + bn , (14.196) φ u ( t ) = A s φ u ( t ) + Bs u ( t ) , (14.211)
dt
d
( s ) = d
1 s n −1 + · · · + d
n , (14.197) d
φic (t) = As φic (t), φic (0) = φic0, (14.212)
nξ −1 dt
a ξ ( s ) = α0 s + α1 s

+ · · · + αn ,
ξ (14.198)
bξ (s) = β0 snξ + β1 snξ −1 + · · · + βnξ . (14.199) where
⎡ ⎤
− c1 − c2 ... − c N −1 −c N
Finally, defining the Hurwitz polynomial c(s) as ⎢ 1
⎢ 0 ... 0 0 ⎥ ⎥
c ( s ) = c 0 s N + c 1 s N −1 + · · · + c N , (14.200)
⎢ 0
⎢ 1 ... 0 0 ⎥ ⎥
As = ⎢ . .. .. .. ⎥ ,
⎢ ..
⎢ . ... . . ⎥⎥
where ⎣ 0 0 ... 0 0 ⎦
0 0 ... 1 0
N = n + nξ + 1. (14.201)
(14.213)
Equation 14.194 may be rewritten as:
and
a ( s ) aξ ( s ) b ( s ) aξ ( s ) b(s)bξ (s) d
( s) ⎡ ⎤
ȳ (s) = ū(s) + ξ̄u (s) + . 1
c( s) c( s) c( s) c( s) ⎢ 0⎥
⎢ ⎥
(14.202) Bs = ⎢ . ⎥ . (14.214)
⎣ .. ⎦
For the purposes of this chapter, the polynomials aξ (s) 0
and bξ (s) are defined as It follows that:
aξ (s) = s, (14.203) (t) = θT φ(t), (14.215)
bξ (s) = 1. (14.204)
where
With this choice, Equation 14.202 simplifies to ⎡ ⎤ ⎡ ⎤
a φy
sa(s) sb(s) b( s) d ( s) θ = ⎣b⎦ and φ(t) = ⎣ φu ⎦ , (14.216)
ȳ(s) = ū(s) + ξ̄u (s) + . (14.205) d φic
c( s) c( s) c( s) c( s)
In the special case that the input disturbance is a jump and
to a constant value dξ at time t = 0+ , then this can be !T
a = a0 a1 ... an 0 , (14.217)
∗ Transfer
!T
function representations of continuous-time systems and b= 0 b1 ... bn 0 , (14.218)
initial conditions are discussed, for example, by Goodwin et al. (2001, !T
Ch. 4). d= 0 d1 ... dn ... dnc . (14.219)
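The companion-form pair (As, Bs) of Equations 14.213 and 14.214 can be assembled directly from the coefficients of c(s). The sketch below is an illustration only, not part of the chapter; the example polynomial is an assumption, and c(s) is normalized so that c0 = 1:

```python
import numpy as np

def nmss_matrices(c):
    """Build (As, Bs) of Equations 14.213 and 14.214 from the
    coefficients [c0, c1, ..., cN] of the Hurwitz polynomial c(s)."""
    c = np.asarray(c, dtype=float)
    c = c / c[0]                 # normalize to the monic case c0 = 1
    N = len(c) - 1
    As = np.zeros((N, N))
    As[0, :] = -c[1:]            # first row: -c1 ... -cN
    As[1:, :-1] = np.eye(N - 1)  # shift structure below the first row
    Bs = np.zeros((N, 1))
    Bs[0, 0] = 1.0
    return As, Bs

# Example: c(s) = (s + 1)^3, so N = 3 and all roots of c(s) are at s = -1.
As, Bs = nmss_matrices([1.0, 3.0, 3.0, 1.0])
print(np.linalg.eigvals(As))  # the eigenvalues of As are the roots of c(s)
```

The filters of Equations 14.210 through 14.212 then share this As, with y, u, and the initial-condition response entering through Bs with the signs given above.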
Intermittent Control in Man and Machine 335

14.11.2 Continuous-Time Parameter Estimation

As discussed by Young (1981), Gawthrop (1982, 1987), Unbehauen and Rao (1987, 1990), and Garnier and Wang (2008), least-squares parameter estimation can be performed in the continuous-time domain [as opposed to the more usual discrete-time domain as described, for example, by Ljung (1999)]. A brief outline of the method used in the following examples is given in this section:

    eh(t) = θ̂ᵀ φ(t),    (14.220)

    J(θ̂u) = (1/2) ∫₀ᵗ e^{−λ(t−t′)} eh(t′)² dt′
           = (1/2) θ̂ᵀ S(t) θ̂
           = (1/2) θ̂uᵀ Suu(t) θ̂u + θ̂uᵀ Suk(t) θk + (1/2) θkᵀ Skk(t) θk,    (14.221)

where θ̂ᵀ = [θ̂uᵀ θkᵀ] is partitioned into the unknown parameters θ̂u and the known parameters θk,

    S(t) = ∫₀ᵗ e^{−λ(t−t′)} φ(t′) φᵀ(t′) dt′,    (14.222)

and the symmetrical matrix S(t) has been partitioned as

    S(t) = ⎡ Suu(t)  Suk(t) ⎤
           ⎣ Sukᵀ(t) Skk(t) ⎦ .    (14.223)

Differentiating the cost function J with respect to the vector of unknown parameters θ̂u gives

    dJ/dθ̂u = Suu(t) θ̂u + Suk(t) θk.    (14.224)

Setting the derivative to zero gives the optimal solution:

    θ̂u(t) = −Suu⁻¹(t) Suk(t) θk.    (14.225)

Differentiating S (14.222) with respect to time gives

    dS/dt + λ S(t) = φ(t) φᵀ(t).    (14.226)

14.11.3 Intermittent Parameter Estimation

The incremental information matrix S̃i from the ith intermittent interval is defined as

    S̃i = ∫_{ti−1}^{ti} φ(t′) φᵀ(t′) dt′.    (14.227)

Equation 14.227 may be implemented using the differential equation (14.226) with zero initial condition at time ti−1. The intermittent information matrix Si at the ith intermittent interval is defined as

    Si = λic Si−1 + S̃i.    (14.228)

Partitioning Si as Equation 14.223 gives the parameter estimate of Equation 14.225.

If there is a disturbance characterized by Equations 14.203, 14.204, and 14.207, the parameters corresponding to d(s) jump when the disturbance jumps. Since such a jump will give rise to an event, a new set of d parameters should be estimated; this is achieved by adding a diagonal matrix to the elements of Si corresponding to d(s).

14.12 Examples: Adaptive Human Balance

As discussed by Gawthrop et al. (2014), it can be argued that the human balance control system generates ballistic control trajectories that attempt to place the unstable system at equilibrium; this leads to homoclinic orbits (Hirsch, Smale, and Devaney, 2012). However, such behavior is dependent on a good internal model. This section looks at the same ballistic balance control system as that of Gawthrop et al. (2014) but in the context of parameter adaptation.

The controlled system is given by the transfer function:

    G(s) = b/(s² + a).    (14.229)

The actual system parameters are

    a = −1,    (14.230)
    b = 1.1.    (14.231)

The parameters a and b are estimated using the intermittent parameter estimation method of Section 14.11.3 with initial values:

    â = −1,    (14.232)
    b̂ = 1.    (14.233)

Figure 14.40a and b shows the nonadaptive controller with the correct parameters of Equations 14.230 and 14.231; the behavior approximates that of the ideal ballistic controller. Figure 14.40c and d shows the nonadaptive controller with the incorrect parameters of Equations 14.232 and 14.233; the behavior is now a limit cycle. Figure 14.40e and f shows the adaptive controller with the initial incorrect parameters of Equations 14.232 and 14.233. Initially, the behavior corresponds to that of Figure 14.40c and d; but after about 50 s, the behavior corresponds to that of Figure 14.40a and b. The corresponding parameter estimate errors (â − a and b̂ − b) are given in Figure 14.41.
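The least-squares machinery of Equations 14.220 through 14.226 can be sketched numerically: the information matrix S is accumulated by forward-Euler integration of Equation 14.226, and the unknown parameters follow from the partitioned solve of Equation 14.225. The signals, dimensions, step size, and forgetting factor below are illustrative assumptions rather than the chapter's simulation; the regressor block phi_k is constructed so that the model relation θᵀφ(t) = 0 holds for the true parameters.

```python
import numpy as np

# Illustrative setup: two unknown parameters (theta_u) and one
# known parameter (theta_k); the regressor signals are assumed.
theta_u_true = np.array([2.0, -1.0])
theta_k = np.array([0.5])

dt, lam = 1e-3, 0.1        # Euler step and forgetting factor (assumed)
S = np.zeros((3, 3))       # information matrix of Equation 14.222

for k in range(20000):
    t = k * dt
    phi_u = np.array([np.sin(t), np.cos(2.0 * t)])
    # Choose phi_k so that theta^T phi = 0 for the true parameters.
    phi_k = -(theta_u_true @ phi_u) / theta_k
    phi = np.concatenate([phi_u, phi_k])
    # Forward-Euler step of dS/dt + lam*S = phi phi^T (Equation 14.226).
    S += dt * (np.outer(phi, phi) - lam * S)

# Partition S as in Equation 14.223 and apply Equation 14.225.
Suu, Suk = S[:2, :2], S[:2, 2:]
theta_u_hat = -np.linalg.solve(Suu, Suk @ theta_k)
print(theta_u_hat)  # close to theta_u_true
```

Because the regression is exact and persistently exciting, the partitioned solve recovers the unknown parameters; the intermittent variant of Section 14.11.3 instead resets this integration at each event time ti−1 and blends intervals via Equation 14.228.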
14.13 Examples: Adaptive Human Reaching

Repetitive reaching and pointing have been examined by a number of authors including Shadmehr and Mussa-Ivaldi (1994) (see also Shadmehr and Mussa-Ivaldi (2012)), Burdet, Tee, Mareels, Milner, Chew, Franklin, Osu, and Kawato (2006) and Tee, Franklin, Kawato, Milner, and Burdet (2010). An iterative learning control explanation of these results is given by Zhou, Oetomo, Tan, Burdet, and Mareels (2012).

As discussed by Bristow, Tharayil, and Alleyne (2006), "iterative learning control (ILC) is based on the notion that the performance of a system that executes the same task multiple times can be improved by learning from previous executions (trials, iterations, passes)." A number of survey papers are available, including those of Bristow et al. (2006), Ahn, Chen, and Moore (2007), and Wang, Gao, and Doyle III (2009), as well as a book by Xu and Tan (2003). ILC is closely related to repetitive control (Cuiyan, Dongchun, and Xianyi, 2004) and to multi-pass control (Edwards, 1974; Owens, 1977).

[Figure 14.40a–d appears here: time histories of the output y against t (s) and the corresponding phase-plane plots.]

FIGURE 14.40
Adaptive balance control. (a) and (b) Correct parameters, no adaptation. (c) and (d) Incorrect parameters, no adaptation. (e) and (f) Incorrect parameters, with adaptation; the initial behavior corresponds to (c) and (d) and the final behavior corresponds to (a) and (b). For clarity, lines are colored grey for t < 60 and black for t ≥ 60. (a) Output y. No adaption Δb0 = 0. (b) Phase plane. No adaption Δb0 = 0. (c) Output y. No adaption Δb0 = 0.1. (d) Phase plane. No adaption Δb0 = 0.1. (Continued)
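The balance plant of Equation 14.229 with the parameters of Equations 14.230 and 14.231 has poles at s = ±1 (since s² + a = s² − 1) and is therefore open-loop unstable, which is why the behavior in Figure 14.40 depends so strongly on the quality of the controller's internal model. A minimal numerical check (the state-space form, step size, and initial tilt are assumptions for illustration):

```python
import numpy as np

a, b = -1.0, 1.1                       # Equations 14.230 and 14.231
A = np.array([[0.0, 1.0], [-a, 0.0]])  # companion form of s^2 + a
B = np.array([[0.0], [b]])

poles = np.linalg.eigvals(A)           # roots of s^2 + a = 0, i.e. s = ±1
print(sorted(poles.real))              # one stable and one unstable pole

# Uncontrolled response from a small initial tilt grows like e^t.
dt, x = 1e-3, np.array([0.01, 0.0])
for _ in range(6000):                  # 6 s of forward-Euler integration
    x = x + dt * (A @ x)
print(abs(x[0]))                       # far from equilibrium without control
```

Stabilizing such a plant with intermittent, ballistic control is exactly the task illustrated by Figure 14.40.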
Intermittent Control in Man and Machine 337

[Figure 14.40e–f appears here: time history of the output y against t (s) and the corresponding phase-plane plot (v against y).]

FIGURE 14.40 (Continued)
Adaptive balance control. (a) and (b) Correct parameters, no adaptation. (c) and (d) Incorrect parameters, no adaptation. (e) and (f) Incorrect parameters, with adaptation; the initial behavior corresponds to (c) and (d) and the final behavior corresponds to (a) and (b). For clarity, lines are colored grey for t < 60 and black for t ≥ 60. (e) Output y. Adaption Δb0 = 0.1. (f) Phase plane. Adaption Δb0 = 0.1.


[Figure 14.41 appears here: estimated parameter error Δθ against t (s).]

FIGURE 14.41
Adaptive balance control: estimated parameter. The parameter estimate errors (â − a and b̂ − b) become smaller as time increases.

This example shows how the intermittent parameter estimation method of Section 14.11.3 can be used in the context of iterative learning. A system similar to that described in Section IV, case 3 of the paper by Zhou et al. (2012) was used. The lateral motion of the arm in the force field was described by the transfer function of Equation 14.229 with parameters

    a = −100 for i ≤ 50;  a = 0 for i > 50,    (14.234)
    b = 100.    (14.235)

The lateral target position was randomly set to ±0.01 m. The parameters a and b are estimated using the intermittent parameter estimation method of Section 14.11.3 with initial values:

    â = 0,    (14.236)
    b̂ = 200.    (14.237)

Figure 14.42 shows the system output (transverse position) y and control input u for five of the iterations; the sample instants are denoted by the symbol (•) and the ideal trajectory by the gray line. The initial behavior (Figure 14.42a and e) is unstable and sampling occurs at the minimum interval of 100 ms. The behavior at the 50th iteration (Figure 14.42b and f), just before the parameter change, is close to ideal even though the trajectory is open loop for nearly 400 ms. The behavior at the 51st iteration (Figure 14.42c and g), just after the parameter change, is again poor (although stable) but has become ideal and open loop by iteration 100 (Figure 14.42d and h). The data are replotted in Figure 14.43 to show the transverse position y plotted against longitudinal position.

Figure 14.44 shows the evolution of the estimated parameters with iteration number.
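In this example the plant of Equation 14.229 is iteration-dependent: the transverse force-field term a of Equation 14.234 switches off after the 50th trial. The sketch below encodes the schedule and Euler-integrates one open-loop trial of the lateral dynamics y″ = −a y + b u implied by Equation 14.229; the zero input, step size, trial length, and 0.01 m initial offset are assumptions for illustration:

```python
import numpy as np

def force_field_a(i):
    """Transverse force-field parameter of Equation 14.234."""
    return -100.0 if i <= 50 else 0.0

b = 100.0  # Equation 14.235

def run_trial(i, u=lambda t: 0.0, T=0.6, dt=1e-3):
    """Euler-integrate y'' = -a*y + b*u for one trial of length T seconds,
    starting from an assumed 0.01 m lateral offset."""
    a = force_field_a(i)
    y, yd = 0.01, 0.0
    for k in range(round(T / dt)):
        y, yd = y + dt * yd, yd + dt * (-a * y + b * u(k * dt))
    return y

# With a = -100 the open-loop lateral dynamics diverge; with a = 0
# (after trial 50) the offset simply persists.
print(run_trial(1), run_trial(100))
```

This is only the plant side: in the chapter, the intermittent controller and the estimator of Section 14.11.3 act on each trial, so the parameter jump at iteration 51 appears as an event and a new set of d parameters is estimated.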
[Figure 14.42 appears here: panels (a)–(d) show the transverse position y and panels (e)–(h) the control input u against t for iterations 1, 50, 51, and 100.]

FIGURE 14.42
Reaching in a force-field: transverse position. The additional transverse force field is applied throughout, but the initial parameters correspond to zero force field. The sample instants are denoted by the • symbol. The behavior improves, and the intermittent interval increases, from iteration 1 to iteration 50. (a) y, 1st iteration. (b) y, 50th iteration. (c) y, 51st iteration. (d) y, 100th iteration. (e) u, 1st iteration. (f) u, 50th iteration. (g) u, 51st iteration. (h) u, 100th iteration.
[Figure 14.43 appears here: transverse position y1 plotted against longitudinal position y2 for five iterations.]

FIGURE 14.43
Reaching in a force-field. The data from Figure 14.42 are re-plotted against longitudinal position. (a) 1st iteration. (b) 25th iteration. (c) 50th iteration. (d) 51st iteration. (e) 100th iteration.
[Figure 14.44 appears here: estimated force-field parameter a plotted against iteration number.]

FIGURE 14.44
Reaching in a force-field: parameters. The transverse force field parameter is a = −100 for 0 ≤ iteration ≤ 50 and a = 0 for iteration > 50. The initial estimate is â = 0.

14.14 Conclusion

• This chapter has given an overview of the current state-of-the-art of the event-driven Intermittent Control discussed, in the context of physiological systems, by Gawthrop et al. (2011). In particular, Intermittent Control has been shown to provide a basis for the human control systems associated with balance and motion control.

• Intermittent control arose in the context of applying control to systems and constraints which change through time (Ronco et al., 1999). The intermittent control solution allows slow optimization to occur concurrently with a fast control action. Adaptation is intrinsic to intermittent control, and yet the formal relationship between adaptive and intermittent control remains to be established. Some results of experiments with human subjects reported by Loram et al. (2011), together with the simulations of Sections 14.12 and 14.13, support the intuition that the intermittent interval somehow simplifies the complexities of dual control. A future challenge is to provide a theoretical basis formally linking intermittent and adaptive control. Such a basis would extend the applicability of time-varying control and would enhance investigation of biological controllers, which are adaptive by nature.

• It is an interesting question as to where the event-driven intermittent control algorithm lies in the human nervous system. IC provides time within the feedback loop to use the current state to select new motor responses (control structure, law, goal, constraints). This facility provides competitive advantage in performance, adaptation, and survival and is thus likely to operate through neural structures which are evolutionarily old as well as new (Brembs, 2011). Refractoriness in humans is associated with a-modal response selection rather than sensory processing or motor execution (Pashler and Johnston, 1998). This function suggests plausible locations within premotor regions and within the slow striatal-prefrontal gating loops (Jiang and Kanwisher, 2003; Dux, Ivanoff, Asplund, and Marois, 2006; Houk, Bastianen, Fansler, Fishbach, Fraser, Reber, Roy, and Simo, 2007; Seidler, 2010; Battaglia-Mayer, Buiatti, Caminiti, Ferraina, Lacquaniti, and Shallice, 2014; Loram et al., 2014).

• It seems plausible that Intermittent Control has applications within a broader biomedical context. Some possible areas are:

  • Rehabilitation practice, following neuromuscular diseases such as stroke and spinal cord injury, often uses passive closed-loop learning in which movement is externally imposed by therapists or assistive technology [e.g., robotic-assisted rehabilitation (Huang and Krakauer, 2009)]. Loram et al. (2011) have shown that adaptation to parameter changes during human visual-manual control can be facilitated by using an explicitly intermittent control strategy. For successful learning, active user input should excite the system, allowing learning from the observed intermittent open-loop behavior (Loram et al., 2011).

  • Cellular control systems in general, and gene regulatory networks in particular, seem to have an intermittent nature (Albeck, Burke, Spencer, Lauffenburger, and Sorger, 2008; Balazsi, van Oudenaarden, and Collins, 2011; Liu and Jiang, 2012). It would be interesting to examine whether the intermittent control approaches of this chapter are relevant in the context of cellular control systems.

• The particular intermittent control algorithm discussed within this chapter has roots and applications in control engineering (Ronco et al., 1999; Gawthrop and Wang, 2006, 2009a; Gawthrop et al., 2012) and it is hoped that this chapter will lead
to further cross-fertilization of physiological and engineering research. Some possible areas are:

  • Decentralized control (Sandell, Varaiya, Athans, and Safonov, 1978; Bakule and Papik, 2012) is a pragmatic approach to the control of large-scale systems where, for reasons of cost, convenience, and reliability, it is not possible to control the entire system by a single centralized controller. Fundamental control-theoretic principles arising from decentralized control have been available for some time (Clements, 1979; Anderson and Clements, 1981; Gong and Aldeen, 1992). More recently, following the implementation of decentralized control using networked control systems (Moyne and Tilbury, 2007), attention has focused on the interaction of communication and control theory (Baillieul and Antsaklis, 2007; Nair, Fagnani, Zampieri, and Evans, 2007) and fundamental results have appeared (Nair and Evans, 2003; Nair, Evans, Mareels, and Moran, 2004; Hespanha, Naghshtabrizi, and Xu, 2007). It would be interesting to apply the physiologically inspired approaches of this chapter to decentralized control, as well as to reconsider Intermittent Control in the context of decentralized control systems.

  • Networked control systems lead to the "sampling period jitter problem" (Sala, 2007; Moyne and Tilbury, 2007), where uncertainties in transmission time lead to unpredictable non-uniform sampling and stability issues (Cloosterman, van de Wouw, Heemels, and Nijmeijer, 2009). A number of authors have suggested that performance may be improved by replacing the standard zero-order hold by a generalized hold (Feuer and Goodwin, 1996; Sala, 2005, 2007) or by using a dynamical model of the system to interpolate between samples (Zhivoglyadov and Middleton, 2003; Montestruque and Antsaklis, 2003). This can be shown to improve stability (Montestruque and Antsaklis, 2003; Hespanha et al., 2007). As shown by Gawthrop and Wang (2011), the intermittent controller has a similar feature; it therefore follows that the physiologically inspired form of intermittent controller described in this chapter has application to networked control systems.

  • Robotics: It seems likely that understanding the control mechanisms behind human balance and motion control (Loram et al., 2009; van de Kamp et al., 2013a,b) and stick balancing (Gawthrop et al., 2013b) will have applications in robotics. In particular, as discussed by van de Kamp et al. (2013a), robots, like humans, contain redundant possibilities within a multi-segmental structure. Thus, the multivariable constrained intermittent control methods illustrated in Section 14.7 and the adaptive versions illustrated in Section 14.11 may well be applicable to the control of autonomous robots.

Acknowledgments

The work reported here is related to the linked EPSRC Grants EP/F068514/1, EP/F069022/1, and EP/F06974X/1 "Intermittent control of man and machine." PJG acknowledges the many discussions with Megan and Peter Neilson over the years, which have significantly influenced this work.

Bibliography

H.-S. Ahn, Y. Q. Chen, and K. L. Moore. Iterative learning control: Brief survey and categorization. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 37(6):1099–1121, 2007. doi:10.1109/TSMCC.2007.905759.

J. G. Albeck, J. M. Burke, S. L. Spencer, D. A. Lauffenburger, and P. K. Sorger. Modeling a snap-action, variable-delay switch controlling extrinsic cell death. PLoS Biology, 6(12):2831–2852, 2008. doi:10.1371/journal.pbio.0060299.

A. V. Alexandrov, A. A. Frolov, F. B. Horak, P. Carlson-Kuhta, and S. Park. Feedback equilibrium control during human standing. Biological Cybernetics, 93:309–322, 2005. doi:10.1007/s00422-005-0004-1.

B. D. O. Anderson and D. J. Clements. Algebraic characterization of fixed modes in decentralized control. Automatica, 17(5):703–712, 1981. doi:10.1016/0005-1098(81)90017-0.

Y. Asai, Y. Tasaka, K. Nomura, T. Nomura, M. Casadio, and P. Morasso. A model of postural control in quiet standing: Robust compensation of delay-induced instability using intermittent activation of feedback control. PLoS One, 4(7):e6169, 2009. doi:10.1371/journal.pone.0006169.
K. Astrom and B. Bernhardsson. Systems with Lebesgue sampling. In Directions in Mathematical Systems Theory and Optimization, A. Rantzer and C. Byrnes, Eds., pp. 1–13. Springer, 2003. doi:10.1007/3-540-36106-5.

K. J. Astrom. Frequency domain properties of Otto Smith regulators. International Journal of Control, 26:307–314, 1977. doi:10.1080/00207177708922311.

K. J. Astrom. Event based control. In Analysis and Design of Nonlinear Control Systems, A. Astolfi and L. Marconi, Eds., pp. 127–147. Springer, Heidelberg, 2008. doi:10.1007/978-3-540-74358-3.

K. J. Astrom and B. M. Bernhardsson. Comparison of Riemann and Lebesgue sampling for first order stochastic systems. In Proceedings of the 41st IEEE Conference on Decision and Control, 2002, volume 2, pp. 2011–2016, January 2002.

K. J. Astrom and A. Helmersson. Dual control of an integrator with unknown gain. Computers and Mathematics with Applications, 12(6, Part A):653–662, 1986. doi:10.1016/0898-1221(86)90052-0.

K. J. Åström and B. Wittenmark. On self-tuning regulators. Automatica, 9:185–199, 1973. doi:10.1016/0005-1098(73)90073-3.

K. J. Åström and B. Wittenmark. Adaptive Control. Addison-Wesley, Reading, MA, 1989.

T. Bailey and H. Durrant-Whyte. Simultaneous localization and mapping (SLAM): Part II. IEEE Robotics Automation Magazine, 13(3):108–117, 2006. doi:10.1109/MRA.2006.1678144.

J. Baillieul and P. J. Antsaklis. Control and communication challenges in networked real-time systems. Proceedings of the IEEE, 95(1):9–28, 2007. doi:10.1109/JPROC.2006.887290.

L. Bakule and M. Papik. Decentralized control and communication. Annual Reviews in Control, 36(1):1–10, 2012. doi:10.1016/j.arcontrol.2012.03.001.

G. Balazsi, A. van Oudenaarden, and J. J. Collins. Cellular decision making and biological noise: From microbes to mammals. Cell, 144(6):910–925, 2011. doi:10.1016/j.cell.2011.01.030.

S. Baron, D. L. Kleinman, and W. H. Levison. An optimal control model of human response part II: Prediction of human performance in a complex task. Automatica, 6:371–383, 1970. doi:10.1016/0005-1098(70)90052-X.

N. C. Barrett and D. J. Glencross. The double step analysis of rapid manual aiming movements. The Quarterly Journal of Experimental Psychology Section A, 40(2):299–322, 1988. doi:10.1080/02724988843000131.

Y. Bar-Shalom. Stochastic dynamic programming: Caution and probing. IEEE Transactions on Automatic Control, 26(5):1184–1195, 1981. doi:10.1109/TAC.1981.1102793.

Y. Bar-Shalom and E. Tse. Dual effect, certainty equivalence, and separation in stochastic control. IEEE Transactions on Automatic Control, 19(5):494–500, 1974. doi:10.1109/TAC.1974.1100635.

A. Battaglia-Mayer, T. Buiatti, R. Caminiti, S. Ferraina, F. Lacquaniti, and T. Shallice. Correction and suppression of reaching movements in the cerebral cortex: Physiological and neuropsychological aspects. Neuroscience & Biobehavioral Reviews, 42:232–251, 2014. doi:10.1016/j.neubiorev.2014.03.002.

G. A. Bekey. The human operator as a sampled-data system. IRE Transactions on Human Factors in Electronics, 3(2):43–51, 1962. doi:10.1109/THFE2.1962.4503341.

N. Bhushan and R. Shadmehr. Computational nature of human adaptive control during learning of reaching movements in force fields. Biological Cybernetics, 81(1):39–60, 1999. doi:10.1007/s004220050543.

A. Bottaro, M. Casadio, P. G. Morasso, and V. Sanguineti. Body sway during quiet standing: Is it the residual chattering of an intermittent stabilization process? Human Movement Science, 24(4):588–615, 2005. doi:10.1016/j.humov.2005.07.006.

A. Bottaro, Y. Yasutake, T. Nomura, M. Casadio, and P. Morasso. Bounded stability of the quiet standing posture: An intermittent control model. Human Movement Science, 27(3):473–495, 2008. doi:10.1016/j.humov.2007.11.005.

S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK, 2004.

B. Brembs. Towards a scientific concept of free will as a biological trait: Spontaneous actions and decision-making in invertebrates. Proceedings of the Royal Society B: Biological Sciences, 278(1707):930–939, 2011. doi:10.1098/rspb.2010.2325.

D. A. Bristow, M. Tharayil, and A. G. Alleyne. A survey of iterative learning control. IEEE Control Systems, 26(3):96–114, 2006. doi:10.1109/MCS.2006.1636313.

E. Burdet, K. Tee, I. Mareels, T. Milner, C. Chew, D. Franklin, R. Osu, and M. Kawato. Stability and motor adaptation in human arm movements. Biological Cybernetics, 94:20–32, 2006. doi:10.1007/s00422-005-0025-9.
W.-H. Chen and P. J. Gawthrop. Constrained predictive pole-placement control with linear models. Automatica, 42(4):613–618, 2006. doi:10.1016/j.automatica.2005.09.020.

D. W. Clarke and P. J. Gawthrop. Self-tuning controller. IEE Proceedings Part D: Control Theory and Applications, 122(9):929–934, 1975. doi:10.1049/piee.1975.0252.

D. W. Clarke and P. J. Gawthrop. Self-tuning control. IEE Proceedings Part D: Control Theory and Applications, 126(6):633–640, 1979. doi:10.1049/piee.1979.0145.

D. W. Clarke and P. J. Gawthrop. Implementation and application of microprocessor-based self-tuners. Automatica, 17(1):233–244, 1981. doi:10.1016/0005-1098(81)90098-4.

D. J. Clements. A representation result for two-input two-output decentralized control systems. The Journal of the Australian Mathematical Society. Series B. Applied Mathematics, 21(1):113–127, 1979. doi:10.1017/S0334270000001971.

M. B. G. Cloosterman, N. van de Wouw, W. P. M. H. Heemels, and H. Nijmeijer. Stability of networked control systems with uncertain time-varying delays. IEEE Transactions on Automatic Control, 54(7):1575–1580, 2009. doi:10.1109/TAC.2009.2015543.

K. J. Craik. Theory of human operators in control systems: Part 1, the operator as an engineering system. British Journal of Psychology, 38:56–61, 1947a. doi:10.1111/j.2044-8295.1947.tb01141.x.

K. J. Craik. Theory of human operators in control systems: Part 2, man as an element in a control system. British Journal of Psychology, 38:142–148, 1947b. doi:10.1111/j.2044-8295.1948.tb01149.x.

L. Cuiyan, Z. Dongchun, and Z. Xianyi. A survey of repetitive control. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), volume 2, pp. 1160–1166, September–October 2004. doi:10.1109/IROS.2004.1389553.

H. Durrant-Whyte and T. Bailey. Simultaneous localization and mapping (SLAM): Part I. IEEE Robotics Automation Magazine, 13(2):99–110, 2006. doi:10.1109/MRA.2006.1638022.

P. E. Dux, J. Ivanoff, C. L. Asplund, and R. Marois. Isolation of a central bottleneck of information processing with time-resolved fMRI. Neuron, 52(6):1109–1120, 2006. doi:10.1016/j.neuron.2006.11.009.

J. B. Edwards. Stability problems in the control of multipass processes. Proceedings of the Institution of Electrical Engineers, 121(11):1425–1432, 1974. doi:10.1049/piee.1974.0299.

A. A. Feldbaum. Optimal Control Theory. Academic Press, New York, 1960.

A. Feuer and G. C. Goodwin. Sampling in Digital Signal Processing and Control. Birkhauser, Berlin, 1996.

R. Fletcher. Practical Methods of Optimization, 2nd edition. Wiley, Chichester, 1987.

M. Foo and E. Weyer. On reproducing existing controllers as model predictive controllers. In Australian Control Conference (AUCC), 2011, pp. 303–308, November 2011.

G. F. Franklin and J. D. Powell. Digital Control of Dynamic Systems. Addison-Wesley, Reading, MA, 1980.

G. F. Franklin, J. D. Powell, and A. Emami-Naeini. Feedback Control of Dynamic Systems, 3rd edition. Addison-Wesley, Reading, MA, 1994.

A. T. Fuller. Optimal nonlinear control systems with pure delay. International Journal of Control, 8:145–168, 1968. doi:10.1080/00207176808905662.

H. Garnier and L. Wang, Eds. Identification of Continuous-Time Models from Sampled Data. Advances in Industrial Control. Springer, London, 2008.

P. Gawthrop, H. Gollee, A. Mamma, I. Loram, and M. Lakie. Intermittency explains variability in human motor control. In NeuroEng 2013: Australian Workshop on Computational Neuroscience, Melbourne, Australia, 2013a.

P. Gawthrop, K.-Y. Lee, M. Halaki, and N. O'Dwyer. Human stick balancing: An intermittent control explanation. Biological Cybernetics, 107(6):637–652, 2013b. doi:10.1007/s00422-013-0564-4.

P. Gawthrop, I. Loram, H. Gollee, and M. Lakie. Intermittent control models of human standing: Similarities and differences. Biological Cybernetics, 108(2):159–168, 2014. doi:10.1007/s00422-014-0587-5.

P. Gawthrop, I. Loram, and M. Lakie. Predictive feedback in human simulated pendulum balancing. Biological Cybernetics, 101(2):131–146, 2009. doi:10.1007/s00422-009-0325-6.

P. Gawthrop, I. Loram, M. Lakie, and H. Gollee. Intermittent control: A computational theory of human control. Biological Cybernetics, 104(1–2):31–51, 2011. doi:10.1007/s00422-010-0416-4.
P. Gawthrop, D. Wagg, S. Neild, and L. Wang. Power-constrained intermittent control. International Journal of Control, 86(3):396–409, 2013c. doi:10.1080/00207179.2012.733888.

P. Gawthrop and L. Wang. The system-matched hold and the intermittent control separation principle. International Journal of Control, 84(12):1965–1974, 2011. doi:10.1080/00207179.2011.630759.

P. J. Gawthrop. Studies in identification and control. D.Phil. thesis, Oxford University, 1976. http://ora.ox.ac.uk/objects/uuid:90ade91d-df67-42ef-a422-0d3500331701.

P. J. Gawthrop. A continuous-time approach to discrete-time self-tuning control. Optimal Control: Applications and Methods, 3(4):399–414, 1982.

P. J. Gawthrop. Continuous-Time Self-Tuning Control. Volume 1: Design. Engineering Control Series, Research Studies Press, Lechworth, England, 1987.

P. J. Gawthrop. Continuous-Time Self-Tuning Control. Volume 2: Implementation. Engineering Control Series, Research Studies Press, Taunton, England, 1990.

P. J. Gawthrop. Frequency domain analysis of intermittent control. Proceedings of the Institution of Mechanical Engineers Pt. I: Journal of Systems and Control Engineering, 223(5):591–603, 2009. doi:10.1243/09596518JSCE759.

P. J. Gawthrop and H. Gollee. Intermittent tapping control. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, 226(9):1262–1273, 2012. doi:10.1177/0959651812450114.

P. J. Gawthrop, M. D. Lakie, and I. D. Loram. Predictive feedback control and Fitts' law. Biological Cybernetics, 98(3):229–238, 2008. doi:10.1007/s00422-007-0206-9.

P. J. Gawthrop, S. A. Neild, and D. J. Wagg. Semi-active damping using a hybrid control approach. Journal of Intelligent Material Systems and Structures, 23(18):965–974, 2012. doi:10.1177/1045389X12436734.

P. J. Gawthrop and E. Ronco. Predictive pole-placement control with linear models. Automatica, 38(3):421–432, 2002. doi:10.1016/S0005-1098(01)00231-X.

P. J. Gawthrop and D. G. Sbarbaro. Stochastic approximation and multilayer perceptrons: The gain backpropagation algorithm. Complex System Journal, 4:51–74, 1990.

P. J. Gawthrop and L. Wang. Intermittent predictive control of an inverted pendulum. Control Engineering Practice, 14(11):1347–1356, 2006. doi:10.1016/j.conengprac.2005.09.002.

P. J. Gawthrop and L. Wang. Intermittent model predictive control. Proceedings of the Institution of Mechanical Engineers Pt. I: Journal of Systems and Control Engineering, 221(7):1007–1018, 2007. doi:10.1243/09596518JSCE417.

P. J. Gawthrop and L. Wang. Constrained intermittent model predictive control. International Journal of Control, 82:1138–1147, 2009a. doi:10.1080/00207170802474702.

P. J. Gawthrop and L. Wang. Event-driven intermittent control. International Journal of Control, 82(12):2235–2248, 2009b. doi:10.1080/00207170902978115.

P. J. Gawthrop and L. Wang. Intermittent redesign of continuous controllers. International Journal of Control, 83:1581–1594, 2010. doi:10.1080/00207179.2010.483691.

P. J. Gawthrop, L. Wang, and P. C. Young. Continuous-time non-minimal state-space design. International Journal of Control, 80(10):1690–1697, 2007. doi:10.1080/00207170701546006.

H. Gollee, A. Mamma, I. D. Loram, and P. J. Gawthrop. Frequency-domain identification of the human controller. Biological Cybernetics, 106:359–372, 2012. doi:10.1007/s00422-012-0503-9.

Z. Gong and M. Aldeen. On the characterization of fixed modes in decentralized control. IEEE Transactions on Automatic Control, 37(7):1046–1050, 1992. doi:10.1109/9.148369.

G. C. Goodwin, S. F. Graebe, and M. E. Salgado. Control System Design. Prentice Hall, NJ, 2001.

G. C. Goodwin and K. S. Sin. Adaptive Filtering Prediction and Control. Prentice-Hall, Englewood Cliffs, NJ, 1984.

M. Günther, S. Grimmer, T. Siebert, and R. Blickhan. All leg joints contribute to quiet human stance: A mechanical analysis. Journal of Biomechanics, 42(16):2739–2746, 2009. doi:10.1016/j.jbiomech.2009.08.014.

M. Günther, O. Müller, and R. Blickhan. Watching quiet human stance to shake off its straitjacket. Archive of Applied Mechanics, 81(3):283–302, 2011. doi:10.1007/s00419-010-0414-y.

M. Günther, O. Müller, and R. Blickhan. What does head movement tell about the minimum number
of mechanical degrees of freedom in quiet human stance? Archive of Applied Mechanics, 82(3):333–344, 2012. doi:10.1007/s00419-011-0559-3.

S. Hanneton, A. Berthoz, J. Droulez, and J. J. E. Slotine. Does the brain use sliding variables for the control of movements? Biological Cybernetics, 77(6):381–393, 1997. doi:10.1007/s004220050398.

R. M. Hardwick, C. Rottschy, R. C. Miall, and S. B. Eickhoff. A quantitative meta-analysis and review of motor learning in the human brain. NeuroImage, 67(0):283–297, 2013. doi:10.1016/j.neuroimage.2012.11.020.

W. Heemels, J. H. Sandee, and P. P. J. V. D. Bosch. Analysis of event-driven controllers for linear systems. International Journal of Control, 81(4):571–590, 2008. doi:10.1080/00207170701506919.

J. P. Hespanha, P. Naghshtabrizi, and Y. Xu. A survey of recent results in networked control systems. Proceedings of the IEEE, 95(1):138–162, 2007. doi:10.1109/JPROC.2006.887288.

M. W. Hirsch, S. Smale, and R. L. Devaney. Differential Equations, Dynamical Systems, and an Introduction to Chaos, 3rd edition. Academic Press, Amsterdam, 2012.

N. Hogan. Adaptive control of mechanical impedance by coactivation of antagonist muscles. IEEE Trans-

O. L. R. Jacobs. Introduction to Control Theory. Oxford University Press, 1974.

O. L. R. Jacobs and J. W. Patchell. Caution and probing in stochastic control. International Journal of Control, 16(1):189–199, 1972. doi:10.1080/00207177208932252.

Y. Jiang and N. Kanwisher. Common neural substrates for response selection across modalities and mapping paradigms. Journal of Cognitive Neuroscience, 15(8):1080–1094, 2003. doi:10.1162/089892903322598067.

R. Johansson, M. Magnusson, and M. Akesson. Identification of human postural dynamics. IEEE Transactions on Biomedical Engineering, 35(10):858–869, 1988. doi:10.1109/10.7293.

S. Julier, J. Uhlmann, and H. F. Durrant-Whyte. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Transactions on Automatic Control, 45(3):477–482, 2000. doi:10.1109/9.847726.

J. C. Kalkkuhl, K. J. Hunt, R. Zbikowski, and A. Dzielinski, Eds. Applications of Neural Adaptive Control Technology. Volume 17. Robotics and Intelligent Systems Series. World Scientific, Singapore, 1997.

S. G. Khan, G. Herrmann, F. L. Lewis, T. Pipe, and
actions on Automatic Control, 29(8):681–690, 1984. C. Melhuish. Reinforcement learning and optimal
doi:10.1109/TAC.1984.1103644. adaptive control: An overview and implementation
examples. Annual Reviews in Control, 36(1):42–59, 2012.
I. M. Horowitz. Synthesis of Feedback Systems. Academic doi:10.1016/j.arcontrol.2012.03.004.
Press, Amsterdam, 1963.
O. Khatib. A unified approach for motion and force
J. C. Houk, C. Bastianen, D. Fansler, A. Fishbach, control of robot manipulators: The operational space
D. Fraser, P. J. Reber, S. A. Roy, and L. S. Simo. formulation. IEEE Journal of Robotics and Automation,
Action selection and refinement in subcortical loops 3(1):43–53, 1987. doi:10.1109/JRA.1987.1087068.
through basal ganglia and cerebellum. Philosoph-
ical Transactions of the Royal Society of London. D. Kleinman. Optimal control of linear systems
Series B, Biological Sciences, 362(1485):1573–1583, 2007. with time-delay and observation noise. IEEE Trans-
doi:10.1098/rstb.2007.2063. actions on Automatic Control, 14(5):524–527, 1969.
doi:10.1109/TAC.1969.1099242.
V. S. Huang and J. W. Krakauer. Robotic neurorehabil-
itation: A computational motor learning perspective. D. L. Kleinman, S. Baron, and W. H. Levison. An
Journal of Neuroengineering and Rehabilitation, 6:5, 2009. optimal control model of human response part I:
doi:10.1186/1743-0003-6-5. Theory and validation. Automatica, 6:357–369, 1970.
doi:10.1016/0005-1098(70)90051-8.
K. J. Hunt, R. Zbikowski, D. Sbarbaro, and P. J.
Gawthrop. Neural networks for control systems— P. Kowalczyk, P. Glendinning, M. Brown, G. Medrano-
A survey. Automatica, 28(6):1083–1112, 1992. doi:10. Cerda, H. Dallali, and J. Shapiro. Modelling human
1016/0005-1098(92)90053-I. balance using switched systems with linear feedback
control. Journal of the Royal Society Interface, 9(67):234–
T. Insperger. Act-and-wait concept for continuous-time 245, 2012. doi:10.1098/rsif.2011.0212.
control systems with feedback delay. IEEE Trans-
actions on Control Systems Technology, 14(5):974–977, B. C. Kuo. Digital Control Systems. Holt, Reinhart and
2006. doi:10.1109/TCST.2006.876938. Winston, New York, 1980.
346 Event-Based Control and Signal Processing

H. Kwakernaak and R. Sivan. Linear Optimal Control I. D. Loram, C. N. Maganaris, and M. Lakie.
Systems. Wiley, New York, 1972. Human postural sway results from frequent,
ballistic bias impulses by soleus and gastrocne-
M. Lakie, N. Caplan, and I. D. Loram. Human bal- mius. Journal of Physiology, 564(Pt 1):295–311, 2005.
ancing of an inverted pendulum with a compliant doi:10.1113/jphysiol.2004.076307.
linkage: Neural control by anticipatory intermittent
bias. The Journal of Physiology, 551(1):357–370, 2003. I. D. Loram, C. van de Kamp, H. Gollee, and P. J.
doi:10.1113/jphysiol.2002.036939. Gawthrop. Identification of intermittent control in
man and machine. Journal of the Royal Society Interface,
M. Lakie and I. D. Loram. Manually controlled human 9(74):2070–2084, 2012. doi:10.1098/rsif.2012.0142.
balancing using visual, vestibular and proprioceptive
senses involves a common, low frequency neural pro- I. D. Loram, C. van de Kamp, M. Lakie, H. Gollee, and
cess. The Journal of Physiology, 577(Pt 1):403–416. 2006. P. J. Gawthrop. Does the motor system need
doi:10.1113/jphysiol.2006.116772. intermittent control? Exercise and Sport Sciences
Reviews, 42(3):117–125, 2014. doi:10.1249/JES.
M. Latash. The bliss (not the problem) of motor abun- 0000000000000018.
dance (not redundancy). Experimental Brain Research,
217:1–5, 2012. doi:10.1007/s00221-012-3000-4. J. M. Maciejowski. Predictive Control with Constraints.
Prentice Hall, Englewood Cliffs, NJ, 2002.
W. H. Levison, S. Baron, and D. L. Kleinman.
J. M. Maciejowski. Reverse engineering existing con-
A model for human controller remnant. IEEE Trans-
trollers for MPC design. In Proceedings of the IFAC Sym-
actions on Man-Machine Systems, 10(4):101–108, 1969.
posium on System Structure and Control, pp. 436–441,
doi:10.1109/TMMS.1969.299906.
October 2007.
Y. Liu and H. Jiang. Exponential stability of genetic reg-
A. Mamma, H. Gollee, P. J. Gawthrop, and I. D. Loram.
ulatory networks with mixed delays by periodically
Intermittent control explains human motor remnant
intermittent control. Neural Computing and Applica-
without additive noise. In 2011 19th Mediterranean
tions, 21(6):1263–1269, 2012. doi:10.1007/s00521-011-
Conference on Control Automation (MED), pp. 558–563,
0551-4.
June 2011. doi:10.1109/MED.2011.5983113.
L. Ljung. System Identification: Theory for the User, 2nd G. McLachlan and D. Peel. Finite Mixture Models. Wiley,
edition. Information and Systems Science. Prentice- New York, 2000.
Hall, Englewood Cliffs, NJ, 1999.
D. McRuer. Human dynamics in man-machine sys-
I. D. Loram, P. Gawthrop, and M. Lakie. The fre- tems. Automatica, 16:237–253, 1980. doi:10.1016/0005-
quency of human, manual adjustments in balanc- 1098(80)90034-5.
ing an inverted pendulum is constrained by intrinsic
physiological factors. Journal of Physiology, 577(1):403– R. C. Miall, D. J. Weir, and J. F. Stein. Manual track-
416, 2006. doi:10.1113/jphysiol.2006.118786. ing of visual targets by trained monkeys. Behavioural
Brain Research, 20(2):185–201, 1986. doi:10.1016/0166-
I. D. Loram, H. Gollee, M. Lakie, and P. Gawthrop. 4328(86)90003-3.
Human control of an inverted pendulum: Is con-
tinuous control necessary? Is intermittent control R. C. Miall, D. J. Weir, and J. F. Stein. Inter-
effective? Is intermittent control physiologi- mittency in human manual tracking tasks. Jour-
cal? The Journal of Physiology, 589:307–324, 2011. nal of Motor Behavior, 25:53–63, 1993a. doi:10.1080/
doi:10.1113/jphysiol.2010.194712. 00222895.1993.9941639.
R. C. Miall, D. J. Weir, D. M. Wolpert, and
I. D. Loram and M. Lakie. Human balancing of
J. F. Stein. Is the cerebellum a Smith predic-
an inverted pendulum: Position control by small,
tor? Journal of Motor Behavior, 25:203–216, 1993b.
ballistic-like, throw and catch movements. Journal
doi:10.1080/00222895.1993.9942050.
of Physiology, 540(3):1111–1124, 2002. doi:10.1113/
jphysiol.2001.013077. W. T. Miller, R. S. Sutton, and P. J. Werbos. Neural
Networks for Control. MIT Press, Cambridge, MA, 1990.
I. D. Loram, M. Lakie, and P. J. Gawthrop. Visual
control of stable and unstable loads: What is the L. A. Montestruque and P. J. Antsaklis. On the
feedback delay and extent of linear time-invariant model-based control of networked systems. Auto-
control? Journal of Physiology, 587(6):1343–1365, 2009. matica, 39(10):1837–1843, 2003. doi:10.1016/S0005-
doi:10.1113/jphysiol.2008.166173. 1098(03)00186-9.
Intermittent Control in Man and Machine 347

J. R. Moyne and D. M. Tilbury. The emergence of indus- R. J. Peterka. Sensorimotor integration in human postu-
trial control networks for manufacturing control, diag- ral control. The Journal of Neurophysiology, 88(3):1097–
nostics, and safety data. Proceedings of the IEEE, 1118, 2002.
95(1):29–47, 2007. doi:10.1109/JPROC.2006.887325.
R. Pintelon and J. Schoukens. System Identification.
G. N. Nair and R. J. Evans. Exponential stabilisability A Frequency Domain Approach. IEEE Press, New York,
of finite-dimensional linear systems with limited data 2001.
rates. Automatica, 39(4):585–593, 2003.
R. Pintelon, J. Schoukens, and Y. Rolain. Frequency
G. N. Nair, R. J. Evans, I. M. Y. Mareels, and domain approach to continuous-time identification:
W. Moran. Topological feedback entropy and non- Some practical aspects. In Identification of Continuous-
linear stabilization. IEEE Transactions on Automatic Time Models from Sampled Data, pp. 215–248. Springer,
Control, 49(9):1585–1597, 2004. 2008.

G. N. Nair, F. Fagnani, S. Zampieri, and R. J. Evans. I. J. Pinter, R. van Swigchem, A. J. K. van Soest, and
Feedback control under data rate constraints: An L. A. Rozendaal. The dynamics of postural sway
overview. Proceedings of the IEEE, 95(1):108–137, 2007. cannot be captured using a one-segment inverted
doi:10.1109/JPROC.2006.887294. pendulum model: A PCA on segment rotations dur-
ing unperturbed stance. Journal of Neurophysiology,
F. Navas and L. Stark. Sampling or Intermittency in 100(6):3197–3208, 2008. doi:10.1152/jn.01312.2007.
Hand Control System Dynamics. Biophysical Journal,
8(2):252–302, 1968. E. C. Poulton. Tracking Skill and Manual Control. Aca-
demic Press, New York, 1974.
P. D. Neilson and M. D. Neilson. An overview of
J. B. Rawlings. Tutorial overview of model predictive
adaptive model theory: Solving the problems of
control. IEEE Control Systems Magazine, 20(3):38–52,
redundancy, resources, and nonlinear interactions in
2000.
human movement control. Journal of Neural Engineer-
ing, 2(3):S279–S312, 2005. doi:10.1152/jn.01144.2004. J. B. Rawlings and K. R. Muske. The stability of con-
strained receding horizon control. IEEE Transactions on
P. D. Neilson, M. D. Neilson, and N. J. O’Dwyer. Inter-
Automatic Control, 38(10):1512–1516, 1993.
nal models and intermittency: A theoretical account
of human tracking behaviour. Biological Cybernetics, E. Ronco, T. Arsan, and P. J. Gawthrop. Open-loop
58:101–112, 1988. doi:10.1007/BF00364156. intermittent feedback control: Practical continuous-
time GPC. IEE Proceedings Part D: Control Theory
K. M. Newell, K. M. Deutsch, J. J. Sosnoff, and G. Mayer- and Applications, 146(5):426–434, 1999. doi:10.1049/
Kress. Variability in motor output as noise: A default ip-cta:19990504.
and erroneous proposition? In Movement system vari-
ability (K. Davids, S. Bennett and K.M. Newell, Eds.). S. A. Safavynia and L. H. Ting. Task-level feed-
Human Kinetics Publishers, Champaign, IL, 2006. back can explain temporal recruitment of spatially
fixed muscle synergies throughout postural perturba-
J. Nocedal and S. J. Wright. Numerical Optimization, tions. Journal of Neurophysiology, 107(1):159–177, 2012.
2nd edition. Springer Series in Operations Research. doi:10.1152/jn.00653.2011.
Springer Verlag, Berlin, 2006.
A. P. Sage and J. J. Melsa. Estimation Theory with Appli-
T. M. Osborne. An investigation into the neural mech- cations to Communication and Control. McGraw-Hill,
anisms of human balance control. PhD thesis, School New York, 1971.
of Sport and Exercise Sciences, University of Birming-
ham, 2013. http://etheses.bham.ac.uk/3918/. A. Sala. Computer control under time-varying sam-
pling period: An LMI gridding approach. Automat-
D. H. Owens. Stability of linear multipass processes. ica, 41(12):2077–2082, 2005. doi:10.1016/j.automatica.
Proceedings of the Institution of Electrical Engineers, 2005.05.017.
124(11):1079–1082, 1977. doi:10.1049/piee.1977.0220.
A. Sala. Improving performance under sampling-rate
H. Pashler and J. C. Johnston. Attentional limitations in variations via generalized hold functions. IEEE Trans-
dual-task performance. In Attention, H. Pashler, Ed., actions on Control Systems Technology, 15(4):794–797,
pp. 155–189. Psychology Press, 1998. 2007. doi:10.1109/TCST.2006.890302.
348 Event-Based Control and Signal Processing

N. Sandell, P. Varaiya, M. Athans, and M. Safonov. C. W. Telford. The refractory phase of voluntary and
Survey of decentralized control methods for large associative responses. Journal of Experimental Psychol-
scale systems. IEEE Transactions on Automatic Control, ogy, 14(1):1–36, 1931. doi:10.1037/h0073262.
23(2):108–128, 1978. doi:10.1109/TAC.1978.1101704.
L. H. Ting. Dimensional reduction in sensorimotor
S. J. Schiff. Neural Control Engineering: The Emerging Inter- systems: A framework for understanding muscle
section between Control Theory and Neuroscience. Com- coordination of posture. In Computational Neuro-
putational Neuroscience. MIT Press, Cambridge, MA, science: Theoretical Insights into Brain Function, Vol-
2012. ume 165 of Progress in Brain Research, T. D. P. Cisek
and J. F. Kalaska, Eds., pp. 299–321. Elsevier, 2007.
R. Seidler. Neural correlates of motor learning,
doi:10.1016/S0079-6123(06)65019-X.
transfer of learning, and learning to learn. Exer-
cise & Sport Sciences Reviews, 38(1):3–9, 2010. E. Todorov. Optimality principles in sensorimotor con-
doi:10.1097/JES.0b013e3181c5cce7. trol (review). Nature Neuroscience, 7(9):907–915, 2004.
R. Shadmehr and F. A. Mussa-Ivaldi. Adaptive represen- doi:10.1038/nn1309.
tation of dynamics during learning of a motor task.
E. Todorov and M. I. Jordan. Optimal feedback control
The Journal of Neuroscience, 14(5):3208–3224, 1994.
as a theory of motor coordination. Nature Neuroscience,
R. Shadmehr and S. Mussa-Ivaldi. Biological Learning 5(11):1226–1235, 2002. doi:10.1038/nn963.
and Control. Computational Neuroscience. MIT Press,
Cambridge, MA, 2012. H. Unbehauen and G. P. Rao. Identification of Continuous
Systems. North-Holland, Amsterdam, 1987.
R. Shadmehr and S. P. Wise. Computational Neurobiol-
ogy of Reaching and Pointing: A Foundation for Motor H. Unbehauen and G. P. Rao. Continuous-time
Learning. MIT Press, Cambridge, MA, 2005. approaches to system identification—A survey.
Automatica, 26(1):23–35, 1990. doi:10.1016/0005-
S. Skogestad and I. Postlethwaite. Multivariable Feedback 1098(90)90155-B.
Control Analysis and Design. Wiley, New York, 1996.
C. van de Kamp, P. Gawthrop, H. Gollee, M. Lakie,
O. J. M. Smith. A controller to overcome dead-time. ISA and I. D. Loram. Interfacing sensory input with
Transactions, 6(2):28–33, 1959. motor output: Does the control architecture con-
G. Stein. Respect the unstable. IEEE Control Sys- verge to a serial process along a single channel?
tems Magazine, 23(4):12–25, 2003. doi:10.1109/MCS. Frontiers in Computational Neuroscience, 7(55), 2013a.
2003.1213600. doi:10.3389/fncom.2013.00055.

G. Stepan and T. Insperger. Stability of time-periodic C. van de Kamp, P. J. Gawthrop, H. Gollee, and
and delayed systems—A route to act-and-wait con- I. D. Loram. Refractoriness in sustained visuo-
trol. Annual Reviews in Control, 30(2):159–168, 2006. manual control: Is the refractory duration intrinsic
doi:10.1016/j.arcontrol.2006.08.002. or does it depend on external system properties?
PLOS Computational Biology, 9(1):e1002843, 2013b.
R. S. Sutton and A. G. Barto. Reinforcement Learning: An doi:10.1371/journal.pcbi.1002843.
Introduction. Cambridge University Press, 1998.
H. Van Der Kooij, R. Jacobs, B. Koopman, and
R. S. Sutton, A. G. Barto, and R. J. Williams.
H. Grootenboer. A multisensory integration model of
Reinforcement learning is direct adaptive optimal
human stance control. Biological Cybernetics, 80:299–
control. IEEE Control Systems, 12(2):19–22, 1992.
308, 1999. doi:10.1007/s004220050527.
doi:10.1109/37.126844.
C. J. Taylor, A. Chotai, and P. C. Young. Continuous- H. Van Der Kooij, R. Jacobs, B. Koopman, and F. Van
time proportional-integral derivative-plus (PIP) con- Der Helm. An adaptive model of sensory integra-
trol with filtering polynomials. In Proceedings of tion in a dynamic environment applied to human
the UKACC Conference “Control ’98”, pp. 1391–1396, stance control. Biological Cybernetics, 84:103–115, 2001.
Swansea, UK, September 1998. doi:10.1007/s004220050527.

K. Tee, D. Franklin, M. Kawato, T. Milner, and E. Bur- M. A. Vince. The intermittency of control movements
det. Concurrent adaptation of force and impedance and the psychological refractory period. British Jour-
in the redundant muscle system. Biological Cybernetics, nal of Psychology, 38:149–157, 1948. doi:10.1111/j.2044-
102:31–44, 2010. doi:10.1007/s00422-009-0348-z. 8295.1948.tb01150.x.
Intermittent Control in Man and Machine 349

D. Vrabie and F. Lewis. Neural network approach P. Young. Parameter estimation for continuous-time
to continuous-time direct adaptive optimal control models: A survey. Automatica, 17(1):23–39, 1981.
for partially unknown nonlinear systems. Neural doi:10.1016/0005-1098(81)90082-0.
Networks, 22(3):237–246, 2009. doi:10.1016/j.neunet.
2009.03.008. P. C. Young, M. A. Behzadi, C. L. Wang, and
A. Chotai. Direct digital and adaptive control by
L. Wang. Model Predictive Control System Design and input-output state variable feedback pole assignment.
Implementation Using MATLAB, 1st edition. Springer, International Journal of Control, 46(6):1867–1881, 1987.
Berlin, 2009. doi:10.1080/00207178708934021.

Y. Wang, F. Gao, and F. J. Doyle III. Survey on iterative R. Zbikowski and K. J. Hunt, Ed. Neural Adaptive Con-
learning control, repetitive control, and run-to-run trol Technology. Volume 15. Robotics and Intelligent
control. Journal of Process Control, 19(10):1589–1600, Systems Series. World Scientific, Singapore, 1996.
2009. doi:10.1016/j.jprocont.2009.09.006.
P. V. Zhivoglyadov and R. H. Middleton. Net-
N. Wiener. Cybernetics: Or the Control and Communication worked control design for linear systems.
in the Animal and the Machine, 2nd edition. MIT Press, Automatica, 39(4):743–750, 2003. doi:10.1016/S0005-
Cambridge, MA, 1965. 1098(02)00306-0.

D. A. Winter. Biomechanics and Motor Control of Human S.-H. Zhou, D. Oetomo, Y. Tan, E. Burdet, and I. Mareels.
Movement, 4th edition. Wiley, New York, 2009. Modeling individual human motor behavior through
model reference iterative learning control. IEEE
D. M. Wolpert, R. C. Miall, and M. Kawato. Internal Transactions on Biomedical Engineering, 59(7):1892–1901,
models in the cerebellum. Trends in Cognitive Sciences, 2012. doi:10.1109/TBME.2012.2192437.
2:338–347, 1998. doi:10.1016/S1364-6613(98)01221-2.

J.-X. Xu and Y. Tan. Linear and Nonlinear Iterative Learning


Control. Volume 291. Springer, Berlin, 2003.
Part II

Event-Based Signal Processing


15
Event-Based Data Acquisition and Digital Signal Processing in Continuous Time

Yannis Tsividis
Columbia University
New York, NY, USA

Maria Kurchuk
Pragma Securities
New York, NY, USA

Pablo Martinez-Nuevo
Massachusetts Institute of Technology
Cambridge, MA, USA

Steven M. Nowick
Columbia University
New York, NY, USA

Sharvil Patil
Columbia University
New York, NY, USA

Bob Schell
Analog Devices Inc.
Somerset, NJ, USA

Christos Vezyrtzis
IBM
Yorktown Heights, NY, USA

CONTENTS
15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
15.2 Level-Crossing Sampling and CT Amplitude Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
15.2.1 Level-Crossing Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
15.2.2 Relation of Level-Crossing Sampling to CT Amplitude Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
15.2.3 Spectral Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
15.2.4 Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
15.2.4.1 Amplitude Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
15.2.4.2 Time Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
15.2.5 Other Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
15.2.6 Reducing the Required Timing Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
15.3 Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
15.4 Derivative Level-Crossing Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
15.5 Continuous-Time Digital Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
15.5.1 Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
15.5.2 “Continuous-Time” versus “Asynchronous” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
15.5.3 Time Discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365

15.6 Versatility of CT DSP in Processing Time-Coded Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366


15.7 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
15.7.1 A Voice-Band CT ADC/DSP/DAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
15.7.2 A General-Purpose CT DSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
15.7.3 A CT ADC/DSP/DAC Operating at GHz Frequencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
15.8 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
Acknowledgment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
ABSTRACT We review techniques in which uniform sampling is abandoned, with samples generated and processed instead only when there is an input event, thus causing dynamic energy use only when demanded by the information in the signal. Methods to implement event-based A/D converters and digital signal processors (DSPs) in this context are reviewed. It is shown that, compared with traditional techniques, the techniques reviewed here lead to circuits that completely avoid aliasing, respond immediately to input changes, result in lower in-band quantization error, and exhibit dynamic power dissipation that decreases when the input activity decreases. Experimental results from recent test chips, operating at kilohertz (kHz) to gigahertz (GHz) signal frequencies, are reviewed.

15.1 Introduction

This chapter discusses research on event-based, continuous-time (CT) signal acquisition and digital signal processing. The aim of this work is to enable use of resources only when there is a reason to do so, as dictated by the signal itself, rather than by a clock. Among the several potential benefits of this approach is energy savings, which is important in a large number of portable applications in which long battery life is a must. An example can be found in sensor networks [1,2], in which the nodes sense one or more physical quantities (temperature, pressure, humidity, magnetic field intensity, light intensity, etc.) and perform computation and communication. Another example is biomedical devices [3–6], which can be wearable, implantable, or ingestible; in these, similar functions as above are performed. A third example is personal communication devices.

Key components in such applications include analog-to-digital converters (ADCs) and digital signal processors (DSPs). In conventional systems, these utilize a fixed sampling frequency which, according to the Nyquist theorem, must be higher than twice the highest expected signal frequency. When the input properties are more relaxed (lower frequency content, bursty signals, or even periods of silence), the high sampling frequency simply wastes energy and transmission resources. To avoid this, one can use some form of nonuniform sampling [7], with the local sampling rate adapted to the properties of the signal. There have been many attempts to do this over several decades [8–14]; the techniques developed generally go by the name of adaptive sampling rate, variable sampling rate, or signal-dependent sampling. In one approach, the input is divided into segments, with the sampling rate varied from segment to segment, but kept constant within each segment [13,15]. Another approach uses a sampling rate that varies continuously in proportion to the magnitude of the slope of the signal, with the slope value obtained by a slope detector [8,9]. A similar effect is obtained by level-crossing sampling [11,12,14], illustrated in Figure 15.1a; samples are generated only when the input changes enough to cross any one of a set of amplitude levels, shown by broken lines. For signals varying less rapidly, samples are generated less frequently; the power dissipation of the associated circuitry can thus also be allowed to decrease, as illustrated in Figure 15.1b. In addition to the systems mentioned above, a variety of other systems can benefit from this approach; these include ones involving variable frequency and signals obtained from variable-speed sources, such as those encountered in Doppler-shifted signals or disk drives.

FIGURE 15.1
(a) Input level-crossing sampling; (b) corresponding ADC and DSP power dissipation. (From Y. Tsividis, IEEE Transactions on Circuits and Systems II: Express Briefs, 57(8), 577–581, © 2010 IEEE.)

The material in this chapter expands on review papers [16–18], and uses material from [18] with copyright permission from IEEE (Copyright © 2010 IEEE). Level-crossing sampling can be viewed as one type of event-driven or event-based sampling, which can be loosely defined as taking samples only when a significant event occurs, such as a notable change in the
quantity being monitored. Event-based and adaptive-rate control systems have been studied for a long time [8,9,19–28] and have been reviewed elsewhere [29,30], as well as in the first part of this book. Several types of event-based sampling are discussed in the above references, in the context of control. The context of this chapter is, instead, signal processing and associated data acquisition. In order to provide some focus, the discussion will concentrate mostly on level-crossing sampling in CT. However, most of the principles presented are valid for other types of event-based sampling, and for finely discretized time; comments on these will be made where appropriate. Event-based-sampled signals can be processed using event-driven digital signal processing (DSP) [31], as will be discussed later on.

Event-based systems can place demands on the accuracy with which the timing of events is recorded and processed. This shift from amplitude accuracy to timing accuracy is compatible with the advent of modern nanometer very large scale integration (VLSI) fabrication processes [32], which make higher speeds possible, but require reduced supply voltages, thus making amplitude operations increasingly difficult. Thus, the promise of the techniques discussed can be expected to improve as chip fabrication technology advances.

FIGURE 15.2
(a) Input signal (x(t)), threshold levels (broken lines), level-crossing samples (dots), and quantized signal (xq(t)); (b) CT digital representation of quantized signal; (c) CT delta mod representation of quantized signal. (From Y. Tsividis, IEEE Transactions on Circuits and Systems II: Express Briefs, 57(8), 577–581, © 2010 IEEE.)
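The activity-dependent workload described in the Introduction (Figure 15.1) can be made concrete with a short simulation. The sketch below is illustrative only: the dense time grid standing in for continuous time, the full-scale 50 Hz burst followed by near-silence, and the six-level threshold grid are all assumptions for the demonstration, not parameters taken from the chapter.

```python
import math

def level_crossing_events(x, levels):
    """Indices at which the waveform crosses any of the given levels."""
    events = []
    for i in range(1, len(x)):
        lo, hi = min(x[i - 1], x[i]), max(x[i - 1], x[i])
        # One event per threshold lying between successive values.
        events.extend(i for L in levels if lo < L <= hi)
    return events

# Bursty test input: a full-scale 50 Hz tone for 1 s, then near-silence.
dt = 1e-4
t = [k * dt for k in range(20_000)]
x = [math.sin(2 * math.pi * 50 * tk) if tk < 1.0 else 0.001 for tk in t]

levels = [i / 4 for i in (-3, -2, -1, 1, 2, 3)]  # six thresholds, no zero

events = level_crossing_events(x, levels)
active = sum(1 for i in events if t[i] < 1.0)
silent = len(events) - active

# Every event falls in the active burst; the quiet stretch crosses no
# threshold, so event-driven circuitry could idle there (Figure 15.1b),
# whereas a clocked converter would keep sampling at its fixed rate.
print(active, silent)
```

During the burst, events arrive at a high rate (each cycle crosses every threshold twice); during the near-constant tail they stop entirely, which is exactly the behavior that allows the power dissipation of Figure 15.1b to collapse when the input is inactive.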

15.2 Level-Crossing Sampling and CT Amplitude Quantization

15.2.1 Level-Crossing Sampling

Level-crossing sampling [11,12,14], already introduced in Figure 15.1, is further illustrated in Figure 15.2a. A CT input signal x(t) is compared to a set of discrete amplitude levels, shown by horizontal broken lines; samples, shown as dots, are taken at the instants such levels are crossed. (The waveform xq(t) in part (a), and those in parts (b) and (c), should be ignored until further notice.) The pairs (tk, x(tk)) form a representation of the signal. No sampling clock is used; the signal, in some sense, samples itself, in a way dual to conventional sampling; the sampling can be considered, in some sense, to occur along the amplitude axis, rather than the time axis. Low-frequency and low-amplitude inputs are sampled less densely in time than high-frequency, high-amplitude ones, and periods of silence are not sampled at all. However, the resulting average sampling rate can be very high if many levels are used; this can raise the power dissipation and impose excessive demands on hardware speed. Ways to reduce this problem are discussed in Section 15.2.6. Further discussion of level-crossing sampling can be found elsewhere [11,12,14], including the use of adaptive reference levels [33–36]. Related approaches and variants can be found in [37–40].

Level-crossing ADCs, due to their attention to time information, can be considered a form of time-based ADCs [41].

15.2.2 Relation of Level-Crossing Sampling to CT Amplitude Quantization

Consider now a CT signal passed through a CT quantizer (without sampling), as in Figure 15.3a, with decision levels as in Figure 15.2a; when a level is crossed, the output of the quantizer xq(t) goes to the closest quantized value, chosen in between the decision levels. There is no clock in the system. The quantizer's output, xq(t), is shown in Figure 15.2a. It is clear from that figure that xq(t) inherently contains all the information contained in the level-crossing samples, and that the latter can be reproduced from the former, and vice versa. It is thus seen that level-crossing sampling, with zero-order reconstruction, can be viewed as amplitude quantization with no sampling at all. This point of view will be seen to lead to a convenient way of deducing the spectral properties of the signals under discussion, as well as to lead to
356 Event-Based Control and Signal Processing

appropriate hardware implementations. Unless stated otherwise, the term "level-crossing sampling" will be taken to imply also zero-order reconstruction.

15.2.3 Spectral Properties

Level-crossing sampling, followed by zero-order reconstruction, results in interesting spectral properties. To begin, we note that it results in no aliasing; this is shown in [42–44] and can be understood intuitively in two ways. One way is to note that high-frequency signals are automatically sampled at a higher rate than low-frequency ones; the other way is to note that the equivalent operation of CT quantization (Section 15.2.2) in Figure 15.3a involves no sampling at all, and thus no aliasing can occur. This property of level-crossing sampling in CT seems not to have been recognized in the early references on level-crossing sampling, perhaps because in those references the time was assumed discretized.

Consider now a sinusoidal input x(t) as an example (Figure 15.3a); xq(t) will be periodic, and can be represented by a Fourier series, containing only harmonics, with no error spectral components in between. As many of the harmonics fall outside the baseband of the following DSP, suggested by a broken-line frequency response in Figure 15.3, the in-band error can be significantly lower than in classical systems involving uniform sampling plus quantization (Figure 15.3b), typically by 10–20 dB [31,42–44].

It is interesting to consider for a moment what it is that makes the spectral properties of the quantizer output so different from those in the conventional case, where a signal is first uniformly sampled and then quantized, as shown in Figure 15.3b. Let the frequency of the sinusoidal input be f_in; if this input is sampled at a frequency f_s, the output of the sampler will contain components at frequencies m f_s ± f_in, where m is an integer; these components theoretically extend to arbitrarily high frequencies. When this spectrum is now passed through the quantizer in Figure 15.3b, the nonlinearities inherent in the latter will cause harmonic distortion as well as intermodulation between the above components; in the general case, many of the intermodulation components fall in-band, as suggested by the spectrum in Figure 15.3b, and constitute what is commonly called "quantization noise." This represents extra in-band quantization error, which is absent in Figure 15.3a. (A different, but equivalent, point of view is suggested elsewhere [42,43].) Thus, by avoiding a sampling clock, one not only can achieve event-driven energy dissipation, but can also achieve lower quantization error and better spectral properties. While the above discussion uses a sinusoidal input as a simple example, more complex inputs lead to the same conclusion. The spectra of level-crossing-sampled signals are discussed in detail in another chapter in this book [45].

The use of dithering can result in further improved spectral properties for low-level signals, at the expense of an increased average sampling rate [46].

15.2.4 Encoding

15.2.4.1 Amplitude Coding

The quantized or level-crossing-sampled signal in Figure 15.2a can be represented digitally as shown in Figure 15.2b; this can be accomplished, for example, by a flash ADC without a clock, as in Figure 15.4 [31,47]. The binary signals generated are functions of continuous

FIGURE 15.3
Output and its spectrum with a sinusoidal input, with (a) quantization only, and (b) uniform sampling and quantization. (From Y. Tsividis, IEEE Transactions on Circuits and Systems II: Express Briefs, 57(8), 577–581 © 2010 IEEE.)

FIGURE 15.4
Principle of CT flash ADC.
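The equivalence stated in Section 15.2.2 (level-crossing sampling with zero-order reconstruction behaves as amplitude quantization with no sampling at all) can be checked numerically. The sketch below is hypothetical Python, not from the chapter: a finely spaced time grid stands in for continuous time, and a mid-rise quantizer with step DELTA stands in for the decision-level arrangement of Figure 15.3a.

```python
import math

DELTA = 0.25                                       # quantization step / level spacing

def quantize(v):
    """Clockless mid-rise quantizer: decision levels at multiples of DELTA,
    output values halfway in between (as in Figure 15.3a)."""
    return DELTA * math.floor(v / DELTA) + DELTA / 2

# A fine time grid stands in for continuous time
t = [n * 1e-6 for n in range(3000)]
x = [0.9 * math.sin(2 * math.pi * 1e3 * tn) for tn in t]

# CT quantization: the quantizer output xq(t); no clock anywhere
xq = [quantize(v) for v in x]

# Level-crossing sampling: a sample is taken only when a decision level
# is crossed, i.e., when the quantizer output changes
samples = [(t[0], xq[0])]
for i in range(1, len(t)):
    if xq[i] != xq[i - 1]:
        samples.append((t[i], xq[i]))

# Zero-order reconstruction: hold the last sample between crossings
recon, k = [], 0
for tn in t:
    while k + 1 < len(samples) and samples[k + 1][0] <= tn:
        k += 1
    recon.append(samples[k][1])

assert recon == xq     # LCS + zero-order hold == amplitude quantization
```

Because every crossing is recorded at the instant it occurs, holding the last sample between crossings reproduces the quantizer output exactly; no information is lost in either direction.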
Event-Based Data Acquisition and Digital Signal Processing in Continuous Time 357
FIGURE 15.5
Implementation of CT asynchronous delta modulation ADC obtained from [52], with the clock removed; a parallel output is also available.
time; their transitions can occur at any time instant, as dictated by the signal itself, as shown in Figure 15.2b.

Alternatively, one can signal just the times a level is crossed, plus the direction of crossing, as shown in Figure 15.2c (the resulting waveform can then be encoded in two bits); this results in asynchronous delta modulation [48–50]. The original signal values can be found from this signal by using an accumulator and an initial value. The accumulation operation can be performed by an analog integrator [49,50] or by a counter plus a digital-to-analog converter (DAC) [51,52]. This approach is shown in Figure 15.5; it is obtained from that in [52], with the clock removed.

For applications in which only a few quantization levels are needed, one can also use thermometer coding [53].

Several implementations of ADCs for level-crossing sampling have been presented [47,52,54–67], including some incorporated into products [68] and ones described in recent patent applications by Texas Instruments [69–71]. Some of these [52,55] use quantized time, but can be adapted to CT operation [54,72].

The maximum input rate of change that can be successfully handled depends on hardware implementation details, and on the power dissipation one is willing to accept. The input rate of change can be limited by using an analog low-pass filter in front of the ADC. Other considerations concerning the minimum and maximum sampling intervals in event-driven systems can be found in the references [19,22]. Noise immunity is improved by incorporating hysteresis [19]. If the hysteresis width is made equal to two quantization levels, one obtains the "send-on-delta" scheme [73].

In general, the design of CT ADCs is challenging; the absence of a clock makes the design of comparators especially difficult. An important consideration is the fact that the comparison time depends on the rate of change of the input; ways to minimize this problem are discussed in [74,75]. The power dissipation of CT ADCs [76] is currently significantly higher than that of clocked ADCs; however, the development effort behind the latter is orders of magnitude larger than that devoted so far to CT ADCs, so it is too early to say what the eventual performance of CT ADCs will be.

Other ADC schemes, such as pipeline, have not so far been considered for CT implementation, to the authors' knowledge.

In the above type of encoding, information on the sample time instants is not explicitly encoded. The signals in Figure 15.2b and c are functions that evolve in real time; for real-time processing, the time instants do not need to be coded, just as time does not need to be coded for real-time processing of classical analog signals.

15.2.4.2 Time Coding

The event times tk can be encoded into digital words as well, and stored, if the time axis is finely quantized, resulting in pseudo-continuous operation; more conveniently, one would code the differences Δk = tk − tk−1 [11,12,14]. This is referred to as "time coding" [12]. Time discretization needs to be sufficiently fine in order to avoid introducing errors in the signal representation, as discussed in the context of CT DSPs in Section 15.5.3.

15.2.5 Other Techniques

While we have focused on level-crossing sampling above, several other ways to accomplish event-based sampling are possible. For example, one can base the sampling criterion not on the difference between the signal and its nearest quantization level, but rather on the integral of this difference [20,25], or on the difference between the input and a prediction of it [9,77]. A technique that relies on level-crossing sampling the derivative of the input is described in Section 15.4.
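The asynchronous delta modulation and time coding described above can be sketched as follows. This is hypothetical Python with illustrative values: each event carries only the direction of a level crossing, the receiver rebuilds the quantized signal with an accumulator and an initial value, and the event times are stored as consecutive differences.

```python
def delta_mod_encode(xq_values):
    """Encode successive quantized values as +1/-1 direction-of-crossing
    events, as in asynchronous delta modulation (Figure 15.2c)."""
    return [+1 if cur > prev else -1
            for prev, cur in zip(xq_values, xq_values[1:])]

def delta_mod_decode(events, initial, step):
    """Recover the quantized values with an accumulator and an initial value."""
    out = [initial]
    for d in events:
        out.append(out[-1] + d * step)
    return out

step = 0.25
xq = [0.0, 0.25, 0.5, 0.25, 0.0, -0.25, 0.0]   # one step per level crossing
events = delta_mod_encode(xq)
assert delta_mod_decode(events, xq[0], step) == xq

# Time coding: store the differences between consecutive event times
tk = [0.0, 1.2e-4, 1.9e-4, 3.3e-4]             # hypothetical event times tk
dk = [b - a for a, b in zip(tk, tk[1:])]       # Δk = tk − tk−1
```

In hardware, the accumulator role is played by an analog integrator or by the up/down counter plus DAC of Figure 15.5; the sketch only mirrors that behavior.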
15.2.6 Reducing the Required Timing Resolution

CT ADCs can produce a large number of samples as compared to Nyquist-rate ADCs, particularly when the input frequency and amplitude are large. Reference [78] presents a methodology for taking advantage of the spectral properties of the CT quantization error, to adapt the resolution of a CT ADC according to the magnitude of the input slope in such a way that practically no signal degradation occurs compared to the full-resolution case. Instead of tracking fast and large deviations in the input with numerous small steps, the signal can be represented with fewer, larger steps, without causing an increase in the in-band error power. This technique is now briefly summarized.

The error waveform, e(t) = xq(t) − x(t), that results from quantization contains bell-shaped segments that vary slowly, and sawtooth-like segments that vary fast, as shown in Figure 15.6 for a sinusoidal input. The former segments occur during slowly varying portions of the signal (e.g., near the peaks of the sinusoid in the example assumed), and contain power primarily at low frequencies. The sawtooth-like segments, in contrast, occur during high-slope portions of the input and contribute primarily high-frequency error power, which is typically outside the signal bandwidth. Since fast portions of the input do not contribute significantly to the in-band error power, they can be quantized with lower resolution using a variable-resolution quantizer, as illustrated in Figure 15.7. A larger quantization step for fast inputs causes an increase in the local magnitude of e(t), as well as an increase in high-frequency error power, but without increasing appreciably the error power that falls in the baseband, highlighted in gray in Figure 15.7c. The sampling instants, shown in Figure 15.7b, occur less frequently and with longer intervals during the low-resolution segments, emphasized by dashed boxes; this not only leads to a decrease in the number of samples, and thus in the ensuing power dissipation, but also allows for the constraints on the processing time of the hardware to be relaxed.

Considerations involved in the judicious choice of the level-skipping algorithm are given elsewhere [78]; it is shown there that a number of difficulties can be avoided by using the approach shown in Figure 15.8, which shows high- and low-resolution transfer curves for the ADC. Note that no new decision levels and output levels need to be generated in this approach. More than two resolution settings can be used.

A generalized block diagram of a variable-resolution CT ADC is shown in Figure 15.9a, with example signals provided in Figure 15.9b. A slope magnitude detector determines whether a slope threshold has been crossed and adjusts the step of the ADC accordingly. The slope thresholds are chosen to keep the low-resolution sawtooth error power several decades outside the baseband, according to a simple criterion [78]. When a slope threshold is surpassed, the quantization step and the maximum amplitude of the error triple, as shown in Figure 15.9b. The quantized signal has larger steps for fast portions of the input and finer steps during segments of low slope.

The variable-resolution function can be realized by increasing the quantization step of an ADC or by post-processing the output of a fixed-resolution ADC; the post-processor skips an appropriate number of samples before generating a variable-resolution output. The variable-resolution output has two extra bits: one bit to indicate whether the resolution has changed, and a second bit to indicate whether that slope has increased or decreased. The slope detector function can be realized without determining the input slope by comparing the time between quantization level crossings to a threshold time. The slope detector implementation proposed in [78] consists of a few digital gates and adds insignificantly to the hardware overhead. A related approach is discussed elsewhere [55]. Time granularity reduction is considered further in [79,80].

FIGURE 15.6
(a) Input sinusoid and its quantized version; (b) corresponding quantization error. (From M. Kurchuk and Y. Tsividis, IEEE Transactions on Circuits and Systems I, 57(5), 982–991 © 2010 IEEE.)

The performance of a variable-resolution ADC has been compared with that of a fixed-resolution ADC for a sinusoidal input in the voice band in [78]. Three resolution settings were used, with a maximum resolution
FIGURE 15.7
(a) Quantization error, (b) sampling instants, and (c) quantization error spectrum, with the baseband highlighted in gray, for ADCs with a fixed resolution (left) and a variable resolution (right). (From M. Kurchuk and Y. Tsividis, IEEE Transactions on Circuits and Systems I, 57(5), 982–991 © 2010 IEEE.)

FIGURE 15.8
Variable-resolution ADC transfer characteristic for two resolution settings. (From M. Kurchuk and Y. Tsividis, IEEE Transactions on Circuits and Systems I, 57(5), 982–991 © 2010 IEEE.)

of 8 bits and a quantization step variable by factors of three. By choosing the slope thresholds appropriately, the low-resolution error was kept outside the 20 kHz bandwidth. The number of samples produced by the variable-resolution ADC, as compared with a fixed-resolution one, was found to be lower by a maximum factor of about eight. The decrease in the number of samples can be expected to result in a decrease in the power dissipation of a following CT DSP.

A method to implement variable resolution on a silicon chip, using variable comparison windows, has been presented in [75].

A very different approach [53] for reducing the timing resolution required of the hardware is now explained, with the help of Figure 15.10. Consider a signal varying very fast (e.g., in the GHz range), as shown in Figure 15.10a. The required samples, shown as dots, will be very close to each other, and the least significant bit (LSB) of the representation in Figure 15.2b will have to toggle extremely fast, as shown in Figure 15.10b. To avoid this, one can consider per-level encoding, in which a separate comparator is used for each crossing level, as shown in Figure 15.10c. This removes the previous problem, as shown in Figure 15.10d for the left-hand lobe of the signal; however, if a local extremum of the signal happens to be around a crossing level, as shown in the right-hand lobe, the resulting two samples can be very close to each other, and thus the LSB will again have to toggle extremely fast, as shown in the right part of Figure 15.10d. This problem can be avoided by using a very different type of encoding, shown in Figure 15.10e.
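A behavioral sketch of this encoding, which the next paragraph explains in hardware terms: the LSB waveform is split into two signals, Rm and Fm, that toggle on its rising and falling edges, respectively, and the LSB is recovered as their XOR. The sketch is hypothetical Python (the names Rm and Fm follow Figure 15.10e), with CT binary waveforms represented as lists of (time, bit) transition events.

```python
def split_lsb(lsb_events):
    """Split an LSB waveform, given as (time, bit) transition events, into
    Rm (toggles on rising edges) and Fm (toggles on falling edges)."""
    rm = fm = prev = 0
    rm_hist, fm_hist = [], []
    for t, bit in lsb_events:
        if bit == 1 and prev == 0:
            rm ^= 1                  # up-going LSB transition toggles Rm
        elif bit == 0 and prev == 1:
            fm ^= 1                  # down-going LSB transition toggles Fm
        prev = bit
        rm_hist.append((t, rm))
        fm_hist.append((t, fm))
    return rm_hist, fm_hist

# An LSB with very narrow pulses (e.g., near a local extremum of the signal)
lsb = [(0.0, 0), (1.0, 1), (1.1, 0), (2.0, 1), (2.05, 0)]
rm, fm = split_lsb(lsb)

# The LSB is recovered at every instant as Rm XOR Fm
assert all(b == (r ^ f) for (_, b), (_, r), (_, f) in zip(lsb, rm, fm))
```

Each of Rm and Fm toggles at half the LSB's rate, so neither contains the narrow pulses of the original waveform; the price, as noted below, is two separately processed, matched paths.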
In it, flip-flops are used to generate two signals from the LSB: one (Rm) toggles on the up-going transitions of the LSB, and the other (Fm) on the down-going ones. These two signals now contain all information in the LSB (which can be reconstructed from them using an XOR operation) but, unlike the LSB waveform, they do not involve very narrow pulses, and thus the requirements on the following hardware are greatly reduced. The price to be paid is that each of the two signals must be processed separately, and that such processing must be carefully matched for the two paths involved.

The authors feel that the approaches described here are only the beginning, and that reduction at much higher levels should eventually be possible, depending on the method of reconstruction used.

FIGURE 15.9
(a) Block diagram of a variable-resolution CT ADC and (b) example signals for an ADC with three resolution settings. Δ is the quantization step. (From M. Kurchuk and Y. Tsividis, IEEE Transactions on Circuits and Systems I, 57(5), 982–991 © 2010 IEEE.)

15.3 Reconstruction

Reconstruction of a signal from its level-crossing samples (tk, x(tk)) need be no different from reconstruction of other types of nonuniformly sampled signals [7], and can be done in several ways. One can construct a piecewise-constant waveform from the samples, something that a quantizer inherently does (Figure 15.3a). Better performance is possible, albeit with considerable computational effort: if the average sampling rate exceeds twice the bandwidth of the signal, perfect reconstruction is, in theory, possible [81,82]. The classical relation between quantization resolution and achievable reconstruction error no longer holds; arbitrarily small error can in principle be achieved from only a few quantization levels, provided there are enough samples to satisfy the above criterion on average sampling rate, and given sufficient computational power [83]. Intuitively, the reason for this is that, even with a few levels in Figure 15.2a, the sample values are exact, unlike the case in conventional, uniform sampling plus quantization.

A detailed discussion of "exact" reconstruction techniques can be found elsewhere in this book [84]. These are suitable mostly for offline applications, unless a way is found to drastically speed up the required computation time. In the present chapter, we will instead emphasize real-time applications, and thus simple piecewise-constant reconstruction will be assumed throughout.

15.4 Derivative Level-Crossing Sampling

As has been mentioned in Section 15.1, several applications require ultra-low-power acquisition, signal processing, and transmission. Most practical level-crossing sampling schemes are based on zero-order-hold reconstruction at the receiver [14,46,47,52,54,64,75,78,85,86]. Other schemes employ computationally intensive reconstruction techniques [7,83]; however, in applications where the receiver is on a very tight power budget, such techniques constitute a serious overhead. One can consider first-order reconstruction, which might result in a smaller quantization error. Unfortunately, current first-order reconstruction techniques are noncausal: to know the signal value at a given instant between two samples, one needs to know the value of the sample following that instant. The corresponding storage need and computational effort can result in significant hardware overhead. First-order prediction techniques can be used to avoid the above noncausality, but those can result in discontinuities, and they, too, imply a significant computational overhead.

Derivative level-crossing sampling (DLCS) is an LCS technique that automatically results in piecewise-linear reconstruction in real time, with no need for storage, meant for applications in which both the transmitter and the receiver are on a tight power budget. This technique
is discussed in detail in Reference [87], from which the material in this section is taken with permission from IEEE (copyright © 2015 IEEE). The DLCS principle is shown in Figure 15.11. At the transmitter, the input is scaled and differentiated, and the result is level-crossing-sampled. At the receiver, the samples are zero-order-held and integrated, thus compensating for the differentiation. Thanks to integration, the scheme inherently achieves first-order reconstruction, leading to a lower reconstruction error, in real time, without the need of any linear predictor or noncausal techniques. This can be seen in Figure 15.12, which compares the output of the system to that of an LCS system with zero-order-hold reconstruction and to the original signal. As explained in Section 15.2.2, the operations of LCS and zero-order hold together are conceptually equivalent to quantization [18,43]; thus, for the purpose of analysis, the derivative can be directly quantized as shown in Figure 15.11. We assume that the input signal x(t) satisfies a zero initial condition, x(0) = 0 (as in delta-modulated systems [50]), and is bandlimited to B rad/s, bounded so that |x(t)| ≤ M, where M is a positive number; using Bernstein's inequality [88, theorem 11.1.2], we conclude that |dx/dt| is bounded by BM. Therefore, the quantizer has an input range of [−M, M].

FIGURE 15.10
(a) Signal, threshold levels, and level-crossing samples, (b) corresponding LSB, (c) signal, threshold level, and per-level samples, (d) corresponding per-level digital signal, and (e) encoding that avoids high time-resolution requirements.

FIGURE 15.11
Principle of derivative level-crossing sampling and reconstruction. (From P. Martinez-Nuevo et al., IEEE Transactions on Circuits and Systems II, 62(1), 11–15 © 2015 IEEE.)

FIGURE 15.12
Blow-up of DLCS (first-order) and LCS (zero-order) reconstruction for a full-scale sinusoidal input signal at 2 kHz. (From P. Martinez-Nuevo et al., IEEE Transactions on Circuits and Systems II, 62(1), 11–15 © 2015 IEEE.)

The first-order reconstruction in DLCS shapes the quantization error with an integrator transfer function. Therefore, the high-frequency quantization artifacts are severely attenuated, whereas the low-frequency ones are not. For single-tone inputs, this results in a significantly lower mean-square error (MSE) than classical LCS of the same resolution over most of the frequency range, with the MSE increasing at low input frequencies.

To improve the performance for low-frequency inputs, the adaptive-resolution (AR) DLCS, shown in Figure 15.13, has been proposed [87]. The slow-changing portions of the input derivative are quantized with a high resolution, and the fast-changing portions are quantized with a low resolution; the second derivative of the input is used to identify the rate of change of the input derivative so as to control the resolution. A high resolution around the slow-changing portions of the
derivative keeps the low-frequency quantization error components low enough such that, post integration, they do not degrade the signal-to-error ratio (SER). Coarse quantization around the fast-changing regions results in a higher quantization error, but the integrator shaping significantly attenuates it during reconstruction such that it has negligible effect on the SER. A detailed discussion of spectral properties of level-crossing-sampled signals can be found in [45].

FIGURE 15.13
Principle of adaptive-resolution derivative level-crossing sampling and reconstruction. Δ(xdd(t)) denotes the variable quantization step size, which depends on the value of xdd(t). (From P. Martinez-Nuevo et al., IEEE Transactions on Circuits and Systems II, 62(1), 11–15 © 2015 IEEE.)

For a given SER requirement, both DLCS and AR DLCS can result in a substantial reduction in the number of samples generated, processed, and transmitted as compared with classical LCS. While improvements depend on the characteristics of the input signal, they have been consistently observed for a variety of inputs, especially for AR DLCS [87].

Both DLCS and AR DLCS exploit the varying spectral content of the input in addition to its amplitude sparsity to generate very few samples for a given MSE. These schemes have not been demonstrated in practice at the time of this writing. If they turn out to work as expected from theory, they may prove advantageous in the development of sensor networks, which consist of sensor nodes that are on a very tight power budget and spend most power in communication [2]; a reduction in the number of generated samples can substantially lower the total power dissipation by minimizing the power spent by the transmitter node in processing and transmission and that by the receiver node in reception and processing.

15.5 Continuous-Time Digital Signal Processing

15.5.1 Principle

Once a quantity has been sampled using event-driven means, it typically needs to be processed, for example, in a biomedical device in order to deduce its properties of interest, or in a local network node in preparation for transmission. If a conventional DSP were used for this, the potential advantages of the event-driven approach would be largely wasted. It has been shown that fully event-driven digital signal processing of level-crossing-sampled, or equivalently CT-quantized, signals is possible [31]. This can be done in CT [31,42–44]; or, the time can be quantized and then a clocked processor can be used [89], although then one has to deal with generating and distributing a high-frequency clock signal, and with additional frequency components in the spectrum (Section 15.5.3). In another approach, the time can be divided into frames, within which a fixed sampling rate is used [13,15,90,91]. Here we will emphasize the CT approach, which we will refer to as CT DSP (but it is to be understood that the properties discussed also apply approximately in the case of finely discretized time). Due to its emphasis on time information, CT DSP is a form of time-domain signal processing, or time-mode signal processing [92,93].

The processing of binary signals which are functions of CT is, in a sense, "the missing category" of signal processing, as shown in Table 15.1. The first three rows under the title correspond to well-known types of processors (a representative type of the third row is switched-capacitor filters). The last category in the table is the one discussed here. If one considers the names of the first three categories, it follows by symmetry that the appropriate name for the last category is "continuous-time digital." This name is consistent with the fact that this category involves binary signals that are functions of CT (see Figure 15.2b).

TABLE 15.1
Signal Processing Categories

Time         Amplitude    Category
Continuous   Continuous   Classical analog
Discrete     Discrete     Classical digital
Discrete     Continuous   Discrete-time analog
Continuous   Discrete     Continuous-time digital

CT binary signals, like the ones in Figure 15.2b, can be processed in continuous time using CT delays, CT multipliers, and CT adders. The multipliers and adders can be implemented using unclocked combinational circuits, using asynchronous digital circuitry.

Consider as a prototype the CT finite impulse response (FIR) analog structure in Figure 15.14a. The coefficient multipliers, the summer, and the delay lines are all CT. An input x(t) can in principle be processed by
this structure, resulting in an output

y(t) = ∑_{k=0}^{K} a_k x(t − kτ),   (15.1)

where τ is the delay of each delay element. Taking the Laplace transform of both sides of this equation, we find

Y(s) = ∑_{k=0}^{K} a_k X(s) e^{−skτ}.   (15.2)

Thus the transfer function, H(s) = Y(s)/X(s), is

H(s) = ∑_{k=0}^{K} a_k e^{−skτ}.   (15.3)

Using s = jω, with ω = 2πf the radian frequency, we obtain the frequency response:

H(jω) = ∑_{k=0}^{K} a_k (e^{jωτ})^{−k},   (15.4)

which can be seen to be periodic with period 2π/τ. It is thus seen that the frequency response is identical to that of a corresponding discrete-time filter. Thus, well-established synthesis techniques developed for discrete-time filters can directly be used for CT digital filters.

We now note that if, instead of using x(t) as the input, we feed (in theory only) the structure of Figure 15.14a with a quantized version xq(t) of the input (see Figure 15.2a), the structure will produce a corresponding piecewise-constant output. Finally, if we instead have the CT digital representation of xq(t), shown in Figure 15.2b, we can replace all elements in Figure 15.14a by corresponding CT digital ones, as shown in Figure 15.14b. A rigorous proof of this is given elsewhere [42–44]. Now all signals in the processor are 0s and 1s, but they are functions of continuous, rather than discrete, time; they are composed of piecewise-constant "bit waveforms" like those in Figure 15.2b.

FIGURE 15.14
(a) CT FIR analog filter; (b) corresponding CT digital filter. (From Y. Tsividis, IEEE Transactions on Circuits and Systems II: Express Briefs, 57(8), 577–581 © 2010 IEEE.)

FIGURE 15.15
CT ADC, CT DSP, and CT DAC (principle only). (From Y. Tsividis, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, pp. 589–592 © 2004 IEEE.)

FIGURE 15.16
CT digital delay line composed of CT elemental delay elements.

This system then is a continuous-time digital signal processor, or CT DSP. The output of this processor can be converted to analog form using a CT D/A converter (DAC), which is basically a binary-weighted CT adder. The result is shown in Figure 15.15; this structure is meant to be shown in principle only; other, more efficient architectures will be discussed below. It can be shown that the frequency response of the entire system consisting of the CT A/D converter, the CT DSP, and the CT D/A converter is still given by (15.4) [42–44].

For the delay segments, one can use a cascade of digital delay elements operating without a clock, as shown in Figure 15.16; in the simplest case, the elemental delay elements shown can be CMOS inverters (possibly slowed down) [31], but better performance is possible using special techniques [80,94,95]. A number of such elements will need to be used, as opposed to just a single slow one, when the delay chain must be agile enough to handle many signal transitions within each delay interval. The delay can be automatically tuned using digital calibration and a time reference; this can be a clock, perhaps already used elsewhere in the system, or can be turned off after tuning [96]. In the absence of a clock, an RC time constant involving an internal
364 Event-Based Control and Signal Processing

temperature-insensitive capacitor and an external low-temperature-coefficient trimmable resistor can be used as a time reference. Such techniques have been used for decades by the industry for the automatic calibration of integrated analog filters [97].

The output of the adder in Figure 15.15 can involve arbitrarily small intervals between the signal transitions in the various paths, due to the unclocked arrival of various delayed versions of the input. If the logic cannot handle such intervals, they will be missing in the output code. Fortunately, such narrow pulses involve very little energy, which falls mostly outside the baseband. Thus, as the input frequency is raised beyond the speed limits of the hardware, the degradation observed is graceful.

Design considerations for the system of Figure 15.15 are discussed elsewhere [43,54]. It is very important to avoid glitches by using handshaking throughout [54]; an example will be seen in Section 15.7. Time jitter must also be carefully considered [54,94,98,99], to make sure that the noise power it contributes to the output is below that expected from the quantization distortion components. The challenging design of the adder can be largely bypassed by using a semi-digital approach employing multiple small D/A converters as shown in Figure 15.17, similar to that proposed for clocked systems [100].

It should be mentioned that the block diagrams in this section illustrate the principle of CT DSP and may not necessarily correspond to the system's microarchitecture, which has recently evolved. In [96], data are separated from the timing information inside each delay segment. When a sample enters the segment, the data is stored in a memory and remains idle while the timing goes through multiple delay cells. When the timing pulse finally exits the segment, it picks up the corresponding data and transfers it to the next segment, and this process is repeated. This timing/data decomposition is not only significantly more energy efficient, but it is also very scalable; to build a similar CT DSP for different data width, one must only change the size of the segment memories but can leave the delay part unchanged. A test chip based on this approach is described in Section 15.7.2.

FIGURE 15.17
Semi-digital version of CT DSP/DAC.

FIGURE 15.18
Frequency response, spectra with an input in the baseband, and spectra with an input in a higher lobe of the frequency response, for (a) a CT digital filter, and (b) a discrete-time digital filter preceded by a uniform sampler. (From Y. Tsividis, IEEE Transactions on Circuits and Systems II: Express Briefs, 57(8), 577–581, © 2010 IEEE.)

As already mentioned, the frequency response in (15.4) is periodic; to attenuate the undesired frequency lobes, an output or input CT analog filter can be used, as is the case in conventional systems. Note, though, that the periodicity in the frequency response has nothing to do with aliasing; in fact, there is no aliasing, as has been discussed in Section 15.2.3. This is illustrated in Figure 15.18, where a CT DSP is compared with a classical, discrete-time one. It is seen that a signal applied at a certain input frequency results in a single frequency component at the same frequency at the output, no matter whether the input frequency is in the baseband or not. Such a property is, of course, absent in the case of classical, discrete-time systems involving uniform sampling.

Advantages occur in the time domain as well. CT DSPs react immediately to input changes, unlike classical systems which may not catch such changes until the following sampling instant; this makes them especially well suited for use in very fast digital control loops. Thus, application of CT DSP in DC-DC converters has made possible the reduction of the required very large filter capacitance, and the associated cost, by a factor of three to five [101,102].

Small mismatches in the CT delays cause small errors in the frequency response [103]; as will be seen in
Event-Based Data Acquisition and Digital Signal Processing in Continuous Time 365
FIGURE 15.19
Delta mod CT ADC/DSP/DAC, consisting of a CT delta modulator, CT delays, accumulators combined with coefficient multipliers, a combining stage, and a CT DAC.

Section 15.7, such errors are not significant in practice. The CT delays can also be made different from each other on purpose, to modify the frequency response (e.g., change the size of a lobe compared to the one in the baseband), but analytical techniques for this are mostly lacking. The optimization of CT DSPs using delay adjustment has been considered in [104].

If an asynchronous delta modulator like the one in Figure 15.5 is used at the input, the processing can be done in delta-mod, as is the case for discrete-time digital filters [105]. The accumulators, required to recover the original signal from the delta mod stream, can be combined with the coefficient multipliers [54,72]. Such an implementation is shown in Figure 15.19.

The computer-aided simulation of CT DSPs can be time-consuming. Special techniques have been proposed for significantly reducing simulation time [106]. Infinite impulse-response CT DSPs have been considered elsewhere [44,107], but have not been adequately tested so far.

Several applications for CT DSPs have been proposed, including their embedding in ADCs to enhance their performance [93,108]. A single-bit version of a CT DSP has been used in communications chips [68,109,110].

15.5.2 "Continuous-Time" versus "Asynchronous"

A CT DSP uses asynchronous logic blocks and handshake techniques borrowed from asynchronous logic (see Section 15.7). Nevertheless, it is not appropriate to call it an asynchronous DSP. Asynchronous DSPs (see, e.g., [111]) are discrete-time systems, processing sequences of data; the time intervals between samples do not usually represent important information. In contrast, in a CT DSP the timing information is critical and must be carefully preserved. Thus, all signals in Figure 15.20 are the same to an asynchronous DSP; however, they are all distinct to a CT DSP, since the times of occurrence of each transition (or the time intervals between them) are an integral part of the CT digital signal representation.

FIGURE 15.20
Three time waveforms. In the context of asynchronous processing, all three represent the same signal; in the context of CT DSP, they represent three distinct signals.

15.5.3 Time Discretization

CT DSPs are best suited to real-time applications, as their output cannot be stored. If storage is desired, the output of such processors must be sampled. This is considered elsewhere in this book [45]. In another approach, the entire operation can be replaced by pseudo-continuous operation, in which a high-frequency clock is involved throughout and the time axis becomes finely quantized [89]. As mentioned in Section 15.2.4, it is possible to quantize the time from the very beginning, in the A/D converter [11,12,14,52]. This results in spectral components in between the harmonics in Figure 15.3a. With coarse time quantization, the signal spectrum begins to resemble the one in Figure 15.3b. Time quantization requires a high-frequency clock (with an ensuing effect on power dissipation), which may make the design of such systems more straightforward, but which one would ideally rather avoid, if true event-driven operation is sought. In fact, to reach the performance of CT DSPs in terms of spectral properties and transient response speed, the clock would have to be infinitely fast, which is infinitely difficult to implement. But this would be equivalent to not

quantizing the time at all, which is relatively easy; it just implies CT operation and can be achieved without a clock, as has already been discussed.

15.6 Versatility of CT DSP in Processing Time-Coded Signals

In this chapter, the name "time coding" [12] has been used in relation to signals derived using level-crossing sampling. However, other types of time coding exist. An example is pulse-time modulation, defined in [112] as a class of techniques that encode the sample values of an analog signal onto the time axis of a digital signal; this class includes pulse-width modulation and pulse position modulation. Another time-coded signal can be produced by click modulation [113]. A different technique of coding a signal using only timing is described in [114]. (We also note, in passing, that implementing filter coefficient values using only timing is discussed in [32].) Yet another example is asynchronous sigma-delta modulation, which uses a self-oscillating loop involving an integrator and a Schmitt trigger, shown in Figure 15.21. This system is usually attributed to Kikkert [115], but was in fact proposed earlier by Sharma [116]; for this reason, we refer to it as the Sharma–Kikkert asynchronous sigma-delta modulator. The circuit has been further analyzed in [117], and the recovery of its output has been discussed in [118]. This system produces a self-oscillating output even in the absence of input and is thus not strictly event driven. Nevertheless, this signal can be processed with a CT DSP [119–121]. In fact, most time-coded binary signals, event-driven or not, asynchronous or synchronous, can be processed by a CT DSP. This is because a CT DSP "looks at all time."

FIGURE 15.21
The Sharma–Kikkert asynchronous sigma-delta modulator [115,116]. Its output can be processed by a CT DSP.

15.7 Experimental Results

This section summarizes the design and measurements of several test chips designed in order to verify the principles of CT data acquisition and DSP.

15.7.1 A Voice-Band CT ADC/DSP/DAC

We begin with an 8b, 15th-order, finite-impulse response (FIR) continuous-time ADC, continuous-time DSP, and continuous-time DAC, collectively referred to as CT ADC/DSP/DAC, targeting speech processing [54]. A block diagram of the chip is shown in Figure 15.22. The input is applied to terminals INH and INL. The CT ADC is a variation of the asynchronous delta modulator (Section 15.2.4) [48–50,52]. The ADC's output consists of two signals (CHANGE, UPDN) that indicate the time and direction of the input change (which is equivalent to the signal in Figure 15.2c). Each of these signals takes the values of 1 or 0, and the two together are called a token. The CT ADC uses two asynchronous comparators to continuously monitor the input in order to generate signal-driven tokens.

The comparator levels are determined by the input and the signals as provided by a feedback CT DAC. Due to continuous tracking, the thresholds only move up (if dictated by the INC signal) or down (if dictated by the DEC signal) one level at a time. To coordinate the movements of tokens into the CT DSP, asynchronous handshaking (ACK signals) is used. Hysteresis is used in the comparators for noise immunity.

The CT ADC is followed by a CT DSP, composed of 15 CT delay elements, each a serial chain of individual asynchronous delay cells [94], and accumulator/multiplier blocks to do the token summation and weighting. The delay segments are implemented along the lines of Figure 15.16. The delay taps provide an 8b representation of the (now) delayed input signal, and the weighting is done by the multiplication by the programmable filter coefficients. The outputs of all the accumulator/multiplier blocks are added using an asynchronous carry-save adder to create the final filter output, which goes to the CT DAC.

FIGURE 15.22
Block diagram of entire CT ADC/DSP/DAC chip. (From B. Schell and Y. Tsividis, IEEE Journal of Solid-State Circuits, 43(11), 2472–2481, © 2008 IEEE.)

A photograph of the fabricated chip implementing the system of Figure 15.22 is shown in Figure 15.23. The 8b CT ADC/DSP/DAC was fabricated in a UMC 90 nm CMOS process, operating with a 1 V supply. The active area (sum of outlined areas in Figure 15.23) is 0.64 mm², with the 15 delay elements dominating the area (0.42 mm²). The power consumption also reflects this dominance: when configured as a low-pass filter and processing a full-scale 1 kHz input tone, 60% of the total power consumption is dissipated by the delay elements.

In order to demonstrate some important properties of CT DSPs, we show in Figure 15.24 a frequency response and output spectra of the CT ADC/DSP/DAC when configured as a low-pass filter; see the caption for details. As expected from Section 15.5.1, the frequency response is periodic. In the spectral results, however, it can be seen that the applied 2 kHz input tone does not

result in an alias tone in the second lobe of the response. Similarly, a 39 kHz input tone, which places it in this second lobe, does not produce an alias tone in the baseband. This verifies the behavior illustrated in Figure 15.18.

Figure 15.25 shows an input speech signal plotted versus time, together with the measured instantaneous power consumption of the CT ADC/DSP/DAC. As can be seen, when the input is quiet (no activity), the power consumption drops to a baseline level (needed for circuit biasing). When the input becomes more active, the power consumption automatically increases, due to the processing of more tokens per second. This result stresses the utility of CT processing for signals that have varying levels of activity; the power scaling is inherent in the CT nature of the circuit. Such dynamics are possible in conventional circuits, but they require a control of some sort to detect the measure of activity of the input and perform power management; here this benefit is automatic. The actual power dissipation levels in Figure 15.25 are rather high in this early test chip; lower-power CT DSPs have now been designed (see below). A method to implement sample rate reduction based on Section 15.2.6, and thus reduce power dissipation, has been described in [75]; several other circuit techniques for power reduction are described in that reference.

FIGURE 15.23
Chip photograph of 8b CT ADC/DSP/DAC, showing the delta modulator, DAC, one tap, the delay elements, and the digital core. The total area of the outlined blocks is 0.64 mm². (From B. Schell and Y. Tsividis, IEEE Journal of Solid-State Circuits, 43(11), 2472–2481, © 2008 IEEE.)

FIGURE 15.24
Measurement results from the 8-bit CT ADC/DSP/DAC chip, with DSP set to a low-pass filter using element delays of 27 μs [1/(37 kHz)]. Top: Frequency response (expected vs. measured). Bottom left: Output spectrum for a full-scale 2 kHz input. Bottom right: Output spectrum for 39 kHz input at −14 dB relative to full scale. No aliasing is observed. (From B. Schell and Y. Tsividis, IEEE Journal of Solid-State Circuits, 43(11), 2472–2481, © 2008 IEEE.)
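The token generation performed by such an asynchronous delta-modulator ADC can be sketched in software. The model below is illustrative only: the signal, step size, and dense time grid (standing in for continuous time) are arbitrary choices, and the real chip uses asynchronous comparators with hysteresis rather than a sampled loop:

```python
import math

def delta_mod_tokens(x, times, delta):
    """Emit (time, +1/-1) tokens whenever the input moves one
    quantization step above or below the currently tracked level."""
    level = round(x(times[0]) / delta) * delta
    tokens = []
    for t in times:
        while x(t) - level > delta:      # upper threshold crossed: token "up"
            level += delta
            tokens.append((t, +1))
        while level - x(t) > delta:      # lower threshold crossed: token "down"
            level -= delta
            tokens.append((t, -1))
    return tokens

def reconstruct(tokens, start_level, delta, t):
    """Staircase reconstruction: accumulate token directions up to time t."""
    return start_level + delta * sum(d for (tt, d) in tokens if tt <= t)

x = lambda t: math.sin(2 * math.pi * 5 * t)   # toy 5 Hz input
times = [i * 1e-4 for i in range(10001)]      # 1 s on a fine grid
delta = 2 / 256                               # 8-bit-like step over [-1, 1]

tokens = delta_mod_tokens(x, times, delta)
start = round(x(0) / delta) * delta
err = max(abs(reconstruct(tokens, start, delta, t) - x(t)) for t in times)
print(len(tokens), err)   # reconstruction stays within one quantization step
```

Note the event-driven character: the token rate tracks the input's rate of change, so a quiet input produces no tokens at all, which is exactly the mechanism behind the activity-dependent power of Figure 15.25.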
FIGURE 15.25
Plot of input speech signal (top) and instantaneous power consumption (bottom) of the CT ADC/DSP/DAC chip; the average power is 310 μW. (From B. Schell and Y. Tsividis, IEEE Journal of Solid-State Circuits, 43(11), 2472–2481, © 2008 IEEE.)
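The activity-dependent power behavior seen in Figure 15.25 is often summarized by a simple first-order model: total power is a fixed baseline (biasing and leakage) plus an energy cost per processed token times the token rate. The parameter values below are placeholders for illustration, not measurements from the chip:

```python
def ct_dsp_power(token_rate_hz, baseline_w, energy_per_token_j):
    """First-order power model of an event-driven processor:
    dissipation scales linearly with input activity (token rate)."""
    return baseline_w + energy_per_token_j * token_rate_hz

# Placeholder parameters, chosen only to make the scaling visible.
baseline = 50e-6   # 50 uW of biasing/leakage
e_token = 1e-9     # 1 nJ of switching energy per token

p_idle = ct_dsp_power(0, baseline, e_token)        # quiet input: baseline only
p_busy = ct_dsp_power(200e3, baseline, e_token)    # 200 ktokens/s of activity
print(p_idle, p_busy)
```

In a clocked DSP, by contrast, the first term would include the clock tree and the second would be tied to the fixed sample rate, so idle inputs would not reduce the dynamic power without explicit power management.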

15.7.2 A General-Purpose CT DSP

A more recent general-purpose CT DSP test chip [96] is based on an entirely new microarchitecture. It provides a highly programmable CT DSP core (it does not contain an ADC or DAC), which can be connected to various ADCs and process signals of different sample rates, synchronous or asynchronous. The core consists of asynchronous (i.e., clockless) digital components, along with a calibrated delay line, allowing it to observe and respond to signals in CT. As a result, it is capable of processing signals in a wide variety of practical formats, such as PCM, PWM, ΣΔ, and others, both synchronous and asynchronous, without requiring any internal design changes. This property is not possible for synchronous DSP systems. The filter's frequency response is programmed only once, and is not affected by changes in the ADC sample rate. The chip also accommodates a larger data width than the one in [54]; it supports 8-bit-wide signals, and can be extended to arbitrary width, while the chip in [54] only supports 1-bit-wide signals (resulting from asynchronous Δ modulation). The chip exhibits an SER which, for certain inputs, exceeds that of clocked systems.

Figure 15.26 shows a top-level view. The chip can receive input from a variety of ADCs at any sample rate up to 20 Msamples/s, and implements a 15th-order FIR filter. Fifteen delay segments generate delayed input versions; all copies, along with the input itself, are then weighted with proper coefficients by asynchronous multipliers and summed by a 16-way adder to form the digital output. The latter is sent to a DAC to create the analog output. Tuning is required for all CT DSPs, to ensure that the delay of the segments is equal to the desired value; this chip is the first CT DSP to incorporate on-chip automated tuning.

A "sample" sent to the chip, shown as IND in the figure, consists of 8 (or more) data bits representing the magnitude information. Timing information is bundled with the data field; each new and valid sample is indicated by a transition on the timing signal INR. A critical requirement of the design is that timing, that is, spacing between consecutive samples, is preserved inside the CT DSP, by using a calibrated delay line, which is divided into a series of delay segments, as shown in the figure. The time interval between successive samples is maintained by the CT DSP core so as not to distort the CT signal processing function. All communication internal to the DSP employs asynchronous handshaking to indicate the timing and validity of samples; one example is a delay segment sending the latest sample to the next segment and to a multiplier.
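The behavior just described — delay segments that reproduce each input sample at every tap with its inter-sample spacing intact, with each arrival updating the filter output — can be modeled as a small discrete-event simulation. This is a behavioral sketch only (the segment delay, coefficients, and sample stream are arbitrary), not the chip's handshaking logic:

```python
import heapq

def ct_fir_events(samples, seg_delay, coeffs):
    """Discrete-event model of a CT FIR delay line.
    `samples` is a list of (time, value) events; tap k sees each event
    delayed by k*seg_delay, and every arrival triggers a new output sum."""
    taps = len(coeffs)
    latest = [0.0] * taps          # most recent value seen at each tap
    events = []                    # (arrival_time, tap_index, value)
    for t, v in samples:
        for k in range(taps):
            heapq.heappush(events, (t + k * seg_delay, k, v))
    out = []
    while events:
        t, k, v = heapq.heappop(events)
        latest[k] = v
        out.append((t, sum(c * x for c, x in zip(coeffs, latest))))
    return out

# Irregularly spaced samples: the spacing itself is part of the signal.
samples = [(0.0, 1.0), (0.3, 0.5), (0.35, -0.25)]
out = ct_fir_events(samples, seg_delay=1.0, coeffs=[0.5, 0.5])
print(out[:3])
```

The key property visible in the model is that the 0.3 s and 0.05 s gaps between input samples reappear unchanged between the delayed tap arrivals, regardless of whether the input stream was uniform or not.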

FIGURE 15.26
Top-level view of the flexible CT digital FIR filter as part of an ADC/DSP/DAC system. Input from various ADCs (various formats, sync/async, any rate up to the maximum rate) enters 15 delay segments; the taps are weighted by coefficients c0–c15, summed by a 16-way adder, and sent to a DAC; on-chip tuning sets the delays, and no clock is used. (From C. Vezyrtzis, et al., IEEE Journal of Solid-State Circuits, 49(10), 2292–2304, © 2014 IEEE.)

FIGURE 15.27
Top-level view of a delay segment, showing the segment's organization and the data and timing paths: data are held in an asynchronous cyclic SRAM (32×16 8T bank), while the timing path consists of binary-weighted groups of delay cells. (From C. Vezyrtzis, et al., IEEE Journal of Solid-State Circuits, 49(10), 2292–2304, © 2014 IEEE.)

FIGURE 15.28
Chip photograph of the general-purpose CT DSP, showing the delay segments, arithmetic unit, and on-chip tuning. The core area is 5 mm². (From C. Vezyrtzis, et al., IEEE Journal of Solid-State Circuits, 49(10), 2292–2304, © 2014 IEEE.)

In more detail, the chip uses a calibrated pipelined delay line to provide the desired delay within the segments. Each segment is partitioned into a large number of "delay cells," as shown at the bottom of Figure 15.27. Each cell provides only a small unit of delay, and can hold at most one distinct sample at any time. This organization serves two purposes. First, the segment can hold a large number of samples inside it at a given time; this is needed when the chip is processing an input with sample spacing much smaller than the segment delay. Second, the segment can be programmed to a wide range of delay values, by selecting any combination of the different groups of delay cells (two cells, four cells, etc.).

In an early prototype CT DSP [54], data moved continually inside each segment, from cell to cell. This resulted in unnecessary data movement and wasted energy. In contrast, in the chip described here [96], this movement is entirely avoided. Instead, the timing pulse is separated from the data, and only that pulse is passed through the delay segments, as has already been mentioned in Section 15.5.1 and as shown in Figure 15.27.

Figure 15.28 shows the die photograph of the chip, as implemented in a 0.13 μm IBM CMOS technology. Due to the significant programmability and testability features incorporated in this chip, it occupies a much larger area than the one in [54]. As in that reference, the delay segments occupy the majority of the chip area, with the arithmetic (multipliers, adder) and on-chip tuning equally sharing the remaining area. A recently proposed approach [122] can be applied to the delay to significantly reduce its average power consumption, by dynamically varying its pipeline granularity to tailor it to the needs of the input sample stream.

Figure 15.29 shows the frequency response of the chip. In this experiment, the chip was programmed only once, before the measurements, and was then fed with PCM signals of three different sample rates. Despite the fact that no adjustments were made to the chip during these changes, the filter maintains its frequency response. This indicates that the response of the CT DSP is fully decoupled from the input data rate, as expected theoretically. Similar behavior was observed with other signal formats used for testing, namely, synchronous and asynchronous PWM and ΣΔ signals, and for a wide range of frequency responses. The latter were set by the chip's automated tuning blocks, controlled via a simple user interface.

The chip's power consumption grows linearly with the input rate, confirming the theoretically expected event-driven nature of CT DSPs. During idle times, the total power reduces to a baseline value resulting from leakage and circuit biasing, which is significantly reduced compared to that in [54].

15.7.3 A CT ADC/DSP/DAC Operating at GHz Frequencies

CT DSPs have also been tested at radio frequencies, by designing and testing a programmable 6-tap, 3-bit ADC/DSP/DAC operating at frequencies up to 3.2 GHz. The chip is intended for ultra-wide-band communications [53]; it can provide flexibility in adapting
FIGURE 15.29
Measured frequency responses demonstrating independence from input sampling rate (fS = 100 kHz, 1 MHz, and 10 MHz, compared with the ideal response). PCM input, FIR filter low-pass response following automatic tuning. (From C. Vezyrtzis, et al., IEEE Journal of Solid-State Circuits, 49(10), 2292–2304, © 2014 IEEE.)

a receiver's frequency response to the signal and interference spectrum at its input. The chip has been fabricated in 65 nm CMOS technology, and is shown in Figure 15.30. It can be seen that the core on this chip is very compact; thanks to the small number of threshold levels used and the high frequency of operation, which implies small CT delays, this area is only 0.08 mm². The chip uses the coding approach in Figure 15.10e. The power dissipation of this chip varies between 1.1 and 10 mW, depending on input activity. The chip frequency response is fully programmable. Representative frequency responses, nonlinearity measurements, and output spectra, as well as hardware design information, can be found in [53].

FIGURE 15.30
Chip photograph of the radio frequency CT ADC/DSP/DAC. The highlighted core area is 0.08 mm². (From M. Kurchuk, et al., IEEE Journal of Solid-State Circuits, 47(9), 2164–2173, © 2012 IEEE.)

15.8 Conclusions

In contrast to conventional data acquisition and digital signal processing, event-based operations offer potential for significant advantages, if the sampling strategy is adapted to the properties of the input signal. As has been shown, true event-driven operation without using a clock is possible, both for A/D and D/A converters, and for digital signal processors. The concept of continuous-time digital signal processing has been reviewed. The known advantages of CT ADC/DSP/DAC systems in comparison to classical systems using uniform sampling and discrete time operation are the following:

• No aliasing
• No spectral components unrelated to the signal
• Lower in-band error
• Faster response to input changes
• Lower EMI emissions due to the absence of clock
• Power dissipation that inherently decreases with decreasing input activity

CT DSPs are as programmable as conventional DSPs of the same topology. However, they are best suited for

real-time applications, unless their output is sampled so that it can be stored. Thus they are complementary to conventional, clocked DSPs, rather than competing with them.

The principles presented have been verified on test chips, and the results are encouraging. Further research will be necessary in order to develop more power-efficient CT ADCs and CT delay lines, to reduce the number of events needed per unit time, and to investigate other processor topologies, notably including IIR ones.

Acknowledgment

This work was supported by National Science Foundation Grants CCF-07-01766, CCF-0964606, and CCF-1419949.

Bibliography

[1] M. Neugebauer and K. Kabitzsch. A new protocol for a low power sensor network. In Proceedings of the IEEE International Conference on Performance, Computing, and Communications, Phoenix, pp. 393–399, 2004.

[2] L. M. Feeney and M. Nilsson. Investigating the energy consumption of a wireless network interface in an ad hoc networking environment. In Proceedings of the Twentieth Annual Joint Conference of the IEEE Computer and Communications Societies, volume 3, pp. 1548–1557, 2001.

[3] A. Yakovlev, S. Kim, and A. Poon. Implantable biomedical devices: Wireless powering and communication. IEEE Communications Magazine, 50(4):152–159, 2012.

[4] M. D. Linderman, G. Santhanam, C. T. Kemere, V. Gilja, S. O'Driscoll, B. M. Yu, A. Afshar, S. I. Ryu, K. V. Shenoy, and T. H. Meng. Signal processing challenges for neural prostheses. IEEE Signal Processing Magazine, 25(1):18–28, 2008.

[5] J. Lee, H.-G. Rhew, D. Kipke, and M. Flynn. A 64 channel programmable closed-loop deep brain stimulator with 8 channel neural amplifier and logarithmic ADC. In Proceedings of the IEEE Symposium on VLSI Circuits, pp. 76–77, 2008.

[6] R. M. Walker, H. Gao, P. Nuyujukian, K. Makinwa, K. Shenoy, T. Meng, and B. Murmann. A 96-channel full data rate direct neural interface in 0.13 μm CMOS. In Proceedings of the IEEE Symposium on VLSI Circuits, pp. 144–145, 2011.

[7] F. Marvasti. Nonuniform Sampling: Theory and Practice. Springer, New York, 2001.

[8] P. Ellis. Extension of phase plane analysis to quantized systems. IRE Transactions on Automatic Control, 4(2):43–54, 1959.

[9] R. Dorf, M. Farren, and C. Phillips. Adaptive sampling frequency for sampled-data control systems. IRE Transactions on Automatic Control, 7(1):38–47, 1962.

[10] J. Mitchell and W. McDaniel Jr. Adaptive sampling technique. IEEE Transactions on Automatic Control, 14(2):200–201, 1969.

[11] J. W. Mark and T. D. Todd. A nonuniform sampling approach to data compression. IEEE Transactions on Communications, 29(1):24–32, 1981.

[12] J. Foster and T.-K. Wang. Speech coding using time code modulation. In Proceedings of the IEEE Southeastcon'91, pp. 861–863, 1991.

[13] P. E. Luft and T. I. Laakso. Adaptive control of sampling rate using a local time-domain sampling theorem. In Proceedings of the IEEE International Symposium on Circuits and Systems, volume 3, pp. 145–148, 1994.

[14] N. Sayiner, H. V. Sorensen, and T. R. Viswanathan. A level-crossing sampling scheme for A/D conversion. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 43(4):335–339, 1996.

[15] W. R. Dieter, S. Datta, and W. K. Kai. Power reduction by varying sampling rate. In Proceedings of the ACM International Symposium on Low Power Electronics and Design, pp. 227–232, 2005.

[16] Y. Tsividis. Event-driven, continuous-time ADCs and DSPs for adapting power dissipation to signal activity. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems, pp. 3581–3584, 2010.

[17] Y. P. Tsividis. Event-driven data acquisition and continuous-time digital signal processing. In Proceedings of the IEEE Custom Integrated Circuits Conference, pp. 1–8, 2010.

[18] Y. Tsividis. Event-driven data acquisition and digital signal processing: A tutorial. IEEE Transactions on Circuits and Systems II: Express Briefs, 57(8):577–581, 2010.

[19] R. Tomovic and G. Bekey. Adaptive sampling based on amplitude sensitivity. IEEE Transactions on Automatic Control, 11(2):282–284, 1966.

[20] D. Ciscato and L. Martiani. On increasing sampling efficiency by adaptive sampling. IEEE Transactions on Automatic Control, 12(3):318–318, 1967.

[21] T. Hsia. Comparisons of adaptive sampling control laws. IEEE Transactions on Automatic Control, 17(6):830–831, 1972.

[22] K.-E. Årzén. A simple event-based PID controller. In Proceedings of the 14th IFAC World Congress, volume 18, pp. 423–428, 1999.

[23] M. Velasco, J. Fuertes, and P. Marti. The self triggered task model for real-time control systems. In Work-in-Progress Session of the 24th IEEE Real-Time Systems Symposium (RTSS03), volume 384, pp. 67–70, 2003.

[24] J. Sandee, W. Heemels, and P. Van den Bosch. Event-driven control as an opportunity in the multidisciplinary development of embedded controllers. In Proceedings of the American Control Conference, pp. 1776–1781, 2005.

[25] M. Miskowicz. Asymptotic effectiveness of the event-based sampling according to the integral criterion. Sensors, 7(1):16–37, 2007.

[26] K. J. Åström. Event based control. In Analysis and Design of Nonlinear Control Systems (A. Astolfi and L. Marconi, Eds.), pp. 127–147. Springer, Berlin-Heidelberg, 2008.

[27] A. Anta and P. Tabuada. To sample or not to sample: Self-triggered control for nonlinear systems. IEEE Transactions on Automatic Control, 55(9):2030–2042, 2010.

[28] M. Miskowicz. Efficiency of event-based sampling according to error energy criterion. Sensors, 10(3):2242–2261, 2010.

[29] S. Dormido, J. Sánchez, and E. Kofman. Muestreo, control y comunicación basados en eventos. Revista Iberoamericana de Automática e Informática Industrial RIAI, 5(1):5–26, 2008.

[30] M. Miskowicz. Event-based sampling strategies in networked control systems. In Proceedings of the IEEE International Workshop on Factory Communication Systems, pp. 1–10, 2014.

[31] Y. Tsividis. Continuous-time digital signal processing. Electronics Letters, 39(21):1551–1552, 2003.

[32] Y. Tsividis. Signal processors with transfer function coefficients determined by timing. IEEE Transactions on Circuits and Systems, 29(12):807–817, 1982.

[33] K. M. Guan and A. C. Singer. Opportunistic sampling by level-crossing. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 3, pp. 1513–1516, 2007.

[34] K. M. Guan, S. S. Kozat, and A. C. Singer. Adaptive reference levels in a level-crossing analog-to-digital converter. EURASIP Journal on Advances in Signal Processing, 2008:183, 2008.

[35] K. M. Guan and A. C. Singer. Sequential placement of reference levels in a level-crossing analog-to-digital converter. In Proceedings of the IEEE Conference on Information Sciences and Systems, pp. 464–469, 2008.

[36] K. Kozmin, J. Johansson, and J. Delsing. Level-crossing ADC performance evaluation toward ultrasound application. IEEE Transactions on Circuits and Systems I: Regular Papers, 56(8):1708–1719, 2009.

[37] M. Greitans, R. Shavelis, L. Fesquet, and T. Beyrouthy. Combined peak and level-crossing sampling scheme. In Proceedings of the International Conference on Sampling Theory and Applications, pp. 5–8, 2011.

[38] L. C. Gouveia, T. J. Koickal, and A. Hamilton. An asynchronous spike event coding scheme for programmable analog arrays. IEEE Transactions on Circuits and Systems I, 58(4):791–799, 2011.

[39] T. J. Yamaguchi, M. Abbas, M. Soma, T. Aoki, Y. Furukawa, K. Degawa, S. Komatsu, and K. Asada. An equivalent-time and clocked approach for continuous-time quantization. In Proceedings of the IEEE International Symposium on Circuits and Systems, pp. 2529–2532, 2011.

[40] T. Marisa, T. Niederhauser, A. Haeberlin, J. Goette, M. Jacomet, and R. Vogel. Asynchronous ECG time sampling: Saving bits with Golomb-Rice encoding. Computing in Cardiology, 39:61–64, 2012.

[41] S. Naraghi, M. Courcy, and M. P. Flynn. A 9-bit, 14 μW and 0.06 mm² pulse position modulation ADC in 90 nm digital CMOS. IEEE Journal of Solid-State Circuits, 45(9):1870–1880, 2010.
Event-Based Data Acquisition and Digital Signal Processing in Continuous Time 375

[42] Y. Tsividis. Digital signal processing in continu- [54] B. Schell and Y. Tsividis. A continuous-time
ous time: A possibility for avoiding aliasing and ADC/DSP/DAC system with no clock and with
reducing quantization error. In Proceedings of the activity-dependent power dissipation. IEEE Jour-
IEEE International Conference on Acoustics, Speech, nal of Solid-State Circuits, 43(11):2472–2481, 2008.
and Signal Processing, volume 2, pp. 589–592, 2004.
[55] M. Trakimas and S. Sonkusale. A 0.8 V asyn-
[43] Y. Tsividis. Mixed-domain systems and signal pro- chronous ADC for energy constrained sensing
cessing based on input decomposition. IEEE Trans- applications. In Proceedings of the IEEE Custom
actions on Circuits and Systems I, 53(10):2145–2156, Integrated Circuits Conference, pp. 173–176, 2008.
2006.
[56] V. Majidzadeh, A. Schmid, and Y. Leblebici.
[44] B. Schell and Y. Tsividis. Analysis and simula- Low-distortion switched-capacitor event-driven
tion of continuous-time digital signal processors. analogue to-digital converter. Electronics Letters,
Signal Processing, 89(10):2013–2026, 2009. 46(20):1373–1374, 2010.

[45] Y. Chen, M. Kurchuk, N. T. Thao, and [57] Y. Li, D. Zhao, M. N. van Dongen, and W. A.
Y. Tsividis. Spectral analysis of continuous- Serdijn. A 0.5 V signal-specific continuous-time
time ADC and DSP. In Event-Based Control and level-crossing ADC with charge sharing. In Pro-
Signal Processing (M. Miskowicz, Ed.), pp. 409–420. ceedings of the IEEE Biomedical Circuits and Systems
CRC Press, Boca Raton, FL 2015. Conference, pp. 381–384, 2011.

[46] T. Wang, D. Wang, P. J. Hurst, B. C. Levy, and [58] T. A. Vu, S. Sudalaiyandi, M. Z. Dooghabadi,
S. H. Lewis. A level-crossing analog-to-digital con- H. A. Hjortland, O. Nass, T. S. Lande, and
verter with triangular dither. IEEE Transactions on S.-E. Hamran. Continuous-time CMOS quantizer
Circuits and Systems I, 56(9):2089–2099, 2009. for ultra-wideband applications. In Proceedings of
the IEEE International Symposium on Circuits and
[47] F. Akopyan, R. Manohar, and A. B. Apsel. A level- Systems, pp. 3757–3760, 2010.
crossing flash asynchronous analog-to-digital con-
verter. In Proceedings of the IEEE International [59] S. Araujo-Rodrigues, J. Accioly, H. Aboushady,
Symposium on Asynchronous Circuits and Systems, M. Louërat, D. Belfort, and R. Freire. A clock-
pp. 12–22, 2006. less 8-bit folding A/D converter. In Proceedings of
the IEEE Latin American Symposium on Circuits and
[48] H. Inose, T. Aoki, and K. Watanabe. Asyn- Systems, 2010.
chronous delta-modulation system. Electronics
Letters, 2(3):95–96, 1966. [60] D. Chhetri, V. N. Manyam, and J. J. Wikner. An
event-driven 8-bit ADC with a segmented resistor-
[49] P. Sharma. Characteristics of asynchronous delta- string DAC. In Proceedings of the European Con-
modulation and binary-slope-quantized-pcm sys- ference on Circuit Theory and Design, pp. 528–531,
tems. Electronic Engineering, 40(479):32–37, 1968. 2011.
[50] R. Steele. Delta Modulation Systems. Pentech Press, [61] R. Grimaldi, S. Rodriguez, and A. Rusu. A 10-bit
London, 1975. 5 kHz level-crossing ADC. In Proceedings of the
European Conference on Circuit Theory and Design,
[51] N. Jayant. Digital coding of speech waveforms: pp. 564–567, 2011.
PCM, DPCM, and DM quantizers. Proceedings of
the IEEE, 62(5):611–632, 1974. [62] R. Agarwal and S. R. Sonkusale. Input-feature cor-
related asynchronous analog to information con-
[52] E. Allier, G. Sicard, L. Fesquet, and M. Renaudin. verter for ECG monitoring. IEEE Transactions on
A new class of asynchronous A/D converters Biomedical Circuits and Systems, 5(5):459–467, 2011.
based on time quantization. In Proceedings of the
International Symposium on Asynchronous Circuits [63] Y. Hong, I. Rajendran, and Y. Lian. A new ECG
and Systems, pp. 196–205, 2003. signal processing scheme for low-power wearable
ECG devices. In Proceedings of the Asia Pacific Con-
[53] M. Kurchuk, C. Weltin-Wu, D. Morche, and ference on Postgraduate Research in Microelectronics
Y. Tsividis. Event-driven GHz-range continuous- and Electronics, pp. 74–77, 2011.
time digital signal processor with activity-
dependent power dissipation. IEEE Journal of [64] Y. Li, D. Zhao, and W. A. Serdijn. A sub-microwatt
Solid-State Circuits, 47(9):2164–2173, 2012. asynchronous level-crossing ADC for biomedical
376 Event-Based Control and Signal Processing

applications. IEEE Transactions on Biomedical Cir- [76] V. Balasubramanian, A. Heragu, and C. Enz. Anal-
cuits and Systems, 7(2):149–157, 2013. ysis of ultralow-power asynchronous ADCs. In
Proceedings of the IEEE International Symposium on
[65] W. Tang, A. Osman, D. Kim, B. Goldstein, Circuits and Systems, pp. 3593–3596, 2010.
C. Huang, B. Martini, V. A. Pieribone, and
E. Culurciello. Continuous time level crossing [77] Y. S. Suh. Send-on-delta sensor data transmission
sampling ADC for bio-potential recording sys- with a linear predictor. Sensors, 7(4):537–547, 2007.
tems. IEEE Transactions on Circuits and Systems I,
60(6):1407, 2013. [78] M. Kurchuk and Y. Tsividis. Signal-dependent
variable-resolution clockless A/D conversion
[66] D. Kościelnik and M. Miskowicz. Event-driven with application to continuous-time digital signal
successive charge redistribution schemes for processing. IEEE Transactions on Circuits and
clockless analog-to-digital conversion. In Design, Systems I, 57(5):982–991, 2010.
Modeling and Testing of Data Converters (P. Carbone,
S. Kiaey, and F. Xu, Eds.), Springer, pp. 161–209, [79] D. Brückmann, T. Feldengut, B. Hosticka,
2014. R. Kokozinski, K. Konrad, and N. Tavangaran.
Optimization and implementation of continuous
[67] X. Zhang and Y. Lian. A 300-mV 220-nW event- time DSP-systems by using granularity reduction.
driven ADC with real-time QRS detection for In Proceedings of the IEEE International Symposium
wearable ECG sensors. IEEE Transactions on on Circuits and Systems, pp. 410–413, 2011.
Biomedical Circuits and Systems, 8(6):834–843, 2014.
[80] D. Brückmann, K. Konrad, and T. Werthwein.
[68] [Online] Novelda AS. https://www.novelda.no, Concepts for hardware efficient implementation
accessed on July 10, 2014. of continuous time digital signal processing.
[69] G. Thiagarajan, U. Dasgupta, and V. Gopinathan. In Event-Based Control and Signal Processing (M.
Asynchronous analog-to-digital converter. US Miskowicz, Ed.), pp. 421–440. CRC Press, Boca
Patent application 2014/0 062 734 A1, March 6, Raton, FL 2015.
2014. [81] J. Yen. On nonuniform sampling of bandwidth-
[70] G. Thiagarajan, U. Dasgupta, and V. Gopinathan. limited signals. IRE Transactions on Circuit Theory,
Asynchronous analog-to-digital converter having 3(4):251–257, 1956.
adaptive reference control. US Patent application [82] F. J. Beutler. Error-free recovery of signals from
2014/0 062 735 A1, March 6, 2014. irregularly spaced samples. Siam Review, 8(3):
[71] G. Thiagarajan, U. Dasgupta, and V. Gopinathan. 328–335, 1966.
Asynchronous analog-to-digital converter having [83] C. Vezyrtzis and Y. Tsividis. Processing of sig-
rate control. US Patent application 2014/0 062 751 nals using level-crossing sampling. In Proceedings
A1, March 6, 2014. of the IEEE International Symposium on Circuits and
[72] Y. W. Li, K. L. Shepard, and Y. Tsividis. Systems, pp. 2293–2296, 2009.
A continuous-time programmable digital FIR
[84] N. T. Thao. Event-based data acquisition and
filter. IEEE Journal of Solid-State Circuits, 41(11):
reconstruction—Mathematical background. In
2512–2520, 2006.
Event-Based Control and Signal Processing
[73] M. Miskowicz. Send-on-delta concept: An event- (M. Miskowicz, Ed.), pp. 379–408. CRC Press,
based data reporting strategy. Sensors, 6(1):49–63, Boca Raton, FL 2015.
2006.
[85] W. Tang, C. Huang, D. Kim, B. Martini, and
[74] K. Kozmin, J. Johansson, and J. Kostamovaara. E. Culurciello. 4-channel asynchronous bio-
A low propagation delay dispersion comparator potential recording system. In Proceedings of
for a level-crossing A/D converter. Analog Inte- the IEEE International Symposium on Circuits and
grated Circuits and Signal Processing, 62(1):51–61, Systems, pp. 953–956, 2010.
2010.
[86] M. Trakimas and S. R. Sonkusale. An adaptive res-
[75] C. Weltin-Wu and Y. Tsividis. An event-driven olution asynchronous ADC architecture for data
clockless level-crossing ADC with signal- compression in energy constrained sensing appli-
dependent adaptive resolution. IEEE Journal cations. IEEE Transactions on Circuits and Systems I:
of Solid-State Circuits, 48(9):2180–2190, 2013. Regular Papers, 58(5):921–934, 2011.
Event-Based Data Acquisition and Digital Signal Processing in Continuous Time 377

[87] P. Martinez-Nuevo, S. Patil, and Y. Tsividis. [98] N. Tavangaran, D. Brückmann, T. Feldengut,


Derivative level-crossing sampling. IEEE Transac- B. Hosticka, R. Kokozinski, K. Konrad, and
tions on Circuits and Systems II, 62(1):11–15, 2015. R. Lerch. Effects of jitter on continuous time digital
systems with granularity reduction. In Proceed-
[88] R. P. Boas. Entire Functions. Academic Press, ings of the European Signal Processing Conference,
London, 1954. pp. 525–529, 2011.
[89] F. Aeschlimann, E. Allier, L. Fesquet, and [99] T. Redant and W. Dehaene. Joint estimation of
M. Renaudin. Asynchronous FIR filters: Towards propagation delay dispersion and time of arrival
a new digital processing chain. In Proceedings of in a 40-nm CMOS comparator bank for time-
the IEEE International Symposium on Asynchronous based receivers. IEEE Transactions on Circuits and
Circuits and Systems, pp. 198–206, 2004. Systems II: Express Briefs, 60(2):76–80, 2013.
[90] S. M. Qaisar, L. Fesquet, M. Renaudin. Computa- [100] D. K. Su and B. A. Wooley. A CMOS oversampling
tionally efficient adaptive rate sampling and filter- D/A converter with a current-mode semidigital
ing. In Proceedings of the European Signal Processing reconstruction filter. IEEE Journal of Solid-State
Conference, volume 7, pp. 2139–2143, 2007. Circuits, 28(12):1224–1233, 1993.
[91] S. M. Qaisar, R. Yahiaoui, and T. Gharbi. An effi- [101] Z. Zhao and A. Prodic. Continuous-time digi-
cient signal acquisition with an adaptive rate A/D tal controller for high-frequency DC-DC convert-
conversion. In Proceedings of the IEEE International ers. IEEE Transactions on Power Electronics, 23(2):
Conference on Circuits and Systems, pp. 124–129, 564–573, 2008.
2013.
[102] Z. Zhao, V. Smolyakov, and A. Prodic.
[92] V. Dhanasekaran, M. Gambhir, M. M. Elsayed, Continuous-time digital signal processing based
E. Sánchez-Sinencio, J. Silva-Martinez, C. Mishra, controller for high-frequency DC-DC converters.
L. Chen, and E. Pankratz. A 20MHz BW 68dB DR In Proceedings of the IEEE Applied Power Electronics
CT ΔΣ ADC based on a multi-bit time-domain Conference, pp. 882–886, 2007.
quantizer and feedback element. In Digest,
IEEE International Solid-State Circuits Conference, [103] M. T. Ozgun and M. Torlak. Effects of ran-
pp. 174–175, 2009. dom delay errors in continuous-time semi-digital
transversal filters. IEEE Transactions on Circuits and
[93] H. Pakniat and M. Yavari. A time-domain noise- Systems II, 61(1):183–190, 2014.
coupling technique for continuous-time sigma-
delta modulators. Analog Integrated Circuits and [104] D. Brückmann and K. Konrad. Optimization of
Signal Processing, 78(2):439–452, 2014. continuous time filters by delay line adjustment.
In Proceedings of the IEEE International Midwest
[94] B. Schell and Y. Tsividis. A low power tunable Symposium on Circuits and Systems, pp. 1238–1241,
delay element suitable for asynchronous delays 2010.
of burst information. IEEE Journal of Solid-State
Circuits, 43(5):1227–1234, 2008. [105] G. B. Lockhart. Digital encoding and filtering
using delta modulation. Radio and Electronic Engi-
[95] K. Konrad, D. Brückmann, N. Tavangaran, neer, 42(12):547–551, 1972.
J. Al-Eryani, R. Kokozinski, and T. Werthwein.
Delay element concept for continuous time dig- [106] A. Ratiu, D. Morche, A. Arias, B. Allard, X. Lin-Shi,
ital signal processing. In Proceedings of the IEEE and J. Verdier. Efficient simulation of continuous
International Symposium on Circuits and Systems, time digital signal processing RF systems. In Pro-
pp. 2775–2778, 2013. ceedings of the International Conference on Sampling
Theory and Applications, pp. 416–419, 2013.
[96] C. Vezyrtzis, W. Jiang, S. M. Nowick, and
Y. Tsividis. A flexible, event-driven digital filter [107] D. Brückmann. Design and realization of
with frequency response independent of input continuous-time wave digital filters. In Pro-
sample rate. IEEE Journal of Solid-State Circuits, ceedings of the IEEE International Symposium on
49(10):2292–2304, 2014. Circuits and Systems, pp. 2901–2904, 2008.
[97] Y. P. Tsividis. Integrated continuous-time filter [108] D. Hand and M.-W. Chen. A non-uniform sam-
design – an overview. IEEE Journal of Solid-State pling ADC architecture with embedded alias-free
Circuits, 29(3):166–176, 1994. asynchronous filter. In Proceedings of the IEEE
378 Event-Based Control and Signal Processing

Global Communications Conference, pp. 3707–3712, [116] P. Sharma. Signal characteristics of rectangular-
2012. wave modulation. Electronic Engineering, 40(480):
103–107, 1968.
[109] S. Sudalaiyandi, M. Z. Dooghabadi, T.-A. Vu, H. A.
Hjortland, Ø. Nass, T. S. Lande, and S. E. Hamran. [117] E. Roza. Analog-to-digital conversion via duty-
Power-efficient CTBV symbol detector for UWB cycle modulation. IEEE Transactions on Circuits and
applications. In Proceedings of the IEEE International Systems II, 44(11):907–914, 1997.
Conference on Ultra-Wideband, volume 1, pp. 1–4,
2010. [118] A. A. Lazar and L. T. Tóth. Time encoding and per-
fect recovery of bandlimited signals. In Proceedings
[110] S. Sudalaiyandi, T.-A. Vu, H. A. Hjortland, of the IEEE International Conference on Acoustics,
O. Nass, and T. S. Lande. Continuous-time single- Speech, and Signal Processing, pp. 709–712, 2003.
symbol IR-UWB symbol detection. In Proceed-
ings of the IEEE International SOC Conference, [119] A. Can, E. Sejdic, and L. F. Chaparro. An asyn-
pp. 198–201, 2012. chronous scale decomposition for biomedical sig-
nals. In Proceedings of the IEEE Signal Processing in
[111] G. M. Jacobs and R. W. Brodersen. A fully Medicine and Biology Symposium, pp. 1–6, 2011.
asynchronous digital signal processor using
self-timed circuits. IEEE Journal of Solid-State [120] N. Tavangaran, D. Brückmann, R. Kokozinski, and
Circuits, 25(6):1526–1537, 1990. K. Konrad. Continuous time digital systems with
asynchronous sigma delta modulation. In Proceed-
[112] L. W. Couch, M. Kulkarni, and U. S. Acharya. Dig- ings of the European Signal Processing Conference,
ital and Analog Communication Systems, 6th edition. pp. 225–229, 2012.
Prentice Hall, Upper Saddle River, 1997.
[121] A. Can-Cimino, E. Sejdic, and L. F. Chaparro.
[113] B. Logan. Click modulation. AT&T Bell Laboratories Asynchronous processing of sparse signals. Signal
Technical Journal, 63(3):401–423, 1984. Processing, 8(3):257–266, 2014.

[114] R. Kumaresan and Y. Wang. On representing sig- [122] C. Vezyrtzis, Y. Tsividis, and S. M. Nowick.
nals using only timing information. The Journal of Designing pipelined delay lines with
the Acoustical Society of America, 110(5):2421–2439, dynamically-adaptive granularity for low-energy
2001. applications. In International Conference on
Computer-Aided Design, pp. 329–336, 2012.
[115] C. Kikkert and D. Miller. Asynchronous delta
sigma modulation. In Proceedings of the IREE,
36:83–88, 1975.
16
Event-Based Data Acquisition and Reconstruction—Mathematical Background

Nguyen T. Thao
City College of New York
New York, NY, USA

CONTENTS
16.1 Linear Operator Approach to Sampling and Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
16.1.1 Linear Approach to Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
16.1.2 Linear Approach to Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
16.1.3 Functional Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
16.1.4 Generalized Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
16.1.5 Examples of Sampling Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
16.1.6 Sampling–Reconstruction System and Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
16.1.7 Revisiting Continuous-Time Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
16.2 The Case of Periodic Bandlimited Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
16.2.1 Space of Periodic Bandlimited Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
16.2.2 Sampling Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
16.2.3 “Nyquist Condition” for Direct Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
16.2.4 Sampling Pseudoinverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
16.2.5 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
16.2.5.1 Fourier Coefficients of Circular Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
16.3 Sampling and Reconstruction Operators in Hilbert Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
16.3.1 Adjoint Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
16.3.2 Orthogonality in Hilbert Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
16.3.3 Injectivity of S and Surjectivity of S ∗ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
16.3.4 Frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
16.3.5 Tight Frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
16.3.6 Neumann Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
16.3.7 Critical Sampling and Oversampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
16.3.8 Optimality of Pseudoinverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
16.3.9 Condition Number of Sampling–Reconstruction System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
16.4 Direct Sampling of Aperiodic Bandlimited Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
16.4.1 Oversampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
16.4.2 Critical Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
16.4.3 Lagrange Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
16.4.4 Weighted Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
16.5 Constrained Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
16.5.1 Condition for (S , R) to Be Admissible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
16.5.2 Known Cases Where ‖I − RCS‖ < 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
16.5.3 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
16.5.4 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
16.6 Deviation from Uniform Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
16.6.1 Sensitivity to Sampling Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
16.6.2 Analysis of ‖I − RS‖ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398


16.6.3 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
16.6.3.1 Proof of (16.90) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
16.6.3.2 Justification of (16.93) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
16.7 Reconstruction by Successive Approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
16.7.1 Basic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
16.7.2 Reconstruction Estimates with Sampling Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
16.7.3 Estimate Improvement by Contraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
16.7.4 General Case of Admissible Pair (S , R) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
16.8 Discrete-Time Computation Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
16.8.1 Discrete-Time Processor D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
16.8.2 Synchronous Sampling of Reconstructed Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
16.8.3 Lagrange Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
16.8.4 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
16.8.4.1 Proof of (16.114) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
16.8.4.2 Derivation of the Bound (16.119) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
ABSTRACT Event-based signal acquisition differs of the crossings (together with the corresponding
from traditional data acquisition in the features that one level indices). One wonders about the potential for
tries to capture from a continuous-time signal. But as these crossings to uniquely characterize the input and
a common point, one obtains a discrete description of about the possibility of better reconstructions than
the input signal in the form of a sequence of real values. what continuous-time quantization proposes, with the
Although these values are traditionally uniform samples ultimate question of perfect reconstruction.
of the input, they can be in the event-based approach Answers to these questions can be found in some sub-
nonuniform samples, and more generally innerprod- stantial literature on the nonuniform sampling of band-
ucts of the input with known kernel functions. But the limited signals [3–7]. A number of well-known surveys
same theoretical question of input reconstruction from have also appeared, including [8] on the various aspects
these discrete values as in Shannon sampling theorem is and applications of nonuniform sampling and [9,10] on
posed. We give a tutorial on the difficult mathematical the more mathematical aspects. This literature may how-
background behind this question and indicate practical ever still remain of difficult access to researchers and
methods of signal recovery based on sliding-window engineers without a special inclination for applied math-
digital signal processing. ematics. Indeed, it often requires advanced knowledge
on functional and complex analysis that may not be part
Event-based continuous-time data acquisition and of the common background. In this chapter, we propose
signal processing has been discussed in Chapter 15. to walk through this knowledge by motivating the most
They promote the idea to acquire the characteristics of theoretical notions involved, while keeping a connection
a continuous-time signal by detecting the time instants with the constraints of real implementations.
of precise amplitude-related events and to process From a signal processing viewpoint, the main tech-
the result. Advantages of this method compared with nical difficulty in this topic is the absence of simulta-
traditional pulse-code modulation (PCM) include the neous time invariance and linearity in the operations.
absence of aliasing, improved spectral properties, faster Event-driven operations such as in continuous-time
response to input changes, and power savings. This quantization are by construction time invariant but not
different type of data acquisition however escapes linear. Meanwhile, analyzing this type of processing
from the setting of Shannon sampling theorem, and the from the perspective of nonuniform sampling implies
theoretical question of signal reconstruction from the the construction of operators that are linear but not
discrete events needs to be mathematically revisited. time invariant. In either case, the usual tools of sig-
For illustration, a concrete example of event- nal processing such as Fourier analysis collapse and
based data acquisition proposed in Chapter 15 [1] more general theories need to be involved. The appeal-
is continuous-time amplitude quantization [2]. As in ing point of the nonuniform sampling approach is the
Figure 16.1, this operation can be viewed as providing preservation of linearity, a property that is at least com-
a piecewise constant approximation of the input based monly understood in finite-dimensional vector spaces
on its crossings with predetermined levels. Funda- where linear operations are described by matrices. We
mentally, the exact information about the input signal take advantage of this common background to progres-
that is acquired by the quantizer is the time instants sively lead the reader to nonuniform sampling in the
Event-Based Data Acquisition and Reconstruction—Mathematical Background 381

infinite-dimensional space of square-integrable bandlimited signals.

16.1 Linear Operator Approach to Sampling and Reconstruction

16.1.1 Linear Approach to Sampling

Figure 16.1 gives an example where a continuous-time signal x(t) satisfies the relations x(t_n) = x_n for a known sequence of (nonuniform) instants (t_n)_{n∈Z} and a known sequence of amplitude values (x_n)_{n∈Z}. We say that (t_n, x_n)_{n∈Z} are samples of x(t). Assuming that x(t) belongs to a known space of bandlimited continuous-time signals B, one can formalize the knowledge that x(t_n) = x_n for all n ∈ Z by defining the mapping

    S : B → D,  u(t) ↦ u := (u(t_n))_{n∈Z},    (16.1)

where D is a vector space of real discrete-time sequences, and by writing that the image of x(t) by this mapping is x := (x_n)_{n∈Z}. We use the short notation S x(t) = x.

FIGURE 16.1
Continuous-time amplitude quantization.

A point that needs to be emphasized is that S is specifically defined for the given sampling instants (t_n)_{n∈Z}. In this approach, we are no longer concerned about how these instants have been obtained (by the event-driven system, such as the continuous-time quantizer), but only about what would happen if another bandlimited input was sampled at these same instants. In that sense, S is a virtual mapping. But S is mathematically completely defined, and if the equation x = S u(t) has a unique solution in u(t), we will know (at least theoretically) that the samples (t_n, x_n)_{n∈Z} uniquely characterize x(t). This is a minimal requirement for the perfect reconstruction of x(t) to be possible from these samples.

The first important feature of this approach is that S is linear, as can be easily verified. According to standard mathematical terminology, S is called a linear operator, or simply an operator from B to D. As a result, it is easy to see that x = S u(t) has a unique solution in u(t) if and only if the null space of S defined by

    Null(S) := {u(t) ∈ B : S u(t) = 0},    (16.2)

is reduced to the zero signal. This is because x = S u(t) if and only if x = S(u(t)+v(t)) for all v(t) ∈ Null(S), since S(u(t)+v(t)) = S u(t) + S v(t) by linearity of S. Interestingly, the property that Null(S) = {0} (here 0 denotes the zero function) does not involve the amplitude sequence x = (x_n)_{n∈Z}, but only depends on S and hence on the instants (t_n)_{n∈Z}. In fact, this property implies that the equation u = S u(t) has a unique solution in u(t) for any u in the range of S, which is the linear subspace of D defined by

    Range(S) := {S u(t) : u(t) ∈ B}.    (16.3)

It is said in this case that S is one-to-one, or injective. In conclusion, x(t) can be uniquely reconstructed from its samples (t_n, x_n)_{n∈Z} if and only if S is injective. It will be naturally desirable to find the condition on (t_n)_{n∈Z} for this property to be realized.

16.1.2 Linear Approach to Reconstruction

Assuming that S has been shown to be injective, the second difficulty is to find a formula to reconstruct x(t) from x = S x(t). For any u ∈ Range(S), we know from the previous section that there exists a unique function u(t) ∈ B such that u = S u(t). By calling S⁻¹u the unique solution, we obtain a mapping S⁻¹ from Range(S) back to B that is easily shown to be linear as well. As a particular case, we will have x(t) = S⁻¹x. The difficulty here is not just the analytical derivation of S⁻¹ but the characterization of its domain Range(S). It is easier to search for a linear mapping from the whole space D to B that coincides with S⁻¹ on Range(S). In this chapter, we call in general a reconstruction operator any linear function that maps D into B. Although Range(S) is not known explicitly, a reconstruction operator R coincides with S⁻¹ on Range(S) if and only if u(t) = Ru whenever u = S u(t). Denoting by RS the operator such that RS u(t) = R(S u(t)), this is equivalent to writing that

    RS = I,

where I is the identity mapping of B. It is said in this case that R is a left inverse of S.
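Since the direct sampling operator of (16.1) is defined pointwise, its linearity can be checked numerically. The following sketch is not from the chapter: it uses a trigonometric polynomial as a finite-dimensional stand-in for a bandlimited signal of B, and the helper names `bandlimited` and `S`, the instants, and the coefficients are all illustrative assumptions.

```python
import numpy as np

def bandlimited(X, Tp=1.0):
    """u(t) = sum_k X[k] e^{j 2 pi k t / Tp}: a trigonometric-polynomial stand-in for B."""
    L = (len(X) - 1) // 2
    ks = np.arange(-L, L + 1)
    return lambda t: (X * np.exp(2j * np.pi * ks * t / Tp)).sum()

def S(u, instants):
    """Direct sampling operator (16.1): u(t) -> (u(t_n))_n."""
    return np.array([u(tn) for tn in instants])

t_n = np.array([0.05, 0.21, 0.43, 0.58, 0.91])   # illustrative nonuniform instants
rng = np.random.default_rng(0)
Xu, Xv = rng.standard_normal(5), rng.standard_normal(5)
u, v = bandlimited(Xu), bandlimited(Xv)
w = bandlimited(2 * Xu - 3 * Xv)
# Linearity of S: S(2u - 3v) = 2 S u - 3 S v
assert np.allclose(S(w, t_n), 2 * S(u, t_n) - 3 * S(v, t_n))
```

The same check run with a squaring nonlinearity in place of `u(t)` would fail, which is the distinction drawn above between event-driven quantization (nonlinear, time invariant) and nonuniform sampling (linear, time varying).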
The advantage of working on the whole space D is that R can be uniquely characterized by its action on the shifted impulse sequences δ_k := (δ_{n−k})_{n∈Z}, where δ_n is equal to 1 at n = 0 and 0 when n ≠ 0. Indeed, for any u = (u_n)_{n∈Z} ∈ D, we have u = ∑_{k∈Z} u_k δ_k, and the application of R on both sides of this equality yields by linearity

    Ru = ∑_{n∈Z} u_n r_n(t),    (16.4)

where

    r_n(t) := Rδ_n.    (16.5)

We call {r_n(t)}_{n∈Z} the impulse responses of R. All of them are bandlimited functions in B. The convergence of the above summation requires certain constraints on the signal spaces and operators that will be presented in Section 16.1.3. By composing S with R, we then have

    RS u(t) = ∑_{n∈Z} u(t_n) r_n(t).    (16.6)

We can then conclude that R is a left inverse of S if and only if its impulse responses {r_n(t)}_{n∈Z} satisfy the equation

    u(t) = ∑_{n∈Z} u(t_n) r_n(t),    (16.7)

for all u(t) ∈ B. In this case, we call {r_n(t)}_{n∈Z} the reconstruction functions. This gives an equation similar to the Shannon sampling theorem, except that r_n(t) is not the shifted version of a single function r(t). The difficulty is of course to find functions r_n(t) that satisfy (16.7) for all u(t) ∈ B.

16.1.3 Functional Setting

Until now, we have remained vague about the signal spaces. For an infinite summation such as in (16.4) to be defined, the signal spaces need to be at least equipped with some metric to define the notion of convergence. This will also be needed when dealing with the sensitivity of reconstructions to errors in samples. In this chapter, we assume that B is the space of square-integrable signals bandlimited by a maximum frequency Ω_o. In this space, the magnitude of an error signal e(t) can be measured by ‖e(t)‖ such that

    ‖u(t)‖² := (1/T_o) ∫_{−∞}^{∞} |u(t)|² dt,    (16.8)

where T_o := π/Ω_o is the Nyquist period of the signals of B. [Footnote: Contrary to the default mathematical practice, we have included the factor 1/T_o in the definition of ‖·‖² so that ‖u(t)‖ is homogeneous to an amplitude.] Meanwhile, we take D to be the space of square-summable sequences [often denoted by ℓ²(Z)] in which the magnitude of an error sequence e = (e_n)_{n∈Z} is measured by ‖e‖_D such that

    ‖u‖²_D := ∑_{n∈Z} |u_n|².    (16.9)

These space restrictions imply constraints on the sampling operator S and hence on the sampling instants (t_n)_{n∈Z}. Indeed, without any restrictions on these instants, S does not necessarily map B into D [meaning that u = (u(t_n))_{n∈Z} is not necessarily square summable when u(t) is square integrable and bandlimited, which would typically happen when the values of (t_n)_{n∈Z} are too dense in time]. In fact, it is desirable to require the stronger condition that S be continuous with respect to the norms ‖·‖ and ‖·‖_D, so that ‖S u(t)‖_D is not only finite for any u(t) ∈ B but can be made arbitrarily small by making the norm ‖u(t)‖ small enough. Given the linearity of S, this is realized if and only if there exists a constant β > 0 such that

    ‖S u(t)‖_D ≤ β ‖u(t)‖,    (16.10)

for all u(t) ∈ B. It is said in this case that S is bounded. The smallest possible value of β is called the norm of S and is equal to

    ‖S‖ := sup_{u(t)∈B\{0}} ‖S u(t)‖_D / ‖u(t)‖ = sup_{u(t)∈B, ‖u(t)‖=1} ‖S u(t)‖_D,    (16.11)

where B\{0} denotes the set B without its zero element. [Footnote: The supremum of a function is its smallest upper bound. When this value is attained by the function, it is the same as the maximum of the function. The infimum of a function is defined in a similar manner.] Then, S is bounded if and only if ‖S‖ < ∞. A task will be to find the condition on (t_n)_{n∈Z} for this property to be achieved.

Naturally, these space restrictions also imply constraints on any considered reconstruction operator R. Not only must the impulse responses {r_n(t)}_{n∈Z} be functions of B, but it is also desirable that R be continuous with respect to ‖·‖_D and ‖·‖. This is again ensured by imposing that R be bounded, such that ‖Ru‖ ≤ γ‖u‖_D for some γ > 0 and any u ∈ D. It can be seen that this condition simultaneously guarantees the unconditional convergence of the sum in (16.4).

16.1.4 Generalized Sampling

The operator S of (16.1) can be seen as a particular case of operators of the type

    S : B → D,  u(t) ↦ u = (S_n u(t))_{n∈Z},    (16.12)

where S_n is for each n a linear functional of B, that is, a continuous linear function from B to R.
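The influence of the sampling instants on the bound β of (16.10) can be illustrated numerically in the periodic, finite-dimensional setting introduced later in Section 16.2, where ‖u(t)‖² = ∑_k |X_k|² by Parseval and the tightest β is therefore the largest singular value of the sampling matrix. This is our own toy example, not the chapter's; the instants and parameters are arbitrary.

```python
import numpy as np

Tp, L = 1.0, 3
K = 2 * L + 1
ks = np.arange(-L, L + 1)

def sampling_matrix(instants):
    # Entry (n, k) = e^{j 2 pi k t_n / Tp}: maps Fourier coefficients X to samples u(t_n)
    return np.exp(2j * np.pi * np.outer(instants, ks) / Tp)

spread = np.linspace(0, Tp, K, endpoint=False)   # well-spread instants
clustered = np.linspace(0, 0.05 * Tp, K)         # instants packed into 5% of the period
# With ||u||^2 = sum_k |X_k|^2 (Parseval), the tightest beta in (16.10) is the
# largest singular value of the sampling matrix.
beta_spread = np.linalg.norm(sampling_matrix(spread), 2)
beta_clustered = np.linalg.norm(sampling_matrix(clustered), 2)
assert beta_clustered > beta_spread              # dense instants inflate the bound
```

Clustering the instants inflates β, which is the finite-dimensional shadow of the remark above that instants "too dense in time" can prevent S from mapping B into D at all.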
Most of the material covered until now is actually not specific to the definition S_n u(t) = u(t_n) implied by (16.1) and can be restated in the general case of (16.12). One obtains a linear functional of B by forming, for example, the mapping

    S_n u(t) = (1/T_o) ∫_{−∞}^{∞} s_n(t) u(t) dt,    (16.13)

where s_n(t) is some signal in B. In fact, any linear functional of B can be put in this form due to the Riesz representation theorem. Using the standard inner-product notation

    ⟨x, y⟩ := (1/T_o) ∫_{−∞}^{∞} x(t) y(t) dt,    (16.14)

we now call a sampling operator any mapping of the form

    S : B → D,  u(t) ↦ u = (⟨s_n, u⟩)_{n∈Z}.    (16.15)

In this generalized context, we call each value ⟨s_n, u⟩ a sample of u(t). In the same way that a reconstruction operator R is uniquely defined by the family of its impulse responses {r_n(t)}_{n∈Z}, the operator S is uniquely defined by the family of functions {s_n(t)}_{n∈Z}, which we call the kernel functions of S.

The composed operator RS of (16.6) yields the more general expression

    RS u(t) = ∑_{n∈Z} ⟨s_n, u⟩ r_n(t).    (16.16)

By injecting the integral expression of ⟨s_n, u⟩ into Equation 16.16 and interchanging the summation and the integral, we obtain

    RS u(t) = ∫_{−∞}^{∞} h(t, τ) u(τ) dτ,    (16.17)

where

    h(t, τ) := (1/T_o) ∑_{n∈Z} r_n(t) s_n(τ).    (16.18)

This gives the classic kernel representation of RS as a linear (but not time-invariant) transformation.

16.1.5 Examples of Sampling Operator

We derive the kernel functions of important examples of sampling operator. A basic class of operators is achieved with kernel functions s_n(t) of the type s_n(t) = s(t−t_n), where s(t) is a fixed bandlimited function of B. It is easy to see from (16.14) that

    ⟨s(t−t_n), u(t)⟩ = (1/T_o) (s̄ ∗ u)(t_n),    (16.19)

where ∗ is the convolution product and s̄(t) := s(−t). Defining

    s_o(t) := sinc(t/T_o),    (16.20)

where

    sinc(x) := sin(πx)/(πx),

it is easy to see that

    ⟨s_n, u⟩ = u(t_n)  with  s_n(t) := s_o(t−t_n).    (16.21)

This is because (1/T_o) s_o(t) is even and is the impulse response of the ideal low-pass filter of cutoff frequency Ω_o, so that (1/T_o)(s̄_o ∗ u)(t) = ((1/T_o) s_o ∗ u)(t) = u(t) for any u(t) ∈ B. This leads to the initial sampling operator S of (16.1), which we call the direct sampling operator at the instants (t_n)_{n∈Z} to distinguish it from the other upcoming operators. Sampling the derivative u′(t) of u(t) can also be achieved in this manner. Indeed, as u′(t) also belongs to B, u′(t_n) = (1/T_o)(s_o ∗ u′)(t_n) = (1/T_o)(s′_o ∗ u)(t_n) by integration by parts. As s′_o(t) is an odd function, it follows from (16.19) that ⟨−s′_o(t−t_n), u(t)⟩ = u′(t_n). Denoting by u^(i)(t) the ith derivative of u(t), we have more generally

    ⟨s_n, u⟩ = u^(i)(t_n)  with  s_n(t) := (−1)^i s_o^(i)(t−t_n).

There are also interesting cases where s_n(t) is not the shifted version of a single function s(t). For example,

    ⟨s_n, u⟩ = ∫_{t_n}^{t_{n+1}} u(t) dt  with  s_n(t) := s_o(t) ∗ p_n(t),    (16.22)

where p_n(t) is the rectangular function defined in Figure 16.2a. This result is easily derived from the general identity

    ⟨x, f ∗ y⟩ = ⟨f̄ ∗ x, y⟩,    (16.23)

since ⟨s_n, u⟩ = ⟨s_o ∗ p_n, u⟩ = ⟨p_n, s̄_o ∗ u⟩ = ⟨p_n, T_o u⟩ = ∫_{t_n}^{t_{n+1}} u(t) dt. This corresponds to the sampling of an integrated version of u(t) followed by the differentiation of consecutive samples. This type of sampling has been notably used in [11].
16.1.6 Sampling–Reconstruction System and Optimization

Usually, the type of sampling operator S is imposed by the application, and the freedom of design for the reconstruction operator R is limited due to the constraints of analog circuit implementation. As left inverses of S are most likely not rigorously implementable in practice, the goal is at least to minimize the error ‖û(t)−u(t)‖ of the reconstructed signal û(t) := RS u(t). As the sampled signal u = S u(t) is discrete in time, we consider the larger framework of sampling–reconstruction systems that include an intermediate discrete-time processor D as shown in Figure 16.3. In this case, û(t) := RDS u(t). Since û(t)−u(t) = (RDS−I)u(t) and RDS−I is linear, the error ‖û(t)−u(t)‖ is proportional to ‖u(t)‖. It is then desirable to optimize D and R toward the minimization of the coefficient of proportionality for the given sampling operator S. This is formalized as the minimization of ‖RDS−I‖, where, for any linear operator A of B,

    ‖A‖ := sup_{u(t)∈B\{0}} ‖Au(t)‖ / ‖u(t)‖ = sup_{u(t)∈B, ‖u(t)‖=1} ‖Au(t)‖.

FIGURE 16.2
(a) Rectangular functions p_n(t) and p̌_n(t) with ť_n := ½(t_{n−1} + t_n). (b) Triangular function l_n(t).

FIGURE 16.3
Framework of RDS system for the approximate reconstruction of x(t) from (x_n)_{n∈Z} = S x(t): sampling operator S, discrete-time processor D, reconstruction operator R.

Once D and R have been designed, one approximately reconstructs x(t) with x̂(t) = RDx, where x = S x(t).

16.1.7 Revisiting Continuous-Time Quantization

Consider again continuous-time quantization, presented in the introduction as an example of event-driven signal approximation. Assume that x(t) ∈ B is quantized in amplitude as illustrated in Figure 16.1, to give a piecewise constant approximation y(t). The best reconstruction of x(t) that can be performed from y(t) is to remove the out-of-band content of y(t). This leads to the signal

    x̂(t) = ϕ_o(t) ∗ y(t),

where ϕ_o(t) is the impulse response of the bandlimitation filter, explicitly equal to

    ϕ_o(t) := (1/T_o) sinc(t/T_o).    (16.24)

Interestingly, the transformation from x(t) to x̂(t) can be modeled as an RDS system as described in Figure 16.3. The quantized signal y(t) is a piecewise constant function of the form y(t) = ∑_{n∈Z} y_n p_n(t), where y_n is the constant value of y(t) in the interval [t_n, t_{n+1}) and p_n(t) is the rectangular function of Figure 16.2a. We formally write y(t) = P y, where y = (y_n)_{n∈Z} and

    P u := ∑_{n∈Z} u_n p_n(t).    (16.25)

This is basically the zero-order hold operation at the nonuniform clock instants (t_n)_{n∈Z}. The bandlimited signal x̂(t) is then equal to Ry, where

    Ru := ϕ_o(t) ∗ P u.    (16.26)

R can be equivalently presented as the reconstruction operator of impulse responses

    r_n(t) := ϕ_o(t) ∗ p_n(t).

At each instant t_n, the input x(t) must cross the quantization level lying between y_{n−1} and y_n in amplitude. When quantization is uniform as in the example of Figure 16.1, we must have x(t_n) = x_n, where x_n := ½(y_{n−1} + y_n). The sequence x = (x_n)_{n∈Z} is then equal to S x(t) with the direct sampling operator of (16.1), and y = Dx, where D can be described as the IIR filter of recursive equation

    y_n = 2x_n − y_{n−1}.    (16.27)

We have thus established the three relations x = S x(t), y = Dx, and x̂(t) = Ry of the RDS system of Figure 16.3.

The introduction raised the theoretical question of possible reconstruction improvements after amplitude quantization. In the equivalent RDS system model, S is imposed by the quantizer since it is the direct sampling operator at the instants (t_n)_{n∈Z} where the input x(t) of interest crosses the quantization thresholds.
Naturally, little improvement will be expected if the sequence (t_n)_{n∈Z} is such that S is not injective. So assume that the injectivity of S is effective. Possible improvements will depend on how much freedom of design one has on the operators D and R. In the basic implementation of amplitude quantization, the zero-order hold function P is limited to operate on discrete-valued sequences (y_n)_{n∈Z}. This greatly limits the freedom of design of the operator D, which is to output these sequences. By allowing zero-order hold operations on real-valued sequences, we will see under reasonable conditions that there exists a discrete-time operator D such that RDS = I, thus allowing the perfect reconstruction of x(t) from x = S x(t).

16.2 The Case of Periodic Bandlimited Signals

The previous section left a number of difficult unanswered questions, such as the condition for S to be injective and a formula for finding a left inverse R. These questions are surprisingly easy to answer in the context of periodic bandlimited signals, assuming a period that is a multiple of their Nyquist period. This is because the signals of a given bandwidth and a given period form a vector space of finite dimension. The sampling–reconstruction problem is then brought to the familiar ground of linear algebra in finite dimension. Although new issues appear when taking the period to the limit of infinity, the periodic case provides some useful basic intuition while also permitting easy and rigorous computer simulations.

16.2.1 Space of Periodic Bandlimited Signals

A signal x(t) that is T_p-periodic and bandlimited yields a finite Fourier series expansion of the form

    x(t) = ∑_{k=−L}^{L} X_k e^{j2πkt/T_p},    (16.28)

for some positive integer L. We call B the space of all signals of this form for a given L. This is a vector space of dimension K := 2L + 1. Any signal x(t) of B is uniquely defined by the K-dimensional vector X of its Fourier coefficients (X_{−L}, ..., X_L), which we formally write as

    X = F x(t).

The mapping F can be thought of as a discrete-frequency Fourier transform. It satisfies the generalized Parseval equality, which states that

    (1/T_p) ∫_0^{T_p} x(t) y(t) dt = ∑_{k=−L}^{L} X_k* Y_k,

where X := F x(t) and Y := F y(t) for any x(t), y(t) ∈ B, and X_k* denotes the complex conjugate of X_k. In short notation, we write

    ⟨x, y⟩ = X* Y,    (16.29)

where

    ⟨x, y⟩ := (1/T_p) ∫_0^{T_p} x(t) y(t) dt,

X and Y are seen as column vectors, and M* denotes the conjugate transpose of a matrix M, also called the adjoint of M, such that (M*)_{i,j} is the complex conjugate of M_{j,i} for all i, j.

16.2.2 Sampling Operator

We show how generalized sampling introduced in Section 16.1.4 is formulated in finite dimension. In its generality, a sampling operator is any linear function S that maps a signal x(t) of B into a vector x of N values. Like any linear mapping in finite dimension, S can be represented by a matrix. We call S the matrix that maps the K components of x(t) in the Fourier basis {e^{j2πkt/T_p}}_{−L≤k≤L} into x = S x(t) ∈ R^N. This matrix is rectangular of size N×K. It can be concisely defined as

    S := S F⁻¹.    (16.30)

Then, S x(t) = SX where X := F x(t). Let S_n be the nth column vector of S* so that

    S* = [S_1  S_2  ···  S_N].    (16.31)

As S_n* is the nth row vector of S, SX is the N-dimensional vector of components (S_n* X)_{1≤n≤N}. Defining

    s_n(t) := F⁻¹ S_n,    (16.32)

it follows from (16.29) that S_n* X = ⟨s_n, x⟩. Thus, any sampling operator can be put in the form

    S : B → R^N,  x(t) ↦ x = (⟨s_n, x⟩)_{1≤n≤N}.    (16.33)
16.2.3 "Nyquist Condition" for Direct Sampling

We saw in Section 16.1.1 that a necessary and sufficient condition for a linear mapping S to allow the theoretical recovery of x(t) from S x(t) is that S be injective. In the finite-dimensional case of (16.33), it is a basic result of linear algebra that S is injective if and only if the dimension of its range, also called the rank of S, is equal to the dimension K of B. One immediately concludes that

    N ≥ K,    (16.34)

is a necessary condition for S to be injective. The sufficiency of this condition of course depends on S. Assuming that N ≥ K, S is injective if and only if its matrix S has full rank K. We are going to show that this is always realized with the direct sampling operator

    S x(t) = (x(t_1), x(t_2), ..., x(t_N)),    (16.35)

for N distinct instants t_1, t_2, ..., t_N in [0, T_p). From (16.30), SX = S x(t) where x(t) = F⁻¹X. From (16.28), we have

    x(t_n) = ∑_{k=−L}^{L} e^{j2πkt_n/T_p} X_k.

Since x(t_n) is the nth component of SX, the entries of the matrix S are

    S_{n,k} = e^{j2πkt_n/T_p},    (16.36)

for n = 1, ..., N and k = −L, ..., L. The first K row vectors of S form the Vandermonde matrix (u_n^k)_{1≤n≤K, −L≤k≤L}, where u_n := e^{j2πt_n/T_p}. Its determinant is in magnitude equal to ∏_{1≤n<m≤K} |u_m − u_n|. Since t_1, ..., t_K are distinct in [0, T_p), then u_1, ..., u_K are distinct on the unit circle. This makes the determinant nonzero and hence S of rank K. Thus, with direct sampling, x(t) can be recovered from S x(t) if and only if the number N of samples is at least equal to the number of Nyquist periods within one period of x(t). This constitutes the Nyquist condition for the nonuniform sampling of periodic signals. Note that there is no condition on the location of the sampling instants, only a condition on their number. We say that the sampling is critical when N = K. This implies the idea that no sample can be dropped for the reconstruction to be possible. The term oversampling is used when N > K.

Returning to general sampling operators of the type (16.33), N ≥ K is also sufficient for S to be injective when s_n(t) = s(t−t_n), where t_1, ..., t_N are distinct in [0, T_p) and s(t) is a function of B whose Fourier coefficients S_{−L}, ..., S_L are nonzero. This is because, similarly to (16.19),

    ⟨s(t−t_n), u(t)⟩ = (1/T_p) (s̄ ⊛ u)(t_n),    (16.37)

where (x ⊛ y)(t) := ∫_0^{T_p} x(τ) y(t−τ) dτ, and the mapping u(t) ↦ (s̄ ⊛ u)(t) is invertible in B as shown in Appendix 16.2.5. The operator S such that ⟨s_n, x⟩ = ∫_{t_n}^{t_{n+1}} x(t) dt for all n = 1, ..., N, with t_{N+1} := t_1 + T_p, can also be shown to be injective when N ≥ K.

16.2.4 Sampling Pseudoinverse

Once a sampling operator S of the form (16.33) has been shown to be injective, the next question is to find a left inverse R. In the present context of periodic signals, a reconstruction operator R is any linear mapping from R^N to B. Consider the reconstruction operator S* defined by

    S* x = ∑_{n=1}^{N} x_n s_n(t),    (16.38)

for all x ∈ R^N. There is no reason for S* to be a left inverse of S. However, it can be shown that the composed operator S*S is systematically invertible in B given the injectivity of S. This is easily seen by matrix representation. By linearity of F, F S* x = ∑_{n=1}^{N} x_n F s_n(t) = ∑_{n=1}^{N} x_n S_n from the relation (16.32). Therefore, F S* is the K×N matrix [S_1 S_2 ··· S_N]. It follows from (16.31) that

    S* = F S*.    (16.39)

With (16.30), we obtain that S*S = F⁻¹(S*S)F. From the injectivity assumption on S, S is an N×K matrix with N ≥ K and full rank K. It is a basic result of linear algebra that S*S is also of rank K. But since S*S is a K×K matrix, it is invertible. This implies the invertibility of S*S. We can then define the reconstruction operator

    S⁺ := (S*S)⁻¹ S*.    (16.40)

Since S⁺S = (S*S)⁻¹S*S = I, where I is the identity operator of B, S⁺ is a left inverse of S. The operator S⁺ is called the pseudoinverse of S. By applying (S*S)⁻¹ on both sides of (16.38), we have

    S⁺x = ∑_{n=1}^{N} x_n s̃_n(t)  where  s̃_n(t) := (S*S)⁻¹ s_n(t).    (16.41)

We show in Figure 16.4 examples of functions {s_n(t)}_{1≤n≤N} and {s̃_n(t)}_{1≤n≤N} for the direct sampling operator of (16.35). According to (16.37), this operator is obtained by taking s_n(t) = s_o(t−t_n), where s_o(t) is such that (1/T_p)(s̄_o ⊛ u)(t) = u(t). As shown in Appendix 16.2.5, this is achieved by the function

    s_o(t) := ∑_{k=−L}^{L} e^{j2πkt/T_p}.    (16.42)

One can also show that s_o(t) is a T_p-periodized sinc function. Note from Figure 16.4 that, contrary to s_n(t), s̃_n(t) is not the shifted version of a single function. This makes the derivation of s̃_n(t) difficult in practice.
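In coefficient space, the pseudoinverse (16.40) becomes the ordinary matrix least-squares pseudoinverse, so perfect recovery under oversampling can be verified directly. A sketch with our own parameters, where the coefficient vector X plays the role of F x(t):

```python
import numpy as np

Tp, L = 1.0, 2
K = 2 * L + 1
ks = np.arange(-L, L + 1)
t_n = np.array([0.03, 0.19, 0.34, 0.52, 0.71, 0.88])   # N = K + 1 = 6 (oversampling)
S = np.exp(2j * np.pi * np.outer(t_n, ks) / Tp)        # direct-sampling matrix (16.36)
assert np.linalg.matrix_rank(S) == K                   # distinct instants: full rank K
rng = np.random.default_rng(2)
X = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # coefficients of x(t)
x = S @ X                                              # the samples S x(t)
# Pseudoinverse (16.40) in coefficient space: X = (S* S)^{-1} S* x
X_rec = np.linalg.solve(S.conj().T @ S, S.conj().T @ x)
assert np.allclose(X_rec, X)                           # S+ S = I: exact recovery
```

Dropping a row of the matrix down to N = K − 1 = 4 makes the rank test fail, which is the finite-dimensional Nyquist condition (16.34) in action.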
FIGURE 16.4
Examples of functions s_n(t) := s_o(t−t_n) and s̃_n(t): (a) case N = K = 5 (critical sampling); (b) case N = K+1 = 6 (oversampling). The sampling instants of the case N = K+1 are the sampling instants t_1, ..., t_5 of the case N = K, plus one additional instant t_6 (time index ordering is not necessary here).

In the particular case where N = K, S is invertible and hence S⁺ = S⁻¹. Since s̃_n(t) = S⁺δ_n, where δ_n is the N-dimensional vector whose nth component is 1 and the others 0, then S s̃_n(t) = δ_n, which amounts to writing that

    s̃_n(t_k) = 1 if k = n, and 0 for k = 1, ..., N with k ≠ n.    (16.43)

We say that the functions s̃_n(t) are interpolation functions at the sampling instants t_1, ..., t_N. This property is confirmed by the figure in the case N = K. This interpolating property, however, no longer holds when N > K. This can also be observed in the figure in the case N = K+1.

16.2.5 Appendix

16.2.5.1 Fourier Coefficients of Circular Convolution

Given the Fourier coefficient formula X_k = (1/T_p) ∫_0^{T_p} x(t) e^{−j2πkt/T_p} dt, it is easy to verify that the kth Fourier coefficient of z(t) := x(t) ⊛ y(t) is Z_k = T_p X_k Y_k. Consider the operator Au(t) := (s̄ ⊛ u)(t) for s(t) ∈ B. Let U = (U_k)_{−L≤k≤L} and V = (V_k)_{−L≤k≤L} be the vectors of Fourier coefficients of u(t) and Au(t), respectively. Then, V_k = T_p S_k* U_k for each k = −L, ..., L. The operator A is invertible in B if and only if the mapping U ↦ V is invertible in C^K. This is in turn equivalent to the fact that S_k is nonzero for all k = −L, ..., L.

Next, Au(t) = T_p u(t) for all u(t) ∈ B if and only if T_p S_k* U_k = T_p U_k for each k = −L, ..., L and all U_k ∈ C. This is in turn equivalent to having S_k = 1 for all k = −L, ..., L, and hence s(t) = F⁻¹(1, ..., 1), which gives the function s_o(t) of (16.42).

16.3 Sampling and Reconstruction Operators in Hilbert Space

While the periodic case brings some useful intuition to the problem of nonuniform sampling, its extension to aperiodic bandlimited signals is the major challenge in this topic. In the periodic case, the injectivity of S immediately led to a left inverse. This was based on the introduction of the adjoint operator (16.38) and followed by the operator construction of (16.40).
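Before leaving the finite-dimensional setting, the interpolation property (16.43) and its failure under oversampling can be checked numerically from (16.41): the tabulated samples s̃_n(t_k) form the matrix S(S*S)⁻¹S*, which is the identity exactly when N = K. A sketch with hand-picked instants of our own:

```python
import numpy as np

Tp, L = 1.0, 2
K = 2 * L + 1
ks = np.arange(-L, L + 1)

def interp_matrix(t_n):
    # Entry (k, n) = s~_n(t_k): sampled reconstruction functions of (16.41)
    S = np.exp(2j * np.pi * np.outer(t_n, ks) / Tp)
    return S @ np.linalg.solve(S.conj().T @ S, S.conj().T)

t_crit = np.array([0.07, 0.24, 0.41, 0.63, 0.86])        # N = K = 5 (critical)
t_over = np.array([0.07, 0.24, 0.41, 0.63, 0.86, 0.95])  # N = K + 1 (oversampling)
assert np.allclose(interp_matrix(t_crit), np.eye(K), atol=1e-6)          # (16.43) holds
assert not np.allclose(interp_matrix(t_over), np.eye(K + 1), atol=1e-6)  # and fails
```

In the oversampled case the matrix is an orthogonal projector of rank K rather than the identity, which is the numerical counterpart of the remark that the interpolating property no longer holds when N > K.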
With a sampling operator S x(t) = (⟨s_n, x⟩)_{n∈Z} in the aperiodic case, one can naturally generalize (16.38) into

    S*u = ∑_{n∈Z} u_n s_n(t).    (16.44)

Under the injectivity of S, one is then tempted to define the pseudoinverse

    S⁺ := (S*S)⁻¹ S*.    (16.45)

Unfortunately, the injectivity of S is not sufficient to make S*S invertible in infinite dimension. In this section, we give the reader basic tools from the theory of Hilbert spaces that permit an extension of the properties of interest in finite dimension to the infinite-dimensional case. We refer the reader to [12–14] for further details on the mathematical notions invoked in this section.

16.3.1 Adjoint Operator

To understand the properties of S*S, one needs to understand the fundamental concept behind the construction of S*. We know the explicit definition of the adjoint S* of an N×K matrix S in terms of the matrix coefficients. A generalization of this notion to operators would require a definition that does not involve matrix coefficients. Note that x*(S*y) = (Sx)*y for all x ∈ C^K and y ∈ C^N seen as column vectors. This relation also uniquely characterizes S*, as each of its coefficients can be retrieved by choosing x and y with single nonzero coefficients. Using the notation ⟨x, y⟩_P := x*y for any x and y of C^P, S* can also be defined as the unique K×N matrix such that

    ⟨x, S*y⟩_K = ⟨Sx, y⟩_N,    (16.46)

for all x ∈ C^K and y ∈ C^N.

The generalization of this notion of adjoint to an operator S from B to D is possible when B and D are equipped with the structure of Hilbert spaces. The space B is indeed a Hilbert space with respect to the function ⟨·,·⟩ defined in (16.14). This means that ⟨·,·⟩ is an inner product in B, and B is complete with respect to the norm ‖·‖ of (16.8), which is induced by ⟨·,·⟩ in the sense that ‖x(t)‖ = √⟨x, x⟩. The same can be said of the space D with the norm ‖·‖_D of (16.9) induced by the inner product

    ⟨x, y⟩_D := ∑_{n∈Z} x_n y_n.

Under these circumstances, and given our initial assumption that S is bounded as stated in (16.10), it is shown that there exists a unique operator S* from D to B that satisfies the relation

    ⟨u, S*v⟩ = ⟨S u, v⟩_D,    (16.47)

for all u ∈ B and v ∈ D. This is in fact the original definition of the adjoint S* of S. It is also known that S* is bounded. It is easy to verify that S* yields the expression of (16.44). Indeed, defining Ru := ∑_{n∈Z} u_n s_n(t), we have ⟨u(t), Rv⟩ = ⟨u(t), ∑_{n∈Z} v_n s_n(t)⟩ = ∑_{n∈Z} ⟨u(t), s_n(t)⟩ v_n = ⟨S u(t), v⟩_D. So R = S*. Due to the symmetry of the relation (16.47), S* naturally yields S as its adjoint operator, that is, (S*)* = S.

16.3.2 Orthogonality in Hilbert Space

We will see that adjoint operators yield special properties connected to the notion of orthogonality induced by inner products. Two signals x(t) and y(t) of B are said to be orthogonal, symbolized by x(t) ⊥ y(t), when ⟨x, y⟩ = 0. Because the norm ‖·‖ is induced by the inner product ⟨·,·⟩, there is some outstanding interaction between orthogonality and metric properties. The most basic interaction lies in the Pythagorean theorem, which states that ‖z(t)‖² = ‖x(t)‖² + ‖y(t)‖² when z(t) = x(t) + y(t) and x(t) ⊥ y(t). In this case, ‖z(t)‖ is larger than or equal to both ‖x(t)‖ and ‖y(t)‖. The second outstanding interaction is in the notion of orthogonal projection. If V is a closed linear subspace of B (meaning that V includes all its limit points with respect to ‖·‖) and x(t) ∈ B, then there exists a unique y(t) ∈ V that minimizes the distance ‖y(t)−x(t)‖. This signal y(t) is denoted by P_V x(t), is also the unique element of V such that x(t)−y(t) ⊥ V (meaning that x(t)−y(t) is orthogonal to all elements of V), and is called the orthogonal projection of x(t) onto V. As x(t) = y(t) + (x(t)−y(t)), one obtains from the Pythagorean theorem that ‖P_V x(t)‖ ≤ ‖x(t)‖ (Bessel's inequality). All these properties can be restated in the space D with the inner product ⟨·,·⟩_D and its induced norm ‖·‖_D.

More properties of interest are about orthogonal space decompositions. For any subset A ⊂ B, let A^⊥ denote the set of all elements of B that are orthogonal to A. By continuity of each argument of ⟨·,·⟩ with respect to ‖·‖ (as a result of the Cauchy–Schwarz inequality), it is shown that A^⊥ is a closed subspace of B and is equal to Ā^⊥, where Ā is the set of all the limit points of A and is called the closure of A. By orthogonal projection, it is then shown that B = V ⊕ V^⊥ for any closed subspace V (where the addition symbol ⊕ simply recalls that V ∩ V^⊥ = {0}). One obtains from this that (V^⊥)^⊥ = V [by saying that B = V^⊥ ⊕ (V^⊥)^⊥ since V^⊥ is closed, and noticing that (V^⊥)^⊥ ⊃ V]. This is again valid in D with respect to ⟨·,·⟩_D.
Event-Based Data Acquisition and Reconstruction—Mathematical Background 389
Null(S∗) := {u ∈ D : S∗u = 0} and Range(S∗) := {S∗u : u ∈ D} similarly to (16.2) and (16.3), one obtains the basic identities

Range(S∗)⊥ = Null(S), (16.48)
Range(S)⊥ = Null(S∗). (16.49)

This is proved as follows. If u(t) ∈ Null(S), (16.47) implies that ⟨u, S∗v⟩ = ⟨S u, v⟩_D = 0 for all v ∈ D. Hence, u(t) ∈ Range(S∗)⊥. Conversely, if u(t) ∈ Range(S∗)⊥, one can see from the identities

‖S u(t)‖_D² = ⟨S u, S u⟩_D = ⟨u, S∗S u⟩, (16.50)

that ‖S u(t)‖_D = 0 since S∗S u(t) ∈ Range(S∗). Hence, u(t) ∈ Null(S). This proves (16.48). The second relation is shown in a similar manner.

16.3.3 Injectivity of S and Surjectivity of S∗

The pending question has been how to recognize that S is injective. From (16.48), this is effective if and only if {0} = Range(S∗)⊥, which is also the orthogonal complement of the closure of Range(S∗). Since {0}⊥ = B and this closure is a closed subspace, then

S is injective ⇐⇒ Range(S∗) is dense in B. (16.51)

In practice, the above property means that any signal u(t) ∈ B can be approximated with arbitrary precision with respect to the norm ‖·‖ by a function of the type

v(t) = S∗v = ∑_{n∈Z} v_n s_n(t), (16.52)

with v ∈ D. It is said in this case that {s_n(t)}_{n∈Z} is complete in B.

In the case where S is the direct sampling operator at instants (t_n)_{n∈Z}, we recall that the functions s_n(t) of (16.52) are the shifted sinc functions s_o(t−t_n). When S is injective, we know from the previous section that any u(t) ∈ B can be arbitrarily approached by linear combinations of these functions with coefficients in D. One could see in this a generalized version of the Shannon sampling theorem where the sinc functions are shifted in a nonuniform manner. However, a true generalization would require u(t) to be exactly achieved by a fixed linear combination of the sinc functions with coefficients in D. For this to be realized, one needs the stronger property that

Range(S∗) = B.

It is said in this case that S∗ is onto, or surjective. With any operator S, this is the required property to guarantee the existence of an exact expansion

x(t) = ∑_{n∈Z} x̃_n s_n(t), (16.53)

with (x̃_n)_{n∈Z} in D. The next theorem derived from [15] gives necessary and sufficient conditions for S∗ to be surjective.

Theorem 16.1

Let S be a bounded operator from B to D. The following propositions are equivalent:

(i) S∗ is surjective (Range(S∗) = B).
(ii) S is injective and Range(S) is closed in D.
(iii) S∗S is invertible.
(iv) There exists a constant α > 0 such that for all u(t) in B,

α ‖u(t)‖ ≤ ‖S u(t)‖_D. (16.54)

The equivalence between (ii) and (iv) can be found in [15, Theorem 1.2]. The equivalence between (i) and (ii) is easily established from (16.51) and the fact from [15, Lemma 1.5] that Range(S∗) is closed in B if and only if Range(S) is closed in D. It is clear that (iii) implies (i). Assume now (i) and (ii). From [15, Lemma 1.4], Null(S∗S) = Null(S), so S∗S is injective due to (ii). Since Range(S) is closed from (ii), it follows from (16.49) that D = Range(S) ⊕ Null(S∗). With (i), B = Range(S∗) = S∗(D) = S∗(Range(S) ⊕ Null(S∗)) = S∗(Range(S)) = S∗S(B). So S∗S is injective and surjective, which implies (iii).

The largest possible value of α in (16.54) is equal to

γ(S) := inf_{u(t)∈B\{0}} ‖S u(t)‖_D / ‖u(t)‖ = inf_{u(t)∈B, ‖u(t)‖=1} ‖S u(t)‖_D, (16.55)

which is called the minimum modulus of S. Property (iv) is equivalent to stating that γ(S) > 0.

16.3.4 Frame

Since Theorem 16.1 is based on the assumption that S is bounded, the two inequalities (16.10) and (16.54) are needed to ensure the surjectivity of S∗. By squaring their members, these inequalities are equivalent to

A ‖u(t)‖² ≤ ∑_{n∈Z} |⟨s_n, u⟩|² ≤ B ‖u(t)‖², (16.56)

for all u(t) ∈ B, where both A = α² and B = β² are positive. It is said in this case that the family of functions
390 Event-Based Control and Signal Processing

{s_n(t)}_{n∈Z} is a frame of B. The constants A and B are called the frame bounds. This condition is equivalent to writing that there exist positive constants A and B such that

A ≤ γ(S)² and ‖S‖² ≤ B. (16.57)

Under this condition, we know that x(t) yields an exact expansion of the type (16.53). The current drawback of this expression is that it is not a function of the sample sequence x = S x(t). This is fixed by the equivalence between the surjectivity of S∗ and the invertibility of S∗S in Theorem 16.1. The latter property allows the definition of the pseudoinverse S+ in (16.45), which provides us with a left inverse of S. Hence, when x = S x(t),

x(t) = S+x = ∑_{n∈Z} x_n s̃_n(t), (16.58)

where {s̃_n(t)}_{n∈Z} are the impulse responses of S+. Since {s_n(t)}_{n∈Z} are the impulse responses of S∗ and S+ = (S∗S)⁻¹S∗, then

s̃_n(t) := (S∗S)⁻¹ s_n(t), (16.59)

for all n ∈ Z. The invertibility of S∗S also permits the derivation of a sequence x̃ = (x̃_n)_{n∈Z} satisfying (16.53) in terms of x = S x(t). Indeed, note that S+ can be expressed as

S+ = S∗S(S∗S)⁻²S∗, (16.60)

where A² := AA for any operator A of B. Hence, x(t) = S+x = S∗x̃ with

x̃ := S(S∗S)⁻²S∗x. (16.61)

Given that the initial knowledge is the family of functions {s_n(t)}_{n∈Z} and the sequence of samples (x_n)_{n∈Z}, both reconstruction formulas (16.53) and (16.58) require some transformation to obtain either {s̃_n(t)}_{n∈Z} or (x̃_n)_{n∈Z}. The difference between these two formulas, however, is that (16.58) involves a continuous-time transformation (see Equation 16.59), while (16.53) requires a discrete-time transformation (see Equation 16.61). Even though both S and S∗ involve continuous-time signals (either as inputs or outputs), the global operator S(S∗S)⁻²S∗ of (16.61) has pure discrete-time inputs and outputs. This difference is of major importance for practical implementations.

16.3.5 Tight Frame

In general, the derivation of S∗S and its inverse is a major difficulty. But there is a special case where this is trivial. It is when

‖S u(t)‖_D = α ‖u(t)‖, (16.62)

for some constant α > 0 and for all u(t) ∈ B. Let us show in this case that

S∗S = α²I. (16.63)

By squaring the members of (16.62), one obtains α²⟨u, u⟩ = ⟨S u, S u⟩_D = ⟨u, S∗S u⟩ so that ⟨u, Au⟩ = 0 with A := S∗S − α²I. This operator is easily seen to be self-adjoint, meaning that ⟨u, Av⟩ = ⟨Au, v⟩ for all u(t), v(t) ∈ B. It is known in this case that A must be the zero operator [14, Corollary 2.14], which leads to (16.63). The pseudoinverse of S is then simply

S+ = (1/α²) S∗.

The condition (16.62) is equivalent to the frame condition (16.56) with A = B = α². The frame {s_n(t)}_{n∈Z} is said in this case to be tight.

16.3.6 Neumann Series

Another equivalent criterion can be added in Theorem 16.1, more specifically on the invertibility of S∗S. For any operator A of B, the Neumann series

A⁻¹ = ∑_{m=0}^{∞} (I−A)^m, (16.64)

is known to converge when

‖I−A‖ < 1. (16.65)

In (16.64), the notation A^m is recursively defined by A^{m+1} := AA^m with A⁰ = I. The expansion (16.64) is nothing but a generalization of the geometric series ∑_{m=0}^{∞} u^m = (1−u)⁻¹, or equivalently ∑_{m=0}^{∞} (1−u)^m = u⁻¹. Although (16.65) is only a sufficient condition for A to be invertible, one obtains the following necessary and sufficient condition for S∗S to be invertible.

Proposition 16.1

Let S be a bounded sampling operator. Then, S∗S is invertible if and only if there exists c > 0 such that ‖I − c S∗S‖ < 1.

Let us justify this result. When S∗S is invertible, we know that the frame condition (16.56) is satisfied. As the central member in this condition is equal to ⟨u, S∗S u⟩ according to (16.50), it is easy to verify that |⟨u, Au⟩| ≤ γ ‖u(t)‖² where A := I − c S∗S,
c := 2/(A+B) > 0, and γ := (B−A)/(B+A) < 1 [3]. As A∗ = A, it is known that ‖A‖ = sup_{‖u‖=1} |⟨u, Au⟩| [14, Proposition 2.13]. Therefore, ‖A‖ ≤ γ. This proves that ‖I − c S∗S‖ < 1. Conversely, if this inequality is satisfied for some c > 0, c S∗S is invertible due to the Neumann series, and so is S∗S.

16.3.7 Critical Sampling and Oversampling

Assume that {s_n(t)}_{n∈Z} is a frame. We know that any of the propositions of Theorem 16.1 is true. Hence, S is injective and S∗ is surjective. Meanwhile, no condition is implied on the injectivity of S∗, or the surjectivity of S. Note that these two properties are related since

D = Range(S) ⊕ Null(S∗), (16.66)

as a result of (16.49) plus the fact that Range(S) is closed from Theorem 16.1(ii). Thus, S∗ is injective if and only if S is surjective. This leads to two cases.

Oversampling: This is the case where S∗ is not injective (S is not surjective). Since Null(S∗) ≠ {0}, there exists u = (u_n)_{n∈Z} ∈ D\{0} such that S∗u = ∑_{n∈Z} u_n s_n(t) = 0. At least one component u_k of (u_n)_{n∈Z} is nonzero, so one can express s_k(t) as a linear combination of {s_n(t)}_{n≠k} with square-summable coefficients. If we call S_k the sampling operator of kernel functions {s_n(t)}_{n≠k}, one easily concludes that Range(S_k∗) = Range(S∗) = B (since S∗ is surjective). As S_k is obviously still bounded, it follows from the implication (i) ⇒ (iv) of Theorem 16.1 that {s_n(t)}_{n≠k} is still a frame. This justifies the term of "oversampling" for this case. Meanwhile, as S is not surjective, Range(S) does not cover the whole space D. In other words, there are sequences u ∈ D that cannot be the sampled version of any signal in B. An intuitive way to understand this phenomenon is to think that oversampling creates samples with some interdependence which prevents them from attaining all possible values.

Critical sampling: This is the case where S∗ is injective (S is surjective). Then, S∗ is invertible since it is also surjective. So every signal of B is uniquely expanded as a linear combination of {s_n(t)}_{n∈Z} with square-summable coefficients. Hence, {s_n(t)}_{n∈Z} is a basis of B. Under the special condition (16.56), it is said to be a Riesz basis. In this situation, s_k(t) never belongs to the closure of Range(S_k∗) for any k ∈ Z, and consequently, Range(S_k∗) never covers the whole space B. Therefore, {s_n(t)}_{n∈Z} ceases to be a frame after the removal of any of its functions. This family is said to be an exact frame. The term "critical sampling" for this case is from a signal-processing perspective. Now, S is also invertible (since it is both injective and surjective). So S+ coincides with S⁻¹ and s̃_k(t) = S+δ_k = S⁻¹δ_k for all k ∈ Z. Then,

S s̃_k(t) = δ_k, (16.67)

for all k ∈ Z. The nth components of the above sequences yield the equality

⟨s_n, s̃_k⟩ = 1 if n = k, and 0 if n ≠ k.

It is said that {s̃_n(t)}_{n∈Z} is biorthogonal to {s_n(t)}_{n∈Z}.

16.3.8 Optimality of Pseudoinverse

We show that the pseudoinverse S+ is the left inverse of S that is the least sensitive to sample errors with respect to the norms ‖·‖_D and ‖·‖. Assume that the acquired sample sequence of a signal x(t) of B is

x = S x(t) + e, (16.68)

where e is some sequence of sample errors. When attempting a reconstruction of x(t) by applying a left inverse R of S, we obtain

Rx = x(t) + Re. (16.69)

Thus, Re is the error of reconstruction. One is concerned with its magnitude ‖Re‖ with respect to the error norm ‖e‖_D. Analytically, one wishes to minimize the norm of R defined by

‖R‖ := sup_{e∈D\{0}} ‖Re‖ / ‖e‖_D, (16.70)

in a way similar to (16.11). Let us show that

‖R‖ ≥ ‖S+‖ = γ(S)⁻¹. (16.71)

The inequality ‖R‖ ≥ γ(S)⁻¹ results from (16.55) and the relations

sup_{e∈D\{0}} ‖Re‖/‖e‖_D ≥ sup_{u(t)∈B\{0}} ‖RS u(t)‖/‖S u(t)‖_D = sup_{u(t)∈B\{0}} ‖u(t)‖/‖S u(t)‖_D = ( inf_{u(t)∈B\{0}} ‖S u(t)‖_D/‖u(t)‖ )⁻¹.

This implies in particular that ‖S+‖ ≥ γ(S)⁻¹ since S+ is a left inverse of S. So what remains to be proved is that ‖S+‖ ≤ γ(S)⁻¹. Let e ∈ D\{0}. According to (16.66), e can be decomposed as e = S e(t) + e₀, where e(t) ∈ B and e₀ ∈ Null(S∗). Now, S e(t) ⊥ e₀ due to (16.49). By the Pythagorean theorem, ‖e‖_D ≥ ‖S e(t)‖_D. Meanwhile, S+e = S+S e(t) + S+e₀ = e(t) since Null(S∗) ⊂ Null(S+). So ‖S+e‖ = ‖e(t)‖ ≤ γ(S)⁻¹ ‖S e(t)‖_D ≤ γ(S)⁻¹ ‖e‖_D. This suffices to prove that ‖S+‖ ≤ γ(S)⁻¹.
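The objects of Sections 16.3.4 through 16.3.8 (frame bounds, minimum modulus, pseudoinverse) have exact finite-dimensional counterparts, which makes the identities easy to check numerically. The sketch below is only an analogy with made-up sizes and random vectors standing in for the kernel functions {s_n(t)}: S is represented by the matrix whose rows are the kernels, so that S∗S, the frame bounds of (16.56), γ(S) of (16.55), and ‖S+‖ = γ(S)⁻¹ of (16.71) become ordinary matrix quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Too few kernel vectors: 3 kernels in R^5 cannot be a frame of R^5,
# and S*S (a 5x5 matrix of rank 3) is not invertible.
S = rng.standard_normal((3, 5))      # rows play the role of kernel functions s_n
StS = S.T @ S
assert np.linalg.matrix_rank(StS) == 3

# With more kernels than the dimension (a frame of R^3), S*S is invertible.
F = rng.standard_normal((7, 3))      # 7 kernel vectors s_n in R^3 ("oversampling")
FtF = F.T @ F                        # the operator S*S, here a 3x3 matrix
A, B = np.linalg.eigvalsh(FtF)[[0, -1]]   # frame bounds = extreme eigenvalues

# gamma(S) is the minimum modulus (smallest singular value); ||S|| the largest.
svals = np.linalg.svd(F, compute_uv=False)
gamma, norm_S = svals[-1], svals[0]
assert np.isclose(gamma**2, A) and np.isclose(norm_S**2, B)   # matches (16.57)

# Pseudoinverse S+ = (S*S)^(-1) S* is a left inverse of S ...
F_plus = np.linalg.inv(FtF) @ F.T
assert np.allclose(F_plus @ F, np.eye(3))
# ... and its operator norm equals gamma(S)^(-1), as in (16.71).
assert np.isclose(np.linalg.svd(F_plus, compute_uv=False)[0], 1 / gamma)
```

In this analogy, tightness (Section 16.3.5) corresponds to all singular values of the kernel matrix being equal, in which case S∗S is a multiple of the identity and the pseudoinverse reduces to a scaled transpose.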

16.3.9 Condition Number of Sampling–Reconstruction System

The sensitivity of the global sampling reconstruction system to errors in the samples is more precisely evaluated as follows. Consider again Equations 16.68 and 16.69, where an error sequence e is added to the sample sequence S x(t) and a left inverse R of S is applied on x = S x(t) + e to obtain the reconstruction Rx = x(t) + Re. While the relative error in the input x is ‖e‖_D/‖S x(t)‖_D, the relative error in the reconstruction Rx is ‖Re‖/‖x(t)‖. In practice, one wishes to limit the worst-case ratio between these two quantities

κ := sup_{e, x(t)} (‖Re‖/‖x(t)‖) / (‖e‖_D/‖S x(t)‖_D), (16.72)

where e and x(t) are taken among all nonzero signals of D and B, respectively. Note that κ cannot be less than 1 [since the ratio in (16.72) is 1 when e = α S x(t), in which case Re = α x(t)]. It is easy to see that

κ = ‖R‖ ‖S‖. (16.73)

According to (16.71),

κ ≥ κ(S) := ‖S‖ / γ(S), (16.74)

and κ = κ(S) when R is chosen to be the pseudoinverse S+. The value κ(S) is intrinsic to the operator S and is called the condition number of S. To have κ(S) as small as possible, one needs to have γ(S) and ‖S‖ as close as possible to each other in ratio. They are equal when and only when the condition (16.62) is met, and hence when the frame {s_n(t)}_{n∈Z} is tight. When only frame bounds A and B in (16.56) are known, we obtain from (16.57) that

κ(S) ≤ √(B/A).

Again, A and B should be as close as possible to each other in ratio.

16.4 Direct Sampling of Aperiodic Bandlimited Signals

For the reconstruction of a signal x(t) ∈ B from x = S x(t) to be realistically possible, we saw in the previous section that it is desirable for the kernel functions {s_n(t)}_{n∈Z} of S to be a frame. In this section, we concentrate on the direct sampling operator S u(t) = (u(t_n))_{n∈Z}. In this case, the kernel functions are such that ⟨s_n, u⟩ = u(t_n), and hence the frame condition (16.56) takes the form

A ‖u(t)‖² ≤ ∑_{n∈Z} |u(t_n)|² ≤ B ‖u(t)‖², (16.75)

for some constants B ≥ A > 0 and all u(t) ∈ B. We recall from (16.21) that s_n(t) = s_o(t−t_n) where s_o(t) := sinc(t/T_o). The main question is what sampling instants (t_n)_{n∈Z} can guarantee this condition and what formulas of reconstruction are available.

16.4.1 Oversampling

We saw in the periodic case of Section 16.2.3 that S is injective if and only if the number of samples within one period of x(t) is at least the number of Nyquist periods within this time interval. As an aperiodic signal is often thought of as a periodic signal of infinite period, one would conjecture that S is injective when the number of samples is at least the number of Nyquist periods within any time interval centered about 0 of large enough size. This property is achieved when there exist two constants

T_s < T_o and μ ≥ 0,

such that

|t_n − nT_s| ≤ μT_s, (16.76)

for all n. It was actually proved in [3] that this guarantees the frame property of (16.75) under the additional condition that

t_{n+1} − t_n ≥ d > 0, (16.77)

for all n ∈ Z and some constant d.

Note that conditions (16.76) and (16.77) remain valid after the suppression of any sample. Indeed, if we remove any given instant t_k by defining t′_n = t_n for all n < k and t′_n = t_{n+1} for all n ≥ k, the sequence t′_n satisfies (16.76) with μ′ = μ+1 and (16.77) with d′ = d. Then, (x(t′_n))_{n∈Z} satisfies (16.75) for some bounds B′ ≥ A′ > 0. This is a situation of oversampling according to Section 16.3.7. This can in fact be repeated any finite number of times. Thus, x(t) can still be recovered from (x(t_n))_{n∈Z} after dropping any finite subset of samples.

16.4.2 Critical Sampling

It was shown in [7,16,17] that (16.75) is also achieved under (16.76) and (16.77) with

T_s = T_o and μ < 1/4.

It was also shown that {s_n(t)}_{n∈Z} is an exact frame. This is the case of critical sampling described in

Section 16.3.7, where S is invertible. From (16.67), we have

s̃_n(t_k) = 1 if k = n, and 0 if k ≠ n, (16.78)

similar to (16.43). This is the generalization in infinite dimension of the interpolation functions obtained in Section 16.2.4 in the periodic case with N = K. The case μ = 0 corresponds to uniform sampling and brings us back to the Shannon sampling theorem.

16.4.3 Lagrange Interpolation

In the above case of critical sampling, the reconstruction functions {s̃_n(t)}_{n∈Z} of (16.58) were also shown in [7,16,17] to coincide with the Lagrange interpolation functions

r_n(t) = ∏_{i=−∞, i≠n}^{∞} (t − t_i)/(t_n − t_i). (16.79)

It is easy to verify by construction that these functions satisfy (16.78).

In the oversampling case T_s < T_o with any finite value μ, the left inverses of S are no longer unique, and the impulse responses {s̃_n(t)}_{n∈Z} of S+ need not satisfy the interpolation property of (16.78), as was demonstrated in the periodic case (see Figure 16.4b). It was shown in [7] that a left inverse R of S can still be found with impulse responses {r̃_n(t)}_{n∈Z} satisfying (16.78) (generalized Lagrange interpolation functions). We will, however, skip the details of these functions as they involve exponential functions with little potential for practical implementations.

When T_s < T_o and μ < 1/4, however, it is interesting to see that the operator R of impulse responses {r_n(t)}_{n∈Z} given in (16.79) still satisfies RS u(t) = u(t) for any u(t) ∈ B. The rigorous way to explain this is that RS = I in the larger space B′ of signals with no frequency content beyond Ω′_o := π/T_s, which is indeed larger than Ω_o = π/T_o. One simply needs to think of T′_o := T_s as the Nyquist period in B′. In this view, R is not just the pseudoinverse of S in B′ but is also its exact inverse (since the sampling is critical with respect to signals in B′). The considered input x(t) just happens to be in the subspace B of B′, which is not a problem. What prevents R from being rigorously a left inverse of S in B is that the functions {r_n(t)}_{n∈Z} belong to B′ and have no reason to be in B. A consequence of practical impact appears when some random error sequence e is added to S x(t) as in (16.68). Even though x(t) is in B, the reconstruction error Re shown in (16.69) belongs to B′ and most likely yields frequency oscillations beyond the baseband of x(t).

16.4.4 Weighted Sampling

We started Section 16.4 with the intuition that perfect reconstruction should be possible when the number of samples is at least the number of Nyquist periods within any time interval centered about 0 of large enough size. Condition (16.76) imposes that (t_n)_{n∈Z} is uniformly dense according to the terminology of [9, Equation 34]. Intuitively, it seems, however, that perfect reconstruction should be achievable with a nonuniform density of sampling instants as long as it remains larger than the Nyquist density at any point. The only feature of the direct sampling operator that prevents this is in fact the uniform norm of its kernel functions. With the only assumption that

t_{n+1} − t_n ≤ d′ < T_o, (16.80)

for all n ∈ Z and some constant d′, it was shown in [18, Equation 4.6] that there exist constants B ≥ A > 0 such that

A ‖u(t)‖² ≤ ∑_{n∈Z} c_n |u(t_n)|² ≤ B ‖u(t)‖², (16.81)

for all u(t) ∈ B, where

c_n := (t_{n+1} − t_{n−1}) / (2T_o). (16.82)

Specific values of the bounds are A = (1 − T_m/T_o)² and B = (1 + T_m/T_o)², where

T_m := sup_{n∈Z} (t_{n+1} − t_n). (16.83)

The weight c_n can be interpreted as an estimate of the local normalized sampling period. Thus, the frame condition (16.56) is achieved by the kernel functions {√c_n s_o(t−t_n)}_{n∈Z} just based on condition (16.80). Hence, the sampling operator S′ defined by these kernel functions is such that S′∗S′ is invertible. Maintaining S as the direct sampling operator at instants (t_n)_{n∈Z}, it is easy to derive that S′∗S′ = S∗CS where C is the diagonal operator of D such that the nth component of Cu is

(Cu)_n = c_n u_n. (16.84)

Since (S∗CS)⁻¹S∗CS = I, the operator

R := (S∗CS)⁻¹S∗C,

is a left inverse of S. Interestingly, this works without the requirement that S be bounded, and worse, when S x(t) is not even square summable (and hence not in D). This is because there is no constraint on how large the density of (t_n)_{n∈Z} can be.
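The weighted left inverse R := (S∗CS)⁻¹S∗C can be exercised numerically in a discrete periodic analogy of B, namely trigonometric polynomials of degree at most K on an N-point grid (there, any set of at least 2K+1 distinct sampling points makes S injective). The grid size, number of samples, and random seed below are hypothetical choices for illustration, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete periodic analogy of Section 16.4.4 (all sizes are illustrative).
N, K = 64, 5                      # grid length, bandlimit (trig degree K)
t = np.arange(N)

# Basis of the "bandlimited" space B: 1, cos(k.), sin(k.) for k = 1..K.
cols = [np.ones(N)]
for k in range(1, K + 1):
    cols += [np.cos(2 * np.pi * k * t / N), np.sin(2 * np.pi * k * t / N)]
E = np.stack(cols, axis=1)        # N x (2K+1); signals of B are E @ a

# Nonuniform sampling instants: 23 distinct grid points (> 2K+1 = 11).
idx = np.sort(rng.choice(N, size=23, replace=False))
S = E[idx, :]                     # matrix of the direct sampling operator on B

# Weights c_n ~ (t_{n+1} - t_{n-1}) / (2N), taken periodically, as in (16.82).
c = ((np.roll(idx, -1) - np.roll(idx, 1)) % N) / (2.0 * N)
C = np.diag(c)

# R := (S*CS)^(-1) S*C is a left inverse of S, as derived above.
R = np.linalg.inv(S.T @ C @ S) @ S.T @ C
assert np.allclose(R @ S, np.eye(2 * K + 1))

# Perfect recovery of a random signal of B from its nonuniform samples.
x = E @ rng.standard_normal(2 * K + 1)
x_rec = E @ (R @ x[idx])
assert np.allclose(x_rec, x)
```

Note that the recovery holds for any choice of distinct sampling indices of size at least 2K+1; the weights only affect the conditioning of S∗CS, mirroring the role of c_n in the frame bounds (16.81).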

16.5 Constrained Reconstruction

The reconstruction x(t) = Rx from x = S x(t) is the most delicate part of real implementations. The first reason is the complicated definition of the impulse responses of a left inverse R, as can be seen, for example, in (16.59) when R = S+. As a second reason, these impulse responses must be produced in continuous-time implementations by analog circuits with limited precision and freedom from a mathematical viewpoint. Since the input to a reconstruction operator is discrete in time, an alternative approach is to look for left inverses of S in the form RD where D is an operator from D to D, which we will simply call a discrete-time operator. In this way, computational requirements on R can be relaxed and partly moved to D, which in practice is implemented by pure digital circuits. A more rational way to proceed is to seek the condition on imposed operators S and R for RDS = I to be achievable with some discrete-time operator D. When this is realized, we will say that (S, R) is an admissible sampling–reconstruction pair. When D can be found explicitly, then the RDS system of Figure 16.3 achieves perfect reconstruction.

16.5.1 Condition for (S, R) to Be Admissible

Let us show the following result.

Proposition 16.2

Let S and R be bounded sampling and reconstruction operators, respectively. The following statements are equivalent:

(i) The equality RDS = I can be achieved with some discrete-time operator D.
(ii) S is injective and R is surjective.
(iii) The inequality ‖I−RCS‖ < 1 can be achieved with some discrete-time operator C.

Assume (i). It is easy to see that Null(S) ⊂ Null(RDS) = {0} and Range(R) ⊃ Range(RDS) = B. This implies (ii). Conversely, assume (ii). In the same way the injectivity of S guarantees the existence of a left inverse Š of S, it can be shown that the surjectivity of R implies the existence of a right inverse of R, that is, a sampling operator Ř such that RŘ = I [one can show that the restriction of R to any subspace of D complementary to Null(R) is invertible and take for Ř its inverse]. With the discrete-time operator

D := Ř Š, (16.85)

we obtain RDS = (RŘ)(ŠS) = I, which proves (i). The implication (i) ⇒ (iii) is trivial since I−RDS = 0. Finally, assume (iii). We know from the Neumann series that RCS is invertible. With the discrete-time operator

D := CS(RCS)⁻²RC, (16.86)

we obtain RDS = RCS(RCS)⁻²RCS = I, which proves (i).

We have a number of remarks.

1. The condition (ii) for (S, R) to be admissible is quite weak, as S and R only need to satisfy separate properties and need not have any connection between each other, at least from a theoretical viewpoint. For R to be surjective, it is necessary and sufficient that its impulse responses {r_n(t)}_{n∈Z} form a frame. Indeed, calling S′ the sampling operator whose kernel functions are {r_n(t)}_{n∈Z}, we know from Sections 16.3.3 and 16.3.4 that these functions form a frame if and only if S′∗ is surjective. Now S′∗ is nothing but R. Meanwhile, the mere injectivity of S does not require its kernel functions to form a frame.

2. When condition (ii) is satisfied, it is easy to show that RDS = I if and only if (SR)D(SR) = SR. As a result, RDS = I if and only if D is a generalized inverse of the discrete-time operator SR [19, Definition 1.38]. The idea of generalized inversion can also be qualitatively seen as follows. To obtain RDS = I, one needs to have u(t) = Rũ where ũ := Du and u := S u(t), for any given u(t) ∈ B. The second relation ũ := Du then appears as an "inversion" of the equation u = SRũ that results from the first and third relations. One cannot talk about the inverse of SR since, in the most general case, SR is neither injective nor surjective. Hence, the term "inversion" can only be understood here in a generalized sense.

3. The condition (iii) for (S, R) to be admissible is mainly interesting for its practical contribution. Once a discrete-time operator C has been found to achieve ‖I−RCS‖ < 1, (16.86) gives a concrete way to obtain a generalized inverse D of SR.

4. Once condition (ii) is known to be satisfied, (16.85) gives another method to find a generalized inverse D of SR. However, it is likely to be less practical than (16.86), as separate generalized inverses Š and Ř of S and R need to be found [we will see later that the square power in (16.86) can be eliminated when dealing with numerical approximations]. Meanwhile, (16.85) has its theoretical appeal. Specifically, (16.85) yields as a particular solution the Moore–Penrose pseudoinverse (SR)+ of SR [19]. This is achieved by taking Š = S+ and Ř = R+ where

R+ := R∗(RR∗)⁻¹.

This operator is well defined from the mere condition that R is surjective from (ii). Indeed, the sampling operator S′ introduced in the first remark above is such

that S′∗ = R and hence R∗ = (S′∗)∗ = S′. Then RR∗ = S′∗S′, which is invertible by Theorem 16.1 since S′∗ = R is surjective.

5. In the case where R = S∗, the operator of (16.61) can be seen as the particular case of (16.85) with Š = S+ and Ř = (S∗)+, and of (16.86) with C = I.

16.5.2 Known Cases Where ‖I − RCS‖ < 1

With the direct sampling operator S at instants (t_n)_{n∈Z} under the mere sampling condition (16.80), a number of reconstruction operators R have been found such that ‖I − RCS‖ < 1 with some simple, discrete-time operator C. In this case, we recall from the previous section that RD is a left inverse of S with D given in (16.86).

With the operator

Ru := ϕ_o(t) ∗ P̌u, (16.87)

where

P̌u := ∑_{n∈Z} u_n p̌_n(t),

and p̌_n(t) is the rectangular function described in Figure 16.2a, it was shown in [18, Equation 5.7] that ‖I − RS‖ ≤ T_m/T_o, where T_m is given in (16.83). This operator R is similar to that of (16.26) except that the nth impulse response p̌_n(t) of P̌ is the rectangular function on [ť_n, ť_{n+1}) instead of [t_n, t_{n+1}), where ť_n := ½(t_{n−1}+t_n) (see Figure 16.2a). Under the condition (16.80), we obtain ‖I − RS‖ < 1. This is a case where C is simply the identity in D. This is actually the result that enabled the proof of (16.81).

From a practical viewpoint, the operator R of (16.87) may be somewhat inconvenient as it introduces the extra time instants (ť_n)_{n∈Z} in the processing. Meanwhile, if one tries to analyze ‖I−RS‖ with R from (16.26), one will face some obstacle as this operator holds its input value y_n at t_n during the interval [t_n, t_{n+1}] before bandlimitation, causing a global signal delay in the operator RS. This can be fixed by feeding into R at t_n the average between y_n and y_{n+1}, instead of y_n. This leads us to consider the analysis of ‖I−RCS‖, where C is the discrete-time operator such that the nth component of Cy is equal to

(Cy)_n := ½(y_n + y_{n+1}). (16.88)

By slightly modifying the mathematical arguments of [18], we prove in Appendix 16.5.4 the following result.

Proposition 16.3

Let S be the direct sampling operator at (t_n)_{n∈Z}, R be the operator defined in (16.26), and C be the discrete-time operator defined in (16.88). Then, ‖I−RCS‖ ≤ T_m/T_o.

Again, one obtains ‖I−RCS‖ < 1 under (16.80).

As a final example, it was shown in [20, Equation 48] that ‖I−RS‖ ≤ (T_m/T_o)² with the operator

Ru := ϕ_o(t) ∗ Lu, (16.89)

where

Lu := ∑_{n∈Z} u_n l_n(t),

and l_n(t) is the triangular function shown in Figure 16.2b. In other words, R is the linear interpolator at instants (t_n)_{n∈Z} followed by bandlimitation. Once again, one obtains ‖I−RS‖ < 1 under (16.80).

Given that ‖A∗‖ = ‖A‖ for any bounded operator A of B, and (RCS)∗ = S∗C∗R∗, one obtains ‖I−S∗C∗R∗‖ = ‖I−RCS‖. This identity allows us to consider the reconstruction operator R′ = S∗ (sinc reconstruction) with the sampling operator S′ = R∗, where R is any of the operators listed above [11]. The adjoint C∗ can be viewed as the conjugate transpose of C seen as an infinite matrix. When R is defined by (16.26), the kernel functions of the sampling operator S′ = R∗ coincide with those of (16.22) (up to a scaling factor T_o).

16.5.3 Experiments

We return to the problem of Section 16.1.7 where continuous-time amplitude quantization followed by bandlimitation was modeled as an RDS system. We recall in this case that S is the direct sampling operator at the instants (t_n)_{n∈Z} where x(t) crosses the quantization levels, R performs a zero-order hold followed by a bandlimitation as described in (16.26), and D is defined by the recursive equation (16.27). By nature of bandlimited quantization, this RDS system only performs an approximate reconstruction of the input. When the zero-order hold is extended to real-valued discrete-time inputs (as opposed to just discrete-amplitude discrete-time inputs in standard quantization), we know from the previous section that RDS = I can be achieved with the operator D of (16.86) and the operator C of (16.88) under condition (16.80). We plot in Figure 16.5a the result of a simulation performed with periodic signals, where the bandlimited input x(t) is the black solid curve, the quantizer output is the piecewise constant signal, and its bandlimited version x̂(t) is the gray dashed curve. For reference, the gray horizontal lines represent the quantization levels. According to the RDS model, the quantizer output is also the zero-order hold of the sequence (y_n)_{n∈Z} output by the operator D of (16.27). We show in Figure 16.5b the new piecewise-constant signal obtained when the operator D of (16.86) is used instead with C from (16.88), and

see that its bandlimited version x̂(t), also represented by a gray dashed line, coincides exactly with x(t). This leads to the nontrivial conclusion that the piecewise constant values of a continuous-time quantizer can be modified such that the input is perfectly recovered after bandlimitation.

To emphasize the generality of the method, we show in Figure 16.5c and d an experiment with the same input and sampling operator but where R is the linear interpolator followed by bandlimitation as described in (16.89). In Figure 16.5c, D is taken to be the identity, while in Figure 16.5d, it is replaced by the operator D of (16.86) with C = I. Perfect reconstruction is again confirmed in part (d) of the figure. The piecewise linear curve in this figure represents the linear interpolation of the sequence y output by D.

FIGURE 16.5
Piecewise constant and piecewise linear approximations of periodic bandlimited input x(t) (black solid curves) from nonuniform samples (black dots), and bandlimited version x̂(t) of approximations (gray dashed curves): (a) continuous-time amplitude quantization; (b) bandlimited-optimal piecewise-constant approximation; (c) linear interpolation; and (d) bandlimited-optimal piecewise-linear approximation.

16.5.4 Appendix

The proof of Proposition 16.3 uses the following result as a core property.

Lemma 16.1

Let v(t) be continuously differentiable on [0, T] such that v(T) = −v(0). Then ∫_0^T |v(t)|² dt ≤ (T²/π²) ∫_0^T |v′(t)|² dt, where v′(t) denotes the derivative of v(t).

This is shown as follows. Let w(t) be the 2T-periodic function such that w(t) = v(t) for t ∈ [0, T) and w(t) = −v(t−T) for t ∈ [T, 2T). The function w(t) is continuous everywhere, differentiable everywhere except at multiples of T, and has a zero average. By Wirtinger's inequality, ∫_0^{2T} |w(t)|² dt ≤ ((2T)²/(2π)²) ∫_0^{2T} |w′(t)|² dt. By dividing each member of this inequality by 2, one proves the lemma.

For the proof of Proposition 16.3, one then proceeds in a way similar to [18]. Using Parseval's equality for the Fourier transform, it is easy to see in the Fourier domain that for any square-integrable function u(t), ‖ϕ_o(t) ∗ u(t)‖ ≤ ‖u(t)‖, where ϕ_o(t) is defined in (16.24). Let u(t) ∈ B. Since ϕ_o(t) ∗ u(t) = u(t), it follows from (16.26) that
" #
u(t) − RCS u(t) = ϕ_o(t) ∗ [u(t) − PCS u(t)]. Then, ‖u(t)−RCS u(t)‖ ≤ ‖u(t)−PCS u(t)‖. From (16.25) and (16.88), PCS u(t) = ∑_{n∈Z} y_n p_n(t), where y_n := ½[u(t_n) + u(t_{n+1})]. As ∑_{n∈Z} p_n(t) = 1, one can write u(t) − PCS u(t) = ∑_{n∈Z} v_n(t) p_n(t), where v_n(t) := u(t) − y_n. Then, ‖u(t)−PCS u(t)‖² = (1/T_o) ∑_{n∈Z} ∫_{t_n}^{t_{n+1}} |v_n(t)|² dt. Note that v_n(t_{n+1}) = ½[u(t_{n+1}) − u(t_n)] = −v_n(t_n). From the above lemma, ∫_{t_n}^{t_{n+1}} |v_n(t)|² dt ≤ ((t_{n+1}−t_n)²/π²) ∫_{t_n}^{t_{n+1}} |u′(t)|² dt. Hence, ‖u(t)−RCS u(t)‖² ≤ (T_m²/π²) ‖u′(t)‖². By Bernstein's inequality, ‖u′(t)‖ ≤ Ω_o ‖u(t)‖. Since Ω_o/π = 1/T_o, then ‖u(t)−RCS u(t)‖ ≤ (T_m/T_o) ‖u(t)‖. This proves Proposition 16.3.

16.6 Deviation from Uniform Sampling

It is, in general, difficult to have analytical formulas for the precise behavior of an operator of the type RS. The mathematical derivations of the previous section are limited to specific kernel and reconstruction functions and are mostly based on inequalities that may not be tight. For example, ‖I−RS‖ can only be shown to be upper bounded by 1 when T_m = T_o with the operators S and R used in [18], while it can be shown by Fourier analysis to be equal to 1 − 2/π < 0.37 when

s_n(t) = s(t−nT_s), where s(t) is some function in B. This is true even when s_n(t) is not originally of the form s(t−t_n) under general sampling instants (t_n)_{n∈Z}. Consider the example (16.22), where s_n(t) = s_o(t) ∗ p_n(t) and p_n(t) is described in Figure 16.2a. The function s_n(t) is not the shifted version of a single function, except when t_n = nT_s, in which case s_n(t) = s(t−nT_s), where s(t) := s_o(t) ∗ p(t) and p(t) is the rectangular function supported by [0, T_s). Regardless of the function s(t) ∈ B, γ(S) and ‖S‖ can be derived in terms of the Fourier transform S(ω) of s(t) defined as

S(ω) = ∫_{−∞}^{∞} s(t) e^{−jωt} dt.

We show in Appendix 16.6.3 that

γ(S) = (1/√(T_o T_s)) inf_{|ω|<Ω_o} |S(ω)|, (16.90)

and

‖S‖ = (1/√(T_o T_s)) sup_{|ω|<Ω_o} |S(ω)|,

assuming that S(ω) is continuous in (−Ω_o, Ω_o). The frame {s_n(t)}_{n∈Z} is tight when S(ω) is constant in (−Ω_o, Ω_o). This is achieved when and only when s(t) is proportional to the sinc function s_o(t), since s(t) ∈ B. In the case of direct sampling where s(t) = s_o(t), S(ω) =
the sampling is uniform at the Nyquist rate, as will To for all ω ∈ (−Ωo , Ωo ). In this situation,
be shown in this section. While inequalities have been √
useful to prove sufficient conditions, more refined esti- γ(S) = S = ρ, (16.91)
mates may be needed for design optimizations. With the
lack of analytical formulas, one mainly need to rely on where ρ is the oversampling ratio
numerical experiments. However, it may be useful to be
To
guided by the intuition that views nonuniform sampling ρ := .
as a perturbation of uniform sampling. In the latter case, Ts
quantities can be derived by Fourier transform and are Several factors can contribute to increase the ratio
expected to continuously evolve as the operators expe- κ(S) = S/γ(S) from its minimal value 1. Obviously,
rience variations such as deviations on the sampling in-band distortions in the signal s(t) are a first factor as
instants. they contribute to create a gap between the infimum and
supremum values of |S(ω)| over the interval (Ωo , Ωo ).
But more generally, any distortion of the operator S
16.6.1 Sensitivity to Sampling Errors
will have a similar effect. While Fourier analysis can
The approach of nonuniform sampling as a perturba- no longer be used when sampling uniformity is lost,
tion of uniform sampling is first useful to understand one can simply return to the original definitions of γ(S)
the sensitivity of sampling to errors. We recall from and S in (16.55) and (16.11) as the infimum and the
Section 16.3.9 that this sensitivity is measured by the supremum values of S u(t)D for functions u(t) ∈ B of
condition number κ(S). This quantity is, however, dif- norm 1. Any distortion of S is likely to separate these
ficult to derive in the nonuniform case. So let us see two values apart.
what can be said in the uniform case. We include all In the case of direct sampling, having sn (t) = so (t−tn )
the sampling operators presented in Section 16.1.5 with where the values tn are deviated from the uniform sam-
the assumption that tn = nTs for all n where Ts ≤ To . pling instants nTs is a particular way to distort the
Regardless of S , the kernel functions of S take the form operator S and hence increase κ(S). There is a simple
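The frame-bound formulas (16.90) and the tight-frame case (16.91) can be checked numerically. The following sketch (toy values assumed: To = 1, Ts = 0.5, and a hypothetical in-band distortion of the kernel) evaluates γ(S) and ‖S‖ from samples of |S(ω)| on (−Ωo, Ωo); it is an illustration of the formulas, not an implementation of the chapter's operators.

```python
import numpy as np

# Numerical sketch of the frame-bound formulas (16.90): gamma(S) and ||S||
# from the Fourier transform S(w) of the kernel on (-Omega_o, Omega_o).
# Toy values assumed: To = 1 (Nyquist period), Ts = 0.5, so rho = 2.
To, Ts = 1.0, 0.5
Omega_o = np.pi / To
w = np.linspace(-Omega_o, Omega_o, 4001)[1:-1]   # open interval

def frame_bounds(S_of_w):
    scale = 1.0 / np.sqrt(To * Ts)
    return scale * np.abs(S_of_w).min(), scale * np.abs(S_of_w).max()

# Direct sampling: s(t) = so(t), so S(w) = To on (-Omega_o, Omega_o),
# giving the tight frame gamma(S) = ||S|| = sqrt(rho) of (16.91).
gamma, norm_S = frame_bounds(np.full_like(w, To))
rho = To / Ts
print(gamma, norm_S, np.sqrt(rho))

# A hypothetical in-band distortion of the kernel opens a gap between the
# bounds, increasing the condition number kappa(S) = ||S|| / gamma(S).
gamma_d, norm_d = frame_bounds(To * (1.0 - 0.3 * np.cos(w)))
print(norm_d / gamma_d)   # kappa(S) > 1
```

The distorted-kernel example illustrates the point made above: any gap between the infimum and supremum of |S(ω)| in the band directly inflates κ(S).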
There is a simple scenario in which κ(S) can be made arbitrarily large. Start with t_n = nTo for all n, but deviate t_0 toward t_1 = To. The minimum modulus γ(S) is expected to vary continuously with t_0. When t_0 reaches t_1, the situation is intuitively similar to uniform sampling at the Nyquist period To in which the sample at instant 0 has been dropped. According to Section 16.4.2, {s_o(t−nTo)}_{n∈Z} is an exact frame, so {s_o(t−nTo)}_{n≠0} ceases to be a frame. Since S is only expected to experience a small variation when t_0 moves from 0 to t_1, then γ(S) must become 0. Thus, κ(S) is expected to grow infinitely as t_0 tends to t_1. Qualitatively, this gives the general sense that κ(S) grows with the degree of nonuniformity.

16.6.2 Analysis of ‖I − RS‖

When the available reconstruction operator R is not a left inverse of S, one wishes to analyze ‖I − RS‖ so that R can still be approximately adopted as a left inverse when this number is small enough, or RD can be made a left inverse of S according to the method of Section 16.5.1 (in the case C = I) when this number is less than 1. Again, initial clues are obtained in the case of uniform sampling. Assume that the impulse responses of R are of the type r_n(t) = r(t−nT_s), where r(t) ∈ B, and make the same assumptions on S as in the previous section under uniform sampling. The function h(t, τ) of (16.18) then becomes

    h(t, τ) := (1/To) Σ_{n∈Z} r(t−nT_s) s̄(nT_s−τ).    (16.92)

With the bandlimitation of r(t) and s(t), we show in Appendix 16.6.3 that

    h(t, τ) = h_s(t−τ),    (16.93)

where

    h_s(t) := (1/(To T_s)) (r ∗ s̄)(t).

It then follows from (16.17) that RS is linear and time-invariant with impulse response h_s(t), that is,

    RS x(t) = h_s(t) ∗ x(t).    (16.94)

In a way similar to (16.90), it can be shown that

    ‖I − RS‖ = sup_{|ω|<Ωo} |1 − H_s(ω)|,    (16.95)

where H_s(ω) is the Fourier transform of h_s(t).

Consider, for example, the case studied in [18] where S is the direct sampling operator at (t_n)_{n∈Z} and R is defined by (16.87). When t_n = nT_s for all n ∈ Z, the above equations are applicable with s(t) = s_o(t) and r(t) = φo(t) ∗ p̌(t), where p̌(t) is the rectangular function supported by [−T_s/2, T_s/2). In this case, it is easy to derive that H_s(ω) = sinc(ω/(2ρΩo)). With this, ‖I − RS‖ = 1 − sinc(1/(2ρ)). Its maximum is obtained with ρ = 1, that is, T_s = To, and is equal to 1 − 2/π ≈ 0.36. This nonzero value is due to the in-band distortion of the function p̌(t). Note in this condition that the analysis of [18] can only give the upper bound ‖I − RS‖ ≤ 1 (see Section 16.5.2).

In general, one should try to have ‖I − RS‖ as small as possible under uniform sampling. This value is exactly zero when H_s(ω) = 1 for all ω ∈ (−Ωo, Ωo), which is achieved when R(ω) = To T_s / S*(ω) in this frequency range. As the sampling instants are progressively deviated, it is expected that ‖I − RS‖ grows continuously from its initial value. When ‖I − RS‖ becomes too large for R to be considered a satisfactory left inverse, the next concern is whether ‖I − RS‖ can be maintained less than 1, for the reason explained above.

In spite of the lack of analytical formulas for the growth of ‖I − RS‖ with sampling deviations, this intuitive approach may still be productive in the design process. With the direct sampling operator S at uniform instants t_n = nT_s with T_s ≤ To, we know from (16.91) that ‖S u(t)‖ = √ρ ‖u(t)‖ for all u(t) ∈ B, and hence S*S = ρI from (16.63). Therefore, one obtains ‖I − RS‖ = 0 with R := (1/ρ) S*. One can also write that ‖I − S*CS‖ = 0 where C = (1/ρ) I. Assume now that the sampling period varies slowly in time. Even if the variation is extremely slow, ‖I − S*CS‖ is expected to grow as a function of the peak-to-peak amplitude of the variation. Now, instead of having C with a diagonal constantly equal to 1/ρ = T_s/To, it is intuitive that ‖I − S*CS‖ will be maintained small if this constant is adaptively adjusted to the "local" sampling period. A natural way to do so is to choose C as defined by (16.82) and (16.84). This brings us to the idea of weighted sampling discussed in Section 16.4.4.

16.6.3 Appendix

16.6.3.1 Proof of (16.90)

Let u(t) ∈ B\{0}. The nth component of S u(t) is ⟨s(t−nT_s), u(t)⟩ = (1/To)(s̄ ∗ u)(nT_s). Thus,

    ‖S u(t)‖²_D = (1/To²) Σ_{n∈Z} |(s̄ ∗ u)(nT_s)|².    (16.96)

Since (s̄ ∗ u)(t) is bandlimited by Ωo = π/To ≤ π/T_s, the Shannon sampling theorem applied at the sampling period T_s yields (s̄ ∗ u)(t) = Σ_{n∈Z} (s̄ ∗ u)(nT_s) sinc(t/T_s − n). The family {sinc(t/T_s − n)}_{n∈Z} is known to be orthogonal with respect to ⟨·,·⟩. By Parseval's equality,

    ‖(s̄ ∗ u)(t)‖² = ‖sinc(t/T_s)‖² Σ_{n∈Z} |(s̄ ∗ u)(nT_s)|².    (16.97)
Since ∫_{−∞}^{∞} |sinc(t)|² dt = 1, then ‖sinc(t/T_s)‖² = (1/To) ∫_{−∞}^{∞} |sinc(t/T_s)|² dt = T_s/To. From the above two equations and Parseval's equality for the Fourier transform, we obtain

    ‖S u(t)‖²_D / ‖u(t)‖² = (1/(To T_s)) ‖(s̄ ∗ u)(t)‖² / ‖u(t)‖²
                          = (1/(To T_s)) ∫_{−∞}^{∞} |S(ω)|² |U(ω)|² dω / ∫_{−∞}^{∞} |U(ω)|² dω.

Define S_m := inf_{|ω|<Ωo} |S(ω)|. It is clear that ‖S u(t)‖²_D / ‖u(t)‖² ≥ S_m²/(To T_s). By assumption on S(ω), it is possible to find for any ε > 0 an interval I ⊂ (−Ωo, Ωo) of nonzero measure such that |S(ω)| ≤ S_m + ε for all ω ∈ I. By taking U(ω) equal to 1 for ω ∈ I and 0 otherwise, we find ‖S u(t)‖²_D / ‖u(t)‖² ≤ (S_m + ε)²/(To T_s). This shows that γ(S)² = S_m²/(To T_s) and hence the first relation of (16.90). The second relation is obtained in a similar manner.

16.6.3.2 Justification of (16.93)

Let δ(t) denote the Dirac impulse. For any given θ, τ ∈ R, r(t−θ) s̄(θ−τ) = [r(t) ∗ δ(t−θ)] s̄(θ−τ) = r(t) ∗ [δ(t−θ) s̄(θ−τ)] = r(t) ∗ [δ(t−θ) s̄(t−τ)]. From (16.92), we can then write h_s(t, τ) = (1/To) r(t) ∗ s_τ(t), where s_τ(t) := s̄(t−τ) Σ_{n∈Z} δ(t−nT_s). The Fourier transform of s_τ(t) is that of s̄(t−τ) periodized by Ω_s := 2π/T_s and divided by T_s [21]. For any given τ, s̄(t−τ) is bandlimited by Ωo [since this is true for s(t)]. Since 2Ωo = 2π/To ≤ Ω_s, the portion of s_τ(t) that lies in [−Ωo, Ωo] in the frequency domain is equal to (1/T_s) s̄(t−τ). Since r(t) is bandlimited by Ωo, then r(t) ∗ s_τ(t) = r(t) ∗ [(1/T_s) s̄(t−τ)] = (1/T_s)(r ∗ s̄)(t−τ).

16.7 Reconstruction by Successive Approximations

For given sampling and reconstruction operators S and R, we saw in Section 16.5 under what condition on these operators the RDS system of Figure 16.3 can achieve perfect reconstruction. The major difficulty, however, is the implementation of the intermediate discrete-time processor D. In the favorable case where a discrete-time operator C can be found to achieve the inequality ‖I − RCS‖ < 1, an available solution for D is given by (16.86). But the central difficulty is the inversion of RCS in this solution. The Neumann series (16.64) provides a method for the numerical evaluation of (RCS)⁻¹. Given the constraint of finite computation, one will of course only consider a finite number of terms in this series. The smaller ‖I − RCS‖ is compared to 1, the faster the series converges and, therefore, the fewer terms one needs to consider. We study in this section various aspects of the approximate implementation of D.

16.7.1 Basic Algorithm

To simplify the analysis, we start by assuming that ‖I − RS‖ < 1. According to Section 16.5.1, RDS = I with D := S(RS)⁻²R (case where C = I). This operator could be approximated by injecting a truncated Neumann series for (RS)⁻¹. This is, however, complicated due to the square power. Now, it is not necessary to implement an operator D̂ that approximates D; it is sufficient to find D̂ such that RD̂ approximates RD. We have RD = RS(RS)⁻²R = (RS)⁻¹R. We then obtain from the Neumann series of (RS)⁻¹ that RD ≈ Q_p R for p large enough, where

    Q_p := Σ_{m=0}^{p−1} (I − RS)^m.    (16.98)

Next, note that

    (I − RS) R = R − RSR = R (I − SR),    (16.99)

where I is the identity operator of D. By induction, one finds that (I − RS)^m R = R (I − SR)^m for all m ≥ 0 [22, p. 598]. By defining

    D_p := Σ_{m=0}^{p−1} (I − SR)^m,    (16.100)

one obtains Q_p R = R D_p. Thus,

    RD ≈ R D_p.

Since RDS = I, x^(p)(t) := R D_p x is an approximation of x(t) with x := S x(t). Computationally,

    x^(p)(t) = R y^(p),    (16.101)

where

    y^(p) := D_p x.    (16.102)

For the implementation of D_p, note that

    D_{i+1} = (I − SR) D_i + I,

with D_0 equal to the zero operator. Therefore, y^(i) satisfies the recursive relation

    y^(i+1) = (I − SR) y^(i) + x,    (16.103)

starting from y^(0) = 0. The operator D_p is then implemented in practice by iterating the above equation p times.

Note that the expression of D_p in (16.100) looks like Q_p in (16.98) and thus like the partial sum of a Neumann series. One would initially be tempted to claim that D_p tends to the inverse of SR when p goes to ∞. The problem is that SR is only invertible when restricted to Range(S). One cannot rigorously state either that D_p converges to a generalized inverse of SR. Indeed, we will see in the next section that y^(p) := D_p x systematically diverges with p whenever x is not in Range(S).

16.7.2 Reconstruction Estimates with Sampling Errors

We have assumed until now that x consists exactly of the samples of a bandlimited signal x(t). As considered in Section 16.3.8, one should, however, expect in practice to have

    x = S x(t) + e,

where e is a vector of sampling errors. Thanks to the invertibility of RS in B, it can be shown that the error sequence e can be uniquely decomposed as

    e = S e(t) + e₀,    (16.104)

such that e(t) ∈ B and e₀ ∈ Null(R). Indeed, by applying R to this equality, we find Re = RS e(t), which leads to the unique solution e(t) = (RS)⁻¹ Re and e₀ = e − S e(t). Conversely, these two signals do satisfy (16.104), and Re₀ is easily verified to be 0. Note by the way that this proves that D = Range(S) ⊕ Null(R), which is a generalization of (16.66). We can then write

    x = x̌ + e₀,

where

    x̌ := S x̌(t) and x̌(t) := x(t) + e(t).

Since Re₀ = 0, then (I − SR) e₀ = e₀ and, by induction, (I − SR)^m e₀ = e₀ for all m ≥ 0. Hence, D_p e₀ = p e₀ and the vector y^(p) of (16.102) yields the expression

    y^(p) = y̌^(p) + p e₀,    (16.105)

where

    y̌^(p) := D_p x̌.

If e ∉ Range(S), then e₀ ≠ 0 and the component p e₀ makes y^(p) diverge with p. However, this component is immediately canceled after application of R, so that the output of the system x^(p)(t) defined in (16.101) yields the final expression

    x^(p)(t) = R y^(p) = R y̌^(p) = R D_p x̌ = R D_p S x̌(t).

We conclude that x^(p)(t) tends to x̌(t) when p goes to infinity. The error e(t) contained in the signal x̌(t) = x(t) + e(t) results qualitatively from the component of the sample error e that is indistinguishable from the samples of a bandlimited signal. Now, the convergence of x^(p)(t) toward x̌(t) is only valid as long as the growing error p e₀ contained in the intermediate signal y^(p) remains within the tolerance of the hardware. In typical implementations, the value of p remains, however, relatively small due to computation complexity limitations, and y^(p) may keep magnitudes similar to y̌^(p) assuming a relatively small error vector e₀.

16.7.3 Estimate Improvement by Contraction

The Neumann series actually involves some deeper mechanisms of sequence convergence. Inspired by (16.103), consider two estimates of x(t) of the form

    y(t) = Ry and y′(t) = Ry′,    (16.106)

where y and y′ are related to each other by

    y′ = (I − SR) y + x,    (16.107)

with x = S x(t). Let us show that

    ‖y′(t) − x(t)‖ ≤ γ ‖y(t) − x(t)‖,    (16.108)

where

    γ := ‖I − RS‖.

By operating R on both sides of (16.107) and applying the identity (16.99) backward, one finds that

    y′(t) = (I − RS) y(t) + R x.    (16.109)

Since x = S x(t), note that x(t) = (I − RS) x(t) + R x. By subtracting this from the above equation, one obtains y′(t) − x(t) = (I − RS)(y(t) − x(t)). This implies (16.108).

The impact of (16.108) is remarkable. Regardless of how y has been obtained to produce y(t) = Ry as an estimate of x(t), the transformed vector y′ of (16.107) systematically leads to a better estimate y′(t) = Ry′ when ‖I − RS‖ < 1. The estimate improvement of (16.109) also has some intuitive interpretation, better seen from the equivalent formulation

    y′(t) = y(t) − R [S y(t) − x].

Ideally, one would wish to subtract from y(t) the estimate error signal e(t) := y(t) − x(t), which is of course not available. The signal y′(t) is obtained by subtracting instead R [S y(t) − x] = (RS) e(t), since x = S x(t). When RS is not too far from the identity, this still reduces the error contained in y(t). This is at least mathematically guaranteed when ‖I − RS‖ < 1.

Returning to the iteration (16.103), the successive reconstruction estimates x^(i)(t) of (16.101) thus satisfy the inequality ‖x^(i+1)(t) − x(t)‖ ≤ γ ‖x^(i)(t) − x(t)‖.

This recursively leads to

    ‖x^(i)(t) − x(t)‖ ≤ γ^i ‖x^(0)(t) − x(t)‖.

This is true regardless of the initial estimate x^(0)(t) = R y^(0) [not just with x^(0)(t) = R·0 = 0 as was used for the Neumann series]. This points to a more general design of the processor D, where a first estimate y^(0) of x̌ is constructed using any method (including heuristic and nonlinear methods), after which fewer iterations of (16.107) are needed.

16.7.4 General Case of Admissible Pair (S, R)

In the most general case of an admissible pair (S, R), ‖I − RS‖ may not be less than 1, and one may need to look for some discrete-time operator C such that ‖I − RCS‖ < 1. Once such an operator is found, one can redo the derivations from the beginning of Section 16.7 with the complete expression (16.86) of D including C. A simpler alternative is to keep these derivations as is but replace R by R′ := RC, which does satisfy ‖I − R′S‖ < 1. The recursive relation (16.103) then simply becomes

    y^(i+1) = (I − SRC) y^(i) + x,    (16.110)

starting from some initial guess y^(0), and the reconstructed estimate of (16.101) is changed into

    x^(p)(t) := RC y^(p).    (16.111)

As shown in Figure 16.6, the processor D then consists of providing an initial guess y^(0), iterating (16.110) to obtain y^(p), and finally computing y = C y^(p) so that Ry = RC y^(p) = x^(p)(t).

We show in Figure 16.7a how an input x(t) is approached by successive estimates x^(i)(t) from (16.111) under the experimental conditions of Figure 16.5a and b.

[Figure 16.6 is a block diagram of the processor D: the input x_n feeds an initial-guess stage producing y_n^(0); each of the p iteration stages applies SR∘C to the current estimate y_n^(i), subtracts the result from y_n^(i), and adds x_n to produce y_n^(i+1); a final application of C yields the output y_n.]

FIGURE 16.6
Implementation of processor D.

[Figure 16.7 shows two plots over t ∈ [0, 1] of the input x(t) together with the successive estimates x^(0)(t), x^(1)(t), x^(2)(t), and x^(3)(t).]

FIGURE 16.7
Successive approximations of input x(t) by x^(i)(t) of (16.111): (a) conditions of experiment of Figure 16.5a and (b) conditions of experiment of Figure 16.5c.

We recall that S is the direct sampling operator at the instants of crossings of the input with the quantization thresholds represented by dashed gray horizontal lines, R is the zero-order hold operator followed by bandlimitation as described in (16.26), and C is given in (16.88). The initial guess y^(0) is chosen so that x^(0)(t) = RC y^(0) is the bandlimited version of the quantized signal represented by the piecewise constant function in Figure 16.7a. We show in Figure 16.7b a similar experiment, where R is the linear interpolator at the same instants followed by bandlimitation and C = I. The initial guess y^(0) is just taken to be directly x = S x(t).

We have seen examples where the introduction of an auxiliary operator C is a necessity to make ‖I − RCS‖ less than 1. But the ultimate direction of research is to find C so that ‖I − RCS‖ is as small as possible. The smaller it is, the lower the number p of iterations needed to make x^(p)(t) a good approximation of x(t). Intuitively also, the closer ‖I − RCS‖ is to 0, the closer C is to a generalized inverse of SR. The iterative estimation of x(t) can then be seen as a systematic procedure to compensate for the errors of C with respect to a true generalized inverse. In the absence of analytical expressions for the generalized inversion of SR, this opens the door to heuristic designs of C. We have seen until now examples of design where C is a discrete-time operator with only one or two nonzero diagonals. A direction of investigation is the minimization of ‖I − RCS‖ using operators C with N nonzero and nonconstant diagonals. From an implementation viewpoint, such an operator is a time-varying FIR filter with N taps.

16.8 Discrete-Time Computation Implementations

16.8.1 Discrete-Time Processor D

In the proposed design of the processor D of Figure 16.6, the most critical part from an implementation viewpoint is the operator SR. Since

    SRy = S [Σ_{k∈Z} y_k r_k(t)] = Σ_{k∈Z} y_k S r_k(t),

and S r_k(t) = (⟨s_n, r_k⟩)_{n∈Z}, the nth component of the sequence SRy is

    (SRy)_n = Σ_{k∈Z} ⟨s_n, r_k⟩ y_k.

The operator SR can be seen as an infinite matrix M of entries M_{n,k} := ⟨s_n, r_k⟩ with n, k ∈ Z. For practical implementation, the above summation needs to be truncated. As ⟨s_n, r_k⟩ tends to 0 as t_k gets far away from t_n, (SRy)_n is approximated as

    (SRy)_n ≈ Σ_{k∈K_n} ⟨s_n, r_k⟩ y_k,    (16.112)

where

    K_n = {k ∈ Z : |t_k − t_n| ≤ T},

for some constant T. This operation again amounts to an FIR filter with time-varying tap coefficients. The main issue is the computation of each coefficient ⟨s_n, r_k⟩. In practice, it is unrealistic to evaluate numerically the integral implied by this inner product. One would usually rely on lookup tables. However, both s_n(t) and r_n(t) depend on a sequence of instants (t_k)_{k∈Z} and, hence, are multiparameter dependent. The use of single-parameter lookup tables is fortunately possible with the sampling and reconstruction operators considered in this chapter and listed in Table 16.1. In this table, s(t) and r(t) are any bandlimited functions, s_o(t) = sinc(t/To) and φo(t) = (1/To) sinc(t/To), as defined in (16.20) and (16.24), while p_n(t) and l_n(t) are the rectangular and triangular functions defined in Figure 16.2. In practice, one needs to precompute densely enough samples of the function f(t) given in Table 16.1 and store them in a lookup table. The last column of Table 16.1 shows which values of f(t) need to be read to obtain ⟨s_n, r_k⟩. Depending on the case, it also indicates some additional algebraic operations to be performed.

We now show how Table 16.1 is derived. Case (i) is easily obtained from the identity (16.19), which implies that ⟨s_n, r_k⟩ = (1/To)(s̄ ∗ r_k)(t_n) = (1/To)(s̄ ∗ r)(t_n−t_k). With the same identity, we obtain in case (ii), ⟨s_n, r_k⟩ = ⟨s_n, φo ∗ p_k⟩ = (1/To)(s̄ ∗ φo ∗ p_k)(t_n) = (1/To)(s̄ ∗ p_k)(t_n), since s(t) is bandlimited. Given the description of the rectangular function p_n(t) in Figure 16.2a, one easily finds for any function h(t) that [23]

    (h ∗ p_k)(t) = ∫_{t−t_{k+1}}^{t−t_k} h(τ) dτ = f(t−t_k) − f(t−t_{k+1}),    (16.113)

where

    f(t) := ∫₀ᵗ h(τ) dτ.

This leads to the result of ⟨s_n, r_k⟩ shown in the table. Case (iii) is obtained in a similar manner. With the identity (16.23) and the above result (16.113), we obtain in case (iv), ⟨s_n, r_k⟩ = ⟨s_o ∗ p_n, φo ∗ p_k⟩ = ⟨p_n, s̄_o ∗ φo ∗ p_k⟩ = ⟨p_n, s_o ∗ p_k⟩ = ∫_{t_n}^{t_{n+1}} (φo ∗ p_k)(t) dt = ∫_{t_n}^{t_{n+1}} g(t−t_k) dt − ∫_{t_n}^{t_{n+1}} g(t−t_{k+1}) dt, where g(τ) := ∫₀^τ φo(τ′) dτ′. One then easily finds the result of ⟨s_n, r_k⟩ given in the table.
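The lookup-table idea for case (ii) can be sketched numerically: tabulate f(t) = (1/To)∫₀ᵗ s(τ)dτ once on a dense grid, then read out each coefficient as a difference of two table values. The kernel s = s_o and the sampling instants below are assumed toy choices, and the table read-out is compared against a direct numerical integration of the inner product.

```python
import numpy as np

# Sketch of the single-parameter lookup table for case (ii) of Table 16.1:
# <s_n, r_k> = f(t_n - t_k) - f(t_n - t_{k+1}),
# with f(t) = (1/To) * integral_0^t s(tau) d tau  (s real, so s-bar = s).
To = 1.0

def s(t):
    return np.sinc(t / To)           # toy kernel s(t) = so(t)

# Precompute f on a dense grid once; this plays the role of the table.
grid = np.linspace(-20.0, 20.0, 40001)
dt = grid[1] - grid[0]
cum = np.concatenate(([0.0], np.cumsum((s(grid[1:]) + s(grid[:-1])) / 2 * dt)))
f_tab = (cum - np.interp(0.0, grid, cum)) / To   # shift so that f(0) = 0

def f(t):
    return np.interp(t, grid, f_tab)             # table read-out

t_inst = np.array([0.0, 0.9, 2.1, 3.0, 4.2])     # hypothetical instants
n, k = 3, 1
coef_table = f(t_inst[n] - t_inst[k]) - f(t_inst[n] - t_inst[k + 1])

# Reference: direct numerical integration of
# (1/To) * integral_{t_k}^{t_{k+1}} s(t_n - tau) d tau.
tau = np.linspace(t_inst[k], t_inst[k + 1], 20001)
vals = s(t_inst[n] - tau)
coef_direct = ((vals[1:] + vals[:-1]) / 2).sum() * (tau[1] - tau[0]) / To
print(coef_table, coef_direct)       # the two values agree closely
```

Only a single one-argument table is needed even though the coefficients depend on the whole sequence of instants, which is the point of Table 16.1.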
TABLE 16.1
Lookup-Table Calculation of ⟨s_n, r_k⟩

(i)   s_n(t) = s(t−t_n),        r_n(t) = r(t−t_n);
      f(t) = (1/To)(s̄ ∗ r)(t);
      ⟨s_n, r_k⟩ = f(t_n−t_k).
(ii)  s_n(t) = s(t−t_n),        r_n(t) = (φo ∗ p_n)(t);
      f(t) = (1/To) ∫₀ᵗ s̄(τ) dτ;
      ⟨s_n, r_k⟩ = f(t_n−t_k) − f(t_n−t_{k+1}).
(iii) s_n(t) = (s_o ∗ p_n)(t),  r_n(t) = r(t−t_n);
      f(t) = ∫₀ᵗ r̄(τ) dτ;
      ⟨s_n, r_k⟩ = f(t_k−t_n) − f(t_k−t_{n+1}).
(iv)  s_n(t) = (s_o ∗ p_n)(t),  r_n(t) = (φo ∗ p_n)(t);
      f(t) = ∫₀ᵗ ∫₀^τ φo(τ′) dτ′ dτ;
      ⟨s_n, r_k⟩ = f(t_{n+1}−t_k) − f(t_n−t_k) − f(t_{n+1}−t_{k+1}) + f(t_n−t_{k+1}).
(v)   s_n(t) = s(t−t_n),        r_n(t) = (φo ∗ l_n)(t);
      f(t) = (1/To) ∫₀ᵗ (t−τ) s̄(τ) dτ;
      ⟨s_n, r_k⟩ = [f(t_n−t_{k+1}) − f(t_n−t_k)]/(t_{k+1}−t_k) − [f(t_n−t_k) − f(t_n−t_{k−1})]/(t_k−t_{k−1}).

Case (v) is like case (ii) except that p_n(t) is replaced by l_n(t). Thus, ⟨s_n, r_k⟩ = (1/To)(s̄ ∗ l_k)(t_n). We show in Appendix 16.8.4 that for any function h(t),

    (h ∗ l_k)(t) = [f(t−t_{k+1}) − f(t−t_k)]/(t_{k+1}−t_k) − [f(t−t_k) − f(t−t_{k−1})]/(t_k−t_{k−1}),    (16.114)

where

    f(t) := ∫₀ᵗ (t−τ) h(τ) dτ.

This leads to the result of ⟨s_n, r_k⟩ in case (v) of the table, where the factor 1/To is folded into f(t).

16.8.2 Synchronous Sampling of Reconstructed Signal

An important application is the conversion of nonuniform samples into a PCM signal for interface compatibility with standard digital systems. This can be done by resampling the output x̂(t) of the RDS system at the Nyquist period To to obtain the sequence (x̂(nTo))_{n∈Z}. In practice, one does not necessarily need to produce physically the continuous-time signal before resampling it. It is more efficient to directly and digitally compute (x̂(nTo))_{n∈Z} from the output (y_n)_{n∈Z} of the processor D (see Figure 16.8). Let us concentrate on the computation of x̂(t_d) at a given discrete instant t_d. As x̂(t) = Ry, then x̂(t_d) = Σ_{n∈Z} y_n r_n(t_d). Again, for finite computation complexity, one calculates the approximation

    x̂(t_d) ≈ Σ_{n∈K_d} y_n r_n(t_d),    (16.115)

where K_d = {k ∈ Z : |t_k − t_d| ≤ T} for some constant T. With the examples of reconstruction operators considered in the previous sections, r_n(t_d) can again be obtained from the lookup table of a one-argument function f(t). When r_n(t) = r(t−t_n) for some function r(t), r_n(t_d) = r(t_d−t_n), which requires a table for the function f(t) = r(t). When r_n(t) = φo(t) ∗ p_n(t) as in case (ii) of Table 16.1, one obtains from (16.113) that r_n(t_d) = f(t_d−t_n) − f(t_d−t_{n+1}), where f(τ) := ∫₀^τ φo(t) dt. This technique was first introduced in [23]. When r_n(t) = φo(t) ∗ l_n(t) as in case (v) of Table 16.1, one then applies (16.114) with h(t) = φo(t) and t = t_d.

[Figure 16.8 shows the processing chain x(t) → S → x_n → discrete-time processor D → y_n → R → x̂(t) → synchronous sampler → x̂(nTo).]

FIGURE 16.8
Conversion of nonuniform samples into standard PCM signal.

16.8.3 Lagrange Interpolation

In the previous section, we proposed to convert nonuniform samples into a PCM signal by resampling the output of an RDS system. With the direct sampling operator S x(t) = (x(t_n))_{n∈Z} under the conditions (16.76) and (16.77) with T_s ≤ To and μ < 1/4, we recall from Section 16.4.3 that x(t) ∈ B can be exactly reconstructed by the Lagrange interpolation formula

    x(t) = Σ_{n∈Z} x(t_n) r_n(t),    (16.116)

where the functions {r_n(t)}_{n∈Z} are given in (16.79).
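The exact formula (16.116) involves infinite products, but a finite local version (interpolating through the 2K instants surrounding each interval, in the spirit of the truncation used in this section) is easy to sketch. The jittered instants and the test input below are assumed toy choices.

```python
import numpy as np

# Sketch of locally truncated Lagrange interpolation: on [t_k, t_{k+1}],
# interpolate through the 2K surrounding instants t_{k-K+1}, ..., t_{k+K}.
def lagrange_local(t_eval, t, samples, K):
    out = np.empty_like(t_eval)
    for j, tt in enumerate(t_eval):
        k = np.searchsorted(t, tt) - 1              # tt in [t_k, t_{k+1})
        idx = list(range(k - K + 1, k + K + 1))     # 2K nearest nodes
        acc = 0.0
        for n in idx:
            basis = 1.0
            for i in idx:
                if i != n:
                    basis *= (tt - t[i]) / (t[n] - t[i])
            acc += samples[n] * basis
        out[j] = acc
    return out

rng = np.random.default_rng(2)
Ts, mu = 0.5, 0.2                                   # Ts <= To = 1
n = np.arange(-40, 41)
t = n * Ts + mu * Ts * (2 * rng.random(n.size) - 1) # jittered instants

def x(u):                                           # bandlimited toy input
    return np.cos(2.2 * u) + 0.5 * np.sin(1.3 * u)

t_eval = np.linspace(-2.0, 2.0, 101)
errs = {}
for K in (1, 3, 6):
    errs[K] = np.abs(lagrange_local(t_eval, t, x(t), K) - x(t_eval)).max()
    print(K, errs[K])                               # error drops with K
```

With K = 1 this reduces to linear interpolation; increasing K shrinks the error, consistent with the convergence discussion that follows in this section.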

In this case, an alternative method of PCM conversion is to resample an estimate x̂(t) of x(t) obtained from a finite-complexity approximation of the above formula. A standard way to approximate Lagrange interpolation is to take x̂(t) such that for each k,

    ∀ t ∈ [t_k, t_{k+1}],  x̂(t) = Σ_{n=k−K+1}^{k+K} x(t_n) r̂_{k,n}(t),    (16.117)

where

    r̂_{k,n}(t) = Π_{i=k−K+1, i≠n}^{k+K} (t − t_i)/(t_n − t_i),

and K is a truncation integer parameter. Using the rectangular function p_n(t) defined in Figure 16.2a, one can write more concisely that x̂(t) = Σ_{(k,n)∈I} x(t_n) r̂_{k,n}(t) p_k(t) for all t ∈ R, where I is the set of all integer pairs (k, n) such that k−K+1 ≤ n ≤ k+K. Since this is equivalent to n−K ≤ k ≤ n+K−1, we also have

    ∀ t ∈ R,  x̂(t) = Σ_{n∈Z} x(t_n) r̂_n(t),    (16.118)

where

    r̂_n(t) = Σ_{k=n−K}^{n+K−1} r̂_{k,n}(t) p_k(t).

We show in Figure 16.9 examples of such functions r̂_n(t) for various values of K and sampling configurations. With K = 1, one obtains r̂_n(t) = [(t−t_{n−1})/(t_n−t_{n−1})] p_{n−1}(t) + [(t−t_{n+1})/(t_n−t_{n+1})] p_n(t) = l_n(t), where l_n(t) is the triangular function of Figure 16.2b, and one thus recognizes linear interpolation.

The theory that led to the perfect reconstruction (16.116) does not apply when μ ≥ 1/4 [7]. Interestingly, it is still possible to show that the estimate x̂(t) of (16.117) converges uniformly to x(t) when K goes to infinity, regardless of μ ≥ 0, provided the oversampling ratio ρ = To/T_s is large enough. Using basic properties of Lagrange polynomials from [24, pp. 186–192] and extending derivations from [25,26] to the present case, we show in Section 16.8.4 that, at large enough K,

    |x̂(t) − x(t)| ≤ (x_m/√π) K^{2μ−1/2} (π/(2ρ))^{2K},    (16.119)

where x_m is the input maximum amplitude. From the above upper bound, x̂(t) is guaranteed to converge uniformly to x(t) when the oversampling ratio ρ is more than π/2 ≈ 1.57, regardless of μ. Note that the assumption ρ > π/2 is only a sufficient condition permitting an analytical proof of convergence and has nothing of a necessary condition.

16.8.4 Appendix

16.8.4.1 Proof of (16.114)

Let ρ_{a,b,c}(t) be the triangular function shown in dashed line in Figure 16.10d. In the convolution expression

    (h ∗ l_k)(t) = ∫_{−∞}^{∞} h(τ) l_k(t−τ) dτ,

we have l_k(t−τ) = ρ_{a,b,c}(τ) with (a, b, c) = (t−t_{k+1}, t−t_k, t−t_{k−1}). The key is to expand the three-parameter function ρ_{a,b,c}(τ) in terms of a one-parameter function. Consider first the two-parameter function ρ_{a,b}(τ) shown in Figure 16.10c, which is zero on (−∞, a], linear on [a, b] with slope 1, and constant on [b, ∞). It is easy to see from Figure 16.10d that

    ρ_{a,b,c}(τ) = ρ_{a,b}(τ)/(b−a) − ρ_{b,c}(τ)/(c−b).

Now, let ρ_t(τ) be the one-parameter function equal to |t−τ| when 0 ≤ τ ≤ t or t ≤ τ < 0, and 0 otherwise. This function is illustrated in Figure 16.10a. By comparing parts (b) and (c) of Figure 16.10, it is easy to see that

    ρ_{a,b}(τ) = ρ_a(τ) − ρ_b(τ) + (b−a) u(τ),

where u(τ) is the unit step function. By injecting this into the above expression of ρ_{a,b,c}(τ), we obtain

    ρ_{a,b,c}(τ) = [ρ_a(τ) − ρ_b(τ)]/(b−a) − [ρ_b(τ) − ρ_c(τ)]/(c−b).

Defining f(t) := ∫_{−∞}^{∞} h(τ) ρ_t(τ) dτ, it is clear that

    ∫_{−∞}^{∞} h(τ) ρ_{a,b,c}(τ) dτ = [f(a) − f(b)]/(b−a) − [f(b) − f(c)]/(c−b).

Then, (h ∗ l_k)(t) is obtained by replacing (a, b, c) in the above expression by (t−t_{k+1}, t−t_k, t−t_{k−1}). Meanwhile, f(t) yields the simpler expression in (16.114) using the above definition of ρ_t(τ).

16.8.4.2 Derivation of the Bound (16.119)

Without loss of generality, consider the function x̂(t) of (16.117) for t ∈ [t_0, t_1] (k = 0). The first crucial step is a result of [24] stating that

    x̂(t) − x(t) = (1/(2K)!) x^{(2K)}(u) w(t),

where

    w(t) = Π_{i=−K+1}^{K} (t − t_i),

x^{(n)}(t) is the nth derivative of x(t), and u is some value in [t_{−K+1}, t_K]. From (16.76), t_i⁻ ≤ t_i ≤ t_i⁺, where t_i^± := (i ± μ) T_s.
K=1 K=2 K=6
1 1 1

0.8 0.8 0.8

0.6 0.6 0.6

0.4 0.4 0.4

0.2 0.2 0.2

0 0 0

–0.2 –0.2 –0.2


–5 –4 –3 –2 –1 0 1 2 3 4 5 –5 –4 –3 –2 –1 0 1 2 3 4 5 –5 –4 –3 –2 –1 0 1 2 3 4 5
t t t

sinc(t)
r0(t)
(a)

1.2 K=2 1.2 K = 24


K=1 1 K=6
1
0.8 0.8
0.6 0.6
0.4 0.4
r0(t)
r0(t)

0.2 0.2

0 0
t–4t–3 t–2 t–1 t0 t1 t2 t3 t4 t–5 t–4 t–3 t–2 t–1 t0 t1 t2 t3 t4 t5
–0.2 –0.2

–0.4 –0.4

–0.6 –0.6

–0.8 –0.8
–5 –4 –3 –2 –1 0 1 2 3 4 5 –5 –4 –3 –2 –1 0 1 2 3 4 5
t t
(b)

FIGURE 16.9
Truncated Lagrange interpolation function r̂0 ( t) given in (16.118) for various values of K: (a) uniform sampling tn = kTs (with Ts = 1) and
(b) nonuniform sampling.

(i ± μ)Ts. Since t ∈ [t0, t1], then ti⁻ ≤ ti ≤ t ≤ tj ≤ tj⁺ for all integers i ≤ 0 and j ≥ 1. Then, |w(t)| ≤ v−(t) v+(t), where

    v−(t) := ∏_{i=−K+1}^{0} (t − ti⁻),

and

    v+(t) := ∏_{j=1}^{K} (tj⁺ − t).

The zeros of v−(t) and the zeros of v+(t) can be seen to be symmetric about Ts/2. The product v−(t) v+(t) then reaches a maximum within [t0, t1] at Ts/2. One finds that

    v−(Ts/2) = v+(Ts/2) = ∏_{j=1}^{K} (tj⁺ − Ts/2)
             = Ts^K ∏_{j=1}^{K} (j + μ − 1/2)
             = Ts^K Γ(K+μ+1/2) / Γ(μ+1/2),

given the relation Γ(t+1) = t Γ(t) satisfied by the gamma function Γ. By Bernstein's inequality, |x^(2K)(u)| ≤ xm Ωo^(2K). Then, for any t ∈ [t0, t1] ⊂ [t0⁻, t1⁺],

    | x̂(t) − x(t) | ≤ xm ( (Ts Ωo)^(2K) / (2K)! ) ( Γ(K+μ+1/2)^2 / Γ(μ+1/2)^2 ).        (16.120)
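Two ingredients of this derivation are easy to check numerically: the product-to-gamma identity used for v±(Ts/2), and the decay in K of the final bound (16.119) when ρ > π/2. A small sketch, not part of the original text; the parameter values are chosen arbitrarily for illustration:

```python
import math

def product_form(K, mu):
    # prod_{j=1}^{K} (j + mu - 1/2), the zero pattern evaluated at Ts/2 (Ts = 1)
    p = 1.0
    for j in range(1, K + 1):
        p *= j + mu - 0.5
    return p

def gamma_form(K, mu):
    # Gamma(K + mu + 1/2) / Gamma(mu + 1/2), via log-gamma for numerical stability
    return math.exp(math.lgamma(K + mu + 0.5) - math.lgamma(mu + 0.5))

for K in (1, 5, 12):
    for mu in (0.3, 1.0):
        assert abs(product_form(K, mu) - gamma_form(K, mu)) <= 1e-9 * gamma_form(K, mu)

def bound_16_119(K, rho, mu=1.0, xm=1.0):
    # Right-hand side of (16.119): xm * K^(2*mu - 1/2) * (pi/(2*rho))^(2*K)
    return xm * K ** (2 * mu - 0.5) * (math.pi / (2 * rho)) ** (2 * K)

# For rho > pi/2 the geometric factor beats the polynomial growth in K,
# so the bound vanishes as the truncation order K increases; for rho < pi/2
# the bound diverges instead, which is why the proof needs rho > pi/2.
assert bound_16_119(50, 2.0) < bound_16_119(5, 2.0)
assert bound_16_119(50, 2.0) < 1e-6
assert bound_16_119(20, 1.0) > bound_16_119(5, 1.0)
```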
406 Event-Based Control and Signal Processing
[Diagrams for the three cases a < b < 0, a < 0 ≤ b, and 0 ≤ a < b, showing (a) ρa(τ) and ρb(τ); (b) their difference ρa(τ) − ρb(τ); (c) ρa,b(τ), of height b − a; and (d) the decomposition of ρa,b,c(τ) into ρa,b(τ)/(b − a) and −ρb,c(τ)/(c − b).]

FIGURE 16.10
Decomposition of triangular function ρ a,b,c (τ): (a) ρ a (τ) and ρb (τ); (b) ρ a (τ) − ρb (τ); (c) ρ a,b (τ); and (d) ρ a,b,c (τ).


Using the Stirling formula Γ(t+1) ∼ √(2πt) (t/e)^t at large t, n! = Γ(n+1) and β := μ − 1/2, we have at large K

    Γ(K+μ+1/2)^2 / (2K)! = Γ(K+β+1)^2 / (2K)!
        ∼ 2π(K+β) ((K+β)/e)^(2K+2β) / ( √(2π·2K) (2K/e)^(2K) )
        ∼ √(πK) ((K+β)/e)^(2β) (1 + β/K)^(2K) / 2^(2K)
        ∼ √π K^(2μ−1/2) / 2^(2K),

using the limit (1 + β/K)^K ∼ e^β. One obtains (16.119) after injecting the above result into (16.120), the inequality Γ(μ + 1/2) ≥ Γ(1/2) = √π, and the relation Ωo = π/To.


Acknowledgments

The author thanks Yannis Tsividis for motivating the elaboration of this chapter and for his useful comments, and Sinan Güntürk for his help on the mathematical aspects of operator theory.


Bibliography

[1] Y. Tsividis, M. Kurchuk, P. Martinez-Nuevo, S. Nowick, B. Schell, and C. Vezyrtzis. Event-based data acquisition and digital signal processing in continuous time. In Event-Based Control and Signal Processing (M. Miskowicz, Ed.), pp. 353–378. CRC Press, Boca Raton, FL, 2015.

[2] J. W. Mark and T. Todd. A nonuniform sampling approach to data compression. IEEE Transactions on Communications, 29:24–32, 1981.

[3] R. Duffin and A. Schaeffer. A class of nonharmonic Fourier series. Transactions of the American Mathematical Society, 72:341–366, 1952.

[4] J. L. Yen. On nonuniform sampling of bandwidth-limited signals. IEEE Transactions on Circuit Theory, CT-3:251–257, 1956.

[5] F. J. Beutler. Error-free recovery of signals from irregularly spaced samples. SIAM Review, 8:328–335, 1966.

[6] H. Landau. Sampling, data transmission, and the Nyquist rate. Proceedings of the IEEE, 55:1701–1706, 1967.

[7] K. Yao and J. Thomas. On some stability and interpolatory properties of nonuniform sampling expansions. IEEE Transactions on Circuit Theory, 14:404–408, 1967.

[8] F. Marvasti. Nonuniform Sampling: Theory and Practice. Kluwer, New York, NY, 2001.

[9] J. Benedetto. Irregular sampling and frames. In Wavelets: A Tutorial in Theory and Applications (C. K. Chui, Ed.), pp. 445–507. Academic Press, Boston, MA, 1992.

[10] H. G. Feichtinger and K. Gröchenig. Theory and practice of irregular sampling. In Wavelets: Mathematics and Applications (J. Benedetto, Ed.), pp. 318–324. CRC Press, Boca Raton, FL, 1994.

[11] A. Lazar and L. T. Tóth. Perfect recovery and sensitivity analysis of time encoded bandlimited signals. IEEE Transactions on Circuits and Systems I: Regular Papers, 51:2060–2073, 2004.

[12] M. Vetterli, V. K. Goyal, and J. Kovacevic. Foundations of Signal Processing. Cambridge University Press, Cambridge, UK, 2014.

[13] A. W. Naylor and G. R. Sell. Linear Operator Theory in Engineering and Science, volume 40 of Applied Mathematical Sciences, 2nd edition. Springer-Verlag, New York, NY, 1982.

[14] J. B. Conway. A Course in Functional Analysis, volume 96 of Graduate Texts in Mathematics, 2nd edition. Springer-Verlag, New York, NY, 1990.

[15] C. S. Kubrusly. Spectral Theory of Operators on Hilbert Spaces. Birkhäuser/Springer, New York, NY, 2012.

[16] R. Paley and N. Wiener. Fourier Transforms in the Complex Domain, volume 19 of American Mathematical Society Colloquium Publications. American Mathematical Society, Providence, RI, 1987. Reprint of the 1934 original.

[17] N. Levinson. Gap and Density Theorems, volume 26 of American Mathematical Society Colloquium Publications. American Mathematical Society, New York, NY, 1940.

[18] K. Gröchenig. Sharp results on irregular sampling of bandlimited functions. In Probabilistic and Stochastic Methods in Analysis, with Applications (Il Ciocco, 1991), volume 372 of NATO Advanced Science Institutes Series C: Mathematical and Physical Sciences (J. S. Byrnes, J. L. Byrnes, K. A. Hargreaves, and K. Berry, Eds.), pp. 323–335. Kluwer Academic Publishers, Dordrecht, 1992.

[19] A. A. Boichuk and A. M. Samoilenko. Generalized Inverse Operators and Fredholm Boundary-Value Problems. VSP, Utrecht, 2004. Translated from the Russian by P. V. Malyshev and D. V. Malyshev.

[20] K. Gröchenig. Reconstruction algorithms in irregular sampling. Mathematics of Computation, 59(199):181–194, 1992.

[21] A. V. Oppenheim and R. W. Schafer. Discrete-Time Signal Processing. Prentice-Hall Signal Processing Series, Upper Saddle River, NJ, 2011.

[22] J. Benedetto and S. Scott. Frames, irregular sampling, and a wavelet auditory model. In Nonuniform Sampling: Theory and Practice (F. Marvasti, Ed.), pp. 585–617. Kluwer, New York, NY, 2001.

[23] D. Hand and M.-W. Chen. A non-uniform sampling ADC architecture with embedded alias-free asynchronous filter. In Global Communications Conference (GLOBECOM), 2012 IEEE, Anaheim, CA, pp. 3707–3712, December 2012.

[24] P. Henrici. Elements of Numerical Analysis. Wiley, New York, NY, 1964.

[25] J. J. Knab. System error bounds for Lagrange polynomial estimation of band-limited functions. IEEE Transactions on Information Theory, 21:474–476, 1975.

[26] Z. Cvetković. Single-bit oversampled A/D conversion with exponential accuracy in the bit rate. IEEE Transactions on Information Theory, 53:3979–3989, 2007.
17
Spectral Analysis of Continuous-Time ADC and DSP

Yu Chen
Columbia University
New York, NY, USA

Maria Kurchuk
Pragma Securities
New York, NY, USA

Nguyen T. Thao
City College of New York
New York, NY, USA

Yannis Tsividis
Columbia University
New York, NY, USA

CONTENTS
17.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
17.2 Continuous-Time Amplitude Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
17.2.1 Quantization Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
17.2.2 Single-Tone Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
17.2.2.1 Power within [0, f SAW ] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
17.2.3 Two-Tone Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
17.2.4 Bandlimited Gaussian Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
17.3 Uniform Sampling of Quantized Continuous-Time Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
17.3.1 Context and Issues of Uniform Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
17.3.2 Analysis of Aliased Quantization Error—A Staircase Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
17.4 CT-ADC with Time Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
17.5 Error Analysis of CT DSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
Acknowledgment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419

ABSTRACT  This chapter analyzes the error introduced in continuous-time amplitude quantization. Spectral features of the quantization error added on bandlimited signals are highlighted. The effects of synchronization and time coding on the error spectrum are discussed. The error at the output of a continuous-time DSP is also analyzed.


17.1 Introduction

The traditional view that quantization amounts to an additive and independent source of white noise has mainly resulted from the culture of discrete-time signals; to produce those from a continuous-time input, both input sampling and quantization are performed. In contrast to this, in a continuous-time (CT) analog-to-digital converter (ADC) only quantization is performed [1], as shown in Figure 17.1. In this chapter, we show how amplitude quantization in continuous time can be analyzed as a memoryless (nonlinear) transformation of continuous waveforms. This yields spectral results that are harmonic based, rather than noise oriented. The resulting spectrum is discussed in Section 17.2.

While pure continuous-time processing of signals has advantages as presented in Chapter 15, one still needs to consider and evaluate the effects of an eventual

sampling of the signals thus obtained for interface compatibility with standard data processing (including storage, transmission, and off-line computation). Section 17.3 addresses this issue.

As mentioned in [1], one can store the nonuniformly spaced event times tk by finely quantizing the time axis, which results in "pseudocontinuous" operation. A basic question when doing this is how to decide what the resolution of time quantization should be, such that the accuracy of the resulting signal is comparable with that of a purely continuous-time system. Section 17.4 discusses this problem.

Finally, the results obtained for CT quantizers are extended to the output of a CT DSP in Section 17.5.


17.2 Continuous-Time Amplitude Quantization

Continuous-time amplitude quantization (Figure 17.1) converts the input x(t) to a piecewise constant signal xq(t) and allows further digitizing. Basically, xq(t) is an instantaneous transformation of x(t), which can be expressed as

    xq(t) = Q(x(t)),        (17.1)

where Q is the scalar function of quantization. The error signal e(t) = xq(t) − x(t) is then also a memoryless function of x(t),

    e(t) = E(x(t)),        (17.2)

where E is the error function of Q defined by

    E(x) = Q(x) − x.        (17.3)

The above error signal e(t) is defined with the implicit assumptions that it does not contain a signal component or a constant, and the quantization does not add delay. In the more general case, the error has to be defined as follows. The mean square error (MSE) is obtained by the following minimization over a, b, and τ, which compensates for the gain, DC offset, and time delay of the quantization, respectively:

    MSE = min_{a,b,τ} ⟨ [ xq(t) − a x(t − τ) − b ]^2 ⟩,        (17.4)

where ⟨·⟩ denotes mean-square averaging. Once the three parameters are determined, the error signal can then be defined as

    e(t) = xq(t) − a x(t − τ) − b.        (17.5)

For simplicity, this chapter assumes that quantizers have a unity gain, zero DC shift, and zero time delay, so that e(t) = xq(t) − x(t). However, the principles presented are also valid for other cases.

After reviewing basic knowledge on the error function E(x) of standard scalar quantizers, we will tackle the spectral analysis of e(t) successively when x(t) is a single tone and when it is composed of two tones.

17.2.1 Quantization Characteristics

Quantization consists of approximating an input number x by the nearest value from a predetermined finite set of quantum levels. In general, these levels may not be uniformly spaced, as is the case in signal companding. In this chapter, we concentrate on the simpler but standard case of uniform spacing. Figure 17.2a and b shows the two typical transfer functions Q of uniform scalar quantizers [2]. They are both staircase functions. The mid-rise quantizer of Figure 17.2a has a discontinuity at x = 0. Meanwhile, the mid-tread quantizer of Figure 17.2b has a quantum level at 0 and is preferred in event-driven systems, as it can tolerate small input imperfections around zero (DC offset, noise) and maintain a zero output in the absence of a signal. An odd number of quantum levels is however necessary to achieve symmetry around the origin. With a quantization step size Δ and an N-bit resolution of quantum levels, the single-ended full-scale amplitude of the mid-tread quantizer is

    A_FS = (2^(N−1) − 1/2) Δ,

(while it is 2^(N−1) Δ with the mid-rise quantizer).

[Block diagram: input signal x(t) → quantizer → quantized signal xq(t).]

FIGURE 17.1
A continuous-time amplitude quantization operation.
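With τ fixed at 0, the minimization of (17.4) over a and b reduces to an ordinary least-squares fit of xq(t) against x(t). A minimal numerical sketch, assuming NumPy; the step size and test tone are chosen arbitrarily and are not from the chapter:

```python
import numpy as np

def fit_gain_offset(x, xq):
    """a, b minimizing mean((xq - a*x - b)^2); the tau = 0 case of (17.4)."""
    a = np.cov(x, xq, bias=True)[0, 1] / np.var(x)
    b = xq.mean() - a * x.mean()
    return a, b

delta = 0.1
t = np.linspace(0, 1, 10_000, endpoint=False)     # 3 full periods below
x = np.sin(2 * np.pi * 3 * t)
xq = delta * np.round(x / delta)                  # mid-tread quantization
a, b = fit_gain_offset(x, xq)
e = xq - a * x - b                                # error signal of (17.5)

# An ideal mid-tread quantizer has near-unity gain and near-zero offset,
# so here e(t) is essentially xq(t) - x(t).
assert abs(a - 1) < 0.02 and abs(b) < 0.02
assert e.var() < 1.5 * delta ** 2 / 12            # near the classical delta^2/12
```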
Spectral Analysis of Continuous-Time ADC and DSP 411
[Diagrams: the staircase transfer function of the mid-rise quantizer (output levels at odd multiples of Δ/2, input thresholds at multiples of Δ) and of the mid-tread quantizer (output levels at multiples of Δ, input thresholds at odd multiples of Δ/2), together with the corresponding sawtooth error functions E(x), bounded between −Δ/2 and Δ/2.]

FIGURE 17.2
Quantization characteristics and error functions: (a) characteristic of mid-rise quantizer; (b) characteristic of mid-tread quantizer; (c) quantization
error of mid-rise quantizer; and (d) quantization error of mid-tread quantizer.
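The two characteristics of Figure 17.2 can be written down directly. A small sketch with rounding-based definitions consistent with the figure; the step size is arbitrary:

```python
import numpy as np

DELTA = 0.25

def q_mid_tread(x):
    # Quantum level at 0; decision thresholds at odd multiples of DELTA/2.
    return DELTA * np.floor(x / DELTA + 0.5)

def q_mid_rise(x):
    # Discontinuity at x = 0; output levels at odd multiples of DELTA/2.
    return DELTA * (np.floor(x / DELTA) + 0.5)

x = np.linspace(-1, 1, 2001)
for q in (q_mid_tread, q_mid_rise):
    # Both error functions E(x) = Q(x) - x are bounded by half a step.
    assert np.max(np.abs(q(x) - x)) <= DELTA / 2 + 1e-12
# Mid-tread maps small inputs to exactly zero (tolerating offset and noise) ...
assert q_mid_tread(np.array([0.1]))[0] == 0.0
# ... while mid-rise never outputs zero.
assert np.all(q_mid_rise(x) != 0.0)
```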

Figure 17.2c and d shows the respective error functions E(x). They are both odd periodic sawtooth functions of period Δ. When x is a random variable that is uniformly distributed over a range of a multiple of Δ in length, E(x) is known to be uniformly distributed between −Δ/2 and Δ/2 [3]. In this case, E(x) yields the classical variance value of Δ^2/12.

When the above statistical assumption on the input is not satisfied, one needs to resort to a deterministic analysis of the error function E(x). As a periodic function, E(x) can be expanded in a Fourier series. With mid-tread quantizers, the expansion is [4,5]

    E(x) = Δ ∑_{k=1}^{∞} (−1)^k sin(2kπx/Δ) / (kπ).        (17.6)

By removing (−1)^k from this equation, one obtains the series corresponding to mid-rise quantizers. In the following discussion, however, we will focus on the mid-tread characteristic and choose by default the error expansion of (17.6).

17.2.2 Single-Tone Input

We now analyze the quantization error signal when x(t) is a single-tone (sinusoidal) input. Figure 17.3 shows both the quantized output xq(t) = Q(x(t)) and the quantization error signal e(t) = E(x(t)) for a 6-bit mid-tread quantizer with a full-scale single-tone input. As illustrated in part c of this figure, Pan and Abidi [6] divided the analysis of e(t) into three different regions:

1. The bell-like pulses formed by the quantization of the sine wave around its peaks;
2. The sawtooth patterns arising from the quantization of the sine wave around its zero-crossings, where it is close to an ideal ramp;
3. The transition parts between the above two regions.

Although the amplitude values of a sinusoid are not uniformly distributed [7] (maximal density values being obtained at the peak values of the sinusoid, as expected from Figure 17.3c), the probability distribution of e(t) is still modeled as uniform in [−Δ/2, Δ/2] [3]. In the following, "power", denoted by P, refers to a mean-square value. With a full-scale sinusoidal input, the total error power of Δ^2/12 resulting from this model appears to be satisfactorily accurate in practice. Given the power of a full-scale sinusoidal input of A_FS^2/2, the signal-to-error ratio (SER) of the quantized output is

    SER = Psignal / Perror = (A_FS^2/2) / (Δ^2/12).

With an N-bit mid-tread quantizer, A_FS = (2^(N−1) − 1/2)Δ. Thus, the SER in decibels yields the expression

    SER_dB = 10 log10( ( [(2^(N−1) − 1/2)Δ]^2 / 2 ) / (Δ^2/12) ) ≈ 6.02 N + 1.76 dB,

where the approximation assumes 2^N ≫ 1.
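The 6.02N + 1.76 dB rule and the Δ²/12 error power are easy to reproduce numerically. A sketch with a 6-bit mid-tread quantizer and a densely sampled full-scale tone (dense uniform sampling stands in for continuous time; not taken from the chapter):

```python
import numpy as np

N = 6
delta = 1.0
a_fs = (2 ** (N - 1) - 0.5) * delta            # single-ended full scale

t = np.linspace(0, 1, 1 << 20, endpoint=False)
x = a_fs * np.sin(2 * np.pi * t + 0.3)         # full-scale single tone
e = delta * np.round(x / delta) - x            # mid-tread quantization error

ser_db = 10 * np.log10(np.mean(x ** 2) / np.mean(e ** 2))
# Measured error power is close to the uniform-model value delta^2/12 ...
assert abs(np.mean(e ** 2) - delta ** 2 / 12) < 0.1 * delta ** 2 / 12
# ... and the SER lands near 6.02*N + 1.76 dB.
assert abs(ser_db - (6.02 * N + 1.76)) < 1.0
```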
[Plots over one input period t/T ∈ [0, 1]: (a) the one-tone input x(t); (b) the quantized output xq(t); (c) the normalized quantization error e(t)/Δ, with the bell-shaped and sawtooth regions marked.]

FIGURE 17.3
A quantized sinusoid waveform and its quantization error. The quantizer has a 6-bit resolution. (a) A full-scale sinusoidal input; (b) the quan-
tized output waveform (the piecewise-constant waveform is more clearly shown in the zoomed-in plot.); and (c) the quantization error signal.
Its amplitude is normalized to the quantization step Δ.

[Plot: error power relative to signal (dB) versus normalized frequency f/fin on a logarithmic scale; a roughly flat envelope from the bell-shaped waveform up to fSAW, then a −20 dB/decade roll-off.]

FIGURE 17.4
Power spectrum of the quantization error in Figure 17.3c relative to signal power. The error signal is obtained by quantizing a full-scale
sinusoidal signal with a 6-bit mid-tread quantizer. The x-axis shows the frequency normalized to f in and is plotted on a logarithmic scale.
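The spectrum of Figure 17.4 can be reproduced in a few lines. In particular, the harmonic-only, odd-order structure of the error of a quantized sine, discussed in the text, follows from half-wave symmetry and is easy to verify (a sketch; dense uniform sampling over exactly one period stands in for continuous time):

```python
import numpy as np

# Hypothetical 6-bit mid-tread quantizer driven by a full-scale sine.
N_BITS, DELTA = 6, 1.0
A_FS = (2 ** (N_BITS - 1) - 0.5) * DELTA
M = 1 << 16                                   # samples over one input period
t = np.arange(M) / M
x = A_FS * np.sin(2 * np.pi * t + 0.3)        # arbitrary phase
e = DELTA * np.round(x / DELTA) - x           # quantization error e(t)

spec = np.abs(np.fft.rfft(e)) / M             # bin k = k-th harmonic of f_in
odd_power = np.sum(spec[1::2] ** 2)
even_power = np.sum(spec[2::2] ** 2)          # excludes the DC bin
# Half-wave symmetry e(t + T/2) = -e(t) kills DC and all even harmonics.
assert even_power < 1e-12 * odd_power
```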

In the frequency domain, the bell-like pulses and the sawtooth waveform have very different contributions to the spectrum. Figure 17.4 shows the spectrum of the error signal e(t) in Figure 17.3c. Since e(t) is periodic with the same period as the input, its spectrum is composed of harmonics, with the first harmonic at the input frequency. Since the quantization error function E(x) is odd-symmetric around the origin and the input signal x(t) has so-called half-wave symmetry, the corresponding error signal e(t) also has half-wave symmetry, and thus the resulting discrete spectrum contains only odd-order harmonics. The bell-like pulses are periodic in the time

domain at the input frequency and contribute mostly low-order harmonics. In [8], it is observed that small variations in the amplitude of the input may cause large changes in the bell-like pulses. Thus, the power of these low-order harmonics is very sensitive to the input amplitude. The fast-varying sawtooth part contributes mostly to the high-frequency part of the error power. Let the minimum sawtooth duration TSAW be the shortest time for the input to cross a quantization interval, and define the sawtooth frequency fSAW as its inverse. For a bandlimited input, TSAW is approximately Δ/|dx/dt|max, which implies that

    fSAW = |dx/dt|max / Δ.        (17.7)

For an input of the form x(t) = Ain sin(2π fin t),

    fSAW = 2π Ain fin / Δ.        (17.8)

With a full-scale input, Ain = (2^(N−1) − 1/2)Δ and hence

    fSAW = 2π (2^(N−1) − 1/2) fin ≈ 2^N π fin.        (17.9)

The last approximation assumes 2^N ≫ 1, which is satisfied in most practical cases. In the given example, N = 6. This leads to fSAW = 201 fIN (indicated in Figure 17.4). The spectrum beyond fSAW contains the harmonics of the fundamental components below fSAW. A qualitative observation is in order. In Figure 17.4, one can observe that the amplitude of the components up to fSAW can roughly be thought to have a constant envelope, while the amplitude of the components beyond fSAW can be seen to drop off, on average, with a −20 dB/decade slope (the reason for the value of this slope will be discussed below).

Assume the single-tone input has a full-scale amplitude A_FS, an arbitrary input frequency fin and an arbitrary phase shift φ, that is:

    x(t) = A_FS sin(2π fin t + φ).        (17.10)

By substituting (17.10) into (17.6), the quantization error becomes:

    e(t) = Δ ∑_{k=1}^{∞} ( (−1)^k / (πk) ) sin( (2πkA_FS/Δ) sin(2π fin t + φ) ).        (17.11)

A complete Fourier series expansion of e(t) can then be obtained by using Bessel functions and the Jacobi–Anger expansion [5,9,10]. This process is involved and thus is not included in this chapter. We only highlight some important results. The term of (17.11) corresponding to k = 1 is

    e1(t) = −(2Δ/π) ∑_{m=1, m odd}^{∞} Jm(2πA_FS/Δ) sin( m(2π fin t + φ) ),

where Jm(.) is the Bessel function of the first kind of order m. This is a sum of all the odd-order harmonics of the input frequency with amplitudes (2Δ/π) Jm(2πA_FS/Δ). According to [9],

    Jm(β) ≈ 0,   m > β + 1.        (17.12)

Thus, Jm(2πA_FS/Δ) = Jm( 2π(2^(N−1) − 1/2) ) is negligible for m > 2^N π. In other words, all the significant tones of e1(t) have frequencies below 2^N π fIN, which is exactly the fSAW defined in Equation 17.9.

The kth-order term of e(t) in (17.11) can be expressed as

    ek(t) = (−1)^k (2Δ/(kπ)) ∑_{m=1, m odd}^{∞} Jm(2kπA_FS/Δ) sin( m(2π fin t + φ) ).

It has the same form as e1(t) but with a factor of 1/k in front of the summation and a factor of k inside the argument of the Bessel function. The (−1)^k in front does not affect its amplitude. Using again (17.12), we find the significant tones of ek(t) are present only up to k fSAW. The frequency range [(k−1) fSAW, k fSAW], which we call the kth fSAW band, contains the significant tones from the kth and higher order terms only, that is, those resulting from ∑_{i=k}^{∞} ei(t). However, because of the scaling factor 1/k, the power of the tones within each kth fSAW band is mostly determined by the term of the lowest order, ek(t). They are approximately k^2 times lower than the tones within the first fSAW band (i.e., [0, fSAW]). This result is better observed on the plot of Figure 17.5, plotted using a linear frequency axis. A staircase-like spectrum is observed. Each step has the same width of fSAW. Also, the average power in each kth fSAW band is k^2 lower than the tones in the first fSAW band. This is consistent with the observed slope of −20 dB/decade indicated on the logarithmic scale of Figure 17.4. A zoomed-in plot around fin is provided to more clearly show the discrete feature of the spectrum. Only harmonics are present, while no component exists at any in-between frequency.

17.2.2.1 Power within [0, fSAW]

It was observed in [8] that the error power below fSAW dominates the entire quantization error power. The reason is that the tones beyond fSAW are coming from the higher order terms of Equation 17.11 and thus their
[Plot: error power relative to signal (dB) versus normalized frequency f/fin on a linear scale, showing a staircase-like spectrum with steps over the bands [0, fSAW], [fSAW, 2fSAW], [2fSAW, 3fSAW], [3fSAW, 4fSAW], and [4fSAW, 5fSAW]; an inset zooms in on [0, 20 fin].]

FIGURE 17.5
Power spectrum of the quantization error in Figure 17.3c relative to signal power. The error signal is obtained by quantizing a full-scale sinu-
soidal signal with a 6-bit mid-tread quantizer. The x-axis shows the frequency normalized to f in and is plotted on a linear scale. A zoomed-in
plot shows a narrow spectrum around f in (i.e., [0, 20 f in ]).
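The numbers quoted for the 6-bit example follow directly from (17.7) through (17.9); a quick arithmetic check (step size and input frequency normalized to 1):

```python
import math

N, delta, f_in = 6, 1.0, 1.0
a_in = (2 ** (N - 1) - 0.5) * delta        # full-scale amplitude
max_slope = 2 * math.pi * a_in * f_in      # max |dx/dt| of the sine
f_saw = max_slope / delta                  # (17.7), which gives (17.8)

# (17.9): f_saw = 2*pi*(2^(N-1) - 1/2)*f_in ~ 2^N * pi * f_in ~ 201 f_in
assert abs(f_saw - 2 * math.pi * (2 ** (N - 1) - 0.5) * f_in) < 1e-9
assert abs(f_saw / (2 ** N * math.pi * f_in) - 1) < 0.02

# The minimum sawtooth duration T_SAW is its inverse.
t_saw = 1 / f_saw
assert abs(t_saw - delta / max_slope) < 1e-12
```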

power is scaled down by 1/k^2. From (17.11), it can be found numerically that 71% of the total error power lies in the band [0, fSAW]. It is also interesting to estimate this number using the staircase model of Figure 17.5. According to this rough model, the power spectral density in the kth fSAW band is assumed flat and proportional to 1/k^2. As the error tones are uniformly spaced (with a distance of 2 fIN), the error power within the kth fSAW band decreases in 1/k^2 with k. The ratio of power that lies in [0, fSAW] is then 1 / ∑_{k=1}^{∞} k^(−2) = 1/(π^2/6) ≈ 61%, which can be compared to the numerical result above.

17.2.3 Two-Tone Input

Assume now that the input is a two-tone signal x(t) = Ain1 sin(2π fin1 t) + Ain2 sin(2π fin2 t) (assuming zero phase in the two). The sawtooth frequency fSAW can still be derived from (17.7) and yields

    fSAW = (2π Ain1 fin1 + 2π Ain2 fin2) / Δ.

By defining the weighted-averaged frequency

    fin,avg = (Ain1 fin1 + Ain2 fin2) / A_FS,

one obtains the equivalent expression

    fSAW = 2π A_FS fin,avg / Δ,

which coincides with fSAW of a full-scale single-tone input of frequency fin,avg according to (17.8). Figure 17.6a plots the spectrum of the quantization error e(t) resulting from this input in the case where Ain1 = Ain2 = A_FS/2 and N = 6. The spectrum now contains not only harmonics but also intermodulation products. The tones are uniformly spaced by a distance of Δf = fin2 − fin1 in frequency. Given the bit resolution N = 6, fSAW = 2π(2^(N−1) − 1/2) fin,avg ≈ 2^N π fin,avg = 201 fin,avg. This value is consistent with the value of fSAW obtained from the graphical method by finding the transition frequency between the flat part and the descending part of the spectrum. Such agreement will also be observed with more complicated inputs as will be seen in Section 17.2.4. It can be found numerically that the power below the maximum sawtooth frequency in this two-tone example is 78% of the total power of quantization error.

The total error power approximately equals Δ^2/12, which results in an SER = 34.8 dB (i.e., 10 log10( (Ain1^2/2 + Ain2^2/2) / (Δ^2/12) )). However, if a narrow baseband is considered, only a few harmonics and intermodulation components fall in band and the resulting in-band SER can be much higher. Figure 17.6b shows an example baseband, which is [0, 2 fin,avg]. A limited number of tones are evenly distributed within the baseband, with no component existing at any other frequency. The in-band SER can be found by summing up the relative power of all these tones, which is 51 dB. The choice of baseband is rather arbitrary in this two-tone case, where the input signal does not occupy a full band. We can choose a baseband with almost any width and find the in-band SER correspondingly. Figure 17.6c shows the in-band quantization error power as a function of the ratio of the baseband's upper band-edge frequency to fin,avg. As the band-edge moves to high frequencies, more and more harmonics and intermodulation tones fall into the baseband and the resulting in-band error
[Plots: (a) error power relative to signal (dB) versus f/fin,avg on a logarithmic scale, flat up to fSAW and then rolling off at −20 dB/decade; (b) the baseband [0, 2 fin,avg] on a linear scale, with the two input tones at fin1 and fin2 marked; (c) in-band quantization error power versus the upper band-edge to fin,avg ratio, rising toward the total-error-power line with a knee near fSAW/fin,avg.]

FIGURE 17.6
Quantization error relative to signal power in the two-tone test. The quantizer has a 6-bit resolution. Full load of the quantizer is achieved by
setting the amplitudes of the two input tones as half of A FS . (a) A wide spectrum of the quantization error plotted on a logarithmic scale; the
frequency axis is normalized to f in,avg. (b) A narrow baseband spectrum of the quantization error on linear scale; the frequency axis is normalized
to f in,avg and shows the ranges from 0 to 2. The locations of the two input tones are shown. (c) In-band quantization error power relative to signal
power, as a function of the ratio of upper band-edge frequency to f in,avg.
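The staircase-model ratio quoted earlier for the single-tone case, about 61% of the error power lying in [0, fSAW], is just the reciprocal of the Basel sum; a one-line check:

```python
import math

# 1 / sum_{k>=1} k^(-2) = 6 / pi^2 ~ 0.608, i.e. about 61%
ratio = 1.0 / sum(1.0 / k ** 2 for k in range(1, 100_000))
assert abs(ratio - 6 / math.pi ** 2) < 1e-4
assert abs(ratio - 0.61) < 0.005
```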

power increases. Although the curve is monotonically increasing, its changing rate becomes very slow once the band-edge is larger than fSAW. This is because the error tones beyond fSAW contain low power. The horizontal line on the top shows the total quantization error power relative to the signal power. It is the ultimate value of the relative error power when the baseband is infinitely wide.

17.2.4 Bandlimited Gaussian Inputs

We now consider a more complex class of signals, namely bandlimited signals with a Gaussian amplitude probability density function. In [11,12], it is found that the probability for such a random input to overload the quantizer is negligible (less than 1 in 10,000) as long as its root-mean-square amplitude A_RMS is four times smaller than the quantizer's full-scale range A_FS. The spectrum of its quantization error has similar features to one-tone and two-tone inputs, that is, (1) it has a corner frequency (which we will denote by fSAW,Gaussian); (2) beyond fSAW,Gaussian, the spectrum follows a −20 dB/decade slope. fSAW,Gaussian can be calculated by averaging the fSAW contributed by each input tone [8], that is,

    fSAW,Gaussian = 2π A_RMS f̄in / Δ,

where f̄in is the average input frequency. Calling fBW the maximum frequency of the input, we obtain fSAW,Gaussian = π A_RMS fBW / Δ.

Figure 17.7 shows an example spectrum of the quantization error (solid line). Contrary to the one-tone and two-tone cases, the spectrum is continuous. A_RMS is chosen to be A_FS/4 and thus fSAW,Gaussian = π 2^(N−3) fBW. For N = 6 in this case, fSAW,Gaussian ≈ 25 fBW. This is consistent with the plot of the quantization error. Beyond fSAW,Gaussian, a −20 dB/decade slope is apparent. Integrating the error power over the entire frequency axis, we find the SER equals 29 dB. In the same figure, the spectrum of the bandlimited Gaussian input is also plotted (dotted line). Its power spectrum is white up to fBW. Beyond that, its power drops significantly (due to filtering). In this bandlimited Gaussian case, the signal band is well defined as [0, fBW]. The bandwidth of the quantization error is much wider than that of the input signal.
[Plot: power relative to Psignal (dB) versus f/fBW on a logarithmic scale; the input spectrum (dotted) is white up to fBW and then drops sharply, while the quantization error spectrum (solid) stays roughly flat up to fSAW,Gaussian and then falls at −20 dB/decade.]

FIGURE 17.7
Power spectra in the bandlimited Gaussian input case. The root-mean-square amplitude A RMS is set as one-fourth of A FS . The quantizer has a
6-bit resolution. The x-axis shows the frequency normalized to f BW and is plotted on a logarithmic scale. The dotted line is the power spectrum
of input signal. It is generated by low-pass filtering a white Gaussian noise with a cutoff frequency of 500 Hz. The solid line shows the power
spectrum of the quantization error. Both power values are normalized to Psignal , the total power of the input signal.

Integrating the error power within the signal band, we find the in-band SER to be 45 dB, which is 16 dB higher than the SER calculated over the entire frequency axis.

To summarize Section 17.2, fSAW plays an important role in the spectral characterization of any continuous-time quantized signals which are originally bandlimited. The value of this quantity can be obtained from Equation 17.7. Within [0, fSAW], the error power spectral density is almost flat. Beyond fSAW, the spectral density drops with a slope of −20 dB/decade. The frequency range [0, fSAW] contains the majority of the total power of the quantization error [8,13], which is roughly in the range of 70–80%.


17.3 Uniform Sampling of Quantized Continuous-Time Signals

17.3.1 Context and Issues of Uniform Sampling

While continuous-time amplitude-quantized signals are free from aliasing, the lengths of the steps in their piecewise-constant waveform vary, and this prevents their simple integration with standard systems operating under a fixed clock rate. To resolve this issue, a uniform sampling of the continuous-time outputs is necessary. This inevitably brings back the issue of aliasing. In fact, because continuous-time quantization is an instantaneous operation, permuting the order of amplitude quantization and time sampling gives the same result [14]. Indeed, the sample of the amplitude-quantized signal xq(t) = Q(x(t)) at an instant tk is xq(tk) = Q(x(tk)), which is equal to the quantized version of the sample of x(t) at tk. This makes the system equivalent to a conventional ADC involving sampling and quantization, because the two operations can be interchanged. In spite of this system equivalence, there are still two strong reasons for using continuous-time systems. (1) The major part of the hardware in a continuous-time system is event-driven and thus dissipates power more efficiently compared to clocked systems. (2) Using oversampling to improve the SER is straightforward with continuous-time systems. Only the sampling clock frequency of the output synchronizer needs to be increased. In conventional systems, on the contrary, oversampling requires an increase of the clock rate of the entire signal processing chain.

17.3.2 Analysis of Aliased Quantization Error—A Staircase Model

Next, we investigate how aliasing degrades signal accuracy when a continuous-time quantized signal is sampled. It is usually believed that in-band
Spectral Analysis of Continuous-Time ADC and DSP 417

quantization error can be arbitrarily reduced by increasing the sampling frequency. We will see that there is a limit to this error reduction.

Assume that a bandlimited input x(t) of maximum frequency fBW is quantized and then uniformly sampled at a frequency fs ≥ 2fBW. The sampled signal includes an error component due to quantization, equal to

es(t) = ∑_{k=−∞}^{∞} e(kTs) δ(t − kTs),

where e(t) is the continuous-time quantization error signal and Ts = 1/fs. We wish to evaluate the power of es(t) that lies in the input baseband [0, fBW]. Denoting by E(f) the Fourier transform of e(t), we have

Es(f) = E(f) ∗ ∑_{n=−∞}^{∞} δ(f − n fs). (17.13)

As a result, the in-band error power is the sum of the error power in a bandwidth of ±fBW around each harmonic of the sampling frequency fs [8]. This convolution operation is visually explained in Figure 17.8. As was explained in Section 17.2, |E(f)|² along the frequency axis can be reasonably modeled as a staircase function, whose steps correspond to the fSAW bands of successive orders. The heights of the steps represent the power spectral density, and decrease as 1/k². The dark areas represent the frequency bands of width 2fBW centered around the multiples of fs (excluding the zero multiple). As (17.13) indicates, after e(t) is sampled at frequency fs, all the dark parts are shifted into the baseband [−fBW, fBW] and added to the power originally located within it.

Assuming the aliased components are independent, we can estimate the total aliased error power Pal from the double-sided spectra by the following equation:

Pal ≈ 2 ∑_{k=1}^{∞} Nk Pbb / k², (17.14)

where Pbb is the original error power within the baseband before aliasing and Nk is the number of positive multiples of fs that fall in the kth fSAW band. The factor of 2 is due to the two-sided spectrum (in signed frequency, the kth fSAW band includes the interval [(k−1) fSAW, k fSAW] and its negative counterpart). The total quantization error power in es(t) is

Ptotal = Pal + Pbb. (17.15)

In (17.14), Pbb is solely dependent on the input and the quantizer. Meanwhile, Nk depends on the ratio fs/fSAW. When fs/fSAW ≪ 1, the aliased error power Pal is dominated by the part coming from the first fSAW band, where the power density is the highest. Thus, Equation 17.15 can be reduced to

Ptotal ≈ 2N1 Pbb + Pbb.

With N1 ≫ 1 (which is true in most oversampling scenarios), it can be further reduced to

Ptotal ≈ 2N1 Pbb.

The worst value of SERdB, denoted by SERdB,worst, is achieved at Nyquist sampling, where the entire error power is aliased into the baseband. As long as the condition fs/fSAW ≪ 1 is satisfied, doubling fs halves N1 and thus decreases Ptotal approximately by a factor of 2. A 3 dB improvement in SERdB is expected.

When fs/fSAW ≫ 1, Pal results from high-order fSAW bands, and is consequently negligible compared to Pbb. In this case, Equation 17.15 reduces to

Ptotal ≈ Pbb.

The error power becomes independent of the sampling frequency and SERdB reaches its highest value, denoted by SERdB,best. At infinite oversampling, the result converges to the continuous-time quantization error power.
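Equation 17.14 is easy to evaluate numerically. The following sketch (illustrative code written for this chapter's notation, not the authors' simulation; fSAW and Pbb are normalized to 1) counts the multiples Nk per fSAW band and reproduces both limiting behaviors: Pal halves for each doubling of fs when fs ≪ fSAW, and becomes negligible compared to Pbb when fs ≫ fSAW.

```python
import math

def aliased_error_power(fs_over_fsaw, pbb=1.0, kmax=10000):
    """Evaluate Pal ~= 2 * sum_k Nk * Pbb / k**2 (Equation 17.14).

    Nk is the number of positive multiples of fs that fall in the kth
    fSAW band ((k-1)*fSAW, k*fSAW], with fSAW normalized to 1.
    """
    fs = fs_over_fsaw
    pal = 0.0
    for k in range(1, kmax + 1):
        nk = math.floor(k / fs) - math.floor((k - 1) / fs)
        pal += 2.0 * nk * pbb / k**2
    return pal

# fs << fSAW: doubling fs roughly halves Pal (3 dB per octave of SER).
print(aliased_error_power(0.01) / aliased_error_power(0.02))  # close to 2

# fs >> fSAW: Pal is negligible next to Pbb = 1, so Ptotal ~= Pbb.
print(aliased_error_power(100.0))
```

The sum is truncated at kmax; the 1/k² weighting makes the neglected tail insignificant.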

FIGURE 17.8
Staircase-modeled error spectrum before aliasing. The height represents the power spectral density and is normalized to the value of the first fSAW band. The dark parts represent the bands which will be shifted into the baseband after sampling. In this example, fs is assumed to be a fraction of fSAW; thus, each fSAW band contains multiple dark parts.
418 Event-Based Control and Signal Processing

FIGURE 17.9
Theoretical asymptotes and simulation results of SERdB versus fs/fSAW using the staircase model. In both cases, a 6-bit mid-tread quantizer is used. (a) One-tone test with fBW = 20 fin; (b) two-tone test with fBW = 20 favg,in.
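The 6-bit quantization underlying Figure 17.9 can be sanity-checked with a few lines of code (a Python sketch, not the chapter's MATLAB experiments; the dense grid merely approximates continuous time): for a full-scale sine, the total quantization error power is close to Δ²/12, giving an overall SER near the textbook value 6.02N + 1.76 ≈ 37.9 dB, the same order as the SER levels plotted in the figure.

```python
import math

N = 6                              # quantizer resolution in bits
delta = 2.0 / (2**N - 1)           # mid-tread step for amplitudes in [-1, 1]

samples = 1 << 16                  # dense grid approximating continuous time
x = [math.sin(2 * math.pi * k / samples) for k in range(samples)]
xq = [delta * round(v / delta) for v in x]   # mid-tread quantization
err_power = sum((a - b) ** 2 for a, b in zip(x, xq)) / samples

ser_db = 10 * math.log10(0.5 / err_power)    # full-scale sine power is 1/2
print(err_power / (delta**2 / 12), ser_db)
```

The ratio printed first stays near 1, confirming the flat-error-power model that the staircase spectrum redistributes over frequency.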

Ptotal does not go to zero, as one would have assumed by applying the "3 dB reduction for each doubling of fs" rule.

As this analysis shows, the plot of SERdB versus fs/fSAW has two asymptotes. As shown in Figure 17.9, for fs/fSAW ≪ 1 the plot asymptotically approaches a straight line with a slope of 10 dB/decade (i.e., 3 dB/octave), starting from the lowest point of SERdB,worst. For fs/fSAW ≫ 1, SERdB approaches SERdB,best.

Figure 17.9 also shows the results of MATLAB experiments to verify the staircase model. One-tone and two-tone inputs are used with a 6-bit quantizer. The measured points satisfactorily reproduce the asymptotic trends in the regions fs/fSAW ≪ 1 and fs/fSAW ≫ 1, with some local deviations that can be attributed to the following factors: the model assumes a flat spectrum within each fSAW band, which is not exactly true; the assumption implied by the power additivity in (17.14), that is, that all the aliased components are independent, is not rigorously satisfied; and Equation 17.14 assumes that each band [n fs − fBW, n fs + fBW] entirely falls into one fSAW band of order k. Figure 17.8 shows that this latter assumption fails when n fs is close to the boundaries of an fSAW band (e.g., n = 3).

17.4 CT-ADC with Time Coding

The event times tk in CT-ADCs are nonuniformly spaced and can have arbitrary values. As mentioned in [1], time coding (a term adopted from [15]) is necessary when one wants to store the nonuniform samples.
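A minimal sketch of time coding (an illustrative model only, not the specific scheme of [15]; the event instants and the 1 MHz coding clock are invented values): each interval Δk = tk − tk−1 is stored as an integer number of clock cycles, so every interval is known to within half a clock period, and the reconstruction error can accumulate over successive events.

```python
def time_code(event_times, fc):
    """Code each inter-event interval as an integer number of clock cycles."""
    return [round((b - a) * fc)
            for a, b in zip(event_times, event_times[1:])]

def time_decode(t0, codes, fc):
    """Rebuild event times from the first event and the coded intervals."""
    times = [t0]
    for c in codes:
        times.append(times[-1] + c / fc)
    return times

events = [0.0, 1.234e-4, 3.021e-4, 7.777e-4]  # invented event instants (s)
fc = 1.0e6                                    # invented 1 MHz coding clock
rebuilt = time_decode(events[0], time_code(events, fc), fc)
errors = [abs(a - b) for a, b in zip(events, rebuilt)]
print(max(errors))   # stays below n/(2*fc) as interval errors accumulate
```

How high fc must be is exactly the question addressed next: the time-coded output behaves like a clocked ADC at the clock rate.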


FIGURE 17.10
A continuous-time signal processing system composed of a CT-ADC and a CT DSP.
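The additive-error view of Figure 17.10 can be checked with a toy example (the tap values h and tap delay TD below are arbitrary choices, not from the chapter): filtering xq = x + e splits exactly into a signal term and an error term, as in Equation 17.16, and the frequency response of a filter with uniform tap delay TD repeats with period 1/TD.

```python
import cmath

h = [0.25, 0.5, 0.25]        # toy FIR taps (arbitrary)
TD = 1.0e-3                  # uniform tap delay in seconds (arbitrary)

def H(f):
    """Frequency response of the uniform-tap-delay FIR filter."""
    return sum(hk * cmath.exp(-2j * cmath.pi * f * k * TD)
               for k, hk in enumerate(h))

# Periodic frequency response: H(f) repeats with period 1/TD.
print(abs(H(123.4) - H(123.4 + 1.0 / TD)))    # ~0 up to rounding

def fir(signal):
    """Direct-form FIR filtering of a finite sequence."""
    return [sum(h[k] * signal[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(signal))]

x = [0.3, -0.7, 0.2, 0.9, -0.1]               # stand-in input samples
e = [0.01, -0.02, 0.005, 0.0, 0.015]          # stand-in quantization error
lhs = fir([a + b for a, b in zip(x, e)])      # filter the quantized signal
rhs = [a + b for a, b in zip(fir(x), fir(e))] # filtered signal + filtered error
print(max(abs(a - b) for a, b in zip(lhs, rhs)))  # ~0: linearity
```

The periodicity is what lets the out-of-band quantization error reappear in every 1/TD image band, which is why the output must eventually be sampled faster than fSAW.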

A high-frequency clock can be used to quantize the time intervals between any two successive events, Δk = tk − tk−1. An interesting question is how high the clock frequency should be. To answer this question, we note that the output of such a CT-ADC with time coding is equivalent to a clocked ADC with the same sampling frequency, and thus the staircase model in Figure 17.8 applies. According to the previous result that the aliased error power becomes negligible when fs/fSAW ≫ 1, the time discretization needs to be sufficiently fine so that the in-band SER is comparable to the continuous-time case, with no sampling. Thus, the sampling frequency has to be several times higher than fSAW to assure the highest in-band SER.

17.5 Error Analysis of CT DSP

The error at the output of a continuous-time DSP (CT DSP) is analyzed next. Figure 17.10 shows a signal processing system composed of a CT-ADC and a CT DSP. Mathematically, the CT-ADC (i.e., the quantizer) can be represented as a summer with an additional input e(t), which is the quantization error signal. The output of the quantizer xq(t) can then be expressed as

xq(t) = x(t) + e(t).

Since the CT DSP is a linear time-invariant (LTI) system, both the input and the error are processed by the same transfer function H(f) of the DSP. The Fourier transform of the output of the CT DSP, Y(f), can be expressed as

Y(f) = X(f) H(f) + E(f) H(f), (17.16)

where X(f) and E(f) are the Fourier transforms of x(t) and e(t), respectively. The error at the output of a CT DSP is the second term in the equation, whose power spectrum can be represented as |E(f)|² |H(f)|².

As presented in [1], a CT DSP consists of an FIR or an IIR filter with a uniform tap delay TD. This results in a periodic frequency response with a period of 1/TD. While the bandwidth of X(f) is less than half this period, |E(f)|² remains dominant in the whole interval [0, fSAW], which typically covers several 1/TD periods. Thus, the power spectrum |E(f)|² |H(f)|² of the filtered quantization error signal periodically yields regions of high values at least several times outside the input baseband. When the output of the CT DSP is interfaced with standard (i.e., synchronous) data processing blocks, uniform sampling is necessary, synchronously with the clock of those blocks. To obtain an in-band SER comparable with the nonsampling case, a sampling frequency at least higher than fSAW should be used to prevent any significant out-of-band error power from being aliased into the baseband.

Acknowledgment

This work was supported by National Science Foundation (NSF) Grants CCF-07-01766, CCF-0964606, and CCF-1419949.

Bibliography

[1] Y. Tsividis, M. Kurchuk, P. Martinez-Nuevo, S. Nowick, S. Patil, B. Schell, and C. Vezyrtzis. Event-based data acquisition and digital signal processing in continuous time. In Event-Based Control and Signal Processing (M. Miskowicz, Ed.), CRC Press, Boca Raton, FL, 2015, pp. 253–278.

[2] A. Gersho and R. M. Gray. Vector Quantization and Signal Compression. Kluwer Academic Publishers, Boston, 1992.

[3] S. Haykin. Communication Systems. Wiley, New York, 2001.

[4] H. E. Rowe. Signals and Noise in Communication Systems. van Nostrand, New York, 1965, p. 314.

[5] N. M. Blachman. The intermodulation and distortion due to quantization of sinusoids. IEEE Transactions on Acoustics, Speech and Signal Processing, 33:1417–1426, 1985.

[6] H. Pan and A. Abidi. Spectral spurs due to quantization in Nyquist ADCs. IEEE Transactions on Circuits and Systems I: Regular Papers, 51:1422–1439, 2004.

[7] T. Claasen and A. Jongepier. Model for the power spectral density of quantization noise. IEEE Transactions on Acoustics, Speech and Signal Processing, 29(4):914–917, 1981.

[8] M. Kurchuk. Signal encoding and digital signal processing in continuous time. PhD thesis, Columbia University, 2011.

[9] P. Z. Peebles. Communication System Principles. Addison-Wesley, Advanced Book Program, MA, 1976.

[10] V. Balasubramanian, A. Heragu, and C. Enz. Analysis of ultralow-power asynchronous ADCs. In Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 3593–3596, May 2010.

[11] W. R. Bennett. Spectra of quantized signals. Bell System Technical Journal, 27(3):446–472, 1948.

[12] N. S. Jayant and P. Noll. Digital Coding of Waveforms: Principles and Applications to Speech and Video. Prentice Hall, Englewood Cliffs, NJ, pp. 115–251, 1984.

[13] D. Hand and M. S. Chen. A non-uniform sampling ADC architecture with embedded alias-free asynchronous filter. In IEEE Global Communications Conference (GLOBECOM), 2012.

[14] Y. Tsividis. Digital signal processing in continuous time: A possibility for avoiding aliasing and reducing quantization error. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 2, pp. ii-589–ii-592, 2004.

[15] J. Foster and T.-K. Wang. Speech coding using time code modulation. In Proceedings of IEEE Southeastcon '91, pp. 861–863, 1991.
18
Concepts for Hardware-Efficient Implementation of
Continuous-Time Digital Signal Processing

Dieter Brückmann
University of Wuppertal
Wuppertal, Germany

Karsten Konrad
University of Wuppertal
Wuppertal, Germany

Thomas Werthwein
University of Wuppertal
Wuppertal, Germany

CONTENTS
18.1 Basics of Digital Signal Processing in Continuous Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
18.2 CT Delay Element Concept with Reduced Hardware Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
18.2.1 Functional Description of the New Base Cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
18.2.2 Required Chip Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
18.2.3 Reduction of Power Consumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
18.2.4 Optimum Number of Registers per Base Cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
18.2.5 Performance Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
18.3 Asynchronous Sigma Delta Modulation for CT DSP Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
18.3.1 Asynchronous Sigma–Delta Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
18.3.2 Optimization of a First-Order Asynchronous Sigma–Delta Modulator . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
18.3.3 Performance and Hardware Comparison of ASDM with DM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
18.4 Optimized Filter Structures for Event-Driven Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
18.4.1 Continuous-Time Digital FIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
18.4.2 Optimized Continuous-Time Digital FIR-Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
18.4.3 Optimized Continuous-Time Recursive Low-Pass Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
18.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438

ABSTRACT Event-driven signal processing techniques have the potential for significant reduction of power consumption and improvement of performance. In particular, systems used for applications which deal with bursty or heavily changing traffic can profit from this special method of signal processing. Even though quite a number of implementations are already described in the literature, such techniques are, however, still in a research stage. Thus, further improvements must be made until this kind of signal processing has the level of maturity required for implementation in a commercial product. The most critical point is the realization of the time-continuous delay elements, which require much more hardware than their conventional sampled-data counterparts. Therefore, in this contribution several recently developed new concepts for delay element implementation will be described, which enable considerable hardware reduction. Furthermore, it will be shown that the hardware requirements can also be reduced by applying asynchronous sigma-delta modulation for quantization. Finally, filter structures are proposed which are especially well suited for event-driven signal processing. All described techniques enable considerable improvements with respect to a hardware-efficient implementation of CT signal processing.

18.1 Basics of Digital Signal Processing in Continuous Time

The basic idea behind continuous-time DSP systems is to perform signal processing only when the input signal has changed by a certain amount. This can, for example, be achieved by using a CT analog-to-digital converter (ADC) for quantization. This converter type generates a new digital output value only when a new quantization level is reached. Since no discrete-time (DT) sampling is performed in a CT system [1–6,27], there will be no aliasing, and quantization merely results in harmonic distortion. Thus, the signal-to-noise ratio (SNR) in the band of interest is better than that of the respective DT sampled-data system, provided that quantizers with the same resolution are used. Note that even though the total quantization noise is the same for both systems, the noise is spread over a wider frequency range by CT signal processing, whereas in a sampled-data system the total quantization noise is concentrated in the frequency range from zero to the sampling rate. Furthermore, the signal values generated by the CT-ADC contain the information about the time instants of signal changes and the actual digital data. In [4,5], it is proposed to compose the signal values to be processed of pulses at the time instants of signal changes and the respective digital data values. This pulse train can be used to trigger the subsequent CT circuitry [4,5] and to start signal processing.

The most important advantage of this kind of signal processing is the large potential for minimization of the power consumption. Since the analog input signal is only processed when it changes, no signal processing will be performed for constant input signals, and thus during these time intervals the dynamic power consumption is nearly zero. In DT systems, on the contrary, the dynamic power consumption is directly proportional to the sampling frequency fs of the system, which must be at least twice the maximum input frequency fin,max to comply with the Nyquist theorem. These distinct properties make CT signal processing interesting for a number of applications, especially for those that deal with bursty signals.

The time interval between two consecutive changes of the continuously processed signal xCT(t) is not predictable. Therefore, contrary to classical sampled-data systems, this time distance must also be preserved. A characteristic parameter of CT systems is the minimum time interval Tgran between two successive quantization level changes, also designated as the granularity time [6]. Tgran is determined by the resolution used for quantization and by the maximum frequency of the input signal. Assuming that a sinusoidal input signal x(t) with frequency fin,max and amplitude A is quantized by an N-bit CT mid-tread ADC, Tgran can be determined by the following relationship [6]:

Tgran = 1 / (2^(N−1) · 2πA fin,max). (18.1)

Key components of digital signal processing systems are arithmetic units with adders and multipliers, and delay elements. In sampled-data systems, the delay time TD in most cases is equal to the sampling period or a multiple of it. Therefore, the delay elements can be implemented by simple digital storage elements such as registers. CT systems, however, operate clockless; therefore, their arithmetic units must be implemented asynchronously. Asynchronous logic circuits are well known [7,26], and the respective circuitry can also be used for CT systems. In classical asynchronous systems, however, only the order of the events is preserved, whereas in CT systems the time distance between successive signal values must also be retained. This results in a more complex implementation of the respective delay circuits. A straightforward implementation of a CT delay element, as described in [5,6], consists of a cascade connection of base cells, each delaying the signal by Tgran. Depending on the total time delay TD to be realized and the granularity of the input signal, the required number ND1 of base cells can be quite large and is given by

ND1 = ⌈TD / Tgran⌉. (18.2)

Figure 18.1 shows a block diagram of such a delay element consisting of ND1 base cells. Each cell is composed of an analog delay circuit and a digital storage element or latch consisting of a flip-flop (FF). Each time a new input pulse arrives at the analog circuitry of the base cell, the respective digital data word is stored in the flip-flop. When the time period Tgran has expired, an output pulse is generated by the delay circuit and the digital data word is transferred to the next base cell. The latch can also be composed of several FFs in parallel, depending on the word length to be processed. The overall delay time TD to be implemented is determined by the system requirements. This can, for example, be the specification of a filter characteristic to be fulfilled.

As an example, let us consider an input signal quantized with a resolution of N = 10 bits and with a maximum frequency of fin,max = 4 kHz. Using (18.1) we obtain a value of Tgran = 77.7 ns for the granularity. Thus, for the realization of a CT delay element with TD = 62.5 μs at least 806 CT base cells are required, whereas in a DT system a single latch is sufficient. For a resolution of N = 8 bits, the number of required base cells reduces to 202.

Even though this large number is an obvious disadvantage with respect to the implementation costs, the CT delay element offers a new degree of freedom for


FIGURE 18.1
Classical CT delay element with cascaded base cells and input and output signals.

the filter design. Since the delay element usually is composed of a cascade of a large number of base cells, the delay time TD can be adjusted over a wide range with a high resolution [30]. This can be achieved either by making the time delays of the base cells smaller than Tgran or by adding additional base cells to each delay element. It has been shown that the coefficients of a continuous-time digital filter can be optimized by applying this so-called delay line adjustment [8–11]. In the best case, the filter coefficients can be simplified by this method in such a way that no multiplications must be carried out in the filter anymore. In addition, the filter characteristic can be improved by delay line adjustment. By avoiding multipliers, the required chip area for a filter implementation can be considerably reduced. The method is described in more detail in Section 18.4.

18.2 CT Delay Element Concept with Reduced Hardware Requirements

It is obvious that in CT systems the delay element is the most critical functional block. To make CT systems more competitive, it is thus essential to optimize these functional blocks. The respective objectives are power consumption and chip area reduction, without sacrificing CT signal quality. The time delay generated by a base cell should be equal to or smaller than Tgran; otherwise, signal information can get lost and signal quality would suffer. Of course, it is possible to make the time delay smaller than Tgran; however, this would result in an increased number ND1 of base cells, and therewith a larger chip area.

18.2.1 Functional Description of the New Base Cell

To meet the aforementioned requirements, a new delay element concept has been developed, which can operate with a reduced number of delay circuits per delay element. Based on this concept, hardware requirements and power consumption can be considerably reduced. In the classical delay element in Figure 18.1, one delay circuit is responsible for the transfer of one signal value through one base cell. This happens ND1 times, until the signal value is delayed by TD and is shifted out of the delay element [28,29].

The new base cell TB shown in Figure 18.2, however, can store up to M signal values at the same time. It consists of only one analog delay cell, but the digital circuitry is enlarged to a shift register with M FF stages. This base cell can be further extended for input signals with a word length of N bits by arranging N shift registers in parallel, as shown in the figure. Since M FFs are cascaded in each shift register, the time delay generated by the new base cell is increased to TBDC, with TBDC = M · Tgran. Therefore, a signal value is now kept for the time interval TBDC in one base cell.

If a new signal value arrives at the base cell, it is stored in the first stages of the shift registers and the counter is reset. This counter reset, together with the active input pulse, triggers the delay circuit via the upper AND gate. After the time interval Tgran, the delay circuit generates an output pulse, which triggers the shift registers so that the stored signal values are shifted to the next FF stages. Furthermore, this output pulse generates a new trigger signal for the delay cell and increments the counter, as long as the counter has not reached the value M−1. Thus, the delay circuit keeps on generating pulses with the time grid Tgran until the counter has reached M−1 and no values are stored in the registers anymore. If all signal values are shifted out, the delay circuit stops generating pulses and goes to power-down mode until the input signal changes again. Furthermore, for each signal value shifted out of a base cell, a trigger signal "pulse out" is generated by the respective AND gate. This pulse can be applied to the next cascaded base cell to trigger the delay circuit of this cell and to store the respective data value.

Since each base cell generates a delay of M · Tgran, the number of base cells required for a delay cell with


FIGURE 18.2
CT base cell TB with shift registers as storage elements.
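A behavioral sketch of the sizing relations and of the base cell TB's timing (an illustrative model of the rules described in the text, not the authors' hardware; M = 8 is an arbitrary choice): Equation 18.1 gives Tgran, Equation 18.2 the classical cell count, and the function below reproduces the re-gridding of Figure 18.3, where an event arriving while the cell is still active is deferred to the next Tgran tick and leaves after TBDC + Terr.

```python
import math

def t_gran(n_bits, f_in_max, amplitude=1.0):
    """Granularity time from Equation 18.1."""
    return 1.0 / (2**(n_bits - 1) * 2 * math.pi * amplitude * f_in_max)

# Examples from the text: f_in_max = 4 kHz, TD = 62.5 us.
print(t_gran(10, 4000.0))                     # ~77.7 ns, as quoted for N = 10
nd1 = math.ceil(62.5e-6 / t_gran(8, 4000.0))  # Equation 18.2 for N = 8
print(nd1)                                    # 202 base cells
M = 8                                         # assumed register depth
print(math.ceil(nd1 / M))                     # base cells needed with cell TB

def base_cell_tb(events, tg, m):
    """Output times of base cell TB for sorted input event times."""
    outputs = []
    grid0 = 0.0                  # phase of the active Tgran tick grid
    empty_at = -math.inf         # instant when the last stored value leaves
    for t in events:
        if t >= empty_at:
            latch = grid0 = t    # idle cell: the event starts a new grid
        else:                    # active cell: defer to the next tick
            latch = grid0 + math.ceil((t - grid0) / tg) * tg
        outputs.append(latch + m * tg)   # shifted out after TBDC = M * Tgran
        empty_at = outputs[-1]
    return outputs

# Figure 18.3 scenario (Tgran = 1, M = 3): the third event falls between
# ticks and leaves after TBDC + Terr with Terr = 0.5.
print(base_cell_tb([0.0, 5.0, 6.5], 1.0, 3))  # -> [3.0, 8.0, 10.0]
```

Events spaced further apart than TBDC pass through with a pure M · Tgran delay; only dense bursts are re-gridded, matching the text's observation that only fast-changing signal regions are affected.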


FIGURE 18.3
Timing diagram and FF states for a data transfer into the new base cell, with word length of 2 bits.

delay time TD can be reduced to ND1/M, even though the total number ND1 of digital storage elements has not changed. Note that an additional base cell is required if ND1/M is not an integer. This cell consists of one analog circuit and the remaining digital storage elements.

Based on this concept, the delay circuit of one base cell controls several data values stored in the respective shift registers at the same time. Thus, the number of delay circuits per delay element is reduced, which results in a smaller chip area. Furthermore, the delay circuit controls the shift of all values in one base cell simultaneously. Therefore, fewer delay circuits are active at the same time and the dynamic power consumption is reduced.

The timing diagram in Figure 18.3 illustrates the operation of the new base cell in more detail for a 2-bit input signal. At time instant t0, when a new signal value {0 1} arrives at the base cell, the corresponding pulse activates the delay circuit and the signal value is stored in the first stage of the shift registers. After the time interval Tgran, the delay circuit triggers the shift registers and {0 1} moves on to the second stage. This recurs until the shift registers run empty and the signal value {0 1} is shifted out of the base cell after the time interval TBDC.

The next signal value {1 1} arrives at time instant t1 and is now stored in the first register stage. Furthermore, the delay circuit is started again, and after Tgran, the value {1 1} is shifted to the second register stage. As long as signal values are contained in the base cell, it operates in the time grid Tgran. Thus, even though the next input value arrives at time instant t2 = t1 + Tgran + ΔT (with 0 < ΔT < Tgran), it is not stored before t3 = t1 + 2Tgran in the first register stage. At the same time, the values already stored in the second stage are shifted to the third. The next pulse is generated by the delay circuit again after Tgran, causing

another shift for the values in register stages 1 and 3. Thus, only one pulse is required for the shift of both signal values. With an increasing number M of shift register stages, the number of active delay circuits decreases, which results in lower power consumption. A possible drawback of the new base cell is that the signal itself is also affected. As can be seen in the timing diagram (Figure 18.3), the signal value arriving at time instant t2 suffers an additional delay and is shifted out of the base cell not before TBDC + Terr. This time shift always happens if the time distance between two consecutive signal changes is smaller than TBDC and if these changes are not spaced by an integer multiple of Tgran. The time shift Terr is in the range 0 < Terr < Tgran and can also be considered a unidirectional jitter-like error. The resulting CT signal is identical to a signal sampled with frequency 1/Tgran. Since sampling occurs only when more than one signal value is stored in one base cell at the same time, only the fast-changing regions of the input signal will be affected.

The limit frequency fD, where sampling comes into operation, is determined by the granularity used for the delay element and the number M of FF stages used for the registers. Based on (18.1), the new parameter fD can be evaluated [9]. Since the time interval during which a signal value is stored in one base cell is now given by TBDC = M · Tgran, the time distance between two successive pulses generated by an input signal with frequency fD needs to be longer than TBDC. Thus, the granularity of a signal with frequency fD and amplitude A = 1 must be equal to M · Tgran, with

M · Tgran = 1 / (2^(N−1) · 2π fD). (18.3)

Therefore, the limit frequency fD is obtained from

fD = fin,max / M. (18.4)

If the input signal frequency is above fD, the output signal y2(t) of the delay element becomes a sampled version of the input signal with sampling frequency 1/Tgran. Note, however, that even for signal frequencies larger than fD, the signal is only sampled in its fast-changing regions. A sinusoidal signal with maximum input frequency fin,max is still processed as a continuous-time signal during time intervals where the signal is only

18.2.2 Required Chip Area

The classical base cell shown in Figure 18.1 consists of an analog delay circuit, generating the time delay Tgran, and a digital latch where the respective signal value is stored. The required number ND1 of cascaded base cells in the delay element is obtained from (18.2). The respective chip area A_D1 for such a delay element can thus be approximated by

A_D1 = ND1 · A_DC + ND1 · A_R, (18.5)

with A_DC the chip area of the delay circuitry and A_R the chip area of the latch(es). For modern semiconductor technologies, A_DC is considerably larger than A_R. Depending on the minimum feature size used for implementation, the ratio A_DC/A_R is in the range of 13/1 to 25/1 [12]. This ratio increases with downscaling of the semiconductor technologies, since the chip area of the capacitor in the analog delay circuit cannot be decreased by the same amount as the chip area for the digital circuitry. Thus, A_D1 is mainly determined by the chip area of the analog delay circuits.

The required chip area for a delay element can be considerably reduced by using the new base cell TB in Figure 18.2. As described previously, this cell consists of one analog delay circuit and a shift register composed of M cascaded storage cells, so that several values can be stored. Even though the total number of storage cells in the delay element must still be ND1 to generate the time delay TD, the number of base cells can be reduced to ND2 = ⌈ND1/M⌉. Thus, the required overall chip area A_D2 of a delay element can now be approximated by

A_D2 = ⌈ND1/M⌉ · A_DC + ND1 · A_R. (18.6)

If ND1/M is not an integer, an additional base cell at the end of the base-cell cascade within the delay element contains the remaining registers to obtain a total number of ND1 storage elements. Using (18.5) and (18.6), the normalized chip area has been determined for different values of M. In Table 18.1, the respective values normalized to the chip area of the classical delay element are listed. The implementation in a standard CMOS technology had been assumed [12]. Furthermore, it has been assumed that the chip area for wiring and the control logic is nearly constant for both types of delay elements.

The improvements with respect to chip area and power consumption get larger with increasing M. If M = 1, there is no difference between the classical and the novel base cell. The improvements converge to a
slowly changing. maximum for M = ND1 when all storage elements are in
one base cell. However, for large M, the limit frequency
f D becomes very low, thus the signal would be sampled
18.2.2 Required Chip Area
most of the time. The optimum for M is thus between the
In the following, the required chip area for the different maximum and the minimum value. The determination
delay element realizations will be determined in more of the optimum number of storage elements per base cell
detail. Each base cell of the classical CT delay element will be discussed in Section 18.2.4.
426 Event-Based Control and Signal Processing
TABLE 18.1
Normalized Delay Element Size for Different Values of M, N_D1 = 202, A_DC/A_R = 13, and N = 8

    M          1     2     3     4     5     6     10    20    30    40    50    100
    A_D2/A_D1  1     0.54  0.38  0.31  0.26  0.23  0.17  0.12  0.1   0.1   0.09  0.08
[Figure: normalized activity index S (y-axis) versus input frequency in kHz (x-axis, 0 to 4); one curve per value of M = 1, 2, 3, ..., 202]

FIGURE 18.4
Normalized number of active analog delay cells per second versus frequency; parameter is M, the number of storage elements per base cell.
18.2.3 Reduction of Power Consumption

To compare the power consumption of the new base cell with that of a classical CT delay element, an activity index S is introduced, where S is equivalent to the number of active analog delay circuits per second. In the classical CT delay element, an incoming signal value has to pass all base cells of a delay element and, therefore, N_D1 delay circuits are activated for transmission. In [4], it has been shown that (2^(N+1) − 4) f_in signal values are generated per second when time continuously quantizing a sinusoidal signal of frequency f_in with N bits. Thus, the respective activity index S1 is given by

    S1 = (2^(N+1) − 4) · f_in · N_D1.    (18.7)

Since the new delay element contains several registers per base cell, the number of active analog delay circuits is reduced correspondingly. For frequencies below f_D, the time distance between consecutive changes is larger than TBDC and, therefore, the base cell will operate like the classical base cell. Thus, the respective activity index S2 is equal to S1. For frequencies above f_D, however, the shift of several signal values is initiated by only one delay circuit, and S2 will be smaller than S1.

Figure 18.4 shows a plot of the normalized activity index S versus the input frequency for several values of M. The parameters are the same as in the example in Section 18.1. The curves in Figure 18.4 confirm that the savings increase fast for small values of M and converge against a maximum for the case where all registers are combined in one base cell.

18.2.4 Optimum Number of Registers per Base Cell

To profit from the advantages of CT signal processing also for the highest signal frequency the system is designed for, the proposed delay element should operate time continuously at least during the time intervals with slower signal changes. Since the onset of sampled data operation is determined by M · Tgran, the optimum value for M is an important design parameter for the new base cell.

As has been shown previously, the required chip area decreases with increasing values of M. However, if the
Concepts for Hardware-Efficient Implementation of Continuous-Time Digital Signal Processing 427
chosen M is too large, the base cell operates as a sampled data system all the time and any benefit of CT signal processing is lost. To avoid permanent sampling of the signal, M must be equal to or smaller than a maximum value Mmax. Mmax should be chosen such that the regions of a sinusoidal signal, where the amplitude approaches its minimum or maximum value, are still processed continuously in time. In [9], it had been shown that Mmax can be determined by the following relationship:

    Mmax = ⌊ (2/Tgran) · ( 1/(4 f_in,max) − arcsin(1 − Q/(2A)) / (2π f_in,max) ) − 1 ⌋
         = ⌊ 2^N · ( π/2 − arcsin(1 − Q/(2A)) ) − 1 ⌋,    (18.8)

where Q = 1/2^(N−1) is the quantization unit and A = 1 − Q is the amplitude of the sinusoidal input signal.

With (18.8), it is now possible to evaluate the optimum number of registers per base cell for any number N of bits used for quantization. In Table 18.2, Mmax is listed for different values of N.

TABLE 18.2
Maximum Number Mmax of Register Stages per Base Cell for Different Quantizer Resolutions, N

    N     4    5    6    7    8    9    10   11   12   13   14   15
    Mmax  5    7    10   15   21   31   44   63   89   127  180  255

For the example considered previously with N = 8, a value of Mmax = 21 is obtained by (18.8). Therefore, by using (18.6) and (18.7), the improvements with respect to chip area and power consumption can be determined. It turns out that chip area is reduced to 12%, and the number of active delay circuits is reduced to 8% of the original values.

18.2.5 Performance Considerations

Even though the delay element based on the new base cell TB operates like a sampled data system in the regions of fast signal changes, the advantages of CT signal processing still hold for regions with slower signal changes, if M is chosen smaller than Mmax. Furthermore, compared with systems with CT delay elements proposed up to now, the power consumption can be considerably reduced.

The improvement of the in-band SNR due to CT signal processing compared with sampled data signal processing is illustrated in Figure 18.5. While the in-band SNR of the DT signal is limited by the aliasing components of the quantization noise, the SNR of the CT signal is determined by the harmonics within the baseband. For frequencies above f_in,max/3, all harmonics are shifted out of the band of interest and the SNRCT approaches infinity.

[Figure: signal-to-noise ratio in dB (y-axis, 40 to 100) versus input frequency in kHz (x-axis, 0 to 4); curves SNRCT (continuous-time signal) and SNRDT (discrete-time signal), with a breakpoint at f_in,max/3]

FIGURE 18.5
Signal-to-noise ratio (SNR) for a continuous-time system and for a discrete-time (sampled data) system, with 8-bit quantizer resolution.

Since the signal is partially sampled by the new delay element TB, the SNR will be degraded compared with a conventional CT system. In a worst-case scenario, which is either approached for frequencies above f_D and M > Mmax, or for frequencies above f_in,max, the signal is completely sampled. For a sinusoidal signal with frequency f_in,max and amplitude A = 1, the worst-case SNR in the baseband is given by

    SNRCT,wc = 1.76 dB + N · 6.02 dB + 10 log10(R) dB.    (18.9)

The oversampling factor R is the ratio of the sampling frequency 1/Tgran to the baseband width, and with (18.1), the following relationship holds:

    R = 1 / (Tgran · 2 f_in,max) = π · 2^(N−1).    (18.10)

Thus, (18.9) can be written as

    SNRCT,wc = (3.72 + N · 9.03) dB.    (18.11)

Even though the worst-case SNR approaches the SNR of a system sampled with 1/Tgran, in most cases the SNR
obtained with the new base cell is much better. This is due to the fact that major parts of the signal are still processed continuously in time.

Figure 18.6 shows a plot of the SNR versus the frequency of the input signal for M = 21. As can be seen, the SNR of the system with the new delay cell is much better than the worst-case SNRCT,wc, even though it is slightly degraded compared with the SNR of the original CT signal.

[Figure: signal-to-noise ratio in dB (y-axis, 40 to 120) versus input frequency in kHz (x-axis, 0 to 4); curves SNRCT for M = 21, SNRCT,wc, and SNRDT, with markers at f_D and f_in,max/3]

FIGURE 18.6
SNR for a conventional CT system and a CT system with the new base cell TB, with M = 21 register stages and 8-bit quantizer resolution.

18.3 Asynchronous Sigma Delta Modulation for CT DSP Systems

Another promising approach for optimizing CT digital signal processing arises from quantization by a time-continuous sigma delta modulator. This converter type generates a digital signal with a small word length from the analog input. Similar to delta-modulated signals [2], the respective digital signal processing must thus be designed only for the small word length. However, contrary to delta modulation, not only the deltas between successive data values are processed, even though the data stream can have a word length as small as 1 bit. Thus, no accumulators are needed to retrieve the actual signal. Furthermore, due to noise-shaping performed by the modulator, the SNR can be considerably improved in the band of interest. However, since no sampling is performed in CT systems, only an Asynchronous Sigma-Delta Modulator (ASDM) [13–20] can be used for quantization.

The fundamentals of this approach are described in the following and a mathematical description is given. Furthermore, a relationship between the limit cycle frequency and the hysteresis of the quantizer is derived. Finally, simulation results are presented and a quantitative comparison with the delta modulator (DM) is given.

18.3.1 Asynchronous Sigma-Delta Modulation

A promising approach for the reduction of implementation costs and for improving performance of time-continuous DSP systems is to use sigma-delta modulation (SDM) for quantization. SDM is a well-established method for sampled data systems, and a large number of different realizations of respective converters is implemented and described in the literature [13,14]. The digital output signal has a low word length which nevertheless can have an excellent SNR in certain frequency bands. This is achieved by applying oversampling and noise-shaping techniques to improve the SNR in the band of interest. Thus, the classical sigma-delta modulator is a sampled data system and requires a clock for sampling. Noise-shaping is obtained by a closed-loop configuration which results in filtering of the quantization noise. When a high-pass characteristic is implemented, the quantization noise is reduced in the base band and is shifted to the higher frequency range. While in classical SDM, sampling is done before the closed-loop system, in continuous-time configurations sampling is carried out inside the closed loop in front of the quantizer [13,14]. Both approaches, however, generate sampled data signals as output signals.

A realization which operates completely in continuous time is obtained by using ASDMs. In the respective loop configuration shown in Figure 18.7, a limit cycle is automatically generated which is modulated with the input signal [15–20]. This modulator type also converts the analog input signal into a digital signal with a small word length. The most important design parameter is the limit cycle frequency, which determines the spectral properties of the output signal and the SNR in the band of interest. Further design parameters are the order of the linear filter, and the resolution and the hysteresis of the quantizer. The configuration shown in Figure 18.7 is a closed-loop system with a linear filter with transfer function HL(s) and the nonlinear quantizer. The resolution of the quantizer is only 1 bit in the simplest case and can be modeled by a nonlinear transfer characteristic HNL(A), with A the amplitude of the input signal.

Normally, the quantizer is implemented with a hysteresis h which can be also used for optimization of the overall system performance. x(t) and y(t) are the input and output signals of the system, respectively. For simplicity, a simple integrator is chosen as the linear part, and its output signal is i(t). For actual implementation, the ideal integrator can be replaced by a first- or second-order low-pass filter, which limits the gain below the characteristic frequency to a constant value.
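The loop behavior described above can be illustrated with a small simulation. The sketch below (an illustration only, with a made-up time step; not the hardware realization discussed in this chapter) models the simplest case, an ideal integrator as HL and a 1-bit quantizer with hysteresis h:

```python
def asdm_duty_cycle(x=0.7, h=0.1, dt=1e-4, steps=200_000):
    """Forward-Euler simulation of a first-order ASDM.

    Integrator: di/dt = e(t) = x(t) - y(t). The 1-bit quantizer output y
    flips to -1 when i(t) falls to -h and back to +1 when i(t) rises to +h.
    Returns the duty cycle of the square-wave output y(t)."""
    i, y = 0.0, 1.0
    high = 0
    for _ in range(steps):
        i += (x - y) * dt          # integrate the error signal e = x - y
        if y > 0 and i <= -h:      # lower threshold reached: switch output low
            y = -1.0
        elif y < 0 and i >= h:     # upper threshold reached: switch output high
            y = 1.0
        if y > 0:
            high += 1
    return high / steps

# For a constant input x the duty cycle settles at (1 + x)/2;
# with x = 0.7 and h = 0.1 this gives roughly 0.85 (compare Figure 18.8).
print(asdm_duty_cycle())
```

The output encodes the input amplitude in the pulse widths of a self-oscillating square wave, without any clock in the loop.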
[Figure: ASDM closed-loop block diagram; the input x(t) and the fed-back output form the error signal e(t), which drives a linear filter HL(s) of order N = 1, 2, 3, ... whose output i(t) feeds a quantizer HNL(A) of resolution L bits (L = 1, 2, 3; 1 bit in the simplest case) with hysteresis, producing the output y(t)]

FIGURE 18.7
ASDM closed-loop system.

[Figure: time-domain waveforms, amplitude (y-axis) versus time in μs (x-axis, 0 to 4); the square-wave output y(t) and the triangular integrator output i(t) bounded by the hysteresis thresholds h and −h]

FIGURE 18.8
ASDM output signal y(t) and integrator output i(t).

The 1-bit quantizer operates as a nonlinear element with hysteresis h [15,16,20].

Figure 18.8 shows the time-domain wave forms of y(t) and i(t) for a constant input signal with x(t) = 0.7 applied to the ASDM. A value of h = 0.1 was chosen for the hysteresis of the quantizer. As shown by the figure, the output of the ASDM is a periodic square wave.

For further processing by a continuous-time DSP system, the square wave can be transformed into a pulse train at the transitions of y(t) and an additional signal corresponding to the data values. The accumulation process necessary for the DM-encoded signal is not required anymore. Thus, the ASDM output can be processed by the circuit configurations described before for CT digital signal processing.

Again the power consumption of the CT DSP system is mainly determined by the rate of change of the pulse train, which is two times the frequency of the output square wave. For minimum power consumption, this rate should be as small as possible.

Even though no clock signal is applied, an unforced periodic oscillation is generated at the output of the loop structure. The frequency of this limit cycle is maximal if the input signal is zero and changes with the amplitude of the input signal. This self-oscillation adds extra frequency components to the output spectrum. To keep the band of interest unaffected by these distorting frequency components, the limit cycle frequency ωc must be properly chosen.

As shown in Figure 18.8, the transitions of the ASDM output signal y(t) occur whenever the integrator output i(t) reaches the threshold levels h or −h of the quantizer. ASDM systems introduce no quantization noise to the system since no sampling is performed.
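Since i(t) ramps linearly between the two thresholds, the limit-cycle timing of the first-order loop can be worked out from the ramp slopes alone. The sketch below (an illustrative calculation using the example values of Section 18.3.2; ωp is treated as a frequency in Hz, as in the text) anticipates the closed-form results (18.17), (18.19), (18.25), and (18.27):

```python
def segment_times(x, h, wp):
    """Durations of the y = +1 and y = -1 segments of the limit cycle for a
    constant input x (|x| < 1), assuming an integrator di/dt = wp*(x - y)
    whose output ramps by 2h between the thresholds -h and +h."""
    t_high = 2 * h / (wp * (1 - x))   # y = +1: |slope| = wp*(1 - x)
    t_low = 2 * h / (wp * (1 + x))    # y = -1: |slope| = wp*(1 + x)
    return t_high, t_low

h, wp = 0.01, 15e3                    # example values used in Section 18.3.2

t_hi, t_lo = segment_times(0.0, h, wp)
f_c = 1.0 / (t_hi + t_lo)             # zero-input limit cycle, cf. (18.17)
print(f_c)                            # about 375 kHz

A = 0.8                               # sinusoidal input amplitude
f_0 = (1 - A**2 / 2) * f_c            # average limit cycle frequency, cf. (18.19)
print(f_0)                            # about 255 kHz

t_gran = min(segment_times(A, h, wp)) # shortest output pulse, cf. (18.25)
print(t_gran)                         # about 740 ns

N = 10                                # DM resolution needed for comparable SER
f_in_star = f_0 / (2**N - 2)          # rate crossover with DM, cf. (18.27)
print(f_in_star)                      # about 250 Hz
```

The duty cycle t_high/(t_high + t_low) reduces algebraically to (1 + x)/2, which is the relation (18.22) used later for the granularity analysis.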
In the following, the limit cycle frequency f_c = ωc/2π will be determined for a system with a zero input signal and an integrator as the linear filter. As has been shown in [15], the relationship between the filter transfer function HL(jω) and the hysteresis h for an ASDM system with input signal x(t) = 0 can be described by

    (4/π) · Σ_{n=1,3,5,...} (1/n) · Im{HL(jnωc)} = ±h.    (18.12)

The transfer function HL(jω) of an integrator with characteristic frequency ωp is given by

    HL(jω) = 1 / (jω/ωp).    (18.13)

Inserting this expression into (18.12) results in

    −(4/π) · Σ_{n=1,3,5,...} (1/n) · (1/(nωc/ωp)) = ±h.    (18.14)

The limit cycle frequency ωc can be thus determined by

    ωc = (4ωp/(πh)) · Σ_{n=1,3,5,...} 1/n²    (18.15)
       = (π/2) · (ωp/h).    (18.16)

Therewith, the maximum limit cycle frequency f_c = ωc/2π is given by

    f_c = ωp / (4h).    (18.17)

In [16], it was shown that for a nonzero input signal, the limit cycle frequency is lower than the value which is given by (18.17). If a sinusoidal input signal x(t) = A · sin(2π f_in t) is applied, the limit cycle frequency is a function of time and is given by

    f_cs(t) = [1 − A² · sin²(2π f_in t)] · f_c.    (18.18)

Furthermore, the average limit cycle frequency for a sinusoidal signal x(t) is obtained from

    f_0 = (1 − A²/2) · f_c.    (18.19)

Additional distorting frequency components are generated at the output of the ASDM by the sinusoidal input signal. These components appear around f_0 and can be described by Bessel functions at frequencies f_0 ± k · f_in, where k is an even integer [16].

The average limit cycle frequency f_0 can be made smaller by decreasing ωp and by increasing h, as can be concluded from (18.17) and (18.19). The lower limit of f_0 is determined by the distorting frequency components which are shifted into the band of interest and thus reduce the in-band SNR. Therefore, the ASDM should be designed in a way that f_0 is chosen large enough, so that these components are out of the band of interest. Furthermore, small values of h make the hardware implementation more critical. A reasonable lower value for h is 0.01.

18.3.2 Optimization of a First-Order Asynchronous Sigma-Delta Modulator

Since the transfer characteristic of an ideal integrator can be approximated in a real implementation only over a limited frequency range, a better choice for HL(jω) is a first-order low-pass filter transfer function, which is given by

    HL(jω) = 1 / (1 + jω/ωp).    (18.20)

The output power spectrum of a respective ASDM with hysteresis h = 0.01 and characteristic frequency ωp = 15 kHz is shown in Figure 18.9. The sinusoidal input signal with frequency f_in = 3.644 kHz had an amplitude of A = 0.8. The noise of the output signal increases up to the limit cycle frequency f_0 and for higher frequencies it decreases again. The respective spectrum for a sampled data SDM looks different, since it increases up to half the sampling frequency.

The limit cycle frequency of this first-order modulator can be again determined by (18.17), as has been shown in [15]. Using this relationship with the parameters given above, a maximum limit cycle frequency f_c = 375 kHz is obtained. Thus, with (18.19) the average limit cycle frequency for the considered sinusoid is given by f_0 = 255 kHz. As can be seen in Figure 18.9, there are a number of Bessel components around f_0. The frequency band of interest from 0 to 4 kHz is, however, only slightly affected and an SNR of more than 70 dB is obtained.

The distorting frequency components diminish completely from the baseband range and the SNR becomes even better by increasing the characteristic frequency of the low-pass filter and therewith the average value of the limit cycle frequency f_0. However, when increasing the limit cycle frequency, the unforced oscillations at the ASDM also increase. This results in an increased pulse rate of the change signal. Based on these considerations, it becomes clear that the choice of the limit cycle frequency is a trade-off between the SNR and the data rate.

Figure 18.10 shows a plot of the in-band signal-to-error ratio (SER) versus the limit cycle frequency ωp and the frequency of the input signal. The amplitude of the input signal was A = 0.8. As can be seen from the figure,
the SER improves with increasing limit cycle frequency ωp and degrades with increasing input frequency f_in. For ωp ≥ 15 kHz, the SER is better than 70 dB for all input frequencies in the range of 0–4 kHz.

[Figure: ASDM output power spectrum, level in dB (y-axis, −180 to 0) versus frequency in Hz on a logarithmic axis from 10² to 10⁸ (x-axis), with the average limit cycle frequency f_0 marked]

FIGURE 18.9
ASDM output power spectrum.

[Figure: surface plot of the SER in dB (vertical axis, 50 to 80) versus ωp in kHz (5 to 20) and input frequency in kHz (0 to 4)]

FIGURE 18.10
SER of the ASDM output.

The previous considerations have shown that for an ASDM system the rate of the pulse train is constant and does not depend on the input signal frequency. It is obtained from the average limit cycle frequency by

    R_ASDM = 2 f_0 < 2 f_c.    (18.21)

The maximum rate R_ASDM,max = 2 · f_c is obtained for a zero input signal. The unwanted out-of-band frequency components and the limit cycle of the binary ASDM signal can be suppressed by decimation filters at the output of the ASDM system. For this signal processing, it is essential that the time distances between the transitions of the ASDM output are preserved. In the classical DT DSP, the required accuracy for the detection of these transitions would result in a very high oversampling rate. When using CT DSP, these time distances are,
however, automatically kept by the CT delay lines with a very high accuracy. In Chapter 18, a proper architecture for an optimized CT decimation filter is proposed.

The granularity Tgran,ASDM required for processing of the ASDM output signal can be determined from the minimum distance between adjacent signal transitions. The input signal x(t) is assumed to be either a constant or a sinusoidal signal. The instantaneous duty cycle D(t) of the output signal is defined as the ratio of the instantaneous pulse width α(t) and the limit cycle period T(t) [16] and is thus given by

    D(t) = α(t)/T(t) = (1/2) · (1 + x(t)).    (18.22)

With the instantaneous limit cycle frequency f_cs(t) = 1/T(t) and solving for α(t), the following relationship is obtained:

    α(t) = (1 + x(t)) / (2 f_cs(t)).    (18.23)

With (18.18), the limit cycle frequency can be written as f_cs(t) = (1 − x²(t)) · f_c. Inserting this relationship into (18.23) gives

    α(t) = 1 / ((1 − x(t)) · 2 f_c).    (18.24)

For a sinusoidal signal with amplitude A, the required granularity Tgran,ASDM for CT processing is obtained from the minimum value of α(t) [20]. With (18.24) it is given by

    Tgran,ASDM = 1 / ((1 + |A|) · 2 f_c).    (18.25)

18.3.3 Performance and Hardware Comparison of ASDM with DM

To achieve about the same performance with a delta-modulated signal, a quantizer with a resolution of N = 10 bits is required. The respective SER versus the input frequency is shown in Figure 18.11. The in-band SER is considerably degraded for frequencies below f_in,max/3 due to the third harmonic falling into the band of interest. For f_in > f_in,max/3, the remaining in-band distortion component is located at the input frequency itself. Since this component is constant for input frequencies in the range of f_in,max > f_in > f_in,max/3, the respective SER is also constant.

[Figure: SER in dB (y-axis, 60 to 110) versus input frequency in kHz (x-axis, 0 to 4), with a marker at f_in,max/3]

FIGURE 18.11
SER of the DM output.

For a delta-modulated signal, the pulse rate of the change signal increases linearly with the input frequency f_in. Since all 2^N − 1 quantization levels used by a mid-tread quantizer are crossed two times during one period of a sinusoidal signal with maximum amplitude, the respective maximum pulse rate of the change signal is given by [6]:

    R_DM,max = (2^(N+1) − 4) · f_in.    (18.26)

The input frequency f_in*, for which both systems generate the same number of change pulses per unit time, can be determined by using (18.21) and (18.26):

    f_in* = f_0 / (2^N − 2).    (18.27)

Thus, for input signal frequencies smaller than f_in*, the DM system generates fewer change pulses than
the ASDM, whereas for frequencies larger than f_in*, the ASDM performs better with respect to the pulse rate.

To compare the performance of the optimized ASDM and DM systems, a sinusoidal input signal with a maximum amplitude of A = 0.8 is applied to both systems. For DM, a resolution of N = 10 bits is required to fulfill the specification with an SER > 70 dB. Similarly, as shown in Figure 18.9, a first-order ASDM with ωp = 15 kHz and h = 0.01 also fulfills the specification. With (18.16) and (18.18), a value of f_0 = 255 kHz is obtained for the average limit cycle frequency. Inserting the values into (18.27), we obtain f_in* = 250 Hz. Therefore, for input signal frequencies above 250 Hz, the ASDM generates fewer change pulses and still fulfills the specification.

The relationship (18.25) also holds for a constant input signal x(t) = A. Thus, with A = 0.8 and f_c = 375 kHz, a very moderate value of Tgran,ASDM = 740 ns is obtained. Using (18.1), the required granularity for the DM system with N = 10 can be determined to be Tgran,DM = 97 ns, which is significantly smaller. Therefore, the requirements for the delay elements used for CT signal processing behind the quantizer are more relaxed for the ASDM system; the granularity can be chosen more than seven times larger. The number of required delay cells per delay element and therewith the implementation costs are reduced by the same amount.

In a classical DT SDM system, each doubling of the sampling rate results in an improvement of about 9 dB for the SNR [13]. Therefore, to achieve an in-band SNR of 70 dB for the DT SDM output, an oversampling ratio of 140 must be used. For f_in,max = 4 kHz, this results in a sampling rate of 1.2 MHz. Comparing this value with f_c = 375 kHz for the CT ASDM verifies the superior performance of the latter also in comparison to a sampled data system with SDM.

Thus, it could be verified that digitization by an ASDM can result in several advantages. Compared with DM systems, the pulse rate of the change signal can be reduced for a large range of input frequencies, and it does not depend on the input frequency itself anymore. Furthermore, it has been shown that decimation filtering behind the ASDM can be performed very efficiently by CT filters since the granularity requirements are relaxed and no accumulation is required.

18.4 Optimized Filter Structures for Event-Driven Signal Processing

Digital filtering is one of the most often implemented functionalities realized by digital signal processing systems. Digital filters are required either for separation of signals which have been combined or for restoration of signals that have been distorted. Plenty of structures and architectures have been proposed over the years for filter implementation. All of them have their advantages and disadvantages, and a number of parameters decide the best choice for a certain application. These parameters are, for example, cost and speed limitations, required performance, and others.

Of course, the selection of the best suited filter structure for a certain application is also an important issue for event-driven systems. The well-established methods for the design of digital sampled data filters can be also applied to CT filters. Conventional digital filters consist of registers as storage elements, adders, and possibly multipliers. In a continuous-time realization, the registers are replaced by continuous-time delay lines to eliminate the clock. When choosing the delays of the delay lines equal to the delays of the sampled data filter, which is usually one clock cycle, the transfer function remains the same, including the periodicity. Since the constraints with respect to a hardware implementation are however different, the optimization procedure can result in a different architecture best suited for a certain application. The most important difference is the considerably higher implementation cost for the time-continuous delay elements. Thus, the number of delay elements and the respective word length should be as small as possible for an optimized solution.

Furthermore, in a classical digital filter, the time delay must be either one clock period or a multiple of it. For continuous-time filters, this restriction, however, no longer holds. Thus, an additional degree of freedom is available for the filter design by adjusting the time delays of the delay lines [10,11]. Various values can be chosen for every delay element. This additional flexibility can be used for designing filters with very simple coefficients, thus avoiding multiplications. Instead, the respective operations can be exclusively realized by shift-and-add operations. Furthermore, the required number of shift-and-adds for every coefficient can be minimized by delay line adjustment. In the simplest case, the coefficients are made equal to a power of two. Furthermore, delay line adjustment can be used for improving the filter characteristic in the band of interest compared with the case with equal delays.

18.4.1 Continuous-Time Digital FIR Filters

The filter structure most often used for sampled data realizations is the finite impulse response filter shown in Figure 18.12, where the delays Di all have the same value [21]. The delay time in sampled data filters is usually equal to the sample time or a multiple of it.
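The defining property of such a filter, with or without a clock, is that the output is a weighted sum of delayed copies of the input. The sketch below (an illustration with made-up event data and tap values, not the chapter's hardware design) evaluates a tapped delay line directly on an event-based, piecewise-constant input signal:

```python
import bisect

# Piecewise-constant CT input given as (time, value) change events (times in ms)
event_times = [0.0, 1.0, 2.5, 4.0]
event_vals = [0.0, 1.0, 0.5, 0.0]

def x_at(t):
    """Value of the event-coded input at time t (zero before the first event)."""
    k = bisect.bisect_right(event_times, t) - 1
    return event_vals[k] if k >= 0 else 0.0

b = [0.5, 0.5]     # tap coefficients
Dk = [0.0, 1.0]    # cumulative tap delays (ms); not tied to any clock period

def y_at(t):
    """Continuous-time FIR output y(t) = sum_k b_k * x(t - D_k)."""
    return sum(bk * x_at(t - dk) for bk, dk in zip(b, Dk))

print(y_at(1.5))   # 0.5*x(1.5) + 0.5*x(0.5) = 0.5*1.0 + 0.5*0.0 = 0.5
```

Because x(t) changes only at events, y(t) also changes only at event times shifted by the tap delays; this timing information is exactly what the CT delay lines preserve in hardware.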
[Figure: tapped delay line; x(t) passes through delays D1, D2, ..., DM−1, the tap signals are weighted by coefficients b0, b1, b2, ..., bM−2, bM−1 and summed to form y(t)]

FIGURE 18.12
Continuous-time digital FIR-filter.
An advantage of this structure is its regularity; only performance can be obtained by adjusting the values of
Multiply and Accumulate (MAC) operations are the delay elements.
required. Furthermore, stability is always guaranteed, and the group delay of the filter can be made constant over the complete frequency range. The drawbacks of this architecture are, however, that the filter order, and therewith the number of delay elements and multiplications, is large compared with other structures such as recursive filters. Due to its robustness and its simplicity, the FIR structure has also been selected for the hardware implementation of continuous-time filters; a 16th-order FIR-filter is described in [5]. To minimize the hardware requirements for the delay cells, a delta modulator was used to generate a data stream with 1-bit word length as input to the filter. Therefore, the delay elements must only be designed for this small word length. Due to the 1-bit input to the multipliers, the multipliers degenerate to accumulators, further reducing the hardware requirements. In spite of these improvements, the required chip area for the proposed FIR-filter is, however, considerably larger than that of a respective sampled data filter. Furthermore, due to delta modulation, errors are accumulated: each error or missed value has an effect on all succeeding values. Thus, a reset for the filter must be generated at regular intervals. In [22,23], an improved realization of an FIR CT-filter of 16th order is described. The input data word is processed in parallel with a word length of 8 bits, and respective 8 × 8 bit multipliers are implemented. The required chip area is, however, considerably larger than for the previously mentioned bit-serial implementation. Thus, additional efforts are necessary to make CT-filters interesting for a commercial product.
Considerable hardware reduction and performance improvement can be obtained by applying the methods described in the preceding sections. The drawbacks of delta modulation can be avoided either by using an asynchronous ΣΔ-modulator for quantization or by processing input signals with a larger word length. As has been shown, the size of the time-continuous delay elements can be reduced by up to 90% when using the delay element concept described in Section 18.2. Further improvements with respect to implementation costs and performance can be achieved by optimizing the filter structure and its parameters, as described in the following.
The output of the filter in Figure 18.12 with coefficients b_k and time delays D_i is described in the time domain by the following relationship:

    y(t) = \sum_{k=0}^{M-1} b_k \, x(t - D_k),                (18.28)

with

    D_k = \sum_{i=1}^{k} D_i  \quad \text{and} \quad D_0 = 0.  (18.29)

The respective transfer function H(jω), with ω the frequency variable, is given by

    H(j\omega) = \sum_{k=0}^{M-1} b_k \, e^{-j\omega D_k}.     (18.30)

To make the transfer function symmetrical with respect to ω = 0, which is desirable for the majority of applications, the following restriction must hold for the values of the delay elements:

    D_k = D_{M-k}  \quad \text{for } k = 1 \text{ to } M - 1.  (18.31)

Furthermore, to obtain a linear-phase filter, either a symmetry or an antisymmetry condition must hold for the filter coefficients:

    b_k = \pm\, b_{M-1-k}  \quad \text{for } k = 0 \text{ to } M - 1.  (18.32)

If these relationships hold, the group delay of the filter is determined by

    \tau_g = \frac{1}{2} \sum_{i=0}^{M-1} D_i.                 (18.33)
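Equations 18.28 through 18.33 can be checked numerically. The short sketch below uses made-up coefficients and delays (not values from the chapter) to build the cumulative delays of Equation 18.29, evaluate H(jω) from Equation 18.30, and confirm that, when the symmetry conditions 18.31 and 18.32 hold, the phase of H(jω) is exactly −ωτ_g with τ_g from Equation 18.33:

```python
import cmath

# Hypothetical 5-tap example (values are not from the chapter): a
# symmetric continuous-time FIR filter with unequal element delays.
b = [1.0, 2.0, 3.0, 2.0, 1.0]   # b_k = b_{M-1-k}, Eq. (18.32)
D = [0.0, 0.5, 1.0, 1.0, 0.5]   # element delays D_i, D_0 = 0; D_k = D_{M-k}, Eq. (18.31)

# Cumulative tap delays D_k = sum_{i=1..k} D_i, with D_0 = 0, Eq. (18.29)
Dk = [sum(D[1:k + 1]) for k in range(len(b))]

def H(w):
    """CT FIR transfer function H(jw) of Eq. (18.30)."""
    return sum(bk * cmath.exp(-1j * w * dk) for bk, dk in zip(b, Dk))

tau_g = 0.5 * sum(D)            # group delay predicted by Eq. (18.33)

# Linear phase: H(jw) * exp(jw * tau_g) should be purely real at any w.
residual = max(abs((H(w) * cmath.exp(1j * w * tau_g)).imag)
               for w in (0.1, 0.7, 1.3, 2.9))
print(tau_g, residual)
```

With the symmetric choices above, H(jω)e^{jωτ_g} reduces to 3 + 4 cos ω + 2 cos 1.5ω, which is real for every ω, so the residual imaginary part is at rounding level.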
Concepts for Hardware-Efficient Implementation of Continuous-Time Digital Signal Processing 435

Equation 18.33 confirms that the group delay is constant over all frequencies even for the time-continuous case. Constant group delay is a quite important property for many applications.

18.4.2 Optimized Continuous-Time Digital FIR-Filter

As an example, we will consider in the following a linear-phase low-pass filter which can be used, for example, for suppression of the quantization noise behind a continuous-time ΣΔ-A/D-converter [7,8]. The optimization of the filter transfer function is performed by adjusting the delay lines. When normalizing the frequency axis to the pass-band edge frequency, the pass-band extends from 0 to 1 and the stopband from 4 to 32.
If the filter is implemented as a sampled data system, a 32nd-order FIR-filter is required for a minimum stopband attenuation of 42 dB. To avoid severe performance degradation, the coefficients must be quantized with at least 12-bit resolution. The respective values are listed in Table 18.3. The pass-band droop is 0.1 dB in the range from 0 to 0.25 and increases to 1.8 dB at the pass-band edge frequency, which is acceptable for the considered application. The respective filter transfer function is shown in Figure 18.13, curve (1). To avoid multipliers and to implement each coefficient with no more than one shift-and-add operation, the values have been simplified as shown in the third column of Table 18.3. Use of these coefficient values, however, results in significant performance degradation: the stopband attenuation is reduced

TABLE 18.3
Coefficient Values for the 32nd-Order Linear-Phase FIR Low-Pass Filter

Coefficient    Quantized to 12 bits    Optimized for 1 shift&add
b1, b33        −0.02920                2^−5
b2, b32        0.05109                 2^−4
b3, b31        0.07299                 2^−3 − 2^−6
b4, b30        0.11679                 2^−3 + 2^−5
b5, b29        0.16788                 2^−2 − 2^−5
b6, b28        0.23358                 2^−2 + 2^−5
b7, b27        0.31387                 2^−1 − 2^−3
b8, b26        0.40146                 2^−1 − 2^−4
b9, b25        0.48905                 2^−1 + 2^−5
b10, b24       0.58394                 2^−1 + 2^−3
b11, b23       0.67883                 1 − 2^−2
b12, b22       0.76642                 1 − 2^−2
b13, b21       0.84672                 1 − 2^−3
b14, b20       0.91241                 1 − 2^−4
b15, b19       0.96350                 1 − 2^−5
b16, b18       0.99270                 1
b17            1                       1
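The "1 shift&add" column in Table 18.3 means that each coefficient multiplication collapses to arithmetic shifts plus a single add or subtract. A minimal sketch of the idea on integer samples (the values here are made up for illustration, not taken from the chapter's hardware):

```python
# Sketch (not the chapter's hardware): a "1 shift&add" coefficient such
# as 2**-3 + 2**-5 is applied with two arithmetic right-shifts and one
# addition instead of a multiplier; forms like 1 - 2**-3 become a
# shift and a subtraction.
def mul_shift_add(x, shifts):
    """x * sum(2**-s for s in shifts); exact if x is a multiple of 2**max(shifts)."""
    return sum(x >> s for s in shifts)

def mul_one_minus(x, s):
    """x * (1 - 2**-s): one shift and one subtraction."""
    return x - (x >> s)

x = 1 << 10                        # input sample, a multiple of 2**5
print(mul_shift_add(x, (3, 5)))    # 160 == x * (2**-3 + 2**-5)
print(mul_one_minus(x, 3))         # 896 == x * (1 - 2**-3)
```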

[Figure 18.13 plot: level (dB), 0 to −80, versus normalized frequency f/fc, 0 to 30; curves (1), (2), (3).]

FIGURE 18.13
Transfer functions of the 32nd-order FIR low-pass filter. Curve (1), multiplications realized by 12-bit coefficients; curve (2), multiplications
realized by 1 shift&add operation; and curve (3), optimized delays, multiplications are realized by 1 shift&add.
436 Event-Based Control and Signal Processing
TABLE 18.4
Delay Values of the Optimized 32nd-Order Linear-Phase FIR Low-Pass Filter

Delay        Di/Ts      Delay        Di/Ts
D1, D32      0.24       D9, D24      0.975
D2, D31      1.17       D10, D23     1.025
D3, D30      1.03       D11, D22     0.98
D4, D29      1.05       D12, D21     0.95
D5, D28      0.98       D13, D20     1.005
D6, D27      1.025      D14, D19     0.98
D7, D26      0.985      D15, D18     0.985
D8, D25      0.98       D16, D17     0.98
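The delay-line adjustment that produced the values in Table 18.4 can be illustrated with a toy optimizer. The chapter's optimization procedure is based on differential evolution; the sketch below instead uses a plain keep-best random search on a small hypothetical filter, but it shows the same idea: perturb the element delays while preserving the symmetry D_k = D_{M−k} (so the filter stays linear phase), and keep a trial only if the worst-case stopband gain improves.

```python
import cmath
import random

random.seed(1)

# Illustrative sketch, not the chapter's 32nd-order design or its
# differential-evolution optimizer: with simple coefficients held
# fixed, the element delays are tuned by keep-best random search.
b = [1.0, 1.0, 1.0, 1.0, 1.0]                    # crude fixed low-pass taps
stopband = [1.4 + 0.1 * i for i in range(15)]    # rad/s grid, 1.4 .. 2.8

def worst_stop_gain(delays):
    """Largest |H(jw)| on the stopband grid (H as in Eq. 18.30)."""
    Dk = [sum(delays[:k]) for k in range(len(b))]
    return max(abs(sum(bk * cmath.exp(-1j * w * dk)
                       for bk, dk in zip(b, Dk))) for w in stopband)

delays = [1.0, 1.0, 1.0, 1.0]                    # nominal unit delays D_1..D_4
initial = best = worst_stop_gain(delays)
for _ in range(2000):
    a = delays[0] * (1.0 + random.uniform(-0.1, 0.1))
    c = delays[1] * (1.0 + random.uniform(-0.1, 0.1))
    trial = [a, c, c, a]                         # symmetric: D_k = D_{M-k}
    g = worst_stop_gain(trial)
    if g < best:                                 # keep only improvements
        best, delays = g, trial
print(round(initial, 3), "->", round(best, 3))
```

The same harness, with the stopband grid of the real design and a proper differential-evolution update, is the kind of search that yields delay sets like those in Table 18.4.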

to less than 37.9 dB. The respective transfer characteristic is shown by curve (2) in Figure 18.13.
To improve the filter characteristic, delay line adjustment has been applied. The values of the delay elements are optimized using an optimization procedure based on differential evolution. The respective transfer characteristic, which has been obtained with the simple coefficients, is shown by curve (3) in Figure 18.13. The stopband attenuation is improved to more than 45 dB by using the optimized delay values listed in Table 18.4. Note that the overall delay, the group delay, and the pass-band behavior of the filter are only slightly affected; the optimized filter is still linear phase.
To evaluate the influence of delay tolerances, a Monte Carlo analysis has been performed. For the simulation, a standard deviation of 0.5% for the tolerances of the delay values was assumed. The respective histogram for the minimum stopband attenuation a_s is shown in Figure 18.14. The mean value of the minimum stopband attenuation is 43 dB; in more than 90% of the cases, the attenuation in the stopband is better than 42 dB, confirming the good sensitivity properties of this structure with respect to tolerances of the delay elements. These mismatch requirements can be directly translated into mismatch specifications on transistor level. Thus, the strength of the proposed method has been confirmed by the example. It has been shown that the adjustment of the delay elements gives an additional degree of freedom for the optimization of continuous-time digital filters.

[Figure 18.14 plot: histogram, relative frequency 0–30% versus minimum stopband attenuation a_s, 41–45 dB.]

FIGURE 18.14
Histogram of the minimum stopband attenuation a_s for a standard deviation of 0.5% for the tolerances of the delays.

18.4.3 Optimized Continuous-Time Recursive Low-Pass Filter

As a second example, we will consider a recursive low-pass filter. The pass-band and stopband edge frequencies are 3.4 kHz and 4.6 kHz, respectively, for a minimum stopband attenuation of 30 dB and a pass-band ripple better than 0.2 dB. The unit delay was chosen as 62.5 μs, corresponding to a repetition frequency for the transfer characteristic of 16 kHz. It turned out that a bireciprocal wave digital filter [8,10,24,25] of fifth order fulfills the specification. The respective filter structure is shown in Figure 18.15. The coefficient values are truncated to 6 bits, and the values of the delay elements are given in Table 18.5. The transfer characteristic is shown in Figure 18.16.
An optimization of the filter characteristic and of the coefficient values has been performed by adjusting the delay lines, starting with the values in Table 18.5. The respective optimized values are also listed in the table. Even though the coefficient values are simplified, the minimum stopband attenuation is improved by about 7 dB to more than 37 dB. The respective filter characteristic is also shown in Figure 18.16.
Thus, the strength of the proposed method could be confirmed by both examples. It has been shown that the adjustment of the delay elements gives an additional degree of freedom for optimization of continuous-time digital filters.
Note that if all filter delays have the same value T_D, the filter transfer characteristic normally is replicated with the frequency 1/T_D. This does not hold anymore, however, if the filter characteristic is optimized by delay

[Figure 18.15 diagram: signal-flow structure with input x(t), delay elements D1, D2, D3, adaptor coefficients B1 and B2, adders, and output y(t).]

FIGURE 18.15
Fifth-order bireciprocal wave digital filter.

TABLE 18.5
Coefficient and Delay Values of the Recursive Filter

                      B1             B2           D1         D2         D3
Sampled data filter   2^−2 + 2^−5    −1 + 2^−2    T          2T         2T
CT filter             2^−2           −1 + 2^−2    1.09 · T   2.10 · T   2.16 · T
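A design like the one in Table 18.5 can be sanity-checked numerically. The sketch below assumes the standard decomposition of a fifth-order bireciprocal wave digital filter into two parallel all-pass branches; which of B1 and B2 drives which branch is an assumption for illustration, not taken from Figure 18.15. Regardless of that assignment, the structure gives |H| = 1 at DC, |H| = 0 at fs/2, and the bireciprocal (power-complementary) property |H(f)|² + |H(fs/2 − f)|² = 1.

```python
import cmath

# Sketch under assumptions: the fifth-order bireciprocal WDF evaluated
# through the classical two-branch all-pass decomposition
#   H(z) = 0.5 * (A_B2(z**-2) + z**-1 * A_B1(z**-2)),
# with first-order all-pass sections A_g(w) = (g + w) / (1 + g*w).
B1 = 2**-2 + 2**-5          # sampled-data coefficients from Table 18.5
B2 = -1 + 2**-2
fs = 16000.0                # repetition frequency; unit delay T = 62.5 us

def allpass(g, w):
    """First-order all-pass section in the variable w = z**-2."""
    return (g + w) / (1 + g * w)

def H(f):
    zinv = cmath.exp(-2j * cmath.pi * f / fs)
    return 0.5 * (allpass(B2, zinv**2) + zinv * allpass(B1, zinv**2))

print(abs(H(0.0)))                             # ~1 at DC
print(abs(H(fs / 2)))                          # ~0 at fs/2
# power-complementary check at the 3.4/4.6 kHz band edges (~1):
print(abs(H(3400.0))**2 + abs(H(4600.0))**2)
```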

[Figure 18.16 plot: level (dB), −10 to −80, versus frequency f (kHz), 0 to 8; curves (1), (2).]

FIGURE 18.16
Transfer functions of the fifth-order bireciprocal wave digital filter, curve (1), sampled data filter, curve (2), optimized CT filter.
[Figure 18.17 plot: level (dB), 0 to −80, versus frequency f (kHz), 0 to 16; curves (1), (2).]

FIGURE 18.17
Transfer functions of the recursive wave digital filter for a wider frequency range, curve (1), sampled data filter, curve (2), optimized CT filter.

line adjustment in the band of interest. The respective filter characteristic is shown for a wider frequency band in Figure 18.17. The stopband attenuation drops in the out-of-band region from 8 kHz to 11.4 kHz to less than 20 dB.

18.5 Summary

Event-driven CT DSP systems can have a number of advantages compared with conventional sampled data systems. Due to the nonsampled operation, no aliasing of higher frequency components and of noise occurs, and due to the asynchronous operation, the power consumption can be minimized. Since the implementations proposed up to now, however, require a considerably larger chip area than the respective sampled data systems, continuous-time systems have not yet reached the maturity for commercial implementation.
Therefore, in this contribution, a number of improvements have been proposed, which result in considerable savings with respect to the required hardware and with respect to power consumption. Since the continuous-time delay elements are the most critical functional blocks with respect to chip area and power consumption, a new CT base cell has been proposed. Using this new cell, the chip area can be reduced by up to 90% at the cost of only minor performance losses. Furthermore, the drawbacks of delta modulation can be avoided by using a conventional continuous-time quantizer and performing digital signal processing with the full word length. The proposed delay element enables parallel processing with only minor additional hardware costs.
Another way to reduce implementation costs for CT DSP systems is to perform quantization by ASDM. This quantizer type can generate a 1-bit time-continuous digital signal without the drawbacks of delta modulation. This approach was analyzed in detail in Section 18.3.
In Section 18.4, filter structures that are well suited for implementation in continuous time are analyzed. It has been shown that an additional degree of freedom for the filter optimization process is given, since the filter characteristic depends on the length of the CT delay lines. By adjusting the values of the delays, the filter characteristic can be improved and the coefficient values can be simplified. Thus, the hardware requirements for an implementation can also be reduced if the filter is properly designed. This has been verified by applying the method to a 32nd-order FIR-filter and to a fifth-order recursive filter.
The proposed methods can bring continuous-time systems an important step closer to commercial application. Event-driven systems based on continuous-time DSPs can be attractive for a number of applications including biomedical implants.

Bibliography

[1] Y. P. Tsividis. Event-driven data acquisition and digital signal processing—A tutorial. IEEE Transactions on Circuits and Systems—II, 57(8):577–581, 2010.

[2] C. Weltin-Wu and Y. Tsividis. An event-driven clockless level-crossing ADC with signal-dependent adaptive resolution. IEEE Journal of Solid-State Circuits, 48(9):2180–2190, 2013.

[3] B. J. Rao, O. V. Krishna, and V. S. Kesav. A continuous time ADC and digital signal processing system for smart dust wireless sensor applications. International Journal in Engineering and Technology, 2(6):941–949, 2013.

[4] Y. P. Tsividis. Mixed-domain systems and signal processing based on input decomposition. IEEE Transactions on Circuits and Systems—I: Regular Papers, 53(10):2145–2156, 2006.

[5] Y. W. Lee, K. L. Shepard, and Y. P. Tsividis. A continuous-time programmable digital FIR-filter. IEEE Journal of Solid-State Circuits, 39(21):2512–2520, 2006.

[6] B. Schell and Y. P. Tsividis. A continuous-time ADC/DSP/DAC system with no clock and with activity-dependent power dissipation. IEEE Journal of Solid-State Circuits, 43(11):2472–2481, 2008.

[7] F. Aeschlimann, E. Allier, L. Fesquet, and M. Renaudin. Asynchronous FIR filters: Towards a new digital processing chain. In Proceedings of the International Symposium on Asynchronous Circuits and Systems, pp. 198–206, 2004.

[8] D. Brückmann. Design and realization of continuous-time wave digital filters. In Proceedings of ISCAS 2008, pp. 2901–2904, Seattle, WA, 18–21 May 2008.

[9] K. Konrad, D. Brückmann, N. Tavangaran, J. Al-Eryani, R. Kokozinski, and T. Werthwein. Delay element concept for continuous time digital signal processing. In 2013 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 2775–2778, Beijing, China, 19–23 May 2013.

[10] K. Konrad, D. Brückmann, and N. Tavangaran. Optimization of continuous time filters by delay line adjustment. In 2010 53rd IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), pp. 1238–1241, Seattle, WA, 1–4 August 2010.

[11] D. Brückmann, K. Konrad, and N. Tavangaran. Delay line adjustment for the optimization of digital continuous time filters. In ICECS 2010, 17th IEEE International Conference on Electronics, Circuits, and Systems, pp. 843–846, 12–15 December 2010.

[12] J. Al-Eryani, A. Stanitzki, K. Konrad, N. Tavangaran, D. Brückmann, and R. Kokozinski. Continuous-time digital filters using delay cells with feedback. In 13. ITG/GMM-Fachtagung (Analog 2013), Aachen, 4–6 March 2013.

[13] S. R. Norsworthy, R. Schreier, and G. C. Temes. Delta-Sigma Data Converters: Theory, Design, and Simulation. IEEE Press, Piscataway, NJ, 1997.

[14] O. Shoaei and W. M. Snelgrove. Optimal (bandpass) continuous-time sigma delta modulator. Proceedings of ISCAS, 5:489–492, 1994.

[15] S. Ouzounov, E. Roza, J. A. Hegt, G. van der Weide, and A. H. M. van Roermund. Analysis and design of high-performance asynchronous sigma-delta modulators with a binary quantizer. IEEE Journal of Solid-State Circuits, 41(3):588–596, 2006.

[16] E. Roza. Analog-to-digital conversion via duty-cycle modulation. IEEE Transactions on Circuits and Systems, 44(11):907–914, 1997.

[17] P. Benabes, M. Keramat, and R. Kielbasa. A methodology for designing continuous-time sigma delta modulators. In IEEE European Design and Test Conference (ED&TC), pp. 46–50, March 1997.

[18] L. Breems and J. H. Huijsing. Continuous-Time Sigma-Delta Modulation for A/D Conversion in Radio Receivers. Springer, Heidelberg/Berlin/New York, NY, 2001.

[19] J. Daniels, W. Dehaene, M. S. J. Steyaert, and A. Wiesbauer. A/D conversion using asynchronous delta-sigma modulation and time-to-digital conversion. IEEE Transactions on Circuits and Systems I: Regular Papers, 57(9):2404–2412, 2010.

[20] N. Tavangaran, D. Brückmann, R. Kokozinski, and K. Konrad. Continuous time digital systems with asynchronous sigma delta modulation. In EUSIPCO 2012, pp. 225–229, Bucharest, Romania, 27–31 August 2012.

[21] J. G. Proakis and D. G. Manolakis. Digital Signal Processing: Principles, Algorithms, and Applications, 4th edition. Pearson Prentice Hall, Upper Saddle River, NJ, 2007.

[22] C. Vezyrtzis, W. Jiang, S. M. Nowick, and Y. Tsividis. A flexible, clockless digital filter. In Proceedings of the IEEE European Solid-State Circuits Conference, pp. 65–68, September 2013.

[23] C. Vezyrtzis, W. Jiang, S. M. Nowick, and Y. Tsividis. A flexible, event-driven digital filter with frequency response independent of input sample rate. IEEE Journal of Solid-State Circuits, 49(10):2292–2304, 2014.

[24] A. Fettweis. Wave digital filters: Theory and practice. Proceedings of the IEEE, 74:270–327, 1986.

[25] D. Brückmann and L. Bouhrize. Filter stages for a high-performance reconfigurable radio receiver with minimum system delay. In Proceedings of ICASSP 2007, Honolulu, HI, 15–20 April 2007.

[26] M. Renaudin. Asynchronous circuits and systems: A promising design alternative. Journal of Microelectronic Engineering, 54:133–149, 2000.

[27] M. Kurchuk and Y. P. Tsividis. Signal-dependent variable-resolution clockless A/D conversion with application to continuous-time digital signal processing. IEEE Transactions on Circuits and Systems—I: Regular Papers, 57(5):982–991, 2010.

[28] D. Brückmann, T. Feldengut, B. Hosticka, R. Kokozinski, K. Konrad, and N. Tavangaran. Optimization and implementation of continuous time DSP-systems by using granularity reduction. In 2011 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 410–413, 15–18 May 2011.

[29] N. Tavangaran, D. Brückmann, T. Feldengut, B. Hosticka, K. Konrad, R. Kokozinski, and R. Lerch. Effects of jitter on continuous time digital systems with granularity reduction. In EUSIPCO 2011, Barcelona, Spain, 29 August–2 September 2011.

[30] J. Al-Eryani, A. Stanitzki, K. Konrad, N. Tavangaran, D. Brückmann, and R. Kokozinski. Low-power area-efficient delay element with a wide delay range. In IEEE International Conference on Electronics, Circuits and Systems (ICECS), Seville, Spain, 9–12 December 2012.
19
Asynchronous Processing of Nonstationary Signals

Azime Can-Cimino
University of Pittsburgh
Pittsburgh, PA, USA

Luis F. Chaparro
University of Pittsburgh
Pittsburgh, PA, USA

CONTENTS
19.1 Introduction .......... 441
19.2 Asynchronous Sampling and Reconstruction .......... 443
     19.2.1 LC Sampling and Reconstruction .......... 443
     19.2.2 ASDM Sampling and Reconstruction .......... 444
19.3 ASDM Signal Decompositions .......... 446
     19.3.1 Simulations .......... 448
19.4 Modified ASDM .......... 449
     19.4.1 ASDM/MASDM Bank of Filters .......... 451
19.5 Conclusions .......... 454
Bibliography .......... 455

ABSTRACT  Representation and processing of nonstationary signals, from practical applications, are complicated by their time-varying statistics. Signal- and time–frequency-dependent asynchronous approaches discussed in this chapter provide efficient sampling and data storage, low-power consumption, and analog processing of such signals. The asynchronous procedures considered and contrasted are based on the level-crossing (LC) approach and the asynchronous sigma delta modulator (ASDM). LC sampling results in nonuniform sampling, and the signal reconstruction requires better localized functions than the sinc, in this case the prolate spheroidal wave functions, as well as regularization. An ASDM maps its analog input into a binary output with zero-crossing times depending on the input amplitude. Duty-cycle analysis allows development of low- and high-frequency signal decomposers, and by modifying the ASDM, a filter bank analysis is developed. These approaches are useful in denoising, compression, and decomposition of signals in biomedical and control applications.

19.1 Introduction

In this chapter, we discuss issues related to the asynchronous data acquisition and processing of nonstationary signals that result from practical applications. Data acquisition, storage, and processing of these signals are challenging issues; such is the case, for instance, in continuous monitoring in biomedical and sensing network applications. Moreover, low-power consumption and analog processing are important requirements for devices used in biomedical applications. The above issues are addressed by asynchronous signal processing [1,6,7,16,20,29].
Typically, signals resulting from practical applications are nonstationary, and the variation over time of the signal statistics characterizes them. As such, the representation and processing of nonstationary signals are commonly done assuming the statistics either do not change with time (stationarity) or remain constant in short time intervals (local stationarity). Better


approaches result by using joint time–frequency spectral characterization that directly considers the time variability [8,32]. According to the Wold–Cramer representation [28], a nonstationary process can be thought of as the output of a time-varying system with white noise as input. It can thus be shown that the distribution of the power of a nonstationary signal is a function of time and frequency [18,19,28]. By ignoring that the frequency content is changing with time, conventional synchronous sampling collects unnecessary samples in segments where the signal is quiescent. A more appropriate approach would be to make the process signal-dependent, leading to a nonuniform sampling that can be implemented with asynchronous processes.
Power consumption and type of processing are necessary constraints for devices used to process signals from biomedical and sensor network applications. In brain–computer interfacing, for instance, the size of the devices, the difficulty in replacing batteries, possible harm to the patient from high frequencies generated by fast clocks, and the high power cost incurred in data transmission call for asynchronous methodologies [9,13,14,25,26], and for analog rather than digital processing [38,39].
Sampling and reconstruction of nonstationary signals require a signal-dependent approach. Level-crossing (LC) sampling [13,14] is a signal-dependent sampling method such that, for a given set of quantization levels, the sampler acquires a sample whenever the signal coincides with one of those levels. LC sampling is thus independent of the bandlimited signal constraint imposed by the sampling theory and has no quantization error in the amplitude. Although very efficient in the collection of significant data from the signal, LC sampling requires an a priori set of quantization levels and results in nonuniform sampling, where for each sample we need to know not only its value but also the time at which it occurs. Quantization in the LC approach is required for the sample times rather than the amplitudes. Selecting quantization levels that depend on the signal, rather than fixed levels, as is commonly done, is an issue of research interest [10,14].
The signal dependency of LC—akin to signal sparseness in compressive sensing—allows efficient sampling for nonstationary signals but complicates their reconstruction. Reconstruction based on the Nyquist–Shannon sampling theory requires the signal to be bandlimited. In practice, the bandlimitedness condition and Shannon's sinc interpolation are not appropriate, and the reconstruction is not a well-posed problem, when considering nonstationary signals. The prolate spheroidal wave (PSW) or Slepian functions [35], having better time and frequency localization than the sinc function [9,40], are more appropriate for the signal reconstruction. These functions allow more compression in the sampling [9,11], and whenever the signal has band-pass characteristics, modulated PSW functions provide parsimonious representations [27,34].
Sampling and reconstruction of nonstationary signals are also possible using the asynchronous sigma delta modulator (ASDM) [2,23–25,30]. The ASDM is an analog, low-power, nonlinear feedback system that maps a bounded analog input signal into a binary output signal with zero-crossing times that depend on the amplitude of the input signal. The ASDM operation can be shown to be that of an adaptive LC sampling scheme [10]. Using duty-cycle modulation to analyze the ASDM, a multilevel representation of an analog signal in terms of localized averages—computed in windows with supports that depend on the amplitude of the signal—is obtained. Such a representation allows us to obtain a signal decomposition that generalizes the Haar wavelet representation [5]. This is accomplished by latticing the time–frequency plane, choosing fixed frequency ranges and allowing the time windows to be set by the signal in each of these frequency ranges. It is possible to obtain multilevel decompositions of a signal by cascading modules using ASDMs, sharing the greedy characteristics of compressive sensing. We thus have two equivalent representations of a signal: one resulting from the binary output of the ASDM and the other depending on the duty-cycle modulation.
The local characterization of the input signal of an ASDM is given by an integral equation that depends on the parameters of the ASDM and on the zero-crossing times of the binary output. Rearranging the feedback system, the resulting modified ASDM can be characterized by a difference equation that can be used to obtain nonuniform samples of the input. In a way, this is equivalent to an asynchronous analog-to-digital converter that requires the quantization of the sampling times. Using the difference equation to recursively estimate input samples and the quantized sampling times, it is possible to reconstruct the analog input signal.
With the modified ASDM, and latticing the joint time–frequency space into defined frequency bands and time ranges depending on the scale parameter, we obtain a bank-of-filters decomposition for nonstationary signals, which is similar to those obtained from wavelet representations. This provides a way to efficiently process nonstationary signals. The asynchronous approaches proposed in this chapter are especially well suited for processing signals that are sparse in time, and signals in low-power applications [6]. The different approaches are illustrated using synthetic and actual signals.
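The level-crossing idea described above can be sketched in a few lines of code: scan a densely simulated "analog" waveform and record a pair (t_k, q) whenever one of the preset levels q is crossed. The test signal and levels below are made up for illustration; note how the samples concentrate in the active portion of the signal and none occur in the quiescent tail.

```python
import math

# Illustrative LC sampling sketch: emit a sample only when the signal
# crosses one of the preset quantization levels q.
levels = [-0.75, -0.5, -0.25, 0.25, 0.5, 0.75]
dt = 1e-4

def x(t):
    # bursty test signal: active at the start, quiescent afterwards
    return math.sin(2 * math.pi * 10 * t) * math.exp(-5 * t)

samples = []                      # (crossing time, level) pairs
t, prev = 0.0, x(0.0)
while t < 2.0:
    cur = x(t + dt)
    for q in levels:
        if (prev - q) * (cur - q) < 0:     # level q crossed in (t, t+dt]
            samples.append((t + dt, q))
    prev, t = cur, t + dt

active = sum(1 for tk, _ in samples if tk < 0.5)
quiet = len(samples) - active
print(len(samples), active, quiet)         # samples cluster where x(t) is active
```

The recorded amplitude is a level value, so there is no amplitude quantization error; the information sits in the nonuniform times t_k, which is what must be quantized in practice.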

19.2 Asynchronous Sampling and Reconstruction

According to the Wold–Cramer decomposition [28], a nonstationary process ξ(t) can be represented as the output of a linear, time-varying system with impulse response h(t, τ) and white noise ε(t) as input:

    \xi(t) = \int_{-\infty}^{\infty} h(t, \tau)\, \varepsilon(\tau)\, d\tau.    (19.1)

The white noise ε(t), as a stationary process, is expressed in terms of sinusoids with random amplitudes and phases as

    \varepsilon(t) = \int_{-\infty}^{\infty} e^{j\Omega t}\, dZ(\Omega),        (19.2)

where the process Z(Ω) is a random process with orthogonal increments such that

    E[dZ(\Omega_1)\, dZ^{*}(\Omega_2)] = 0, \quad \Omega_1 \neq \Omega_2,

and

    E[|dZ(\Omega)|^2] = \frac{d\Omega}{2\pi}.

Replacing (19.2) into (19.1), we have

    \xi(t) = \int_{-\infty}^{\infty} H(t, \Omega)\, e^{j\Omega t}\, dZ(\Omega),  (19.3)

where H(t, Ω) is the generalized transfer function of the linear time-varying system. Thus, the mean and variance of ξ(t) are functions of time, and the Wold–Cramer evolutionary spectrum is defined as

    S(t, \Omega) = |H(t, \Omega)|^2.                                             (19.4)

Accordingly, the distribution of the power of the nonstationary process ξ(t) for each time t is a function of the frequency Ω [18,19], and the sampling of nonstationary signals, given their time-varying spectra, requires a continuously changing sampling time. In segments where there is a great deal of variation of the signal, the sampling period should be small, and it should be large in segments where the signal does not change much. Thus, sampling using a uniform sampling period—determined by the highest frequency in the signal—is wasteful in segments with low activity; that is, the sampling should be signal dependent and nonuniform. As we will see, it is important to consider not only the frequency content of the signal but also its amplitude.

19.2.1 LC Sampling and Reconstruction

The LC sampler [13] is signal dependent and efficient, especially for signals sparse in time. For a set of amplitude quantization levels {q_i}, an LC sampler acquires a sample whenever the amplitude of the signal coincides with one of these given quantization levels. It thus results in nonuniform sampling: more samples are taken in regions where the signal has significant activity, and fewer where the signal is quiescent. The nonuniformity of the sampling requires that the value of the sample and the time at which it occurs be kept—a more complex approach than synchronous sampling—but LC sampling has the advantage that no bandlimiting constraint is required. From the sample values obtained by the LC sampler, the signal ξ(t) is approximated by a multilevel signal

    \hat{\xi}(t) = \sum_{k} q_k\, p_k(t),                                        (19.5)

for a unit pulse p_k(t) = u(t − t_k) − u(t − t_{k+1}), t_{k+1} > t_k, where u(t) is the unit-step function and q_k is the quantization level that coincides with ξ(t_k). Typically, the quantization levels are chosen uniformly, but better choices have been considered [10,13]. Reconstruction from the nonuniform samples {ξ(t_k)} can be attained using the prolate spheroidal wave functions (PSWFs) {φ_m(t)} [9,35,40].
The PSWFs, among all orthogonal bases defined in a time-limited domain [−T, T], are the ones with maximum energy concentration within a given band of frequencies (−W, W). They are eigenfunctions of the integral operator

    \phi_n(t) = \frac{1}{\lambda_n} \int_{-T}^{T} \phi_n(x)\, s(t - x)\, dx,     (19.6)

where s(t) is the sinc function and {λ_n} are the corresponding eigenvalues.
Given that the PSWFs {φ_n(t)} are orthogonal in infinite and finite supports, the sinc function s(t) can be expanded in terms of them for a sampling period T_s as

    s(t - kT_s) = \sum_{m=0}^{\infty} \phi_m(kT_s)\, \phi_m(t),                  (19.7)

so that Shannon's sinc interpolation [9,40] of a signal ξ(t) becomes

    \xi(t) = \sum_{m=0}^{\infty} \left[ \sum_{k=-\infty}^{\infty} \xi(kT_s)\, \phi_m(kT_s) \right] \phi_m(t).   (19.8)

If ξ(t) is finitely supported in 0 ≤ t ≤ (N − 1)T_s and has a Fourier transform Ξ(Ω) essentially bandlimited in a frequency band (−W, W) (i.e., most of the signal energy is in this band), then ξ(t) can be expressed as

    \xi(t) = \sum_{m=0}^{M-1} \gamma_m\, \phi_m(t),                              (19.9)
444 Event-Based Control and Signal Processing

where

γ_m = ∑_{k=0}^{N−1} ξ(kT_s) φ_m(kT_s),

where the value of M is chosen from the eigenvalues {λ_n} indicating the energy concentration.

Using PSWFs, N_c sample values {ξ(t_k)}, acquired by an LC sampler at nonuniform times {t_k, k = 0, . . ., N_c − 1}, are represented by a matrix equation [11]

ξ(t_k) = Φ(t_k) γ_M,    (19.10)

where Φ(t_k) is an N_c × M matrix, in general not square as typically M < N_c, with entries the PSWFs {φ_i(t_k)} computed at the times {t_k} of the vector t_k. The expansion coefficients {γ_m} in the vector γ_M are found as

γ_M = Φ^†(t_k) ξ(t_k),    (19.11)

where † indicates the pseudoinverse. Replacing these expansion coefficients in (19.9) provides at best an approximation to the original signal. However, depending on the distribution of the time samples, the above problem could become ill-conditioned, requiring regularization techniques [15,37].

The Tikhonov regularized solution of the measurement equation (19.10) can be posed as the mean-square minimization:

γ_Mμ = arg min_{γ_M} { ‖Φ(t_k) γ_M − ξ(t_k)‖² + μ ‖γ_M‖² },    (19.12)

where μ is a trade-off between losses and smoothness of the solution, and for μ = 0 the solution coincides with the pseudoinverse solution (19.11). The optimal value for the regularization parameter μ is determined ad hoc [11,15]. The regularized solution γ_Mμ of (19.12) is given by

γ_Mμ = (Φ(t_k)^T Φ(t_k) + μI)^{−1} Φ(t_k)^T ξ(t_k).    (19.13)

The above illustrates that the sampling of nonstationary signals using the LC sampler is complicated by the need to keep samples and sampling times in the nonuniform sampling, and that the signal reconstruction is in general not a well-posed problem, requiring regularization with ad hoc parameters.

19.2.2 ASDM Sampling and Reconstruction

Sampling using an ASDM [10] provides a better alternative to the LC sampler. An ASDM [25,30] is a nonlinear analog feedback system that operates at low power and consists of an integrator and a Schmitt trigger (see Figure 19.1). It has been used to time-encode bounded and bandlimited analog signals into continuous-time signals with binary amplitude [25].

Given a bounded signal x(t), |x(t)| ≤ c, for an appropriate value of the scale parameter κ of the ASDM, it maps the amplitude of x(t) into a binary signal z(t) of amplitude ±b, where b > c. If in t_k ≤ t ≤ t_{k+1} the output of the Schmitt trigger is the binary signal z(t) = b(−1)^{k+1}[u(t − t_k) − u(t − t_{k+1})], where u(t) is the unit-step function, then the output y(t) of the integrator is bounded, that is, |y(t)| < δ. As such, the difference y(t_{k+1}) − y(t_k) equals

(1/κ) [ ∫_{t_k}^{t_{k+1}} x(τ) dτ − (−1)^{k+1} b(t_{k+1} − t_k) ] = (−1)^k 2δ,

or the integral equation [25]

∫_{t_k}^{t_{k+1}} x(τ) dτ = (−1)^k [−b(t_{k+1} − t_k) + 2κδ],    (19.14)

relating the amplitude information of x(t) to the duration of the pulses in z(t)—or equivalently, to the zero-crossing times {t_k}. Approximating the integral in (19.14) provides an approximation of the signal using the zero-crossing times [23].

The ASDM parameters b and κ relate to the signal as follows: if |x(t)| < c, b is chosen to be b > c to guarantee, together with κ, that the output of the integrator increases and decreases within bounds ±δ; κ is connected with the amplitude and the maximum frequency of the signal as shown next. A sufficient condition for the reconstruction of the original signal from nonuniform samples is [22]

max_k (t_{k+1} − t_k) ≤ T_N,    (19.15)

where T_N = 1/(2 f_max) is the Nyquist sampling period for a bandlimited signal with maximum frequency f_max.
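The regularized solution (19.13) is just a damped normal-equations solve. As a minimal numerical sketch, assuming a small cosine basis as a stand-in for the PSWFs (computing true Slepian functions requires an eigendecomposition) and hypothetical nonuniform sampling times:

```python
import math

def tikhonov_solve(Phi, xi, mu):
    """Regularized solution (19.13): gamma = (Phi^T Phi + mu I)^(-1) Phi^T xi."""
    N, M = len(Phi), len(Phi[0])
    A = [[sum(Phi[n][i] * Phi[n][j] for n in range(N)) + (mu if i == j else 0.0)
          for j in range(M)] for i in range(M)]
    rhs = [sum(Phi[n][i] * xi[n] for n in range(N)) for i in range(M)]
    # Gaussian elimination with partial pivoting on the damped normal equations
    for c in range(M):
        p = max(range(c, M), key=lambda r: abs(A[r][c]))
        A[c], A[p], rhs[c], rhs[p] = A[p], A[c], rhs[p], rhs[c]
        for r in range(c + 1, M):
            f = A[r][c] / A[c][c]
            A[r] = [arj - f * acj for arj, acj in zip(A[r], A[c])]
            rhs[r] -= f * rhs[c]
    gamma = [0.0] * M
    for r in range(M - 1, -1, -1):
        gamma[r] = (rhs[r] - sum(A[r][j] * gamma[j]
                                 for j in range(r + 1, M))) / A[r][r]
    return gamma

# Hypothetical cosine basis (a stand-in for the PSWFs) and nonuniform times
basis = [lambda t, m=m: math.cos(2 * math.pi * m * t) for m in range(4)]
gamma_true = [0.5, 1.0, -0.7, 0.3]
times = [0.01, 0.07, 0.16, 0.22, 0.35, 0.41, 0.58, 0.66, 0.79, 0.93]
Phi = [[phi(t) for phi in basis] for t in times]                  # as in (19.10)
xi = [sum(g * phi(t) for g, phi in zip(gamma_true, basis)) for t in times]

gamma_hat = tikhonov_solve(Phi, xi, mu=1e-8)
```

For μ = 0 this reduces to the pseudoinverse solution (19.11); increasing μ trades fidelity for smoothness when Φ(t_k) is ill-conditioned.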

FIGURE 19.1
Asynchronous sigma delta modulator.
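The encoding described by (19.14) can be simulated with a crude Euler discretization of the loop in Figure 19.1; the signal, step size, and parameter values below are arbitrary choices. For a constant input x(t) = c, the crossing intervals alternate between κ/(b + c) and κ/(b − c), and the duty cycle of two consecutive intervals recovers the normalized amplitude c/b:

```python
def asdm_encode(x_samples, dt, kappa, delta=0.5, b=1.0):
    # Euler simulation of the ASDM loop of Figure 19.1:
    # y'(t) = (x(t) - z(t)) / kappa, with the Schmitt trigger flipping
    # z between +b and -b when y(t) reaches -delta or +delta
    y, z, times = 0.0, b, []
    for n, xn in enumerate(x_samples):
        y += dt * (xn - z) / kappa
        if (z > 0 and y <= -delta) or (z < 0 and y >= delta):
            z = -z
            times.append(n * dt)
    return times

# Constant input x(t) = c (hypothetical values, with b = 1 > c)
c, b, kappa, dt = 0.3, 1.0, 0.05, 1e-4
tk = asdm_encode([c] * 20000, dt, kappa, b=b)     # 2 s of signal
gaps = [t2 - t1 for t1, t2 in zip(tk, tk[1:])]

# Per (19.17)-(19.18), the duty cycle of two consecutive crossing
# intervals estimates the local average c/b of the normalized input
zeta = abs(gaps[1] - gaps[2]) / (gaps[1] + gaps[2])
```

The crossing intervals also satisfy the bounds κ/(b + c) ≤ t_{k+1} − t_k ≤ κ/(b − c), up to the simulation step size.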
Asynchronous Processing of Nonstationary Signals 445

Using Equations 19.15 and 19.14, and that |x(t)| < c, we have that

κ/(b + c) ≤ t_{k+1} − t_k ≤ κ/(b − c) ≤ 1/(2 f_max),    (19.16)

displaying the relation between κ, the difference in amplitude b − c, and the maximum frequency f_max of the signal. Because of this, the representation given by (19.14) requires x(t) to be bandlimited.

The binary rectangular pulses z(t), connected with the amplitude of the signal x(t), are characterized by the duty cycle 0 < α_k/T_k < 1 of two consecutive pulses of duration T_k = α_k + β_k, where α_k = t_{k+1} − t_k and β_k = t_{k+2} − t_{k+1} [30]. Letting

ζ_k = (α_k − β_k)/(α_k + β_k),    (19.17)

we have that the duty cycle is given by α_k/T_k = (1 + ζ_k)/2, where, as shown next, ζ_k is the local average of x(t) in t_k ≤ t ≤ t_{k+2}. Indeed, the integral equation (19.14) can be written in terms of z(t) as

∫_{t_k}^{t_{k+1}} x(τ) dτ = (−1)^{k+1} ∫_{t_k}^{t_{k+1}} z(τ) dτ + 2(−1)^k κδ,

which can be used to get a local estimate of the signal average ζ_k in [t_k, t_{k+2}]. Using the above equation to find the integral in [t_k, t_{k+2}] and dividing it by T_k, we have

(1/T_k) ∫_{t_k}^{t_{k+2}} x(τ) dτ = ((−1)^{k+1}/T_k) [ ∫_{t_k}^{t_{k+1}} z(t) dt − ∫_{t_{k+1}}^{t_{k+2}} z(t) dt ]
    = (−1)^{k+1} (α_k − β_k)/(α_k + β_k)
    = (−1)^{k+1} ζ_k,    (19.18)

(the first and second integrals correspond to α_k and β_k, respectively), or the local mean of x(t) in the segment [t_k, t_{k+2}].

Thus, for a bounded and bandlimited analog signal, the corresponding scale parameter κ determines the width of an appropriate window—according to the nonuniform zero-crossing times—to compute an estimate of the local average. This way, the ASDM provides either a representation of its input x(t) by its binary output z(t) and the integral equation, or a sequence of local averages {ζ_k} at nonuniform times {t_{2k}}. Different from the LC sampler, the ASDM only requires the sample times. The only drawback of this approach for general signals is the condition that the input signal be bandlimited. A possible alternative to avoid this is to lattice the time–frequency plane using arbitrary frequency windows and time windows connected to the amplitude of the input signal. This results in an asynchronous decomposition, which we discuss next.

It is of interest to remark here the following:

• Using the trapezoidal rule to approximate the integral in Equation 19.14, it is possible to obtain an expression to reconstruct the original signal using the zero-crossing times [10].

• A multilevel signal can be represented by zero crossings using the ASDM. Consider the signal given as the approximation from the LC sampler:

x̂(t) = ∑_{i=−N}^{N} q_i [u(t − τ_i) − u(t − τ_{i+1})],    (19.19)

where u(t) is the unit-step function and {q_i} are given quantization levels. Processing the component x̂_i(t) = q_i[u(t − τ_i) − u(t − τ_{i+1})] with the ASDM, the output would be a binary signal z_i(t) = u(t − t_i) − 2u(t − t_{i+1}) + u(t − t_{i+2}) for some zero-crossing times t_{i+k}, k = 0, 1, 2. Letting t_i = τ_i and t_{i+2} = τ_{i+1}, b > max(x(t)) and δ be a fixed number, we need to find t_{i+1} and the corresponding scale parameter κ_i. If q_i is considered the local average, and α_i = t_{i+1} − τ_i and β_i = τ_{i+1} − t_{i+1}, we obtain the following two equations to find t_{i+1} and κ_i:

(i) q_i = (−1)^{i+1} b (α_i − β_i)/(α_i + β_i),
(ii) κ_i = [q_i α_i + (−1)^k b α_i] / (2δ),

or the local mean and the value of κ_i obtained from the integral equation (19.14). Since α_i + β_i = τ_{i+1} − τ_i and α_i − β_i = 2t_{i+1} − (τ_i + τ_{i+1}), we obtain from equation (i)

t_{i+1} = (τ_i + τ_{i+1})/2 + (−1)^{i+1} q_i (τ_{i+1} − τ_i)/(2b),

that is used to obtain α_i, which we can then use to find κ_i in equation (ii). This implies that for a multilevel signal, the ASDM is an adaptive LC sampler. For each pulse in the multilevel signal, we can find appropriate zero-crossing times to represent it exactly. In the above, the value of κ_i adapts to the width of the pulse so that the values of α_i and β_i are connected with the local average q_i in the window set by κ_i.

For any signal x(t), not necessarily multilevel, the second comment indicates that x(t) can be approximated by a multilevel signal where the q_i values are estimates of the local average for windows set by the parameter κ. The advantage of obtaining such a multilevel representation is that it can be converted into continuous-time

binary signals, which can be processed in the continuous time [21,38,39]. Moreover, since the ASDM circuitry does not include a clock, it consumes low power and is suitable for low-voltage complementary metal-oxide-semiconductor (CMOS) technology [30]. These two characteristics of the ASDM make it very suitable for biomedical data acquisition systems [2,17] where low-power and analog signal processing are desirable.

One of the critical features of the ASDM that motivates further discussion is the modulation rate of its output binary signal, that is, how fast z(t) crosses zero, which depends on the input signal as well as the parameters δ and κ. To see the effect of just κ on the output, we let x(t) = c, δ = 0.5, and b > c, giving

α_k = t_{k+1} − t_k = κ/(c + (−1)^k b),    β_k = κ/(c − (−1)^k b),

so that z(t) is a sequence of pulses that repeat periodically. It thus can be seen that the lower the κ, the faster z(t) crosses zero, and vice versa (see Figure 19.2). We thus consider κ a scale parameter, which we will use to decompose signals. To simplify the analysis, the bias parameter of the ASDM is set to b = 1, which requires normalization of the input signal, so that max(x(t)) < 1 and the threshold parameter can be set to δ = 1/2.

19.3 ASDM Signal Decompositions

A nonstationary signal can be decomposed using the ASDM by latticing the time–frequency plane so that the frequency is divided into certain bands and the time into window lengths depending on the amplitude of the signal. A parameter critical in the decomposition is the scale parameter κ. If we let in (19.16) δ = 0.5 and b = c + Δ, for Δ > 0, values of κ are obtained as

κ ≤ (b − c)/(2 f_max) = Δ/(2 f_max),    (19.20)

so that for a chosen value of Δ, we can obtain for different frequency bands with maximum frequency f_max the corresponding scale parameters. These parameters in turn determine the width of the analysis time windows. Considering the local average ζ_k the best linear estimator of the signal in [t_k, t_{k+2}] when no data are provided, the ASDM time-encoder can be thought of as an optimal LC sampler.

A basic module for the proposed low-frequency decomposition is shown in Figure 19.3. The decomposer consists of a cascade of L of these modules, each one consisting of a low-pass filter, an ASDM, an averager, a smoother, and an adder. The number of modules, L, is determined by the scale parameters used to decompose the input signal. For a scale parameter κ_i, the ASDM maps the input signal into a binary signal with sequences {α_k} and {β_k}, which the averager converts into local averages {ζ_k}. The smoother is used to smooth out the multilevel signal output so that there is no discontinuity inserted by the adder when the multilevel signal is subtracted from the input signal of the corresponding module. Each of the modules operates similarly but at a different scale.

Starting with a scale factor κ_1 = Δ/(2B_0), where B_0 is the bandwidth of the low-pass filter of the first module, the other scales are obtained according to

κ_ℓ = κ_1 / 2^{ℓ−1},    ℓ = 2, . . ., L,

for the ℓth module, by appropriately changing the bandwidth of the corresponding low-pass filter. The width of the analysis windows decreases as ℓ increases. For the first module, we let f_0(t) = x(t), and the input to the modules beyond the first one can be written sequentially as follows:

f_1(t) = x(t) − d_1(t)
f_2(t) = f_1(t) − d_2(t) = x(t) − d_1(t) − d_2(t)
⋮
f_j(t) = x(t) − d_1(t) − d_2(t) − ⋯ − d_j(t)
⋮
f_L(t) = x(t) − ∑_{ℓ=1}^{L} d_ℓ(t),    (19.21)

where the {d_ℓ(t)} are low-frequency components. We thus have the decomposition

x(t) = ∑_{ℓ=1}^{L} d_ℓ(t) + f_L(t).    (19.22)

The component f_L(t) can be thought of as the error of the decomposition. Considering d_ℓ(t) a multilevel signal, we thus have the decomposition

x(t) = ∑_{ℓ=1}^{L} d_ℓ(t) + f_L(t) ≈ ∑_{ℓ=1}^{L} ∑_k ζ_{k,ℓ} p_k(t) + f_L(t),    (19.23)

where as before p_k(t) = u(t − t_k) − u(t − t_{k+2}), and the averages {ζ_{k,ℓ}} depend on the scale ℓ being used. This scale decomposition is analogous to a wavelet decomposition for analog signals.

Also similar to the wavelet analysis, a dual of the above decomposition is possible; that is, nonstationary signals can be decomposed in terms of high-frequency components. A basic module for such a decomposer is shown in Figure 19.4. The input is

FIGURE 19.2
Local-average approximation of a sinusoidal signal with (a) κ = 0.1 and (b) κ = 1. Local-average approximation of a signal with (c) κ = 1 and
(d) κ = 2.
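The cascade producing (19.21) and (19.22) can be sketched structurally. Below, a moving average whose window halves at each stage stands in for the entire LPF, ASDM, averager, and smoother chain of a module (the halving mimics κ_ℓ = κ_1/2^{ℓ−1}); the check is the telescoping identity x(t) = ∑_ℓ d_ℓ(t) + f_L(t), which holds for any choice of per-module extractor:

```python
import math

def cascade_decompose(x, extract, L):
    """Cascade of Figure 19.3 in skeleton form: module l removes a
    low-frequency component d_l from its input, as in (19.21)."""
    f, comps = list(x), []
    for l in range(L):
        d = extract(f, l)
        comps.append(d)
        f = [fi - di for fi, di in zip(f, d)]
    return comps, f                   # components {d_l} and residual f_L

def moving_average(f, l, base_width=8):
    # Stand-in for a module's LPF/ASDM/averager/smoother chain; the
    # window halves with the stage l, mimicking the halving scales
    w = max(1, base_width // (2 ** l))
    return [sum(f[max(0, i - w):i + w + 1]) / len(f[max(0, i - w):i + w + 1])
            for i in range(len(f))]

x = [math.sin(2 * math.pi * 3 * n / 64)
     + 0.4 * math.sin(2 * math.pi * 11 * n / 64) for n in range(64)]
comps, resid = cascade_decompose(x, moving_average, L=3)
recon = [sum(d[n] for d in comps) + resid[n] for n in range(64)]   # (19.22)
```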
FIGURE 19.3
Basic module of the low-frequency decomposer.


FIGURE 19.4
Basic module of the dual high-frequency decomposer.
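The dual cascade of Figure 19.4 follows the same skeleton with a high-frequency extractor in place of the low-frequency one. In this sketch, the detail removed by a three-point smoother is a stand-in for the module's high-frequency path, and the check is again the telescoping identity x = ∑_i e_i + g_M:

```python
def dual_decompose(x, extract_detail, M):
    # Dual cascade (19.24): g_i = g_(i-1) - e_i with g_0 = x, so that
    # x = e_1 + ... + e_M + g_M as in (19.25)
    g, comps = list(x), []
    for i in range(M):
        e = extract_detail(g)
        comps.append(e)
        g = [gj - ej for gj, ej in zip(g, e)]
    return comps, g

def detail(g):
    # Stand-in high-frequency extractor: the part of g removed by a
    # three-point smoother
    n = len(g)
    smooth = [(g[max(j - 1, 0)] + g[j] + g[min(j + 1, n - 1)]) / 3
              for j in range(n)]
    return [gj - sj for gj, sj in zip(g, smooth)]

x = [n / 10.0 + 0.2 * (-1) ** n for n in range(32)]   # ramp plus fast detail
comps, resid = dual_decompose(x, detail, M=2)
recon = [sum(e[n] for e in comps) + resid[n] for n in range(32)]
```

After two stages, the residual g_M carries much less high-frequency detail than the input, while the sum of components and residual reproduces x exactly.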

now expressed in terms of high-frequency components {e_i(t)}. Letting g_0(t) = x(t), we have that

g_1(t) = x(t) − e_1(t)
g_2(t) = x(t) − e_1(t) − e_2(t)
⋮
g_M(t) = x(t) − ∑_{i=1}^{M} e_i(t).    (19.24)

The decomposition in terms of high-frequency components is then given by

x(t) = ∑_{i=1}^{M} e_i(t) + g_M(t),    (19.25)

where g_M(t) can be seen as the error of the decomposition.

In terms of signal compression, the decomposition of the input signal x(t) can be interpreted in two equivalent ways:

1. Time-encoding: If the multilevel signals {m_ℓ(t)} (see Figure 19.3) were inputted into ASDMs, for each of these signals we would obtain binary signals {z_ℓ(t)} with crossing times {t_{ℓ,k}} that would permit us to reconstruct the multilevel signals. As before, these signals could then be low-pass filtered to obtain the {d_ℓ(t)} to approximate x(t) as in (19.22). Thus, the array {t_{ℓ,k}}, 1 ≤ ℓ ≤ L, k ∈ K_ℓ, where K_ℓ corresponds to the number of pulses used in each decomposition module, would provide a representation of the components of the input signal. This interpretation uses time encoding [24].

2. Pulse-modulation: Each {z_ℓ(t)}, the output of the ASDMs in the decomposition, provides random sequences {α_{ℓ,k}, β_{ℓ,k}} from which we can compute sequences of local averages over two-pulse widths and their lengths, or {ζ_{ℓ,k}, T_{ℓ,k}} for 1 ≤ ℓ ≤ L, k ∈ K_ℓ, where K_ℓ corresponds to the number of pulses used in each decomposition module. For a nondeterministic signal, the sequence {ζ_{ℓ,k}} would be random, and as such their distributions would characterize d_ℓ(t) as well as the signal x(t). Thus, {ζ_{ℓ,k}, T_{ℓ,k}} provides the same compression as the one provided by {α_{ℓ,k}, β_{ℓ,k}} and {t_{ℓ,k}}. To obtain higher compression, we could consider the distribution of the {ζ_{ℓ,k}} and ignore values clustered around one of the averages.

The local-average approximation obtained above for scale parameters {κ_k} of the ASDM is similar to the Haar wavelet representation [36], but with the distinction that the time windows are signal dependent instead of being fixed. The pulses with duty cycle defined by the sequence {α_k, β_k} can be regarded as Haar wavelets with signal-dependent scaling and translation [4,7].

19.3.1 Simulations

To illustrate the decompositions, we consider phonocardiograph recordings of heart sounds [31,33]. The heart sounds have a duration of 1.024 s and were sampled


FIGURE 19.5
(a) Multilevel output signal from the first module and (b) its local-mean distribution; (c) multilevel output signal from the second module and
(d) its local-mean distribution.

at a rate of 4000 samples per second. For the signal to approximate an analog signal, it was interpolated by a factor of 8. Two modules were needed to decompose the test signal. Figure 19.5 displays the local-mean distribution and multilevel signal obtained by the two-module decomposition. The multilevel signals obtained in the consecutive modules enable significant compression in information (mainly in the spikes and in the neural firing signals). The reconstruction of the signal from these local averages is shown in Figure 19.6b, where a root mean square error of 2.14 × 10⁻⁶ and a compression ratio of 24.8% are obtained.

To illustrate the robustness to additive noise of the algorithms, we added a zero-mean noise to the original signal (see Figure 19.6c), giving a noisy signal with a signal-to-noise ratio (SNR) of −2.48 dB, and attempted its reconstruction. The reconstructed signal is shown in Figure 19.6d, having an SNR of 27.54 dB. We obtained the following results using performance metrics typically used in biomedical applications:

• The cross-correlation value evaluating the similarity between the reconstructed and original signals:

γ = [ ∑_{n=1}^{N} (x(n) − μ_x)(x̂(n) − μ_x̂) ] / [ √(∑_{n=1}^{N} (x(n) − μ_x)²) √(∑_{n=1}^{N} (x̂(n) − μ_x̂)²) ] × 100 = 99%,

where N is the length of the signal, x is the original (uncontaminated by noise) signal, and x̂ is the reconstructed signal.

• The root mean square error:

√( (1/N) ∑_{n=1}^{N} (x(n) − x̂(n))² ) = 0.0012.

• The maximum error:

ψ = max_n [x(n) − x̂(n)] = 0.0014.

19.4 Modified ASDM

Although the signal dependency of the ASDM provides an efficient representation of an analog signal by means of the zero-crossing times, the asynchronous structure complicates the signal reconstruction by requiring a solution to the integral equation (19.14). By extracting the integrator in the feedforward loop of the ASDM, we obtain a different configuration, which we call the modified asynchronous sigma delta modulator (MASDM) (Figure 19.7). The MASDM provides a better solution for encoding and processing, as we will see, while keeping the signal dependency and the low power [3]. To keep the input of the reduced model x(t) now


FIGURE 19.6
(a) Phonocardiogram recording and (b) reconstructed signal; (c) phonocardiogram recording with additive noise and (d) reconstructed signal.


FIGURE 19.7
Block diagram of the modified asynchronous sigma delta modulator (MASDM).
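The recursion (19.27) is easy to sketch; the crossing times, scale parameter, and initial value below are hypothetical. The same loop also verifies the two-step form obtained by combining consecutive intervals, in which κ̃ cancels:

```python
def masdm_decode(t, kappa, x0):
    """Recursive sample recovery (19.27):
    x(t_(k+1)) = x(t_k) + (-1)**k * (-(t_(k+1) - t_k) + kappa)."""
    x = [x0]
    for k in range(len(t) - 1):
        x.append(x[-1] + (-1) ** k * (-(t[k + 1] - t[k]) + kappa))
    return x

# Hypothetical zero-crossing times, scale parameter, and initial value
t = [0.0, 0.03, 0.09, 0.13, 0.22, 0.25, 0.33]
kappa, x0 = 0.1, 0.2
x = masdm_decode(t, kappa, x0)
```

Combining two consecutive steps of (19.27) gives x(t_{k+2}) = x(t_k) + (−1)^k (t_{k+2} − 2t_{k+1} + t_k), so the scale parameter drops out of the two-step update.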

requires that the input of the MASDM be its derivative dx(t)/dt.

It is very important to notice that the MASDM is equivalent to an ASDM with the input being the derivative of the signal, dx(t)/dt. The new configuration provides a different approach to dealing with the integral equation (19.14). Letting b = 1 (i.e., the input is normalized) and δ = 0.5, Equation 19.14 with input dx(t)/dt now gives

∫_{t̃_k}^{t̃_{k+1}} dx(τ) = x(t̃_{k+1}) − x(t̃_k) = (−1)^k [−(t̃_{k+1} − t̃_k) + κ̃],    (19.26)

or the recursive equation

x(t̃_{k+1}) = x(t̃_k) + (−1)^k [−(t̃_{k+1} − t̃_k) + κ̃],    (19.27)

that will permit us to obtain the samples using an initial value of the signal and the zero-crossing times. The tilde notation for the zero-crossing times and the ASDM scale parameter is used to emphasize that when the input is dx(t)/dt instead of x(t), these values could be different.

A possible module for a low-frequency decomposer is shown in Figure 19.8. The top branch provides an estimate of the dc value of the signal by means of a

FIGURE 19.8
Low-frequency analysis module using the ASDM and MASDM. The filter H_0(s) is a narrowband low-pass, and H_1(s) is a wide-band low-pass filter.

narrowband low-pass filter H_0(s) and an ASDM, while the lower branch uses a wider-bandwidth low-pass filter H_1(s) cascaded with a derivative operator. The input to the MASDM is the derivative of the output of H_1(s). This module displays two interesting features that simplify the analysis by letting the input be the derivative of the low-passed signal:

• For the branch using the MASDM, the integral equation is reduced to a first-order difference equation with input a function of the zero-crossing times {t̃_{1,k}} and a scale parameter κ̃_1. Thus, one only needs to keep the zero-crossing times and the scale parameter to recursively obtain samples of the output signal of the low-pass filter H_1(s) at nonuniform times, rather than solving the integral equation (19.14). The upper branch, using the ASDM, would require an averager (as in the decomposers given before) to obtain an estimate of the mean component x̄(t). The proposed low-frequency decomposer can thus be thought of as a nonuniform sampler that allows the recovery of the sample values recursively using Equation 19.27 and an averager for the mean component.

• The low-pass filter cascaded with the derivative operator is equivalent to a band-pass filter with a zero at zero. The proposed configuration has some overlap in the low frequencies but points to a bank-of-filters structure, which we develop next.

When implementing the MASDM, the value of κ̃ depends on the maximum of the derivative:

c_d = max |dx(t)/dt|.

Although this value would not be available, c_d can be associated with the maximum amplitude c of the input x(t) under the assumption that x(t) is continuous:

|dx(t)/dt| ≤ lim_{T→0} ( |x(t_i + T)| + |x(t_i)| ) / T = lim_{T→0} 2c/T.

Letting T ≤ 1/(2 f_max), where f_max is the maximum frequency of x(t), we have

c_d ≤ 2c/T.    (19.28)

The lack of knowledge of f_max does not permit us to determine a value for the scale parameter κ̃ directly. However, if the bandwidth of the low-pass filter H_1(s) is B [rad/s], the scale parameter should satisfy

κ̃ ≤ π(1 − c_d)/B.    (19.29)

To illustrate the nonuniform sampling using the MASDM, consider the sparse signal shown in Figure 19.9. The location of the nonuniform samples acquired from the signal is shown on the right of the figure. Notice that, as desired, more samples are taken in the sections of the signal where information is available, and fewer or none when there is no information.

19.4.1 ASDM/MASDM Bank of Filters

Asynchronous analysis using filter banks exhibits similar characteristics as wavelets. Indeed, compression and denoising of nonstationary signals are possible by configuring ASDMs and MASDMs into filter banks [36]. To extend the scope of asynchronous processing provided by the previous low- and high-frequency decomposers, we now consider the analysis and synthesis of nonstationary signals using a bank of filters with the ASDM and the MASDM in the analysis. In the synthesis component, we use an averager and a PSWF interpolator. The bank-of-filters structure shown in Figure 19.10 extends the low-frequency decomposer shown in Figure 19.8. We now have a set of low-pass G_0(s) and band-pass filters {sG_i(s), i = 1, . . ., M} that provide bandlimited signals for processing with ASDMs to estimate the dc bias, and with MASDMs to obtain the components at other frequencies. The upper frequency of each of these filters is used to determine the corresponding scale parameters of the ASDMs and MASDMs.

For each of the frequency bands in the bank of filters, the ASDMs and the MASDMs provide the zero-crossing times {t_{i,k}} and the corresponding scale parameters {κ_i} needed to recursively obtain the nonuniform samples in the synthesis component.* As indicated before, the parameters of the MASDM are set according to the derivative of the input, which can be done by using Equations 19.28 and 19.29. An averager in the synthesis part is needed to reconstruct the dc bias signal x̄_0(t), and the recursive equation (19.27) together with an interpolator to obtain the x̂_i(t). Finally, it is important to recognize

*For simplicity, in Figure 19.10 we do not use the tilde notation for the zero-crossing times or the scale parameters.


FIGURE 19.9
Left: nonuniform sampling using the MASDM; right: sample locations.
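The nonuniform sampling of Figure 19.9 can be sketched end to end: an Euler-simulated ASDM driven by the derivative of a test signal plays the role of the MASDM, and the samples at the crossing times are then rebuilt recursively as in (19.27), with b = 1 and δ = 1/2. The test signal, step size, and κ̃ below are arbitrary choices, and the initial sample value is assumed known:

```python
import math

def asdm_encode(inp, dt, kappa, delta=0.5, b=1.0):
    # Euler simulation of the ASDM loop; returns the switching times of
    # z(t) and the value z takes on the interval starting at each switch
    y, z = 0.0, b
    times, zseq = [], []
    for n, v in enumerate(inp):
        y += dt * (v - z) / kappa
        if (z > 0 and y <= -delta) or (z < 0 and y >= delta):
            z = -z
            times.append(n * dt)
            zseq.append(z)
    return times, zseq

# MASDM = ASDM driven by dx/dt (test signal chosen so |dx/dt| < b = 1)
dt, kappa = 1e-5, 0.05
x = lambda t: 0.1 * math.sin(2 * math.pi * t)
deriv = [0.2 * math.pi * math.cos(2 * math.pi * n * dt)
         for n in range(int(2.0 / dt))]
tk, zseq = asdm_encode(deriv, dt, kappa)

# Recursion (19.27) with b = 1, delta = 1/2: the change of x between
# consecutive crossings is z_k * ((t_(k+1) - t_k) - kappa)
x_rec = [x(tk[0])]            # initial sample value assumed known
for k in range(len(tk) - 1):
    x_rec.append(x_rec[-1] + zseq[k] * ((tk[k + 1] - tk[k]) - kappa))

err = max(abs(xr - x(t)) for xr, t in zip(x_rec, tk))
```

The recovered samples track the true signal at the crossing times up to the Euler discretization error, illustrating that only the crossing times (and κ̃) need to be stored.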


FIGURE 19.10
ASDM/MASDM bank of filters: analysis and synthesis components.
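Structurally, the analysis and synthesis components of Figure 19.10 form a band decomposition followed by recombination. In the sketch below, ideal DFT-domain masks stand in for the analog filters G_i(s) and the ASDM/MASDM encoding is omitted; the bands are chosen to tile the spectrum so that the synthesis side recombines them exactly:

```python
import cmath, math

def dft(x, inverse=False):
    # Naive DFT, used here as an ideal stand-in for the analysis filters
    N, s = len(x), (1 if inverse else -1)
    out = [sum(x[n] * cmath.exp(s * 2j * math.pi * k * n / N)
               for n in range(N)) for k in range(N)]
    return [v / N for v in out] if inverse else out

def band_split(x, edges):
    """Analysis: split x into frequency bands; synthesis is just the sum
    of the bands, mirroring the structure of Figure 19.10."""
    N, X = len(x), dft(x)
    bands = []
    for lo, hi in edges:
        Xb = [X[k] if lo <= min(k, N - k) < hi else 0.0 for k in range(N)]
        bands.append([v.real for v in dft(Xb, inverse=True)])
    return bands

N = 64
x = [math.sin(2 * math.pi * 3 * n / N)
     + 0.4 * math.cos(2 * math.pi * 9 * n / N) for n in range(N)]
edges = [(0, 2), (2, 6), (6, 16), (16, N // 2 + 1)]   # dc band + three bands
bands = band_split(x, edges)
recon = [sum(b[n] for b in bands) for n in range(N)]
```

In the actual bank of filters, each band output would be time-encoded by an ASDM or MASDM and rebuilt from crossing times before the summation.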

that one of the zeros at zero of the band-pass filters {sG_i(s)} is used to provide the derivative input to the MASDMs.

As a side remark, the dependence of the recursion (19.27) on κ̃ discussed before can be eliminated by evaluating Equation 19.26 in two consecutive time intervals to get

∫_{t̃_k}^{t̃_{k+2}} dx(τ) = x(t̃_{k+2}) − x(t̃_k) = (−1)^k (t̃_{k+2} − 2t̃_{k+1} + t̃_k),

so that the new recursion is given by

x(t̃_{k+2}) = x(t̃_k) + (−1)^k (t̃_{k+2} − 2t̃_{k+1} + t̃_k).    (19.30)

This not only eliminates the value of κ̃ in the calculations but also reduces the number of sample values {x(t̃_k)}.

To recover the nonuniform samples, we need to transmit the zero crossings for each of the branches of the bank of filters (and possibly the scale parameters, if they are not known a priori or if we do not use the above remark). Lazar and Toth [25] have obtained bounds for the reconstruction error for a given number of bits used to quantize the {t_k}. They show that for a bandlimited signal, quantizing the zero-crossing times is equivalent to quantizing the signal amplitude in uniform sampling. Thus, the time elapsed between two consecutive samples is quantized according to a timer with a period of T_c seconds. Theoretically, the quantization error can be minimized as much as needed by simply reducing T_c, or increasing the number of quantization time levels.

The filter-bank decomposer can be considered a nonuniform analog-to-digital converter, where it is the zero-crossing times that are digitized. Reconstruction of the original analog signal requires a different approach to the one for uniform sampling. As indicated before, the sinc

basis in Shannon's reconstruction can be replaced by the PSWFs [35], which have better time and frequency localization than the sinc function [40]. The procedure is similar to the one used for signal reconstruction from samples obtained by the LC sampler in Section 19.2.1.

Consider, for instance, the reconstruction of the sparse signal shown in Figure 19.9 using the samples {t̃_k} taken by the decomposer. The sample values {x(t̃_k)} are obtained iteratively using Equation 19.27. As in Section 19.2.1, these samples can be represented using a projection of finite dimension with PSWFs as

x(t̃_k) = Φ(t̃_k) α_M,    (19.31)

where α_M is the vector of the expansion coefficients associated with the PSWFs. In some cases, baseband PSWFs do not work well if the signal has band-pass characteristics. Parsimonious representations are possible using modulated PSWFs, which are obtained by multiplying the PSWFs with complex exponentials; that is, the entries of the Φ matrix are then

ψ_{n,τ}(t) = e^{jω_m t} φ_{n,τ}(t),

where {φ_{n,τ}(t)} are baseband PSWFs. Modulated PSWFs keep the time–frequency concentration of the baseband PSWFs while being confined to a certain frequency interval. Appropriate values for the expansion coefficients are obtained again using Tikhonov's regularization.

Figure 19.11 compares the reconstruction errors when using PSWFs and modulated PSWFs. For comparison

FIGURE 19.11
Reconstruction errors when 25% of the samples are used: (a) discrete cosine transform (DCT)-based compressive sensing (signal-to-noise ratio
[SNR] = 7.45 dB); (b) baseband prolate spheroidal wave (PSW) functions (SNR = 7.11 dB); and (c) modulated PSW functions (SNR = 16.54 dB).
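The modulation ψ_{n,τ}(t) = e^{jω_m t} φ_{n,τ}(t) shifts the spectrum of the baseband function without changing its concentration. In the sketch below, a Gaussian pulse stands in for a baseband PSWF (true Slepian functions require an eigenvalue computation), and a DFT confirms that the spectral peak moves to the modulation frequency while the peak magnitude is unchanged:

```python
import cmath, math

N = 128
# Gaussian pulse as a stand-in for a baseband PSWF phi_{n,tau}(t)
phi = [math.exp(-0.5 * ((n - N / 2) / 8.0) ** 2) for n in range(N)]
m = 20   # modulation frequency, in DFT bins
psi = [cmath.exp(2j * math.pi * m * n / N) * p for n, p in enumerate(phi)]

def amp_spectrum(x):
    # Magnitude of the naive DFT of x
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

S_phi, S_psi = amp_spectrum(phi), amp_spectrum(psi)
peak_phi = max(range(N), key=lambda k: S_phi[k])
peak_psi = max(range(N), key=lambda k: S_psi[k])
```

Because modulation only translates the spectrum, a small set of modulated functions centered on the signal band can replace a much larger baseband set, which is the effect seen in Figure 19.12.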

FIGURE 19.12
Signal spectra of the signal and of the prolate spheroidal wave (PSW) functions when using (a) 520 baseband functions and (b) 100 modulated
functions.

reasons, we also include the compressive sensing method using the discrete cosine transform (DCT) [12]. If the PSWFs and the signal occupy the same band, the PSWFs provide an accurate and parsimonious representation. Otherwise, the representation is not very accurate, as illustrated in Figure 19.11b. To improve this performance, we capture the spectrum of the signal with narrowband-modulated PSWFs (see Figure 19.12b), and the resulting recovery error is depicted in Figure 19.11c. The reconstruction using modulated PSWFs results in much improved error performance.

In addition to improved performance, the number of functions used in the reconstruction is significantly reduced when adapting their frequency support to that of the signal. As indicated in Figure 19.12b, only 100 modulated PSWFs are needed for the test signal, rather than 520 baseband PSWFs (Figure 19.12a). Hence, we conclude that the reconstruction using modulated PSWFs is better suited to reconstruct the MASDM samples.

19.5 Conclusions

In this chapter, we considered the asynchronous processing of nonstationary signals that typically occur in practical applications. As indicated by the Wold–Cramer spectrum of these signals, their power spectra depend jointly on time and frequency. Synchronous sampling is thus wasteful, as it uses a unique sampling period, which in fact should be a function of time. As shown, the sampling of nonstationary signals needs to be signal dependent, or nonuniform. The LC sampler, commonly used, requires an a priori set of quantization levels and results in a set of samples and the corresponding sampling times that at best can give a multilevel approximation to the original signal. Sampling and reconstruction of nonstationary signals are then shown to be more efficiently done by using an ASDM, which provides a binary representation of a bandlimited and bounded analog signal by means of zero-crossing times that are related to the signal amplitude. By deriving a local-mean representation from the binary representation, it is possible to obtain an optimal LC representation using the ASDM. Moreover, by studying the effect of the scale parameter used in the ASDM, and considering a latticing of the time–frequency space so that the frequency bands are determined and the time windows are set by the signal, we derive a low-frequency decomposer and its dual. The low-frequency decomposer can be seen as a modified Haar wavelet decomposition. By modifying the structure of the ASDM, we obtain a more efficient bank-of-filters structure. The interpolation of the nonuniform samples in the synthesis component is efficiently done by means of the PSW or Slepian functions, which are more compact in both time and frequency than the sinc functions used in Shannon's interpolation. Measurement equations—similar to those obtained in compressive sensing—are typically ill-conditioned; the solution is alleviated by using Tikhonov's regularization. Once the analysis and synthesis are understood, the bank-of-filters approach is capable of providing efficient processing of nonstationary signals, comparable to the one given by wavelet packets. The proposed techniques should be useful in processing signals resulting from biomedical or control systems.

Bibliography

[1] F. Aeschlimann, E. Allier, L. Fesquet, and M. Renaudin. Asynchronous FIR filters: Towards a new digital processing chain. In 10th International Symposium on Asynchronous Circuits and Systems, Crete, Greece, pp. 198–206, April 19–23, 2004.

[2] E. V. Aksenov, Y. M. Ljashenko, A. V. Plotnikov, D. A. Prilutskiy, S. V. Selishchev, and E. V. Vetvetskiy. Biomedical data acquisition systems based on sigma-delta analogue-to-digital converters. In IEEE Signal Processing in Medicine and Biology Symposium, Istanbul, Turkey, volume 4, pp. 3336–3337, October 25–28, 2001.

[3] A. Can and L. Chaparro. Asynchronous sampling and reconstruction of analog sparse signals. In 20th European Signal Processing Conference, Bucharest, Romania, pp. 854–858, August 27–31, 2012.

[4] A. Can, E. Sejdić, O. Alkishriwo, and L. Chaparro. Compressive asynchronous decomposition of heart sounds. In IEEE Statistical Signal Processing Workshop (SSP), pp. 736–739, August 2012.

[5] A. Can, E. Sejdić, and L. F. Chaparro. An asynchronous scale decomposition for biomedical signals. In IEEE Signal Processing in Medicine and Biology Symposium, New York, NY, pp. 1–6, December 10, 2011.

[6] A. Can-Cimino, E. Sejdić, and L. F. Chaparro. Asynchronous processing of sparse signals. IET Signal Processing, 8(3):257–266, 2014.

[11] S. Şenay, J. Oh, and L. F. Chaparro. Regularized signal reconstruction for level-crossing sampling using Slepian functions. Signal Processing, 92:1157–1165, 2012.

[12] D. L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52:1289–1306, 2006.

[13] K. Guan, S. Kozat, and A. Singer. Adaptive reference levels on a level-crossing analog-to-digital converter. EURASIP Journal on Advances in Signal Processing, 2008:1–11, January 2008.

[14] K. Guan and A. Singer. A level-crossing sampling scheme for non-bandlimited signals. IEEE International Conference on Acoustics, Speech, and Signal Processing, 3:381–383, 2006.

[15] P. Hansen and D. P. O'Leary. The use of the L-curve in the regularization of discrete ill-posed problems. SIAM Journal on Scientific Computing, 14:1487–1503, 1993.

[16] T. Hawkes and P. Simonpieri. Signal coding using asynchronous delta modulation. IEEE Transactions on Communications, 22(5):729–731, 1974.

[17] C. Kaldy, A. Lazar, E. Simonyi, and L. Toth. Time Encoded Communications for Human Area Network Biomonitoring. Technical Report 2-07, Department of Electrical Engineering, Columbia University, New York, 2007.

[18] A. S. Kayhan, A. El-Jaroudi, and L. F. Chaparro. Evolutionary periodogram for nonstationary signals. IEEE Transactions on Signal Processing, 42:1527–1536, 1994.

[19] A. S. Kayhan, A. El-Jaroudi, and L. F. Chaparro.


[7] L. F. Chaparro, E. Sejdić, A. Can, O. A. Alkishriwo,
Data–adaptive evolutionary spectral estimation.
S. Senay, and A. Akan. Asynchronous represen-
IEEE Transactions on Signal Processing, 43:204–213,
tation and processing of nonstationary signals:
1995.
A time-frequency framework. IEEE Signal Process-
ing Magazine, 30(6):42–52, 2013. [20] D. Kinniment, A. Yakovlev, and B. Gao. Syn-
chronous and asynchronous a-d conversion. IEEE
[8] L. Cohen. Time–frequency Analysis: Theory and Appli- Transactions on VLSI Systems, 8(2):217–220, 2000.
cations. Prentice Hall, NJ, 1995.
[21] M. Kurchuk and Y. Tsividis. Signal-dependent
[9] S. Şenay, L. Chaparro, and L. Durak. Reconstruc- variable-resolution clockless A/D conversion with
tion of nonuniformly sampled time-limited signals application to continuous-time digital signal pro-
using Prolate Spheroidal Wave functions. Signal cessing. IEEE Transactions on Circuits and Systems—
Processing, 89:2585–2595, 2009. I: Regular Papers, 57(5):982–991, 2010.

[10] S. Şenay, L. F. Chaparro, M. Sun, and R. Sclabassi. [22] A. Lazar, E. Simonyi, and L. Toth. Time encoding
Adaptive level–crossing sampling and reconstruc- of bandlimited signals, an overview. In Proceed-
tion. In European Signal Processing Conference ings of the Conference on Telecommunication Systems,
(EUSIPCO), Aalborg, Denmark, pp. 1296–1300, Modeling and Analysis, Dallas, TX, November 17–20,
August 23–27, 2010. 2005.
456 Event-Based Control and Signal Processing

[23] A. Lazar, E. Simonyi, and L. Toth. An overcomplete spheroidal sequences. EURASIP Journal on
stitching algorithm for time decoding machines. Advances in Signal Processing, 2012:101, 2012.
IEEE Transactions on Circuits and Systems, 55:2619–
2630, 2008. [32] E. Sejdić, I. Djurović, and J. Jiang. Time-frequency
feature representation using energy concentration:
[24] A. Lazar and L. Toth. Time encoding and perfect An overview of recent advances. Digital Signal Pro-
recovery of band-limited signals. In IEEE Inter- cessing, 19:153–183, 2009.
national Conference on Acoustics, Speech, and Signal
[33] E. Sejdić and J. Jiang. Selective regional correla-
Processing, volume 6, pp. 709–712, April 2003.
tion for pattern recognition. IEEE Transactions on
[25] A. Lazar and L. Toth. Perfect recovery and sensi- Systems, Man and Cybernetics, 37(1):82–93, 2007.
tivity analysis of time encoded bandlimited signals.
[34] E. Sejdić, M. Luccini, S. Primak, K. Baddour, and
IEEE Transactions on Circuits and Systems, 51:2060–
T. Willink. Channel estimation using DPSS based
2073, October 2004.
frames. In IEEE International Conference on Acoustics,
[26] F. Marvasti. Nonuniform Sampling: Theory and Prac- Speech and Signal Processing, pp. 2849–2952, March
tice. Kluwer Academic/Plenum Publishers, New 2008.
York, 2001. [35] D. Slepian and H. Pollak. Prolate spheroidal wave
[27] J. Oh, S. Şenay, and L. F. Chaparro. Signal functions, Fourier analysis and uncertainty. Bell
reconstruction from nonuniformly spaced samples System Technical Journal, 40:43–64, 1961.
using evolutionary Slepian transform-based POCS. [36] G. Strang and T. Nguyen. Wavelets and Filter Banks.
EURASIP Journal on Advances in Signal Processing, Wellesley-Cambridge Press, Wellesley, MA, 1997.
February 2010.
[37] A. N. Tikhonov. Solution of incorrectly formulated
[28] M. B. Priestley. Spectral Analysis and Time Series. problems and the regularization method. Soviet
Academic Press, London, 1981. Mathematics—Doklady, 4:1035–1038, 1963.
[29] M. Renaudin. Asynchronous circuits and systems: [38] Y. Tsividis. Digital signal processing in continu-
A promising design alternatives. Microelectronic ous time: A possibility for avoiding aliasing and
Engineering, 54:133–149, 2000. reducing quantization error. In IEEE International
Conference on Acoustics, Speech and Signal Processing,
[30] E. Roza. Analog-to-digital conversion via duty
volume 2, pp. 589–92, May 2004.
cycle modulation. IEEE Transactions on Circuits and
Systems II: Analog to Digital Processing, 44(11):907– [39] Y. Tsividis. Mixed-domain systems and signal pro-
914, 1997. cessing based on input decomposition. IEEE Trans-
actions on Circuits and Systems, 53:2145–2156, 2006.
[31] E. Sejdić, A. Can, L. F. Chaparro, C. Steele, and
T. Chau. Compressive sampling of swallowing [40] G. Walter and X. Shen. Sampling with prolate
accelerometry signals using time-frequency dic- spheroidal wave functions. Sampling Theory in Sig-
tionaries based on modulated discrete prolate nal and Image Processing, 2:25–52, 2003.
20
Event-Based Statistical Signal Processing

Yasin Yılmaz
University of Michigan
Ann Arbor, MI, USA

George V. Moustakides
University of Patras
Rio, Greece

Xiaodong Wang
Columbia University
New York, NY, USA

Alfred O. Hero
University of Michigan
Ann Arbor, MI, USA

CONTENTS
20.1 Introduction  458
  20.1.1 Event-Based Sampling  458
  20.1.2 Decentralized Data Collection  459
  20.1.3 Decentralized Statistical Signal Processing  460
  20.1.4 Outline  461
20.2 Decentralized Detection  461
  20.2.1 Background  461
  20.2.2 Channel-Aware Decentralized Detection  462
    20.2.2.1 Procedure at Nodes  463
    20.2.2.2 Procedure at the FC  463
    20.2.2.3 Ideal Channels  463
    20.2.2.4 Noisy Channels  464
  20.2.3 Multimodal Decentralized Detection  467
    20.2.3.1 Latent Variable Model  467
    20.2.3.2 Hypothesis Testing  468
    20.2.3.3 Example  469
    20.2.3.4 Decentralized Implementation  471
20.3 Decentralized Estimation  473
  20.3.1 Background  474
  20.3.2 Optimum Sequential Estimator  474
    20.3.2.1 Restricted Stopping Time  475
    20.3.2.2 Optimum Conditional Estimator  475
  20.3.3 Decentralized Estimator  476
    20.3.3.1 Linear Complexity  477
    20.3.3.2 Event-Based Transmission  478
    20.3.3.3 Discussions  480
    20.3.3.4 Simulations  480
20.4 Conclusion  481
Acknowledgments  482
Bibliography  482
ABSTRACT In traditional time-based sampling, the sampling mechanism is triggered by predetermined sampling times, which are mostly uniformly spaced (i.e., periodic). Alternatively, in event-based sampling, some predefined events on the signal to be sampled trigger the sampling mechanism; that is, the sampling times are determined by the signal and the event space. Such an alternative mechanism, setting the sampling times free, can enable simple (e.g., binary) representations in the event space. In real-time applications, the induced sampling times can be easily traced and reported with high accuracy, whereas the amplitude of a time-triggered sample needs high data rates for high accuracy.

In this chapter, for some statistical signal processing problems, namely detection (i.e., binary hypothesis testing) and parameter estimation, in resource-constrained distributed systems (e.g., wireless sensor networks), we show how to make use of the time dimension for data/information fusion, which is not possible through traditional fixed-time sampling.

20.1 Introduction

The event-based paradigm is an alternative to conventional time-driven systems in control [2,13,28] and signal processing [37,43,61]. Event-based methods are adaptive to the observed entities, as opposed to time-driven techniques. In signal processing, they are used for data compression [37], analog-to-digital (A/D) conversion [23,30,61], data transmission [42,43,55], imaging applications [11,26,29], detection [17,24,73], and estimation [16,75]. We also see a natural example in biological sensing systems: in many multicellular organisms, including plants, insects, reptiles, and mammals, the all-or-none principle, according to which neurons fire, that is, transmit electrical signals, is an event-based technique [19].

In signal processing applications, the event-based paradigm is mainly used as a means of nonuniform sampling. In conventional uniform sampling, the sampling frequency is, in general, selected based on the highest expected spectral frequency. When the lower-frequency content in the input signal is dominant (e.g., long periods of small change), such high-frequency sampling wastes considerable power. For many emerging applications that rely on scarce energy resources (e.g., wireless sensor networks), a promising alternative is event-based sampling, in which a sample is taken when a significant event occurs in the signal. Several closely related signal-dependent sampling techniques have been proposed, for example, level-crossing sampling [24], Lebesgue sampling [17], send-on-delta [42], the time-encoding machine [29], and level-triggered sampling [73]. In these event-based sampling methods, samples are taken based on the signal amplitude instead of time, as opposed to conventional uniform sampling. Analogous to the comparison between the Riemann and Lebesgue integrals, the amplitude-driven and conventional time-driven sampling techniques are also called Lebesgue sampling and Riemann sampling, respectively [2]. As a result, the signal is encoded in the sampling times, whereas in uniform sampling the sample amplitudes encode the signal. This yields a significant advantage in real-time applications, in which sampling times can be tracked via simple one-bit signaling. Specifically, event-based sampling, through one-bit representations of the samples, enables high-resolution recovery, which would require many bits per sample in uniform sampling. In other words, event-based sampling can save energy and bandwidth (if samples are transmitted to a receiver) in real-time applications in terms of encoding samples.

20.1.1 Event-Based Sampling

In level-crossing sampling, which is mostly used for A/D conversion [23,50,61], uniform sampling levels in the amplitude domain are generally used, as shown in Figure 20.1. A/D converters based on level-crossing sampling are free of a sampling clock, which is a primary energy consumer in traditional A/D converters [61]. A version of level-crossing sampling that ignores successive crossings of the same level is used to reduce the sampling rate, especially for noisy signals [28]. This technique is called level-crossing sampling with hysteresis (LCSH) due to the hysteretic quantizer it leads to (see Figure 20.1).

The time-encoding machine is a broad event-based sampling concept, in which the signal is compared with a reference signal and sampled at the crossings [21,29]. The reference signal is possibly updated at the sampling instants (Figure 20.2). Motivated by the integrate-and-fire neuron model, a mathematical model for nerve cells, in some time-encoding machines the signal is first integrated and then sampled. The asynchronous delta–sigma modulator, a nonlinear modulation scheme mainly used for A/D conversion, is an instance of the integrate-and-fire time-encoding machine [30]. The ON–OFF time-encoding machine, which models the
ON and OFF bipolar cells in the retina [39], uses two reference signals to capture the positive and negative changes in the signal. The ON–OFF time-encoding machine without integration coincides with the LCSH [29]. Hardware applications of ON–OFF time-encoding machines are seen in neuromorphic engineering [33,77] and brain–machine interfaces [3,49].

FIGURE 20.1
Level-crossing sampling with uniform sampling levels results in nonuniform sampling times l1–l4 and the quantized signal x̂l(t). If the repeated crossings at l2 and l3 are discarded, x̂lh(t) is produced by a hysteretic quantizer. One-bit encoding of the samples is shown below.

FIGURE 20.2
Reference signal representation of a time-encoding machine and the piecewise constant signal x̂tem(t) resulting from the samples. At the sampling times, the reference signal switches the pair of offset and slope between (−δ, b) and (δ, −b), as in [30]. One-bit encoding of the samples is also shown below.

The theory of signal reconstruction from nonuniform samples applies to event-based sampling [61]. Exact reconstruction is possible if the average sampling rate is above the Nyquist rate (i.e., twice the bandwidth of the signal) [4,30]. Various reconstruction methods have been proposed in [4,30,38,70]. Similarity measures for sequences of level-crossing samples have been discussed, and an appropriate measure has been identified, in [44].

20.1.2 Decentralized Data Collection

Decentralized data collection in resource-constrained networked systems (e.g., wireless sensor networks) is another fundamental application (in addition to A/D conversion) of sampling. In such systems, the central processor does not have access to all observations in the system due to physical constraints, such as energy and communication (i.e., bandwidth) constraints. Hence, the choice of sampling technique is of great importance to obtain a good summary of the observations at the central processor. Using an adaptive sampling scheme, only the informative observations can be transmitted to the central processor. This potentially provides a better summary than a conventional (nonadaptive) sampling scheme that satisfies the same physical constraints. As a toy example of adaptive transmission, consider a bucket carrying water to a pool from a tap with varying flow. After the same number of carriages, say ten, the scheme that empties the bucket only when it is filled (i.e., adaptive to the water flow) carries exactly ten buckets of water to the pool, whereas the scheme that empties the bucket periodically (i.e., nonadaptive) in general carries less water.

Based on such an adaptive scheme, the send-on-delta concept, for decentralized acquisition of continuous-time band-limited signals, samples and transmits only when the observed signal changes by ±Δ since the last sampling time [42,55]. In other words, instead of transmitting at deterministic time instants, it waits for the event of a ±Δ change in the signal amplitude to sample and transmit. Although the change here is with respect to the last sample value, which is in general different from the last sampling level in level-crossing sampling, the two coincide for continuous-time band-limited signals. Hence, for continuous-time band-limited signals, send-on-delta sampling is identical to LCSH (Figure 20.1). For systems in which the accumulated, instead of the current, absolute error (similar to the mean absolute error) is used as the performance criterion, an extension of the send-on-delta concept, called the integral send-on-delta, has been proposed [43]. This extension is similar to the integrate-and-fire time-encoding machine. Specifically, a Δ increase in the integral of the absolute error triggers sampling (and transmission).

In essence, event-based processing aims to simplify the signal representation by mapping the real-valued amplitude, which requires an infinite number of bits after
the conventional time-driven sampling, to a digital value in the event space, which needs only a few bits. In most event-based techniques, including the ones discussed above, a single bit encodes the event type when an event occurs (e.g., a ±Δ change, an upward/downward level crossing, a reference signal crossing). In decentralized data collection, this single-bit quantization in the event space constitutes a great advantage over the infinite-bit representation of a sample taken at a deterministic time. Moreover, to save further energy and bandwidth, the number of samples (i.e., event occurrences) can be significantly reduced by increasing Δ. That is, a large enough Δ value in send-on-delta sparsifies the signal with binary nonzero values, which is ideal for decentralized data collection. On the contrary, the resolution of the observation summary at the central processor decreases with increasing Δ, showing the expected trade-off between performance and consumption of physical resources (i.e., energy and bandwidth). In real-time reporting of the sparse signal in the event space to the central processor, only the nonzero values are sampled and transmitted when encountered.∗ Since event-based processing techniques, in general, first quantize the signal in terms of the events of interest, and then sample the quantized signal, they apply a quantize-and-sample strategy, instead of the sample-and-quantize strategy followed by conventional time-driven processing techniques.

∗ Unlike compressive sensing, the binary nonzero values are simply reported in real time without any need for offline computation.

20.1.3 Decentralized Statistical Signal Processing

If, in a decentralized system, data are collected for a specific purpose (e.g., hypothesis testing, parameter estimation), then we should locally process raw observations as much as possible before transmitting, to minimize processing losses at the central processor in addition to the transmission losses due to physical constraints. For instance, in hypothesis testing, each node in the network can first compute and then report the log-likelihood ratio (LLR) of its observations, which is the sufficient statistic. Assuming independence of observations across nodes, the central processor can simply sum the reported LLRs and decide accordingly, without further processing. On the contrary, if each node transmits its raw observations in a decentralized fashion, the central processor needs to process the lossy data to approximate the LLR, which is in general a nonlinear function. The LLR approximation in the latter report-and-process strategy is clearly worse than the one in the former process-and-report strategy.

An event-based sampling technique, called level-triggered sampling, has been proposed to report the corresponding sufficient statistic in binary hypothesis testing [17] and parameter estimation [16] for continuous-time band-limited observations. The operation of level-triggered sampling is identical to that of send-on-delta sampling (i.e., ±Δ changes in the local sufficient statistic since the last sampling time trigger a new sample), but it is motivated by the sequential probability ratio test (SPRT), the optimum sequential detector (i.e., binary hypothesis test) for independent and identically distributed (iid) observations. Without a link to event-based sampling, it was first proposed in [27] as a repeated SPRT procedure for discrete-time observations. In particular, when its local LLR exits the interval (−Δ, Δ), each node makes a decision: the null hypothesis H0 if it is less than or equal to −Δ, and the alternative hypothesis H1 if it is greater than or equal to Δ. Then another cycle of SPRT starts with new observations. The central processor, called the fusion center, also runs an SPRT by computing the joint LLR of such local decisions.

Due to the numerous advantages of digital signal processing (DSP) and digital communications over their analog counterparts, the vast majority of existing hardware works with discrete-time signals. Although there is significant interest in building a new DSP theory based on event-based sampling [30,44,60,64], such a theory is not mature yet, and thus it is expected that conventional A/D converters, based on uniform sampling, will continue to dominate in the near future. Since digital communications provide reliable and efficient information transmission, with the support of inexpensive electronics, they are ubiquitous nowadays [25, page 23]. Hence, even if we perform analog signal processing and then event-based sampling on the observed continuous-time signal, we will most likely later need to quantize time (i.e., uniformly sample the resulting continuous-time signal) for communication purposes. In that case, we should rather apply event-based sampling to uniformly sampled discrete-time observations at the nodes. This also results in a compound architecture which can perform time-driven, as well as event-driven, tasks [42,46]. As a result, level-triggered sampling with discrete-time observations (see Figure 20.3) has been considered for statistical signal processing applications [32,73–76].

In level-triggered sampling, a serious complication arises with discrete-time observations: when a sample is taken, the change since the last sampling time in general exceeds Δ or −Δ due to the jumps in the discrete-time signal, known as the overshoot problem (see Figure 20.3). Note from Figure 20.3 that the sampling thresholds are now signal dependent, as opposed to level-crossing sampling (with hysteresis), shown in
Figure 20.1. Overshoots disturb the binary quantized values in terms of Δ change (i.e., the one-bit encoding in Figure 20.1), since fractional Δ changes are now possible. As a result, when sampled, such fractional values in the event space cannot be exactly encoded into a single bit. Overshoots are also observed with continuous-time band-unlimited signals (i.e., signals that have jumps), which occur in practice due to noise. Hence, for practical purposes, we need to deal with the overshoot problem.

FIGURE 20.3
Level-triggered sampling with discrete-time observations.

In level-triggered sampling, the overshoot problem is handled in several ways. In the first method, a single-bit encoding is still used, with quantization levels that include average overshoot values in the event space. That is, for positive (≥Δ)/negative (≤−Δ) changes, the transmitted +1/−1 bit represents a fractional change θ̄ > Δ / θ̲ < −Δ, where θ̄ − Δ / θ̲ + Δ compensates for the average overshoot above Δ / below −Δ. Examples of this overshoot compensation method are seen in the decentralized detectors of [27,74], and also in Section 20.2.2, in which the LLR of each received bit at the fusion center (FC) is computed. There are two other overshoot compensation methods in the literature, both of which quantize each overshoot value. In [73] and [75], for detection and estimation purposes, respectively, each quantized value is transmitted in a few bits via separate pulses, in addition to the single bit representing the sign of the Δ change. On the contrary, in [76], and also in Section 20.3.3.2, pulse-position modulation (PPM) is used to transmit each quantized overshoot value. Specifically, the unit time interval is divided into a number of subintervals, and a short pulse is transmitted for the sign bit at the time slot that corresponds to the overshoot value. Consequently, to transmit each quantized overshoot value, more energy is used in the former method, whereas more bandwidth is required in the latter.

In the literature, level-triggered sampling has been utilized to effectively transmit the sufficient local statistics in decentralized systems for several applications, such as spectrum sensing in cognitive radio networks [73], target detection in wireless sensor networks [76], joint spectrum sensing and channel estimation in cognitive radio networks [71], security in multiagent reputation systems [32], and power quality monitoring in power grids [31].

20.1.4 Outline

In this chapter, we analyze the use of event-based sampling as a means of information transmission for decentralized detection and estimation. We start with the decentralized detection problem in Section 20.2. Two challenges, namely noisy transmission channels and multimodal information sources, are addressed via level-triggered and level-crossing sampling in Sections 20.2.2 and 20.2.3, respectively.

Then, in Section 20.3, we treat the sequential estimation of linear regression parameters under a decentralized setup. Using a variant of level-triggered sampling, we design a decentralized estimator that achieves a close-to-optimum average stopping time performance and scales linearly with the number of parameters while satisfying stringent energy and computation constraints.

Throughout the chapter, we represent scalars with lower-case letters, vectors with bold lower-case letters, and matrices with bold upper-case letters.

20.2 Decentralized Detection

We first consider the decentralized detection (i.e., hypothesis testing) problem, in which a number of distributed nodes (e.g., sensors), under energy and bandwidth constraints, sequentially report a summary of their discrete-time observations to an FC, which makes a decision as soon as possible while satisfying some performance constraints.

20.2.1 Background

Existing works on decentralized detection mostly consider the fixed-sample-size approach, in which the FC makes a decision at a deterministic time using a fixed number of samples from the nodes (e.g., [57,59,67]). The sequential detection approach, in which the FC at each time chooses either to continue receiving new samples or to stop and make a decision, is also of significant interest (e.g., [8,40,62]). In [17,27,73,74,76], SPRT is used both at the nodes and at the FC. SPRT is the optimum sequential detector for iid observations in terms of minimizing the average sample number among all sequential tests satisfying the same error probability constraints [65]. Compared with the best fixed-sample-size detector, SPRT requires, on average, four times fewer samples
462 Event-Based Control and Signal Processing

for the same level of confidence for Gaussian signals [47, page 109].

Under stringent energy and bandwidth constraints where nodes can only infrequently transmit a single bit (which can be considered as a local decision), the optimum local decision function is the likelihood ratio test (LRT), which is nothing but a one-bit quantization of LLR, for a fixed decision fusion rule under the fixed-sample-size setup [57]. Similarly, the optimum fusion rule at the FC is also an LRT under the Bayesian [6] and Neyman–Pearson [58] criteria. Since SPRT, which is also a one-bit quantization of LLR with the deadband (−Δ, Δ), is the sequential counterpart of LRT, these results readily extend to the sequential setup as a double-SPRT scheme [27].

Under relaxed resource constraints, the optimum local scheme is a multibit quantization of LLR [66], which is the necessary and sufficient statistic for the detection problem, while the optimum data fusion detector at the FC is still an LRT under the fixed-sample-size setup. Thanks to the event-based nature of SPRT, even its single-bit decision provides data fusion capabilities. More specifically, when it makes a decision, we know that LLR ≥ Δ if H1 is selected, or LLR ≤ −Δ if H0 is selected. For continuous-time bandlimited observations, we have full precision, that is, LLR = Δ or LLR = −Δ depending on the decision, which would require an infinite number of bits with LRT under the fixed-sample-size setup. The repeated SPRT structure of level-triggered sampling enables LLR tracking, that is, sequential data fusion [17,74]. For discrete-time observations, the single-bit decision at each SPRT step (i.e., the one-bit representation of a level-triggered sample as in Figure 20.3) may provide high-precision LLR tracking if overshoots are small compared with Δ. Otherwise, under relaxed resource constraints, each overshoot can be quantized into additional bits [73,76], resulting in a multibit quantization of the changes in LLR with the deadband (−Δ, Δ), analogous to the multibit LLR quantization under the fixed-sample-size setup [66].

The conventional approach to decentralized detection, assuming ideal transmission channels, addresses only the noise that contaminates the observations at nodes (e.g., [17,57]). Nevertheless, in practice, the channels between nodes and the FC are noisy. Following the conventional approach, at the FC, first a communication block recovers the transmitted information bits, and then an independent signal processing block performs detection using the recovered bits. Such an independent two-step procedure inflicts performance loss due to the data-processing inequality [9]. For optimum performance, without a communication block, the received signals should be processed in a channel-aware manner [7,34].

In this section, we first design in Section 20.2.2 channel-aware decentralized detection schemes based on level-triggered sampling for different noisy channel models. We then show in Section 20.2.3 how to fuse multimodal data from disparate sources for decentralized detection.

20.2.2 Channel-Aware Decentralized Detection

Consider a network of K distributed nodes (e.g., a wireless sensor network) and an FC, which can be one of the nodes or a dedicated processor (Figure 20.4). Each node k computes the LLR Lk[n] of the discrete-time signal xk[n] it observes, and sends the level-triggered LLR samples to the FC, which fuses the received samples and sequentially decides between two hypotheses, H0 and H1.

Assuming iid observations {xk[n]}n across time, and independence across nodes, the local LLR at node k and the global LLR are given by

    Lk[n] = log [fk,1(xk[1], . . . , xk[n]) / fk,0(xk[1], . . . , xk[n])] = ∑_{m=1}^{n} log [fk,1(xk[m]) / fk,0(xk[m])] = ∑_{m=1}^{n} lk[m] = Lk[n−1] + lk[n],

    L[n] = ∑_{k=1}^{K} Lk[n],

respectively, where fk,j, j = 0, 1, is the probability density/mass function of the observed signal at node k under Hj, and lk[n] is the LLR of xk[n].

FIGURE 20.4
A network of K nodes and a fusion center (FC). Each node k processes its observations {xk[n]}n and transmits information bits {bk,i}i. The FC then, upon receiving the signals {zk,i}, makes a detection decision dS at the random time S.
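As a concrete illustration of the recursion Lk[n] = Lk[n−1] + lk[n], the sketch below accumulates the local LLR for a Gaussian shift-in-mean test; the model and all function names are illustrative, not from the chapter.

```python
def llr_increment(x, mu0=0.0, mu1=1.0, sigma=1.0):
    # l_k[n] = log N(x; mu1, sigma^2) - log N(x; mu0, sigma^2)
    # (hypothetical Gaussian shift-in-mean model, for illustration only)
    return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)

def local_llr_path(samples, mu0=0.0, mu1=1.0, sigma=1.0):
    # Local LLR recursion L_k[n] = L_k[n-1] + l_k[n] for iid observations.
    L, path = 0.0, []
    for x in samples:
        L += llr_increment(x, mu0, mu1, sigma)
        path.append(L)
    return path
```

Under H1 the path drifts upward on average; the level-triggered sampler described next exploits exactly this drift.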
Event-Based Statistical Signal Processing 463
20.2.2.1 Procedure at Nodes

Each node k samples Lk[n] via level-triggered sampling at a sequence of random times {tk,i}i that are determined by Lk[n] itself. Specifically, the ith sample is taken when the LLR change Lk[n] − Lk[tk,i−1] since the last sampling time tk,i−1 exceeds a constant Δ in absolute value, that is,

    tk,i ≜ min{n > tk,i−1 : Lk[n] − Lk[tk,i−1] ∉ (−Δ, Δ)},  tk,0 = 0, Lk[0] = 0.  (20.1)

It has been shown in [73, section IV-B] that Δ can be determined by

    Δ tanh(Δ/2) = (1/R) ∑_{k=1}^{K} |Ej[Lk[1]]|,  (20.2)

to ensure that the FC receives messages with an average rate of R messages per unit time interval.

Let λk,i denote the LLR change during the ith sampling interval, (tk,i−1, tk,i], that is,

    λk,i ≜ Lk[tk,i] − Lk[tk,i−1] = ∑_{n=tk,i−1+1}^{tk,i} lk[n].

Immediately after sampling at tk,i, as shown in Figure 20.4, an information bit bk,i indicating the threshold crossed by λk,i is transmitted to the FC, that is,

    bk,i ≜ sign(λk,i).  (20.3)

20.2.2.2 Procedure at the FC

Let us now analyze the received signal zk,i at the FC corresponding to the transmitted bit bk,i (see Figure 20.4). The FC computes the LLR

    λ̃k,i ≜ log [gk,1(zk,i) / gk,0(zk,i)],  (20.4)

of each received signal zk,i and approximates the global LLR, L[n], as

    L̃[n] ≜ ∑_{k=1}^{K} ∑_{i=1}^{Jk,n} λ̃k,i,

where Jk,n is the total number of LLR messages received from node k until time n, and gk,j, j = 0, 1, is the pdf of zk,i under Hj.

In fact, the FC recursively updates L̃[n] whenever it receives an LLR message from any node. In particular, suppose that the mth LLR message λ̃m from any sensor is received at time tm. Then at tm, the FC performs the following update:

    L̃[tm] = L̃[tm−1] + λ̃m,

and uses L̃[tm] in an SPRT procedure with two thresholds A and −B, and the following decision rule:

    dtm ≜ H1,              if L̃[tm] ≥ A,
          H0,              if L̃[tm] ≤ −B,
          wait for λ̃m+1,   if L̃[tm] ∈ (−B, A).

The thresholds (A, B > 0) are selected to satisfy the error probability constraints

    P0(dS = H1) ≤ α and P1(dS = H0) ≤ β,  (20.5)

with equalities, where Pj, j = 0, 1, denotes the probability under Hj; α and β are the error probability bounds given to us; and

    S ≜ min{n > 0 : L̃[n] ∉ (−B, A)},  (20.6)

is the decision time.

Comparing (20.1) with (20.6), we see that each node, in fact, applies a local SPRT with thresholds Δ and −Δ within each sampling interval. At node k, the ith local SPRT starts at time tk,i−1 + 1 and ends at time tk,i when the local test statistic λk,i exceeds either Δ or −Δ. This local hypothesis testing produces a local decision represented by the information bit bk,i in (20.3), and induces the local error probabilities

    αk ≜ P0(bk,i = 1) and βk ≜ P1(bk,i = −1).  (20.7)

We next discuss how to compute λ̃k,i, the LLR of the received signal zk,i, given by (20.4), under ideal and noisy channels.

20.2.2.3 Ideal Channels

Lemma 20.1

Assuming ideal channels between nodes and the FC, that is, zk,i = bk,i, we have

    λ̃k,i = log [P1(bk,i = 1) / P0(bk,i = 1)] = log [(1 − βk)/αk] ≥ Δ,        if bk,i = 1,
           log [P1(bk,i = −1) / P0(bk,i = −1)] = log [βk/(1 − αk)] ≤ −Δ,    if bk,i = −1.  (20.8)
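Taken together, (20.1), (20.3), and (20.8) suggest a simple node/FC pair. Below is a minimal sketch (hypothetical names; the node restarts its local SPRT from Lk[tk,i] after each sample, and the FC adds the constant updates of (20.8) and applies the SPRT stopping rule):

```python
import math

def level_triggered_bits(llr_increments, delta):
    # Sampling rule (20.1): sample when the LLR change since the last
    # sampling time leaves the deadband (-delta, delta), then transmit
    # the sign bit b = sign(lambda) as in (20.3).
    bits, change = [], 0.0
    for l in llr_increments:
        change += l
        if change >= delta or change <= -delta:
            bits.append(1 if change > 0 else -1)
            change = 0.0  # next interval is measured from L_k[t_{k,i}]
    return bits

def fuse_ideal(bits, alpha_k, beta_k, A, B):
    # FC update (20.8) plus the SPRT stopping rule with thresholds A, -B.
    lam_pos = math.log((1.0 - beta_k) / alpha_k)   # update for b = +1
    lam_neg = math.log(beta_k / (1.0 - alpha_k))   # update for b = -1
    L = 0.0
    for b in bits:
        L += lam_pos if b == 1 else lam_neg
        if L >= A:
            return "H1"
        if L <= -B:
            return "H0"
    return None  # keep waiting for more messages
```

Note that the constant updates exceed Δ in magnitude, which is exactly the overshoot compensation discussed around Lemma 20.1.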
PROOF The equalities follow from (20.7). The inequalities can be obtained by applying a change of measure. To show the first one, we write

    αk = P0(λk,i ≥ Δ) = E0[1{λk,i ≥ Δ}],  (20.9)

where Ej is the expectation under Hj, j = 0, 1, and 1{·} is the indicator function. Note that

    e^{−λk,i} = fk,0(xk[tk,i−1 + 1], . . . , xk[tk,i]) / fk,1(xk[tk,i−1 + 1], . . . , xk[tk,i]),

can be used to compute the expectation integral in terms of fk,1 instead of fk,0, that is, to change the probability measure under which the expectation is taken from fk,0 to fk,1. Hence,

    αk = E1[e^{−λk,i} 1{λk,i ≥ Δ}] ≤ e^{−Δ} E1[1{λk,i ≥ Δ}] = e^{−Δ} P1(λk,i ≥ Δ) = e^{−Δ}(1 − βk),

giving us the first inequality in (20.8). The second inequality follows similarly.

We see from Lemma 20.1 that the FC, assuming ideal channels, can compute λ̃k,i, the LLR of the sign bit bk,i, if the local error probabilities αk and βk are available. It is also seen that λ̃k,i is, in magnitude, larger than the corresponding sampling threshold, and thus includes a constant compensation for the random overshoot of λk,i above Δ or below −Δ. The relationship of this constant compensation to the average overshoot, and the order-1 asymptotic optimality it achieves, are established in [17]. In the no-overshoot case, as with continuous-time band-limited observations, the inequalities in (20.8) become equalities since in (20.9) we can write αk = P0(λk,i = Δ). This shows that the LLR update in (20.8) adapts well to the no-overshoot case, in which the LLR change that triggers sampling is either Δ or −Δ.

Theorem 20.1: [17, Theorem 2]

Consider the asymptotic regime in which the target error probabilities α, β → 0 at the same rate. If the sampling threshold Δ → ∞ more slowly than |log α|, then, under ideal channels, the decentralized detector which uses the LLR update given by (20.8) for each level-triggered sample is order-1 asymptotically optimum, that is,

    Ej[S] / Ej[So] = 1 + o(1),  j = 0, 1,  (20.10)

where So is the decision time of the optimum (centralized) sequential detector, SPRT, satisfying the error probability bounds α and β [cf. (20.5)].

For the proof and more details on the result, see [17, Theorem 2] and the discussion therein. Using the traditional uniform sampler followed by a quantizer, a similar order-1 asymptotic optimality result cannot be obtained by controlling the sampling period with a constant number of quantization bits [73, section IV-B]. The significant performance gain of level-triggered sampling over uniform sampling is also shown numerically in [73, section V].

Order-1 is the most frequent type of asymptotic optimality encountered in the literature, but it is also the weakest. Note that in order-1 asymptotic optimality, although the average decision time ratio converges to 1, the difference Ej[S] − Ej[So] may be unbounded. Therefore, stronger types of asymptotic optimality are defined. The difference remains bounded (i.e., Ej[S] − Ej[So] = O(1)) in order-2 and diminishes (i.e., Ej[S] − Ej[So] = o(1)) in order-3. The latter is extremely rare in the literature, and schemes of that type are considered optimum per se for practical purposes.

20.2.2.4 Noisy Channels

In the presence of noisy channels, one subtle issue is that since the sensors asynchronously sample and transmit the local LLR, the FC needs to first reliably detect the sampling time to update the global LLR. We first assume that the sampling time is reliably detected and focus on deriving the LLR update at the FC. We discuss the issue of sampling time detection later on.

In computing the LLR λ̃k,i of the received signal zk,i, we make use of the local sensor error probabilities αk, βk, and the channel parameters that characterize the statistical property of the channel.

20.2.2.4.1 Binary Erasure Channels

We first consider binary erasure channels (BECs) between sensors and the FC with erasure probabilities εk, k = 1, . . . , K. Under BEC, a transmitted bit bk,i is lost with probability εk, and it is correctly received at the FC (i.e., zk,i = bk,i) with probability 1 − εk.

Lemma 20.2

Under BEC with erasure probability εk, the LLR of zk,i is given by

    λ̃k,i = log [P1(zk,i = 1) / P0(zk,i = 1)] = log [(1 − βk)/αk],        if zk,i = 1,
           log [P1(zk,i = −1) / P0(zk,i = −1)] = log [βk/(1 − αk)],      if zk,i = −1.  (20.11)
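Lemma 20.2 says the BEC update uses the ideal-channel values of (20.8) and ignores εk; erasures only reduce how many messages arrive. A small sketch of this behavior (illustrative names, not from the chapter):

```python
import math
import random

def through_bec(bits, eps, seed=0):
    # Binary erasure channel: each transmitted bit is lost independently
    # with probability eps; surviving bits are received unchanged.
    rng = random.Random(seed)
    return [b for b in bits if rng.random() >= eps]

def bec_llr_update(z, alpha_k, beta_k):
    # Lemma 20.2: identical to the ideal-channel update (20.8);
    # the erasure probability eps_k does not appear.
    if z == 1:
        return math.log((1.0 - beta_k) / alpha_k)
    return math.log(beta_k / (1.0 - alpha_k))
```

Only the SPRT thresholds A and B at the FC change relative to the ideal-channel case, to account for the lost messages.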
PROOF We have zk,i = b, b = ±1, with probability 1 − εk only when bk,i = b. Hence,

    Pj(zk,i = b) = Pj(bk,i = b)(1 − εk),  j = 0, 1.

In the LLR expression, the 1 − εk terms in the numerator and denominator cancel out, giving the result in (20.11).

Note that under BEC, the channel parameter εk is not needed when computing the LLR λ̃k,i. Note also that in this case, a received bit bears the same amount of LLR information as in the ideal channel case [cf. (20.8)], although a transmitted bit is not always received. Hence, the channel-aware approach coincides with the conventional approach, which relies solely on the received signal. Although the LLR updates in (20.8) and (20.11) are identical, the fusion rules under BEC and ideal channels are not. This is because under BEC, the decision thresholds A and B in (20.6), due to the information loss, are in general different from those in the ideal channel case.

20.2.2.4.2 Binary Symmetric Channels

Next, we consider binary symmetric channels (BSCs) with crossover probabilities εk between sensors and the FC. Under BSC, the transmitted bit bk,i is flipped (i.e., zk,i = −bk,i) with probability εk, and it is correctly received (i.e., zk,i = bk,i) with probability 1 − εk.

Lemma 20.3

Under BSC with crossover probability εk, the LLR of zk,i can be computed as

    λ̃k,i = log [(1 − β̂k)/α̂k],      if zk,i = 1,
           log [β̂k/(1 − α̂k)],      if zk,i = −1,  (20.12)

where α̂k = αk(1 − 2εk) + εk and β̂k = βk(1 − 2εk) + εk.

PROOF Due to the nonzero probability of receiving a wrong bit, we now have

    Pj(zk,i = b) = P(zk,i = b | bk,i = b) Pj(bk,i = b) + P(zk,i = b | bk,i = −b) Pj(bk,i = −b),
    e.g., P0(zk,i = 1) = (1 − εk)αk + εk(1 − αk),

j = 0, 1, b = ±1. Defining α̂k = αk(1 − 2εk) + εk and β̂k = βk(1 − 2εk) + εk, we obtain the LLR expression given in (20.12).

Note that for αk < 0.5, βk < 0.5, ∀k, which we assume true for Δ > 0,

    α̂k = αk + εk(1 − 2αk) > αk,

and similarly β̂k > βk. Thus, |λ̃k,i^BSC| < |λ̃k,i^BEC|, from which we expect a higher performance loss under BSC than under BEC. Finally, note also that, unlike the BEC case, under BSC the FC needs to know the channel parameters {εk} to operate in a channel-aware manner.

20.2.2.4.3 Additive White Gaussian Noise Channels

Now, assume that the channel between each sensor and the FC is an additive white Gaussian noise (AWGN) channel. The received signal at the FC is given by

    zk,i = yk,i + wk,i,  (20.13)

where wk,i ∼ Nc(0, σk²) is the complex white Gaussian noise, and yk,i is the transmitted signal at sampling time tk,i, given by

    yk,i = a,   if λk,i ≥ Δ,
           b,   if λk,i ≤ −Δ,  (20.14)

where the transmission levels a and b are complex in general.

Lemma 20.4

Under the AWGN channel model in (20.13), the LLR of zk,i is given by

    λ̃k,i = log [(1 − βk) e^{−ck,i} + βk e^{−dk,i}] / [αk e^{−ck,i} + (1 − αk) e^{−dk,i}],  (20.15)

where ck,i = |zk,i − a|²/σk² and dk,i = |zk,i − b|²/σk².

PROOF The distribution of the received signal given yk,i is zk,i ∼ Nc(yk,i, σk²). The probability density function of zk,i under Hj is then given by

    gk,j(zk,i) = gk,j(zk,i | yk,i = a) Pj(yk,i = a) + gk,j(zk,i | yk,i = b) Pj(yk,i = b),
    e.g., gk,1(zk,i) = [(1 − βk) e^{−|zk,i − a|²/σk²} + βk e^{−|zk,i − b|²/σk²}] / (πσk²).  (20.16)

Defining ck,i ≜ |zk,i − a|²/σk² and dk,i ≜ |zk,i − b|²/σk², and substituting gk,0(zk,i) and gk,1(zk,i) into λ̃k,i = log [gk,1(zk,i)/gk,0(zk,i)], we obtain (20.15).

If the transmission levels a and b are well separated, and the signal-to-noise ratio |yk,i|/|wk,i| is high enough, then

    λ̃k,i ≈ log [(1 − βk)/αk],     if yk,i = a,
           log [βk/(1 − αk)],     if yk,i = b,
resembling the ideal channel case, given by (20.8). Due to the energy constraints at nodes, assume a maximum transmission power P². In accordance with the above observation, it is shown in [74, Section V-C] that antipodal signaling (e.g., a = P and b = −P) is optimum.

20.2.2.4.4 Rayleigh Fading Channels

Assuming a Rayleigh fading channel model, the received signal is given by

    zk,i = hk,i yk,i + wk,i,  (20.17)

where hk,i ∼ Nc(0, σh,k²), and yk,i and wk,i are as before.

Lemma 20.5

Under the Rayleigh fading channel model in (20.17), the LLR of zk,i is given by

    λ̃k,i = log [((1 − βk)/σa,k²) e^{−ck,i} + (βk/σb,k²) e^{−dk,i}] / [(αk/σa,k²) e^{−ck,i} + ((1 − αk)/σb,k²) e^{−dk,i}],  (20.18)

where ck,i = |zk,i|²/σa,k², dk,i = |zk,i|²/σb,k², σa,k² = |a|²σh,k² + σk², and σb,k² = |b|²σh,k² + σk².

PROOF Given yk,i, we have zk,i ∼ Nc(0, |yk,i|²σh,k² + σk²). Similar to (20.16), we can write

    gk,1(zk,i) = ((1 − βk)/(πσa,k²)) e^{−ck,i} + (βk/(πσb,k²)) e^{−dk,i},
    gk,0(zk,i) = (αk/(πσa,k²)) e^{−ck,i} + ((1 − αk)/(πσb,k²)) e^{−dk,i},  (20.19)

where ck,i ≜ |zk,i|²/σa,k², dk,i ≜ |zk,i|²/σb,k², σa,k² ≜ |a|²σh,k² + σk², and σb,k² ≜ |b|²σh,k² + σk². Substituting gk,0(zk,i) and gk,1(zk,i) into λ̃k,i = log [gk,1(zk,i)/gk,0(zk,i)], we obtain (20.18).

In this case, the different messages a and b are expressed only in the variance of zk,i. Hence, with antipodal signaling, they become indistinguishable (i.e., σa,k² = σb,k²), and as a result λ̃k,i = 0. This suggests that we should separate |a| and |b| as much as possible to decrease the uncertainty at the FC, and in turn to decrease the loss in the LLR update λ̃k,i with respect to the ideal channel case. Assuming a minimum transmission power Q² to ensure reliable detection of an incoming signal at the FC, in addition to the maximum transmission power P² due to the energy constraints, it is numerically shown in [74, Section V-D] that the optimum signaling scheme corresponds to either |a| = P, |b| = Q or |a| = Q, |b| = P.

20.2.2.4.5 Rician Fading Channels

For Rician fading channels, we have hk,i ∼ Nc(μk, σh,k²) in (20.17).

Lemma 20.6

With Rician fading channels, λ̃k,i is given by (20.18), where ck,i = |zk,i − aμk|²/σa,k², dk,i = |zk,i − bμk|²/σb,k², σa,k² = |a|²σh,k² + σk², and σb,k² = |b|²σh,k² + σk².

PROOF Given yk,i, the received signal is distributed as zk,i ∼ Nc(μk yk,i, |yk,i|²σh,k² + σk²). The likelihoods gk,1(zk,i) and gk,0(zk,i) are then written as in (20.19) with σa,k² = |a|²σh,k² + σk², σb,k² = |b|²σh,k² + σk², and the new definitions ck,i = |zk,i − aμk|²/σa,k² and dk,i = |zk,i − bμk|²/σb,k². Finally, the LLR is given by (20.18).

The Rician model covers the previous two continuous channel models. Particularly, the σh,k² = 0 case corresponds to the AWGN model, and the μk = 0 case corresponds to the Rayleigh model. It is numerically shown in [74, Section V-E] that depending on the values of the parameters (μk, σh,k²), either the antipodal signaling of the AWGN case or the ON–OFF type signaling of the Rayleigh case is optimum.

20.2.2.4.6 Discussions

Considering the unreliable detection of sampling times under continuous channels, we should ideally integrate this uncertainty into the fusion rule of the FC. In other words, at the FC, the LLR of the received signal

    zk[n] = hk[n] yk[n] + wk[n],

instead of zk,i given in (20.17), should be computed at each time instant n if the sampling time of node k cannot be reliably detected. In the LLR computations of Lemmas 20.4 and 20.5, the prior probabilities Pj(yk,i = a) and Pj(yk,i = b) are used. These probabilities are in fact conditioned on the sampling time tk,i. Here, we need the unconditioned prior probabilities of the signal yk[n], which at each time n takes the value a, b, or 0, that is,

    yk[n] = a,   if Lk[n] − Lk[tk,i−1] ≥ Δ,
            b,   if Lk[n] − Lk[tk,i−1] ≤ −Δ,
            0,   if Lk[n] − Lk[tk,i−1] ∈ (−Δ, Δ),

instead of yk,i given in (20.14).
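The channel-aware updates of Lemmas 20.3 and 20.4 can be sketched as follows (hypothetical names; real-valued transmission levels are used for brevity, with |·|² as in the complex case):

```python
import math

def bsc_llr_update(z, eps, alpha_k, beta_k):
    # Lemma 20.3 / Eq. (20.12): LLR of a bit received over a BSC with
    # crossover probability eps.
    a_hat = alpha_k * (1.0 - 2.0 * eps) + eps
    b_hat = beta_k * (1.0 - 2.0 * eps) + eps
    if z == 1:
        return math.log((1.0 - b_hat) / a_hat)
    return math.log(b_hat / (1.0 - a_hat))

def awgn_llr_update(z, a, b, sigma2, alpha_k, beta_k):
    # Lemma 20.4 / Eq. (20.15): LLR of the received sample z under AWGN
    # with transmission levels a, b and noise variance sigma2.
    c = abs(z - a) ** 2 / sigma2  # c_{k,i}
    d = abs(z - b) ** 2 / sigma2  # d_{k,i}
    num = (1.0 - beta_k) * math.exp(-c) + beta_k * math.exp(-d)
    den = alpha_k * math.exp(-c) + (1.0 - alpha_k) * math.exp(-d)
    return math.log(num / den)
```

With eps = 0 the BSC update reduces to the ideal-channel values, and with well-separated levels and high SNR the AWGN update approaches them, as noted after Lemma 20.4.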
Then, the LLR of zk[n] is given by

    λ̃k[n] = log [gk,1(zk[n]) / gk,0(zk[n])],

    gk,1(zk[n]) = [gk,1(zk[n] | yk[n] = a)(1 − βk) + gk,1(zk[n] | yk[n] = b)βk] P1(yk[n] ≠ 0) + gk,1(zk[n] | yk[n] = 0) P1(yk[n] = 0),

    gk,0(zk[n]) = [gk,0(zk[n] | yk[n] = a)αk + gk,0(zk[n] | yk[n] = b)(1 − αk)] P0(yk[n] ≠ 0) + gk,0(zk[n] | yk[n] = 0) P0(yk[n] = 0),

where gk,j(zk[n] | yk[n]) is determined by the channel model. Since the FC has no prior information on the sampling times of nodes, the probability of sampling, that is, Pj(yk[n] ≠ 0), can be shown to be 1/Ej[τk,i], where Ej[τk,i] is the average sampling interval of node k under Hj, j = 0, 1.

Alternatively, a two-step procedure can be applied by first detecting a message and then using the LLR updates previously derived in Lemmas 20.4–20.6. Since it is known that most of the time λ̃k[n] is uninformative, corresponding to the no-message case, a simple thresholding can be applied to perform the LLR update only when it is informative. The thresholding step is in fact a Neyman–Pearson test (i.e., LRT) between the presence and absence of a message signal. The threshold can be adjusted to control the false alarm (i.e., type-I error) and misdetection (i.e., type-II error) probabilities. Setting the threshold sufficiently high, we can obtain a negligible false alarm probability, leaving us with the misdetection probability. Thus, if an LLR survives the thresholding, in the second step it is recomputed as in the channel-aware fusion rules obtained in Lemmas 20.4–20.6.

An information-theoretic analysis of the decentralized detectors in Sections 20.2.2.3 and 20.2.2.4 can be found in [74]. Specifically, using renewal processes, closed-form expressions for the average decision time are derived under both the nonasymptotic and asymptotic regimes.

20.2.3 Multimodal Decentralized Detection

In monitoring of complex systems, multimodal data, such as sensor measurements, images, and texts, are collected from disparate sources. The emerging concepts of the Internet of Things (IoT) and Cyber-Physical Systems (CPS) show that there is an increasing interest in connecting more and more devices with various sensing capabilities [41]. The envisioned future power grid, called the Smart Grid, is a good example of such heterogeneous networks. Monitoring and managing wide-area smart grids require the integration of multimodal data from electricity consumers, such as smart home and smart city systems, as well as various electricity generators (e.g., wind, solar, coal, nuclear) and sensing devices across the grid [15].

Multisensor surveillance (e.g., for military or environmental purposes) is another application in which multimodal data from a large number of sensors (e.g., acoustic, seismic, infrared, optical, magnetic, temperature) are fused for a common statistical task [18]. An interesting multidisciplinary example is nuclear facility monitoring for treaty verification. From a data-processing perspective, using a variety of disparate information sources, such as electricity consumption, satellite images, radiation emissions, seismic vibrations, shipping manifests, and intelligence data, a nuclear facility can be monitored to detect anomalous events that violate a nuclear treaty.

Information-theoretic and machine-learning approaches to similar problems can be found in [18] and [56], respectively. We here follow a Bayesian probabilistic approach to the multimodal detection problem.

20.2.3.1 Latent Variable Model

Consider a system of K information sources (i.e., a network of K nodes). From each source k, a discrete-time signal xk[n], n ∈ ℕ, is observed, which follows the probability distribution Dk(θk) with the parameter vector θk, k = 1, . . . , K. Given θk, the temporal observations {xk[n]}n from source k are assumed iid. Some information sources may be of the same modality.

A latent variable vector φ is assumed to correlate the information sources by controlling their parameters (Figure 20.5). Then, the joint distribution of all observations collected until time N can be written as

    f({xk[n]}_{k=1,n=1}^{K,N}) = ∫_{χφ} ∫_{χ1} · · · ∫_{χK} f({xk[n]}k,n | {θk}, φ) f({θk} | φ) f(φ) dθ1 · · · dθK dφ,

where χφ and χk are the supports of φ and θk, k = 1, . . . , K. Assuming {θk} are independent,
given φ we have

    f({xk[n]}k,n) = ∫_{χφ} [∫_{χ1} ∏_{n=1}^{N} f(x1[n] | θ1) f(θ1 | φ) dθ1] × · · · × [∫_{χK} ∏_{n=1}^{N} f(xK[n] | θK) f(θK | φ) dθK] f(φ) dφ
                  = ∫_{χφ} f({x1[n]} | φ) · · · f({xK[n]} | φ) f(φ) dφ,  (20.20)

where f(xk[n] | θk), k = 1, . . . , K, is the probability density/mass function of the distribution Dk(θk). If f(θk | φ) corresponds to the conjugate prior distribution for Dk(θk), then f({xk[n]}n | φ) can be written in closed form.

FIGURE 20.5
A Bayesian network of K information sources linked through the latent variable vector φ. The probability distribution of the observation xk[n] is parameterized by the random vector θk, whose distribution is determined by φ, which is also random. The observed variables are represented by filled circles.

20.2.3.2 Hypothesis Testing

If the latent variable vector φ is deterministically specified under both hypotheses, that is,

    H0 : φ = φ0,
    H1 : φ = φ1,  (20.21)

then the observations {xk[n]}k from different sources are independent under Hj, j = 0, 1, since {θk} are assumed independent given φ. In that case, the global likelihood under Hj is given by (20.20) without the integral over φ, that is,

    fj({xk[n]}k,n) = ∏_{k=1}^{K} f({xk[n]}n | φ = φj).

Using f1({xk[n]}k,n) and f0({xk[n]}k,n), the global LLR at time N is written as

    L[N] = ∑_{k=1}^{K} log [f({xk[n]}_{n=1}^{N} | φ = φ1) / f({xk[n]}_{n=1}^{N} | φ = φ0)] = ∑_{k=1}^{K} Lk[N].  (20.22)

For sequential detection, SPRT can be applied by comparing L[n] at each time to two thresholds A and −B. The sequential test continues until the stopping time

    S = min{n ∈ ℕ : L[n] ∉ (−B, A)},  (20.23)

and makes the decision

    dS = H1,   if L[S] ≥ A,
         H0,   if L[S] ≤ −B,  (20.24)

at time S.

In a decentralized system, where all observations cannot be made available to the FC due to resource constraints, each node k (corresponding to information source k) can compute its LLR Lk[n] and transmit event-based samples of it to the FC, as will be described in Section 20.2.3.4. Then, summing the LLR messages from the nodes, the FC computes the approximate global LLR L̃[n] and uses it in the SPRT procedure similar to (20.23) and (20.24).

In many cases, it may not be possible to deterministically specify φ under the hypotheses, but a statistical description may be available, that is,

    H0 : φ ∼ Dφ,0(θφ,0),
    H1 : φ ∼ Dφ,1(θφ,1).  (20.25)

In such a case, to compute the likelihood under Hj, we need to integrate over φ as shown in (20.20). Hence, in general, the global LLR

    L[N] = log [∫_{χφ} f({x1[n]} | φ) · · · f({xK[n]} | φ) f1(φ) dφ / ∫_{χφ} f({x1[n]} | φ) · · · f({xK[n]} | φ) f0(φ) dφ],  (20.26)

does not have a closed-form expression. However, for a reasonable number of latent variables (i.e., entries of φ), effective numerical computation may be possible through Monte Carlo simulations. Once L[n] is numerically computed, SPRT can be applied as in (20.23) and (20.24).

For decentralized detection, each node k can now compute the functions of {xk[n]}n included in f({xk[n]}n | φ) (see the example below), which has a
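The Monte Carlo computation of (20.26) can be sketched for a toy single-source case in which the latent variable φ is directly the Gaussian mean, with φ ∼ N(mj, sj²) under Hj; all names and the model are illustrative, not from the chapter.

```python
import math
import random

def mc_global_llr(x, sigma, prior1, prior0, n_mc=20000, seed=1):
    # Monte Carlo estimate of L[N] in (20.26): average the likelihood
    # f({x[n]} | phi) over draws of phi from its prior under each
    # hypothesis, then take the log-ratio. prior_j = (m_j, s_j).
    rng = random.Random(seed)

    def log_lik(phi):
        # log f({x[n]} | phi) up to an additive constant that cancels
        # in the ratio below
        return -sum((xi - phi) ** 2 for xi in x) / (2.0 * sigma ** 2)

    def log_mc_avg(m, s):
        # log of the Monte Carlo average, via log-sum-exp for stability
        logs = [log_lik(rng.gauss(m, s)) for _ in range(n_mc)]
        mx = max(logs)
        return mx + math.log(sum(math.exp(v - mx) for v in logs) / n_mc)

    return log_mc_avg(*prior1) - log_mc_avg(*prior0)
```

Data clustered around +1 should yield a positive LLR when H1 places its prior mass there; the same estimate then drives the SPRT of (20.23) and (20.24).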
closed-form expression thanks to the assumed conjugate prior on the parameter vector θk [see (20.20)], and send event-based samples to the FC. Upon receiving such messages, the FC computes approximations to those functions; uses them in (20.26) to compute L̃[N], an approximate global LLR; and applies the SPRT procedure using L̃[N]. Details will be provided in Section 20.2.3.4.

20.2.3.3 Example

As an example of the multimodal detection scheme presented in this section, consider a system with three types of information sources: a Gaussian source (e.g., real-valued physical measurements), a Poisson source (e.g., event occurrences), and a multinomial source (e.g., texts). We aim to find the closed-form expression of the sufficient statistic f({x[n]} | φ) for each modality. Let the discrete-time signals

    xg[n] ∼ N(μφ, σ²),  xp[n] ∼ Pois(λφ),  xm[n] ∼ Mult(1, pφ),  n ∈ ℕ,  (20.27)

denote the Gaussian, Poisson, and multinomial observations, respectively (see Figure 20.6). A multinomial distribution with a single trial and category probabilities pφ = [pφ,1, . . . , pφ,M] is used for xm[n], whose realization is a binary vector with an entry 1 at the index corresponding to the category observed at time n, and 0 at the others. The Poisson observation xp[n] denotes the number of occurrences of an event of interest in a unit time interval, where λφ is the average rate of event occurrences.

Among the parameters, only the variance σ² of the Gaussian model is assumed known. We assume conjugate prior distributions for the unknown parameters. Specifically, we assume a Gaussian prior on the mean μφ of the Gaussian model, a gamma prior on the rate parameter λφ of the Poisson model, and a Dirichlet prior on the probability vector pφ of the multinomial model, that is,

    μφ ∼ N(μ̄φ, σ̄φ²),  λφ ∼ Γ(αφ, βφ),  pφ ∼ Dir(γφ),  (20.28)

where the hyperparameters μ̄φ, σ̄φ², αφ, βφ, and γφ are completely specified by the latent variable vector φ.

FIGURE 20.6
The Bayesian network considered in the example. The variance of the Gaussian source, which is a known constant, is represented by a filled square.

Lemma 20.7

For the example given in (20.27) and (20.28), the joint distribution of observations from each source conditioned on φ is given by

    f({xg[n]}_{n=1}^{N} | φ) = exp(−∑_{n=1}^{N} xg[n]²/(2σ²) − μ̄φ²/(2σ̄φ²) + (∑_{n=1}^{N} xg[n]/σ² + μ̄φ/σ̄φ²)² / (2(N/σ² + 1/σ̄φ²))) / [(2π)^{N/2} σ^N σ̄φ √(N/σ² + 1/σ̄φ²)],  (20.29)

    f({xp[n]}_{n=1}^{N} | φ) = Γ(αφ + ∑_{n=1}^{N} xp[n]) βφ^{αφ} / [Γ(αφ) (∏_{n=1}^{N} xp[n]!) (βφ + N)^{αφ + ∑_{n=1}^{N} xp[n]}],  (20.30)

    f({xm[n]}_{n=1}^{N} | φ) = [Γ(∑_{i=1}^{M} γφ,i) / Γ(∑_{i=1}^{M} (γφ,i + ∑_{n=1}^{N} xm,i[n]))] ∏_{i=1}^{M} [Γ(γφ,i + ∑_{n=1}^{N} xm,i[n]) / Γ(γφ,i)],  (20.31)

where Γ(·) is the gamma function.

PROOF Given φ, {xg[n]} are iid with N(μφ, σ²), where μφ ∼ N(μ̄φ, σ̄φ²); hence,

    f({xg[n]}, μφ) = exp(−∑_{n=1}^{N} (xg[n] − μφ)²/(2σ²) − (μφ − μ̄φ)²/(2σ̄φ²)) / [(2π)^{(N+1)/2} σ^N σ̄φ].
After some manipulations, we can show that

\[
f(\{x_g[n]\},\mu_\phi)
=\underbrace{\frac{\exp\left(-\frac{\sum_{n=1}^{N}x_g[n]^2}{2\sigma^2}-\frac{\bar\mu_\phi^2}{2\bar\sigma_\phi^2}+\frac{\left(\sum_{n=1}^{N}x_g[n]/\sigma^2+\bar\mu_\phi/\bar\sigma_\phi^2\right)^2}{2\left(N/\sigma^2+1/\bar\sigma_\phi^2\right)}\right)}{(2\pi)^{N/2}\,\sigma^{N}\bar\sigma_\phi\sqrt{N/\sigma^2+1/\bar\sigma_\phi^2}}}_{f(\{x_g[n]\})}
\times
\underbrace{\sqrt{\frac{1/\bar\sigma_\phi^2+N/\sigma^2}{2\pi}}\,\exp\left(-\frac{1/\bar\sigma_\phi^2+N/\sigma^2}{2}\left(\mu_\phi-\frac{\bar\mu_\phi/\bar\sigma_\phi^2+\sum_{n=1}^{N}x_g[n]/\sigma^2}{1/\bar\sigma_\phi^2+N/\sigma^2}\right)^{2}\right)}_{f(\mu_\phi\mid\{x_g[n]\})},
\]

where from the conjugate prior property it is known that the posterior distribution of μ_φ is also Gaussian, with mean (μ̄_φ/σ̄_φ² + ∑_{n=1}^N x_g[n]/σ²)/(1/σ̄_φ² + N/σ²) and variance 1/(1/σ̄_φ² + N/σ²). Hence, the result in (20.29) follows. Note that f({x_g[n]}_{n=1}^N | φ) is a multivariate Gaussian distribution, where all entries of the mean vector are μ̄_φ, the diagonal entries of the covariance matrix are σ̄_φ² + σ², and the off-diagonals are σ̄_φ².

Similarly, for Poisson observations, we write

\[
f(\{x_p[n]\},\lambda_\phi)=\prod_{n=1}^{N}\frac{\lambda_\phi^{x_p[n]}e^{-\lambda_\phi}}{x_p[n]!}\;\frac{\beta_\phi^{\alpha_\phi}}{\Gamma(\alpha_\phi)}\lambda_\phi^{\alpha_\phi-1}e^{-\beta_\phi\lambda_\phi},
\]

since {x_p[n]} are iid with Pois(λ_φ) given λ_φ, and the prior is Γ(α_φ, β_φ). The posterior distribution is known to be Γ(α_φ + ∑_{n=1}^N x_p[n], β_φ + N); hence,

\[
f(\{x_p[n]\},\lambda_\phi)
=\underbrace{\frac{(\beta_\phi+N)^{\alpha_\phi+\sum_{n=1}^{N}x_p[n]}}{\Gamma\left(\alpha_\phi+\sum_{n=1}^{N}x_p[n]\right)}\,\lambda_\phi^{\alpha_\phi+\sum_{n=1}^{N}x_p[n]-1}e^{-(\beta_\phi+N)\lambda_\phi}}_{f(\lambda_\phi\mid\{x_p[n]\})}
\times
\underbrace{\frac{\Gamma\left(\alpha_\phi+\sum_{n=1}^{N}x_p[n]\right)\beta_\phi^{\alpha_\phi}}{\Gamma(\alpha_\phi)\,\prod_{n=1}^{N}x_p[n]!\,(\beta_\phi+N)^{\alpha_\phi+\sum_{n=1}^{N}x_p[n]}}}_{f(\{x_p[n]\})},
\]

proving (20.30).

Finally, for the multinomial observations, {x_m[n]} are iid with the probability vector p_φ; the prior is Dir(γ_φ); and the posterior is Dir(γ_φ + ∑_{n=1}^N x_m[n]); hence,

\[
f(\{x_m[n]\},p_\phi)
=\prod_{i=1}^{M}p_{\phi,i}^{\sum_{n=1}^{N}x_{m,i}[n]}\;\frac{\Gamma\left(\sum_{i=1}^{M}\gamma_{\phi,i}\right)}{\prod_{i=1}^{M}\Gamma(\gamma_{\phi,i})}\prod_{i=1}^{M}p_{\phi,i}^{\gamma_{\phi,i}-1}
=\underbrace{\frac{\Gamma\left(\sum_{i=1}^{M}\left(\gamma_{\phi,i}+\sum_{n=1}^{N}x_{m,i}[n]\right)\right)}{\prod_{i=1}^{M}\Gamma\left(\gamma_{\phi,i}+\sum_{n=1}^{N}x_{m,i}[n]\right)}\prod_{i=1}^{M}p_{\phi,i}^{\gamma_{\phi,i}+\sum_{n=1}^{N}x_{m,i}[n]-1}}_{f(p_\phi\mid\{x_m[n]\})}
\times
\underbrace{\frac{\Gamma\left(\sum_{i=1}^{M}\gamma_{\phi,i}\right)}{\Gamma\left(\sum_{i=1}^{M}\left(\gamma_{\phi,i}+\sum_{n=1}^{N}x_{m,i}[n]\right)\right)}\prod_{i=1}^{M}\frac{\Gamma\left(\gamma_{\phi,i}+\sum_{n=1}^{N}x_{m,i}[n]\right)}{\Gamma(\gamma_{\phi,i})}}_{f(\{x_m[n]\})},
\]

concluding the proof.

In testing hypotheses that deterministically specify φ as in (20.21), the local LLR for each modality can be computed using Lemma 20.7, for example,

\[
L_g[N]=\log\frac{f\left(\{x_g[n]\}_{n=1}^{N}\mid\phi=\phi_1\right)}{f\left(\{x_g[n]\}_{n=1}^{N}\mid\phi=\phi_0\right)}.
\]

Then, under a centralized setup, SPRT can be applied as in (20.23) and (20.24) using the global LLR

L[N] = L_g[N] + L_p[N] + L_m[N],   (20.32)

or, under a decentralized setup, each node reports event-based samples of its local LLR, and the FC applies SPRT using L̃[N] = L̃_g[N] + L̃_p[N] + L̃_m[N].

On the contrary, while testing hypotheses that statistically specify φ as in (20.25), the global LLR is computed using the results of Lemma 20.7 in (20.26). In this case, for decentralized detection, each node can only compute the functions of its observations that appear in the conditional joint distributions, given by (20.29)–(20.31), and do not depend on φ. For example, the Gaussian node from (20.29) can compute

\[
\frac{\sum_{n=1}^{N}x_g[n]^2}{2\sigma^2}\quad\text{and}\quad\frac{\sum_{n=1}^{N}x_g[n]}{\sigma^2},\qquad(20.33)
\]

and send their event-based samples to the FC, which can effectively recover such samples, as will be shown next, and use them in (20.29). In this case, event-based sampling is used to transmit only some simple functions of the observations, which need further processing at the FC.
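The Gaussian posterior moments used in the proof above can be sanity-checked numerically. The sketch below (plain Python; the values of μ̄_φ, σ̄_φ, σ, and the observations are made up for illustration and are not from the chapter) integrates the unnormalized posterior of μ_φ on a grid and compares the resulting mean and variance with the closed-form expressions.

```python
import math

# Hypothetical parameter values for illustration (not from the chapter).
mu_bar, sigma_bar = 1.0, 2.0        # prior: mu_phi ~ N(mu_bar, sigma_bar^2)
sigma = 1.5                          # observation noise standard deviation
xs = [0.8, 2.1, 1.4, 0.2, 1.9]       # observed x_g[1..N]
N = len(xs)

# Closed-form posterior moments from the completed square in the proof.
prec = 1.0 / sigma_bar**2 + N / sigma**2
post_mean = (mu_bar / sigma_bar**2 + sum(xs) / sigma**2) / prec
post_var = 1.0 / prec

# Numerical check: moments of the unnormalized posterior on a fine grid.
sum_x = sum(xs)
sum_x2 = sum(x * x for x in xs)

def unnorm(mu):
    # exp(-sum_n (x[n]-mu)^2 / (2 sigma^2) - (mu-mu_bar)^2 / (2 sigma_bar^2))
    return math.exp(-(sum_x2 - 2.0 * mu * sum_x + N * mu * mu) / (2.0 * sigma**2)
                    - (mu - mu_bar)**2 / (2.0 * sigma_bar**2))

lo, hi, M = post_mean - 10.0, post_mean + 10.0, 40001
h = (hi - lo) / (M - 1)
grid = [lo + i * h for i in range(M)]
w = [unnorm(m) for m in grid]
Z = sum(w) * h
num_mean = sum(m * wi for m, wi in zip(grid, w)) * h / Z
num_var = sum((m - num_mean)**2 * wi for m, wi in zip(grid, w)) * h / Z
```

The numerically integrated mean and variance agree with the conjugate-prior formulas to within the quadrature error of the grid.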
Event-Based Statistical Signal Processing 471

Nevertheless, the advantages of using event-based sampling on the functions of x_g[n], instead of conventional uniform sampling on x_g[n] itself, are still significant. First, the error induced by using the recovered functions in the highly nonlinear expression of (20.29) is smaller than that resulting from using the recovered observations in (20.29), because the transmission loss grows with the processing at the FC. Second, the transmission rate can be considerably lower than that of uniform sampling because only the important changes in the functions are reported, censoring the uninformative observations.

The Poisson and multinomial processes are inherently event-based, as each observation x_p[n]/x_m[n] marks an event occurrence after a random (e.g., exponentially distributed for the Poisson process) waiting time since x_p[n−1]/x_m[n−1]. Therefore, each new observation x_p[n]/x_m[n] is reported to the FC. Moreover, they take integer values (x_m[n] can be represented by the index of its nonzero element); thus, no quantization error takes place.

FIGURE 20.7
Level-crossing sampling with hysteresis applied to y[n]. The recovered signal ỹ[n] at the FC and the transmitted bits are shown. Multiple crossings are handled by transmitting additional bits. Overshoots {q_i} take effect individually (no overshoot accumulation).

20.2.3.4 Decentralized Implementation

Due to the nonlinear processing of recovered messages at the FC [cf. (20.33)], in the decentralized testing of hypotheses with statistical descriptions of φ, we should take care of the overshoot problem more carefully. In level-triggered sampling, the change in the signal is measured with respect to the signal value at the most recent sampling time, which possibly includes an overshoot and hence is not perfectly available to the FC even if a multibit scheme is used to quantize the overshoot. Therefore, the past quantization errors, as well as the current one, cumulatively decrease the precision of the recovered signal at the FC. The accumulation of quantization errors may not be of practical interest if the individual errors are small (i.e., a sufficiently large number of bits is used for quantization and/or the jumps in the signal are sufficiently small) and stay small after the processing at the FC, and the FC makes a quick decision (i.e., the constraints on detection error probabilities are not very stringent). However, causing an avalanche effect, it poses a significant problem for the asymptotic decision time performance of the decentralized detector (e.g., in a regime of large decision times due to stringent error probability constraints) even if the individual errors at the FC are small.

In [31], the use of fixed reference levels was proposed to improve the asymptotic performance of level-triggered sampling, which corresponds to LCSH (see Figure 20.7). Since LCSH handles the overshoot problem better than level-triggered sampling, it suits the case in (20.33), where the FC performs nonlinear processing on the recovered signal, better. We here show that it also achieves a better asymptotic performance, at the expense of a much more complicated nonasymptotic performance analysis. Furthermore, we consider multiple crossings of sampling levels due to large jumps in the signal (Figure 20.7).

Sampling a signal y[n] via LCSH with level spacing Δ, a sample is taken whenever an upper or lower sampling level is crossed, as shown in Figure 20.7. Specifically, the ith sample is taken at time

t_i ≜ min{ n > t_{i−1} : |y[n] − ψ_{i−1}Δ| ≥ Δ },   (20.34)

where ψ_{i−1} is the sampling level, in terms of Δ, that was most recently crossed. In general, y[t_i] may cross multiple levels; that is, the number of level crossings is

η_i ≜ ⌊ |y[t_i] − ψ_{i−1}Δ| / Δ ⌋ ≥ 1.   (20.35)

In addition to the sign bit

b_{i,1} = sign( y[t_i] − ψ_{i−1}Δ ),   (20.36)

which encodes the first crossing with its direction, we send

r ≜ ⌈ (η_i − 1)/2 ⌉,   (20.37)

more bits b_{i,2}, . . . , b_{i,r+1}, where each following bit 1/0 represents a double/single crossing. For instance, the bit sequence 0110, where the first 0 denotes a downward crossing (i.e., b_{i,1} = −1), is sent for −7Δ < y[t_i] − ψ_{i−1}Δ ≤ −6Δ.

In that way, the FC can obtain η_i from the received bits and keep track of the most recently crossed level as

ψ_i = ψ_{i−1} + b_{i,1} η_i.   (20.38)

It approximates y[n] with

ỹ[n] = ψ_i Δ,  t_i ≤ n < t_{i+1}.   (20.39)

As a result, only the current overshoot causes error in ỹ[n]; that is, overshoots do not accumulate, as opposed to level-triggered sampling.
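A minimal sketch of the transmission and recovery scheme in (20.34) through (20.39), in plain Python (the level spacing Δ and the test signal in the usage example are our own illustrative choices, not values from the chapter):

```python
import math

def lcsh_encode(y, delta):
    """Sensor side: level-crossing sampling with hysteresis, (20.34)-(20.38).
    Emits (time, bits) packets; bits[0] is the sign bit b_{i,1} (stored as +-1),
    and each following bit encodes a double (1) or single (0) crossing."""
    psi = 0                                    # most recently crossed level (units of delta)
    packets = []
    for n, yn in enumerate(y):
        diff = yn - psi * delta
        if abs(diff) >= delta:                 # sampling condition (20.34)
            eta = int(abs(diff) // delta)      # number of crossed levels (20.35)
            sign = 1 if diff > 0 else -1       # sign bit (20.36)
            bits, left = [sign], eta - 1
            for _ in range(math.ceil((eta - 1) / 2)):   # r extra bits, (20.37)
                bits.append(1 if left >= 2 else 0)
                left -= 2 if left >= 2 else 1
            psi += sign * eta                  # level update (20.38)
            packets.append((n, bits))
    return packets

def lcsh_decode(packets, delta, length):
    """FC side: recover psi_i from the bits and build the staircase (20.39)."""
    y_tilde, psi, k = [], 0, 0
    for n in range(length):
        if k < len(packets) and packets[k][0] == n:
            _, bits = packets[k]
            eta = 1 + sum(2 if b else 1 for b in bits[1:])
            psi += bits[0] * eta
            k += 1
        y_tilde.append(psi * delta)
    return y_tilde
```

For example, a jump of 8.9Δ from the last crossed level yields η = 8 and is conveyed with the sign bit plus four extra bits (1, 1, 1, 0, i.e., three double crossings and one single crossing); after decoding, the recovered staircase is always within Δ of the signal, so overshoots indeed do not accumulate.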

This is especially important when reporting signals that are processed further at the FC [cf. (20.33)]. It also ensures order-2 asymptotic optimality with a finite number of bits per sample when used to transmit iid local LLRs from unimodal sources (i.e., the case in Theorem 20.1).

Theorem 20.2

Consider the decentralized detector that uses the LCSH-based transmission scheme given in (20.34)–(20.39) to report the local LLRs, {L_k[n]}, from iid nodes to the FC, and applies the SPRT procedure at the FC substituting the recovered global LLR, L̃[n] = ∑_{k=1}^K L̃_k[n], in (20.23) and (20.24). It is order-2 asymptotically optimum, that is,

E_j[S] − E_j[S_o] = O(1),  j = 0, 1,  as α, β → 0,   (20.40)

where S and S_o are the decision times of the decentralized detector and the optimum (centralized) SPRT satisfying the same (type-I and type-II) error probability bounds α and β [cf. (20.5)].

PROOF  Assuming finite and nonzero Kullback–Leibler (KL) information numbers −E_0[L[1]], E_1[L[1]], for order-2 asymptotic optimality it suffices to show that

E_1[L[S]] − E_1[L[S_o]] = O(1),  as α, β → 0.   (20.41)

The proof under H_0 follows similarly. Let us start by writing

E_1[L[S]] = E_1[ L̃[S] + ( L[S] − L̃[S] ) ].   (20.42)

Thanks to the multibit transmission scheme based on LCSH, no overshoot accumulation takes place, and thus the absolute errors satisfy

|L̃_k[n] − L_k[n]| < Δ,  ∀k, n,
|L̃[n] − L[n]| = | ∑_{k=1}^K ( L̃_k[n] − L_k[n] ) | ≤ ∑_{k=1}^K |L̃_k[n] − L_k[n]| < KΔ,  ∀n.   (20.43)

The approximate LLR L̃[S] at the stopping time exceeds A or −B by a finite amount, that is,

L̃[S] < A + C  or  L̃[S] > −B − C,   (20.44)

where C is a constant. Now let us analyze how the stopping threshold A behaves as α, β → 0. Start with

α = P_0( L̃[S] ≥ A ) = E_0[ 1_{{L̃[S] ≥ A}} ],

where, applying a change of measure using e^{−L[S]} as in Lemma 20.1, we can write

α = E_1[ e^{−L[S]} 1_{{L̃[S] ≥ A}} ] = E_1[ e^{−L̃[S] + L̃[S] − L[S]} 1_{{L̃[S] ≥ A}} ].

From (20.43),

α ≤ e^{−A + KΔ},  that is,  A ≤ |log α| + KΔ.   (20.45)

Combining (20.42)–(20.45), we get

E_1[L[S]] ≤ |log α| + 2KΔ + C.   (20.46)

In SPRT with discrete-time observations, due to the overshoot problem, the KL divergence at the stopping time is larger than that in the no-overshoot case [53, page 21], that is,

E_1[L[S_o]] ≥ (1 − β) log((1 − β)/α) + β log(β/(1 − α))
  = (1 − β)|log α| − β|log β| + (1 − β) log(1 − β) − β log(1 − α).   (20.47)

From (20.46) and (20.47),

E_1[L[S]] − E_1[L[S_o]] ≤ 2KΔ + C + β|log α| + β|log β| − (1 − β) log(1 − β) + β log(1 − α),

where the last three terms tend to zero as α, β → 0. Assuming α and β tend to zero at comparable rates, the term β|log α| also tends to zero, leaving us with the constant 2KΔ + C. The decentralized detector applies the SPRT procedure with a summary of observations; hence, it cannot satisfy the error probability constraints with a smaller KL divergence than that of the centralized SPRT, that is, E_1[L[S]] − E_1[L[S_o]] ≥ 0. This concludes the proof.

In fact, the proof for (20.40) holds also for the case of multimodal sources in (20.22) and (20.32), where the local LLRs are independent but not identically distributed. Since in this non-iid case SPRT may not be optimum, we cannot claim asymptotic optimality by satisfying (20.40). However, centralized SPRT still serves as a very important benchmark; hence, (20.40) is a valuable result also for the multimodal case.

The power of Theorem 20.2 lies in the fact that the LCSH-based decentralized detector achieves order-2 asymptotic optimality by using a finite (in most cases small) number of bits per sample. Order-2 asymptotic optimality resolves the overshoot problem because it is the state-of-the-art performance in the no-overshoot case (i.e., with continuous-time band-limited observations), achieved by LCSH, which coincides with level-triggered sampling in this case.
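The no-accumulation bound (20.43) can be illustrated with a short simulation (plain Python; the random-walk LLRs, K, and Δ below are made-up illustration values): tracking each local LLR on its LCSH level grid keeps the recovered global LLR within KΔ at every time instant.

```python
import random

# Made-up illustration values (not from the chapter).
random.seed(7)
K, T, delta = 5, 200, 0.5
psi = [0] * K          # last crossed level of each node (units of delta)
L = [0.0] * K          # true local LLRs L_k[n]
worst = 0.0
for n in range(T):
    for k in range(K):
        L[k] += random.gauss(0.1, 1.0)         # LLR increment at node k
        diff = L[k] - psi[k] * delta
        if abs(diff) >= delta:                  # LCSH level update, cf. (20.34)-(20.38)
            eta = int(abs(diff) // delta)
            psi[k] += (1 if diff > 0 else -1) * eta
    # FC-side error |L[n] - L_tilde[n]| for the global LLR, cf. (20.43)
    worst = max(worst, abs(sum(L) - sum(psi) * delta))
```

Because each node's tracking error stays strictly below Δ after every update, the worst-case global error never reaches KΔ, regardless of the realized sample path.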

On the contrary, for order-2 asymptotic optimality with discrete-time observations, the number of bits per sample required by the level-triggered-sampling-based detector tends to infinity at a reasonably low rate, log |log α| [73, Section IV-B].

In the LCSH-based detector, to avoid overshoot accumulation, the overshoot of the last sample is included toward the new sample, correlating the two samples. Consequently, samples (i.e., messages of change in the signal) that result from LCSH are neither independent nor identically distributed. As opposed to level-triggered sampling, in which samples are iid and hence form a renewal process, the statistical descriptions of samples in LCSH are quite intractable. The elegant (nonasymptotic and asymptotic) results obtained for level-triggered sampling in [74] therefore do not apply to LCSH here.

20.3 Decentralized Estimation

In this section, we are interested in sequentially estimating a vector of parameters (i.e., regression coefficients) θ ∈ R^p at a random stopping time S in the following linear (regression) model:

x[n] = h[n]^T θ + w[n],  n ∈ ℕ,   (20.48)

where x[n] ∈ R is the observed sample, h[n] ∈ R^p is the vector of regressors, and w[n] ∈ R is the additive noise. We consider the general case in which h[n] is random and observed at time n, which covers the deterministic h[n] case as a special case. This linear model is commonly used in many applications. For example, in system identification, θ is the unknown system coefficients, h[n] is the (random) input applied to the system, and x[n] is the output at time n. Another example is the estimation of wireless (multiple-access) channel coefficients, in which θ is the unknown channel coefficients, h[n] is the transmitted (random) pilot signal, x[n] is the received signal, and w[n] is the additive channel noise.

In (20.48), at each time n, we observe the sample x[n] and the vector h[n]; hence, {(x[m], h[m])}_{m=1}^n are available. We assume {w[n]} are iid with E[w[n]] = 0 and Var(w[n]) = σ². The least squares (LS) estimator minimizes the sum of squared errors, that is,

θ̂_N = arg min_θ ∑_{n=1}^N ( x[n] − h[n]^T θ )²,   (20.49)

and is given by

θ̂_N = ( ∑_{n=1}^N h[n] h[n]^T )^{−1} ∑_{n=1}^N h[n] x[n] = ( H_N^T H_N )^{−1} H_N^T x_N,   (20.50)

where H_n = [h[1], . . . , h[n]]^T and x_n = [x[1], . . . , x[n]]^T. Note that spatial diversity (i.e., a vector of observations and a regressor matrix at time n) can be easily incorporated in (20.48) in the same way we deal with temporal diversity. Specifically, in (20.49) and (20.50), we would also sum over the spatial dimensions.

Under Gaussian noise, w[n] ∼ N(0, σ²), the LS estimator coincides with the minimum variance unbiased estimator (MVUE) and achieves the CRLB, that is, Cov(θ̂_n | H_n) = CRLB_n. To compute the CRLB, we first write, given θ and H_n, the log-likelihood of the vector x_n as

\[
\mathcal{L}_n=\log f(x_n\mid\theta,H_n)=-\sum_{m=1}^{n}\frac{(x[m]-h[m]^T\theta)^2}{2\sigma^2}-\frac{n}{2}\log(2\pi\sigma^2).\qquad(20.51)
\]

Then, we have

CRLB_n = ( E[ −∂²L_n/∂θ² | H_n ] )^{−1} = σ² U_n^{−1},   (20.52)

where E[ −∂²L_n/∂θ² | H_n ] is the Fisher information matrix and U_n ≜ H_n^T H_n is a nonsingular matrix. Since E[x_n | H_n] = H_n θ and Cov(x_n | H_n) = σ² I, from (20.50) we have E[θ̂_n | H_n] = θ and Cov(θ̂_n | H_n) = σ² U_n^{−1}; thus, from (20.52), Cov(θ̂_n | H_n) = CRLB_n. Note that the maximum likelihood (ML) estimator that maximizes (20.51) coincides with the LS estimator in (20.50).

In general, the LS estimator is the best linear unbiased estimator (BLUE). In other words, any linear unbiased estimator of the form A_n x_n with A_n ∈ R^{p×n}, where E[A_n x_n | H_n] = θ, has a covariance no smaller than that of the LS estimator in (20.50), that is, Cov(A_n x_n | H_n) ≥ σ² U_n^{−1} in the positive semidefinite sense. To see this result, we write A_n = ( H_n^T H_n )^{−1} H_n^T + B_n for some B_n ∈ R^{p×n}; unbiasedness implies B_n H_n = 0, and then Cov(A_n x_n | H_n) = σ² U_n^{−1} + σ² B_n B_n^T, where B_n B_n^T is a positive semidefinite matrix.

The recursive least squares (RLS) algorithm enables us to compute θ̂_n in a recursive way as follows:

θ̂_n = θ̂_{n−1} + q_n ( x[n] − h[n]^T θ̂_{n−1} ),
where  q_n = P_{n−1} h[n] / ( 1 + h[n]^T P_{n−1} h[n] ),   (20.53)
and  P_n = P_{n−1} − q_n h[n]^T P_{n−1},

where q_n ∈ R^p is a gain vector and P_n = U_n^{−1}. While applying RLS, we first initialize θ̂_0 = 0 and P_0 = δ^{−1} I, where 0 represents a zero vector and δ is a small number, and then at each time n compute q_n, θ̂_n, and P_n as in (20.53).
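The recursion in (20.53) is easy to check against its defining property: with noise-free data it must recover θ exactly, up to the small bias introduced by the δ-regularized initialization P_0 = δ^{−1} I. A minimal plain-Python sketch (hypothetical data, not the chapter's code):

```python
import random

def rls(xs, hs, delta=1e-8):
    """Recursive least squares, eq. (20.53), with theta_0 = 0, P_0 = delta^{-1} I."""
    p = len(hs[0])
    theta = [0.0] * p
    P = [[(1.0 / delta if i == j else 0.0) for j in range(p)] for i in range(p)]
    for x, h in zip(xs, hs):
        Ph = [sum(P[i][j] * h[j] for j in range(p)) for i in range(p)]
        denom = 1.0 + sum(h[i] * Ph[i] for i in range(p))
        q = [v / denom for v in Ph]                       # gain vector q_n
        err = x - sum(h[i] * theta[i] for i in range(p))  # innovation x[n] - h[n]^T theta_{n-1}
        theta = [theta[i] + q[i] * err for i in range(p)]
        hP = [sum(h[i] * P[i][j] for i in range(p)) for j in range(p)]
        P = [[P[i][j] - q[i] * hP[j] for j in range(p)] for i in range(p)]  # P_n = U_n^{-1}
    return theta, P
```

The returned P_n is exactly the quantity needed by the stopping rules discussed next, since it equals the inverse of the (regularized) Fisher-information matrix U_n up to the factor σ².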

20.3.1 Background

Energy constraints are inherent to wireless sensor networks [1]. Since data transmission is the primary source of energy consumption, it is essential to keep transmission rates low in wireless sensor networks, resulting in a decentralized setup. Decentralized parameter estimation is a fundamental task performed in wireless sensor networks [5,10,14,35,45,48,51,52,54,68,69,78]. In sequential estimation, the objective is to minimize the (average) number of observations for a given target accuracy level [36]. To that end, a sequential estimator (S, θ̂_S), as opposed to a traditional fixed-sample-size estimator, is equipped with a stopping rule which determines an appropriate time S to stop taking new observations based on the observation history. Hence, the stopping time S (i.e., the number of observations used in estimation) is a random variable. Endowed with a stopping mechanism, a sequential estimator saves not only time but also energy, both of which are critical resources. In particular, it avoids unnecessary data processing and transmission.

Decentralized parameter estimation has been mainly studied under two different network topologies. In the first one, sensors communicate to an FC that performs estimation based on the received information (e.g., [14,35,45,48,51,68]). The other commonly studied topology is called an ad hoc network, in which there is no designated FC, but sensors compute their local estimators and communicate them through the network (e.g., [5,10,52,54,78]). Decentralized estimation under both network topologies is reviewed in [69]. Many existing works consider parameter estimation in linear models (e.g., [10,14,35,45,54,68]), whereas in [5,48,51,52,69,78] a general nonlinear signal model is assumed. The majority of existing works on decentralized estimation (e.g., [10,14,35,45,48,51,52,54,68,69]) study fixed-sample-size estimation. There are a few works, such as [5,16], that consider sequential decentralized parameter estimation. Nevertheless, [5] assumes that sensors transmit real numbers, and [16] focuses on continuous-time observations, which can be seen as practical limitations.

In decentralized detection [17,73,74,76] and estimation [75], level-triggered sampling (cf. Figure 20.3), an adaptive sampling technique which infrequently transmits a few bits, for example, one bit, from sensors to the FC, has been used to achieve low-rate transmission. It has also been shown that the decentralized schemes based on level-triggered sampling significantly outperform their counterparts based on conventional uniform sampling in terms of average stopping time. We here use a form of level-triggered sampling that infrequently transmits a single pulse from sensors to the FC and, at the same time, achieves a close-to-optimum average stopping time performance [76].

The stopping capability of sequential estimators comes with the cost of sophisticated analysis. In most cases, it is not possible with discrete-time observations to find an optimum sequential estimator that attains the sequential Cramér–Rao lower bound (CRLB) if the stopping time S is adapted to the complete observation history [20]. Alternatively, in [22] and more recently in [16,75], it was proposed to restrict S to stopping times that are adapted to a specific subset of the complete observation history, which leads to simple optimum solutions. This idea of using a restricted stopping time first appeared in [22] with no optimality result. In [16], with continuous-time observations, a sequential estimator with a restricted stopping time was shown to achieve the sequential version of the CRLB for scalar parameter estimation. In [75], for scalar parameter estimation with discrete-time observations, a similar sequential estimator was shown to achieve the conditional sequential CRLB for the same restricted class of stopping times.

We deal with discrete-time observations in this section. In Section 20.3.2, the optimum sequential estimator that achieves the conditional sequential CRLB for a certain class of stopping times is discussed. We then develop in Section 20.3.3 a computation- and energy-efficient decentralized scheme based on level-triggered sampling for sequential estimation of vector parameters.

20.3.2 Optimum Sequential Estimator

In this section, we aim to find the optimal pair (S, θ̂_S) of stopping time and estimator corresponding to the optimal sequential estimator. The stopping time for a sequential estimator is determined according to a target estimation accuracy. In general, the average stopping time is minimized subject to a constraint on the estimation accuracy, which is a function of the estimator covariance, that is,

min_{S, θ̂_S} E[S]  s.t.  f( Cov(θ̂_S) ) ≤ C,   (20.54)

where f(·) is a function from R^{p×p} to R and C ∈ R is the target accuracy level.

The accuracy function f should be a monotonic function of the covariance matrix Cov(θ̂_S), which is positive semidefinite, to make consistent accuracy assessments; for example, f( Cov(θ̂_S) ) > f( Cov(θ̂_{S′}) ) for S < S′ since Cov(θ̂_S) ≻ Cov(θ̂_{S′}) in the positive definite sense. Two popular and easy-to-compute choices are the trace Tr(·), which corresponds to the mean squared error (MSE), and the Frobenius norm ‖·‖_F.

Before handling the problem in (20.54), let us explain why we are interested in restricted stopping times that are adapted to a subset of the observation history.

20.3.2.1 Restricted Stopping Time

Denote by {F_n} the filtration that corresponds to the samples {x[1], . . . , x[n]}, where F_n = σ{x[1], . . . , x[n]} is the σ-algebra generated by the samples observed up to time n, that is, the accumulated history related to the observed samples, and F_0 is the trivial σ-algebra. Similarly, we define the filtration {H_n}, where H_n = σ{h[1], . . . , h[n]} and H_0 is again the trivial σ-algebra. It is known that, in general, with discrete-time observations and an unrestricted, that is, {F_n ∪ H_n}-adapted, stopping time, the sequential CRLB is not attainable under any noise distribution except for the Bernoulli noise [20]. On the contrary, in the case of continuous-time observations with continuous paths, the sequential CRLB is attained by the LS estimator with an {H_n}-adapted stopping time that depends only on H_S [16]. Moreover, in the following lemma, we show that, with discrete-time observations, the LS estimator attains the conditional sequential CRLB for the {H_n}-adapted stopping times.

Lemma 20.8

With a monotonic accuracy function f and an {H_n}-adapted stopping time S, we can write

f( Cov(θ̂_S | H_S) ) ≥ f( σ² U_S^{−1} ),   (20.55)

for all unbiased estimators under Gaussian noise, and for all linear unbiased estimators under non-Gaussian noise, and the LS estimator

θ̂_S = U_S^{−1} V_S,  V_S ≜ H_S^T x_S,   (20.56)

satisfies the inequality in (20.55) with equality.

PROOF  Since the LS estimator, with Cov(θ̂_n | H_n) = σ² U_n^{−1}, is the MVUE under Gaussian noise and the BLUE under non-Gaussian noise, we write

f( Cov(θ̂_S | H_S) ) = f( E[ ∑_{n=1}^∞ (θ̂_n − θ)(θ̂_n − θ)^T 1_{{S=n}} | H_S ] )
= f( ∑_{n=1}^∞ E[ (θ̂_n − θ)(θ̂_n − θ)^T | H_n ] 1_{{S=n}} )   (20.57)
≥ f( ∑_{n=1}^∞ σ² U_n^{−1} 1_{{S=n}} )   (20.58)
= f( σ² U_S^{−1} ),   (20.59)

for all unbiased estimators under Gaussian noise and for all linear unbiased estimators under non-Gaussian noise. The indicator function 1_{{A}} = 1 if A is true, and 0 otherwise. We used the facts that the event {S = n} is H_n-measurable and that E[ (θ̂_n − θ)(θ̂_n − θ)^T | H_n ] = Cov(θ̂_n | H_n) ≥ σ² U_n^{−1} to write (20.57) and (20.58), respectively.

20.3.2.2 Optimum Conditional Estimator

We are interested in {H_n}-adapted stopping times to use the optimality property of the LS estimator in the sequential sense, shown in Lemma 20.8.

The common practice in sequential analysis minimizes the average stopping time subject to a constraint on the estimation accuracy, which is a function of the estimator covariance. The optimum solution to this classical problem proves to be intractable for even a moderate number of unknown parameters [72]. Hence, it is not a convenient model for decentralized estimation. Therefore, we follow an alternative approach and formulate the problem conditioned on the observed {h[n]} values, which yields a tractable optimum solution for any number of parameters.

In the presence of an ancillary statistic whose distribution does not depend on the parameters to be estimated, such as the regressor matrix H_n, the conditional covariance Cov(θ̂_n | H_n) can be used to assess the accuracy of the estimator more precisely than the (unconditional) covariance, which is in fact the mean of the former (i.e., Cov(θ̂_S) = E[ Cov(θ̂_S | H_S) ]) [12,22]. Motivated by this fact, we propose to reformulate the problem in (20.54) conditioned on H_S, that is,

min_{S, θ̂_S} E[S]  s.t.  f( Cov(θ̂_S | H_S) ) ≤ C.   (20.60)

Note that the constraint in (20.60) is stricter than the one in (20.54), since it requires that θ̂_S satisfy the target accuracy level for each realization of H_S, whereas in (20.54) it is sufficient that θ̂_S satisfy the target accuracy level on average. In other words, in (20.54), even if f( Cov(θ̂_S | H_S) ) > C for some realizations of H_S, we can still satisfy f( Cov(θ̂_S) ) ≤ C. In fact, we can always have f( Cov(θ̂_S) ) = C by using a probabilistic stopping rule such that we sometimes stop above C, that is, f( Cov(θ̂_S | H_S) ) > C, and the rest of the time at or below C, that is, f( Cov(θ̂_S | H_S) ) ≤ C. On the contrary, in (20.60) we always have f( Cov(θ̂_S | H_S) ) ≤ C; moreover, since we observe discrete-time samples, in general we have f( Cov(θ̂_S | H_S) ) < C for each realization of H_S. Hence, the optimal objective value E[S] in (20.54) will, in general, be smaller than that in (20.60). Note that, on the contrary, if we observed continuous-time
" #
processes with continuous paths, then we could always have f( Cov(θ̂_S | H_S) ) = C for each realization of H_S, and thus the optimal objective values of (20.60) and (20.54) would be the same.

Since minimizing S also minimizes E[S], in (20.60) we are required to find the first time that a member of our class of estimators (i.e., unbiased estimators under Gaussian noise and linear unbiased estimators under non-Gaussian noise) satisfies the constraint f( Cov(θ̂_S | H_S) ) ≤ C, as well as the estimator that attains this earliest stopping time. From Lemma 20.8, it is seen that the LS estimator, given by (20.56), among its competitors, achieves the best accuracy level f( σ² U_S^{−1} ) at any stopping time S. Hence, for the conditional problem, the optimum sequential estimator is composed of the stopping time

S = min{ n ∈ ℕ : f( σ² U_n^{−1} ) ≤ C },   (20.61)

and the LS estimator

θ̂_S = U_S^{−1} V_S,   (20.62)

which can be computed recursively as in (20.53). The recursive computation of U_n^{−1} = P_n in the test statistic in (20.61) is also given in (20.53).

Note that for an accuracy function f such that f( σ² U_n^{−1} ) = σ² f( U_n^{−1} ), for example, Tr(·) and ‖·‖_F, we can use the following stopping time:

S = min{ n ∈ ℕ : f( U_n^{−1} ) ≤ C′ },   (20.63)

where C′ = C/σ² is the relative target accuracy with respect to the noise power. Hence, given C′, we do not need to know the noise variance σ² to run the test given by (20.63). Note that U_n = H_n^T H_n is a nondecreasing positive semidefinite matrix, that is, U_n ⪰ U_{n−1}, ∀n, in the positive semidefinite sense. Thus, from the monotonicity of f, the test statistic f( σ² U_n^{−1} ) is a nonincreasing scalar function of time. Specifically, for the accuracy functions Tr(·) and ‖·‖_F, we can show that if the minimum eigenvalue of U_n tends to infinity as n → ∞, then the stopping time is finite, that is, S < ∞.

In the conditional problem, for any n, we have a simple stopping rule given in (20.63), which uses the target accuracy level C/σ² as its threshold, hence known beforehand. For the special case of scalar parameter estimation, we do not need a function f to assess the accuracy of the estimator because, instead of a covariance matrix, we now have a variance σ²/u_n, where u_n = ∑_{m=1}^n h[m]² and h[n] is the scaling coefficient in (20.48). Hence, from (20.62) and (20.63), the optimum sequential estimator in the scalar case is given by

S = min{ n ∈ ℕ : u_n ≥ 1/C′ },  θ̂_S = v_S / u_S,   (20.64)

where v_n = ∑_{m=1}^n h[m] x[m] and u_n/σ² is the Fisher information at time n. That is, we stop the first time the gathered Fisher information exceeds the threshold 1/C, which is known.

20.3.3 Decentralized Estimator

In this section, we propose a computation- and energy-efficient decentralized estimator based on the optimum conditional sequential estimator and level-triggered sampling. Consider a network of K distributed sensors and an FC which is responsible for determining the stopping time and computing the estimator. In practice, due to the stringent energy constraints, sensors must infrequently convey low-rate information to the FC, which is the main concern in the design of a decentralized sequential estimator.

As in (20.48), each sensor k observes

x_k[n] = h_k[n]^T θ + w_k[n],  n ∈ ℕ,  k = 1, . . . , K,   (20.65)

as well as the regressor vector h_k[n] = [h_{k,1}[n], . . . , h_{k,p}[n]]^T at time n, where the {w_k[n]}_{k,n} are independent and zero-mean, that is, E[w_k[n]] = 0, ∀k, n, and Var(w_k[n]) = σ_k², ∀n. Then, similar to (20.50), the weighted least squares (WLS) estimator

\[
\hat\theta_n=\arg\min_\theta\sum_{k=1}^{K}\sum_{m=1}^{n}\frac{\left(x_k[m]-h_k[m]^T\theta\right)^2}{\sigma_k^2},
\]

is given by

\[
\hat\theta_n=\left(\sum_{k=1}^{K}\sum_{m=1}^{n}\frac{h_k[m]h_k[m]^T}{\sigma_k^2}\right)^{-1}\sum_{k=1}^{K}\sum_{m=1}^{n}\frac{h_k[m]x_k[m]}{\sigma_k^2}=\bar U_n^{-1}\bar V_n,\qquad(20.66)
\]

where Ū_n^k ≜ (1/σ_k²) ∑_{m=1}^n h_k[m] h_k[m]^T, V̄_n^k ≜ (1/σ_k²) ∑_{m=1}^n h_k[m] x_k[m], Ū_n = ∑_{k=1}^K Ū_n^k, and V̄_n = ∑_{k=1}^K V̄_n^k. As before, it can be shown that the WLS estimator θ̂_n in (20.66) is the BLUE under general noise distributions. Moreover, in the Gaussian noise case, where w_k[n] ∼ N(0, σ_k²) ∀n for each k, θ̂_n is also the MVUE.

Following the steps in Section 20.3.2.2, it is straightforward to show that the optimum sequential estimator for the conditional problem in (20.60) is given by the stopping time

S = min{ n ∈ ℕ : f( Ū_n^{−1} ) ≤ C },   (20.67)

and the WLS estimator θ̂_S, given by (20.66).
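The scalar stopping rule and estimator in (20.64) can be sketched in a few lines of plain Python (the data stream and the value of C′ in the usage example are made up for illustration):

```python
def sequential_scalar(samples, C_rel):
    """Scalar optimum sequential estimator, cf. (20.63)-(20.64):
    stop once u_n = sum_m h[m]^2 reaches 1/C', then theta_hat = v_S / u_S.
    `samples` yields (x[n], h[n]) pairs; C_rel is C' = C / sigma^2."""
    u = v = 0.0
    for n, (x, h) in enumerate(samples, start=1):
        u += h * h          # running (scaled) Fisher information u_n
        v += h * x          # running correlation v_n = sum_m h[m] x[m]
        if u >= 1.0 / C_rel:        # stopping rule (20.64)
            return n, v / u
    raise ValueError("accuracy target not reached with the given samples")
```

Note that the threshold 1/C′ depends only on the target accuracy, so the rule can be run without knowing the noise variance, exactly as discussed for (20.63).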

where all local observations until time n, that is, of Ūnn vanish, and thus we have Ūnn ≈ Dnn and
{( xk [m], hk [m])}kK,n
=1,m=1 , are available to the FC. Local Tr(Ū − −1
n ) ≈ Tr( D n ). For the general case where we
1

processes {Ū n }k,n and {V̄n }k,n are used to compute the might have E[hk,i [n]hk,j [n]] = 0 for some k and i = j,
k k

stopping time and the estimator as in (20.67) and (20.66), using the diagonal matrix Dn we write
respectively. On the contrary, in a decentralized system,   −1 
k ' (
3
the FC can compute approximations U n and Vn and then3 k − − −
Tr Ū n 1
= Tr 1/2
D n D n Ū n Dn 1/2 1/2
D 1/2
,
use these approximations to compute the stopping time   n
and estimator as in (20.67) and (20.66), respectively. Rn
(20.71)
' (
20.3.3.1 Linear Complexity
= Tr D− n
1/2 −1 −1/2
Rn Dn ,
If each sensor k reports Ū n ∈ R
k p × p and V̄n ∈ R to the
k p ' (
FC in a straightforward way, then O( p2 ) terms need to = Tr D− 1 −1
n Rn . (20.72)
be transmitted, which may not be practical, especially
for large p, in a decentralized setup. Similarly, in the lit- Note that each entry rn,ij of the newly defined matrix
erature, the distributed implementation of the Kalman filter, which covers RLS as a special case, through its inverse covariance form, namely the information filter, requires the transmission of a p × p information matrix and a p × 1 information vector (e.g., [63]).

To overcome this problem, considering Tr(·) as the accuracy function f in (20.67), we propose to transmit only the p diagonal entries of Ū_n for each k, yielding linear complexity O(p). Using the diagonal entries of Ū_n, we define the diagonal matrix

\[
D_n \triangleq \operatorname{diag}\left(d_{n,1},\ldots,d_{n,p}\right), \quad \text{where} \quad
d_{n,i} = \sum_{k=1}^{K}\sum_{m=1}^{n}\frac{h_{k,i}[m]^2}{\sigma_k^2}, \quad i = 1,\ldots,p.
\tag{20.68}
\]

We further define the correlation matrix

\[
R = \begin{bmatrix}
1 & r_{12} & \cdots & r_{1p} \\
r_{12} & 1 & \cdots & r_{2p} \\
\vdots & \vdots & \ddots & \vdots \\
r_{1p} & r_{2p} & \cdots & 1
\end{bmatrix},
\tag{20.69}
\]

where

\[
r_{ij} = \frac{\sum_{k=1}^{K} E\!\left[h_{k,i}[n]\,h_{k,j}[n]\right]/\sigma_k^2}
{\sqrt{\left(\sum_{k=1}^{K} E\!\left[h_{k,i}[n]^2\right]/\sigma_k^2\right)
\left(\sum_{k=1}^{K} E\!\left[h_{k,j}[n]^2\right]/\sigma_k^2\right)}},
\quad i,j = 1,\ldots,p.
\]

Proposition 20.1

For sufficiently large n, we can make the following approximations:

\[
\bar{U}_n \approx D_n^{1/2}\, R\, D_n^{1/2}
\quad \text{and} \quad
\operatorname{Tr}\left(\bar{U}_n^{-1}\right) \approx \operatorname{Tr}\left(D_n^{-1} R^{-1}\right).
\tag{20.70}
\]

PROOF The approximations are motivated from the special case where E[h_{k,i}[n] h_{k,j}[n]] = 0, ∀k, i, j = 1, …, p, i ≠ j. In this case, by the law of large numbers, for sufficiently large n, the off-diagonal elements of

\[
R_n \triangleq D_n^{-1/2}\, \bar{U}_n\, D_n^{-1/2}
\tag{20.71}
\]

vanish; each entry r_{n,ij} of R_n is a normalized version of the corresponding entry ū_{n,ij} of Ū_n. Specifically,

\[
r_{n,ij} = \frac{\bar{u}_{n,ij}}{\sqrt{d_{n,i}\, d_{n,j}}}
= \frac{\bar{u}_{n,ij}}{\sqrt{\bar{u}_{n,ii}\, \bar{u}_{n,jj}}},
\quad i,j = 1,\ldots,p,
\]

where the last equality follows from the definition of d_{n,i} in (20.68). Hence, R_n has the same structure as in (20.69), with entries

\[
r_{n,ij} = \frac{\sum_{k=1}^{K}\sum_{m=1}^{n} h_{k,i}[m]\,h_{k,j}[m]/\sigma_k^2}
{\sqrt{\left(\sum_{k=1}^{K}\sum_{m=1}^{n} h_{k,i}[m]^2/\sigma_k^2\right)
\left(\sum_{k=1}^{K}\sum_{m=1}^{n} h_{k,j}[m]^2/\sigma_k^2\right)}},
\quad i,j = 1,\ldots,p.
\tag{20.72}
\]

For sufficiently large n, by the law of large numbers,

\[
r_{n,ij} \approx r_{ij} = \frac{\sum_{k=1}^{K} E\!\left[h_{k,i}[n]\,h_{k,j}[n]\right]/\sigma_k^2}
{\sqrt{\left(\sum_{k=1}^{K} E\!\left[h_{k,i}[n]^2\right]/\sigma_k^2\right)
\left(\sum_{k=1}^{K} E\!\left[h_{k,j}[n]^2\right]/\sigma_k^2\right)}},
\tag{20.73}
\]

and R_n ≈ R, where R is given in (20.69). Hence, for sufficiently large n, we can make the approximations in (20.70) using (20.71) and (20.72).

Then, assuming that the FC knows the correlation matrix R, that is, {E[h_{k,i}[n] h_{k,j}[n]]}_{i,j,k}* and {σ_k²} [cf. (20.69)], it can compute the approximations in (20.70) if the sensors report their local processes {D_n^k}_{k,n} to the FC, where D_n = ∑_{k=1}^{K} D_n^k. Note that each local process D_n^k is p-dimensional, and its entries at time n are given by d_{n,i}^k = ∑_{m=1}^{n} h_{k,i}[m]²/σ_k² [cf. (20.68)].

* The subscripts i and j in the set notation denote i = 1, …, p and j = i, …, p. In the special case where E[h_{k,i}[n]²] = E[h_{ℓ,i}[n]²], ∀k, ℓ = 1, …, K, i = 1, …, p, the correlation coefficients

\[
\left\{ \xi_{ij}^k = \frac{E\!\left[h_{k,i}[n]\,h_{k,j}[n]\right]}
{\sqrt{E\!\left[h_{k,i}[n]^2\right] E\!\left[h_{k,j}[n]^2\right]}}
: i = 1,\ldots,p-1,\; j = i+1,\ldots,p \right\}_k,
\]

together with {σ_k²}, are sufficient statistics, since r_{ij} = (∑_{k=1}^{K} ξ_{ij}^k/σ_k²)/(∑_{k=1}^{K} 1/σ_k²) from (20.73).
478 Event-Based Control and Signal Processing
Hence, we propose that each sensor k sequentially reports the local processes {D_n^k}_n and {V̄_n^k}_n to the FC, achieving linear complexity O(p). On the other side, the FC, using the information received from the sensors, computes the approximations {D̃_n} and {Ṽ_n}, which are then used to compute the stopping time

\[
\tilde{S} = \min\left\{ n \in \mathbb{N} : \operatorname{Tr}\left(\tilde{U}_n^{-1}\right) \le \tilde{C} \right\},
\tag{20.74}
\]

and the estimator

\[
\tilde{\theta}_{\tilde{S}} = \tilde{U}_{\tilde{S}}^{-1}\, \tilde{V}_{\tilde{S}},
\tag{20.75}
\]

similar to (20.67) and (20.66), respectively. The approximations Tr(Ũ_n^{−1}) in (20.74) and Ũ_{S̃} in (20.75) are computed using D̃_n as in (20.70). The threshold C̃ is selected through simulations to satisfy the constraint in (20.60) with equality, that is, Tr(Cov(θ̃_{S̃} | H_{S̃})) = C.

20.3.3.2 Event-Based Transmission

Level-triggered sampling provides a very convenient way of transmitting information in decentralized systems [17,73–76]. Specifically, decentralized methods based on level-triggered sampling, transmitting low-rate information, enable highly accurate approximations and thus high-performance schemes at the FC. They significantly outperform conventional decentralized methods, which sample local processes using traditional uniform sampling and send quantized versions of the samples to the FC [73,75].

Existing methods employ level-triggered sampling to report a scalar local process to the FC. Using a similar procedure to report each distinct entry of Ū_n^k and V̄_n^k, we would need O(p²) parallel procedures, which may be prohibitive in a decentralized setup for large p. Hence, we use the approximations introduced in the previous subsection, achieving linear complexity O(p). Data transmission, and thus energy consumption, also scales linearly with the number of parameters, which may easily become prohibitive for a sensor with a limited battery. We address this energy efficiency issue by infrequently transmitting a single pulse of very short duration, which encodes, in time, the overshoot in level-triggered sampling [76].

We will next describe the proposed decentralized estimator based on level-triggered sampling, in which each sensor nonuniformly samples the local processes {D_n^k}_n and {V̄_n^k}_n and transmits a single pulse for each sample to the FC, and the FC computes {D̃_n} and {Ṽ_n} using the received information.

20.3.3.2.1 Sampling and Recovery of D_n^k

Each sensor k samples each entry d_{n,i}^k of D_n^k at a sequence of random times {s_{m,i}^k}_{m∈ℕ} given by

\[
s_{m,i}^k \triangleq \min\left\{ n \in \mathbb{N} : d_{n,i}^k - d_{s_{m-1,i}^k,i}^k \ge \Delta_i^k \right\}, \quad s_{0,i}^k = 0,
\tag{20.76}
\]

where d_{n,i}^k = ∑_{p=1}^{n} h_{k,i}[p]²/σ_k², d_{0,i}^k = 0, and Δ_i^k > 0 is a constant threshold that controls the average sampling interval. Note that the sampling times {s_{m,i}^k}_m in (20.76) are dynamically determined by the signal to be sampled, that is, by the realizations of d_{n,i}^k. Hence, they are random, whereas sampling times in conventional uniform sampling are deterministic with a certain period. According to the sampling rule in (20.76), a sample is taken whenever the signal level d_{n,i}^k increases by at least Δ_i^k since the last sampling time. Note that d_{n,i}^k = ∑_{p=1}^{n} h_{k,i}[p]²/σ_k² is nondecreasing in n.

After each sampling time s_{m,i}^k, sensor k transmits a single pulse to the FC at time

\[
t_{m,i}^k \triangleq s_{m,i}^k + \delta_{m,i}^k,
\]

indicating that d_{n,i}^k has increased by at least Δ_i^k since the last sampling time s_{m−1,i}^k. The delay δ_{m,i}^k between the transmission time and the sampling time is used to linearly encode the overshoot

\[
q_{m,i}^k \triangleq d_{s_{m,i}^k,i}^k - d_{s_{m-1,i}^k,i}^k - \Delta_i^k,
\tag{20.77}
\]

and is given by

\[
\delta_{m,i}^k = \frac{q_{m,i}^k}{\phi_d} \in [0,1),
\tag{20.78}
\]

where φ_d^{−1} is the slope of the linear encoding function, as shown in Figure 20.8, known to the sensors and the FC.

Assume a global clock, that is, the time index n ∈ ℕ is the same for all sensors and the FC, meaning that the FC knows the potential sampling times. Assume further ultra-wideband (UWB) channels between the sensors and the FC, in which the FC can determine the time of flight of pulses transmitted from the sensors. Then, the FC can measure the transmission delay δ_{m,i}^k if it is bounded by unit time, that is, δ_{m,i}^k ∈ [0,1). To ensure this, from (20.78), we need to have φ_d > q_{m,i}^k, ∀k, m, i. Assuming a bound for the overshoots, that is, q_{m,i}^k < θ_d, ∀k, m, i, we can achieve this by setting φ_d > θ_d.

Consequently, the FC can uniquely decode the overshoot by computing q_{m,i}^k = φ_d δ_{m,i}^k (cf. Figure 20.8), using which it can also find the increment that occurred in d_{n,i}^k during the interval (s_{m−1,i}^k, s_{m,i}^k] as

\[
d_{s_{m,i}^k,i}^k - d_{s_{m-1,i}^k,i}^k = \Delta_i^k + q_{m,i}^k,
\]

from (20.77).
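The sampling rule (20.76) and the delay encoding/decoding of (20.77) and (20.78) can be sketched as follows (a minimal single-sensor, single-dimension simulation; the parameter values and the increment distribution are illustrative assumptions, not the chapter's setup):

```python
import random

# Level-triggered sampling of the nondecreasing signal d_{n,i}^k per
# (20.76), with the overshoot q (20.77) linearly encoded in the
# transmission delay delta = q / phi_d (20.78).
random.seed(1)
delta_thr = 1.0                  # sampling threshold Delta_i^k
theta_d = 0.6                    # bound on the overshoots
phi_d = theta_d + 0.1            # slope parameter, phi_d > theta_d

d = 0.0                          # running signal d_{n,i}^k
last = 0.0                       # value at the last sampling time
pulses = []                      # transmission delays delta_{m,i}^k
for n in range(200):
    d += random.uniform(0.0, theta_d)     # increment h^2 / sigma^2
    if d - last >= delta_thr:             # sampling rule (20.76)
        q = d - last - delta_thr          # overshoot (20.77)
        pulses.append(q / phi_d)          # delay in [0, 1), cf. (20.78)
        last = d

# FC side: decode each overshoot as q = phi_d * delay and reconstruct
# the signal level at the mth sampling time via (20.79).
m = len(pulses)
recon = m * delta_thr + sum(phi_d * delay for delay in pulses)
print(m, recon, last)            # recon matches the last sampled level
```

At the sampling times the FC-side reconstruction is exact up to floating-point error; between samples the staircase approximation lags by less than Δ_i^k plus the overshoot bound.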
Event-Based Statistical Signal Processing 479
It is then possible to reach the signal level d_{s_{m,i}^k,i}^k by accumulating the increments that occurred up to the mth sampling time, that is,

\[
d_{s_{m,i}^k,i}^k = \sum_{\ell=1}^{m}\left(\Delta_i^k + q_{\ell,i}^k\right)
= m\,\Delta_i^k + \sum_{\ell=1}^{m} q_{\ell,i}^k.
\tag{20.79}
\]

Using d_{s_{m,i}^k,i}^k, the FC computes the staircase approximation d̃_{n,i}^k as

\[
\tilde{d}_{n,i}^k = d_{s_{m,i}^k,i}^k, \quad t \in \left[t_{m,i}^k, t_{m+1,i}^k\right),
\tag{20.80}
\]

which is updated when a new pulse is received from sensor k and otherwise kept constant. Such approximate local signals of different sensors are next combined to obtain the approximate global signal d̃_{n,i} as

\[
\tilde{d}_{n,i} = \sum_{k=1}^{K} \tilde{d}_{n,i}^k.
\tag{20.81}
\]

In practice, when the mth pulse in the global order regarding dimension i is received from sensor k_m at time t_{m,i}, instead of computing (20.79) through (20.81), the FC only updates d̃_{n,i} as

\[
\tilde{d}_{t_{m,i},i} = \tilde{d}_{t_{m-1,i},i} + \Delta_i^{k_m} + q_{m,i}, \qquad \tilde{d}_{0,i} = \epsilon,
\tag{20.82}
\]

and keeps it constant when no pulse arrives. We initialize d̃_{n,i} to a small constant ε to prevent dividing by zero while computing the test statistic [cf. (20.83)].

FIGURE 20.8
Illustration of the sampling time s_{m,i}^k, the transmission time t_{m,i}^k, the transmission delay δ_{m,i}^k, and the overshoot q_{m,i}^k. The overshoot q_{m,i}^k < θ_d is encoded in the delay δ_{m,i}^k = t_{m,i}^k − s_{m,i}^k < 1 using the slope φ_d > θ_d.

Note that in general d̃_{t_{m,i},i} ≠ d_{s_{m,i},i}, unlike (20.80), since all sensors do not necessarily sample and transmit at the same time. The approximations {d̃_{n,i}}_i form D̃_n = diag(d̃_{n,1}, …, d̃_{n,p}), which is used in (20.74) and (20.75) to compute the stopping time and the estimator, respectively. Note that to determine the stopping time as in (20.74), we need to compute Tr(Ũ_t^{−1}) using (20.70) at the times {t_m} when a pulse is received from any sensor regarding any dimension. Fortunately, when the mth pulse in the global order is received from sensor k_m at time t_m regarding dimension i_m, we can compute Tr(Ũ_{t_m}^{−1}) recursively as follows:

\[
\operatorname{Tr}\left(\tilde{U}_{t_m}^{-1}\right)
= \operatorname{Tr}\left(\tilde{U}_{t_{m-1}}^{-1}\right)
- \frac{\kappa_{i_m}\left(\Delta_{i_m}^{k_m} + q_m\right)}
{\tilde{d}_{t_m,i_m}\, \tilde{d}_{t_{m-1},i_m}},
\qquad
\operatorname{Tr}\left(\tilde{U}_0^{-1}\right) = \sum_{i=1}^{p} \frac{\kappa_i}{\epsilon},
\tag{20.83}
\]

where κ_i is the ith diagonal element of the inverse correlation matrix R^{−1}, known to the FC. In (20.83), pulse arrival times are assumed to be distinct for the sake of simplicity. In case multiple pulses arrive at the same time, the update rule will be similar to (20.83), except that it will consider all new arrivals together.

20.3.3.2.2 Sampling and Recovery of V̄_n^k

Similar to (20.76), each sensor k samples each entry v̄_{n,i}^k of V̄_n^k at a sequence of random times {ρ_{m,i}^k}_m written as

\[
\rho_{m,i}^k \triangleq \min\left\{ n \in \mathbb{N} :
\left| \bar{v}_{n,i}^k - \bar{v}_{\rho_{m-1,i}^k,i}^k \right| \ge \gamma_i^k \right\},
\quad \rho_{0,i}^k = 0,
\tag{20.84}
\]

where v̄_{n,i}^k = ∑_{p=1}^{n} h_{k,i}[p] y_k[p]/σ_k² and γ_i^k is a constant threshold, available to both sensor k and the FC. See (20.2) for selecting γ_i^k. Since v̄_{n,i}^k is neither increasing nor decreasing, we use the two thresholds γ_i^k and −γ_i^k in the sampling rule given in (20.84).

Specifically, a sample is taken whenever v̄_{n,i}^k increases or decreases by at least γ_i^k since the last sampling time. Then, after a transmission delay

\[
\chi_{m,i}^k = \frac{\eta_{m,i}^k}{\phi_v},
\]

where η_{m,i}^k ≜ |v̄_{ρ_{m,i}^k,i}^k − v̄_{ρ_{m−1,i}^k,i}^k| − γ_i^k is the overshoot, sensor k transmits, at time

\[
\tau_{m,i}^k \triangleq \rho_{m,i}^k + \chi_{m,i}^k,
\]

a single pulse b_{m,i}^k to the FC, indicating whether v̄_{n,i}^k has changed by at least γ_i^k or −γ_i^k since the last sampling time ρ_{m−1,i}^k.
We can simply write b_{m,i}^k as

\[
b_{m,i}^k = \operatorname{sign}\left( \bar{v}_{\rho_{m,i}^k,i}^k - \bar{v}_{\rho_{m-1,i}^k,i}^k \right).
\tag{20.85}
\]

Assume again that (i) there exists a global clock among the sensors and the FC, (ii) the FC determines the channel delay (i.e., time of flight), and (iii) the overshoots are bounded by a constant, that is, η_{m,i}^k < θ_v, ∀k, m, i, and we set φ_v > θ_v. With these assumptions, we ensure that the FC can measure the transmission delay χ_{m,i}^k and accordingly decode the overshoot as η_{m,i}^k = φ_v χ_{m,i}^k. Then, upon receiving the mth pulse b_{m,i} regarding dimension i from sensor k_m at time τ_{m,i}, the FC performs the following update:

\[
\tilde{v}_{\tau_{m,i},i} = \tilde{v}_{\tau_{m-1,i},i} + b_{m,i}\left( \gamma_i^{k_m} + \eta_{m,i} \right),
\tag{20.86}
\]

where {ṽ_{n,i}}_i compose the approximation Ṽ_n = [ṽ_{n,1}, …, ṽ_{n,p}]^T. Recall that the FC employs Ṽ_n to compute the estimator as in (20.75).

The level-triggered sampling procedure at each sensor k for each dimension i is summarized in Algorithm 20.1. Each sensor k runs p of these procedures in parallel. The sequential estimation procedure at the FC is also summarized in Algorithm 20.2. We assumed, for the sake of clarity, that each sensor transmits pulses to the FC for each dimension through a separate channel, that is, a parallel architecture. On the contrary, in practice, the number of parallel channels can be decreased to two by using identical sampling thresholds Δ and γ for all sensors and for all dimensions in (20.76) and (20.84), respectively. Moreover, sensors can even employ a single channel to convey information about the local processes {d_{n,i}^k} and {v̄_{n,i}^k} by sending ternary digits to the FC. This is possible since the pulses transmitted for {d_{n,i}^k} are unsigned.

20.3.3.3 Discussions

We introduced the decentralized estimator in Section 20.3.3.2 initially for a system with infinite time precision. In practice, due to bandwidth constraints, discrete-time systems with finite precision are of interest. For example, in such systems, the overshoot q_{m,i}^k ∈ [j θ_d/N, (j+1) θ_d/N), j = 0, 1, …, N−1, is quantized into q̂_{m,i}^k = (j + 1/2) θ_d/N, where N is the number of quantization levels. More specifically, a pulse is transmitted at time t_{m,i}^k = s_{m,i}^k + (j + 1/2)/N, where the transmission delay (j + 1/2)/N ∈ (0, 1) encodes q̂_{m,i}^k. This transmission scheme is called pulse position modulation (PPM).

In UWB and optical communication systems, PPM is effectively employed. In such systems, N, which denotes the precision, can easily be made large enough so that the quantization error |q̂_{m,i}^k − q_{m,i}^k| becomes insignificant. Compared with conventional transmission techniques, which convey information by varying the power level, frequency, and/or phase of a sinusoidal wave, PPM (with UWB) is extremely energy efficient at the expense of high bandwidth usage, since only a single pulse with very short duration is transmitted per sample. Hence, PPM suits energy-constrained sensor network systems well.

20.3.3.4 Simulations

We next provide simulation results to compare the performances of the proposed scheme with linear complexity, given in Algorithms 20.1 and 20.2, the nonsimplified version of the proposed scheme with quadratic complexity, and the optimal centralized scheme. A wireless sensor network with 10 identical sensors and an FC is considered to estimate a five-dimensional deterministic vector of parameters, that is, p = 5. We assume i.i.d. Gaussian noise with unit variance at all sensors, that is, w_k[n] ∼ N(0, 1), ∀k, n. We set the correlation coefficients {r_ij} [cf. (20.73)] of the vector h_k[n] to 0 and 0.5 in Figure 20.9 to test the performance of the proposed scheme in the uncorrelated and correlated cases.

Algorithm 20.1 The level-triggered sampling procedure at the kth sensor for the ith dimension
1: Initialization: n ← 0, m ← 0, ℓ ← 0, λ ← 0, ψ ← 0
2: while λ < Δ_i^k and ψ ∈ (−γ_i^k, γ_i^k) do
3:   n ← n + 1
4:   λ ← λ + h_{k,i}[n]²/σ_k²
5:   ψ ← ψ + h_{k,i}[n] x_k[n]/σ_k²
6: end while
7: if λ ≥ Δ_i^k {sample d_{n,i}^k} then
8:   m ← m + 1
9:   s_{m,i}^k = n
10:  Send a pulse to the fusion center at time instant t_{m,i}^k = s_{m,i}^k + (λ − Δ_i^k)/φ_d
11:  λ ← 0
12: end if
13: if ψ ∉ (−γ_i^k, γ_i^k) {sample v̄_{n,i}^k} then
14:  ℓ ← ℓ + 1
15:  ρ_{ℓ,i}^k = n
16:  Send b_{ℓ,i}^k = sign(ψ) to the fusion center at time instant τ_{ℓ,i}^k = ρ_{ℓ,i}^k + (|ψ| − γ_i^k)/φ_v
17:  ψ ← 0
18: end if
19: Stop if the fusion center instructs so; otherwise go to line 2.
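The PPM quantization of the overshoot described in Section 20.3.3.3 can be sketched as follows (the values of θ_d and N are illustrative assumptions; this is a sketch, not the chapter's implementation):

```python
# PPM with N quantization levels: an overshoot q in
# [j*theta_d/N, (j+1)*theta_d/N) is represented by
# q_hat = (j + 1/2) * theta_d / N, and the pulse is transmitted with
# delay (j + 1/2) / N in (0, 1), which the FC maps back to q_hat.
theta_d = 0.6        # bound on the overshoots
N = 64               # number of quantization levels (precision)

def ppm_encode(q):
    """Map an overshoot q in [0, theta_d) to a PPM delay in (0, 1)."""
    j = int(q * N / theta_d)         # quantization bin index
    return (j + 0.5) / N             # transmission delay (j + 1/2)/N

def ppm_decode(delay):
    """Recover the quantized overshoot q_hat from the PPM delay."""
    return delay * theta_d           # q_hat = (j + 1/2) * theta_d / N

qs = [0.0, 0.123, 0.31, 0.599]       # example overshoots, all < theta_d
for q in qs:
    q_hat = ppm_decode(ppm_encode(q))
    # quantization error is at most theta_d / (2 N)
    print(q, q_hat, abs(q_hat - q))
```

With N = 64 levels, the worst-case error θ_d/(2N) is below 0.005 here; increasing N trades bandwidth for precision, as noted above.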
We compare the average stopping time performance of the proposed scheme with linear complexity to those of the other two schemes for different MSE values. In Figure 20.9, the horizontal axis represents the signal-to-error ratio in decibels, where nMSE ≜ MSE/‖θ‖₂², that is, the MSE normalized by the square of the Euclidean norm of the vector to be estimated.

In the uncorrelated case, where r_ij = 0, ∀i, j, i ≠ j, the proposed scheme with linear complexity nearly attains the performance of the nonsimplified scheme with quadratic complexity, as seen in Figure 20.9. This result is rather expected since in this case Ū_n ≈ D_n for sufficiently large n, where Ū_n and D_n are used to compute the stopping time and the estimator in the nonsimplified and simplified schemes, respectively. Strikingly, the decentralized schemes (simplified and nonsimplified) achieve performances very close to that of the optimal centralized scheme, which is obviously unattainable in a decentralized system, thanks to the efficient information transmission through level-triggered sampling.

It is seen in Figure 20.9 that the proposed simplified scheme exhibits an average stopping time performance close to those of the nonsimplified scheme and the optimal centralized scheme even when the scaling coefficients {h_{k,i}[n]}_i are correlated with r_ij = 0.5, ∀i, j, i ≠ j, justifying the simplification proposed in Section 20.3.3.1 to obtain linear complexity.

Algorithm 20.2 The sequential estimation procedure at the fusion center
1: Initialization: m ← 1, ℓ ← 1, d̃_i ← ε ∀i, Tr ← ∑_{i=1}^{p} κ_i/ε, ṽ_i ← 0 ∀i
2: while Tr > C̃ do
3:   Wait to receive a pulse
4:   if the mth pulse about d_{n,i} arrives from sensor k at time n then
5:     q_m = φ_d (n − ⌊n⌋)
6:     Tr ← Tr − κ_i (Δ_i^k + q_m) / (d̃_i (d̃_i + Δ_i^k + q_m))
7:     d̃_i = d̃_i + Δ_i^k + q_m
8:     m ← m + 1
9:   end if
10:  if the ℓth pulse b about v_{n,j} arrives from sensor k at time n then
11:    η = φ_v (n − ⌊n⌋)
12:    ṽ_j = ṽ_j + b (γ_j^k + η)
13:    ℓ ← ℓ + 1
14:  end if
15: end while
16: Stop at time S̃ = ⌊n⌋
17: D̃ = diag(d̃_1, …, d̃_p), Ũ^{−1} = D̃^{−1/2} R^{−1} D̃^{−1/2}, Ṽ = [ṽ_1, …, ṽ_p]^T
18: θ̃ = Ũ^{−1} Ṽ
19: Instruct the sensors to stop.

FIGURE 20.9
Average stopping time performances of the optimal centralized scheme and the decentralized schemes based on level-triggered sampling with quadratic and linear complexity versus normalized MSE values, when the scaling coefficients are uncorrelated, that is, r_ij = 0, ∀i, j, and correlated with r_ij = 0.5, ∀i, j. (The plot shows the average stopping time, from 0 to 600, against |log₁₀ nMSE| from 0.5 to 3, with curves labeled "Linear," "Quadratic," and "Centralized" for both r = 0 and r = 0.5.)

20.4 Conclusion

Event-based sampling techniques, adapting the sampling times to the signal to be sampled, provide energy- and bandwidth-efficient information transmission in resource-constrained distributed (i.e., decentralized) systems, such as wireless sensor networks. We have first designed and analyzed event-based detection schemes under challenging environments, namely noisy transmission channels between the nodes and the fusion center, and multimodal observations from disparate information sources. Then, we have identified an optimum sequential estimator that lends itself to decentralized systems. For a large number of unknown parameters, we have further proposed a simplified scheme with linear complexity.
Acknowledgments

This work was funded in part by the U.S. National Science Foundation under grant CIF1064575, the U.S. Office of Naval Research under grant N000141210043, the Consortium for Verification Technology under Department of Energy National Nuclear Security Administration award number DE-NA0002534, and the Army Research Office (ARO) grant number W911NF-11-1-0391.

Bibliography

[1] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci. A survey on sensor networks. IEEE Communications Magazine, 40(8):102–114, 2002.

[2] K. J. Astrom and B. M. Bernhardsson. Comparison of Riemann and Lebesgue sampling for first order stochastic systems. In 41st IEEE Conference on Decision and Control, Las Vegas, Nevada, volume 2, pages 2011–2016, December 2002.

[3] R. Bashirullah, J. G. Harris, J. C. Sanchez, T. Nishida, and J. C. Principe. Florida wireless implantable recording electrodes (FWIRE) for brain machine interfaces. In IEEE International Symposium on Circuits and Systems (ISCAS 2007), New Orleans, LA, pages 2084–2087, May 2007.

[4] F. J. Beutler. Error-free recovery of signals from irregularly spaced samples. SIAM Review, 8:328–355, 1966.

[5] V. Borkar and P. P. Varaiya. Asymptotic agreement in distributed estimation. IEEE Transactions on Automatic Control, 27(3):650–655, 1982.

[6] Z. Chair and P. K. Varshney. Optimal data fusion in multiple sensor detection systems. IEEE Transactions on Aerospace and Electronic Systems, 22(1):98–101, 1986.

[7] J.-F. Chamberland and V. V. Veeravalli. Decentralized detection in sensor networks. IEEE Transactions on Signal Processing, 51(2):407–416, 2003.

[8] S. Chaudhari, V. Koivunen, and H. V. Poor. Autocorrelation-based decentralized sequential detection of OFDM signals in cognitive radios. IEEE Transactions on Signal Processing, 57(7):2690–2700, 2009.

[9] B. Chen, L. Tong, and P. K. Varshney. Channel-aware distributed detection in wireless sensor networks. IEEE Signal Processing Magazine, 23(4):16–26, 2006.

[10] A. K. Das and M. Mesbahi. Distributed linear parameter estimation over wireless sensor networks. IEEE Transactions on Aerospace and Electronic Systems, 45(4):1293–1306, 2009.

[11] D. Drazen, P. Lichtsteiner, P. Hafliger, T. Delbruck, and A. Jensen. Toward real-time particle tracking using an event-based dynamic vision sensor. Experiments in Fluids, 51(5):1465–1469, 2011.

[12] B. Efron and D. V. Hinkley. Assessing the accuracy of the maximum likelihood estimator: Observed versus expected Fisher information. Biometrika, 65(3):457–487, 1978.

[13] P. Ellis. Extension of phase plane analysis to quantized systems. IRE Transactions on Automatic Control, 4(2):43–54, 1959.

[14] J. Fang and H. Li. Adaptive distributed estimation of signal power from one-bit quantized data. IEEE Transactions on Aerospace and Electronic Systems, 46(4):1893–1905, 2010.

[15] H. Farhangi. The path of the smart grid. IEEE Power and Energy Magazine, 8(1):18–28, 2010.

[16] G. Fellouris. Asymptotically optimal parameter estimation under communication constraints. Annals of Statistics, 40(4):2239–2265, 2012.

[17] G. Fellouris and G. V. Moustakides. Decentralized sequential hypothesis testing using asynchronous communication. IEEE Transactions on Information Theory, 57(1):534–548, 2011.

[18] J. W. Fisher, M. J. Wainwright, E. B. Sudderth, and A. S. Willsky. Statistical and information-theoretic methods for self-organization and fusion of multimodal, networked sensors. The International Journal of High Performance Computing Applications, 16(3):337–353, 2002.

[19] J. Fromm and S. Lautner. Electrical signals and their physiological significance in plants. Plant, Cell & Environment, 30(3):249–257, 2007.

[20] B. K. Ghosh. On the attainment of the Cramér–Rao bound in the sequential case. Sequential Analysis, 6(3):267–288, 1987.

[21] D. Gontier and M. Vetterli. Sampling based on timing: Time encoding machines on shift-invariant subspaces. Applied and Computational Harmonic Analysis, 36(1):63–78, 2014.

[22] P. Grambsch. Sequential sampling based on the observed Fisher information to guarantee the accuracy of the maximum likelihood estimator. Annals of Statistics, 11(1):68–77, 1983.

[23] K. M. Guan, S. S. Kozat, and A. C. Singer. Adaptive reference levels in a level-crossing analog-to-digital converter. EURASIP Journal on Advances in Signal Processing, 2008:183:1–183:11, 2008.

[24] K. M. Guan and A. C. Singer. Opportunistic sampling by level-crossing. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Honolulu, HI, volume 3, pages III-1513–III-1516, April 2007.

[25] S. Haykin. Communication Systems, 4th edition. Wiley, New York, NY, 2001.

[26] M. Hofstatter, M. Litzenberger, D. Matolin, and C. Posch. Hardware-accelerated address-event processing for high-speed visual object recognition. In 18th IEEE International Conference on Electronics, Circuits and Systems (ICECS), Beirut, Lebanon, pages 89–92, December 2011.

[27] A. M. Hussain. Multisensor distributed sequential detection. IEEE Transactions on Aerospace and Electronic Systems, 30(3):698–708, 1994.

[28] E. Kofman and J. H. Braslavsky. Level crossing sampling in feedback stabilization under data-rate constraints. In 45th IEEE Conference on Decision and Control, San Diego, CA, pages 4423–4428, December 2006.

[29] A. A. Lazar and E. A. Pnevmatikakis. Video time encoding machines. IEEE Transactions on Neural Networks, 22(3):461–473, 2011.

[30] A. A. Lazar and L. T. Toth. Perfect recovery and sensitivity analysis of time encoded bandlimited signals. IEEE Transactions on Circuits and Systems I: Regular Papers, 51(10):2060–2073, 2004.

[31] S. Li and X. Wang. Cooperative change detection for online power quality monitoring. http://arxiv.org/abs/1412.2773.

[32] S. Li and X. Wang. Quickest attack detection in multi-agent reputation systems. IEEE Journal of Selected Topics in Signal Processing, 8(4):653–666, 2014.

[33] P. Lichtsteiner, C. Posch, and T. Delbruck. A 128 × 128 120 dB 15 μs latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits, 43(2):566–576, 2008.

[34] B. Liu and B. Chen. Channel-optimized quantizers for decentralized detection in sensor networks. IEEE Transactions on Information Theory, 52(7):3349–3358, 2006.

[35] Z.-Q. Luo, G. B. Giannakis, and S. Zhang. Optimal linear decentralized estimation in a bandwidth constrained sensor network. In International Symposium on Information Theory, Adelaide, Australia, pages 1441–1445, September 2005.

[36] N. Mukhopadhyay, M. Ghosh, and P. K. Sen. Sequential Estimation. Wiley, New York, NY, 1997.

[37] J. W. Mark and T. D. Todd. A nonuniform sampling approach to data compression. IEEE Transactions on Communications, 29(1):24–32, 1981.

[38] F. Marvasti. Nonuniform Sampling Theory and Practice. Kluwer, New York, NY, 2001.

[39] R. H. Masland. The fundamental plan of the retina. Nature Neuroscience, 4(9):877–886, 2001.

[40] Y. Mei. Asymptotic optimality theory for decentralized sequential hypothesis testing in sensor networks. IEEE Transactions on Information Theory, 54(5):2072–2089, 2008.

[41] D. Miorandi, S. Sicari, F. De Pellegrini, and I. Chlamtac. Internet of things: Vision, applications and research challenges. Ad Hoc Networks, 10(7):1497–1516, 2012.

[42] M. Miskowicz. Send-on-delta concept: An event-based data reporting strategy. Sensors, 6:49–63, 2006.

[43] M. Miskowicz. Asymptotic effectiveness of the event-based sampling according to the integral criterion. Sensors, 7:16–37, 2007.

[44] B. A. Moser and T. Natschlager. On stability of distance measures for event sequences induced by level-crossing sampling. IEEE Transactions on Signal Processing, 62(8):1987–1999, 2014.

[45] E. J. Msechu and G. B. Giannakis. Sensor-centric data reduction for estimation with WSNs via censoring and quantization. IEEE Transactions on Signal Processing, 60(1):400–414, 2012.

[46] F. De Paoli and F. Tisato. On the complementary nature of event-driven and time-driven models. Control Engineering Practice, 4(6):847–854, 1996.

[47] H. Vincent Poor. An Introduction to Signal Detection and Estimation. Springer, New York, NY, 1994.

[48] A. Ribeiro and G. B. Giannakis. Bandwidth-constrained distributed estimation for wireless sensor networks—part II: Unknown probability density function. IEEE Transactions on Signal Processing, 54(7):2784–2796, 2006.

[49] J. C. Sanchez, J. C. Principe, T. Nishida, R. Bashirullah, J. G. Harris, and J. A. B. Fortes. Technology and signal processing for brain–machine interfaces. IEEE Signal Processing Magazine, 25(1):29–40, 2008.

[50] N. Sayiner, H. V. Sorensen, and T. R. Viswanathan. A level-crossing sampling scheme for A/D conversion. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 43(4):335–339, 1996.

[51] I. D. Schizas, G. B. Giannakis, and Z.-Q. Luo. Distributed estimation using reduced-dimensionality sensor observations. IEEE Transactions on Signal Processing, 55(8):4284–4299, 2007.

[52] I. D. Schizas, A. Ribeiro, and G. B. Giannakis. Consensus in ad hoc WSNs with noisy links—part I: Distributed estimation of deterministic signals. IEEE Transactions on Signal Processing, 56(1):350–364, 2008.

[53] D. Siegmund. Sequential Analysis, Tests and Confidence Intervals. Springer, New York, NY, 1985.

[54] S. S. Stankovic, M. S. Stankovic, and D. M. Stipanovic. Decentralized parameter estimation by consensus based stochastic approximation. IEEE Transactions on Automatic Control, 56(3):531–543, 2011.

[55] Y. S. Suh. Send-on-delta sensor data transmission with a linear predictor. Sensors, 7(4):537–547, 2007.

[56] S. Sun. A survey of multi-view machine learning. Neural Computing and Applications, 23(7–8):2031–2038, 2013.

[57] R. R. Tenney and N. R. Sandell. Detection with distributed sensors. IEEE Transactions on Aerospace and Electronic Systems, 17(4):501–510, 1981.

[58] S. C. A. Thomopoulos, R. Viswanathan, and D. C. Bougoulias. Optimal decision fusion in multiple sensor systems. IEEE Transactions on Aerospace and Electronic Systems, 23(5):644–653, 1987.

[59] J. Tsitsiklis. Decentralized detection by a large number of sensors. Mathematics of Control, Signals, and Systems, 1(2):167–182, 1988.

[60] Y. Tsividis. Digital signal processing in continuous time: A possibility for avoiding aliasing and reducing quantization error. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04), Montreal, Quebec, Canada, volume 2, pages II-589–II-592, May 2004.

[61] Y. Tsividis. Event-driven data acquisition and digital signal processing—A tutorial. IEEE Transactions on Circuits and Systems II: Express Briefs, 57(8):577–581, 2010.

[62] V. V. Veeravalli, T. Basar, and H. V. Poor. Decentralized sequential detection with a fusion center performing the sequential test. IEEE Transactions on Information Theory, 39(2):433–442, 1993.

[63] T. Vercauteren and X. Wang. Decentralized sigma-point information filters for target tracking in collaborative sensor networks. IEEE Transactions on Signal Processing, 53(8):2997–3009, 2005.

[64] C. Vezyrtzis and Y. Tsividis. Processing of signals using level-crossing sampling. In IEEE International Symposium on Circuits and Systems (ISCAS 2009), Taipei, Taiwan, pages 2293–2296, May 2009.

[65] A. Wald and J. Wolfowitz. Optimum character of the sequential probability ratio test. The Annals of Mathematical Statistics, 19(3):326–329, 1948.

[66] D. J. Warren and P. K. Willett. Optimal decentralized detection for conditionally independent sensors. In American Control Conference, Pittsburgh, PA, pages 1326–1329, June 1989.

[67] P. Willett, P. F. Swaszek, and R. S. Blum. The good, bad and ugly: Distributed detection of a known signal in dependent Gaussian noise. IEEE Transactions on Signal Processing, 48(12):3266–3279, 2000.

[68] J.-J. Xiao, S. Cui, Z.-Q. Luo, and A. J. Goldsmith. Linear coherent decentralized estimation. IEEE Transactions on Signal Processing, 56(2):757–770, 2008.

[69] J.-J. Xiao, A. Ribeiro, Z.-Q. Luo, and G. B. Giannakis. Distributed compression-estimation using wireless sensor networks. IEEE Signal Processing Magazine, 23(4):27–41, 2006.

[70] J. L. Yen. On nonuniform sampling of bandwidth-limited signals. IRE Transactions on Circuit Theory, 3(4):251–257, 1956.

[71] Y. Yılmaz, Z. Guo, and X. Wang. Sequential joint spectrum sensing and channel estimation for dynamic spectrum access. IEEE Journal on Selected Areas in Communications, 32(11):2000–2012, 2014.

[72] Y. Yılmaz, G. V. Moustakides, and X. Wang. Sequential and decentralized estimation of linear regression parameters in wireless sensor networks. IEEE Transactions on Aerospace and Electronic Systems, to be published. http://arxiv.org/abs/1301.5701.

[73] Y. Yılmaz, G. V. Moustakides, and X. Wang. Cooperative sequential spectrum sensing based on level-triggered sampling. IEEE Transactions on Signal Processing, 60(9):4509–4524, 2012.

[74] Y. Yılmaz, G. V. Moustakides, and X. Wang. Channel-aware decentralized detection via level-triggered sampling. IEEE Transactions on Signal Processing, 61(2):300–315, 2013.

[75] Y. Yılmaz and X. Wang. Sequential decentralized parameter estimation under randomly observed Fisher information. IEEE Transactions on Information Theory, 60(2):1281–1300, 2014.

[76] Y. Yılmaz and X. Wang. Sequential distributed detection in energy-constrained wireless sensor networks. IEEE Transactions on Signal Processing, 62(12):3180–3193, 2014.

[77] K. A. Zaghloul and K. Boahen. Optic nerve signals in a neuromorphic chip I: Outer and inner retina models. IEEE Transactions on Biomedical Engineering, 51(4):657–666, 2004.

[78] T. Zhao and A. Nehorai. Distributed sequential Bayesian estimation of a diffusive source in wireless sensor networks. IEEE Transactions on Signal Processing, 55(4):1511–1524, 2007.
21
Spike Event Coding Scheme

Luiz Carlos Paiva Gouveia
University of Glasgow
Glasgow, UK

Thomas Jacob Koickal
Beach Theory
Sasthamangalam, Trivandrum, India

Alister Hamilton
University of Edinburgh
Edinburgh, UK

CONTENTS
21.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
21.2 Asynchronous Spike Event Coding Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
21.2.1 The Conversion: Ternary Spike Delta Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
21.2.2 The Channel: Address Event Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
21.3 Computation with Spike Event Coding Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
21.3.1 Computational Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
21.3.2 Arithmetic Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
21.3.2.1 Negation Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
21.3.2.2 Gain Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
21.3.2.3 Modulus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
21.3.2.4 Summation and Subtraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
21.3.2.5 Multiplication and Division . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
21.3.2.6 Average Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
21.3.3 Signal Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
21.3.4 Other Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
21.3.4.1 Shift Keying Modulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
21.3.4.2 Weighted Summation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
21.4 ASEC aVLSI Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
21.4.1 Comparator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
21.4.1.1 Design Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
21.4.2 Spike and Pulse Generators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
21.4.3 Integrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
21.4.3.1 Design Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
21.4.4 Filter and AER Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
21.4.5 Demonstration: Speech Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
21.4.6 Resolution: Single-Tone Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
21.4.7 Computation: Gain, Negation, Summation, and BPSK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
21.5 Applications of ASEC-Based Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511

487
488 Event-Based Control and Signal Processing

ABSTRACT In this chapter, we present and describe a spike-based signal coding scheme for general analog computation, suitable for diverse applications, including programmable analog arrays and neuromorphic systems. This coding scheme provides a reliable method for both communication and computation of signals using an asynchronous digital common access channel. Using ternary coders and decoders, this scheme codes analog signals into the time domain using asynchronous digital spikes. The spike output is dependent on input signal activity and is minimal for constant signals.

Systems based on this scheme present intrinsic computation: several arithmetic operations are performed without additional hardware and can be fully programmed. The design methodology and analog circuit design of the scheme are presented. Both simulations and test results from prototype chips implemented using a 3.3-V, 0.35-μm CMOS technology are presented.

21.1 Introduction

Electronic systems are composed of a number of interconnected components, ranging from single devices to more complex circuits. Such components perform a defined transformation (transfer function) on given information (signals). There are two aspects to such systems: the processing of the signals themselves and the transmission of the processed signals to other components of the system. These aspects are not always well distinguished from one another but are always present, and their designs are of fundamental importance as integrated systems grow in size and complexity.

The design of signal processing and communication techniques and methods depends on how the signal is represented—in other words, on how the information is presented in a physical medium. A common method to classify a signal is according to its number of possible states, and in this classification system there are two extreme cases: analog and digital representations. The information encoded into analog signals has infinite states, while digital representation uses a number of discrete—usually binary—possible states.

Systems designed and operating using digital representation are more robust and flexible than their analog counterparts. The discrete nature of the digital representation implies signals are less prone to electronic noise, component mismatch, and other physical restrictions, improving their information accuracy and allowing easier regeneration. This robustness allows for the easier, cheaper, and faster design of complex systems, contributing to the massive use of digital systems, that is, digital signal processors (DSPs), microcontrollers, and microprocessors.

Analog signal representation is also used despite the cited digital advantages. The main reason is that many signals are either originally presented to the system in this representation or need to be delivered as such by the system. Typical examples of this situation are most sensing systems—such as image, audio, temperature, and other sensors—and aerial signal transmission such as the wide variety of radio communications. In other words, analog circuits are used to interface with the real world because most of its information—measurements and controls—consists of continuous variables as well. Moreover, analog designs usually present a more efficient solution than digital circuits because they are usually smaller, consume less power, and provide a higher processing speed.

Signal dynamics are also important and add another dimension to signal classification: time. As with the amplitude, the timing characteristics of the signal are also presented in a continuous or discrete fashion. While in the former case the signal value is valid at every instant, in the latter its value is sampled at defined instants. Therefore, it is possible to use different combinations of signal amplitude and timing representations. Systems in such a two-dimensional classification method [1] can be defined as a DTDA, a CTDA, a CTCA, or a DTCA system, where D, C, T, and A stand for discrete, continuous, time, and amplitude, respectively.

Most event-based signal processing uses the CTDA representation of signals. Therefore, it combines—among other factors—the higher amplitude noise tolerance of digital signals and the more efficient bandwidth and power usage of asynchronous systems. Several event-driven communication and processing techniques and methods have been proposed and used for many years. In this chapter, we present a specific scheme—named the Asynchronous Spike Event Coding Scheme (ASEC)—and discuss some signal processing functions inherent to it. We also present an analog very-large-scale integration (aVLSI) implementation and its results. Although such a coding scheme can be useful for several applications, we refer to a set of interesting applications at the end of the chapter.

21.2 Asynchronous Spike Event Coding Scheme

Signals are processed in several stages in most signal processing architectures. An example is a typical signal

acquisition system where an analog signal is first conditioned (filtered and amplified), converted to digital representation, and stored or transmitted. The communication flow from one stage to another is normally fixed, but flexible systems also require flexible signal flow (routing). To obtain maximum flexibility, a communication channel that is commonly available to every processing stage—or signal processing unit (SPU)—is required. A suitable channel shall be defined along with methods to access it.

For flexible-routing systems using the method proposed in this chapter, an asynchronous, digital channel is selected—due to its compatibility with event-based systems—and the Address Event Representation (AER) communication method [2] is used to control the access to it. Due to the asynchronous nature of these events, the AER protocol is an appropriate choice, although other asynchronous communication protocols can be used as well.

A method to convert CTCA (analog) signals into a representation suitable for this asynchronous digital channel—and to convert them back—is required. In this chapter, we use a variation of delta modulation. The association of this signal conversion with the access mechanism to the channel is named the Asynchronous Spike Event Coding (ASEC) communication scheme [3]. An ASEC scheme presents a Spike Event coder and decoder pair as shown in Figure 21.1. This pair works on the onset of specific events: the activity of the input analog signals triggers the coder operation, whereas a change in the state of the channel similarly triggers the decoder operation. These spike events are then transmitted using the specified channel.

21.2.1 The Conversion: Ternary Spike Delta Modulation

The coding scheme used in this chapter is based on a variant of delta modulation. The one used in this work is named Ternary Spike Delta (TSD) modulation, although it has been given different denominations over the years. As a delta modulation variation, general aspects of delta modulation are presented first.

The working principle of delta modulation is based on limiting the error e(t) between the input (sender) signal xs(t) and an internal variable zs(t) such that

|e(t)| = |xs(t) − zs(t)| ≤ e(t)max,    (21.1)

using a negative feedback loop in the modulator or coder. In other words, these modulations work by forcing a feedback signal zs(t) to track the input signal. The error is sensed by a (normally 1-bit) quantizer system to produce the coder output ys(t), while the internal variable zs(t) is generated by the integration of this output signal, that is,

zs(t) = ki ∫ ys(t) dt,    (21.2)

where ki is the integration gain.

The communication process is completed at the demodulator or decoder side, where the signal zs(t) is replicated as zr(t):

zr(t) = ki ∫ yr(t) dt,    (21.3)

with yr(t) being the spikes at the decoder input.

FIGURE 21.1
Generic ASEC–SPU interface diagram. Block diagram of the Asynchronous Spike Event Coding (ASEC) scheme interfacing with a given analog
SPU and the common asynchronous digital channel. In this architecture, the realm of analog representation of the signals is limited by the coder
and decoder to the SPU. This helps to reduce the signal degradation due to noise sources present in analog channels. Spikes are coded and
decoded into the channel digital representation (SPU address) using the AER transmitter (AER Tx) and receiver (AER Rx) circuits. Control and
registers (not shown) are used to configure the operation of the scheme.
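As an aside, the feedback loop of Equations 21.1 through 21.3 is easy to explore numerically. The following sketch is a purely behavioral, discrete-time model of a linear delta-modulation coder/decoder pair; the step size, input signal, and function names are our illustrative choices, not part of the ASEC hardware:

```python
import math

def delta_mod(xs, delta):
    """Behavioral model of the delta-modulation coder of Eqs. 21.1-21.3.

    A 1-bit quantizer senses the error e = x - zs; its +/-1 output is
    the spike stream ys, and integrating ys (gain of delta per spike)
    makes the feedback signal zs track the input xs.
    """
    zs = 0.0
    ys, track = [], []
    for x in xs:
        y = 1 if x - zs >= 0 else -1   # quantizer: sign of the error
        ys.append(y)
        zs += delta * y                # integrator: zs = ki * integral(ys)
        track.append(zs)
    return ys, track

def delta_demod(ys, delta):
    """Decoder: an integrator with the same gain and initial condition
    rebuilds zr from the received spikes yr (ideal channel: yr = ys)."""
    zr, out = 0.0, []
    for y in ys:
        zr += delta * y
        out.append(zr)
    return out

# Illustrative input: a slow sine wave, so the slope never exceeds
# delta per sample (no slope overload).
xs = [math.sin(2 * math.pi * k / 100) for k in range(400)]
ys, zs = delta_mod(xs, delta=0.1)
zr = delta_demod(ys, delta=0.1)
```

Because the decoder integrator receives exactly the coder's spike stream over the ideal channel, `zr` reproduces `zs` sample by sample, and the tracking error stays on the order of the step δ as long as the input slope does not exceed δ per sample.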

To obtain this replication, an ideal channel CH where yr(t) = ys(t) is considered. Furthermore, both the coder and decoder integrators are required to present the same initial condition and gain, so that the zr(t) and zs(t) signals present the same DC levels and amplitudes.

A better approximation xr(t) of the input signal is obtained by averaging the reconstructed signal zr(t). A low-pass filter (LPF) with an impulse response function h(t) can be used for this purpose:

xr(t) = h(zr(t)) ≈ zr(t) = xs(t) − e(t).    (21.4)

Equation 21.4 shows that the difference between the reconstructed signal and the input signal is bounded by the maximum allowed error e(t)max.

Modulation methods based on such principles are known as delta modulations because the feedback control updates the feedback signal by a fixed amount delta (δ), which is a function of the resolution of the converter. If the system is designed to provide Nb bits of resolution, then

e(t)max = δ = Δxmax / 2^Nb,    (21.5)

where Δxmax = xmax − xmin is the maximum amplitude variation of the input signal. The parameter δ—known as the tracking or quantization step—is used in quantizer and integrator designs to limit the error to its maximum.

In the most common delta modulation version (PWM-based), the quantized output is the output of the coder itself, that is, ys(t) = Q(e(t)). However, in spike-based versions, the output of the quantizer(s) triggers a spike generator (SG) circuit such that ys(t) = SG(c(t)) with c(t) = Q(e(t)).

The Ternary Spike Delta (TSD) modulation was first presented in [4]. This modulation is similar to the schemes described in [1,5,6], which are based on the principle of irregular sampling used to implement asynchronous A/D converters, for instance. A similar version was also used in [7,8] for low-bandwidth, low-power sensors.

In ternary modulations, the output presents three possible states. The transmitted signal is represented by a series of "positive" and "negative" spikes, each resulting in an increment or a decrement of δ, respectively. As the decrement in zs(t) is also controlled by spikes, the absence of spikes in this modulation—the third state—indicates no change in the current value of zs(t).

Due to the three-state output, it is not possible to use only one 1-bit quantizer (comparator). Another comparator is inserted in the coder loop as shown in Figure 21.2a, which presents the coder and decoder block diagrams of TSD modulation. This extra comparator also senses the error signal e(t), but its comparison threshold eth2 is different.∗ Both comparator outputs feed the SG block to generate proper spikes ys(t). These spikes are then transmitted simultaneously to the coder and decoder Pulse Generator (PG) blocks. Depending on these spike signals, each PG block generates control signals to define the respective integrator (INT) output.

The threshold difference Δeth = |eth2 − eth1| impacts the performance of the coder, and it is ideally the same as the tracking step, that is, Δeth = δ. This ideal value is used in this initial analysis, but deviations from it will be considered later in this chapter.

Although the integrator can present different gains for positive and negative spikes, only those cases where the gains are symmetric are considered here. After analyzing Figure 21.2b, one can verify that the variation in the feedback signal zs(t) since the initial time t0 is

Δzs(t) = δ(np − nn),    (21.6)

where np is the number of positive spikes and nn the number of negative spikes generated since t0. According to Equation 21.6, the value of the signal zs(t) at any time t in TSD depends only on the number of positive and negative spikes received and processed since t0.

21.2.2 The Channel: Address Event Representation

The AER communication protocol was proposed and developed by researchers interested in replicating concepts of signal processing of biological neural systems in VLSI systems [2]. The requirement for hundreds or thousands of intercommunicating neurons with dedicated hardwired connections is impossible to meet in such large numbers in current 2D integrated circuits (ICs). AER aims to overcome this limitation by creating virtual connections using a common physical channel.

The original AER implementation was a point-to-point asynchronous handshaking protocol for the transmission of digital words using a multiple-access channel common to every element in the system. The information coded in these transmitted digital words represents the identification (address) of an SPU. Depending on the implementation, either the transmitting (as used in this work) or the receiving SPU address is coded.

The asynchronous nature of the AER protocol largely preserves the information conveyed in the time difference between events. The main sources of inaccuracy between the generated and the received time differences are timing jitter and the presence of event collisions. Because the access to the channel is asynchronous, different elements may try to access the channel simultaneously, but the AER protocol offers mechanisms to handle these spike collisions.

∗ For the sake of simplicity, eth1 and eth2 are symmetric in this chapter.
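To make the virtual-connection idea concrete, here is a toy software model of an AER channel. The tuple layout, names, and list-based arbitration below are our illustrative assumptions, not the protocol's actual handshake: every spike becomes an address event on one shared channel, and a programmable routing table recreates the point-to-point links.

```python
from collections import defaultdict

def route_events(events, routing_table):
    """Deliver address events over one shared channel.

    Each event is a (sender_address, polarity) pair; the routing table
    maps a sender address to its receivers, creating 'virtual wires'.
    Iterating over the list in order stands in for the arbiter, which
    queues and serializes colliding events.  Returns the per-receiver
    event streams.
    """
    inboxes = defaultdict(list)
    for sender, polarity in events:
        for receiver in routing_table.get(sender, []):
            inboxes[receiver].append((sender, polarity))
    return dict(inboxes)

# Three coders share the channel; decoder X listens to coder 2 only,
# decoder Y to all three (a one-to-many virtual connection).
channel = [(2, +1), (2, -1), (1, +1), (3, +1)]
table = {1: ["Y"], 2: ["X", "Y"], 3: ["Y"]}
inboxes = route_events(channel, table)
```

Serializing the event list is the software stand-in for the central arbiter described above: colliding events are simply queued and delivered one after another, preserving their order.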

FIGURE 21.2
Ternary Spike Delta (TSD) modulation, modified from [3]. (a) Shows the block diagrams of the coder and decoder using the channel CH. They present two comparators with different thresholds (eth1 and eth2), a spike generator (SG), pulse generators (PG), integrators (INT), and the decoder low-pass filter (LPF). Signal xs(t) is the coder input signal, while the feedback signal zs(t) is an approximated copy of xs(t) generated from the spike output ys(t). Ideally, signals zr(t) and yr(t) are copies of signals zs(t) and ys(t), respectively, on the decoder side. Decoder output xr(t) is an approximation of xs(t) generated by the LPF. (b) Presents illustrative waveforms of related signals. The spike train ys(t) presents three states: "positive" spikes (1p, 2p, …, np), "negative" spikes (1n, 2n, …, nn), and no activity. In the inset figure, T is the fixed pulse period, Δt is the variable inter-spike period, and δ is the fixed tracking step.
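A behavioral sketch of the Figure 21.2a loop may help. The thresholds, step, and test signal below are illustrative choices, with the two comparators placed at ±δ/2 so that their difference equals the tracking step:

```python
import math

def tsd_code(xs, delta):
    """Behavioral Ternary Spike Delta coder (the loop of Figure 21.2a).

    Two comparators, here at +delta/2 and -delta/2, fire 'positive' or
    'negative' spikes; while the error stays inside the dead band no
    spike is emitted, which is the third state of the ternary output.
    """
    zs, spikes = 0.0, []
    for x in xs:
        e = x - zs
        if e >= delta / 2:        # upper comparator fires
            zs += delta
            spikes.append(+1)
        elif e <= -delta / 2:     # lower comparator fires
            zs -= delta
            spikes.append(-1)
        else:                     # no activity for a near-constant input
            spikes.append(0)
    return spikes, zs

def tsd_value(spikes, delta, z0=0.0):
    """Eq. 21.6: the signal value is set by the spike counts since t0."""
    return z0 + delta * (spikes.count(+1) - spikes.count(-1))

xs = [0.8 * math.sin(2 * math.pi * k / 200) for k in range(150)]
spikes, zs_final = tsd_code(xs, delta=0.1)
```

The decoder needs no amplitude information at all: per Equation 21.6, counting the positive and negative spikes (plus the shared initial condition) is enough to recover the tracked value.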

A unique and central arbiter is usually used to manage collisions, which are resolved by queuing and successively transmitting all the spike events involved in the collision. Although this process can be made relatively fast compared with the analog signal dynamics, it can be a source of signal distortion.

21.3 Computation with Spike Event Coding Scheme

The use of timing as the numerical representation for computations has been studied [9,10] and presents an important advantage due to the evolution of CMOS fabrication. While the CMOS digital market (processors, memories, etc.) pushes for faster circuits, many applications present well-defined and fixed bandwidths. Therefore, the difference between the fastest signal and the used bandwidth increases, and so does the dynamic range when using timing representation.∗

∗ This assumption is valid when considering similar signals used in the current computations. Many of these signals are measurements of physical properties and, therefore, their dynamics are well defined. For instance, auditory signals are useful only up to 20 kHz for human hearing purposes.
FIGURE 21.3
Computation within the communication scheme, modified from [3]. Coders convert analog signals into spikes, which are presented to the communication channel. Arithmetic computations may be performed by proper configuration of the AER channel and decoders. In this example, the top decoder amplifies the audio signal from the middle coder, while the bottom decoder adds the signals from the top and bottom coders.
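The two operations of Figure 21.3 can be mimicked in a few lines of bookkeeping (the decoder model and the hard-coded spike streams below are illustrative): amplification is nothing but a larger tracking step at the receiving decoder, and summation is the merging of two operands' spike streams into one decoder.

```python
def decode(spikes, delta, z0=0.0):
    """ASEC decoder integrator: each +/-1 spike moves the output by delta."""
    z, out = z0, []
    for s in spikes:
        z += delta * s
        out.append(z)
    return out

# Spike streams as produced by three coders with tracking step 0.1.
coder1 = [+1, -1, +1, +1]
coder2 = [+1, +1, -1, +1]
coder3 = [-1, +1, +1, -1]

# Decoder X: a gain of 2, obtained purely by programming the decoder
# with a tracking step twice that of the sending coder.
x_out = decode(coder2, delta=0.2)

# Decoder Y: a summation, obtained by merging the operands' spikes into
# one stream (interleaving stands in for channel arrival order).
merged = [s for pair in zip(coder1, coder3) for s in pair]
y_out = decode(merged, delta=0.1)
```

Note that neither operation touches the coders: both are obtained only by reprogramming the routing and the decoder step, which is the point of the intrinsic-computation argument above.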

One of the first uses of timing processing was presented in [11], where a different filter structure, with filter coefficients set by switching times, was introduced. A multiplication of a signal by a constant (gain) using switches and a low-pass filter was shown. This kind of operation—the summation of amplified signals—is used extensively in artificial neural networks [12].

Some research groups have used timing as the signal representation to develop analog processing. Pulse-width modulation techniques were used in several applications: for instance, to implement switched filters [9], to compute arithmetic operations [13], to convert signals, and in information storage [14]. Other modulations have been used in specific applications such as voice, sound, and video processing [15], but few groups are keen to use them in a more generic scope within analog processing. One example was the hybrid computation technique presented in [16]. Hybrid computation was defined as a type of processing that combines analog computation primitives with digital signal restoration and discretization, using spikes to transform a real number (an analog variable, the inter-spike intervals) into an integer number (discrete and digital, the number of spikes).

The asynchronous spike event coding communication method is suitable for performing a series of generic computations. Moreover, all computations are realized with the circuits already implemented for the communication process. The idea underlying the computational properties of the communication scheme and some examples are described next.

21.3.1 Computational Framework

The asynchronous spike event coding scheme allows a set of arithmetic operations to be performed. For instance, in the configuration illustrated in Figure 21.3, the top decoder (decoder X) outputs an amplification of the audio signal, while the second one (decoder Y) provides the summation of two inputs (sine and triangular waves) using the same communication channel. These operations are performed simply by programming parameters of the communication channel. Other operations can be implemented by combining the channel reconfigurability with the analog signal processing capability of the SPUs. The computational realm of this communication scheme enhances the computing power of the flexible system with no additional hardware.

The shared nature of the channel means it can enable multiple communications virtually at the same time: the computation operations can also be performed simultaneously. This intrinsic computation capability allows a simpler implementation than other pulse-based approaches [17]. Basic arithmetic operations will be the first examples to be presented in Section 21.3.2.

21.3.2 Arithmetic Operations

In this section, a set of fundamental arithmetic computations performed by the communication scheme is presented. Each operation is illustrated by simulation of the mathematical models (actual VLSI results will be presented in this chapter as well). The following results were obtained for a 4-bit resolution conversion—for easier visualization—but the working principle can be applied to any arbitrary resolution, mostly defined by the system specifications and limited by its VLSI implementation. Also, the output waveforms in the figures
FIGURE 21.4
Negation operation. Snapshot of the input sine wave xs(t) and the respective negated signal zr(t). Signal negation is performed by interchanging the positive and negative incoming spikes ys(t) to generate the received spikes yr(t).

correspond to the decoder filter input zr(t) rather than the coder output xr(t), for better understanding.

21.3.2.1 Negation Operation

A fundamental operation on analog signals is changing their polarity, resulting in signal negation or its opposite:

xneg(t) = −x(t).    (21.7)

This operation is also known as the additive inverse of a number. Equation 21.6 states that if the coder's and decoder's initial conditions are known, the signal value at any instant t is a function of both the number of events received and their polarity. Changing the incoming spikes' polarity will result in an opposite signal at the decoder output. Therefore, the negation operation is performed by setting the AER decoder to interchange the addresses of the positive ysp(t) and negative ysn(t) spikes transmitted by the sender coder to form the received spikes yrp(t) and yrn(t) at the decoder.∗ In other words, the AER decoder performs the following address operation:

ysp(t) → yrn(t)
ysn(t) → yrp(t),    (21.8)

resulting in the decoder signal zr(t) being an opposite version of the signal zs(t). If zr(t0) = zs(t0), then from Equation 21.6,

xneg(t) ≈ zr(t) = −zs(t) = δ(nn − np) − zs(t0).    (21.9)

Since zs(t) and zr(t) are the quantized versions of xs(t) and xr(t), respectively, the operation of Equation 21.7 is obtained. An example using a sine wave signal is shown in Figure 21.4.

21.3.2.2 Gain Operation

Another common operation in any analog processing is to change the amplitude of a signal, that is, to provide a gain G to the signal such that

xgain(t) = G × x(t).    (21.10)

This gain operation can lead to an increase (amplification) or a decrease (attenuation) of the amplitude of the input signal, with both types being possible using the ASEC scheme.

Because Equation 21.6 holds true for both coder and decoder blocks, the decoder output will be proportional to the input signal. This proportionality defines the operation gain:

G = Δxgain(t)/Δx(t) ≈ Δzr(t)/Δzs(t) = δr/δs,    (21.11)

where δr is the decoder tracking step and δs is its coder equivalent. With different tracking steps, the ASEC scheme will result in the coder and decoder working with different resolutions; however, this difference would be attenuated by a proper decoder filter design. Figure 21.5a shows simulations for an amplification of a sine wave by a factor of 2, while an attenuation by the same factor is shown in Figure 21.5b.

∗ The coder output ys(t) as presented in Section 21.2 is actually the result of merging both positive and negative spike streams.
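Both the negation of Equation 21.8 and the gain of Equation 21.11 reduce to simple bookkeeping on the event stream, as this behavioral sketch shows (spike values ±1 stand in for the two spike addresses; the stream and step values are illustrative):

```python
def decode(spikes, delta, z0=0.0):
    """ASEC decoder integrator (behavioral): returns the final value."""
    z = z0
    for s in spikes:
        z += delta * s
    return z

ys = [+1, +1, -1, +1, +1]    # spikes sent by a coder with step 0.1
zs = decode(ys, 0.1)         # value tracked on the coder side

# Negation (Eq. 21.8): the AER decoder interchanges the positive and
# negative spike addresses, modeled here by flipping each polarity.
z_neg = decode([-s for s in ys], 0.1)

# Gain (Eq. 21.11): same spikes, larger decoder step, G = 0.2/0.1 = 2.
z_gain = decode(ys, 0.2)
```

Swapping the two spike addresses is modeled here by flipping the sign, and programming δr = 2δs yields G = 2 without touching the coder at all.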
FIGURE 21.5
Gain operation. The input sine wave signal xs(t) is amplified by a factor of 2 in (a) and attenuated by 2 in (b), at the decoder output zr(t). The gain operation is obtained by selecting the ratio of the tracking steps of the coder and the decoder. This operation will result in signal amplification when the tracking step in the decoder is set greater than in the coder; otherwise, a signal attenuation will be produced.

21.3.2.3 Modulus

Another unary function commonly presented is the modulus (or absolute value) of a signal. The modulus of a signal is the magnitude of such a signal, defined as

xabs(t) = |x(t)|.    (21.12)

This function is performed by implementing the following AER control on the transmission of the spikes:

ysp(t) → yrp(t), ysn(t) → yrn(t)  if sig(xs(t)) = +1
ysp(t) → yrn(t), ysn(t) → yrp(t)  if sig(xs(t)) = −1,    (21.13)

where sig(xs(t)) represents a control variable, which is a function of the signal: sig(xs(t)) = xs(t)/|xs(t)|.

This algorithm can be implemented in two different ways. The first requires an analog comparator and the second uses a digital counter. In both cases, the result in the decoder is the modulus of the signal. This operation is illustrated in Figure 21.6 for a sine wave signal.

In the first method, the original signal is compared against a reference signal representing the zero reference value. This comparator can be included in the same or in a distinct SPU. The comparator output signal xc(t) is coded and transmitted to the communication channel. The AER decoder controller reads this signal and sets (+1) or resets (−1) the variable sig(xs(t)) on positive and negative spike reception, respectively. If this variable is reset, the AER decoder performs the negation operation on the original signal as described before. Otherwise, no operation is performed and the output signal replicates the input signal.

The second method employs a digital counter to measure the positive and negative spikes generated by the original signal. To work properly, the initial condition of the signal should represent the zero value.

FIGURE 21.6
Modulus operation. In this configuration, the modulus of the input signal xs(t) is output as the result of selectively applying the negation operation. A digital counter or an analog comparator can be used to control the periods when the negation is performed.
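The counter-based modulus method lends itself to a short behavioral sketch. This is an illustrative model, not the chip's logic; note in particular that the behavior exactly at a zero crossing depends on when sig(x) is updated, and here it is updated after each spike:

```python
def modulus_decode(spikes, delta):
    """Modulus via the digital-counter method (behavioral sketch).

    A signed counter follows the original signal, which must start at
    zero.  While the counter is negative, sig(x) = -1 and the incoming
    polarities are interchanged (the negation operation), so the
    decoded value z tracks |x|.
    """
    counter = 0      # net spike count, proportional to x(t)
    z = 0.0          # decoded output, tracking |x(t)|
    for s in spikes: # each s is +1 or -1
        counter += s
        sign = 1 if counter >= 0 else -1
        z += delta * s * sign   # polarity swap when sig(x) = -1
    return z

# x ramps to -3*delta and back up to -delta: |x| should end at delta.
z = modulus_decode([-1, -1, -1, +1, +1], delta=0.1)
```

While the counter is negative, each incoming spike is integrated with its polarity flipped, which is exactly the selective negation described in the caption above.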


FIGURE 21.7
Signal summation in analog (a) and spike event coding (b) systems. Analog summation can be implemented in diverse forms, the simplest being current summation according to Kirchhoff's current law on a node. The summation in the proposed architecture is performed by simply merging the spikes generated by the input signals (operands). In (b), A/S and S/A stand for Analog-to-Spike and Spike-to-Analog conversions, respectively.

If so, the counter is incremented for each positive spike and decremented for each negative spike. Every time the counter value changes from a negative value to a positive one and vice-versa, the variable sig(xs(t)) is updated.

21.3.2.4 Summation and Subtraction

Arithmetic operations involving two or more signals are also required by any generic analog computation system. A fundamental arithmetic operation involving multiple signals is the summation of these signals. Figure 21.7 shows the conventional concept of performing summation in the analog signal domain as well as in the proposed architecture. The summation signal xsum(t) with N signal operators is defined as

xsum(t) = ∑_{i=1}^{N} xi(t).    (21.14)

To implement this operation with the ASEC scheme, Equation 21.6 is considered again. From this equation, it is possible to demonstrate that the summation signal of N operators is obtained at the decoder output xsum(t) by simply merging their respective output spikes:

xsum(t) ≈ zr(t) = δ(∑_{i=1}^{N} nrpi − ∑_{i=1}^{N} nrni) + zr(t0),    (21.15)

where nrpi and nrni are the number of positive and negative spikes, respectively, received from the ith operand after the initial instant t0.

In this architecture, the summation is performed by routing the spikes of all operands to the same decoder, with the SPU input—after the spike-to-analog conversion—being the result. Simulation results showing the summation of two sine wave signals xs1(t) and xs2(t) are shown in Figure 21.8a, with the decoder integrator output zr(t). The signal xsum(t) is the theoretical result. Subtraction xsub(t) may be achieved by applying

FIGURE 21.8
Summation and subtraction operations. (a) Shows the summation of two input sine waves, x1(t) and x2(t). Signal zr(t) is the respective summation, while xT(t) is the ideal summation result. The subtraction of the same input signals, x1(t) − x2(t), is shown in (b). The summation is performed by the concatenation yr(t) of the spikes from all operands, that is, y1(t) and y2(t) in this case, while subtraction also requires the negation operation. Spike collisions are identified whenever yr(t) presents twice the amplitude in these figures.
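The merge-based decoding of Equation 21.15 can be sketched behaviorally. The tsd_encode helper below is an illustrative software stand-in for a tracking (TSD) coder, not the circuit described later in this chapter; the signals and the tracking step are arbitrary choices:

```python
import math

def tsd_encode(samples, delta):
    """Behavioral tracking (TSD) coder: emit a +1/-1 spike whenever the
    input drifts more than delta/2 away from the local integrator."""
    z, spikes = 0.0, []
    for x in samples:
        s = 1 if x - z > delta / 2 else (-1 if x - z < -delta / 2 else 0)
        z += s * delta
        spikes.append(s)
    return spikes

# Two sine operands, finely sampled so neither coder slope-overloads.
n, delta = 2000, 0.05
t = [2 * math.pi * i / n for i in range(n)]
x1 = [math.sin(u) for u in t]
x2 = [0.5 * math.cos(3 * u) for u in t]
y1, y2 = tsd_encode(x1, delta), tsd_encode(x2, delta)

# Equation 21.15: one decoder integrates the merged spike streams;
# its output approximates x1 + x2 without any explicit adder.
z, zsum = 0.0, []
for s1, s2 in zip(y1, y2):
    z += delta * (s1 + s2)        # net positive minus negative spikes
    zsum.append(z)

# After the initial catch-up transient, the error stays within ~2*delta.
err = max(abs(x1[i] + x2[i] - zsum[i]) for i in range(n // 2, n))
```

Because the decoder integrates δ times the net spike count regardless of which operand each spike came from, the merged stream reconstructs the sum; feeding one operand through the negation operation would give the subtraction of Figure 21.8b.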

both the summation and the negation operations outlined before, and an example is depicted in Figure 21.8b.

21.3.2.5 Multiplication and Division

While some operations are performed using asynchronous spike event coding and decoding methods together with the AER decoder, the range of possible computations can be expanded when the communication scheme capabilities are combined with the functionality of the SPUs. Multiplication is an example of this cooperation. Since the summation operation can be performed by the ASEC scheme, multiplication can also be implemented in this architecture. This assumption is fulfilled using the following logarithmic property:

    xmult(t) = ∏_{i=1}^{N} xi(t) = exp( ∑_{i=1}^{N} ln xi ).      (21.16)

In other words, this property establishes that the multiplication operation can be computed from the summation operation if the operands xi(t) are subjected to logarithmic compression and the summation result is expanded in an exponential fashion. Although this compression and expansion are not performed by the ASEC scheme, if those operations are implemented inside

FIGURE 21.9
Logarithm and exponential amplifiers. These figures show traditional circuits used to realize both logarithm (a) and exponential (b) opera-
tions on analog signals. These circuits were not implemented in any chip designed for this work. These circuits can be implemented to allow
multiplication and division operations.
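Under the assumption that logarithmic compression and exponential expansion are available in the SPUs, the multiplication of Equation 21.16 can be checked numerically. The encode_decode helper is a hypothetical stand-in that models only the step-quantized tracking of a coder/decoder pair:

```python
import math

def encode_decode(samples, delta):
    """Hypothetical coder/decoder pair: the decoded signal tracks the
    input in quantized steps of delta (tracking behavior only)."""
    z, out = 0.0, []
    for v in samples:
        while v - z > delta / 2:
            z += delta
        while v - z < -delta / 2:
            z -= delta
        out.append(z)
    return out

# Equation 21.16: log-compress the (positive) operands, sum the coded
# logs, then expand exponentially to obtain the product.
delta = 0.01
x1 = [1.0 + 0.5 * math.sin(i / 100) for i in range(500)]
x2 = [2.0 + 0.3 * math.cos(i / 100) for i in range(500)]
log1 = encode_decode([math.log(v) for v in x1], delta)
log2 = encode_decode([math.log(v) for v in x2], delta)
product = [math.exp(a + b) for a, b in zip(log1, log2)]

err = max(abs(p - a * b) for p, a, b in zip(product, x1, x2))
```

The tracking error of each log channel is bounded by δ/2, so the relative error of the expanded product stays near e^δ − 1, about 1% for the step chosen here.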


FIGURE 21.10
Average operation. This figure shows the input signals xs1 ( t) and xs2 ( t) and the respective average signal zr ( t). This operation is performed by
combining the summation and the gain operations, where the gain factor is the inverse of the number of operands.
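A behavioral sketch of the average operation, with an assumed tracking step; the decoder applies the gain G = 1/N of Equation 21.18 by integrating the merged spike streams with the reduced step δ/N:

```python
import math

def tsd_encode(samples, delta):
    """Behavioral tracking coder: at most one +/-1 spike per sample."""
    z, spikes = 0.0, []
    for x in samples:
        s = 1 if x - z > delta / 2 else (-1 if x - z < -delta / 2 else 0)
        z += s * delta
        spikes.append(s)
    return spikes

# Equations 21.17-21.18: averaging is the summation operation with a
# decoder gain G = 1/N, i.e. a decoder tracking step of delta/N.
n, delta = 2000, 0.05
t = [2 * math.pi * i / n for i in range(n)]
x1 = [math.sin(u) for u in t]
x2 = [math.sin(2 * u) for u in t]
streams = [tsd_encode(x1, delta), tsd_encode(x2, delta)]

delta_r = delta / len(streams)    # G = 1/N applied at the decoder
z, avg = 0.0, []
for step in zip(*streams):
    z += delta_r * sum(step)
    avg.append(z)

err = max(abs((a + b) / 2 - v) for a, b, v in zip(x1, x2, avg))
```

As the text notes, δ/N cannot be made arbitrarily small in hardware, which is what limits the number of operands N.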

the SPUs, both multiplication and division operations would be available in the system. Division is achieved similarly, using the subtraction operation.

A conventional method to generate this logarithmic compression is to use the circuits presented in Figure 21.9, known as logarithmic and exponential amplifiers. Although they present a simple topology, the diode resistance is usually highly dependent on the temperature. These circuits were not implemented in the work described here.

21.3.2.6 Average Operation

Once the gain and the summation operations are available, another possible operation is the average of several inputs at each instant, given by

    xavg(t) = (1/N) ∑_{i=1}^{N} xi(t).                            (21.17)

To implement this operation, ASEC uses the same summation and gain procedures, where the gain G in 21.3.2.2 is here

    G = 1/N.                                                      (21.18)

Figure 21.10 presents an example of such a computation using the same signals as the summation operation with the gain G = 0.5. This expression indicates that the minimum possible amplitude for the tracking step limits the maximum number of operands in this operation.

21.3.3 Signal Conversion

The main advantage of an analog system is to process analog signals using its SPUs. However, it might be of some interest to have a digital representation of the analog signals, both for storage and transmission purposes. Implementations of analog-to-digital converters (ADC)

based on asynchronous modulations were presented in [16,18–20].

A simple method to provide a digital representation is to use an up/down digital counter updated on the onset of received spikes, where the counter direction is determined by the spike polarity. However, such a method is not suitable for signal storage as the timing information (time stamping) is not measured.

Realizing data conversion with the proposed scheme—and keeping the timing information for signal storage—can be achieved using time-to-digital (TDC) and digital-to-time (DTC) converters. In its simplest implementation, a TDC circuit consists of a digital counter operating at a specific frequency, which determines the converter resolution. Its input is an asynchronous digital signal which triggers the conversion process: read (and store) the current counter value and then reset it. The digital value represents the ratio between the interval of two consecutive inputs and the counter clock period:

    Xi = (ti − ti−1) / tclk,                                      (21.19)

where Xi is the digital representation of the time interval ti − ti−1. Therefore, an AD conversion is implemented. The TDC can be used to read the output spikes ys(t) generated by an ASEC coder with input xs(t), because each spike transmitted using the ASEC scheme represents the same signal magnitude change. An extra bit is, therefore, needed to represent the polarity of the spike. For instance, the least significant bit may indicate the spike polarity.

However, with constant input signals, the ASEC coder will not generate any spikes. In this case, the digital counter can overflow and the information would be lost. One method to avoid this is to ensure the registry of these instants by storing or transmitting the maximum or minimum counter value, for instance. These values add up until an incoming spike stops and resets the counter.

Another, more complex, method would be to dynamically change the tracking step of the ASEC coder in the same fashion as continuously variable slope delta (CVSD) modulation. Every time the counter overflows, the tracking step would be reduced (halved, for instance) to capture small signal variations. This approach increases the digital word size because the step size has to be stored. However, the number of generated digital words is smaller.

The complementary process can be used to perform a digital-to-analog conversion (DAC). In this case, a counter (DTC) is loaded with a digital word and then counts backwards until the minimum value is reached (typically zero) before reloading the next digital word. At this point, a spike with the correct polarity is transmitted to an ASEC decoder through the AER communication channel to (re)generate the analog signal. These applications can be used to implement complex systems that could not fit in a simple analog system. Note, however, that this works only for non-real-time systems.

21.3.4 Other Operations

21.3.4.1 Shift Keying Modulations

Specific applications can also be realized with the proposed architecture. An example is phase-shift keying, a digital modulation where the phase of a carrier changes according to the digital information. For instance, binary phase-shift keying (BPSK) is defined as

    s(t) = zr(t) = a sin(ωc t)          if bi = 0,
                   a sin(ωc t + 180°)   if bi = 1,                (21.20)

where ωc is the angular frequency of the carrier, bi is the bit to be transmitted, and a is a function of the energy per symbol and the bit duration [21].

To implement this function using the ASEC architecture, one must provide a carrier (sine wave) signal to the input of the coder and route the resulting spikes according to the digital value to be transmitted. Whenever the input bit is zero, the carrier is replicated at the decoder output, while a negation operation is performed otherwise, that is,

    ysp(t) → yrp(t), ysn(t) → yrn(t)    if bi = 0,
    ysn(t) → yrp(t), ysp(t) → yrn(t)    if bi = 1.                (21.21)

Figure 21.11 shows the modulation result zr(t) for an input word x2(t) = 01011001. Quadrature phase-shift keying (QPSK) is possible using the summation operation with another sine wave (90° phase shifted) and the fast generation of a fixed number of successive spikes to quickly change the carrier phase at the signal discontinuity. This function has a working principle similar to the modulus function described earlier. The difference is that the AER routing is controlled by an external signal b(t) rather than by the input signal itself.

21.3.4.2 Weighted Summation

The last operation described in this section is the weighted summation. This operation, along with a nonlinear transfer function, is the functional core of artificial neural network (ANN) systems. Other applications for the weighted summation include bias compensation in statistics and center-of-mass calculation of a lever, among

FIGURE 21.11
Binary phase-shift keying. This modulation is implemented by routing the spikes generated by the sine wave carrier x1 ( t) according to the
digital input x2 ( t). In this example, the modulated digital word is 01011001.
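The polarity routing of Equations 21.20 and 21.21 can be emulated in software. The helper coder and all parameters are illustrative; bit boundaries here are aligned with carrier zero crossings, so no extra phase-correction spikes are needed in this sketch:

```python
import math

def tsd_encode(samples, delta):
    """Behavioral tracking coder used as the carrier spike source."""
    z, spikes = 0.0, []
    for x in samples:
        s = 1 if x - z > delta / 2 else (-1 if x - z < -delta / 2 else 0)
        z += s * delta
        spikes.append(s)
    return spikes

bits = [0, 1, 0, 1, 1, 0, 0, 1]           # the word 01011001
spb = 400                                 # samples per bit = one carrier period
delta = 0.05
carrier = [math.sin(2 * math.pi * i / spb) for i in range(spb * len(bits))]
spikes = tsd_encode(carrier, delta)

# Equation 21.21: pass spike polarities through for bit 0, swap them
# (the negation operation) for bit 1, then integrate at the decoder.
z, zr = 0.0, []
for i, s in enumerate(spikes):
    b = bits[i // spb]
    z += (s if b == 0 else -s) * delta
    zr.append(z)

# Coherent demodulation recovers the transmitted word.
recovered = []
for k in range(len(bits)):
    corr = sum(zr[k * spb + j] * carrier[k * spb + j] for j in range(spb))
    recovered.append(0 if corr > 0 else 1)
```

The decoder output follows +carrier during 0-bits and −carrier during 1-bits, so correlating each bit interval against the carrier recovers the word.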

others. The implementation using the proposed communication method is straightforward by combining the gain and the summation operations.∗ The expression for the weighted summation is

    WS(t) = ∑_{i=0}^{N} wi(t) xi(t).                              (21.22)

One advantage of implementing such a computation in this architecture is the possibility of implementing massive ANNs in real time because of the system's inherent scalability. On the downside, because different inputs xi(t) are multiplied by different and usually changing gains wi(t), the decoder tracking step δr needs to be changed before receiving the corresponding spike. This results in a further delay between the instant when the AER routing computes the target address and the moment when the ASEC decoder implements the update on its output. This delay is due to the setting up of the new tracking step, which usually requires waiting for a DAC controlling the decoder integrator to settle to a new value.

21.4 ASEC aVLSI Implementation

The ASEC scheme uses the TSD modulation to perform the analog-to-time conversion and the AER protocol to route the spikes in an analog system. The ASEC block diagram presented in Figure 21.12 is the result of rearranging the TSD block diagram presented in Figure 21.2a and the ASEC–SPU interface diagram presented in Figure 21.1. This rearrangement was performed to better understand the implementation of each of the designed blocks. The comparator, the spike generator, the pulse generator, and the integrator circuit designs are further described in this section. Before analyzing each of these blocks individually, the ASEC parameters and their design are explained.

The first step in the design of the coder is to define the main∗ parameters of the comparators of Figure 21.2a, that is, the thresholds eth1 and eth2. The absolute values of these thresholds impact the DC level of both zs(t) and zr(t). For a null DC level error, that is, e(t) = 0, these thresholds must be symmetrical (eth1 = −eth2). A positive DC error is produced when |eth1| > |eth2|. Similarly, |eth1| < |eth2| results in a negative error. More importantly, the difference between the comparator thresholds holds a relation with the coding resolution as

    Δeth = eth1 − eth2 = δ,                                       (21.23)

where δ—the tracking step—is a function of the integrator gain ki as in

    δ = ki T,                                                     (21.24)

and T is the duration of the pulses produced by the pulse generator block. The parameter T is designed according

∗ This method does not work when changing the gain for constant signals.
∗ For the targeted applications of this scheme, the speed is not the most crucial parameter.



FIGURE 21.12
Block diagram of the ASEC implementation. CC is the composite comparator that performs the function of both comparators in Figure 21.2a. This block outputs the signals c1(t) and c2(t) according to the input signal xs(t) and the system parameter Δeth = eth1 − eth2. SG is the spike generator and PG is the pulse generator. PG and SG are integrated on the coder side. SG generates the AER request signal req and waits for the acknowledge signal ack from the arbiter in the AER channel. On the reception of ack, PG outputs pulses inc and dec. The gain ki defines the tracking step δ value of the integrator block INT and is set by the current I.

to the predicted input signals, the system size, and the overall configuration, because these characteristics determine the channel communication activity.

The communication channel may be overloaded when the activity of all coders exceeds the channel capacity. In this implementation, the spike generator sets a minimum interval between successive output spikes as a preventive method to avoid this condition. This "refractory period" is defined as

    Δt(min) = 1/fsk(max) − T,                                     (21.25)

while the spike width is determined from

    T = δ / ((k + 1) |ẋs(t)|(max)),                               (21.26)

if the period is defined as a multiple of the spike "width", Δt(min) = k T.

An estimation of the limitations imposed on the input signal can be derived from the definition of Δt(min). From Equation 21.26, the frequency of an input sine wave xs(t) = A sin(ωin t) is

    fin = ωin/2π = (1/2π) (δ/A) fsk(max) = (1/2π) (1/2^Nb) fsk(max).   (21.27)

This means that once the system parameters are defined, the maximum input frequency is inversely proportional to the signal amplitude. In case the input signal frequency is greater than the value defined by Equation 21.27, the system will present slope overload [22].

The communication scheme presented in Figure 21.12 can be implemented using a small number of compact circuits for each block (comparator, spike generator, and integrators). The design methodology is presented in more detail for each of these circuits in the next subsections. The decoder low-pass filter (LPF) was not implemented on-chip, because it is an optional part of the circuit which increases the method resolution; therefore, an offline, software-based implementation was used instead.∗ These ICs were designed using a 3.3 V power supply in a four-metal-layer, single-polysilicon-layer, 0.35 μm pitch CMOS process. The voltage domain was used to represent the analog signals involved in the communication process. This choice would allow an easier integration with the SPU proposed in [23].

21.4.1 Comparator

In the block diagram presented in Figure 21.2a, the signal subtraction and comparison functions are performed by different blocks, but they are implemented as a single circuit: the comparator. Comparators can be implemented using a preamplifier (PA) feeding a decision circuit (DC) and an output buffer (OB) [24], as shown in Figure 21.13. The preamplifier circuit provides enough gain on the input signal to reduce the impact of the mismatch on the decision circuit, which outputs a digital-like transition signaling the comparison.

∗ The main design parameter of the decoder filter is its pole location, as it impacts the signal resolution by attenuating the undesirable out-of-band high frequency harmonics generated during the decoding process [3].
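The timing relations of Equations 21.25 through 21.27 can be exercised with assumed numbers (these are not the values of the fabricated IC). Choosing the input at the maximum frequency of Equation 21.27 makes the refractory period exactly k pulse widths, consistent with Δt(min) = kT:

```python
import math

# Assumed channel and signal parameters (illustrative only).
f_sk_max = 1e6         # maximum spike rate the channel sustains, Hz
k = 4                  # refractory period expressed in pulse widths
Nb = 8                 # target resolution, bits
A = 1.0                # sine amplitude
delta = 2 * A / 2**Nb  # tracking step: 2A range split into 2**Nb steps

# Equation 21.27: maximum input sine frequency for this spike rate.
f_in_max = f_sk_max * delta / (2 * math.pi * A)

# Equation 21.26: pulse width from the maximum input slope A*w_in.
slope_max = A * 2 * math.pi * f_in_max
T = delta / ((k + 1) * slope_max)

# Equation 21.25: the resulting refractory period. At the maximum
# input frequency this equals k*T, i.e. the period 1/f_sk_max spans
# (k + 1) pulse widths, as stated in the text.
dt_min = 1 / f_sk_max - T
```

With these numbers, δ ≈ 7.8 mV (for a 1 V range), T = 0.2 μs, and Δt(min) = 0.8 μs = 4T.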

FIGURE 21.13
Block diagram of a typical comparator. The input signals’ difference is magnified by the preamplifier (PA) and then used by the decision circuit
(DC) to provide a comparison signal. An output buffer (OB) keeps the transition times independent of the load.


FIGURE 21.14
Block diagram of the compound comparator, adapted from [3]. The output current Δi xz from preamplifier PA A is a function of the inputs x ( t)
and z( t) while preamplifier PA B outputs a fixed offset current ieth . ieth is both added and subtracted from Δi xz , and the results are applied to the
respective decision circuits (DC) and output buffers (OB).

The output buffer provides the current needed to keep the rising and falling times short for any load.

Capacitive or resistive dividers at the comparator input nodes can be used to realize the required Δeth, but they impact negatively on the input impedance of the circuit [25]. Δeth can also be generated using offset comparators [25]; however, this topology suffers from a low input dynamic range. An additional preamplifier, which provides a respective Ieth on the decision circuit input, can also be used to generate a programmable offset [26].

A compound comparator outputs both c1(t) and c2(t), as shown in Figure 21.14. Initially, this would require the use of four preamplifiers, with two sensing the inputs x(t) and z(t) and two providing different offsets (one set for each output). However, only two preamplifiers are needed. The preamplifier PA_A outputs a differential current ΔIxz(t) according to the difference between x(t) and z(t), that is, the error signal e(t) in Figure 21.2a. The additional capacitive load on the input nodes x(t) and z(t) of using two preamplifiers is therefore avoided. The other preamplifier (PA_B) provides a differential current Ieth according to the voltage Δeth on its inputs. The results of adding and subtracting these currents, (Ixz + Ieth) and (Ixz − Ieth), are forwarded to the decision circuits to speed up the result.

Figure 21.15 depicts electrical schematics of the circuits used in the compound comparator. The preamplifiers are identical transconductance amplifiers. The decision circuit is a positive feedback circuit, while the output buffer is a self-biased amplifier [27]. The comparator was designed to provide a hysteresis smaller than the tracking step to help avoid excessive switching due to noise. This hysteresis is generated with the size difference between transistors M9−M10 and M11−M12.

21.4.1.1 Design Considerations

The system performs at its best when the tracking step δ generated at the integrator outputs is the same as the difference between the thresholds Δeth of the comparators. This is very unlikely, as process mismatches [28] will certainly vary the comparator offsets Vos1 and Vos2 from the designed value ΔethD, as shown in Figure 21.16b. Hence, the actual Δeth is bounded, for a 3σ variation (99.7%), by

    ΔethD + 3σos ≥ Δeth ≥ ΔethD − 3σos,                           (21.28)

where σos = σ(Vos1) + σ(Vos2) for uncorrelated variables, and σ(Vos1) and σ(Vos2) are the standard deviations of the comparator offsets Vos1 and Vos2, respectively.

Therefore, a design margin is required to compensate for the random offsets due to process variations. If δ < Δeth, the feedback signal zs(t) is delayed and distorted, in particular in the regions close to the signal inflection

FIGURE 21.15
Circuit schematic for each block of the compound comparator, adapted from [3]. Transconductance preamplifier (PA) in (a), decision circuit (DC)
in (b), and output stage (OB) in (c). In Figure 21.14, the connection of blocks PAsig , DC1 , and OB1 is achieved by connecting the nodes a and b to
nodes c and d, respectively, where ia(t) − ib(t) = ixz(t). Similarly, for the circuit PAth in Figure 21.14, ia(t) − ib(t) = ieth, and node a connects to
node d and node b to node c to implement the negative gain.


FIGURE 21.16
Design and mismatch effects of comparator thresholds, adapted from [3]. (a) Shows comparator transfer functions while comparator threshold
variations Δe th1 and Δe th2 , with Δe thX ≈ 6σ(VosX ), used to calculate the designed threshold difference Δe thD , are presented in (b). For optimum
design, Δe th (min ) = δ.
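The guard band of Equation 21.29 can be illustrated with assumed offset statistics (the σ values below are hypothetical):

```python
# Assumed tracking step and comparator offset statistics (hypothetical).
delta = 0.05                  # tracking step, V
sigma1 = 2e-3                 # std of comparator 1 offset, V
sigma2 = 2e-3                 # std of comparator 2 offset, V
sigma_os = sigma1 + sigma2    # combined spread used in Equation 21.28

# Equation 21.29: design the threshold difference with a 6-sigma guard.
delta_eth_design = delta + 6 * sigma_os

# Equation 21.28: worst-case realized threshold difference.
eth_low = delta_eth_design - 3 * sigma_os
eth_high = delta_eth_design + 3 * sigma_os
# Even at the 3-sigma low corner, eth_low >= delta, so the oscillatory
# regime delta > delta_eth is avoided at the cost of some extra
# delay and distortion (the delta < delta_eth regime).
```

The asymmetry of the margin reflects the asymmetry of the failure modes: δ > Δeth causes oscillation, while δ < Δeth only degrades tracking.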

FIGURE 21.17
Simulation of the comparator model with mismatch. TSD model results for 3-bit resolution with different mismatches between the tracking step δ and the comparator threshold difference Δeth. (a) Shows the case for Δeth = δ, (b) Δeth = 2.5δ, and (c) Δeth = 0.8δ, which presents excessive switching activity.

points. More critically, zs(t) will oscillate for δ > Δeth. To avoid the last case, the comparators' threshold difference is designed to meet the following safety margin:

    ΔethD ≥ δ + 6σos.                                             (21.29)

Figure 21.17 illustrates the effects of this mismatch when Δeth is equal to, greater than, and smaller than δ. These results were obtained from the TSD model simulation for a 3-bit resolution. Figure 21.18 shows the influence of this mismatch type on the resolution.

21.4.2 Spike and Pulse Generators

The spike generator block can output positive and negative spikes according to the state of the c1(t) and c2(t) signals. It is required that a positive spike is generated whenever the error e(t) > Δeth/2 and, similarly, a negative spike is generated when e(t) < −Δeth/2. The spike generator will not generate any spikes otherwise. From these spikes, corresponding pulses are generated both at the coder and the decoder. Figure 21.19a and b presents

FIGURE 21.18
Impact of comparator mismatch on the resolution. Simulation results for the TSD model with different mismatches between the tracking step δ and the comparator threshold difference Δeth. The ENoB is measured on the zr(t) = zs(t) signal. The simulation does not include noise sources other than mismatch.

the block diagrams of the combined spike and pulse generators on the coder and of the pulse generator on the decoder, respectively. Figure 21.19c and d shows an example of the behavior of the control signals.

The local arbiter senses which of the comparator outputs c1(t) and c2(t) changed first and locks its state until the end of the cycle of the signal zs(t), disregarding further changes, including noise. The arbiter output then triggers the spike generator, which starts the handshaking communication with the AER arbiter by setting either the reqp or the reqn signal. On receipt of the ack signal, the pulse generator is activated and provides a pulse—inc or dec—to the integrator according to the arbiter output.

The pulse generator block includes programmable delay circuits for the generation of the T and Δt(min) time intervals. These blocks were implemented as a current integrator feeding a chain of inverters. This is a suitable implementation, although more efficient methods are available [29]. The control logic for this circuit was implemented using a technique developed to design asynchronous digital circuits [30]. Although the circuit in [7] presents simpler hardware, it lacks the noise tolerance provided by the arbiter.

21.4.3 Integrator

The integrator is the last block remaining to be analyzed. Together with the comparator threshold difference Δeth, its gain ki defines the resolution of the coder. An integrator based on the switched current integration (SCI) technique [13] was designed. This implementation, used in charge pump circuits, was also used in other modulation schemes [13,31], and a unipolar version of this type of circuit driving resistors is used in the current-steering cells of some DAC converters [32], as shown in Figure 21.20.

The schematic of the implemented SCI circuit is presented in Figure 21.21, and it works as follows. On the detection of a positive spike, the inc signal is set to a high voltage level (and inc to a low voltage). This allows currents 0.5I and 1.5I to flow in transistors M21 and M20, respectively, for a fixed interval T. The current difference then charges the integrating capacitor Cint. Symmetrically, the transistors M22 and M23, controlled by the dec and dec signals, provide a similar current to discharge the capacitor. The resulting current I that charges or discharges the integrating capacitor Cint is given by

    I = Cint δ/T,                                                 (21.30)

and from Equation 21.24, the designed integrator gain may be obtained as

    ki = I/Cint,                                                  (21.31)

and when there are no spikes from the spike generator, the currents are driven to low-impedance nodes (d1−d4) through M24−M27 by setting the voltages of the signals dp to high and dp to low levels.

This circuit operation, using simultaneous charging and discharging paths for the integration capacitor, penalizes the power consumption of the system but reduces the charge injection on the integration nodes zs(t) and zr(t) [33]. Charge injection at the gate-to-drain capacitances of the complementary switches will cancel out if switches M20 and M21 (M22 and M23) have the same dimensions.

Another design approach would be switching the current mirror itself on and off instead of driving the current to low-impedance nodes. This solution would result

FIGURE 21.19
Spike (SG) and pulse (PG) generators, adapted from [3]. Coder’s combined spike and pulse generators (a) and decoder’s pulse (b) generator
block diagrams. (c) Is an example of timing diagram of the block in the coder while (d) shows timing diagrams on the decoder. In the coder,
a local arbiter selects between c1 ( t) and c2 ( t) inputs, starts the handshaking signaling (either setting the appropriate request signal req p or req n ),
and waits for the acknowledge signal ack from the AER arbiter. On completion of the handshaking process, either an inc or a dec pulse is generated using delay blocks for T and Δt(min).

in a lower power consumption, but switching the voltage reference at the gate terminal would also produce temporary errors that would corrupt the tracking step amplitude.

The gain operation described in 21.3.2.2 relies on the fact that the tracking steps δs and δr are programmable and different. These different tracking steps can be realized by applying different bias currents I to the coder and decoder integrators according to Equation 21.30.

21.4.3.1 Design Considerations

According to Equation 21.30, the integrator based on the SCI technique presents three different sources of mismatch between the coder and the decoder. The first is in the pulse generation, through the time delays in the pulse generator block. Because of process variations, the current source, the capacitor, and the inverters' thresholds can vary and then produce a different pulse width T. The second mismatch source is the size of the capacitor Cint in each integrator. The third is the mismatch between the current sources that provide the currents 1.5I and 0.5I in Figure 21.21. Moreover, the mismatch between current sources and current sink may cause a difference between the increase and decrease tracking steps. Cascoded current mirrors were used to reduce the current mismatch.

The resulting difference may impact the functionality of the scheme. For instance, while the coder integrator will still track the input signal, the decoder integrator

FIGURE 21.20
Switched current integrator (SCI) examples. (a) Unipolar version as used in current-steering DACs and (b) bipolar version used in signal
modulation.

FIGURE 21.21
Schematic of the integrator based on an SCI technique, adapted from [3]. A positive spike triggers a voltage increment of δ according to (21.30)
by turning on transistors M20 and M21 which then feed 1.5I and 0.5I currents to charge and discharge the capacitor Cint , respectively. For
negative spikes, the complementary process decreases the capacitor voltage. In the absence of spikes, currents are drawn to low impedance
nodes (d1 − d2 ).
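A behavioral model of the SCI step, with illustrative component values (not those of the fabricated IC), reproduces the relations of Equations 21.30 and 21.31:

```python
# Illustrative component values for the SCI integrator model.
I = 1e-6          # net integration current, A
C_int = 10e-12    # integration capacitor, F
T = 0.5e-6        # pulse width, s

k_i = I / C_int   # integrator gain, Equation 21.31 (V/s)
delta = k_i * T   # tracking step, Equations 21.24 and 21.30 (V)

def sci_step(v, spike):
    """One pulse of the SCI integrator: a +1 spike enables the 1.5I
    source / 0.5I sink pair (net +I into C_int); a -1 spike enables
    the complementary pair (net -I out of C_int)."""
    return v + spike * I * T / C_int

# Two positive, one negative, one positive spike: two net steps up.
v = 0.0
for s in (1, 1, -1, 1):
    v = sci_step(v, s)
```

With these numbers each pulse moves the output by δ = I·T/Cint = 50 mV, so programming different bias currents I in the coder and decoder realizes the different tracking steps δs and δr used by the gain operation.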

might present a different behavior, normally saturating at either of the power supplies. A solution for this erratic behavior was found by changing the amplitude of the inc and dec signals. Normally, these signals present a full- or half-scale amplitude.∗ By changing the ON-state amplitude, for instance limiting the dec signal amplitude to a value Vdec closer to the M22 threshold voltage, we can force the current mirror output transistor, which provides the 1.5I current, to leave the saturation region and enter the linear region, thus reducing the effective mirrored current. The IC provides a mechanism to change this amplitude. This calibration mechanism can thus compensate for all three mismatch sources. Figure 21.22a and b shows the decoder behavior before and after this solution. In the former, the swing amplitude of both the inc and dec signals is 1.65 V, as originally designed, while in the latter the amplitude swing of the signal inc was reduced to 0.9 V.

21.4.4 Filter and AER Communication

The filter provides the averaging function in Equation 21.4 by removing high-frequency spectrum components. An ideal, unrealizable filter (sinc or brick-wall filter) would provide total rejection of out-of-band harmonics and null in-band attenuation. In practice, the cutoff frequency ωp of a realizable low-pass filter is designed to be greater than the input signal bandwidth. The trade-off between out-of-band component suppression and signal distortion has to be considered in the filter design.

∗ In this implementation, the control signals were originally designed for half-scale voltage amplitude, that is, inc and dec swing from ground to Vdd while inc and dec swing from ground to Vss.

FIGURE 21.22
SCI divergence. (a) Encoder zs ( t) and decoder zr ( t) integrator outputs with both inc and dec swing set to 1.65 V and (b) same signals with inc
swing reduced to 0.9 V for compensation of integrator mismatches.

TABLE 21.1
Test IC Characteristics

Parameter                      Value               Unit
Technology                     CMOS 0.35 μm, 1P4M  —
Power supply                   3.3                 V
Area: Coder                    0.03                mm²
      Decoder                  0.02                mm²
      AER                      0.025               mm²
      Total                    4.8                 mm²
Power consumption: Coder       400                 μW
      Comparator               340                 μW
      Decoder                  50                  μW
      IC                       2700                μW

The delay between the coder input xs(t) and the decoder output xr(t) is mostly impacted by the filter design. For instance, the filter will produce a 45° phase shift on the signal zr(t) when the pole is designed to be ωp = ωin, for a sine wave given by xs(t) = A sin(ωin t). For systems designed for low-frequency signals, this delay is the dominant contributor when compared against the modulator loop, Δt(min), and the AER arbitration process. For instance, the conversion of a tone signal (ωin = 2π · 4 kHz) using an 8-bit resolution imposes Δt(min) ≈ 300 ns ≪ 31.2 μs (the 45° phase shift).

The AER communication method was introduced to fully implement the asynchronous spike event coding scheme. Although the arbitration process was implemented inside the IC, the routing control was implemented in an external FPGA. Therefore, this IC presents two sets of control signals (request and acknowledge) and two address buses: one set from the IC to the FPGA and the other in the opposite direction.

The aVLSI implementation of the ASEC also contains four coders and four decoders. The total area of this IC and the area of each block are presented in Table 21.1 together with the total power consumption and that of each block.

21.4.5 Demonstration: Speech Input

In this section, IC test results that demonstrate the communication aspects of the event coding scheme are presented. Two different input signals were used to test the communication system: a speech signal and a single-tone (sine wave) signal. Measured test results for the computation capabilities of the communication scheme presented in Section 21.2 are shown as well. A 140 ms speech signal was used to demonstrate the coding functionality. This signal was originally sampled at 44.1 kSps with an 8-bit resolution. The coding system was programmed to provide a resolution of 4 bits to demonstrate the coding properties.

Figures 21.23 and 21.24 were obtained from the implemented IC. The former shows (a) the coder input signal xs(t) (top), the decoder integrator output zr(t) (middle), and the coder output spikes ys(t) (bottom, with the negative spikes first). The same signals are presented in detail in (b), where xs(t) and zr(t) are overlapped. This figure shows a key benefit of the ASEC system, as it generates no activity in the channel for signal changes smaller than Δeth.

The second figure (21.24) shows the signals used in the AER protocol: request and acknowledge signals and both inbound and outbound spike address buses. It is also possible to notice the difference between the 8-bit resolution input signal xs(t) and the 4-bit resolution signal zr(t) in Figure 21.24b.

21.4.6 Resolution: Single-Tone Input

A sine wave input signal with an amplitude of 2.0 Vpp and a frequency of 20 Hz (fin) was used to measure the resolution of the system. It was generated by a DAC and sampled at 44.1 kSps. The coder was programmed to work with a 4-bit resolution. Figure 21.25a shows two periods of the input signal and the decoder signal zr(t), and Figure 21.25b shows the offline filtering results using a digital LPF with a cutoff frequency of 20 Hz, the same frequency as the input signal.
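The 45° rule of thumb above can be checked numerically: a one-pole filter driven at its pole frequency lags by arctan(1) = π/4, that is, one-eighth of the input period. A small Python sketch (the helper first_order_delay is our own, not code from the chapter):

```python
import math

def first_order_delay(f_in, f_pole):
    """Delay (in seconds) of a one-pole low-pass for a sine at f_in:
    phase lag = atan(f_in / f_pole), converted to time."""
    return math.atan(f_in / f_pole) / (2.0 * math.pi * f_in)

# Pole at the input frequency -> 45 degrees -> 1/8 of a period.
print(round(first_order_delay(4e3, 4e3) * 1e6, 2))  # 31.25 us for a 4 kHz tone
# The modulator loop delay quoted in the text is orders of magnitude smaller.
print(300e-9 < first_order_delay(4e3, 4e3))         # True
```

This reproduces the 31.2 μs figure against which Δt(min) ≈ 300 ns is compared in the text.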
508 Event-Based Control and Signal Processing


FIGURE 21.23
ASEC demonstration, adapted from [3]. (a) Shows the coder input xs ( t) and decoder integrator output zr ( t) for an audio signal with the resulting
negative yn ( t) and positive y p ( t) spikes shown at the bottom of this figure. A detailed view of (a) is presented in (b) to show the absence of
spikes during the periods when the variation on the input signal xs ( t) amplitude is smaller than the coder resolution.
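The silent-channel behavior described in this caption can be mimicked with a toy send-on-delta tracker (cf. [8]). The Python sketch below is a simplification for illustration only; the function names are ours, and it ignores timing and the AER channel:

```python
def sod_encode(samples, delta):
    """Emit +1/-1 spikes only when the input drifts by >= delta
    from the last tracked value; a constant input emits nothing."""
    spikes, tracked = [], samples[0]
    for x in samples[1:]:
        while x - tracked >= delta:
            spikes.append(+1)
            tracked += delta
        while tracked - x >= delta:
            spikes.append(-1)
            tracked -= delta
    return spikes

def sod_decode(spikes, x0, delta):
    """Decoder-side integrator: accumulate the spike weights."""
    z = x0
    for s in spikes:
        z += s * delta
    return z

print(sod_encode([0.3] * 20, 0.2))       # [] : no activity while constant
print(sod_encode([0.0, 0.5, 0.5], 0.2))  # [1, 1] : two positive spikes
```

In the same spirit, feeding the spikes of two such encoders into one decoder integrator accumulates both contributions, a toy analog of the summation operation shown later in Figure 21.28.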


FIGURE 21.24
Full ASEC demonstration. (a) Shows the handshaking AER signals (request, acknowledge, and spike addresses). In (b), the difference between the 8-bit resolution input xs(t) and the 4-bit resolution output zr(t) is clear. In addrout(t), the absence of spikes is represented by the address 0, while the same condition is represented on the addrin(t) bus as the number 7.

The measured total harmonic distortion (THD) for the zr(t) signal is −26 dB, leading to a resolution of approximately 4 bits. The inclusion of a filter with a cutoff frequency equal to fin not only realizes the signal xr(t) with a resolution of 6.35 bits but also attenuates the signal and inserts a phase shift. For comparison, when the cutoff frequency is increased to 10× fin, the measured resolution of the postfilter signal is 4.93 bits. Resolution is ultimately limited by the mismatch of the components.

21.4.7 Computation: Gain, Negation, Summation, and BPSK

Computation properties of the ASEC were explained in Section 21.3. Those explanations were illustrated with software simulations of mathematical models. Test results are also presented here for some of the computations. Figure 21.26 corresponds to the gain operation, while Figures 21.27 and 21.28 correspond to the negation and summation (and subtraction) operations. Figure 21.29 is the test result for the BPSK. Inputs similar to those of the simulations were used for the IC test results.

21.5 Applications of ASEC-Based Systems

Some computational operations using ASEC were presented in this chapter. These included very fundamental mathematical operators such as negation and summation, and a possible extension to multiplication operations. Therefore, the scope of use of this coding method—in either purely ASEC-based or hybrid systems—in general signal processing is unbounded. Next, we describe two specific application fields as they are the main subjects of our research: programmable analog and mixed-mode architectures, and neuromorphic systems.

If a digital system is suitable for flexible architectures due to its properties, analog circuits have the advantageous characteristics of being faster, smaller, and less energy demanding [35] in general. Consequently, it is desirable to develop analog architectures that also provide the high degree of flexibility experienced by FPGAs, DSPs, or digital microprocessors [36]. Similar to FPGAs, there is a wide range of potential applications for programmable analog systems, including low-power computing [37], remote sensing [38], and rapid prototyping [39].

Although digital programmable arrays have matured and become popular over the years, the same cannot be said about analog arrays. Among the technical challenges is the sensitivity of analog signals to interference. Previous architectures relied on careful layout routing [40], on specific circuit techniques [41], or on limiting the scope of routing [42,43]. However, many architectures have been tried in the past with no or little success.
[Figure 21.25 plots: (a) oscilloscope traces yr(t), yrp(t), yrn(t), xs(t), and zr(t); (b) normalized amplitudes of zr(t) and xr(t) versus t (s), and the spectra Zr(s) and Xr(s) in dB versus f (Hz). Inset table: ENoB of xr(s) = 6.35 bits, of zr(s) = 4.04 bits.]

FIGURE 21.25
Resolution of the coding of the first implementation, adapted from [3]. (a) Oscilloscope snapshot with the input xs ( t) (2.0Vp p sine wave),
decoder integrator output zr ( t), negative yn ( t) and positive y p ( t) spikes from the coder output. (b) Reconstructed zr ( t) from ADC data and
decoder output xr ( t) from the software filter (top). The low-pass filter cutoff frequency is the same as the input signal. The frequency spectrum
Zr ( s ) and Xr ( s ) of the filter input and output (bottom) is computed.
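The ENoB values in this measurement follow the usual conversion from a measured signal-to-noise-and-distortion ratio, ENoB = (SNDR − 1.76)/6.02. A quick numeric check in Python (illustrative; the helper name is ours):

```python
def enob_from_sndr(sndr_db):
    """Effective number of bits from a signal-to-noise-and-distortion
    ratio in dB: ENoB = (SNDR - 1.76) / 6.02."""
    return (sndr_db - 1.76) / 6.02

# A THD of -26 dB corresponds to roughly 26 dB of SNDR:
print(round(enob_from_sndr(26.0), 2))  # ~4.03 bits, consistent with the
                                       # ~4-bit figure quoted in the text
```

The 6.35-bit post-filter result similarly corresponds to an SNDR of about 40 dB.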


FIGURE 21.26
Test results for the gain operation, adapted from [34]. The input xs ( t) is similar to the one in Figure 21.5. (a) Illustrates the signal amplification
while (b) shows the results when the ASEC is configured to perform signal attenuation.

FIGURE 21.27
IC results for the negation operation, adapted from [34]. The ASEC circuits were configured to produce a negated version of the sine wave xs(t), as shown in Figure 21.4.

FIGURE 21.29
Results of the binary phase-shift keying operation from the IC, adapted from [34]. In this snapshot, the input digital word b(t) to be modulated by the method is 010011. The clk(t) frequency is the same as that of the sine carrier (not shown).
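The operation in Figure 21.29 is plain binary phase-shift keying: the carrier phase flips by 180° between '0' and '1' bits. A minimal Python sketch of the resulting waveform (our illustration, not the chip's spike-domain implementation; one carrier period per bit, matching the clk(t)/carrier relation in the caption):

```python
import math

def bpsk(bits, samples_per_bit=8):
    """One carrier period per bit; '1' keeps the carrier phase,
    '0' inverts it (a 180-degree shift)."""
    out = []
    for b in bits:
        sign = 1.0 if b == "1" else -1.0
        for n in range(samples_per_bit):
            out.append(sign * math.sin(2.0 * math.pi * n / samples_per_bit))
    return out

wave = bpsk("010011")  # the digital word used in Figure 21.29
print(len(wave))       # 48 samples: 6 bits x 8 samples per bit
```

Each '0' bit produces an inverted carrier cycle, each '1' bit an upright one.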

FIGURE 21.28
Summation and subtraction operation results from the chip, adapted from [34]. This figure presents chip results similar to the simulation results in Figure 21.8. (a) Shows the summation zr(t) of two input sine waves, x1(t) and x2(t), with the same amplitude but different frequencies (23 Hz and 4.7 Hz), with xsum(t) being the ideal summation. The subtraction of the same input signals, x1(t) − x2(t), is shown in (b), where xsub(t) is the ideal result. Scope DC levels were shifted for better illustration.

By using ASEC-based systems, one can improve the architecture in two areas. First, the flexibility and scalability of the system can be increased due to the digital characteristics of the signal to be routed, reducing its noise susceptibility. Second, the signal dynamic range is also increased. While voltage and current dynamic ranges tend to be reduced with the evolution of CMOS fabrication technologies, the timing dynamic range tends to increase as the IC fabrication technology delivers faster and smaller transistors.

While general-purpose FPAA architectures have been continuously undermined by successive commercial failures, at least one application field has provided a better prospectus: artificial neural networks and neuromorphic systems in particular. Artificial neural networks (ANNs) are computational architectures where each element performs a computational role similar to that of biological neurons and synapses. The computation is highly associated with the interconnection between the elements in the network rather than with the elements' functionality. Usually, the configuration is adaptive, and it is defined through the use of learning algorithms. Although ANNs can be implemented in software, their hardware implementations (HNNs) lead to greater computational performance [44]. ANNs have been designed on programmable platforms, in the digital [45], analog [46,47], and mixed-mode domains [48].

Neuromorphic systems are VLSI systems designed to mimic at least some functional and computational properties of biological nervous systems [35]. Many neuromorphic systems are spiking neural networks (SNNs). They are based on the concept that information is transferred between neurons using temporal information conveyed in the onset of spikes [49]. Most neuromorphic systems are very specialized programmable arrays, where neurons, synapses, and other functional blocks act as basic blocks. However, these systems can also be implemented using generic programmable arrays, both digital—DSPs [50] and FPGAs [51,52]—and analog. For instance, a commercial analog FPAA was used to implement an I&F SNN [53] and a pulse-coupled oscillator [54]. In both ANNs and neuromorphic systems, an essential signal processing operation consists of applying a weighted summation to the variables (ANN) or electric pulses (neuromorphic). This operation is readily available in ASEC-based systems, as shown earlier.

In this chapter, we presented an asynchronous spike event coding scheme (ASEC) that is not only suitable for flexible analog signal routing but also presents the ability to perform a set of signal processing functions with no dedicated hardware needed. The working principle of this communication scheme was demonstrated

as the association of analog-to-spike and spike-to-analog coding methods with an asynchronous digital communication channel (AER). Spike events are essentially asynchronous and robust digital signals and are easy to route on shared channels not only between CABs but also between ICs, providing improved scalability.

Bibliography

[1] Y. Li, K. Shepard, and Y. Tsividis. Continuous-time digital signal processors. In IEEE International Symposium on Asynchronous Circuits and Systems, pages 138–143, IEEE, March 2005.

[2] M. Mahowald. VLSI analogs of neuronal visual processing: A synthesis of form and function. PhD thesis, California Institute of Technology, Pasadena, CA, 1992.

[3] L. C. Gouveia, T. J. Koickal, and A. Hamilton. An asynchronous spike event coding scheme for programmable analog arrays. IEEE Transactions on Circuits and Systems I: Regular Papers, 58(4):791–799, 2011.

[4] H. Inose, T. Aoki, and K. Watanabe. Asynchronous delta modulation system. Electronics Letters, 2(3):95, 1966.

[5] V. Balasubramanian, A. Heragu, and C. C. Enz. Analysis of ultra low-power asynchronous ADCs. In Proceedings of the IEEE International Symposium on Circuits and Systems, pages 3593–3596, IEEE, May 2010.

[6] E. Allier, G. Sicard, L. Fesquet, and M. Renaudin. A new class of asynchronous A/D converters based on time quantization. In 9th International Symposium on Asynchronous Circuits and Systems, volume 12, pages 197–205, IEEE Computer Society, 2003.

[7] D. Chen, Y. Li, D. Xu, J. G. Harris, and J. C. Principe. Asynchronous biphasic pulse signal coding and its CMOS realization. In 2006 IEEE International Symposium on Circuits and Systems, pages 2293–2296, IEEE, Island of Kos, 2006.

[8] M. Miskowicz. Send-on-delta concept: An event-based data reporting strategy. Sensors, 6(1):49–63, 2006.

[9] K. Papathanasiou, T. Brandtner, and A. Hamilton. Palmo: Pulse-based signal processing for programmable analog VLSI. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 49(6):379–389, 2002.

[10] V. Ravinuthula and J. G. Harris. Time-based arithmetic using step functions. In Proceedings of the International Symposium on Circuits and Systems, volume 1, pages I-305–I-308, May 2004.

[11] Y. Tsividis. Signal processors with transfer function coefficients determined by timing. IEEE Transactions on Circuits and Systems, 29(12):807–817, 1982.

[12] A. Murray. Pulse techniques in neural VLSI: A review. In Proceedings of the IEEE International Symposium on Circuits and Systems, volume 5, pages 2204–2207, IEEE, San Diego, CA, 2002.

[13] M. Nagata. PWM signal processing architecture for intelligent systems. Computers & Electrical Engineering, 23(6):393–405, 1997.

[14] A. Iwata and M. Nagata. A concept of analog-digital merged circuit architecture for future VLSI's. Analog Integrated Circuits and Signal Processing, 11(2):83–96, 1996.

[15] J. T. Caves, S. D. Rosenbaum, L. P. Sellars, C. H. Chan, and J. B. Terry. A PCM voice codec with on-chip filters. IEEE Journal of Solid-State Circuits, 14(1):65–73, 1979.

[16] R. Sarpeshkar and M. O'Halloran. Scalable hybrid computation with spikes. Neural Computation, 14(9):2003–2038, 2002.

[17] B. Brown and H. Card. Stochastic neural computation. I. Computational elements. IEEE Transactions on Computers, 50(9):891–905, 2001.

[18] E. Roza. Analog-to-digital conversion via duty-cycle modulation. IEEE Transactions on Analog and Digital Signal Processing, 44(11):907–914, 1997.

[19] M. Kurchuk and Y. Tsividis. Signal-dependent variable-resolution quantization for continuous-time digital signal processing. In IEEE International Symposium on Circuits and Systems, pages 1109–1112, IEEE, Taipei, May 2009.

[20] W. Tang and E. Culurciello. A pulse-based amplifier and data converter for bio-potentials. In IEEE International Symposium on Circuits and Systems, pages 337–340, IEEE, May 2009.

[21] S. G. Wilson. Digital Modulation and Coding. Prentice-Hall, Upper Saddle River, NJ, 1995.

[22] S. Park. Principles of sigma-delta modulation for analog-to-digital converters. Mot. Appl. Notes APR8, 1999.

[23] T. J. Koickal, L. C. Gouveia, and A. Hamilton. A programmable spike-timing based circuit block for reconfigurable neuromorphic computing. Neurocomputing, 72(16–18):3609–3616, 2009.

[24] R. Baker, H. Li, and D. Boyce. CMOS Circuit Design, Layout, and Simulation, 2nd edition. Wiley-IEEE Press, New York, 1997.

[25] A. A. Fayed and M. Ismail. A high speed, low voltage CMOS offset comparator. Analog Integrated Circuits and Signal Processing, 36(3):267–272, 2003.

[26] D. A. Yaklin. Offset comparator with common mode voltage stability, Patent No. US 5517134 A, September 1996.

[27] M. Bazes. Two novel fully complementary self-biased CMOS differential amplifiers. IEEE Journal of Solid-State Circuits, 26(2):165–168, 1991.

[28] J.-B. Shyu, G. C. Temes, and F. Krummenacher. Random error effects in matched MOS capacitors and current sources. IEEE Journal of Solid-State Circuits, 19(6):948–956, 1984.

[29] M. Kurchuk and Y. Tsividis. Energy-efficient asynchronous delay element with wide controllability. In Proceedings of 2010 IEEE International Symposium on Circuits and Systems, pages 3837–3840, IEEE, 2010.

[30] A. Martin. Translating concurrent programs into VLSI chips. In D. Etiemble and J.-C. Syre, Eds., PARLE '92 Parallel Architectures and Languages Europe, volume 605, pages 513–532, Springer, Berlin.

[31] D. Kościelnik and M. Miskowicz. Asynchronous sigma-delta analog-to-digital converter based on the charge pump integrator. Analog Integrated Circuits and Signal Processing, 55(3):223–238, 2007.

[32] J. J. Wikner and N. Tan. Modeling of CMOS digital-to-analog converters for telecommunication. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 46(5):489–499, 1999.

[33] H. Pan. Method and system for a glitch-free differential current steering switch circuit for high speed, high resolution digital-to-analog conversion, Patent No. US 7071858 B2, June 2005.

[34] L. C. Gouveia, T. J. Koickal, and A. Hamilton. Computation in communication: Spike event coding for programmable analog arrays. In IEEE International Symposium on Circuits and Systems, pages 857–860, IEEE, Paris, May 2010.

[35] C. Mead. Neuromorphic electronic systems. Proceedings of the IEEE, 78(10):1629–1636, 1990.

[36] P. Dudek and P. J. Hicks. A CMOS general-purpose sampled-data analog processing element. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 47(5):467–473, 2000.

[37] P. Hasler. Low-power programmable signal processing. In Fifth International Workshop on System-on-Chip for Real-Time Applications, pages 413–418, IEEE, Washington, DC, 2005.

[38] A. Stoica, R. Zebulum, D. Keymeulen, R. Tawel, T. Daud, and A. Thakoor. Reconfigurable VLSI architectures for evolvable hardware: From experimental field programmable transistor arrays to evolution-oriented chips. IEEE Transactions on Very Large Scale Integration Systems, 9(1):227–232, 2001.

[39] T. Hall and C. Twigg. Field-programmable analog arrays enable mixed-signal prototyping of embedded systems. In 48th Midwest Symposium on Circuits and Systems 2005, Cincinnati, OH, volume 1, pages 83–86, 2005.

[40] Lattice Semiconductor Corp. ispPAC Overview, 2001.

[41] T. Hall, C. Twigg, J. Gray, P. Hasler, and D. Anderson. Large-scale field-programmable analog arrays for analog signal processing. IEEE Transactions on Circuits and Systems I: Regular Papers, 52(11):2298–2307, 2005.

[42] T. Roska and L. O. Chua. The CNN universal machine: An analogic array computer. Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 40:163–173, 1993.

[43] J. Becker and Y. Manoli. A continuous-time field programmable analog array (FPAA) consisting of digitally reconfigurable Gm-cells. In Proceedings of the IEEE International Symposium on Circuits and Systems, Vancouver, BC, volume 1, pages I-1092–I-1095, IEEE, May 2004.

[44] J. Misra and I. Saha. Artificial neural networks in hardware: A survey of two decades of progress. Neurocomputing, 74(1–3):239–255, 2010.

[45] J. Liu and D. Liang. A survey of FPGA-based hardware implementation of ANNs. In Proceedings of the International Conference on Neural Networks and Brain, Beijing, China, volume 2, pages 915–918, IEEE, 2005.

[46] J. Maher, B. Ginley, P. Rocke, and F. Morgan. Intrinsic hardware evolution of neural networks in reconfigurable analogue and digital devices. In IEEE Symposium on Field-Programmable Custom Computing Machines, Napa, CA, pages 321–322, IEEE, April 2006.

[47] P. Dong, G. Bilbro, and M.-Y. Chow. Implementation of artificial neural network for real time applications using field programmable analog arrays. In Proceedings of the IEEE International Joint Conference on Neural Networks, Vancouver, BC, pages 1518–1524, IEEE, 2006.

[48] S. Bridges, M. Figueroa, D. Hsu, and C. Diorio. Field-programmable learning arrays. Neural Information Processing Systems, 15:1155–1162, 2003.

[49] W. Maass. Computing with spikes. Special Issue on Foundations of Information Processing of TELEMATIK, 8(1):32–36, 2002.

[50] C. Wolff, G. Hartmann, and U. Ruckert. ParSPIKE—a parallel DSP-accelerator for dynamic simulation of large spiking neural networks. In Proceedings of the 7th International Conference on Microelectronics for Neural, Fuzzy and Bio-Inspired Systems, Granada, Spain, pages 324–331, IEEE Computer Society, 1999.

[51] L. Maguire, T. Mcginnity, B. Glackin, A. Ghani, A. Belatreche, and J. Harkin. Challenges for large-scale implementations of spiking neural networks on FPGAs. Neurocomputing, 71(1–3):13–29, 2007.

[52] R. Guerrero-Rivera, A. Morrison, M. Diesmann, and T. Pearce. Programmable logic construction kits for hyper-real-time neuronal modeling. Neural Computation, 18(11):2651–2679, 2006.

[53] P. Rocke, B. Mcginley, J. Maher, F. Morgan, and J. Harkin. Investigating the suitability of FPAAs for evolved hardware spiking neural networks. Evolvable Systems: From Biology to Hardware, 5216:118–129, 2008.

[54] Y. Maeda, T. Hiramatsu, S. Miyoshi, and H. Hikawa. Pulse coupled oscillator with learning capability using simultaneous perturbation and its FPAA implementation. In ICROS-SICE International Joint Conference, Fukuoka, Japan, pages 3142–3145, IEEE, August 2009.
22
Digital Filtering with Nonuniformly Sampled Data:
From the Algorithm to the Implementation

Laurent Fesquet
University Grenoble Alpes, CNRS, TIMA
Grenoble, France

Brigitte Bidégaray-Fesquet
University Grenoble Alpes, CNRS, LJK
Grenoble, France

CONTENTS
22.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
22.1.1 Nonuniform Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
22.1.2 Number of Nonuniform Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
22.2 Finite Impulse Response Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
22.2.1 Digital and Analog Convolutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
22.2.2 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
22.2.2.1 Zeroth-Order Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
22.2.2.2 First-Order Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
22.2.3 Numerical Illustration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
22.3 Infinite Impulse Response Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
22.3.1 State Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
22.3.2 Finite Difference Approximations of the Differential State Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
22.3.2.1 Euler Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
22.3.2.2 Backward Euler Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
22.3.2.3 Bilinear Approximation of an IIR Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
22.3.2.4 Runge–Kutta Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
22.3.3 Quadrature of the Integral State Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
22.3.3.1 Zeroth-Order Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
22.3.3.2 Nearest Neighbor Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
22.3.3.3 Linear Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
22.3.4 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
22.3.5 Numerical Illustration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
22.4 Nonuniform Filters in the Frequency Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
22.4.1 Sampling in the Frequency Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
22.4.2 Coefficient Number Reduction: The Ideal Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
22.4.3 Algorithm and Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
22.5 Filter Implementation Based on Event-Driven Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
22.5.1 Analog-to-Digital Conversion for Event-Driven Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
22.5.2 Event-Driven Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
22.5.3 Micropipeline Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
22.5.4 Implementing the Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
22.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527


ABSTRACT This chapter targets the synthesis and the design of filters using nonuniformly sampled data for use in asynchronous systems. Data are sampled with a level-crossing technique. It presents several filtering strategies, namely, finite impulse-response (FIR) and infinite impulse-response (IIR) filters. The synthesis of filters in the frequency domain, based on a level-crossing sampling scheme of the transfer function, is also presented. Finally, an architecture for a hardware implementation of a FIR filter is given. This implementation is based on an asynchronous data-driven logic in order to benefit from the events produced by the asynchronous analog-to-digital converter.

22.1 Introduction

Today, our digital society exchanges data as never before. The amount of data is incredibly large, and the future promises that not only humans will exchange digital data but also technological equipment, robots, etc. We are close to opening the door of the Internet of Things. This data orgy wastes a lot of energy and contributes to a nonecological approach to our digital life. Indeed, the Internet and the new technologies consume about 10% of the electrical power produced in the world. Design solutions already exist to enhance the energetic performance of electronic systems and circuits, of computers and their mobile applications: a lot of techniques and also a lot of publications! Nevertheless, another way to reduce energy is to rethink the sampling techniques and digital processing chains. Considering that our digital life is dictated by the Shannon theory, we produce more digital data than expected, more than necessary. Indeed, useless data induce more computation, more storage, more communications, and also more power consumption. If we disregard the Shannon theory, we can discover new sampling and processing techniques. A small set of ideas is given through examples such as filters, pattern recognition techniques, and so on, but the Pandora's box has to be opened to drastically reduce the useless data, and mathematicians probably have a key role to play in this revolution.

Reducing the power consumption of mobile systems—such as cell phones, sensor networks, and many other electronic devices—by one to two orders of magnitude is extremely challenging, but it will be very useful to increase the system autonomy and reduce [...] chapter gives the first step toward well-suited filtering techniques for nonuniformly sampled signals [16].

Today, signal processing systems uniformly sample analog signals (at the Nyquist rate) without taking advantage of their intrinsic properties. For instance, temperature, pressure, electrocardiogram, and speech signals significantly vary only during short moments. Thus, the digitizing system part is highly constrained due to the Shannon theory, which fixes the sampling frequency at least at twice the input signal frequency bandwidth. It has been proved in [15] and [19] that analog-to-digital converters (ADCs) using a non-equi-repartition in time of samples lead to interesting power savings compared to Nyquist ADCs. A new class of ADCs called A-ADCs (for asynchronous ADCs), based on level-crossing sampling (which produces samples nonuniform in time) [2,5], and related signal processing techniques [4,18] have been developed.

This work also suggests an important change in filter design. Like analog signals, which are usually sampled uniformly in time, filter transfer functions are also usually sampled with a constant frequency step. Nonuniform sampling leads to an important reduction in the number of weight-function coefficients. Combined with a nonuniform level-crossing sampling technique performed by an A-ADC, this approach drastically reduces the computation load by minimizing the number of samples and operations, even if they are more complex.

22.1.1 Nonuniform Samples

We consider here nonuniform samples that are obtained through a level-crossing sampling technique. With no a priori knowledge of the input signal, we can simply define equispaced levels in the range of the input signal. For specific applications, other distributions of levels can be defined. Contrary to the uniform world, where the sampled signal is defined by a sampling frequency and amplitudes, here each sample is the couple of an amplitude xj and a time duration δtj, which is the time elapsed since the previous sample was taken (see Figure 22.1). This time delay can be computed with a local clock, and no global clock is needed for this purpose.

22.1.2 Number of Nonuniform Samples

Nonuniform sampling is especially efficient for sporadic signals. We, therefore, use here an electrocardiogram (ECG) record displayed in Figure 22.2. Its time duration
the equipment size and weight. In order to reach such is 14.2735 s. It has been recorded using 28,548 regular
a goal, the signal processing theory and the associ- samples with 2000 Hz uniform sampling and displays
ated system architectures have to be rethought. This 22 heartbeats.
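The level-crossing scheme of Section 22.1.1 can be illustrated with a short sketch (Python is used here for illustration only; the function and parameter names are ours, and the uniformly oversampled array simply stands in for the analog input):

```python
def level_crossing_sample(x, dt, levels):
    """Level-crossing sampling of a densely sampled waveform.

    x      : uniformly sampled waveform (fine grid, stands in for the analog signal)
    dt     : sampling period of that fine grid
    levels : sorted amplitude thresholds
    Returns (amplitude, delay) pairs: the crossed level and the time elapsed
    since the previous nonuniform sample, as in Figure 22.1.
    """
    samples = []
    t_prev = 0.0
    for i in range(1, len(x)):
        x0, x1 = x[i - 1], x[i]
        if x1 == x0:
            continue
        lo, hi = min(x0, x1), max(x0, x1)
        crossed = [lv for lv in levels if lo < lv <= hi]
        if x1 < x0:                      # falling edge: crossings occur in reverse order
            crossed.reverse()
        for lv in crossed:
            # linear interpolation of the crossing instant inside the segment
            t_cross = (i - 1) * dt + dt * (lv - x0) / (x1 - x0)
            samples.append((lv, t_cross - t_prev))
            t_prev = t_cross
    return samples
```

For instance, a unit ramp crossing the three levels 0.25, 0.5, and 0.75 yields three samples with equal delays of 0.25, whereas a flat segment produces no sample at all, which is the source of the compression discussed next.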
Digital Filtering with Nonuniformly Sampled Data 517

Clearly all these samples are not useful for many applications. For example, to detect the main features of the signal we can use only eight levels (i.e., a 3-bit storage of amplitudes). The result is displayed in Figure 22.3, where only 1377 samples are kept to describe the signal. This number, and the induced signal compression, strongly depends on the statistical features of the signal [6], and the position and number of levels. For the same signal and 4- and 5-bit storage of amplitudes, Table 22.1 displays the number of samples and statistical data about the delays.

FIGURE 22.1: Nonuniform samples via level-crossing sampling.

FIGURE 22.2: Original ECG signal composed of 28,548 regular samples.

FIGURE 22.3: A 3-bit level-crossing sampling of the ECG signal.

22.2 Finite Impulse Response Filtering

Classical FIR filters are strongly based on the regular nature of both the input signal and the impulse response. However, a continuous time analog can be given that can be generalized to nonuniform input data.

22.2.1 Digital and Analog Convolutions

In the classical context, an Nth-order (causal) FIR filter computes the filtered signal y_n from the N + 1 previous input samples x_{n-i}, i = 0, ..., N:

\[ y_n = \sum_{j=0}^{N} h_j x_{n-j}. \tag{22.1} \]

The filter is defined as its response to a sequence of discrete impulses. For an input signal \( x(t) = \sum_{j=0}^{n} x_j \,\delta(t-j) \), the output at time n is y_n. Equation 22.1 can also be cast as

\[ y(n) = \int_{-\infty}^{+\infty} h(\tau)\, x(n-\tau)\, d\tau, \tag{22.2} \]

and y is the convolution of the signal x with the impulse response \( h(t) = \sum_{k=0}^{N} h_k \,\delta(t-k) \). A real-life signal is analog and not a series of discrete impulses, but Equation 22.2 can be extended to the continuous context:

\[ y(t) = \int_{-\infty}^{+\infty} h(\tau)\, x(t-\tau)\, d\tau = \int_{-\infty}^{+\infty} h(t-\tau)\, x(\tau)\, d\tau. \tag{22.3} \]

A finite number of coefficients h_k is then mirrored by a finite support function h.

In the nonuniform context, a continuous signal is only known at times t_j. The continuous convolution is approximated with an algorithm that only makes use of these samples.
TABLE 22.1
Statistics of Uniform and Level-Crossing Samples

          Number of Samples   Min(δt_j)   Max(δt_j)   Average   Median
Regular   28,548              0.5         0.5         0.5       0.5
3-bit     1377                0.0028      350.2       10.4      1.7
4-bit     2414                0.0034      132.1       5.9       1.5
5-bit     5081                0.0028      63.2        2.8       1.1

Note: Delays are given in milliseconds.
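For reference, the classical uniform convolution (22.1), which serves as the baseline against which the nonuniform algorithms are compared, reads as follows in a direct, purely illustrative implementation:

```python
def fir_classical(x, h):
    """Causal FIR filter of Equation 22.1: y_n = sum_j h_j * x_{n-j}."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for j, hj in enumerate(h):
            if n - j >= 0:          # samples before t_0 are taken as absent
                acc += hj * x[n - j]
        y.append(acc)
    return y
```

A two-tap moving average h = [0.5, 0.5] applied to a constant input [1, 1, 1] returns [0.5, 1.0, 1.0], the transient then the steady-state value. Note that this formula touches every one of the uniform samples, which is precisely the cost the nonuniform approach avoids.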
518 Event-Based Control and Signal Processing

22.2.2 Algorithms

The signal can be approximated interpolating its nonuniform samples. Since we intend to implement the obtained algorithms on asynchronous systems, we want to keep them as simple as possible. We only describe here zeroth- and first-order interpolation, but the same derivation could be applied to higher-order interpolation of the samples.

22.2.2.1 Zeroth-Order Interpolation

The signal x and the impulse response h are only known from their samples (x_j, δt_j) and (h_k, δτ_k). We use classical filters, and the δτ_k are all equal, but this specific feature is not taken into account to describe the algorithm, which confers symmetric roles to x and h. Zeroth-order interpolation consists in approximating x and h by piecewise constant functions:

\[ x(t) \simeq \sum_{j=0}^{n} x_j \,\chi_{[t_{j-1},t_j]} \quad\text{and}\quad h(t) \simeq \sum_{k=0}^{N} h_k \,\chi_{[\tau_k,\tau_{k+1}]}, \tag{22.4} \]

where χ_I is the characteristic function of interval I. We have not chosen the same "side" for the zeroth-order interpolation of both signals. This is done on purpose, in order to have the same side once the signal x is reversed in the convolution algorithm:

\[
y_n = \int_{-\infty}^{+\infty} \sum_{j=0}^{n} x_j \,\chi_{[t_{j-1},t_j]}(t_n - t) \sum_{k=0}^{N} h_k \,\chi_{[\tau_k,\tau_{k+1}]}(t)\, dt
    = \sum_{j=0}^{n} x_j \sum_{k=0}^{N} h_k \int_{-\infty}^{+\infty} \chi_{[t_{j-1},t_j]}(t_n - t)\,\chi_{[\tau_k,\tau_{k+1}]}(t)\, dt
    = \sum_{j=0}^{n} x_j \sum_{k=0}^{N} h_k \,\operatorname{length}\big([t_n - t_j,\, t_n - t_{j-1}] \cap [\tau_k, \tau_{k+1}]\big).
\]

This theoretical formula would lead to the thought that n(N + 1) elementary contributions are needed to compute the convolution. This number can be reduced to at most n + N + 1 contributions, which correspond to the maximum number of nonempty intersections [t_n − t_j, t_n − t_{j−1}] ∩ [τ_k, τ_{k+1}]. This value is obtained interpolating x at times τ_k and h at times t_n − t_j. Once this is done, both signals are known at the same times and both are constant on each interval. The contribution to the convolution is the product of both amplitudes by the length of the subinterval.

REMARK 22.1 If the samples for x (and h) are uniform, we recover the classical FIR filtering formula: [t_n − t_j, t_n − t_{j−1}] = [n − j, n − j + 1] and only intersects [k, k + 1] on a nontrivial interval if j = n − k. Then

\[
\sum_{j=0}^{n} x_j \sum_{k=0}^{N} h_k \,\operatorname{length}\big([n-j,\, n-j+1] \cap [k, k+1]\big)
    = \sum_{j=0}^{n} x_j \sum_{k=0}^{N} h_k \,\delta_{j=n-k}
    = \sum_{j=0}^{N} x_{n-j}\, h_j.
\]

The computation of the interval intersection lengths would use the knowledge of the times τ_k and t_n − t_j, but only the delays δτ_k and δt_j are available. The algorithm introduced by Aeschlimann [3] is therefore slightly different. The subintervals are computed iteratively, starting from k = 0 and j = 0, comparing the length of each sample, and subdividing the longer sample.

Algorithm 22.1
while k ≤ N + 1 and j ≤ n
    compare δt_{n−j} and δτ_k and choose the smallest: Δ
    add Δ x_{n−j} h_k to the convolution
    go to the next sample for the finished interval(s) and subdivide the other, that is,
        if Δ = δt_{n−j}, j ← j + 1, else δt_{n−j} ← δt_{n−j} − Δ
        if Δ = δτ_k, k ← k + 1, else δτ_k ← δτ_k − Δ

This subdivision is done keeping the same amplitude since zeroth-order interpolation is involved. The algorithm should be performed on copies of the delay sequences, in order to be able to do the computation for the successive values of n.

22.2.2.2 First-Order Interpolation

The principle is the same as for zeroth-order interpolation. Now x and h are approximated by piecewise linear functions. The support of each piece is the same as for zeroth-order interpolation, and we have to compute at most n + N + 1 elementary contributions. On each subinterval, the signals are linear. Their product is therefore a second-order polynomial. The Simpson quadrature formula is exact for polynomials up to degree 3; it is therefore exact in this context. On an interval of length Δ with end (left and right) values x^ℓ and x^r for x and h^ℓ and h^r for h, the elementary contribution is

\[
\frac{\Delta}{6}\left( x^{\ell} h^{\ell} + x^{r} h^{r} + 4\, \frac{x^{\ell} + x^{r}}{2}\, \frac{h^{\ell} + h^{r}}{2} \right)
    = \frac{\Delta}{6}\left( x^{\ell} h^{\ell} + x^{r} h^{r} + (x^{\ell} + x^{r})(h^{\ell} + h^{r}) \right).
\]
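Algorithm 22.1 can be transcribed in a few lines (an illustrative sketch; the names are ours, and the delay sequences are copied as recommended above). With unit delays on both signals it reduces to the classical FIR sum of Remark 22.1:

```python
def convolve_nonuniform_zoh(x_rev, dtx_rev, h, dtau):
    """Algorithm 22.1: zeroth-order nonuniform convolution (one output y_n).

    x_rev, dtx_rev : reversed input window, x_rev[j] = x_{n-j}, dtx_rev[j] = dt_{n-j}
    h, dtau        : impulse-response samples h_k and their delays dtau_k
    """
    dtx = list(dtx_rev)                   # work on copies of the delay sequences
    dth = list(dtau)
    j = k = 0
    acc = 0.0
    while j < len(x_rev) and k < len(h):
        delta = min(dtx[j], dth[k])       # smallest remaining subinterval
        acc += delta * x_rev[j] * h[k]    # elementary contribution
        dtx[j] -= delta                   # subdivide the longer sample...
        dth[k] -= delta
        if dtx[j] == 0.0:                 # ...and advance the finished one(s)
            j += 1
        if dth[k] == 0.0:
            k += 1
    return acc
```

With x_rev = [2, 3, 1], h = [0.5, 0.25, 0.25], and all delays equal to 1, the result is 2·0.5 + 3·0.25 + 1·0.25 = 2, the classical sum (22.1).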

The differences with the zeroth-order algorithm are printed in bold font in Algorithm 22.2.

Algorithm 22.2
while k ≤ N + 1 and j ≤ n
    compare δt_{n−j} and δτ_k and choose the smallest: Δ
    add new contribution to the convolution with Simpson rule
    go to the next sample for the finished interval(s) and subdivide and interpolate the amplitude for the other, that is,
        if Δ = δt_{n−j}, j ← j + 1, else δt_{n−j} ← δt_{n−j} − Δ and compute interpolated value for x_{n−j}
        if Δ = δτ_k, k ← k + 1, else δτ_k ← δτ_k − Δ and compute interpolated value for h_k

22.2.3 Numerical Illustration

We compare the filtering of the regular and the nonuniformly sampled ECG signal within the SPASS Toolbox [8], which is developed in MATLAB. To be fair, we do not use the native filtering procedure in MATLAB, which is precompiled and cannot have an equivalent in the targeted architectures.

We choose a 10th-order FIR low-pass filter (MATLAB function fir1) with cutoff frequency 100 Hz. The regular case is the simple sum (22.1) but has to deal with the 28,548 samples and also computes 28,548 filtered samples. For the nonuniform test case, we choose the 3-bit sampled data, with 1377 samples, and also compute 1377 output samples. The input and filtered signals are displayed in Figure 22.4.

In the case of zeroth-order interpolated data (which is closer to (22.1)), the gain (compared to the regular case) is a factor of 30 in CPU time. Although the treatment of each sample is more complex, a significant gain is obtained due to the lower number of samples. It is therefore more efficient to treat directly the nonuniform samples given by the event-driven architectures. If linear interpolation is used, there is still some gain, of about seven in our example.

22.3 Infinite Impulse Response Filtering

There are various ways to describe IIR filters, but the state representation is the one that can be the most naturally adapted to nonuniform samples.
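The Simpson-rule elementary contribution used in Algorithm 22.2 can be checked against the exact integral of the product of two linear pieces (an illustrative sketch; the function names are ours):

```python
def simpson_contribution(delta, xl, xr, hl, hr):
    """Elementary contribution of Algorithm 22.2 on a subinterval of length delta,
    with left/right interpolated amplitudes (xl, xr) and (hl, hr)."""
    return delta / 6.0 * (xl * hl + xr * hr + (xl + xr) * (hl + hr))

def exact_linear_product_integral(delta, xl, xr, hl, hr):
    """Exact integral over [0, delta] of the product of the two linear interpolants,
    obtained by expanding the second-order polynomial and integrating term by term."""
    c0 = xl * hl                           # constant term
    c1 = xl * (hr - hl) + hl * (xr - xl)   # coefficient of t/delta
    c2 = (xr - xl) * (hr - hl)             # coefficient of (t/delta)^2
    return delta * (c0 + c1 / 2.0 + c2 / 3.0)
```

Since the integrand is a degree-2 polynomial and Simpson's rule is exact up to degree 3, the two functions agree to rounding error, which is why a single evaluation per subinterval suffices.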

FIGURE 22.4: Regular and nonuniform FIR filtering (initial, regular, and nonuniform signals versus time).
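The state representation mentioned above can be sketched in companion form (a minimal N = 2 example in Python; the per-coefficient definitions of A, B, C, and D appear in Section 22.3.1, and the explicit inversion below is only valid for this 2 × 2 case):

```python
def companion_state_space(alpha, beta):
    """Companion-form (A, B, C, D) for h(p) = sum(alpha_k p^k) / sum(beta_k p^k),
    assuming the normalization beta_N = 1."""
    N = len(beta) - 1
    A = [[0.0] * N for _ in range(N)]
    for i in range(N - 1):
        A[i][i + 1] = 1.0                 # shift structure s_k -> s_{k+1}
    A[N - 1] = [-beta[k] for k in range(N)]
    B = [0.0] * (N - 1) + [1.0]
    C = [alpha[k] - alpha[N] * beta[k] for k in range(N)]
    D = alpha[N]
    return A, B, C, D

def transfer_from_state(A, B, C, D, p):
    """C (pI - A)^{-1} B + D, written out for the 2 x 2 case only."""
    a, b = p - A[0][0], -A[0][1]
    c, d = -A[1][0], p - A[1][1]
    det = a * d - b * c
    v0 = (d * B[0] - b * B[1]) / det      # (pI - A)^{-1} B, first component
    v1 = (-c * B[0] + a * B[1]) / det     # second component
    return C[0] * v0 + C[1] * v1 + D
```

Evaluating transfer_from_state at any complex p reproduces the rational transfer function, which is a quick way to check a hand-built companion realization.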

22.3.1 State Representation

In the uniform world, an IIR filter is given by its transfer function in the Laplace variable:

\[ \hat h(p) = \frac{\displaystyle\sum_{k=0}^{N} \alpha_k p^k}{\displaystyle\sum_{k=0}^{N} \beta_k p^k}. \tag{22.5} \]

Then, the filtered signal ŷ(p) is computed from the input signal x̂(p) by the relation ŷ(p) = ĥ(p) x̂(p).

This definition is difficult to apply in the nonuniform context. Therefore, we use an equivalent formulation in the time domain, which is given by the state equation. We define the state vector as Ŝ(p) = (ŝ_0(p), ..., ŝ_{N−1}(p))^t with coordinates

\[ \hat s_k(p) = \frac{p^k \hat x(p)}{\displaystyle\sum_{j=0}^{N} \beta_j p^j} \quad\text{for } k = 0, \dots, N-1 \]

(this definition is extended to k = N). The Laplace transform of the filtered signal y can be written in terms of Ŝ(p), namely,

\[ \hat y(p) = \sum_{k=0}^{N} \alpha_k \hat s_k(p). \]

The coordinates ŝ_k may be computed step by step, thanks to the formula ŝ_k(p) = p ŝ_{k−1}(p) for k = 1, ..., N − 1. Besides, the definition of ŝ_0(p) may also be written \( \sum_{k=0}^{N} \beta_k p^k \hat s_0(p) = \hat x(p) \), which if β_N = 1 (normalization condition) yields

\[ \hat s_N(p) = -\sum_{k=0}^{N-1} \beta_k \hat s_k(p) + \hat x(p). \]

This finally leads to

\[ \hat y(p) = \alpha_N \hat s_N(p) + \sum_{k=0}^{N-1} \alpha_k \hat s_k(p) = \sum_{k=0}^{N-1} (\alpha_k - \alpha_N \beta_k)\, \hat s_k(p) + \alpha_N \hat x(p). \]

These results can be cast in a system of N equations, called state equations, in the time domain. Namely,

\[ \frac{dS(t)}{dt} = A S(t) + B x(t), \qquad y(t) = C S(t) + D x(t), \tag{22.6} \]

where

\[ A = \begin{pmatrix}
0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & 0 \\
0 & 0 & \cdots & 0 & 1 \\
-\beta_0 & -\beta_1 & \cdots & -\beta_{N-2} & -\beta_{N-1}
\end{pmatrix} \]

is the N × N state matrix, B = (0 ··· 0 1)^t is the command vector, C = (α_0 − α_N β_0 ··· α_{N−1} − α_N β_{N−1}) is the observation vector, and D = α_N is the direct link coefficient.

The integral form of the state equation is given by

\[ S(t) = e^{At} S(0) + \int_{t_0}^{t} e^{A(t-\tau)} B x(\tau)\, d\tau. \tag{22.7} \]

Algorithms follow from the discretization of either form, differential (22.6) or integral (22.7). The last equation will always be simply given by

\[ y_n = C S_n + D x_n. \]

22.3.2 Finite Difference Approximations of the Differential State Equation

22.3.2.1 Euler Approximation

The Euler method consists in writing the state equation (22.6) at time t_{n−1} and using an upward approximation for the time derivative, namely,

\[ \frac{S_n - S_{n-1}}{\delta t_n} = A S_{n-1} + B x_{n-1}, \]

which also reads S_n = (I + δt_n A) S_{n−1} + B δt_n x_{n−1}.

22.3.2.2 Backward Euler Approximation

For the backward Euler approximation, the state equation (22.6) is discretized at time t_n:

\[ \frac{S_n - S_{n-1}}{\delta t_n} = A S_n + B x_n, \]

that is, S_n = (I − δt_n A)^{−1} (S_{n−1} + B δt_n x_n).

22.3.2.3 Bilinear Approximation of an IIR Filter

Poulton and Oksman [17] have chosen a bilinear method to approximate the time derivative in the state equation. This method consists in writing a centered

approximation of Equation 22.6 at time \( t_{n-1/2} = \frac{1}{2}(t_n + t_{n-1}) \), that is,

\[ \frac{S_n - S_{n-1}}{\delta t_n} = A\, \frac{S_n + S_{n-1}}{2} + B\, \frac{x_n + x_{n-1}}{2}, \]

or

\[ S_n = \left(I - \frac{\delta t_n}{2} A\right)^{-1} \left[ \left(I + \frac{\delta t_n}{2} A\right) S_{n-1} + \frac{\delta t_n}{2}\, B\, (x_n + x_{n-1}) \right]. \]

22.3.2.4 Runge–Kutta Schemes

We have implemented the classical fourth-order Runge–Kutta scheme (RK4), which for the state equation (22.6) and a linear approximation of x(t_{n−1/2}) reads

\[
S_n = \left(I + \delta t_n A + \tfrac{1}{2}\delta t_n^2 A^2 + \tfrac{1}{6}\delta t_n^3 A^3 + \tfrac{1}{24}\delta t_n^4 A^4\right) S_{n-1}
    + \delta t_n \left(\tfrac{1}{2} I + \tfrac{1}{3}\delta t_n A + \tfrac{1}{8}\delta t_n^2 A^2 + \tfrac{1}{24}\delta t_n^3 A^3\right) B x_{n-1}
    + \delta t_n \left(\tfrac{1}{2} I + \tfrac{1}{6}\delta t_n A + \tfrac{1}{24}\delta t_n^2 A^2\right) B x_n.
\]

We may notice that the iteration matrix that operates on S_{n−1} is the fourth-order Taylor expansion of exp(δt_n A), which would be the exact operator.

We have also tested a third-order two-stage Runge–Kutta semi-implicit method (RK23). For the state equation (22.6), it reads

\[
S_n = (I - \delta t_n A)^{-1} \left(I - \tfrac{2}{3}\delta t_n A\right)^{-1}
      \left[ \left(I - \tfrac{2}{3}\delta t_n A - \tfrac{1}{2}\delta t_n^2 A^2\right) S_{n-1}
      + \tfrac{1}{2}\delta t_n \left(I - \tfrac{1}{2}\delta t_n A\right) B x_{n-1}
      + \tfrac{1}{2}\delta t_n \left(I - \tfrac{3}{2}\delta t_n A\right) B x_n \right].
\]

22.3.3 Quadrature of the Integral State Equation

In [13], Fontaine and Ragot choose to discretize the integral form of the state equation (22.7) directly. The integral equation is written between two asynchronous times:

\[ S(t_n) = e^{A\delta t_n} S(t_{n-1}) + \int_{t_{n-1}}^{t_n} e^{A(t_n-\tau)} B x(\tau)\, d\tau. \]

The different algorithms consist in approximating the continuous signal x(t) by the interpolation of order 0 or 1 of the asynchronous samples.

22.3.3.1 Zeroth-Order Interpolation

In the zeroth-order interpolation, x(t) = x_{n−1} for t ∈ ]t_{n−1}, t_n[. Then,

\[
S_n = e^{A\delta t_n} S_{n-1} + \int_{t_{n-1}}^{t_n} e^{A(t_n-\tau)} B x_{n-1}\, d\tau
    = e^{A\delta t_n} S_{n-1} - A^{-1}\big(I - e^{A\delta t_n}\big) B x_{n-1}.
\]

If we choose x(t) = x_n for t ∈ ]t_{n−1}, t_n[, we of course have

\[ S_n = e^{A\delta t_n} S_{n-1} - A^{-1}\big(I - e^{A\delta t_n}\big) B x_n. \]

22.3.3.2 Nearest Neighbor Interpolation

We may center time intervals in a different way and suppose that x(t) = x_{n−1} for t ∈ ]t_{n−1}, t_{n−1/2}[ and x(t) = x_n for t ∈ ]t_{n−1/2}, t_n[. Then,

\[
S_n = e^{A\delta t_n} S_{n-1} + \int_{t_{n-1}}^{t_{n-1/2}} e^{A(t_n-\tau)} B x_{n-1}\, d\tau + \int_{t_{n-1/2}}^{t_n} e^{A(t_n-\tau)} B x_n\, d\tau
    = e^{A\delta t_n} S_{n-1} - A^{-1}\big(e^{A\delta t_n/2} - e^{A\delta t_n}\big) B x_{n-1} - A^{-1}\big(I - e^{A\delta t_n/2}\big) B x_n.
\]

22.3.3.3 Linear Interpolation

If x(t) is approximated by a piecewise linear function \( x(t) = x_{n-1} + \frac{t - t_{n-1}}{\delta t_n}(x_n - x_{n-1}) \) for t ∈ ]t_{n−1}, t_n[, the same type of computation leads to

\[
S_n = e^{A\delta t_n} S_{n-1}
    + A^{-1}\left(e^{A\delta t_n} + \frac{1}{\delta t_n} A^{-1}\big(I - e^{A\delta t_n}\big)\right) B x_{n-1}
    - A^{-1}\left(I + \frac{1}{\delta t_n} A^{-1}\big(I - e^{A\delta t_n}\big)\right) B x_n.
\]

The comparison of these various schemes in terms of stability and complexity has been performed in [12]. It shows that the Euler scheme has to be rejected for being unstable, and the implicit Euler scheme for being in a sense too stable (i.e., too dissipative). In the case of Runge–Kutta schemes, extra redundant samples can be taken to avoid too large inactive parts in the input signal and ensure stability.

22.3.4 Algorithm

In practice, the formulae cannot be easily implemented as they are given above because they need the computation of inverse matrices or exponentials of matrices. In order to implement an Nth-order filter, it is usual to decompose it in multiple first- and second-order filters that are easily implemented with classical electrical

structures. It consists in writing Equation 22.5 as a product of similar rational fractions but with N = 1 or 2. It is then equivalent to apply successively the induced first- and second-order filters, or the Nth-order filter. It is equivalent from the theoretical point of view, but the succession of first- and second-order filters is easier to implement on an asynchronous architecture, because we can easily give explicit values for the inverse or exponential matrices.

22.3.5 Numerical Illustration

The test cases in MATLAB are performed in the same framework and with the same algorithmic choices as for FIR filters. We use here a 10th-order Butterworth filter and the linearly interpolated quadrature formula to obtain the results displayed in Figure 22.5. There is a gain of 20 in CPU time using nonuniform samples compared to regular samples. The RK4 is not stable with the 3-bit samples, illustrating the stability result of [12]. The minimum number of samples depends both on the filter (here a Butterworth one) and on the numerical scheme.

FIGURE 22.5: Regular and nonuniform IIR filtering (initial, regular, and nonuniform signals versus time).

22.4 Nonuniform Filters in the Frequency Domain

We now want to go further from classical filtering and take advantage of the fact that filters are naturally defined in the frequency domain and should be easier to describe in this domain than in the time domain. We come back to the continuous convolution formula (22.3) and express the impulse response h as the inverse Fourier transform of the analog filter transfer function H:

\[ h(t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} H(\omega)\, e^{i\omega t}\, d\omega. \tag{22.8} \]

As for the previous FIR and IIR filters, we suppose that the input signal is interpolated (at most) linearly from its nonuniform samples:

\[ x(t) \sim \sum_{j=1}^{n} (a_j + b_j t)\, \chi_{[t_{j-1},t_j]}. \tag{22.9} \]

22.4.1 Sampling in the Frequency Domain

The transfer function is also sampled nonuniformly and interpolated linearly, with the extra feature that the transfer function is complex valued. The best way to do it is to interpolate separately the amplitude and the phase (see [9], where it is compared to interpolation in the complex plane). In practice, we define the filter for positive values of the frequency and use level-crossing sampling to define the sampling frequencies. This yields the filter samples (ω_k, H_k). Then, H is interpolated. For

a low-pass filter,

\[
H(\omega) \sim \sum_{k=1}^{K} (\rho_k^0 + \rho_k^1 \omega)\, e^{i(\theta_k^0 + \theta_k^1 \omega)}\, \chi_{J_k^+}
    + (\rho_k^0 - \rho_k^1 \omega)\, e^{-i(\theta_k^0 - \theta_k^1 \omega)}\, \chi_{J_k^-}. \tag{22.10}
\]

Here, J_k^+ = [ω_{k−1}, ω_k] (ω_0 = 0), J_k^− = [−ω_k, −ω_{k−1}], and

\[
\rho_k^1 = \frac{|H_k| - |H_{k-1}|}{\omega_k - \omega_{k-1}}, \qquad \rho_k^0 = |H_k| - \rho_k^1 \omega_k,
\]
\[
\theta_k^1 = \frac{\arg(H_k) - \arg(H_{k-1})}{\omega_k - \omega_{k-1}}, \qquad \theta_k^0 = \arg(H_k) - \theta_k^1 \omega_k.
\]

The filtering algorithm is then obtained by "simply" computing the integral (22.3) using (22.8) and the interpolated formulae (22.9) and (22.10). The result can always be cast as

\[ y(t) = \sum_{j=0}^{n} x_j \sum_{k=1}^{K} h_{jk}(t). \tag{22.11} \]

The main drawback is that the formulae for h_{jk}(t) are quite complex and involve special functions such as trigonometric functions and integral trigonometric functions. The detailed formulae can be found in [9] (see also [11] for interpolation in the log-scale). Each single evaluation is therefore very costly and impossible to implement on the asynchronous systems we target as main applications. In addition, the computational cost has a priori an nK complexity (times the computational cost of the single evaluation of an h_{jk}(t)).

22.4.2 Coefficient Number Reduction: The Ideal Filter

To reduce the cost of the previously described algorithm, we first recall that we address nonuniform samples that have been specifically chosen to reduce their number for the targeted application. Therefore, n is low compared to regular sampling (by one or two decades in our applications). The next direction is to reduce K. It can be reduced to K = 1 and yield the equivalent of an infinite-order scheme. Indeed, the perfect low-pass filter

\[ H_0(\omega) = \begin{cases} 1 & \text{if } 0 \le \omega \le \omega_c, \\ 0 & \text{if } \omega > \omega_c, \end{cases} \tag{22.12} \]

would need an infinite number of filter coefficients in the time domain. With our notations, we have K = 1 and H_1 = 1. In this case, and for zeroth-order interpolation for the signal, the algorithm is quite simple [7]:

\[ y(t) = \sum_{j=1}^{N} x_j \big(C_j(t) - C_{j-1}(t)\big), \]

where

\[ C_j = \frac{1}{\pi}\, \mathrm{Si}\big(\omega_c (t - t_j)\big), \]

and Si is the integral sine function defined by \( \mathrm{Si}(x) = \int_0^x \sin(y)\, \frac{dy}{y} \). This has the disadvantage of being noncausal (i.e., of using all the signal samples to compute y(t), even future samples). To avoid this, we can use the associated causal filter derived using the Hilbert transform of the filter transfer function:

\[ y(t) = \sum_{j=1}^{N_t} x_j \big(C_j^c(t) - C_{j-1}^c(t)\big), \tag{22.13} \]

where

\[ C_j^c = \frac{1}{\pi}\left[ \mathrm{Si}\big(\omega_c (t - t_j)\big) + i\, \mathrm{sgn}(t - t_j)\, \mathrm{Cin}\big(\omega_c (t - t_j)\big) \right], \]

and Cin is the integral cosine function defined by \( \mathrm{Cin}(x) = \int_0^{|x|} (1 - \cos(y))\, \frac{dy}{y} \). The details are given in [10]. In Equation 22.13, N_t is the index of the last sample at time t.

22.4.3 Algorithm and Results

The special functions Si and Cin would be very difficult to implement efficiently on asynchronous hardware. Even in MATLAB their evaluation is quite slow. The best way to implement the algorithm is to create a look-up table from a previous knowledge of the range of the expressions ω_c(t − t_j). Then, values of the special functions are interpolated from the look-up table. If the table takes into account the specific features (almost linear part, oscillating parts) of the special functions involved, this gives very good qualitative results with an important speed-up, at the cost of a preprocessing procedure to construct the table.

The filtering result, displayed in Figure 22.6, is again similar to the previous results (e.g., FIR filtering), which validates this approach. The cost is, of course, higher than zeroth-order FIR filtering, but it is still competitive. However, for certain applications with no a priori knowledge of the delays, it can be quite tricky to construct the look-up table correctly.

22.5 Filter Implementation Based on Event-Driven Logic

The implementation of filters with event-driven logic is a good way to fully exploit the low-power potential

FIGURE 22.6: Filtering with the ideal filter (initial and nonuniform signals versus time).
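The look-up-table strategy of Section 22.4.3 can be sketched for Si as follows (an illustrative Python version; the grid range and size are arbitrary choices of ours, not the chapter's, and the direct evaluation uses a composite Simpson quadrature):

```python
import math

def si_direct(x, steps=2000):
    """Si(x) = integral of sin(y)/y over [0, x], by composite Simpson quadrature.
    `steps` must be even; Si is odd, so the sign is restored at the end."""
    if x == 0.0:
        return 0.0
    a = abs(x)
    hstep = a / steps
    def f(y):
        return 1.0 if y == 0.0 else math.sin(y) / y
    s = f(0.0) + f(a)
    for i in range(1, steps):
        s += (4.0 if i % 2 else 2.0) * f(i * hstep)
    return math.copysign(s * hstep / 3.0, x)

def make_si_table(xmax, n):
    """Preprocessing step: precompute Si on a regular grid of n + 1 points."""
    step = xmax / n
    return step, [si_direct(i * step) for i in range(n + 1)]

def si_lookup(x, step, table):
    """Linear interpolation in the table; arguments beyond the table are clamped."""
    ax = abs(x)
    i = int(ax / step)
    if i >= len(table) - 1:
        return math.copysign(table[-1], x)
    frac = ax / step - i
    return math.copysign(table[i] + frac * (table[i + 1] - table[i]), x)
```

After the one-off table construction, each evaluation costs one index computation and one linear interpolation, which is the kind of speed-up the chapter reports, at the price of knowing the range of ω_c(t − t_j) in advance.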

FIGURE 22.7: Architecture of an A-ADC: a difference quantifier, an up/down counter, a DAC feeding back V_ref(t), and a timer delivering δt_j, linked by request/acknowledgment signals. The difference quantifier sets up = 1 if V_x(t) − V_ref(t) > q/2, down = 1 if V_x(t) − V_ref(t) < −q/2, and up = down = 0 otherwise.
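A behavioral model of this tracking loop can be sketched as follows (illustrative only; it assumes the input moves by less than one quantum q per simulation step, and the fine simulation grid stands in for the analog input and the timer):

```python
def a_adc(v, dt, q, vref=0.0):
    """Behavioral model of the tracking-loop A-ADC of Figure 22.7.

    v    : densely sampled input voltage (stands in for the analog V_x)
    dt   : simulation time step (stands in for the timer resolution)
    q    : quantum between two successive levels
    vref : initial value of the tracking reference
    Emits (amplitude, delay) pairs, one loop cycle per converted sample.
    """
    out = []
    elapsed = 0.0
    for vx in v:
        elapsed += dt
        diff = vx - vref
        if diff > q / 2.0:        # 'up' decision from the difference quantifier
            vref += q
        elif diff < -q / 2.0:     # 'down' decision
            vref -= q
        else:
            continue              # no threshold crossed: no event, no activity
        out.append((vref, elapsed))
        elapsed = 0.0             # the timer restarts after each sample
    return out
```

Note how the loop stays completely idle while the input remains inside the current ±q/2 band, which is the behavioral counterpart of the clockless, event-driven operation described above.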

of a digital filter using nonuniformly sampled data. Indeed, we are not only reducing the number of samples, but we are also stopping the computation activity when no data are fed into the filter. This approach offers a good opportunity to spare energy and is particularly well suited to the nonuniform sampling schemes. Moreover, this design strategy is general enough to be applied to the different kinds of digital filters presented in the previous sections.

22.5.1 Analog-to-Digital Conversion for Event-Driven Technology

Many hardware strategies can be used to implement the level-crossing sampling scheme described in the previous paragraphs. One approach is a tracking loop enslaved on the analog signal to convert. For the crossing detection, the instantaneous comparison of the input analog signal V_x is restricted to the two nearest levels: the one just above and the other just below. Every time a level is crossed, the two comparison levels are shifted down or up. The conversion loop, shown in Figure 22.7, is composed of four blocks: a difference quantifier, a state variable (an up/down counter and a look-up table if the thresholds are not equally spaced), a digital-to-analog converter (DAC), and a timer delivering the time intervals δt_j between samples. The V_ref signal is changed each time a threshold is crossed, and two new thresholds are defined accordingly. An example of a possible algorithm for the difference quantifier is given in Figure 22.7. This choice is very interesting for a minimization of the hardware. Moreover, only one cycle of the loop is needed to convert a sample. Notice that no external signal such as a clock is used to synchronize and trigger the conversion of samples. Moreover, an asynchronous structure has been chosen for the circuit. The information transfer between each block is locally managed with a bidirectional control signaling: a request and an acknowledgment, represented by the dashed lines. This explains the name of asynchronous ADC. This block is able to simultaneously provide the magnitude of V_x and the time elapsed between the last two samples. This architecture has been implemented with

many FPGAs and COTS (commercial off-the-shelf) circuits but also in 130 nm CMOS technology from STMicroelectronics [1].

22.5.2 Event-Driven Logic

Unlike synchronous logic, where the synchronization is based on a global clock signal, event-driven or asynchronous logic does not need a clock to maintain the synchronization between its sub-blocks. It is considered as a data-driven logic where computation occurs only when new data arrive. Each part of an asynchronous circuit establishes a communication protocol with its neighbors in order to exchange data with them. This kind of communication protocol is known as a "handshake" protocol. It is a bidirectional protocol between two blocks called a sender and a receiver, as shown in Figure 22.8. The sender starts the communication cycle by sending a request signal 'req' to the receiver. This signal means that data are ready to be sent. The receiver starts the new computation after the detection of the 'req' signal and sends back an acknowledgment signal 'ack' to the sender, marking the end of the communication cycle, so that a new one can start.

FIGURE 22.8: Handshake protocol used in asynchronous circuits (request and data from sender to receiver, acknowledgment back).

The main gate used in this kind of protocol is the "Muller" gate, also known as the C-element. It helps, thanks to its properties, to detect a rendezvous between different signals. The C-element is in fact a state-holding gate. Table 22.2 shows its output behavior.

TABLE 22.2
Output of a C-Element

A  B  |  Z
0  0  |  0
0  1  |  Z−1
1  0  |  Z−1
1  1  |  1

(Z−1 denotes the previous output value.)

Consequently, when the output changes from '0' to '1', we may conclude that both inputs are '1'. And similarly, when the output changes from '1' to '0', we may conclude that both inputs are now set to '0'. This behavior could be interpreted as an acknowledgment that indicates when both inputs are '1' or '0'. This is why the C-element is extensively used in asynchronous logic and is considered as the fundamental component on which the communication protocols are based.

22.5.3 Micropipeline Circuits

Many asynchronous logic styles exist in the literature. It is worth mentioning that the choice of the asynchronous style affects the circuit implementation (area, speed, power, robustness, etc.). One of the most interesting styles to implement our filters with a limited area and power consumption is the micropipeline style [21]. Among all the asynchronous circuit styles, the micropipeline has the closest resemblance with the design of synchronous circuits due to the extensive use of timing assumptions [20]. Similarly to a synchronous pipeline circuit, the storage elements are controlled by control signals. Nevertheless, there is no global clock. These signals are generated by the Muller gates in the pipeline controlling the storage elements, as shown in Figure 22.9.

This circuit can be seen as an asynchronous data-flow structure composed of two main blocks:

• The data path, which is completely similar to the synchronous data path and locally clocked by a distributed asynchronous controller.

• The control path, which is a distributed asynchronous controller able to provide the clock signal to memory elements when data are available. This feature replaces the clock tree of synchronous circuits.

FIGURE 22.9: Micropipeline circuit (logic stages separated by D latches, clocked by a chain of Muller C-elements through matched delays, with req/ack handshakes between stages).

This kind of circuit is easy to implement because we can adapt the synchronous flow. Indeed, the data path synthesis can be done by the existing commercial tools, and the control path insertion can be semi-automated. This is perfect for our purpose because we target event-driven

FIGURE 22.10: Architecture of FIR for nonuniform sampling: two shift registers for (x_n, δt_n), a ROM for (h_k, δτ_k), two multipliers (MULT1, MULT2), an accumulator (ACC), an output buffer (BUFF) delivering (y_n, δt_n), and an asynchronous controller (Async ctrl) producing j, k, and Δ.
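The C-element behavior of Table 22.2, on which the handshake sequencing of this architecture relies, can be simulated in a few lines (an illustrative sketch; function names are ours):

```python
def c_element(a, b, z_prev):
    """Muller C-element (Table 22.2): the output follows the inputs when they
    agree, and holds its previous state otherwise."""
    return a if a == b else z_prev

def trace(inputs, z=0):
    """Apply a sequence of (a, b) input pairs and record the output history."""
    out = []
    for a, b in inputs:
        z = c_element(a, b, z)
        out.append(z)
    return out
```

The rendezvous property is visible in a trace: feeding (0,0), (1,0), (1,1), (0,1), (0,0) yields the outputs 0, 0, 1, 1, 0, so the output rises only once both inputs are '1' and falls only once both are back to '0', exactly the acknowledgment behavior described in Section 22.5.2.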

digital signal processing. Each time new data are sampled, the computation is activated and naturally stopped after the end of the calculation. This perfectly suits the objectives of low power while maintaining an easy circuit implementation with a similar area to their synchronous counterpart.

22.5.4 Implementing the Algorithm

With the nonuniform sampling scheme, the sampling instants of the FIR filter impulse response do not necessarily fit with the sampling times of the input samples. In order to avoid this, we use Algorithm 22.1 (see Section 22.2.2).

The previously proposed algorithm is implemented with the structure presented in Figure 22.10, which describes the FIR filter architecture. The dashed lines symbolize the handshake signals used for implementing the micropipeline logic. The two shift registers get the sampled signal data (x_n, δt_n) from the A-ADC. The communication between the shift registers and the A-ADC is based on the handshake protocol as described in the previous section. The shift registers are the memory blocks of the filter. They store the input samples, magnitudes, and time intervals. The output of this register is connected to an asynchronous controller that allows selecting samples depending on the value of the selection input j. The ROM is used to store the impulse response coefficients. The coefficients are selected by the signal k. The asynchronous controller block (Async ctrl) has multiple functionalities:

• It allows determining the minimum time interval Δ among {δt_{n−j}, δτ_k}.

• It generates the selection signals j and k that control the selection process in the shift registers.

• It detects the end of the convolution product round and allows starting a new one. This functionality is based on detecting the signal k. If k reaches its maximum, this means that all the filter coefficients have been used. Thus, the convolution product is done. The asynchronous controller block generates at this point a reset signal to the other blocks, indicating that the filter is ready to start a new convolution.

The two multipliers (MULT1 and MULT2) compute the intermediate values Δ · x_{n−j} · h_k that are accumulated in the accumulator (ACC). Finally, at the end of each convolution product cycle, when the reset signal is sent, the accumulator (ACC) transfers its content to the buffer (BUFF). Then, the resulting convolution product (y_n, δt_n) is available at the filter output.

The FIR filter has been implemented with the micropipeline logic style on an Altera FPGA board following the approach presented in [14]. The experimental setup can be seen in Figure 22.11.

FIGURE 22.11: Nonuniform sampling and filtering experimental setup.

22.6 Conclusion

In this chapter, several filtering techniques that can be applied to nonuniformly sampled signals have been presented. The approaches are general and can be used with any kind of signal. We first describe the algorithm to
implement a FIR filter and show that the result is convincing even if we used a small set of samples compared to the uniform approach. Then, we present the implementation of IIR filters via a state representation and discuss several finite difference approximations. The following section shows how nonuniform filters can be built in the frequency domain. This novel strategy for synthesizing filters is of interest for drastically reducing the number of coefficients. Finally, the design architecture of such filters on a real system is presented. The technology target, asynchronous circuits, fully exploits the time irregularity of the data. Indeed, these circuits are event-driven and are able to react and process data only when new samples are produced. This gives a perfectly coherent system that captures new samples when the signal is evolving and processes the new data on demand. With such an approach, it is possible to drastically reduce the system activity, the number of samples, and the quantity of data sent on the output. Considering today's strong requirements for electronic equipment covering applications such as smartphones, notepads, laptops, or the emerging Internet of Things and the resulting data deluge, it becomes urgent to stop this orgy of data, which largely contributes to a colossal waste of energy.

This chapter is also a contribution to encourage people to directly process nonuniformly sampled data. Indeed, it is possible to perform spectrum analysis or pattern recognition, for instance. This leads to really interesting results demonstrating that most of the time we are using more samples than necessary. Finally, the nonuniform approach is a plus for people who target ultra-low power systems, because it helps reduce by several orders of magnitude the data volume and processing and thus the power consumption.


Acknowledgments

This work has been partially supported by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025-01).


Bibliography

[1] E. Allier, G. Sicard, L. Fesquet, and M. Renaudin. Asynchronous level crossing analog to digital converters. Special Issue on ADC Modelling and Testing of Measurement, 37(4):296–309, 2005.

[2] E. Allier, G. Sicard, L. Fesquet, and M. Renaudin. A new class of asynchronous A/D converters based on time quantization. In 9th International Symposium on Asynchronous Circuits and Systems (Async'03), pp. 196–205, IEEE, Vancouver, BC, May 2003.

[3] F. Aeschlimann. Traitement du signal échantillonné non uniformément: Algorithme et architecture. PhD thesis, INP, Grenoble, February 2006.

[4] F. Aeschlimann, E. Allier, L. Fesquet, and M. Renaudin. Asynchronous FIR filters: Towards a new digital processing chain. In 10th International Symposium on Asynchronous Circuits and Systems (Async'04), pp. 198–206, IEEE, Hersonisos, Crete, April 2004.

[5] F. Akopyan, R. Manohar, and A. B. Apsel. A level-crossing flash asynchronous analog-to-digital converter. In 12th IEEE International Symposium on Asynchronous Circuits and Systems (ASYNC'06), pp. 11–22, Grenoble, France, March 2006.

[6] B. Bidégaray-Fesquet and M. Clausel. Data driven sampling of oscillating signals. Sampling Theory in Signal and Image Processing, 13(2):175–187, 2014.

[7] B. Bidégaray-Fesquet and L. Fesquet. A fully non-uniform approach to FIR filtering. In 8th International Conference on Sampling Theory and Applications (SampTa'09), L. Fesquet and B. Torresani, Eds., pp. 1–4, Marseille, France, May 2009.

[8] B. Bidégaray-Fesquet and L. Fesquet. SPASS 2.0: Signal Processing for ASynchronous Systems. Software, May 2010.

[9] B. Bidégaray-Fesquet and L. Fesquet. Non-uniform filter interpolation in the frequency domain. Sampling Theory in Signal and Image Processing, 10:17–35, 2011.

[10] B. Bidégaray-Fesquet and L. Fesquet. Non Uniform Filter Design. Technical Report, Grenoble, 2015.

[11] B. Bidégaray-Fesquet and L. Fesquet. A New Synthesis Approach for Non-Uniform Filters in the Log-Scale: Proof of Concept. Technical Report, Grenoble, 2012.

[12] L. Fesquet and B. Bidégaray-Fesquet. IIR digital filtering of non-uniformly sampled signals via state representation. Signal Processing, 90:2811–2821, 2010.

[13] L. Fontaine and J. Ragot. Filtrage de signaux à échantillonnage irrégulier. Traitement du Signal, 18:89–101, 2001.

[14] Q. T. Ho, J.-B. Rigaud, L. Fesquet, M. Renaudin, and R. Rolland. Implementing asynchronous circuits on LUT based FPGAs. In The 12th International Conference on Field-Programmable Logic and Applications (FPL'02), pp. 36–46, La Grande-Motte, France, 2–4 September 2002.

[15] J. W. Mark and T. D. Todd. A nonuniform sampling approach to data compression. IEEE Transactions on Communications, 29:24–32, 1981.

[16] F. A. Marvasti. Nonuniform sampling. Theory and practice. In Information Technology: Transmission, Processing and Storage, Springer, 2001.

[17] D. Poulton and J. Oksman. Filtrage des signaux à échantillonnage non uniforme. Traitement du Signal, 18:81–88, 2001.

[18] S. Mian Qaisar, L. Fesquet, and M. Renaudin. Adaptive rate filtering for a signal driven sampling scheme. In International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2007, vol. 3, pp. 1465–1468, Honolulu, HI, April 2007.

[19] N. Sayiner, H. V. Sorensen, and T. R. Viswanathan. A level-crossing sampling scheme for A/D conversion. IEEE Transactions on Circuits and Systems II, 43:335–339, 1996.

[20] J. Sparsø and S. Furber. Principles of Asynchronous Circuit Design: A Systems Perspective. Springer, Boston, MA, 2001.

[21] I. E. Sutherland. Micropipelines. Communications of the ACM, 32:720–738, 1989.
23
Reconstruction of Varying Bandwidth Signals from
Event-Triggered Samples

Dominik Rzepka
AGH University of Science and Technology
Kraków, Poland

Mirosław Pawlak
University of Manitoba
Winnipeg, MB, Canada

Dariusz Kościelnik
AGH University of Science and Technology
Kraków, Poland

Marek Miśkowicz
AGH University of Science and Technology
Kraków, Poland

CONTENTS
23.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
23.2 Concepts of Local Bandwidth and Recovery of Signals with Varying Bandwidth . . . . . . . . . . . . . . . . . . . . . . . . 531
23.2.1 Global Bandwidth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
23.2.2 Local Bandwidth and Signals with Varying Bandwidth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
23.2.3 Reconstruction of Varying Bandwidth Signals with Nonideal Sampling Timing . . . . . . . . . . . . . . . . 532
23.3 Level-Crossing Sampling of Gaussian Random Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
23.3.1 Level Crossings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
23.3.2 Multilevel Crossings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
23.3.3 Estimation of Level-Crossing Local Intensity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
23.3.4 Estimation of Signal Parameters in Multilevel Crossing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
23.4 Reconstruction of Bandlimited Signals from Nonuniform Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
23.4.1 Perfect Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
23.4.2 Minimum Mean Squared Error Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
23.4.3 Example of Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
23.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543

ABSTRACT Level-crossing sampling is an event-based sampling scheme providing samples whose density depends on the local bandwidth of the sampled signal. The use of level crossings makes it possible to exploit local signal properties and avoid unnecessarily fast sampling when the time-varying local bandwidth is low. In this chapter, a method of estimation of the local signal bandwidth based on counting the level crossings is proposed. Furthermore, the recovery of the original signal from the level crossings, based on methods suited for irregular samples combined with time warping, is presented.


23.1 Introduction

The Shannon theorem introduced to the engineering community in 1949 is a fundamental result used for digital

signal processing of analog signals [1]. The main assumption under which a signal can be reconstructed from its discrete-time representation is that the sampling rate is at least twice the highest frequency component in the signal spectrum (the Nyquist frequency), or twice the signal bandwidth. The class of signals with a finite bandwidth is often referred to as bandlimited. The concept of bandwidth is defined using the Fourier transform of the infinitely long signal, that is, the signal that is not vanishing in any finite time interval of (−∞, ∞). Furthermore, according to the uncertainty principle, only signals defined over an infinite time interval can be exactly bandlimited. As is known, real physical signals are always time limited, so they cannot be perfectly bandlimited, and the bandlimited model is only a convenient approximation [2]. Moreover, in practice, the Nyquist frequency is determined on the basis of a finite record of signal measurements as the frequency above which spectral components are weak enough to be neglected. Therefore, the evaluation of the Nyquist frequency refers not to the global but to the local signal behavior.

In the conventional signal processing system, the sampling rate corresponding to the Nyquist frequency is kept fixed while the signal is processed. Such an approach is justified for signals whose spectral properties do not evolve significantly in time. However, there are some classes of signals, for example, FM signals, radar, EEG, neuronal activity, whose local spectral content is strongly varying. These signals do not change their values significantly during some long time intervals, followed by rapid aberrations over short time intervals. The maximum frequency component of the local bandwidth evaluated over finite-length time intervals for such signals may change significantly and can be much lower than the Nyquist frequency defined for the global bandwidth. Intuitively, by exploiting local properties, signals can be sampled faster when the local bandwidth becomes higher and slower in regions of lower local bandwidth, which provides a potential for more efficient resource utilization.

The postulates to utilize time-varying local properties of the signal and adapt the sampling rate to the changing frequency content have been proposed in several works in the past [3–9]. The objective of these approaches is to avoid unnecessarily fast sampling when the local bandwidth is low. As shown in [4], the sampling of the signal at the local Nyquist frequency preserving its perfect recovery is based on scaling the uniform sampling frequency by a time-warping function that reflects the varying local bandwidths. This idea, however, requires knowledge of the local bandwidth to control the time-varying sampling rate accordingly, and to ensure the proper signal reconstruction. The problem of triggering the sampling operations with a rate varying with the local frequency content is one of the challenges of signal processing techniques focused on exploiting local signal properties. The method which can be used for this purpose is level-crossing sampling, because the mean rate of level-crossing samples depends on the power spectral density of the signal [5,6,10,11]. The mean rate of level crossings is higher when the signal varies quickly and lower when it changes more slowly. The mean rate of level-crossing sampling for a bandlimited signal modeled as a stationary Gaussian process is directly proportional to the signal bandwidth, if the spectrum shape is fixed. By evaluating the number of level crossings in a time window of a finite size, an estimate of the local bandwidth can be obtained. However, as the level-crossing rate depends on the bandwidth only in the mean sense, the level-crossing instants, in general, do not match the time grid defined by the local Nyquist rate. Therefore, to recover the original signal from the level crossings, we propose to adopt methods suited for irregular samples combined with time warping.

The local bandwidth can be directly related to the concept of a local intensity function. This is a measure of the distribution of sampling points that characterizes a given signal in terms of its variability and reconstructability. In the case of samples generated from level crossings of stochastic signals, the intensity function is defined by the joint density function of the signal and its derivative. In Section 23.3.3, we provide a comprehensive introduction to the problem of estimation of the intensity function from a given set of level-crossing samples. Viewing the intensity function as the variable rate of the point process, we propose nonparametric techniques for recovering the function. The kernel nonparametric estimation method is discussed, including the issue of its consistency and optimal tuning.

The signal analysis based on the evolution of a local bandwidth might be used in various applications. In particular, music can be considered as possessing a local bandwidth that depends on the set of pitches present in a given time window [9]. The concept of the local bandwidth can also be adopted for the decomposition of images into edges and textures [9]. The most promising area concerns resource-constrained applications, for example, biomedical signals processed in wireless systems with limited energy resources.

The rest of the chapter is structured as follows: Section 23.2 introduces the concept of the local bandwidth and the corresponding signal recovery problem. In Section 23.3, we develop the methodology for level-crossing sampling for signals that are modeled as Gaussian stochastic processes. Section 23.4 discusses the signal reconstruction method for non-uniformly distributed samples that occur in the level-crossing
sampling strategy and any other schemes with irregular sampling.


23.2 Concepts of Local Bandwidth and Recovery of Signals with Varying Bandwidth

The bandwidth is a fundamental measure that defines the frequency content of a signal. Intuitively, as the range of frequencies may change in time, the local bandwidth refers to the rate at which a signal varies locally. However, there is an underlying opposition between the concept of bandwidth and time locality because the former is non-local by definition [9].

23.2.1 Global Bandwidth

The bandwidth Ω (rad/s) (or B = Ω/2π [Hz]) of the finite energy signal x(t) is defined using the Fourier transform:

    X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt    (23.1)

as the upper frequency component in the spectrum of x(t). The Ω-bandlimitedness is defined as the vanishing of the Fourier transform X(ω) outside the interval [−Ω, Ω], that is,

    X(\omega) = 0 \quad \text{for } |\omega| > \Omega.    (23.2)

Taking into account the time-limited nature of physical signals, equation (23.2) can be replaced by a weaker requirement; the energy of the frequency components outside [−Ω, Ω] is bounded by a certain small ε, that is,

    \int_{|\omega| > \Omega} |X(\omega)|^2\, d\omega < \varepsilon.    (23.3)

The perfect recovery of the ideally bandlimited signal is possible by means of the Shannon–Whittaker interpolation formula

    x(t) = \sum_{n=-\infty}^{\infty} x(nT)\, \mathrm{sinc}\!\left(\frac{\pi}{T}(t - nT)\right)    (23.4)

for T ≤ π/Ω, where Ω is the signal bandwidth and sinc(t) = sin(t)/t.

In practice, the number of samples is finite, but the error of reconstruction caused by such a constraint decreases as the number of samples grows. In fact, for signals with sufficiently smooth Fourier spectra, it is known that the L²-distance ‖x_N − x‖₂ between x(t) and the truncated version of (23.4), that is, x_N(t) = \sum_{n=-N}^{N} x(nT)\, \mathrm{sinc}(\frac{\pi}{T}(t - nT)), is of order O(1/N). See [12] for further details on the truncation error.

23.2.2 Local Bandwidth and Signals with Varying Bandwidth

The extension of the notion of the global bandwidth Ω to a local bandwidth Ω(t) defined at a certain time instant t is not unique and can be defined in several different ways [9]. The decisive criterion of usefulness of a given definition is the possibility to formulate a sampling theorem which allows for the perfect recovery of a signal from an infinite number of error-free signal samples and perfect knowledge of the local bandwidth.

Intuitively, the signal x(t) bandlimited at time t to the bandwidth Ω(t) can be defined as the output of an ideal low-pass filter with time-varying cutoff frequency Ω(t) for some possibly non-bandlimited signal y(t) on the input. This can be represented using the Fourier transform with the variable cutoff frequency, that is,

    x(t) = \frac{1}{2\pi} \int_{-\Omega(t)}^{\Omega(t)} X(\omega)\, e^{j\omega t}\, d\omega.    (23.5)

Alternatively, we can define Ω(t) as a parameter of the following low-pass time-varying filter with the input y(t) and output x(t):

    x(t) = \int_{-\infty}^{\infty} h(t - \tau, t)\, y(\tau)\, d\tau,    (23.6)

where h(λ, t) = sin(Ω(t)λ)/(πλ) is the ideal low-pass filter response function with the time-varying bandwidth Ω(t).

Unfortunately, such representations of x(t) cannot lead to the perfect reconstruction formula, as was proven by Horiuchi [3]. In fact, in this case, it is possible to formulate the consistent reconstruction, for which the reconstructed signal x̃(t) = x(t) only for t = t_n, where {x(t_n)} are samples of x(t). The reconstruction, however, is not valid for other time instants.

Since the local bandwidth definition using the Fourier transform is not useful for reconstruction, the varying bandwidth reconstruction formula can be derived using a different, more general approach, based on functional analysis [7,8]. The aim of this extension was to avoid the Gibbs phenomenon, which can occur when the signal x(t) reconstructed using (23.4) is not bandlimited. However, this reconstruction scheme is characterized by relatively high complexity, so we will focus on the simpler approach yielding acceptable results.

As the notion of the bandwidth relies on the representation of a signal as a sum of sine and cosine functions with frequencies from the interval [−Ω, Ω], the variation of the bandwidth can, therefore, be considered based

on the modulation of frequencies of each signal component [4]. The frequency modulation of the sinusoidal signal is given by

    z(t) = \sin\!\left(\int_{-\infty}^{t} \Omega(\tau)\, d\tau\right),    (23.7)

where Ω(t) > 0 is an instantaneous angular frequency assumed to be a positive and continuous function of t. The nonlinear increase of the sine phase in (23.7) can be viewed as warping the time axis, which can be formally expressed by a warping function

    \gamma(t) = \int_{-\infty}^{t} \Omega(\tau)\, d\tau.    (23.8)

The signal x(t) with varying bandwidth defined by (23.8) can now be represented as the warped version of the bandlimited signal y(t)

    x(t) = y(\gamma(t)).    (23.9)

Due to the assumption that Ω(t) is positive and continuous, we have a well-defined inverse function α(t) = γ⁻¹(t) allowing to rewrite (23.9) as follows:

    x(\alpha(t)) = y(t).    (23.10)

It is worth noting that if Ω(t) is constant, that is, Ω(t) = Ω, then γ(t) = Ωt and α(t) = t/Ω. Furthermore, we can view (23.9) as the time-varying filter with the input y(t) and output x(t). The corresponding transfer function of this filter can be shown to be

    h(\lambda, t) = \frac{\delta(t - \alpha(t))}{\Omega(\alpha(t))},    (23.11)

where δ(t) is the Dirac delta function.

In order to reconstruct the signal x(t) defined in (23.9), the uniform sampling grid {iT, i = 0, ±1, ±2, ±3, ...} should be affected by time warping, resulting in nonuniform sampling. Without loss of generality, it can be assumed that there exists a sample at time t₀ = 0. Then, the locations of other samples must correspond to the zeros of the time-warped function and are given by the solutions of the following equation:

    \sin(\gamma(t_n)) = 0.    (23.12)

This is equivalent to setting

    t_n = \alpha(n\pi).    (23.13)

The formula in (23.13) defines the sampling points according to the local Nyquist rate, that is, to the instantaneous Nyquist rate related to the local signal bandwidth Ω(t).

The above-presented approach was introduced by Clark et al. [4] along with the following time-warped interpolation formula:

    x(t) = \sum_{n=-\infty}^{\infty} x(\alpha(n\pi))\, \mathrm{sinc}\!\left(\bar{\Omega}(t)\left(t - \frac{n\pi}{\bar{\Omega}(t)}\right)\right),    (23.14)

where Ω̄(t) = γ(t)/t for t ≠ 0, which is the average value of Ω(t).

Let us recall again that if Ω(t) is constant, that is, Ω(t) = Ω, then Ω̄(t) = Ω and α(t) = t/Ω. In this case (23.14) becomes the standard Shannon–Whittaker interpolation formula in (23.4) with T = π/Ω.

According to the formula in (23.10), the varying bandwidth signal can be transformed into the regular bandlimited signal such that the perfectness of the reconstruction formulas both in (23.4) and (23.14) is then preserved.

23.2.3 Reconstruction of Varying Bandwidth Signals with Nonideal Sampling Timing

To achieve the smallest possible number of samples assuring the perfect reconstruction of the signal, the samples should be taken at the rate defined by the local Nyquist rate, that is, by the instantaneous Nyquist rate related to the local signal bandwidth Ω(t) with sampling times t_n given by (23.13). This would, however, require a priori knowledge of the local bandwidth Ω(t) and precise control of triggering the sampling operations at the instants t_n according to the fluctuations of Ω(t) in time. If this knowledge is unavailable, the local spectrum can be estimated using a spectrum analyzer. However, such a procedure would require complex hardware, and the result of estimation would be available with a considerable delay. This makes this idea impractical.

In the practically realizable level-crossing sampling, the samples are provided at times {t′_n}, such that their density in time depends on the local signal bandwidth. Due to the approximate relationship with the local bandwidth, the instants {t′_n}, in general, do not match the ideal time grid {t_n} given by (23.13), which defines the sampling according to the local Nyquist rate.

According to the local Nyquist rate (23.13), the recovery is provided by the classical Shannon theorem with the time warping, see (23.14); consequently, the recovery of the signal from the samples {t′_n} should be obtained by means of a signal reconstruction method suited for irregular samples combined with the time-warping function in (23.8).

The necessary condition for the reconstruction of signals from irregular samples is to keep the average sampling rate above the Nyquist rate [13]. Equivalently, the sampling

FIGURE 23.1
(a) Example of bandwidth function Ω(t), (b) sinc functions and time locations of uniform samples for Ω = π, (c) sinc functions warped with Ω(t) and time locations {t_n} of warped samples according to (23.13), and (d) sinc functions warped with Ω(t) and an example of sampling instants {t′_n} irregular with respect to {t_n}.
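The warped grid of panel (c) can be computed numerically. The sketch below is illustrative only and not from the chapter: the bandwidth profile Ω(t) is a made-up Gaussian bump, the lower integration limit of (23.8) is moved to t = 0, and the inverse α = γ⁻¹ of (23.13) is evaluated by interpolation, which is valid because Ω(t) > 0 makes γ strictly increasing.

```python
import numpy as np

# Assumed bandwidth profile Omega(t) > 0 (illustrative only).
t = np.linspace(0.0, 20.0, 20001)
omega = 1.0 + 2.0 * np.exp(-((t - 10.0) ** 2) / 8.0)

# gamma(t) = integral of Omega(tau) dtau, cf. (23.8), by the trapezoidal rule.
gamma = np.concatenate(
    ([0.0], np.cumsum(0.5 * (omega[1:] + omega[:-1]) * np.diff(t)))
)

# alpha = gamma^{-1}: gamma is strictly increasing, so interpolating with the
# axes swapped evaluates the inverse on the sampled grid.
n = np.arange(int(gamma[-1] / np.pi) + 1)
t_n = np.interp(n * np.pi, gamma, t)  # warped sampling instants t_n = alpha(n*pi)
```

As expected, the instants t_n are packed more densely where Ω(t) is large (around t = 10 in this example) and spread out where it is small, mirroring panels (c) and (d).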

density, defined as

    D(\{t_n\}) = \lim_{a \to \infty} \frac{\#\{t_n : t_n \in [-a, a]\}}{2a},    (23.15)

must be the same for the irregular sampling instants {t′_n} and the time-warped instants {t_n}, that is, D({t_n}) = D({t′_n}).

Therefore, in theory, it is sufficient to sample a varying bandwidth signal at the average frequency f_s = Ω̄/π, where Ω̄ = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} \Omega(t)\, dt is the average of Ω(t) over the whole time axis. This allows for the perfect recovery if Ω(t) is known exactly at each time instant. In practice, however, the recovery algorithms for irregular samples become unstable if the samples are highly irregular [14], so the closeness of the points {t′_n} and {t_n} is beneficial.

The reconstruction of the varying bandwidth signal using the recovery algorithm suited for the irregularly sampled constant bandwidth signal is performed in the following steps:

1. Unwarp the irregular time locations according to u_n = α(t′_n).

2. Reconstruct the constant-bandwidth signal y(t) from the samples y(u_n) = x(t′_n).

3. Warp the reconstructed signal ỹ(t) according to x̃(t) = ỹ(γ(t)).

The methods of signal reconstruction from irregular samples are described in Section 23.4.


23.3 Level-Crossing Sampling of Gaussian Random Processes

23.3.1 Level Crossings

In the forthcoming discussion, the relationship between the local bandwidth and the level-crossing rate will be analyzed in detail based on the assumption that the input signal is a bandlimited Gaussian random process. In the classical study, Rice established the formula for the mean number of level crossings for stationary stochastic processes [15,16], see also [17]. Let

    \theta_L = \#\{t_n \in (0, 1) : x(t_n) = L\}    (23.16)

be the number of level crossings of the stochastic process x(t) at the level L over the interval (0, 1). Let E[θ_L]

denote the corresponding mean value of θ_L. Rice's celebrated formula for E[θ_L] assumes that x(t) is a zero-mean, stationary, and differentiable Gaussian process with the autocorrelation function ρ_x(τ). Then, the mean number of crossings of the level L by x(t) is

    E[\theta_L] = \frac{1}{\pi} \sqrt{\frac{\sigma_2^2}{\sigma_0^2}} \exp\!\left(\frac{-L^2}{2\sigma_0^2}\right),    (23.17)

where σ_0² = ρ_x(0) and σ_2² = −ρ_x^{(2)}(0). Since −ρ_x^{(2)}(τ) is the autocorrelation function of the process derivative x^{(1)}(t), we have σ_0² = Var[x(t)] and σ_2² = Var[x^{(1)}(t)]. Hence, the factor σ_2²/σ_0² in (23.17) is the ratio of the power of x^{(1)}(t) and the power of x(t). It is also worth noting that since σ_0² = \int_{-\infty}^{\infty} S_x(\omega)\, d\omega and σ_2² = \int_{-\infty}^{\infty} \omega^2 S_x(\omega)\, d\omega, the quantities σ_0² and σ_2² are called the spectral moments, where S_x(ω) is the power spectral density of x(t). Hence, this constitutes a relation between the signal spectrum and the mean level-crossing rate. The critical question which needs to be answered is how to provide a sufficient number of level crossings to allow the reconstruction of x(t).

The formula (23.17) indicates that there are three factors influencing the average rate of crossings of the level L for Gaussian random processes: the value of the level L; the ratio of the variances σ_2²/σ_0², respectively, of the process x(t) and its derivative x^{(1)}(t); and, for non-zero L, the variance of the process x(t). The average number of level crossings E[θ_L] grows with decreasing value of |L|, and the maximum sampling rate occurs for L = 0.

If the spectral density S_x(ω) of x(t) is flat in the band (Ω_a, Ω_b), that is,

    S_x(\omega) = \begin{cases} 0 & \text{for } |\omega| \notin (\Omega_a, \Omega_b) \\ 1 & \text{for } |\omega| \in (\Omega_a, \Omega_b) \end{cases}    (23.18)

then, by virtue of (23.17), the mean number of level crossings in the unit interval is given by

    E[\theta_L] = \frac{1}{\pi} \sqrt{\frac{\Omega_b^3 - \Omega_a^3}{3(\Omega_b - \Omega_a)}} \exp\!\left(\frac{-L^2}{4(\Omega_b - \Omega_a)}\right).    (23.19)

For the low-pass bandlimited Gaussian random process where Ω_a = 0, Ω_b = 2πB, the level-crossing rate is

    E[\theta_L] = \frac{2B}{\sqrt{3}} \exp\!\left(\frac{-L^2}{8\pi B}\right).    (23.20)

As follows from (23.20), the level-crossing rate for Gaussian low-pass random processes is directly proportional to the process bandwidth B. In fact, by expanding the exponential function we have that

    E[\theta_L] = \frac{2B}{\sqrt{3}} - \frac{L^2}{4\pi\sqrt{3}} + O\!\left(\frac{1}{B}\right).

In particular, the maximum rate that is obtained for zero crossings (L = 0) and the flat-spectrum low-pass process is

    E[\theta_0] = \frac{2B}{\sqrt{3}} \approx 1.15B < 2B.    (23.21)

If x(t) is a process with non-zero mean value m, then the formula in (23.20) can be used to evaluate the mean rate of level crossings with respect to the shifted level L − m, that is, we can evaluate E[θ_{L−m}]. For such signals, the mean rate of crossings of a single level is too low to recover the original bandlimited Gaussian process at the Nyquist rate.

23.3.2 Multilevel Crossings

The increase in the sampling rate can be achieved by triggering the sampling by multiple levels (Figure 23.2). Under the name of multilevel-crossing sampling, we mean the sampling scheme triggering samples when x(t) crosses any of the levels L_i, 1 ≤ i ≤ K. Note that in the engineering literature, such a scheme is referred to as the level-crossing sampling [18,19]. We use the term multilevel-crossing sampling to distinguish it from the crossings of a single level, which has been considered in the previous section. The number of samples triggered by multilevel-crossing sampling in a unit of time is the sum of the samples triggered by each level L_i, 1 ≤ i ≤ K. Sampling at level crossings gives the samples of the form {x(t_n) : x(t_n) = L_i}. Sampling the Gaussian process using multilevel crossings or crossings of a single non-zero level L_i makes the sampling rate dependent on the power σ_0² of the signal x(t) appearing in (23.17).

For a given set of levels L_1, L_2, ..., L_K and a zero-mean, stationary, and differentiable Gaussian process x(t), the mean rate of crossings E[θ] of all levels is the sum of the crossings of the individual levels, that is,

    E[\theta] = \frac{1}{\pi} \sqrt{\frac{\sigma_2^2}{\sigma_0^2}} \sum_{k=1}^{K} \exp\!\left(\frac{-L_k^2}{2\sigma_0^2}\right).    (23.22)

In particular, if the spectral density of x(t) is flat over the interval [0, 2πB], then

    E[\theta] = \frac{2B}{\sqrt{3}} \sum_{k=1}^{K} \exp\!\left(\frac{-L_k^2}{8\pi B}\right).    (23.23)
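Formulas (23.17), (23.22), and (23.23) are straightforward to evaluate numerically. The snippet below is an illustrative check, not part of the chapter's text: for the flat low-pass spectrum (23.18) with Ω_a = 0 and Ω_b = 2πB, it reproduces the zero-crossing rate 2B/√3 of (23.21) from the spectral moments alone.

```python
import numpy as np

def crossing_rate(levels, sigma0_sq, sigma2_sq):
    # Rice's single-level rate (23.17) summed over the levels as in (23.22).
    levels = np.atleast_1d(np.asarray(levels, dtype=float))
    return (np.sqrt(sigma2_sq / sigma0_sq) / np.pi
            * np.sum(np.exp(-levels**2 / (2.0 * sigma0_sq))))

# Spectral moments of the flat spectrum S_x = 1 on (-2*pi*B, 2*pi*B):
# sigma0^2 = 2*(2*pi*B), sigma2^2 = (2/3)*(2*pi*B)^3.
B = 10.0
wb = 2.0 * np.pi * B
sigma0_sq = 2.0 * wb
sigma2_sq = 2.0 * wb**3 / 3.0

r0 = crossing_rate(0.0, sigma0_sq, sigma2_sq)  # equals 2*B/sqrt(3), cf. (23.21)
```

The same helper with a list of levels L_1, ..., L_K gives the multilevel rate (23.22); with the moments above it coincides with (23.23).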

FIGURE 23.2
The sampling by crossings of multiple levels.


FIGURE 23.3
(a) Level-crossing sampling without hysteresis, (b) level-crossing sampling with hysteresis (send-on-delta sampling), and (c) extremum
sampling.
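The two triggering rules of Figure 23.3a and b can be sketched for a discrete-time signal as below. This is an illustration of the sampling criteria only, not the chapter's hardware implementation, and the helper names are ours.

```python
import numpy as np

def level_crossings(x, levels):
    # Without hysteresis (Figure 23.3a): a sample index is emitted whenever
    # the signal passes any of the levels L_i in either direction.
    idx = []
    for L in np.atleast_1d(levels):
        s = np.sign(x - L)
        idx.extend(np.flatnonzero(s[:-1] * s[1:] < 0))
    return np.unique(idx)

def send_on_delta(x, delta):
    # With hysteresis (send-on-delta, Figure 23.3b): a sample is emitted only
    # when the signal has moved by delta away from the last emitted value.
    idx, last = [0], x[0]
    for i in range(1, len(x)):
        if abs(x[i] - last) >= delta:
            idx.append(i)
            last = x[i]
    return np.array(idx)

t = np.linspace(0.0, 20.0, 2001)
x = np.sin(t)
lc = level_crossings(x, levels=[-0.5, 0.0, 0.5])
sod = send_on_delta(x, delta=0.5)
```

For the same level spacing Δ, the send-on-delta rule never fires on re-crossings of the last level, which is the mechanism behind the inequality E[θ]_h < E[θ] stated below.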

The formulas in (23.22) and (23.23) refer to the level-crossing sampling without hysteresis (Figure 23.3a). By comparison, the upper bound on the mean rate E[θ]_h of the level-crossing sampling with hysteresis (send-on-delta scheme, Figure 23.3b) and uniformly distributed levels for the Gaussian stationary process bandlimited to Ω = 2πB [11] is

    E[\theta]_h \leq \sqrt{\frac{8\pi}{3}}\, \frac{B\sigma_0}{\Delta} \cong 2.89\, \frac{B\sigma_0}{\Delta},    (23.24)

where Δ = L_{i+1} − L_i, i = 1, ..., K − 1, is the distance between consecutive levels L_i and L_{i+1}, and σ_0² = Var[x(t)]. The mean rate E[θ]_h of the level-crossing sampling with hysteresis approaches its upper bound defined by the right side of the formula (23.24) if Δ → 0. It can be easily shown that E[θ]_h < E[θ] for a given set L_1, L_2, ..., L_K of levels.

By extending the concept of level crossings to transforms of the input signal, new sampling criteria might be formulated. For example, the zero crossings of the first derivative define the extremum sampling (mini–max or peak sampling), which relies on capturing signal values at its local extrema (Figure 23.3c) [20–22]. Extremum sampling allows estimating the dynamically time-varying signal bandwidth similarly as level crossings do. However, unlike level crossing, extremum sampling enables recovery of the original signal already at half of the Nyquist rate [22] because it is two-channel sampling, that is, the samples provide information both on the signal value and on the zeros of its first time derivative. Furthermore, for the low-pass bandlimited Gaussian random process (23.18) with f_a = 0 and f_b = B, the mean number of extrema in the unit interval is

    E[\theta_0]_{x^{(1)}(t)} = \sqrt{2}\, B \approx 1.41B > B,    (23.25)
which exceeds the minimum sufficient sampling rate for two-channel sampling and allows for reconstruction. For recovery of the signal based on irregular derivative sampling, see [23,24].

23.3.3 Estimation of Level-Crossing Local Intensity

A well-designed event-triggered sampling system provides a number of samples proportional to the bandwidth of a signal. However, in contrast to classic sampling paradigms, the samples obtained via the level-crossing strategy are distributed irregularly. The irregularity does not prohibit the recovery of the signal from samples, as was shown in Section 23.3.1, but it has an impact on the performance of the corresponding reconstruction procedure. The reason behind this is that the sampling procedure is not adaptive, as it does not take into account the local signal complexity. The latter can be captured by the concept of the local intensity of the sampled process. Hence, let

\theta_L(T) = \frac{\#\{t_n \in (0,T) : x(t_n) = L\}}{T} \qquad (23.26)

be the proportion of the level-crossing points of the stochastic process x(t) at the level L within the time interval (0, T); see (23.16) for the related notion.

The numerator in (23.26) defines a counting process that, in the case when x(t) is a stationary Gaussian signal, behaves (as the level L is increasing) as a regular Poisson point process with the constant intensity equal to T\,E[\theta_L], where E[θ_L] is given by Rice's formula in (23.17) (see [17], theorem 8.4, or [25] for further details). Hence, in the case of the stationary Gaussian process, the level-crossing intensity, to be denoted as λ(t), is constant and equal to λ(t) = E[θ_L]. For a nonstationary and differentiable stochastic process x(t), this is not the case, and the result in [17] reveals that the relative average number of level crossings of the process x(t) within (0, T) is given by

\frac{1}{T}\int_0^T \lambda(t)\,dt, \qquad (23.27)

where now \lambda(t) = \int_{-\infty}^{\infty} |z|\, f_{x(t),x^{(1)}(t)}(L,z)\,dz defines the local intensity. Here, f_{x(t),x^{(1)}(t)}(u,z) is the joint density of (x(t), x^{(1)}(t)) that, due to nonstationarity, depends on t. To illustrate the above discussion, let us consider a simple example concerning the local intensity λ(t).

EXAMPLE 23.1 Let us define a class of nonstationary stochastic processes of the form x(t) = z(t) + ψ(t), where ψ(t) is a differentiable deterministic function, whereas z(t) is a stationary Gaussian process with zero mean and an autocorrelation function ρ_z(τ) such that ρ_z(0) = 1 and −ρ_z^{(2)}(0) = σ_2². Note that the process x(t) is nonstationary Gaussian with the deterministic trend, and x^{(1)}(t) = z^{(1)}(t) + ψ^{(1)}(t), where ψ^{(1)}(t) is the derivative of ψ(t). A tedious but straightforward algebra shows that the local intensity λ(t) (as defined in (23.27)) of the process x(t) at the level L is given by

\lambda(t) = \varphi(L - \psi(t))\left\{ 2\sigma_2\,\varphi\!\left(\frac{\psi^{(1)}(t)}{\sigma_2}\right) + 2\psi^{(1)}(t)\,\Phi\!\left(\frac{\psi^{(1)}(t)}{\sigma_2}\right) - \psi^{(1)}(t) \right\}, \qquad (23.28)

where φ(t) and Φ(t) are the pdf and CDF of the standard Gaussian random variable, respectively.

Figure 23.4 depicts the intensity λ(t), 0 < t < 5, derived in (23.28) assuming the autocorrelation function ρ_z(τ) = (1 − τ²)e^{−τ²} with the corresponding spectral moment σ_2² = 4. Two different trend functions ψ_1(t) = sin(2πt) and ψ_2(t) = 2t are applied, and two level-crossing values are used, that is, L = 0 and L = 2. Note that the intensity λ(t) corresponding to ψ_1(t) is the restriction to the interval (0, 5) of a periodic function having the period 1. The λ(t) corresponding to the linear trend ψ_2(t) is a unimodal Gaussian curve confined to (0, 5).

The local intensity will be used to estimate the local signal bandwidth. For example, in the flat low-pass model of the signal (23.18), the dependence between local bandwidth and intensity is

\hat{\Omega}(t) = \hat{\lambda}(t)\,\pi\sqrt{3}. \qquad (23.29)

A problem of practical importance is to estimate the unknown local intensity λ(t) assuming only that there are N level-crossing points {t_1, ..., t_N} in the interval (0, T). As we have already mentioned, the points {t_1, ..., t_N} can be interpreted as the realization of a point process that can be described by the step function

N(t) = \#\{t_j : 0 < t_j \le t\} \quad \text{for } t \le T. \qquad (23.30)

Figure 23.5 depicts the structure of N(t), a step function increasing by 1 at the time of each level-crossing event occurring within the interval (0, t). The value of N(t) can serve as a naive estimate of the expected number of points in (0, t), that is, the number \int_0^t \lambda(z)\,dz. Consequently, a formal derivative dN(t)/dt of N(t) could be used as an estimate of \frac{d}{dt}\int_0^t \lambda(z)\,dz = \lambda(t). Note that the derivative dN(t)/dt is given by

d(t) = \sum_j \delta(t - t_j),

where δ(t) is the Dirac delta function. Clearly, the estimate d(t) is impractical and inconsistent since it includes
FIGURE 23.4
The level-crossing intensity λ(t) of the non-stationary Gaussian stochastic processes x(t) with a deterministic trend ψ(t): (a) ψ_1(t) = sin(2πt) and (b) ψ_2(t) = 2t. Two values of level crossing: L = 0 and L = 2 (in solid line).

FIGURE 23.5
The process N(t) over the interval (0, t) is a step function increasing by 1 at the time t_1 of each level-crossing event.

infinite spikes. A consistent estimate of λ(t) can be obtained by a proper smoothing of the naive estimate d(t). This can be accomplished by taking the convolution of d(t) and the locally tuned kernel function K_h(t), where K_h(t) = h^{-1}K(t/h) and h > 0 is the smoothing parameter that controls the amount of smoothing. The kernel function K(t) is assumed to be any pdf symmetric about the origin. Hence, let \hat{\lambda}_h(t) = d(t) * K_h(t) be the kernel convolution estimate of λ(t). An explicit form of this estimate is

\hat{\lambda}_h(t) = \sum_{n=1}^{N} K_h(t - t_n). \qquad (23.31)

For theoretical properties of the kernel local intensity estimate in (23.31), we refer to [26,27] and the references cited therein. An analogous analysis of the estimate \hat{\lambda}_h(t) in the context of spike rate estimation for neuroscience applications was developed in [28] and the related references cited therein. It is also worth noting that if K(t) is the uniform kernel, that is, K(t) = \frac{1}{2}\mathbf{1}(|t| \le 1), where \mathbf{1}(\cdot) is the indicator function, then \hat{\lambda}_h(t) defines the binning counting method used commonly in neuroscience and biophysics.

The estimate in (23.31) is defined for a fixed crossing level L. In the case when we have multiple levels {L_i}, we can obtain multiple estimates as in (23.31). The number of data points {t_j : 1 < j ≤ N} depends on L, that is, a larger L makes N smaller, and this has a direct implication on the accuracy of \hat{\lambda}_h(t). It is an interesting issue how to combine the kernel estimates corresponding to different values of crossing levels to fully characterize the local bandwidth and variability of the underlying nonstationary stochastic process.

The specification of \hat{\lambda}_h(t) needs the selection of the kernel function K(t) and the smoothing parameter h. Commonly used kernels are smooth functions, and popular choices are the following: the Gaussian kernel K(t) = \frac{1}{\sqrt{2\pi}}\exp(-\frac{t^2}{2}) and the compactly supported Epanechnikov kernel K(t) = \frac{3}{4}(1 - t^2)\mathbf{1}(|t| \le 1). It is well known, however, that the choice of the smoothing parameter is far more important than the choice of the kernel function [29]. The data-driven choice of h can be based on the approximation of the optimal h_{ISE} that minimizes the integrated squared error (ISE)

\mathrm{ISE}(h) = \int_0^T \left[\hat{\lambda}_h(t) - \lambda(t)\right]^2 dt. \qquad (23.32)

The value h_{ISE} cannot be determined since we do not know the form of λ(t). Note, however, that ISE(h) can be decomposed as follows:

\mathrm{ISE}(h) = \int_0^T \hat{\lambda}_h^2(t)\,dt - 2\int_0^T \hat{\lambda}_h(t)\lambda(t)\,dt + \int_0^T \lambda^2(t)\,dt. \qquad (23.33)

The last term in (23.33) does not depend on h and can be omitted. The first one is known up to h, and the only term that needs to be evaluated is the second term in (23.33). This can be estimated by the cross-validation strategy, that is, we estimate

\frac{1}{N}\sum_{i=1}^{N} \hat{\lambda}_h^{i}(t_i),

where \hat{\lambda}_h^{i}(t) = \sum_{j=1,\, j \ne i}^{N} K_h(t - t_j) is the leave-one-out estimator. Hence, \hat{\lambda}_h^{i}(t) is the version of \hat{\lambda}_h(t), where

the observation ti is left out in constructing λ̂h (t). This replaced by its leave—(2l + 1)—out version, that is,
leads to the selection of hCV being the minimizer of the
N
following criterion:
λ̂i,l
h (t) = ∑ Kh ( t − t j ), (23.35)
j: |i − j |> l
 T" #2 N
CV (h) = λ̂h (t) dt − 2 ∑ λ̂ih (ti ).
(23.34) where l is a positive integer. The choice l depends on the
0 i =1 correlation structure of {t j }. The criterion in (23.35) takes
into account the fact that the correlation between ti and
Plugging the formula in (23.31) into (23.34), we can t decreases as the distance between t and t increases.
j i j
obtain the equivalent kernel form of CV (h) The case l = 0 corresponds to the lack of correlation and
then λ̂i,0h ( t ) is equal to λ̂h ( t ) used in (23.34). In practical
i
N N N N
CV (h) = ∑ ∑ K̄h (ti − t j ) − 4 ∑ ∑ Kh (ti − t j ), cases, the choice of l in the range 1 ≤ l ≤ 3 is usually
i =1 j =1 i =1 j = j +1 sufficient, see [29]. An example of such a estimation is
shown in Figure 23.6.
where K̄h (t) = (Kh ∗ Kh )(t) is the convolution kernel, An interesting approach for selecting the smooth-
and the formula for the second term results from the ing parameter h, based on the Bayesian choice, was
symmetry of the kernel function. proposed in [26]. Here, one assumes that the inten-
It was shown in [26] that the choice hCV is a consis- sity function λ(t) is a realization of a stationary, non-
tent approximation of the optimal hISE if the number of negative random process with the mean μ and the
points {t j : 1 < j ≤ N } in (0, T ) is increasing. It is worth autocorrelation function ρλ (τ). Then, {t1 , . . . , t N } can
noting that the points {t j : 1 < j ≤ N } constitute a cor- be viewed as a realization of double stochastic point
related random process, and the aforementioned cross- process N ( t ) as it is defined in (23.30). The process
validation strategy may produce a biased estimate of N ( t ) is characterized by the property that condition-
hISE . In fact, if we normalize λ(t) such that λ(t) = cα(t), ing on the realization λ ( t ) , the conditional process
T N (t)|λ(t) is the inhomogeneous Poisson point process
where 0 α(t)dt = 1, then for given N, the level-crossing
with the rate function λ(t). This strategy allows the
points {t1 , . . . , t N } have the same distribution as the
explicit
-" evaluation. of the mean square error MSE(h) =
order statistics corresponding to N independent random #2
variables with pdf α(t) on the interval (0, T ). This fact E λ̂h (t) − λ(t) , where the expectation is taken
would suggest the modification of (23.34) with λ̂ih (t) now over the randomness in {t1 , . . . , t N } and in λ(t).

3
x(t)
2

1
x(t)

–1

–2

–3
0 50 100 150 200 250 300
t
(a)
0.5
Ω(t)
0.4 Ω̂(t) estimated using x(t) crossings
x(t) zero-crossings instants

0.3
Ω(t)

0.2

0.1

0
0 50 100 150 200 250 300
t
(b)

FIGURE 23.6
(a) Signal x ( t) with time warping demonstrated using time grid and (b) estimate Ω̃( t) of the time-varying bandwidth Ω( t).
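Under the assumptions above, the kernel intensity estimate (23.31) and the kernel form of the cross-validation criterion are straightforward to implement. The sketch below is our own minimal implementation, using a Gaussian kernel, ignoring edge effects, and assuming uncorrelated event times (i.e., l = 0):

```python
import numpy as np

def lambda_hat(t, events, h):
    """Kernel intensity estimate (23.31) with a Gaussian kernel K_h."""
    u = (np.asarray(t)[:, None] - events[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (h * np.sqrt(2.0 * np.pi))

def cv_score(events, h):
    """Kernel form of the cross-validation criterion:
    sum_ij (K_h * K_h)(t_i - t_j) - 2 sum_{i != j} K_h(t_i - t_j)."""
    d = events[:, None] - events[None, :]
    # Convolution of two Gaussian kernels: a Gaussian with variance 2h^2.
    kbar = np.exp(-d**2 / (4.0 * h**2)) / (2.0 * h * np.sqrt(np.pi))
    k = np.exp(-0.5 * (d / h)**2) / (h * np.sqrt(2.0 * np.pi))
    off_diag = ~np.eye(len(events), dtype=bool)
    return kbar.sum() - 2.0 * k[off_diag].sum()

# Toy check: 500 events spread uniformly over (0, 100) have intensity ~5.
rng = np.random.default_rng(1)
events = np.sort(rng.uniform(0.0, 100.0, 500))
grid = np.linspace(20.0, 80.0, 601)          # interior, away from edges
est = lambda_hat(grid, events, 5.0)
print(f"mean estimated intensity: {est.mean():.2f}")  # should be near 5

# Pick h minimizing CV(h) over a small grid of candidate bandwidths.
hs = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
h_cv = hs[np.argmin([cv_score(events, h) for h in hs])]
print(f"h selected by cross-validation: {h_cv}")
```

Note that the Gaussian-times-Gaussian convolution has a closed form (a Gaussian with doubled variance), which is what makes the kernel form of CV(h) cheap to evaluate; for correlated event times, the leave-(2l+1)-out variant (23.35) would exclude the near-diagonal band |i − j| ≤ l instead of only the diagonal.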

The formula for the error is only a function of μ and ρ_λ(τ). For the uniform kernel, the formula reads as

\mathrm{MSE}(h) = \frac{\mu}{2h}\,\rho_\lambda(0) + \left(\frac{\mu}{2h}\right)^2 (1 - 2\mu\varphi(h)) + \int_0^{2h} \varphi(z)\,dz, \qquad (23.36)

where \varphi(t) = \frac{2}{\mu^2}\int_0^t \rho_\lambda(\tau)\,d\tau [26]. Since we can easily estimate μ, ρ_λ(τ), and consequently φ(t), the above expression gives an attractive strategy for selecting the smoothing parameter h. In fact, the computation of h only needs the numerical minimization of the single-variable function defined in (23.36). Let us finally mention that a localized version of the cross-validation selection method in (23.34) was examined in [28].

23.3.4 Estimation of Signal Parameters in Multilevel Crossing

Owing to (23.17), the level-crossing rate depends directly on the variance of the sampled Gaussian signal x(t), that is, it depends on the parameter σ_0². The crossing rate is high for the levels close to the mean of the process x(t) and gets smaller for higher values of levels. In this section, we propose a quantitative descriptor of the distribution of level crossings, that is, of the frequency of visits of the Gaussian process to a level L_i. In fact, let {L_i : 1 ≤ i ≤ K} be a given set of predefined levels. The probability that the level L_i is selected by the process x(t) can, due to (23.17), be defined as

g(L_i) = \frac{e^{-L_i^2/2\sigma_0^2}}{\sum_{j=1}^{K} e^{-L_j^2/2\sigma_0^2}}. \qquad (23.37)

Since the signal samples are taken when the signal x(t) crosses one of the predefined levels, this probability gives the distribution of the signal samples among the levels set {L_i : 1 ≤ i ≤ K}. An example of the distribution g(L_i) in (23.37) with K = 7 for the zero-mean Gaussian process is shown in Figure 23.7.

FIGURE 23.7
The distribution of the level-crossings set for the zero-mean Gaussian process.

In practice, the parameters σ_0², σ_2² defining the number of level crossings are unknown, but they can be estimated from the observed number of crossings for a given set of levels {L_i : 1 ≤ i ≤ K}. Hence, let θ_{L_i} denote the number of crossings of the level L_i by the observed sample path x(t), 0 < t < 1, of the Gaussian process with a generally nonzero mean value denoted as m. In this case, the generalization of Rice's formula (23.17) takes the following form:

E[\theta_L] = \frac{1}{\pi}\sqrt{\frac{\sigma_2^2}{\sigma_0^2}}\; e^{-\frac{(L-m)^2}{2\sigma_0^2}}. \qquad (23.38)

To estimate the parameters m, σ_2², and σ_0², we can apply the classic least-squares method [30]. To do so, let us denote y = \log(E[\theta_L]) and A = \frac{1}{\pi}\sqrt{\frac{\sigma_2^2}{\sigma_0^2}}. Then, by a direct algebra, the formula in (23.38) can be equivalently expressed as

y = a + bL + cL^2, \qquad (23.39)

where a = \ln A - \frac{m^2}{2\sigma_0^2}, b = \frac{2m}{2\sigma_0^2}, and c = -\frac{1}{2\sigma_0^2}.

The formula in (23.39) can be viewed as a standard quadratic regression function, linear with respect to the parameters a, b, and c. The least-squares solution
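The fit of (23.39) can be sketched numerically. The code below (our own illustration; the exact crossing counts are synthetic) generates noisy crossing rates from (23.38), fits the quadratic y = a + bL + cL² by ordinary least squares, and recovers m and σ_0² from b = 2m/(2σ_0²) and c = −1/(2σ_0²):

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth parameters of the generalized Rice formula (23.38).
m_true, s0_true, s2_true = 0.3, 1.0, 4.0   # mean, sigma_0^2, sigma_2^2
A = np.sqrt(s2_true / s0_true) / np.pi

L = np.array([-1.5, -0.75, 0.0, 0.75, 1.5])          # crossing levels
rate = A * np.exp(-(L - m_true)**2 / (2.0 * s0_true))  # E[theta_L]
# Synthetic "observed" counts over 200 time units, Poisson-perturbed.
counts = rng.poisson(200.0 * rate) / 200.0
y = np.log(counts)

# Quadratic regression y = a + b*L + c*L^2 (normal equations of (23.40)).
V = np.vander(L, 3, increasing=True)       # columns: 1, L, L^2
a, b, c = np.linalg.lstsq(V, y, rcond=None)[0]

s0_hat = -1.0 / (2.0 * c)                  # from c = -1/(2*sigma_0^2)
m_hat = b * s0_hat                         # from b = 2m/(2*sigma_0^2)
print(f"m ~ {m_hat:.2f}, sigma_0^2 ~ {s0_hat:.2f}")
```

With the estimated σ̂_0² and the intercept a, σ̂_2² follows from a = ln A − m²/(2σ_0²); the exponentially weighted system (23.41) would simply re-weight the rows of the normal equations by e^{2y_i} to suppress levels with few crossings.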

of this linear problem is based on the input variables {L_i : 1 ≤ i ≤ K} and the corresponding outputs {y_i = \log(\theta_{L_i}) : 1 ≤ i ≤ K}. This produces an estimate of a, b, and c as the solution of the following linear equations:

\begin{bmatrix} K & \Sigma_i L_i & \Sigma_i L_i^2 \\ \Sigma_i L_i & \Sigma_i L_i^2 & \Sigma_i L_i^3 \\ \Sigma_i L_i^2 & \Sigma_i L_i^3 & \Sigma_i L_i^4 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} \Sigma_i y_i \\ \Sigma_i L_i y_i \\ \Sigma_i L_i^2 y_i \end{bmatrix}, \qquad (23.40)

where \Sigma_i denotes the summation \Sigma_{i=1}^{K}. It was shown in [31] that if the values of \theta_{L_i} are small, the fitting error grows substantially due to the fact that y_i = \log(\theta_{L_i}). To prevent such a singularity of the solution, the exponentially weighted version of (23.40) of the following form was proposed:

\begin{bmatrix} \Sigma_i e^{2y_i} & \Sigma_i L_i e^{2y_i} & \Sigma_i L_i^2 e^{2y_i} \\ \Sigma_i L_i e^{2y_i} & \Sigma_i L_i^2 e^{2y_i} & \Sigma_i L_i^3 e^{2y_i} \\ \Sigma_i L_i^2 e^{2y_i} & \Sigma_i L_i^3 e^{2y_i} & \Sigma_i L_i^4 e^{2y_i} \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} \Sigma_i y_i e^{2y_i} \\ \Sigma_i L_i y_i e^{2y_i} \\ \Sigma_i L_i^2 y_i e^{2y_i} \end{bmatrix}. \qquad (23.41)

In both aforementioned versions of the least-squares algorithm, represented by (23.40) and (23.41), the searched parameters of Rice's formula in (23.38) can be easily found based on the relationships between a, b, c and m, σ_2², σ_0².

Then, a model of bandwidth estimation presented in the previous subsections can be applied. Assuming the flat low-pass model of the stationary Gaussian process, according to (23.18), we have

\hat{\Omega} = \sqrt{3\,\frac{\hat{\sigma}_2^2}{\hat{\sigma}_0^2}}, \qquad (23.42)

where \hat{\sigma}_2^2 and \hat{\sigma}_0^2 are estimated from {y_i = \log(\theta_{L_i}) : 1 ≤ i ≤ K}. If the process is nonstationary and its bandwidth is varying (but still with the assumption of spectrum flatness), then the estimation of Ω(t) must be based on the intensity functions y_i(t) = \hat{\lambda}^{(L_i)}(t), 1 ≤ i ≤ K (see Figure 23.8). Therefore, (23.41) (or (23.40), if the y_i are expected to be sufficiently large) is solved for every time instant t at which \hat{\Omega}(t) has to be estimated.

Figure 23.9 shows an example of \hat{\Omega}(t) estimation according to the scheme from Figure 23.8 for level crossings of the uniformly distributed levels L = {−1.5, −0.75, 0, 0.75, 1.5}. Since the level crossings of different levels are characterized by different rates, the kernel bandwidths estimated for each level separately are different. This is not beneficial, since the kernel bandwidth should reflect the variability of the estimated signal Ω(t). To avoid this effect, the kernel bandwidth h is estimated from the zero crossings, and then it is used in (23.35) for each level. The bandwidth estimated from the {\hat{\lambda}^{(L_i)}(t), 1 ≤ i ≤ 7} using the Gaussian fitting (23.41) is shown in thick dashed lines. The inaccuracy of the estimation at the beginning and at the end of the shown signal stems from edge effects. The accuracy of the estimates derived from single levels decreases for the levels with higher |L_i|, because the number of crossings is smaller.

FIGURE 23.8
Bandwidth estimation in multilevel crossing sampling.

23.4 Reconstruction of Bandlimited Signals from Nonuniform Samples

23.4.1 Perfect Reconstruction

As was shown in Section 23.3, a varying bandwidth signal represented using the time-warping method can be transformed into a constant bandwidth signal. The samples obtained using level crossings are irregular with respect to the time-warped sampling grid, and despite the shifting introduced by unwarping, they remain irregular with respect to the periodic grid also
FIGURE 23.9
(a) Signal x(t) with time warping demonstrated using time grid and (b) estimate Ω̂(t) of time-varying bandwidth Ω(t), estimated using all crossings and using the crossings of the individual levels L_1, ..., L_7.

after the aforementioned transformation. Then, a reconstruction from the irregular samples must be applied to recover the signal.

The theory of perfect reconstruction of a signal from nonuniform measurements represents the sampling as an inner product of the signal x(t) and the sampling function g_n(t):

x(t_n) = \langle x(t), g_n(t) \rangle = \int_{-\infty}^{+\infty} x(t)\, g_n(t)\,dt. \qquad (23.43)

In the case of sampling of the bandlimited signal, the sampling function is given by g_n(t) = \frac{\sin(\Omega(t - t_n))}{\pi(t - t_n)}, since the samples of x(t) can be viewed as the output of an ideal bandpass filter

x(t_n) = \frac{1}{2\pi}\int_{-\Omega}^{\Omega} X(\omega)\, e^{j\omega t_n}\,d\omega = \int_{-\infty}^{+\infty} x(t)\,\frac{\Omega}{\pi}\,\mathrm{sinc}(\Omega(t - t_n))\,dt. \qquad (23.44)

Equation 23.44 can be interpreted in three equivalent manners: filtering X(ω) in the frequency domain, convolution of the signals x(t) and \frac{\sin(\Omega t)}{\pi t}, and the inner product of these signals.

To allow the perfect reconstruction, the family of sampling functions {g_n(t)} must constitute a frame or a Riesz basis [32]. This means that for every bandlimited signal x(t) with finite energy \langle x(t), x(t) \rangle < \infty, there exist coefficients c = {c[n]} such that

x(t) = \sum_{n=-\infty}^{+\infty} c[n]\, g_n(t). \qquad (23.45)

For bandlimited signals, {g_n(t)} is a frame if the average sampling rate E\!\left[\frac{1}{t_{n+1} - t_n}\right] exceeds the Nyquist rate [13] (overlapping samples t_{n+1} = t_n are prohibited). Perfect reconstruction is also possible if the signal and its derivatives are sampled [23,24]. For the details of the perfect reconstruction theory, we refer to [33] in this book, and to [34,35].

23.4.2 Minimum Mean Squared Error Reconstruction

Perfect reconstruction requires an infinite number of signal samples, which is not possible in practice. Practical recovery algorithms aim to minimize the reconstruction error when the signal is reconstructed from
a finite number of measurements using a finite number of reconstruction functions:

x_N(t) = \sum_{n=1}^{N} c[n]\, g_n(t). \qquad (23.46)

The minimum mean squared error reconstruction can be derived [36] by solving

\min_{c}\; \| x(t) - x_N(t) \|^2. \qquad (23.47)

Denoting the error of reconstruction as e(c), we apply the classic least-squares optimization:

e(c) = \| x(t) - x_N(t) \|^2 = \int_{-\infty}^{+\infty} \left( x(t) - \sum_{n=1}^{N} c[n]\, g_n(t) \right)^2 dt \qquad (23.48)

= \int_{-\infty}^{+\infty} x(t)^2\,dt - 2\int_{-\infty}^{+\infty} x(t) \sum_{n=1}^{N} c[n]\, g_n(t)\,dt + \sum_{n=1}^{N}\sum_{m=1}^{N} c[n]c[m] \int_{-\infty}^{+\infty} g_n(t)\, g_m(t)\,dt \qquad (23.49)

= \| x(t) \|^2 - 2\sum_{n=1}^{N} c[n]\, \langle x(t), g_n(t) \rangle + \sum_{n=1}^{N}\sum_{m=1}^{N} c[n]c[m]\, \langle g_n(t), g_m(t) \rangle. \qquad (23.50)

Setting g_n(t) = \frac{\sin(\Omega(t - t_n))}{\pi(t - t_n)}, we obtain

e(c) = \| x(t) \|^2 - 2\sum_{n=1}^{N} c[n]\, x(t_n) + \sum_{n=1}^{N}\sum_{m=1}^{N} c[n]c[m]\, \frac{\sin(\Omega(t_n - t_m))}{\pi(t_n - t_m)}. \qquad (23.51)

To find the minimum of e(c), the gradient equation ∇e(c) = 0 is solved by

\frac{\partial e(c)}{\partial c[n]} = -2x(t_n) + 2\sum_{m=1}^{N} c[m]\, \frac{\sin(\Omega(t_n - t_m))}{\pi(t_n - t_m)} = 0, \qquad (23.52)

yielding

x(t_n) = \sum_{m=1}^{N} c[m]\, \frac{\sin(\Omega(t_n - t_m))}{\pi(t_n - t_m)}. \qquad (23.53)

Assembling the equations for x(t_1), x(t_2), ..., x(t_N) together gives the following linear system of equations

x = Gc, \qquad (23.54)

with the N × N matrix G[n, m] = \frac{\sin(\Omega(t_n - t_m))}{\pi(t_n - t_m)} and the N × 1 vector x[n] = x(t_n). The signal x(t) is approximated using the coefficients c from the solution of the linear system (23.54), inserted into the recovery equation (23.46). An analogous result can be obtained when the signal x(t) is treated as a stochastic process [37].

23.4.3 Example of Reconstruction

An example of reconstruction is shown in Figure 23.10. The length of the examined signal was assumed to be t_max = 300. The bandwidth function Ω(t) was constructed using the Shannon formula with T_Ω = 60 and five uniformly distributed random samples Ω_n ∼ U(0, 1), according to

\Omega(t) = \Omega_0 + \Omega_s \sum_{n=1}^{5} \Omega_n\, \mathrm{sinc}\!\left( \frac{\pi}{T_\Omega}(t - nT_\Omega) \right). \qquad (23.55)

The constants Ω_0, Ω_s were set to obtain max Ω(t) = 0.5 and min Ω(t) = 0.1. The signal x(t) was created using 163 random samples with normal distribution, X_n ∼ N(0, 1), according to

x(t) = \sum_{n=1}^{163} X_n\, \frac{\sin(\gamma(t) - n\pi)}{\gamma(t) - n\pi}, \qquad (23.56)

where γ(t) was calculated from (23.8). The number of samples resulted from the particular bandwidth Ω(t) obtained from (23.55) and the assumed t_max. The signal was sampled using level crossings of the uniformly distributed levels L = {−1.5, −0.75, 0, 0.75, 1.5}, which yielded 511 samples.

The result of bandwidth estimation, described in Section 23.3.4, is shown in Figure 23.10b. Since the estimated bandwidth Ω̃(t) varies around the true bandwidth Ω(t), and it is better to overestimate the bandwidth than to underestimate it, a safety margin Ω_m = 0.04 was added to Ω̃(t) prior to the calculation of γ̃(t) and α̃(t). The function α̃(t) was used for unwarping the level-crossing locations, according to the procedure described in Section 23.2.3. To reconstruct the signal, the regularized variant of (23.54),

(G^T G + \varepsilon I)^{-1} G^T x = c, \qquad (23.57)

was used (ε = 0.006). The need for regularization stems from the fact that solving (23.54) is usually ill-conditioned, which results in numerical instability [35]. Finally, the signal reconstructed using (23.46) and the coefficients c from (23.57) was warped back using γ̃(t), which yielded the signal x̃(t) shown in Figure 23.10a. The reconstruction error (Figure 23.10c) is low in the region where the bandwidth estimation is accurate and

FIGURE 23.10
Example of signal reconstruction: (a) original and reconstructed signal, (b) estimate of bandwidth, and (c) reconstruction error.

gets worse at the edges, where the estimation suffers from edge effects.

23.5 Conclusions

The level-crossing sampling rate provides a dynamic estimate of the time-varying signal bandwidth and, therefore, makes it possible to exploit local signal properties to avoid unnecessarily fast sampling when the local bandwidth is low. In the present study, the local bandwidth is related to the concept of a local intensity function, which for level crossings of stochastic signals has been defined by the joint density function of the signal and its derivative. The precision of estimation was further increased for level crossings of the bandlimited Gaussian process with multiple reference levels, with the local bandwidth estimated by an algorithm fitting the Gaussian function to the local level-crossing density. Finally, the recovery of the original signal from the level crossings, by the use of methods suited for irregular samples combined with time warping, has been presented. The simulation results show that the proposed approach achieves satisfactory reconstruction accuracy.

Acknowledgments

Dominik Rzepka was supported from Dean's Grant 15.11.230.136 (AGH University of Science and Technology, Faculty of Computer Science, Electronics and Telecommunication, Kraków, Poland). Dariusz Kościelnik and Marek Miskowicz were supported by the Polish National Center of Science under grant DEC-2012/05/E/ST7/01143.

Bibliography

[1] C. E. Shannon. Communication in the presence of noise. Proceedings of the IRE, 37(1):10–21, 1949.

[2] D. Slepian. On bandwidth. Proceedings of the IEEE, 64(3):292–300, 1976.

[3] K. Horiuchi. Sampling principle for continuous signals with time-varying bands. Information and Control, 13(1):53–61, 1968.

[4] J. J. Clark, M. Palmer, and P. Lawrence. A transformation method for the reconstruction of functions from nonuniformly spaced samples. IEEE Transactions on Acoustics, Speech and Signal Processing, 33(5):1151–1165, 1985.

[5] M. Greitans and R. Shavelis. Signal-dependent sampling and reconstruction method of signals with time-varying bandwidth. In Proceedings of SAMPTA'09, International Conference on Sampling Theory and Applications, 2009.

[6] M. Greitans and R. Shavelis. Signal-dependent techniques for non-stationary signal sampling and reconstruction. In Proceedings of 17th European Signal Processing Conference EUSIPCO, 2009.

[7] Y. Hao. Generalizing sampling theory for time-varying Nyquist rates using self-adjoint extensions of symmetric operators with deficiency indices (1, 1) in Hilbert spaces. PhD Thesis, University of Waterloo, Waterloo, 2011.

[8] Y. Hao and A. Kempf. Filtering, sampling, and reconstruction with time-varying bandwidths. IEEE Signal Processing Letters, 17(3):241–244, 2010.

[9] D. Wei. Sampling based on local bandwidth. Master thesis, Massachusetts Institute of Technology, Boston, 2006.

[10] D. Rzepka, D. Koscielnik, and M. Miskowicz. Recovery of varying-bandwidth signal for level-crossing sampling. In Proceedings of 19th Conference of Emerging Technology and Factory Automation (ETFA), 2014.

[11] M. Miskowicz. Efficiency of level-crossing sampling for bandlimited Gaussian random processes. In 2006 IEEE International Workshop on Factory Communication Systems, 2006.

[12] R. J. Marks. Handbook of Fourier Analysis & Its Applications. Oxford University Press, London, 2009.

[13] F. J. Beutler. Error-free recovery of signals from irregularly spaced samples. SIAM Review, 8(3):328–335, 1966.

[14] D. Chen and J. Allebach. Analysis of error in reconstruction of two-dimensional signals from irregularly spaced samples. IEEE Transactions on Acoustics, Speech and Signal Processing, 35(2):173–180, 1987.

[15] S. O. Rice. Mathematical analysis of random noise. Bell System Technical Journal, 23(3):282–332, 1944.

[16] S. O. Rice. Mathematical analysis of random noise. Bell System Technical Journal, 24(1):46–156, 1945.

[17] G. Lindgren. Stationary Stochastic Processes: Theory and Applications. Chapman & Hall/CRC Texts in Statistical Science, Taylor & Francis, New York, USA, 2012.

[18] J. W. Mark and T. D. Todd. A nonuniform sampling approach to data compression. IEEE Transactions on Communications, 29(1):24–32, 1981.

[19] M. Miskowicz. Reducing communication by event-triggered sampling. In Event-Based Control and Signal Processing (M. Miskowicz, Ed.), pp. 37–58. CRC Press, Boca Raton, FL, 2015.

[20] M. Greitans, R. Shavelis, L. Fesquet, and T. Beyrouthy. Combined peak and level-crossing sampling scheme. In 9th International Conference on Sampling Theory and Applications SampTA 2011, 2011.

[21] I. Homjakovs, M. Hashimoto, T. Hirose, and T. Onoye. Signal-dependent analog-to-digital conversion based on MINIMAX sampling. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 96(2):459–468, 2013.

[22] D. Rzepka and M. Miskowicz. Recovery of varying-bandwidth signal from samples of its extrema. In Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), 2013.

[23] D. Rzepka, M. Miskowicz, A. Grybos, and D. Koscielnik. Recovery of bandlimited signal based on nonuniform derivative sampling. In Proceedings of 10th International Conference on Sampling Theory and Applications SampTA, 2013.

[24] M. D. Rawn. A stable nonuniform sampling expansion involving derivatives. IEEE Transactions on Information Theory, 35(6):1223–1227, 1989.

[25] H. Cramér. On the intersections between the trajectories of a normal stationary stochastic process and a high level. Arkiv för Matematik, 6(4):337–349, 1966.

[26] P. Diggle. A kernel method for smoothing point process data. Applied Statistics, 34:138–147, 1985.

[27] P. Diggle and J. S. Marron. Equivalence of smoothing parameter selectors in density and intensity estimation. Journal of the American Statistical Association, 83(403):793–800, 1988.

[28] H. Shimazaki and S. Shinomoto. Kernel bandwidth optimization in spike rate estimation. Journal of Computational Neuroscience, 29(1–2):171–182, 2010.

[29] M. P. Wand and M. C. Jones. Kernel Smoothing. Chapman and Hall/CRC Press, London, UK, 1995.

[30] R. A. Caruana, R. B. Searle, T. Heller, and S. I. Shupack. Fast algorithm for the resolution of spectra. Analytical Chemistry, 58(6):1162–1167, 1986.

[31] H. Guo. A simple algorithm for fitting a Gaussian function. IEEE Signal Processing Magazine, 28:134–137, 2011.

[32] R. J. Duffin and A. C. Schaeffer. A class of nonharmonic Fourier series. Transactions of the American Mathematical Society, 72:341–366, 1952.

[33] N. T. Thao. Event-based data acquisition and reconstruction—mathematical background. In Event-Based Control and Signal Processing (M. Miskowicz, Ed.), pp. 379–408. CRC Press, Boca Raton, FL, 2015.

[34] H. G. Feichtinger and K. Gröchenig. Theory and practice of irregular sampling. In Wavelets: Mathematics and Applications (J. J. Benedetto and M. W. Frazier, Eds.), pp. 305–363. CRC Press, Boca Raton, FL, 1994.

[35] T. Strohmer. Numerical analysis of the non-uniform sampling problem. Journal of Computational and Applied Mathematics, 122(1):297–316, 2000.

[36] J. Yen. On nonuniform sampling of bandwidth-limited signals. IRE Transactions on Circuit Theory, 3(4):251–257, 1956.

[37] H. Choi and D. C. Munson Jr. Stochastic formulation of bandlimited signal interpolation. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 47(1):82–85, 2000.
