
Evolving Connectionist Systems
The Knowledge Engineering Approach
Second edition

Nikola Kasabov

Professor Nikola Kasabov, PhD, FRSNZ

Director and Chief Scientist
Knowledge Engineering and Discovery Research Institute
Auckland University of Technology
Auckland, New Zealand

British Library Cataloguing in Publication Data

A catalogue record for this book is available from the British Library
Library of Congress Control Number: 2006940182
ISBN 978-1-84628-345-1

e-ISBN 978-1-84628-347-5

Printed on acid-free paper

© Springer-Verlag London Limited 2007
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced,
stored or transmitted, in any form or by any means, with the prior permission in writing of the
publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued
by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be
sent to the publishers.
The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of
a specific statement, that such names are exempt from the relevant laws and regulations and therefore
free for general use.
The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or
omissions that may be made.
Springer Science+Business Media

To my daughters Assia and Kapka,

for all their love, understanding, and support throughout my
academic career.

Foreword I

This second edition provides fully integrated, up-to-date support for knowledge-based computing in a broad range of applications by students and professionals.
Part I retains well-organized introductory chapters on modeling dynamics in
evolving connectionist systems adapted through supervised, unsupervised, and
reinforcement learning; it includes chapters on spiking neural networks, neuro-fuzzy inference systems, and evolutionary computation. Part II develops promising
new and expanded applications in gene regulation, DNA-protein interactions,
adaptive speech recognition, multimodal signal processing, and adaptive robotics.
Emphasis is placed on multilayered adaptive systems in which the rules that
govern parameter optimization are themselves subject to evolutionary pressures
and modification in accord with strategic reformulation of pathways to problem
solving. The human brain is treated both as a source of new concepts to be
incorporated into knowledge engineering and as a prime target for application
of novel techniques for analyzing and modeling brain data. Brains are material
systems that operate simultaneously at all levels of scientific study, from quantum
fields through cellular networks and neural populations to the dynamics of social,
economic, and ecological systems. All these levels find place and orientation in this
succinct presentation of universal tools, backed by an extended glossary, selected
appendices, and indices referenced to Internet resources.
Professor Walter J. Freeman
University of California at Berkeley


Foreword II

This book is an important update on the first edition, taking account of exciting
new developments in adaptive evolving systems. Evolving processes, through both
individual development and population/generation evolution, inexorably led the
human race to our supreme intelligence and our superior position in the animal
kingdom. Nikola Kasabov has captured the essence of this powerful natural tool
by adding various forms of adaptivity implemented by neural networks. The new
edition of the book brings the treatment of the first edition to the cutting edge of
modern research. At the same time Kasabov has kept the treatment to the two-part format of generic level and applications, with demonstrations showing how
important problems can be handled by his techniques such as in gene and protein
interactions and in brain imaging and modelling, as well as in other exciting areas.
In all, this new edition is a very important book, and Nik should be congratulated
on letting his enthusiasm shine through, but at the same time keeping his expertise
as the ultimate guide. A must for all in the field!
Professor John G. Taylor
King's College London



Preface

This second edition of the book reflects the new developments in the area of computational intelligence, and especially in adaptive evolving systems. Even though the structure of the book is preserved and much of the material from the first edition is also included here, there are new topics that make the new edition a contemporary and advanced reading in the field of computational intelligence.
In terms of generic methods, these are: spiking neural networks, transductive
neuro-fuzzy inference methods, personalised modelling, methods for integrating
data and models, quantum inspired neural networks, neuro-genetic models, and
others. In terms of applications, these are: gene-regulatory network modelling,
computational neuro-genetic modelling, adaptive robots, and modelling adaptive
socioeconomic and ecological systems.
The emphasis in the second edition is more on the following aspects.
1. Evolving intelligent systems: systems that apply evolving rules to evolve their
structure, functionality, and knowledge through incremental adaptive learning
and interaction, where the evolving rules may change as well
2. The knowledge discovery aspect of computational modelling across application
areas such as bioinformatics, brain study, engineering, environment, and social
sciences, i.e. the discovery of the evolving rules that drive the processes and the
resulting patterns in the collected data
3. The interaction between different levels of information processing in one
system, e.g. parameters (genes), internal clusters, and output data (behaviour)
4. Challenges for the future development in the field of computational intelligence (e.g. personalised modelling, quantum inspired neuro-computing, gene
regulatory network discovery)
The book covers contemporary computational modelling and machine learning
techniques and their applications, where at the core of the models are artificial
neural networks and hybrid models (e.g. neuro-fuzzy) inspired by the evolving
nature of processes in the brain, in proteins and genes in a cell, and by some
quantum principles. The book also covers population-generation-based methods
for optimisation of model parameters and features (variables), but the emphasis
is on the learning and the development of the structure and the functionality of
an individual model. In this respect, the book has a much wider scope than some
earlier work on evolutionary (population-generation) based training of artificial
neural networks, there also called 'evolving neural networks'.



The second edition of the book includes new applications to gene and protein
interaction modelling, brain data analysis and brain model creation, computational neuro-genetic modelling, adaptive speech, image and multimodal recognition, language modelling, adaptive robotics, and modelling dynamic financial,
socioeconomic, and ecological processes.
Overall, the book is more about problem solving and intelligent systems than about mathematical proofs of theoretical models. Additional resources for practical model creation, model validation, and problem solving, related to topics presented in some parts of the book, are available online.
Evolving Connectionist Systems is aimed at students and practitioners interested
in developing and using intelligent computational models and systems to solve
challenging real-world problems in computer science, engineering, bioinformatics,
and neuro-informatics. The book challenges scientists and practitioners with open
questions about future directions for information sciences inspired by nature.
The book argues for a further development of humanlike and human-oriented
information-processing methods and systems. In this context, humanlike means
that some principles from the brain and genetics are used for the creation of new
computational methods, whereas human-oriented means that these methods can
be used to discover and understand more about the functioning of the brain and
the genes, about speech and language, about image and vision, and about our
society and our environment.
It is likely that future progress in many important areas of science (e.g. bioinformatics, brain science, information science, physics, communication engineering,
and social sciences) can be achieved only if the areas of artificial intelligence,
brain science, bioinformatics, and quantum physics share their methods and their
knowledge. This book offers some steps in this direction. This book introduces
and applies similar or identical fundamental information-processing methods to
different domain areas. In this respect the conception of this work was inspired
by the wonderful book by Douglas Hofstadter, Gödel, Escher, Bach: An Eternal
Golden Braid (1979), and by the remarkable Handbook of Brain Theory and Neural
Networks, edited by Michael Arbib (1995, 2002).
The book consists of two parts. The first part presents generic methods and
techniques. The second part presents specific techniques and applications in bioinformatics, neuro-informatics, speech and image recognition, robotics, finance,
economics, and ecology. The last chapter presents a new promising direction:
quantum inspired evolving intelligent systems.
Each chapter of the book stands on its own. In order to understand the details of
the methods and the applications, one may need to refer to some relevant entries
in the extended glossary, or to a textbook on neural networks, fuzzy systems, and
knowledge engineering (see, for example, Kasabov (1996)). The glossary contains
brief descriptions and explanations of many of the basic notions of information
science, statistical analysis, artificial intelligence, biological neurons, brain organization, artificial neural networks, molecular biology, bioinformatics, evolutionary
computation, etc.
This work was partially supported by the research grant AUTX02001
"Connectionist-based intelligent information systems", funded by the New Zealand



Foundation for Research, Science, and Technology and the New Economy Research
Fund, and also by the Auckland University of Technology.
I am grateful for the support and encouragement I received from the editorial
team of Springer-Verlag, London, especially from Professor John G. Taylor and
the assistant editor Helen Desmond.
There are a number of people whom I would like to thank for their participation
in some sections of the book. These are several colleagues, research associates,
and postgraduate students I have worked with at the Knowledge Engineering
and Discovery Research Institute in Auckland, New Zealand, in the period from
2002 till 2007: Dr. Qun Song, Mrs. Joyce D'Mello, Dr. Zeke S. Chan, Dr. Lubica
Benuskova, Dr. Paul S. Pang, Dr. Liang Goh, Dr. Mark Laws, Dr. Richard Kilgour,
Akbar Ghobakhlou, Simei Wysosky, Vishal Jain, Tian-Min Ma (Maggie), Dr. Mark
Marshall, Dougal Greer, Peter Hwang, David Zhang, Dr. Matthias Futschik, Dr.
Mike Watts, Nisha Mohan, Dr. Ilkka Havukkala, Dr. Sue Worner, Snjezana Soltic,
Dr. Da Deng, Dr. Brendon Woodford, Dr. John R. Taylor, and Prof. R. Kozma. I would
like to thank again Mrs. Kirsty Richards who helped me with the first edition of the
book, as most of the figures from the first edition are included in this second one.
The second edition became possible due to the time I had during my sabbatical
leave in 2005/06 as a Guest Professor funded by the German DAAD (Deutscher
Akademischer Austauschdienst) organisation for exchange of academics, and
hosted by Professor Andreas Koenig and his group at the TU Kaiserslautern.
I have presented parts of the book at conferences and I appreciate the
discussions I had with a number of colleagues. Among them are Walter Freeman
and Lotfi Zadeh, both from the University of California at Berkeley; Takeshi
Yamakawa, Kyushu Institute of Technology; John G. Taylor, King's College,
London; Cees van Leeuwen and his team, RIKEN, Japan; Michael Arbib, University
of Southern California; Dimiter Dimitrov, National Cancer Institute in Frederick,
Maryland; Jaap van den Herik and Eric Postma, University of Maastricht; Wlodek
Duch, Copernicus University; Germano Resconi, Catholic University in Brescia;
Alessandro Villa, University of Lausanne; Peter Erdi, Budapest; Max Bremer,
University of Plymouth; Bill Howell, National Research Council of Canada;
Mario Fedrizzi, University of Trento in Italy; Plamen Angelov, University of
Lancaster, UK; Dimitar Filev, Ford; Bogdan Gabrys, University of Bournemouth;
Dr. G. Coghill, University of Auckland; Dr. V. Brusic, Harvard; Prof. Jim Wright,
Auckland; and many more.
I remember a comment by Walter Freeman, when I first presented the concept
of evolving connectionist systems (ECOS) at the Iizuka '98 conference in Japan:
"Throw the chemicals and let the system grow; is that what you are talking
about, Nik?" After the same presentation at Iizuka '98, Robert Hecht-Nielsen made
the following comment: "This is a powerful method! Why don't you apply it to
challenging real-world problems?" Later on, in November 2001, Walter Freeman
made another comment at the ICONIP conference in Shanghai: "Integrating genetic
level and neuronal level in brain modelling and intelligent machines is a very
important and a promising approach, but how to do that is the big question."
Michael Arbib said in 2004: "If you include genetic information in your models,
you may need to include atomic information as well."
Max Bremer commented after my talk in Cambridge, at the 25th anniversary of
the AI SIG of the BCS in December 2005: "A good keynote speech is the one that
makes at least half of the audience abandon their previous research topics and
start researching on the problems and topics presented by the speaker."
All those comments encouraged me and at the same time challenged me in
my research. I hope that some readers would follow on some of the techniques,
applications, and future directions presented in the book, and later develop their
own methods and systems, as the book offers many open questions and directions
for further research in the area of evolving intelligent systems (EIS).
Nikola Kasabov
23 May 2007


Contents

Foreword I by Walter J. Freeman . . . . . . . . . . . . . . . . . . . . . . . . . vii

Foreword II by John G. Taylor . . . . . . . . . . . . . . . . . . . . . . . . . . .


Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Part I

Evolving Connectionist Methods

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Everything Is Evolving, but What Are the Evolving Rules? . . . .
Evolving Intelligent Systems (EIS) and Evolving Connectionist
Systems (ECOS) . . . . . . . . . . . . . . . . . . . . . . . . . .
Biological Inspirations for EIS and ECOS . . . . . . . . . . . . . . 11
About the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Feature Selection, Model Creation, and Model Validation . . . . . . . .

Feature Selection and Feature Evaluation . . . . . . . . . . . . . .
Incremental Feature Selection . . . . . . . . . . . . . . . . . . . .
Machine Learning Methods – A Classification Scheme . . . . . .
Probability and Information Measure. Bayesian Classifiers,
Hidden Markov Models. Multiple Linear Regression . . . . . .
Support Vector Machines (SVM). . . . . . . . . . . . . . . . . . .
Inductive Versus Transductive Learning and Reasoning.
Global, Local, and Personalised Modelling . . . . . . . . . . .
Model Validation . . . . . . . . . . . . . . . . . . . . . . . . . . .
Exercise. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Summary and Open Problems . . . . . . . . . . . . . . . . . . . .
1.10 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Evolving Connectionist Methods for Unsupervised Learning . . . . . . .

Unsupervised Learning from Data. Distance Measure . . . . . . .
Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Evolving Clustering Method (ECM) . . . . . . . . . . . . . . . . .
Vector Quantisation. SOM and ESOM . . . . . . . . . . . . . . . .
Prototype Learning. ART . . . . . . . . . . . . . . . . . . . . . . .









Evolving Connectionist Methods for Supervised Learning . . . . . . . .

Connectionist Supervised Learning Methods . . . . . . . . . . . .
Simple Evolving Connectionist Methods . . . . . . . . . . . . . .
Evolving Fuzzy Neural Networks (EFuNN) . . . . . . . . . . . . .
Knowledge Manipulation in Evolving Fuzzy Neural
Networks (EFuNNs) – Rule Insertion, Rule Extraction,
Rule Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . .
Exercise. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Summary and Open Questions . . . . . . . . . . . . . . . . . . . .
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . .





Evolving Neuro-Fuzzy Inference Models . . . . . . . . . . . . . . . . . . .

Knowledge-Based Neural Networks . . . . . . . . . . . . . . . . .
Hybrid Neuro-Fuzzy Inference System (HyFIS). . . . . . . . . . .
Dynamic Evolving Neuro-Fuzzy Inference
Systems (DENFIS) . . . . . . . . . . . . . . . . . . . . . . . . .
Transductive Neuro-Fuzzy Inference Models . . . . . . . . . . . .
Other Evolving Fuzzy Rule-Based Connectionist Systems . . . . .
Exercise. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Summary and Open Problems . . . . . . . . . . . . . . . . . . . .
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Population-Generation-Based Methods: Evolutionary Computation . . .

A Brief Introduction to EC . . . . . . . . . . . . . . . . . . . . . .
Genetic Algorithms and Evolutionary Strategies . . . . . . . . . .
Traditional Use of EC for Learning and Optimisation in ANN . .
EC for Parameter and Feature Optimisation of ECOS . . . . . . .
EC for Feature and Model Parameter Optimisation
of Transductive Personalised (Nearest Neighbour) Models. . .
Particle Swarm Intelligence . . . . . . . . . . . . . . . . . . . . . .
Artificial Life Systems (ALife) . . . . . . . . . . . . . . . . . . . .
Exercise. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Summary and Open Questions . . . . . . . . . . . . . . . . . . . .
6.10 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Generic Applications of Unsupervised Learning Methods .

Exercise. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Summary and Open Problems . . . . . . . . . . . . . . . .
Further Reading . . . . . . . . . . . . . . . . . . . . . . . .

Inspired Evolving Connectionist Models

State-Based ANN . . . . . . . . . . . . .
Reinforcement Learning . . . . . . . . .
Evolving Spiking Neural Networks. . . .
Summary and Open Questions . . . . . .
Further Reading . . . . . . . . . . . . . .












Evolving Integrated Multimodel Systems . . . . . . . . . . . . . . . . . .

Evolving Multimodel Systems . . . . . . . . . . . . . . . . . . . .
ECOS for Adaptive Incremental Data and Model Integration . . .
Integrating Kernel Functions and Regression Formulas in
Knowledge-Based ANN . . . . . . . . . . . . . . . . . . . . . .
Ensemble Learning Methods for ECOS . . . . . . . . . . . . . . .
Integrating ECOS and Evolving Ontologies . . . . . . . . . . . . .
Conclusion and Open Questions . . . . . . . . . . . . . . . . . . .
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Part II



Evolving Intelligent Systems


Adaptive Modelling and Knowledge Discovery in Bioinformatics . . . .

Bioinformatics: Information Growth, and Emergence
of Knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . .
DNA and RNA Sequence Data Analysis and
Knowledge Discovery . . . . . . . . . . . . . . . . . . . . . . .
Gene Expression Data Analysis, Rule Extraction, and
Disease Profiling . . . . . . . . . . . . . . . . . . . . . . . . . .
Clustering of Time-Course Gene Expression Data . . . . . . . . .
Protein Structure Prediction . . . . . . . . . . . . . . . . . . . . .
Gene Regulatory Networks and the System Biology Approach . .
Summary and Open Problems . . . . . . . . . . . . . . . . . . . .
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Dynamic Modelling of Brain Functions and Cognitive Processes . . . .

Evolving Structures and Functions in the Brain and
Their Modelling . . . . . . . . . . . . . . . . . . . . . . . . . .
Auditory, Visual, and Olfactory Information Processing and
Their Modelling . . . . . . . . . . . . . . . . . . . . . . . . . .
Adaptive Modelling of Brain States Based on EEG and
fMRI Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Computational Neuro-Genetic Modelling (CNGM) . . . . . . . . .
Brain-Gene Ontology . . . . . . . . . . . . . . . . . . . . . . . . .
Summary and Open Problems . . . . . . . . . . . . . . . . . . . .
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Modelling the Emergence of Acoustic Segments in Spoken Languages .

10.1 Introduction to the Issues of Learning Spoken Languages. . . . .
10.2 The Dilemma "Innateness Versus Learning" or "Nature Versus
Nurture" Revisited . . . . . . . . . . . . . . . . . . . . . . . . .
10.3 ECOS for Modelling the Emergence of Phones and Phonemes . .
10.4 Modelling Evolving Bilingual Systems . . . . . . . . . . . . . . . .
10.5 Summary and Open Problems . . . . . . . . . . . . . . . . . . . .
10.6 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . .





Evolving Intelligent Systems for Adaptive Speech Recognition . . . . . . 325

11.1 Introduction to Adaptive Speech Recognition . . . . . . . . . . . 325
11.2 Speech Signal Analysis and Speech Feature Selection . . . . . . . 329







Adaptive Phoneme-Based Speech Recognition . . . . . . . . . . .

Adaptive Whole Word and Phrase Recognition . . . . . . . . . .
Adaptive, Spoken Language Human-Computer
Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Exercise. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Summary and Open Problems . . . . . . . . . . . . . . . . . . . .
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Evolving Intelligent Systems for Adaptive Image Processing . . . . . . .

12.1 Image Analysis and Feature Selection . . . . . . . . . . . . . . . .
12.2 Online Colour Quantisation . . . . . . . . . . . . . . . . . . . . .
12.3 Adaptive Image Classification . . . . . . . . . . . . . . . . . . . .
12.4 Incremental Face Membership Authentication and Face
Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.5 Online Video-Camera Operation Recognition . . . . . . . . . . .
12.6 Exercise. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.7 Summary and Open Problems . . . . . . . . . . . . . . . . . . . .
12.8 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Evolving Intelligent Systems for Adaptive Multimodal
Information Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . .
13.1 Multimodal Information Processing . . . . . . . . . . . . . . . . .
13.2 Adaptive, Integrated, Auditory and Visual Information
Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
13.3 Adaptive Person Identification Based on Integrated Auditory
and Visual Information . . . . . . . . . . . . . . . . . . . . . .
13.4 Person Verification Based on Auditory and Visual
Information. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
13.5 Summary and Open Problems . . . . . . . . . . . . . . . . . . . .
13.6 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . .



Evolving Intelligent Systems for Robotics and Decision Support . . . .

14.1 Adaptive Learning Robots . . . . . . . . . . . . . . . . . . . . . .
14.2 Modelling of Evolving Financial and Socioeconomic
Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
14.3 Adaptive Environmental Risk of Event Evaluation . . . . . . . . .
14.4 Summary and Open Questions . . . . . . . . . . . . . . . . . . . .
14.5 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . .




What Is Next: Quantum Inspired Evolving Intelligent Systems? . . . . .

Why Quantum Inspired EIS? . . . . . . . . . . . . . . . . . . . . .
Quantum Information Processing . . . . . . . . . . . . . . . . . .
Quantum Inspired Evolutionary Optimisation Techniques . . . .
Quantum Inspired Connectionist Systems. . . . . . . . . . . . . .
Linking Quantum to Neuro-Genetic Information Processing:
Is This The Challenge For the Future? . . . . . . . . . . . . . .
15.6 Summary and Open Questions . . . . . . . . . . . . . . . . . . . .
15.7 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . .





Appendix A. A Sample Program in MATLAB for Time-Series Analysis . . . . 405

Appendix B. A Sample MATLAB Program to Record Speech
and to Transform It into FFT Coefficients as Features . . . . . . . . . . 407
Appendix C. A Sample MATLAB Program for Image Analysis and
Feature Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
Appendix D. Macroeconomic Data Used in Section 14.2 (Chapter 14) . . . . 415
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
Extended Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453


Abstract

This book covers contemporary computational modelling and machine learning
techniques and their applications, where at the core of the models are artificial
neural networks and hybrid models (e.g. neuro-fuzzy) that evolve to develop their
structure and functionality through incremental adaptive learning. This is inspired
by the evolving nature of processes in the brain, the proteins, and the genes in a
cell, and by some quantum principles. The book also covers population/generation-based
optimisation of model parameters and features (variables), but the emphasis
is on the learning and the development of the structure and the functionality of
an individual model. In this respect, the book has a much wider scope than some
earlier work on evolutionary (population/generation)-based training of artificial
neural networks, called evolving neural networks.
This second edition of the book includes new methods, such as online
incremental feature selection, spiking neural networks, transductive neuro-fuzzy
inference, adaptive data and model integration, cellular automata and artificial life
systems, particle swarm optimisation, ensembles of evolving systems, and quantum
inspired neural networks.
New applications are included in this book, to gene and protein interaction
modelling, brain data analysis and brain model creation, computational neuro-genetic
modelling, adaptive speech, image and multimodal recognition, language
modelling, adaptive robotics, modelling dynamic financial and socioeconomic
structures, and ecological and environmental event prediction. The main emphasis
here is on adaptive modelling and knowledge discovery from complex data.
A new feature of the book is the attempt to connect different structural and
functional elements in a single computational model. It looks for inspiration at
some functional relationships in natural systems, such as genetic and brain activity.
Overall, this book is more about problem solving and intelligent systems than
about mathematical proofs of theoretical models. Additional resources for practical
model creation, model validation, and problem solving, related to topics presented
in some parts of the book, are available online.
Evolving Connectionist Systems is aimed at students and practitioners interested
in developing and using intelligent computational models and systems to solve
challenging real-world problems in computer science, engineering, bioinformatics,
and neuro-informatics. The book challenges scientists with open questions about
future directions of information sciences.


Evolving Connectionist Methods
This part presents some existing connectionist and hybrid techniques for
adaptive learning and knowledge discovery and also introduces some new
evolving connectionist techniques. Three types of evolving adaptive methods
are presented, namely unsupervised, supervised, and reinforcement learning.
They include: evolving clustering, evolving self-organising maps, evolving fuzzy
neural networks, spiking neural networks, knowledge manipulation, and structure
optimisation with the use of evolutionary computation. The last chapter of
this part, Chapter 7, suggests methods for data, information, and knowledge
integration into multimodel adaptive systems and also methods for evolving
ensembles of ECOS. The extended glossary at the end of the book can be used
for clarification of some of the concepts used.

Modelling and Knowledge Discovery from Evolving
Information Processes

This introductory chapter presents the main concepts used in the book and gives a
justification for the development of this field. The emphasis is on a process/system
evolvability based on evolving rules (laws). To model such processes, to extract
the rules that drive the evolving processes, and to trace how they change over
time are among the main objectives of the knowledge engineering approach that
we take in this book. The introductory chapter consists of the following sections.

Everything is evolving, but what are the evolving rules?

Evolving intelligent systems (EIS) and evolving connectionist systems (ECOS)
Biological inspirations for EIS and ECOS
About the book
Further reading


Everything Is Evolving, but What Are

the Evolving Rules?

According to the Concise Oxford English Dictionary (1983), evolving means
"revealing, developing". It also means "unfolding, changing". We define an evolving
process as a process that is developing, changing over time in a continuous manner.
Such a process may also interact with other processes in the environment. It may
not be possible to determine in advance the course of interaction, though. For
example, there may be more or fewer variables related to a process at a future
time than at the time when the process started.
Evolving processes are difficult to model because some of their evolving rules
(laws) may not be known a priori; they may dynamically change due to unexpected
perturbations, and therefore they are not strictly predictable in the longer term.
Thus, modelling of such processes is a challenging task with a lot of practical
applications in life sciences and engineering.
When a real process is evolving, a modelling system needs to be able to trace
the dynamics of the process and to adapt to changes in the process. For example,
a speech recognition system has to be able to adapt to various new accents, and
to learn new languages incrementally. A system that models cognitive tasks of

Evolving Connectionist Systems

the human brain needs to be adaptive, as all cognitive processes are evolving by
nature. (We never stop learning!) In bioinformatics, a gene expression modelling
system has to be able to adapt to new information that would define how a gene
could become inhibited by another gene, the latter being triggered by a third gene,
etc. There are an enormous number of tasks from life sciences where the processes
evolve over time.
It would not be an overstatement to say that everything in nature evolves. But
what are the rules, the laws that drive these processes, the evolving rules? And
how do they change over time? If we know these rules, we can make a model that
can evolve in a similar manner as the real evolving process, and use this model
to make predictions and to understand the real processes. But if we do not know
these rules, we can try to discover them from data collected from this process
using the knowledge engineering approach presented in this book.
The term evolving is used here in a broader sense than the term evolutionary.
The latter is related to a population of individual systems traced over generations
(Charles Darwin; Holland, 1992), whereas the former, as it is used in this book,
is mainly concerned with the development of the structure and functionality of
an individual system during its lifetime (Kasabov, 1998a; Weng et al., 2001). An
evolutionary (population/generation) optimisation of the system can be applied
as well.
The most obvious example of an evolving process is life. Life is defined in
the Concise Oxford English Dictionary (1983) as a state of functional activity
and continual change peculiar to organized matter, and especially to the portion
of it constituting an animal or plant before death, animate existence, being
alive. Continual change, along with certain stability, is what characterizes life.
Modelling living systems requires that the continuous changes are represented in
the model; i.e. the model adapts in a lifelong mode and at the same time preserves
some features and principles that are characteristic of the process. The stability-plasticity dilemma is a well-known principle of life that is also widely used in connectionist computational models (Grossberg, 1969, 1982).
In a living system, evolving processes are observed at different levels (Fig. I.1).

6. Evolutionary (population/generation) processes

5. Brain cognitive processes
4. System information processing (e.g. neural ensemble)
3. Information processing in a cell (neuron)
2. Molecular information processing (genes, proteins)
1. Quantum information processing

Fig. I.1 Six levels of evolving processes in a higher-order living organism: evolution, cognitive brain processes,
brain functions in neural networks, single neuron functions, molecular processes, and quantum processes.


At the quantum level, particles are in a complex evolving state all the time, being in a superposition of several locations at the same time, which is defined by probabilities. General evolving rules are defined by several principles, such as entanglement and superposition (see Chapter 15).
At a molecular level, RNA and protein molecules, for example, evolve and
interact in a continuous way based on the DNA information and on the
environment. The central dogma of molecular biology constitutes a general
evolving rule, but what are the specific rules for different species and individuals?
The area of science that deals with the information processing and data manipulation at this level is bioinformatics. Modelling evolving processes at the molecular
level is discussed in Chapter 8.
At the cellular level (e.g. a neuronal cell) all the metabolic processes, the cell
growing, cell division, etc., are evolving processes. Modelling evolving processes
in cells and neurons is discussed in Chapter 8.
At the level of cell ensembles, or at a neural network level, an ensemble of
cells (neurons) operates in concert, defining the function of the ensemble or the
network through learning, for instance, perception of sound, perception of an
image, or learning languages. An example of a general evolving rule is the Hebbian
learning rule (Hebb, 1949); see Chapter 9.
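As a minimal illustration of the Hebbian rule (a sketch, not code from the book), the weight of a connection can be modelled as growing in proportion to the correlated activity of its pre- and postsynaptic neurons; the learning rate eta and the toy activation values below are illustrative assumptions:

```python
# Minimal sketch of the Hebbian learning rule: delta_w = eta * x * y, where
# x and y are the pre- and postsynaptic activations. The learning rate eta
# and the activations are illustrative values only.

def hebbian_update(w, x, y, eta=0.1):
    """Return the connection weight after one Hebbian update step."""
    return w + eta * x * y

w = 0.0
for _ in range(5):              # repeated co-activation (x = y = 1)
    w = hebbian_update(w, 1.0, 1.0)
print(round(w, 2))              # 0.5: the co-activated connection strengthens
```

A connection whose neurons do not fire together (x = 0 or y = 0) is left unchanged, which is the essence of "neurons that fire together wire together".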
In the human brain, complex dynamic interactions between groups of neurons
can be observed when certain cognitive functions are performed, e.g. speech and
language learning, visual pattern recognition, reasoning, and decision making.
Modelling such processes is presented in Chapters 9 and 10.
At the level of population of individuals, species evolve through evolution. A
biological system evolves its structure and functionality through both lifelong
learning of an individual and the evolution of populations of many such individuals
(Charles Darwin; Holland, 1992). In other words, an individual is a result of
the evolution of many generations of populations, as well as a result of its own
developmental lifelong learning processes. The Mendelian and Darwinian rules of
evolution have inspired the creation of computational modelling techniques called
evolutionary computation, EC (Holland, 1992; Goldberg, 1989). EC is discussed in
Chapter 6, mainly from the point of view of optimisation of some parameters of
an evolving system.
All processes in Fig. I.1 are evolving. Everything is evolving, living organisms more so than nonliving matter, but what are the evolving rules, the laws that govern these processes? Are there any common evolving rules for every material item and for every living organism, along with their specific evolving rules? And what are the specific rules? Do these rules change over time, i.e. do they evolve as well?
An evolving process, characterised by its evolving governing rules, manifests
itself in a certain way and produces data that in many cases can be measured.
Through analysis of these data, one can extract relationship rules that describe the
data, but do they describe the evolving rules of the process as well?
Processes at different levels from Fig. I.1 are characterised by general characteristics, such as frequency spectrum, energy, information, and interaction, as explained below.


1. Frequency spectrum
Frequency, denoted F, is defined as the number of repetitions of a signal or event over a period of time T (seconds, minutes, centuries). Some processes have stable
frequencies, but others change their frequencies over time. Different processes
from Fig. I.1 are characterised by different frequencies, defined by their physical
parameters. Usually, a process is characterised by a spectrum of frequencies.
Different frequency spectra characterise brain oscillations (e.g. delta waves,
Chapter 9), speech signals (Chapter 10), image signals (Chapter 11), or quantum
processes (Chapter 15).
2. Energy
Energy is a major characteristic of any object and organism. Albert Einstein's most celebrated formula defines energy E as depending on the mass of the object m and the speed of light c:

E = mc²


The energy of a protein, for example, depends not only on the DNA sequence that is translated into this protein, but also on the 3D shape of the protein and on external factors.

3. Information
Generally speaking, information is a report, a communication, a measure, a representation of news, events, facts, knowledge not known earlier. This is a characteristic that can be defined in different ways. One of them is entropy; see Chapter 1.
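As a small illustration of the entropy measure mentioned here (a sketch, not code from the book), Shannon's entropy of a discrete distribution is H = −Σ p·log2(p); the toy distributions below show that a fair coin carries one bit of information, whereas a certain event carries none:

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))        # 1.0: a fair coin carries one bit
print(entropy([0.25] * 4))        # 2.0: four equally likely outcomes, two bits
# A certain event (p = 1) carries no new information: entropy([1.0]) == 0
```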
4. Interaction, connection with other elements of the system (e.g. objects, particles)
There are many interactions within each of the six levels from Fig. I.1 and across
these levels. Interactions are what make a living organism a complex one, and that
is also a challenge for computational modelling. For example, there are complex
interactions between genes in a genome, and between proteins and DNA. There
are complex interactions between the genes and the functioning of each neuron, a
neural network, and the whole brain. Abnormalities in some of these interactions are known to cause brain diseases, and many of these interactions are still unknown at present (see the section on computational neuro-genetic modelling in Chapter 9).
An example of interactions between genes and neuronal functions is the
observed dependence between long-term potentiation (learning) in the synapses
and the expression of the immediate early genes and their corresponding proteins
such as Zif/268 (Abraham et al., 1993). Genetic reasons for several brain diseases
have been already discovered (see Chapter 9).
Generally speaking, neurons from different parts of the brain, associated
with different functions, such as memory, learning, control, hearing, and vision,
function in a similar way. Their functioning is defined by evolving rules and
factors, one of them being the level of neuro-transmitters. These factors are


controlled at a genetic level. There are genes that are known to regulate the level
of neuro-transmitters for different types of neurons from different areas of the
brain (RIKEN, 2001). The functioning of these genes and the proteins produced
can be controlled through nutrition and drugs. This is a general principle that can
be exploited for different models of the processes from Fig. I.1 and for different
systems performing different tasks, e.g. memory and learning; see Benuskova and
Kasabov (2007). We refer to the above in the book as neuro-genetic interactions
(Chapter 9).
Based on the evolving rules, an evolving process may manifest different types of behaviour over time:
Random: There is no rule that governs the process in time and the process is
not predictable.
Chaotic: The process is predictable but only in a short time ahead, as the process
at a time moment depends on the process at previous time moments via a
nonlinear function.
Quasi-periodic: The process is predictable subject to an error. The same rules
apply over time, but slightly modified each time.
Periodic: The process repeats the same patterns of behaviour over time and is
fully predictable (there are fixed rules that govern the process and the rules do
not change over time).
Many complex processes in engineering, social sciences, physics, mathematics,
economics, and other sciences are evolving by nature. Some dynamic time series in
nature manifest chaotic behaviour; i.e. there are some vague patterns of repetition
over time, and the time series are approximately predictable in the near future,
but not in the long run (Gleick, 1987; Barndorff-Nielsen et al., 1993; Hoppensteadt,
1989; McCauley, 1994). Chaotic processes are usually described by mathematical
equations that use some parameters to evaluate the next state of the process from
its previous states. Simple formulas may describe a very complicated behaviour
over time: e.g. a formula that describes the fish population growth F(t+1) based on the current fish population F(t) and a parameter g (Gleick, 1987):

F(t+1) = 4g F(t) (1 - F(t))

When g > 0.89, the function becomes chaotic.
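The chaotic behaviour of this map is easy to reproduce; the Python sketch below (an illustration, with arbitrarily chosen starting values) iterates the formula and shows the sensitivity to initial conditions that makes such processes predictable only a short time ahead:

```python
def logistic_step(f, g):
    """One iteration of the fish-population map F(t+1) = 4g F(t) (1 - F(t))."""
    return 4 * g * f * (1 - f)

# In the chaotic regime (g > 0.89) two populations that start almost
# identically soon diverge.
g = 0.95                       # illustrative value in the chaotic regime
a, b = 0.3, 0.3 + 1e-6         # two almost identical starting populations
max_divergence = 0.0
for t in range(100):
    a, b = logistic_step(a, g), logistic_step(b, g)
    max_divergence = max(max_divergence, abs(a - b))
print(max_divergence > 0.01)   # True: the tiny initial difference is amplified
```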

A chaotic process is defined by evolving rules, so that the process lies on the
continuum of orderness somewhere between random processes (not predictable
at all) and quasi-periodic processes (predictable in a longer timeframe, but only to
a certain degree). Modelling a chaotic process in reality, especially if the process changes its rules over time, is a task for an adaptive system that captures the changes in the process in time, e.g. changes in the value of the parameter g in the formula above.
All problems from engineering, economics, and social sciences that are characterised by evolving processes require continuously adapting models. A speech recognition system, an image recognition system, a multimodal information processing system, a stock prediction system, an intelligent robot, or a system that predicts the emergence of insects based on climate, for example, should always adjust its structure and functionality for a better performance over time, which is the topic of Part II of the book: evolving intelligent systems.



Evolving Intelligent Systems (EIS) and Evolving Connectionist Systems (ECOS)

Despite the successfully developed and used methods of computational intelligence

(CI), such as artificial neural networks (ANN), fuzzy systems (FS), evolutionary
computation, hybrid systems, and other methods and techniques for adaptive
machine learning, there are a number of problems while applying these techniques
to complex evolving processes:
1. Difficulty in preselecting the system's architecture: Usually a CI model has a fixed architecture (e.g. a fixed number of neurons and connections). This makes
it difficult for the system to adapt to new data of unknown distribution. A
fixed architecture would definitely prevent the ANN from learning in a lifelong
learning mode.
2. Catastrophic forgetting: The system would forget a significant amount of old
knowledge while learning from new data.
3. Excessive training time required: Training an ANN in a batch mode usually
requires many iterations of data propagation through the ANN structure. This
may not be acceptable for an adaptive online system, which would require fast adaptation.
4. Lack of knowledge representation facilities: Many of the existing CI architectures
capture statistical parameters during training, but do not facilitate extracting the
evolving rules in terms of linguistically meaningful information. This problem
is called the black box problem. It occurs when only limited information is learned from the data, and essential aspects that may be more appropriate and more useful for the future work of the system are missed forever.
To overcome the above problems, improved and new connectionist and hybrid
methods and techniques are required both in terms of learning algorithms and
system development.
Intelligence is seen by some authors as a set of features or fixed properties of the mind that are stable and static. According to this approach, intelligence is genetically defined, given rather than developed. Contrary to this view, intelligence is viewed by other authors as constant and continuous adaptation. Darwin's contemporary H. Spencer proposed in 1855 the law of intelligence, stating that 'the fundamental condition of vitality is that the internal state shall be continually adjusted to the external order' (Richardson, 1999, p. 14). Intelligence is 'the faculty of adapting oneself to circumstances', according to Alfred Binet and Theodore Simon, the authors of the first IQ test (see Newell and Simon (1972)). In Plotkyn (1994), intelligence is defined as the human capacity to acquire knowledge, to acquire a set of adaptations, and to achieve adaptation.
Knowledge representation, concept formation, reasoning, and adaptation are
obviously the main characteristics of intelligence upon which all authors agree
(Rosch and Lloyd, 1978; Smith and Medin, 1981). How these features can be
implemented and achieved in a computer model is the main objective of the area
of artificial intelligence (AI).
AI develops methods, tools, techniques, and systems that make possible the
implementation of intelligence in computer models. This is a soft definition of


AI, in contrast to the first, 'hard' definition of AI given by Alan Turing in 1950. According to the Turing test for AI, a person communicates in natural language with either another person or an artificial system behind a barrier. If the person cannot distinguish between the two (nor identify whether the interlocutor is male or female, as the system should be able to fool the human in this respect too), then the system behind the barrier can be considered an AI system. The Turing test points to an ultimate goal of AI, the understanding of concepts and language, but it gives no direction or criteria for developing useful AI systems.
In a general sense, information systems should help trace and understand the
dynamics of the modelled processes, automatically evolve rules, knowledge that
captures the essence of these processes, take a short cut while solving a problem
in a complex problem space, and improve their performance all the time. These
requirements define a subset of AI which is called here evolving intelligent systems
(EIS). The emphasis here is not on achieving the ultimate goal of AI, as defined
by Turing, but rather on creating systems that learn all the time, improve their
performance, develop a knowledge representation for the problem in hand, and
become more intelligent.
A constructivist working definition of EIS is given below. It emphasises the
dynamic and the knowledge-based structural and functional self-development of
a system.
EIS is an information system that develops its structure, functionality, and
knowledge in a continuous, self-organised, adaptive, and interactive way from
incoming information, possibly from many sources, and performs intelligent tasks
typical for humans (e.g. adaptive pattern recognition, concept formation, language
learning, intelligent control) thus improving its performance.
David Fogel (2002), in his highly entertaining and sophisticated book Blondie24: Playing at the Edge of AI, describes a case of EIS as a system that
learns to play checkers online without using any instructions, and improves after
every game. The system uses connectionist structure and evolutionary algorithms
along with statistical analysis methods.
EIS are presented here in this book in the form of methods of evolving connectionist systems (ECOS) and their applications. An ECOS is an adaptive, incremental learning and knowledge representation system that evolves its structure and
functionality, where in the core of the system is a connectionist architecture that
consists of neurons (information processing units) and connections between them.
An ECOS is a CI system based on neural networks, but using other techniques of
CI that operate continuously in time and adapt their structure and functionality
through a continuous interaction with the environment and with other systems
(Fig. I.2). The adaptation is defined through:
1. A set of evolving rules
2. A set of parameters (genes) that are subject to change during the system's operation
3. An incoming continuous flow of information, possibly with unknown distribution
4. Goal (rationale) criteria (also subject to modification) that are applied to optimise the performance of the system over time





Fig. I.2 EIS, and ECOS in particular, evolve their structure and functionality through incremental (possibly
online) learning in time and interaction with the environment.

The methods of ECOS presented in the book can be used as traditional CI

techniques, but they also have some specific characteristics that make them applicable to more complex problems:
1. They may evolve in an open space, where the dimensions of the space can change.
2. They learn via incremental learning, possibly in an online mode.
3. They may learn continuously in a lifelong learning mode.
4. They learn both as individual systems and as an evolutionary population of
such systems.





Fig. I.3 An EIS, and ECOS in particular, consists of four parts: data acquisition, feature extraction, modelling,
and knowledge acquisition. They process different types of information in a continuous adaptive way, and
communicate with the user in an intelligent way providing knowledge (rules). Data can come from different
sources: DNA (Chapter 8), brain signals (Chapter 9), socioeconomic and ecological data (Chapter 14), and from
many others.



5. They use constructive learning and have evolving structures.

6. They learn and partition the problem space locally, thus allowing for a fast
adaptation and tracing the evolving processes over time.
7. They evolve different types of knowledge representation from data, mostly a
combination of memory-based, statistical, and symbolic knowledge.
Each EIS system, and an ECOS in particular, consists of four main parts:

Data acquisition
Preprocessing and feature evaluation
Modelling
Knowledge acquisition

Figure I.3 illustrates the different parts of an EIS that processes different types
of information in a continuous adaptive way. The online processing of all this
information makes it possible for the ECOS to interact with users in an intelligent
way. If human-system interaction can be achieved in this way, this can be used to extend system-system interactions as well.


Biological Inspirations for EIS and ECOS

Some of the methods for EIS and ECOS presented in Chapters 2 through 6 use
principles from the human brain, as discussed here and in many publications
(e.g. Arbib (1995, 2002) and Kitamura (2001)).
It is known that the human brain develops before the child is born. During
learning the brain allocates neurons to respond to certain stimuli and develops
their connections. Some parts of the brain develop connections and also retain
their ability to create neurons during the person's lifetime. Such an area is
the hippocampus (Erikson et al., 1998; McClelland et al., 1995). According to
McClelland et al. (1995) the sequential acquisition of knowledge is due to the
continuous interaction between the hippocampus and the neocortex. New nerve
cells have a crucial role in memory formation.
The process of brain development and learning is based on several principles
(van Owen, 1994; Wong, 1995; Amit, 1989; Arbib, 1972, 1987, 1998, 1995, 2002;
Churchland and Sejnowski, 1992; J. G. Taylor, 1998, 1999; Deacon, 1988, 1998;
Freeman, 2001; Grossberg, 1982; Weng et al., 2001), some of them used as inspirations for the development of ECOS:
1. Evolution is achieved through both genetically defined information and learning.
2. The evolved neurons in the brain have a spatial-temporal representation where similar stimuli activate close neurons.
3. Redundancy: the evolving process in the brain leads to the creation of a large number of neurons involved in each learned task, where many neurons are allocated to respond to a single stimulus or to perform a single task; e.g. when a word is heard, hundreds of thousands of neurons are immediately activated.



4. Memory-based learning, i.e. the brain stores exemplars of facts that can be
recalled at a later stage. Bernie Widrow (2006) argues that learning is a process
of memorising and everything we do is based on our memory.
5. Evolving through interaction with the environment.
6. Inner processes take place, e.g. sleep-learning and information consolidation.
7. The evolving process is continuous and lifelong.
8. Through learning, higher-level concepts emerge that are embodied in the
evolved brain structure, and can be represented as a level of abstraction (e.g.
acquisition and the development of speech and language, especially in multilingual subjects).
Learning and structural evolution coexist in the brain. The neuronal structures
eventually implement a long-term memory. Biological facts about growing neural
network structures through learning and adaptation are presented in Joseph (1998).
The observation that humans (and animals) learn through memorising sensory
information, and then interpreting it in a context-driven way, has been known
for a long time. This is demonstrated in the consolidation principle that is widely
accepted in physiology. It states that in the first five or so hours after presenting
input stimuli to a subject, the brain is learning to cement what it has perceived.
This has been used to explain retrograde amnesia (a trauma of the brain that
results in loss of memory about events that occurred several hours before the
event of the trauma). The above biological principle is used in some methods of
ECOS in the form of a sleep (eco-training) mode.
During the ECOS learning process, exemplars (or patterns) are stored in a long-term memory. Using stored patterns in the eco-training mode is similar to the
task rehearsal mechanism (TRM). The TRM assumes that there are long-term and
short-term centres for learning (McClelland et al., 1995). According to the authors,
the TRM relies on long-term memory for the production of virtual examples of
previously learned task knowledge (background knowledge). A functional transfer
method is then used to selectively bias the learning of a new task that is developed
in short-term memory. The representation of this short-term memory is then
transferred to long-term memory, where it can be used for learning yet another new
task in the future. Note that explicit examples of a new task need not be stored in
long-term memory, only the representation of the task, which can be used later
to generate virtual examples. These virtual examples can be used to rehearse
previously learned tasks in concert with a new related task.
But if a system is working in a real-time mode, it may not be able to adapt to
new data due to lack of sufficient processing speed. This phenomenon is known
in psychology as loss of skills. The brain has a limited amount of working short-term memory. When encountering important new information, the brain stores
it simply by erasing some old information from the working memory. The prior
information gets erased from the working memory before the brain has time to
transfer it to a more permanent or semi-permanent location for actual learning.
ECOS sleep-training is based on similar principles.
In Freeman (2000), intelligence is described as related to an active search for
information, goal-driven information processing, and constant adaptation. In this
respect an intelligent system has to be actively selecting data from the environment.
This feature can be modelled and is present in ECOS through data and feature



selection for the training process. The filtering part of the ECOS architecture from
Fig. I.3 serves as an active filter to select only appropriate data and features from
the data streams. Freeman (2000) describes learning as a reinforcement process
which is also goal-driven.
Part of the human brain works as associative memory (Freeman, 2000). The
ECOS models can be used as associative memories, where the first part is trained
in an unsupervised mode and the second part in a reinforcement or supervised
learning mode.
Humans are always seeking information. Is it because of the instinct for survival?
Or is there another instinct, an instinct for information? If that is true, how is
this instinct defined, and what are its rules? Perlovski (2006) talks about the cognitive aspects of this instinct; here we refer to the genetic aspects of the instinct.
In Chapter 9 we refer to genes that are associated with long-term potentiation in
synapses, which is a basic neuronal operation of learning and memory (Abraham
et al., 1993; Benuskova and Kasabov, 2006). We also refer to genes associated with
loss of memory and other brain diseases that affect information processing in
the brain, mainly the learning and the memory functions. It is now accepted that
learning and memory are both defined genetically and developed during the life
of an individual through interaction with the environment.
Principles of brain and gene information processing have been used as an
inspiration to develop the methods of ECOS and to apply them in different chapters
of the book.
The challenge for the scientific area of computational modelling, and for the
ECOS paradigm in particular, is how to create structures and algorithms that solve
complex problems to enable progress in many scientific areas.


About the Book

Figure I.4 represents a diagram that links the inspirations/principles, the ECOS
methods, and their applications covered in different chapters of the book.


Further Reading

The Nature of Knowledge (Plotkyn, 1994)

Cognition and Categorization (Rosch and Lloyd, 1978)
Categories and Concepts (Smith and Medin, 1981).
Chaotic Processes (Barndorff-Nielsen et al., 1993; Gleick, 1987; Hoppensteadt,
1989; McCauley, 1994; Erdi, 2007)
Emergence and Evolutionary Processes (Holland, 1998)
Different Aspects of Artificial Intelligence (Dean et al., 1995; Feigenbaum, 1989;
Hofstadter, 1979; Newell and Simon, 1972)
Alan Turing's Test for AI (Fogel, 2002; Hofstadter, 1979)
Emerging Intelligence (Fogel, 2002)
Evolving Connectionist Systems as Evolving Intelligence (Kasabov, 1998-2006)
Evolving Processes in the Brain (Freeman, 2000, 2001)



Fig. I.4 A block diagram schematically showing principles, methods, and applications covered in the book in
their relationship.

Evolving Consciousness (Taylor, 1999, 2005)

Principles of the Development of the Human Brain (Amit, 1989; Arbib, 1972,
1987, 1998, 1995, 2002; Churchland and Sejnowski, 1992; Deacon, 1988, 1998;
Freeman, 2001; Grossberg, 1982; Joseph, 1998; J. G. Taylor, 1998, 1999; van Owen,
1994; Wong, 1995)
Learning in the Hippocampus Brain (Durand et al., 1996; Eriksson et al., 1998;
Grossberg and Merrill, 1996; McClelland et al., 1995)
Biological Motivations Behind ECOS (Kasabov, 1998; Kitamura, 2001)
Autonomous Mental Development (J. Weng et al., 2001)

1. Feature Selection, Model Creation, and Model Validation

This chapter presents background information, methods, and techniques of

computational modelling that are used in the other chapters. They include methods
for feature selection, statistical learning, and model validation. Special attention
is paid to several contemporary issues such as incremental feature selection and
feature evaluation, inductive versus transductive learning and reasoning, and
a comprehensive model validation. The chapter is presented in the following sections:
Feature selection and feature evaluation

Incremental feature selection
Machine learning methods: a classification scheme
Probability and information measures: Bayesian classifiers, hidden Markov models, and multiple linear regression
Support vector machines
Inductive versus transductive learning and reasoning. Global versus local models
Model validation
Summary and open problems
Further reading


Feature Selection and Feature Evaluation

Feature selection is the process of choosing the most appropriate features

(variables) when creating a computational model (Pal, 1999).
Feature evaluation is the process of establishing how relevant to the problem in
hand (e.g. the classification of gene expression microarray data) are the features
(e.g. the genes) used in the model.
Features can be:
Original variables, used in the first instance to specify the problem (e.g. raw pixels of an image, the amplitude of a signal, etc.)
Transformed variables, obtained through mapping the original variable space into a new one (e.g. principal component analysis (PCA), linear discriminant analysis (LDA), fast Fourier transform (FFT), SVM, etc.)



There are different groups of methods for feature selection:

Filtering methods: The features are filtered, selected, and ranked in advance,
before a model is created (e.g. a classification model).
Wrapping methods: Features are selected on the basis of how well the created
model performs using these features.
Traditional filtering methods are correlation, t-test, and signal-to-noise ratio (SNR).
Correlation coefficients represent the relationships between the variables, including a class variable if such is available. For every variable xi (i = 1, 2, ..., d1), its correlation coefficients Corr(xi, yj) with all other variables, including output variables yj (j = 1, 2, ..., d2), are calculated. The following is the formula to calculate the Pearson correlation coefficient between two variables x and y based on n values for each of them:

Corr(x, y) = Σ (xi - Mx)(yi - My) / ((n - 1) Stdx Stdy)


where Mx and My are the mean values of the two variables x and y, and Stdx and
Stdy are their respective standard deviations.
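The formula above can be computed directly; the following Python sketch (an illustration, not code from the book, with toy data) implements it:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    std_x = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
    std_y = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return cov / (std_x * std_y)

print(round(pearson([1, 2, 3, 4], [2, 4, 6, 8]), 6))   # 1.0  (perfectly correlated)
print(round(pearson([1, 2, 3, 4], [8, 6, 4, 2]), 6))   # -1.0 (perfectly anticorrelated)
```

A coefficient near zero indicates that the two variables carry no linear relationship and that one is of little use for predicting the other.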
The t-test and the SNR methods evaluate how important a variable is for discriminating samples belonging to different classes. For the case of a two-class problem, a SNR ranking coefficient for a variable x is calculated as the absolute difference between the mean value M1x of the variable for class 1 and the mean M2x of this variable for class 2, divided by the sum of the respective standard deviations:

SNR_x = abs(M1x - M2x) / (Std1x + Std2x)


A similar formula is used for the t-test:

t-test_x = abs(M1x - M2x) / sqrt(Std1x²/N1 + Std2x²/N2)


where N1 and N2 are the numbers of samples in class 1 and class 2 respectively.
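The two ranking formulas can be sketched directly (an illustrative sketch; the function names and the toy class samples are ours, not from the book):

```python
import math

def mean_std(values):
    """Sample mean and standard deviation (the (n - 1) convention)."""
    n = len(values)
    m = sum(values) / n
    s = math.sqrt(sum((v - m) ** 2 for v in values) / (n - 1))
    return m, s

def snr(class1, class2):
    """SNR ranking coefficient: abs(M1 - M2) / (Std1 + Std2)."""
    m1, s1 = mean_std(class1)
    m2, s2 = mean_std(class2)
    return abs(m1 - m2) / (s1 + s2)

def t_stat(class1, class2):
    """t-test ranking: abs(M1 - M2) / sqrt(Std1^2/N1 + Std2^2/N2)."""
    m1, s1 = mean_std(class1)
    m2, s2 = mean_std(class2)
    return abs(m1 - m2) / math.sqrt(s1 ** 2 / len(class1) + s2 ** 2 / len(class2))

# Hypothetical values of one variable for samples of two classes
c1 = [5.0, 5.2, 4.8, 5.1]
c2 = [7.0, 7.3, 6.8, 6.9]
print(snr(c1, c2), t_stat(c1, c2))   # well-separated classes give high scores
```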
Figure 1.1a shows a graphical representation of the correlation coefficients of
all four inputs and the class variables of the Iris benchmark data, and Fig. 1.1b
gives the SNR ranking of the variables. The Iris benchmark data consist of 150
samples defined by four variables: sepal length, sepal width, petal length, and petal
width (in cm). Each of these samples belongs to one of three classes: Setosa,
Versicolour, or Virginica (Fisher, 1936). There are 50 samples of each class.
Principal component analysis aims at finding a representation of a problem
space X defined by its variables X = {x1, x2, …, xn} in another orthogonal
space having a smaller number of dimensions, defined by another set of variables
Z = {z1, z2, …, zm}, such that every data vector x from the original space is
projected into a vector z of the new space, so that the distance between different
vectors in the original space X is maximally preserved after their projection into
the new space Z. A PCA projection of the Iris data is shown in Fig. 1.2a.
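A minimal PCA projection can be sketched with NumPy (an illustrative sketch, not the procedure used to produce Fig. 1.2; the function name pca_project and the toy data are ours):

```python
import numpy as np

def pca_project(X, m):
    """Project data matrix X (samples x variables) onto its first m
    principal components, the orthogonal directions of maximal variance."""
    Xc = X - X.mean(axis=0)                 # centre each variable
    cov = np.cov(Xc, rowvar=False)          # covariance matrix of the variables
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: symmetric input, ascending order
    order = np.argsort(eigvals)[::-1]       # rank components by explained variance
    components = eigvecs[:, order[:m]]
    return Xc @ components

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
              [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1]])
Z = pca_project(X, 1)   # each 2D sample becomes a single coordinate
print(Z.shape)          # → (8, 1)
```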
Linear discriminant analysis is a transformation of classification data from the
original space into a new space of LDA coefficients that has an objective function

Feature Selection, Model Creation, and Model Validation




Fig. 1.1 (a) Correlation coefficients between the five variables in the Iris data set (four input variables and
one class variable encoding class Setosa as 1, Versicolour as 2, and Virginica as 3); (b) SNR ranking of the four
variables of the Iris case study data (variable 3, petal length is ranked the highest). A colour version of this
figure is available from

to preserve the distance between the samples, using also the class label to make
them more distinguishable between the classes. An LDA projection of the Iris data
is shown in Fig. 1.2b.





Fig. 1.2 (a) PCA transformation (unsupervised, uses only input variables) of the Iris case study data: the first
principal component alone accounts for more than 90% of the variation among the data samples; (b) LDA
transformation of the Iris case study data gives a better discrimination than the PCA as it uses class labels to
achieve the transformation (it is supervised). (See a colour version at



Another benchmark dataset used in the book is the gas-furnace time-series
data (Box and Jenkins, 1970). A quantity of methane gas (representing the first
independent variable) is fed continuously into a furnace and the CO2 gas produced
is measured every minute (a second independent variable). This process can
theoretically run forever, supposing that there is a constant supply of methane
and the burner remains mechanically intact. The process of CO2 emission is an
evolving process. In this case it depends on the quantity of the methane supplied
and on the parameters of the environment. For simplicity, only 292 values of CO2
are taken in the well-known gas-furnace benchmark problem. Given the value of
methane at a particular moment (t − 4) and the value of CO2 at the moment (t),
the task is to predict the value of CO2 at the moment (t + 1) (output variable).
The CO2 data from Box and Jenkins (1970), along with some of their statistical
characteristics, are plotted in Fig. 1.3. It shows the 292 points from the time series,
the 3D phase space, the histogram, and the power spectrum of the frequency
characteristics of the process. The program used for this analysis as well as for some
other time-series analysis and visualisation in this book is given in Appendix A.
Several dynamic benchmark time series have been used in the literature and
also in this book. We develop and test evolving models to model the well-known
Mackey-Glass chaotic time series x(t), defined by the Mackey-Glass time-delay
differential equation (see Farmer and Sidorovich (1987)):

dx/dt = a x(t − τ) / (1 + x^10(t − τ)) − b x(t)

Fig. 1.3 The gas-furnace benchmark dataset, statistical characteristics: the time series of CO2 values, its 3D phase space, the histogram, and the power spectrum.




Fig. 1.4 Statistical parameters of a series of 3000 values from the Mackey-Glass time series. Top left: 3000
data points from the time series; top right: a 3D phase space; bottom left: the histogram of the time-series
values; bottom right: the power spectrum, showing on the x-axis frequencies of repetition, and on the y-axis:
the power of this repetition.

This series behaves as a chaotic time series for some values of the parameters τ, a,
and b, and of the initial value x(0); for example, x(0) = 1.2, τ = 17, a = 0.2,
b = 0.1, and x(t) = 0 for t < 0. Some of the statistical characteristics of the Mackey-Glass time series are plotted in Fig. 1.4. Predicting future values from past values of a
chaotic time series is a problem with which many computer models deal. Such a task
is to predict the future values x(t + 6) or x(t + 85) from the past values x(t), x(t − 6),
x(t − 12), and x(t − 18) of the Mackey-Glass series, as illustrated in Chapters 2 and 3.
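Such a series can be generated by integrating the delay equation with a simple Euler scheme (a coarse illustrative sketch with a unit time step, not the generator used for Fig. 1.4; the function name is ours):

```python
def mackey_glass(n, a=0.2, b=0.1, tau=17, x0=1.2):
    """Generate n values of the Mackey-Glass series by Euler integration of
    dx/dt = a*x(t-tau)/(1 + x(t-tau)^10) - b*x(t), with x(0) = x0 and
    x(t) = 0 for t < 0, as in the text. A finer step gives a smoother series."""
    x = [x0]
    for t in range(1, n):
        # delayed term x(t - tau); zero before the series starts
        x_delayed = x[t - 1 - tau] if t - 1 - tau >= 0 else 0.0
        x.append(x[-1] + a * x_delayed / (1.0 + x_delayed ** 10) - b * x[-1])
    return x

series = mackey_glass(3000)
print(len(series))   # → 3000
```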
Some dynamic chaotic processes occupy comparatively small parts of the space
they evolve in; that is, they form fractals (see the 3D space graph in Fig. 1.4). The
dimension of the problem space is a fraction of the standard integer dimensions,
e.g. 2.1D instead of 3D.


Incremental Feature Selection

In EIS, features may need to be evaluated in an incremental mode too, so that at
each time the most relevant features are used. The set of features used may change
from one time interval to another based on changes in the modelled data. This is a difficult
task and only a few techniques for achieving it are discussed here.

Incremental Correlation Analysis

Correlation analysis can be applied in an incremental mode, as outlined in the
algorithm shown in Fig. 1.5. This is based on the incremental calculation of the
mean and the standard deviation of the variable.



Calculating the online correlation coefficient CorrXY
between two variables: an input variable X and an output variable Y

SumX = 0;
SumY = 0;
SumXY = 0;
SumX2 = 0;
SumY2 = 0;
CorrXY = [];
WHILE there are data pairs (x,y) from the input stream, DO
  INPUT the current data pair (x(i), y(i));
  SumX = SumX + x(i);
  SumY = SumY + y(i);
  AvX = SumX/i;
  AvY = SumY/i;
  SumXY = SumXY + (x(i) - AvX)*(y(i) - AvY);
  SumX2 = SumX2 + (x(i) - AvX)^2;
  SumY2 = SumY2 + (y(i) - AvY)^2;
  % the current value for the correlation coefficient is:
  CorrXY(i) = SumXY / sqrt(SumX2 * SumY2);
END WHILE

Fig. 1.5 An illustrative algorithm for online correlation analysis. The operations in the algorithm are whole-vector
operations expressed in the notation of MATLAB.
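For experimentation, the Fig. 1.5 algorithm can be transliterated to Python (a direct port of the sketch above; because the accumulators use the current running means, the earliest coefficient values are unreliable, as noted below for Fig. 1.6):

```python
import math

def online_correlation(stream):
    """Update the correlation coefficient after every (x, y) pair from a
    stream, using incrementally calculated means, as in Fig. 1.5."""
    sum_x = sum_y = sum_xy = sum_x2 = sum_y2 = 0.0
    corr = []
    for i, (x, y) in enumerate(stream, start=1):
        sum_x += x
        sum_y += y
        av_x = sum_x / i                    # running mean of X
        av_y = sum_y / i                    # running mean of Y
        sum_xy += (x - av_x) * (y - av_y)
        sum_x2 += (x - av_x) ** 2
        sum_y2 += (y - av_y) ** 2
        denom = math.sqrt(sum_x2 * sum_y2)
        corr.append(sum_xy / denom if denom > 0 else 0.0)
    return corr

pairs = [(t, 2.0 * t + 1.0) for t in range(1, 50)]   # perfectly correlated stream
print(round(online_correlation(pairs)[-1], 3))       # → 1.0
```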

An example is given in Fig. 1.6. The figure shows the graphs of the euro/US$
exchange rate and the Dow Jones index over time, along with the online
calculated correlation between the two time series using the algorithm from
Fig. 1.5 (the bottom line). It can be seen that the correlation coefficient changes
over time.
In many classification problems there is a data stream that contains different
chunks of data, each having a different number of samples of each class. New class
samples can emerge as well; see Fig. 1.7a (see Ozawa et al. (2005, 2006)).
Incremental selection of features is a complex procedure. S. Ozawa et al. (2005,
2006) have introduced a method for incremental PCA feature selection where, after
the presentation of a new sample from the input data stream (or a chunk of data),
a new PCA axis may be created (Fig. 1.7b) or an existing axis may be rotated (Fig. 1.7c),
based on the position of the new sample in the PCA space. An algorithm for incremental
LDA feature selection is proposed in S. Pang et al. (2005).


Machine Learning Methods: A Classification Scheme

Machine learning is an area of information science concerned with the creation of
information models from data, with the representation of knowledge, and with the
elucidation of information and knowledge from processes and objects. Machine
learning includes methods for feature selection, model creation, model validation,
and knowledge extraction (see Fig. I.3).



Fig. 1.6 The values of the euro/US$ exchange rate normalized in the interval [0,1] and the Dow Jones stock
index as evolving time series, and the online correlation (the bottom line) between them for the period 1
January 1999 until 29 March 2001. The correlation coefficient changes over time and it is important to be able
to trace this process. The first 100 or so values of the calculated correlation coefficient should be ignored, as
they do not represent a meaningful statistical dependence.

Here we talk mainly about learning in connectionist systems (neural networks,
ANN), even though the principles of these methods and the classification scheme
presented below are valid for other machine learning methods as well.
Most of the known ANN learning algorithms are influenced by a concept introduced by Donald O. Hebb (1949). He proposed a model for unsupervised learning






Fig. 1.7 (a) A data stream that contains chunks of data characterised by different numbers of samples (vectors,
examples) from different classes; (Continued overleaf )





Fig. 1.7 (continued ) (b) incremental PCA through new PCA axis creation: when a new data vector is entered
and the distance between this vector and the existing eigenvector (PCA axis) is larger than a threshold, a new
eigenaxis is created; (c) incremental PCA with axis rotation: when a new vector is added, the eigenvectors may
need to be rotated (from Ozawa et al. (2005, 2006)).

in which the synaptic strength (weight) is increased if both the source and the
destination neurons become simultaneously activated. It is expressed as

wij(t + 1) = wij(t) + c oi oj

where wij(t) is the weight of the connection between the ith and jth neurons at the
moment t, c is a learning-rate coefficient, and oi and oj are the output signals of
neurons i and j at the same moment t. The weight wij(t + 1) is the adjusted weight
at the next time moment (t + 1).
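One update step of the Hebbian rule can be sketched as follows (illustrative only; the function name and the learning-rate value are ours):

```python
def hebbian_update(w, o_i, o_j, c=0.1):
    """One Hebbian step: w_ij(t+1) = w_ij(t) + c * o_i * o_j.
    The weight grows only when both neurons are active together."""
    return w + c * o_i * o_j

w = 0.5
w = hebbian_update(w, 1.0, 1.0)   # both neurons active: weight strengthens
print(w)                          # → 0.6
w = hebbian_update(w, 1.0, 0.0)   # destination inactive: no change
print(w)                          # → 0.6
```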
In general terms, a connectionist system {S, W, P, F, L, J} that is defined by
its structure S, its parameter set P, its connection weights W, its function F, its
goal function J, and a learning procedure L, learns if the system optimises its
structure and its function F when observing events z1, z2, z3, … from a problem
space Z. Through a learning process, the system improves its reaction to the
observed events and captures useful information that may later be represented as
knowledge. In Tsypkin (1973) the goal of a learning system is defined as finding
the minimum of an objective function J(S), named the expected risk function.
The function J(S) can be represented by a loss function Q(Z, S) and an unknown
probability distribution Prob(Z).



Most of the learning systems optimise a global goal function over a fixed part
of the structure of the system. In ANN this part is a set of predefined and fixed
number of connection weights, i.e. a set number of elements in the set W. As an
optimisation procedure, some known statistical methods for global optimisation
are applied (Amari, 1967, 1990), for example, the gradient descent method. The
obtained structure S is expected to be globally optimal, i.e. optimal for data drawn
from the whole problem space Z. In the case of a changing structure S and a
changing (e.g. growing) part of its connections W, where the input stream of data
is continuous and its distribution is unknown, the goal function could be expressed
as a sum of local goal functions J, each one optimised in a small subspace Z′ ⊂ Z
as data are drawn from this subspace. In addition, while the learning process is
taking place, the number of dimensions of the problem space Z may also change
over time. The above scenarios are reflected in different models of learning, as
explained next.
There are many methods for machine learning that have been developed for
connectionist architectures (for a review, see Arbib (1995, 2002)). It is difficult
and quite risky to try to put all the existing methods into a clear classification
structure (which should also assume slots for new methods), but this is necessary
here in order to define the scope of the evolving connectionist system paradigm.
This also defines the scope of the book.
A classification scheme is presented below. This scheme is a general one, as
it is valid not only for connectionist learning models, but also for other learning
paradigms, for example, evolutionary learning, case-based learning, analogy-based
learning, and reasoning. On the other hand, the scheme is not comprehensive, as
it does not present all existing connectionist learning models. It is only a working
classification scheme needed for the purpose of this book.
A (connectionist) system that learns from observations z1, z2, z3, … from
a problem space Z can be designed to perform learning in different ways. The
following classification scheme outlines the main questions and issues and their
alternative solutions when constructing a connectionist learning system.
1. In what space is the learning system developing?
(a) The learning system is developing in the original data space Z.
The structural elements (nodes) of the connectionist learning system
are points in the d-dimensional original data space Z (Fig. 1.8a). This is
the case in some clustering and prototype learning systems. One of the
problems here is that if the original space is high-dimensional (e.g. 30,000
gene expression space) it is difficult to visualise the structure of the system
and observe some important patterns. For this purpose, special visualisation techniques, such as principal component analysis, or Sammon
mapping, are used to project the system structure S into a visualisation
space V.
(b) The learning system is developing in its own machine learning space M.
The structural elements (nodes) of the connectionist learning system are
created in a system (machine) space M, different from the d-dimensional
original data space Z (Fig. 1.8b). An example is the self-organising map
(SOM) NN (Kohonen, 1977, 1982, 1990, 1993, 1997). SOMs develop in
two-, three-, or more-dimensional topological spaces (maps) from the
original data.





Fig. 1.8 (a) A computational model is built in the original data space; i.e. the original problem variables are
used and a network of connections is built to model their interaction; a special visualisation procedure may be
used to visualise the model in a different space; (b) a computational model is built in a new (machine) space,
where the original variables are transformed into a new set of variables.

2. Is the problem space open?

(a) An open problem space is characterised by an unknown probability distribution P(Z) of the incoming data and a possible change in its dimensionality. Sometimes the dimensionality of the data space may change
over time, involving more or fewer dimensions, for example, adding new
modalities to a person identification system. In this case the methods
discussed in the previous section for incremental feature selection would
be appropriate.



(b) A closed problem space has a fixed dimensionality, and either a known
distribution of the data or the distribution can be approximated in advance
through statistical procedures.
3. Is learning performed in an incremental or in a batch mode, in an off-line or
in an online mode?
(a) Batch mode and pattern modes of learning: In a batch mode of learning, a
predefined learning (training) set of data {z1, z2, …, zp} is learned by the
system through propagating this dataset several times through the system.
Each time the system optimises its structure W, based on the average value
of the goal function over the whole dataset. Many traditional algorithms,
such as the backpropagation algorithm, use this type of learning (Werbos,
1990; Rumelhart and McClelland, 1986; Rumelhart et al., 1986).
The incremental pattern mode of learning is concerned with learning
each data example separately and the data might exist only for a short
time. After observing each data example, the system makes changes in
its structure (the W parameters) to optimise the goal function J. Incremental learning is the ability of an NN to learn new data without fully
destroying the patterns learned from old data and without the need to be
trained again on both the old and the new data. According to Schaal and Atkeson (1998),
incremental learning is characterized by the following features.
Input and output distributions of data are not known and these distributions may change over time.
The structure of the learning system W is updated incrementally.
Only limited memory is available so that data have to be discarded after
they have been used.
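The contrast with batch mode can be illustrated by a one-weight linear model trained in pattern (incremental) mode, where each example is used for a single update and then discarded (a toy sketch, not a method from the book; the function name, learning rate, and data are ours):

```python
def incremental_fit(stream, lr=0.05):
    """Incremental (pattern-mode) learning of y = w*x: the weight is
    updated after every example, and the example is then discarded,
    unlike batch mode, which propagates the whole set many times."""
    w = 0.0
    for x, y in stream:           # each pair is seen exactly once
        error = y - w * x
        w += lr * error * x       # gradient step on this single example
    return w

# A stream drawn from the rule y = 3*x; the weight converges towards 3.0
stream = [(x, 3.0 * x) for x in (1.0, 2.0, 0.5, 1.5) * 20]
print(round(incremental_fit(stream), 2))   # → 3.0
```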
(b) Off-line versus online learning: In an off-line learning mode, a NN model
is trained on data and then implemented to operate in a real environment,
without changing its structure during operation. In an online learning
mode, the NN model learns from new data during its operation and once
used the data are no longer available.
A typical simulation scenario for online learning is when data examples
are drawn randomly from a problem space and fed into the system one by
one for training. Although there are chances of drawing the same examples
twice or several times, this is considered as a special case in contrast
to off-line learning when one example is presented to the system many
times as part of the training procedure. Methods for online learning in
NN are studied in Albus (1975), Fritzke (1995), and Saad (1999). In Saad
(1999), a review of some statistical methods for online learning, mainly
gradient descent methods applied to fixed-size connectionist structures, is
presented. Some other types of learning, such as incremental learning and
lifelong learning, are closely related to online learning.
Online learning, incremental learning, and lifelong learning are typical
adaptive learning methods. Adaptive learning aims at solving the well-known stability/plasticity dilemma, which means that the system is stable
enough to retain patterns learned from previously observed data, while
being flexible enough to learn new patterns from new incoming data.



Adaptive learning is typical for many biological systems and is also useful
in engineering applications such as robotic systems and process control.
Significant progress in adaptive learning has been achieved due to the
adaptive resonance theory (ART; Carpenter and Grossberg (1987, 1990,
1991) and Carpenter et al. (1991)) and its various models, which include
unsupervised models (ART1, ART2, FuzzyART) and supervised versions
such as Fuzzy ARTMAP.
(c) Combined online and off-line learning: In this mode the system may work
for some of the time in an online mode, after which it switches to off-line
mode, etc. This is often used for optimisation purposes, where a small
window of data from the continuous input stream can be kept aside,
and the learning system, which works in an online mode, can be locally
or globally optimised through off-line learning on this window of data
through window-based optimisation of the goal function J(W).
4. Is the learning process lifelong?
(a) Single session learning: The learning process happens only once over the
whole set of available data (even though it may take many iterations during
training). After that the system is set in operation and never trained again.
This is the most common learning mode in many existing connectionist
methods and relates to the off-line, batch mode of training. But how can
we expect that once a system is trained on certain (limited) data, it will
always operate perfectly well in a future time, on any new data, regardless
of where they are located in the problem space?
(b) Lifelong learning is concerned with the ability of a system to learn from
continuously incoming data in a changing environment during its entire
existence. Growing, as well as pruning, may be involved in the lifelong
learning process, as the system needs to restrict its growth while always
maintaining a good learning and generalisation ability. Lifelong learning
relates to incremental, online learning modes, but requires more sophisticated methods.
5. Are there desired output data and in what form are they available?
The availability of examples with desired output data (labels) that can
be used for comparison with what the learning system produces on its
outputs defines four types of learning.
(a) Unsupervised learning: There are no desired output data attached to the
examples z1 z2 z3    . The data are considered as coming from an input
space Z only.
(b) Supervised learning: There are desired output data attached to the
examples z1 z2 z3    . The data are considered as coming in (x y) pairs
from both an input space X and an output space Y that collectively define
the problem space Z. The connectionist learning system associates data
from the input space X to data from the output space Y (see Fig. 1.9).
(c) Reinforcement learning: In this case there are no exact desired output
data, but some hints about the goodness of the system reaction are
available. The system learns and adjusts its structural parameters from
these hints. In many robotic systems a robot learns from the feedback
from the environment, which may be used as, for example, a qualitative
indication of the correct movement of the robot.




Fig. 1.9 A supervised learning model maps the input subspace into the output subspace of the problem
space Z.

(d) Combined learning: This is the case when a connectionist system can
operate in more than one of the above learning modes.
6. Does learning include populations of individuals over generations?
(a) Individual development-based learning: A system is developing independently and is not part of a population of individual systems that develop over generations.
(b) Evolutionary (population/generation based) learning: Here, learning is
concerned with the performance not only of an individual system, but of
a population of systems that improve their performance through generations (see Chapter 6). The best individual system is expected to emerge and
evolve from such populations. Evolutionary computation (EC) methods,
such as genetic algorithms (GA), have been widely used for optimising
ANN structures (Yao, 1993; Fogel et al., 1990; Watts and Kasabov, 1998).
They utilise ideas from Darwinism. Most of the evolutionary EC methods
developed thus far assume that the problem space is fixed, i.e. that the
evolution takes place within a predefined problem space and this space
does not change dynamically. Therefore these methods do not allow
for modelling real online adaptation. In addition, they are very time-consuming, which also prevents them from being used in real-world applications.
7. Is the structure of the learning system of a fixed size, or is it evolving?
Here we refer again to the bias/variance dilemma (see, e.g. Carpenter and
Grossberg (1991) and Grossberg (1969, 1982)). For an NN structure the
dilemma states that if the structure is too small, the NN is biased to certain
patterns, and if the NN structure is too large there are too many variances,
which may result in overtraining, poor generalization, etc. In order to
avoid this problem, an NN structure should change dynamically during
the learning process, thus better representing the patterns in the data and
the changes in the environment.
(a) Fixed-size structure: This type of learning assumes that the size of the
structure S is fixed (e.g. number of neurons, number of connections), and



through learning the system changes some structural parameters (e.g. W,

the values of connection weights). This is the case in many multilayer
perceptron ANNs trained with the backpropagation algorithm (Rosenblatt,
1962; Amari, 1967, 1990; Arbib, 1972, 1987, 1995, 2002; Werbos, 1990;
Hertz et al., 1991; Rumelhart et al., 1986).
(b) Dynamically changing structure: According to Heskes and Kappen (1993)
there are three different approaches to dynamically changing structures: constructivism, selectivism, and a hybrid approach. Connectionist
constructivism is about developing ANNs that have a simple initial
structure and grow during their operation through inserting new nodes
using evolving rules. This theory is supported by biological facts (see Saad
(1999)). The insertion can be controlled by either a similarity measure
of input vectors, by the output error measure, or by both, depending
on whether the system performs an unsupervised or supervised mode
of learning. A measure of difference between an input pattern and
already stored ones is used for deciding whether to insert new nodes in
the adaptive resonance theory models ART1 and ART2 (Carpenter and
Grossberg, 1987) for unsupervised learning. There are other methods that
insert nodes based on the evaluation of the local error. Such methods are
the growing cell structure and growing neural gas (Fritzke, 1995). Other
methods insert nodes based on a global error to evaluate the performance of the whole NN. One such method is the cascade correlation
method (Fahlman and Lebiere, 1990). Methods that use both similarity and
output error for node insertion are used in Fuzzy ARTMAP (Carpenter
et al., 1991) and also in EFuNN (Chapter 3). Connectionist selectivism
is concerned with pruning unnecessary connections in an NN that starts
its learning with many, in most cases redundant, connections (Rummery
and Niranjan, 1994; Sankar and Mammone, 1993). Pruning connections
that do not contribute to the performance of the system can be done
by using several methods: optimal brain damage (Le Cun et al., 1990),
optimal brain surgeon (Hassibi and Stork, 1992), and structural learning
with forgetting (Ishikawa, 1996).
8. How does structural modification in the learning system partition the problem space?
When a machine learning (e.g. connectionist) model is created, in either
a supervised or an unsupervised mode, the nodes and the connections
partition the problem space Z into segments. Each segment of the input
subspace is mapped onto a segment from the output subspace in the case
of supervised learning. The partitioning in the input subspace imposed by
the model can be one of the following types.
(a) Global partitioning (global learning): Learning causes global partitioning
of the space. Usually the space is partitioned by hyperplanes that are
modified either after every example is presented (in the case of incremental
learning) or after all of the examples are presented together.
Through the gradient descent learning algorithm, for example, the problem
space is partitioned globally. This is one of the reasons why global
learning in multilayer perceptrons suffers from the catastrophic forgetting
phenomenon (Robins, 1996; Miller et al., 1996). Catastrophic forgetting



(also called unlearning) is the inability of the system to learn new patterns
without forgetting previously learned patterns. Methods to deal with this
problem include rehearsing the NN on a selection of past data, or on
new data points generated from the problem space (Robins, 1996). Other
techniques that use global partitioning are support vector machines (SVM;
Vapnik (1998)). SVM optimise the positioning of the hyperplanes to
achieve maximum distance from all data items on both sides of the plane
(Kecman, 2001).
(b) Local partitioning (local learning): In the case of local learning, structural
modifications of the system affect the partitioning of only a small part of
the space from where the current data example is drawn. Examples are
given in Figs. 1.10a and b, where the space is partitioned by circles and
squares in a two-dimensional space. Each circle or square is the subspace
defined by a neuron. The activation of each neuron is defined by local
functions imposed on its subspace. Kernels, as shown in Fig. 1.10a, are
examples of such local functions. Other examples of local partitioning are
shown in Figs. 1.11a and b, where the space is partitioned by hypercubes
and fractals in a 3D space.
Before creating a model it is important to choose which type of partitioning would be more suitable for the task at hand. In the ECOS presented
later in this book, the partitioning is local. Local partitioning is easier to
adapt in an online mode, faster to calculate, and does not cause catastrophic forgetting.
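A local kernel of the type shown in Fig. 1.10a can be sketched as a Gaussian function defined on a neuron's hypersphere (illustrative only; the function name, centres, and width are ours): its activation is close to 1 near the neuron's centre and falls towards 0 outside its subspace, so adjusting this neuron repartitions only its own region.

```python
import math

def gaussian_activation(x, centre, sigma=1.0):
    """Local kernel of one neuron: a Gaussian over the squared distance
    between sample x and the neuron's centre; negligible far away."""
    dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, centre))
    return math.exp(-dist2 / (2.0 * sigma ** 2))

centres = [(0.0, 0.0), (5.0, 5.0)]      # two neurons covering local subspaces
x = (0.1, -0.2)                         # a sample near the first centre
acts = [gaussian_activation(x, c) for c in centres]
print(acts[0] > 0.9, acts[1] < 1e-5)    # → True True
```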
9. What knowledge representation is facilitated in the learning system?
It is a well-known fact that one of the most important characteristics
of the brain is that it can retain and build knowledge. However, it is
not yet known exactly how the activities of the neurons in the brain are
transferred into knowledge. For the purpose of the discussion in this book,
knowledge can be defined as the information learned by a system such
that the system and humans can interpret it to obtain new facts and new
knowledge. Traditional neural networks and connectionist systems have
been known as poor facilitators of representing and processing knowledge,
despite some early investigations (Hinton, 1987, 1990).
However, some of the issues of knowledge representation in connectionist
systems have already been addressed in the so-called knowledge-based
neural networks (KBNN) (Towell and Shavlik, 1993, 1994; Cloete and
Zurada, 2000). KBNN are neural networks that are prestructured in a way
that allows for data and knowledge manipulation, which includes learning,
knowledge insertion, knowledge extraction, adaptation, and reasoning.
KBNN have been developed either as a combination of symbolic AI systems
and NN (Towell et al., 1990), or as a combination of fuzzy logic systems
and NN (Yamakawa and Tomoda, 1989; Yamakawa et al., 1992, 1993;
Furuhashi et al., 1993, 1994; Hauptmann and Heesche, 1995; Jang, 1993;
Kasabov, 1996). Rule insertion and rule extraction operations are examples
of how a KBNN can accommodate existing knowledge along with data,
and how it can explain what it has learned. There are different methods
for rule extraction that are applied to practical problems (Hayashi, 1991;
Mitra and Hayashi, 2000; Duch et al., 1998; Kasabov, 1996, 1998c, 2001c).



Fig. 1.10 Local partitioning of the problem space using different types of kernels: (a) hyperspheres and Gaussian
functions defined on them; (b) squares, on which a simple function can be defined (e.g. yes, the data
vector belongs to the square, or no, it does not). Local partitioning can be used for local learning.

Generally speaking, learning systems can be distinguished based on the
type of knowledge they represent.
(a) No explicit knowledge representation is facilitated in the system: An
example for such a connectionist system is the traditional multilayer perceptron network trained with the backpropagation algorithm





Fig. 1.11 Different types of local partitioning of the problem space illustrated in a 3D space: (a) using
hypercubes; (b) using fractals.

(Rosenblatt, 1962; Amari, 1967; Arbib, 1972, 1987, 1995, 2002; Werbos,
1990; Hertz et al., 1991; Rumelhart et al., 1986).
(b) Memory-based knowledge: The system retains examples, patterns, prototypes, and cases, for example, instance-based learning (Aha et al.,
1991), case-based reasoning systems (Mitchell, 1997), and exemplar-based
reasoning systems (Salzberg, 1990).
(c) Statistical knowledge: The system captures conditional probabilities,
probability distribution, clusters, correlation, principal components, and
other statistical parameters.



(d) Analytical knowledge: The system learns an analytical function f: X → Y
that represents the mapping of the input space X into the output
space Y. Regression techniques, and kernel regressions in particular, are
well established.
(e) Symbolic knowledge: Through learning, the system associates information
with predefined symbols. Different types of symbolic knowledge can be
facilitated in a learning system as further discussed below.
(f) Combined knowledge: The system facilitates learning of several of the
types of knowledge listed above.
(g) Meta-knowledge: The system learns a hierarchical level of knowledge
representation where meta-knowledge is also learned, for example, which
piece of knowledge is applicable and when.
(h) Consciousness (the system knows about itself): The system becomes
aware of what it is, what it can do, and where its position is among the
rest of the systems in the problem space.
(i) Creativity (e.g. generating new knowledge): An ultimate type of knowledge
would be such knowledge that allows the system to act creatively, to create
scenarios, and possibly to reproduce itself; for example, a system that
generates other systems (programs) that improve over time based on their past performance.
Without the ability of a system to represent knowledge, it cannot capture
knowledge from data, it cannot capture the evolving rules of the process
that the system is modelling or controlling, and it cannot help much to
better understand the processes. In this book we indeed take a knowledge-engineering approach to modelling and building evolving intelligent
systems (EIS), where knowledge representation, knowledge extraction, and
knowledge refinement in an evolving structure are the focus of our study
along with the issues of adaptive learning.
10. If symbolic knowledge is represented in the system, of what type is it?
If we can represent the knowledge learned in a learning system as symbols,
different types of symbolic knowledge can be distinguished.
(a) Propositional rules
(b) First-order logic rules
(c) Fuzzy rules
(d) Semantic maps
(e) Schemata
(f) Meta-rules
(g) Finite automata
(h) Higher-order logic
11. If the system's knowledge can be represented as fuzzy rules, what type of fuzzy
rules are they?
Different types of fuzzy rules can be used, for example:
(a) Zadeh-Mamdani fuzzy rules (Zadeh, 1965; Mamdani, 1977).
(b) Takagi-Sugeno fuzzy rules (Takagi and Sugeno, 1985).
(c) Other types of fuzzy rules, for example, type-2 fuzzy rules (for a comprehensive reading, see Mendel (2001)).
The above types of rules are explained in Chapter 5. Generally speaking,
different types of knowledge can be learned from a process or from an



object in different ways, all of them involving human participation. Some
of these ways are shown in Fig. 1.12. They include direct learning by
humans, simple problem representation as graphs, analytical formulas,
using NN for learning and rule extraction, and so on. All these forms can
be viewed as alternative and possibly equivalent forms in terms of final
results obtained after a reasoning mechanism is applied on them. Elaborating analytical knowledge in a changing environment is a very difficult
process involving changing parameters and formulas with the change of
the data. If evolving processes are to be learned in a system and also
understood by humans, neural networks that are trained in an incremental
mode and their structure interpreted as knowledge are the most promising
models at present. This is the approach that is taken and developed in
this book.
12. Is learning active?
Humans and animals are selective in terms of processing only important
information. They are searching actively for new information (Freeman,
2000; J.G. Taylor, 1999). Similarly, we can have two types of learning in
an intelligent system:
(a) Active learning: In terms of data selection, filtering, and searching for
relevant data.
(b) Passive learning: The system accepts all incoming data.

Fig. 1.12 Different learning and knowledge representation techniques applied to modelling a process or an
object, including: off-line versus online learning; connectionist versus analytical (e.g. regression) learning; learning
in a model versus learning in the brain.



Both approaches are applied in the different methods and techniques of evolving
connectionist systems presented in Chapters 2 to 7.


Probability and Information Measure. Bayesian Classifiers, Hidden Markov Models. Multiple Linear Regression

Probability Characteristics of Events

Many learning methods are based on probabilities of events that are learned from
data, and then used to predict new events. The formal theory of probability relies
on the following three axioms, where p(E) is the probability of an event E
happening and p(¬E) is the probability of the event not happening. E1, E2, ..., Ek is a
set of mutually exclusive events that form a universe U:

Box 1.1. Probability axioms

Axiom 1. 0 <= p(E) <= 1.
Axiom 2. Σ p(Ei) = 1, where E1 ∪ E2 ∪ ... ∪ Ek = U (U is the problem space).
Corollary: p(E) + p(¬E) = 1.
Axiom 3. p(E1 OR E2) = p(E1) + p(E2), where E1 and E2 are mutually exclusive events.

Probabilities are defined as:

Theoretical probabilities: some rules are used to evaluate the probability of an event.
Experimental probabilities are learned from data and experiments: throw a die
1000 times and measure how many times the event "getting 6" has happened.
Subjective probabilities are based on common-sense human knowledge, such
as stating that the probability of getting 6 after throwing a die is 1/6, without
really throwing it many times.

Information and Entropy Characteristics of Events

A random variable x is characterised at any moment of time by its uncertainty
in terms of what value this variable will take in the next moment: its entropy.
A measure of uncertainty h(xi) can be associated with each random value xi of a
random variable x, and the total uncertainty H(x), called entropy, measures our
lack of knowledge, the seeming disorder in the space of the variable x:

H(x) = Σi=1..n pi · h(xi)




where pi is the probability of the variable x taking the value xi.

The following axioms for the entropy H(x) apply.
Monotonicity: If n > n′ are the numbers of events (values) that a variable x
can take, then Hn(x) > Hn′(x); the more values x can take, the greater the
entropy.
Additivity: If x and y are independent random variables, then the joint entropy
H(x, y), meaning H(x AND y), is equal to the sum of H(x) and H(y).
The following log function satisfies the above two axioms:

h(xi) = log(1/pi)


If the log has a basis of 2, the uncertainty is measured in [bits], and if it is the
natural logarithm ln, then the uncertainty is measured in [nats].
H(x) = Σi=1..n pi · h(xi) = −c Σi=1..n pi · log pi


where c is a constant.
Based on the Shannon measure of uncertainty, entropy, we can calculate an
overall probability for a successful prediction for all states of a random variable
x, or the predictability of the variable as a whole:
P(x) = 2^(−H(x))


The maximum entropy is obtained when all the n values of the random variable x
are equiprobable, i.e. they have the same probability 1/n (a uniform probability
distribution):

H(x) = −Σi=1..n pi · log pi <= log n


Let us assume that it is known that a stock market crashes (goes to an extremely
low value that causes many people and companies to lose their shares and assets
and to go bankrupt) every six years. What are the uncertainty and the predictability
of the crash in terms of determining the year of crash, if: (a) the last crash
happened two years ago? (b) The same as in (a), plus we know for sure that
there will be no crash in the current year, nor in the last year of the six-year
period?
The solution will be:
(a) The possible years for a crash are the current one (year 3) and also years 4, 5,
and 6. The random variable x which has the meaning of annual crash of the
stock market can take any of the values 3, 4, 5, and 6; therefore n = 4 and the
maximum entropy is H(x) = log2(4) = 2 bits. The predictability is P(x) = 2^(−2) = 1/4.



(b) The possible values for the variable x are reduced to 2 (years 4 and 5) as we
have some extra knowledge of the stock market. In this case the maximum
entropy will be 1 and the predictability will be 1/2.
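The calculations in this example can be reproduced with a short sketch (pure Python; the function names are illustrative, not from the book):

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p_i * log2(p_i)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def predictability(probs):
    """Overall predictability P(x) = 2 ** (-H(x))."""
    return 2 ** -entropy(probs)

# Case (a): four equiprobable crash years -> H = 2 bits, P = 1/4
print(entropy([1/4] * 4), predictability([1/4] * 4))  # 2.0 0.25
# Case (b): extra knowledge leaves two possible years -> H = 1 bit, P = 1/2
print(entropy([1/2] * 2), predictability([1/2] * 2))  # 1.0 0.5
```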
Joint entropy between two random variables x and y (e.g. an input and an output
variable in a system) is defined by the formulas:
H(x, y) = −Σi=1..n Σj=1..m p(xi AND yj) · log p(xi AND yj)

H(x, y) <= H(x) + H(y)


Conditional entropy, that is, measuring the uncertainty of a variable y (output

variable) after observing the value of a variable x (input variable), is defined as
H(y|x) = −Σi=1..n Σj=1..m p(xi AND yj) · log p(yj|xi)

H(y|x) <= H(y)


Entropy can be used as a measure of the information associated with a random
variable x, its uncertainty, and its predictability. The mutual information between
two random variables, also simply called information, can be measured as

I(y; x) = H(y) − H(y|x)


The process of online information entropy evaluation is important as in a time

series of events, after each event has happened, the entropy changes and its value
needs to be re-evaluated.
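These entropy relations can be checked numerically; the sketch below (pure Python, illustrative function names) computes I(y; x) = H(y) − H(y|x) from a small joint probability table:

```python
import math

def H(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """I(y; x) = H(y) - H(y|x), from a joint probability table p(x_i, y_j)
    (rows indexed by x, columns by y)."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    # H(y|x) = -sum_ij p(x_i, y_j) * log2( p(y_j | x_i) )
    h_y_given_x = -sum(p * math.log2(p / px[i])
                       for i, row in enumerate(joint)
                       for p in row if p > 0)
    return H(py) - h_y_given_x

# y fully determined by x: I = H(y) = 1 bit
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0
# x and y independent: I = 0
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```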
Models based on probability are:
Bayesian classifiers
Hidden Markov models
A Bayesian classifier uses a conditional probability estimate to predict a class for new
data. The following formula, which represents the conditional probability between
two events C and A, is known as Bayes formula (Thomas Bayes, 18th century):
p(A|C) = p(C|A) · p(A) / p(C)


It follows from the equations

p(A AND C) = p(C AND A) = p(A|C) · p(C) = p(C|A) · p(A)


Evaluating the probability p(A|C) of a patient having the flu (event A) based
on the evidence that the patient has a high temperature (fact C): in order to
accomplish this, we need the prior probability p(C) of all ill patients having a high
temperature, the prior probability p(A) of all people suffering from the flu at the time,
and the conditional probability p(C|A) of patients who, if having the flu (A), have a
high temperature (C).
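A minimal sketch of this Bayes-formula computation; the prior values below are made up for illustration:

```python
def bayes(p_c_given_a, p_a, p_c):
    """Bayes formula: p(A|C) = p(C|A) * p(A) / p(C)."""
    return p_c_given_a * p_a / p_c

# Illustrative (made-up) values: 10% of people currently have the flu (p(A)),
# 20% of patients have a high temperature (p(C)), and 90% of flu patients
# have a high temperature (p(C|A)).
p_flu_given_temp = bayes(p_c_given_a=0.9, p_a=0.1, p_c=0.2)
print(p_flu_given_temp)  # ~0.45
```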
Problems with the Bayesian learning models relate to unknown prior probabilities and the requirement of a large amount of data for more accurate probability
calculation. This is especially true for a chain of events A, B, C, ..., where probabilities p(C|A, B), etc. need to be evaluated. The latter problem is addressed in
techniques called hidden Markov models (HMM).
HMM is a technique for modelling the temporal structure of a time series signal,
or of a sequence of events (Rabiner, 1989). It is a probabilistic pattern-matching
approach that models a sequence of patterns as the output of a random process.
The HMM consists of an underlying Markov chain.
P(q(t+1) | q(t), q(t−1), q(t−2), ..., q(t−n)) = P(q(t+1) | q(t))

where q(t) is the state q sampled at time t.

Weather forecast problem as a Markov chain of events: given that today is Sunny
(S), what are the probabilities that each of the following five days is Sunny (S), Cloudy (C),
or Rainy (R)? The answer can be derived using Table 1.1a.
HMM can be used not only to model time series of events, but a sequence of
events in space. An example is modelling DNA sequences of four basic molecules:
A, C, T, G (see Chapter 8) based on a probability matrix of having all 16 pairs of
these molecules derived from a large enough segment of DNA (see Table 1.1b).
Building a HMM from a DNA sequence and using this HMM to predict segments
Table 1.1a Representation of conditional probabilities for a HMM for weather
forecast of tomorrow's weather from the weather today. Using this probability
matrix, we can build a HMM for prediction of the weather several days ahead,
starting from any day named "Today".
P(Tomorrow | Today)
Table 1.1b Probability for a Pair of Neighbouring Molecular Nucleotides to Appear

in a DNA Sequence of a Species.

A (Adenine)
C (Cytosine)
T (Thymine)
G (Guanine)







of a DNA that will be translated into proteins is the main purpose of the software
system GeneMark (Lukashin and Borodovski, 1998).
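A first-order Markov chain forecast along these lines can be sketched as follows; the transition probabilities are assumed for illustration only, since Table 1.1a's actual values are not reproduced in the text:

```python
STATES = ["S", "C", "R"]  # Sunny, Cloudy, Rainy
P = {  # assumed transition matrix: P[today][tomorrow]
    "S": {"S": 0.7, "C": 0.2, "R": 0.1},
    "C": {"S": 0.3, "C": 0.4, "R": 0.3},
    "R": {"S": 0.2, "C": 0.4, "R": 0.4},
}

def forecast(start, days):
    """Distribution over the states `days` steps after starting in `start`."""
    dist = {s: 1.0 if s == start else 0.0 for s in STATES}
    for _ in range(days):
        # One step of the chain: dist'(t) = sum_s dist(s) * P(s -> t)
        dist = {t: sum(dist[s] * P[s][t] for s in STATES) for t in STATES}
    return dist

print(forecast("S", 5))  # probabilities of S, C, R five days after a sunny day
```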

Multiple Linear Regression Methods (MLR)

The purpose of multiple linear regression is to establish a quantitative relationship
between a group of p predictor variables (X) and a response y.
This relationship is useful for:
Understanding which predictors have the greatest effect
Knowing the direction of the effect (i.e. increasing x increases/decreases y).
Using the model to predict future values of the response when only the predictors
are currently known
A linear model takes its common form of:
y = X·A + b


where p is the number of the predictor variables; y is an n-by-1 vector of observations, X is an n-by-p matrix of regressors, A is a p-by-1 vector of parameters,
and b is an n-by-1 vector of random disturbances. The solution to the problem is
a vector A which estimates the unknown vector of parameters. The least squares
solution is used, so that the linear regression formula approximates the data with
the least root mean square error (RMSE) as follows,
RMSE = sqrt( Σi=1..n (yi − yi′)² / n )

where yi is the desired value from the dataset corresponding to an input vector xi,
yi′ is the value obtained through the regression formula for the same input vector
xi, and n is the number of samples (vectors) in the dataset.
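A least-squares MLR fit and its RMSE can be sketched via the normal equations (pure Python with synthetic data; in practice a library routine would be used):

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def mlr_fit(X, y):
    """Least-squares A for y ~ X A via the normal equations (XtX) A = Xt y."""
    Xt = transpose(X)
    XtX = matmul(Xt, X)
    Xty = [sum(row[j] * y[j] for j in range(len(y))) for row in Xt]
    return solve(XtX, Xty)

def rmse(y, y_hat):
    return (sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y)) ** 0.5

# Synthetic data: y = 1 + 2*x1 - 3*x2 (first column of X is the intercept)
X = [[1, x1, x2] for x1 in range(4) for x2 in range(4)]
y = [1 + 2 * r[1] - 3 * r[2] for r in X]
A = mlr_fit(X, y)
print([round(a, 6) for a in A])  # ~[1.0, 2.0, -3.0]
print(rmse(y, [sum(a * v for a, v in zip(A, r)) for r in X]))  # ~0.0
```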
Another error measure is also used to evaluate the performance of the regression
model: a nondimensional error index (NDEI), the RMSE divided by the standard
deviation of the dataset:

NDEI = RMSE / Std


Example 1
Linear regression modelling of the gas-furnace benchmark data. Fig. 1.13 shows
the regression formula that approximates the data, the desired values of the time
series versus those approximated by the formula, and the two error measures: root
mean square error and the nondimensional error index.
Example 2
The following linear regression approximates the Mackey-Glass benchmark data
(data are normalised):

Y = 0.93 − 0.3·x1 − 0.01·x2 − 0.56·x3 + 0.86·x4




Fig. 1.13 Linear regression modelling of the gas-furnace benchmark data. The figure shows: the regression
formula that approximates the data; the desired versus the approximated by the formula values of the time
series; and the two error measures: root mean square error and the nondimensional error index.

Example 3
The following multiple linear regression model (three linear regressions)
discriminates the samples among the three classes of the Iris data (data are
normalised):

Class 1 (Setosa) = 0.12 + 0.06·x1 + 0.24·x2 − 0.22·x3 − 0.06·x4
Class 2 (Versicolor) = 1.5 − 0.02·x1 − 0.44·x2 + 0.22·x3 − 0.48·x4
Class 3 (Virginica) = −0.68 − 0.04·x1 + 0.2·x2 + 0.004·x3 + 0.55·x4


Support Vector Machines (SVM)

The support vector machine was first proposed by Vapnik and his group at AT&T
Bell laboratories (Vapnik, 1998). For a typical learning task defined as probability
estimation of output values y depending on input vectors x:

P(x, y) = P(y|x) · P(x)


an inductive SVM classifier aims to build a decision function

fL: x → {−1, +1}




based on a training set,

fL = L(Strain)

where Strain = {(x1, y1), (x2, y2), ..., (xn, yn)}


In the SVM theory, the computation of fL can be traced back to the classical
structural risk minimization approach, which determines the classification decision
function by minimizing the empirical risk, as


Remp(f) = (1/l) Σi=1..N f(xi, yi)


where N and f represent the size of the set of examples and the classification
decision function, respectively; l is a constant for normalization. For SVM, the
primary concern is determining an optimal separating hyperplane that gives a low
generalization error. Usually, the classification decision function in the linearly
separable problem is represented by
f(w, b) = sign(w·x + b)


In SVM, this optimal separating hyperplane is determined by giving the largest
margin of separation between vectors that belong to different classes. It bisects
the shortest line between the convex hulls of the two classes, which is required to
satisfy the following constrained minimization conditions.
Minimize: (1/2)·wᵀw
Subject to: yi(w·xi + b) >= 1


For the linearly nonseparable case, the minimization problem needs to be modified
to allow misclassified data points. This modification results in a soft margin
classifier that allows but penalizes errors, by introducing a new set of variables ξi
as the measurement of violation of the constraints (Fig. 1.14a):

Minimize: (1/2)·wᵀw + C·Σi=1..N ξi^k
Subject to: yi(w·Φ(xi) + b) >= 1 − ξi

where C and k are used to weight the penalizing variables ξi, and Φ is a
nonlinear function that maps the input space into a higher-dimensional space. In
order to solve the above equation, we need to construct a set of functions and
implement the classical risk minimization on this set. Here, a Lagrangian method
is used to solve the above problem. Then, the above equation can be written as

Minimize: F(Λ) = −Λᵀ1 + (1/2)·ΛᵀDΛ
Subject to: Λᵀy = 0, C >= Λ >= 0













Fig. 1.14 (a) A SVM classifier builds a hyperplane to separate samples from different classes in a higher-dimensional space; the new vectors on the border are called support vectors; (b) a SVM tree, where each node
is a SVM; (c) an evolving SVM tree evolves new nodes incrementally (from S. Pang et al. (2005)).



where Λ = (λ1, ..., λl) and Dij = yi·yj·(xi·xj) for binary classification, and the decision
function can be rewritten as

f(x) = sign( Σi yi·λi·(x·xi) + b )



For more details, the reader is referred to Vapnik (1998) and Cherkassky and
Mulier (1998).
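The decision function above can be illustrated on a toy problem where the dual solution is known in closed form; the values λ1 = λ2 = 0.5, b = 0 below are worked out by hand for two support vectors, not taken from the book:

```python
def svm_decision(x, support_vectors, labels, lambdas, b):
    """SVM decision function f(x) = sign( sum_i y_i * lambda_i * (x . x_i) + b )."""
    s = sum(y * lam * sum(a * c for a, c in zip(x, sv))
            for sv, y, lam in zip(support_vectors, labels, lambdas)) + b
    return 1 if s >= 0 else -1

# Toy linearly separable problem: support vectors (1, 0) and (-1, 0) with
# labels +1 and -1. Solving the dual by hand gives lambda_1 = lambda_2 = 0.5
# and b = 0, i.e. the separating hyperplane x1 = 0, with the margin
# constraints y_i (w . x_i + b) = 1 satisfied exactly.
svs = [(1.0, 0.0), (-1.0, 0.0)]
ys = [1, -1]
lams = [0.5, 0.5]
print(svm_decision((2.0, 3.0), svs, ys, lams, 0.0))   # 1
print(svm_decision((-0.5, 1.0), svs, ys, lams, 0.0))  # -1
```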

Transductive SVM (TSVM)

In contrast to the inductive SVM learning method described above, transductive
SVM (TSVM) learning includes knowledge of test set Stest in the training procedure,
thus the above learning function of inductive SVM can be reformulated as (Kasabov
and Pang, 2004)
fL = L(Strain, Stest)

where Strain = {(x1, y1), (x2, y2), ..., (xn, yn)}


Therefore, in a linearly separable data case, to find a labelling y1*, y2*, ..., yn* of the
test data, the hyperplane <w, b> should separate both training and test data
with maximum margin:

Minimize over (y1*, y2*, ..., yn*, w, b): (1/2)·wᵀw
Subject to: yi(w·xi + b) >= 1
yj*(w·xj* + b) >= 1

To be able to handle nonseparable data, similar to the way in the above inductive
SVM, the learning process of transductive SVM can be formulated as the following
optimization problem:

Minimize over (y1*, ..., yn*, w, b, ξ1, ..., ξn, ξ1*, ..., ξk*):
(1/2)·wᵀw + C·Σi ξi^k + C*·Σj (ξj*)^k
Subject to: yi(w·xi + b) >= 1 − ξi
yj*(w·xj* + b) >= 1 − ξj*

where C* is the effect factor of the query examples, and C*·ξj* is the effect term of
the jth query example in the above objective function.




The SVM tree is constructed by a divide-and-conquer approach using a binary
class-specific clustering and SVM classification technique; see, for example,
Fig. 1.14b (Pang and Kasabov, 2004; Pang et al., 2006).
Basically, we perform two procedures at each node in the above tree generation. First, the class-specific clustering performs a rough classification because
it splits the data into two disjoint subsets based on the global features. Next, the
SVM classifier performs a fine classification based on training supported by the
previous separation result.
Figure 1.14b is an example of the SVM tree which is derived from the above
SVM tree construction. As mentioned, the SVM test starts at the root node 1. If
the result T1(x) = +1 is observed, the test T2(x) is performed. If the condition
T1(x) = +1 and T2(x) = −1 is observed, then the input data x are assigned to
class a, and so forth.
SVM trees can evolve new nodes, i.e. new local SVMs, to accommodate new data
from an input data stream. An example of an evolving SVM tree is shown in
Fig. 1.14c (Pang et al., 2006).


Inductive Versus Transductive Learning and Reasoning.

Global, Local, and Personalised Modelling
Inductive Global and Local Modelling

Most learning models and systems in artificial intelligence developed and implemented thus far are based on inductive inference methods, where a model (a
function) is derived from data representing the problem space and this model
is further applied to new data (Fig. 1.15a). The model is usually created without
taking into account any information about a particular new data vector (test data).
An error is measured to estimate how well the new data fit into the model.
The models are in most cases global models, covering the whole problem space.
Such models are, for example, regression functions, some NN models, and also
some SVM models, depending on the kernel function they use. These models are
difficult to update on new data without using old data previously used to derive
the models. Creating a global model (function) that would be valid for the whole
problem space is a difficult task, and in most cases solving it is not necessary.
Some global models may consist of many local models that collectively cover
the whole space and can be adjusted incrementally on new data. The output for
a new vector is calculated based on the activation of one or several neighbouring
local models. Such systems are the evolving connectionist systems (ECOS), for
example, EFuNN and DENFIS, presented in Chapters 3 and 5, respectively.
The inductive learning and inference approach is useful when a global model
(the big picture) of the problem is needed even in its very approximate form.
In some models (e.g. ECOS) it is possible to apply incremental online learning to
adjust this model on new data and trace its evolution.




Transductive Modelling. WKNN

In contrast to the inductive learning and inference methods, transductive inference
methods estimate the value of a potential model (function) only in a single point
of the space (the new data vector) utilising additional information related to this
point (Vapnik, 1998). This approach seems to be more appropriate for clinical and
medical applications of learning systems, where the focus is not on the model,
but on the individual patient. Each individual data vector (e.g. a patient in the
medical area, a future time moment for predicting a time series, or a target day
for predicting a stock index) may need an individual local model that best fits
the new data, rather than a global model. In the latter case the new data are
matched into a model without taking into account any specific information about
these data.
Transductive inference is concerned with the estimation of a function in a
single point of the space only. For every new input vector xi that needs to be
processed for a prognostic task, the Ni nearest neighbours, which form a subdata
set Di, are derived from an existing dataset D and, if necessary, generated from an
existing model M. A new model Mi is dynamically created from these samples to
approximate the function in the point xi. The system is then used to calculate the
output value yi for this input vector xi (Fig. 1.15b,c).
A very simple transductive inference method, the k-nearest neighbour method
(K-NN) is briefly introduced here. In the K-NN method, the output value yi
for a new vector xi is calculated as the average of the output values of the k
nearest samples from the dataset Di. In the weighted K-NN method (WKNN) the
output yi is calculated based on the distance of the Ni nearest neighbour samples
to xi:

yi = Σj wj·yj / Σj wj

where yj is the output value for the sample xj from Di and wj are their weights,
measured as

wj = (max(d) − dj + min(d)) / max(d)


The vector d = (d1, d2, ..., dNi) is defined as the distances between the new input
vector xi and the Ni nearest neighbours (xj, yj) for j = 1 to Ni; max(d) and min(d) are
the maximum and minimum values in d, respectively. The weights wj have the
values between min(d)/max(d) and 1; the sample with the minimum distance to
the new input vector has the weight value of 1, and it has the value min(d)/max(d)
in case of maximum distance.
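A WKNN sketch following these formulas (pure Python; the data and function names are illustrative):

```python
def euclidean(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def wknn(x_new, samples, k):
    """Weighted k-nearest-neighbour prediction for x_new;
    `samples` is a list of (input_vector, output_value) pairs."""
    neighbours = sorted(samples, key=lambda s: euclidean(x_new, s[0]))[:k]
    d = [euclidean(x_new, v) for v, _ in neighbours]
    d_min, d_max = min(d), max(d)
    if d_max == 0.0:  # x_new coincides with all its neighbours
        return sum(y for _, y in neighbours) / k
    # Nearest sample gets weight 1; farthest gets min(d)/max(d)
    w = [(d_max - dj + d_min) / d_max for dj in d]
    return sum(wj * y for wj, (_, y) in zip(w, neighbours)) / sum(w)

data = [((0.0, 0.0), 0.0), ((1.0, 0.0), 1.0), ((0.0, 1.0), 1.0), ((2.0, 2.0), 4.0)]
print(wknn((0.1, 0.1), data, k=3))  # dominated by the nearest sample, near 0
```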




Fig. 1.15 (a) Inductive learning: given a training set, construct a model M that will accurately represent the
examples in the set; recall the model M on a new example xi to evaluate the output yi. (b) Transductive
learning: for every new input vector xi, a new model Mi is dynamically created from the available samples to
approximate the function in the locality of the point xi. (c) A transductive model is created with a subtraining
dataset of neighbouring samples for each new input vector. This is shown here as two vectors x1 and x2.

Distance is usually measured as Euclidean distance:

||x − y|| = sqrt( Σj=1..P (xj − yj)² )




Distance can be also measured as Pearson correlation distance, Hamming distance,

cosine distance, etc. (Cherkassky and Mulier, 1998).


Weighted Examples, Weighted Variables K-NN: WWKNN

In the WKNN the calculated output for a new input vector depends not only on
the number of its neighbouring vectors and their output values (class labels), as
in the KNN method, but on the distance between these vectors and the new vector
which is represented as a weight vector (W). It is assumed that all v input variables
are used and the distance is measured in a v-dimensional Euclidean space with all
variables having the same impact on the output variable.
But when the variables are ranked in terms of their discriminative power of class
samples over the whole v-dimensional space, we can see that different variables
have different importance to separate samples from different classes, therefore a
different impact on the performance of a classification model. If we measure the
discriminative power of the same variables for a subspace (local space) of the
problem space, the variables may have a different ranking.
Using the ranking of the variables in terms of a discriminative power within
the neighborhood of K vectors, when calculating the output for the new input
vector, is the main idea behind the WWKNN algorithm (Kasabov, 2007b), which
includes one more weight vector to weigh the importance of the variables. The
distance dj between a new vector xi and a neighbouring one xj in (1.36) is now
calculated as:

dj = sqrt( Σl=1..v cil·(xil − xjl)² )


where cil is the coefficient weighting variable xl in the neighbourhood of xi. It can be
calculated using a signal-to-noise ratio procedure that ranks each variable across
all vectors in the neighbourhood set Di of Ni vectors:

Ci = (ci1, ci2, ..., civ)

cil = Sl / Σ(Sl), for l = 1, 2, ..., v

Sl = abs( Ml(class 1) − Ml(class 2) ) / ( Stdl(class 1) + Stdl(class 2) )

Here Ml(class k) and Stdl(class k) are, respectively, the mean value and the standard
deviation of variable xl for all vectors in Di that belong to class k (k = 1, 2).
deviation of variable xl for all vectors in Di that belong to class 1.
The new distance measure, that weighs all variables according to their importance as discriminating factors in the neighbourhood area Di, is the new element
in the WWKNN algorithm when compared to the WKNN.
Using the WWKNN algorithm, a personalised profile of the variable importance can be derived for any new input vector, representing a new piece of
personalised knowledge.
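The WWKNN weighting can be sketched as follows (pure Python; the neighbourhood data are made up so that variable 1 discriminates the classes and variable 2 is pure noise):

```python
from statistics import mean, pstdev

def snr_weights(neighbours, labels):
    """Normalised signal-to-noise ratio coefficients for the variables:
    S_l = |M_l(class 1) - M_l(class 2)| / (Std_l(class 1) + Std_l(class 2)),
    computed over the neighbourhood vectors, then normalised to sum to 1."""
    c1 = [v for v, y in zip(neighbours, labels) if y == 1]
    c2 = [v for v, y in zip(neighbours, labels) if y == 2]
    s = []
    for l in range(len(neighbours[0])):
        diff = abs(mean(v[l] for v in c1) - mean(v[l] for v in c2))
        spread = pstdev([v[l] for v in c1]) + pstdev([v[l] for v in c2])
        s.append(diff / spread if spread > 0 else 0.0)
    total = sum(s)
    return [x / total for x in s] if total > 0 else s

def wwknn_distance(xi, xj, c):
    """Variable-weighted distance d_j = sqrt( sum_l c_l * (x_il - x_jl)^2 )."""
    return sum(cl * (a - b) ** 2 for cl, a, b in zip(c, xi, xj)) ** 0.5

# Variable 1 separates the two classes cleanly; variable 2 is pure noise,
# so it receives zero weight and drops out of the distance.
neighbours = [(0.0, 5.0), (0.1, 1.0), (1.0, 5.0), (0.9, 1.0)]
labels = [1, 1, 2, 2]
c = snr_weights(neighbours, labels)
print(c)  # [1.0, 0.0]
```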



Weighting variables in personalised models is used in the TWNFI models (transductive weighted neuro-fuzzy inference) in Song and Kasabov (2005, 2006).
There are several open problems related to transductive learning and reasoning,
e.g. how to choose the optimal number of vectors in a neighbourhood and the
optimal number of variables, which for different new vectors may be different
(Mohan and Kasabov, 2005).


Model Validation

When a machine learning model is built based on a dataset S, it needs to be
validated in terms of its generalisation ability to produce good results on new,
unseen data samples. There are several ways to validate a model:
1) Train-test split of data: Splitting the dataset S into two sets: Str for training,
and Sts for testing the model.
2) K-fold cross-validation (e.g. 3-, 5-, or 10-fold): in this case the dataset S is split randomly
into k subsets S1, S2, ..., Sk, and i = 1, 2, ..., k times a model Mi is created on the
dataset S − Si and tested on the set Si; the mean accuracy across all k experiments
is calculated.
3) Leave-one-out cross-validation (a special case of the above method, where the
dataset S is split N times; each subset contains only one sample).
What concerns the whole task of feature selection, model creation, and model
validation, the above methods can be applied in two different ways:
1) A biased way: features are selected from the whole set S using a filtering-based
method, and then a model is created and validated on the selected features.
2) An unbiased way: for every data subset Si in a cross-validation procedure,
first features Fi are selected from the set S − Si (using some of the above-discussed
methods, e.g. SNR) and then a model is created based on the feature set Fi;
the model Mi is validated on Si using features Fi. The leave-one-out version of this
procedure is outlined in Box 1.2.
Box 1.2. Leave-one-out cross validation procedure
For i := 1 to N do
  Take out sample Si from the dataset S
  Use the rest of the samples (S − Si) for feature selection Fi (optional)
  Train a model Mi on S − Si using features Fi
  Test the model Mi on the left-out sample Si and evaluate the error Ei
Evaluate the overall mean error
Evaluate the features used and their frequency of selection across the iterations
Train a final model M on all data and on the most frequently selected features
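The procedure in Box 1.2 (without the optional feature-selection step) can be sketched as follows; the `train`/`predict` callables and the tiny dataset are illustrative only:

```python
def loocv(samples, train, predict):
    """Leave-one-out cross-validation: N folds, each holding out one sample;
    returns the mean absolute error over the folds."""
    errors = []
    for i in range(len(samples)):
        x, y = samples[i]
        model = train(samples[:i] + samples[i + 1:])  # train on S - Si
        errors.append(abs(predict(model, x) - y))
    return sum(errors) / len(errors)

# Tiny demonstration with a mean predictor that ignores the input x
data = [(0, 1.0), (1, 3.0), (2, 2.0)]
train = lambda s: sum(y for _, y in s) / len(s)
predict = lambda model, x: model
print(loocv(data, train, predict))  # 1.0
```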



The unbiased leave-one-out procedure is illustrated on another benchmark
dataset that is used further in the book, the leukaemia classification problem
of AML/ALL classes (Golub et al., 1999). The dataset consists of 38 samples for
training a model and 34 test samples, each having 7129 variables representing the
expression of genes; each sample belongs to one of two classes, the AML or the
ALL leukaemia type.
Figure 1.16a shows the result of the unbiased feature selection and model
validation procedure, where only the top four genes are selected on each of the
38 runs of the procedure and a k-NN model is used, k = 3. The overall accuracy
is above 92% and the top selected four genes are shown in the diagram with their
gene numbers.
The top four genes selected above are used to build a (final) MLR model and to
test it on the test data of 34 samples using the same four variables. The results, in
the form of a confusion table, are shown in Fig. 1.16b. The coefficients of each of
the regression formulas (shown in a box) represent the participation of each of the
variables in terms of positive or negative and in terms of importance. This is important
knowledge contained in the MLR model that needs to be further analysed.
A transductive modelling approach can be applied where, for every vector from
the test data, the closest K samples are selected from the training data using the
already-selected four genes, and an individual MLR model is created for, and then
tested on, that sample.



1) Select a classification or a prediction problem and a dataset for it (e.g. from the UC Irvine repository of machine learning databases).
2) Select features using some of the methods from this chapter (e.g. SNR, t-test).
3) Create a global statistical model using MLR through inductive learning.
4) Validate the model and evaluate its accuracy in a leave-one-out cross-validation mode.
5) Create individual models through transductive learning and evaluate their
average accuracy.
6) Answer the following questions.
Q1. Which of the models is adaptive to new data?
Q2. What knowledge can be learned from the models?


Summary and Open Problems

This chapter introduces the basic concepts in CI modelling and some benchmark
datasets that are used in the rest of the chapters.


Evolving Connectionist Systems



Fig. 1.16 (a) The result of an unbiased feature selection and model validation procedure, where only the top
four genes are selected at each of the 38 runs of the procedure, using the SNR method to rank the variables
and a multiple linear regression (MLR) model for classification. The overall accuracy is above 92% and the top
selected four genes are shown in the diagram with their gene numbers. (b) The four genes selected in (a)
are used to build the final inductive MLR model and to test it on the test data of 34 samples using the
same four variables. The results are shown as a confusion table (a proprietary software package, SIFTWARE, is used).

Feature Selection, Model Creation, and Model Validation


This chapter also raises some open questions, such as:

How do we identify the problem space and the dimensionality in which a process
is evolving when only limited data have been collected?
Thus far, Euclidean space has predominantly been used, but is it appropriate to
use it for all cases?
Most of the machine learning models use time as a linear variable, but is that
the only way to present it?
How do we define the best model for the purpose of modelling an evolving process?
Prediction modelling in an open problem space: how is it verified and evaluated?
In an EIS it may be important how fast the intelligence emerges within a
learning system and within a population of such systems. How do we make this
process faster for both machines and humans?
Can a system become faster and more efficient than humans in acquiring intelligence, e.g. in learning multiple languages?
The rest of the chapters in this part present evolving connectionist methods
for incremental, adaptive, knowledge-based learning. The methods are illustrated
using several benchmark datasets, some of them presented in this chapter. These
methods are applied to real-world problems from life sciences and engineering in
Part II of the book. All these applications deal with complex, evolving, continuous,
dynamically changing processes.


Further Reading

Statistical Learning (Vapnik, 1998; Cherkassky and Mulier, 1998)

Incremental LDA Feature Selection and Modelling (Pang et al., 2005, 2006)
Incremental PCA Feature Selection and Modelling (Ozawa et al., 2005, 2006)
SVM (Vapnik, 1998)
Chaotic Processes (Barndorff-Nielsen et al., 1993; Gleick, 1987; Hoppensteadt,
1989; McCauley, 1994)
Emergence and Evolutionary Processes (Holland, 1998)
Introduction to the Principles of Artificial Neural Networks (Aleksander, 1989;
Aleksander and Morton, 1990; Amari, 1967, 1990; Arbib, 1972, 1987, 1995, 2002;
Bishop, 1995; Feldman, 1989; Hassoun, 1995; Haykin, 1994; Hecht-Nielsen, 1987;
Hertz et al., 1991; Hopfield, 1982; Kasabov, 1996; Rumelhart et al., 1986; Werbos,
1990; Zurada, 1992)
Principles and Classification of Online Learning Connectionist Models (Murata
et al., 1997; Saad, 1999)
ANN and MLP for Data Analysis (Gallinari et al., 1988)
Catastrophic Forgetting in Multilayer Perceptrons and Other ANN (Robins)
Time-series Prediction (Weigend et al., 1990; Weigend and Gershenfeld, 1993)
Local Learning (Bottou and Vapnik, 1992; Shastri, 1999)
Emerging Intelligence (EI) in Autonomous Robots (Nolfi and Floreano, 2000)



Integrating ANN with AI and Expert Systems (Barnden and Shrinivas, 1991;
Giacometti et al., 1992; Hendler and Dickens, 1991; Hinton, 1990; Kasabov, 1990;
Medsker, 1994; Morasso et al., 1992; Touretzky and Hinton, 1985, 1988; Towell
and Shavlik, 1993, 1994; Tresp et al., 1993)
Integrating ANN with Fuzzy Logic (Furuhashi et al., 1993; Hayashi, 1991;
Kasabov, 1996; Kosko, 1992; Takagi, 1990; Yamakawa and Tomoda, 1989)
Incremental PCA and LDA (Pang et al., 2005a, 2005b; Ozawa et al., 2004, 2005)
Transductive Learning and Reasoning (Vapnik, 1998; Cherkassky and Mulier,
1998; Song and Kasabov, 2005, 2006)
Comparison Among Local, Global, Inductive, and Transductive Modelling
(Kasabov, 2007b)

2. Evolving Connectionist Methods for Unsupervised Learning

Unsupervised learning methods utilise data that contain input vectors only. Evolving
unsupervised learning methods are about learning from a stream of unlabelled
data, e.g. financial market data, biological data, patient medical data, weather data, mobile
telephone calls, or radioastronomy signals from the universe. They develop their
structure to model the incoming data in an incremental, continuous learning mode.
They learn statistical patterns such as clusters, probability distribution, and so on.
This chapter presents various methods for unsupervised adaptive incremental
learning that include clustering, prototype learning, and vector quantisation,
along with their generic applications for data analysis, filling missing values in
data, classification, transductive learning, and reasoning. The learned clusters,
categories, and the like represent new knowledge. The emphasis here is on
model adaptability (the models are evolving) and on features that facilitate rule
extraction and pattern/knowledge discovery, which are the main objectives of the
knowledge engineering approach taken in this book. The chapter material
is presented in the following sections.

Unsupervised learning from data; distance measure

Evolving clustering. ECM.
Vector quantisation. SOM. ESOM
Prototype learning. ART
Generic applications of unsupervised learning methods
Further readings


Unsupervised Learning from Data. Distance Measure

General Notions

As pointed out in Chapter 1, many real-world information systems use data
streams. Such data streams are, for example: financial data such as stock market
indexes; video streams transferred across the Internet; biological information,


made available in an increasing volume, such as DNA and protein data; patient
data; climate information; radioastronomy signals; etc. To manipulate a large
amount of data in an adaptive mode and to extract useful information from it,
adaptive, knowledge-based methods are needed.
Evolving, unsupervised learning methods are concerned with learning statistical
and other information characteristics and knowledge from a continuous stream
of data. The distribution of the data in the stream may not be known in advance.
Such unsupervised methods are adaptive clustering, adaptive vector quantisation,
and adaptive prototype learning presented in the next sections. The similarity and
the difference among clustering, quantisation, and prototyping is schematically
illustrated in Fig. 2.1. The time line and time-arrow on the figure show the order
in which the data vectors are presented to the learning system. Different methods
for unsupervised evolving connectionist systems are presented and illustrated in
the rest of the chapter.


Measuring Distance in Unsupervised Learning Techniques

In the context of clustering, quantisation, and prototype learning, we can assume
that we have a data manifold X of dimension d; i.e., X ⊆ R^d. We aim at finding a
set of vectors c1, …, cn that encodes the data manifold with small quantisation
error. Vector quantisation usually utilizes a competitive rule; i.e. a new input
vector x is represented by the best matching unit ci that satisfies the condition:

‖x − ci‖ ≤ ‖x − cj‖, for all j ≠ i, i, j ∈ {1, …, n}

where ‖x − ci‖ measures a distance.


Fig. 2.1 Clustering, vector quantization, and prototype learning as unsupervised learning techniques, illustrated
in the original d-dimensional space; the time arrow shows the order in which the data vectors are presented to
the learning system. In the clustering panel two clusters are defined; in the prototype learning panel four
prototypes (P1 to P4) are found.

Evolving Connectionist Methods for Unsupervised Learning


The goal is to minimize the reconstruction error:

E = ∫X ‖x − ci(x)‖ p(x) dx

where p(x) is the probability distribution of the data vectors over the manifold X.
Measuring distance is a fundamental issue in all the above-listed methods. The
following are some of the most used methods for measuring distance, illustrated
on two n-dimensional data vectors x = (x1, x2, …, xn) and y = (y1, y2, …, yn).

Euclidean distance:

D(x, y) = ( Σi=1..n (xi − yi)² )^(1/2)

Hamming distance:

D(x, y) = Σi=1..n |xi − yi|

where absolute values of the difference between the two vectors are used.

Local fuzzy normalized distance (see Chapter 3; also Kasabov (1998)):
A local normalised fuzzy distance between two fuzzy membership vectors xf and
yf, which represent the membership degrees to which two real data vectors x and y
belong to predefined fuzzy membership functions, is calculated as:

D(xf, yf) = ‖xf − yf‖ / ‖xf + yf‖

where ‖xf − yf‖ denotes the sum of all the absolute values of the vector obtained
after vector subtraction (or summation, in the case of ‖xf + yf‖) of the two vectors xf and
yf of fuzzy membership values; / denotes division.

Cosine distance:

D(x, y) = 1 − Σi xi yi / ( Σi xi² · Σi yi² )^(1/2)

Correlation distance:

D(x, y) = 1 − Σi (xi − x̄)(yi − ȳ) / ( Σi (xi − x̄)² · Σi (yi − ȳ)² )^(1/2)

where x̄ and ȳ are the mean values of the vectors x and y.

Some examples of measuring distance are shown in Fig. 2.2, which illustrates both
Euclidean and fuzzy normalized distance. Using Euclidean distance may require
normalization beforehand, as illustrated in the figure. In this figure x1 is in the
range [0, 100] and x2 is in the range [0, 1]. If x1 is not normalised, the
Euclidean distance D(A, B) is greater than the distance D(C, D). Otherwise, it will






Fig. 2.2 Euclidean versus fuzzy normalized and fuzzy distance illustrated on four points in a two-dimensional
space (x1, x2). If the variable values are not normalised, the Euclidean distance between A and B will be
greater than the distance between D and C as the range of variable x1 is 100 times larger than the range
of the variable x2. If either normalised or fuzzified (three membership functions, denoted S for small, M for
medium, and H for high) values are used, the relative distance between D and C will be greater than the
distance between A and B.




Fig. 2.3 Voronoi tessellation (the straight solid lines) versus hypersphere separation (the circles) of a hypothetical
problem space separating three clusters r1, r2, and r3, defined by their centres and hyperspheres; two new data
points d1 and d2 are shown.

Evolving Connectionist Methods for Unsupervised Learning


be the opposite. For the fuzzy normalised distance, D(A, B) < D(C, D) always
holds. In the example, three membership functions are used: Small (S), Medium
(M), and High (H) for each of the two variables.
Figure 2.3 illustrates two ways of space partitioning among three nodes r1,
r2, and r3: Voronoi tessellation (see Okabe et al. (1992)), the straight lines, and
hyperspheres. The latter is described in detail in Chapter 3 for the EFuNN model.
When using Voronoi tessellation a new data vector d1 will be allocated to node
r2, whereas if using hyperspheres, it will be allocated to r1. A new data point d2
will be allocated to r2 in the first case, but there will not be a clear allocation in
the second case.
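The two allocation schemes can be contrasted in a few lines of Python. The centres and radii below are a hypothetical configuration chosen to reproduce the behaviour described above, not values taken from the figure:

```python
def nearest_centre(x, centres):
    # Voronoi allocation: index of the closest centre (always defined)
    d2 = [sum((a - b) ** 2 for a, b in zip(x, c)) for c in centres]
    return d2.index(min(d2))

def hypersphere_allocation(x, centres, radii):
    # Hypersphere allocation: indices of all centres whose sphere contains x
    hits = []
    for i, (c, r) in enumerate(zip(centres, radii)):
        if sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5 <= r:
            hits.append(i)
    return hits  # may be empty (no clear allocation) or contain several

# hypothetical centres for r1, r2, r3; r1 has a much larger hypersphere
centres = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]
radii = [1.5, 0.3, 0.5]

d1 = (1.2, 0.0)  # Voronoi allocates d1 to r2, but only r1's sphere contains it
d2 = (3.0, 0.0)  # Voronoi allocates d2 to r2; no sphere contains it at all
```

Running both functions on d1 and d2 mirrors the discussion above: the Voronoi rule always produces an answer, whereas the hypersphere rule can leave a point unallocated.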


Batch-Mode versus Evolving Clustering

Clustering is the process of defining how data are grouped together based on
similarity. Clustering results in the following outcomes.
Cluster centres: These are the geometrical centers of the data grouped together;
their number can be either predefined (batch-mode clustering) or not defined
a priori but evolving.
Membership values, defining for each data vector to what cluster it belongs.
This can be either a crisp value of 1 (the vector belongs to a cluster) or 0 (it
does not belong to a cluster, as it is in the k-means method), or a fuzzy value
between 0 and 1 showing the level of belonging; in this case the clusters may
overlap (fuzzy clustering).
Evolving, adaptive clustering is the process of finding how data from a continuous
input stream z(t), t = 0, 1, 2, …, are grouped (clustered) together at any time
moment t. It requires finding the cluster centres, the cluster area occupied by each
cluster in the data space, and the membership degrees to which data examples
belong to these clusters.
New data, entered into the system, are either associated with existing clusters,
whose characteristics are then changed, or new clusters are created. Based
on the current p vectors from an input stream x1, x2, x3, …, xp, n clusters
are defined in the same input space, so that n << p. The cluster centres can be
represented as points in the input space X of the data points. Adaptive evolving
clustering assumes that each input data vector from the continuous input data
stream is presented once to the system as it is assumed that it will not be accessible
again. Adaptive clustering is a type of incremental learning, so each new data
example xi contributes to the changes in the clusters and this process can be traced
over time. Through tracing an adaptive clustering procedure, it can be observed
and understood how the modelled process has developed over time.
In contrast to the adaptive incremental clustering, off-line batch mode clustering
methods are usually iterative, requiring many iterations to find the cluster
centres that optimise an objective function. Such a function minimizes the



distance between the data elements and their clusters, and maximizes the distance
between the cluster centres. Such methods for example are k-means clustering
(MacQueen, 1967), hierarchical clustering, and fuzzy C-means clustering (Bezdek,
1981, 1987, 1993).

2.2.2 K-Means Clustering

A popular clustering method is the K-means algorithm (MacQueen, 1967), which
finds K disjoint groups of data (clusters) and their cluster centres as the mean of
data vectors within a cluster. This procedure minimises the sum of the distances for
each data vector and its closest cluster centre. Usually, this is done in a batch mode
through many iterations, starting with K randomly selected cluster centres (Lloyd,
1982). The adaptive version of the K-means algorithm (MacQueen, 1967; Moody
and Darken, 1989), applied without prior knowledge of the data distribution, is a
stochastic gradient descent on Eq. (2.2). Starting with K randomly selected cluster
centres ci, i = 1, 2, …, K, for each new data vector x the closest cluster centre is
updated as follows:

Δci = ε (x − ci), if ci is the closest cluster centre for x; Δcj = 0 otherwise, for j ≠ i

where ε is a small learning rate.

This learning rule is also referred to as the local k-means algorithm. It is of the
winner-takes-all type and can operate in a dynamic environment with continuously arriving data,
but it can also suffer from confinement to a local minimum
(Martinetz et al., 1993). To avoid this problem, soft computing schemes
have been proposed to modify the reference vectors (cluster centres), in which not only the
winner prototype is modified, but all reference vectors are adjusted depending
on their proximity to the input vector.
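The local (online) k-means rule can be sketched in Python as a single pass over a data stream. For simplicity, the first K stream vectors seed the centres (an assumption of this sketch; the method as described starts from random positions):

```python
def online_kmeans(stream, k, lr=0.1):
    """One-pass adaptive k-means over a data stream.

    The first k vectors seed the centres; every later vector moves only
    the winning centre by delta c_i = lr * (x - c_i), the
    winner-takes-all rule described in the text.
    """
    it = iter(stream)
    centres = [list(next(it)) for _ in range(k)]
    for x in it:
        # find the closest centre (the "winner")
        i = min(range(k),
                key=lambda j: sum((a - c) ** 2 for a, c in zip(x, centres[j])))
        # move only the winner towards x; all other centres stay unchanged
        centres[i] = [c + lr * (a - c) for a, c in zip(x, centres[i])]
    return centres
```

Because each example is seen once and then discarded, the procedure uses constant memory regardless of stream length, which is what makes it suitable for continuously arriving data.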
In both batch mode and adaptive mode of K-means clustering, the number of
clusters is predefined in advance. The K-means clustering method uses an iterative
algorithm that minimizes the sum of distances from each sample to its cluster
centre over all clusters until the sum cannot be decreased further. The control of
the minimisation procedure is done through choosing the number of clusters, the
starting positions of the clusters (otherwise they will be randomly positioned), and
number of iterations.
As the data vectors are grouped together into a predefined number of clusters
based on a similarity measure, if Euclidean distance is used the clustering procedure
may result in different cluster centres for normalised data (scaled into a given
interval, e.g. [0, 1], in either a linear or a nonlinear fashion) versus non-normalised data; see Fig. 2.4a,b.
Another method for clustering is the DCA (dynamic clustering algorithm; Bruske
and Sommer (1995)). The method does not require a predefined number of clusters.
This algorithm has been used for dynamic analysis of fMRI cortical activation data.


Hierarchical Clustering

The hierarchical clustering procedure finds similarity (distance) between each pair
of samples using correlation analysis, and then represents this similarity as a

Evolving Connectionist Methods for Unsupervised Learning




Fig. 2.4 An illustration of k-means clustering on the case study of gas-furnace data (see Fig. 1.3). The procedure
results in different cluster centres and membership values for the data vectors that are not normalised, shown
in (a), versus linearly normalised in the interval [0,1] data as shown in (b).



Fig. 2.5 Hierarchical clustering: (a) of the Iris data, 4 variables; (b) of gene expression data of leukaemia
cancer, 12 variables.

dendrogram tree. Figure 2.5 shows two cases of hierarchical clustering: (a) the four
Iris input variables, and (b) a set of 12 gene expression variables, represented as
columns, for the leukaemia data (see Chapter 1).
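A single-linkage variant of this procedure can be sketched in Python; it records the order of merges from which a dendrogram would be drawn, and any pairwise distance function (e.g. a correlation-based one, as used above) can be passed in:

```python
def single_linkage(points, distance):
    """Agglomerative (bottom-up) clustering with single linkage.

    Repeatedly merges the two clusters whose closest members are nearest,
    returning the merge history (the levels of a dendrogram).
    """
    clusters = [[i] for i in range(len(points))]
    history = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance of the closest pair of members
                d = min(distance(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        history.append((list(clusters[i]), list(clusters[j]), d))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return history
```

The returned history pairs each merge with the distance at which it happened; cutting the dendrogram at a chosen distance threshold yields a flat clustering.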


Fuzzy Clustering

Fuzzy clustering results in finding cluster centres and fuzzy membership degrees
(numbers between 0 and 1) to which each data point belongs for each of the
existing clusters. In the C-means clustering method for each data point these
numbers add up to 1. Some other methods, such as the evolving clustering method
(ECM ) introduced in this chapter and the evolving fuzzy neural network (EFuNN)
introduced in the next chapter, define clusters as overlapping areas (e.g. hyperspheres) and a data point d can geometrically fall into several such overlapping
clusters. Then it is considered that this point belongs to each of these clusters, and
the membership degree is defined by the formula 1 − D(d, c), where D(d, c) is the
normalised Euclidean or normalised fuzzy distance between the data point d and
the cluster centre c (see the text below).
In Fig. 2.6 the fuzzy C-means clustering algorithm, proposed by Jim Bezdek
(1981) is outlined. A general description of fuzzy clustering is given in the
extended glossary. A validity criterion for measuring how well a set of fuzzy
clusters represents a dataset can be applied. One criterion is that a function

J(c) = Σi Σk μik² ( ‖xk − Vi‖² − ‖Vi − Mx‖² )

reaches a local minimum, where Mx is
the mean value of the variable x, Vi is a cluster centre, and μik is the membership
degree to which xk belongs to the cluster Vi. If the number of clusters is not
defined, then the clustering procedure should be applied until a local minimum
of J(c) is found, which means that c is the optimal number of clusters. One of the
advantages of the C-means fuzzy clustering algorithm is that it always converges



1. Initialise c fuzzy cluster centres V1, V2, …, Vc arbitrarily and calculate the membership degrees μik, i = 1, 2, …, c, k = 1, 2, …, n,
such that the general conditions are met.
2. Calculate the next values for the cluster centres:

Vi = Σk μik² xk / Σk μik², for i = 1, 2, …, c

3. Update the fuzzy degrees of membership:

μik = 1 / Σj=1..c (dik / djk), for dik > 0, for all i, k

where dik = ‖xk − Vi‖² and djk = ‖xk − Vj‖² (Euclidean distance).

4. If the currently calculated values Vi for the cluster centres are not different from the values calculated at the previous step
(subject to a small error ε), then stop the procedure; otherwise go to step 2.

Fig. 2.6 A general algorithm of Bezdek's fuzzy C-means clustering (Bezdek, 1981; from Kasabov (1996), MIT
Press, reproduced with permission).

to a strict local minimum. A possible deficiency is that the shape of the clusters is
ellipsoidal, which may not be the most suitable form for a particular dataset.
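The steps of Fig. 2.6 can be sketched in Python. With the fuzzifier fixed at m = 2, the membership update reduces to the plain ratio dik/djk of squared distances, matching the figure; random initial memberships stand in for the "arbitrary" initialisation of step 1:

```python
import random

def fuzzy_c_means(data, c, tol=1e-5, max_iter=100, seed=0):
    """Fuzzy C-means with fuzzifier m = 2, following the steps of Fig. 2.6."""
    rng = random.Random(seed)
    n, dim = len(data), len(data[0])
    # Step 1: arbitrary initial memberships, each row summing to 1
    u = [[rng.random() for _ in range(c)] for _ in range(n)]
    u = [[v / sum(row) for v in row] for row in u]
    centres = []
    for _ in range(max_iter):
        # Step 2: centres as membership-squared weighted means
        centres = []
        for i in range(c):
            w = [u[k][i] ** 2 for k in range(n)]
            centres.append([sum(w[k] * data[k][d] for k in range(n)) / sum(w)
                            for d in range(dim)])
        # Step 3: update memberships from squared Euclidean distances
        new_u = []
        for k in range(n):
            dist = [sum((data[k][d] - centres[i][d]) ** 2 for d in range(dim))
                    for i in range(c)]
            if min(dist) == 0.0:        # point sits exactly on a centre
                row = [1.0 if dd == 0.0 else 0.0 for dd in dist]
            else:
                row = [1.0 / sum(dist[i] / dist[j] for j in range(c))
                       for i in range(c)]
            new_u.append(row)
        # Step 4: stop when the memberships no longer change
        delta = max(abs(new_u[k][i] - u[k][i])
                    for k in range(n) for i in range(c))
        u = new_u
        if delta < tol:
            break
    return centres, u
```

On well-separated data the memberships become nearly crisp (close to 0 or 1), while points between clusters receive intermediate degrees — the overlap that distinguishes fuzzy from crisp clustering.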
In most of the clustering methods (k-means, fuzzy C-means, ECM, etc.) the
cluster centres are geometrical points in the input space, e.g. c = (x = 37, y =
23). But in some other methods, such as the EFuNN, not only may each data
point belong to the clusters to different degrees (fuzzy), but the cluster centres are
defined by fuzzy coordinates, with a geometrical area associated with each cluster.
For example, a cluster centre c can be defined as (x is Small to a degree of 0.7, and
y is Medium to a degree of 0.3; radius of the cluster area R = 0.3). Such clustering
techniques are called fuzzy-2 clustering in this book.
Fuzzy clustering is an important data analysis technique. It helps to better represent
the ambiguity in data. It can be used to direct how other techniques for
information processing are used afterwards. For example, the structure of a neural
network to be used for learning from a dataset can be largely determined once
the optimal number of fuzzy clusters is known.
Fuzzy clustering is applied on gene expression data in Chapter 8 and in Futschik
and Kasabov (2002).


Evolving Clustering Method (ECM)


Here, an evolving clustering method, ECM, is introduced that allows for adaptive
clustering of continuously incoming data. This method performs a simple evolving,
adaptive, maximum distance-based clustering (Kasabov and Song, 2002). Its
extension, ECMc, evolving clustering method with constrained optimisation, to
implement scatter partitioning of the input space for the purpose of deriving
fuzzy inference rules, is also presented. The ECM is specially designed for
adaptive evolving clustering, whereas ECMc involves some additional tuning of the



cluster centres, making it more suitable for combined adaptive and off-line tasks (combined
learning; see Chapter 1). The ECMc method takes the results from the ECM as
initial values, and further optimises the clusters in an off-line mode with a predefined objective function J(C, X) based on a distance measure between the data X and
the cluster centres C, given some constraints.
The adaptive evolving clustering method, ECM, is a fast one-pass algorithm for
dynamic clustering of an input stream of data (Kasabov and Song, 2002), where
there is no predefined number of clusters. It is a distance-based clustering method
where the cluster centres are represented by evolved nodes in an adaptive mode.
For any such cluster, the maximum distance MaxDist, between an example point
xi and the closest cluster centre, cannot be larger than a threshold value Dthr,
that is, a preset clustering parameter. This parameter would affect the number
of the evolved clusters. The threshold value Dthr can be made adjustable during
the adaptive clustering process, depending on some optimisation and self-tuning
criteria, such as current error, number of clusters, and so on.
During the clustering process, data examples come from a data stream and this
process starts with an empty set of clusters. When a new cluster Cj is created, its
cluster centre Ccj is defined and its cluster radius Ruj is initially set to zero. With
more examples presented one after another, some already created clusters will
be updated through changing their centres positions and increasing their cluster
radii. Which cluster will be updated and how much it will be changed depends on
the position of the current data example in the input space. A cluster Cj will not
be updated any more when its cluster radius Ruj has reached the value equal to
the threshold value Dthr. Figure 2.7 shows an illustration of the ECM clustering
process in a 2D space.

The ECM Algorithm

Step 0: Create the first cluster C1 by simply taking the position of the first
example from the input data stream as the first cluster centre Cc1 , and setting
a value 0 for its cluster radius Ru1 (see Fig. 2.7a).
Step 1: If all examples from the data stream have been processed, the clustering
process finishes. Else, the current input example xi is taken and the normalised
Euclidean distance Dij between this example and all n already created cluster
centres Ccj, Dij = ‖xi − Ccj‖, j = 1, 2, …, n, is calculated.
Step 2: If there is a cluster Cm with a centre Ccm, a cluster radius Rum, and a
distance value Dim such that:

Dim = ‖xi − Ccm‖ = min{Dij} = min{‖xi − Ccj‖}, for j = 1, 2, …, n, and Dim < Rum
the current example xi is considered as belonging to this cluster. In this case
neither a new cluster is created, nor any existing cluster updated (e.g. data
vectors x4 and x6 in Fig. 2.7). The algorithm then returns to Step 1.







Fig. 2.7 An evolving clustering process using ECM with consecutive examples x1 to x9 in a 2D space (from
Kasabov and Song (2002)); Ccj denotes a cluster centre, Cj a cluster, and Ruj a cluster radius: x1 causes the
ECM to create a new cluster C1(0); x2 to update cluster C1(0) to C1(1); x3 to create a new cluster C2(0); x4 to
do nothing; x5 to update cluster C1(1) to C1(2); x6 to do nothing; x7 to update cluster C2(0) to C2(1); x8 to
create a new cluster C3(0); x9 to update cluster C1(2) to C1(3).

Step 3: Find the cluster Ca (with a centre Cca, a cluster radius Rua, and a distance
value Dia) that has the minimum value Sia:

Sia = Dia + Rua = min{Sij}, j = 1, 2, …, n

Step 4: If Sia is greater than 2 × Dthr, the example xi does not belong to any
existing cluster. A new cluster is created in the same way as described in Step 0
(e.g. input data vectors x3 and x8 in Fig. 2.7). The algorithm then returns to
Step 1.
Step 5: If Sia is not greater than 2 × Dthr, the cluster Ca is updated by moving its
centre Cca and increasing its radius value Rua. The updated radius Rua(new) is set
to be equal to Sia/2, and the new centre Cca(new) is located on the line connecting the
input vector xi and the old cluster centre Cca, so that the distance from the new
centre Cca(new) to the point xi is equal to Rua(new) (e.g. input data points x2, x5, x7,
and x9 in Fig. 2.7). The algorithm then returns to Step 1.
In this way, the maximum distance from any cluster centre to the farthest example
that belongs to this cluster is kept less than the threshold value Dthr, although the
algorithm does not keep any information about past examples.
The objective (goal) function here is a very simple one: it ensures
that for every data example xi there is a cluster centre Ccj such that the distance
between xi and Ccj is less than the predefined threshold Dthr.
The evolving rules of ECM include:
A rule for a new cluster creation
A rule for existing cluster modification
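Steps 0 to 5 above can be sketched in Python. Plain Euclidean distance stands in for the normalised Euclidean distance of the original method (a simplifying assumption), and each cluster is kept as a centre–radius pair:

```python
def ecm(stream, dthr):
    """One-pass Evolving Clustering Method (sketch of Steps 0-5)."""
    clusters = []  # list of [centre, radius]

    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    for x in stream:
        if not clusters:                        # Step 0: first cluster
            clusters.append([list(x), 0.0])
            continue
        d = [dist(x, c) for c, _ in clusters]   # Step 1: distances to centres
        m = min(range(len(clusters)), key=lambda j: d[j])
        if d[m] < clusters[m][1]:               # Step 2: inside a cluster
            continue                            # no creation, no update
        s = [d[j] + clusters[j][1] for j in range(len(clusters))]
        a = min(range(len(clusters)), key=lambda j: s[j])  # Step 3
        if s[a] > 2 * dthr:                     # Step 4: create a new cluster
            clusters.append([list(x), 0.0])
        else:                                   # Step 5: update cluster a
            centre, _ = clusters[a]
            new_r = s[a] / 2
            if d[a] > 0:
                # move the centre along the line from x to the old centre so
                # that its distance to x equals the new radius
                t = new_r / d[a]
                centre = [xi + t * (ci - xi) for xi, ci in zip(x, centre)]
            clusters[a] = [centre, new_r]
    return clusters
```

Note that no past examples are stored: only the centre–radius pairs evolve, which is what makes the method a one-pass, constant-memory procedure.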


ECMc: ECM with Constrained Optimisation

The evolving clustering method with constrained optimisation, ECMc, applies a

global optimisation procedure to the result produced by the ECM. In addition
to what the ECM does, which is partitioning a dataset of p vectors xi, i =
1, 2, …, p, into n clusters Cj with cluster centres Ccj, j = 1, 2, …, n, the ECMc
further minimises an objective function based on a distance measure, subject to
given constraints. Using the normalised Euclidean distance as a measure between
an example vector xk, belonging to a cluster Cj, and the corresponding cluster
centre Ccj, the objective function is defined by the following equation:

J = Σj=1..n Jj

where Jj = Σ{k: xk ∈ Cj} ‖xk − Ccj‖ is the objective function within a cluster Cj, for each
j = 1, 2, …, n, and the constraints are defined as

‖xk − Ccj‖ ≤ Dthr, for xk ∈ Cj, j = 1, 2, …, n.

The clusters can be represented as a p × n binary membership matrix U, where
the element uij is 1 if the ith data point xi belongs to Cj, and 0 otherwise. Once
the cluster centres Ccj are defined, the values uij are derived as:

if ‖xi − Ccj‖ ≤ ‖xi − Cck‖ for every k = 1, …, n, k ≠ j, then uij = 1, else uij = 0 (2.13)
The ECMc algorithm works in an off-line iterative mode on a batch of data
repeating the steps shown in Fig. 2.8.
Combined alternative adaptive clustering with ECM and off-line optimisation
with ECMc can be used in a mode as follows. After the ECM is applied to a certain
sequence of data vectors, the ECMc optimisation is applied to the latest data from
a data window. After that, the system continues to work in an adaptive mode with
the ECM, and so on.



ECMc evolving clustering with constraint optimisation

Step 1: Initialise the cluster centres Ccj, j = 1, 2, …, n, that are produced through the adaptive
evolving clustering method ECM.
Step 2: Determine the membership matrix U.
Step 3: Employ the constrained minimisation method to modify the cluster centres.
Step 4: Calculate the objective function J.
Step 5: Stop if: (1) the result is below a certain tolerance value; or (2) the improvement of the
result compared with the previous iteration is below a certain threshold; or (3) the iteration
number of the minimising operation is over a certain value. Else, the algorithm returns to Step 2.

Fig. 2.8 The ECMc evolving clustering algorithm with constraint optimisation (from Kasabov and Song (2002)).


Comparative Analysis of ECM, ECMc, and Traditional

Clustering Techniques

Here, the gas-furnace time-series data is used as a benchmark dataset.

A benchmark process used widely so far is the burning process in a gas furnace
(Box and Jenkins, 1970). The gas methane is fed continuously into a furnace and
the produced CO2 gas is measured every minute. This process can theoretically
run forever supposing that there is a constant supply of methane and the burner
keeps mechanically intact. The process of CO2 emission is an evolving process. In
this case it depends on the quantity of the methane supplied and on the parameters of the environment. For simplicity, only 292 values of CO2 are taken in the
well-known gas-furnace benchmark problem. Given the value of methane at a
moment (t − 4) and the value of CO2 at the moment t, the task is to predict
what the value of CO2 at the moment (t + 1) will be.
and Jenkins (1970) along with some of their statistical characteristics, are plotted
in Fig. 1.3. It shows the 292 points from the time series, the 3D phase space, the
histogram, and the power spectrum of the frequency characteristics of the process.
Figure 2.9 displays a snapshot from the evolving clustering process of the 2D input
data (methane(t − 4), CO2(t)) with the ECM algorithm.

For the purpose of comparative analysis, the following clustering methods are
applied to the same dataset.

ECM, evolving clustering method (adaptive, one pass)

SC, subtractive clustering (off-line, one pass; see Bezdek (1993))
ECMc, evolving clustering with constrained optimisation (off-line)
FCMC, fuzzy C-means clustering (off-line; Bezdek (1981, 1987))
KMC, K-means clustering (off-line; MacQueen (1967))

Each of them partitions the data into a fixed number of clusters; in this case this
number was chosen to be 15. The maximum distance MaxD, between an example
and the corresponding cluster centre, as well as the value of the objective function
J are measured for comparison as shown in Table 2.1.



Fig. 2.9 A snapshot of the clustering process: cluster centres and their cluster radii when the ECM algorithm
is applied for online clustering on 146 gas-furnace data examples (see Fig. 1.3).
Table 2.1 Comparative results of clustering the gas-furnace dataset into
15 clusters using different clustering methods: ECM (online, one-pass),
SC (off-line, one-pass), ECMc (off-line), FCMC (off-line), and KMC (off-line);
the maximum distance MaxD and the objective function value J are compared
for each method.

Figure 2.10 displays the data points from the gas-furnace time series and the
cluster centres obtained through the use of different clustering techniques.
Both ECM (adaptive, one pass) and ECMc (optimized through objective function,
multiple passes) obtain minimum values of MaxD, which indicates that these
methods partition the dataset more uniformly than the other methods. We can
also predict that if all these clustering methods obtained the same value for MaxD,
then the ECM and the ECMc would result in a smaller number of partitions.
Considering that the ECM clustering is a one-pass adaptive process, the
objective function value J for ECM simulation is acceptable as it is comparable
with the J value for the other methods. With more data presented to the clustering
system from the data stream, the values for the objective functions for both
adaptive ECM and ECMc become closer; after a certain number of data points from
a time series the two methods will eventually produce the same results, provided
that the data are drawn from a closed space and the probability distribution of
the data stream does not change after a certain data point in the stream.

Fig. 2.10 Results of clustering of the gas-furnace dataset with the use of different clustering methods:
(a) ECM (online, one-pass); (b) SC (off-line, one-pass); (c) ECMc (off-line); (d) FCMC (off-line);
(e) KMC (off-line).
The advantages of the ECM online clustering technique can be summarised as follows:
1. ECM allows for unsupervised, lifelong, adaptive modelling of evolving processes.
2. ECM is much faster than the off-line clustering techniques.
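The one-pass behaviour described above can be sketched in code. The following is an illustrative reading of the ECM idea (cluster centres with radii, a distance threshold Dthr, cluster creation and enlargement), not the authors' reference implementation; the function name and the threshold value in the usage line are arbitrary.

```python
import numpy as np

def ecm(stream, dthr):
    """One-pass evolving clustering sketch: each example either falls inside
    an existing cluster, enlarges the best-matching cluster, or starts a new one."""
    centres, radii = [], []
    for x in stream:
        x = np.asarray(x, dtype=float)
        if not centres:                       # first example starts cluster 0
            centres.append(x)
            radii.append(0.0)
            continue
        d = [np.linalg.norm(x - c) for c in centres]
        i = int(np.argmin(d))
        if d[i] <= radii[i]:                  # inside an existing cluster: no change
            continue
        s = [d[j] + radii[j] for j in range(len(centres))]
        j = int(np.argmin(s))
        if s[j] > 2 * dthr:                   # too far from all clusters: create new
            centres.append(x)
            radii.append(0.0)
        else:                                 # enlarge cluster j and move its centre
            new_r = s[j] / 2
            # place the centre on the line from x to the old centre, at distance new_r
            direction = (centres[j] - x) / d[j]
            centres[j] = x + direction * new_r
            radii[j] = new_r
    return centres, radii
```

With two well-separated groups in the stream, the sketch evolves two clusters without the number of clusters being set in advance.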




Vector Quantisation. SOM and ESOM

Vector Quantisation

This is the process of transferring d-dimensional vectors into k-dimensional

vectors, where k << d, usually k = 2; i.e. this is a projection of d-dimensional
space into k-dimensional space whereas the distance between the data points is
maximally preserved in the new space.
In adaptive, evolving vector quantisation only one iteration may be applied
to each data vector from an input stream. This is different from off-line
vector quantisation, where many iterations are required. Such off-line quantisation
methods are principal component analysis (PCA) and self-organizing maps (SOM;
Kohonen (1977, 1982, 1990, 1993, 1997)). SOMs that have dynamically changing
structures are described in Section 2.4.3. Evolving SOMs (ESOM) are introduced in
Section 2.4.4.


Self-Organizing Maps (SOMs)

Here, the principles of the traditional SOMs are outlined first, and then some
modifications that allow for dynamic, adaptive node creation are presented.
Self-organizing maps belong to the vector quantisation methods, where prototypes are found in a prototype (feature) space (map) of dimension k rather than
in the input space of dimension d, k < d. In Kohonen's self-organizing feature
map (Kohonen, 1977, 1982, 1990, 1997) the new space is a topological map of 1,
2, 3, or more dimensions (Fig. 2.11).
The main ideas of SOM are as follows.
Each output neuron specializes during the training procedure to react to similar
input vectors from a group (cluster) of the input space. This characteristic of
SOM tends to be biologically plausible, as some evidence shows that the brain is
organised into regions which correspond to similar sensory stimuli. A SOM is
able to extract abstract information from multidimensional primary signals and
to represent it as a location in one-, two-, three-, etc. dimensional space.



Fig. 2.11 A schematic diagram of a simple, hypothetical two-input, 2D-output SOM system (from Kasabov
(1996), MIT Press, reproduced with permission).

Evolving Connectionist Methods for Unsupervised Learning


The neurons in the output layer are competitive ones. Lateral interaction
between neighbouring neurons is introduced in such a way that a neuron has
a strong excitatory connection to itself, and less excitatory connections to its
neighbouring neurons in a certain radius; beyond this area, a neuron either
inhibits the activation of the other neurons by inhibitory connections, or does
not influence it. One possible neighbouring rule that implements the described
strategy is the so-called Mexican hat rule. In general, this is a winner-takes-all
scheme, where only one neuron is the winner after an input vector is fed, and
a competition between the output neurons has taken place. The fired neuron
represents the class, the group (cluster), the label, or the feature to which the
input vector belongs.
SOMs transform or preserve similarity between input vectors from the input
space into topological closeness of neurons in the output space represented as a
topological map. Similar input vectors are represented by near points (neurons)
in the output space.
The unsupervised algorithm for training a SOM, proposed by Teuvo Kohonen, is
outlined in Fig. 2.12. After each input pattern is presented, the winner is established and the
connection weights in its neighbourhood area Nt increase, while the
connection weights outside the area are kept unchanged; α is a learning parameter.
Training is done through a number of training iterations so that at each iteration
the whole set of input data is propagated through the SOM and the connection
weights are adjusted.
SOMs learn statistical features. The synaptic weight vectors tend to approximate
the density function of the input vectors in an orderly fashion. Synaptic vectors
wj converge exponentially to centres of groups of patterns and the nodes of the
output map represent to a certain degree the distribution of the input data. The
weight vectors are also called reference vectors, or reference codebook vectors.
The whole weight vector space is called a reference codebook
In SOM the topology order of the prototype nodes is predetermined and the
learning process is to drag the ordered nodes onto the appropriate positions in
the low-dimensional feature map (see Fig. 2.11b, upper figure). As the original
input manifold can be complicated and its inherent dimension can be larger than that
of the feature map (usually set to two for visualization purposes), the dimension

K0. Assign small random numbers to the initial weight vectors wj(t=0), for every neuron j from the output map.
K1. Apply an input vector x = (x1, x2, ..., xn) at the consecutive time moment t.
K2. Calculate the distance dj (in n-dimensional space) between x and the weight vector wj(t) of each neuron j.
K3. The neuron k which is closest to x is declared the winner. It becomes the centre of a neighbourhood area Nt.
K4. Change all the weight vectors within the neighbourhood area:
wj(t+1) = wj(t) + α(t)·(x - wj(t)), if j is in Nt;
wj(t+1) = wj(t), if j is not in Nt.
All of the steps from K1 to K4 are repeated for all training instances. Nt and α decrease in time. The training
procedure is repeated again with the same training instances until convergence is achieved.

Fig. 2.12 The SOM training algorithm (from Kasabov (1996), MIT Press, reproduced with permission).



reduction in SOM may become inappropriate for complex data analysis tasks.
SOMs have been extended for supervised learning to LVQ (learning vector
quantisation; Kohonen, 1997).
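Steps K0 to K4 of Fig. 2.12 can be sketched in code. The grid size, the decay schedules for the learning rate and the neighbourhood radius, and the random seeds below are arbitrary illustrative choices, not values prescribed by the algorithm.

```python
import numpy as np

def train_som(data, grid_w=5, grid_h=5, epochs=20, lr0=0.5, radius0=2.0):
    """Minimal SOM trainer following steps K0-K4 (illustrative sketch)."""
    rng = np.random.default_rng(0)
    dim = data.shape[1]
    weights = rng.random((grid_w * grid_h, dim))            # K0: small random weights
    # grid coordinates of each output neuron, used for the neighbourhood area Nt
    coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)])
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                     # alpha decreases in time
        radius = max(radius0 * (1 - epoch / epochs), 0.5)   # Nt shrinks in time
        for x in data:                                      # K1: apply an input vector
            d = np.linalg.norm(weights - x, axis=1)         # K2: distances to all neurons
            k = int(np.argmin(d))                           # K3: the winner neuron
            in_nt = np.linalg.norm(coords - coords[k], axis=1) <= radius
            weights[in_nt] += lr * (x - weights[in_nt])     # K4: update within Nt only
    return weights

# usage: map 200 random 2D points onto a 5x5 grid of reference vectors
data = np.random.default_rng(1).random((200, 2))
w = train_som(data)
```

After training, the 25 weight (reference codebook) vectors approximate the density of the input data, as described above.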


Dynamic SOMs

The constraints of a low-dimensional mapping topology of SOM are removed

in Martinez and Schulten (1991), where a neural gas model is proposed with a
learning rule similar to SOM, but the prototype vectors are organized in the original
manifold of the input space. Each time the prototype weights are updated, the
neighbourhood rank, i.e. the matching rank of the prototypes, needs to be computed.
Unfortunately, this brings the time complexity of the algorithm to the scale of
O(n log n) in a serial implementation, whereas searching for the best matching unit in
the K-means algorithm or in the SOM algorithm takes only n steps.
Fritzke (1995) proposed a growing structure neural gas (GNG) which uses a fixed
topology for reference vector space, but there is no predefined layout order for map
nodes. The map creates new nodes whenever input data are not closely matched by
existing reference vectors, and sets up connections between neighbouring nodes.
One of the goals of the method is to insert more nodes in the model where the
density of the data in that subspace is higher, thus keeping the entropy at its
maximum value. If a node has more data associated with it, the node gets split
and a new one is created as illustrated in Fig. 2.13a,b. It is statistical knowledge
that is accumulated in the model and used to optimise its structure.
Bruske and Sommer (1995) presented another similar model, dynamic cell
structure (DCS), slightly differing from GNG in the node insertion part. GNG
and DCS need to calculate local resources for prototypes, which introduces extra
computational effort and reduces their efficiency.
Fig. 2.13 An example of splitting neurons in the growing neural gas structure (see Fritzke (1995)):
(a) initial structure; (b) the structure after node #2 was split and a new node #6 was created. The figure
shows the number of examples associated with each node and the separators of the Voronoi regions.

SOM and its derivatives are unsupervised learning methods. The SOM algorithm
was further extended to the learning vector quantisation (LVQ) algorithm for
learning supervised pattern classification data (Kohonen, 1990). Vesanto (1997)
incorporated a local linear regression model on top of a SOM map for a
time-series prediction problem. This method constructs local prototype vectors
and uses linear regression models on these vectors. Strictly speaking, this is not an
incremental learning approach and the complexity of the model usually is larger
than the scale of the number of prototype vectors.


Evolving Self-Organizing Maps (ESOM)

Several methods, such as dynamic topology representing networks (Si et al., 2000)
and evolving self-organizing maps (ESOM; Deng and Kasabov (2000)), further
develop the principles of SOM. These methods allow the prototype nodes to
evolve in the original data space X and at the same time to acquire and keep a
topology representation. The neighbourhood of the evolved nodes (neurons) is
not predefined as it is in a SOM. It is decided in an online mode according to
the current distances between the nodes. These methods are free of the rigid
topological constraints of a SOM. They do not require searching for the neighbourhood
ranking as in the neural gas algorithm, thus improving the speed of learning.
Here, the ESOM method is explained in more detail.
Given an input vector x, the activation of the ith node in ESOM is defined as:

ai = exp(-||x - wi||^2 / ε)    (2.14)

where ε is a radial (width) parameter. Here ai can be regarded as a matching score for the ith
prototype vector wi onto the current input vector x. The closer they are, the bigger
the matching score is.
The following online stochastic approximation of the error minimization
function is used:

Eapp = Σ(i=1..n) ai ||x - wi||^2    (2.15)

where n is the current number of nodes in ESOM upon arrival of the input
vector x.
To minimize the criterion function above, the weight vectors are updated by
applying a gradient descent algorithm. From Eq. (2.15) it follows that

∂Eapp/∂wi = ai (wi - x) + ||x - wi||^2 ∂ai/∂wi    (2.16)

For the sake of simplicity, we assume that the change of the activation will be
rather small each time the weight vector is updated, so that ai can be treated as a
constant. This leads to the following simplified weight-updating rule:

Δwi = γ ai (x - wi), for i = 1, 2, ..., n    (2.17)

Here γ is a learning rate held as a small constant.




The likelihood of assigning the current input vector x to the ith prototype wi
is defined as

P(i | x) = ai / Σ(k=1..n) ak    (2.18)


Evolving the Feature Map

During online learning, the number of prototypes in the feature map is usually
unknown. For a given dataset the number of prototypes may be optimal at a
certain time, but later it may become inappropriate, as when new samples arrive the
statistical characteristics of the data may change. Hence it is highly desirable for
the feature map to be dynamically adaptive to the incoming data.
The approach here is to start with a null map, and gradually allocate new
prototype nodes when new data samples cannot be matched well onto existing
prototypes. During learning, when old prototype nodes become inactive for a long
time, they can be removed from the dynamic prototype map.
If for a new data vector x none of the prototype nodes is within a distance
threshold, then a new node wnew is inserted representing exactly the poorly matched
input vector wnew = x, resulting in a maximum activation of this node for x.
The ESOM evolving algorithm is given in Box 2.1.
Box 2.1. The ESOM evolving self-organised map algorithm:
Step 1: Input a new data vector x.
Step 2: Find a set S of prototypes that are closer to x than a predefined distance threshold.
Step 3: If S is empty, go to step 4 (insertion); otherwise calculate the activations
ai of all nodes from S and go to step 5 (updating).
Step 4 (insertion): Create a new node wi for x and make a connection between
this node and its two closest nodes (nearest neighbours), which will form the set S.
Step 5 (updating): Modify all prototypes in S according to (2.17) and recalculate
the connections s(i, j) between the winning node i (or the newly created one)
and all the nodes j in the set S: s(i, j) = ai·aj / max(ai, aj).
Step 6: After a certain number of input data are presented to the system, prune
the weakest connections. If isolated nodes appear, prune them as well.
Step 7: Go to step 1.
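One pass of Box 2.1 for a single input vector can be sketched as follows. This is an illustrative sketch: the distance threshold, the learning rate (γ) and the radial parameter (ε) are example values, and the connection store (a dictionary of link strengths keyed by node pairs) is a simplification of the book's topology representation.

```python
import numpy as np

def esom_step(x, prototypes, conns, dthr=0.5, lr=0.05, eps=0.1):
    """One ESOM step: insert a new prototype or update the nearby ones.
    `prototypes` is a list of weight vectors, `conns` maps (i, j) -> strength."""
    x = np.asarray(x, dtype=float)
    # Step 2: find the prototypes closer to x than the distance threshold
    dists = [np.linalg.norm(x - w) for w in prototypes]
    S = [i for i, d in enumerate(dists) if d < dthr]
    if not S:                                     # Step 4: insertion
        prototypes.append(x.copy())
        new = len(prototypes) - 1
        for j in np.argsort(dists)[:2]:           # connect to the two nearest nodes
            conns[(min(new, int(j)), max(new, int(j)))] = 1.0
        return
    # Step 5: updating; activations a_i = exp(-||x - w_i||^2 / eps), Eq. (2.14)
    a = {i: np.exp(-dists[i] ** 2 / eps) for i in S}
    win = max(S, key=lambda i: a[i])
    for i in S:
        prototypes[i] += lr * a[i] * (x - prototypes[i])   # rule (2.17)
        if i != win:                              # recalculate connection strengths
            key = (min(win, i), max(win, i))
            conns[key] = a[win] * a[i] / max(a[win], a[i])
```

Starting from a null map, repeated calls grow prototypes only where the data require them, as described in the text.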

Visualising the Feature Map

Sammon projection is used here to visualise the evolving nodes whenever
necessary. In addition to the node projection in a 2D space, the topology of
node connections is also shown as links between neighbouring nodes. This is a
significant difference between the ECM presented in Section 2.2 and the ESOM
(see the examples in Figs. 2.17b and 2.19).




Prototype Learning. ART

Adaptive Prototype Learning

This is a similar technique to the adaptive clustering methods, but here instead
of n cluster centres and membership degrees, n prototypes of data points are
found that represent to a certain degree of accuracy the whole data stream up
to the current point in time. The d-dimensional space, with p examples currently
presented, is transformed into n prototypes in the same space.
SOMs and ESOMs form prototypes as nodes that are placed in the original data
space. Each prototype gets activated if an example from the prototype area is
presented to the system. This is explained later in this chapter.


Adaptive Resonance Theory

Here, a brief outline of one of the historically first, and computationally simplest,
adaptive prototyping systems, ART1 and ART2, is given (Carpenter and
Grossberg, 1987).
Adaptive resonance theory (ART) makes use of two terms from brain behaviour,
i.e. stability and plasticity. The stability/plasticity dilemma is the ability of a
system to preserve the balance between retaining previously learned patterns and
learning new patterns. Two layers of neurons are used to realize the idea: a top
layer, an output concept layer, and a bottom layer, an input feature layer. Two
sets of weights between the neurons in the two layers are used. The top-down
weights represent learned prototype patterns, expectations. The bottom-up weights
represent a scheme for new inputs to be accommodated in the network.
Patterns, associated with an output node j, are collectively represented by
the weight vector of this node tj (top-down weight vector, prototype). The
reaction of the node j to a particular new input vector is defined by another
weight vector bj (bottom-up weight). The key element in the ART realisation
of the stability/plasticity dilemma is the control of the partial match between
new feature vectors and already learned ones achieved by using a parameter,
called vigilance, or vigilance factor. Vigilance controls the degree of mismatch
between the new patterns and the learned (stored) patterns which the system can
tolerate.
Figure 2.14a shows a diagram of a simple ART architecture (Carpenter and
Grossberg, 1987). It consists of two sets of neurons: input (feature) neurons (first
layer) and output neurons (second layer). The bottom-up connections bij from
each input i to every output j and the top-down connections tji from the outputs
back to the inputs are shown in the figure. Each of the output neurons has a strong
excitatory connection to itself and a strong inhibitory connection to each of the
other output neurons.
The ART1 learning algorithm for binary inputs and outputs is given in Fig. 2.14b.
It consists of two major phases. The first one is presenting the input pattern
and calculating the activation values of the output neurons. The winning neuron
is defined. The second phase is for calculating the mismatch between the input
pattern and the pattern currently associated with the winning neuron. If the










A1. Weight coefficients are initialized:
tij(0) := 1, bij(0) := 1/(1 + n), for each i = 1, 2, ..., n; j = 1, 2, ..., m
A2. A coefficient of similarity r, a so-called vigilance factor, is defined, 0 <= r <= 1. The greater the
value of r, the more similar the patterns ought to be in order to activate the same output
neuron representing a category, a class, or a concept.
A3. WHILE (there are input vectors) DO
(a) a new input vector x(t) is fed at moment t, x = (x1, x2, ..., xn);
(b) the outputs are calculated:
oj = Σi bij(t)·xi(t), for j = 1, 2, ..., m
(c) an output oj with the highest value is defined;
(d) the similarity of the input pattern associated with j is defined:
IF (the number of "1"s in the intersection of the vector x(t) and tj(t), divided by the number of
"1"s in x(t), is greater than the vigilance r) THEN GO TO (f)
(e) the output j is abandoned and the procedure returns to (b) in order to calculate another
output to be associated with x(t);
(f) the pattern x(t) is associated with the vector tj(t), therefore the pattern tj(t) is changed
using its intersection with x(t):
tij(t+1) := tij(t)·xi(t), for i = 1, 2, ..., n
(g) the weights bij are changed:
bij(t+1) := tij(t)·xi(t) / (0.5 + Σi tij(t)·xi(t))

Fig. 2.14 (a) A schematic diagram of ART1; (b) the ART1 learning algorithm presented for n inputs and m
outputs (from Kasabov (1996), MIT Press, reproduced with permission).

mismatch is below a threshold (vigilance parameter), this pattern is updated to
accommodate the new one. But if the mismatch is above the threshold, the
procedure continues to either find another output neuron or to create a new one.
An example of applying the algorithm for learning a stream of three patterns
is presented in Fig. 2.15. The network associates the first pattern with the
first output neuron, the second pattern with the same output neuron, and the
third input pattern with a newly created second output neuron. If the network
associates a new input pattern with an old one, it changes the old one respectively.
For binary inputs, the simple operation of binary intersection (multiplication)
is used.
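The ART1 steps in Fig. 2.14b can be sketched compactly for binary patterns. This is an illustrative reading of the algorithm, with the winner search, the vigilance test, and the weight updates as described above; the function name and the default vigilance value are arbitrary.

```python
import numpy as np

def art1(patterns, vigilance=0.7):
    """ART1 sketch for binary patterns: returns the learned top-down prototypes."""
    T, B = [], []                                   # top-down / bottom-up weights
    for x in patterns:
        x = np.asarray(x)
        # (b)-(c): rank output neurons by bottom-up activation, highest first
        order = np.argsort([-(b @ x) for b in B]) if B else []
        for j in order:                             # (d)-(e): try winners in turn
            match = np.sum(T[j] * x) / max(np.sum(x), 1)
            if match >= vigilance:                  # (f)-(g): resonance, update node j
                T[j] = T[j] * x                     # intersection with the input
                B[j] = T[j] / (0.5 + np.sum(T[j]))
                break
        else:                                       # no resonance: new output neuron
            T.append(x.copy())
            B.append(x / (0.5 + np.sum(x)))
    return T
```

On a stream of two similar patterns followed by a distinct one, the sketch forms two prototypes, mirroring the example of Fig. 2.15.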


Fig. 2.15 Patterns presented to an ART1 system and learned as two prototypes at three consecutive time
moments. If a new pattern did not match an existing prototype above the vigilance parameter value, a new
output node was created to accommodate this pattern as a new prototype (from Kasabov (1996), MIT Press,
reproduced with permission).

ART1 was further developed into ART2 (continuous values for the inputs;
Carpenter and Grossberg (1987)), ART3 (Carpenter and Grossberg, 1990), and fuzzy
ARTMAP (Carpenter et al., 1991). The latter is an extension of ART1 in which the input
nodes represent not yes/no features, but the membership degrees to which the
input data belong to these features; for example, a set of features {sweet, fruity,
smooth, sharp, sour} can be used to categorise different samples of wine based on their
taste. A particular sample of wine can be represented as an input vector consisting
of membership degrees, e.g. (0.7, 0.3, 0.9, 0.2, 0.5). The fuzzy ARTMAP allows
for continuous values in the interval [0, 1] for both the inputs and the
top-down weights. It uses the fuzzy operators MIN and MAX to calculate intersection
and union between the fuzzy input patterns x and the continuous-value weight
vectors t.


Generic Applications of Unsupervised Learning Methods

Data Analysis. Time-Series Data Analysis

Clustering of data may reveal important patterns that can lead to knowledge
discovery in various application areas. Data can be either static, or dynamic
time-series data as illustrated in Fig. 2.16, where three gene expression variables
measured over time are clustered together based on their similarity of values over
the time of measurement. The mean time series (the temporal cluster centre) is
also shown.
The clustering of these genes together suggests that they may have a similar function
in a cell, or may co-regulate each other, which is important information for
understanding the interaction between these genes.



Fig. 2.16 A cluster ("Cluster 53") of three data time series that have similar temporal profiles: three genes
whose expression levels are measured at several time moments. The mean value of the time series (the
temporal cluster centre) is also shown.


Filling Missing Values

In the case of datasets that have missing values for some variables and some
samples, data can be clustered according to the available variable values and the
missing values can be assigned based on similarity of the samples with missing
values to other samples that have no missing values as shown in Box 2.2.
Box 2.2. Using clustering for filling missing values:
1. Assume that the value xim is missing in a vector (sample) Sm =
(x1m, x2m, ..., xim, ..., xnm).
2. Find the closest K samples to sample Sm, based on the distance measured
with the use of only the available variables (xi is not included); call this set Smk.
3. Substitute xim = Σ(j=1..K) (1/dj)·xij / Σ(j=1..K) (1/dj), where dj is the distance
between sample Sm and sample Sj from the set Smk.
4. For every new input vector x, find the closest K samples to build a model
(the new vector x is a centre of a cluster; find the K closest members of
this cluster).
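Steps 1 to 3 of Box 2.2 can be sketched as an inverse-distance weighted average over the K nearest samples. The sketch assumes, for simplicity, that the other samples are complete; the function name and the small guard against division by zero are illustrative.

```python
import numpy as np

def fill_missing(samples, m, i, k=3):
    """Estimate the missing value x_im of sample m at variable i from the
    K nearest samples, weighting by inverse distance (Box 2.2, steps 1-3)."""
    X = np.asarray(samples, dtype=float)
    sm = np.delete(X[m], i)                       # distances ignore variable i
    others = [j for j in range(len(X)) if j != m]
    d = {j: np.linalg.norm(np.delete(X[j], i) - sm) for j in others}
    nearest = sorted(others, key=lambda j: d[j])[:k]
    w = [1.0 / max(d[j], 1e-12) for j in nearest] # inverse-distance weights
    return sum(wj * X[j][i] for wj, j in zip(w, nearest)) / sum(w)
```

For a sample whose two nearest neighbours have values 10 and 12 at the missing variable and are (almost) equally distant, the estimate is close to their mean, 11.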


Evolving Clustering Method for Classification (ECMC)

Generally speaking, if there are class labels associated with the examples used for
training a system, i.e. the examples are of the type z = (x, y), where y is a class
label, clustering methods can be used for classification purposes. The procedure
is the following one.

1. Apply the clustering algorithm to the data pairs (x, y), separately for each class,
finding separate clusters for each class.
2. A new input datum x with unknown class label is first clustered into one of
the existing clusters based on its distance from the cluster centres, and then
the class label y assigned to this cluster is assigned to the input data point as a
classification result.
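The two-step procedure can be sketched as follows. For brevity the sketch uses a single mean centre per class (a nearest-centroid stand-in) instead of running a full clustering algorithm separately for each class, so it is a simplified illustration of the ECMC idea, not the ECMC method itself.

```python
import numpy as np

def ecmc_fit(X, y):
    """Step 1 (simplified): one cluster centre per class label."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def ecmc_predict(centres, x):
    # step 2: assign the class label of the nearest cluster centre
    return min(centres, key=lambda c: np.linalg.norm(x - centres[c]))
```

With full ECM clustering per class, each class would typically have several centres and the same nearest-centre rule would apply.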

The two-spirals problem is used here for illustration. The 2D training dataset is
generated with a density of 1 and consists of 194 data points, 97 for each
spiral (Fig. 2.17a). The testing dataset, generated with a density of 4, is composed
of 770 data points, 385 for each spiral:

(density 1, for training data):
r = θ + π/2, θ = kπ/16, k = 0, 1, 2, ..., 96

(density 4, for testing data):
r = θ + π/2, θ = kπ/64, k = 0, 1, 2, ..., 384

spiral 1: x = r cos θ, y = r sin θ
spiral 2: x = -r cos θ, y = -r sin θ
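The two datasets defined by the formulas above can be generated as follows (the function name is illustrative, and the formulas follow the reconstruction above):

```python
import numpy as np

def two_spirals(density=1):
    """Generate two-spirals data: k = 0..96*density, theta = k*pi/(16*density),
    r = theta + pi/2; spiral 2 is the point reflection of spiral 1."""
    k = np.arange(96 * density + 1)
    theta = k * np.pi / (16 * density)
    r = theta + np.pi / 2
    s1 = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)  # spiral 1
    s2 = -s1                                                       # spiral 2
    X = np.vstack([s1, s2])
    y = np.array([0] * len(s1) + [1] * len(s2))
    return X, y

X_train, y_train = two_spirals(density=1)      # 194 points, 97 per spiral
X_test, y_test = two_spirals(density=4)        # 770 points, 385 per spiral
```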

Further points of the spirals can be generated in the two-dimensional Euclidean

space, thus the process of generating spiral points can be considered evolving and
expanding in an open 2D space. Figure 2.17b compares the evolved structures
through using SOM (the upper figure) and through using ESOM (the lower figure).
An evolved ESOM is more suitable for the spiral data clustering problem than
a trained SOM, as SOM imposes a certain 2D grid that is not suitable for the
problem, whereas ESOM does not assume in advance any grid of connected nodes.
Two ECMc classification models were also created here. The first one had a
threshold Dthr of 0.955. It created 64 nodes and achieved a classification accuracy
of 100% on the training dataset and 98.4% on the test set. The second model
was evolved with Dthr = 0.98. It evolved 146 nodes and achieved 100% classification accuracy for both the training and the test sets. Figures 2.18a,b show the
classification boundaries between the two classes for the two models.





Fig. 2.17 The training data for the benchmark two-spirals problem: (a) the two-spiral benchmark data;
(b) evolved structures with the use of SOM (the upper figure) and ESOM (the lower figure).




Fig. 2.18 The two-spiral problem, decision regions of ECMc: (a) decision regions for ECMc (model 1, with 64
nodes); (b) decision regions for ECMc (model 2, with 146 nodes).

The two-spiral classification problem is further used as an illustration problem

in some other methods presented in this chapter. It shows that for simple classification problems unsupervised learning with clustering could provide a good
solution. The problem can be solved with the use of other supervised learning
schemes where the learning process takes into account the labelling of the data
when the model parameters are adjusted.
In ECM, as well as in other clustering algorithms, the learning system develops
in the original d-dimensional input data space X and the cluster centres Cc are
points in this space. In the case of a high-dimensional X space, visualisation of
the clusters becomes difficult. As a solution, the PCA or the Sammon projection
algorithm can be used to project approximately the d-dimensional space into a two-
or three-dimensional visualisation space. This is also illustrated in Chapter 10,
where ECM is used to evolve acoustic clusters from continuous speech data from
multiple languages.


ESOM for Classification

Here we assume that data arrive in pairs z = (x, y), where y is the class label
assigned to each input data vector x. When a new node wj is created to represent
an input vector xi, the node is assigned the class label yi.
A new input vector x with unknown class label is first mapped onto the prototype
nodes. A k-nearest neighbour classification is then applied to the winning node
and its neighbours that are linked to it through the neighbourhood links.
This is illustrated here on the benchmark two-spiral problem explained in the
section above (see also Fig. 2.17b). Table 2.2 shows classification results obtained
with the use of different unsupervised learning models, including ESOM.
ESOM is also applied to the Mackey-Glass data, as explained and illustrated in
Chapter 1. Five variables are used that constitute the problem space: x(t), x(t - 6),
x(t - 12), x(t - 18), and x(t + 6). In Fig. 2.19 the evolved nodes from 200 data
points are plotted in a two-dimensional space of the first two principal components
of the original 5D problem space. Except for two nodes, all of them were created
during the presentation of the first 100 examples, as shown in one of the windows
of the snapshot of a graphical user interface in Fig. 2.19. ESOM is applied for the



Table 2.2 The result of classification of test data for the two-spiral problem using
ESOM and other classification algorithms.

No. of units    Error rate    No. of epochs

analysis of gene expression data in Chapter 8, for adaptive analysis of image data
in Chapter 12, and for other applications throughout this book.

Fig. 2.19 The evolved ESOM structure from the Mackey-Glass time-series data for the following parameter
values: error threshold 0.08; learning rate 0.2; sigma 0.4. Five input variables are used: x(t), x(t - 6),
x(t - 12), x(t - 18), and x(t + 6), for 200 examples. The evolved nodes are plotted in a two-dimensional
space of the first two principal components of the problem space.


Evolving Clustering for Outlier Detection

Outlier detection is an important task in many applications where an unusual

situation should be automatically detected from the input data and reported. If the
current input vector is far from any of the clusters already created in the system,
it is an outlier. The outlier may be joined by some other data vectors later in time,
and thus will no longer be an outlier.



Applications of outlier detection include:

Online processing of radio signals from a radio telescope recording signals from
the universe (see, for example, SETI, the Search for Extra Terrestrial Intelligence
Institute).
Online processing of data from a production pipeline, indicating a good or a
defective product (outlier).
Many more.
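The outlier rule described above can be sketched as a distance test against the current set of evolved cluster centres and radii. The margin parameter below is an assumption for illustration, not the book's exact criterion.

```python
import numpy as np

def is_outlier(x, centres, radii, margin=1.0):
    """Flag x as an outlier if it lies outside every cluster's radius plus a
    margin, i.e. it is far from all clusters created so far."""
    x = np.asarray(x, dtype=float)
    return all(np.linalg.norm(x - c) > r + margin
               for c, r in zip(centres, radii))
```

As the text notes, a flagged vector may later be joined by other data and absorbed into a new cluster, at which point it stops being an outlier.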



Assignment specification:
1. Select a dataset and run the following clustering algorithms:
a. k-means, for several numbers of predefined clusters
b. ECM, for various clustering parameter values
c. ECMC for classification in inductive and transductive modes
d. Hierarchical clustering
e. SOM
2. Analyse the results and answer the following questions.
a. Which clustering methods are adaptive on new data?
b. What knowledge can be learned through clustering?


Summary and Open Problems

Models for adaptive unsupervised learning and data analysis, such as the presented
evolving, adaptive clustering, adaptive quantisation, and adaptive prototype
creation, have the following advantages when compared with the off-line learning
methods:
1. They are much faster, as they require one pass of adaptive data propagation.
2. They do not require a preset number of prototypes or clusters. They create
prototypes or clusters in an adaptive mode depending on the incoming data.
3. They allow for adaptive learning and accumulating of statistical knowledge.
4. They allow for the process of learning to be traced in time.
The difficulty with the adaptive unsupervised methods is that a set goal function
J may not reach a minimum value, which is the case in off-line batch learning
modes. This problem is overcome if a sufficient number of data examples are
presented to the adaptive learning system. This is very much the case with the
lifelong learning systems.
The ECM method is extended to a knowledge-based connectionist learning
method DENFIS in Chapter 5.
The methods presented here are used in several applications described in Part
II of the book.



This chapter also raises some open questions and problems:

1. In our everyday life we learn through using different approaches in concert, e.g.
unsupervised, supervised, reinforcement, and so on. How can this flexibility be
implemented in a system?
2. New ways of measuring distance between vectors, depending on their size,
data distribution, and so on, are needed. Would it always be appropriate to
use Euclidean distance, for example, both for measuring the distance between
data points in the two-dimensional gas-furnace data space and in the
40,000-dimensional gene expression space?
3. Would it be possible to combine different methods for measuring distance
between data vectors in one model?
4. How do we measure relative distance between consecutive data points from a
time series in contrast to measuring global absolute distance?
5. What is really learned through an unsupervised learning process if we do not
specify goals or expectations, and if we do not analyse the results? Should we
still call the process of feeding unlabelled data into a system a learning process?
6. Can unsupervised learning methods for classification perform better than supervised learning methods and when?
7. How can we integrate unsupervised learning with other sources of available
information to improve the knowledge representation and discovery?


Further Reading

Details of the ESOM algorithm (Deng and Kasabov, 2002, 2003)
Details of the ECM algorithm (Kasabov and Song, 2002)
Clustering algorithms in general (Hartigan, 1975; Bezdek, 1987)
K-means clustering (MacQueen, 1967)
Fuzzy C-means clustering (Bezdek, 1981, 1987, 1993)
Incremental clustering (Fisher, 1989)
Adaptive resonance theory (Carpenter and Grossberg, 1987, 1990, 1991)
Self-organizing maps (Kohonen, 1977, 1982, 1990, 1993, 1997)
Chaotic SOM (Dingle et al., 1993)
Neural gas (Martinez and Schulten, 1991)
Growing neural gas (Fritzke, 1995)
Dynamic topology representing networks (Si et al., 2000)
Kernel-based equiprobabilistic topographic map formation (Van Hulle, 1998)
A topological neural map for adaptive learning (Gaussier and Zrehen, 1994)
Dynamic cell structures (Bruske and Sommer, 1995)
Spatial tessellation (Okabe et al., 1992)
Online clustering using kernels (Boubacar et al., 2006)

3. Evolving Connectionist Methods for Supervised Learning

This chapter presents, as background knowledge, several well-known connectionist methods for supervised learning, such as MLP, RBF, and RAN, and then introduces methods for evolving connectionist learning. These include the simple evolving MLP (eMLP), evolving fuzzy neural networks (EFuNN), and other methods. The emphasis is on model adaptability and evolvability, and on their facilities for rule extraction and pattern/knowledge discovery, which are the main objectives of the knowledge engineering approach taken in this book. The chapter material is presented in the following sections.

Connectionist supervised learning methods
Simple evolving connectionist methods
Evolving fuzzy neural networks (EFuNN)
Knowledge manipulation in EFuNN: rule insertion, rule extraction, rule aggregation
Summary and open problems
Further readings


Connectionist Supervised Learning Methods

General Notions

Connectionist systems for supervised learning learn from pairs of data (x, y), where the desired output vector y is known for an input vector x. If the model is incrementally adaptive, new data will be used to adapt the system's structure and function incrementally (see the classification scheme in Chapter 1).
As discussed in Chapter 1, the objective (goal) function used to optimise the structure of the learning model during the learning process can be either a global or a local goal function.
If a system is trained incrementally, the generalisation error of the system on the next new input vector (or vectors) from the input stream is called here the local incrementally adaptive generalisation error. The local incrementally adaptive generalisation error at the moment t, for example, when the input vector is x(t) and the output vector calculated by the system is y′(t), is expressed as Err(t) = y(t) − y′(t).



The local incrementally adaptive root mean square error LRMSE(t) and the local incrementally adaptive nondimensional error index LNDEI(t) can be calculated at each time moment t as

LRMSE(t) = sqrt( Σ_{i=1,…,t} Err(i)² / t )

LNDEI(t) = LRMSE(t) / std( y(1) : y(t) )

where std(y(1):y(t)) is the standard deviation of the output data points from time unit 1 to time unit t.
In a general case, the global generalisation root mean square error RMSE and the nondimensional error index NDEI are evaluated on a set of p new (future) test examples from the problem space as follows:

RMSE = sqrt( Σ_{i=1,…,p} ( y(i) − y′(i) )² / p )

NDEI = RMSE / std( y(1) : y(p) )

where std(y(1):y(p)) is the standard deviation of the output data from 1 to p in the test set.
After a system is evolved on a sufficiently large and representative part of
the whole problem space Z, its global generalisation error is expected to become
satisfactorily small, similar to the off-line, batch mode learning error.
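The error measures above can be tracked incrementally as each new example arrives. The following sketch (the function name and the use of the population standard deviation are illustrative choices, not prescriptions from the book) computes LRMSE(t) and LNDEI(t) over a stream of desired and predicted outputs:

```python
import numpy as np

def local_errors(y_true, y_pred):
    """Local incrementally adaptive RMSE and NDEI after each new example.

    y_true, y_pred: 1-D sequences of desired and predicted outputs, in
    order of presentation. Returns lists of LRMSE(t) and LNDEI(t) for
    t >= 2 (the std of a single point is zero, so LNDEI starts at t = 2).
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    lrmse, lndei = [], []
    for t in range(2, len(y_true) + 1):
        err = y_true[:t] - y_pred[:t]
        rmse_t = np.sqrt(np.mean(err ** 2))           # LRMSE(t)
        lrmse.append(rmse_t)
        lndei.append(rmse_t / np.std(y_true[:t]))     # LNDEI(t)
    return lrmse, lndei
```

The same loop applied to a held-out test set of p examples yields the global RMSE and NDEI at t = p.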


Multilayer Perceptrons (MLP) and Gradient Descent Algorithms

Multilayer perceptrons (MLP) trained with the backpropagation algorithm (BP) use a global optimisation function in both incrementally adaptive (pattern-mode) training and batch-mode training (Amari, 1967; Rumelhart et al., 1986; Werbos, 1990).
The batch mode off-line training of a MLP is a typical learning method. Figure 3.1
depicts the batch mode backpropagation algorithm.
In the incremental, pattern learning mode of the backpropagation algorithm, after
each training example is presented to the system and propagated through it, an error
is calculated and then all connections are modified in a backward manner. This is
one of the reasons for the phenomenon called catastrophic forgetting: if examples are
presented only once, the model may adapt to them too much and forget previously
learned examples, if the model is a global model. In an incrementally adaptive learning
mode, the same or very similar examples from the past need to be presented many
times again, in order for the system to properly learn new examples without forgetting
the old ones. The process of learning new examples while presenting previously
used ones is called rehearsal training (Robins, 1996).
MLPs can be trained in an incrementally adaptive mode, but they have limitations in this respect, as they have a fixed structure and the weight optimisation is global if a gradient descent algorithm is used for this purpose.
A very attractive feature of MLPs is that they are universal function approximators (see Cybenko (1989) and Funahashi (1989)), even though in some cases they may converge to a local minimum.
Some connectionist systems that include MLP use a local objective (goal)
function to optimise the structure during the learning process. In this case when a



Forward pass:
BF1. Apply an input vector x and its corresponding output vector y (the desired output).
BF2. Propagate forward the input signals through all the neurons in all the layers and calculate the output signals.
BF3. Calculate the error Err_j for every output neuron j, for example:
Err_j = y_j − o_j, where y_j is the jth element of the desired output vector y.
Backward pass:
BB1. Adjust the weights between the intermediate neurons i and the output neurons j according to the calculated error:
Δw_ij(t+1) = lrate · o_j(1 − o_j) · Err_j · o_i + momentum · Δw_ij(t)
BB2. Calculate the error Err_i for the neurons i in the intermediate layer:
Err_i = Σ_j Err_j · w_ij
BB3. Propagate the error back to the neurons k of the lower level:
Δw_ki(t+1) = lrate · o_i(1 − o_i) · Err_i · x_k + momentum · Δw_ki(t)

Fig. 3.1 The backpropagation algorithm (BP) for training a multilayer perceptron (MLP) (Amari, 1967; Rumelhart et al., 1986; Werbos, 1990) (from Kasabov (1996), MIT Press, reproduced with permission).

data pair (x, y) is presented, the system optimises its functioning always in a local vicinity of x from the input space X, and in the local vicinity of y from the output space Y (Saad, 1999).
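The pattern-mode (incremental) backpropagation procedure of Fig. 3.1 can be sketched for a one-hidden-layer MLP as follows. The sigmoid transfer function, bias terms, layer sizes, learning rate, and momentum below are illustrative choices, not values prescribed here; the XOR rehearsal loop simply illustrates repeated presentation of past examples:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyMLP:
    """One-hidden-layer MLP trained in pattern (incremental) mode with
    the backpropagation rule of Fig. 3.1 (illustrative hyperparameters)."""

    def __init__(self, n_in, n_hid, n_out, lrate=0.5, momentum=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_in, n_hid))   # input -> hidden
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0, 0.5, (n_hid, n_out))  # hidden -> output
        self.b2 = np.zeros(n_out)
        self.dW1 = np.zeros_like(self.W1)             # previous updates
        self.dW2 = np.zeros_like(self.W2)             # (for the momentum term)
        self.lrate, self.momentum = lrate, momentum

    def forward(self, x):
        self.o1 = sigmoid(x @ self.W1 + self.b1)
        return sigmoid(self.o1 @ self.W2 + self.b2)

    def train_one(self, x, y):
        """One forward + backward pass for a single (x, y) pair."""
        o2 = self.forward(x)
        err2 = (y - o2) * o2 * (1 - o2)               # output-layer error
        err1 = (err2 @ self.W2.T) * self.o1 * (1 - self.o1)
        self.dW2 = self.lrate * np.outer(self.o1, err2) + self.momentum * self.dW2
        self.dW1 = self.lrate * np.outer(x, err1) + self.momentum * self.dW1
        self.W2 += self.dW2
        self.b2 += self.lrate * err2                  # biases: no momentum here
        self.W1 += self.dW1
        self.b1 += self.lrate * err1

# XOR, learned one example at a time (rehearsal over the stream)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0], [1], [1], [0]], float)

def mse(net):
    return float(np.mean([(net.forward(x) - y) ** 2 for x, y in zip(X, Y)]))

net = TinyMLP(2, 4, 1)
mse_before = mse(net)
for epoch in range(3000):
    for x, y in zip(X, Y):
        net.train_one(x, y)
mse_after = mse(net)
```

Because the weight update is global, leaving out the rehearsal loop and presenting each example only once would expose the catastrophic-forgetting behaviour discussed above.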


Radial Basis Function (RBF) Connectionist Methods

Several connectionist methods for incrementally adaptive and knowledge-based learning use principles of the radial basis function (RBF) networks (Moody and Darken, 1988, 1989). The basic architecture is outlined here, along with its modifications for constructive, incrementally adaptive learning.
The RBF network consists of three layers of neurons (input layer, radial basis layer, and output layer), as shown in Fig. 3.2. The radial basis layer represents a clustering of the training data and is established through a clustering method. The second layer of connections is tuned through the delta rule for a global error, through multiple iterations over the training data.
The input nodes are fully connected to the neurons in the second layer. A hidden node has a radial basis function as an activation function. The RBF is a symmetric function (e.g. Gaussian, bell-like):

f(x) = exp( −(x − M)² / (2σ²) )

where M and σ are two parameters representing the mean and the standard deviation of the input vector x. For a particular node i, its RBF f_i is centred at the cluster centre C_i in the n-dimensional input space. The cluster centre C_i is represented by the vector (w_1i, …, w_ni) of connection weights between the n input nodes and the hidden node i. The standard deviation σ for this cluster defines the range for the




Fig. 3.2 General structure of an RBF network (hidden nodes with Gaussian activation functions, output nodes with linear output functions).

RBF fi . The RBF is nonmonotonic, in contrast to the sigmoid function used in the
MLP networks.
The second layer is connected to the output layer. The output nodes perform a
simple summation function with a linear thresholding activation function.
Training of an RBFN consists of two phases:
Adjusting the RBFs of the hidden neurons by applying a statistical clustering
method; this represents an unsupervised learning phase.
Applying gradient descent (e.g. the backpropagation algorithm) or a linear
regression algorithm for adjusting the second layer of connections; this is a
supervised learning phase.
During training, the following parameters of the RBFN are adjusted:
The n-dimensional positions of the centres C_i of the RBF_i. This can be achieved by using the k-means clustering algorithm, for example, which finds a predefined number of hidden nodes (cluster centres and shapes of the Gaussian functions) that minimise the average distance between the training examples and the k nearest centres.
The weights of the second-layer connections.
The recall procedure for the RBF network calculates the activation of the hidden
nodes that represent how close an input vector x is to the centres Ci . The activation
value of the closest node is propagated to the output layer.
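The two-phase RBFN training described above can be sketched as follows. Phase one places the centres with plain k-means (the unsupervised phase); phase two fits the second-layer weights. Here a one-shot linear least-squares fit stands in for the iterative delta-rule tuning, and the values of k, sigma, and the sine test function are illustrative choices:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means for the unsupervised phase (centre placement)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return C

def rbf_train(X, y, k=8, sigma=0.5):
    """Phase 1: cluster to get centres; phase 2: linear least squares on
    the Gaussian activations (standing in for delta-rule iterations)."""
    C = kmeans(X, k)
    A = np.exp(-((X[:, None, :] - C[None]) ** 2).sum(-1) / (2 * sigma ** 2))
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return C, w

def rbf_predict(X, C, w, sigma=0.5):
    A = np.exp(-((X[:, None, :] - C[None]) ** 2).sum(-1) / (2 * sigma ** 2))
    return A @ w

# Illustrative 1-D regression task
X = np.linspace(0, 2 * np.pi, 100)[:, None]
y = np.sin(X[:, 0])
C, w = rbf_train(X, y)
pred = rbf_predict(X, C, w)
```

Note that, in contrast to the recall rule quoted above (propagating only the closest node's activation), this sketch sums the contributions of all hidden nodes, which is the more common linear-readout variant.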
Several methods for incrementally adaptive and constructive training of RBF
networks exist. Such is the extended growing cell structure (GCS) method (see
Chapter 2) called the supervised growing cell structure network (Fritzke, 1995).
The method applies the growing cell algorithm on the radial basis nodes in a RBF
network. The second layer of connections is tuned through the delta rule in an
incrementally adaptive mode.
In Blanzieri and Katenkamp (1996) an algorithm for incrementally adaptive
learning in RBF networks is presented which utilises a factorisable RBF network
(F-RBFN) introduced by Poggio and Girosi (1990). Fig. 3.3 shows the structure of





Fig. 3.3 Factorizable RBF network (with product units, Gaussian units, and an output unit).

the F-RBFN. The RBFs are not located in the whole n-dimensional space, but in the local one-variable space of each input variable.


The Resource Allocation Network (RAN) Model

The resource allocation network (RAN) model was suggested by Platt (1991) and
improved in other related methods presented in this section. RAN uses the same
architecture as the RBF networks, but both the clustering and the second layer
adjustment are performed in a two-pass incrementally adaptive mode. A RAN model allocates a new neuron for a new input example (x, y) if the input vector x is not sufficiently close to any of the already allocated radial basis neurons (centres), and also if the output error evaluation (y − y′), where y′ is the output produced by the system for the input vector x, is above an error threshold. Otherwise, centres will be adapted to minimise the error for the example (x, y) through a gradient descent algorithm.
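The RAN allocation criterion can be sketched as a single decision step: a new radial basis neuron is allocated only when the example is both novel in the input space and badly predicted. The threshold values, the list-based data structures, and the omission of the "otherwise" branch (gradient adaptation of existing units) are illustrative simplifications, not the original model:

```python
import numpy as np

def ran_step(centres, out_weights, x, y, predict, dist_thr=0.3, err_thr=0.1):
    """One decision step of the RAN novelty criterion.

    A new unit is allocated only if x is far from every existing centre
    AND the prediction error is large; otherwise the example should be
    handled by gradient adaptation of existing units (not shown here).
    """
    y_hat = predict(x)
    far = (len(centres) == 0 or
           min(np.linalg.norm(x - c) for c in centres) > dist_thr)
    bad = np.linalg.norm(y - y_hat) > err_thr
    if far and bad:
        centres.append(np.array(x, float))              # new unit centred at x
        out_weights.append(np.array(y - y_hat, float))  # absorbs the residual
        return True                                     # a neuron was allocated
    return False
```

A second presentation of the same example fails the novelty test (distance zero to its own centre), so no further unit is allocated.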
Some versions of RAN have been developed by Rosipal et al. (1997): RAN-GRD and RAN-P-GQRD. Some of these methods are used in this and the next chapters for a comparative analysis of the performance of different incrementally adaptive learning methods.


The Receptive Field Weighted Regression Method (RFWR)

The receptive field weighted regression (RFWR) method is a connectionist regression technique that uses a regression formula in the form of a weighted sum of local receptive fields learned in local neurons (Schaal and Atkeson, 1998). Through learning, the receptive fields change their size and shape, but not their centre, once it is established. During learning, the regression formula changes the weighting of the input variables for the purpose of incremental function approximation.


A schematic diagram of the RFWR model is given in Fig. 3.4. Each receptive field is learned in a kernel function unit (a Gaussian function) and in a linear unit.
A predicted output value y for an input vector x is calculated based on the following formula:

y = Σ_k w_k · a_k / Σ_k a_k

where w_k is the weight of the kth receptive field learned through the learning procedure, and a_k is the activation of the kth receptive field for the input vector x.
An example of how the receptive fields change during the learning process of a complex function is shown in Fig. 3.5. Data examples are generated in a random manner from the following function of two variables x and y:

z = max{ exp(−10x²), exp(−50y²), 1.25 · exp(−5(x² + y²)) } + N(0, 0.01)

in packages of 500 examples, each used for one training iteration in an incrementally adaptive mode. As test data, 1681 data points are drawn from the function space and their output values are evaluated by the trained RFWR model. Figure 3.5
shows: (a) the target function; (b) the approximated function after 50 iterations
of training the RFWR model; (c) receptive fields of the generated nodes after
one epoch of training (that includes 500 randomly drawn examples) shown in
the original input space; and (d) the receptive fields after 50 epochs of training
(each epoch includes 500 randomly drawn examples from the function space).
The centres of the receptive fields do not change once they are established in an
incremental way. The small dots represent data examples drawn from the input
space of the above function.

Fig. 3.4 The architecture of the receptive field weighted regression (RFWR) network model: Gaussian units and linear units combined through a weighted average (see Schaal and Atkeson (1998)).







Fig. 3.5 An example of how the receptive fields change during an incremental learning process of a complex
function: (a) the original function; (b) the learned in the model function; (c) receptive fields at the beginning
of the learning process with a small number of examples; (d) receptive fields after more examples are added
(from Schaal and Atkeson (1998); reproduced with permission).

RFWR methods are similar to the mixture of experts methods (Jordan and
Jacobs, 1994) as each receptive field here represents one local expert. All receptive
fields cover the function under approximation. The system can cope with the
changing dynamics of the function and changing probability distribution over time
through creating new receptive fields and pruning old ones in an incremental way.
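The normalised weighted-average prediction above can be sketched as follows. The Gaussian receptive-field activations, the local linear models, and the shared distance metric D are illustrative stand-ins for the per-field parameters that RFWR actually learns incrementally:

```python
import numpy as np

def rfwr_predict(x, centres, D, local_models):
    """Normalised weighted average of local predictions,
    y = sum_k w_k a_k / sum_k a_k.

    a_k is the Gaussian activation of receptive field k (with distance
    metric D) and w_k the prediction of its local linear model,
    given here as (intercept, slope-vector) pairs.
    """
    acts = np.array([np.exp(-0.5 * (x - c) @ D @ (x - c)) for c in centres])
    preds = np.array([b0 + b @ (x - c)
                      for (b0, b), c in zip(local_models, centres)])
    return (acts * preds).sum() / acts.sum()
```

A query near one centre is dominated by that field's local model, which is what makes each receptive field behave like one local expert in the mixture-of-experts sense discussed above.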



FuzzyARTMAP

FuzzyARTMAP (Carpenter et al., 1991) is an incremental learning connectionist model that associates fuzzy clusters from an input space with an output space. It consists of two parts, FuzzyARTa and FuzzyARTb, each of them being an ART2-type network that deals with fuzzy input features and fuzzy outputs (see Fig. 3.6). At each step of the learning process, rules that associate input patterns with output classes can be extracted from a FuzzyARTMAP network. A map field maps the activated node in ARTa to the desired output node in ARTb. The mapping process for each input-output training pair is iterative.



Fig. 3.6 A schematic diagram of a FuzzyARTMAP network (input features feed ARTa, the desired/produced output feeds ARTb, and a map field links the two).


Lifelong Learning Cell Structures

Some methods, such as the lifelong learning cell structures (Hamker, 2001), combine self-organised unsupervised learning in the input space with error-driven learning in the output space, whereas the nodes in the first area are restricted neither in number nor in their position in the input space. Such systems can grow indefinitely, and pruning is involved to remove old and no longer useful nodes from the input space.
Figure 3.7 illustrates the idea of the lifelong learning cell structure. Each node in
the self-organised area has several parameters attached to it: centre and width of
the Gaussian activation functions associated with the node, error counter, inherited
error, insertion threshold, and age. These parameters are used in the evolving
algorithm that defines when a node should be considered as sufficiently activated,
how to link this node with the neighbouring nodes, how to update these links (see
ESOM for a similar approach, Chapter 2), when to create a new node, when to
prune a node, and so on.


Fig. 3.7 A schematic diagram of a lifelong learning cell structure (a self-organised growing area connected through error-driven weights to the output).




Simple Evolving Connectionist Methods

Simple Evolving MLP and RBF

A representative of this class of methods and systems is ZISC (the Zero Instruction
Set Computer) (ZISC Manual, 2001). ZISC is a supervised learning system in a
chip that realises a growing RBF network. Each hidden node has a receptive field
that is initially maximally large. A node is linked to an output class of yes/no type
depending on the example that is presented. If during the learning process a new
example is close to this node but belongs to another class, the radius of the field
of this node is reduced to exclude this example and a new node is created. The
distance between a new example and all nodes is computed in parallel.
Another simple evolving method, called here eMLP (evolving MLP), is presented in Fig. 3.8 as a simplified graphical representation. An eMLP consists of three layers of neurons: an input layer with linear or other transfer functions, an evolving layer, and an output layer with a simple saturated linear activation function. It is a simplified version of the evolving fuzzy neural network (EFuNN), presented later in this chapter (Kasabov, 2001).
The evolving layer is the layer that grows and adapts itself to the incoming data, and is the layer with which the learning algorithm is most concerned. The meaning of the incoming connections, the activation, and the forward propagation algorithms of the evolving layer all differ from those of classical connectionist systems. If a linear activation function is used, the activation A_n of an evolving layer node n is determined by Eq. (3.6):

A_n = 1 − D_n    (3.6)

where A_n is the activation of the node n and D_n is the normalised distance between the input vector and the incoming weight vector for that node.

Fig. 3.8 A block diagram of a simple evolving MLP (eMLP).



Other activation functions, such as a radial basis function, could be used. Thus, examples that exactly match the exemplar stored within the neuron's incoming weights will result in an activation of 1, whereas examples that are entirely outside the exemplar's region of input space will result in an activation of near 0.
The preferred form of the learning algorithm is based on accommodating new training examples within the evolving layer, either by modifying the connection weights of the evolving layer nodes or by adding a new node. The algorithm employed is described below.
Box 3.1. eMLP learning algorithm
1. Propagate the input vector I through the network.
IF the maximum activation Amax of a node is less than a coefficient called
sensitivity threshold Sthr :
2. Add a node, ELSE
3. Evaluate the error between the calculated output vector Oc and the desired
output vector Od .
4. IF the error is greater than an error threshold Ethr OR the desired output
class node is not the most highly activated,
5. Add a node, ELSE
6. Update the connections to the winning node in the evolving layer.
7. Repeat the above procedure for each training vector.
When a node is added, its incoming connection weight vector is set to the input vector I, and its outgoing weight vector is set to the desired output vector Od. The incoming weights to the winning node j are modified according to Eq. (3.7), whereas the outgoing weights from node j are modified according to Eq. (3.8):

W_ij(t+1) = W_ij(t) + lr1 · ( I_i − W_ij(t) )    (3.7)

where:
W_ij(t) is the connection weight from input i to node j at time t;
W_ij(t+1) is the connection weight from input i to node j at time t+1;
lr1 is the learning rate one parameter;
I_i is the ith component of the input vector I.

W_jp(t+1) = W_jp(t) + lr2 · A_j · E_p    (3.8)

where:
W_jp(t) is the connection weight from node j to output p at time t;
W_jp(t+1) is the connection weight from node j to output p at time t+1;
lr2 is the learning rate two parameter;
A_j is the activation of the node j;
E_p = Od_p − Oc_p    (3.9)



where Ep is the error at p; Odp is the desired output at p; and Ocp is the calculated
output at p.
The distance measure D_n in Eq. (3.6) above is preferably calculated as the normalised Hamming distance, as shown in Eq. (3.10):

D_n = Σ_{i=1,…,K} | I_i − W_i | / Σ_{i=1,…,K} ( I_i + W_i )    (3.10)

where K is the number of input nodes in the eMLP, I is the input vector, and W is the input-to-evolving-layer weight vector of the node.
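Box 3.1 together with Eqs. (3.6) to (3.10) can be sketched as a minimal eMLP class. The threshold and learning-rate values are illustrative, the distance of Eq. (3.10) is applied with absolute values in the denominator to keep it defined for signed inputs, and degenerate cases (e.g. an all-zero input matched against an all-zero exemplar) are not handled:

```python
import numpy as np

class SimpleEMLP:
    """Minimal sketch of the eMLP learning algorithm (Box 3.1)."""

    def __init__(self, n_out, s_thr=0.9, e_thr=0.1, lr1=0.1, lr2=0.1):
        self.W1 = []       # one incoming weight vector per evolving node
        self.W2 = []       # one outgoing weight vector per evolving node
        self.n_out = n_out
        self.s_thr, self.e_thr, self.lr1, self.lr2 = s_thr, e_thr, lr1, lr2

    def _activations(self, x):
        # A_n = 1 - D_n, with D_n the normalised Hamming distance, Eq. (3.10)
        return np.array([1 - np.sum(np.abs(x - w)) / np.sum(np.abs(x) + np.abs(w))
                         for w in self.W1])

    def _output(self, a):
        out = np.zeros(self.n_out)
        for a_j, w2 in zip(a, self.W2):
            out += a_j * w2
        return np.clip(out, 0, 1)          # saturated linear output function

    def train_one(self, x, y):
        if not self.W1:                    # first example: create first node
            self.W1.append(x.copy()); self.W2.append(y.copy()); return
        a = self._activations(x)
        j = int(np.argmax(a))
        o = self._output(a)
        # add a node if activation is too low, the error is too large,
        # or the desired class node is not the most highly activated
        if (a[j] < self.s_thr or np.max(np.abs(y - o)) > self.e_thr
                or np.argmax(o) != np.argmax(y)):
            self.W1.append(x.copy()); self.W2.append(y.copy())
        else:
            self.W1[j] += self.lr1 * (x - self.W1[j])   # Eq. (3.7)
            self.W2[j] += self.lr2 * a[j] * (y - o)     # Eq. (3.8)
```

Because each update touches only the winning node, re-presenting an already-learned example leaves the rest of the evolving layer untouched, which is the local-learning property discussed below.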
The eMLP architecture is similar to the Zero Instruction Set Computer architecture. However, ZISC is based on an RBF ANN and requires several training iterations over the input data.
Aggregation of nodes in the evolving layer can be employed to control the size of the evolving layer during the learning process. The principle of aggregation is to merge those nodes that are spatially close to each other. Aggregation can be applied for every training example (or after every n examples). It will generally improve the generalisation capability of the eMLP. The aggregation algorithm is as follows.
FOR each rule node r_j, j = 1, …, n, where n is the number of nodes in the evolving layer, W1 is the connection weight matrix between the input and the evolving layer, and W2 is the connection weight matrix between the evolving and the output layer:
Find a subset R of nodes in the evolving layer for which the normalised Euclidean distances D(W1(r_j), W1(r_a)) and D(W2(r_j), W2(r_a)), r_a ∈ R, are below a threshold W_thr.
Merge all the nodes from the subset R into a new node r_new and update W1(r_new) and W2(r_new) using the following formulas:

W1(r_new) = (1/m) · Σ_{r_a ∈ R} W1(r_a)
W2(r_new) = (1/m) · Σ_{r_a ∈ R} W2(r_a)

where m denotes the number of nodes in the subset R.
Delete the nodes r_a ∈ R.
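The aggregation procedure can be sketched as follows. Plain (rather than normalised) Euclidean distance and the threshold value are illustrative simplifications:

```python
import numpy as np

def aggregate(W1, W2, w_thr=0.2):
    """Sketch of evolving-layer node aggregation.

    Nodes whose incoming (W1) and outgoing (W2) weight vectors are both
    closer than w_thr to a seed node are merged into one node carrying
    the subset averages; each node is merged at most once.
    """
    W1 = [np.asarray(w, float) for w in W1]
    W2 = [np.asarray(w, float) for w in W2]
    used, new_w1, new_w2 = set(), [], []
    for j in range(len(W1)):
        if j in used:
            continue
        # subset R: nodes close to r_j in both weight spaces
        R = [a for a in range(len(W1)) if a not in used
             and np.linalg.norm(W1[j] - W1[a]) < w_thr
             and np.linalg.norm(W2[j] - W2[a]) < w_thr]
        used.update(R)
        new_w1.append(np.mean([W1[a] for a in R], axis=0))  # W1(r_new)
        new_w2.append(np.mean([W2[a] for a in R], axis=0))  # W2(r_new)
    return new_w1, new_w2
```

Nodes that are close in the input space but point to different outputs fail the W2 test and are kept separate, so aggregation does not merge across classes.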
Node aggregation is an important regularisation mechanism that is not present in ZISC. It is
highly desirable in some application areas, such as speech and image recognition
systems. In speech recognition, the vocabulary of recognition systems needs to be
customised to meet individual needs. This can be achieved by adding words to the
existing recognition system or removing words from the existing vocabulary.
eMLP is also suitable for online output space expansion because it uses local learning, which tunes only the connection weights of the local node, so all the knowledge captured in the nodes of the evolving layer is local, each node covering only a patch of the input-output space. Thus, adding new class outputs or new input variables does not require retraining of the whole system on both the new and the old data, as is required for traditional neural networks.
The task is to introduce an algorithm for online expansion and reduction of the output space in eMLP. As described above, the eMLP is a three-layer network with two layers of connections. Each node in the output layer represents a particular
two layers of connections. Each node in the output layer represents a particular
class in the problem domain when using eMLP as a classifier. This local representation of nodes in the evolving layer enables eMLP to accommodate new classes
or remove an already existing class from its output space.
In order to add a new node to the output layer, the structure of the existing eMLP
first needs to be modified to encompass the new output node. This modification
affects only the output layer and the connections between the output layer and the
evolving layer. The graphical representation of this process is shown in Fig. 3.8. The
connection weights between the new output in the output layer and the evolving
layer are initialised to zero (the dotted line in Fig. 3.8.). In this manner the new
output node is set by default to classify all previously seen classes as negative. Once
the internal structure of the eMLP is modified to accommodate the new output
class, the eMLP is further trained on the new data. As a result of the training
process, new nodes are created in the evolving layer to represent the new class.
The process of adding new output nodes to eMLP is carried out in a supervised
manner. Thus, for a given input vector, a new output node will be added only if
it is indicated that the given input vector is a new class. The output expansion
algorithm is as follows.
FOR every new output class:
1. Insert a new node j into the output layer.
2. FOR every node r_i, i = 1, …, n, in the evolving layer, where n is the number of nodes in the evolving layer, modify the outgoing connection weights W2 from the evolving to the output layer by expanding W2(r_i) with a zero entry for the new output, to reflect the zero output.
3. Insert a new node in the evolving layer to represent the new input vector and connect it to the new output node j.
This is equivalent to allocating a part of the problem space for data that belong to
new classes, without specifying where this part is in the problem space.
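The output-expansion algorithm can be sketched directly on the two weight structures. Representing W1 and W2 as lists of per-node vectors is an illustrative choice:

```python
import numpy as np

def add_output_class(W1, W2, x_new, n_out):
    """Sketch of online output-space expansion for an eMLP.

    Every existing evolving node's outgoing vector is padded with a zero
    for the new class (so all old nodes classify it as negative), then a
    new evolving node is created for x_new and wired only to the new output.
    """
    W2 = [np.append(w, 0.0) for w in W2]   # zero weight to the new output
    W1.append(np.asarray(x_new, float))    # new node stores the exemplar
    w_out = np.zeros(n_out + 1)
    w_out[-1] = 1.0                        # connect to the new class only
    W2.append(w_out)
    return W1, W2, n_out + 1
```

Class removal is the mirror operation: delete the evolving nodes connected to the removed output, and delete that output's column from every remaining outgoing vector.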
It is also possible to remove a class from an eMLP. This only affects the output and evolving layers of the eMLP architecture:
FOR every output class o to be removed:
1. Find the set of nodes S in the evolving layer that are connected to that output o.
2. Modify the incoming connections W1 from the input layer to the evolving layer by deleting S_i, i = 1, …, n, where n is the number of nodes in the set S connected to output o.
3. Modify the outgoing connection weights W2 from the evolving to the output layer by deleting output node o.
The above algorithm is equivalent to deallocating a part of the problem space
which had been allocated for the removed output class. In this manner, there will
be no space allocated for the deleted output class in the problem space. In other
words the network is unlearning a particular output class. The eMLP is further
studied and applied in Watts and Kasabov (2002) and Watts (2006).




Evolving Classification Function (ECF)

Another simple evolving connectionist method for classification is the evolving classification function (ECF), presented here (see Fig. 3.9). The learning and recall algorithms of ECF are shown in Box 3.2. Internal nodes in the ECF structure capture clusters of input data that belong to the same class. For each input variable, fuzzy membership functions are defined, as in Fig. 2.2.

Box 3.2a. Learning algorithm of ECF:

1. Enter the current input vector from the data set (stream) and calculate the distances between this vector and all rule nodes already evolved, using Euclidean distance (by default). If no node has been created yet, create the first one with the coordinates of the first input vector attached as its input connection weights.
2. If all calculated distances between the new input vector and the existing rule
nodes are greater than a max-radius parameter Rmax, a new rule node is
created. The position of the new rule node is the same as the current vector
in the input data space and the radius of its receptive field is set to the
min-radius parameter Rmin; the algorithm goes to step 1; otherwise it goes
to the next step.
3. If there is a rule node with a distance to the current input vector less than
or equal to its radius and its class is the same as the class of the new vector,
nothing will be changed; go to step 1; otherwise:
4. If there is a rule node with a distance to the input vector less than or equal to
its radius and its class is different from that of the input vector, its influence
field should be reduced. The radius of the new field is set to the larger value
from the two numbers: distance minus the min-radius; min-radius. New
node is created as in step 2 to represent the new data vector.
5. If there is a rule node with a distance to the input vector less than or equal to the max-radius, and its class is the same as that of the input vector, enlarge the influence field by taking the distance as a new radius, only if such an enlarged field does not cover any other rule nodes that belong to a different class; otherwise, create a new rule node in the same way as in step 2, and go to step 1.

Box 3.2.b Recall procedure (classification of a new input vector)

in a trained ECF:
1. Enter the new vector in the ECF trained system; if the new input vector lies
within the field of one or more rule nodes associated with one class, the
vector is classified in this class;



2. If the input vector lies within the fields of two or more rule nodes associated
with different classes, the vector will belong to the class corresponding to
the closest rule node.
3. If the input vector does not lie within any field, then take the m rule nodes most highly activated by the new vector, and calculate the average distance from the vector to the nodes of each class; the vector will belong to the class corresponding to the smallest average distance.
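Boxes 3.2a and 3.2b can be sketched as a minimal ECF classifier over raw (non-fuzzified) inputs. The values of Rmin, Rmax, and m are illustrative, and the check in step 5 that an enlarged field must not cover nodes of another class is omitted for brevity:

```python
import numpy as np

class SimpleECF:
    """Sketch of the ECF learning and recall procedures of Box 3.2."""

    def __init__(self, r_min=0.1, r_max=0.6, m=3):
        self.nodes = []                       # each node: [centre, radius, class]
        self.r_min, self.r_max, self.m = r_min, r_max, m

    def learn_one(self, x, c):
        x = np.asarray(x, float)
        if not self.nodes:
            self.nodes.append([x, self.r_min, c]); return
        d = [np.linalg.norm(x - n[0]) for n in self.nodes]
        i = int(np.argmin(d))
        centre, radius, cls = self.nodes[i]
        if d[i] <= radius:
            if cls == c:
                return                        # step 3: already covered, same class
            # step 4: shrink the other-class field, then add a new node
            self.nodes[i][1] = max(d[i] - self.r_min, self.r_min)
            self.nodes.append([x, self.r_min, c])
        elif d[i] <= self.r_max and cls == c:
            self.nodes[i][1] = d[i]           # step 5: enlarge the field
        else:
            self.nodes.append([x, self.r_min, c])   # step 2: new rule node

    def classify(self, x):
        x = np.asarray(x, float)
        d = [np.linalg.norm(x - n[0]) for n in self.nodes]
        inside = [i for i, di in enumerate(d) if di <= self.nodes[i][1]]
        if inside:                            # rules 1-2: closest covering node
            return self.nodes[min(inside, key=lambda i: d[i])][2]
        top = np.argsort(d)[:self.m]          # rule 3: m most activated nodes
        best, best_d = None, np.inf
        for cls in {self.nodes[i][2] for i in top}:
            avg = np.mean([d[i] for i in top if self.nodes[i][2] == cls])
            if avg < best_d:
                best, best_d = cls, avg
        return best
```

Training this sketch on a stream and then presenting further examples reproduces the incremental adaptation illustrated with the Iris data below: new rule nodes appear only where existing fields cannot absorb the data.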

Two main characteristics of ECF are demonstrated in the following example that
uses the Iris case study data set.
Incrementally adaptive learning
Rule/knowledge extraction

Figure 3.10 shows: (a) an ECF model trained on 90% of the Iris data (135 samples)
creating 18 clusters (rules), and afterwards adapted incrementally to the other 10%
of data (class 3 only, 15 samples), updating the rules and creating a new one, #19;
(b) the 19 rules that represent the adapted 18 clusters of data from the first 90%
of the Iris data and the new rule, #19.





Fig. 3.9 A simplified structure of an evolving classification function (ECF), with rule nodes connected to output nodes. For every input variable, a different number and type of fuzzy membership functions can be defined or evolved (see Fig. 2.2).



Fig. 3.10 (a) An ECF model trained on 90% of the Iris data (135 samples) creating 18 clusters (rules), and
afterwards adapted incrementally to the other 10% of data (class 3 only, 15 samples), updating the rules and
creating a new one, #19; (Continued overleaf )


Evolving Fuzzy Neural Networks (EFuNN)

Fuzzy neural networks are connectionist structures that can be interpreted in terms of fuzzy rules (Yamakawa et al., 1992; Furuhashi et al., 1993; Lin and Lee, 1996; Kasabov, 1996). Fuzzy neural networks are neural networks, with all the NN characteristics of training, recall, adaptation, and so on, whereas neuro-fuzzy inference systems (Chapter 5) are fuzzy rule-based systems, with their associated fuzzy inference mechanisms, that are implemented as neural networks for the purpose of learning and rule optimisation. The evolving fuzzy neural network (EFuNN) presented here is of the former type, whereas the HyFIS and DENFIS systems presented in Chapter 5 are of the latter type. Some authors do not separate the two types, which makes the transition from one type to the other more flexible and also broadens the interpretation and the application of each of these systems.


The EFuNN Architecture

EFuNNs have a five-layer structure (Fig. 3.11). Here nodes and connections are
created/connected as data examples are presented. An optional short-term memory
layer can be used through a feedback connection from the rule (also called case)

Rule 1:
X1 is ( 1: 0.75 )
X2 is ( 2: 0.61 )
X3 is ( 1: 0.89 )
X4 is ( 1: 0.92 )
then Class is [1]
Radius = 0.240437 , 50 in Cluster
Rule 2: if
X1 is ( 2: 0.73 )
X2 is ( 1: 0.50 )
X3 is ( 2: 0.62 )
X4 is ( 2: 0.54 )
then Class is [2]
Radius = 0.102388 , 10 in Cluster
Rule 3:
X1 is ( 1: 0.65 )
X2 is ( 1: 0.84 )
X3 is ( 2: 0.51 )
X4 is ( 1: 0.50 )
then Class is [2]
Radius = 0.107233 , 15 in Cluster
Rule 4:
X1 is ( 1: 0.55 )
X2 is ( 1: 0.58 )
X3 is ( 2: 0.54 )
X4 is ( 2: 0.58 )
then Class is [2]
Radius = 0.073327 , 17 in Cluster
Rule 5:
X1 is ( 1: 0.80 )
X2 is ( 1: 0.80 )
X3 is ( 1: 0.60 )
X4 is ( 1: 0.61 )
then Class is [2]
Radius = 0.078333 , 3 in Cluster
Rule 6: if
X1 is ( 1: 0.55 )
X2 is ( 1: 0.50 )
X3 is ( 2: 0.63 )
X4 is ( 2: 0.69 )
then Class is [2]
Radius = 0.038928 , 1 in Cluster
Rule 7:
X1 is ( 2: 0.55 )
X2 is ( 1: 0.77 )


X3 is ( 2: 0.65 )
X4 is ( 2: 0.58 )
then Class is [2]
Radius = 0.057870 , 1 in Cluster
Rule 8:
X1 is ( 1: 0.50 )
X2 is ( 1: 0.65 )
X3 is ( 2: 0.62 )
X4 is ( 1: 0.54 )
then Class is [2]
Radius = 0.010000 , 1 in Cluster
Rule 9:
X1 is ( 1: 0.53 )
X2 is ( 1: 0.69 )
X3 is ( 2: 0.68 )
X4 is ( 2: 0.61 )
then Class is [2]
Radius = 0.010000 , 1 in Cluster
Rule 10:
X1 is ( 1: 0.53 )
X2 is ( 2: 0.58 )
X3 is ( 2: 0.58 )
X4 is ( 2: 0.61 )
then Class is [2]
Radius = 0.010000 , 1 in Cluster
Rule 11:
X1 is ( 2: 0.55 )
X2 is ( 2: 0.54 )
X3 is ( 2: 0.82 )
X4 is ( 2: 0.95 )
then Class is [3]
Radius = 0.169390 , 20 in Cluster
Rule 12:
X1 is ( 1: 0.80 )
X2 is ( 1: 0.77 )
X3 is ( 2: 0.58 )
X4 is ( 2: 0.65 )
then Class is [3]
Radius = 0.089745 , 1 in Cluster
Rule 13:
X1 is ( 1: 0.58 )
X2 is ( 1: 0.69 )
X3 is ( 2: 0.68 )
X4 is ( 2: 0.73 )

then Class is [3]

Radius = 0.061177 , 4 in Cluster
Rule 14:
X1 is ( 2: 0.88 )
X2 is ( 1: 0.58 )
X3 is ( 2: 0.91 )
X4 is ( 2: 0.80 )
then Class is [3]
Radius = 0.170023 , 11 in Cluster
Rule 15:
X1 is ( 1: 0.53 )
X2 is ( 1: 0.88 )
X3 is ( 2: 0.66 )
X4 is ( 2: 0.58 )
then Class is [3]
Radius = 0.045060 , 1 in Cluster
Rule 16:
X1 is ( 2: 0.58 )
X2 is ( 1: 0.69 )
X3 is ( 2: 0.71 )
X4 is ( 2: 0.73 )
then Class is [3]
Radius = 0.076566 , 10 in Cluster
Rule 17:
X1 is ( 1: 0.50 )
X2 is ( 1: 0.73 )
X3 is ( 2: 0.75 )
X4 is ( 2: 0.54 )
then Class is [3]
Radius = 0.010000 , 1 in Cluster
Rule 18:
X1 is ( 2: 0.55 )
X2 is ( 1: 0.65 )
X3 is ( 2: 0.68 )
X4 is ( 2: 0.58 )
then Class is [3]
Radius = 0.010000 , 1 in Cluster
Rule 19:
X1 is ( 1: 0.53 )
X2 is ( 1: 0.58 )
X3 is ( 2: 0.63 )
X4 is ( 2: 0.69 )
then Class is [3]
Radius = 0.010000 , 1 in Cluster

Fig. 3.10 (continued ) (b) The 19 rules that represent the adapted 19 clusters of data, obtained after further
training of the ECF model from Fig. 3.10a on the other 10% of the Iris data. Rule #19 is a new one, as a new
cluster #19 was created as a result of the adaptation of the model from (a) to the new 10% of the data. The
cluster centres in each rule are defined by the membership degree (between 0 and 1) to which each variable
belongs to a fuzzy membership function (here 1 indicates the small value and 2 the large value fuzzy
membership function).
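Rules of this kind can be recalled procedurally: fuzzify a new sample, find the rule node whose centre is nearest, and return that rule's class. The sketch below is illustrative only, not the exact ECF recall procedure: it assumes inputs already scaled to [0, 1], uses just two MFs per variable (1 = Small, 2 = Large, encoded as complements), a plain mean absolute difference as the distance, and a hypothetical subset of three of the rules above.

```python
import numpy as np

# Hypothetical subset of the rules above: per rule, the fuzzy centre as
# (MF index, membership degree) for X1..X4, the class, and the radius.
RULES = [
    ([(1, 0.75), (2, 0.61), (1, 0.89), (1, 0.92)], 1, 0.240437),  # Rule 1
    ([(2, 0.73), (1, 0.50), (2, 0.62), (2, 0.54)], 2, 0.102388),  # Rule 2
    ([(2, 0.55), (2, 0.54), (2, 0.82), (2, 0.95)], 3, 0.169390),  # Rule 11
]

def fuzzify(x):
    """Two MFs per variable: a value >= 0.5 is mostly Large (MF 2),
    otherwise mostly Small (MF 1); degrees are complements of each other."""
    return [(2, v) if v >= 0.5 else (1, 1.0 - v) for v in x]

def centre_to_vector(centre):
    """Express each (mf, degree) pair as the 'Large' membership in [0, 1]."""
    return np.array([d if mf == 2 else 1.0 - d for mf, d in centre])

def classify(x):
    """Return the class of the rule node nearest to the fuzzified input."""
    xf = centre_to_vector(fuzzify(x))
    dists = [np.abs(xf - centre_to_vector(c)).mean() for c, _, _ in RULES]
    return RULES[int(np.argmin(dists))][1]
```

For example, a sample with large petal measurements such as `[0.8, 0.6, 0.9, 0.9]` lands nearest the Rule 11 centre and is assigned class 3, while `[0.1, 0.6, 0.1, 0.1]` falls to Rule 1 and class 1.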

Evolving Connectionist Methods for Supervised Learning





Fig. 3.11 Evolving fuzzy neural network EFuNN: an example of a simplified standard feedforward EFuNN system
(from Kasabov (2001a,b), PCT patent WO 01/78003).

node layer (see Fig. 3.12). The layer of feedback connections could be used if
temporal relationships of input data are to be memorized structurally.
The input layer represents input variables. The second layer of nodes (fuzzy
input neurons or fuzzy inputs) represents fuzzy quantisation of each input variable
space (similar to the ECF model and to the factorisable RBF networks; see
Section 3.1). For example, two fuzzy input neurons can be used to represent small
and large fuzzy values. Different membership functions (MF) can be attached to
these neurons: triangular (see Figs. 2.2 and 3.13), Gaussian, etc.
The number and the type of MF can be dynamically modified. The task of
the fuzzy input nodes is to transfer the input values into membership degrees to
which they belong to the corresponding MF. The layers that represent fuzzy MF
are optional, as a nonfuzzy version of EFuNN can also be evolved, with only three
layers of neurons and two layers of connections, as in the eMLP used in
Chapter 6.
The third layer contains rule (case) nodes that evolve through supervised and/or
unsupervised learning. The rule nodes represent prototypes (exemplars, clusters) of
input-output data associations that can be graphically represented as associations
of hyperspheres from the fuzzy input and the fuzzy output spaces. Each rule
node r is defined by two vectors of connection weights, W1(r) and W2(r), the
latter being adjusted through supervised learning based on the output error, and


Fig. 3.12 An example of an EFuNN with a short-term memory realised as a feedback connection (from Kasabov
(2001a,b), PCT patent WO 01/78003).


Evolving Connectionist Systems

[Figure content: triangular membership functions over the input range (vertical axis: membership degree); the node radius R = 1 − S; example membership vectors d1f = (0, 0, 1, 0, 0, 0) and d2f = (0, 1, 0, 0, 0, 0), for which the local normalised fuzzy distance is 1.]
Fig. 3.13 Triangular membership functions (MF) and the local, normalised, fuzzy distance measure (from
Kasabov (2001a,b)).

the former being adjusted through unsupervised learning based on a similarity
measure within a local area of the problem space. A linear activation function, or
a Gaussian function, is used for the neurons of this layer.
The fourth layer of neurons represents fuzzy quantisation of the output variables,
similar to the input fuzzy neuron representation. Here, a weighted sum input
function and a saturated linear activation function are used for the neurons to
calculate the membership degrees to which the output vector associated with the
presented input vector belongs to each of the output MFs. The fifth layer represents
the values of the output variables. Here a linear activation function is used to
calculate the defuzzified values for the output variables.
A partial case of EFuNN would be a three-layer network without the fuzzy input
and the fuzzy output layers (e.g. eMLP, or an evolving simple RBF network). In
this case, slightly modified versions of the algorithms described below are applied,
mainly in terms of measuring Euclidean distance and using Gaussian activation functions.
The evolving learning in EFuNNs is based on either of the following assumptions:
1. No rule nodes exist prior to learning and all of them are created (generated)
during the evolving process; or
2. There is an initial set of rule nodes that are not connected to the input and
output nodes and become connected through the learning (evolving) process.
The latter case is more biologically plausible as most of the neurons in the
human brain exist before birth, and become connected through learning, but
still there are areas of the brain where new neurons are created during learning
if surprisingly different stimuli from those previously seen are presented. (See
Chapter 1 for biological inspirations of ECOS.)
The EFuNN evolving algorithm presented next does not differentiate between these
two cases.
Each rule node, for example rj, represents an association between a hypersphere
from the fuzzy input space and a hypersphere from the fuzzy output space (see
Fig. 3.14), the W1(rj) connection weights representing the co-ordinates of the
centre of the sphere in the fuzzy input space, and the W2(rj) weights the co-ordinates
in the fuzzy output space. The radius of the input hypersphere of a rule node rj


Fig. 3.14 Adaptive learning in EFuNN: a rule node represents an association of two hyperspheres from the fuzzy
input space and the fuzzy output space; the rule node rj moves from a position rj(1) to rj(2) to accommodate
a new input-output example (xf, yf) (from Kasabov (2001a,b)).

is defined as Rj = 1 − Sj, where Sj is the sensitivity threshold parameter defining
the minimum activation of the rule node rj to a new input vector x from a new
example (x, y) in order for the example to be considered for association with this
rule node.
The pair of fuzzy input-output data vectors (xf, yf) will be allocated to the
rule node rj if xf falls into the rj input receptive field (hypersphere), and yf
falls into the rj output reactive field (hypersphere). This is ensured through two
conditions: that the local normalised fuzzy difference between xf and W1(rj) is
smaller than the radius Rj, and that the normalised output error Err = ||y − y′|| / Nout is
smaller than an error threshold E, where Nout is the number of the outputs and y′ is the
output produced by the EFuNN. The error parameter E sets the error tolerance of the system.

A local normalised fuzzy distance between two fuzzy membership vectors d1f and
d2f, which represent the membership degrees to which two real-value data vectors d1
and d2 belong to predefined MFs, is calculated as

D(d1f, d2f) = ||d1f − d2f|| / ||d1f + d2f||

where ||x − y|| denotes the sum of all the absolute values of the vector that is obtained
after vector subtraction (or summation, in the case of ||x + y||) of the two vectors x and y;


/ denotes division. For example, if d1f = (0, 0, 1, 0, 0, 0) and d2f = (0, 1, 0, 0, 0, 0), then
D(d1f, d2f) = (1 + 1)/2 = 1, which is the maximum value for the local normalised
fuzzy difference (see Fig. 3.13). In EFuNNs the local normalised fuzzy distance is
used to measure the distance between a new input data vector and a rule node in
the local vicinity of the rule node.
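The distance measure above can be written down directly; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def fuzzy_distance(d1f, d2f):
    """Local normalised fuzzy distance between two membership vectors:
    sum of absolute differences divided by sum of absolute sums."""
    d1f, d2f = np.asarray(d1f, float), np.asarray(d2f, float)
    return np.abs(d1f - d2f).sum() / np.abs(d1f + d2f).sum()

# The worked example from the text: maximally distant membership vectors.
print(fuzzy_distance([0, 0, 1, 0, 0, 0], [0, 1, 0, 0, 0, 0]))  # 1.0
```

Identical vectors give 0, disjoint ones give 1, so the measure is naturally bounded in [0, 1] for non-negative membership vectors.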
In RBF networks Gaussian radial basis functions are allocated to the nodes and
used as activation functions to calculate the distance between the node and the
input vectors.
Through the process of associating (learning) new data points (vectors) to a
rule node rj, the centres of this node's hyperspheres adjust in the fuzzy input space,
depending on the distance between the new input vector and the rule node, through
a learning rate lj, and in the fuzzy output space, depending on the output error,
through the Widrow-Hoff least mean square (LMS) delta algorithm (Widrow and
Hoff, 1960). This adjustment can be represented mathematically by the change in
the connection weights of the rule node rj from W1(rj)(t) and W2(rj)(t) to W1(rj)(t+1)
and W2(rj)(t+1), respectively, employing the following vector operations:
W1(rj)(t+1) = W1(rj)(t) + lj (xf − W1(rj)(t))

W2(rj)(t+1) = W2(rj)(t) + lj (yf − A2) A1(rj)(t)     (3.14)

where A2 = f2(W2 · A1) is the activation vector of the fuzzy output neurons in
the EFuNN structure when x is presented; A1(rj) = f1(D(W1(rj), xf)) is the
activation of the rule node rj; a simple linear function can be used for f1 and f2,
for example, A1(rj) = 1 − D(W1(rj), xf); lj is the current learning rate of the
rule node rj, calculated, for example, as lj = 1/Nex(rj), where Nex(rj) is the number
of examples currently associated with rule node rj.
The statistical rationale behind this is that the more examples are currently
associated with a rule node, the less it will move when a new example has to be accommodated by this rule node; that is, the change in the rule node's position is inversely proportional to the number of examples already associated with it when a single new example is accommodated.
When a new example is associated with a rule node rj, not only its location in
the input space changes, but also its receptive field, expressed as its radius Rj, and
its sensitivity threshold Sj:

Rj(t+1) = Rj(t) + D(W1(rj)(t+1), W1(rj)(t))

Sj(t+1) = Sj(t) − D(W1(rj)(t+1), W1(rj)(t))

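The full per-node update (centre, output weights, radius, and sensitivity) can be sketched as follows. This is an illustration under stated assumptions, not the exact published procedure: the learning rate here counts the new example as well (lj = 1/(Nex + 1)), and the fuzzy distance moved by the centre is used to grow Rj and shrink Sj.

```python
import numpy as np

def update_rule_node(W1, W2, R, S, n_ex, xf, yf, A1, A2):
    """One incremental update of a rule node after it absorbs (xf, yf).
    A1 is the node's activation for xf; A2 is the fuzzy output vector."""
    lj = 1.0 / (n_ex + 1)               # learning rate shrinks as the cluster grows
    W1_new = W1 + lj * (xf - W1)        # unsupervised: move centre toward xf
    W2_new = W2 + lj * (yf - A2) * A1   # supervised: LMS (delta) rule on the error
    # fuzzy distance the centre moved; radius grows, sensitivity shrinks by it
    shift = np.abs(W1_new - W1).sum() / np.abs(W1_new + W1).sum()
    return W1_new, W2_new, R + shift, S - shift, n_ex + 1
```

Because R = 1 − S is maintained by construction, the receptive field widens exactly as the node becomes less sensitive to new inputs.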
The learning process in the fuzzy input space is illustrated in Fig. 3.15 on four data
points d1, d2, d3 and d4. Figure 3.15 shows how the centre of the rule node
rj adjusts, after learning each new data point, to its new positions rj(1), rj(2), rj(3)
when one-pass learning is applied. Figure 3.16 shows how the rule node position
would move to new positions if another pass of learning












Fig. 3.15 Evolving adaptive learning in EFuNN illustrated on the example of learning four new input data
vectors (points) in a rule node rj (from Kasabov (2001a,b)).

were applied. If the two learning rates l1 and l2 have zero values, once established,
the centres of the rule nodes will not move.
The weight adjustment formulas (3.14) define the standard EFuNN, in which the
first part (W1) is updated in an unsupervised mode and the second part (W2) in a supervised
mode, similar to the RBF networks. But here the formulas are applied once for
each example (x, y), in an incrementally adaptive mode, similar to the RAN
model (Platt, 1991) and its modifications. The standard supervised/unsupervised
learning EFuNN is denoted EFuNN-s/u. In two other modifications of EFuNN,
namely double-pass learning EFuNN (EFuNN-dp) and gradient descent learning
EFuNN (EFuNN-gd), slightly different update functions are used, as explained in
the next subsection.
The learned temporal associations can be used to support the activation of
rule nodes based on temporal pattern similarity. Here, temporal dependencies are
learned through establishing structural links. These dependencies can be further
investigated and enhanced through synaptic analysis (at the synaptic memory
level) rather than through neuronal activation analysis (at the behavioural level).
The ratio spatial similarity/temporal correlation can be balanced for different


Fig. 3.16 Two-pass learning of four input data vectors (points) that fall in the receptive and the reactive fields
of the rule node rj (from Kasabov (2001a,b)).



applications through two parameters, Ss and Tc, such that the activation of a rule
node r for a new data example dnew is defined through the following vector operation:

A1(r) = [1 − Ss · D(W1(r), dnewf) + Tc · W3(rmax(t−1), r)][0,1]

where [·][0,1] is a bounded operation in the interval [0,1]; D(W1(r), dnewf) is the
normalised local fuzzy distance value, and rmax(t−1) is the winning neuron at the
previous time moment. Here temporal connections can be given a higher importance in order to tolerate a higher distance. If Tc = 0, then temporal links are
excluded from the functioning of the system.
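The bounded activation with the optional temporal term can be sketched as a small function. This is a toy illustration: the W3 entry linking the previous winner to this node is passed in directly, and the distance denominator is guarded against zero (an implementation choice, not from the text).

```python
import numpy as np

def rule_activation(W1_r, x_f, w3_from_prev, Ss=1.0, Tc=0.0):
    """Activation of rule node r for a fuzzified input x_f, optionally
    boosted by the temporal link from the previous winning node (Tc > 0)."""
    D = np.abs(x_f - W1_r).sum() / max(np.abs(x_f + W1_r).sum(), 1e-12)
    a = 1.0 - Ss * D + Tc * w3_from_prev
    return min(max(a, 0.0), 1.0)   # bound the result to [0, 1]
```

With Tc = 0 the function reduces to the plain spatial activation 1 − Ss · D; a positive Tc lets a strong temporal link lift a node above threshold even at a larger distance.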
Figure 3.17 shows a schematic diagram of the process of evolving three rule
nodes and setting the temporal links between them for data taken from consecutive
frames of the spoken word "eight" (similar to an HMM; see Chapter 1).
The EFuNN system was explained thus far with the use of one rule node
activation (the winning rule node for the current input data). The same formulas
are applicable when the activation of m rule nodes is propagated and used (the
so-called many-of-n mode, or m-of-n for short). By default, m = 3, but it is
subject to optimisation for different data sets.
The supervised learning in EFuNN is based on the above-explained principles,
so when a new data example d = (x, y) is presented, the EFuNN either creates a
new rule node rn to memorize the two input and output fuzzy vectors, W1(rn) = xf
and W2(rn) = yf, or adjusts an existing rule node rj.
After a certain time (when a certain number of examples have been presented)
some neurons and connections may be pruned or aggregated.
Different pruning rules can be applied for a successful pruning of unnecessary
nodes and connections. One of them is given below:
IF Age(rj) > OLD AND the total activation TA(rj) is less than a pruning parameter Pr
times Age(rj), THEN prune rule node rj,

where Age(rj) is calculated as the number of examples that have been presented
to the EFuNN after rj was first created; OLD is a predefined age limit; Pr is a
pruning parameter in the range [0,1]; and the total activation TA(rj) is calculated







Fig. 3.17 The process of creation of temporal connections from consecutive frames (vectors) taken from speech
data of a pronounced word eight. The three rule nodes represent the three major parts of the speech signal,
namely the phonemes /silence/, /ei/, /t/. The black dots represent data points (frame vectors) allocated to the
rule nodes (from Kasabov (2001a,b)).



as the number of examples for which rj has been the correct winning node (or
among the m winning nodes in the m-of-n mode of operation).
The above pruning rule requires that the fuzzy concepts of OLD, HIGH, and so
on are defined in advance. As a partial case, a fixed value can be used; e.g. a node
is OLD if it has existed during the evolving process for more than p examples.
The pruning rule and the way the values for the pruning parameters are defined,
depend on the application task.
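With fixed (crisp) values for the fuzzy concepts, the pruning rule collapses to a one-line predicate; a minimal sketch, with illustrative default parameter values:

```python
def should_prune(age, total_activation, OLD=100, Pr=0.1):
    """Prune a rule node that is old but has rarely been the (correct)
    winning node: TA must stay below Pr times the node's age."""
    return age > OLD and total_activation < Pr * age
```

For example, with the defaults a node that has seen 200 examples is pruned if it was the correct winner fewer than 20 times.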


EFuNN Evolving Supervised Learning Rules and Algorithms

Three supervised learning algorithms are outlined here that differ in the weight
adjustment formulas.
(a) EFuNN-s/u Learning Algorithm
Set initial values for the system parameters: number of membership functions;
initial sensitivity threshold (default S = 0.9); error threshold E; aggregation
parameter Nagg, the number of consecutive examples after which an aggregation is
performed (explained in a later section); pruning parameters OLD and Pr; a value
for m (in m-of-n mode); thresholds T1 and T2 for rule extraction.
Set the first rule node to memorize the first example (x, y):

W1(r0) = xf and W2(r0) = yf.

Loop over presentations of input-output pairs (x, y) {
Evaluate the local normalised fuzzy distance D between xf and the existing rule
node connections W1 (formulas (3.5)).
Calculate the activation A1 of the rule node layer. Find the closest rule node rk (or
the closest m rule nodes in the case of m-of-n mode) to the fuzzy input vector xf.
if A1(rk) < Sk (the sensitivity threshold for the node rk), create a new rule node for
(xf, yf).
Find the activation of the fuzzy output layer, A2 = W2 · A1, and the output error
Err = ||y − y′|| / Nout.
if Err > E,
create a new rule node to accommodate the current example (xf, yf);
else update W1(rk) and W2(rk) according to (3.14) (in the case of m-of-n EFuNN, update
all the m rule nodes with the highest A1 activation).
Apply the aggregation procedure of rule nodes after each group of Nagg examples is presented.
Update the parameters Sk, Rk, Age(rk), TA(rk) for the rule node rk.
Prune rule nodes if necessary, as defined by the pruning parameters.
Extract rules from the rule nodes (as explained in a later subsection).
} End of the main loop.
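The main loop above can be sketched in code. The class below is a minimal illustration with hypothetical names: it omits the fuzzy input/output layers (vectors are assumed already fuzzified), aggregation, pruning, and rule extraction, and uses lj = 1/(Nex + 1) as the learning rate.

```python
import numpy as np

class EFuNNsu:
    """Minimal sketch of the EFuNN-s/u one-pass loop (illustrative names;
    no MF layers, aggregation, pruning, or rule extraction)."""

    def __init__(self, S=0.9, E=0.1):
        self.S, self.E = S, E                    # sensitivity and error thresholds
        self.W1, self.W2, self.nex = [], [], []  # one entry per rule node

    def _dist(self, a, b):
        """Local normalised fuzzy distance."""
        s = np.abs(a + b).sum()
        return np.abs(a - b).sum() / s if s else 0.0

    def learn_one(self, xf, yf):
        xf, yf = np.asarray(xf, float), np.asarray(yf, float)
        if not self.W1:                          # first example: first rule node
            return self._new_node(xf, yf)
        acts = [1.0 - self._dist(xf, w) for w in self.W1]
        k = int(np.argmax(acts))                 # winning (closest) rule node
        if acts[k] < self.S:                     # outside every receptive field
            return self._new_node(xf, yf)
        A2 = self.W2[k] * acts[k]                # fuzzy output of the winner
        if np.abs(yf - A2).sum() / len(yf) > self.E:
            return self._new_node(xf, yf)        # output error above threshold E
        lj = 1.0 / (self.nex[k] + 1)             # learning rate, as in (3.14)
        self.W1[k] = self.W1[k] + lj * (xf - self.W1[k])
        self.W2[k] = self.W2[k] + lj * (yf - A2) * acts[k]
        self.nex[k] += 1

    def _new_node(self, xf, yf):
        self.W1.append(xf.copy())
        self.W2.append(yf.copy())
        self.nex.append(1)
```

Presenting two well-separated examples evolves two rule nodes; a third example near the first centre does not create a node but shifts that node's W1 toward it, which is the local element tuning discussed below.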
The two other learning algorithms presented next are modifications; unless it is
explicitly mentioned otherwise, the denotation EFuNN means EFuNN-s/u.



(b) EFuNN-dp Learning Algorithm

This is different from the EFuNN-s/u in the weight adjustment formula for W2,
which is a modification of (3.14) as follows:

W2(rj)(t+1) = W2(rj)(t) + lj (yf − A2′) A1(rj′)

meaning that, after the first propagation of the input vector and the error Err calculation, if the weights are going to be adjusted, the W1 weights are adjusted first with the
use of (3.14); then the input vector x is propagated again through the already
adjusted rule node rj at its new position rj′ in the input space, a new error Err′
is calculated, and after that the W2 weights of the rule node rj are adjusted. This
is a finer weight adjustment than the adjustment in EFuNN-s/u, which may make a
difference in learning short sequences, but for learning longer sequences it may
not cause any difference in the results obtained through the simpler and faster EFuNN-s/u.
(c) EFuNN-gd Learning Algorithm
This algorithm is different from the EFuNN-s/u in the way the W1 connections
are adjusted, which is no longer unsupervised; here a one-step gradient descent
algorithm is used, similar to the RAN model (Platt, 1991):

W1(rj)(t+1) = W1(rj)(t) + lj (xf − W1(rj)) (yf − A2) A1(rj) W2(rj)     (3.20)

Formula (3.20) should be extended when the m-of-n mode is applied. The EFuNN-gd
algorithm is no longer supervised/unsupervised and the rule nodes are no
longer allocated at the cluster centres of the input space.
An important characteristic of EFuNN learning is the local element tuning. Only
one (or m, in the m-of-n mode) rule node will be either updated or created for
each data example. This makes the learning procedure very fast (especially in the
case when linear activation functions are used). Another advantage is that learning
a new data example does not cause forgetting of old ones. A third advantage is that
new input and new output variables can be added during the learning process, thus
making the EFuNN system more flexible to accommodate new information, once
such becomes available, without disregarding the already learned information.
The use of MFs and membership degrees (layer two of neurons), and also the
use of the normalised local fuzzy difference, makes it possible to deal with missing
values. In such cases, the fuzzy membership degree for all MFs will be 0.5, indicating
that the value, if it existed, might belong to any of them. A preference, in terms of
which fuzzy MF the missing value might belong to, can also be represented through
assigning appropriate membership degrees; e.g. a degree of 0.7 for Small means that
the value is more likely to be Small rather than Medium or Large.
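The missing-value convention can be folded directly into the fuzzification step; a minimal sketch (function and MF names are illustrative, with toy MFs over [0, 1]):

```python
def fuzzify_with_missing(x, mfs):
    """Membership degrees for each input variable; a missing value (None)
    gets 0.5 for every MF, i.e. it could belong to any of them."""
    out = []
    for xi, var_mfs in zip(x, mfs):
        if xi is None:                          # missing value
            out.extend([0.5] * len(var_mfs))
        else:
            out.extend([mf(xi) for mf in var_mfs])
    return out

small = lambda v: max(0.0, 1.0 - v)   # toy MFs on the [0, 1] range
large = lambda v: min(1.0, v)
mfs = [[small, large], [small, large]]
print(fuzzify_with_missing([0.2, None], mfs))  # [0.8, 0.2, 0.5, 0.5]
```

The preference variant mentioned above would simply replace the uniform 0.5 entries with degrees biased toward the more likely MF.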
The supervised learning algorithms above allow an EFuNN system to always
evolve and learn when a new input-output pair of data becomes available. This is
an active learning mode.
(d) EFuNN Sleep-Learning Rules
In another mode, passive or sleep learning, learning is performed when there is
no input pattern presented. This may be necessary to apply after an initial learning
has been performed. In this case existing connections that store previously fed
input patterns are used as an echo to reiterate the learning process. This type
of learning may be applied in the case of a short initial presentation of the data,



when only a small portion of data is learned in one-pass, incremental adaptive

mode, and then the training is refined through the sleep-learning method when
the system consolidates what it has learned before.
Sleep learning in EFuNN and in some other connectionist models is further
developed by Yamauchi and Hayami (2006).
(e) One-Pass Versus Multiple-Passes Learning
The best way to apply the above learning algorithms is to draw examples randomly
from the problem space, propagate them through the EFuNN and tune the connection
weights and the rule nodes, change and optimise the parameter values, and so on,
until the error becomes desirably small. In a fast learning mode, each example
is presented only once to the system. If it is possible to present examples two or more
times, the error may become smaller, but that depends on the parameter values of
the EFuNN and on the statistical characteristics of the data.


EFuNN Inference and Recall

The evolved EFuNN can perform inference when recalled on new input data. The
EFuNN inference method consists of calculating the output activation value when
a new input vector is applied. This is part of the EFuNN supervised learning
method when only an input vector x is propagated through the EFuNN. If the new
input vector falls in the receptive field of the winning rule node (the closest rule
node to the input vector), the one-of-n mode of inference is used, based on the
winning rule node activation (one-rule inference). If the new input vector does
not fall in the receptive field of its closest rule node, then the m-of-n mode
is used, where m rule nodes (rules) are used in the EFuNN inference process, with
a usual value of m being 3.


Strategies for Allocating Rule Nodes in the EFuNN Rule Node Space

There are different ways to allocate in a model space the EFuNN rule nodes evolved
over time as illustrated in Fig. 3.18 and explained below:
(a) A simple consecutive allocation strategy, i.e. each newly created rule (case)
node is allocated next to the previous, and to the following ones, in a linear
fashion. That represents a time order.
(b) Preclustered location, i.e. for each output fuzzy node (e.g. NO, YES) there
is a predefined location where the rule nodes supporting this predefined
concept are located. At the centre of this area the nodes that fully support this
concept (error 0) are placed; every new rule node's location is defined based
on the fuzzy output error and the similarity with other nodes. In a nearest
activated node insertion strategy, a new rule node is placed nearest to the
most highly activated node whose activation is still less than its sensitivity
threshold. The side (left or right) where the new node is inserted is defined
by the higher activation of the two neighbouring nodes.
(c) As in (b), but temporal feedback connections are set as well. New connections are set that link consecutively activated rule nodes, using the
short-term memory and the links established through the W3 weight matrix;




Fig. 3.18 Rule node allocation strategies: (a) simple consecutive allocation strategy; (b) a preclustered location;
(c) temporal feedback connections are evolved; (d) connections are evolved between rule nodes from different
EFuNN modules. For simplicity, only two membership functions (fuzzy output concepts) are used for the output
variable (from Kasabov (2001a,b,c)).

that will allow the evolving system to react properly to a series of data
points starting from a certain point that is not necessarily the beginning
of the series.
(d) The same as above, but in addition new connections are established between
rule nodes from different EFuNN modules that become activated simultaneously
(at the same time moment). This would make it possible for an ECOS
to learn a correlation between conceptually different variables, e.g. correlation
between speech sound (left module) and lip movement (right module).


EFuNNs Evolve Using Some Evolving Rules. What Are They in a Summary?

An EFuNN model evolves its structure and functionality based on the following
evolving rules, defined mathematically above:

A rule for new node creation
Rules for local receptive field modifications (incremental learning rules)
A rule for node aggregation (a consolidation rule)
A rule for node deletion (a forgetting rule)
A sleep-learning rule
Other rules, as shown above


Knowledge Manipulation in Evolving Fuzzy Neural Networks (EFuNNs): Rule Insertion, Rule Extraction, Rule Aggregation

It is important for an ECOS that learns in a lifelong learning mode not only
to adjust its structure and functionality, but also to explain, at any time of its
operation, the essence of the knowledge the system has learned. Without
this ability the chances of using such systems in areas such as financial decision
making, complex process control, or gene discovery and drug design, are very
slim. The EFuNN architecture, and some other architectures presented thus far,
are knowledge-based indeed as they manipulate knowledge in terms of rules, both
inserting existing knowledge before the evolving process has started, and extracting
refined knowledge from an evolving system. Here more details and analysis of the
knowledge-based character of evolving connectionist systems are given with some
new architectures reviewed and introduced.


Rule Extraction from EFuNNs

At any time (phase) of the evolving (learning) process of an EFuNN, fuzzy or exact
rules can be inserted and extracted. Insertion of fuzzy rules is achieved through
setting a new rule node rj for each new rule, such that the connection weights
W1(rj) and W2(rj) of the rule node represent this rule. For example, the fuzzy
rule (IF x1 is Small and x2 is Small THEN y is Small) can be inserted into an EFuNN
structure by setting the connections of a new rule node to the fuzzy condition
nodes x1-Small and x2-Small and to the fuzzy output node y-Small to a value of
1 each. The rest of the connections are set to a value of zero. Similarly, an exact
rule can be inserted into an EFuNN structure, e.g. IF x1 is 3.4 and x2 is 6.7 THEN



y is 9.5. Here the membership degrees to which the input values x1 = 3.4 and
x2 = 6.7, and the output value y = 9.5, belong to the corresponding fuzzy values
are calculated and attached to the corresponding connection weights. Each rule
node rj can be expressed as a fuzzy rule, for example:

Rule rj: IF x1 is Small (0.85) and x1 is Medium (0.15) and x2 is Small (0.7) and x2 is Medium
(0.3) (radius of the receptive field Rj = 0.1; maxRadius = 0.75) THEN y is Small (0.2) and y
is Large (0.8) (20 out of 175 examples associated with this rule),

where the numbers attached to the fuzzy labels denote the degree to which the
centres of the input and the output hyperspheres belong to the respective MF. The
degrees associated with the condition elements are the connection weights from
the matrix W1. Only values that are greater than a threshold T1 are left in the
rules as the most important ones. The degrees associated with the conclusion part
are the connection weights from W2 that are greater than a threshold of T2. An
example of rules extracted from a benchmark dynamic time-series data is given in
Section 3.5. The two thresholds T1 and T2 are used to disregard the connections
from W1 and W2 that represent small and insignificant membership degrees (e.g.
less than 0.1). A set of simple rules extracted from ECF was shown in Fig. 3.10.
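The extraction step itself is a simple thresholded read-out of W1 and W2; a minimal sketch (function name and label strings are illustrative):

```python
import numpy as np

def extract_rule(W1_r, W2_r, in_labels, out_labels, T1=0.1, T2=0.1):
    """Render one rule node as a fuzzy rule, keeping only membership
    degrees above the thresholds T1 (conditions) and T2 (conclusions)."""
    cond = [f"{lab} ({w:.2f})" for lab, w in zip(in_labels, W1_r) if w > T1]
    conc = [f"{lab} ({w:.2f})" for lab, w in zip(out_labels, W2_r) if w > T2]
    return "IF " + " and ".join(cond) + " THEN " + " and ".join(conc)

rule = extract_rule(
    np.array([0.85, 0.15, 0.7, 0.3]),
    np.array([0.2, 0.8]),
    ["x1 is Small", "x1 is Medium", "x2 is Small", "x2 is Medium"],
    ["y is Small", "y is Large"])
# IF x1 is Small (0.85) and x1 is Medium (0.15) and x2 is Small (0.70)
# and x2 is Medium (0.30) THEN y is Small (0.20) and y is Large (0.80)
```

Raising T1 to 0.2 would drop the weak "x1 is Medium (0.15)" condition, which is exactly how the thresholds prune insignificant membership degrees.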


Rule Aggregation in EFuNNs

Another knowledge-based technique applied to EFuNNs is rule node aggregation.
Through this technique several rule nodes are merged into one, as shown in
Figs. 3.19a,b,c on an example of three rule nodes r1, r2, and r3 (only the input space
is shown there).
For the aggregation of three rule nodes r1, r2, and r3, the following two aggregation
rules can be used to calculate the W1 connections of the new aggregated rule node ragg
(the same formulas are used to calculate the W2 connections):
(a) As a geometrical centre of the three nodes:

W1(ragg) = (W1(r1) + W1(r2) + W1(r3)) / 3

(b) As a weighted statistical centre:

W1(ragg) = (W1(r1) Nex(r1) + W1(r2) Nex(r2) + W1(r3) Nex(r3)) / Nsum     (3.22)

Nex(ragg) = Nsum = Nex(r1) + Nex(r2) + Nex(r3)

The three rule nodes will aggregate only if the radius of the aggregated node's
receptive field is less than a predefined maximum radius Rmax:

Ragg = D(W1(ragg), W1(rj)) + Rj <= Rmax

where rj is the rule node among the three that has the maximum distance from
the new node ragg, and Rj is the radius of its receptive field
(see Fig. 3.19c).
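The weighted-centre variant and the Rmax check can be sketched together; this is an illustration of the aggregation formulas above with hypothetical names, and it merges an arbitrary number of nodes rather than exactly three:

```python
import numpy as np

def aggregate(W1s, nexs, radii, Rmax):
    """Try to merge rule nodes into one aggregated node; returns None
    if the merged receptive field would exceed Rmax."""
    W1s, nexs = np.asarray(W1s, float), np.asarray(nexs, float)
    n_sum = nexs.sum()
    W1_agg = (W1s * nexs[:, None]).sum(axis=0) / n_sum  # weighted centre
    # local fuzzy distance from the aggregated centre to each original node
    dists = [np.abs(W1_agg - w).sum() / np.abs(W1_agg + w).sum() for w in W1s]
    j = int(np.argmax(dists))                 # farthest original node
    R_agg = dists[j] + radii[j]               # radius of the merged field
    return (W1_agg, n_sum, R_agg) if R_agg <= Rmax else None
```

Two nearby nodes merge into their midpoint with a small combined radius; tightening Rmax below that radius makes the same call return None, i.e. the nodes stay separate.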


















Fig. 3.19 Aggregation of rule nodes in an EFuNN: (a) an example of an evolved EFuNN structure; (b) the
process of aggregation of three rule nodes r1 , r2 , and r3 into one cluster node ragg . (Continued overleaf )




Fig. 3.19 (continued ) (c) the resulting node ragg from the aggregation of the three rules has a receptive
field radius Ragg which is less than a predefined (as a system parameter) value Rmax; (d) the process of
aggregation in time, shown for the example of the gas-furnace data; the number of rule nodes is aggregated after
every 40 examples; the picture also shows the resulting rule node allocation, their corresponding clusters, the
desired and the approximated gas-furnace function in time, and other EFuNN parameter values (from Kasabov
(2001a,b)).


In order for a given node rj to aggregate with other nodes, two subsets of
nodes are formed: the subset of nodes rk that, if activated to a degree of 1, will
produce an output value y′(rk) that is different from y′(rj) by less than the error
threshold E, and the subset of nodes that produce output values different from y′(rj)
by more than E. The W2 connections define these subsets. Then all the rule nodes
from the first subset that are closer to rj in the input space than the closest node to
rj from the second subset (in terms of W1 distance) get aggregated, if the
radius of the new node ragg is less than the predefined limit Rmax for a receptive
field (Fig. 3.19c).
Figure 3.19d shows the process of incrementally adaptive learning and aggregation (after every 40 examples) on the gas-furnace time series. Through aggregation, after the 146th example 17 rule nodes
are created. These rule nodes also represent cluster centres in the input space. The data points that belong to each
of these clusters are shown in different colours.
For classification, instead of aggregating all rule nodes that are closer to a rule
node rj than the closest node from the other class, it is possible to keep the
node closest to the other class out of the aggregation procedure, as a separate
node, a "guard", thus preventing a possible misclassification of new data in the
bordering area between the two classes. Guard vectors are conceptually similar
to support vectors in SVM (see Chapter 1).
Through node creation and their consecutive aggregation, an EFuNN system
can adjust over time to changes in the data stream and at the same time preserve
its generalisation capabilities.
Through analysis of the weights W3 of an evolved EFuNN, temporal correlations between consecutive exemplars can be expressed in terms of rules and
conditional probabilities, e.g.:
IF r1(t−1) THEN r2(t) (0.3)


The meaning of the above rule is that some examples that belong to the rule
(prototype) r2 follow in time examples from the rule prototype r1 with a relative
conditional probability of 0.3.
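Assuming W3 is stored as a matrix of accumulated temporal co-activations between winning rule nodes, the rule extraction described above could be sketched as follows; the matrix layout and the threshold are assumptions of this illustration.

```python
import numpy as np

def temporal_rules(W3, threshold=0.1):
    """Extract temporal rules 'IF r_i(t-1) THEN r_j(t) (p)' from W3.

    W3[i, j] accumulates how strongly rule node j won right after node i.
    Rows are normalised so that p approximates P(r_j at t | r_i at t-1).
    """
    rules = []
    row_sums = W3.sum(axis=1, keepdims=True)
    probs = np.divide(W3, row_sums, out=np.zeros_like(W3), where=row_sums > 0)
    for i, j in zip(*np.nonzero(probs >= threshold)):
        rules.append(f"IF r{i}(t-1) THEN r{j}(t) ({probs[i, j]:.2f})")
    return rules
```

Each extracted rule pairs two prototypes with the relative conditional probability of their temporal succession, in the spirit of the example above.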


Evolving Membership Functions in EFuNN

Changing membership functions is another knowledge-based operation that may

be needed for a refined performance after a certain time moment of the EFuNN's
operation. Changing the shape of the MF in a fuzzy neural structure such as FuNN
through a gradient descent algorithm is suggested in Kasabov et al. (1997). The
same algorithm, but in a one-epoch incrementally adaptive version, can be used
in EFuNNs. Changing the number of the MFs may also be needed. For example,
instead of three MFs, the system may perform better if it had five MFs for some
of the variables. In traditional fuzzy neural networks this change is difficult to implement.
In EFuNNs there are several possibilities to implement such dynamical changes
of MF, two of which are graphically illustrated in Fig. 3.21a,b. These are: (a) new MFs


Evolving Connectionist Systems





Fig. 3.20 Aggregation of rule nodes (support vectors) with the use of a guard-node strategy. Rule nodes are
presented as circles, the radii of which define their receptive fields, with the colour representing the class that
the nodes support (two classes are used): (a) before aggregation; (b) after aggregation, the receptive fields of
the new rule nodes have changed, but the receptive fields of the unchanged nodes, the guard nodes, remain
the same; (c) and (d) the process of aggregation as in (a) and (b), but presented here in the one-dimensional
space of the ordered rule nodes, where spatial allocation of nodes is applied (from Kasabov (2001a,b)).

are created (inserted) without a need for the old ones to be changed. The degree
to which each cluster centre (each rule node) belongs to the new MF can be
calculated through defuzzifying the centres. (b) All MFs change in order for new
ones to be introduced. For example, all stored fuzzy exemplars in W1 and W2 that
had three MFs, are defuzzified (e.g., through the centre of gravity defuzzification
technique) and subsequently used to evolve a new EFuNN structure that has five
MFs (Fig. 3.21a,b).
Adjustment of MF based on the χ² criterion in the incrementally adaptive learning
context of EFuNN can be applied as follows.
1. Initialise the EFuNN with a standard number of MF before the learning begins,
based on some expected rule representation and based on the context of
the data.





Fig. 3.21 Online membership function modification: (a) new MFs are inserted without modifying the existing
ones; (b) five new MFs are created that substitute the three old MFs (from Kasabov (2001a,b)).

2. During the evolving process, calculate the number of examples that fall in each
of the areas of the already defined MF and their class (output) values.
3. Regularly, after a sufficient number of examples are presented (e.g. 100), start
merging the neighbouring fuzzy intervals and evaluate new ones based on the
χ² criterion. The final fuzzy intervals will define the optimal fuzzy MF of a certain
type, e.g. triangular.
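Step 3 is close in spirit to the ChiMerge discretisation idea: repeatedly merge the pair of neighbouring intervals whose class distributions are statistically most similar. The sketch below is a hedged illustration under that interpretation; the interval representation and the stopping threshold are assumptions.

```python
def chi2(a, b):
    """Chi-square statistic for two adjacent intervals.

    a, b -- per-class example counts, e.g. [n_class0, n_class1].
    """
    total = sum(a) + sum(b)
    score = 0.0
    for row in (a, b):
        for k in range(len(a)):
            col = a[k] + b[k]
            expected = sum(row) * col / total
            if expected > 0:
                score += (row[k] - expected) ** 2 / expected
    return score

def merge_intervals(intervals, counts, chi2_threshold):
    """Merge the neighbouring pair with the lowest chi-square score,
    repeating until every remaining pair differs significantly."""
    intervals, counts = list(intervals), [list(c) for c in counts]
    while len(intervals) > 1:
        scores = [chi2(counts[i], counts[i + 1]) for i in range(len(counts) - 1)]
        i = min(range(len(scores)), key=scores.__getitem__)
        if scores[i] >= chi2_threshold:
            break
        intervals[i] = (intervals[i][0], intervals[i + 1][1])  # fuse the bounds
        counts[i] = [x + y for x, y in zip(counts[i], counts[i + 1])]
        del intervals[i + 1], counts[i + 1]
    return intervals, counts
```

The surviving interval boundaries would then be used to place the centres of the refined membership functions.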


Case Study Examples: Learning, Aggregation, and Rule Extraction from the Mackey–Glass Time Series

Example 1
The following values for the EFuNN parameters were set: an initial value for the sensitivity threshold S of 0.9; error threshold E = 0.1; a maximum radius Rmax = 0.2;
a rule extraction threshold of 0.5; aggregation is performed after each consecutive
group of 50 examples is presented; m-of-n mode, where m = 1, is used; the number
of membership functions MF is 5; and 1000 consecutive data examples are used.
Some experimental results of the incrementally adaptive evolving of an EFuNN
are presented in Fig. 3.22a–d, as follows: (a) the desired versus the predicted
six-steps-ahead values through one-pass incrementally adaptive learning; (b) the

















Fig. 3.22 Experiments for online evolving of an EFuNN from the Mackey–Glass chaotic time-series data. An EFuNN
is evolved on 1000 data examples from the Mackey–Glass time series (four inputs: x(t), x(t−6), x(t−12),
and x(t−18), and one output x(t+6); see http://legend.gwydion.cs.cmu/neural-bench/benchmarks/mackeyglass.html): (a) the desired versus the predicted six-steps-ahead value through



Table 3.1 Some of the fuzzy rules extracted from the EFuNN evolved from the
Mackey–Glass data (see Fig. 3.22).
Rule 1: IF [x1 is (3 0.658)] AND [x2 is (4 0.884)] AND [x3 is (4 0.822)] AND
[x4 is (4 0.722)] [radius of the receptive field R1 = 0.086]
THEN [y is (4 0.747)] [accommodated training examples Nexr1 = 6]
Rule 2: IF [x1 is (3 0.511)] AND [x2 is (4 0.774)] AND [x3 is (4 0.852)] AND
[x4 is (4 0.825)] [radius of the receptive field R2 = 0.179]
THEN [y is (3 0.913)] [accommodated training examples Nexr2 = 2]
Rule 16: IF [x1 is (2 0.532)] AND [x2 is (2 0.810)] AND [x3 is (3 0.783)] AND
[x4 is (4 0.928)] [radius of the receptive field R16 = 0.073]
THEN [y is (5 0.516)] [accommodated training examples Nexr16 = 12]
Notation: The fuzzy values are denoted with numbers as follows: 1, very small; 2, small; 3, medium;
4, large; 5, very large. The antecedent and the consequent weights are rounded to the third
digit after the decimal point; values smaller than 0.5 are ignored, as 0.5 is used as a threshold
T1 = T2 for rule extraction.

absolute, the local incrementally adaptive RMSE (LRMSE), and the local incrementally adaptive NDEI (LNDEI) error over time, as described below; (c) the number
of rule nodes created and aggregated over time; (d) a plot of the input data
vectors (circles) and the evolved rule nodes (the W1 connection weights, crosses)
projected in the two-dimensional input space of the first two input variables x(t)
and x(t−6).
For different values of the EFuNN parameters, a different number of rule nodes
are evolved, each of them represented as one rule through the rule extraction
procedure (some of the rules are shown in Table 3.1).
After a certain time moment, the LRMSE and LNDEI converge to constant values
subject to a small error. Generally speaking, in the case of compact and bounded
problem space the error can be made sufficiently small subject to appropriate
selection of the parameter values for the EFuNN.
The example here demonstrates that EFuNN can learn a complex chaotic
function through incrementally adaptive evolving from one-pass data propagation. But the real strength of the EFuNNs is in learning processes that change
their dynamics through time, e.g. changing values for the parameter τ of the
Mackey–Glass equation. Time-series processes with changing dynamics could be
of different origin, e.g. biological, financial, environmental, or industrial processes.
EFuNNs can also be used for off-line training and testing similar to other
standard NN techniques. This is illustrated in another example shown in
Fig. 3.23a,b.

Fig. 3.22 (continued) one-pass online learning and consecutive prediction; (b) the absolute, the local online RMSE, and
the local online NDEI over time; (c) the process of creation and aggregation of rule nodes over time; (d) the
input data vectors (circles) and the rule node co-ordinates (W1 connection weights; crosses) projected in the
two-dimensional input space of the first two input variables x(t) and x(t−6). Some of the extracted rules
are shown in Table 3.1.















Fig. 3.23 (a) An EFuNN is evolved on 500 data examples from the Mackey–Glass time series (four inputs:
x(t), x(t−6), x(t−12), and x(t−18), and one output x(t+6)); the figure shows the desired versus the
predicted online values of the time series; (b) after the EFuNN is evolved, it is tested for a global generalisation
on a chunk of 500 future data.



Example 2
The following parameter values are set before an EFuNN is evolved: MF =
5; initial S = 0.9; E = 0.01; m = 1; Rmax = 0.2. The EFuNN is evolved
on the first 500 data examples from the same MackeyGlass time series.
Figure 3.23a shows the desired versus the predicted online values of the
time series. The following are the parameter values: rule nodes, 62; created
nodes, 452; pruned nodes, 335; aggregated nodes, 55; RMSE, 0.034; and NDEI,
After the EFuNN is evolved, it is tested for a global generalisation on the second
500 examples. Figure 3.23b shows the desired versus the predicted by the EFuNN
values in an off-line mode.
As pointed out before, after having evolved an EFuNN on a small but representative part of the whole problem space, its global generalisation error can become
satisfactorily small.
The EFuNN was also tested for incrementally adaptive test error on the test data
while further training on it was performed. The incrementally adaptive local test
error was slightly smaller.
Generally speaking, if continuous and incremental learning is possible (in
many cases of time-series prediction it is) EFuNNs will be continuously evolved
all the time through adaptive lifelong learning, always improving their performance. Typical applications of EFuNN would be modelling and predicting of
continuous financial time series, modelling of large DNA data sequences, adaptive
spoken word classification, and many others (see the applications in Part II of this book).


Evolving Fuzzy-2 Clustering in EFuNN

The rule nodes in EFuNN represent cluster centres and have areas associated with
them, the cluster areas. If a data point d falls in a cluster area, its membership
degree of belonging to the cluster is defined by the formula 1 − D(d, c), where D(d, c)
is the normalised fuzzy distance between the data point d and the cluster centre c
(see Chapter 2).
The clustering performed in EFuNN is called here fuzzy-2 clustering,
because not only may each data point belong to several clusters to different degrees
(fuzzy), but a cluster centre is itself defined by fuzzy co-ordinates, along with a geometrical
area associated with the cluster. For example, a cluster centre c is defined as (x is
Small to a degree of 0.7, and y is Medium to a degree of 0.3; radius of the cluster
area Rc = 0.3).
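A minimal sketch of this membership computation, assuming the sum-of-absolute-differences over sum-of-values form of the normalised fuzzy distance from Chapter 2 (the function names are illustrative):

```python
def fuzzy_distance(d, c):
    """Normalised fuzzy distance between two fuzzy membership vectors
    (sum of absolute differences over sum of values; this particular
    form is an assumption of the sketch)."""
    num = sum(abs(a - b) for a, b in zip(d, c))
    den = sum(a + b for a, b in zip(d, c))
    return num / den if den else 0.0

def cluster_membership(d, centre, radius):
    """Degree to which data point d belongs to a fuzzy-2 cluster:
    1 - D(d, c) if d falls inside the cluster area, else 0."""
    dist = fuzzy_distance(d, centre)
    return 1.0 - dist if dist <= radius else 0.0
```

For the example cluster above, a point close to the centre gets a membership degree near 1, while a point outside the radius gets 0.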
This is illustrated in Fig. 3.24, where two random-number input variables x
and y are mapped into the same variables as outputs (here the EFuNN is used as a
replicator); 1000 data points were generated and 98 cluster centres (rule nodes)
were evolved for the following initial parameter values: Sthr = 0.9; Errthr = 0.1;
lr1 = lr2 = 0.1; 3 MF.



Fig. 3.24 Rule nodes in EFuNN represent cluster centres in the input space of randomly generated two-dimensional input vectors: o, raw data points in a 2D space; x, rule nodes; +, pruned nodes. EFuNN is used
here as a replicator (two inputs, two outputs that have the same values as the corresponding inputs).


Comparative Analysis of EFuNNs and Other ANN and AI Techniques for Incrementally Adaptive, Knowledge-Based Learning
EFuNNs are learning models that can learn in an incrementally adaptive mode any
dataset, regardless of the problem (function approximation, time-series prediction,
classification, etc.) in a supervised, unsupervised, or hybrid learning mode, subject
to appropriate parameter values selected and a certain minimum number of
examples presented. Some well-established NN and AI techniques have difficulties
when applied to incrementally adaptive, knowledge-based learning. For example,
the multilayer perceptrons (MLP) and the backpropagation learning algorithm
have the following problems: catastrophic forgetting (Robins, 1996; Hopfield, 1982),
local minima problem, difficulty in extracting rules (Duch et al., 1998), inability
to adapt to new data without retraining on old data, and excessively long training
times when applied to large datasets.
The radial basis function (RBF) neural networks require clustering to be
performed first, with the backpropagation algorithm applied afterwards. They are
not efficient for incrementally adaptive learning unless they are significantly modified.



Many neurofuzzy systems, such as ANFIS (Jang, 1993), FuNN (Kasabov et al.,
1997), and the neo-fuzzy neuron (Yamakawa et al., 1992), cannot update the learned
rules through continuous training on additional data without suffering catastrophic
forgetting.
A comparative analysis of different incrementally adaptive learning methods on
the Mackey–Glass time series is shown in Table 3.2. Each model is evolved on
3000 data examples from the Mackey–Glass time series (four inputs: x(t), x(t−6),
x(t−12), and x(t−18), and one output x(t+85)), from the CMU data.
The analysis of Table 3.2 shows that the EFuNN evolving procedure leads to
a similar local incrementally adaptive error as RAN and its modifications, but
EFuNNs allow for rules to be extracted and inserted at any time of the operation
of the system thus providing knowledge about the problem and reflecting changes
in its dynamics. In this respect the EFuNN is a flexible, incrementally adaptive,
knowledge engineering model.
One of the advantages of EFuNN is that rule nodes in EFuNN represent dynamic
fuzzy-2 clusters.
Despite the advantages of EFuNN, there are some difficulties when using them:
(a) EFuNNs are sensitive to the order in which data are presented and to the
initial values of the parameters.
(b) There are several parameters that need to be optimised in an incrementally adaptive mode. Such parameters are: error threshold Err; number,
shape, and type of the membership functions; type of learning; aggregation
threshold and number of iterations before aggregation, etc. In Chapter 6
parameters related to the simple ECF model, such as maxR, minR, m-of-n, and number of membership functions, are optimised using a genetic
algorithm (GA).

Table 3.2 Comparative analysis of different online learning models on the Mackey–Glass time series. Each
model is evolved on 3000 data examples from the Mackey–Glass time series (four inputs: x(t), x(t−6),
x(t−12), and x(t−18), and one output x(t+85)), from the CMU data: http://legend.gwydion.cs.cmu/

Columns: parameter values; number of centres (rule nodes in EFuNN); online LNDEI after learning 3000 examples.

Parameter values:
= 0.02
= 0.01
= 0.02
E = 0.05; Rmax = 0.2
E = 0.05; Rmax = 0.2
E = 0.05; Rmax = 0.2


EFuNNs as Universal Classifiers and Universal Function Approximators

When issues such as applicability of the EFuNN model, learning accuracy, generalisation, and convergence are discussed for different tasks, two cases must be considered.
Case A
The incoming data are from a compact and bounded problem space. In this case the
more data vectors are presented to an evolving EFuNN, the better its generalisation
is on the whole problem space. After a time moment T, if appropriate values for
the EFuNN parameters are used, each of the fuzzy input and the fuzzy output
spaces (they are compact and bounded) will be covered by hyperspheres of the
evolved rule nodes that will have different receptive fields in the general case.
We can assume that by a certain time moment T a sufficient number of
examples from the stream will have been presented and rule node hyperspheres cover the problem space to a desired accuracy. The local incrementally
adaptive error will saturate at this time because any two associated compact
and bounded fuzzy spaces Xf and Yf that represent a problem space can be
fully covered by a sufficient number of associated (possibly overlapping) fuzzy
hyperspheres. The number of these spheres (the number of rule nodes) depends
on the error threshold E, set before the training of the EFuNN system, and on
some other parameters. The error threshold can be automatically adjusted during training.
If the task is function approximation, a theorem can be proved that EFuNNs
are universal function approximators subject to the above conditions. This is
analogous to the proof that MLPs with only two layers are universal function
approximators (see for example Cybenko (1989), Funahashi (1989), and Kurkova
(1992) and the proof that fuzzy systems are universal approximators too (see for
example Kosko (1992) and Koczy and Zorat (1997)).
These proofs are based on the well-known Kolmogorov theorem (Kolmogorov,
1957), which states that:
For all n ≥ 2 there exist n(2n + 1) continuous, monotonously increasing, univariate
functions on the domain [0,1], by which an arbitrary continuous real function f of n
variables can be constructed by the following equation:

f(x1, x2, …, xn) = Σq=1..2n+1 Φq( Σp=1..n φpq(xp) )
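In standard notation, the superposition in the theorem can be written as follows, with Φq denoting the outer and φpq the inner univariate functions:

```latex
f(x_1, x_2, \dots, x_n) = \sum_{q=1}^{2n+1} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{pq}(x_p) \right)
```

Each φpq depends on a single input variable xp, so the n-variable function is built entirely from univariate functions and addition.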


As a continuous function, the sigmoid function is mostly used in MLP and other
ANN architectures. Linear or Gaussian functions are also used, which is the case
in the EFuNN architecture.
Case B
The incoming data are from an open problem space, where data dynamics and
data distribution may change over time in a continuous way. In this case the local



incrementally adaptive error will depend on the closeness of the new input data
to the existing rule nodes.


Incrementally Adaptive Parameter and Feature Evaluation

in EFuNNs

The performance of the EFuNN depends on its parameter values as illustrated in

Table 3.3 on the Mackey–Glass data, when 3000 examples are used to evolve an
EFuNN on the task of predicting the value at time moment (t + 85), and 500
examples are used to test the system. Different values for the sensitivity threshold
Sthr and for the error threshold E result in a different number of rule nodes and
different values for the RMSE achieved.
Once set, the values of the EFuNN parameters can be either kept fixed during
the entire operation of the system, or can be adapted (optimised). Such parameters
are, for example: the number of membership functions; the value m for the m-of-n
parameter; the error threshold E; the maximum receptive field Rmax; the rule
extraction thresholds T1 and T2; the number of examples for aggregation Nagg ;
and the pruning parameters OLD and Pr. Adaptation can be achieved through:
(1) using relevant statistical parameters of the incoming data; (2) incrementally
adaptive self-analysis of the behaviour of the system; (3) feedback connections
from higher-level modules in the ECOS architecture; and (4) all of the above combined.
Table 3.4 shows the results of an experiment similar to the one shown in Table 3.3,
but here two of the EFuNN parameters are adapted automatically after every 700
examples, based on the current RMSE. If the current RMSE is higher than the
expected one, the value of E decreases by a delta value ΔE and the value of Sthr
increases by a delta value ΔSthr. In the experiment shown in Table 3.4, delta values
of 0.05 and 0.06 are used for the two parameters.
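The self-tuning rule described above can be sketched as follows; the function name, the bounds, and the default delta values are illustrative assumptions, not values prescribed by the text.

```python
def tune_thresholds(rmse, target_rmse, E, Sthr,
                    dE=0.05, dSthr=0.05,
                    E_min=0.01, Sthr_max=0.99):
    """Adjust error threshold E and sensitivity threshold Sthr after a
    batch of examples: if the running RMSE is worse than desired, make
    the system more sensitive (lower E, higher Sthr)."""
    if rmse > target_rmse:
        E = max(E_min, E - dE)
        Sthr = min(Sthr_max, Sthr + dSthr)
    return E, Sthr
```

Calling this after every batch of, say, 700 examples reproduces the adaptation loop of the experiment in spirit.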
Genetic algorithms (GA) and evolutionary programming techniques can also
be applied to optimise the EFuNNs structural and functional parameters through
evolving populations of EFuNNs over generations and evaluating each EFuNN in
Table 3.3 Number of rule nodes, training, and test errors when different EFuNNs are evolved for different
parameter values of sensitivity threshold Sthr and error threshold E. The Mackey–Glass series is used
for incremental training and prediction of the value of the series at time moment (t + 85), using 3000 initial
data points (called training) and a further 500 points (called testing). The error on the 500 test vectors is smaller
than the training error on the first 3000 vectors, as the model is tested on each test data vector but is then
further trained on it, in the same way as was done for the training data.

Columns: sensitivity threshold Sthr; error threshold E; number of evolved rule nodes Rn; RMSE on the training data; RMSE on the test data.

Table 3.4 Self-tuning of the parameters Sthr and error threshold E in an EFuNN structure. EFuNNs as in
the experiment shown in Table 3.3. Delta values are used for an automatic increase/decrease of the sensitivity
threshold Sthr and the error threshold E based on the RMSE, after every 700 examples. The delta
values experimented with here are 0.05 and 0.06.

Columns: delta value; initial Sthr; initial Errthr; final Errthr; rule nodes; RMSE on the test data.
Note: A desired maximum RMSE is set to 0.045; Sthr and Errthr modification after every 700 examples.

the population at certain time intervals (see Chapter 6 for an example of ECF
optimisation through GA).
The evaluation of the relevance of the input variables to the task can be done
in an incrementally adaptive mode. One way to achieve this is to continuously
evaluate the correlation of each input variable to each output class, or to each
membership function of the output variables, e.g. Corr (x1, [y is Small, Medium,
High]) = [0.7, 0.4, 0.3] thus producing continuous information on the most
relevant input features (see Chapter 7 for details).
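Such a correlation can be maintained incrementally with running sums, so no past examples need to be stored. A sketch follows; the class name and the choice of plain Pearson correlation (one correlation object per input variable and output label) are assumptions of this illustration.

```python
class RunningCorrelation:
    """Incrementally updated Pearson correlation between an input
    variable x and one output value y (e.g. the membership degree of
    an output fuzzy label), using running sums only."""

    def __init__(self):
        self.n = self.sx = self.sy = 0.0
        self.sxx = self.syy = self.sxy = 0.0

    def update(self, x, y):
        self.n += 1
        self.sx += x
        self.sy += y
        self.sxx += x * x
        self.syy += y * y
        self.sxy += x * y

    def value(self):
        if self.n < 2:
            return 0.0
        cov = self.sxy - self.sx * self.sy / self.n
        vx = self.sxx - self.sx ** 2 / self.n
        vy = self.syy - self.sy ** 2 / self.n
        return cov / (vx * vy) ** 0.5 if vx > 0 and vy > 0 else 0.0
```

Keeping one such object per (input variable, output label) pair yields a continuously updated relevance vector like the Corr example above.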
In addition to optimising the set of features for an EFuNN in an incrementally
adaptive mode, new features, new inputs, and new outputs can be added while the
system is operating, similarly to the case of the eMLP. Because EFuNN uses a local
normalized fuzzy distance, new input variables and new output variables can be
added to the EFuNN structure at any time of its operation if new data contain
these variables.
The following algorithm allows new outputs to be added to an already trained EFuNN
for further training.
1. Insert a new output node and its two initial fuzzy output nodes, representing 'yes'
and 'no' for this output.
2. Connect the 'no' output fuzzy nodes with zero connection weights to the already
existing rule nodes.
3. Continue the evolving learning process as previously done.
The above simple algorithm is used in Chapter 11 to add new classes of words
into an already-trained EFuNN for word recognition.
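Assuming W2 is stored as a rule-node-by-fuzzy-output-node matrix, step 2 of the algorithm amounts to appending zero-weight columns. The function name and matrix layout below are assumptions of this sketch.

```python
import numpy as np

def add_output_class(W2):
    """Extend the rule-node-to-fuzzy-output weight matrix W2 with two
    new fuzzy output nodes ('yes'/'no' for a new class), connected with
    zero weights to all existing rule nodes, so that earlier knowledge
    is untouched and the evolving process can simply continue."""
    zeros = np.zeros((W2.shape[0], 2))
    return np.hstack([W2, zeros])
```

Because the new columns start at zero, the existing classes behave exactly as before until further training updates the new connections.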



Choose a classification problem and a dataset for it.

Select a set of features for a classification model.
Build an inductive and a transductive SVM classification model and validate
their accuracy through a cross-validation technique.
Build an inductive and a transductive MLP classification model and validate
their accuracy through a cross-validation technique.



Build an inductive and a transductive RBF classification model and validate

their accuracy through a cross-validation technique.
Build an inductive and a transductive ECF classification model and validate
their accuracy through a cross-validation technique.
Demonstrate model and rule adaptation of an ECF model on new data.
Answer the following questions:
(a) Which of the above models are adaptive to new data and under what conditions
and constraints?
(b) Which models allow for knowledge extraction and what type of knowledge
can be acquired from them?


Summary and Open Questions

This chapter presents dynamic supervised learning systems. A simple evolving

model, EFuNN, and other dynamic supervised learning models are presented that
incorporate important AI features, such as adaptive, evolving learning; nonmonotonic reasoning; knowledge manipulation in the presence of imprecision and
uncertainties; and knowledge acquisition and explanation.
EFuNNs have features of knowledge-based systems, logic systems, case-based reasoning systems, and adaptive connectionist-based systems, all together.
Through self-organization and self-adaptation during the learning process, they
allow for solving difficult engineering tasks as well as for simulation of emerging,
evolving biological and cognitive processes to be attempted. The lifelong learning
mode is the natural learning mode of all biological systems.
The EFuNN models can be implemented in software or in hardware with the
use of either conventional or new computational techniques.
The EFuNN applications span across several application areas of information
science, life sciences, and engineering, where systems learn from data and improve
continuously (Kasabov, 2000a). Some of them are presented in Part II of this book.
Despite the excellent properties of the EFuNNs and the other types of incrementally
adaptive ECOS, there are several issues that need to be addressed in the future:
1. How to optimise all ECOS parameters, including choosing the best set of features
in an incrementally adaptive mode. This question relates to modifying the
evolving rules of EFuNN to better model the incoming data and to reflect
changes in the evolving rules of the modelled process. Only one possible answer
is presented in Chapter 6.
2. How to evaluate the convergence property of an ECOS if it is working in an
open space.
3. How to evaluate in an incrementally adaptive mode, which supervised model,
out of several available, is the best for a given task, or for a given time period
of this task.
4. How can knowledge be transferred from one connectionist model to another if
the two methods use different knowledge representations?



5. How much can one rely on the labels (desired data, output values) provided with
the data for supervised learning? (e.g. are the diagnostic labels associated with
patients' data always correct?) Would fuzzy representation help to accommodate
and deal with the imprecision during data collection?
6. If wrong labels are associated with data, would unsupervised evolving learning
be more precise than supervised learning? How can we make an ECOS model
unlearn associations between input vectors and output classes if more precise
labels for the samples become available in the future?


Further Reading

A full description of the evolving connectionist architectures presented as well as

of some other architectures for supervised incrementally adaptive learning can be
found as follows.
EFuNN (Kasabov, 1998, 2001a,b)
Simple ECOS, eMLP (Watts and Kasabov, 2002; Ghobakglou et al., 2003; Watts,
Incrementally Adaptive Learning in Multilayer Perceptron Architectures (Amari,
1990; Saad, 1999)
ART Architectures and the Stability–Plasticity Dilemma (Grossberg, 1981, 1988)
ARTMAP (Carpenter et al., 1991)
FuzzyARTMAP (Carpenter et al., 1992)
Incrementally Adaptive Q-learning (Rummery and Niranjan, 1994)
Online Learning in ZISC (Zero Instruction Set Computer) (ZISC Manual, 2001)
Life-long Learning Cell Structures (Hamker, 2001; Bruske et al., 1998; Hamker
and Gross, 1997)
Hybrid Neuro-fuzzy Systems for Adaptive and Continuous Learning (Berenji,
1992; Lim and Harrison, 1998)
Incrementally Adaptive Learning in RBF Networks (Karayiannis and Mi, 1997;
Platt, 1991; Fritzke, 1995; Freeman and Saad, 1997)
Quantizable RBF Networks (Poggio and Girosi, 1990)
Prediction of Chaotic Time-series with a Resource-allocating RBF Network
(Rosipal et al., 1997)
Sleep Learning in EFuNN and other Connectionist Models (Yamauchi and
Hayami, 2006)

4. Brain Inspired Evolving

Connectionist Models

The chapter presents some connectionist methods that are closer to information
processing in the brain, namely state-based ANN realised in recurrent connectionist
structures, reinforcement learning ANN, and spiking ANN. In a state-based ANN the output
signal of the model depends not only on the inputs and the connections, but
also on its previous states. A mathematical model describing such behaviour is the finite
state automaton, realised here in a recurrent network structure, where connections
from the outputs or the hidden nodes go back to the inputs or to the hidden layer.
Spiking neural networks (SNN) are brainlike connectionist methods, where the
output activation is represented as a train of spikes rather than as a potential. The
chapter is presented in the following sections.

State-based ANN
Reinforcement learning
Evolving spiking neural networks
Summary and open problems
Further reading


State-Based ANN

A classical model for modelling systems described by states and their transitions
is the finite automaton, which has already been shown to be a good theoretical
candidate for modelling brain states and their transitions (see Arbib (1972, 1987)).
Here we present a new version of it, the evolving finite automaton, and show how
the model can be realised in a recurrent evolving connectionist structure.


Evolving Finite Automata

A deterministic finite-state automaton is characterised by a finite number of states.
It is described as a five-tuple A = (X, S, δ, q, O), where S = {s1, s2, …, sn} is a set
of states and s0 is a designated initial state; X = {x1, x2, …, xk} is the alphabet of the
input language.
The transition table δ: X × S → S defines the state transitions in A. F is a set
of final states (outputs) defined through an output transformation q: S → O.



A deterministic finite-state fuzzy automaton is characterised by a seven-tuple
A = (X, FX, S, δ, q, O, FO), where S, X, and O are defined as in the nonfuzzy
automaton. Fuzzy membership functions are defined in the sets FX and FO for the
input and the output variables, respectively. Transitions are defined as follows:
δ: FX × S → S defines the state transitions, and q: S → FO defines the output
fuzzy transitions.
Further in this section, the concepts of evolving automata and evolving fuzzy
automata are first introduced and then implemented in an evolving connectionist structure.
In an evolving automaton, the number of states in the set S is not defined
a priori; rather it increases and decreases in a dynamic way, depending on the
incoming data. New transitions are added to the transition table. The number of
inputs and outputs can change over time.
In an evolving fuzzy automaton, the number of states is not defined a priori as
is the case for the nonfuzzy automata. New transitions are added to the transition
table as well as new output fuzzy transitions. The number of inputs and outputs
can change over time.
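A minimal sketch of an evolving automaton is given below, where a previously unseen (input, state) situation creates a new state and a new transition-table entry on the fly. The state-naming scheme and this particular growth policy are assumptions of the illustration (the text also allows states to be removed, which is omitted here).

```python
class EvolvingAutomaton:
    """A finite automaton whose state set and transition table grow
    as new (input, state) situations are encountered."""

    def __init__(self, initial_state="s0"):
        self.states = {initial_state}
        self.state = initial_state
        self.delta = {}            # (input_symbol, state) -> next state
        self.counter = 0

    def step(self, symbol):
        key = (symbol, self.state)
        if key not in self.delta:  # unseen situation: evolve a new state
            self.counter += 1
            new_state = f"s{self.counter}"
            self.states.add(new_state)
            self.delta[key] = new_state
        self.state = self.delta[key]
        return self.state
```

Once a transition has been evolved, repeating the same situation reuses it instead of creating further states, which is the essential difference from a fixed a priori transition table.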


Recurrent Evolving Neural Networks and Evolving Automata

Recurrent connectionist architectures, having the capability to capture time dependencies, are suitable techniques for implementing finite automata. In Omlin and Giles
(1994) recurrent MLPs that have fixed structures are used to implement finite
automata. In the reverse task, a finite automaton is extracted from a trained recurrent network.
Recurrent connectionist systems have feedback connections from a hidden or
output layer of neurons back to the inputs or to the hidden layer nodes.
There are two main types of recurrent connectionist architectures of EFuNN that
are derivatives of the main EFuNN architecture. They are depicted in Fig. 4.1a,b.
1. The feedback connections are from the hidden rule nodes to the same nodes,
but with a delay of some time intervals, similar to the recurrent MLP (Elman, 1990).
2. The feedback connections are from the output nodes to the hidden nodes,
similar to the system proposed in Lawrence et al. (1996).
Recurrent connectionist structures capture temporal dependencies between the
presented data examples from the data stream (Grossberg, 1969; Fukuda et al.,
1997). Sometimes these dependencies are not known in advance. For example,
a chaotic time series function with changing dynamics may have different autocorrelation characteristics at different times. This implies a different dependency
between the predicted signal in the future and the past data values. The number
of time-lags cannot be determined in advance. It has to be learned and built
into the system's structure as the system operates.
Figure 4.2a,b illustrates the autocorrelation characteristics of a speech signal,
phoneme /e/ in English, pronounced by a male speaker of New Zealand English:
(a) the raw signal in time; (b) the autocorrelation. The autocorrelation analysis




Fig. 4.1 Two types of recurrent EFuNN structures: (a) recurrent connections from the rule layer; (b) recurrent
connections from the output layer.

shows that there is correlation between the signal at a current time moment (t = 0)
and the signal at previous moments. These time dependencies are very difficult
to know in advance and a preferred option would be that they are learned in an
online mode.
Autocorrelation and other time-dependency characteristics can be captured
online in the recurrent connections of an evolving connectionist structure as
explained in this section.
Whereas the connection weights W1 and W2 capture fuzzy co-ordinates of the learned prototypes (exemplars), represented as centres of hyperspheres, the temporal layer of connection weights W3 of the EFuNN (Chapter 3) captures temporal dependencies between consecutive data examples. If the winning rule node at the moment (t − 1) (with which the input data vector at the moment (t − 1) is associated) is r_max(t−1), and the winning node at the moment t is r_max(t), then a link between the two nodes is established as follows:

W3(r_max(t−1), r_max(t)) = W3(r_max(t−1), r_max(t)) + l3 · A1(r_max(t−1)) · A1(r_max(t))    (4.1)


where A1(r(t)) denotes the activation of the rule node r at a time moment (t); and l3 defines the degree to which the EFuNN associates links between rule nodes
(clusters, prototypes) that include consecutive data examples. If l3 = 0, no temporal
associations are learned in an EFuNN structure. Figure 4.3 shows a hypothetical
process of rule node creation in a recurrent EFuNN for learning the phoneme /e/
from input data that are presented frame by frame.
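The temporal-link update of Eq. (4.1) can be sketched as follows (a minimal illustration; the matrix size, the activation values, and the rate l3 = 0.1 are all hypothetical):

```python
import numpy as np

def update_temporal_links(W3, A1, r_prev, r_cur, l3=0.05):
    """Strengthen the temporal link between the winning rule node at the
    previous time step (r_prev) and the current winner (r_cur), as in
    Eq. (4.1). W3 is the (n_rules, n_rules) matrix of temporal weights,
    A1 the vector of rule-node activations, and l3 the temporal
    association rate (l3 = 0 disables temporal learning)."""
    W3[r_prev, r_cur] += l3 * A1[r_prev] * A1[r_cur]
    return W3

# Hypothetical run: three rule nodes, node 0 wins at t-1, node 2 wins at t
W3 = np.zeros((3, 3))
A1 = np.array([0.9, 0.1, 0.8])   # activations of the rule nodes
W3 = update_temporal_links(W3, A1, r_prev=0, r_cur=2, l3=0.1)
```

Repeated co-winning of the same pair of nodes accumulates in W3, which is how consecutive frames of the phoneme /e/ become linked in Fig. 4.3.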
Rather than using fixed time-lags as inputs to a time-series modeling system, the
structure shown in Fig. 4.1b of a recurrent EFuNN can be used to learn temporal
dependencies of a time series on the fly.


Evolving Connectionist Systems

Fig. 4.2 (a) A waveform of a speech signal over time representing a pronunciation of the phoneme /e/ in
English, by a male speaker; (b) the autocorrelation characteristics of the signal. The autocorrelation analysis
shows that there is correlation between the signal at a time moment (indicated as 0 time), and the signal at
previous moments. The middle vertical line represents the signal at a time 0.





Fig. 4.3 The process of evolving nodes and recurrent connections for a pronounced phoneme /e/.

Two experiments were conducted to compare the EFuNN and the recurrent
EFuNN (called REFuNN) on the two benchmark time-series data used in this book:
the gas-furnace data, and the Mackey–Glass data. The results shown in Table 4.1
suggest that even for a stationary time series the REFuNN gives a slightly better
result. If the dynamics of the time series change over time, the REFuNN would
be much superior to the EFuNN in a longer term of the evolving process. The
following parameters were used for both the EFuNN and the REFuNN systems
for the gas-furnace data: 4 MF, Sthr = 0.9, Errthr = 0.1, lr1 = lr2 = 0, no pruning, no aggregation. For the Mackey–Glass data experiments, the following parameter values were used: 4 MF, Sthr = 0.87, Errthr = 0.13, lr1 = lr2 = 0, no pruning, no aggregation.
For the same values of the parameters, the recurrent EFuNN (REFuNN) achieves a lower error with fewer rule nodes. This is due to the contribution of the feedback connections, which capture the existing temporal relationships in the time series.
A recurrent EFuNN can realize an evolving fuzzy automaton as illustrated in
Fig. 4.4. In this realization the rule nodes represent states and the transition
function is learned in the recurrent connections. Such an automaton can start
learning and operating without any transition function and it will learn this
function in an incremental, lifelong way.

Table 4.1 Comparative analysis between an EFuNN architecture and a recurrent EFuNN
architecture with recurrent connections from the output nodes back to the rule nodes. Two
benchmark datasets are used: the Mackey–Glass series and the gas-furnace time series (see Chapter 1). In both cases the recurrent version of EFuNN (REFuNN) evolves fewer nodes and achieves better accuracy, because the REFuNN captures some of the temporal relationships in the time-series data.

EFuNN for the gas furnace data

REFuNN for the gas furnace data
EFuNN for the Mackey–Glass data
REFuNN for the Mackey–Glass data

Number of rule nodes







Fig. 4.4 Recurrent EFuNN realising an evolving fuzzy finite automaton. The transitions between states are
captured in the short-term memory layer and in the feedback connections.

As shown in Chapter 3, at any time of the evolving process of a recurrent EFuNN, a meaningful internal representation of the network, such as a set of rules or their equivalent fuzzy automaton, can be extracted. The REFuNN has some extra
evolving rules, such as the recurrent evolving rule defined in Eq. (4.1).


Reinforcement Learning

Reinforcement learning is based on principles similar to those of supervised learning, but there is no exact desired output and no exact calculated output error. Instead,
feedback hints are given. There are several cases in a reinforcement learning
procedure for an evolving connectionist architecture, such as EFuNN (see Chapter 3):
(a) There is a rule node activated (by the current input vector x) above the
preset threshold, and the highest activated fuzzy output node is the same as
the received fuzzy hint. In this case the example x is accommodated in the
connection weights of the highest activated rule node according to the learning
rules of EFuNN.
(b) Otherwise, a new rule node and a new output neuron (or a new module) are created to accommodate this example. The new rule node is then
connected to the fuzzy input nodes and to a new output node, as is the case
in the supervised evolving systems (e.g. as in the EFuNN algorithm).
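The two cases above can be sketched schematically as follows (a hedged illustration, not the exact EFuNN algorithm; the activation threshold, the weight update, and all values are hypothetical):

```python
import numpy as np

def reinforcement_step(A1, W2, hint, act_thr=0.5):
    """One reinforcement-learning step for an EFuNN-like structure.
    A1: rule-node activations; W2: rule-to-fuzzy-output weights;
    hint: index of the fuzzy output suggested by the feedback hint.
    Returns ('accommodate', node) for case (a), or ('create_new', None)
    for case (b), where a new rule node would be evolved."""
    winner = int(np.argmax(A1))
    if A1[winner] > act_thr:
        predicted = int(np.argmax(W2[winner]))
        if predicted == hint:
            # case (a): accommodate the example in the winning rule node
            W2[winner, hint] += 0.1 * A1[winner]   # illustrative update
            return 'accommodate', winner
    # case (b): no sufficiently activated node agrees with the hint
    return 'create_new', None
```

The actual node-creation step would then follow the supervised EFuNN algorithm of Chapter 3.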







Fig. 4.5 An exemplar recurrent EFuNN for reinforcement learning.

Figure 4.5 shows an example of a recurrent EFuNN for reinforcement learning.

The fuzzy output layer is called here a state node layer. The EFuNN structure has
feedback connections from its fuzzy outputs back to its rule nodes.
The connection weights from the state to the action (output) nodes can be
learned through reinforcement learning, where the awards are indicated as positive
connection weights and the punishments as negative connection weights. This type
of recurrent EFuNN can be used in mobile robots that learn and evolve as they operate. It is a suitable technique for the realization of intelligent agents when supervised, unsupervised, or reinforcement learning is applied at different stages of the system's operation.


Evolving Spiking Neural Networks

Spiking Neuron Models

SNN models are more biologically plausible than any of the above ANN methods. A neuron in a spiking neural network communicates with other neurons by means of spikes (Maass, 1996, 1998; Gerstner and Kistler, 2002; Izhikevich, 2003). A neuron Ni continuously receives input spikes from presynaptic
neurons Nj (Fig. 4.6). The received spikes in Ni accumulate and cause the emission
of output spikes forming an output spike train that is transmitted to other neurons.
This is a more biologically realistic model of a neuron that is currently used
to model various brain functions, for instance pattern recognition in the visual
system, speech recognition, and odour recognition.
We describe here the Spike Response Model (SRM) as a representative of spiking
neuron models that are all variations of the same theme. In a SRM, the state of
a neuron Ni is described by the state variable ui(t) that can be interpreted as a
total somatic postsynaptic potential (PSP). The value of the state variable ui (t) is
the weighted sum of all excitatory and inhibitory postsynaptic potentials:

u_i(t) = Σ_{j ∈ Γ_i} Σ_{t_j ∈ F_j} W_ij · ε_ij(t − t_j − Δ_ij)    (4.2)




Fig. 4.6 A spiking model of a neuron that sends and receives spikes to and from other neurons in the network, similar to biological neurons (from Benuskova and Kasabov (2007)).

where Γ_i is the pool of neurons presynaptic to neuron Ni, F_j is the set of times t_j < t when presynaptic spikes occurred, and Δ_ij is an axonal delay between neurons i and j, which increases with the physical distance between the neurons in the network. The weight of the synaptic connection from neuron Nj to neuron Ni is denoted W_ij. It takes positive (negative) values for excitatory (inhibitory) connections, respectively. When u_i(t) reaches the firing threshold ϑ_i(t) from below, neuron Ni fires, i.e. emits a spike.
Immediately after firing the output spike at t_i, the neuron's firing threshold ϑ_i(t) increases k times and then returns to its initial value ϑ_0 in an exponential fashion. In such a way, absolute and relative refractory periods are modeled:

ϑ_i(t − t_i) = k · ϑ_0 · exp(−(t − t_i)/τ_ϑ)    (4.3)

where τ_ϑ is the time constant of the threshold decay. The synaptic PSP evoked on neuron i when a presynaptic neuron j from the pool Γ_i fires at time t_j is expressed by the positive kernel ε_ij(t − t_j − Δ_ij) = ε_ij(s), such that

ε_ij(s) = A · (exp(−s/τ_decay) − exp(−s/τ_rise))    (4.4)

where τ_decay and τ_rise are the time constants of the decay and rise of the double exponential, respectively, and A is the amplitude of the PSP. To make the model more biologically realistic, each synapse, be it an excitatory or an inhibitory one, can have a fast and a slow component of its PSP, such that

ε_ij^type(s) = A^type · (exp(−s/τ_decay^type) − exp(−s/τ_rise^type))    (4.5)

where type denotes one of the following: fast_excitation, fast_inhibition, slow_excitation, and slow_inhibition, respectively. These types of PSPs are based



on neurobiological data (Destexhe, 1998; Deisz, 1999; Kleppe and Robinson, 1999;
White et al., 2000).
In each excitatory and inhibitory synapse, there can be a fast and slow
component of PSP, based on different types of postsynaptic receptors.
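The SRM ingredients above can be sketched compactly; all parameter values (A, τ_decay, τ_rise, k, ϑ_0, τ_ϑ) are illustrative only, and the `max` in the threshold simply keeps it from falling below its resting value, as the prose describes:

```python
import numpy as np

def psp_kernel(s, A=1.0, tau_decay=10.0, tau_rise=2.0):
    """Double-exponential PSP kernel eps(s) of Eq. (4.4); zero for s < 0."""
    s = np.asarray(s, dtype=float)
    return np.where(s >= 0,
                    A * (np.exp(-s / tau_decay) - np.exp(-s / tau_rise)),
                    0.0)

def threshold(t, t_last, theta0=1.0, k=3.0, tau_theta=5.0):
    """Firing threshold after a spike (Eq. (4.3)): raised k times at the
    spike time t_last and decaying back towards the resting value theta0."""
    if t_last is None:
        return theta0
    return max(theta0, k * theta0 * np.exp(-(t - t_last) / tau_theta))

def u_i(t, spikes, W, delays):
    """Somatic potential of Eq. (4.2): weighted sum of delayed PSPs.
    spikes[j] is the list of firing times of presynaptic neuron j,
    W[j] its synaptic weight, delays[j] the axonal delay Delta_ij."""
    total = 0.0
    for j, times in enumerate(spikes):
        for tj in times:
            total += W[j] * psp_kernel(t - tj - delays[j])
    return total
```

A simulation loop would compare u_i(t) against threshold(t, t_last) at each step and record a spike whenever the potential crosses the threshold from below.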
A SNN is characterized in general by:
An encoding scheme for the representation of the input signals as spike trains,
to be entered into a spiking neuronal model
A spiking model of a single neuron
A learning rule of a neuron, including a spiking threshold rule
A SNN structure
Learning rules for the SNN including rules for changing connection weights and
creation of neurons
In Bohte et al. (2000) a MLP architecture is used for a SNN model and the
backpropagation algorithm is modified for spike signals. In Strain et al. (2006)
this architecture is further developed with the introduction of a new rule for dynamically adjusting the firing threshold.
The evolving rules in a SNN, being a biologically plausible ANN model, can
include some parameters that are directly related to genes and proteins expressed
in the brain, as presented in the computational neuro-genetic model in Chapter 9 and in Table 9.2 (see Benuskova and Kasabov (2007)). A simple evolving
rule there relates to evolving the output spiking activity of a neuron based on
changes in the genetic parameters.

Fig. 4.7 (a) Suprathreshold summation of PSPs in the spiking neuron model. After each generated postsynaptic spike there is a rise in the firing threshold that decays back to the resting value between
the spikes. (b) Subthreshold summation of PSPs that does not lead to the generation of postsynaptic spike.
(c) PSP is generated after some delay taken by the presynaptic spike to travel from neuron j to neuron i (from
Benuskova and Kasabov (2007)).




Evolving Spiking Neural Networks (eSNN)

Evolving SNN (eSNN) are built of spiking neurons (as described above), where
there are new evolving rules for:
Evolving new neurons and connections, based on both parameter (genetic)
information and learning algorithm, e.g. the Hebbian learning rule (Hebb, 1949)
Evolving substructures of the SNN
An example of such an eSNN is given in Wysoski et al. (2006), where new output classes
presented in the incoming data (e.g. new faces in a face recognition problem)
cause the SNN to create new substructures; see Fig. 4.8.
The neural network is composed of three layers of integrate-and-fire neurons.
The neurons have a latency of firing that depends upon the order of spikes received.
Each neuron acts as a coincidence detection unit, where the postsynaptic potential
for neuron Ni at a time t is calculated as

PSP_i(t) = Σ_j mod^order(j) · w_ji    (4.6)

where mod ∈ (0, 1) is the modulation factor, j is the index of the incoming connection, and w_ji is the corresponding synaptic weight.
Each layer is composed of neurons that are grouped in two-dimensional grids
forming neuronal maps. Connections between layers are purely feedforward and
each neuron can spike at most once on a spike arrival in the input synapses. The
first layer cells represent the ON and OFF cells of the retina, basically enhancing

Fig. 4.8 Evolving spiking neural network (eSNN) architecture for visual pattern recognition (from Wysoski et al. (2006)).



the high-contrast parts of a given image (highpass filter). The output values of the
first layer are encoded into pulses in the time domain. High output values of the
first layer are encoded as pulses with short time delays, whereas low output values are given long delays. This technique is called rank order coding (Thorpe et al., 1998) and basically prioritizes the pixels with high contrast, which consequently are processed first and have a higher impact on the neurons' PSPs.
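Rank order coding and the PSP of Eq. (4.6) can be sketched as follows (the pixel intensities, weights, and mod value are hypothetical):

```python
import numpy as np

def rank_order_psp(intensities, weights, mod=0.9):
    """Rank order coding sketch: stronger inputs fire earlier, and the
    receiving neuron's PSP is sum_j mod**order(j) * w_j, where order(j)
    is the firing rank of input j (0 = earliest), as in Eq. (4.6)."""
    order = np.argsort(-np.asarray(intensities))   # earliest = strongest
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(order))
    return float(np.sum(mod ** ranks * np.asarray(weights, dtype=float)))

# Hypothetical 4-pixel input: the highest-contrast pixel gets rank 0
psp = rank_order_psp([0.9, 0.1, 0.5, 0.3], [1.0, 1.0, 1.0, 1.0], mod=0.5)
```

Because mod < 1, early-arriving (high-contrast) inputs dominate the sum, which is exactly the prioritization described above.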
The second layer is composed of eight orientation maps, each one selective to a different direction (0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°). It is important
to notice that there is no learning in the first two layers, so that they can be considered simply as passive filters and time-domain encoders. The theory of contrast cells and direction selective cells was
first reported by Hubel and Wiesel (1962). In their experiments they were able
to distinguish some types of cells that have different neurobiological responses
according to the pattern of light stimulus.
The third layer is where the learning takes place and where the main contribution
of this work is presented. Maps in the third layer are to be trained to represent
classes of inputs. In Thorpe et al. (1998) the learning is performed off-line using
the rule:
Δw_ji = mod^order(a_j) / N    (4.7)

where w_ji is the weight between neuron j of the second layer and neuron i of the third layer, mod ∈ (0, 1) is the modulation factor, order(a_j) is the order of arrival of a spike from neuron j to neuron i, and N is the number of samples used for training a given class.
In this rule, there are two points to be highlighted: (a) the number of samples
to be trained needs to be known a priori; and (b) after training, a map of a class
will be selective to the average pattern.
In Wysoski et al. (2006) a new approach is proposed for learning with structural
adaptation, aiming to give more flexibility to the system in a scenario where the
number of classes and/or class instances is not known at the time the training
starts. Thus, the output neuronal maps need to be created, updated, or even deleted
online, as the learning occurs. To implement such a system the learning rule needs
to be independent of the total number of samples because the number of samples
is not known when the learning starts. In the implementation of Equation (4.7) in
Delorme et al. (1999, 2001) the outcome is the average pattern. However, the new
equation in Wysoski et al. (2006) calculates the average dynamically as the input
patterns arrive as explained below.
There is a classical drawback to learning methods when, after training, the
system responds to the average pattern of the training samples. The average
does not provide a good representation of a class in cases where patterns have
high variance (see Fig. 4.9). A traditional way to attenuate the problem is the
divide-and-conquer procedure. We implement this procedure through the structural modification of the network during the training stage. More specifically,
we integrate into the training algorithm a simple clustering procedure: patterns
within a class that comply with a similarity criterion are merged into the same
neuronal map. If the similarity criterion is not fulfilled, a new map is generated.



Fig. 4.9 Divide-and-conquer procedure to deal with high intraclass variability of patterns in the hypothetical
space of class K. The use of multiple maps that respond optimally to the average of a subset of patterns
provides a better representation of the classes than using a global average value.

The entire training procedure follows four steps, described next and summarized in the flowchart of Fig. 4.10:


Fig. 4.10 A flowchart of the eSNN online learning procedure.



1. Propagate a sample k of class K for training into layer 1 (retina) and layer 2 (direction selective cells, DSC).
2. Create a new map MapC(k) in layer 3 for sample k and train the weights using the equation:

Δw_ji = mod^order(a_j)    (4.8)

where w_ji is the weight between neuron j of layer 2 and neuron i of layer 3, mod ∈ (0, 1) is the modulation factor, and order(a_j) is the order of arrival of a spike from neuron j to neuron i.
The postsynaptic threshold (PSP_threshold) of the neurons in the map is calculated as a proportion c ∈ [0, 1] of the maximum postsynaptic potential (PSP) created in a neuron of map MapC(k) with the propagation of the training sample into the updated weights, such that:

PSP_threshold = c · max(PSP)    (4.9)

The constant of proportionality c expresses how similar a pattern needs to be to trigger an output spike. Thus, c is a parameter to be optimized in order to satisfy the requirements in terms of false acceptance rate (FAR) and false rejection rate (FRR).
3. Calculate the similarity between the newly created map MapC(k) and the other maps belonging to the same class MapC(K). The similarity is computed as the inverse of the Euclidean distance between the weight matrices.
4. If one of the existing maps for class K has a similarity greater than a chosen threshold Thsim_C(K) > 0, merge the maps MapC(k) and MapC(K)similar using the arithmetic average, as expressed in


W = (W_MapC(k) + N_samples · W_MapC(K)similar) / (1 + N_samples)    (4.10)

where the matrix W represents the weights of the merged map and N_samples denotes the number of samples that have already been used to train the respective map. In a similar fashion, the PSP_threshold is updated:

PSP_threshold = (PSP_MapC(k) + N_samples · PSP_MapC(K)similar) / (1 + N_samples)    (4.11)
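The map-creation, similarity, and merging steps above can be sketched as follows (a hedged illustration of the procedure, not Wysoski et al.'s exact code; the map representation and thresholds are hypothetical):

```python
import numpy as np

def merge_maps(W_new, W_old, n_samples):
    """Running-average merge of two weight matrices, as in Eq. (4.10)."""
    return (W_new + n_samples * W_old) / (1 + n_samples)

def train_sample(maps, W_k, psp_thr_k, sim_thr):
    """One online step: maps is a list of (W, psp_thr, n_samples) tuples
    for a class; W_k and psp_thr_k come from the map just created for the
    current sample. The sample is merged into the most similar existing
    map (similarity = inverse Euclidean distance) or appended as a new map."""
    best_i, best_sim = None, 0.0
    for i, (W, _, _) in enumerate(maps):
        dist = np.linalg.norm(W_k - W)
        sim = 1.0 / dist if dist > 0 else float('inf')
        if sim > best_sim:
            best_i, best_sim = i, sim
    if best_i is not None and best_sim > sim_thr:
        W, thr, n = maps[best_i]
        maps[best_i] = (merge_maps(W_k, W, n),
                        (psp_thr_k + n * thr) / (1 + n),   # Eq. (4.11)
                        n + 1)
    else:
        maps.append((W_k, psp_thr_k, 1))
    return maps
```

Because the running average in merge_maps only needs the current count n_samples, the learning does not depend on the total number of training samples, which is the key property motivating the online rule.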


Summary and Open Questions

The methods presented in this chapter only indicate the potential of the
evolving connectionist systems for learning in a reinforcement mode, for learning
temporal dependencies, for the realisation of evolving finite automata, and for the



implementation of more biologically plausible ANN models such as the SNN and
the evolving SNN (eSNN).
The topics covered in the chapter also raise some issues such as:
1. How to optimize the time lags of output values or inner states that the system
should use in order to react properly on future input vectors in an online mode
and an open problem space.
2. How can ECOS handle fuzzy reinforcement values if these values come from
different sources, each of them using different, unknown to the system, fuzzy
membership functions?
3. How can evolving fuzzy automata be evaluated in terms of their generalisation capability?
4. How biologically plausible should a SNN model be in order to model a given
brain function?
5. How biologically plausible should the SNN be in order to be used for solving
complex problems of CI, such as speech recognition, image recognition, or
multimodal information processing?
6. How to develop an eSNN that evolves its rules for:
Firing threshold adjustment
Neuronal connection creation and connection deletion
Neuron aggregation
7. How to develop eSNN automata, where the whole SNN at a certain time is represented as a state. The transitions between states may then be interpreted as learned knowledge representation.


Further Reading

Some more on the subject of this chapter can be read in the following references.
Recurrent Structures of NN (Elman, 1990; Arbib, 1995, 2002)
Finite Automata and their Realization in Connectionist Architectures (Arbib,
1987; Omlin and Giles, 1994)
Introduction to the Theory of Automata (Zamir, 1983; Hopkin and Moss, 1976)
Symbolic Knowledge Representation in Recurrent Neural Networks (Omlin and
Giles, 1994)
Reinforcement Learning (Sutton and Barto, 1998)
Spiking MLP and Backpropagation Algorithm (Bohte et al., 2000)
Spiking Neural Networks (Maass, 1996, 1998; Gerstner and Kistler, 2002;
Izhikevich, 2003)
SNN with a Firing Threshold Adjustment (Strain et al., 2006)
Using SNN for Pattern Recognition (Delorme et al., 1999; Delorme and Thorpe,
2001; Wysoski et al., 2006; Thorpe et al., 1998)
Evolving SNN (eSNN) (Wysoski et al., 2006)
Computational Neuro-genetic Modeling Using SNN (Benuskova and Kasabov, 2007)

5. Evolving Neuro-Fuzzy
Inference Models

Some knowledge-based fuzzy neural network models for adaptive incremental

(possibly online) learning, such as EFuNN and FuzzyARTMAP, were presented
in the previous chapter. Fuzzy neural networks are connectionist models that are
trained as neural networks, but their structure can be interpreted as a set of fuzzy
rules. In contrast to them, neuro-fuzzy inference systems consist of a set of rules
and an inference method that are embodied or combined with a connectionist
structure for a better adaptation. Evolving neuro-fuzzy inference systems are such
systems, where both the knowledge and the inference mechanism evolve and
change in time, with more examples presented to the system. In the models here
knowledge is represented as both fuzzy rules and statistical features that are learned
in an online or off-line, possibly lifelong, learning mode. In the last three
sections of the chapter different types of fuzzy rules, membership functions, and
receptive fields in ECOS (that include both evolving fuzzy neural networks and
evolving neuro-fuzzy inference systems) are analysed and introduced. The chapter
covers the following topics.

Knowledge-based neural networks

Hybrid neuro-fuzzy inference system: HyFIS
Dynamic evolving neuro-fuzzy inference system (DENFIS)
TWNFI: transductive weighted neuro-fuzzy inference systems for personalised modelling
Other neuro-fuzzy inference systems
Summary and open problems
Further reading


Knowledge-Based Neural Networks

General Notions

Knowledge (e.g. rules) is the essence of what a knowledge-based neural network

(KBNN) has accumulated during its operation (see Cloete and Zurada (2000)).
Manipulating rules in a KBNN can pursue the following objectives.



1. Knowledge discovery, i.e. understanding and explanation of the data used to

train the KBNN. The extracted rules can be analysed either by an expert, or by
the system itself. Different methods for reasoning can be subsequently applied
to the extracted set of rules.
2. Improvement of the KBNN system, e.g. maintaining an optimal size of the
KBNN that is adequate to the expected accuracy of the system. Reducing the
structure of a KBNN can be achieved through regular pruning of nodes and
connections thus allowing for knowledge to emerge in the structure, or through
aggregating nodes into bigger rule clusters. Both approaches are explored in
this chapter.

Types of Rules Used in KBNN

Different KBNNs are designed to represent different types of rules, some of them
listed below.
1. Simple propositional rules (e.g. IF x1 is A AND/OR x2 is B THEN y is C, where
A, B, and C are constants, variables, or symbols of true/false type) (see, for
example, Feigenbaum (1989), Gallant (1993), and Hendler and Dickens (1991)).
As a partial case, interval rules can be used, for example:
IF x1 is in the interval [x1min, x1max] AND x2 is in the interval [x2min,
x2max] THEN y is in the interval [ymin, ymax], with Nr1 examples associated
with this rule.
2. Propositional rules with certainty factors (e.g., IF x1 is A (CF1) AND x2 is B
(CF2) THEN y is C (CFc)), (see, e.g. Fu (1989)).
3. Zadeh–Mamdani fuzzy rules (e.g. IF x1 is A AND x2 is B THEN y is C, where
A, B, and C are fuzzy values represented by their membership functions) (see,
e.g. Zadeh (1965) and Mamdani (1977)).
4. Takagi–Sugeno fuzzy rules (e.g. the following rule is a first-order rule: IF x1 is
A AND x2 is B THEN y is ax1 + bx2 + c, where A and B are fuzzy values and
a, b, and c are constants) (Takagi and Sugeno, 1985; Jang, 1993). More complex
functions are possible to use in higher-order rules.
5. Fuzzy rules with degrees of importance and certainty degrees (e.g. IF x1 is A
(DI1) AND x2 is B (DI2) THEN y is C (CFc), where DI1 and DI2 represent the
importance of each of the condition elements for the rule output, and the CFc
represents the strength of this rule (see Kasabov (1996)).
6. Fuzzy rules that represent associations of clusters of data from the problem
space (e.g. Rule j: IF [an input vector x is in the input cluster defined by
its centre (x1 is Aj, to a membership degree of MD1j, AND x2 is Bj, to a
membership degree of MD2j) and by its radius Rj-in] THEN [y is in the output
cluster defined by its centre (y is C, to a membership degree of MDc) and by
its radius Rj-out, with Nex(j) examples represented by this rule]. These are the
EFuNN rules discussed in Chapter 3.
7. Temporal rules (e.g. IF x1 is present at a time moment t1 (with a certainty
degree and/or importance factor of DI1) AND x2 is present at a time moment
t2 (with a certainty degree/importance factor DI2) THEN y is C (CFc)).
8. Temporal recurrent rules (e.g., IF x1 is A (DI1) AND x2 is B (DI2) AND y at
the time moment (t k) is C THEN y at a time moment (t + n) is D (CFc)).



9. Type-2 fuzzy rules, that is, fuzzy rules of the form: IF x is Ã AND y is B̃ THEN z is C̃, where Ã, B̃, and C̃ are type-2 fuzzy membership functions (see the extended glossary, and also the section in this chapter on type-2 ECOS).
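As a small illustration of rule type 4 above, a first-order Takagi–Sugeno rule can be evaluated as follows (the triangular membership functions and all coefficients are hypothetical; min is used for AND, product being another common choice):

```python
def ts_rule_output(x1, x2, mu_A, mu_B, a, b, c):
    """First-order Takagi-Sugeno rule:
    IF x1 is A AND x2 is B THEN y = a*x1 + b*x2 + c.
    mu_A and mu_B are the membership functions of the fuzzy values A and B.
    Returns (firing_strength, y)."""
    strength = min(mu_A(x1), mu_B(x2))
    return strength, a * x1 + b * x2 + c

# Hypothetical triangular membership function builder: tri(left, mid, right)
tri = lambda l, m, r: (lambda x: max(0.0, min((x - l) / (m - l),
                                              (r - x) / (r - m))))

w, y = ts_rule_output(2.0, 3.0, tri(0, 2, 4), tri(1, 3, 5), a=1.0, b=0.5, c=0.1)
```

The firing strength w would later weight this rule's linear output y when several such rules are combined, as in the ANFIS architecture discussed below in this chapter.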

Generic Methods for Rule Extraction from KBNN

There are several methods for rule extraction from a KBNN. Three of them are
explained below:
1. Rule extraction through activating a trained KBNN on input data and observing
the patterns of activation (the short-term memory). The method is not
practical for online incremental learning as past data may not be available
for a consecutive activation of the trained KBNN. This method is widely used
in brain research (e.g. analysing MRI, fMRI, and EEG patterns and signals to
detect rules of behaviour).
2. Rule extraction through analysis of the connections in a trained KBNN (the
long-term memory). This approach allows for extracting knowledge without
necessarily activating the connectionist system again on input data. It is appropriate for online learning and system improvement. This approach is not yet
used in brain study as there are no established methods thus far for processing
information stored in neuronal synapses.
3. Combined methods of (1) and (2). These methods make use of both of the above approaches.
A seminal work on fuzzy rule extraction from KBNN is the publication by Mitra
and Hayashi (2000).

Methods for Inference over Rules Extracted from KBNN

In terms of applying the rules extracted from a KBNN to infer new information,
there are three types of methods used in the KBNN:
1. The rule learning and rule inference modules constitute an integral structure
where reasoning is part of the rule learning, and vice versa. This is the case in
all fuzzy neural networks and of most of the neuro-fuzzy inference systems.
2. The rules extracted from a KBNN are interpreted in another inference machine.
The learning module is separated from the reasoning module. This is a main
principle used in many AI and expert systems, where the rule base acquisition
is separated from the inference machine.
3. The two options from above are possible within one intelligent system.
Figure 5.1 shows a general scheme of a fuzzy inference system. The decision-making block is the fuzzy inference engine that performs inference over fuzzy
rules and data from the database. The inference can be realized in a connectionist
structure, thus making the system a neuro-fuzzy inference system.
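As a toy illustration of such a fuzzy inference engine, a min-max (Zadeh-Mamdani) inference step with centroid defuzzification can be sketched as follows (the rule representation, membership functions, and output grid are all hypothetical):

```python
import numpy as np

def mamdani_infer(x, rules, y_grid):
    """Min-max inference over rules given as (mu_antecedent, mu_consequent)
    pairs: each rule's consequent membership over y_grid is clipped (min)
    by the rule's firing strength, the clipped sets are combined by max,
    and a centroid over y_grid defuzzifies the result. Returns None if no
    rule fires at all."""
    agg = np.zeros_like(y_grid, dtype=float)
    for mu_ant, mu_con in rules:
        strength = mu_ant(x)
        agg = np.maximum(agg, np.minimum(strength, mu_con(y_grid)))
    if agg.sum() == 0:
        return None
    return float(np.sum(y_grid * agg) / np.sum(agg))
```

In a neuro-fuzzy realization the same min, max, and clipping operations are carried out by layers of connectionist nodes rather than by an explicit loop over a rule base.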




Fig. 5.1 A general diagram of a fuzzy inference system (from Kasabov (1996), MIT Press, reproduced with permission).


Adaptive Neuro-Fuzzy Inference Systems (ANFIS)

ANFIS (Jang, 1993) implements Takagi–Sugeno fuzzy rules in a five-layer MLP

network. The first layer represents fuzzy membership functions. The second and
the third layers contain nodes that form the antecedent parts in each rule. The
fourth layer calculates the first-order Takagi–Sugeno rules for each fuzzy rule. The fifth layer (the output layer) calculates the weighted global output of the system,
as illustrated in Fig. 5.2a and b.
The backpropagation algorithm is used to modify the initially chosen
membership functions and the least mean square algorithm is used to determine
the coefficients of the linear output functions. Here, the min and the max functions
of a fuzzy inference method (Zadeh, 1965) are replaced by differentiable functions.
As many rules can be extracted from a trained ANFIS as there are a predefined
number of rule nodes. Two exemplar sets of fuzzy rules learned by an ANFIS
model are shown below (see also Fig. 5.2):
Rule 1: If x is A1 and y is B1 , then f1 = p1 x + q1 y + r1
Rule 2: If x is A2 and y is B2 , then f2 = p2 x + q2 y + r2
where x and y are the input variables; A1 , A2 , B1 and B2 are the membership
functions; f is the output; and p, q, and r are the consequent parameters.
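The weighted global output over such rules can be sketched as follows (a hedged illustration of the ANFIS-style combination sum(w_i · f_i) / sum(w_i); the product t-norm for the antecedent, the membership functions, and the coefficients are assumed for the example):

```python
def anfis_output(x, y, rules):
    """Weighted-average combination of first-order Takagi-Sugeno rules.
    Each rule is (mu_x, mu_y, (p, q, r)): the firing strength is the
    product of the antecedent memberships, the rule output is
    f = p*x + q*y + r, and the global output is sum(w*f)/sum(w).
    Returns None if no rule fires."""
    ws, fs = [], []
    for mu_x, mu_y, (p, q, r) in rules:
        ws.append(mu_x(x) * mu_y(y))
        fs.append(p * x + q * y + r)
    total = sum(ws)
    return sum(w * f for w, f in zip(ws, fs)) / total if total else None
```

In ANFIS itself the memberships mu_x, mu_y are tuned by backpropagation and the coefficients p, q, r by least mean squares, while this combination step stays fixed.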



Fig. 5.2 (a) An exemplar set of two fuzzy rules and the inference over them that is performed in an ANFIS
structure; (b) the exemplar ANFIS structure for these two rules (see Jang (1993) and the MATLAB Tutorial book,
Fuzzy Logic Toolbox).

By employing a hybrid learning procedure, the proposed architecture can refine

fuzzy if–then rules obtained from human experts to describe the input–output
behaviour of a complex system. If human expertise is not available, reasonable
initial membership functions can still be set up intuitively and the learning process
can begin to generate a set of fuzzy if–then rules to approximate a desired dataset.
ANFIS employs a multiple iteration learning procedure and has a fast convergence due to the hybrid learning algorithm used. It does not require preselection of
the number of the hidden nodes; they are defined as the number of combinations
between the fuzzy input membership functions.
Despite the fact that ANFIS is probably the most popular neuro-fuzzy inference
system thus far, in some cases it is not adequate to use. For example, ANFIS cannot
handle problems with high dimensionality, for example, more than 10 variables
(not to mention 40,000 gene expression variables), as the complexity of the system becomes unmanageable and the millions of rules would not be comprehensible by humans. ANFIS has a fixed structure that cannot adapt to the data at hand;
therefore it has limited abilities for incremental, online learning.
There can be only one output from an ANFIS. This is due to the format of
the fuzzy rules it represents. Thus ANFIS can only be applied to
tasks such as prediction or approximation of nonlinear functions where there
is a single output. The number of membership functions associated with each
input and output node cannot be adjusted, only their shape, so the prior choice of
membership functions is a critical issue when building an ANFIS system. Finally, there
are no variations apart from the hybrid learning rule available to train ANFIS.
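The combinatorial growth mentioned above is easy to quantify: a grid-partitioned system such as ANFIS creates one rule per combination of input membership functions, so the rule count is the product of the per-input membership-function counts. A small illustrative sketch (the counts are made up):

```python
# Rule count in a grid-partitioned fuzzy system (e.g. ANFIS): one rule per
# combination of the input membership functions.
def grid_rule_count(mfs_per_input):
    count = 1
    for m in mfs_per_input:
        count *= m
    return count

print(grid_rule_count([3, 3]))     # 2 inputs with 3 MFs each -> 9 rules
print(grid_rule_count([3] * 10))   # 10 inputs with 3 MFs each -> 59049 rules
```

With 10 inputs and only 3 membership functions each, the grid already contains 59,049 rules, which is why ANFIS becomes impractical in high-dimensional spaces.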
In contrast to ANFIS, incremental adaptive learning and local optimisation in
a fuzzy-neural inference system would allow for tracing the process of knowledge


Evolving Connectionist Systems

emergence, and for analysing how rules change over time. This is the case
with the two neuro-fuzzy evolving systems presented in the rest of this chapter.


Hybrid Neuro-Fuzzy Inference System (HyFIS)

A General Architecture

HyFIS (Kim and Kasabov, 1999) consists of two main parts (Fig. 5.3):
1. A fuzzy analysis module for fuzzy rule extraction from incoming data with the
use of Wang's method (1994)
2. A connectionist module that implements and tunes the fuzzy rules through
applying the backpropagation algorithm (see Fig. 3.1)
The system operates in the following mode.
1. Data examples (x, y) are assumed to arrive in chunks of m (as a partial case,
m = 1).
2. For the current chunk Ki, consisting of mi examples, ni fuzzy rules are extracted
as described below. They have a form illustrated with the following example.
IF x1 is Small AND x2 is Large THEN y is Medium (certainty 0.7)
3. The ni fuzzy rules are inserted in the neuro-fuzzy module, thus updating the
current structure of this module.
4. The updated neuro-fuzzy structure is trained with the backpropagation
algorithm on the chunk of data Ki, or on a larger dataset if such is available.
5. New data x that do not have known output vectors, are propagated through
the neuro-fuzzy module for recall.

Fig. 5.3 A schematic block diagram of HyFIS (from Kim and Kasabov (1999)).



The fuzzy rule extraction method is illustrated here on the following two examples
of input-output data (the chunk consists of only two examples):

example 1: x1 = 0.6, x2 = 0.2, y = 0.2
example 2: x1 = 0.4, x2 = 0, y = 0.4

These examples are fuzzified with membership functions (not shown here) defined
in the neuro-fuzzy module, or initialised if this is the first chunk of data:

example 1: x1 = Medium (0.8), x2 = Small (0.6), y = Small (0.6)
example 2: x1 = Medium (0.8), x2 = Small (1), y = Medium (0.8)

Here we have assumed that the range of the three variables x1, x2, and y is [0,1]
and there are three membership functions uniformly distributed on this range
(Small, Medium, Large).
Each of the examples can be now represented as a fuzzy rule of the Zadeh
Mamdani type:
Rule 1: IF x1 is Medium and x2 is Small THEN y is Small.
Rule 2: IF x1 is Medium and x2 is Small THEN y is Medium.
The rules can now be inserted in the neuro-fuzzy module, but in this particular
case they are contradictory rules; i.e. they have the same antecedent part but
different consequent parts. In this case a certainty degree is calculated for each rule
as the product of the membership degrees of all the variables in the rule, and
only the rule with the highest degree is inserted in the neuro-fuzzy system:

Rule 1: certainty degree CF1 = 0.8 × 0.6 × 0.6 = 0.288     (5.1)
Rule 2: certainty degree CF2 = 0.8 × 1 × 0.8 = 0.64

Only Rule 2 will be inserted in the connectionist structure.
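The certainty-degree calculation above can be sketched in a few lines (the dictionary encoding of the rules is hypothetical; only the arithmetic follows the text):

```python
# Certainty degree of a rule = product of the membership degrees of all
# variables in the rule; among contradictory rules (same antecedent,
# different consequent) only the highest-certainty rule is kept.
def certainty(degrees):
    cf = 1.0
    for mu in degrees:
        cf *= mu
    return cf

rule1 = {"then": "Small",  "cf": certainty([0.8, 0.6, 0.6])}   # CF1 = 0.288
rule2 = {"then": "Medium", "cf": certainty([0.8, 1.0, 0.8])}   # CF2 = 0.64
kept = max((rule1, rule2), key=lambda r: r["cf"])              # rule2 is kept
```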


Neuro-Fuzzy Inference Module

A block diagram of a hypothetical neuro-fuzzy module is given in Fig. 5.4.

It consists of five layers: layer one is the input layer, layer two is the input
membership functions layer, layer three is the rule layer, layer four is the output
membership function layer, and layer five is the output layer.
Layer three performs the AND operation, calculated as the min function over the
incoming activation values of the membership function nodes. The membership
functions are of a Gaussian type. Layer four performs the OR operation, calculated
as the max function over the weighted activation values of the rule nodes connected
to nodes in layer four:

Oj(4) = max_i ( Oi(3) wij )




Fig. 5.4 The structure of the neuro-fuzzy module of HyFIS (from Kim and Kasabov (1999)).

Layer five performs a centre-of-area defuzzification:

Ol(5) = Σj Oj(4) Cj(4) σj(4) / Σj Oj(4) σj(4)

where Ol(5) is the activation of the lth output node, and Oj(4) is the activation of the jth
node from layer 4, which represents a Gaussian output membership function with a
centre Cj(4) and a width σj(4).
Through the backpropagation learning algorithm the connection weights wij, as
well as the centres and the widths of the membership functions, are adjusted to
minimise the mean square error over the training dataset (or the current chunk
of data).
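A minimal sketch of this five-layer forward pass, with min for the AND, max over weighted rule activations for the OR, and centre-of-area defuzzification at the output; all membership-function parameters, rules, and connection weights below are invented for illustration:

```python
import math

def gaussian(x, c, s):
    # Gaussian membership function with centre c and width s
    return math.exp(-(x - c) ** 2 / (2 * s ** 2))

# Hypothetical (centre, width) parameters per linguistic label
IN_MF = {"Small": (0.0, 0.25), "Medium": (0.5, 0.25)}
OUT_MF = {"Small": (0.0, 0.25), "Medium": (0.5, 0.25)}
# Rules: (x1 label, x2 label) -> (output label, connection weight)
RULES = [(("Medium", "Small"), ("Medium", 0.9)),
         (("Small", "Small"), ("Small", 0.8))]

def hyfis_forward(x1, x2):
    o4 = {label: 0.0 for label in OUT_MF}
    for (l1, l2), (out, w) in RULES:
        # layer 3: fuzzy AND as min over the antecedent membership degrees
        act = min(gaussian(x1, *IN_MF[l1]), gaussian(x2, *IN_MF[l2]))
        # layer 4: fuzzy OR as max over the weighted rule activations
        o4[out] = max(o4[out], act * w)
    # layer 5: centre-of-area defuzzification over output MFs (centre, width)
    num = sum(a * OUT_MF[k][0] * OUT_MF[k][1] for k, a in o4.items())
    den = sum(a * OUT_MF[k][1] for k, a in o4.items())
    return num / den

y = hyfis_forward(0.5, 0.0)   # dominated by the "Medium" output MF
```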


Modelling and Prediction of the Gas-Furnace Data with HyFIS

In this experiment 200 data examples were drawn randomly from the gas-furnace
time series (Box and Jenkins (1970); see Chapter 1); 23 fuzzy rules were
extracted from them and inserted in an initial neuro-fuzzy structure (the filled
circles in Fig. 5.5). Two new rules were extracted from the remaining 92 data
examples (also inserted in the structure: the empty circles).
The structure was trained on the 200 examples and then tested and incrementally evolved on
the remaining 92 examples. The test results were compared with similar results obtained



Fig. 5.5 The initial structure of HyFIS for the gas-furnace time-series prediction (the filled circles represent the
initial 23 fuzzy rules inserted before training), and the resulting HyFIS structure after training is performed (the
empty circles represent newly created rules; from Kim and Kasabov (1999)).

Table 5.1 A comparative analysis of the test results of different fuzzy neural networks and neuro-fuzzy inference
models for the prediction of the gas-furnace time series. All the models, except HyFIS, use a prior fixed structure
that does not change during training. HyFIS begins training with no rule nodes and builds nodes based on
fuzzy rules extracted from data in an online mode (from Kim and Kasabov (1999)).
Model name and reference                        Number of inputs    Number of rules    Model error (MSE)

ARMA (Box and Jenkins, 1970)                    —                   —                  —
Takagi-Sugeno model (1985)
  (Hauptman and Heesche, 1995)                  —                   —                  —
ANFIS (Jang, 1993)                              —                   —                  0.73 × 10^-3
FuNN (Kasabov et al., 1997)                     —                   —                  0.51 × 10^-3
HyFIS (Kim and Kasabov, 1999)                   —                   —                  0.42 × 10^-3

with the use of other statistical, connectionist, fuzzy, and neuro-fuzzy techniques, as
shown in Table 5.1.
After training with the 200 examples the updated rules can be extracted from the
neuro-fuzzy structure. The membership functions are modified from their initial
forms, as shown in Fig. 5.6.


Dynamic Evolving Neuro-Fuzzy Inference Systems (DENFIS)

General Principles

The dynamic evolving neuro-fuzzy system, DENFIS, in its two modifications, for
online and for off-line learning, uses the Takagi-Sugeno type of fuzzy inference



Fig. 5.6 The initial and the modified membership functions for the input variables ut 4 and yt 1
and the output variable yt after certain epochs of training HyFIS on the gas-furnace data are performed
(from Kim and Kasabov (1999)).

method (Kasabov and Song, 2002). The inference in DENFIS is performed on m
fuzzy rules of the following form:

if x1 is R11 and x2 is R12 and … and xq is R1q, then y is f1(x1, x2, …, xq)
if x1 is R21 and x2 is R22 and … and xq is R2q, then y is f2(x1, x2, …, xq)
…
if x1 is Rm1 and x2 is Rm2 and … and xq is Rmq, then y is fm(x1, x2, …, xq)

where "xj is Rij", i = 1, …, m; j = 1, …, q, are m × q fuzzy propositions
that form the m antecedents of the m fuzzy rules, respectively; xj, j = 1, …, q, are
antecedent variables defined over universes of discourse Xj, j = 1, …, q; and
Rij, i = 1, …, m; j = 1, …, q, are fuzzy sets defined by their fuzzy membership
functions μRij: Xj → [0, 1], i = 1, …, m; j = 1, …, q. In the consequent parts of
the fuzzy rules, y is the consequent variable, and crisp functions fi, i = 1, …, m,
are employed.
In both the online and the off-line DENFIS models, the fuzzy membership functions
can be of triangular type, depending on three parameters, a, b, and c, as given below:

μ(x) = mf(x; a, b, c) = max( min( (x − a)/(b − a), (c − x)/(c − b) ), 0 )    (5.8)

where b is the value of the cluster centre on the variable x dimension; a = b − d ×
Dthr; c = b + d × Dthr; d = 1.2 ~ 2; and the threshold value Dthr is a clustering
parameter (see the evolving clustering method ECM presented in Chapter 2).
If fi(x1, x2, …, xq) = Ci, i = 1, …, m, where the Ci are constants, we call this inference
a zero-order Takagi-Sugeno type fuzzy inference system. The system is called a
first-order Takagi-Sugeno fuzzy inference system if fi(x1, x2, …, xq), i = 1, …, m,
are linear functions. If these functions are nonlinear, it is called a high-order
Takagi-Sugeno fuzzy inference system.
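This membership function is easy to sketch in code; the choice d = 2 below is just one admissible value of the clustering parameter:

```python
# Triangular membership function centred on a cluster centre b (Eq. 5.8):
# a = b - d * Dthr, c = b + d * Dthr, with Dthr the clustering threshold.
def tri_mf(x, b, dthr, d=2.0):
    a, c = b - d * dthr, b + d * dthr
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# peak membership at the cluster centre, decaying linearly to 0 at a and c
print(tri_mf(0.5, 0.5, 0.1))   # 1.0 at the centre
```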


Online Learning in a DENFIS Model

In the DENFIS online model, first-order Takagi-Sugeno type fuzzy rules
are employed, and the linear functions in the consequent parts are created
and updated through learning from data by using the linear least-square
estimator (LSE).
For an input vector x0 = [x1^0, x2^0, …, xq^0], the result of inference, y0 (the output of
the system), is the weighted average of the outputs of the individual rules:

y0 = Σi=1..m [ ωi fi(x1^0, x2^0, …, xq^0) ] / Σi=1..m ωi

where ωi = Πj=1..q Rij(xj^0), i = 1, …, m.
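The weighted-average inference above can be sketched directly; the toy system below has one input, triangular membership functions, and invented consequent coefficients:

```python
def tri(x, a, b, c):
    # triangular membership function with feet a, c and peak b
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def ts_infer(x, rules):
    # rules: list of (list of (a, b, c) triples, one per input,
    #                 consequent coefficients [beta0, beta1, ..., betaq])
    # output = sum(omega_i * f_i) / sum(omega_i)
    num = den = 0.0
    for mfs, beta in rules:
        omega = 1.0
        for xj, (a, b, c) in zip(x, mfs):
            omega *= tri(xj, a, b, c)      # firing strength: product of MFs
        f = beta[0] + sum(bj * xj for bj, xj in zip(beta[1:], x))
        num += omega * f
        den += omega
    return num / den

# two rules that both fire fully at x = 0.5: the output is the average of
# their linear consequents, (0.5 + 2.0) / 2 = 1.25
rules = [([(0.0, 0.5, 1.0)], [0.0, 1.0]),
         ([(0.0, 0.5, 1.0)], [2.0, 0.0])]
y = ts_infer((0.5,), rules)
```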

Each of the linear functions in the rule consequents can be expressed as follows:

y = β0 + β1 x1 + β2 x2 + … + βq xq

To obtain these functions, a learning procedure is applied on a dataset composed
of p data pairs {([xi1, xi2, …, xiq], yi), i = 1, …, p}. The least-square
estimator of β = [β0, β1, β2, …, βq]ᵀ is calculated as the coefficients b =
[b0, b1, b2, …, bq]ᵀ by applying the following formula:

b = (AᵀA)⁻¹ Aᵀ y    (5.9)




where A = [a1, a2, …, ap]ᵀ is the matrix whose kth row is akᵀ = [1, xk1, xk2, …, xkq],
and y = [y1, y2, …, yp]ᵀ. A weighted least-square estimation method is used here as

bw = (Aᵀ W A)⁻¹ Aᵀ W y    (5.10)

where W = diag(w1, w2, …, wp) and wj is the distance between the jth example and
the corresponding cluster centre, j = 1, …, p.
We can rewrite Eqs. (5.9) and (5.10) as follows:

P = (AᵀA)⁻¹,  b = P Aᵀ y    (5.11)

Pw = (Aᵀ W A)⁻¹,  bw = Pw Aᵀ W y    (5.12)

Let the kth row vector of the matrix A defined in Eq. (5.9) be akᵀ = [1, xk1, xk2, …, xkq]
and the kth element of y be yk; then b can be calculated iteratively as follows:

bk+1 = bk + Pk+1 ak+1 (yk+1 − ak+1ᵀ bk)
Pk+1 = Pk − (Pk ak+1 ak+1ᵀ Pk) / (1 + ak+1ᵀ Pk ak+1)    (5.13)

for k = n, n + 1, …, p − 1.
Here, the initial values of Pn and bn can be calculated directly from Eq. (5.12)
with the use of the first n data pairs from the learning dataset.



Equation (5.13) is the formula of the recursive LSE. In the DENFIS online model,
we use a weighted recursive LSE with a forgetting factor, defined as follows:

bk+1 = bk + wk+1 Pk+1 ak+1 (yk+1 − ak+1ᵀ bk)
Pk+1 = (1/λ) [ Pk − (wk+1 Pk ak+1 ak+1ᵀ Pk) / (λ + wk+1 ak+1ᵀ Pk ak+1) ],
k = n, n + 1, …, p − 1    (5.14)

where w is the weight defined in Eq. (5.10) and λ is a forgetting factor with a
typical value between 0.8 and 1.
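The recursion can be checked numerically. The sketch below implements one step of the weighted recursive LSE with forgetting factor; with w = 1 and λ = 1 it reduces to the plain recursive LSE, and (for brevity) it uses the common large-P initialisation instead of the batch start on the first n pairs described in the text:

```python
import numpy as np

def rls_update(b, P, a, y, w=1.0, lam=1.0):
    # one step of the weighted recursive LSE with forgetting factor lam
    a = a.reshape(-1, 1)
    P = (P - (w * P @ a @ a.T @ P) / (lam + w * float(a.T @ P @ a))) / lam
    err = y - float(a.T @ b.reshape(-1, 1))
    b = b + (w * (P @ a)).flatten() * err
    return b, P

# data generated by y = 2 + 3x, rows a_k = [1, x_k]
xs = np.arange(5.0)
ys = 2.0 + 3.0 * xs
A = np.column_stack([np.ones(5), xs])

P = np.eye(2) * 1e6   # large initial "covariance" (assumed initialisation)
b = np.zeros(2)
for k in range(5):
    b, P = rls_update(b, P, A[k], ys[k])
# b now approximates the batch solution [2, 3]
```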
In the online DENFIS model, the rules are created and updated at the same
time as the input space is partitioned using the online evolving clustering method
(ECM) and Eqs. (5.8) and (5.14). If no rule insertion is applied, the following steps
are used for the creation of the first m fuzzy rules and for the calculation of the
initial values P and b of their consequent functions.
1. Take the first n0 learning data pairs from the learning dataset.
2. Implement clustering, using ECM, on these n0 data to obtain m cluster centres.
3. For every cluster centre Ci, find the pi data points whose positions in the input
space are closest to the centre, i = 1, …, m.
4. To obtain a fuzzy rule corresponding to a cluster centre, create the
antecedents of the fuzzy rule using the position of the cluster centre and
Eq. (5.8). Using Eq. (5.12) on the pi data pairs, calculate the values of P and b of
the consequent function. The distances between the pi data points and the cluster
centre are taken as the weights in Eq. (5.12).
In the above steps, m, n0, and p are the parameters of the DENFIS online learning
model, and the value of pi should be greater than the number of input variables, q.
As new data pairs are presented to the system, new fuzzy rules may be created and
some existing rules updated. A new fuzzy rule is created if a new cluster centre
is found by the ECM. The antecedent of the new fuzzy rule is formed by using
Eq. (5.8) with the position of the cluster centre as a rule node. An existing fuzzy
rule is found based on the rule node that is closest to the new rule node; the
consequence function of this rule is taken as the consequence function for the new
fuzzy rule. For every data pair, several existing fuzzy rules are updated by using
Eq. (5.14) if their rule nodes have distances to the data point in the input space
that are not greater than 2 × Dthr (the threshold value, a clustering parameter).
The distances between these rule nodes and the data point in the input space are
taken as the weights in Eq. (5.14). In addition to this, one of these rules may also
be updated through changing its antecedent so that, if its rule node position is
changed by the ECM, the fuzzy rule will have a new antecedent calculated through
Eq. (5.8).


Takagi-Sugeno Fuzzy Inference in DENFIS

The Takagi-Sugeno fuzzy inference system utilised in DENFIS is dynamic. In
addition to dynamically creating and updating fuzzy rules, the DENFIS
online model has some other major differences from other inference systems.



First, for each input vector, the DENFIS model chooses m fuzzy rules from
the whole fuzzy rule set for forming a current inference system. This operation
depends on the position of the current input vector in the input space. In the case
of two input vectors that are very close to each other, especially in the DENFIS
off-line model, the inference system may have the same fuzzy rule inference group.
In the DENFIS online model, however, even if two input vectors are exactly the
same, their corresponding inference systems may be different. This is because
these two input vectors are presented to the system at different time moments and
the fuzzy rules used for the first input vector might have been updated before the
second input vector has arrived.
Second, depending on the position of the current input vector in the input space,
the antecedents of the fuzzy rules chosen to form an inference system for this
input vector may vary. An example is illustrated in Fig. 5.7a,b where two different
groups of fuzzy inference rules are formed depending on two input vectors x1
and x2, respectively, in a 2D input space. We can see from this example that,
for instance, the region C has a linguistic meaning of "large" in the X1 direction
for the group in Fig. 5.7a, but for the group of rules from Fig. 5.7b it denotes a
linguistic meaning of "small" in the same direction of X1. The region C is defined
by different membership functions, respectively, in each of the two groups of
rules.

Time-Series Modelling and Prediction with the DENFIS Online Model

In this section the DENFIS online model is applied to modelling and predicting
the future values of a chaotic time series: the MackeyGlass (MG) dataset (see
Chapter 1), which has been used as a benchmark example in the areas of
neural networks, fuzzy systems, and hybrid systems (see Jang (1993)). This time
series is created with the use of the MG time-delay differential equation defined below:

dx(t)/dt = 0.2 x(t − τ) / (1 + x^10(t − τ)) − 0.1 x(t)

To obtain values at integer time points, the fourth-order Runge-Kutta method was
used to find the numerical solution to the above MG equation. Here we assume
that the time step is 0.1, x(0) = 1.2, τ = 17, and x(t) = 0 for t < 0. The task is
to predict the values x(t + 85) from input vectors [x(t − 18), x(t − 12), x(t − 6), x(t)]
for any value of the time t. For the purpose of a comparative analysis, we also
for any value of the time t. For the purpose of a comparative analysis, we also
use some existing online learning models applied to the same task. These models
are neural gas, resource-allocating network (RAN), evolving self-organising maps
(ESOM; see Chapter 2) and evolving fuzzy-neural network (EFuNN; Chapter 3).
Here, we estimate the nondimensional error index (NDEI), which is defined as
the root mean square error divided by the standard deviation of the target series.
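The NDEI just defined takes only a few lines to compute:

```python
# NDEI = RMSE of the predictions divided by the standard deviation of the
# target series.
def ndei(pred, target):
    n = len(target)
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / n
    mean = sum(target) / n
    var = sum((t - mean) ** 2 for t in target) / n
    return (mse / var) ** 0.5

# a predictor that always outputs the target mean scores NDEI = 1
print(ndei([0.5, 0.5, 0.5, 0.5], [0.0, 1.0, 0.0, 1.0]))   # 1.0
```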
The following experiment was conducted: 3000 data points, for t = 201 to 3200,
are extracted from the time series and used as learning (training) data; 500 data



(a) Fuzzy rule group 1 for a DENFIS








(b) Fuzzy rule group 2 for a DENFIS









Fig. 5.7 Two fuzzy rule groups are formed by DENFIS to perform inference for an input vector x1 (a), and for
an input vector x2 (b) that is entered at a later time moment, all represented in the 2D space of the first two
input variables X1 and X2 (from Kasabov and Song (2002)).

points, for t = 5001 to 5500, are used as testing data. For each of the online models
mentioned above the learning data are used for the online learning processes, and
then the testing data are used with the recalling procedure.
Table 5.2 lists the prediction results (NDEI on test data after online learning)
and the number of rules (nodes, units) evolved (used) in each model.
In another experiment the properties of rule insertion and rule extraction were
utilised where we first obtained a group of fuzzy rules from the first half of the
training data (1500 samples), using the DENFIS off-line model I (introduced in
the next section); then we inserted these rules into the DENFIS online model and



Table 5.2 Prediction results of online learning models on the Mackey-Glass test data.

Model                                        Fuzzy rules (DENFIS),      NDEI for
                                             rule nodes (EFuNN),        testing data
                                             or units (others)
Neural gas (Fritzke, 1994)                   —                          —
RAN (Platt, 1991)                            —                          —
RAN (other parameters)                       —                          —
ESOM (Deng and Kasabov, 2002) (Chapter 2)    —                          —
ESOM (other parameters)                      —                          —
EFuNN (Kasabov, 2001) (Chapter 3)            —                          —
EFuNN (other parameters)                     —                          —
DENFIS (Kasabov and Song, 2002)              —                          —
DENFIS (other parameters)                    —                          —
DENFIS with rule insertion                   —                          —

let it learn continuously from the next half of the learning data (1500 samples).
Then, we tested the model on the test data.
Figures 5.8a,b,c display the test errors (from the recall processes on the test
data) of the DENFIS online model with different numbers of fuzzy rules:
DENFIS online model with 58 fuzzy rules
DENFIS online model with 883 fuzzy rules (different parameter values are used
from those in the model above)
DENFIS online model with 883 fuzzy rules, evolved after an initial set of
rules was inserted


DENFIS Off-Line Learning Model

The DENFIS online model presented thus far can also be used for off-line, batch-mode
training, but it may not be very efficient when used on comparatively small
datasets. For the purpose of batch training, the DENFIS online model is extended
here to work efficiently in an off-line, batch training mode.
Two DENFIS models for off-line learning are developed and presented here:
a linear model, model I, and a MLP-based model, model II.
A first-order Takagi-Sugeno type fuzzy inference engine, similar to the one in the DENFIS
online model, is employed in model I, and an extended high-order Takagi-Sugeno
fuzzy inference engine is used in model II. The latter employs several small-size,
two-layer multilayer perceptrons (the hidden layer consists of two or three neurons)
to realise the function f in the consequent part of each fuzzy rule,
instead of using a function that has a predefined type.
The DENFIS off-line learning process is implemented in the following way.
Cluster (partition) the input space to find n cluster centres (n rule nodes, nrules)
by using the off-line evolving clustering method with constrained optimisation
(ECMc; see Chapter 2).



Fig. 5.8 Prediction error of the DENFIS online (a)(b)(c) and off-line (d)(e)(f) models on test data taken from the
Mackey-Glass time series (from Kasabov and Song (2002)).

Create the antecedent part for each fuzzy rule using Eq. (5.8) and also the
current position of the cluster centre (rule node).
Find n datasets, each of them including one cluster centre and p learning data
pairs that are closest to the centre in the input space. In the general case, one
data pair can belong to several clusters.
For model I, estimate the functions f to create the consequent part for each
fuzzy rule using Eq. (5.10) or Eq. (5.12) with the n datasets; the distance between
each data point and its corresponding centre is represented as a connection
weight.
For model II, each consequent function f of a fuzzy rule (rule node, cluster
centre) is learned by a corresponding MLP network after training it on the
corresponding dataset with the use of the backpropagation algorithm.
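For model I, the consequent estimation above amounts to a weighted least-squares fit on the samples nearest each cluster centre. A sketch with illustrative data (the closeness-based weight transform used below is an assumption):

```python
import numpy as np

def fit_local_linear(X, y, centre, p):
    # take the p samples closest to the cluster centre ...
    d = np.linalg.norm(X - centre, axis=1)
    idx = np.argsort(d)[:p]
    A = np.column_stack([np.ones(p), X[idx]])
    # ... and weight them by closeness to the centre (assumed transform
    # of the distance; the text only says the distances act as weights)
    W = np.diag(1.0 - (d[idx] - d[idx].min()))
    # weighted LSE, b = (A^T W A)^-1 A^T W y
    return np.linalg.inv(A.T @ W @ A) @ A.T @ W @ y[idx]

X = np.array([[0.0], [0.1], [0.2], [0.3]])
y = 2.0 + 3.0 * X[:, 0]                                 # exactly linear data
beta = fit_local_linear(X, y, np.array([0.15]), p=4)    # recovers [2, 3]
```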




Time-Series Modelling and Prediction with the DENFIS Off-Line Model

Dynamic time-series modelling of complex time series is a difficult task, especially

when the type of the model is not known in advance. In this section, we apply
the two DENFIS off-line models to the same task of Mackey-Glass time-series
prediction. For comparison purposes two other well-known models, the adaptive
neural-fuzzy inference system (ANFIS; Jang (1993)) and a multilayer perceptron
trained with the backpropagation algorithm (MLP-BP; Rumelhart et al. (1986)) are
also used for this task under the same conditions.
In addition to the nondimensional error index (NDEI), in the case of off-line
learning, the learning time is also measured as another comparative criterion.
Here, the learning time is the CPU-time (in seconds) consumed by each method
during the learning process in the same computing environment (MATLAB, UNIX
version 5.5).
Table 5.3 lists the off-line prediction results of MLP, ANFIS, and DENFIS, and
these results include the number of fuzzy rules (or rule nodes) in the hidden layer,
learning epochs, learning time (CPU-time), NDEI for training data, and NDEI for
testing data. The best results are achieved in the DENFIS II model.
In Figures 5.8d,e,f the prediction test error on the same test data is shown for
the following three DENFIS off-line learning models:
DENFIS off-line model I with 116 fuzzy rules
DENFIS off-line model I with 883 fuzzy rules
DENFIS off-line model II with 58 fuzzy rules
The prediction error of DENFIS model II with 58 rule nodes is the lowest.


Rule Extraction from DENFIS Models

DENFIS allows rules to be extracted at any time of the system's operation. The
rules are of the first-order Takagi-Sugeno type.

Table 5.3 Prediction results of off-line learning models on Mackey-Glass training and test data.

Model        Neurons or rules    Epochs    Training time (s)    Training NDEI    Testing NDEI
MLP-BP       —                   —         —                    —                —
ANFIS        —                   —         —                    —                —
DENFIS I     —                   —         —                    —                —
DENFIS II    —                   —         —                    —                —



This is illustrated on the gas-furnace time-series dataset. A DENFIS system is
trained on part of the data. The system approximates these data with an RMSE
of 0.276 and an NDEI of 0.081; see Fig. 5.9a. Eleven rule nodes are created,
partitioning the input space as shown in Fig. 5.9b. Eleven rules are extracted,
each of them representing one of the 11 rule nodes, as shown in Fig. 5.9c.



Fig. 5.9 (a) The trained DENFIS approximates the gas-furnace time-series data; (b) partitioning of the input
space by the 11 evolved rule nodes in DENFIS; (Continued overleaf )


if X1 is (0.44 0.50 0.57) and X2 is (0.45 0.52 0.58) then Y = 0.53 − 0.58 X1 + 0.53 X2
if X1 is (0.23 0.29 0.36) and X2 is (0.63 0.69 0.76) then Y = 0.52 − 0.52 X1 + 0.51 X2
if X1 is (0.65 0.71 0.78) and X2 is (0.25 0.32 0.38) then Y = 0.45 − 0.49 X1 + 0.60 X2
if X1 is (0.63 0.70 0.76) and X2 is (0.14 0.20 0.27) then Y = 0.41 − 0.43 X1 + 0.60 X2
if X1 is (0.50 0.56 0.63) and X2 is (0.33 0.39 0.46) then Y = 0.48 − 0.53 X1 + 0.59 X2
if X1 is (0.85 0.91 0.98) and X2 is (0.05 0.12 0.18) then Y = 0.54 − 0.55 X1 + 0.44 X2
if X1 is (0.26 0.32 0.39) and X2 is (0.51 0.58 0.64) then Y = 0.54 − 0.59 X1 + 0.50 X2
if X1 is (0.23 0.29 0.36) and X2 is (0.74 0.81 0.87) then Y = 0.51 − 0.52 X1 + 0.52 X2
if X1 is (0.51 0.57 0.64) and X2 is (0.55 0.62 0.68) then Y = 0.59 − 0.61 X1 + 0.46 X2
if X1 is (0.01 0.08 0.14) and X2 is (0.77 0.83 0.90) then Y = 0.53 − 0.51 X1 + 0.49 X2
if X1 is (0.19 0.26 0.32) and X2 is (0.83 0.90 0.96) then Y = 0.53 − 0.51 X1 + 0.50 X2

Fig. 5.9 (continued) (c) Takagi-Sugeno fuzzy rules extracted from a trained DENFIS model on the gas-furnace
time-series dataset.



DENFIS achieved better results in some respects when compared with the growing
neural gas (Fritzke, 1995), RAN (Platt, 1991), EFuNN (Chapter 3), and ESOM
(Chapter 2) in the case of online learning of a chaotic time series. DENFIS off-line
learning produces results comparable with ANFIS and MLP.
DENFIS uses local generalisation, like the EFuNN and CMAC neural networks
(Albus, 1975); therefore it needs more training data than models that use global
generalisation, such as ANFIS and MLP. During the learning process DENFIS forms
an area of partitioned regions, but these regions may not cover the whole input
space. In the recall process, DENFIS gives satisfactory results if the recall
examples fall inside these regions. For examples outside this area,
DENFIS is likely to produce results with a higher error rate.
Using a different type of rule (see the list of rule types at the beginning of
the chapter) in an ECOS architecture may lead to different results depending on
the task at hand. ECOS allow both fuzzy and propositional (e.g. interval) rules
to be used, depending on whether there is a fuzzy membership layer.
If the ECOS architecture deals with a fuzzy representation, different types
of fuzzy rules can be exploited. For example, for classification purposes
Zadeh-Mamdani fuzzy rules may give a better result, but for function approximation
and time-series prediction a better result may be achieved with the use
of Takagi-Sugeno rules. The latter is demonstrated with a small experiment on
the gas-furnace and on the Mackey-Glass benchmark datasets. Two versions of
EFuNN are compared: the first version uses Zadeh-Mamdani fuzzy rules, and the
second version, Takagi-Sugeno fuzzy rules (Table 5.4).



Table 5.4 Comparing two versions of EFuNN, the first using Zadeh-Mamdani
fuzzy rules and the second Takagi-Sugeno fuzzy rules, on the gas-furnace
time-series data and on the Mackey-Glass time-series data.

Criterion                                              EFuNN with            EFuNN with
                                                       Zadeh-Mamdani rules   Takagi-Sugeno rules
Gas-furnace time-series data: number of rule nodes     —                     —
Gas-furnace time-series data: online testing NDEI      —                     —
Mackey-Glass time-series data: number of rule nodes    —                     —
Mackey-Glass time-series data: online testing NDEI     —                     —

For the same number of rule nodes evolved in the two types of EFuNN, the
EFuNN that uses Takagi-Sugeno fuzzy rules gives better results on the time-series
prediction problem for both benchmark time-series datasets.


Transductive Neuro-Fuzzy Inference Models

Principles and Structure of the TWNFI

TWNFI is a dynamic neuro-fuzzy inference system with local generalisation,
in which either the Zadeh-Mamdani type fuzzy inference engine is used or the
Takagi-Sugeno fuzzy inference is applied. Here the former case is introduced.
Local generalisation means that in a subspace of the whole problem space
(local area) a model is created that performs generalisation in this area. In the
TWNFI model, Gaussian fuzzy membership functions are applied in each fuzzy
rule for both the antecedent and the consequent parts. A steepest-descent (BP)
learning algorithm is used for optimising the parameters of the fuzzy membership
functions. The distance between vectors x and y is measured in TWNFI as a weighted
normalised Euclidean distance, defined as follows (the values are between 0 and 1):

||x − y|| = [ Σj=1..P |wj (xj − yj)|² ]^(1/2) / P

where x, y ∈ R^P, and wj are weights.
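A sketch of this weighted distance (the exact placement of the normalising factor P is an assumption):

```python
# Weighted normalised Euclidean distance between vectors x and y with
# per-variable weights w (normalisation by the dimensionality P is assumed).
def weighted_distance(x, y, w):
    s = sum((wj * (xj - yj)) ** 2 for wj, xj, yj in zip(w, x, y))
    return s ** 0.5 / len(x)

print(weighted_distance([0.0, 0.0], [1.0, 0.0], [1.0, 1.0]))   # 0.5
```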

To partition the input space for creating fuzzy rules and obtaining initial values
of fuzzy rules, the ECM (evolving clustering method) is applied (Chapter 2) and
the cluster centres and cluster radii are respectively taken as initial values of
the centres and widths of the Gaussian membership functions. Other clustering
techniques can be applied as well. A block diagram of the TWNFI is shown in
Fig. 5.10.



Fig. 5.10 A block diagram of the TWNFI method (from Song and Kasabov (2006)).


The TWNFI Learning Algorithm

For each new data vector xq an individual model is created with the application
of the following steps.
1. Normalise the training dataset and the new data vector xq (the values are
between 0 and 1), with value 1 as the initial weight of each input variable.
2. Search the training dataset in the input space to find the Nq training examples
that are closest to xq, using the weighted normalised Euclidean distance. The value of
Nq can be predefined based on experience, or optimised through the application
of an optimisation procedure. Here we assume the former approach.
3. Calculate the distances di, i = 1, …, Nq, between each of these data samples
and xq. Calculate the vector weights vi = 1 − (di − min(d)), i = 1, …, Nq, where min(d)
is the minimum value in the distance vector d = [d1, d2, …, dNq].
4. Use the ECM clustering algorithm to cluster and partition the input subspace
that consists of the Nq selected training samples.
5. Create fuzzy rules and set their initial parameter values according to the
ECM clustering procedure results; for each cluster, the cluster centre is taken as
the centre of a fuzzy membership function (Gaussian function) and the cluster
radius is taken as the width.
6. Apply the steepest-descent method (backpropagation) to optimise the weights
and parameters of the fuzzy rules in the local model Mq, following the equations
given below.
7. Search the training dataset to find Nq samples (the same as in Step 2); if
the same samples are found as in the last search, go to Step 8;
otherwise, go to Step 3.
8. Calculate the output value yq for the input vector xq by applying fuzzy inference
over the set of fuzzy rules that constitute the local model Mq.
9. End of the procedure.
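The neighbour selection and the Step-3 weighting can be condensed into a sketch; the local model here is just a vi-weighted average of the neighbours' outputs, standing in for the full fuzzy model Mq:

```python
def twnfi_sketch(xq, X, y, Nq=3):
    # Step 2: find the Nq training samples closest to the query (1-D inputs
    # for brevity; the text uses the weighted normalised Euclidean distance)
    d = [abs(xi - xq) for xi in X]
    nearest = sorted(range(len(X)), key=lambda i: d[i])[:Nq]
    # Step 3: weights v_i = 1 - (d_i - min(d)) over the selected samples
    dmin = min(d[i] for i in nearest)
    v = {i: 1.0 - (d[i] - dmin) for i in nearest}
    # stand-in local model: weighted average of the neighbours' outputs
    return sum(v[i] * y[i] for i in nearest) / sum(v[i] for i in nearest)

pred = twnfi_sketch(1.5, X=[0.0, 1.0, 2.0, 10.0], y=[0.0, 1.0, 2.0, 10.0])
```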
The weight and parameter optimisation procedure is described below.
Consider a system having P inputs, one output, and M fuzzy rules defined
initially through the ECM clustering procedure, with the lth rule having the form:

If x1 is Fl1 and x2 is Fl2 and … xP is FlP, then y is Gl

Here, Flj are fuzzy sets defined by the following Gaussian type membership function:

GaussianMF = α exp( −(x − m)² / (2σ²) )

and Gl are logarithmic regression functions:

Gl = bl0 · x1^bl1 · x2^bl2 · … · xP^blP


Using the modified centre-average defuzzification procedure, the output value of
the system can be calculated for an input vector xi = [x1, x2, …, xP] as follows:

f(xi) = Σl=1..M [ Gl Πj=1..P αlj exp( −wj²(xij − mlj)² / (2σlj²) ) ] /
        Σl=1..M [ Πj=1..P αlj exp( −wj²(xij − mlj)² / (2σlj²) ) ]

Here, wj are the weights of the input variables.
Suppose the TWNFI is given a training input-output data pair [xi, ti]. The system
minimises the following objective function (a weighted error function):

E = (1/2) vi [f(xi) − ti]²

(vi are defined in Step 3).
The steepest-descent algorithm (BP) is then used to obtain the formulas for the
optimisation of the parameters blj, αlj, mlj, σlj, and wj, such that the error E is
minimised:

bl0(k+1) = bl0(k) − ηb [Gl(k) vi / bl0(k)] [f^(k)(xi) − ti] φ(xi)

blj(k+1) = blj(k) − ηb Gl(k) ln(xij) vi [f^(k)(xi) − ti] φ(xi)

αlj(k+1) = αlj(k) − ηα [vi φ(xi) / αlj(k)] [f^(k)(xi) − ti] [Gl(k) − f^(k)(xi)]

mlj(k+1) = mlj(k) − ηm [wj²(k) vi φ(xi) / σlj²(k)] [f^(k)(xi) − ti] [Gl(k) − f^(k)(xi)] [xij − mlj(k)]

σlj(k+1) = σlj(k) − ησ [wj²(k) vi φ(xi) / σlj³(k)] [f^(k)(xi) − ti] [Gl(k) − f^(k)(xi)] [xij − mlj(k)]²

wj(k+1) = wj(k) − ηw [wj(k) vi φ(xi) / σlj²(k)] [f^(k)(xi) − ti] [f^(k)(xi) − Gl(k)] [xij − mlj(k)]²

where

φ(xi) = Πj=1..P αlj exp( −wj²(k)(xij − mlj(k))² / (2σlj²(k)) ) /
        Σl=1..M Πj=1..P αlj exp( −wj²(k)(xij − mlj(k))² / (2σlj²(k)) )

and ηb, ηα, ηm, ησ, and ηw are learning rates for updating the parameters
blj, αlj, mlj, σlj, and wj, respectively.
In the TWNFI training-simulating algorithm, the following indexes are used:
training data samples: i = 1, …, N;
input variables: j = 1, …, P;
fuzzy rules: l = 1, …, M;
training epochs: k = 1, 2, …


Applications of TWNFI for Time-Series Prediction

In this section several evolving NFI systems are applied for modelling and
predicting the future values of a chaotic time series: the Mackey-Glass (MG)
dataset (see Chapter 1), which has been used as a benchmark problem in the areas
of neural networks, fuzzy systems, and hybrid systems. This time series is created
with the use of the MG time-delay differential equation defined below:
dx/dt = 0.2 x(t - τ) / (1 + x¹⁰(t - τ)) - 0.1 x(t)

To obtain values at integer time points, the fourth-order Runge-Kutta method was
used to find the numerical solution to the above MG equation. Here we assume
that: the time step is 0.1; x(0) = 1.2; τ = 17; and x(t) = 0 for t < 0. The task is to
predict the value x(t + 6) from the input vector [x(t - 18), x(t - 12), x(t - 6), x(t)]
for any value of the time t. The following experiment was conducted: 1000 data
points, from t = 118 to 1117, were extracted; the first 500 data were taken as the
training data and another 500 as testing data. For each of the testing data samples a
TWNFI model was created and tested on that sample. Figure 5.11 displays the target
data and Table 5.5 lists the testing results represented by NDEI, nondimensional
error index, that is defined as the root mean square error (RMSE) divided by
the standard deviation of the target series. For the purpose of a comparative
analysis, we have quoted the prediction results on the same data produced by
some other methods, which are also listed in Table 5.5. The TNFI method there is
the same as the TWNFI method described in Section 5.2, but without weight
optimization (the weights are all assumed equal to one and do not change during
the model development).
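The data preparation described above (Runge-Kutta integration of the MG equation) and the NDEI measure can be reproduced with a short script. This is a hedged sketch: the delayed term is held fixed within each RK4 step, so the values are approximate rather than a bit-exact reproduction of the benchmark series.

```python
import numpy as np

def mackey_glass(n_points, tau=17, dt=0.1, x0=1.2):
    """Integrate dx/dt = 0.2*x(t-tau)/(1 + x(t-tau)**10) - 0.1*x(t)
    with 4th-order Runge-Kutta; returns values at integer time points.
    History x(t) = 0 for t < 0, x(0) = x0 (the setup used in the text)."""
    steps_per_unit = int(round(1.0 / dt))
    delay = tau * steps_per_unit
    total = n_points * steps_per_unit
    x = np.zeros(total + 1)
    x[0] = x0

    def deriv(xt, xlag):
        return 0.2 * xlag / (1.0 + xlag ** 10) - 0.1 * xt

    for k in range(total):
        xlag = x[k - delay] if k >= delay else 0.0  # lagged value, held fixed
        k1 = deriv(x[k], xlag)
        k2 = deriv(x[k] + 0.5 * dt * k1, xlag)
        k3 = deriv(x[k] + 0.5 * dt * k2, xlag)
        k4 = deriv(x[k] + dt * k3, xlag)
        x[k + 1] = x[k] + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return x[::steps_per_unit]  # samples at t = 0, 1, 2, ...

def ndei(pred, target):
    """Non-dimensional error index: RMSE divided by the std of the target."""
    rmse = np.sqrt(np.mean((np.asarray(pred) - np.asarray(target)) ** 2))
    return rmse / np.std(target)
```

From the generated series, the benchmark input vectors [x(t-18), x(t-12), x(t-6), x(t)] and targets x(t+6) can then be sliced directly.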
The TWNFI model performs better than the other models. This is a result of the
fine tuning of each local model in TWNFI for each tested example, derived according
to the TWNFI learning procedure. The finely tuned local models achieve a better
local generalisation. For each individual model, created for each test sample, input
variable weights w1q, w2q, w3q, and w4q are derived. Table 5.5 shows the average weight
for each variable across all test samples. The weights suggest a higher importance of
the first, second, and fourth input variables, but not of the third one.

Fig. 5.11 The Mackey-Glass case study data: the first half of the data (samples 118-617) is used as training data,
and the second half (samples 618-1117) as testing data. An individual TWNFI prediction model is created for
each test data vector, based on the nearest data vectors (from Song and Kasabov (2006)).



Table 5.5 Comparative analysis of test accuracy of several methods
on the MG series. Columns: testing NDEI and weights of the input
variables; the rows include a CCNN model and a sixth-order polynomial,
among other methods (values omitted).

Applications of TWNFI for Medical Decision Support

A real dataset from a medical institution is used here for experimental analysis
(Marshall et al., 2005). The dataset has 447 samples, collected at hospitals in New
Zealand and Australia. Each of the records includes six variables (inputs): age,
gender, serum creatinine, serum albumin, race, and blood urea nitrogen concentration,
and one output: the glomerular filtration rate value (GFR).
All experimental results reported here are based on ten cross-validation experiments
with the same model and parameters, and the results are averaged. In each
experiment 70% of the whole dataset is randomly selected as training data and
the other 30% as testing data.
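The transductive protocol used here (a separate model built around each test sample) can be sketched as below. The local model in this sketch is a distance-weighted linear fit standing in for the neuro-fuzzy model, and the function names and weighting scheme are illustrative assumptions.

```python
import numpy as np

def transductive_predict(x_new, X_train, y_train, k=10):
    """Predict for one sample by fitting a local model on its k nearest
    training vectors, weighted by distance (a linear least-squares
    stand-in for the TWNFI local model)."""
    d = np.linalg.norm(X_train - x_new, axis=1)
    idx = np.argsort(d)[:k]                    # k nearest neighbours
    Xk, yk, dk = X_train[idx], y_train[idx], d[idx]
    # distance-based weights v_i: closer samples count more
    v = (1.0 - dk / (dk.max() + 1e-12)) + 1e-3
    A = np.hstack([Xk, np.ones((k, 1))])       # affine local model
    W = np.diag(v)
    coef, *_ = np.linalg.lstsq(W @ A, W @ yk, rcond=None)
    return np.append(x_new, 1.0) @ coef
```

Because a fresh model is fitted per query, nothing needs retraining when new input-output pairs are appended to the training pool, which is the adaptive property claimed for transductive inference.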
For comparison, several well-known methods are applied to the same problem,
such as the MDRD logistic regression function widely used in renal clinical
practice (MDRD; see Marshall et al., 2005), an MLP neural network (Chapter 3),
an adaptive neural fuzzy inference system (ANFIS), and a dynamic evolving neural
fuzzy inference system (DENFIS; this chapter, and also Fig. 5.12), along with the
TWNFI. Results are presented in Table 5.6. The results include the number of fuzzy
rules (fuzzy models), or neurons in the hidden layer (MLP), the testing RMSE
(root mean square error), the testing MAE (mean absolute error), and the
weights of the input variables (the upper bound for the variable normalization
intervals).
Two experiments with TWNFI are conducted. The first one applies the transductive
NFI without WDN: all weight values are set to 1 and are not changed
during the learning. Another experiment employs the TWNFI learning algorithm.
The experimental results illustrate that the TWNFI method achieves a better
accuracy and also reveals the average importance of the input variables, represented
as the calculated weights.
For every patient sample, a personalised model is created and used to evaluate
the output value for the patient and to also estimate the importance of the variables
for this patient, as shown in Table 5.6. The TWNFI not only results in a better
accuracy for this patient, but also shows the importance of the variables for him
or her, which may result in a more efficient personalised treatment.

Fig. 5.12 The interface of an adaptive kidney function evaluation decision support system: GFR-ECOS (from
Marshall et al. (2005)).
The transductive neuro-fuzzy inference with weighted data normalization
method (TWNFI) performs a better local generalisation over new data, as it
develops an individual model for each data vector that takes into account the
location of the new input vector in the problem space. It is also an adaptive model,
in the sense that input-output pairs of data can be added to the dataset continuously
and immediately made available for the transductive inference of local models. This
type of modelling can be called personalised, and it is promising for medical decision
Table 5.6 Experimental results on GFR data using different methods (from Song et al. (2005)).
Columns: number of neurons or rules (6.8 on average for the evolving models), test RMSE,
test MAE, and weights of the input variables (values omitted).
Table 5.7 A personalised model for GFR prediction of a single patient derived with the use of the TWNFI
method for personalised modelling (from Song and Kasabov (2006)): the weights of the input variables
(TWNFI) and the desired GFR value (values omitted).
support systems. TWNFI creates a unique submodel for each data sample (see the
example in Table 5.7), and usually requires more computing time than an inductive
model, especially in the case of training and simulating on large datasets.
Further directions for research include: (1) optimisation of the TWNFI system
parameters, such as the optimal number of nearest neighbours; (2) transductive
feature selection; (3) applications of the TWNFI method to other decision support
systems, such as cardiovascular risk prognosis, biological process modelling,
and prediction based on gene expression microarray data.


Other Evolving Fuzzy Rule-Based Connectionist Systems

Type-2 Neuro-Fuzzy Systems

In all models and ECOS presented thus far in this part of the book, we assumed
the following.

A connection weight is associated with a single number.
Each neuron's activation has a numerical value.
The degree to which an input variable belongs to a fuzzy membership function
is a single numerical value.

In brief, each structural element of an ECOS is associated with a single numerical
value. The above assumptions may not be appropriate when processing complex
information, such as noisy information (with a random noise added to each data
item that may vary according to a random distribution function).
Here, the ECOS paradigm presented thus far is extended further to using higher-order
representations, for example the association of a function, rather than a single
value, with a connection, with a neuron, or with a membership function. Such an
extension can be superior when:

Data are time varying (e.g. changing dynamics of a chaotic process).

Noise is nonstationary.
Features are nonstationary (they change over time).
Dealing with inexact human knowledge that changes over time and varies across
humans. Humans, especially experts, change their minds during the process of
learning and understanding phenomena. For various people the same concepts
may have different meanings (e.g. the concept of a small salary points to different
salary scales in the United States, Bulgaria, and Sudan).
As a theoretical basis for type-2 ECOS, some principles from the theory of type-2
fuzzy systems can be used (for a detailed description see Mendel (2001)). The
type-2 fuzzy system theory is based on several concepts, as explained below.
Type-2 fuzzy sets are sets in which the membership degree of each element is itself
a fuzzy set; they were first introduced by Zadeh in 1967. This is in contrast to
type-1 fuzzy sets, where every element belongs to the set to a certain membership
degree that is represented as a single number between 0 and 1.
An example of a type-2 fuzzy membership function is given in Fig. 5.13.
Type-2 MF can be used to represent MF that may change over time, or MF that
are interpreted differently by different experts.
Type-2 fuzzy rules are fuzzy rules of the form: IF x is A' AND y is B' THEN
z is C', where A', B', and C' are type-2 fuzzy membership functions. These
rules deal with interval values rather than with single numerical values.
Type-2 fuzzy inference systems are fuzzy rule-based systems that use type-2
fuzzy rules. The inference process consists of the following steps.
Step 1. Fuzzification of input data with the use of type-2 fuzzy MF.
Step 2. Fuzzy inference (decision making) with the use of type-2 fuzzy rules. The
inference produces type-2 output MF.
Step 3. Defuzzification, which includes a step of type reduction that transforms
the type-2 output MF into a type-1 MF, and a step of calculating a
single output value from the type-1 MF.
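Step 1 can be illustrated for a triangular MF with an uncertain footprint. The parameterisation (fixed centre, left and right feet varying within given ranges) follows the figure below, but the function name and argument layout are assumptions for illustration.

```python
def type2_triangular_mf(x, centre, left_lo, left_hi, right_lo, right_hi):
    """Interval type-2 triangular MF: the centre is fixed, while the left
    foot varies in [left_lo, left_hi] and the right foot in
    [right_lo, right_hi].  Returns the (min, max) membership interval
    for a crisp input x."""
    def tri(x, left, right):
        # ordinary type-1 triangular membership
        if x <= left or x >= right:
            return 0.0
        if x <= centre:
            return (x - left) / (centre - left)
        return (right - x) / (right - centre)

    # evaluate the two extreme triangles of the footprint of uncertainty
    inner = tri(x, left_hi, right_lo)   # narrowest triangle
    outer = tri(x, left_lo, right_hi)   # widest triangle
    return min(inner, outer), max(inner, outer)
```

Fuzzification thus yields an interval [μ_min, μ_max] per input instead of a single degree, which is what the type-2 inference steps then propagate.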

Fig. 5.13 An example of a type-2 triangular fuzzy membership function (MF) of a variable x. Although the
centre of the MF is fixed, the left and the right sides may vary. A fuzzification procedure produces min-max
interval membership values, as illustrated for a given value of the variable x.



The move from type-1 systems to type-2 systems is equivalent to moving from a
2D space to a 3D space of representation and reasoning.

Type-2 EFuNN
In type-2 ECOS each connection has either a function or a numerical min-max
interval value associated with it. For an EFuNN, for example, that interval can
be formed from the value of the connection weight Wij, which represents the fuzzy
coordinate of the ith cluster (rule node) ri, plus/minus the radius of this node Ri:

Wij(Ri) = [Wij - Ri, Wij + Ri]

A type-2 EFuNN uses type-2 fuzzy MF defined a priori and modified during the
system operation. For each input vector x, the fuzzification procedure produces
interval membership values. For example, for an input variable xk, we can denote its
membership interval to a type-2 MF A' as:

μA'(xk) = [min μA'(xk), max μA'(xk)]
Having interval values in a connectionist system requires interval operations
to be applied as part of its inference procedure. The interval operations
defined for type-2 fuzzy systems (see Mendel (2001)) can be applied here. The
distance between the input interval μA'(xk) and the rule node ri interval Wij(Ri)
is defined as an interval operation, as is the activation of the rule node ri. The
activation of the output MF nodes is also calculated based on multiplication
of interval values by scalars, and on summation of interval values.
In Mendel (2001) several defuzzification operations over type-2 fuzzy MF are
presented that can be used in a type-2 EFuNN.


Interval-Based ECOS and Other Ways of Defining

Receptive Fields

Most of the ECOS methods presented in this part of the book used hyperspheres
to define a receptive field of a rule node r. For example, in the ECM and in the
EFuNN models, if a new data vector x falls in this receptive field it is associated
with this node. Otherwise, if the data vector is still within a maximum radius R
distance from the receptive field of r, the node r co-ordinates are adjusted and its
receptive field is increased, but if the data vector is beyond this maximum radius
there will be a new node created to accommodate the data vector x.
Instead of using a hypersphere and a maximum radius R that defines the same
distance for each variable from the input space, a rectangular receptive field can
be used with minimum and maximum interval values for each rule node and for
each variable that is derived through the evolving process. These values can be
made restricted with the use of global minimum and maximum values Min and
Max instead of using a single radius R as is the case in the hypersphere receptive
field. An example is given in Fig. 5.14.
Using intervals and hyperrectangular receptive fields allows for a better partitioning of the problem space and in many cases leads to better classification and
prediction results.
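The interval-based matching can be illustrated in a few lines; the node data structure and function name below are hypothetical, standing in for the ECOS rule-node bookkeeping.

```python
def find_node(x, nodes):
    """Match an input vector against hyperrectangular receptive fields.

    Each node holds per-variable [min, max] intervals; x activates the
    first node whose intervals contain every coordinate.
    (A sketch of the interval-based ECOS matching described above.)
    """
    for node in nodes:
        if all(lo <= xi <= hi for xi, (lo, hi) in zip(x, node['box'])):
            return node
    return None
```

Compared with a hypersphere test, each variable gets its own admissible range, which is exactly what allows the finer space partitioning mentioned above.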
Interval-based receptive fields can be used in both a fuzzy version and a nonfuzzy
version of ECOS. Figure 5.14 applies to both. The extracted rules from a trained



Fig. 5.14 Using intervals, hyperrectangles (solid lines), for defining receptive fields versus using hyperspheres
(the dotted circles), on a case study of two rule nodes r1 and r2 and two input variables x and y.

ECOS will be interval-based. For example, the rule that represents rule node r1
from Fig. 5.14 will read:
IF x is in the interval [x(r1)min, x(r1)max] AND y is in the interval [y(r1)min,
y(r1)max] THEN class will be (), with N(r1) examples associated with this rule.

Divide and Split Receptive Fields

Online modification of the receptive fields is crucial for successful online learning.
Sometimes a receptive field is created for a rule node, but within this receptive
field there is a new example that belongs to a different class. In this case the new
example will be assigned a new node that divides the previous receptive field into
several parts. Each of these parts will be assigned new nodes that have the
same class label as the mother node. This approach is very efficient when applied
for online classification in complex multiclass distribution spaces.
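One possible, deliberately simplified, reading of the divide-and-split step in code; the node structure, the Chebyshev distance test, and the halving of the radius are assumptions for illustration, not the book's exact procedure.

```python
def insert_example(x, label, nodes, r_init=0.5, shrink=0.5):
    """Divide-and-split sketch: if a new example of a different class
    falls inside an existing node's receptive field, allocate a new,
    smaller node for it, effectively partitioning the old field."""
    for node in nodes:
        # Chebyshev distance: inside the hyperrectangle of width 2*radius
        dist = max(abs(a - b) for a, b in zip(x, node['centre']))
        if dist <= node['radius']:
            if node['label'] == label:
                return node                 # absorbed by same-class node
            # different class inside the field -> split off a new node
            new = {'centre': x, 'radius': shrink * node['radius'],
                   'label': label}
            nodes.append(new)
            return new
    new = {'centre': x, 'radius': r_init, 'label': label}
    nodes.append(new)
    return new
```

The full method additionally re-labels the surrounding sub-regions with the mother node's class, which this sketch leaves implicit.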
Figure 5.15 shows an example where a new class example divides the receptive
field of the existing node from (a), drawn in a 2D space, into six new regions
represented by six nodes.

Here a classical classification benchmark problem was used: the classification of
Iris flower samples into three classes (Setosa, Versicolor, and Virginica), based on
four features (sepal length, petal length, sepal width, and petal width). A dataset
of 150 examples, 50 examples of each of the three classes, was used (see Fisher
(1936) and also Duda and Hart (1973)). Three types of EFuNNs were evolved
through one-pass learning on the whole dataset of 150 examples. Then each EFuNN
was tested for classification results on the same dataset. The EFuNN with the




Fig. 5.15 (a) An existing receptive field of a rule node in a 2D input space; (b) a new class example divides
the receptive field of the existing node from (a) into six new regions represented by six nodes.

hyperspherical receptive field produced three misclassified examples, the EFuNN

with hyperrectangular fields produced two errors, and the EFuNN with the divide
and split receptive fields resulted in no misclassified examples.


Evolving Neuro-Fuzzy System: EFS

In Angelov (2002) an evolving neuro-fuzzy system framework, called EFS, is
introduced; it is represented as a connectionist architecture in Fig. 5.16. The
framework is used for adaptive control systems and other applications. A brief
presentation of the structure and the learning algorithm of EFS, adapted from
Angelov (2002), is given here.
The first layer consists of neurons corresponding to the membership functions
of fuzzy sets. The second layer represents the antecedent parts of the fuzzy rules.

Fig. 5.16 Neuro-fuzzy interpretation of the evolving fuzzy system (Angelov, 2002). The structure is not
predefined and fixed; rather it evolves from scratch by learning from data simultaneously with the parameter
adjustment/adaptation (reproduced with permission).

It takes as inputs the membership function values and gives as output the firing
level of the ith rule. The third layer of the network takes as inputs the firing levels
of the respective rule and gives as output the normalized firing level. The fourth
layer aggregates the antecedent and the consequent part that represents the local
subsystems (singletons or hyperplanes). Finally, the last, fifth, layer forms the total
output of the evolving fuzzy system, performing a weighted summation of local
multiple-model outputs.
There are two main algorithms for training an EFS. The first one, called eTS,
is based on combining unsupervised learning with respect to the antecedent
part of the model and supervised learning in terms of the consequent parameters,
where the fuzzy rules are of the Takagi-Sugeno type, similar to the
DENFIS training algorithm presented in Section 5.3 (Kasabov and Song, 2002).
An unsupervised clustering algorithm is employed to continuously analyze the
input-output data streams and identify emerging new data structures. It clusters
the input-output space into N fuzzy regions. The clusters define a fuzzy
partitioning of the input space into subsets that are obtained by projecting the
cluster centres onto the space of the input (antecedent) variables. The learning
algorithm also assigns to each of the clusters a linear subsystem. The eTS learning
algorithm (Angelov, 2002) is a density-based clustering method that stems from the
Mountain clustering method and its extension, Subtractive clustering.
The eTS clustering is based on the recursive calculation of the potential Pt(zt)
of each data point zt in the input-output space (zt ∈ R^(n+m)):

Pt(zt) = (t - 1) / [ (t - 1)(at + 1) - 2ct + bt ]

where

at = Σ(j=1..n+m) (zt^j)²,  bt = Σ(i=1..t-1) Σ(j=1..n+m) (zi^j)²,
ct = Σ(j=1..n+m) zt^j ft^j,  ft^j = Σ(i=1..t-1) zi^j

and the potential of the cluster centres z* is updated recursively:

Pt(z*) = (t - 1) Pt-1(z*) / [ t - 2 + Pt-1(z*) + Pt-1(z*) ‖z* - zt-1‖² ]
Existing cluster centres are updated only if the new data point zt is substantially
different from the existing clusters, i.e. when its potential Pt(zt) brings a spatial
innovation with respect to the already existing centres.
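The recursive potential can be tracked with running sums only, so no past data points need to be stored. The class below is an illustrative reformulation of the recursion, not Angelov's original code; the potential of the very first point is taken as 1 by convention.

```python
import numpy as np

class PotentialTracker:
    """Recursive potential of new data points, following the eTS
    formulas above; only the running sums b_t and f_t are kept."""
    def __init__(self, dim):
        self.t = 0
        self.b = 0.0                  # sum of squared norms of past points
        self.f = np.zeros(dim)        # per-coordinate sums of past points

    def potential(self, z):
        """Return P_t(z_t) for the new point z, then absorb it."""
        self.t += 1
        if self.t == 1:
            p = 1.0                   # first point: maximal potential
        else:
            a = float(z @ z)          # a_t
            c = float(z @ self.f)     # c_t = sum_j z^j * f^j
            p = (self.t - 1) / ((self.t - 1) * (a + 1) - 2 * c + self.b)
        self.b += float(z @ z)
        self.f += z
        return p
```

This matches the equivalent closed form P_t(z) = 1 / (1 + mean of squared distances to the past points), so repeating an existing point keeps the potential at its maximum.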
The second algorithm for training the EFS structure with Zadeh-Mamdani types of
fuzzy rules is called fSPC (learning through distance-based output-input clustering;
Angelov (2002)). The fSPC algorithm is inspired by statistical process control
(SPC), a method for process variability monitoring. The SPC procedure naturally
clusters the system output data into granules (clusters) that relate to the same
process control conditions and are characterized by similar system behaviour.
The output granules induce corresponding granules (clusters) in the input domain
and define the parameters of the rule antecedents:

μclose = exp( -0.5 (x - x̄i)^T Cxi^(-1) (x - x̄i) )

where x̄i is the vector of input means and Cxi is the input covariance matrix.
For outputs that belong to an output granule, i.e. satisfy condition (5.31), the
rule parameters associated with the respective granule are recursively updated
through exponential smoothing.


Evolving Self-Organising Fuzzy Cerebellar Model

Articulation Controller

The cerebellar model articulation controller (CMAC) is an associative memory

ANN that is inspired by some principles of the information processing in the part
of the brain called cerebellum (Albus, 1975).
In Nguyen et al. (2006) an evolving self-organising fuzzy cerebellar model
articulation controller (ESOF-CMAC) is proposed. The method applies an
unsupervised clustering algorithm similar to ECM (see Chapter 2), and introduces a
novel method for evolving the fuzzy membership functions of the input variables.
The connectionist structure implements Zadeh-Mamdani fuzzy rules, similar to
the EFuNN structure. A good performance is achieved with the use of cluster
aggregation and fuzzy membership function adaptation in an evolving mode of
operation.




Choose a time-series prediction problem and a dataset for it.

Select a set of features for the model.
Create an inductive ANFIS model and validate its accuracy.
Create an inductive HyFIS model and validate its accuracy.
Create an inductive DENFIS model and validate its accuracy.
Create a transductive TWNFI model and validate its accuracy.
Answer the following questions.
(a) Which of the above models are adaptive to new data, and under what
conditions and constraints?
(b) Which models allow for knowledge extraction, and what type of knowledge
can be acquired from them?


Summary and Open Problems

The chapter presents evolving neuro-fuzzy inference systems for both off-line and
online learning from data, rule insertion, rule extraction, and inference over these
rules. ANFIS is not flexible in terms of changing the number of membership
functions and rules over time, according to the incoming data. HyFIS and DENFIS
can be used as both off-line and online knowledge-based learning systems. Whereas
HyFIS and EFuNN use Zadeh-Mamdani simple fuzzy rules, DENFIS exploits
Takagi-Sugeno first-order fuzzy rules. Each of the above systems has its strengths
and weaknesses when applied to different tasks.
Some major issues need to be addressed in a further study:
1. How to choose dynamically the best inference method in an ECOS for the
current time interval, depending on the data flow distribution and on the task
at hand?
2. What type of fuzzy rules and receptive fields are most appropriate for a given
problem and a given problem space? Can several types of rules be used in one
system, complementing each other?
3. How to test the accuracy of a neuro-fuzzy inference system in an online mode
if data come from an open space?
4. How to build transductive, personalised neuro-fuzzy inference systems without
keeping all data samples in a database, but only prototypes and transductive
systems already built in the past?


Further Reading

More details on issues related to this chapter can be found as follows.

Fuzzy Sets and Systems (Zadeh, 1965; Terano et al., 1992; Dubois and Prade,
1980, 1988; Kerr, 1991; Bezdek, 1987; Sugeno, 1985)



Zadeh-Mamdani Fuzzy Rules (Zadeh, 1965; Mamdani, 1977)

Takagi-Sugeno Fuzzy Rules (Takagi and Sugeno, 1985; Jang, 1995)
Type-2 Fuzzy Rules and Fuzzy Inference Systems (Mendel, 2001)
Neuro-fuzzy Inference Systems and Fuzzy Neural Networks (Yamakawa et al.,
1992; Furuhashi, 1993; Hauptmann and Heesche, 1995; Lin and Lee, 1996;
Kasabov, 1996; Feldcamp, 1992; Gupta, 1992; Gupta and Rao, 1994)
ANFIS (Jang, 1993)
HyFIS (Kim and Kasabov, 1999)
DENFIS (Kasabov and Song, 2002)
Rule Extraction from Neuro-fuzzy Systems (Hayashi, 1991; Mitra and Hayashi,
2001; Duch et al., 1997)
Neuro-fuzzy Systems as Universal Approximators (Hayashi, 1991)
Evolving Fuzzy Systems (Angelov, 2002)
TNFI: Transductive Neuro-fuzzy Inference Systems (Song and Kasabov, 2005)
TWNFI: Transductive Weighted Neuro-fuzzy Inference Systems (Song and
Kasabov, 2006)

6. Population-Generation-Based
Methods: Evolutionary Computation

Nature's diversity of species is tremendous. How does mankind evolve in the
enormous variety of variants? In other words, how does nature solve the
optimisation problem of perfecting mankind? An answer to this question may be found
in Charles Darwin's theory of evolution (1858). Evolution is concerned with the
development of generations of populations of individuals governed by fitness
criteria. But this process is much more complex, as individuals, in addition to
what nature has defined for them, develop in their own way: they learn and evolve
during their lifetime. This chapter is an attempt to apply the principle of
nature-nurture duality to evolving connectionist systems. While improving its
performance through adaptive learning, an individual evolving connectionist system
(ECOS) can improve (optimise) its parameters and features through evolutionary
computation (EC). The chapter is presented in the following sections.

A brief introduction to EC
Genetic algorithms (GA) and evolutionary strategies (ES)
Traditional use of EC as learning and optimisation techniques for ANN
EC for parameter and feature optimisation of adaptive, local learning models
EC for parameter and feature optimisation of transductive, personalised models
Particle swarm intelligence
Artificial life systems
Summary and open questions
Further reading


A Brief Introduction to EC

Charles Darwin favoured the Mendelian heredity explanation, which states that
features are passed from generation to generation.
In the early 1800s Jean-Baptiste Lamarck had expounded the view that changes
in individuals over the course of their lives were passed on to their progeny. This
perspective was adopted by Herbert Spencer and became an established view
alongside Darwin's theory of evolution.
The evolution in nature inspired computational methods called evolutionary
computation (EC). ECs are stochastic search methods that mimic the behaviour of



natural biological evolution. They differ from traditional optimisation techniques

in that they involve a search from a population of solutions, not from a single
point, and carry this search over generations. So, EC methods are concerned
with population-based search and optimisation of individual systems through
generations of populations (Goldberg, 1989; Koza, 1992; Holland, 1992, 1998).
Several different types of evolutionary methods have been developed independently.
These include genetic programming (GP), which evolves programs; evolutionary
programming (EP), which focuses on optimising continuous functions without
recombination; evolutionary strategies (ES), which focus on optimising
continuous functions with recombination; and genetic algorithms (GAs), which
focus on optimising general combinatorial problems, the latter being the most
popular technique. EC has been applied thus far to the optimisation of different
structures and processes, one of them being the connectionist structures and
connectionist learning processes (Fogel et al., 1990; Yao, 1993). Methods of EC
include in principle two stages (see Fig. 6.1):
1. A stage of creating new population of individuals
2. A stage of development of the individual systems, so that a system develops
and evolves through interaction with the environment, which is also based on
the genetic material embodied in the system
The process of individual (internal) development has been ignored or neglected
in many EC methods as insignificant from the point of view of the long process
of generating hundreds of generations, each of them containing hundreds and
thousands of individuals.
But my personal concern as an individual and also as the author of the book
is that it matters to me not only how much I have contributed to the improvement
of the genetic code of the population that is going to live, possibly, 2,000,000 years
after me, but also how I can improve myself during my lifetime, and how I evolve
as an individual in a particular environment, making the best out of my genetic
ECOS deal with the process of interactive off-line or online adaptive learning
of a single system that evolves from incoming data. The system can either have

Fig. 6.1 A schematic algorithmic diagram of evolutionary computation (EC).



its parameters (genes) predefined, or it can be self-optimised during the learning

process starting from some initial values. ECOS should also be able to improve
their performance and adapt better to a changing environment through evolution,
i.e. through population-based improvement over generations.
There are several ways in which EC and ECOS can be interlinked. For example,
it is possible to use EC to optimise the parameters of an ECOS at a certain time of
its operation, or to use the methods of ECOS for the development of the individual
systems (individuals) as part of the global EC process.
Before we discuss methods for using EC for the optimisation of connectionist
systems, a short introduction to two of the most popular EC techniques genetic
algorithms and evolutionary strategies is given below.


Genetic Algorithms and Evolutionary Strategies

Genetic Algorithms

Genetic algorithms were introduced for the first time in the work of John Holland
in 1975. They were further developed by him and other researchers (Holland,
1992, 1998; Goldberg, 1989; Koza, 1992). The most important terms used in GA
are analogous to the terms used to explain the evolution processes. They are:
Gene: A basic unit that defines a certain characteristic (property) of an individual
Chromosome: A string of genes; used to represent an individual or a possible
solution to a problem in the population space
Crossover (mating) operation: Substrings of different individuals are taken and
new strings (offspring) are produced
Mutation: Random change of a gene in a chromosome
Fitness (goodness) function: A criterion which evaluates how good each
individual is
Selection: A procedure of choosing a part of the population which will continue
the process of searching for the best solution, while the other individuals die
A simple genetic algorithm consists of steps shown in Fig. 6.2. The process over
time has been stretched in space. Whereas Fig. 6.2 shows graphically how a GA
searches for the best solution in the solution space, Fig. 6.3 gives an outline of
the GA.
When using the GA method for a complex multioption optimisation problem,
there is no need for in-depth problem knowledge, nor is there a need for many
data examples stored beforehand. What is needed here is merely a fitness or
goodness criterion for the selection of the most promising individuals (they may
be partial solutions to the problem). This criterion may require a mutation as
well, which is a heuristic approach of the trial-and-error type. This implies keeping
(recording) the best solutions at each stage.
Many complex optimisation problems find their way to a solution through
genetic algorithms. Such problems are, for example, the travelling salesman
problem (TSP): finding the cheapest way to visit n towns without visiting a town








Fig. 6.2 A schematic diagram of how a genetic algorithm (GA) works in time (from Kasabov (1996), MIT
Press, reproduced with permission).

1. Initialize a population of possible solutions

2. WHILE a criterion for termination is not reached DO {
2a. Crossover two specimens ("mother" and "father") and generate new
individuals;
2b. Select the most promising ones, according to a fitness function;
2c. Development (if at all);
2d. Possible mutation (rare) }

Fig. 6.3 A general representation of the GA (from Kasabov (1996), MIT Press, reproduced with permission).
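The outline in Fig. 6.3 maps onto a minimal GA in a few dozen lines. The population size, rates, one-point crossover, and the one-max fitness used in the example are illustrative choices, not prescribed by the text.

```python
import random

def run_ga(fitness, genome_len, pop_size=30, generations=50,
           p_mut=0.01, seed=0):
    """Minimal GA following the outline above: crossover of two
    specimens, fitness-based selection, and rare mutation.
    Chromosomes are bit lists; `fitness` maps a chromosome to a
    number to maximise."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            mother, father = rng.sample(pop, 2)
            cut = rng.randrange(1, genome_len)       # one-point crossover
            child = mother[:cut] + father[cut:]
            if rng.random() < p_mut:                 # rare mutation
                j = rng.randrange(genome_len)
                child[j] = 1 - child[j]
            offspring.append(child)
        # select the fittest half of parents + offspring (keeps the best)
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

best = run_ga(sum, genome_len=16)   # one-max: maximise the number of 1s
```

Because the selection step keeps the best of parents plus offspring, the best-so-far fitness can never decrease across generations.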

twice; the min-cut problem: cutting a graph with minimum links between the
cut parts; adaptive control; applied physics problems; optimisation of the
parameters of complex computational models; optimisation of neural network
architectures (Fogel et al., 1990); and finding fuzzy rules and membership
functions (Furuhashi et al., 1994).
The main issues in using genetic algorithms relate to the choice of genetic
operations (crossover, selection, mutation). In the case of the travelling salesman
problem, the crossover operation can be merging different parts of two possible
roads (the "mother" and "father" roads) until new usable roads are obtained. The
criterion for the choice of the most prospective ones is minimum length (or cost).
A GA offers a great deal of parallelism: each branch of the search tree for a
best individual can be utilised in parallel with the others. That allows for an easy
realisation on parallel architectures. GAs are search heuristics for the best instance
in the space of all possible instances. A GA model requires the specification of the
following features.



Encoding scheme: How to encode the problem in terms of the GA notation: what
variables to choose as genes, how to construct the chromosomes, etc.
Population size: How many possible solutions should be kept in a population
for their performance to be further evaluated.
Crossover operations: How to combine old individuals and produce new, more
prospective ones.
Mutation heuristic: When and how to apply mutation.
In short, the major characteristics of a GA are the following. GAs are heuristic
methods for search and optimisation. In contrast to exhaustive search
algorithms, GAs do not evaluate all variants in order to select the best one.
Therefore they may not find the perfect solution, but rather one close to it
within the given time limits. But nature itself is imperfect too (partly
because the criteria for perfection keep changing), and what seems close to
perfection according to one goodness criterion may be far from it
according to another.
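The loop of Fig. 6.3 can be sketched in Python. This is a minimal illustrative binary GA, not code from the book; the truncation-style selection and the one-max fitness used in the example are assumptions made for the sketch.

```python
import random

def genetic_algorithm(fitness, chrom_len, pop_size=20, generations=50,
                      mutation_rate=0.01):
    """Minimal binary GA following the loop of Fig. 6.3 (illustrative sketch)."""
    # 1. Initialize population of possible solutions
    pop = [[random.randint(0, 1) for _ in range(chrom_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Select the most promising individuals by fitness (truncation selection here)
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # Crossover pairs of parents ("mother and father") to refill the population
        children = []
        while len(parents) + len(children) < pop_size:
            m, f = random.sample(parents, 2)
            point = random.randint(1, chrom_len - 1)
            children.append(m[:point] + f[point:])
        # Rare mutation: flip each bit of a child with a small probability
        for c in children:
            for i in range(chrom_len):
                if random.random() < mutation_rate:
                    c[i] = 1 - c[i]
        pop = parents + children
    return max(pop, key=fitness)

# Example: maximise the number of 1-bits ("one-max")
best = genetic_algorithm(fitness=sum, chrom_len=16)
```

The parents of each generation are carried over unchanged, so the best solution found so far is never lost.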


Selection, Crossover, and Mutation Operators in EC

The theory of GA and the other EC techniques includes different methods for
selection of individuals from a population, different crossover techniques, and
different mutation techniques.
Selection is based on fitness and can employ several strategies. One of them is
proportional fitness: if A is twice as fit as B, A has twice the probability of
being selected. This is implemented as roulette wheel selection, which gives chances to
individuals according to their fitness evaluation as shown in an example in Fig. 6.4.
Other selection techniques include tournament selection (e.g. at every time of
selection the roulette wheel is turned twice, and the individual with the highest fitness
is selected), rank ordering, and so on (Fogel et al., 1990). An important feature of the
selection procedure is that fitter individuals are more likely to be selected.
The selection procedure may also involve keeping the best individuals from
previous generations (if this principle was used by nature, Leonardo Da Vinci
would still be alive, as he was one of the greatest artists ever, presumably having
the best artistic genes). This operation is called elitism.
After the best individuals are selected from a population, a crossover operation is
applied between these individuals. The crossover operator defines how individuals
(e.g. mother and father) exchange genes when creating the offspring. Different
crossover operations can be used, such as one-point crossover (Fig. 6.5), two-point
crossover, etc.
























Fig. 6.4 An example of a roulette selection strategy. Each of the ten individuals has its chance to survive (to
be selected for reproduction) based on its evaluated fitness.





Fig. 6.5 One-point crossover operation.

Mutation can be performed in several ways, e.g.

For a binary chromosome, just randomly flip a bit (a gene allele).
For a more complex chromosome structure, randomly select a site, delete the
structure associated with this site, and randomly create a new substructure.
Some EC methods use only mutation and no crossover (asexual reproduction).
Normally, however, mutation is used to search a local search space, by allowing
small changes in the genotype (and, it is hoped, therefore in the phenotype), as is
done in the evolutionary strategies.
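One-point crossover (Fig. 6.5) and binary bit-flip mutation can be sketched as follows (illustrative helpers, not the book's code):

```python
import random

def one_point_crossover(mother, father):
    """Cut both parents at the same random point and swap the tails (Fig. 6.5)."""
    point = random.randint(1, len(mother) - 1)
    child1 = mother[:point] + father[point:]
    child2 = father[:point] + mother[point:]
    return child1, child2

def flip_mutation(chromosome, rate=0.01):
    """For a binary chromosome, randomly flip each bit (gene allele)
    with a small independent probability."""
    return [1 - g if random.random() < rate else g for g in chromosome]
```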


Evolutionary Strategies (ES)

Another EC technique is called evolutionary strategies (ES). These techniques use
only one chromosome and a mutation operation, along with a fitness criterion, to
navigate in the solution (chromosomal) space.
In the reproduction phase, the current population, called the parent population,
is processed by a set of evolutionary operators to create a new population called
the offspring population. The evolutionary operators include two main operators:
mutation and recombination; both imitate the functions of their biological counterparts. Mutation causes independent perturbation to a parent to form an offspring
and is used for diversifying the search. It is an asexual operator because it involves
only one parent. In GA, mutation flips each binary bit of a parent string at a small,
independent probability pm (which is typically in the range [0.001, 0.01]) to create
an offspring. In ES, mutation is the addition of a zero-mean Gaussian random
number to a parent individual to create the offspring. Let sPA and sOF denote the
parent and offspring vectors; they are related through the Gaussian mutation

sOF = sPA + z,   z ~ N(0, s)

where N(a, s) represents a normal (Gaussian) distribution with mean a and
covariance s, and ~ denotes sampling from the corresponding distribution.
ES uses mutation as the main search operator.
The selection operator is probabilistic in GA and deterministic in ES. Many
heuristic designs, such as the rank-based selection that assigns to the individuals a
survival probability proportional (or exponentially proportional) to their ranking,
have also been studied. The selected individuals then become the new generation



of parents for reproduction. The entire evolutionary process iterates until some
stopping criteria are met. The process is essentially a Markov chain; i.e. the
outcome of one generation depends only on the last. It has been shown that under
certain design criteria of the evolutionary operators and selection operator, the
average fitness of the population increases and the probability of discovering the
global optimum tends towards unity. The search could, however, be lengthy.
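One generation of the Gaussian mutation sOF = sPA + z combined with deterministic truncation selection over the joint parent and offspring pool might look like this (a sketch assuming a fitness that is to be minimised; the sphere function in the example is illustrative):

```python
import random

def es_step(parents, fitness, offspring_per_parent=5, sigma=0.1):
    """One (mu + lambda)-ES generation: Gaussian mutation, then deterministic
    truncation selection over the joint pool of parents and offspring."""
    offspring = []
    for s_pa in parents:
        for _ in range(offspring_per_parent):
            # s_OF = s_PA + z,  z ~ N(0, sigma^2) per component
            offspring.append([x + random.gauss(0.0, sigma) for x in s_pa])
    pool = parents + offspring
    pool.sort(key=fitness)  # assuming fitness is to be minimised
    return pool[:len(parents)]

# Example: minimise the sphere function f(s) = sum(s_i^2)
parents = [[1.0, -1.0]]
for _ in range(100):
    parents = es_step(parents, fitness=lambda s: sum(x * x for x in s))
```

Because the parents compete in the pool, the best fitness never worsens from one generation to the next, matching the monotone improvement described in the text.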


Traditional Use of EC for Learning and Optimisation in ANN

Before we present in the next section the use of EC for the optimisation of ECOS,
here a brief review of different approaches for using EC for the optimisation of
MLP and other traditional, not evolving, ANN models is presented. Such reviews
have been published by several authors (Yao, 1993; Fogel et al., 1990; Watts and
Kasabov, 1998, 1999).


ANN Topology Determination by GA

The number of layers within a connectionist structure, e.g. MLP, and the number
of nodes within each layer can often have a significant effect upon the performance
of the network. Too many nodes in the hidden layers of the network may cause
the network to overfit the training data, whereas too few may reduce its ability to generalise.
In Schiffman et al. (1993) the length of the chromosome determined the number
of nodes present, as well as the connectivity of the nodes. This approach to ANN
design was tested in the above paper on a medical classification problem, that of
identifying thyroid disorders, and provided networks that were both smaller and
more accurate than manually designed ANNs.


Selection of ANN Parameters by GA

In addition to the selection of the ANN topology, it is also possible to select
the training parameters for the network. This has been investigated by many
authors: see for example Choi and Bluff (1995), who used a GA to select the
training parameters for the backpropagation training of an MLP. In this work
the chromosome encoded the learning rate, momentum, sigmoid parameter, and
number of training epochs to be used for the backpropagation training of the
network. This technique was tested with several different datasets, including bottle
classification data, where a glass bottle is classified as either being suitable for reuse
or suitable for recycling, and breast cancer data, which classifies tissue samples as
malignant or benign. For each of the test datasets, the genetically derived network
outperformed those networks whose control parameters were manually set, often
by a significant margin.




Training of ANN via GA

For some problems, it is actually more efficient to abandon the more conventional
training algorithms, such as backpropagation, and train the ANN via GA. For some
problems GA training may be the only way in which convergence can be reached.
GA-based training of ANNs has been extensively investigated by Hugo de Garis
(1989, 1993). The networks used in these experiments were not of the MLP variety,
but were instead fully self-connected synchronous networks. These GenNets were
used to attempt tasks that were time-dependent, such as controlling the walking
action of a pair of stick legs. With this problem the inputs to the network were the
current angle and angular velocity of the hip and knee joints of each leg. The
outputs were the future angular acceleration of each of the joints. Both the inputs
and outputs for this problem were time-dependent.
Conventional training methods proved to be incapable of solving this problem,
whereas GenNets solved it very easily. Other applications of GenNets involved
creating production rule GenNets that duplicated the function of a production
rule system. These were then inserted into a simulated artificial insect and used to
process inputs from sensor GenNets. The outputs of the production rule GenNets
sent signals to other GenNets to execute various actions, e.g. eat, flee, or mate.
A similarly structured recurrent network was used in Fukuda et al. (1997) to
attempt a similar problem. The application area in this research was using the
genetically trained network to control a physical biped robot. The results gained
from this approach were quite impressive. Not only was the robot able to walk
along flat and sloped surfaces, it was able to generalise its behaviour to deal with
surfaces it had not encountered in training. Comparison of the results gained from
the genetically trained network with those from networks trained by other methods
showed that not only did the genetically trained network train more efficiently
than the others, it was also able to perform much better than the others.


Neuro-Fuzzy Genetic Systems

GAs have been used to optimise membership functions and other parameters in
neuro-fuzzy systems (Furuhashi et al., 1994; Watts and Kasabov, 1998, 1999). An
example is the FuNN fuzzy neural network (Kasabov et al., 1997). It is essentially
an ANN that has semantically meaningful nodes and weights that represent input
and output variables, fuzzy membership functions, and fuzzy rules. Tuning the
membership functions in FuNNs is a technique intended to improve an already
trained network. By slightly shifting the centres of the MFs the overall performance of the network can be improved. Because of the number of MFs in even
a moderately sized network, and the degree of variation in the magnitude of the
changes that each MF may require, a GA is the most efficient means of achieving
the optimisation.
Much of the flexibility of the FuNN model is due to the large number of design
parameters available in creating a FuNN. Each input and output may have an
arbitrary number of membership functions attached. The number of combinations
that these options yield is huge, making it quite impractical to search for the



optimal configuration of the FuNN combinatorially. Using a GA is one method of

solving this difficult problem.
Optimisation of FuNN MFs involves applying small delta values to each
of the input and output membership functions. Optimisation of conventional
fuzzy systems by encoding these deltas into a GA structure has been investigated (Furuhashi et al., 1994) and has been shown to be more effective than
manual tuning. The initial GA population is randomly initialised except for one
chromosome, which has all the encoded delta values set to zero to represent the
initial network. This, along with elitism, ensures that the network can only either
improve in performance or stay the same, and never degrade in performance. To
evaluate each individual, the encoded delta values are added to the centre of each
membership function and the recall error over the training datasets is calculated.
In situations where a small number of examples of one class could be overwhelmed
by large numbers of other classes, the average recall error is taken over several
datasets, with each dataset containing examples from one class. The fitness f of an
individual is calculated by the following formula:

f = 1/e

where e is the average overall error on the test datasets.
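The class-balanced fitness described above can be sketched as follows (the function name is illustrative):

```python
def fitness_from_class_errors(recall_errors):
    """f = 1/e, where e is the recall error averaged over per-class datasets,
    so that a rare class is not overwhelmed by the frequent ones."""
    e = sum(recall_errors) / len(recall_errors)
    return 1.0 / e if e > 0 else float('inf')
```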


Evolutionary Neuro-Genetic Systems and Cellular Automata

Other research methods combine GA and cellular automata to grow cells
(neurons) in a cellular automaton (de Garis, 1993). Here a cellular automaton
(CA) is a simplified simulation of a biological cell, with a finite number of states,
whose behaviour is governed by rules that determine its future state based upon
its current state and the current state of its neighbours. CAs have been used to
model such phenomena as the flocking behaviour of birds and the dynamics of
gas molecules. The advantage of CAs is their ability to produce seemingly complex
dynamic behaviour from a few simple rules.
Applying EC methods in an off-line mode on a closed and compact problem
space of possible solutions, through generations of populations of possible
solutions from this space, is time-consuming and not practical for real-time online
applications. The next sections suggest methods that overcome this problem and
optimise the parameters of ECOS.


EC for Parameter and Feature Optimisation of ECOS

When an ECOS is evolved from a stream of data we can expect that for a better
performance of the system, along with the structural and the functional evolution
of the model, its different parameters should also evolve. This problem has been
discussed in numerous papers and methods have been introduced (Watts and
Kasabov, 2001, 2002; Kasabov et al., 2003; Minku and Ludemir, 2005).
In this and in the next two sections, we present some other methods for the
optimisation of the parameters and the structures of ECOS.




ES for Incremental Learning Parameter Optimisation

Here, a method for adaptive incremental (possibly online) optimisation of the
parameters of a learning model using ES is presented on a simple example of
optimising the learning parameters l1 and l2 of the two weight connection matrices
W1 and W2 respectively, in an EFuNN ECOS architecture (see Chapter 3; from
Chan and Kasabov (2004)).
Each individual sk = (s1k, s2k) is a two-vector solution for l1 and l2. Because
there are only two parameters to optimise, the ES method requires only a small
population and a short evolutionary time. In this case, we use one parent and ten
offspring and a small number of generations of genmax = 20. We set the initial
values of s to (0.2,0.2) in the first run and use the previous best solution as initial
values for subsequent runs. Using a nonrandomised initial population encourages
a more localised optimisation and hence speeds up convergence. We use simple
Gaussian mutation with standard deviation 0.1 (empirically determined) for both
parameters to generate new points. For selection, we use the high selection pressure
scheme to accelerate convergence, which picks the best of the joint pool of parents
and offspring to be the next-generation parents.
The fitness function (or the optimisation objective function) is the prediction
error over the last nlast data, generated by using the EFuNN model at (t − nlast) to
perform incremental learning and prediction over the last nlast data. The smaller
nlast is, the faster the learning rates adapt, and vice versa. Because the effect of
changing the learning rates is usually not expressed immediately but after a longer
period, the fitness function can be noisy and inaccurate if nlast is too small. In this
work we set nlast = 50. The overall algorithm has five steps (Fig. 6.6).
To verify the effectiveness of the proposed online ES, we first train EFuNN (see
Chapter 3) with the first 500 data of the Mackey–Glass series (using x0 = 1.2 and
τ = 17) to obtain a stable model, and then apply online ES to optimise the learning

1. Population Initialisation. Reset generation counter gen. Initialise a population of p parent EFuNNs with the best estimate
(l1, l2) for the two learning rates of EFuNN:
SPA(k) = (l1, l2), k = 1, 2, …, p
2. Reproduction. Randomly select one of the p parents, SPA(r), to undergo Gaussian mutation, defined as a normal distribution
function N, to produce a new offspring:

SOF = SPA + zi,   where zi ~ N(0, σ²), i = 1, 2, …

3. Fitness Evaluation. Apply each of the offspring EFuNN models to perform incremental learning and prediction using data in
the interval [t − tlast, t], where t is the current time moment of the learned temporal process, and tlast is the last moment
of the parameter measurement and optimisation in the past. Set the respective prediction error as fitness.
4. Selection. Perform selection.
5. Termination. Increment the number of generations (gen). Stop if gen ≥ genmax or if no fitness improvement has been
recorded over the past 3 generations; otherwise go to step (2).

Fig. 6.6 An ES algorithm for online optimisation of two parameters of an ECOS, in this case the learning rates
l1 and l2 of an EFuNN (see Chapter 3).
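The five steps of Fig. 6.6 can be sketched as follows. The `make_model` factory and its `incremental_rmse` method are hypothetical stand-ins for building an EFuNN with given learning rates and measuring its prediction error over the last data window; they are not the book's API.

```python
import random

def online_es_learning_rates(make_model, data_window, l_init=(0.2, 0.2),
                             n_offspring=10, sigma=0.1, gen_max=20):
    """Sketch of the algorithm in Fig. 6.6 as a (1 + 10)-ES.
    make_model(l1, l2) is a hypothetical factory returning an object with an
    incremental_rmse(data) method; both names are assumptions for this sketch."""
    best = list(l_init)
    best_err = make_model(*best).incremental_rmse(data_window)
    for _ in range(gen_max):
        # Reproduction: Gaussian mutation of the parent (learning rates kept >= 0)
        pool = [best] + [[max(0.0, l + random.gauss(0.0, sigma)) for l in best]
                         for _ in range(n_offspring)]
        # Fitness evaluation: prediction error over the recent data window
        errors = [make_model(*s).incremental_rmse(data_window) for s in pool]
        # High selection pressure: best of the joint parent + offspring pool survives
        best_err, best = min(zip(errors, pool))
        # (the early-stopping test on 3 stagnant generations is omitted for brevity)
    return best, best_err
```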



rates over the next 500 data. The corresponding prediction error is recorded.
Example results are shown in Figure 6.7a,b.
Figure 6.7a shows an example of the evolution of the RMSE (used as the fitness
function) and the learning rates. The RMSE decreases, which is a characteristic
of the selection because the best individual is always kept in the population. The
optimal learning rates are achieved quickly after 14 generations.
Figure 6.7b shows the dynamics of the learning rates over the period t =
[500, 1000]. Both learning rates l1 and l2 vary considerably over the entire course
with only short stationary moments, showing that they are indeed dynamic parameters. The average RMSE for online prediction one-step-ahead obtained with and
without online EC are 0.0056 and 0.0068, respectively, showing that online EC
is effective in enhancing EFuNN's prediction performance during incremental learning.

Fig. 6.7 (a) Evolution of the best fitness and the learning rates over 15 generations; (b) optimised learning
rates l1 and l2 of an EFuNN ECOS over the period t = [500, 1000].




ES for Fuzzy Membership Function Optimisation in ECOS

As an example, here we apply ES to optimise the fuzzy input and output
membership functions (MFs) at the second and fourth layers of an EFuNN (see
Chapter 3) with the objective of minimising the training error (Chan and Kasabov,
2004). For both the input and output MFs, we use the common triangular function,
which is completely defined by the position of the MF centre.
Given that there are p input variables and pMF fuzzy quantisation levels for
each input variable, m output variables and mMF fuzzy quantisation levels for each
output variable, there are nc = p·pMF + m·mMF centres c = (c1, c2, …, cnc) to
be optimised.
The ES represents each individual as a (p·pMF + m·mMF)-dimensional real vector solution for
the positions of the MFs. We use five parents and 20 offspring and a relatively larger
number of generations of genmax = 40. Each individual of the initial population
is a copy of the evenly distributed MFs within the boundaries of the variables.
Standard Gaussian mutation is used for reproduction. Every offspring is checked
for the membership hierarchy constraint; i.e. the value of the higher MF must be
larger than that of the lower MF. If the constraint is violated, the individual is
resampled until a valid one is found.
Box 6.1. ES for membership function optimisation
1) Population Initialisation. Reset generation counter gen. Initialise a
population of parents with the evenly distributed MFs:
sPA = (c1, c2, …, cnc)
2) Reproduction. Randomly select one of the parents, sPA, to undergo Gaussian
mutation to produce a new offspring:

sOF = sPA + zi,   where zi ~ N(0, σ²I)

Resample if the membership hierarchy constraint is violated.
3) Fitness Evaluation. Apply each of the offspring to the model at (t − tlast) to
perform incremental learning and prediction using data in [t − tlast, t]. Set
the respective prediction error as fitness.
4) Selection. Perform selection.
5) Termination. Increment the generation number gen. Stop if gen ≥ genmax;
otherwise go to step 2.
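The constraint-checked Gaussian mutation of the reproduction step can be sketched as follows (an illustrative helper, assuming the MF centres of one variable are kept as a strictly increasing list):

```python
import random

def mutate_mf_centres(centres, sigma=0.05, max_tries=100):
    """Gaussian mutation of membership-function centres, resampling until the
    membership hierarchy constraint holds: the centre of a higher MF must stay
    larger than that of the lower MF (centres remain strictly increasing)."""
    for _ in range(max_tries):
        offspring = [c + random.gauss(0.0, sigma) for c in centres]
        if all(a < b for a, b in zip(offspring, offspring[1:])):
            return offspring
    return list(centres)  # keep the parent if no valid sample is found
```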

The proposed ES is tested on the same Mackey–Glass series described above.
We first perform incremental training on EFuNN with the first 1000 data of the
Mackey–Glass series to obtain a stable model, and then apply off-line ES during
batch learning (over the same 1000 data) to optimise the input and output MFs.
Example results are shown in Figure 6.8.
Figure 6.8a shows the evolution of the best fitness recorded in each generation.
The unidirectional drop in prediction error shows that the optimisation of MFs has



Fig. 6.8 (a) The evolution of the best fitness from the off-line ES; (b) initial membership functions and
EC-optimised membership functions on an input variable; (c) the frequency distribution of the same input
variable as in (b).

a positive impact on improving model performance. Figure 6.8b shows the initial
MFs and the EC-optimised MFs and Figure 6.8c shows the frequency distribution
of one of the input variables, {x4}. Clearly, the off-line ES evolves the MFs towards
the high-frequency positions, which maximises the precision of the fuzzy quantisation
and in turn yields higher prediction accuracies. The training RMSEs are 0.1129 and
0.1160 for EFuNN with and without off-line ES optimisation, respectively, showing
that the ES-optimised fuzzy MFs are effective in improving EFuNN's data-tracking accuracy.


EC for Integrated Parameter Optimisation and Feature Selection
in Adaptive Learning Models

As discussed in Chapter 3, a simple version of EFuNN, the ECF (evolving classifier
function), can be applied in both online and off-line modes. When working in
an off-line mode, ECF requires accurate setting of several control parameters to
achieve optimal performance. However, as with other ANN models, it is not always
clear what the best values for these parameters are. EC provides a robust global
optimisation method for choosing values for these parameters.
In this work, EC is applied to optimise the following four parameters of an ECF:
Rmax : The maximum radius of the receptive hypersphere of the rule nodes. If,
during the training process, a rule node is adjusted such that the radius of its
hypersphere becomes larger than Rmax then the rule node is left unadjusted and
a new rule node is created.
Rmin : The minimum radius of the receptive hypersphere of the rule nodes. It
becomes the radius of the hypersphere of a new rule node.
nMF : The number of membership functions used to fuzzify each input variable.
M-of-n: If no rules are activated when a new input vector is entered, ECF
calculates the distance between the new vector and the M closest rule nodes.
The average distance is calculated between the new vector and the rule nodes
of each class. The vector is assigned the class corresponding to the smallest
average distance.



Testing GA-ECF on the Iris Data

Each parameter being optimised by the GA is encoded through standard binary
coding into a specific number of bits and is decoded into a predefined range
through linear normalisation. A summary of this binary coding information is
shown in Table 6.1. Each individual string is the concatenation of a set of binary
parameters, yielding a total string-length ltot = 5 + 5 + 3 + 3 = 16.
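The binary coding and linear-normalisation decoding can be sketched as follows. The `[low, high]` ranges chosen here for each parameter are illustrative assumptions, since the exact ranges of Table 6.1 are not given in the text.

```python
def decode(bits, low, high):
    """Decode a binary string (list of 0/1) into [low, high] by linear normalisation."""
    value = int(''.join(str(b) for b in bits), 2)
    return low + (high - low) * value / (2 ** len(bits) - 1)

def decode_ecf_chromosome(chrom):
    """Split a 16-bit string (5 + 5 + 3 + 3 bits, as in the text) into the four
    ECF parameters. The ranges used here are illustrative assumptions only."""
    assert len(chrom) == 16
    return {
        'Rmax':   decode(chrom[0:5],  0.01, 1.0),
        'Rmin':   decode(chrom[5:10], 0.01, 1.0),
        'nMF':    1 + round(decode(chrom[10:13], 0.0, 6.0)),
        'm_of_n': 1 + round(decode(chrom[13:16], 0.0, 6.0)),
    }
```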
The experiments for GA are run using a population size of ten, for ten generations. For each individual solution, the initial parameters are randomised within
the predefined range. Mutation rate pm is set to 1/ltot , which is the generally
accepted optimal rate for unimodal functions and the lower bound for multimodal
functions, yielding an average of one bit inversion per string. Two-point crossover
is used. Rank-based selection is employed and an exponentially higher probability
of survival is assigned to high-fitness individuals. The fitness function is determined by the classification accuracy. In the control experiments, ECF is performed
using the manually optimised parameters, which are Rmax = 1, Rmin = 0.01, nMF = 1,
M-of-N = 3. The experiments for GA and the control are repeated 50 times and
between each run the whole dataset is randomly split such that 50% of the data is
used for training and 50% for testing. Performance is determined as the percentage
of correctly classified test data. The statistics of the experiments are presented in
Table 6.2.
The results show that there is a marked improvement in the average accuracy in
the GA-optimised network. Also the standard deviation of the accuracy achieved
by the 50 GA experiments is significantly less than that for the control experiments,
indicating that there is a greater consistency in the experiments using the GA.
Figure 6.9 shows the evolution of the parameters over 40 generations. Each
parameter converges quickly to its optimal value within the first 10 generations,
showing the effectiveness of the GA implementation.

Table 6.1 Each parameter of the ECF being optimised by the GA is encoded
through standard binary coding into a specific number of bits and is decoded into
a predefined range through linear normalisation.








Table 6.2 The results of the GA experiment repeated 50 times and

averaged and contrasted with a control experiment.

(Columns: evolving parameters, average accuracy (%), standard deviation.)





Figure 6.10 shows the process of GA optimisation of the ECF parameters and
features on the case study example of cancer outcome prediction based on a clinical
variable (#1) and 11 gene expression variables across 56 samples (the dataset is from
Shipp et al. (2002)).


EC for Optimal Feature Weighting in Adaptive Learning Models

Feature weighting is an alternative form of feature selection. It assigns to each
variable a weighting coefficient that reduces or amplifies its effect on the model
based on its relevance to the output. A variable that has little relevance to the
output is given a small weight to suppress its effect, and vice versa. The purpose
of feature weighting is twofold: first, to protect the model from the random
and perhaps detrimental influence of irrelevant variables and second, to act as
a guide for pruning away irrelevant variables (feature selection) by the size of
the weighting coefficients. Feature weighting/selection is generally implemented
through three classes of methods: Bayesian, incremental/sequential, or stochastic
methods. The first two classes are local methods that are fast and yet susceptible
to local optima; the last class includes EC applications that use computationally
intensive population search to search for the global optimum. Here the algorithm
proposed for ECOS, called weighted data normalisation (WDN; Song and Kasabov
(2006)) belongs to such a class, using the robust optimisation capability of the
genetic algorithm to implement feature weighting.
The WDN method optimises the normalisation intervals (range) of the input
variables and allocates weights to each of the variables from a dataset. The method
consists of the following steps.
1. The training data are preprocessed first by a general normalisation method.
There are several ways to achieve this: (a) normalising a given dataset so
that the values fall in a certain interval, e.g. [0, 1], [0, 255], or [−1, 1]; (b)
normalising the dataset so that the inputs and targets will have means of zero











Fig. 6.9 Evolution of the parameters: number of membership functions nMF ; m-of-n; Rmax and Rmin of an
ECF evolving model over 40 generations of a GA optimisation procedure.



Fig. 6.10 GA for ECF parameter and feature evaluation on the case study example of lymphoma cancer outcome
prediction (Shipp et al., 2002) based on a clinical variable (#1, IPI, International Prognostic Index) and 11 gene
expression variables (#2–12) across 56 samples. Twenty generations are run over populations of 20 individual
ECF models, each trained five times (fivefold cross-validation) on 70% of the data selected for training and 30%
for testing. The average accuracy of all five validations is used as a fitness function. The accuracy of the best
model evolves from 76% (the first generation) to 90.622% due to optimisation of the ECF parameters (Rmin,
Rmax, m-of-n, number of training iterations). The best model uses 9 input variables instead of 12 (variables
5, 8, and 12 are not used).





and standard deviations of 1; (c) normalising the dataset so that the deviation
of each variable from its mean is normalised by its standard deviation. In the
WDN, we normalise the dataset in the interval [0, 1].
2. The weights of the input variables [x1, x2, …, xn], represented respectively by
[w1, w2, …, wn] with initial values of [1, 1, …, 1], form a chromosome for a
consecutive GA application. The weight wi of the variable xi defines its new
normalisation interval [0, wi].
3. GA is run on a population of connectionist learning modules for different
chromosome values over several generations. As a fitness function, the root
mean square error (RMSE) of a trained connectionist module on the training
or validation data is used; alternatively, the number of created rule nodes
can be used as a fitness function that needs to be minimised.
4. The connectionist model with the least error is selected as the best one, and
its chromosome, the vector of weights [w1, w2, …, wn], defines the optimum
normalisation range for the input variables.
5. Variables with small weights are removed from the feature set and the steps
above are repeated to find the optimum and minimum set of
variables for a particular problem and a particular connectionist model.
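Steps 1 and 2 above (general normalisation into [0, 1], then weighting each variable into [0, wi]) can be sketched as follows (an illustrative helper, not the book's code):

```python
def wdn_normalise(samples, weights):
    """Weighted data normalisation (WDN) sketch: scale each variable into
    [0, 1], then multiply by its weight w_i so its range becomes [0, w_i]."""
    n_vars = len(weights)
    lows = [min(s[i] for s in samples) for i in range(n_vars)]
    highs = [max(s[i] for s in samples) for i in range(n_vars)]
    out = []
    for s in samples:
        row = []
        for i, w in enumerate(weights):
            span = highs[i] - lows[i]
            unit = (s[i] - lows[i]) / span if span else 0.0  # [0, 1] normalisation
            row.append(w * unit)                             # rescale to [0, w_i]
        out.append(row)
    return out
```

A GA would evolve the `weights` vector, retrain the connectionist module on the reweighted data, and use its RMSE (or rule-node count) as fitness.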



The above WDN method is illustrated in the next section on two case study ECOS
and on two typical problems, namely EFuNN for a time-series prediction, and
ECMC for classification.

Example 1: Off-Line WDN of EFuNN for Prediction

Here an EFuNN model is developed for a time-series prediction problem. Improved
learning with the WDN method is demonstrated on the Mackey–Glass (MG) time-series
prediction task. In the experiments, 1000 data points, from t = 118 to 1117,
are extracted for predicting the six-steps-ahead output value. The first half of the
dataset is taken as the training data and the rest as the testing data.
The following parameters are set in the experiments for the EFuNN model:
Rmax = 0.15; Emax = 0.15; and nMF = 3. The following GA parameter values are used:
for each input variable, the values from 0.16 to 1 are mapped onto a four-bit
string; the number of individuals in a population is 12; mutation rate is 0.001;
termination criterion (the maximum epochs of GA operation) is 100 generations;
the RMSE on the training data is used as a fitness function. The optimised weight
values, the number of the rule nodes created by EFuNN with such weights, the
training and testing RMSE, and the control experiments are shown in Table 6.3.
With the use of the WDN method, better prediction results are obtained for
a significantly smaller number of rule nodes (clusters) evolved in the EFuNN
models. This is because of the better clustering achieved when different variables
are weighted according to their relevance.

Example 2: Off-Line WDN Optimisation and Feature Extraction for ECMC

In this section, an evolving clustering method for classification ECMC (see
Chapter 2) with WDN is applied to the Iris data for both classification and feature
weighting/selection. All experiments in this section are repeated 50 times with the
same parameters and the results are averaged. Fifty percent of the whole dataset
is randomly selected as the training data and the rest as the testing data. The
following parameters are set in the experiments for the ECMC model: Rmin = 0.02;
each of the weights for the four normalised input variables is a value from 0.1 to
1 and is mapped into a six-bit binary string.
The following GA parameters are used: number of individuals in a population,
12; mutation rate pm = 0.005; termination criterion (the maximum epochs of

Table 6.3 Comparison between EFuNN without weighted data normalisation (WDN)
and EFuNN with WDN.

                     Feature weights
EFuNN without WDN    1, 1, 1, 1
EFuNN with WDN       0.4, 0.8, 0.28, 0.28


Table 6.4 Comparison between the evolving clustering method for classification ECMC (see
Chapter 2) without weighted data normalisation (WDN) and ECMC with WDN.

                       Feature weights without WDN    Feature weights with WDN
Four input variables   1, 1, 1, 1                     0.25, 0.44, 0.73, 1
Three input variables  1, 1, 1                        0.50, 0.92, 1
Two input variables    1, 1                           1, 0.97

GA operation), 50; the fitness function is determined by the number of created
rule nodes.
The final weight values, the number of rule nodes created by ECMC, and the
number of classification errors on the testing data, as well as the control experiment,
are shown in the first two rows of Table 6.4. Results show that the
weight of the first variable is much smaller than the weights of the other variables.
Now using the weights as a guide to prune away the least relevant input variables,
the same experiment is repeated without the first input variable. As shown in the
subsequent rows of Table 6.4, this pruning operation slightly reduces test errors.
However, if another variable is removed (i.e. the total number of input variables
is two) test error increases. So we conclude that for this particular application the
optimum number of input variables is three.
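The weight-guided pruning step can be sketched as below; the threshold heuristic is an illustrative assumption, not taken from the book.

```python
import numpy as np

def prune_least_relevant(weights, feature_names, threshold_ratio=0.5):
    """Drop features whose optimised WDN weight is far below the rest.

    Illustrative heuristic: a feature is pruned when its weight is less than
    threshold_ratio times the mean of the other features' weights.
    """
    weights = np.asarray(weights, dtype=float)
    keep = []
    for i, w in enumerate(weights):
        others = np.delete(weights, i)
        if w >= threshold_ratio * others.mean():
            keep.append(feature_names[i])
    return keep

# The Iris result reported in the text: the first variable's weight is much
# smaller than the others, so it is the one pruned.
print(prune_least_relevant([0.25, 0.44, 0.73, 1.0],
                           ["sepal length", "sepal width",
                            "petal length", "petal width"]))
# → ['sepal width', 'petal length', 'petal width']
```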



EC for Feature and Model Parameter Optimisation of Transductive
Personalised (Nearest Neighbour) Models

General Notes

In transductive reasoning, for every new input vector xi that needs to be processed
for a prognostic/classification task, the Ni nearest neighbours, which form a data
subset Di, are derived from an existing dataset D, and a new model Mi is dynamically created from these samples to approximate the function in the locality of
point xi only. The system is then used to calculate the output value yi for this
input vector xi (Vapnik (1998); also see Chapter 1).
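A minimal sketch of this transductive scheme, assuming a ridge-regularised local linear model as Mi (the base model is interchangeable):

```python
import numpy as np

def transductive_predict(x_i, X, y, n_neighbours=10, ridge=1e-6):
    """For a new vector x_i, take the N_i nearest samples from D (forming D_i)
    and fit a local model M_i on that neighbourhood only."""
    d = np.linalg.norm(X - x_i, axis=1)
    idx = np.argsort(d)[:n_neighbours]          # D_i: the local data subset
    Xi = np.hstack([np.ones((len(idx), 1)), X[idx]])
    # Ridge-regularised least squares keeps the local fit numerically stable.
    w = np.linalg.solve(Xi.T @ Xi + ridge * np.eye(Xi.shape[1]), Xi.T @ y[idx])
    return float(np.concatenate([[1.0], x_i]) @ w)
```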
The transductive approach has been implemented in medical decision support
systems and time-series prediction problems, where individual models are created
for each input data vector (i.e. specific time period or specific patient). The
approach gives good accuracy for individual models and has promising applications especially in medical decision support systems. This transductive approach
has also been applied using support vector machines as the base model in the area
of bioinformatics and the results indicate that transductive inference performs
better than inductive inference models, mainly because it exploits the structural
information of unlabelled data. However, there are a few open questions that need
to be addressed when implementing transductive modelling (Mohan and Kasabov, 2005).


How Many Nearest Neighbours Should Be Selected?

A standard approach to determining the number of nearest neighbours
is to consider a range starting with 1, 2, 5, 10, 20, and so on, and finally to
select the best value based on the classifier's performance. In the presence of an
unbalanced data distribution among the classes in the problem space, the recommended
number of nearest neighbours ranges from 1 to a maximum of either the number of
samples in the smallest class or the square root of the number of samples in the
problem space. A similar recommendation is made by Duda and Hart (1973),
based on concepts of probability density estimation of the problem space. They
suggest that the number of nearest neighbours to consider depends on two
important factors: the distribution of sample proportions in the problem space, and the
relationship between the samples in the problem space, measured using covariance.
The problem of identifying the optimal number of neighbours that improves the
classification accuracy in transductive modelling remains an open question that is
addressed here with the use of a GA.
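The heuristics above for the candidate range of K can be written down directly; the exact candidate list is an assumption:

```python
import math

def candidate_neighbour_counts(n_samples, smallest_class_size):
    """Candidate K values per the heuristics in the text: 1, 2, 5, 10, 20, ...,
    capped at the smallest-class size or at sqrt(n_samples)."""
    cap = min(smallest_class_size, math.isqrt(n_samples))
    base = [1, 2, 5, 10, 20, 50, 100]
    return [k for k in base if k <= cap] or [1]

print(candidate_neighbour_counts(200, 30))  # → [1, 2, 5, 10]
```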


What Distance Measure to Use and in What Problem Space?

There exist different types of distance measures that can be considered to measure
the distance of two vectors in a different part of the problem/feature space such as
Euclidean, Mahalanobis, Hamming, cosine, correlation, and Manhattan distance
among others (see Chapter 2). It has been proved mathematically that using an
appropriate distance metric can help reduce the classification error when selecting
neighbours, without increasing the number of sample vectors. Hence it is important
to recognise which distance measure will best suit the data at hand.
Some authors suggest that in the case where the dataset consists of numerical
data, the Euclidean distance measure should be used when the attributes
are independent and commensurate with each other. However, in the case
where the numerical data are interrelated, then Mahalanobis distance should be
considered, as this distance measure takes the interdependence between the data into
account. If the data consist of categorical information, Hamming distance can appropriately measure the difference between categorical data. Also, in the case where the
dataset consists of a combination of numerical and categorical values, for example,
a medical dataset that includes the numerical data such as gene expression
values and categorical data such as clinical attributes, then the distance measure
considered could be the weighted sum of Mahalanobis or Euclidean for numerical
data and Hamming distance for nominal data.
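These distance measures, including the weighted numerical-plus-categorical combination, can be sketched as follows; the 0.7/0.3 weighting is an illustrative assumption:

```python
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def mahalanobis(a, b, cov_inv):
    # cov_inv is the inverse covariance matrix of the numerical data.
    diff = a - b
    return float(np.sqrt(diff @ cov_inv @ diff))

def hamming(a, b):
    # Number of positions where the categorical values differ.
    return int(sum(x != y for x, y in zip(a, b)))

# Mixed data, e.g. gene-expression values plus categorical clinical
# attributes: a weighted sum of a numerical and a categorical distance.
def mixed_distance(num_a, num_b, cat_a, cat_b, w_num=0.7, w_cat=0.3):
    return w_num * euclidean(num_a, num_b) + w_cat * hamming(cat_a, cat_b)
```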



Keeping these suggestions in perspective, it is important to provide a wide
range of options to select the distance measure based on the type of dataset in a
particular part of the problem space for a particular set of features.


How is the Input Vector Assigned an Output Value?

There are different ways of determining the contribution of the nearest neighbours to the output class of the input vector, already discussed in Chapter 1,
such as:

The K-nearest neighbours (K-NN)

Other methods, such as MLR, ECOS, etc.


What Features are Important for a Specific Input Vector?

We discussed the issue of feature selection in Chapter 1, but here we raise the issue
of which feature set Fi is most appropriate to use for each individual vector xi.


A GA Optimization of Transductive Personalised Models

The proposed algorithm aims to answer all these questions by applying a transductive modelling approach that uses a GA to define the optimal: (a) distance
measure, (b) number of nearest neighbours, and (c) feature subset for every new
input vector (Mohan and Kasabov, 2005).
The number of neighbours to be optimised lies between a minimum equal to the
number of features selected by the algorithm and a maximum equal to the number of
samples available in the problem space.
As an illustration the multiple linear regression method (MLR) is used as the
base model for applying the transductive approach. The model is represented by
a linear equation which links the features/variables in the problem space to the
output of the classification task and is represented as follows:
r = w0 + w1X1 + ... + wnXn
where r represents the output, and wi represents the weights for the
features/variables of the problem space, which are calculated using the least-squares
method. The descriptors Xi are used to represent the structural information of the data samples, that is, the features/variables, and n represents the
number of these features/variables. The reason for selecting the MLR model is
the simplicity of this model, which will make the comparative analysis of the
transductive approach using GA with the inductive approach easier to understand
and interpret.

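The MLR base model r = w0 + w1X1 + ... + wnXn can be fitted by least squares, for example:

```python
import numpy as np

def fit_mlr(X, r):
    """Fit the MLR weights [w0, w1, ..., wn] by least squares."""
    A = np.hstack([np.ones((X.shape[0], 1)), X])    # prepend the bias column
    w, *_ = np.linalg.lstsq(A, r, rcond=None)
    return w

def predict_mlr(w, X):
    return np.hstack([np.ones((X.shape[0], 1)), X]) @ w
```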
The main objective of this algorithm is to develop an individualised model for
every data vector in a semi-supervised manner by exploiting the data vector's
structural information, to identify its nearest neighbours in the problem space,
and finally to test the model using the neighbouring data vectors to check the
effectiveness of the model created. The GA is used to locate an effective set of
features that represent most of the data's significant structural information, along
with the optimal number of neighbours to consider and the optimal distance
measure to identify the neighbours. The complete algorithm is described in two
parts, the transductive setting and the GA.
1. We normalise the dataset linearly, with values between 0 and 1, to ensure
a standardisation of all values, especially when variables are represented in
different units. This normalisation procedure is based on the assumption that
all variables/features have the same importance for the output of the system over the
whole problem space.
2. For every test sample Ti, perform the following steps: select the closest neighbouring samples, create a model, and evaluate its accuracy.
For every test sample Ti we select through a GA optimisation: a set of features
to be considered, the number of nearest neighbours, and the distance measure
to locate the neighbours.
The accuracy of the selected set of parameters for the Ti model is calculated
by creating a model with these parameters for each of the neighbours of the
test sample Ti and calculating the accuracy of each of these models. The cross-validation is run in a leave-one-out manner for all neighbours of Ti. If, for
the identified set of parameters, the neighbours of Ti give a high classification
accuracy rate, then we assume that the same set of parameters will also work
for the sample Ti. This criterion is used as a fitness evaluation criterion for the
GA optimisation procedure.
3. Perform the set of operations in step 2 in a leave-one-out manner for all the
samples in the dataset and calculate the overall classification accuracy for this
transductive approach (see Chapter 1).
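Steps 1 to 3 can be sketched as below with fixed parameters; the GA search over features, K, and the distance measure is omitted for brevity, and the local ridge-regularised MLR and the 0.5 decision threshold are illustrative assumptions:

```python
import numpy as np

def minmax_normalise(X):
    # Step 1: linear normalisation into [0, 1].
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def loo_transductive_accuracy(X, y, n_neighbours=10, ridge=1e-3):
    """Steps 2 and 3: for each sample, fit a local MLR on its nearest
    neighbours (leave-one-out) and score the overall classification accuracy."""
    X = minmax_normalise(X)
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i               # leave sample i out
        d = np.linalg.norm(X[mask] - X[i], axis=1)
        idx = np.argsort(d)[:n_neighbours]
        Xi = np.hstack([np.ones((len(idx), 1)), X[mask][idx]])
        w = np.linalg.solve(Xi.T @ Xi + ridge * np.eye(Xi.shape[1]),
                            Xi.T @ y[mask][idx])
        pred = 1 if np.concatenate([[1.0], X[i]]) @ w >= 0.5 else 0
        correct += int(pred == y[i])
    return correct / len(X)
```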

We conducted experiments on various UC Irvine datasets with their characteristics represented in Table 6.5a. The tests were carried out on all the data
using the leave-one-out validation technique. The datasets were selected as the
ones without any missing values except for the breast cancer dataset that had
four missing values. At the preprocessing stage, the four samples with missing
values were deleted and the size of the breast cancer dataset reduced from 198
to 194. As the next step of preprocessing, all the datasets were normalised using
linear normalisation, resulting in values in the range of 0 to 1.
Table 6.5b presents cross-validation results for comparison between the
inductive modelling approach and the transductive modelling approach without
and with GA parameter optimisation for MLR models.



Table 6.5 (a) Different machine learning benchmark datasets; (b) classification results
using: inductive multiple linear regression (MLR) model for classification; a transductive
MLR without and with GA parameter optimisation of the following parameters: number
of neighbouring samples K, input variables, distance measure.

(a) Datasets (including Breast Cancer), characterised by the number of classes, the
number of features, and the number of data points.
(b) Classification accuracy of: an inductive MLR; a transductive MLR with fixed
parameters, selected manually as the best from multiple runs; and a transductive
MLR with GA-optimised parameters.

The results show that the transductive modelling approach significantly outperforms the inductive modelling, and that parameter optimisation also improves the
accuracy of the individual models on average.


Particle Swarm Intelligence

In a GA optimisation procedure, a solution is found based on the best individual
represented as a chromosome, and there is no communication between the
individuals. Particle swarm optimisation (PSO), introduced by Kennedy and Eberhart (1995),
is motivated by social behaviour of organisms such as bird flocking, fish schooling,
and swarm theory. In a PSO system, each particle is a candidate solution to
the problem at hand. The particles in a swarm fly in multidimensional search
space, to find an optimal or suboptimal solution by competition as well as by
cooperation among them. The system initially starts with a population of random
solutions. Each potential solution, called a particle, is given a random position and
velocity. The particles have memory: each particle keeps track of its previous best
position and the corresponding fitness. The previous best position is called the
pbest. Thus, pbest is related only to a particular particle. The best of all the
particles' pbest values in the swarm is called the gbest. The basic concept of PSO lies in
accelerating each particle towards its pbest and the gbest locations at each time
step. This is illustrated in Fig. 6.11 for a two-dimensional space.

Fig. 6.11 A graphical representation of a particle swarm optimisation (PSO) process in a 2D space.
PSO has been developed for continuous, discrete, and binary problems. The
representation of the individuals varies for the different problems. Binary particle
swarm optimisation (BPSO) uses a vector of binary digits to represent the
positions of the particles. The particle velocity and position updates in BPSO are
performed by the following equations.
v_new = w * v_old + c1 * rand * (pbest - p_old) + c2 * rand * (gbest - p_old)

p_new = 0, if r >= s(v_new)
p_new = 1, if r < s(v_new)

where s(v_new) = 1 / (1 + exp(-v_new)) and r ~ U(0, 1)


The velocities are still in the continuous space. In BPSO, the velocities are not
considered as velocities in the standard PSO, but are used to define the probability
that a bit will take the value 1. The inertia parameter w is used to control
the influence of the previous velocity on the new velocity. The term with c1
corresponds to the cognitive acceleration component and helps in accelerating
the particle towards the pbest position. The term with c2 corresponds to the
social acceleration component which helps in accelerating the particle towards
the gbest position.



A simple version of a PSO procedure is given in Box 6.2.

Box 6.2. A pseudo-code of a PSO algorithm
t := 0 (time variable)
1) Initialise a population with random positions and velocities
2) Evaluate the fitness
3) Select the pbest and gbest
while (termination condition is not met) do
   t := t + 1
   4) Compute the velocity and position updates
   5) Determine the new fitness
   6) Update the pbest and gbest if required
end while


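A runnable sketch of BPSO following the velocity and position update equations above; the inertia and acceleration constants, the velocity clipping, and the OneMax toy fitness are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def bpso(fitness, n_bits, n_particles=20, iterations=50,
         w=0.7, c1=1.5, c2=1.5, v_max=4.0):
    """Binary PSO (maximisation) per the update equations in the text."""
    pos = rng.integers(0, 2, size=(n_particles, n_bits))
    vel = rng.uniform(-1, 1, size=(n_particles, n_bits))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(iterations):
        r1 = rng.random((n_particles, n_bits))
        r2 = rng.random((n_particles, n_bits))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        vel = np.clip(vel, -v_max, v_max)   # keep the probabilities sensible
        # In BPSO the velocity defines the probability that a bit becomes 1.
        pos = (rng.random((n_particles, n_bits)) < sigmoid(vel)).astype(int)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved] = pos[improved]
        pbest_fit[improved] = fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest, pbest_fit.max()

# Toy use: maximise the number of 1-bits (OneMax).
best, best_fit = bpso(lambda bits: int(bits.sum()), n_bits=16)
```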

Artificial Life Systems (ALife)

The main characteristics of life are also the main characteristics of a modelling
paradigm called artificial life (ALife), namely:

Self-organisation and adaptation


A popular example of an ALife system is the so-called Conway's Game of Life
(Adami, 1998). Each cell in a 2D grid can be in one of two states: either on
(alive) or off (dead, unborn). Each cell has eight neighbours, adjacent across the
sides and corners of the square.
Whether cells stay alive, die, or generate new cells depends upon how many
of their eight possible neighbours are alive and is based on the following
transition rule:
Rule S23/B3: a live cell with two live neighbours, or any cell with three live neighbours, is alive at the next time step (see Fig. 6.12).
Example 1: If a cell is off and has three living neighbours (out of eight), it will
become alive in the next generation.
Example 2: If a cell is on and has two or three living neighbours, it survives;
otherwise, it dies in the next generation.

Population-Generation-Based Methods


Fig. 6.12 Two consecutive states of the Game of Life according to rule S23/B3 (one of many possible rules),
meaning that every cell survives if it is alive and is surrounded by two or three living cells, and a cell is born
if there are three living cells in its neighbourhood; otherwise a cell dies (as a result either of an overcrowded
neighbourhood of living cells, or of a lack of sufficient living cells, i.e. loneliness).

Example 3: A cell with fewer than two living neighbours will die of loneliness, and a cell
with more than three living neighbours will die of overcrowding.
In this interpretation, the cells (the individuals) never change the above rules and
behave in this manner forever (until there is no individual left in the space). A
more intelligent behaviour would be if the individuals were to change their rules of
behaviour based on additional information they were able to collect. For example,
if the whole population is likely to become extinct, then the individuals would
create more offspring, and if the space became too crowded, the individual cells
would not reproduce every time the current rule forced them to. In this
case we are talking about the emerging intelligence of the artificial life
ensemble of individuals (see Chapter 1). Each individual in the Game of Life can
be implemented as an ECOS that has connections with its neighbours and has
three initial exact (or fuzzy) rules implemented, but at a later stage new rules can
be learned.
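One step of the S23/B3 rule can be implemented compactly by counting live neighbours, for example:

```python
from collections import Counter

def life_step(alive):
    """One step of Conway's Game of Life under rule S23/B3.
    `alive` is a set of (row, col) cells on an unbounded grid."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in alive
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # B3: any cell with exactly three live neighbours is alive next step;
    # S23: a live cell with two live neighbours also survives.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# A "blinker" oscillates between a horizontal and a vertical line of three cells.
blinker = {(1, 0), (1, 1), (1, 2)}
next_state = life_step(blinker)   # the vertical line {(0, 1), (1, 1), (2, 1)}
```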



Choose a classification problem and a dataset.

Create an evolving classification (ECF) or other classification model for the
problem and evaluate its accuracy.
Apply a GA for 20 generations, 20 individuals in a population, for parameter
and feature optimisation of ECF and evaluate the accuracy.
Apply an ES for 20 generations, 20 individuals in a population, for parameter
and feature optimisation of ECF and evaluate the accuracy.
Apply ECF with WDN (weighted data normalisation) for weighting the input
variables in an optimised model.
Apply transductive modelling with MLR or other methods, with a GA optimisation of the number of nearest samples, the input features, and the distance measure.
Apply particle swarm optimisation (PSO) to the problem.
Compare the results from the above experiments.
Question: In general, which EC method (GA, ES, WDN, transductive, PSO) would
be most suitable for the problem at hand, and why?



What knowledge can be learned from an optimised model when compared to
an unoptimised one?


Summary and Open Questions

This chapter presents several approaches for using evolutionary computation (EC)
for the optimisation of ECOS. There are many issues that need to be addressed
for further research in this area. Some of the issues are:
1. Online optimisation of the fitness function of an EC.
2. Using individual fitness functions for each ECOS.
3. EC helps choose the parameter values of ECOS, but how do we choose the
optimal parameters for the EC method at the same time?
4. Interactions between individuals and populations that have different genetic parameters (genes).


Further Reading

Generic Material on Evolutionary Computation (Goldberg, 1989; Michalewicz, 1992)
Genetic Programming (Koza, 1992)
Evolutive Fuzzy Neural Networks (Machado et al., 1992)
Using EC Techniques for the Optimisation of Neural Networks (Fogel, 1990;
Schiffman et al., 1993; Yao, 1993)
Using GA for the Optimisation and Training of Fuzzy Neural Networks (Watts
and Kasabov, 1998)
The Evolution of Connectivity: Pruning Neural Networks Using Genetic
Algorithms (Whitley and Bogart, 1990)
Neuronal Darwinism (Edelman, 1992)
GA for the Optimisation of Fuzzy Rules (Furuhashi et al., 1994)
Using EC for Artificial Life (Adami, 1998)
Online GA and ES Optimisation of ECOS (Kasabov 2003; Chan et al. 2004)
Parameter Optimisation of EFuNN (Watts and Kasabov, 2001, 2002; Minku and
Ludemir, 2005)
Swarm Intelligence (Kennedy and Eberhart, 1995)

7. Evolving Integrated
Multimodel Systems

Chapters 2 to 6 presented different methods for creating single evolving
connectionist models. This chapter presents a framework and several methods
for building evolving connectionist machines that integrate in an adaptive way
several evolving connectionist models to solve a given task, allowing for using
different models (e.g. regression formulas, ANN), and for adding new data and
new variables. The chapter covers the following topics.

A framework for evolving multimodel systems

Adaptive, incremental data and model integration
Integrating kernel functions and regression formulas in knowledge-based ANN
Ensemble learning methods for ECOS
Integrating ECOS with evolving ontologies
Summary and open problems
Further reading


Evolving Multimodel Systems

A General Framework

Complex problems usually require a more complex intelligent system for their
solution, consisting of several models. Some of these models can be evolving
models. A block diagram of a framework for evolving connectionist machines,
consisting of several evolving models (EM) is given in Fig. 7.1 (from Kasabov
The framework facilitates multilevel multimodular ensembles where many EM or
ECOS are connected with inter- and intraconnections. The evolving connectionist
machine does not have a clear multilayer structure. It has a modular open
structure. The main parts of the framework are described below.
1. Input presentation part: This filters the input information, performs feature
extraction and forms the input vectors. The number of inputs (features) can
vary from example to example.


Fig. 7.1 A block diagram of a framework for evolving multimodel systems (Kasabov, 2001).

2. Representation and memory part, where information (patterns) are stored: This
is a multimodular evolving structure of evolving connectionist modules and
systems organised in spatially distributed groups; for example, one group can
represent the phonemes in a spoken language (e.g. one ECOS representing a
phoneme class in a speech recognition system).
3. Higher-level decision part: This consists of several modules, each taking
decisions on a particular problem (e.g. word recognition or face identification).
The modules receive feedback from the environment and make decisions about
the functioning and adaptation of the whole evolving machine.
4. Action part: The action modules take output values from the decision modules
and pass information to the environment.
5. Knowledge-based part: This part extracts compressed abstract information from
the representation modules and from the decision modules in different forms
of rules, abstract associations, etc.
Initially, an evolving machine is a simple multimodular structure, each of the
modules being a mesh of nodes (neurons) with very little connection between
them, predefined through prior knowledge or genetic information. An initial set
of rules can be inserted in this structure. Gradually, through self-organisation,
the system becomes more and more wired. New modules are created as the
system operates. A network module stores patterns (exemplars) from the entered
examples. A node in a module is created and designated to represent an individual
example if it is significantly different from the previously used examples (with a
level of differentiation set through dynamically changing parameters).
The functioning of the evolving multimodel machines is based on the following
general principles.
1. The initial structure is defined only in terms of an initial set of features, a
set of initial values for the ECOS parameters (genes) and a maximum set of
neurons, but no connections exist prior to learning (or connections exist but
they have random values close to zero).
2. Input patterns are presented one by one or in chunks, not necessarily having
the same input feature sets. After each input example is presented, the ECOS
either associates this example with an already existing module and a node in
this module, or creates a new module and/or creates a new node to accommodate this example. An ECOS module denoted in Fig. 7.1 as an evolving
module (EM), or a neuron, is created when needed at any time during the
functioning of the whole system.
3. Evolving modules are created as follows. An input vector x is passed through
the representation module to one or more evolving modules. Nodes become
activated based on the similarity between the input vector and their input
connection weights. If there is no EM activated above a certain threshold a
new module is created. If there is a certain activation achieved in a module
but no sufficient activation of a node inside it, a new node will be created.
4. Evolving a system can be achieved in different modes, e.g. supervised,
reinforcement, or unsupervised (see Chapters 2 through 5). In a supervised
learning mode the final decision on which class (e.g. phoneme) the current
vector x belongs to is made in the higher-level decision module that may
activate an adaptation process.
5. The feedback from the higher-level decision module goes back to the feature
selection and filtering part (see Chapter 1).
6. Each EM or ECOS has both aggregation and pruning procedures defined.
Aggregation allows for modules and neurons that represent close information
instances in the problem space to merge. Pruning allows for removing modules
and neurons and their corresponding connections that are not actively involved
in the functioning of the ECOS (thus making space for new input patterns).
Pruning is based on local information kept in the neurons. Each neuron in
ECOS keeps track of its age, its average activation over the whole lifespan,
the global error it contributes to, and the density of the surrounding area of
neurons (see, for example, EFuNN, Chapter 3).
7. The modules and neurons may be spatially organised and each neuron has
relative spatial dimensions with regard to the rest of the neurons based on
their reaction to the input patterns. If a new node is to be created when an
input vector x is presented, then this node will be allocated closest to the
neuron that had the highest activation to the input vector x, even though
insufficiently high to accommodate this input vector.



8. In addition to the modes of learning from (4), there are two other general
modes of learning (see Chapter 1):
(a) Active learning mode: Learning is performed when a stimulus (input
pattern) is presented and kept active.
(b) Sleep learning mode: Learning is performed when there is no input pattern
presented at the input of the machine. In this case the process of further
elaboration of the connections in an ECOS is done in a passive learning
phase, when existing connections, that store previous input patterns, are
used as eco-training examples. The connection weights that represent
stored input patterns are now used as exemplar input patterns for training
other modules in ECOS.
9. ECOS provide explanation information (rules) extracted from the structure of
the NN modules.
10. The ECOS framework can be applied to different types of ANN (different types
of neurons, activation functions, etc.) and to different learning algorithms.
Generally speaking, the ECOS machine from Fig. 7.1 can theoretically model the
five levels of evolving processes as shown in Fig. I.1. We can view the functioning
of an ECOS machine as consisting of the following functional levels.
1. Gene (parameter) level: Each neuron in the system has a set of parameters
(genes) that are subject to adaptation through both learning and evolution.
2. Neuronal level: Each neuron in every ECOS has its information-processing
functions, such as the activation function or the maximum radius of its receptive
field.
3. Ensembles of neurons: These are the evolving neural network modules (EM),
each of them comprising a single ECOS, e.g. an EFuNN.
4. The whole ECOS machine: This has a multimodular hierarchical structure with
global functionality.
5. Populations of ECOS machines and their development over generations (see
Chapter 6).
The following algorithm describes a scenario of the functioning of this system.
Loop 0: {Create a population of ECOS machines with randomly chosen parameter
values (chromosomes) (an optional higher-level loop)
Loop 1: {Apply evolutionary computation after every p data examples over the
whole population of ECOS machines.
Loop 2: {Apply adaptive lifelong learning (evolving) methods to an ECOS (e.g.
ECM, EFuNN, DENFIS) from the ECOS machine to learn from p examples.
Loop 3: {For each created neuron in an ECOS, adapt (optimise) its parameter
values (genes) during the learning process, either after each example or
after a set of p examples is presented. Mutation or other operations over
the set of parameters can be applied. During this process a gene interaction
network (parameter dependency network) can be created, allowing for
observation of how genes (parameters) interact with each other.
} end of loop 3 } end of loop 2 } end of loop 1 } end of loop 0 (optional)
The main challenge here is to be able to model both the evolving processes at each
level of modelling and the interaction between these levels.




An Integrated Framework of Global, Local, and Personalised Models

Global models capture trends in data that are valid for the whole problem space,
and local models capture local patterns, valid for clusters of data (see Chapter 1).
Both models contain useful information and knowledge. Local models are also
adaptive to new data as new clusters and new functions, that capture patterns
of data in these clusters, can be incrementally created. Usually, both global and
local modelling approaches assume a fixed set of variables and if new variables,
along with new data, are introduced with time, the models are very difficult to
modify in order to accommodate these new variables. This can be done in the
personalised models, as they are created on the fly and can accommodate any new
variables, provided that there are data for them. All three approaches are useful for
complex modelling tasks and all of them provide complementary information and
knowledge, learned from the data. Integrating all of them in a single multimodel
system would be a useful approach and a challenging task.
A graphical representation of an integrated multimodel system is presented in
Fig. 7.2. For every single input vector, the outputs of the three models are weighted.
The weights can be adjusted and optimised for every new input vector in a similar
way as the parameters of a personalised model (Kasabov, 2007b).
yi = wi,g * yi(xi)global + wi,l * yi(xi)local + wi,p * yi(xi)personalised


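The weighted integration of the three model outputs can be sketched directly; the weight values below are illustrative, and as noted they can be optimised per input vector:

```python
def integrated_output(x_i, global_model, local_model, personalised_model,
                      w_g=0.3, w_l=0.3, w_p=0.4):
    """y_i as the weighted sum of the global, local, and personalised
    model outputs for the input vector x_i."""
    return (w_g * global_model(x_i)
            + w_l * local_model(x_i)
            + w_p * personalised_model(x_i))
```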

Spatial and Temporal Complexity of a Multimodel ECOS

Spatial complexity of a system defines the system architecture requirements in
terms of nodes, connections, etc. The spatial complexity of an ECOS could be
evaluated as follows.

Fig. 7.2 A graphical representation of an integrated global, local, and personalised multimodel system. For
every single input vector, the outputs of the three models are weighted (Kasabov, 2007b).



Number of features; number of NN modules; total number of neurons

Total number of connections
Function of growth (linear, exponential)
Level of pruning and aggregation

As shown graphically in Fig. 7.3, for a hypothetical example, an ECOS creates
modules and connects nodes all the time, but it happens more often at the
beginning of the evolving process (until time t1). After that, the subsequent input
patterns are accommodated, but this does not cause the creation of many new
connections until time t2, when a new situation in the modelled process arises (e.g.
a user speaks a different accent to the machine, or there is a new class of genes, or
the stock market rises/crashes unexpectedly). Some mechanisms prevent infinite
growth of the ECOS structure. Such mechanisms are pruning and aggregation. At
this moment the ECOS shrinks and its complexity is reduced (time t3 in the
figure).
Time complexity is measured in the required time for the system to react to an
input datum.


An Agent-Based Implementation of Multimodel Evolving Systems

An agent-based implementation of the framework from Fig. 7.1 is shown in Fig. 7.4.
The main idea of this framework is that agents, which implement some evolving
models, are created online whenever they are needed in time. Some agents collect
data, some learn from these data in an online lifelong mode, and some extract and
refine knowledge. The models are dynamically created (on the fly). Figure 7.4 shows
Fig. 7.3 A hypothetical example of a complexity measure in an evolving connectionist system.

Evolving Integrated Multimodel Systems


Fig. 7.4 Multiagent implementation of multimodel ECOS.

the general architecture of the framework. The user specifies the initial problem
parameters and the task to be solved. Then intelligent agents (designers) create dynamic units (modules) that initially may contain no structured knowledge, but only rules on how to create the structure and evolve the functionality of the module.
The modules combine expert rules with data from the environment. The modules
are continuously trained with these data. Rules may be extracted from trained
modules. This is facilitated by several evolving connectionist models, such as the
evolving fuzzy neural network EFuNN (Chapter 3). Rules can be extracted for the
purpose of later analysis or for the creation of new modules. Modules can be
deleted if they are no longer needed for the functioning of the whole system.


ECOS for Adaptive Incremental Data and Model Integration

Adaptive Model and Data Integration: Problem Definition

Despite the advances in mathematical and information sciences, there is a lack of

efficient methods to extend an existing model M to accommodate new (reliable)
dataset D for the same problem, if M does not perform well on D. Examples of
existing models that need to be further modified and extended to new data are
numerous: differential equation models of cells and neurons, a regression formula
to predict the outcome of cancer, an analytical formula to evaluate renal functions,
a logistic regression formula for evaluating the risk of cardiac events, a set of rules



for the prediction of the outcome of trauma, gene expression classification and
prognostic models, models of gene regulatory networks, and many more.
There are several approaches to solving the problem of integrating an existing
model M and new data D but they all have limited applicability. If a model M was
derived from data DM , DM and D can be integrated to form a dataset Dall and a
new model Mnew could be derived from Dall in the same way M was derived from
DM . This approach has a limited applicability if past data DM are not available or
the new data contain new variables. Usually, the existing models are global, for example a regression formula that covers the whole problem space. Creating a new global model after every new set of data becomes available is not helpful for understanding the dynamics of the problem, as the new global model may look very different from the old one even if the new data differ from the old data only in a tiny part of the problem space.
Another approach is to create a new model MD based only on the dataset D, and
then for any new input data to combine the outputs from the two models M and
MD (Fig. 7.5a). This approach, called mixture of experts, treats each model M and
MD as a local expert that performs well in a specific area of the problem space.
Each model's contribution to the final output vector is weighted based on its strength. Some methods for weighting the outputs from two or more models have been developed and used in practice. Although this approach is useful for some applications, the creation, weight optimisation, and validation of several models used to produce a single output, in areas where new input data are generated continuously, could be an extremely difficult task. In Kasabov (2007b)
an alternative approach is proposed that is a generic solution to the problem as
explained below.
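The mixture-of-experts combination described above can be sketched as follows; gating each expert by a Gaussian of the distance to the centre of its region of competence is one possible weighting scheme, and all centres, widths, and toy models below are assumptions for the example:

```python
import math

# Illustrative mixture-of-experts sketch: each model's output is weighted by
# its "strength" for the current input, modelled here as a Gaussian of the
# distance to the centre of the region where that model performs well.

def gate(x, centre, width):
    d2 = sum((xj - cj) ** 2 for xj, cj in zip(x, centre))
    return math.exp(-d2 / (2 * width ** 2))

def mixture_output(x, experts):
    """experts: list of (predict_fn, region_centre, region_width) triples."""
    weights = [gate(x, c, w) for _, c, w in experts]
    total = sum(weights)
    return sum(wt * f(x) for wt, (f, _, _) in zip(weights, experts)) / total

experts = [
    (lambda x: 0.7 * x[0], (0.3, 0.3), 0.3),                 # old model M
    (lambda x: 0.5 * x[0] + 0.4 * x[1], (0.85, 0.85), 0.2),  # model MD from D
]
y = mixture_output((0.9, 0.8), experts)   # input lies in MD's region
```

For an input near MD's region centre the output is dominated by MD, while M still contributes a small share.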


Integrating New Data and Existing Models Through Evolving Connectionist Systems

In the method introduced here, we assume that an existing model M performs well
in part of the problem space, but there is also a new dataset D that does not fit into
the model. The existing model M is first used to generate a dataset D0 of input–output data samples through generating input vectors (the input variables take
values in their respective range of the problem space where the model performs
well) and calculating their corresponding output values using M (Fig. 7.5b). The
dataset D0 is then used to evolve an initial connectionist model M0 with the use of
an evolving connectionist system (ECOS) and to extract rules from it where each
rule represents a local model (a prototype).
The model M0 can be made equivalent to the model M to any degree of accuracy
through tuning the parameters of ECOS. The initial ECOS model M0 is then further
trained on the new data D thus tuning the initial rules from M0 and evolving new
rules applicable to the dataset D. The trained ECOS constitutes an integrated new
model Mnew that consists of local adaptable rules. To compare the generalisation
ability of M and Mnew , the datasets D0 and D are split randomly into training and
testing sets: D0tr, D0tst, Dtr, and Dtst. The training sets are used to evolve the new model
Mnew and the test sets are used to validate Mnew and compare it with the existing
model M. The new model Mnew can be incrementally trained on future incoming

Fig. 7.5 (a) The mixture of experts approach for model M and data D integration combines outputs from different models M and MD (derived from D); (b) the proposed inductive, local learning method generates data D0 from an existing model M, creates an ECOS model M0 from D0, and further evolves M0 on the new data D, thus creating an integrated, adaptive new model Mnew; (c) in a transductive approach, for every new input vector xi, a new model Mnew,i is created based on generated data from the old model M and selected data from the new dataset D, all of them being in the vicinity of the new input vector (Kasabov, 2006).

data and the changes can be traced over time. New data may contain new variables
and have missing values as explained later in the book. The method utilizes the
adaptive local training and rule extraction characteristics of ECOS.
In a slightly different scenario (Fig. 7.5c), for any new input vector xi that needs to be processed by a model Mnew so that its corresponding output value is calculated, data samples similar to the new input vector xi are generated from both the dataset D (samples Di) and the model M (samples D0i), and used to evolve a model Mnew,i that is tuned to generalise well on the input vector xi. This approach can be seen as a special case of the approach above, and



in the rest of the book we consider the adaptive data and model integration
according to the scheme from Fig. 7.5b.
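The scheme of Fig. 7.5b can be sketched with a minimal evolving-prototype regressor standing in for a full ECOS implementation; the formula M, the radius and learning-rate values, and the data ranges below are all hypothetical:

```python
import math
import random

# Illustrative sketch of Fig. 7.5b: generate data D0 from an existing model M,
# evolve an initial prototype-based model M0 on it, then continue evolving
# on new data D from a different subspace. A minimal evolving prototype
# regressor stands in for a full ECOS; all numeric settings are hypothetical.

class EvolvingPrototypes:
    def __init__(self, radius=0.3, lr=0.2):
        self.radius, self.lr = radius, lr
        self.protos = []                      # list of (centre, output) rules

    def _nearest(self, x):
        return min(self.protos, key=lambda p: math.dist(x, p[0]), default=None)

    def train_one(self, x, y):
        p = self._nearest(x)
        if p is None or math.dist(x, p[0]) > self.radius:
            self.protos.append((list(x), y))  # new local rule (new cluster)
        else:
            i = self.protos.index(p)
            c, py = p
            c = [cj + self.lr * (xj - cj) for cj, xj in zip(c, x)]
            self.protos[i] = (c, py + self.lr * (y - py))   # tune the rule

    def predict(self, x):
        return self._nearest(x)[1]

M = lambda x: 0.7 * x[0] + 0.2 * x[1]   # existing model (hypothetical formula)
random.seed(0)
ecos = EvolvingPrototypes()
# Generate D0 from M in the region where M performs well; evolve M0 on it ...
for _ in range(200):
    x = (random.uniform(0, 0.7), random.uniform(0, 0.7))
    ecos.train_one(x, M(x))
# ... then further evolve on new data D from a different subspace.
for _ in range(50):
    x = (random.uniform(0.7, 1.0), random.uniform(0.7, 1.0))
    ecos.train_one(x, 0.9)   # new behaviour, not captured by M
# The trained model Mnew now answers in both regions of the problem space.
```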

The method is illustrated with a simple model M that represents a nonlinear
function y of two variables x1 and x2 and a new dataset D (see Fig. 7.6a). The
model M does not perform well on the data D. The model is used to generate a
dataset D0 in a subspace of the problem space where it performs well. The new
dataset D is in another subspace of the problem space. Data D0tr extracted from
D0 is first used to evolve a DENFIS model M0 and seven rules are extracted, so
the model M is transformed into an equivalent set of seven local models. The
model M0 is further evolved on Dtr into a new model Mnew , consisting of nine
rules allocated to nine clusters, the first seven representing data D0tr and the last
two, data Dtr (Table 7.1a). Although on the test data D0tst both models performed
equally well, Mnew generalises better on Dtst (Fig. 7.6c).
An experiment was conducted with an EFuNN (error threshold E = 0.15 and maximum radius Rmax = 0.25). The derived nine local models (rules) that represent Mnew are shown for comparison in Table 7.1b (the first six rules are equivalent to the model M and data D0tr, and the last three cover data Dtr).
The models Mnew derived from DENFIS and EFuNN are functionally equivalent,
but they integrate M and D in a different way. Building alternative models of the
same problem could help to understand the problem better and to choose the
most appropriate model for the task. Other types of new adaptive models can
be derived with the use of other ECOS implementations, such as recurrent and
population (evolutionary) based.


Adding New Variables

The method above is applicable to large-scale multidimensional data where new

variables may be added at a later stage. This is possible as partial Euclidean
distance between samples and cluster centres can be measured based on a different
number of variables. If a current sample Sj contains a new variable xnew , having a
value xnewj, and the sample falls into an existing cluster Nc based on the common variables, the centre of this cluster is updated so that it takes a coordinate value xnewj
for the new variable xnew , or the new value may be calculated as weighted k-nearest
values derived from k new samples allocated to the same cluster. Dealing with
new variables in a new model Mnew may help distinguish samples that have very
similar input vectors but different output values and therefore are difficult to deal
with in an existing model M.

Samples S1 = [x1 = 0.75, x2 = 0.824, y = 0.2] and S2 = [x1 = 0.75, x2 = 0.823, y = 0.8] are easily learned in a new ECOS model Mnew when a new variable x3 is added that has, for example, values of 0.75 and 0.3, respectively, for the samples S1 and S2.



Partial Euclidean distance can be used not only to deal with missing values, but
also to fill in these values in the input vectors. As every new input vector xi is
mapped into the input cluster (rule node) of the model Mnew based on the partial
Euclidean distance of the existing variable values, the missing value in xi , for an
input variable, can be substituted with the weighted average value for this variable
across all data samples that fall in this cluster.
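A partial Euclidean distance and cluster-based filling of missing values can be sketched as follows; here `None` marks a missing value, and normalising by the number of shared variables is one common choice assumed for the example:

```python
import math

# Illustrative sketch: a partial Euclidean distance over the variables that
# are present, and filling a missing value with the average of that variable
# over the samples in the best-matching cluster, as described above.

def partial_distance(x, centre):
    shared = [(xj, cj) for xj, cj in zip(x, centre) if xj is not None]
    return math.sqrt(sum((xj - cj) ** 2 for xj, cj in shared) / len(shared))

def fill_missing(x, centres, cluster_samples):
    """centres: cluster centres; cluster_samples: the samples of each cluster."""
    best = min(range(len(centres)), key=lambda k: partial_distance(x, centres[k]))
    filled = list(x)
    for j, xj in enumerate(filled):
        if xj is None:   # substitute the cluster average for this variable
            vals = [s[j] for s in cluster_samples[best]]
            filled[j] = sum(vals) / len(vals)
    return filled

centres = [(0.2, 0.2), (0.8, 0.8)]
samples = [[(0.1, 0.25), (0.3, 0.15)], [(0.75, 0.9), (0.85, 0.7)]]
filled = fill_missing((0.78, None), centres, samples)   # matched to cluster 2
```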



Fig. 7.6 A case study of a model M (formula) and a dataset D integration through an inductive, local, integrative learning in ECOS: (a) a 3D plot of data D0 (data samples denoted o) generated from a model M (formula) y = 5.1x1 + 0.345x1² − 0.83x1·log10(x2) + 0.45x2 + 0.57·exp(x2^0.2) in the subspace of the problem space defined by x1 and x2 both having values between 0 and 0.7, and new data D (samples denoted ∗) defined by x1 and x2 having values between 0.7 and 1. (b) The data clusters of D0 (the seven clusters on the left, each defined as a cluster centre denoted + and a cluster area) and of the data D (the two upper right clusters) in the 2D input space of the x1 and x2 input variables from Fig. 7.6a, formed in a DENFIS ECOS trained with the data D0tr (randomly selected 56 data samples from D0) and then further trained with the data Dtr (randomly selected 25 samples from D). (Continued overleaf)




Fig. 7.6 (continued) (c) The test results of the initial model M (the dashed line) versus the new model Mnew (the dotted line) on the test data D0tst generated from M (the first 42 data samples) and on the new test data Dtst (the last 30 samples), with the solid line showing the target data. The new model Mnew performs well on both the old and the new test data, whereas the old model M fails on the new test data (Kasabov, 2006).

Table 7.1a Local prototype rules extracted from the DENFIS new model Mnew from Fig. 7.6. The last rules
(in bold) are the newly created rules after the DENFIS model, initially trained with the data generated from the
existing formula, was further adapted on the new data, thus creating two new clusters (Kasabov, 2006).



IF x1 is (−0.05, 0.05, 0.14) and x2 is (0.15, 0.25, 0.35) THEN y = 0.01 + 0.7x1 + 0.12x2
IF x1 is (0.02, 0.11, 0.21) and x2 is (0.45, 0.55, 0.65) THEN y = 0.03 + 0.67x1 + 0.09x2
IF x1 is (0.07, 0.17, 0.27) and x2 is (0.08, 0.18, 0.28) THEN y = 0.01 + 0.71x1 + 0.11x2
IF x1 is (0.26, 0.36, 0.46) and x2 is (0.44, 0.53, 0.63) THEN y = 0.03 + 0.68x1 + 0.07x2
IF x1 is (0.35, 0.45, 0.55) and x2 is (0.08, 0.18, 0.28) THEN y = 0.02 + 0.73x1 + 0.06x2
IF x1 is (0.52, 0.62, 0.72) and x2 is (0.45, 0.55, 0.65) THEN y = 0.21 + 0.95x1 + 0.28x2
IF x1 is (0.60, 0.69, 0.79) and x2 is (0.10, 0.20, 0.30) THEN y = 0.01 + 0.75x1 + 0.03x2
IF x1 is (0.65, 0.75, 0.85) and x2 is (0.70, 0.80, 0.90) THEN y = 0.22 + 0.75x1 + 0.51x2
IF x1 is (0.86, 0.95, 1.05) and x2 is (0.71, 0.81, 0.91) THEN y = 0.03 + 0.59x1 + 0.37x2

Table 7.1b Local prototype rules extracted from an EFuNN new model Mnew on the same problem from
Fig. 7.6. The last rules (in bold) are the newly created rules after the EFuNN model, initially trained with the
data generated from the existing formula, was further adapted on the new data, thus creating three new
clusters (Kasabov, 2006).



IF x1 is (Low 0.8) and x2 is (Low 0.8) THEN y is (Low 0.8); radius R1 = 0.24, N1ex = 6
IF x1 is (Low 0.8) and x2 is (Medium 0.7) THEN y is (Small 0.7); R2 = 0.26, N2ex = 9
IF x1 is (Medium 0.7) and x2 is (Medium 0.6) THEN y is (Medium 0.6); R3 = 0.17, N3ex = 17
IF x1 is (Medium 0.9) and x2 is (Medium 0.7) THEN y is (Medium 0.9); R4 = 0.08, N4ex = 10
IF x1 is (Medium 0.8) and x2 is (Low 0.6) THEN y is (Medium 0.9); R5 = 0.1, N5ex = 11
IF x1 is (Medium 0.5) and x2 is (Medium 0.7) THEN y is (Medium 0.7); R6 = 0.07, N6ex = 5
IF x1 is (High 0.6) and x2 is (High 0.7) THEN y is (High 0.6); R7 = 0.2, N7ex = 12
IF x1 is (High 0.8) and x2 is (Medium 0.6) THEN y is (High 0.6); R8 = 0.1, N8ex = 5
IF x1 is (High 0.8) and x2 is (High 0.8) THEN y is (High 0.8); R9 = 0.1, N9ex = 6




Integrating Kernel Functions and Regression Formulas in Knowledge-Based ANN

Integrating Regression Formulas and Kernel Functions in Locally Adaptive Knowledge-Based Neural Networks

Regression functions are probably the most popular type of prognostic and classification models, especially in medicine. They are derived through inductive learning from data gathered across the whole problem space, and are consequently used to calculate the output value for a new input vector regardless of where it is located in the problem space. For many problems this results in different regression formulas for the same problem when different datasets are used. As a result, such formulas have limited accuracy on new data that are significantly different from those used for the original modelling.
Kernel-based ANNs have radial basis function (RBF) kernels attached to their nodes, whose centres and radii are adjusted through learning from data. They are trained as a set of local models that are integrated at the output. A method for the integration of regression formulas and kernel functions in a knowledge-based neural network (KBNN) model, resulting in better accuracy and more precise local knowledge, is proposed in Song et al. (2006). A block diagram of the proposed KBNN structure is given in Fig. 7.7.

Fig. 7.7 A diagram of a kernel-regression KBNN, combining different kernels Gl with suitable regression functions Fl to approximate data in local clusters Cl, l = 1, 2, …, M (from Song et al. (2006)).



The overall functioning of the model can be described by the formula:

y(x) = G1(x)·F1(x) + G2(x)·F2(x) + … + GM(x)·FM(x)     (7.1)

where x = (x1, x2, …, xP) is the input vector; y is the output; Gl are kernel functions; and Fl are knowledge-based transfer functions, e.g. regression formulas, l = 1, 2, …, M.
Equation (7.1) can be regarded as a regression function. Using different Gl
and Fl , Eq. (7.1) can represent different kinds of neural networks, and describe
the different functions associated with neurons in their hidden layer(s). Gl are
Gaussian kernel functions and Fl are constants in the case of RBF ANNs. Gl are
sigmoid transfer functions and Fl are constants in the case of a generic three-layer
multilayer perceptron (MLP) ANN. Gl are fuzzy membership functions and Fl
are linear functions in the case of a first-order Takagi–Sugeno–Kang (TSK) fuzzy
inference model; and in the simplest case, Gl represents a single input variable
and Fl are constants in the case of a linear regression function.
In the KBNN from Fig. 7.7, Fl are nonlinear functions that represent the
knowledge in local areas, and Gl are Gaussian kernel functions that control the
contribution of each Fl to the system output. The farther an input vector is from
the centre of the Gaussian function, the less contribution to the output is produced
by the corresponding Fl .
The KBNN model has a cluster-based, multilocal model structure. Every transfer
function is selected from existing knowledge (formulas), and it is trained within a
cluster (local learning), so that it becomes a modified formula that can optimally
represent this area of data. The KBNN aggregates a number of transfer functions
and Gaussian functions to compose a neural network and such a network is then
trained on the whole training dataset (global learning).
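A minimal sketch of the KBNN output of Eq. (7.1) follows, with Gaussian kernels Gl gating local transfer functions Fl; the centres, widths, and the two local regression functions below are hypothetical:

```python
import math

# Illustrative sketch of Eq. (7.1): the KBNN output as a sum of Gaussian
# kernels Gl gating local transfer functions Fl. All numeric values are
# hypothetical placeholders.

def gaussian(x, centre, widths, alpha=1.0):
    s = sum((xj - mj) ** 2 / (2 * sj ** 2)
            for xj, mj, sj in zip(x, centre, widths))
    return alpha * math.exp(-s)

def kbnn_output(x, units):
    """units: list of (Fl, centre, widths); Fl is a local regression formula."""
    return sum(gaussian(x, m, s) * F(x) for F, m, s in units)

units = [
    (lambda x: 0.1 + 0.7 * x[0] + 0.1 * x[1], (0.25, 0.25), (0.3, 0.3)),
    (lambda x: 0.2 + 0.6 * x[0] + 0.4 * x[1], (0.8, 0.8), (0.2, 0.2)),
]
y = kbnn_output((0.8, 0.8), units)   # dominated by the second local model
```

The farther the input lies from a kernel centre, the smaller that local formula's contribution, as described in the text.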


The Learning Procedure for the Integrated Kernel-Regression KBNN

Suppose there are Q functions fh, h = 1, 2, …, Q, globally representing existing knowledge, from which the transfer functions Fl are selected (see Fig. 7.7). The KBNN learning procedure performs the following steps.
1. Cluster the whole training dataset into M clusters.
2. In each cluster l (l = 1, 2, …, M), the Q functions fh are modified (local learning) with a gradient descent method on the subdataset, and the best one (with the minimum root mean square error, RMSE) is chosen as the transfer function Fl for this cluster.
3. Create a Gaussian kernel function Gl as a distance function: the centre and radius of the cluster are taken, respectively, as initial values of the centre and width of Gl.
4. Aggregate all Fl and Gl as per Eq. (7.1) and optimise all parameters in the KBNN (including the parameters of each Fl and Gl) using a gradient descent method on the whole dataset.
In the KBNN learning algorithm, the following indexes are used.



Training data: i = 1, 2, …, N.
Subtraining dataset: i = 1, 2, …, Nl.
Input variables: j = 1, 2, …, P.
Neuron pairs in the hidden layer: l = 1, 2, …, M.
Number of existing functions: h = 1, 2, …, Q.
Number of parameters in Fl: pf = 1, 2, …, Lpf.
Learning iterations: k = 1, 2, ….

The equations for parameter optimisation are described below.
Consider a system having P inputs, one output, and M neuron pairs in the hidden layer; the output value of the system for an input vector xi = (xi1, xi2, …, xiP) is calculated by Eq. (7.1):

y(xi) = G1(xi)·F1(xi) + G2(xi)·F2(xi) + … + GM(xi)·FM(xi)     (7.2)


Here, Fl are transfer functions, each having parameters bpf, pf = 1, 2, …, Lpf, and

Gl(xi) = αl · exp( − Σj=1..P (xij − mlj)² / (2σlj²) )     (7.3)

are Gaussian kernel functions. Here, α = (α1, α2, …, αM) represents a connection vector between the hidden layer and the output layer; ml is the centre of Gl, and σl is regarded as the width of Gl, or a radius of the cluster l. If a vector x is the same as ml, the neuron pair Gl(x)Fl(x) has its maximum output; the output will be in the range [0.607, 1]·αl·Fl(x) if the distance between x and ml is smaller than σl; and the output
will be close to 0 if x is far away from ml.
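The 0.607 factor quoted above follows directly from the value of the Gaussian kernel at a distance of exactly σ from the centre, which a one-line check confirms:

```python
import math

# At distance sigma from the centre, the Gaussian kernel evaluates to
# exp(-sigma**2 / (2 * sigma**2)) = exp(-0.5), independently of sigma.
g_at_sigma = math.exp(-0.5)   # ≈ 0.607
```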
Suppose the KBNN is given the training input–output data pairs [xi, ti]. The local learning minimises the following objective function for each transfer function on the corresponding cluster:

El = ½ Σi=1..Nl (fh(xi) − ti)²     (7.4)

Here, Nl is the number of data samples that belong to the lth cluster. The global learning minimises the following objective function on the whole training dataset:

E = ½ Σi=1..N (y(xi) − ti)²     (7.5)


A gradient descent (backpropagation) algorithm is used to obtain the recursions for updating the parameters b, α, m, and σ, so that El of Eq. (7.4) and E of Eq. (7.5) are minimised. The initial values of these parameters can be obtained from the original functions (for b), from random values or the least-squares method (for α), and from the result of clustering (for m and σ):

bpf(k+1) = bpf(k) − ηb · ∂El/∂bpf     (for local learning)     (7.6)

bpf(k+1) = bpf(k) − ηb · ∂E/∂bpf     (for global learning)     (7.7)

αl(k+1) = αl(k) − ηα · Σi=1..N Gl(xi)·Fl(xi)·(y(xi) − ti) / αl(k)     (7.8)

mlj(k+1) = mlj(k) − ηm · Σi=1..N Gl(xi)·Fl(xi)·(y(xi) − ti)·(xij − mlj) / σlj²     (7.9)

σlj(k+1) = σlj(k) − ησ · Σi=1..N Gl(xi)·Fl(xi)·(y(xi) − ti)·(xij − mlj)² / σlj³     (7.10)

Here, ηb, ηα, ηm, and ησ are learning rates for updating the parameters b, α, m, and σ, respectively. The derivatives ∂El/∂b and ∂E/∂b depend on the existing and selected functions; e.g. for the MDRD function (the GFR case study introduced in Chapter 5), the output function can be defined as follows.
f(x) = GFR = b0 · x1^b1 · x2^b2 · x3^b3 · x4^b4 · x5^b5 · x6^b6

In this function, x1, x2, x3, x4, x5, and x6 represent Scr, age, gender, race, BUN, and Alb, respectively. So, for the local learning:

∂El/∂b0 = (1/b0) Σi=1..Nl f(xi)·(f(xi) − ti)

∂El/∂bp = Σi=1..Nl ln(xip)·f(xi)·(f(xi) − ti),  p = 1, 2, …, 6


and for the global learning (suppose the MDRD function is selected for the lth cluster):

∂E/∂b0 = (1/b0) Σi=1..N Gl(xi)·f(xi)·(y(xi) − ti)

∂E/∂bp = Σi=1..N Gl(xi)·ln(xip)·f(xi)·(y(xi) − ti),  p = 1, 2, …, 6


For both local and global learning, the following iterative design method is used:
1. Fix the maximum number of learning iterations (maxKl for the local learning and maxK for the global learning) and the minimum value of the error on the training data (minEl for the local learning and minE for the global learning).
2. Apply Eq. (7.6) repeatedly for the local learning until the number of learning iterations k > maxKl or the error El <= minEl (El is calculated by Eq. (7.4)).
3. Apply Eqs. (7.7)–(7.10) repeatedly for the global learning until the number of learning iterations k > maxK or the error E <= minE (E is calculated by Eq. (7.5)).
In this learning procedure, we use the clustering method ECM (evolving clustering
method; see Chapter 2) for clustering, and a gradient descent algorithm for
parameter optimisation. Although some other clustering methods can be used such
as K-means, fuzzy C-means, or the subtractive clustering method, ECM is more
appropriate because it is a fast one-pass algorithm and produces well-distributed
clusters. The number of clusters M depends on the data distribution in the input
space and it can be set up by experience, probing search, or optimisation methods
(e.g. the genetic algorithm). In this research, we do not use any optimisation
method to adjust M. For generalisation and simplicity, we use in the KBNN
learning algorithm a standard general gradient descent method. The Levenberg
Marquardt, one-step second backpropagation algorithm, least squares method,
SVD-QR method, or some others can be applied in the KBNN for parameter
optimisation instead of a general gradient descent algorithm.
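The local learning step for a power-law formula of the MDRD type can be sketched as follows, using the gradient expressions above; the data, the initial coefficients, and the learning rate are hypothetical, and convergence behaviour will differ on real data:

```python
import math

# Sketch of the local learning for a power-law formula
# f(x) = b0 * x1^b1 * x2^b2 * ... (MDRD-style), with full-batch gradient
# descent on the cluster's subdataset. All numeric settings are hypothetical.

def f(x, b):
    return b[0] * math.prod(xp ** bp for xp, bp in zip(x, b[1:]))

def local_step(data, b, eta=0.01):
    grad = [0.0] * len(b)
    for x, t in data:
        err = f(x, b) - t
        grad[0] += err * f(x, b) / b[0]              # dEl/db0
        for p, xp in enumerate(x, start=1):
            grad[p] += err * f(x, b) * math.log(xp)  # dEl/dbp
    return [bj - eta * g for bj, g in zip(b, grad)]

data = [((2.0, 1.5), 3.0), ((1.2, 2.0), 2.0)]   # toy (x, target) pairs
b = [1.0, 1.0, 1.0]
for _ in range(500):
    b = local_step(data, b)
err = sum((f(x, b) - t) ** 2 for x, t in data) / 2   # El after training
```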


A Case Study Example

The method is applied in Song et al. (2006) for the creation of a KBNN model for
the prediction of renal functions using the medical dataset described in Chapter 5.
Nine existing regression formulas for the prediction of renal function are taken
as knowledge-based transfer functions to be used in the KBNN model. Using
the proposed model, more accurate results than the existing formulas or other
well-known connectionist models have been obtained.


Ensemble Learning Methods for ECOS

The above two sections presented ECOS-based methods for integrating existing
models and new data. Here some methods for evolving several models, including
ECOS models, for solving a common problem, are presented. Using several interacting models, instead of one, may improve the performance of the final system
as different models may represent different aspects of the problem and the data
available (Abbass, 2004; Potter and De Jong, 2000; Kidera et al., 2006; Duell et al.,
2006; Liu and Yao, 1999).




Negative Correlation Ensemble Learning

Negative correlation ensemble learning has been developed as an efficient method

for improving the accuracy and the speed in ensembles of learning models when
compared to single models. Several methods developed by Xin Yao et al. have already been published (Liu and Yao, 1999; Duell et al., 2006).
This section presents a cooperative neural network ensemble learning method
based on negative correlation learning (from Chan and Kasabov (2005)). It allows
integration of different network models and fast implementation on both serial and
parallel machines. Results demonstrate competitive performance to the original
negative correlation learning method at significantly reduced communication overhead.
Effective use of a neural network ensemble requires three critical factors: first, an
efficient ensemble learning scheme that is typically implemented through parallel
computing because the training of multiple neural networks is computationally
intensive; second, integration of different network models, e.g. MLPs and RBFs to
provide a more diversified output; and third, a cooperative learning method to
promote interaction between networks.
For cooperative learning, an effective method called negative correlation (NC)
learning was proposed by Liu and Yao (1999) and has been shown both theoretically and empirically to improve ensemble generalisation performance. The error
functions of the networks are modified to promote negatively correlated prediction
errors, which in effect cause the networks to diversify in their outputs and each
network to specialise in a particular aspect of the data. However, despite its effectiveness, Liu and Yaos method requires high communication overhead between
the networks and it is applicable only to the ensemble of backpropagation-type
networks, which hinder parallel speedup and integration of different network
models, respectively. Its practicality is therefore diminished.
The novel NC (negative correlation) learning method (Chan and Kasabov, 2005)
presented here alleviates the drawbacks of Liu and Yao's method. We generate
new sets of data called correlation-corrected data by correcting the original
training data with the error correlation information. Now, instead of using penalty
functions, NC learning is achieved by simply training the networks with these
correlation-corrected data. This method offers two advantages: first, no error
function recoding is required and second, updating correlation-corrected data
requires much less communication overhead. It is therefore very suitable for
parallel execution of NC learning even with an ensemble of different models in a
distributed computing environment.
Liu and Yao's NC learning requires: (a) introduction of a correlation penalty
function into the error function of each network, and (b) communication between
networks on a pattern-by-pattern basis.
Let T = {(x, d)} = {(x1, d1), (x2, d2), …, (xN, dN)} represent the training data, where N is the number of patterns and x, d are the input and output (target) vectors, respectively. We form an ensemble of M networks whose joint output F is the average of all network outputs Fi, i = 1, 2, …, M.



Consequently, the error E is also the average of all network errors Ei:

F(n) = (1/M) Σi=1..M Fi(n)     (7.16)

E(n) = (1/M) Σi=1..M Ei(n)     (7.17)


The correlation penalty Pi measures the error correlation between the ith network and the rest of the ensemble, and is formulated as follows. Recall that the goal of generalisation is to learn the generating function of the output, not the target data themselves. We use F(n) to approximate the generating function, such that Fi(n) − F(n) approximates the error of the ith network, and Σj≠i (Fj(n) − F(n)) the joint error of the rest of the ensemble from the generating function. The error correlation Pi is then obtained as their product:

Pi(n) = (Fi(n) − F(n)) · Σj≠i (Fj(n) − F(n))
The new error function Ei is a weighted sum of the original error function and the penalty function Pi, given by

Ei(n) = ½ (Fi(n) − d(n))² + λ·Pi(n)     (7.18)


where 0 ≤ λ ≤ 1 is the hyperparameter (a term used to describe a similar instance in network regularisation) that adjusts the strength of the correlation penalty. For adjusting the weights of the ith network through standard backpropagation, the derivative of the ensemble error E with respect to Fi is obtained using (7.16), (7.17), and (7.18):

∂E(n)/∂Fi(n) = Fi(n) − d(n) − (2λ(M − 1)/M)·(Fi(n) − F(n))     (7.19)


The computation of the derivative in (7.19) requires periodic updating of the ensemble output F(n), and this is done on a pattern-by-pattern basis in Liu's method. The communication overhead is therefore very high.
Correlation-corrected data are transformed target data to which ordinary training of the networks in the ensemble will automate NC learning. Let ci denote the correlation-corrected data for the ith network; it is derived as the desired network output Fi that minimises the ensemble error E, i.e. when the derivative of E in (7.19) is set to zero, ∂E(n)/∂Fi(n) = 0:

Fi(n) = ci(n) = (d(n) − K·F(n)) / (1 − K),  where K = 2λ(M − 1)/M     (7.20)

The generation of ci in (7.20) requires only a simple linear combination of the original target d(n) and the ensemble output F(n). Like the original method, F(n) must be updated periodically. However, the frequency of updating is significantly reduced, because training to correlation-corrected data (7.20) is more stable and robust to error than training with a correlation-corrected gradient (7.19) as in Liu and Yao's method. This hypothesis is empirically verified (shown later), as we find that updating F(n) over a number of training epochs (each epoch denotes one


presentation of the whole set of training patterns) rather than over every training pattern as in Liu and Yao's case offers very similar performance. The longer update
interval reduces communication overhead and allows effective parallel execution
of NC learning in a coarse-grain distributed computing environment. Figure 7.8
shows the distributed computing environment used in the experiment.
In Fig. 7.8 each network of the ensemble operates on a different processor node.
A control centre is established to centralize all information flow and its tasks
are (a) to generate the correlation-corrected data for each network, (b) to send
them out, and (c) to collect the trained network outputs. Let gupdate and gmax
denote the number of epochs between each ci update and the maximum number
of epochs allowable. The updating of the correlation-corrected data ci may be
implemented synchronously (after all networks have finished training for gupdate
epochs) or asynchronously (whenever a network has finished training for gupdate
epochs). In this work we implement the latter as both methods perform similarly.
The procedures are summarized in the following pseudo-code.
Step 1: Initialise M networks with random weights. Partially train each network on the training data T = {(x, d)} for gupdate epochs and then obtain the network outputs.
Step 2: Upon receipt of the ith network output Fi at the control centre:
(a) Update the ensemble output F.
(b) Create the correlation-corrected target ci using (7.20) and send it to the ith network.
(c) Train the ith network on Ti = {(x, ci)} for gupdate epochs.
(d) Send the network output Fi to the control centre.
Step 3: Stop if each network has been trained for a total of gmax epochs.
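The procedure above can be sketched sequentially, without a real control centre or parallelism; the "networks" below are plain per-pattern estimators standing in for MLPs, so only the correlation-corrected-target mechanics of Eq. (7.20) are shown, and M, lam, rounds, and lr are hypothetical settings:

```python
import random

# Compact sequential sketch of NC learning with correlation-corrected data.
# Each "network" is a list of per-pattern outputs (a stand-in for an MLP),
# trained towards the correlation-corrected targets c of Eq. (7.20).

def nc_train(targets, M=5, lam=0.3, rounds=20, lr=0.1):
    K = 2 * lam * (M - 1) / M
    nets = [[random.uniform(-0.1, 0.1) for _ in targets] for _ in range(M)]
    for _ in range(rounds):
        # ensemble output F for every pattern (the control-centre role)
        F = [sum(net[n] for net in nets) / M for n in range(len(targets))]
        c = [(d - K * Fn) / (1 - K) for d, Fn in zip(targets, F)]  # Eq. (7.20)
        for net in nets:                        # each network trains towards c
            for n, cn in enumerate(c):
                net[n] += lr * (cn - net[n])
    return [sum(net[n] for net in nets) / M for n in range(len(targets))]

random.seed(1)
targets = [0.2, 0.5, 0.9]
ensemble_out = nc_train(targets)   # ensemble output converges to the targets
```

Because the targets c are recomputed only once per round, the networks need the ensemble output far less often than in pattern-by-pattern NC learning, which is the communication saving discussed in the text.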

Case Study Experiment

We compare the performance of NC learning using correlation-corrected data with Liu's original method on predicting the Mackey–Glass time series, which is a quasiperiodic and chaotic time series generated by

dx(t)/dt = −b·x(t) + a·x(t − τ) / (1 + x¹⁰(t − τ))     (7.21)






Fig. 7.8 An example of a distributed computing environment, suitable for implementing ensemble negative
correlation (NC) learning using correlation-corrected data (Chan and Kasabov, 2005).

Evolving Integrated Multimodel Systems


with the parameters a = 0.2, b = 0.1, and τ = 14, initial conditions x(0) = 1.2, x(t) = 0 for t < 0, and a time-step of 1. The input variables are x(t), x(t − 6), x(t − 12), x(t − 18), and the output variable is x(t + 6). The training set and the test set each consist of 500 data points, taken from the 118th–617th and the 618th–1117th time points, respectively. Most of our ensemble setup follows Liu's
setup. The ensemble contains M = 20 multilayer perceptron networks (MLPs).
Each network contains one hidden layer of six neurons. The hidden
and output activation functions are the hyperbolic tangent and linear function,
respectively. The hyperparameter λ is set to 0.5 and the maximum number of
training epochs gmax is set to 10,000. We experimented with a set of update intervals
gupdate = {10, 20, 40, …, 2560} to investigate their effect on the performance. Each
trial was repeated ten times. Performance was assessed by the prediction error
on the test set measured in normalised root mean square (NRMS) error, which is
simply the root mean square error divided by the standard deviation of the series.
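The NRMS error just defined is straightforward to compute:

```python
import numpy as np

def nrms_error(predicted, actual):
    """Root mean square error normalised by the standard deviation of the target series."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    rms = np.sqrt(np.mean((predicted - actual) ** 2))
    return rms / np.std(actual)
```

A constant predictor at the series mean scores an NRMS error of exactly 1, so values well below 1 indicate genuine predictive power.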
The results are plotted in Fig. 7.9 and shown in Table 7.2.
Figure 7.9 shows that NC learning using correlation-corrected data is effective over
a range of update intervals gupdate. The error is highest at 0.018 when gupdate = 10, and
decreases gradually with increasing gupdate until it stabilises at roughly 0.0115 when
gupdate > 100. Although this phenomenon is contrary to the intuition that a shorter
update interval produces more network interaction and leads to better overall performance, it may be attributed to an inappropriate choice of the hyperparameter λ at
different update intervals (Liu and Yao (1999) show that the ensemble performance is
highly sensitive to the value of λ). This causes no problem, as a longer update interval gupdate
is actually advantageous in reducing the required communication overhead.
NC learning using correlation-corrected data is clearly more cost-effective in
terms of communication overhead when compared with Liu and Yao's method.
At gupdate = 100, it scores slightly higher error (0.0115 cf. 0.0100), yet it requires






Fig. 7.9 Plot of test error versus update interval gupdate for the NC-ensemble learning (Chan and Kasabov, 2005).


Evolving Connectionist Systems

Table 7.2 Comparison of test error obtained with the use of different methods for ensemble learning
(Chan and Kasabov, 2005).

Method                                                        Test error   No. network communications

Cooperative ensemble learning system (CELS)                     0.0100       5 × 10^6
Negative correlation (NC) learning using correlation-
  corrected data (gupdate = 100)                                0.0115       2000
Ensemble learning with independent network training             0.02         —
Cascade-correlation (CC) learning                               0.06         —

network communications of only 20 networks × (10,000 epochs/100 epochs) =
2000, rather than 500 training patterns × 10,000 epochs = 5 × 10^6, which is 2.5 × 10^3
times smaller. Its error (0.0115) is far lower than that of other methods such as
EPNet (0.02), ensemble learning with independent network training (0.02), and
cascade-correlation (CC) learning (0.06) (see Table 7.2).
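The communication counts quoted above follow directly from the update schedule:

```python
# Communication counts for the two schemes, as computed in the text.
M, g_max, g_update = 20, 10_000, 100
n_patterns = 500

nc_comm = M * (g_max // g_update)   # one exchange per network per update interval
cels_comm = n_patterns * g_max      # one exchange per pattern per epoch (CELS)
ratio = cels_comm // nc_comm        # how many times fewer exchanges NC-corrected needs
```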
The use of correlation-corrected data provides a simple and practical way to
implement negative correlation ensemble learning. It allows easy integration of
models from different sources and it facilitates effective parallel speedup in a
coarse-grain distributed environment due to its low communication overhead
requirement. Experimental results on the Mackey-Glass series show that its generalisation performance is comparable to that of the original NC method, yet it requires a
significantly smaller (2.5 × 10^3 times) number of network communications.


EFuNN Ensemble Construction Using a Clustering Method and a Co-Evolutionary Genetic Algorithm

Using an ensemble of EFuNNs for solving a classification problem was first

introduced in Kasabov (2001c), where the data were clustered using an evolving
clustering method (ECM) and for each cluster, an EFuNN model was evolved. But
this method did not include optimization of the parameters of the EFuNNs.
In Chapter 6 we presented several methods for parameter and feature
optimization, both off-line and online, of ECOS, and in particular of EFuNN
individual models. Here we discuss the issue of co-evolving multiple EFuNN
models, each of them having their parameters optimised.
When multiple EFuNNs are evolved to learn from different subspaces of the
problem space (clusters of data) and each of them is optimised in terms of its
parameters relevant to the corresponding cluster, that could lead to an improved
accuracy and a speedup in learning, as each EFuNN will have a smaller dataset to
learn. This is demonstrated in the CONE method proposed by Minku and Ludermir
(2006). The method consists of the following steps.
1. The data are clustered using an ECM clustering method (see Chapter 2), or
other clustering methods, in K clusters.
2. For each cluster, a population of EFuNNs is evolved with their parameters
optimised using a co-evolutionary GA (see Potter and de Jong (2000)), and the
best one is selected after a certain number of iterations.



3. The output is calculated as the sum of weighted outputs from all the best models
for each cluster.
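The three steps can be sketched as follows, with plain k-means standing in for ECM and a small population of randomly parameterised ridge regressors standing in for the co-evolved EFuNNs (both are illustrative substitutions, not the CONE components themselves):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_cone_sketch(X, y, K=3, pop_size=8):
    """Steps 1-2 of the CONE-style ensemble, with hedged stand-in components."""
    # Step 1: cluster the input space (crude k-means in place of ECM).
    centres = X[rng.choice(len(X), K, replace=False)]
    for _ in range(20):
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(labels == k):
                centres[k] = X[labels == k].mean(axis=0)
    # Step 2: per cluster, evaluate a small population and keep the best member.
    models = []
    for k in range(K):
        Xk, yk = X[labels == k], y[labels == k]
        if len(Xk) == 0:
            models.append(np.zeros(X.shape[1]))
            continue
        best, best_err = None, np.inf
        for _ in range(pop_size):
            alpha = 10.0 ** rng.uniform(-4, 1)   # the "parameter" being evolved
            w = np.linalg.solve(Xk.T @ Xk + alpha * np.eye(X.shape[1]), Xk.T @ yk)
            err = np.mean((Xk @ w - yk) ** 2)
            if err < best_err:
                best, best_err = w, err
        models.append(best)
    return centres, models

def predict_cone_sketch(X, centres, models):
    # Step 3: weight each cluster model by inverse distance to its centre.
    d = np.sqrt(((X[:, None] - centres) ** 2).sum(-1)) + 1e-9
    weights = (1.0 / d) / (1.0 / d).sum(axis=1, keepdims=True)
    preds = np.stack([X @ w for w in models], axis=1)
    return (weights * preds).sum(axis=1)
```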
The above procedure is applied to several benchmark datasets, such as
Iris, Wine, Glass, and Cancer from the UCI Machine Learning Repository. The classification accuracy
of the ensembles of EFuNNs is about 20% better than that of a single optimised
EFuNN. The following ranges for the EFuNN parameters are used: m-of-n [1,15],
error threshold [0.01, 0.6], maximum radius [0.01, 0.8], initial sensitivity threshold
[0.4, 0.99], and number of membership functions [2,8]. The maximum radius (Dthr)
for the clustering method ECM was selected differently for each of the datasets.


Integrating ECOS and Evolving Ontologies

An ontology is a structured representation of concepts, knowledge, information, and

data on a particular problem. Evolving ontologies describe a process rather than a
static model (Gottgtroy et al., 2006). The evolving characteristic of an ontology is
achieved through the use of learning techniques inherited from data mining and
through its meta-knowledge representation. For example, the hierarchical structure
of an evolving ontology can be determined using the evolving clustering method
ECM (Chapter 2), which permits an instance to be part of more than one cluster.
At the same time, evolving ontology uses its meta-knowledge representation to
cope with this multiple clustering requirement.
Figure 7.10 gives an example of linking an ECOS-based software environment,
NeuCom, with an ontology system for bioinformatics
and biomedical applications, so that the learning process is enacted via ECOS and
the resulting knowledge is represented in the ontology formalism (from Gottgtroy
et al. (2006)).
In recent years ontology structures have been increasingly used to provide a
common framework across disparate systems, especially in bioinformatics, medical
decision support systems, and knowledge management. The use of ontology is a
key towards structuring biological data in a way that helps scientists to understand
the relationships that exist between terms in a specialized area of interest, as
well as to help them understand the nomenclature in areas with which they are
less familiar. For example, gene ontology (GO) has been
widely used in interdisciplinary research to analyse relationships between genes
and proteins across species, including data, literature, and conceptual structured
and unstructured information.
In addition to research-based literature, the amount of data produced daily by
medical information systems and medical decision support systems is growing
at a staggering rate. Scientific biomedical information
can include not only information stored in the genetic code, but also experimental results from various studies and databases, including patient statistics
and clinical data. Large amounts of information and knowledge are available in
medicine. Making medical knowledge and medical concepts shared over applications and reusable for different purposes is crucial.



Fig. 7.10 Integrated ECOS and ontology system for applications in bioinformatics and medical decision support
(from Gottgtroy et al., 2006).

A biomedical ontology is an organizational framework of the concepts involved

in biological entities and processes as well as medical knowledge in a system
of hierarchical and associative relations that allows reasoning about biomedical
knowledge. A biomedical ontology should provide conceptual links between data
from seemingly disparate fields. This might include, for example, the information collected in clinical patient data for clinical trial design, geographical
and demographic data, epidemiological data, drugs, and therapeutic data, as well
as from different perspectives as those collected by nurses, doctors, laboratory
experts, research experiments, and so on.
Figure 7.11 shows a general ontology scheme for bioinformatics and medical
decision support.
There are many software environments for building domain-oriented
ontology systems, one of them being Protégé, developed at Stanford.


Conclusion and Open Questions

This chapter presents general frameworks for building multimodel evolving

machines that make use of the methods and the systems presented in Chapters 2
through 6. Issues such as the biological plausibility and complexity of an evolving



Fig. 7.11 A general ontology scheme for bioinformatics and medical decision support.

system, online parameter analysis and feature selection, and hardware implementation are difficult and need rigorous methodologies to support the future
development and numerous applications of ECOS.
Some open problems raised in the chapter are:
1. How do we build evolving machines that learn the rules that govern the evolving
of both their structure and function in an interactive way?
2. How can different ECOS, that are part of an evolving machine, develop links
between each other in an unsupervised mode?
3. How can ECOS and modules that are part of an evolving machine learn and
improve through communication with each other? They may have a common goal.
4. Can evolving machines evolve their algorithm of operation based on very few
prior rules?
5. How can evolving machines create computer programs that are themselves evolving?
6. What do ECOS need in order for them to become reproducible, i.e. new ECOS
generated from an existing ECOS?
7. How can we model the instinct for information in an ECOS machine?


Further Reading

Principles of ECOS and evolving connectionist machines (Kasabov, 1998–2006)

Dynamic Statistical Modelling (West and Harrison, 1989)
Artificial Life (Adami, 1998)



Self-adaptation in Evolving Systems (Stephens et al., 2000)

Intelligent Agents (Wooldridge and Jennings, 1995)
Evolvable Robots (Nolfi and Floreano, 2000)
Hierarchical Mixture of Experts (Jordan and Jacobs, 1994)
Cooperation of ANN (de Bollivier et al., 1990)
Integrated Kernel and Regression ANN Models (Song et al., 2006)
Evolving Ensembles of ANN (Abbass, 2004; Kidera et al., 2006)
Negative Correlation Ensembles of ANN (Liu and Yao, 1999; Duell et al., 2006;
Chan and Kasabov, 2005)
Ensembles of EFuNNs (Minku and Ludemir, 2006)
Cooperative Co-evolution (Potter and De Jong, 2000)

Evolving Intelligent Systems

Whereas in Part I of the book generic evolving learning methods are presented,
in this part further methods are introduced, along with numerous applications of
ECOS to various theoretical and application-oriented problems in:

Bioinformatics (Chapter 8)
Brain study (Chapter 9)
Language modelling (Chapter 10)
Speech recognition (Chapter 11)
Image recognition (Chapter 12)
Multimodal information processing (Chapter 13)
Robotics and modelling economic and ecological processes (Chapter 14)

All these application-oriented evolving intelligent systems (EIS) are characterised

by adaptive, incremental, evolving learning and knowledge discovery. They only
illustrate the applicability of ECOS to solving problems and more applications are
expected to be developed in the future.
The last chapter, 15, discusses a promising future direction for the development
of quantum-inspired EIS.

8. Adaptive Modelling and Knowledge

Discovery in Bioinformatics

Bioinformatics brings together several disciplines: molecular biology, genetics,

microbiology, mathematics, chemistry and biochemistry, physics, and, of course,
informatics. Many processes in biology, as discussed in the introductory chapter,
are dynamically evolving and their modelling requires evolving methods and
systems. In bioinformatics new data are being made available at a tremendous
speed, which requires the models to be continuously adaptive. Knowledge-based
modelling, that includes rule and knowledge discovery, is a crucial requirement. All
these issues contribute to the evolving connectionist methods and systems needed
for problem solving across areas of bioinformatics, from DNA sequence analysis,
through gene expression data analysis, through protein analysis, and finally to
modelling genetic networks and entire cells as a system biology approach. That
will help to discover genetic profiles and to understand better diseases that do
not have a cure thus far, and to understand better what the human body is made
of and how it works in its complexity at its different levels of organisation (see
Fig. 1.1). These topics are presented in the chapter in the following order.

Bioinformatics: information growth and emergence of knowledge

DNA and RNA sequence data analysis and knowledge discovery
Gene expression data analysis, rule extraction, and disease profiling
Clustering of time-course gene expression data
Protein structure prediction
Gene regulatory networks and the system biology approach
Summary and open problems
Further reading


Bioinformatics: Information Growth, and Emergence

of Knowledge
The Central Dogma of Molecular Biology: Is That the General
Evolving Rule of Life?

With the completion of the first draft of the human genome and the genomes of
some other species (see, for example, Macilwain et al. (2000) and Friend (2000)) the



task is now to be able to process this vast amount of ever-growing information and
to create intelligent systems for prediction and knowledge discovery at different
levels of life, from cell to whole organisms and species (see Fig. I.1).
The DNA (deoxyribonucleic acid) is a chemical chain, present in the nucleus
of each cell of an organism, and it consists of pairs of small chemical molecules
(bases, nucleotides) which are: adenine (A), cytosine (C), guanine (G), and
thymine (T), ordered in a double helix, and linked together by a deoxyribose
sugar-phosphate backbone.
The central dogma of molecular biology (see Fig. 8.1) states that the DNA
is transcribed into RNA, which is translated into proteins, a process that is
continuous in time as long as the organism is alive (Crick, 1959).
The DNA contains millions of nucleotide base pairs, but only 5% or so is used
for the production of proteins, and these are the segments from the DNA that
contain genes. Each gene is a sequence of base pairs that is used in the cell to
produce proteins. Genes have lengths of hundreds to thousands of bases.
The RNA (ribonucleic acid) has a similar structure to the DNA, but here
thymine (T) is substituted by uracil (U). In the pre-RNA only segments that
contain genes are extracted from the DNA. Each gene consists of two types of
segments: exons, that are segments translated into proteins, and introns, segments
that are considered redundant and do not take part in the protein production.
Removing the introns and ordering only the exon parts of the genes in a sequence
is called splicing and this process results in the production of messenger RNA (or
mRNA) sequences.
mRNAs are directly translated into proteins. Each protein consists of a sequence
of amino acids, each of them encoded by a base triplet, called a codon. From
one DNA sequence there are many copies of mRNA produced; the presence of a
certain gene in all of them defines the level of the gene's expression in the cell and
can indicate what and how much of the corresponding protein will be produced
in the cell.
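The transcription and translation steps described above can be illustrated with simple string operations. The codon table here is a four-entry excerpt of the real 64-entry genetic code, and copying the coding strand with T→U is a deliberate simplification of transcription:

```python
# Toy excerpt of the genetic code (the full table has 64 codons).
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
}

def transcribe(dna_coding_strand):
    """Simplified transcription: the mRNA copies the coding strand with T -> U."""
    return dna_coding_strand.replace("T", "U")

def translate(mrna):
    """Read base triplets (codons) until a stop codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

mrna = transcribe("ATGTTTGGCTAA")   # -> "AUGUUUGGCUAA"
```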
The above description of the central dogma of molecular biology is very much
a simplified one, but it helps to explain the rationale behind using
connectionist and other information models in bioinformatics (Brown et al., 2000).

Fig. 8.1 A schematic representation of the central dogma of molecular biology: from DNA to RNA (transcription),
and from RNA to proteins (translation).

Modelling and Knowledge Discovery in Bioinformatics


Genes are complex chemical structures and they cause dynamic transformation
of one substance into another during the whole life of an individual, as well as the
life of the human population over many generations. When genes are in action,
the dynamics of the processes in which a single gene is involved are very complex,
as this gene interacts with many other genes and proteins, and is influenced by
many environmental and developmental factors.
Modelling these interactions, learning about them, and extracting knowledge, is
a major goal for bioinformatics. Bioinformatics is concerned with the application
of the methods of information sciences for the collection, analysis, and modelling
of biological data and the knowledge discovery from biological processes in living
organisms (Baldi and Brunak, 1998, 2001; Brown et al. 2000).
The whole process of DNA transcription, gene translation, and protein
production is continuous and it evolves over time. Proteins have 3D structures
that unfold over time and are governed by physical and chemical laws. Proteins
make some genes express and may suppress the expression of other genes. The
genes in an individual may mutate, slightly change their code, and may therefore
express differently at another time. So, genes may change, mutate, and evolve in
a lifetime of a living organism.
Modelling these processes is an extremely complex task. The more new information is made available about DNA, gene expression, protein creation, and
metabolism, the more accurate the information models will become. They should
adapt to the new information in a continuous way. The process of biological
knowledge discovery is also evolving in terms of data and information being
created continuously.


Life-Long Development and Evolution in Biological Species

Through evolutionary processes (evolution) genes are slowly modified through

many generations of populations of individuals and selection processes
(e.g. natural selection). Evolutionary processes imply the development of generations of populations of individuals where crossover, mutation, and selection of
individuals based on fitness (survival) criteria are applied in addition to the developmental (learning) processes of each individual.
A biological system evolves its structure and functionality through both lifelong
learning of an individual and evolution of populations of many such individuals;
i.e. an individual is part of a population and is a result of evolution of many
generations of populations, as well as a result of its own development, of its lifelong
learning process.
The same genes in the genotype of millions of individuals may be expressed
differently in different individuals, and within an individual, in different cells of the
individuals body. The expression of these genes is a dynamic process depending not
only on the types of the genes, but on the interaction between the genes, and the
interaction of the individual with the environment (the nurture versus nature issue).
Several principles are useful to take into account from evolutionary biology:
Evolution preserves or purges genes.
Evolution is a nonrandom accumulation of random changes.



New genes cause the creation of new proteins.

Genes are passed on through evolution: generations of populations and selection
processes (e.g. natural selection).
There are different ways of interpreting the DNA information (see Hofstadter):
DNA as a source of information and cells as information processing machines
(Baldi and Brunak, 2001)
DNA and the cells as stochastic systems (processes are nonlinear and dynamic,
chaotic in a mathematical sense)
DNA as a source of energy
DNA as a language
DNA as music
DNA as a definition of life


Computational Modelling in Molecular Biology

Following are the main phases of information processing and problem solving in
most of the bioinformatics systems (Fig. I.3.)
1. Data collection: Collecting biological samples and processing them.
2. Feature analysis and feature extraction: Defining which features are more
relevant and therefore should be used when creating a model for a particular
problem (e.g. classification, prediction, decision making).
3. Modelling the problem: Defining inputs, outputs, and type of the model (e.g.,
probabilistic, rule-based, connectionist), training the model, and statistical validation of the results.
4. Knowledge discovery in silico: New knowledge is gained through the analysis of
the modelling results and the model itself.
5. Verifying the discovered knowledge in vitro and in vivo: Biological experiments
in both the laboratory and in real life to confirm the discovered knowledge.
Some tasks in bioinformatics are characterised by:
1. Small datasets, e.g. 100 or fewer samples.
2. Static datasets, i.e. data do not change in time once they are used to create a model.
3. No need for online adaptation and training on new data.
For these tasks the traditional statistical and AI techniques are well suited. The
traditional, off-line modelling methods assume that data are static and no new
data are going to be added to the model. Before a model is created, data are
analysed and relevant features are selected, again in an off-line mode. The offline mode usually requires many iterations of data propagation for estimating
the model parameters. Such methods for data analysis and feature extraction
utilise principal component analysis (PCA), correlation analysis, off-line clustering



techniques (such as K-means, fuzzy C-means, etc.), self-organising maps (SOMs),

and many more. Many modelling techniques are applicable to these tasks, for
example: statistical techniques such as regression analysis and support vector
machines; AI techniques such as decision trees, hidden Markov models, and finite
automata; and neural network techniques such as MLP, LVQ, and fuzzy neural networks.
Some of the modelling techniques allow for extracting knowledge, e.g. rules
from the models that can be used for explanation or for knowledge discovery. Such
models are the decision trees and the knowledge-based neural networks (KBNN;
Cloete and Zurada, 2000).
Unfortunately, most of the tasks for data analysis and modelling in bioinformatics are characterized by:
1. Large dimensional datasets that are updated regularly.
2. A need for incremental learning and adaptation of the models from input data
streams that may change their dynamics in time.
3. Knowledge adaptation based on a continuous stream of new data.
When creating models of complex processes in molecular biology the following
issues must be considered.
How to model complex interactions between genes and proteins, and between
the genome and the environment.
Both stability and repetitiveness are features that need to be modelled, because
genes are relatively stable carriers of information.
Dealing with uncertainty, for example, when modelling gene expressions, there
are many sources of uncertainty, such as
Alternative splicing (a splicing process of the same RNAs resulting in different mRNAs)
Mutation in genes, caused by ionizing radiation (e.g. X-rays), chemical contamination, replication errors, viruses that insert genes into host cells, etc. Mutated
genes express differently and cause the production of different proteins.
For large datasets and for continuously incoming data streams that require the
model and the system to rapidly adapt to new data, it is more appropriate to use
online, knowledge-based techniques and ECOS in particular as demonstrated in
this chapter.
There are many problems in bioinformatics that require their solutions in the
form of a dynamic, learning, knowledge-based system. Typical problems that are
also presented in this chapter are:
Discovering patterns (features) from DNA and RNA sequences (e.g. promoters,
ribosome binding sites, splice junctions)
Analysis of gene expression data and gene profiling of diseases
Protein discovery and protein function analysis
Modelling the full development (metabolic processes) of a cell (Tomita, 2001)



An ultimate task for bioinformatics would be predicting the development of an

organism from its DNA code. We are far from its solution now, but many other
tasks on the way can be successfully solved through merging information sciences
with biological sciences as demonstrated in this chapter.


DNA and RNA Sequence Data Analysis

and Knowledge Discovery
Problem Definition

Principles of DNA Transcription and RNA Translation

and Their Computational Modelling
As mentioned previously, only two to five percent of the human genome (the
DNA) contains information that concerns the production of proteins (Brown
et al., 2000). The number of genes contained in the human genome is about
40,000 (Friend, 2000). Only the gene segments are transcribed into RNA sequences.
The transcription is achieved through special proteins, enzymes called RNA
polymerase, that bind to certain parts of the DNA (promoter regions) and start
reading and storing in an mRNA sequence each gene code.
Analysis of a DNA sequence and identifying promoter regions is a difficult task.
If it is achieved, it may make it possible to predict, from DNA information, how this
organism will develop or, alternatively, what an organism looked like in retrospect.
In simple organisms, bacteria (prokaryotic organisms), DNA is transcribed
directly into mRNA that consists of genes that contain only codons (no intron
segments). The translation of the genes into proteins is initiated by proteins called
ribosomes, that bind to the beginning of the gene (ribosome binding site) and
translate the sequence until reaching the termination area of the gene. Finding
ribosome binding sites in bacteria would reveal how the bacteria would act and
what proteins would be produced.
In higher organisms (that contain a nucleus in the cell) the DNA is first
transcribed into a pre-mRNA that contains all the regions from the DNA that
contain genes. The pre-RNA is then transcribed into many sequences of functional
mRNAs through a splicing process, so that the intron segments are deleted from
the genes and only the exon segments, that account for proteins, are extracted.
The functional mRNA is now ready to be translated into proteins.
Finding the splice junctions that separate the introns from the exons in a pre-mRNA structure is another difficult task for computer modelling and pattern
recognition, which once solved would help us understand what proteins would be
produced from certain mRNA sequences. This task is called splice junction identification.
But even having recognized the splice junctions in a pre-mRNA, it is extremely
difficult to predict which genes will really become active, i.e. will be translated into
proteins, and how active they will be: how much protein will be produced. That



Fig. 8.2 A hypothetical scheme of using neural networks for DNA/RNA sequence analysis.

is why gene expression technologies (e.g. microarrays) have been introduced, to

measure the expression of the genes in mRNAs. The level of a gene expression
would suggest how much protein of this type would be produced in the cell, but
again this would only be an approximation.
Analysis of gene expression data from microarrays is discussed in the next
section. Here, some typical tasks of DNA and RNA sequence pattern analysis
are presented, namely ribosome binding site identification and splice junction identification.
Recognizing patterns from DNA or from mRNA sequences is a way of recognizing genes in these sequences and of predicting proteins in silico (in a computer).
For this purpose, usually a window is moved along the DNA sequence and data
from this window are submitted to a classifier (identifier) which identifies if one
of the known patterns is contained in this window. A general scheme of using
neural networks for sequence pattern identification is given in Fig. 8.2.
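A minimal sketch of this windowing scheme, with a placeholder classifier in place of the trained neural network:

```python
def scan_sequence(seq, window, classify, step=1):
    """Slide a fixed-length window along a DNA/RNA string and collect the
    start positions whose window the classifier flags as containing a pattern."""
    hits = []
    for start in range(0, len(seq) - window + 1, step):
        if classify(seq[start:start + window]):
            hits.append(start)
    return hits

# Example with a toy classifier that looks for an exact consensus motif:
hits = scan_sequence("GGTATAATCCGTATAATA", 6, lambda w: w == "TATAAT")
```

In practice the `classify` callable would be the trained network of Fig. 8.2, fed with an encoded version of each window rather than the raw string.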
Many connectionist models have been developed for identifying patterns in a
sequence of RNA or DNA (Fu, 1999; Baldi and Brunak, 2001). Most of them deal
with a static dataset and use multilayer perceptrons (MLPs) or self-organising maps (SOMs).
In many cases, however, there is a continuous flow of data that is being made
available for a particular pattern recognition task. New labelled data need to be
added to existing classifier systems for a better classification performance on future
unlabelled data. This can be done with the use of the evolving models and systems.
Several case studies are used here to illustrate the application of evolving systems
for sequence DNA and RNA data analysis.


DNA Promoter Recognition

Only two to five percent of the human genome (the DNA) contains useful information that concerns the production of proteins. The number of genes contained
in the human genome is about 40,000. Only the gene segments are transcribed into



RNA sequences and then translated into proteins. The transcription is achieved
through special proteins, enzymes called RNA polymerase, that bind to certain
parts of the DNA (promoter regions) and start reading and storing in an mRNA
sequence each gene code. Analysis of a DNA sequence and identifying promoter
regions is a difficult task. If it is achieved, it may make it possible to predict, from
DNA information, how this organism will develop or, alternatively, what an
organism looked like in retrospect. The promoter recognition process is part of
a complex process of gene regulatory network activity, where genes interact with
each other over time, defining the destiny of the whole cell.
Extensive analysis of promoter recognition methods and experimental results
are presented in Bajic and Wee (2005).

Case Study
In Pang and Kasabov (2004) a transductive SVM is compared with SVM methods
on a collection of promoter and nonpromoter sequences. The promoter sequences
are obtained from the eukaryotic promoter database (EPD). There are 793 different vertebrate promoter sequences of length
250 bp. These 250 bp long sequences represent positive training data. We also
collected a set of nonoverlapping human exon and intron sequences of length
250 bp each, from the GenBank database (Rel. 121). For training we used 800 exon and 4000 intron sequences.


Ribosome Binding Site Identification

Case Study: Ribosome Binding Site Identification in E. coli Bacteria

The following are the premises of the task and the parameters of the developed
modelling system. The dataset contains 800 positive base sequences (each containing an RBS)
and 800 negative base sequences (containing no RBS), each of them 33
bases long.
The task is to develop an RBS identification system, so that if a new 33-base-long
sequence is submitted, the system will identify whether there is an RBS within this
sequence or not. This task has been dealt with in several publications (Fu, 1999).
The following encoding is used: A = 1000, T = 0100, G = 0010, C = 0001; binding
site = 1, nonbinding site = 0.
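With this encoding, a 33-base sequence maps to 33 × 4 = 132 binary inputs; a minimal sketch:

```python
# Four-bit one-hot codes for the bases, as given in the text.
BASE_CODES = {"A": [1, 0, 0, 0], "T": [0, 1, 0, 0], "G": [0, 0, 1, 0], "C": [0, 0, 0, 1]}

def encode_sequence(seq):
    """One-hot encode a base sequence: each base becomes four binary inputs."""
    bits = []
    for base in seq:
        bits.extend(BASE_CODES[base])
    return bits

x = encode_sequence("ATG" * 11)   # a 33-base sequence -> 132 binary inputs
```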
An evolving fuzzy neural network EFuNN (Chapter 3) was trained in an online
mode with the following parameter values: 132 inputs (33 × 4), 1 output, initial sensitivity
threshold Sthr = 0.9, error threshold Erthr = 0.1, initial learning rate lr = 0.1, m-of-n value m = 3; Rn = 9 rule nodes were evolved after the presentation of all 1600 examples;
aggregation of rule nodes is performed after every 50 examples (Nagg = 50).

Modelling and Knowledge Discovery in Bioinformatics




Fig. 8.3 Online learning of ribosome binding site data of two classes (Yes = 1, No = 0). EFuNN has learned
very quickly to predict new data in an online learning mode. The upper figure gives the desired versus the
predicted one-step-ahead existence of RBS in an input vector of 33 bases (nucleotides). The lower figure shows
the number of rule nodes created during the learning process. Aggregation is applied after every 50 examples.

Figure 8.3 shows the process of online learning of the EFuNN from the ribosome
binding site data. It took a very small number of examples for the EFuNN to
learn the input patterns of each of the two classes and to predict properly the class
of the following example from the data stream. Nine rules are extracted from the
trained EFuNN that define when, in a sequence of 33 bases, an RBS should be
expected and when it should not. Through aggregation, the number of rules is
kept comparatively small, which also improves the generalisation property of the
system.
Using an evolving connectionist system allows us to add new RBS data to the
already evolved system in an online mode and to extract refined rules at any time
of the system operation.


RNA Intron/Exon Splice Junction Identification

Here a benchmark dataset, obtained from the machine learning database repository at the University of California, Irvine (Blake and Merz, 1998), is used. It
contains primate splice-junction gene sequences for the identification of splice site
boundaries within these sequences. As mentioned before, in eukaryotes the genes
that code for proteins are contained in coding regions (exons) that are separated
from noncoding regions (introns) of the pre-mRNA at defined boundaries, the


Evolving Connectionist Systems

so-called splice junction. The dataset consists of 3190 RNA sequences, each of
them 60 nucleotides long and classified as an exon-intron boundary (EI), an
intron-exon boundary (IE), or a nonsplice site (N).
Several papers reported the use of MLP and RBF networks for the purpose of
the task (see, for example Fu (1999)).
Here, an EFuNN system is trained on the data. The EFuNN is trained with 1000
examples, randomly drawn from the splice dataset. The EFuNN has the following
parameter values: membership functions MF = 2; number of examples before
aggregation of the rule nodes Nagg = 1000; error threshold Ethr = 0.1.
When the system was tested on another set of 1000 examples, drawn randomly
from the rest of the dataset, 75.7% accuracy was achieved. The number of
evolved rule nodes is 106. With the use of rule extraction thresholds T1 and T2
(see Chapter 3), a smaller number of rules were extracted, some of them shown
in Table 8.1. If on a certain position (out of 60) in the antecedent part of the rule
there is a dash ('-') rather than a base name, it means that it is not important for this
rule what base is on this position.
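A rule antecedent of this kind, where a dash stands for "any base", can be matched against a sequence with a few lines of code. This is our own illustrative sketch of the dash convention, not code from the EFuNN software.

```python
def matches_rule(sequence, antecedent):
    """Check a base sequence against a rule antecedent in which '-'
    means that any base is acceptable at that position."""
    if len(sequence) != len(antecedent):
        return False
    return all(r == "-" or r == s
               for s, r in zip(sequence.upper(), antecedent.upper()))
```

For example, a pattern such as "CCC-GG" accepts any base at the fourth position.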
When a new pre-mRNA sequence is submitted to the EFuNN classifier, it will
produce the probability that a splice junction is contained in this sequence.
Using an evolving connectionist system allows us to add new splice junction
site data to an already evolved system in an online mode, and to extract refined
splice boundary rules at any time of the system operation.

Table 8.1 Several rules extracted from a trained EFuNN model on a splice-junction dataset.
Rule4: if GGTGAGC then [EI], receptive field = 0.221, max radius = 0.626, examples = 73/1000
then [EI], receptive field = 0.190, max radius = 0.627, examples = 34/1000
Rule25: if CC-C-TCC-GCTC-GT-CGGTGAGTGGGCCG-GG-CCC- then [EI], receptive field = 0.216, max radius = 0.628, examples = 26/1000
Rule2: if -CCC-TC-CCCAGG-G-C- then [IE], receptive field = 0.223, max radius = 0.358, examples = 64/1000
Rule12: if CCCAGG then [IE], receptive field = 0.318, max radius = 0.359, examples = 83/1000
Rule57: if -TT-C-T-TT-CAGG-C-AT- then [IE], receptive field = 0.242, max radius = 0.371, examples = 21/1000
Rule5: if AT then [N], receptive field = 0.266, max radius = 0.391, examples = 50/1000
Rule6: if GGGC-GGGGA- then [N], receptive field = 0.229, max radius = 0.400, examples = 23/1000
Rule9: if GGG-T then [N], receptive field = 0.203, max radius = 0.389, examples = 27/1000
Rule10: if A-AAAA-A-A- then [N], receptive field = 0.291, max radius = 0.397, examples = 36/1000
Rule14: if T-TG-TG-TT-TCCC then [N], receptive field = 0.223, max radius = 0.397, examples = 21/1000




MicroRNA Data Analysis

RNA molecules are emerging as central players controlling not only the
production of proteins from messenger RNAs, but also regulating many essential
gene expression and signalling pathways. The mouse cDNA sequencing project
FANTOM in Japan showed that noncoding RNAs constitute at least a third of
the total number of transcribed mammalian genes. In fact, about 98% of the RNA
produced in the eukaryotic cell is noncoding, produced from introns of protein-coding
genes, from non-protein-coding genes, and even from intergenic regions, and it
is now estimated that half of the transcripts in human cells are noncoding.
These noncoding transcripts are thus not junk, but could have many crucial
roles in the central dogma of molecular biology. The most recently discovered,
rapidly expanding group of noncoding RNAs is microRNAs, which are known
to have exploded in number during the emergence of vertebrates in evolution.
They are already known to function in lower eukaryotes in the regulation of cell and
tissue development, cell growth and apoptosis, and many metabolic pathways,
with similar likely roles in vertebrates (Havukkala et al., 2005).
MicroRNAs are encoded by long precursor RNAs, commonly several hundred
basepairs long, which typically form foldback structures resembling a straight
hairpin with occasional bubbles and short branches. The length and the conservation of these long transcribed RNAs make it possible by a sequence similarity
search method to discover and classify many phylogenetically related microRNAs
in the Arabidopsis genome (Havukkala et al., 2005). Such analysis has established that most plant microRNA genes have evolved by inverted duplication of
target gene sequences. The mechanism of their evolution in mammals is less clear.
Lack of conserved microRNA sequences or microRNA targets between animals
and plants suggests that plant microRNAs evolved after the split of the plant
lineage from mammalian precursor organisms. This means that the information
about plant microRNAs does not help to identify or classify most mammalian
microRNAs. Also, in mammalian genomes the foldback structures are much
shorter, down to only about 80 basepairs, making a sequence similarity search
a less effective method for finding and clustering remotely related microRNAs.
Several sequence similarity and RNA-folding-based methods have been
developed to find novel microRNAs. These include: simple BLAST similarity search;
screening by RNA-fold prediction algorithms (the best known are Mfold and RNAfold)
to look for stem-loop structure candidates having a characteristically low deltaG
value, indicating strong hybridization of the folded molecule, followed by further
screening by sequence conservation between genomes of related species; and careful
multiple alignment of many different sequences from closely related primate
species to find accurate conservation at single nucleotide resolution.
The problem with all these approaches is that they require extensive sequence
data and laborious sequence comparisons between many genomes as one key
filtering step. Also, finding species-specific, recently evolved microRNAs by these



methods is difficult, as well as evaluating the phylogenetic distance of remotely

related genes which have diverged too much in sequence.
One tenet of this section is that the two-dimensional (2D) structure of
many microRNAs (and noncoding RNAs in general) can give additional information which is useful for their discovery and classification, even with data
from within only one species. This is analogous to protein three-dimensional
(3D) structure analysis, which often shows functional and/or evolutionary similarities
between proteins that cannot easily be seen by sequence similarity methods alone
(Havukkala et al., 2006).
Prediction of RNA folding in 2D is more advanced, and reasonably accurate
algorithms are available which can simulate the putative most likely and
thermodynamically most stable structures of self-hybridizing RNA molecules.
Many such structures have also been verified by various experimental methods in
the laboratory, corroborating the general accuracy of these folding algorithms.
In Havukkala et al. (2005, 2006) we have approached the problem by utilising
visual information from images of computer-simulated 2D structures of macromolecules. The innovation is to use suitable artificial intelligence image analysis
methods, such as the Gabor filter on bitmap images of the 2D conformation. This
is in contrast to the traditional approach of using as a starting point various
extracted features, such as the location and size/length of loops/stems/branches
etc., which can comprise a preconceived hypothesis of the essential features of
the molecule conformation. The procedure is to take a sample of noncoding RNA
sequences, calculate their 2D thermodynamically most stable conformation, output
the image of the structure to bitmap images, and use a variety of rotation-invariant
image analysis methods to cluster and classify the structures without preconceived
hypotheses as to what kind of features might be important ones. Although one
may lose specific information about the exact location or length of loops/stems or
specific sequence motifs, the image analysis could reveal novel relevant features in
the image that may not be intuitively obvious to the human eye, e.g. fractal index
of the silhouette, ratio of stem/loop areas, handedness of asymmetric configurations, etc.


Gene Expression Data Analysis, Rule Extraction,

and Disease Profiling1
Problem Definition

One of the contemporary directions while searching for efficient drugs for many
terminal illnesses, such as cancer or HIV, is the creation of gene profiles of these

The gene expression profiling methodology described in this section constitutes intellectual property of Pacific Edge Biotechnology Ltd. (PEBL).
Prior permission from PEBL is needed for any commercial applications of this methodology.
The methodology is protected as a PCT patent (Kasabov, 2001b).



diseases and subsequently finding targets for treatment through gene expression
regulation. A gene profile is a pattern of expression of a number of genes that is
typical for all, or for some, of the known samples of a particular disease. A disease
profile would look like:
IF (gene g1 is highly expressed) AND (gene g37 is low expressed) AND (gene
g134 is very highly expressed) THEN most probably this is cancer type C (123 out
of the available 130 samples have this profile).
Having such profiles for a particular disease makes it possible to set early
diagnostic testing, so a sample can be taken from a patient, the data related to
the sample processed, and a profile obtained. This profile can be matched against
existing gene profiles and based on similarity, it can be predicted with certain
probability if the patient is in an early phase of a disease or he or she is at risk of
developing the disease in the future.
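As a rough sketch of how such profile matching could work: a profile can be represented as a set of gene/expression-level conditions, and a new sample is assigned to the profile it satisfies best. The representation and all names here are our own illustrative assumptions, not the PEBL methodology itself.

```python
def profile_match(sample, profile):
    """Fraction of profile conditions satisfied by a sample.
    Both are dictionaries mapping gene names to expression levels."""
    hits = sum(1 for gene, level in profile.items()
               if sample.get(gene) == level)
    return hits / len(profile)

def classify(sample, profiles):
    """Return the label of the best-matching profile and its score."""
    return max(((label, profile_match(sample, p))
                for label, p in profiles.items()),
               key=lambda pair: pair[1])
```

The score could then be read as the "certain probability" mentioned above, with a threshold deciding whether a match is reported at all.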
A methodology of using DNA or RNA samples, labelled with known diseases,
that consists of training an evolving system and extracting rules, that are presented
as disease profiles, is illustrated schematically in Fig. 8.4. Each profile is a rule
that is extracted from a trained ECOS, which on the figure is visualised through
colours: the higher the level of a gene expression, the brighter the colour. Five
profiles are visualised in Fig. 8.4. The first three represent a group of samples of
class 1 (disease), with the second two representing class two (normal case). Each
column in the condition part of the rules (profiles) represents the expression of
one gene out of the 100 relevant genes used in this example.
Microarray equipment is widely used at present to evaluate the level of gene
expression in a tissue or in a living cell (Schena, 2000). Each point (pixel, cell) in a
microarray represents the level of expression of a single gene. Five principal steps
in the microarray technology are shown in Fig. 8.5. They are: tissue collection, RNA
extraction, microarray gene expression recording, scanning and image processing,
and data analysis.
The recent advent of DNA microarray and gene chip technologies means
that it is now possible to simultaneously interrogate thousands of genes in
tumours. The potential applications of this technology are numerous and include



Fig. 8.4 A schematic representation of the idea of using evolving connectionist systems to learn gene profiles
of diseases from DNA/RNA data.



Fig. 8.5 Principal steps in a gene expression microarray experiment with a consecutive data analysis and profile extraction.

identifying markers for classification, diagnosis, disease outcome prediction, therapeutic responsiveness, and target identification. Microarray analysis might not
identify unique markers (e.g. a single gene) of clinical utility for a disease because
of the heterogeneity of the disease, but a prediction of the biological state of
disease is likely to be more sensitive by identifying clusters of gene expressions
(profiles; Kasabov (2001a,b)).
For example, gene expression clustering has been used to distinguish
normal colon samples from tumours from within a 6500 gene set, although
clustering according to clinical parameters has not been undertaken (Alon et al.,
1999). Although distinction between normal and tumour tissue can be easily made
using microscopy, this analysis represented one of the early attempts to classify
biological samples through gene expression clustering. The above dataset is used
in this section to extract profiles of colon cancer and normal tissue through using
an evolving fuzzy neural network EFuNN (Chapter 3).
Another example of profiling developed in this chapter is for the distinction
between two subtypes of leukaemia, namely AML and ALL (Golub et al., 1999).
NNs have already been used to create classification systems based on gene
expression data. For example, Khan et al. (2001) used MLP NNs and achieved
a successful classification of 93% of Ewing's sarcomas, 96% of rhabdomyosarcomas, and 100% of neuroblastomas. From within a set of 6567 genes,
96 genes were used as variables in the classification system. Whether these
results would be different using different classification methods needs further
investigation.

A Gene Expression Profiling Methodology

A comprehensive methodology for profiling of gene expression data from

microarrays is described in Futschik et al. (2002, 2003a,b). It consists of the
following phases.



1. Microarray data preprocessing. This phase aims at eliminating the low expressed
genes, or genes that are not sufficiently expressed across the classes (e.g.
control versus tumour samples, or metastatic versus nonmetastatic tumours).
Very often a log transformation is applied in order to reduce the range of gene
expression data. An example of how this transformation squeezes the gene
expression values, plotted in the 2D principal components space, is given in Fig. 8.6.
There are only two samples used (two cell lines) and only 150 genes, out of the
4000 on the microarray, that distinguish these samples.
2. Selecting a set of significant differentially expressed genes across the classes.
Usually the t-test is applied at this stage with an appropriate threshold used
(Metcalfe, 1994). The t-test calculates in principle the difference between the
mean expression values of each gene g for each class (e.g. two classes: class 1,
normal, and class 2, tumour):

t = (μ1 − μ2) / σ1,2

where μ1 and μ2 are the mean expression values of gene g for class 1 and class
2 respectively; σ1,2 is the combined variance term of the two classes.
3. Finding subsets of (a) underexpressed genes and (b) overexpressed genes from
the genes selected in the previous step. Statistical analysis of these subsets is
then performed.
4. Clustering of the gene sets from phase 3 that would reveal preliminary
profiles of jointly overexpressed/underexpressed genes across the classes. An
example of hierarchical clustering of 12 microarray vectors (samples), each
containing the expression of 50 genes after phases 1 to 3 were applied
on the initial 4000 gene expression data from the microarrays, is given in
Fig. 8.7. Figure 8.7a plots the samples in a 2D Sammon projection space of
the 50D gene expression space. Figure 8.7b presents graphically the similarity
between the samples (columns), based on the 50 selected genes, and the
similarity between the genes (rows) based on their expression in the 12 samples
(see Chapter 2).



(Axes: Intensities Channel 1 versus Intensities Channel 2 before the transformation; Log2 Intensities Channel 1 versus Log2 Intensities Channel 2 after it.)

Fig. 8.6 Gene expression data: (a) before log transformation; (b) after log transformation.




Fig. 8.7 (a) Sammon's projection of the 50D gene expression space of 12 gene expression vectors (samples, taken
from 12 tissues); (b) hierarchical clustering of these data. The rows are labelled by the gene names and the
columns represent different samples. The lines link similar items (similarity is measured as correlation) in a
hierarchical fashion.

Fig. 8.8 Gene expression values are fuzzified with the use of three triangular membership functions (MF)
(Futschik et al., 2002).

5. Building a classification model and extracting rules that define the profiles for
each class. The rules represent the fine grades of the common expression
level of groups of genes. Through using thresholds, smaller or larger groups of
genes can be selected from the profile. For a better rule representation, gene
expression values can be fuzzified, as illustrated in Fig. 8.8.
6. Further training of the model on new data and updating the profiles. With
the arrival of new labelled data (samples) the model needs to be updated, e.g.
trained on additional data, and possibly modified rules (profiles) extracted.
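Phase 2 above, the t-test ranking of genes, can be sketched as follows. This is a minimal illustration using the standard two-sample (Welch) form of the statistic as a stand-in for the book's combined variance term σ1,2; the function names are ours.

```python
import math

def t_statistic(class1_values, class2_values):
    """Two-sample t-statistic for one gene: the difference of the class
    means divided by the combined standard error (Welch form)."""
    n1, n2 = len(class1_values), len(class2_values)
    m1 = sum(class1_values) / n1
    m2 = sum(class2_values) / n2
    v1 = sum((x - m1) ** 2 for x in class1_values) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in class2_values) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

def top_genes(expr, labels, k):
    """Rank genes by |t| and keep the k most differentially expressed.
    expr: {gene: [value per sample]}; labels: 0/1 class per sample."""
    def abs_t(gene):
        vals = expr[gene]
        c1 = [v for v, y in zip(vals, labels) if y == 0]
        c2 = [v for v, y in zip(vals, labels) if y == 1]
        return abs(t_statistic(c1, c2))
    return sorted(expr, key=abs_t, reverse=True)[:k]
```

Applying a threshold on |t| instead of a fixed k corresponds to the thresholding mentioned in phase 2.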
Two datasets are used here to illustrate the above methodology that explores
evolving systems for microarray data analysis.


Case Study 1: Gene Profiling of Two Classes of Leukaemia with the Use of EFuNN

A dataset of 72 classification examples for the leukaemia cancer disease is used that
consists of two classes and a large input space: the expression values of 6817
genes monitored by Affymetrix arrays (Golub et al., 1999). The two types of
leukaemia are acute myeloid leukaemia (AML) and acute lymphoblastic leukaemia
(ALL). The latter can be subdivided further into T-cell and B-cell lineage
classes. Golub et al. split the dataset into 38 cases (27 ALL, 11 AML) for training
and 34 cases (20 ALL, 14 AML) for validation of a classifier system. These
two sets came from different laboratories. The test set shows a higher heterogeneity with regard to tissue and age of patients, making any classification more
difficult.
The task is: (1) to find a set of genes distinguishing ALL and AML; (2) to
construct a classifier based on these data; and (3) to find a gene profile of each of
the classes.
After having applied points 1 and 2 from the methodology above, 100 genes are
selected.
A preliminary analysis of the separability of the two classes can be done through
plotting the 72 samples in the 2D principal component analysis space. PCA consists



of a linear transformation from the original set of variables (100 genes) to a

new (smaller, 2D) set of orthogonal variables (principal components) so that the
variance of the data is maximal and ordered according to the principal components;
see Fig. 8.9a.
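The PCA projection just described can be sketched as follows. This is an illustrative implementation using power iteration for the first two principal components, not the software used to produce Fig. 8.9a; all names are ours.

```python
def pca_2d(data, iters=200):
    """Project the rows of `data` (samples x genes) onto their first
    two principal components, via power iteration on X^T X."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]

    def matvec(v):
        # Compute (X^T X) v without forming the d x d matrix explicitly.
        xv = [sum(X[i][j] * v[j] for j in range(d)) for i in range(n)]
        return [sum(X[i][j] * xv[i] for i in range(n)) for j in range(d)]

    def power_iteration(deflate=None):
        v = [1.0 / (j + 1) for j in range(d)]   # arbitrary start vector
        for _ in range(iters):
            if deflate is not None:             # stay orthogonal to pc1
                dot = sum(a * b for a, b in zip(v, deflate))
                v = [a - dot * b for a, b in zip(v, deflate)]
            w = matvec(v)
            norm = sum(a * a for a in w) ** 0.5
            v = [a / norm for a in w]
        return v

    pc1 = power_iteration()
    pc2 = power_iteration(deflate=pc1)
    return [[sum(x * a for x, a in zip(row, pc1)),
             sum(x * a for x, a in zip(row, pc2))] for row in X]
```

The first coordinate of each projected sample carries the maximal variance, the second the maximal remaining variance, as stated above.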
Several EFuNNs are evolved through the N-cross-validation technique (leave-one-out method) on the 72 data examples. The EFuNN parameters as well as the
training and test errors are given in Table 8.2.
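The leave-one-out (N-cross-validation) procedure is model-agnostic and can be sketched generically. Here a toy 1-nearest-neighbour model stands in for the EFuNN; all names are our own.

```python
def leave_one_out(samples, labels, train, predict):
    """N-cross-validation (leave-one-out): train on all but one example,
    test on the held-out one, and report the overall accuracy."""
    correct = 0
    for i in range(len(samples)):
        model = train(samples[:i] + samples[i + 1:],
                      labels[:i] + labels[i + 1:])
        if predict(model, samples[i]) == labels[i]:
            correct += 1
    return correct / len(samples)

def train_1nn(xs, ys):
    """A stand-in 'training' step: just memorise the examples."""
    return list(zip(xs, ys))

def predict_1nn(model, x):
    """Label of the nearest memorised example (squared distance)."""
    def dist(pair):
        return sum((p - q) ** 2 for p, q in zip(pair[0], x))
    return min(model, key=dist)[1]
```

With 72 examples this evolves 72 models, each trained on 71 examples, exactly as described for Table 8.2.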
In the case of data being made available continuously over time and fast
adaptation on the new data needed to improve the model performance, online
modelling techniques would be more appropriate, so that any new labelled data



Fig. 8.9 (a) The first two principal components of the leukaemia 100 genes selected after a t-test is applied:
 AML, + ALL (Futschik et al., 2002); (b) some of the rules extracted from an EFuNN trained on the leukaemia
data and visualised as profile patterns (Futschik et al., 2002).



Table 8.2 The parameter values and error results of N-cross-validation EFuNN models for the leukaemia
and colon cancer data (Futschik et al., 2002).



(Columns: Nagg, rule nodes, training data accuracy, and test accuracy; rows: the EFuNN models for the leukaemia and colon cancer data.)
will be added to the EFuNN and the EFuNN will be used to predict the class of
any new unlabelled data.
Different EFuNNs were evolved with the use of different sets of genes as input
variables. The question of what is the optimum number of genes for a particular
task is a difficult one to answer. Table 8.3 shows two of the extracted rules after
all the examples, each of them having only 11 genes, are learned by the EFuNN.
The rules are local and each of them has the meaning of the dominant rule in a
particular subcluster of each class from the input space. Each rule covers a cluster
of samples that belong to a certain class. These samples are similar to a degree
that is defined as the radius of the receptive field of the rule node representing
the cluster. For example, Rule 6 from Table 8.3 shows that 12 samples of class 2
(AML) are similar in terms of having genes g2 and g4 overexpressed, and at the
same time genes g8 and g9 are underexpressed.
One class may be represented by several rules, profiles, each of them covering
a subgroup of similar samples. This can lead to a new investigation on why the
subgroups are formed and why they have different profiles (rules), even being part
of the same class.
The extracted rules for each class comprise a profile of this class. One way
of visually representing these profiles is illustrated in Fig. 8.9b, where rules were
extracted from a trained EFuNN with 100 genes.

Table 8.3 Some of the rules extracted from the evolved EFuNN.
Rule 1: if [g1] is (2 0.9) and [g3] is (2 0.9) and [g5] is (2 0.7) and [g6] is (2 0.7) and [g8] is (1 0.8) and
[g9] is (2 0.7), receptive field = 0.109 (radius of the cluster), then Class 1, accommodated training
examples = 27/72.
Rule 6: if [g2] is (2 0.8) and [g4] is (2 0.874) and [g8] is (1 0.9) and [g9] is (1 0.7), receptive field = 0.100,
then Class 2, accommodated training examples = 12/72.

Denotation: [g1] is (2 0.9) means that the membership degree to which the gene 1 expression value belongs to
the membership function High (denoted 2) is 0.9. Alternatively, 1 denotes the membership function Low. A
membership degree threshold of 0.7 is used and values less than this threshold are not shown.
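The membership degrees in this denotation come from triangular membership functions such as those in Fig. 8.8. A minimal sketch follows; the breakpoints 0, 0.5, and 1 for a normalised expression value are our illustrative assumptions, not the book's exact settings.

```python
def triangular_mf(x, a, b, c):
    """Degree of membership of x in a triangle with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify(x):
    """Three membership functions (Low, Medium, High) over a [0, 1]
    normalised gene expression value, in the spirit of Fig. 8.8."""
    return {"Low": triangular_mf(x, -0.5, 0.0, 0.5),
            "Medium": triangular_mf(x, 0.0, 0.5, 1.0),
            "High": triangular_mf(x, 0.5, 1.0, 1.5)}
```

A value of 0.25, for instance, belongs equally (degree 0.5) to Low and Medium; applying the 0.7 threshold from the denotation would hide both.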




Case Study 2: Gene Profiling of Colon Cancer

The second dataset is two-class gene expression data, the classes being colon
cancer and normal (see Alon et al. (1999)). The data were collected from 40
tumour and 22 normal colon tissues sampled with the use of the Affymetrix
oligonucleotide microarrays. The expression of more than 6500 genes and ESTs is
collected for each sample.
After the preprocessing, the normalisation, the log-transformation, and the t-test
analysis, only 50 genes are selected for the creation of the classification model
and for the knowledge discovery procedure. Figure 8.10 shows the projection
of the 62 samples from the 50D gene expression space into the 2D PCA space,
and also the ordered gene-samples hierarchical clustering diagram according to
a similarity measured through the Pearson correlation coefficients (see Futschik
et al. (2002, 2003)).
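The correlation-based hierarchical clustering used here can be sketched as a naive average-linkage agglomeration; this is illustrative only, not the software behind Fig. 8.10, and all names are ours.

```python
def pearson(u, v):
    """Pearson correlation coefficient between two value lists."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

def members(c):
    """Flatten a dendrogram node into its leaf names."""
    return [c] if isinstance(c, str) else members(c[0]) + members(c[1])

def cluster(vectors):
    """Agglomerative clustering of named vectors ({name: values});
    returns a nested tuple (dendrogram). Cluster similarity is the
    average Pearson correlation over all member pairs."""
    clusters = list(vectors)
    def sim(c1, c2):
        pairs = [(a, b) for a in members(c1) for b in members(c2)]
        return sum(pearson(vectors[a], vectors[b])
                   for a, b in pairs) / len(pairs)
    while len(clusters) > 1:
        i, j = max(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: sim(clusters[ij[0]], clusters[ij[1]]))
        merged = (clusters[i], clusters[j])
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters[0]
```

The nesting order of the resulting tuple mirrors the linkage order drawn in a diagram such as Fig. 8.10.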
Through N-cross-validation, 62 EFuNNs are evolved on 61 data examples each
and tested on the left-out example (the leave-one-out method for cross-validation).
The results are shown in Table 8.2. Subgroups of samples are associated with rule
nodes, as shown in Fig. 8.10 for the rule nodes 2, 13, and 14. Three membership
functions (MF) are used in the EFuNN models, representing Low, Medium, and
High gene expression values respectively. Figure 8.11 shows the degree to which
each input variable, a gene expression value (from the 50 genes selected) belongs
to each of the above MF for rule node 2.
Table 8.4 shows one of the extracted rules after all the examples are learned by
the EFuNN. The rules are local and each of them has its meaning for a particular
cluster of the input space. A very preliminary analysis of the rules points to one
gene that is highly expressed in colon cancer tissue (MCM3, H09351, a gene
already known to be involved in DNA replication) and several genes that
are suppressed in the cancer samples. These are: Caveolin (Z18951), a structural
membrane protein involved in the regulation of signalling pathways and also a
putative tumour suppressor; and the enzymes carbonic anhydrase I (R93176) and
II (J03037), which have already been shown in the literature to be correlated with
the aggressiveness of colorectal cancer.
Figure 8.12 visualises some of the rules extracted from an EFuNN model trained
on 62 samples from the colon cancer data in the format of profiles.

Fig. 8.10 The PCA projection of the 62 samples of colon cancer/normal tissue from the 50D gene expression space,
and the similarity matrix genes/samples calculated based on the Pearson correlation coefficients (Futschik et al., 2002).



(Three panels: importance values of genes 1 to 50 for the membership functions High, Medium, and Low.)


Fig. 8.11 Distribution of fuzzy membership degrees of genes 1 to 50 for rule node 2 from Fig. 8.10 (colon
cancer data; Futschik et al., 2002).

Table 8.4 One of the extracted rules that reveal some conditions for a colon cancer against
normal tissue (Futschik et al., 2002).
Rule for colon cancer:
IF H57136 is Low (1.0) AND H09351 is High (0.92) AND T46924 is Low (0.9) AND
Z18951 is Low (0.97) AND R695523 is Low (0.98) AND J03037 is Low (0.98) AND R93176
is Low (0.97) AND H54425 is Low (0.96) AND T55741 is Low (0.99)
THEN The sample comes from a colon cancer tissue (certainty of 1.0)


How to Choose the Preprocessing Techniques and the Number of Genes for the Profiles

Preprocessing and normalisation affect the performance of the models, as illustrated in Fig. 8.13 on the two benchmark datasets used here. 100 genes are used in the
N-cross-validation procedure with the following parameter values for the EFuNN
models: E = 0.9; Rmax = 0.3; Nagg = 20.
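The preprocessing variants compared in Fig. 8.13 can be sketched with a few helpers. The exact scaling, log base, and filtering threshold used in the book are not specified here, so the choices below (per-gene min-max scaling, log2, a variance threshold) are our assumptions.

```python
import math

def scale(values):
    """Linear scaling of one gene's values to the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def log_transform(values):
    """Log2 transformation to squeeze the dynamic range of raw
    (positive) intensity values."""
    return [math.log2(v) for v in values]

def filter_low_variance(expr, threshold):
    """Drop genes whose expression varies too little across samples.
    expr: {gene: [value per sample]}."""
    def var(vals):
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / len(vals)
    return {g: v for g, v in expr.items() if var(v) > threshold}
```

Composing these steps gives the "scaled", "scaled + logged", and "scaled + logged + filtered" variants whose accuracies the figure compares.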
The number of selected genes is another parameter that affects the performance
of the classification system. Figure 8.14 shows the N-cross-validation test accuracy
of EFuNN models for both the leukaemia and the colon cancer datasets, when
the following parameter values are used: E = 0.1; Rmax = 0.3; Nagg = 20. For the



Fig. 8.12 Visualising some of the rules (profiles) extracted from an EFuNN model evolved from the colon cancer
data (Futschik et al., 2002).

leukaemia dataset the best classification result is achieved for 100 genes, whereas
for the colon cancer dataset this number is 300.


SVM and SVM Trees for Gene Expression Classification

In the area of bioinformatics, the identification of gene subsets responsible for
classifying available samples into two or more classes (such as malignant or
benign) is an important task. Most current classifiers are sensitive to the disease-marker gene selection. Here we use SVM and SVM-tree (SVMT) on different tasks
of the same problem. Whereas the SVM creates a global model and the transductive
SVM (TSVM) creates a local model for each sample, the SVMT creates a global
model but performs classification in many local subspaces, instead of in the whole
data space as typical classifiers do.
Here we use four different cancer datasets: lymphoma (Shipp et al., 2002),
leukaemia (Golub et al., 1999), colon (Alon et al., 1999), and leukaemia cell line
time-series data (Dimitrov et al., 2004). The lymphoma dataset is a collection of
gene expression measurements from 77 malignant lymphocyte samples reported
by Shipp et al. (2002). It contains 58 samples of diffuse large B-cell lymphoma
(DLBCL) and 19 samples of follicular lymphoma (FL), where the DLBCL samples are
divided into two groups: those with cured disease (n = 32) and those with fatal or
refractory disease (n = 26). The lymphoma data containing 6817 genes are available
at http://
The leukaemia data are a collection of gene expression measurements from
72 leukaemia samples (62 bone marrow and 10 peripheral blood samples)
reported by Golub et al. (1999). They contain an initial training set composed



(Bar charts of accuracy (%) for the leukaemia and colon cancer data under four preprocessing variants: raw data; scaled data; scaled + logged data; scaled + logged + filtered data.)

Fig. 8.13 Preprocessing affects the performance of the modelling EFuNN system (Futschik et al., 2002).

of 27 samples of acute lymphoblastic leukaemia (ALL) and 11 samples of acute
myeloblastic leukaemia (AML), and an independent test set composed of 20 ALL
and 14 AML samples. The gene expression measurements were taken from high-density
oligonucleotide microarrays containing 7129 probes for 6817 human genes.
These datasets are available at http://www.genome
The second leukaemia dataset is a collection of gene expression observations of
two cell lines of U937: MINUS, a cancer cell line that is positively affected by retinoic
acid and becomes a normal cell after a time interval of 48 hours, and PLUS, a cell
line that is cancerous and not affected by the drug (Dimitrov et al., 2004). Each
of the two time series contains the expression values of 12,000 genes at four time
points: CTRL, 6 hours, 24 hours, and 48 hours. We can view this problem also as
a classification problem where we have four variables (the time points) and 24,000
examples (the gene expression of a gene over the four time points) classified into
two classes, MINUS and PLUS.






Fig. 8.14 Dependence of the accuracy of N-cross-validation testing on the number of genes in the EFuNN
model for both the leukaemia data and the colon cancer data (Futschik et al., 2002).

The colon dataset is a collection of 62 expression measurements from colon
biopsy samples reported by Alon et al. (1999). It contains 22 normal and 40
colon cancer samples. The colon data, having 2000 genes, are available at http://
On the above gene expression cancer datasets, we applied the following procedure.
Step 1. Define target classes.
Step 2. Identify a gene subset (variable selection). We employed the multiobjective GA (NSGA-II), where three objective functions are used. The first
objective is to minimize the size of the gene subset in the classifier. The
second objective is to minimize the number of mismatches in the training data
samples calculated using the leave-one-out cross-validation procedure. The third
objective is to minimize the number of mismatches in the test samples.
Step 3. Filter and normalise data. We eliminate genes without much variation
in the expression values for the two classes, to ensure a differentiation of the
classes. We normalise data by evaluating the difference of the maximum and
minimum gene expression values for every gene, and by measuring its standard
deviation.
and test classifiers in a cross-validation mode (leave-one-out) by removing one
sample and then using the rest as a training set. Several models are built using
different numbers of marker genes and the final chosen model is the one that
minimizes the total cross-validation error.
Step 5. Evaluate results. We evaluate prediction results and compute confusion
matrices. For the purpose of comparison with past studies, we compare
the proposed classifier algorithm with the K-NN model and an inductive
global SVM.
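The cross-validation loop of Steps 4 and 5 can be sketched as follows. This is a minimal illustration, with a nearest-centroid classifier standing in for the EFuNN/ECF classifiers built in the study; all function names are ours.

```python
import numpy as np

def loocv_accuracy(X, y):
    """Leave-one-out cross-validation (Step 4): remove one sample, train a
    classifier on the remaining samples, and test it on the held-out one."""
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i
        Xtr, ytr = X[mask], y[mask]
        # Class centroids computed from the training fold only
        centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        # Predict the class whose centroid is nearest to the held-out sample
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += int(pred == y[i])
    return correct / n
```

Several such models, built with different numbers of marker genes, would then be compared on this score and the one with the smallest total cross-validation error retained (Step 5).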

Modelling and Knowledge Discovery in Bioinformatics


Figure 8.15 shows the created SVMT for the lymphoma dataset (Pang et al.,
2006; Pang and Kasabov, 2004). Each internal node of the tree identifies an SVM
classifier, which is represented as an ellipse with a number as its identity.
When the parent node is labeled i, its two children nodes are identified as 2i
and 2i + 1, respectively. We also represent the terminal node as a circle or a filled
circle, which denotes positive or negative class, respectively.
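The node-numbering scheme and the resulting tree walk can be sketched as follows. This is a toy illustration: the per-node decision functions below stand in for the trained SVM classifiers of the actual tree.

```python
def classify(x, node_svm, node_class, i=1):
    """Walk an SVM tree numbered so that node i has children 2i and 2i + 1.
    node_svm maps an internal node id to a decision function returning +1/-1;
    node_class maps a terminal node id to its class label."""
    while i in node_svm:
        # Route to child 2i on a -1 decision, to child 2i + 1 on a +1 decision
        i = 2 * i + (1 if node_svm[i](x) > 0 else 0)
    return node_class[i]
```

With a single internal node (labelled 1), samples are routed to terminal nodes 2 or 3 and receive the class stored there.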

Fig. 8.15 SVMT for the classification of DLBCL versus FL (first class represented as dark leaf nodes), based
on the data from Shipp et al. (2002).



Table 8.5 Results of applying SVM, TSVM, and SVMT (SVM tree), with and without marker gene selection, on the four gene expression classification problems: DLBCL vs. FL (6432 genes and 30 marker genes; 77 samples); cured vs. fatal (6432 genes and 13 marker genes; 58 samples); ALL vs. AML (7219, 3859, and 27 genes; 72 samples); leukaemia cell line MINUS vs. PLUS (4 variables; 24,000 samples); and colon Normal vs. Cancer (2000 genes and 12 marker genes; 62 samples).

From the results in Table 8.5 we can compare inductive SVM, transductive SVM
(TSVM), and the SVM tree (SVMT) on the case study datasets above. The TSVM
performs at least as well as the inductive SVM on a small or a medium variable set
(several genes or several hundred genes). A TSVM model can be generated on a
smaller number of variables (genes), evaluated on a small dataset selected from
the local problem space of a particular new sample (e.g. a new patient's record).
The TSVM thus allows for individual model generation and is therefore promising
as a technique for personalised medicine.
The SVMT performs best on a large variable space (e.g. thousands of genes,
sometimes with little or no preprocessing and no pregene selection). This feature
of the SVMT allows for a microarray data collection from a tissue sample and an
immediate analysis without the analysis being biased by gene preselection.
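The transductive scheme described above can be sketched as follows: for each new sample, a small training subset is drawn from its local problem space and a model is built on that subset only. This is a minimal sketch, with a nearest-centroid rule standing in for the local SVM.

```python
import numpy as np

def transductive_predict(x_new, X, y, k=10):
    """Select the k training samples nearest to x_new and build a local
    model on that subset only, instead of one global model for all data."""
    d = np.linalg.norm(X - x_new, axis=1)
    idx = np.argsort(d)[:k]                 # local problem space of x_new
    Xl, yl = X[idx], y[idx]
    centroids = {c: Xl[yl == c].mean(axis=0) for c in np.unique(yl)}
    return min(centroids, key=lambda c: np.linalg.norm(x_new - centroids[c]))
```

A new local model is created for every new sample, which is what makes the approach suitable for personalised modelling.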


How to Choose the Model for Gene Profiling and Classification Tasks: Global, Local, or Personalised Models

The gene profiling task may require that the model meets the following requirements.
1. The model can be continuously trained on new data.
2. The model is knowledge-based: knowledge in the form of profiles can be extracted from it.
3. The model gives an evaluation of the validity of the profiles.
The two main reasoning approaches, inductive and transductive, are used here to
develop global, local, and personalised models on the same data, in order to compare
the different approaches on two main criteria: the accuracy of the model and the type
of patterns discovered from the data. The following classification techniques are used:
multiple linear regression (MLR), SVM, ECF, WKNN, and WWKNN (see Chapters 1 to 3).
Each of the models is validated through the same leave-one-out cross-validation
method (Vapnik, 1998). The accuracy of the different models is presented in
Table 8.6.

Table 8.6 Experimental results in terms of model accuracy, tested through the leave-one-out cross-validation method, when using different modelling techniques on the DLBCL lymphoma data for classification of new samples into class 1, survival, or class 2, fatal outcome of the disease within five years' time (Shipp et al., 2002). The table shows the overall model classification accuracy in % and the specificity and sensitivity values (accuracy for class 1 and class 2, respectively) in brackets. The models use as input features the clinical variable IPI alone, 11 genes, or IPI plus the 11 genes; the WKNN and WWKNN models use k = 26 and class output thresholds Pthr = 0.5 and Pthr = 0.45.

It can be seen that the transductive reasoning and personalised modelling
is sensitive to the selection of the number of the nearest neighbours K. Its
optimization is discussed in the next section.
The transductive, personalised WWKNN produces a balanced accuracy of 80 and
81% for each of the two classes (balanced sensitivity and specificity values), along
with an individual ranking of the importance of the variables for each individual
sample. Having this knowledge, a personalised treatment can be attempted that
targets the important genes and clinical variables for each patient.
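A distance-weighted K-nearest-neighbour output of the kind used by WKNN can be sketched as follows. This is a simplified illustration with a linear weighting of our choosing; in WWKNN the variables are additionally weighted by their importance for the new sample, which is not shown here.

```python
import numpy as np

def wknn(x_new, X, y, k=26):
    """The k nearest samples vote for the output, each vote weighted by its
    closeness to the new sample. y holds class labels 0/1; the result is a
    class-1 score in [0, 1] that can be thresholded with Pthr."""
    d = np.linalg.norm(X - x_new, axis=1)
    idx = np.argsort(d)[:k]
    dk = d[idx]
    w = (dk.max() - dk) + 1e-12   # linear weighting: nearest = heaviest
    return float(np.sum(w * y[idx]) / np.sum(w))
```

Thresholding the returned score with Pthr = 0.5 (or 0.45, as in Table 8.6) yields the class decision.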

Fig. 8.16 Cluster-based, local patterns (rules) extracted from a trained ECF model from Chapter 3 (inductive,
local training) on 11 gene expression variables and clinical data of the lymphoma outcome prediction problem (from
Shipp et al., 2002). The first variable (first column) is the clinical variable IPI. The accuracy of the model
measured through the leave-one-out cross-validation method is 88% (83% class one and 92% class two). The
figure shows: (a) 15 local profiles of class 1 (survive), threshold 0.3; (b) 9 local profiles of class 2 (fatal outcome),
threshold 0.3; (c) global class profiles (rules), derived by averaging the variable values (genes or IPI)
across all local class profiles from panels (a) and (b) and ignoring low values (below a threshold, e.g. 0.1 as an absolute
value). Global profiles for class 1 and class 2 may not be very informative, as they may not manifest any variable
that is significantly highly expressed in all clusters of either of the two classes if the different class samples are
equally scattered in the whole problem space.



The best accuracy is manifested by the local ECF model, trained on a combined
feature vector of 11 gene expression variables and the clinical variable IPI. Its
prognostic accuracy is 88% (83% for class 1, cured, and 92% for class 2, fatal).
This compares favourably with the 75% accuracy of the SVM model used in Shipp
et al. (2002).
In addition, local rules that represent cluster gene profiles of the survival versus
the fatal group of patients were extracted as shown graphically in Fig. 8.16. These
profiles show that there is no single variable that clearly discriminates the two
classes; it is a combination of the variables that discriminates different subgroups
of samples within a class and between classes.
The local profiles can be aggregated into global class profiles through averaging
the variable values across all local profiles that represent one class; see Fig. 8.16c.
Global profiles may not be very informative if data samples are dispersed in the
problem space and each class of samples is spread out in the space, but they show
the big picture, the common trends across the population of samples.
As each of the global, local, and personalised profiles contains a different level
of information, integrating them through the integration of global, local, and
personalised models would facilitate a better understanding and better accuracy
of the prognosis (chapter 7).
When a GA is used to optimise the feature set and the ECF model parameters,
a significant improvement of the accuracy is achieved with a smaller number of
input variables (features); a GA-optimised ECF model and feature set on the
DLBCL lymphoma data are shown in Chapter 6, Fig. 6.10. Twenty individual
models are used in a population and run for 20 generations with model test
accuracy as the fitness function, where the cross-validation method is fivefold
cross-validation performed on every model within a population, with 70% of
randomly selected data used for training and 30% for testing. The same data are used
to test all models in a population. The best performing models are used to create a
new generation of 20 individual models, etc. The accuracy of the optimal model is
now 90.66%, which is higher than the best model from Table 8.6 (no optimization
is used there). The best model does not use features 5, 8, and 12 (genes 4, 7, and 11).
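The GA optimisation loop described above can be sketched as follows. This is a deliberately small illustration: a binary mask selects features, a nearest-centroid classifier on a fixed 70/30 split stands in for the ECF model and its fivefold cross-validation, and selection simply keeps the best half of each generation.

```python
import random
import numpy as np

def fitness(mask, X, y):
    """Fitness of a feature mask: accuracy of a nearest-centroid classifier
    on a 70/30 split using only the selected features (a stand-in for the
    ECF-model test accuracy used in the text)."""
    feats = [i for i, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    Xs = X[:, feats]
    split = int(0.7 * len(y))
    Xtr, ytr, Xte, yte = Xs[:split], y[:split], Xs[split:], y[split:]
    cents = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
    preds = [min(cents, key=lambda c: np.linalg.norm(xi - cents[c])) for xi in Xte]
    return float(np.mean(np.array(preds) == yte))

def ga_select(X, y, pop_size=20, generations=20, seed=0):
    """GA loop: keep the best half of each generation and refill with
    mutated copies, as in the population-of-20, 20-generation setup."""
    rng = random.Random(seed)
    nf = X.shape[1]
    pop = [[rng.randint(0, 1) for _ in range(nf)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, X, y), reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        for m in elite:
            child = m[:]
            j = rng.randrange(nf)      # flip one random feature bit
            child[j] = 1 - child[j]
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda m: fitness(m, X, y))
```

On data where only some features are informative, the surviving masks concentrate on those features, mirroring the way the optimised ECF model above discards features 5, 8, and 12.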


Clustering of Time-Course Gene Expression Data

Problem Definition

Each gene in a cell may express differently over time, which makes gene
expression analysis based on static (one-shot) data not very reliable.
Measuring the expression rate of each gene over time gives the gene a temporal
profile of its expression level. Genes can be grouped together according to their
similarity of temporal expression profiles.
This is illustrated here with case study data. For a demonstration of the applicability of our method, we used yeast gene expression data that are available as a
public database. We analysed the gene expression during the mitotic cell cycle of
different synchronised cultures as reported by Cho et al. (1998) and by Spellman
et al. (1998). The datasets consisted of expression profiles for over 6100 ORFs.



In this study we did not reduce the original dataset by applying a filter in the
form of a minimum variance. This led to a higher number of clusters of weakly
cell-regulated genes; however, it diminished the possibility of missing co-regulated
genes during the clustering process.
For the search for upstream regulatory sequences we used Hughes' compiled
set of upstream regions for the open reading frames (ORFs) in yeast (Church lab:
One of the main purposes of cluster analysis of time-course gene expression
data is to infer the function of novel genes by grouping them with genes of well-known functionality. This is based on the observation that genes which show
similar activity patterns over time (co-expressed genes) are often functionally
related and are controlled by the same mechanisms of regulation (co-regulated
genes). The gene clusters generated by cluster analysis often relate to certain
functions, e.g. DNA replication, or protein synthesis. If a novel gene of unknown
function falls into such a cluster, it is likely that this gene serves the same function
as the other members of this cluster. This guilt-by-association method makes it
possible to assign functions to a large number of novel genes by finding groups
of co-expressed genes across a microarray experiment (Derisi et al., 1997).
Different clustering algorithms have been applied to the analysis of time-course
gene expression data: k-means, SOM, and hierarchical clustering, to name just
a few (Derisi et al., 1997). They all assign genes to clusters based on the similarity of
their activity patterns. Genes with similar activity patterns should be grouped
together, whereas genes with different activation patterns should be placed in
distinct clusters. The cluster methods used thus far have been restricted to a one-to-one mapping: one gene belongs to exactly one cluster. Although this principle
seems reasonable in many fields of cluster analysis, it might be too limited for the
study of microarray time-course gene expression data. Genes can participate in
different genetic networks and are frequently coordinated by a variety of regulatory
mechanisms. For the analysis of microarray data, we may therefore expect that
single genes can belong to several clusters.


Fuzzy Clustering of Time Course Gene Expression Data

Several researchers have noted that genes were frequently highly correlated with
multiple classes and that the definition of clear borders between gene expression
clusters often seemed arbitrary (Chu et al., 1998). This is a strong motivation to
use fuzzy clustering in order to assign single objects to several clusters.
A second reason for applying fuzzy clustering is the large noise component in
microarray data due to biological and experimental factors. The activity of genes
can show large variations under minor changes of the experimental conditions.
Numerous steps in the experimental procedure contribute to additional noise
and bias. A usual procedure to reduce the noise in microarray data is setting
a threshold for a minimum variance of the abundance of a gene. Genes below
this threshold are excluded from further analysis. However, the exact value of the
threshold remains arbitrary due to the lack of an established error model and the
use of filtering as preprocessing.

Modelling and Knowledge Discovery in Bioinformatics


Because we usually have little information about the data structure in advance,
a crucial step in cluster analysis is the selection of the number of clusters. Finding
the correct number of clusters leads to the issue of cluster validity. This has
turned out to be a rather difficult problem, as it depends on the definition of
a cluster. Without prior information, a common method is the comparison of
partitions resulting from different numbers of clusters. For assessing the validity
of the partitions, several cluster validity functionals have been introduced (Pal and
Bezdek, 1995). These functionals should reach an optimum if the correct number
of clusters is chosen. When using evolving clustering techniques the number of
the clusters does not need to be defined a priori.
Two fuzzy clustering techniques were applied: the batch mode fuzzy C-means
clustering (FCM) and an evolving clustering through evolving self-organised maps
(ESOM; see Chapter 2).
In the FCM clustering experiment (for more details see Futschik and Kasabov
(2002)), the fuzzification parameter m (Pal and Bezdek, 1995) turned out to be
an important parameter for the cluster analysis. For the randomised dataset,
FCM clustering formed clusters only if m was chosen smaller than 1.15. Higher
values of m led to uniform membership values in the partition matrix. This can
be regarded as an advantage of FCM over hard clustering, which always forms
clusters independently of the existence of any structure in the data. An appropriate
choice for a lower threshold for m can therefore be set if no cluster artefacts
are formed in randomised data. An upper threshold for m is reached if FCM
does not indicate any cluster in the original data. This threshold depends mainly
on the compactness of the clusters. The cluster analysis with FCM showed that
hyperspherical distributions are more stable for increasing m than hyperellipsoid
distributions. This may be expected, because FCM clustering with the Euclidean
norm favours spherical clusters.

Fig. 8.17 Using evolving self-organised maps (ESOM; see Chapter 2) to cluster temporal profiles of yeast gene
expression data (Futschik et al., 1999).
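The role of the fuzzifier m can be illustrated with a minimal fuzzy C-means sketch. This is our own simplified implementation, not the code used in the cited study; initialisation and iteration count are illustrative choices.

```python
import numpy as np

def fcm(X, c=2, m=1.15, iters=100, seed=0):
    """Fuzzy C-means sketch. The fuzzifier m controls how soft the
    memberships are: values of m close to 1 approach hard clustering,
    while larger values drive the memberships toward uniformity."""
    rng = np.random.default_rng(seed)
    n = len(X)
    V = X[rng.choice(n, size=c, replace=False)].astype(float)  # initial centres
    U = np.full((c, n), 1.0 / c)
    for _ in range(iters):
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-9
        U = 1.0 / (D ** (2.0 / (m - 1.0)))   # standard FCM membership update
        U /= U.sum(axis=0)                   # memberships sum to 1 per sample
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)
    return U, V
```

Each gene receives a membership value for every cluster, so a single gene can belong, to different degrees, to several clusters at once.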
In another experiment, an evolving self-organising map ESOM was evolved from
the yeast gene temporal profiles used as input vectors. The number of clusters did
not need to be specified in advance (Fig. 8.17). It can be seen from Fig. 8.17 that
clusters 72 and 70 are represented on the ESOM as neighbouring nodes. The ESOM
in the figure is plotted as a 2D PCA projection. Cluster 72 has 43 members (genes
that have similar temporal profiles), cluster 70 has 61 members, and cluster 5 has
only 3 genes as cluster members.
New cluster vectors will be created in an online mode if the distance between
existing clusters and the new data vectors is above a chosen threshold.
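This online principle can be sketched as follows. It is a minimal evolving-clustering illustration in the spirit of ESOM; the fixed threshold and the running-mean update are our simplifications.

```python
import numpy as np

def evolving_cluster(stream, threshold=1.0):
    """Online clustering sketch: a new cluster centre is created whenever
    an incoming vector is farther than `threshold` from every existing
    centre; otherwise the nearest centre drifts toward the vector."""
    centres, counts = [], []
    for x in stream:
        if centres:
            d = [np.linalg.norm(x - c) for c in centres]
            j = int(np.argmin(d))
            if d[j] <= threshold:
                counts[j] += 1
                centres[j] += (x - centres[j]) / counts[j]   # running mean
                continue
        centres.append(np.array(x, dtype=float))
        counts.append(1)
    return centres
```

Because centres are created on demand, the number of clusters is an outcome of the data stream rather than a parameter fixed in advance.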


Protein Structure Prediction

Problem Definition

Proteins provide the majority of the structural and functional components of a

cell. The area of molecular biology that deals with all aspects of proteins is called
proteomics. Thus far about 30,000 proteins have been identified and labelled, but this
is considered to be a small part of the total set of proteins that keep our cells alive.
The mRNA is translated by ribosomes into proteins. A protein is a sequence of
amino acids, each of them encoded by a group of three nucleotides (a codon). There
are 20 amino acids altogether, denoted by letters (A, C-H, I, K-N, P-T, V, W, Y). The
codons of each of the amino acids are given in Table 8.7, so that the first column
represents the first base in the triplet, the top row represents the second base, and
the last column represents the last base.
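The codon-to-amino-acid mapping of Table 8.7 can be used directly as a lookup table. The sketch below includes only a few entries of the standard genetic code, enough to translate a short reading frame.

```python
# A few entries of the standard genetic code; the full table maps all 64
# codons onto the 20 amino acids plus the stop signals.
CODONS = {
    "ATG": "M",              # methionine, also the initiation codon
    "TTT": "F", "TTC": "F",  # phenylalanine
    "AAA": "K", "AAG": "K",  # lysine
    "GGT": "G", "GGC": "G",  # glycine
    "TAA": "*", "TAG": "*", "TGA": "*",  # stop codons
}

def translate(dna):
    """Translate a DNA reading frame codon by codon, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODONS[dna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)
```

For example, the frame ATG TTT AAA TAA translates to the peptide MFK, with TAA acting as the stop codon.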
The length of a protein, in number of amino acids, ranges from tens to several
thousand. Each protein is characterized by several properties, for example
(Brown et al., 1999):

Molecular weight

An initiation codon defines the start position of a gene in a mRNA where the
translation of the mRNA into protein begins. A stop codon defines the end position.
Proteins with a high similarity are called homologous. Homologues that have
identical functions are called orthologues. Similar proteins that have different
functions are called paralogues.
Proteins have complex structures that include:
Primary structure (a linear sequence of the amino acids): See, for example Fig. 8.18.

Modelling and Knowledge Discovery in Bioinformatics


Table 8.7 The codons of each of the 20 amino acids. The first column represents the first base in the triplet,
the first row represents the second base, and the last column, the last base (Hofstadter, 1979).







Secondary structure (3D, defining functionality): An example of a 3D representation of a protein is given in Fig. 8.19.
Tertiary structure (high-level folding and energy minimisation packing of the
protein): Figure 8.20 shows an example of hexokinase (6000 atoms, 48 kD, 457
amino acids). Polypeptides with a tertiary level of structure are usually referred
to as globular proteins, because their shape is irregular and globular in form.
Quaternary structure (interaction between two or more protein molecules)
One task that has been explored in the literature is predicting the secondary
structure from the primary one. Segments of a protein can have different shapes
in their secondary structure, which is defined by many factors, one of them being
the amino acid sequence itself. The main types of shape are: helix (α-helix), sheet (β-sheet), and coil (loop).

Fig. 8.18 A primary structure of a protein, a linear sequence of the amino acids.

Fig. 8.19 An example of a secondary structure (3D, defining functionality) of a protein obtained with the use
of the PDB dataset, maintained by the National Center for Biotechnology Information (NCBI) of the National
Institutes of Health (NIH) in the United States.
Qian and Sejnowski (1988) investigated the use of MLP for the task of predicting
the secondary structure based on available labelled data, also used in the following
experiment. An EFuNN is trained on the data from Qian and Sejnowski (1988) to
predict the shape of an arbitrary new protein segment. A window of 13 amino
acids is used. Altogether, there are 273 inputs and 3 outputs, and 18,000
examples are used for training. The block diagram of the EFuNN model is given
in Fig. 8.21.
The explored EFuNN-based model makes it possible to add new labelled protein
data as they become available with time.
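The 273-input encoding (13 window positions × 21 symbols: 20 amino acids plus a spacer for positions beyond the ends of the chain) can be sketched as follows. The orthogonal (one-hot) encoding is the standard choice for this task, though the exact scheme of the cited experiments may differ in detail.

```python
# Orthogonal encoding of a 13-residue window: 13 positions x 21 symbols
# gives the 273 inputs mentioned in the text.
ALPHABET = "ACDEFGHIKLMNPQRSTVWY-"   # 20 amino acids plus '-' as the spacer

def encode_window(sequence, centre, width=13):
    half = width // 2
    bits = []
    for pos in range(centre - half, centre + half + 1):
        # Positions falling off either end of the chain use the spacer symbol
        symbol = sequence[pos] if 0 <= pos < len(sequence) else "-"
        one_hot = [0] * len(ALPHABET)
        one_hot[ALPHABET.index(symbol)] = 1
        bits.extend(one_hot)
    return bits
```

Each window produces one 273-bit input vector, and the network's three outputs correspond to the helix, sheet, and coil classes.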





Fig. 8.20 (a) An example of a tertiary structure of a protein (high-level folding and energy minimisation
packing); (b) The hexokinase protein (6000 atoms, 48 kD, 457 amino acids; from the PDB database).




Fig. 8.21 Evolving system for protein secondary structure prediction.


Gene Regulatory Networks and the System Biology Approach

The System Biology Approach

The aim of computational system biology is to understand complex biological

objects in their entirety, i.e. at a system level. It involves the integration of different


Evolving Connectionist Systems

approaches and tools: computer modeling, large-scale data analysis, and biological
experimentation. One of the major challenges of system biology is the identification
of the logic and dynamics of gene-regulatory and biochemical networks. The
most feasible application of system biology is to create a detailed model of a cell
regulation to provide system-level insights into mechanism-based drug discovery.
System-level understanding is a recurrent theme in biology and has a long history.
The term system-level understanding denotes a shift of focus towards understanding
a system's structure and dynamics as a whole, rather than the particular objects
and their interactions. System-level understanding of a biological system can be
derived from insight into four key properties (Dimitrov et al., 2004; Kasabov et al., 2005c):
1. System structures. These include the gene regulatory network (GRN) and
biochemical pathways. They can also include the mechanisms of modulation of the
physical properties of intracellular and multicellular structures by interactions.
2. System dynamics. System behavior over time under various conditions can be
understood by identifying essential mechanisms underlying specific behaviours
and through various approaches depending on the system's nature: metabolic
analysis (finding a basis of elementary flux modes that describe the dominant
reaction pathways within the network), sensitivity analysis (the study of how
the variation in the output of a model can be apportioned, qualitatively or
quantitatively, to different sources of variation), dynamic analysis methods such
as phase portrait (geometry of the trajectories of the system in state space),
and bifurcation analysis, which traces time-varying change(s) in
the state of the system in a multidimensional space where each dimension
represents a particular system parameter (the concentration of the biochemical
factor involved, the rate of reactions/interactions, etc.). As parameters vary, changes
may occur in the qualitative structure of the solutions for certain parameter
values. These changes are called bifurcations, and the parameter values are
called bifurcation values.
3. The control method. Mechanisms that systematically control the state of the cell
can be modulated to change system behaviour and to optimise potential therapeutic
targets of the treatment.
4. The design method. Strategies to modify and construct biological systems having
desired properties can be devised based on definite design principles and
simulations, instead of blind trial and error.
As mentioned above, in reality, analysis of system dynamics and understanding
the system structure are overlapping processes. In some cases analysis of the
system dynamics can give useful predictions in system structure (new interactions, additional member of system). Different methods can be used to study the
dynamical properties of the system:
Analysis of steady states allows finding the system states when there are no
dynamical changes in system components.
Stability and sensitivity analyses provide insights into how system behaviour
changes when stimuli and rate constants are modified to reflect dynamic behaviour.
Bifurcation analysis, in which a dynamic simulator is coupled with analysis
tools, can provide a detailed illustration of dynamic behaviour.

Modelling and Knowledge Discovery in Bioinformatics


The choice of the analytical methods depends on availability of the data that can
be incorporated in the model and the nature of the model. It is important to know
the main properties of the complex system under investigation, such as robustness.
Robustness is a central issue in all complex systems and is essential
for understanding the functioning of a biological object at the system level. Robust
systems exhibit the following phenomenological properties.
Adaptation, which denotes the ability to cope with environmental changes
Parameter insensitivity, which indicates a system's relative insensitivity (to a
certain extent) to specific kinetic parameters
Graceful degradation, which reflects the characteristic slow degradation of a
system's functions after damage, rather than catastrophic failure
Revealing all these characteristics of a complex living system helps in choosing
an appropriate method for their modelling, and also constitutes an inspiration for
the development of new CI methods that possess these features.
Modelling living cells in silico has many implications; one of them is testing new
drugs through simulation rather than on patients. According to recent statistics
(Zacks, 2001), human trials fail for 70 to 75% of the drugs that enter them.
Tomita (2001) stated in his paper: "The cell is never conquered until its total
behaviour is understood, and the total behaviour of the cell is never understood
until it is modelled and simulated."
Computer modelling of processes in living cells is an extremely difficult task for
several reasons; among them are that the processes in a cell are dynamic and depend
on many variables, some of them related to a changing environment, and that the
processes of DNA transcription and protein translation are not fully understood.
Several cell models have been created and experimented with, among them (Bower
and Bolouri, 2001):
The virtual cell model
The e-cell model and the self-survival model (Tomita et al., 2001)
A mathematical model of a cell cycle
A starting point to dynamic modelling of a cell would be dynamic modelling of
a single gene regulation process. In Gibson and Mjolsness (2001) the following
methods for single-gene regulation modelling are discussed, that take into account
different aspects of the processes (chemical reactions, physical chemistry, kinetic
changes of states, and thermodynamics):

Boolean models, based on Boolean logic (true/false logic)

Differential equation models
Stochastic models
Hybrid Boolean/differential equation models
Hybrid differential equations/stochastic models
Neural network models
Hybrid connectionist-statistical models
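As an illustration of the differential-equation class of models, a single gene under the control of one regulator can be sketched as follows. The rate constants and the saturating activation term are illustrative choices of ours, not values from any of the cited models.

```python
# A minimal differential-equation model of single-gene regulation:
# dx/dt = synthesis * activation(u) - degradation * x, integrated with
# simple Euler steps.
def simulate_gene(u, synthesis=1.0, degradation=0.5, steps=1000, dt=0.01):
    x = 0.0                              # initial expression level
    for _ in range(steps):
        activation = u / (1.0 + u)       # saturating response to regulator u
        x += dt * (synthesis * activation - degradation * x)
    return x
```

The expression level settles at the steady state synthesis × activation / degradation, so with u = 1 it approaches 1.0.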

The next step in dynamic cell modelling would be to try to model the regulation of
more genes, it is hoped a large set of genes (see Somogyi et al. (2001)). Patterns of
collective regulation of genes, such as chaotic attractors, are observed in the above
reference. Mutual information/entropy of clusters of genes can be evaluated.

Fig. 8.22 A general, hypothetical evolving model of a cell: the system biology approach.
A general, hypothetical evolving model of a cell is outlined in Fig. 8.22 that
encompasses the system biology approach. It is based on the following principles.
1. The model incorporates all the initial information such as analytical formulas,
databases, and rules of behaviour.
2. In a dynamic way, the model adjusts and adapts over time during its operation.
3. The model makes use of all current information and knowledge at different
stages of its operation (e.g., transcription, translation).
4. The model takes as inputs data from a living cell and models its development
over time. New data from the living cell are supplied if such are available over time.
5. The model runs until it is stopped, or the cell has died.


Gene Regulatory Network Modelling

Modelling processes in a cell includes finding the genetic networks (the network
of interactions and connections between genes, each connection defining whether one gene
causes another to become active or to be suppressed). The reverse-engineering
approach is used for this task (D'haeseleer et al., 2000). It consists
of the following. Gene expression data are taken from a cell (or a cell line) at
consecutive time moments. Based on these data, a logical gene network is derived.

Modelling and Knowledge Discovery in Bioinformatics


For example, it is known that clustering of genes with similar expression patterns
will suggest that these genes are involved in the same regulatory processes.
Modelling gene regulatory networks (GRN) is the task of creating a dynamic
interaction network between genes that defines the next time expression of genes
based on their previous levels of expression. A simple GRN of four genes is shown
in Fig. 8.23. Each node from Fig. 8.23 represents either a single gene/protein or a
cluster of genes that have a similar expression over time, as illustrated in Fig. 8.24.
Models of GRN, derived from gene expression RNA data, have been developed
with the use of different mathematical and computational methods, such as statistical
correlation techniques; evolutionary computation; ANN; differential equations, both
ordinary and partial; Boolean models; kinetic models; state-based models and others.
In Kasabov et al. (2004) a simple GRN model of five genes is derived from time
course gene expression data of leukaemia cell lines U937 treated with retinoic acid
with two phenotype states: positive and negative. The model, derived from time
course data, can be used to predict future activity of genes as shown in Fig. 8.25.
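A GRN of this kind can be sketched as a weighted interaction matrix in which the expression vector at the next time point is a function of the current one, positive weights standing for excitatory links and negative weights for inhibitory ones. The two-gene network below (mutual inhibition with self-excitation) is an illustrative toy, not a network derived in the text.

```python
import numpy as np

W = np.array([[0.9, -0.5],    # gene 0: self-excitation, inhibited by gene 1
              [-0.5, 0.9]])   # gene 1: self-excitation, inhibited by gene 0

def next_state(g):
    """One step of the network dynamics, squashed into [0, 1]."""
    return 1.0 / (1.0 + np.exp(-4.0 * (W @ g - 0.2)))

def trajectory(g0, steps=20):
    states = [np.asarray(g0, dtype=float)]
    for _ in range(steps):
        states.append(next_state(states[-1]))
    return states
```

Starting from a state where gene 0 is highly expressed and gene 1 is not, the mutual inhibition drives the network to a stable attractor with gene 0 on and gene 1 off, the kind of future-activity prediction illustrated in Fig. 8.25.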







Fig. 8.23 A simplified gene regulatory network, where each node represents a gene/protein (or a group of
them) and the arcs represent the connections between them, either excitatory (+) or inhibitory (−).

Fig. 8.24 A cluster of genes that are similarly expressed over time (17 hours).










Fig. 8.25 The time course data of the expression of four genes (#33, 8, 27, 21) from the cell line used in
Kasabov et al. (2005). The first four points are used for training, and the rest are the expression values of the
genes predicted by the model for future times.

Another example of GRN extraction from data is presented in Chan et al. (2006b)
where the human response to fibroblast serum data is used (Fig. 8.26) and a GRN
is extracted from it (Fig. 8.27).
Despite the variety of different methods used thus far for modelling GRN and for
system biology in general, there is no single method that will suit all requirements
to model a complex biological system, especially to meet the requirements for
adaptation, robustness, and information integration.



Fig. 8.26 The time course data of the expression of genes in the human fibroblast response to serum benchmark
data (Chan et al., 2006b).



Fig. 8.27 A GRN obtained with the use of the method from Chan et al. (2006b) on the data from Fig. 8.26,
where ten clusters of gene expression values over time are derived, each cluster represented as a node in the GRN.


Evolving Connectionist Systems for GRN Modelling

Case Study
Here we used the same data of the U937 cell line treated with retinoic acid. The
results are taken from Kasabov and Dimitrov (2004). Retinoic acid and other
reagents can induce differentiation of cancer cells leading to gradual loss of
proliferation activity and in many cases death by apoptosis. Elucidation of the
mechanisms of these processes may have important implications not only for our
understanding of the fundamental mechanisms of cell differentiation but also for
treatment of cancer. We studied differentiation of two subclones of the leukemic
cell line U937 induced by retinoic acid. These subclones exhibited highly differential expression of a number of genes including c-Myc, Id1, and Id2 that were
correlated with their telomerase activity; the PLUS clones had about 100-fold
higher telomerase activity than the MINUS clones. It appears that the MINUS
clones are in a more differentiated state. The two subclones were treated with
retinoic acid and samples were taken before treatment (time 0) and then at 6 h, 1,
2, 4, 7, and 9 days for the plus clones and until day 2 for the minus clones because
of their apoptotic death. The gene expression in these samples was measured by
Affymetrix gene chips that contain probes for 12,600 genes. To specifically address
the question of telomerase regulation we selected a subset of those genes that were
implicated in the telomerase regulation and used ECOS for their analysis.
The task is to find the gene regulatory network G = {g1, g2, g3, grest−, grest+} of
three genes, g1 = c-Myc, g2 = Id1, and g3 = Id2, while taking into account the
integrated influence of the rest of the changing genes over time, denoted grest− and
grest+, which represent, respectively, the integrated group of genes whose expression
level decreases over time (negative correlation with time) and the group of genes
whose expression level increases over time (positive correlation with time).
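The grouping just described can be sketched in code. This is an illustrative reconstruction, not the procedure from Kasabov and Dimitrov (2004): the function name and the toy data are assumptions. Genes are assigned to the grest− or grest+ group by the sign of their correlation with time, and each group is averaged into a single variable.

```python
import numpy as np

def aggregate_rest_genes(expr, times):
    """expr: (n_genes, n_timepoints) expression matrix; times: (n_timepoints,).
    Returns the (g_rest_minus, g_rest_plus) aggregated time courses."""
    minus, plus = [], []
    for row in expr:
        r = np.corrcoef(row, times)[0, 1]  # correlation of the gene with time
        (minus if r < 0 else plus).append(row)
    g_minus = np.mean(minus, axis=0) if minus else np.zeros(len(times))
    g_plus = np.mean(plus, axis=0) if plus else np.zeros(len(times))
    return g_minus, g_plus

# Made-up sampling times (hours) and two toy gene time courses:
times = np.array([0.0, 6.0, 24.0, 48.0])
expr = np.array([[5.0, 4.0, 3.0, 2.0],   # a decreasing gene -> grest-
                 [1.0, 2.0, 3.0, 4.0]])  # an increasing gene -> grest+
g_minus, g_plus = aggregate_rest_genes(expr, times)
```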


Evolving Connectionist Systems



Fig. 8.28 (a) The gene regulatory network extracted from a trained EFuNN on time course gene expression
data of genes related to telomerase of the PLUS leukemic cell line U937 can be used to derive a state transition
graph for any initial state (gene expression values of the five genes used in the model). The transition graph is
shown in a 2D space of the expression values of only two genes (c-Myc and Id1); (b) the same as in (a) but
here applied on the MINUS cell line data.

Groups of genes grest− and grest+ were formed for each experiment of the PLUS and
MINUS cell lines, forming altogether four groups of genes. For each group of
genes, the average gene expression level of all genes at each time moment was
calculated to form a single aggregated variable.
Two EFuNN models, one for the PLUS cell line and one for the MINUS cell line, were
trained on five input vectors, the expression levels of the genes G(t) at time
moment t, and five output vectors, the expression levels G(t + 1) of the same genes
recorded at the next time moment. Rules were extracted from the trained structure
that describe the transitions between the gene states in the problem space. The
rules are given as a transition graph in Fig. 8.28.
Using the extracted rules that form a gene regulatory network, one can simulate
the development of the cell from an initial state G(t = 0) through time moments in
the future, thus predicting a final state of the cell.
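The simulation loop described above, iterating a trained one-step model from an initial state G(t = 0), can be sketched generically. The trained EFuNN is stood in for by any callable one-step predictor; the linear map below is purely illustrative and is not the extracted rules.

```python
import numpy as np

def simulate_grn(step_model, g0, n_steps):
    """Iterate a one-step gene-state predictor from the initial state g0.
    step_model: callable mapping the state vector G(t) to G(t+1)
    (standing in here for the trained EFuNN). Returns the trajectory."""
    trajectory = [np.asarray(g0, dtype=float)]
    for _ in range(n_steps):
        trajectory.append(np.asarray(step_model(trajectory[-1]), dtype=float))
    return np.stack(trajectory)

# Toy linear stand-in for the learned transition (two genes only):
W = np.array([[0.9, 0.1],
              [0.0, 0.8]])
traj = simulate_grn(lambda g: W @ g, g0=[1.0, 1.0], n_steps=3)
```

Each row of `traj` is the predicted gene state at successive time moments.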


Summary and Open Problems

Modelling biological processes aims at the creation of models that trace these
processes over time. The models should reveal the steps of development, the
metamorphoses that occur at different points of time, and the trajectories of the
developed patterns.
This chapter demonstrates that biological processes are dynamically evolving
and they require appropriate techniques, such as evolving connectionist systems.
In Chapter 9, GRNs of genes related to brain functions are derived through
computational neuro-genetic modelling, which is a step further in this area
(Benuskova and Kasabov, 2007).
There are many open problems and questions in bioinformatics that need to be
addressed in the future. Some of them are:
1. Finding the gene expression profiles of all possible human diseases, including
brain disease. Defining a full set of profiles of all possible diseases in silico
would allow for early diagnostic tests.
2. Finding the gene expression profiles and the GRN of complex human behaviour,
such as the instinct for information speculated in the introduction.



3. Finding genetic networks that describe the gene interactions in a particular
diseased tissue, thus suggesting genes that may be targeted for a better treatment
of this disease.
4. Linking gene expression profiles with protein data, and then, with DNA data,
for a full-circle modelling and complete understanding of the cell processes.


Further Reading

Further material related to specific sections of this chapter can be found as follows.
Computational Molecular Biology (Pevzner, 2001)
Generic Knowledge on Bioinformatics (Baldi and Brunak, 1998; Brown et al.,
2000b; Attwood and Parry-Smith, 1999; Boguski, 1998)
Artificial Intelligence and Bioinformatics (Hofstadter, 1979)
Applications of Neural Network Methods, Mainly Multilayer Perceptrons and
Self-organising Maps, in the General Area of Genome Informatics (Wu and
McLarty, 2000)
A Catalogue of Splice Junction Sequences (Mount, 1982)
Microarray Gene Technologies (Schena, 2000)
Data Mining in Biotechnology (Persidis, 2000)
Application of the Theory of Complex Systems for Dynamic Gene Modelling
(Erdi, 2007)
Computational Modelling of Genetic and Biochemical Networks (Bower and
Bolouri, 2001)
Dynamic Modelling of the Regulation of a Large Set of Genes (Somogyi et al.,
2001; Dhaeseleer et al., 2000)
Dynamic Modelling of a Single Gene Regulation Process (Gibson and Mjolsness)
Methodology for Gene Expression Profiling (Futschik et al., 2003a; Futschik and
Kasabov, 2002)
Using Fuzzy Neural Networks and Evolving Fuzzy Neural Networks in Bioinformatics (Kasabov, 2007b; Kasabov and Dimitrov, 2004)
Fuzzy Clustering for Gene Expression Analysis (Futschik and Kasabov, 2002)
Artificial Neural Filters for Pattern Recognition in Protein Sequences (Schneider
and Wrede, 1993)
Dynamic Models of the Cell (Tomita et al., 1999)

9. Dynamic Modelling of Brain Functions and Cognitive Processes

The human brain can be viewed as a dynamic, evolving information-processing
system, and the most complex one. Processing and analysis of information
recorded from brain activity, and modelling of perception, brain functions, and
cognitive processes, aim at understanding the brain and creating brainlike intelligent systems.
Brain study relies on modelling. This includes modelling of information preprocessing and feature extraction in the brain (e.g. modelling the cochlea), modelling
the emergence of elementary concepts (e.g. phonemes and words), modelling
complex representation and higher-level functions (e.g. speech and language), and
so on. Whatever function or segment of the brain is modelled, the most important
requirement is to know or to discover the evolving rules, i.e. the rules that allow
the brain to learn, to develop in a continuous way. It is demonstrated here that
the evolving connectionist systems paradigm can be applied for modelling some
brain functions and processes.
This chapter is presented in the following sections.

Evolving structures and functions in the brain and their modelling

Auditory, visual, and olfactory information processing and their modelling
Adaptive modelling of brain states based on EEG and fMRI data
Computational neuro-genetic modelling: integrating gene and brain information
into a single model
Brain-gene ontology for EIS
Summary and open problems
Further reading


Evolving Structures and Functions in the Brain and Their Modelling
The Brain as an Evolving System

One of the last great frontiers of human knowledge relates to the study of the
human brain and human cognition. Models of the cognitive processes are almost
without exception qualitative, and are very limited in their applicability. Cognitive



science would be greatly advanced by cognitive process models that are both
qualitative and quantitative, and which evolve in response to data derived from
quantitative measurements of brain activity.
The brain is an evolving system. The brain evolves initially from stem cells
(Fig. 9.1). It evolves its structure and functionality from an embryo to a sophisticated biological information processing system (Amit, 1989; Arbib, 1972, 1987,
1998, 1995, 2002; Churchland and Sejnowski, 1992; Deacon, 1988, 1998; Freeman,
2001; Grossberg, 1982; Joseph, 1998; J. G. Taylor, 1998; van Owen, 1994; Wong,
1995). As an embryo, the brain grows and develops mainly based on genetic
information. Even at the age of three months, some functional areas are already
formed. But identical embryos, with the same genetic information, can develop
in different ways to reach the state of an adult brain, and this is because of the
environment in which the brain evolves. Both the genetic information (nature)
and the environment (nurture) are crucial factors. They determine the evolving
rules for the brain. The challenge is how to reveal these rules and eventually use
them in brain models. Are they the same for every individual?
The brain evolves its functional modules for vision, for speech and language,
for music and logic, and for many cognitive tasks. There are predefined areas
of the brain that are allocated for language and visual information processing,

Fig. 9.1 The brain structure evolves from stem cells.



for example, but these areas may change during the neuronal evolving processes.
The paths of the signals travelling and the information processes in the brain
are complex and different for different types of information. Figure 9.2 shows
schematically the pathways for auditory, visual, and sensory motor information
processing in the human brain.
The cognitive processes of learning in the brain evolve throughout a lifetime.
Intelligence is always evolving. An example is the spoken language learning
process. How is this process evolving in the human brain? Can we model it in a
computer system, in an evolving system, so that the system learns several languages
at a time and adapts all the time to new accents and new dialects? In Kim et al.
(1997) it is demonstrated that an area of the brain evolves differently when two
spoken languages are learned simultaneously, compared with languages that are
learned one after another.
Evolution is achieved through both genetically defined information and learning.
The evolved neurons have a spatial-temporal representation where similar stimuli
activate close neurons. Through dynamic modelling we can trace how musical
patterns move from one part of the acoustic space to another in a harmonic and
slightly chaotic way. Several principles of the evolving structure, functions, and
cognition of the brain are listed below (for details see van Owen, 1994; Wong,
1995; Amit, 1989; Arbib, 1972, 1987, 1998, 1995, 2002; Churchland and Sejnowski,
1992; J. G. Taylor, 1998; Deacon, 1988, 1998; Freeman, 2001; Grossberg, 1982;
Joseph, 1998):
Redundancy, i.e. there are many redundant neurons allocated to a single
stimulus or a task; e.g. when a word is heard, there are hundreds of thousands
of neurons that are immediately activated.
Memory-based learning, i.e. the brain stores exemplars of facts that can be
recalled at a later stage. Some studies (see Widrow (2006)) suggest that all human
actions, including learning and physical actions, are based on memory.

Fig. 9.2 Different areas of the human brain transfer different signals (auditory, visual, somatic-sensory, and
control-action) shown as lines (Reproduced with permission from



Evolution is achieved through interaction of an individual with the environment

and with other individuals.
Inner processes take place, e.g. information consolidation through sleep.
The evolving process is continuous and lifelong.
Through the process of evolving brain structures (neurons, connections), higher-level concepts emerge; they are embodied in the structure and represent a level
of abstraction.
It seems that the most appropriate sources of data for brain modelling tasks
would come from instrumental measurements of the brain activities. To date, the
most effective means available for these types of brain measurement are electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic
resonance imaging (fMRI). Once the data from these measurement protocols have
been transformed into an appropriate state space representation, an attempt to
model the dynamic cognitive process can be made.


From Neurons to Cognitive Functions

The brain is basically composed of neurons and glial cells. Although glial cells
outnumber neurons by a factor of 10 to 50, information processing is (thus far)
attributed exclusively to the neurons. For this reason, most neural network models
do not take glial cells into account.
Neurons can be of different types according to their main functionality. There
are sensory neurons, motor neurons, local interneurons, projection interneurons,
and neuroendocrine cells. However, independently of the type, a neuron is basically
constituted of four parts: input, trigger, conduction, and output. These parts are
commonly represented in neuronal models.
In a very simplified manner, the neurons connect to each other in two basic
ways: through divergent and convergent connections. Divergent connection occurs
when the output of a neuron is split and is connected to the input of many other
neurons. Convergent connections are those where a certain neuron receives input
from many neurons.
It is with the organization of the neurons in ensembles that functional compartments emerge. Neurosciences provide a very detailed picture of the organization
of the neural units in the functional compartments (functional systems). Each
functional system is formed by various brain regions that are responsible for
processing different types of information. It has been shown that the paths that link
different components of a functional system are hierarchically organized.
It is mainly in the cerebral cortex that the cognitive functions take place.
Anatomically, the cerebral cortex is a thin outer layer of the cerebral hemispheres,
with a thickness of around 2 to 4 mm. The cerebral cortex is divided into four lobes: frontal,
parietal, temporal, and occipital (see Fig. 9.3). Each lobe has a different functional
specialisation, as described in Table 9.1.
Neurons and neuronal ensembles are characterised by their constant activity
represented as oscillations of wave signals of different main frequencies as shown
in Box 9.1.



Box 9.1. Main frequencies of wave signals in ensembles of neurons

in the brain
Alpha (8–12 Hz)
Beta (13–28 Hz)
Gamma (28–50 Hz)
Delta (0.5–3.5 Hz)
Theta (4–7 Hz)
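As a small illustration of how these bands can be used, the sketch below picks the dominant band of a signal from a plain FFT power spectrum. The band edges are those of Box 9.1; the function and the test signal are assumptions for illustration.

```python
import numpy as np

# Band edges (Hz) as given in Box 9.1:
BANDS = {"delta": (0.5, 3.5), "theta": (4, 7), "alpha": (8, 12),
         "beta": (13, 28), "gamma": (28, 50)}

def dominant_band(signal, fs):
    """Return the name of the Box 9.1 band with the most spectral power."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band_power = {name: power[(freqs >= lo) & (freqs <= hi)].sum()
                  for name, (lo, hi) in BANDS.items()}
    return max(band_power, key=band_power.get)

fs = 256                                 # sampling rate (Hz), illustrative
t = np.arange(fs * 2) / fs               # two seconds of samples
alpha_wave = np.sin(2 * np.pi * 10 * t)  # a 10 Hz test tone
```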


Modelling Brain Functions

The brain is the most complex information processing machine. It processes

data, information, and knowledge at different levels. Modelling the brain as an
information processing machine would have different results depending on the
goals of the models and the detail with which the models represent the genetic,
biological, chemical, physical, physiological, and psychological rules and the laws
that govern the functioning and behaviour of the brain.

Fig. 9.3 The cerebral cortex and the human brain (from Benuskova and Kasabov (2007)).

Table 9.1 Location of cognitive functions in the cerebral cortex of the brain.

Cognitive function | Cerebral cortex location
Visual perception | Occipital cortex
Auditory perception | Temporal cortex
Multimodal association (visio-spatial location, language) | Temporal, frontal, parietal
Multimodal emotions, memory |



Generally speaking there are six levels of information processing in the brain
as shown in Fig. I.1. We consider here the following four.

Molecular/Genetic Level
At the genetic level the genome constitutes the input information, whereas the
phenotype constitutes the output result, which causes: (1) changes in the neuronal
synapses (learning), and (2) changes in the DNA and its gene expression (Marcus,
2004). As pointed out in the introduction, neurons from different parts of the brain,
associated with different functions, such as memory, learning, control, hearing,
and vision, function in a similar way and their functioning is genetically defined.
This principle can be used as a unified approach to building different neuronal
models to perform different functions, such as speech recognition, vision, learning,
and evolving. The genes relevant to particular functions can be represented as
a set of parameters of a neuron. These parameters define the way the neuron
is functioning and can be modified through feedback from the output of the
neuron. Some genes may get triggered-off, or suppressed, whereas others may get
triggered-on, or excited.
An example of modelling at this level is given in Section 9.4, Computational
Neuro-Genetic Modelling.

Single Neuronal Level

There are many information models of neurons that have been explored in the
neural network theory (for a review see Arbib (2003)). Among them are:
1. Analytical models. An example is the Hodgkin-Huxley model (see Nelson and
Rinzel (1995)), considered to be the pioneering model describing
neuronal action potentials in terms of ion channels and current flow. Further
studies expanded this work and revealed the existence of a wide number of ion
channels (compartments) as well as showing that the set of ion channels varies
from one neuron to another.
2. McCulloch and Pitts (1943) type models. This type is currently used in traditional ANN models, including most of the ECOS methods presented in Part I of
this book.
3. Spiking neuronal models (see Chapter 4).
According to the neuronal model proposed in Matsumoto (2000) and Shigematsu
et al. (1999), a neuron accepts input information through its synapses and, subject
to the output value of the neuron, it modifies back some of the synapses: those
that, when the feedback signal reaches them, still have a level of information
(weights, chemical concentration) above a certain threshold. The weights of the
rest of the synapses decrease; see Fig. 9.4. Tsukada et al. (1996) proposed a spatial-temporal learning rule for LTP in the hippocampus.










Fig. 9.4 The model of a neuron proposed by Gen Matsumoto (2000). According to the model, each neuron
adjusts its synapses through a feedback from its output activation.

Neural Network (Ensemble) Level

Information is processed in ensembles of neurons that form a functionally defined
area. A neural network model comprises many neuronal models. The model is an
evolving one, and a possible implementation would be with the use of the methods
and techniques presented in Part I.

Entire Brain Level

Many neuronal network modules are connected to model a complex brain structure
and learning algorithms.
One model is introduced in Matsumoto and Shigematsu (1999). At different
levels of information processing, similar, and at the same time, different principles
apply. For example, the following common principles of learning across all levels
of information processing were used in the model proposed by Matsumoto (2000)
and Shigematsu et al. (1999):
Output dependency, i.e. learning is based on both the input information and the output.
Self-learning, i.e. the brain acquires its function, structure, and algorithm based
on both a super algorithm and self-organisation (self-learning).
Modelling the entire brain is far from having been achieved and it will take many
years to achieve this goal, but each step in this direction is a useful step towards
understanding the brain and towards the creation of intelligent machines that will
help people. Single functions and interactions between parts of the brain have been
modelled. An illustration of using spiking neural networks (SNN; see Chapter 4)
for modelling thalamuscortical interactions is shown in Fig. 9.5 (Benuskova and
Kasabov, 2007) and explained below.
The model from Fig. 9.5 has two layers. The input layer is supposed to represent
the thalamus (the main subcortical sensory relay in the brain) and the output
layer represents cerebral cortex. Individual model neurons can be based upon the
classical spike response model (SRM; Gerstner and Kistler (2002)). The weight of







Fig. 9.5 (a) Neural network model represents the thalamocortical (TC) system; (b) the SNN represents cerebral
cortex. About 10–20% of the neurons are inhibitory neurons that are randomly positioned on the grid (filled
circles). The input layer represents the thalamic input to cortex. The presented model does not have a feedback
loop from the cortex to the thalamus (from Benuskova and Kasabov (2007)).

the synaptic connection from neuron j to neuron i is denoted Jij . It takes positive
(negative) values for excitatory (inhibitory) connections, respectively. Lateral and
input connections have weights that decrease in value with distance from the centre
neuron i according to a Gaussian formula whereas the connections themselves can
be established at random (for instance with p = 0.5).
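The connection scheme just described, a Gaussian decay of weight magnitude with grid distance combined with random connectivity of probability p, can be sketched as follows. The grid size, sigma, and random seed are illustrative assumptions.

```python
import numpy as np

def lateral_weights(n, sigma=2.0, p=0.5, seed=0):
    """Lateral weights on an n x n neuron grid, from every neuron j to
    every neuron i: Gaussian decay with distance, connections kept with
    probability p, no self-connections."""
    rng = np.random.default_rng(seed)
    ys, xs = np.divmod(np.arange(n * n), n)        # grid coordinates of each neuron
    d2 = (xs[:, None] - xs[None, :]) ** 2 + (ys[:, None] - ys[None, :]) ** 2
    w = np.exp(-d2 / (2 * sigma ** 2))             # Gaussian decay with distance
    mask = rng.random((n * n, n * n)) < p          # random connectivity
    np.fill_diagonal(mask, False)                  # no self-connection
    return w * mask

W = lateral_weights(5)
```

Inhibitory neurons could then be modelled by negating the outgoing weights of a randomly chosen 10–20% of the neurons, as in the caption of Fig. 9.5.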
For example, the asynchronous thalamic activity in the awake state of the brain
can be simulated by a series of random input spikes generated in the input
layer neurons. For the state of vigilance, a tonic, low-frequency, nonperiodic, and
nonbursting firing of thalamocortical input is typical. For simulation of the sleep
state we can employ regular oscillatory activity coming out of the input layer,
etc. LFP (Local Field Potential) can be defined as an average of all instantaneous
membrane potentials; i.e.

LFP(t) = (1/N) Σ_{i=1..N} u_i(t)

Spiking neurons can be interconnected into neural networks of arbitrary architecture. At the same time it has been shown that SNN have the same computational power as traditional ANNs (Maas, 1996, 1998). With spiking neurons,
however, new types of computation can be modelled, such as coincidence detection,
synchronization phenomena, etc. Spiking neurons are more easily implemented
in hardware than traditional neurons and integrated with neuromorphic systems.


Auditory, Visual, and Olfactory Information Processing and Their Modelling

The human brain deals mainly with five sensory modalities: vision, hearing,
touch, taste, and smell. Each modality has different sensory receptors. After the
receptors perform the stimulus transduction, the information is encoded through



the excitation of neural action potentials. The information is encoded using pulses
and time intervals between pulses. This process seems to follow a common pattern
for all sensory modalities, however, there are still many unanswered questions
regarding the way the information is encoded in the brain.


Auditory Information Processing

The hearing apparatus of an individual transforms sounds and speech signals into
brain signals. These brain signals travel farther to other parts of the brain that
model the (meaningful) acoustic space (the space of phones), the space of words,
and the space of languages (see Fig. 9.6). The auditory system is adaptive, so new
features can be included at a later stage and existing ones can be further tuned.
Precise modelling of hearing functions and the cochlea is an extremely difficult
task, but not impossible to achieve (Eriksson and Villa, 2006). A model of the
cochlea would be useful for both helping people with disabilities, and for the
creation of speech recognition systems. Such systems would be able to learn and
adapt as they work.
The ear is the front-end auditory apparatus in mammals. The task of this
hearing apparatus is to transform the environmental sounds into specific features
and transmit them to the brain for further processing. The ear consists of three
divisions: the outer ear, the middle ear, and the inner ear, as shown in Fig. 9.7.


Model of
the cochlea




Fig. 9.6 A schematic diagram of a model of the auditory system of the brain.

Fig. 9.7 A schematic diagram of the outer ear, the middle ear, and the inner ear. (Reproduced with permission



Figure 9.8 shows the human basilar membrane and the approximate position of
the maximal displacement of tones of different frequencies. This corresponds to a
filter bank of several channels, each tuned to a certain band of frequencies.
There are several models that have been developed to model functions of the
cochlea (see, e.g. Greenwood (1961, 1990), de-Boer and de Jongh (1978), Allen
(1995), Zwicker (1961), Glassberg and Moore (1990), and Eriksson and Villa, 2006).
Very common are the Mel filter banks and the Mel scale cepstra coefficients (Cole
et al., 1995). For example, the centres of the first 26 Mel filter banks are the
following frequencies (in Hertz): 86, 173, 256, 430, 516, 603, 689, 775, 947, 1033,
1120, 1292, 1550, 1723, 1981, 2325, 2670, 3015, 3445, 3962, 4565, 5254, 6029, 6997,
8010, 9216, 11025. The first 20 Mel filter functions are shown in Fig. 9.9.
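The Mel scale behind these filter banks is commonly computed with the HTK-style formula m = 2595 · log10(1 + f/700); this convention is an assumption here, as the text does not give the exact formula used.

```python
import numpy as np

def hz_to_mel(f):
    """Convert frequency in Hz to Mels (HTK-style convention)."""
    return 2595.0 * np.log10(1.0 + np.asarray(f, dtype=float) / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (np.asarray(m, dtype=float) / 2595.0) - 1.0)

def mel_centres(n_filters, f_min, f_max):
    """Filter-bank centre frequencies spaced uniformly on the Mel scale."""
    mels = np.linspace(hz_to_mel(f_min), hz_to_mel(f_max), n_filters)
    return mel_to_hz(mels)
```

With `mel_centres(26, 86, 11025)` the first and last centres match the 86 Hz and 11,025 Hz endpoints quoted above, although the intermediate values depend on the exact Mel formula used.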
Other representations use a gammatone function (Aertsen and Johannesma,
1980). It is always challenging to improve the acoustic modelling functions and
make them closer to the functioning of the biological organs, which is expected to
lead to improved speech recognition systems.
The auditory system is particularly interesting because it allows us not only
to recognize sound but also to perform sound source location efficiently. Human
ears are able to detect frequencies in the approximate range of 20 to 20,000 Hz.

Fig. 9.8 Diagram of the human basilar membrane, showing the approximate positions of maximal displacement
to tones of different frequencies.



Fig. 9.9 The first 20 Mel filter functions.

Each ear processes the incoming signals independently; the results are later integrated,
considering the signals' timing, amplitudes, and frequencies (see Fig. 9.10). The
small time difference between the signals arriving at the left and the right ear
provides a cue to the location of the signal's origin.
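The timing cue just described can be sketched by cross-correlating the two ear signals and taking the lag with maximal correlation. The signals below are synthetic stand-ins, not real recordings.

```python
import numpy as np

def itd_samples(left, right):
    """Lag (in samples) of `right` relative to `left` with maximal correlation."""
    n = len(left)
    corr = np.correlate(right, left, mode="full")  # covers lags -(n-1)..(n-1)
    return int(np.argmax(corr) - (n - 1))

rng = np.random.default_rng(1)
left = rng.standard_normal(256)                   # sound at the left ear
right = np.concatenate([np.zeros(5), left[:-5]])  # arrives 5 samples later
```

Here `itd_samples(left, right)` recovers the 5-sample delay; dividing the lag by the sampling rate gives the time difference, which maps to an angle of origin.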
How do musical patterns evolve in the human brain? Music causes the emergence
of patterns of activities in the human brain. This process is continuous, evolving,
although in different pathways depending on the individual.

Fig. 9.10 A schematic representation of a model of the auditory system. The left and the right ear information
processing are modelled separately and the results are later integrated considering the signals' timing, amplitudes,
and frequencies.



Each musical piece is characterised by specific main frequencies (formants) and
rules to change them over time. There is a large range of frequencies in Mozart's
music, the greatest energy being in the spectrum of the Theta brain activity (see
Box 9.1). One can speculate that this fact may explain why the music of Mozart
stimulates human creativity. But it is not the static picture of the frequencies
that makes Mozart's music fascinating; it is the dynamics of the changes of the
patterns of these frequencies over time.


Visual Information Processing

The visual system is composed of the eyes, the optic nerves, and many specialised areas
of the cortex (the ape, for example, has more than 30).
The image on the retina is transmitted via the optic nerves to the primary visual
cortex (V1), which is situated in the posterior lobe of the brain. There the information is divided into two main streams, the 'what' tract and the 'where' tract.
The ventral ('what') tract separates targets (objects and things) in the field of
vision and identifies them. This tract traverses the occipital lobe to the temporal
lobe (behind the ears).
The dorsal ('where') tract is specialised in following the location and position
of the objects in the surrounding space. The dorsal tract traverses from the back of the
head to the top of the head.
How and where the information from the two tracts is united to form one
complete perception is not completely known.
On the subject of biological approaches for processing incoming information,
Hubel and Wiesel (1962) received many awards for their description of the human
visual system. Through neuro-physiological experiments, they were able to distinguish some types of cells that have different neurobiological responses according
to the pattern of light stimulus. They identified the role that the retina has as a
contrast filter as well as the existence of orientation selective cells in the primary
visual cortex (Fig. 9.11). Their results have been widely implemented in biologically
realistic image acquisition approaches.
The idea of contrast filters and orientation selective cells can be considered a
feature selection method that finds a close correspondence with traditional ways
of image processing, such as Gaussian and Gabor filters.
A Gaussian filter can be used for modelling the ON/OFF states of receptive cells:

G(x, y) = exp( −(x² + y²) / (2σ²) )

A Gabor filter can be used to model the states of orientation cells:

G(x, y) = exp( −(x′² + γ²y′²) / (2σ²) ) cos( 2πx′/λ + ψ )

x′ = x cos θ + y sin θ
y′ = −x sin θ + y cos θ





Fig. 9.11 Contrast cells and direction selective visual cells.

where ψ = phase offset, θ = orientation (0–360°), λ = wavelength, σ = standard
deviation of the Gaussian factor of the Gabor function, and γ = aspect ratio
(specifies the ellipticity of the support of the Gabor function).
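A minimal sketch of the two filter kernels above; the kernel size and the parameter values are assumptions for illustration, not taken from the text.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Isotropic Gaussian kernel for the ON/OFF receptive-cell filter."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2))

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=0.5):
    """Gabor kernel for orientation-selective cells, following the
    formulas above (lam stands for the wavelength lambda)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xp = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    yp = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xp**2 + gamma**2 * yp**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xp / lam + psi))

gk = gaussian_kernel(9, 2.0)
g = gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0)
```

Convolving an image with a bank of such Gabor kernels at several orientations gives a simple model of the orientation-selective responses discussed above.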
A computational model of the visual subsystem would consist of the following modules:
1. A visual preprocessing module that mimics the functioning of the retina, the
retinal network, and the lateral geniculate nucleus (LGN).
2. An elementary feature recognition module, responsible for the recognition of
features such as the curves of lips or the local colour. The peripheral visual
areas of the human brain perform a similar task.
3. A dynamic feature recognition module that detects dynamical changes of
features in the visual input stream. In the human brain, the processing of visual
motion is performed in the V5/MT area of the brain.
4. An object recognition module that recognises elementary shapes and their parts.
This task is performed by the inferotemporal (IT) area of the human brain.



5. An object/configuration recognition module that recognises objects such as

faces. This task is performed by the IT and parietal areas of the human brain.


Integrated Auditory and Visual Information Processing

How auditory and visual perception relate to each other in the brain is a fundamental question; see Fig. 9.12. Here, the issue of integrating auditory and visual
information in one information processing model is discussed. Such models
may lead to better information processing and adaptation in future intelligent systems.
A model of multimodal information processing in the brain is presented in
Deacon (1988); see Fig. 9.13. The model includes submodels of the functioning
of different areas of the human brain related to auditory and simultaneously
perceived visual stimuli. Some of the submodules are connected to each other, e.g.
the prefrontal cortex submodel and the Broca's area submodel.
Each distinct information processing unit has serial and hierarchical pathways
where the information is processed. In the visual system, for instance, the information is divided into submodalities (colour, shape, and movement) that are
integrated at a later stage. Analysing one level above, the same pattern can be

Fig. 9.12 A schematic representation of a model of multimodal information processing (visual and auditory
information) in the brain.



Fig. 9.13 Deacon's model for multi-modal information processing (Deacon, 1988).

noticed. The information from different modules converges to a processing area

responsible for the integration. A simple example is the detection of a certain food
by the integration of the smell and the visual senses.


Olfactory Information Processing

Smell and taste are chemical senses and the only senses that do not maintain
the spatial relations of the input receptors. However, after the transduction of the
olfactory stimuli, the encoding is done similarly to the other senses, using pulse
rate or pulse time. Furthermore, contrast analysis is done in the first stage of the
pathway and parallel processing of olfactory submodalities has been proven to
happen in the brain.
There are different types of olfactory sensory neurons that are stimulated by
different odorants. Thus, having a large number of different receptor types allows
many odorants to be discriminated. The olfactory discrimination capacity in
humans varies greatly and can reach 5000 different odorants in trained people.
Chemical stimuli are acquired by millions of olfactory sensory neurons that can
be of as many as 1000 types. In the olfactory bulb, there is the convergence of
sensory neurons to units called glomeruli (approx. 25,000 to 1), that are organized
in such a way that information from different receptors is placed in different
glomeruli. Each odorant (smell) is recognized using several glomeruli and each
glomerulus can take part in recognizing many odorants. Thus, glomeruli are not
odour-specific, but a specific odour is described by a unique set of glomeruli. How
the encoding is done is still unknown. The glomeruli can be roughly considered
to represent the neural image of the odour stimuli. The information is then sent
to different parts of the olfactory cortex for odour recognition; see Fig. 9.14.
Artificial sensors for odour detection are widely available and include metal
oxide sensors, polymer resonating sensors, and optical bead sensors. The second
step, after data acquisition, is to process the olfactory information. An artificial
system that processes olfactory information is required to have mainly three
properties: process many sensor inputs; discern a large number of different odours;
and handle noisy acquired data.


Evolving Connectionist Systems

Fig. 9.14 A schematic diagram of the olfactory information pathway.

There are many models aiming to describe the flow of olfactory information.
Some of them are oversimplified, tending only to perform pattern recognition
without a long biological description and some of them very complex and detailed.
A spiking neural network is used in the system described in Allen et al. (2002)
where odour acquisition is done through a two-dimensional array of sensors (more
than 1000). The temporal binary output of the sensor is then passed to a spiking
neural network for classification of different scents. The system is then embedded
in an FPGA chip.
In Zanchettin and Ludermir (2004) an artificial nose model and a system are
experimented on for the recognition of gases emitted at petrol stations, such as
ethane, methane, butane, propane, and carbon monoxide. The model consists of:

Sensory elements: eight polypyrrole-based gas sensors
An EFuNN for classification of the sensory input vectors into one or several of the
output classes (gases)
The EFuNN model performs at a 99% recognition rate whereas a time-delay ANN
performs at the rate of 89%. In addition, the EFuNN model can be further trained
on new gases, new sensors, and new data; it also allows well-known rules from
the theory of the gas compounds to be inserted into the EFuNN structure as an
initialisation.
Other olfactory models and artificial nose systems have been developed and
implemented in practice (Valova et al., 2004).


Adaptive Modelling of Brain States Based on EEG and fMRI Data

EEG Measurements

The moving electrical charges associated with a neuronal action potential emit
a minute, time-varying electromagnetic field. The complex coordinated activities
associated with cognitive processes require the cooperation of millions of neurons,
all of which will emit this electrical energy. The electromagnetic fields generated
by these actively cooperating neurons linearly sum together via superposition.
These summed fields propagate through the various tissues of the cranium to the



scalp surface, where EEGs can detect and record this neural activity by means of
measuring electrodes placed on the scalp.
Historically, expert analysis via visual inspection of the EEG has tended to focus
on the activity in specific wavebands, such as delta, theta, alpha, and beta. As
far back as 1969, attempts were made to use computerised analysis of EEG data
in order to determine the subject's state of consciousness. Figure 9.15 shows an
example of EEG data collected from two states of the same subject: the normal
state and an epileptic state.
Noise contaminants in an EEG are called artefacts. For example, the physical
movement of the test subject can contaminate an EEG with the noise generated
by the action potentials of the skeletal muscles. Even if the skeletal muscle action
potentials do not register on the EEG, the shifting mechanical stress on the
electrodes can alter the contact with the subject, thereby affecting the measuring
electrodes' conductivity. These variations in electrode contact conductivity will
also result in the recording of a movement artefact.

Fig. 9.15 EEG signals recorded in eight channels from a person in a normal state and in an epileptic state
(the onset of epilepsy is manifested after the time unit 10).




ECOS for Brain EEG Data Modelling, Classification, and Brain Signal Transition Rule Extraction

In Kasabov et al. (2007) a methodology for continuous adaptive learning and
classification of human scalp electroencephalographic (EEG) data in response to
multiple stimuli is introduced based on ECOS. The methodology is illustrated on a
case study of human EEG data, recorded at resting, auditory, visual, and mixed audiovisual stimulation conditions. It allows for incremental continuous adaptation and
for the discovery of brain signal transition rules, such as: IF segments S1, S2, …, Sn
of the brain are active at a time moment t THEN segments R1, R2, …, Rm will
become active at the next time moment (t+1). The method results in a good classification accuracy of EEG signals of a single individual, thus suggesting that ECOS
could be successfully used in the future for the creation of intelligent personalized human–computer interaction models, continuously adaptable over time, as
well as for the adaptive learning and classification of other EEG data, representing
different human conditions. The method could help better understand hidden
signal transitions in the brain under certain stimuli when EEG measurement is used.
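Such a temporal IF–THEN transition rule can be represented very simply as data; the structure and the electrode names used below are an illustrative sketch, not the actual ECOS rule format:

```python
# Illustrative-only representation of a brain-signal transition rule:
# IF segments S are active at time t THEN segments R become active at t + 1.
from dataclasses import dataclass

@dataclass
class TransitionRule:
    if_active: tuple    # segments active at time t
    then_active: tuple  # segments predicted active at t + 1

rule = TransitionRule(if_active=("T7", "T8"), then_active=("O1", "O2"))

def apply(rule, active_now):
    """Fire the rule if all its antecedent segments are currently active."""
    if set(rule.if_active) <= set(active_now):
        return set(rule.then_active)
    return set()

print(sorted(apply(rule, {"T7", "T8", "Cz"})))  # ['O1', 'O2']
```

A set of such rules, extracted from the trained ECOS, forms an interpretable description of how activity moves across the scalp from one time moment to the next.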
Figure 9.16 shows the rule nodes of an evolved ECOS model from data of a person
A using 37 EEG channels as input variables, plotted in a 3D PCA space. The circles
represent rule nodes allocated for class 1 (auditory stimulus); asterisks, class 2

Fig. 9.16 The rule nodes of an evolved ECOS model from data of a person A using 37 EEG channels as input
variables, plotted in a 3D PCA space. The circles represent rule nodes allocated for class 1 (auditory stimulus);
asterisks, class 2 (visual stimulus); squares, class 3 (AV, auditory and visual stimuli combined); and triangles,
class 4 (no stimulus). It can be seen that some rule nodes allocated to one stimulus are close in the model's
space, meaning that they represent close locations on the EEG surface. At the same time, there are nodes that
represent each of the stimuli and are spread over the whole space, meaning that for a single stimulus the
brain activates many areas at different times during the presentation of the stimulus.



(visual stimulus); squares, class 3 (AV, auditory and visual stimulus combined);
and triangles, class 4 (no stimulus). It can be seen that rule nodes allocated to one
stimulus are close in the space, which means that their input vectors are similar.
The allocation of the above nodes (cluster centres) back to the EEG channels for
each stimulus is shown in Fig. 9.17 and Fig. 9.18 shows the original EEG electrodes
allocation on the human scalp.


Computational Modelling Based on fMRI Brain Images

Neural activity is a metabolic process that requires oxygen. Active neurons require
more oxygen than quiescent neurons, so they extract more oxygen from the blood.
Functional magnetic resonance imaging (fMRI) makes use of this fact by using
deoxygenated haemoglobin as an MRI contrast agent. The entire theory of MRI is
based on the fact that different neurons have different relaxation times. In MRI, the
nuclear magnetic moments of the neurons to be imaged are aligned with a powerful

Fig. 9.17 The location of the selected, significantly activated electrodes, from the ECOS model in Fig. 9.16 for
each of the stimuli of classes from 1 to 4 (A, V, AV, No, from left to right, respectively).

Fig. 9.18 Layout of the 64 EEG electrodes (extended International 10-10 System).



magnetic field. Once the nuclei are aligned, their magnetic moment is excited with
a tuned pulse of resonance frequency energy. As these excited nuclear magnetic
moments decay back to their rest state, they emit the resonance frequency energy
that they have absorbed. The amount of time that is taken for a given neuron to
return, or decay, to the rest state depends upon that neuron's histological type.
This decay time is referred to as the relaxation time, and nerve tissues can be
differentiated from each other on the basis of their varying relaxation times.
In this manner, oxygenated haemoglobin can be differentiated from
deoxygenated haemoglobin because of the divergent relaxation times. fMRI seeks
to identify the active regions of the brain by locating regions that have increased
proportions of deoxygenated haemoglobin.
EEG and fMRI have their own strengths and weaknesses when used to measure
the activity of the brain. EEGs are prone to various types of noise contamination. Also, there is nothing intuitive or easy to understand about an EEG recording. In
principle, fMRI is much easier to interpret. One only has to look for the contrasting
regions contained in the image. In fMRI, the resonance frequency pulse is tuned
to excite a specific slice of tissue. This localisation is enabled by a small gradient
magnetic field imposed along the axis of the imaging chamber. After the slice
select resonance frequency excitation pulse, two other magnetic gradients are
imposed on the other two axes within the chamber. These additional gradients
are used for the imaging of specific tissue voxels within the excited slice. If the
subject moves during this time, the spatial encoding inherent in these magnetic
gradients can be invalidated. This is one of the reasons why MRI sessions last so
long. Some type of mechanical restraint on the test subject may be necessary to
prevent this type of data invalidation.
The assumption is that it should be possible to use the information about
specific cortical activity in order to make a determination about the underlying
cognitive processes. For example, if the temporal regions of the cortex are quite
active while the occipital region is relatively quiescent, we can determine that the
subject has been presented with an auditory stimulus. On the other hand, an active
occipital region would indicate the presence of a visual stimulus. By collecting
data while a subject is performing specific cognitive tasks, we can learn which
regions of the brain exhibit what kind of activity for those cognitive processes. We
should also be able to determine the brain activity that characterises emotional
states (happy, sad, etc.) and pathological states (epilepsy, depression, etc.) as well.
Because cognition is a time-varying and dynamic process, the models that we
develop must be capable of mimicking this time-varying dynamic structure.
In Rajapakse et al. (1998) a computational model of fMRI time series analysis
is presented (see Fig. 9.19). It consists of phases of activity measurement, adding





Fig. 9.19 Rajapakse's computational model of fMRI time-series analysis consists of phases of neuronal activity
measurement, modulation, adding noise, and fMRI time-series analysis (modified from Rajapakse et al. (1998)).



noise, modulation, and fMRI time-series analysis. The time series of fMRI are
recorded from subjects performing information retrieval tasks.
Online brain image analysis, where brain images are added to the model and
the model is updated in a continuous way, is explored in Bruske et al. (1998),
where a system for online clustering of fMRI data is proposed.
A comprehensive study of brain imaging and brain image analysis related to
cognitive processes is presented in J. G. Taylor (1999). Brain imaging has started
to be used for the creation of models of consciousness. A three-stage hypothetical
model of consciousness, for example, is presented in J. G. Taylor (1998).
Using ECOS for online brain image analysis and modelling is a promising area
for further research, as ECOS allow for online model creation, model adaptation,
and model explanation.


Computational Neuro-Genetic Modelling (CNGM)

Principles of CNGM

A CNGM integrates genetic, proteomic, and brain activity data and performs data
analysis, modelling, prognosis, and knowledge extraction that reveals relationships
between brain functions and genetic information (see Fig. I.1).
A future state M′ of a molecule, or of a group of molecules (e.g. genes, proteins),
depends on its current state M and on an external signal Em:

M′ = Fm(M, Em)

A future state N′ of a neuron, or of an ensemble of neurons, will depend on its
current state N, on the state of the molecules M (e.g. genes), and on external
signals En:

N′ = Fn(N, M, En)

And finally, a future cognitive state C′ of the brain will depend on its current state
C, and also on the neuronal state N, the molecular state M, and the external
stimuli Ec:

C′ = Fc(C, N, M, Ec)


The above set of equations (or algorithms) is a general one and in different cases
it can be implemented differently as shown in Benuskova and Kasabov (2007) and
illustrated in the next section.
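One minimal reading of these three coupled equations is a discrete-time simulation; the linear forms chosen for Fm, Fn, and Fc below are placeholder assumptions, not concrete functions from the book:

```python
# A minimal sketch of the coupled molecular/neuronal/cognitive updates
# M' = Fm(M, Em), N' = Fn(N, M, En), C' = Fc(C, N, M, Ec).
# The linear forms of Fm, Fn, Fc are placeholder assumptions.
def step(M, N, C, Em, En, Ec):
    M_next = 0.9 * M + 0.1 * Em                       # molecular state
    N_next = 0.8 * N + 0.1 * M + 0.1 * En             # neuronal state depends on M
    C_next = 0.7 * C + 0.1 * N + 0.1 * M + 0.1 * Ec   # cognitive state depends on N and M
    return M_next, N_next, C_next

M, N, C = 0.5, 0.5, 0.5
for _ in range(100):
    M, N, C = step(M, N, C, Em=1.0, En=0.0, Ec=0.0)
print(round(M, 3), round(N, 3), round(C, 3))
```

The point of the sketch is only the dependency structure: a molecular input Em propagates upward, first settling the molecular state, then the neuronal state, then the cognitive state.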


Integrating GRN and SNN in CNGM

In Kasabov and Benuskova (2004) and Benuskova and Kasabov (2007) we have
introduced a novel computational approach to brain neural network modelling that



integrates an ANN with an internal dynamic GRN (see Fig. 9.20). Interaction of genes
in model neurons affects the dynamics of the whole ANN through neuronal parameters, which are no longer constant, but change as a function of gene expression.
Through optimisation of the GRN, the initial gene/protein expression values, and
the ANN parameters, particular target states of the neural network operation can be
achieved. This is illustrated by means of a simple neuro-genetic model of a spiking neural
network (SNN). The behaviour of SNN is evaluated by means of the local field
potential (LFP), thus making it possible to attempt modelling the role of genes
in different brain states, where EEG data are available to test the model. We use
the standard FFT signal-processing technique to evaluate the SNN output and
compare with real human EEG data. For the objective of this work, we consider
the time-frequency resolution reached with the FFT to be sufficient. However,
should higher accuracy be critical, wavelet transform, which considers both time
and frequency resolution, could be used instead. Broader theoretical and biological
background of CNGM construction is given in Kasabov and Benuskova (2004) and
Benuskova and Kasabov (2007).
In general, we consider two sets of genes: a set Ggen that relates to general cell
functions and a set Gspec that defines specific neuronal information-processing
functions (receptors, ion channels, etc.). The two sets together form a set G =
{G1, G2, …, Gn}. We assume that the expression level of each gene, gj(t + Δt), is a
nonlinear function of the expression levels of all the genes in G:

gj(t + Δt) = σ( Σk wjk gk(t) )    (9.8)

where σ is a nonlinear (sigmoid-type) function. We work with normalized gene
expression values in the interval (0, 1). The coefficients wjk are the elements of the
square matrix W of gene interaction weights. Initial values of gene expressions are
small random values, i.e. gj(0) ≈ 0.01.
In the current model we assume that: (1) one protein is coded by one gene;
(2) the relationship between the protein level and the gene expression level is

Fig. 9.20 A more complex case of CNGM, where a GRN of many genes is used to represent the interaction
of genes, and an ANN is employed to model a brain function. The model spectral output is compared against
real brain data for validation of the model and for verifying the derived gene interaction GRN after a GA model
optimization is applied (see Chapters 6 and 8) (Kasabov et al., 2005; Benuskova and Kasabov, 2007).

Dynamic Modelling of Brain Functions and Cognitive Processes


nonlinear; and (3) protein levels lie between the minimal and maximal values.
Thus, the protein level pj(t + Δt) is expressed by

pj(t + Δt) = (pjmax − pjmin) σ( Σk wjk gk(t) ) + pjmin    (9.9)

The delay Δt corresponds to the delay caused by gene transcription,
mRNA translation into proteins, and posttranslational protein modifications
(Abraham et al., 1993). The delay Δt also includes the delay caused by gene
transcription regulation by transcription factors.
The GRN model from Eqs. (9.8) and (9.9) is a general one and can be integrated
with any ANN model into a CNGM. Unfortunately the model requires many
parameters to be either known in advance or optimized during a model simulation.
In the presented experiments we have made several simplifying assumptions:
1. Each neuron has the same GRN, i.e. the same genes and the same interaction
gene matrix W.
2. Each GRN starts from the same initial values of gene expressions.
3. There is no feedback from neuronal activity or any other external factors to
gene expression levels or protein levels.
4. Delays Δt are the same for all proteins and reflect equal time points of gathering
protein expression data.
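Under these simplifying assumptions, Eqs. (9.8) and (9.9) can be sketched with NumPy; the sigmoid nonlinearity, the weight range, and the protein bounds p_min and p_max are illustrative assumptions where the text leaves them unspecified:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes = 16

# Gene interaction matrix W and small random initial expressions (Eq. 9.8).
# The uniform (-1, 1) range for the weights is an illustrative assumption.
W = rng.uniform(-1.0, 1.0, size=(n_genes, n_genes))
g = rng.uniform(0.0, 0.01, size=n_genes)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grn_step(g):
    """g_j(t + dt) = sigma(sum_k w_jk g_k(t)); values stay in (0, 1)."""
    return sigmoid(W @ g)

# Protein levels bounded between p_min and p_max (Eq. 9.9, sketched;
# the same GRN state drives them, with an assumed delay folded away).
p_min, p_max = 0.1, 1.0
def protein_levels(g):
    return (p_max - p_min) * sigmoid(W @ g) + p_min

for _ in range(10):
    g = grn_step(g)
p = protein_levels(g)
assert np.all((g > 0) & (g < 1)) and np.all((p >= p_min) & (p <= p_max))
```

Because σ maps into (0, 1), gene expressions remain normalized and protein levels remain between their minimal and maximal values, as assumptions (2) and (3) require.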
We have integrated the above GRN model with the SNN illustrated in Fig. 9.20.
Our spiking neuron model is based on the spike response model, with excitation
and inhibition having both fast and slow components, both expressed as double
exponentials with amplitudes and the rise and decay time constants (see chapter 4).
Neuronal parameters and their correspondence to particular proteins are
summarized in Table 9.2. Several parameters (amplitude, time constants) are linked

Table 9.2 Neuronal parameters and their corresponding proteins
(receptors/ion channels).

Neuron's parameter Pj                              Relevant protein pj

Amplitude and time constants of:
  Fast excitation                                  AMPAR
  Slow excitation                                  NMDAR
  Fast inhibition                                  GABRA
  Slow inhibition                                  GABRB
Firing threshold and its decay time constant       SCN and/or KCN and/or CLC

AMPAR = (amino-methylisoxazole-propionic acid) AMPA receptor; NMDAR = (N-methyl-D-aspartate acid) NMDA receptor; GABRA = (gamma-aminobutyric acid) GABA receptor
A; GABRB = GABA receptor B; SCN = sodium voltage-gated channel; KCN = kalium
(potassium) voltage-gated channel; CLC = chloride channel.



to one protein. However, their initial values in Eq. (9.3) will be different. Relevant
protein levels are directly related to neuronal parameter values Pj such that

Pj(t) = Pj(0) pj(t)


where Pj(0) is the initial value of the neuronal parameter at time t = 0. Moreover,
in addition to the genes coding for the proteins mentioned above, we include in
our GRN nine more genes that are not directly linked to neuronal information-processing parameters. These genes are: c-jun, mGLuR3, Jerky, BDNF, FGF-2,
IGF-I, GALR1, NOS, and S100beta. We have included them for later modelling of
some diseases.
We want to achieve a desired SNN output through optimisation of the model
parameters (we are also optimising the connectivity and the input frequency of
the SNN). We evaluate the LFP of the SNN, defined as LFP = (1/N) Σ ui(t), by
means of an FFT in order to compare the SNN output with the EEG signal analysed
in the same way. It has been shown that brain LFPs in principle have the same
spectral characteristics as EEG (Quiroga, 1998). Although the updating time for SNN
dynamics is inherently 1 ms, just for computational reasons we employ the delays
Δt in Eq. (9.9) being equal to just 1 s instead of minutes or tens of minutes. In order
to find an optimal GRN within the SNN model so that the frequency characteristics
of the LFP of the SNN model are similar to the brain EEG characteristics, we use
the following procedure.
1. Generate a population of CNGMs, each with randomly generated values of
coefficients for the GRN matrix W, initial gene expression values g0, initial
values of SNN parameters P0, and different connectivity.
2. Run each SNN over a period of time T and record the LFP.
3. Calculate the spectral characteristics of the LFP using the FFT.
4. Compare the spectral characteristics of SNN LFP to the characteristics of the
target EEG signal. Evaluate the closeness of the LFP signal for each SNN to
the target EEG signal characteristics. Proceed further according to the standard
GA algorithm to possibly find a SNN model that matches the EEG spectral
characteristics better than previous solutions.
5. Repeat steps 1 to 4 until the desired GRN and SNN model behaviour is obtained.
6. Analyse the GRN and the SNN parameters for significant gene patterns that
cause the SNN model behaviour.
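These six steps can be sketched as a GA loop with a six-member population, as used in the experiments below; the SNN itself is replaced here by a toy signal generator (the function simulate_lfp and its internals are invented placeholders), while the fitness compares FFT spectra against a target, as in steps 3 and 4:

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, T = 16, 1024

def simulate_lfp(W):
    """Placeholder for the real CNGM: the actual model would evolve gene
    expressions via W, map them to neuronal parameters, run the SNN, and
    record the LFP. Here a toy oscillation is derived from W instead."""
    freq = 1.0 + abs(W).mean() * 40.0          # toy: W controls frequency
    t = np.arange(T) / 1000.0
    return np.sin(2 * np.pi * freq * t)

def spectrum(x):
    return np.abs(np.fft.rfft(x))

# Target spectral characteristics (standing in for a real EEG spectrum).
target = spectrum(np.sin(2 * np.pi * 10.0 * np.arange(T) / 1000.0))

def fitness(W):
    return -np.linalg.norm(spectrum(simulate_lfp(W)) - target)

# Steps 1-5: generate a population of GRN matrices, evaluate each model's
# LFP spectrum against the target, keep the best, mutate, repeat.
population = [rng.uniform(-1, 1, (n_genes, n_genes)) for _ in range(6)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    population = [best] + [best + rng.normal(0, 0.05, best.shape)
                           for _ in range(5)]
print(round(-fitness(best), 2))  # spectral distance of the best solution
```

Step 6, the analysis of the surviving W for significant gene interaction patterns, would then be carried out on the best matrix found.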

Simulation Results
In Benuskova and Kasabov (2007) experimental results were presented on real
human interictal EEG data for different clinically relevant subbands over time.
These subbands are: delta (0.5–3.5 Hz), theta (3.5–7.5 Hz), alpha (7.5–12.5 Hz), beta
1 (12.5–18 Hz), beta 2 (18–30 Hz), and gamma (above 30 Hz). The average RIRs
over the whole time of simulation (i.e., T = 1 min) was calculated and used as a
fitness function for a GA optimisation. After 50 generations with six solutions in
each population we obtained the best solution. Solutions for reproduction were
being chosen according to the roulette rule and the crossover between parameter



values was performed as an arithmetic average of the parent values. We performed
the same FFT analysis as for the real EEG data, with the min–max frequency =
0.1/50 Hz. This particular SNN had an evolved GRN with only 5 genes out of 16
periodically changing their expression values (s100beta, GABRB, GABRA, mGLuR3,
c-jun) and all other genes having constant expression values.
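The subband analysis used above can be sketched as an FFT-based relative band power computation (reading "RIR" as a relative intensity ratio, i.e. band power over total power, which is an assumption about the abbreviation; the sampling rate is illustrative):

```python
import numpy as np

fs = 256  # sampling rate in Hz (illustrative)
bands = {"delta": (0.5, 3.5), "theta": (3.5, 7.5), "alpha": (7.5, 12.5),
         "beta1": (12.5, 18.0), "beta2": (18.0, 30.0), "gamma": (30.0, fs / 2)}

def relative_band_powers(x):
    """Relative power of each clinical subband via the FFT power spectrum."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    total = power[freqs > 0.5].sum()
    return {name: power[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}

# A pure 10 Hz tone should concentrate its power in the alpha band.
t = np.arange(fs * 4) / fs
rbp = relative_band_powers(np.sin(2 * np.pi * 10.0 * t))
print(max(rbp, key=rbp.get))  # alpha
```

The same function can be applied unchanged to a simulated LFP and to a real EEG segment, which is the point made in the text: one signal-processing pipeline serves both the model output and the measured data.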
The preliminary results show that the same signal-processing techniques can be
used for the analysis of both the simulated LFP of the SNN CNGM and the real
EEG data to yield conclusions about the SNN behaviour and to evaluate the CNGM
at a gross level. With respect to our neuro-genetic approach we must emphasize
that it is still in an early developmental stage and the experiments assume many
simplifications. In particular, we would have to deal with the delays in Eq. (9.9)
more realistically to be able to draw any conclusions about real data and real
GRNs. The LFP obtained from our simplified model SNN is of course not exactly
the same as the real EEG, which is a sum of many LFPs. However, the LFP's spectral
characteristics are very similar to the real EEG data, even at this preliminary stage.
Based on our preliminary experimentation, we have come to the conclusion that
many different interaction matrices W, producing various gene
dynamics (e.g., constant, periodic, quasi-periodic, chaotic), can lead to very similar
SNN LFPs. In our future work, we want to explore statistics of plausible Ws more
thoroughly and compare them with biological data to draw any conclusions about
underlying GRNs. Further research questions are: how many GRNs would lead to
similar LFPs and what do they have in common? How can we use CNGM to model
gene mutation effects? How can we use CNGM to predict drug effects? And finally,
how can we use CNGM for the improvement of individual brain functions, such
as memory and learning?


Brain-Gene Ontology

In Chapter 7 we presented a framework for integrating ECOS and ontology, where
interaction between ECOS modelling and an evolving repository of data and
knowledge is facilitated. Here we apply this framework to a particular ontology,
brain-gene ontology (BGO; Kasabov et al., 2006b).
Gene Ontology (GO) is a general repository that
contains a large amount of information about genes across species, and their
relation to each other and to some diseases. The BGO contains specific information about brain structures, brain functions, brain diseases, and also genes
and proteins that are related to specific brain-related disorders such as epilepsy
and schizophrenia, as well as to general functions, such as learning and memory.
Here in this ongoing research we basically focus on the crucial proteins such as
AMPA, GABA, NMDA, SCN, KCN, and CLC that are in some way controlling
certain brain functions through their direct or indirect interactions with other
genes/proteins. The BGO provides a conceptual framework and the factual knowledge
necessary to better understand the relationships between the genes involved
in brain disorders, and it serves as a semantic repository of the
systematically ordered molecules concerned.



Fig. 9.21 The general information structure of the brain-gene ontology (BGO).

Fig. 9.22 A snapshot of the brain-gene ontology BGO as implemented in Protégé, where a direct link to
PubMed and to another general database or an ontology is facilitated.



Ontological representation can be used to bridge the different notions in various
databases by explicitly specifying the meaning of, and relations between, fundamental
concepts. In the BGO this relation can be represented graphically, which enables
visualisation and creation of new relationships. Each instance in this ontology
map is traceable through a query language that allows us, for example, to answer
questions such as: "Which genes are related to epilepsy?"
The general information structure of the BGO is given in Fig. 9.21. Figure 9.22
presents a snapshot of the BGO as implemented in Protégé, where a direct link to
PubMed and to another general database or an ontology is facilitated. The BGO
allows for both numerical and graphical information to be derived and presented,
such as the histogram in Fig. 9.23 of the expression of the gene GRIA1, related
to the AMPA receptor (see Table 9.2).
The BGO contains information and data that can be used for computational
neuro-genetic modelling (see the previous section). Results from CNGM experiments can be deposited into the BGO using some tags to indicate how the results
were obtained and how they have been validated (e.g. in silico, in vitro, or in vivo).


Summary and Open Problems

This chapter discusses issues of modelling dynamic processes in the human brain.
The processes are very complex and their modelling requires dynamic adaptive

Fig. 9.23 A histogram of the expression of the gene GRIA1 related to the AMPA receptor (see Table 9.2), obtained
from the brain-gene ontology (BGO).



techniques. This chapter raises many questions and open problems that need to
be solved in the future; among them are:
1. How can neural network learning and cell development be combined in one
integrated model? Would it be possible to combine fMRI images with gene
expression data to create the full picture of the processes in the human brain?
2. How does the generic neuro-genetic principle (see the Introduction) relate to
different brain functions and human cognition?
3. Is it possible to create a truly adequate model of the human brain?
4. How can dynamic modelling help trace and understand the development of
brain diseases such as epilepsy and Parkinson's disease?
5. How can dynamic modelling of brain activities help understand the instinct for
information as speculated in the Introduction?
6. How could precise modelling of the human hearing apparatus help to achieve
progress in the area of speech recognition systems?
7. How can we build brain–computer interfaces (see Coyle and McGinnity (2006))?
All these are difficult problems that can be attempted by using different computational methods. Evolving connectionist systems can also be used in this respect.


Further Reading

Principles of Brain Development (Amit, 1989; Arbib, 1972, 1987, 1998, 1995,
2002; Churchland and Sejnowski, 1992; Deacon, 1988, 1998; Eriksson et al.,
1998; Freeman, 2001; Grossberg, 1982; Joseph, 1998; Purves and Lichtman, 1985;
Quartz and Sejnowski, 1997; Taylor, J. G., 1998; van Owen, 1994; Wolpert et al.,
1998; Wong, 1995)
Similarity of Brain Functions and Neural Networks (Rolls and Treves, 1998)
Cortical Sensory Organisation (Woolsey, 1982)
Computational Models Based on Brain-imaging (J.G. Taylor, 1998)
Hearing and the Auditory Apparatus (Allen, 1995; Glassberg and Moore, 1990;
Hartmann, 1998)
Modelling Perception, the Auditory System (Abdulla and Kasabov, 2003; Kuhl,
1994; Liberman et al., 1967; Wang and Jabri, 1998)
Modelling Visual Pattern Recognition (Fukushima, 1987; Fukushima et al., 1983);
EEG Signals Modelling (Freeman, 1987; Freeman and Skarda, 1985)
MRI (Magnetic Resonance Images) Processing (Hall et al., 1992)
Multimodal Functional Brain Models (Deacon, 1988, 1998; Neisser, 1987)
Computational Brain Models (Matsumoto, 2000; Matsumoto et al., 1996; Arbib,
Dynamic Interactive Models of Vision and Control Functions (Arbib, 1998; 2002)
Signals, Sound, and Sensation (Hartmann, 1998)
Learning in the Hippocampus Brain (Durand et al., 1996; Eriksson et al., 1998;
Grossberg and Merrill, 1996; McClelland et al., 1995)
Dynamic Models of the Human Mind (Port and Van Gelder, 1995)
Computational Neuro-genetic Modelling (Marcus, 2004; Kasabov and Benuskova,
2004; Benuskova and Kasabov, 2007; Howell, 2006)

10. Modelling the Emergence

of Acoustic Segments in Spoken Languages

Spoken languages evolve in the human brain through incremental learning and
this process can be modelled to a certain degree with the use of evolving connectionist systems. Several assumptions have been hypothesised and proven through
simulation in this chapter:
1. The learning system evolves its own representation of spoken language
categories (phonemes) in an unsupervised mode through adjusting its structure
to continuously flowing examples of spoken words (a learner does not know
in advance which phonemes are going to be in a language, nor, for any given
word, how many phoneme segments it has).
2. Learning words and phrases is associated with supervised presentation of
3. It is possible to build a lifelong learning system that acquires spoken languages
in an effective way, possibly faster than humans, provided there are fast
machines to implement the evolving learning models.
The chapter is presented in the following sections.

Introduction to the issues of learning spoken languages

The dilemma innateness versus learning, or nature versus nurture, revisited
ECOS for modelling the emergence of acoustic segments (phonemes)
Modelling evolving bilingual systems

The chapter uses some material published by Taylor and Kasabov (2000).


Introduction to the Issues of Learning

Spoken Languages

The task here is concerned with the process of learning in humans and how this
process can be modelled in a program. The following questions are addressed.



How can continuous learning in humans be modelled?

What conclusions can be drawn in respect to improved learning and teaching
processes, especially learning languages?
How is learning a second language related to learning a first language?
The aim is computational modelling of processes of phoneme category acquisition,
using natural spoken language as training input to an evolving connectionist
system. A particular research question concerns the characteristics of optimal
input and optimal system parameters that are needed for the phoneme categories
of the input language to emerge in the system in the least time. Also, by tracing
in a machine how the target behaviour actually emerges, hypotheses about what
learning parameters might be critical to the language-acquiring child could be formulated.
By attempting to simulate the emergence of phoneme categories in the language
learner, the chapter addresses some fundamental issues in language acquisition
and inputs into linguistic and psycholinguistic theories of acquisition. It has a
special bearing on the question of whether language acquisition is driven by
general learning mechanisms, or by innate knowledge of the nature of language
(Chomsky's Universal Grammar).
The basic methodology consists in the training of an evolving connectionist
structure (a modular system of neural networks) with Mel-scale transformations
of natural language utterances. The basic research question is whether, and to
what extent, the network will organize the input in clusters corresponding to the
phoneme categories of the input language. We will be able to trace the emergence
of the categories over time, and compare the emergent patterns with those that
are known to occur in child language acquisition.
In preliminary experiments, it may be advisable to study circumscribed aspects
of a language's phoneme system, such as consonant-vowel syllables. Once the
system has proved viable, it will be a relatively simple matter to proceed to more
complex inputs, involving the full range of sounds in a natural language, bearing
in mind that some languages (such as English) have a relatively large phoneme
system compared to other languages (such as Maori) whose phoneme inventory
is more limited (see Laws et al. (2003)).
Moreover, it will be possible to simulate acquisition under a number of input conditions:
Input from one or many speakers
Small input vocabulary versus large input vocabulary
Simplified input first (e.g. consonant-vowel syllables) followed by phonologically more complex input
Different sequences of input data
The research presented here is at its initial phase, but the results are expected
to contribute to a general theory of human/machine cognition. Technological
applications of the research concern the development of self-adaptive systems.
These are likely to substantially increase the power of automatic speech recognition systems.

Modelling Acoustic Segments in Spoken Languages


The Dilemma Innateness Versus Learning, or Nature Versus Nurture, Revisited

A General Discussion


A major issue in contemporary linguistic theory concerns the extent to which
human beings are genetically programmed, not merely to acquire language, but
to acquire languages with just the kinds of properties that they have (Pinker,
1994; Taylor and Kasabov, 2000). For the last half century, the dominant view has
been that the general architecture of language is innate; the learner only requires
minimal exposure to actual language data in order to set the open parameters given
by Universal Grammar as hypothesized by Noam Chomsky (1995). Arguments for
the innateness position include the rapidity with which all children (barring cases
of gross mental deficiency or environmental deprivation) acquire a language, the
fact that explicit instruction has little effect on acquisition, and the similarity (at a
deep structural level) of all human languages. A negative argument is also invoked:
the complexity of natural languages is such that they could not, in principle, be
learned by normal learning mechanisms of induction and abstraction.
Recently, this view has been challenged. Even from within the linguistic
mainstream, it has been pointed out that natural languages display so much
irregularity and idiosyncrasy, that a general learning mechanism has got to be
invoked; the parameters of Universal Grammar would be of little use in these cases
(Culicover et al., 1999). Moreover, linguists outside the mainstream have proposed
theoretical models which do emphasize the role of input data in language learning.
In this view, language knowledge resides in abstractions (possibly, rather low-level
abstractions) made over rich arrays of input data.
In computational terms, the contrast is between systems with a rich in-built
structure, and self-organising systems that learn from data (Elman et al., 1997).
Not surprisingly, systems that have been preprogrammed with a good deal of
language structure vastly outperform systems which learn the structure from input
data. Research on the latter is still in its infancy, and has been largely restricted
to modelling circumscribed aspects of a language, most notably, the acquisition of
irregular verb morphology (Plunkett, 1996). A major challenge for future research
will be to create self-organising systems to model the acquisition of more complex
configurations, especially the interaction of phonological, morphological, syntactic,
and semantic knowledge.
In the introductory chapter it was mentioned that learning is genetically defined;
i.e. there are genes that are associated with long-term potentiation (LTP), learning,
and memory (Abraham et al., 1993), but it is unlikely that there are genes associated
with learning languages and even less likely that there are genes associated with
particular languages, e.g. Italian, English, Bulgarian, Maori, and so on.
The focus of this chapter is the acquisition of phonology, more specifically,
the acquisition of phoneme categories. All languages exhibit structuring at the
phoneme level. We may, to be sure, attribute this fact to some aspect of the
genetically determined language faculty. Alternatively, and perhaps more plausibly,
we can regard the existence of phoneme inventories as a converging solution to two
different engineering problems. The first problem pertains to a speaker's storage
of linguistic units. A speaker of any language has to store a vast (and potentially
open-ended) inventory of meaningful units, be they morphemes, words, or fixed
phrases. Storage becomes more manageable, to the extent that the meaningful units
can be represented as sequences of units selected from a small finite inventory
of segments (the phones and the phonemes). The second problem refers to the
fact that the acoustic signal contains a vast amount of information. If language
learning is based on input, and if language knowledge is a function of heard
utterances, a very great deal of the acoustic input has got to be discarded by
the language learner. Incoming utterances have got to be stripped down to their
linguistically relevant essentials. Reducing incoming utterances to a sequence of
discrete phonemes solves this problem, too.


Infant Language Acquisition

Research by Peter Jusczyk (1997) and others has shown that newborn infants
are able to discriminate a large number of speech sounds, well in excess of the
number of phonetic contrasts that are exploited in the language an infant will
subsequently acquire. This is all the more remarkable inasmuch as the infant
vocal tract is physically incapable of producing adultlike speech sounds at all
(Liberman, 1967). By about six months, perceptual abilities are beginning to adapt
to the environmental language, and the ability to discriminate phonetic contrasts
that are not utilized in the environmental language declines. At the same time,
and especially in the case of vowels, acoustically different sounds begin to cluster
around perceptual prototypes, which correspond to the phonemic categories of
the target language, a topic researched by Kuhl (1994). Thus, the perceptual space
of e.g. the Japanese- or Spanish-learning child becomes increasingly different
from the perceptual space of the English- or Swedish-learning child. Japanese,
Spanish, English, and Swedish cut up the acoustic vowel space differently, with
Japanese and Spanish having far fewer vowel categories than English and Swedish.
However, the emergence of phoneme categories is not driven only by acoustic
resemblance. Kuhl's research showed that infants are able to filter out speaker-dependent differences, and attend only to the linguistically significant phoneme categories.
It is likely that adults in various cultures, when interacting with infants, modulate
their language in ways to optimise the input for learning purposes. This is not just
a question of vocabulary selection (although this, no doubt, is important). Features
of child-directed speech include exaggerated pitch range and slower articulation
rates (Kuhl, 1994). These maximise the acoustic distinctiveness of the different
vowels, and therefore reduce the effect of co-articulation and other characteristics
of rapid conversational speech.



Although it is plausible to assume that aspects of child-directed speech facilitate
the emergence of perceptual prototypes for the different phones and phonemes of
a language's sound system, it must be borne in mind that phoneme categories are
not established only on the basis of acoustic-phonetic similarity. Phonemes are
theoretical entities, at some distance from acoustic events. As long as the child's
vocabulary remains very small (up to a maximum of about 40-50 words), it is
plausible that each word is represented as a unique pathway through acoustic space,
each word being globally distinct from each other word. But with the vocabulary
spurt, which typically begins around the age of 16-20 months (Bates and Goodman, 1999), this
strategy becomes less and less easy to implement. Up to the vocabulary spurt, the
child has acquired words slowly and gradually; once the vocabulary spurt begins,
the child's vocabulary increases massively, with the child sometimes adding as
many as ten words per day to his or her store of words. Under these circumstances,
it is highly implausible that the child is associating a unique acoustic pattern with
each new word. Limited storage and processing capacity requires that words be
broken down into constituent elements, i.e. the phonemes. Rather than learning an
open-ended set of distinct acoustic patterns, one for each word (tens of thousands
of them!), the words come to be represented as a linear sequence of segments
selected from an inventory of a couple of dozen distinct elements.
The above is used as a principle for the experiments conducted later in this chapter.

Phoneme Analysis
Linguists have traditionally appealed to different procedures for identifying the
phones and phonemes of a language. One of them is the principle of contrast.
The vowels [i:] and [I] are rather close (acoustically, perceptually, and in terms
of articulation). In English, however, the distinction is vital, because the sounds
differentiate the words sheep and ship (and countless others). The two sounds are
therefore assigned to two different phonemes. The principle of contrast can be
used in a modelling system through feedback from a semantic level, back to the
acoustic level of modelling.


ECOS for Modelling the Emergence of Phones and Phonemes


Problem Definition

We can conceptualise the sounding of a word as a path through multidimensional
acoustic space. Repeated utterances of the same word will be represented by a
bundle of paths that follow rather similar trajectories. As the number of word
types is increased, we may assume that the trajectories of different words will
overlap in places; these overlaps will correspond to phoneme categories.
It is evident that an infant acquiring a human language does not know, a priori,
how many phoneme categories there are going to be in the language that she or
he is going to learn, nor, indeed, how many phonemes there are in any given
word that the child hears. (We should like to add: the child learner does not
know in advance that there are going to be such things as phonemes at all! Each
word simply has a different global sound from every other word). A minimum
expectation of a learning model is that the language input will be analysed in
terms of an appropriate number of phonemelike categories.
The earlier observations on language acquisition and phoneme categories
suggest a number of issues that need to be addressed while modelling phoneme acquisition:
1. Does learning require input which approximates the characteristics of
motherese with regard to careful exaggerated articulation, also with respect to
the frequency of word types in the input language?
2. Does phoneme learning require lexicalsemantic information? The English
learner will have evidence that sheep and ship are different words, not just
variants of one and the same word, because sheep and ship mean different
things. Applying this to our learning model, the question becomes: do input
utterances need to be classified as tokens of word types?
3. Critical mass: it would be unrealistic to expect stable phoneme categories to
emerge after training on only a couple of acoustically nonoverlapping words.
We might hypothesize that phonemelike organisation will emerge only when a
critical mass of words has been extensively trained, such that each phoneme
has been presented in a variety of contexts and word positions. First language
acquisition research suggests that the critical mass is around 40-50 words.
4. The speech signal is highly redundant in that it contains vast amounts of
acoustic information that is simply not relevant to the linguistically encoded
message. We hypothesize that a learning model will need to be trained on input
from a variety of speakers, all articulating the same words. The system must
be introduced to noise, in order for noise to be ignored during the system's operation.
5. Can the system organize the acoustically defined input without prior knowledge
of the characteristics of the input language? If so, this would be a significant
finding for language acquisition research! Recall that children learning their
mother tongue do not know in advance how many phoneme categories there
are going to be in the language, nor even, indeed, that language will have a
phoneme level of organization.
6. What is the difference, in terms of acoustic space occupied by a spoken language,
between simultaneous acquisition of two languages versus late bilingualism (see
Kim et al., 1997)? Will the acquisition of the second language show interference
patterns characteristic of human learners?
Underlying our experiments is the basic question of whether the system can
organize the acoustic input with minimal specification of the anticipated output.
In psycholinguistic terms, this is equivalent to reducing to a minimum the contribution of innate linguistic knowledge. In genetic terms, this means that there are
no genes associated with learning languages and learning specific languages in
particular. Indeed, our null hypothesis will be that phoneme systems emerge as
organizational solutions to massive data input. If it should turn out that learning
can be modelled with minimal supervision, this would have very significant consequences for linguistic and psycholinguistic theories of human language learning.




Evolving Clustering for Modelling the Emergence of Phones: A Simple Example

The evolving clustering method ECM from Chapter 2 is used here with inputs
that represent features taken from a continuous stream of spoken words. In the
experiment shown in Fig. 10.1 frames were extracted from a spoken word eight,

Fig. 10.1 Experimental results with an ECM model for phoneme acquisition, a single pronunciation of the word
eight. From top to bottom: (a) the two-dimensional input space of the first two Mel-scale coefficients of all
frames taken from the speech signal of the pronounced digit eight and numbered with the consecutive time
interval, and also the evolved nodes (denoted with a larger font) that capture each of the three phonemes of
the speech input: /silence/, /ei/, /t/, /silence/; (b) the time of the emergence of the three ECM nodes (cluster
centres); (c) all 78-element Mel-vectors of the word eight over time (175 time frames, each 11.6 msec long,
with 50% overlap between the frames).



in a phonetic representation it would be: /silence/ /ei/ /t/ /silence/. Three time-lag feature vectors of 26 Mel-scale coefficients each are used, from a window of
11.6 ms, with an overlap of 50% (see Fig. 10.1c).
A cluster structure was evolved with the use of ECM. Each new input vector
from the spoken word was either associated with an existing rule node that was
modified to accommodate these data, or a new rule node was created. All together,
three rule nodes were created (Fig. 10.1b). After the whole word was presented
the nodes represented the centres of the phoneme clusters without the concept of
phonemes being presented to the system (Fig. 10.1a). The figures show clearly that
three nodes were evolved that represented the stable sounds as follows: frames
0-53 and 96-170 were allocated to rule node 1 that represented /silence/; frames
56-78 were allocated to rule node 2 that represented the phoneme /ei/; frames
85-91 were allocated to rule node 3 that represented the phoneme /t/; the rest of
the frames represented transitional states, e.g. frames 54-55, the transition between
/silence/ and /ei/, frames 79-84, the transition between /ei/ and /t/, and frames
92-96, the transition between /t/ and /silence/, were allocated to some of the closest
rule nodes.
If in the ECM simulation a smaller distance threshold Dthr had been used,
there would have been more nodes evolved to represent short transitional sounds
along with the larger phone areas. When more pronunciations of the word eight
are presented to the ECM model the model refines the phoneme regions and the
phoneme nodes.
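The one-pass clustering step described above can be sketched as follows. This is a simplified stand-in for the full ECM of Chapter 2 (which also maintains a radius per cluster node); the class name, the incremental centre-update rule, and the example data are illustrative assumptions, not the original implementation.

```python
# A simplified sketch of evolving clustering with a distance threshold Dthr:
# each incoming frame either updates the nearest cluster (rule) node, or, if
# it lies further than Dthr from every node, evolves a new node.

import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class SimpleECM:
    def __init__(self, dthr):
        self.dthr = dthr
        self.centres = []   # cluster (rule node) centres
        self.counts = []    # frames allocated to each node so far

    def learn(self, frame):
        """Allocate one frame: update the nearest node or evolve a new one."""
        if self.centres:
            dists = [distance(frame, c) for c in self.centres]
            j = min(range(len(dists)), key=dists.__getitem__)
            if dists[j] <= self.dthr:
                # move the winning centre slightly towards the new frame
                n = self.counts[j] + 1
                self.centres[j] = [c + (x - c) / n
                                   for c, x in zip(self.centres[j], frame)]
                self.counts[j] = n
                return j
        # the frame is too far from every node: evolve a new rule node
        self.centres.append(list(frame))
        self.counts.append(1)
        return len(self.centres) - 1
```

For a stream of Mel-vectors of the word eight and a suitable Dthr, a few stable nodes would be expected to emerge in this way, as in Fig. 10.1; a smaller Dthr would evolve additional nodes for the transitional sounds.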
The ECM, ESOM, and EFuNN methods from Chapter 2 and Chapter 3 allow for
experimenting with different strategies of elementary sound emergence:

Increased sensitivity over time

Decreased sensitivity over time
Single language sound emergence
Multiple languages presented one after another
Multiple languages presented simultaneously (alternative presentation of
words from different languages)
Aggregation within word presentation
Aggregation after a whole word is presented
Aggregation after several words are presented
The effect of alternative presentation of different words versus the effect of
one word presented several times, and then the next one is presented, etc.
The role of the transitional sounds and the space they occupy
Using forgetting in the process of learning
Other emerging strategies


Evolving the Whole Phoneme Space of (NZ) English

To create a clustering model for New Zealand English, data from several speakers
from the Otago Speech Corpus were selected to train
the model. Here, 18 speakers (9 Male, 9 Female) each spoke 128 words three
times. Thus, approximately 6912 utterances were available for training. During
the training, a word example was chosen at random from the available words.

The waveform underwent a Mel-scale cepstrum (MSC) transformation to extract
12 frequency coefficients, plus the log energy, from segments of approximately
23.2 ms of data. These segments were overlapped by 50%. Additionally, the delta,
and the delta-delta values of the MSC coefficients and log energy were extracted,
for an input vector of total dimensionality 39.
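The assembly of the 39-element input vectors can be sketched as follows. The MSC analysis itself is assumed to be done elsewhere (each frame arrives as a 13-element list: 12 MSC coefficients plus log energy), and the simple first-difference form of the delta features is an assumption; practical systems often use a regression formula instead.

```python
# Sketch: build 39-dimensional input vectors from 13-dimensional base frames
# by appending delta (first difference) and delta-delta features.

def deltas(frames):
    """First differences between consecutive frames (first delta is zero)."""
    out = [[0.0] * len(frames[0])]
    for prev, cur in zip(frames, frames[1:]):
        out.append([c - p for p, c in zip(prev, cur)])
    return out

def make_input_vectors(base_frames):
    d1 = deltas(base_frames)   # delta features
    d2 = deltas(d1)            # delta-delta features
    return [b + x + y for b, x, y in zip(base_frames, d1, d2)]
```

Each output vector concatenates the 13 base values with 13 delta and 13 delta-delta values, giving the total dimensionality of 39 used above.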
The ECM system was trained until the number of cluster nodes became constant
for over 100 epochs. A total of 12,000 epochs was performed, each on one of the
12,000 data examples. The distance threshold Dthr parameter of the ECM was set
to 0.15.
Figure 10.2 shows: (a) the connections of the evolved ECM system (70 nodes
are evolved that capture 70 elementary sounds, the columns) from spoken words
presented one after another, each frame of the speech signal being represented
by 39 features (12 MSC and the power, their delta features, and the delta-delta
features, the rows). The darker the colour of a cell, the higher its value is; (b) the
evolved cluster nodes and the trajectory of the spoken word zero projected in the
MSC1-MSC2 input space; (c) the trajectory of the word zero shown as a sequence
of numbered frames, and the labelled cluster nodes projected in the MSC1-MSC2
input space.

Fig. 10.2 (a) The connections of the evolved ECM model (there are no fuzzified inputs used) from spoken
words of NZ English. The 70 cluster nodes are presented on the x-axis and the 39 input variables are presented
on the y-axis. The words were presented one after another, each frame of the speech signal being presented
by 39 features: 12 MSC and the power, their delta features, and their delta-delta
colour is, the higher the value. It can be seen, for example, that node 2 gets activated for high values
of MSC 1; (Continued overleaf )



Fig. 10.2 (continued ) (b) the evolved 70 rule nodes in the ECM model from (a) and the trajectory of the
word zero projected in the MSC1-MSC2 input space; (c) the trajectory of the word zero from (b) with the
consecutive time frames labelled with consecutive numbers (the smaller-font numbers) and the emerged
nodes labelled in a larger font according to their time of emergence, all projected in the MSC1-MSC2 input space.

Figure 10.3 shows three representations of a spoken word zero from the corpus.
Firstly, the word is viewed as a waveform (Fig. 10.3, middle). This is the raw signal
as amplitude over time. The second view is the MSC space view. Here, 12 frequency
components are shown on the y-axis over time on the x-axis (Fig. 10.3, bottom).
This approximates a spectrogram. The third view (top) shows the activation of each
of the 70 rule nodes (the rows) over time. Darker areas represent a high activation.
Additionally, the winning nodes are shown as circles. Numerically, these are: 1, 1,
1, 1, 1, 1, 2, 2, 2, 2, 22, 2, 2, 11, 11, 11, 11, 11, 24, 11, 19, 19, 19, 19, 15, 15, 16, 5,
5, 16, 5, 15, 16, 2, 2, 2, 11, 2, 2, 1, 1, 1. Some further testing showed that recognition of words depended not only on the winning node, but also on the path of the
recognition. Additionally, an n-best selection of nodes may increase discrimination.
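The frame-by-frame winning nodes quoted above can be reduced to the path a word traces through the node space by collapsing consecutive repeats. This small helper is an illustrative addition, not part of the original system.

```python
# Collapse a frame-wise sequence of winning rule nodes into the word's path
# through the node space (consecutive repeats merged into one step).

def node_path(winners):
    path = []
    for node in winners:
        if not path or path[-1] != node:
            path.append(node)
    return path
```

Comparing such paths, rather than single winning nodes, is one way to exploit the observation that recognition depends on the path of the recognition.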



Fig. 10.3 ECM representation of a spoken word zero: (upper figure) the activation of the nodes of the evolved
ECM model from Fig. 10.2 (nodes from 1 to 70 are shown on the y-axis) when the word was propagated
through it (time is presented on the x-axis); (middle figure) the waveform of the signal; (bottom figure)
x-axis represents time, and y-axis represents the value of the MSC from 1 to 12 (the darker the colour is, the
higher the value).

Trajectory Plots
The trajectory plots, shown in Figs. 10.4 through 10.7, are presented in three of
the total 39 dimensions of the input space. Here, the first and seventh MSC are
used for the x and y coordinates. The log energy is represented by the z-axis.
A single word, sue, is shown in Fig. 10.4. The starting point is shown as a square.
Several frames represent the hissing sound, which has low log energy. The vowel
sound has increased energy, which fades out toward the end of the utterance. Two
additional instances of the same word, spoken by the same speaker, are shown in


Fig. 10.4 Trajectory of a spoken word sue, along with the 70 rule nodes of the evolved ECM model shown
in the 3D space of the coordinates MS1-MS7-log E (Taylor et al., 2000).

Fig. 10.5 Two utterances of the word sue pronounced by the same speaker as in Fig. 10.4, presented along
with the 70 rule nodes of the evolved ECM model in the 3D space of MS1-MS7-log E (see Taylor et al. (2000)).

Fig. 10.6 Trajectories of spoken words sue and nine by the same speaker presented in a 3D space
MS1-MS7-log E, along with the 70 rule nodes of the evolved ECM model (Taylor et al., 2000).

Fig. 10.7 Trajectories of the words sue and zoo along with the 70 rule nodes of the evolved ECM model in
the MS1-MS7-log E space (Taylor et al., 2000).



Fig. 10.5. Here, a similar trajectory can be seen. However, the differences in the
trajectories represent the intra-speaker variation. Inter-word variability can be seen
in Fig. 10.6, which shows the sue from Fig. 10.4 (dotted line) compared with the
same speaker uttering the word nine. Even in the three-dimensional space shown
here, the words are markedly different. The final trajectory plot (Fig. 10.7) is of
two similar words, sue (dotted line) and zoo (solid line) spoken by the same
speaker. Here, there is a large overlap between the words, especially in the section
of the vowel sound.
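One simple way to quantify the overlap visible between sue and zoo is the fraction of rule nodes the two trajectories share (a Jaccard-style index over their winning-node sets). This measure is an illustrative addition, not one used in the original experiments.

```python
# Fraction of rule nodes shared by two trajectories:
# |A intersect B| / |A union B| over their winning-node sets.

def node_overlap(winners_a, winners_b):
    a, b = set(winners_a), set(winners_b)
    return len(a & b) / len(a | b)
```

A value near 1 would indicate heavily overlapping trajectories (as for sue and zoo), a value near 0 markedly different ones (as for sue and nine).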


A Supervised ECOS Model for the Emergence of Word Clusters Based on Both Auditory Traces and Supplied (Acquired) Meaning

The next step of this project is to develop a supervised model based on both
ECM for phoneme cluster emergence, and EFuNN for word recognition. After the
ECM is evolved (it can still be further evolved) a higher-level word recognition
module is developed where inputs to the EFuNN are activated cluster nodes from
the phoneme ECM over a period of time. The outputs of the EFuNN are the
words that are recognized. The number of words can be extended over time thus
creating new outputs that are allowable in an EFuNN system (see Chapter 3).
A sentence recognition layer can be built on top of this model. This layer will
use the input from the previous layer (a sequence of recognized words over
time) and will activate an output node that represents a sentence (a command,
a meaningful expression, etc.). At any time of the functioning of the system,
new sentences can be introduced to the system which makes the system evolve
over time.
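The data flow of this two-level idea can be sketched as follows. The EFuNN word module is replaced here by a simple nearest-centroid classifier over node-activation histograms, purely to illustrate how the outputs of the phoneme-level ECM feed a supervised word layer; the class names and the histogram representation are assumptions, not the original design.

```python
# Lower level (assumed done by ECM): a word becomes a sequence of winning
# phoneme-cluster nodes. Upper level (EFuNN stand-in): map the activation
# pattern over the nodes to a word label; new words can be added at any time.

def activation_histogram(winning_nodes, n_nodes):
    hist = [0.0] * n_nodes
    for node in winning_nodes:
        hist[node] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

class WordModule:
    def __init__(self, n_nodes):
        self.n_nodes = n_nodes
        self.prototypes = {}  # word label -> mean activation histogram

    def add_word(self, label, winning_nodes):
        # a new output (word) can be introduced at any time of functioning
        self.prototypes[label] = activation_histogram(winning_nodes, self.n_nodes)

    def recognise(self, winning_nodes):
        h = activation_histogram(winning_nodes, self.n_nodes)
        def dist(p):
            return sum((a - b) ** 2 for a, b in zip(h, p))
        return min(self.prototypes, key=lambda w: dist(self.prototypes[w]))
```

A sentence-level layer could be stacked on top in the same fashion, taking sequences of recognised words as its input.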


Modelling Evolving Bilingual Systems


Bilingual Acquisition

Once satisfactory progress has been made with modelling phoneme acquisition
within a given language, a further set of research questions arises concerning the
simulation of bilingual acquisition. As pointed out before, we can distinguish two cases:
1. Simultaneous bilingualism. From the beginning, the system is trained simultaneously with input from two languages (spoken by two sets of speakers; Kim et al., 1997).
2. Late bilingualism. This involves training an already trained system with input
from a second language.
It is well-known that children manage bilingual acquisition with little apparent
effort, and succeed in speaking each language with little interference from the
other. In terms of an evolving system, this means that the phoneme representations
of the two languages are strictly separated. Even though there might be some
acoustic similarity between sounds of one language and sounds of the other, the
distributions of the sounds (the shape of the trajectories associated with each of
the languages) will be quite different.
Late acquisition of a second language, however, is typically characterized by
interference from the first language. The foreign language sounds are classified
in terms of the categories of the first language. (The late bilingual will typically
retain a foreign accent, and will mishear second-language utterances.) In
terms of our evolving system, there will be considerable overlap between the
two languages; the acoustic trajectories of the first language categories are so
entrenched, that second language utterances will be forced into the first language categories.
The areas of the human brain that are responsible for the speech and
the language abilities of humans evolve through the whole development of
an individual (Altman, 1990). Computer modelling of this process, before its
biological, physiological, and psychological aspects have been fully discovered, is
an extremely difficult task. It requires flexible techniques for adaptive learning
through active interaction with a teaching environment.
It can be assumed that in a modular spoken language evolving system, the
language modules evolve through using both domain text data and spoken information data fed from the speech recognition part. The language module produces
final results as well as a feedback for adaptation in the previous modules. This
idea is currently being elaborated with the use of ECOS.


Modelling Evolving Elementary Acoustic Segments (Phones) of Two Spoken Languages

The data comprised 100 words each of Maori and English (see Laws et al., 2003).
The same speaker was used for both datasets. One spoken word at a time was
presented to the network, frame by frame. In every case, the word was preprocessed
in the following manner. A frame of 512 samples was transformed into 26 Mel-scale cepstrum coefficients (MSCC). In addition, the log energy was also calculated.
Consecutive frames were overlapped by 50%.
For the English data, a total of 5725 frames was created. For the Maori data,
6832 frames were created. The words used are listed below. The English words
are one and two syllables only. The Maori words are up to four syllables, which
accounts for the slightly larger number of frames.

English Words
ago, ahead, air, any, are, auto, away, baby, bat, bird, boo, book, boot, buzz, card,
carrot, choke, coffee, dart, day, dead, die, dog, dove, each, ear, eight, ether, fashion,
fat, five, four, fur, go, guard, gut, hat, hear, how, jacket, joke, joy, judge, lad, ladder,
leisure, letter, loyal, mad, nine, nod, one, ooze, other, over, palm, paper, pat, pea,
peace, pure, push, rather, recent, reef, riches, river, rod, rouge, rude, school, seven,
shoe, shop, sing, singer, six, sue, summer, tan, tart, teeth, teethe, that, thaw, there,
three, tour, tragic, tub, two, utter, vat, visit, wad, yard, yellow, zero, zoo

Maori Words
ahakoa, ahau, āhei, ahiahi, āhua, āhuatanga, ake, ako, aku, ākuanei, anake, anei,
ano, āpopo, aroha, ātahua, atu, atua, aua, aue, āwhina, ehara, ena, enei, engari, era,
etahi, hanga, heke, hine, hoki, huri, iho, ihu, ika, inaianei, ingoa, inu, iwa, kaha,
kaiako, kete, kino, koti, kura, mahi, mea, miraka, motu, muri, nama, nei, ngahere,
ngakau, ngaro, ngawari, ngeru, noa, nou, nui, ōku, oma, ōna, one, oneone, ono,
ora, oti, otira, pakaru, pekepeke, pikitia, poti, putiputi, rangatira, reri, ringaringa,
rongonui, ruma, tahuri, tena, tikanga, tokorua, tuna, unu, upoko, uri, uta, utu,
waha, waiata, wero, wha, whakahaere, whakapai, whanaunga, whero, whiri, wiki,
Three cases were of interest, to be compared and contrasted. Experiment 1
involved presenting all the English speech samples to the network, followed by
all the Maori speech. Experiment 2 was similar, except that the Maori speech was
presented first. Both English and Maori speech data were used for Experiment 3,
shuffled together.
The evolving clustering method ECM was used for this experiment (Chapter 2).
A distance threshold (Dthr) of 0.155 was used for all experiments.

Table 10.1 shows the number of clusters resulting from each experiment. The
English data alone created 54 clusters and an additional 14 were created by the
Maori data used after the English data. The Maori data alone created 49 clusters.
The addition of the English data produced 15 more clusters. Both languages
presented together and a mixed order produced slightly more clusters than either
language presented separately.
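The design of these experiments can be re-created in miniature as follows: an evolving clusterer is trained on all frames of one language, the node count is recorded, and training then continues on the second language so that the additional nodes it forces can be counted. The threshold clusterer is a minimal stand-in for ECM, and the two-dimensional points are synthetic stand-ins for the Mel-scale frame vectors.

```python
# Miniature version of the bilingual experiment: count the clusters evolved
# by language A alone, then the extra clusters forced by language B.

import math

def evolve(frames, centres, dthr):
    """Add frames to an evolving list of cluster centres (in place)."""
    for f in frames:
        if not any(math.dist(f, c) <= dthr for c in centres):
            centres.append(list(f))
    return centres

# synthetic "language A" occupies one region of the feature space;
# "language B" partly overlaps it and partly extends beyond it
lang_a = [(0.0, 0.0), (0.1, 0.0), (0.5, 0.5), (0.6, 0.5)]
lang_b = [(0.5, 0.5), (1.0, 1.0), (1.1, 1.0)]

centres = evolve(lang_a, [], dthr=0.2)
first = len(centres)              # clusters after language A alone
evolve(lang_b, centres, dthr=0.2)
extra = len(centres) - first      # new clusters forced by language B
```

As in Table 10.1, the second language adds only the clusters for the acoustic regions it does not share with the first.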
The spoken word zoo was presented to the three evolved models. The activation
of the nodes is presented as trajectories in the 2D PCA space in Figs. 10.8
through 10.10.

Table 10.1 Number of clusters created for each experiment of the bilingual English
and Maori acoustic evolving system based on the ECM evolving clustering method.

                        Number of first       Total number
                        language clusters     of clusters
English then Maori            54                   68
Maori then English            49                   64
Both languages

Modelling Acoustic Segments in Spoken Languages



53 46




23 52



49 12










41 30
58 26
9 44
64 59
11 20
60 38
217 68








62 8






Fig. 10.8 The projection of the spoken word zoo in English and Maori PCA space (see Laws et al., 2003).





Fig. 10.9 The projection of the spoken word zoo in Maori + English PCA space (Laws et al., 2003).


Evolving Connectionist Systems

Fig. 10.10 The projection of the spoken word zoo in a mixed English and Maori PCA space (Laws et al., 2003).

As all the above visualisations were done in a 2D PCA space, it is important to
know the amount of variance accounted for by the first few PCA dimensions. This
is shown in Table 10.2 (Laws et al., 2003).
Different nodes in an evolved bilingual system get activated differently when
sounds and words from each of the two languages are presented, as analysed in
Fig. 10.11. It can be seen that some nodes are used in only one language, but other
nodes get activated equally for the two languages (e.g. node #64).
The number of nodes in an evolving system grows as more examples are presented,
but due to the similarity between the input vectors this number saturates after
a certain number of examples have been presented (Fig. 10.12).

Table 10.2 Variance accounted for by various numbers of PCA dimensions, showing the
variance for each language (Laws et al., 2003).


(Figure panels show activations for rule nodes 1 to 34 and 35 to 70; panel label: NZ English.)

Fig. 10.11 The activation of cluster (rule) nodes of the evolving system evolved through a mixed presentation
of both English and Maori, when new words from both languages are presented (Laws et al., 2003).


Summary and Open Problems

The chapter presents one approach to modelling the emergence of acoustic clusters
related to phones in a spoken language, and in multiple spoken languages, using
ECOS as the modelling technique.
Modelling the emergence of spoken languages is an intriguing task that has
not been solved thus far, despite the existing papers and books on the subject.
The simple evolving model presented in this chapter illustrates the main
hypothesis raised in this material: that acoustic features such as phones are
learned rather than inherited. The evolving of sounds, words, and sentences can
be modelled in a continuous learning system that is based on evolving
connectionist systems and ECOS techniques.

Fig. 10.12 The number of cluster nodes (y-axis) over learning frames (x-axis) for the mixed English and Maori
evolving acoustic system based on ECM (Laws et al., 2003).
The chapter attempted to answer the previously posed open problems, but these
problems are still to be addressed, and further problems arose, as listed below:
1. How can continuous language learning be modelled in an intelligent machine?
2. What conclusions can be drawn from the experimental results shown in this
chapter to improve learning and teaching processes, especially with respect to
3. How does learning a third language relate to learning a first and a second language?
4. Is it necessary to use in the modelling experiments input data similar in characteristics to those of motherese, with respect to both vocabulary selection and
5. Can a system self-organise acoustic input data without any prior knowledge of
the characteristics of the input language?
6. Is it possible to answer the dilemma of innateness versus learning in respect
to languages through attempting to create a genetic profile of a language
(languages) similar to the gene expression profiles of colon cancer and
leukaemia presented in Chapter 8?
7. How can we prove or disprove the following hypotheses? There are no specific
genes associated with learning the English language, and no genes associated
with the Arabic language, and no genes associated with any particular language.
Can we use the microarray technology presented in Chapter 8 or a combined
microarray technology with fMRI imaging (see Chapter 9)?
8. How does the generic neuro-genetic principle (see Chapter 1 and Chapter 7)
relate to learning languages?
9. How is language learning related to the instinct for information (see Chapter 7)?




Further Reading

For further details of the ideas discussed in this chapter, please refer to Taylor
and Kasabov (2000) and Laws et al. (2003).
More on the research issues in the chapter can be found in other references,
some of them listed below.
Generic Readings about Linguistics and Understanding Spoken Languages
(Taylor, 1995, 1999; Segalowitz, 1983; Seidenberg, 1997)
The Dilemma Innateness Versus Learned in Learning Languages (Chomsky,
1995; Lakoff and Johnson, 1999; Elman et al., 1997)
Learning Spoken Language by Children (Snow and Ferguson, 1977; MacWhinney
et al., 1996; Juszyk, 1997)
The Emergence of Language (Culicover, 1999; Deacon, 1988; Pinker, 1994; Pinker
and Prince, 1988)
Models of Language Acquisition (Parisi, 1997; Regier, 1996)
Using Evolving Systems for Modelling the Emergence of a Bilingual English-Maori
Acoustic Space (Laws et al., 2003)
Modelling the Emergence of New Zealand Spoken English (Taylor and Kasabov, 2000)

11. Evolving Intelligent Systems for Adaptive Speech Recognition

Speech and signal-processing technologies need new methods that deal with the
problems of noise and adaptation in order for these technologies to become
common tools for communication and information processing. This chapter is
concerned with evolving intelligent systems (EIS) for adaptive speech recognition.
An EIS system can learn continuously spoken phonemes, words, and phrases.
New words, pronunciations, and languages can be introduced to the system in an
incremental adaptive way.
The material is presented in the following sections.

Introduction to adaptive speech recognition

Speech signal analysis and feature selection
A framework of EIS for adaptive speech recognition
Adaptive phoneme-based speech recognition
Adaptive whole word and phrase recognition
Adaptive intelligent human-computer interfaces
Summary and open problems
Further reading


Introduction to Adaptive Speech Recognition


Speech and Speech Recognition

Speech recognition is one of the most challenging applications of signal processing
(Cole et al., 1995). Some basic notions about speech and speech recognition systems
are given below.
Speech is a sequence of waves that are transmitted over time through a medium
and are characterised by features such as intensity and frequency.
Speech is perceived by the inner ear in humans (see Chapter 9). It activates
oscillations of small elements in the inner ear, and these oscillations are
transmitted to a specific part of the brain for further processing. The biological
background of speech recognition is used by many researchers to develop humanlike
automatic speech recognition systems (ASRS), but other researchers take other
approaches.



Speech can be represented on different scales:

Time scale: this representation is called the waveform representation.
Frequency scale: this representation is called the spectrum.
Both time and frequency scales: this is the spectrogram of the speech signal.
The three factors which provide the easiest method of differentiating speech sounds
are the perceptual features of loudness, pitch, and quality. Loudness is related to
the amplitude of the time domain waveform, but it is more correct to say that it
is related to the energy of the sound (also known as its intensity). The greater the
amplitude of the time domain waveform, the greater is the energy of the sound and
the louder the sound appears. Pitch is the perceptual correlate of the fundamental
frequency of vibration of the speaker's vocal organs.
The quality of a sound is the perceptual correlate of its spectral content. The
formants of a sound are the frequencies where it has the greatest acoustic energy.
The shape of the vocal tract determines which frequency components resonate.
The shorthand for the first formant is F1, for the second F2, and so on. The
fundamental frequency is usually indicated by F0.
A spectrogram of a speech signal shows how the spectrum of speech changes
over time. The horizontal axis shows time and the vertical axis shows frequency.
The colour scale (the grey scale) shows the energy of the frequency components.
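The spectrogram described above can be computed as a short-time Fourier transform: the signal is sliced into overlapping windows and the magnitude spectrum of each window becomes one column of the image. Below is a minimal NumPy sketch; the window length, hop size, and Hamming windowing are illustration choices, not values from the text:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: rows = frequency bins, columns = time frames."""
    window = np.hamming(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequencies of the real signal
    return np.abs(np.fft.rfft(frames, axis=1)).T

# A pure 1 kHz tone sampled at 8 kHz: energy concentrates in one bin per frame.
fs = 8000
t = np.arange(fs) / fs
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))
print(spec.shape)  # (frame_len // 2 + 1, n_frames)
```

Each column of `spec` is the spectrum of one window; plotting the array with time on the horizontal axis and frequency on the vertical axis gives exactly the picture described above.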
The fundamental difficulty of speech recognition is that the speech signal
is highly variable due to different speakers, different speaking rates, different
contexts, and different acoustic conditions. The task is to find which of the variations are most relevant for an ASRS (Lee et al., 1993).
There are a great number of factors which cause variability in speech such as the
speaker, the context, and the environment. The speech signal is very dependent
on the physical characteristics of the vocal tract, which in turn are dependent on
age and gender. The country of origin of the speaker and the region in the country
the speaker is from can also affect the speech signal. Different accents of English
can mean different acoustic realizations of the same phonemes.
There are variations of the main characteristics of speech over time within the
same sound, say the sounding of the phoneme /e/. An example is given in Fig. 11.1,
where different characteristics of a pronounced phoneme /e/ by a female speaker
of New Zealand English are shown.
There can also be different rhythm and intonation due to different accents. If
English is the second language of a speaker, there can be an even greater degree
of variability in the speech (see Chapter 10).
The same speakers can show variability in the way they speak, depending on
whether it is a formal or informal situation. People speak precisely in formal
situations and imprecisely in informal situations because they are more relaxed.
Therefore the more familiar a speaker is with a computer speech recognition
system, the more informal their speech becomes, and the more difficult for the
speech recognition systems to recognise the speech. This could pose problems for
speech recognition systems if they do not continually adjust.
Co-articulation effects cause phonemes to be pronounced differently depending
on the word in which they appear; words are pronounced differently depending
on the context; words are pronounced differently depending on where they lie in

a sentence due to the degree of stress placed upon them. In addition, the speaking
rate of the speaker can cause variability in speech. The speed of speech varies
due to such things as the situation and the emotions of the speaker. However, the
durations of sounds in fast speech do not reduce proportionally compared with their
durations in slow speech.

Fig. 11.1 Some of the characteristics of a pronounced English phoneme /e/ by a female speaker (data
taken from the Otago Speech Corpus, sample #172). The figure panels are labelled: Time (ms); x(t + 1),
x(t + 2); Frequency (Hz).


Adaptive Speech Recognition

The variability of speech explained above requires robust and adaptive systems
that would be able to accommodate new variations, new accents, and new pronunciations of speech.
The adaptive speech recognition problem is concerned with the development
of methods and systems for speaker-independent recognition with high accuracy,
able to adapt quickly to new words, new accents, and new speakers for a small,
medium, or large vocabulary of words, phrases, and sentences.



Online adaptive systems perform adaptation during their operation; i.e. the
system would adapt if necessary on the spot, would learn new pronunciations
and new accents as it works, and would add new words in an online mode.
Humans can adapt to different accents of English, e.g. American, Scottish, New
Zealand, Indian. They learn and improve their language abilities during their entire
lives. The spoken language modules in the human brain evolve continuously. Can
that be simulated in a computer system, in an evolving system? We should have
in mind that every time we speak, we pronounce the same sounds of the same
language at least slightly differently.


A Framework of EIS for Adaptive Speech Recognition

The framework is schematically shown in Fig. 11.2. It consists of the following
modules and procedures:

Preprocessing module
Feature extraction module
Pattern classification (modelling) module
Language module
Analysis module

The functions of some of these modules were discussed in Chapters 9 and 10,
especially the preprocessing and the feature extraction modules. In Chapter 9,
Mel-scale coefficients, Mel-scale cepstrum coefficients, gammatone filters, and
other acoustic features were discussed.

Fig. 11.2 A block diagram of an adaptive speech recognition system framework that utilises ECOS in the
recognition part.
The set of features selected depends on the organization and on the function
of the pattern classifier module (e.g. phoneme recognition, whole word recognition, etc.).
The pattern (class) recognition module can be trained to recognize phonemes,
or words, or other elements of a spoken language. The vector that represents the
pronounced element is fed into the classifier module, created in advance with the
use of the general purpose adaptive learning method, such as ECOS. The ECOS
in this case allow for adaptive online learning. New words and phrases can be
added to or deleted from the system at any time during its operation, e.g. "go",
"one", "connect to the Internet", "start", "end", or "find a parking place". New
speakers, new accents, or new languages can be introduced to the system.
In the recognition mode, when speech is entered to the system, the recognized
words and phrases at consecutive time moments are stored in a temporal buffer.
The temporal buffer is fed into a sentence recognition module where multiple-word
sequences (or sentence) are recognized.
The recognized word, or a sequence of words, can be passed to an action module
for an action depending on the application of the system.


Speech Signal Analysis and Speech Feature Selection

The feature selection process is an extremely important issue for every speech
recognition system, regardless of whether it is a phoneme-based or word-based
system (see Chapter 1).
Figure 11.1 shows a histogram of the speech signal that can be used as a feature
vector. Another popular feature set is a vector of FFT (fast Fourier transform)
coefficients (or power spectrum as shown in Fig. 11.1). FFT transforms the speech
signal from the time domain to the frequency domain. A simple program written
in MATLAB that extracts the first 20 FFT coefficients from a spoken signal (e.g.
pronounced word) is presented in Appendix B, along with a plot and a print of
these coefficients.
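As a rough Python (NumPy) analogue of the MATLAB program mentioned above, the following sketch returns the magnitudes of the first 20 FFT coefficients of a signal; the synthetic two-tone signal here merely stands in for a pronounced word:

```python
import numpy as np

def first_fft_coefficients(signal, n_coeffs=20):
    """Return the magnitudes of the first n_coeffs FFT coefficients."""
    spectrum = np.fft.fft(signal)
    return np.abs(spectrum[:n_coeffs])

# Example: a short synthetic 'word' made of two sine tones.
fs = 8000
t = np.arange(1024) / fs
word = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)
coeffs = first_fft_coefficients(word)
print(coeffs.shape)  # (20,)
```

The resulting 20-element vector can be plotted or fed to a classifier as a simple spectral feature vector, as the text describes for the Appendix B program.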
Many current approaches towards speech recognition use Mel frequency
cepstral coefficient (MCC) vectors to represent, for example, each 10 to 50 ms
window of speech samples, taken every 5 to 25 ms, by a single vector of a certain
dimension (see Fig. 9.9). The window length and rate, as well as the feature vector
dimension, are decided according to the application task.
For many applications the most effective components of the Mel-scale features
are the first 12 coefficients (excluding the zero coefficient). MCCs are considered
to be static features, as they do not account for changes in the signal within the
speech unit (e.g. the signal sliding window, a phoneme, or a word).
Although MCCs have been used very successfully in off-line learning and static
speech recognition systems, for online learning adaptive systems that need to adapt
to changes in the signal over time a more appropriate set of features is a
combination of static and dynamic features.
It has been shown (Abdulla and Kasabov, 2002) that the speech recognition
rate is noticeably improved when using additional coefficients representing the
dynamic behaviour of the signal. These coefficients are the first and second
derivatives of the cepstral coefficients of the static feature vectors. The power
coefficients, which represent the energy content of the signal, and their first and
second derivatives, also have important roles to be included in the representation of the feature vectors. The first and second derivatives are approximated
by difference regression equations, and accordingly named delta and delta-delta
coefficients, or first and second deltas, respectively. The power coefficients, which
represent the power of the signal within the processed windows, are concatenated
with the Mel coefficients. The static coefficients are normally more effective in
the clean environment, whereas the dynamic coefficients are more robust in the
noisy environment. Concatenating the static coefficients with their first and second
derivatives increases the recognition rate and accounts for dynamic changes in the
speech signal.
This approach has some drawbacks as well. First, the static coefficients will
dominate the effect of the dynamic coefficients. In this respect a careful
normalisation should be applied, but not a linear one into the interval [0,1] for
each feature separately. A more appropriate scheme, for example, is to normalise
the delta features into a range that is 25% of the range of the MCCs, and the
delta-delta features into a range that is 50% of the range of the delta features.
Using dynamic features also increases the dimensionality of the feature vectors.
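The delta computation described above is commonly implemented with the regression formula d_t = sum_k k(c_{t+k} - c_{t-k}) / (2 sum_k k^2), and the delta-deltas are obtained by applying the same operator again. The sketch below also applies the range-based scaling suggested above (delta range 25% of the static range, delta-delta range 50% of the delta range); the regression window N = 2 and the random test data are assumptions for illustration, not values from the text:

```python
import numpy as np

def deltas(features, n=2):
    """Regression-based delta coefficients over a (frames x coeffs) array."""
    padded = np.pad(features, ((n, n), (0, 0)), mode="edge")
    denom = 2 * sum(k * k for k in range(1, n + 1))
    return sum(k * (padded[n + k:len(features) + n + k]
                    - padded[n - k:len(features) + n - k])
               for k in range(1, n + 1)) / denom

def scale_to_fraction(x, reference, fraction):
    """Rescale x so its range is `fraction` of the range of `reference`."""
    target = fraction * (reference.max() - reference.min())
    span = x.max() - x.min()
    return x if span == 0 else (x - x.min()) / span * target

static = np.random.default_rng(0).normal(size=(50, 13))   # e.g. power + 12 MCCs
d = scale_to_fraction(deltas(static), static, 0.25)       # deltas at 25% range
dd = scale_to_fraction(deltas(deltas(static)), d, 0.50)   # delta-deltas at 50%
frames = np.hstack([static, d, dd])                       # 39-D feature vectors
print(frames.shape)
```

Stacking static, delta, and delta-delta blocks triples the feature dimension, which is exactly the dimensionality cost noted above.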
Figure 11.3 shows the power and Mel coefficients with their derivatives of the
phoneme /o/.
Other features that account for dynamic changes in the speech signal are wavelets
(see Chapter 12) and gammatone feature vectors (see Chapter 9).
It is appropriate to use different sets of features in different modules if a modular
speech recognition system is built, where a single ANN module is used for one
speech class unit (e.g. a phoneme or a word).

Fig. 11.3 The power and 12 Mel-scale coefficients, with their first and second derivatives, of a phoneme /o/
sound signal; the x-axis gives the coefficient number (static: 1 to 13, delta: 14 to 26, delta-delta: 27 to 39)
and the y-axis the coefficient value (Abdulla and Kasabov, 2002).




Adaptive Phoneme-Based Speech Recognition


Problem Definition

Recognising phonemes from a spoken language is a difficult but important
problem. If it is correctly solved, then it would be possible to further recognize
the words and the sentences of a spoken language. The pronounced vowels and
consonants differ depending on the accent, dialect, health status, and so on of the
speaker.
As an illustration, Fig. 11.4 shows the difference between some vowels in
English pronounced by male speakers in the R.P. (received pronunciation) English,
Australian English, and New Zealand English, when the first and the second
formants are used as a feature space and averaged values are used. A significant
difference can be noticed between the same vowels pronounced in different dialects
(except the phoneme /I/ for the R.P. and for the Australian English: they coincide
on the diagram). In New Zealand English /I/ and /ə/ are very close.
There are different artificial neural network (ANN)-based models for speech
recognition that utilise MLP, SOM, RBF networks, time-delay NN (Weibel et al.,
1989; Picone, 1993), hybrid NN and hidden Markov models (Rabiner, 1989; Trentin,
2001), and so on. All these models usually use one ANN for the classification of all
phonemes and they work in an off-line mode. The network has as many outputs
as there are phonemes.

Fig. 11.4 Different phones of received pronunciation English, Australian English, and NZ English presented in
the 2D space of the first two formants. Same phonemes are pronounced differently in the different accents,
e.g. /I/ (Maclagan, 1982).




Multimodel, Phoneme-Based Adaptive Speech Recognition System

Here an approach is used where each NN module of a multimodular system
is trained on data for a single phoneme (see Kasabov (1996)) and the training is in
an online mode. An illustration of this approach is given in Fig. 11.5, where four
phoneme modules are shown, each of them trained on one phoneme's data with
three time lags of 26-element Mel-scale cepstrum vectors, each vector representing
one 11.6 ms timeframe of the speech data, with an overlap of 50% between
consecutive timeframes.
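Building input vectors from three time lags of 26-element feature frames, as described above, amounts to concatenating consecutive frames; a sketch with synthetic frame values standing in for real Mel-scale cepstrum vectors:

```python
import numpy as np

def lagged_inputs(frames, n_lags=3):
    """Concatenate n_lags consecutive frames into one classifier input vector."""
    return np.stack([frames[i:i + n_lags].ravel()
                     for i in range(len(frames) - n_lags + 1)])

# 100 synthetic frames of 26 Mel-scale cepstrum coefficients each.
frames = np.random.default_rng(1).normal(size=(100, 26))
inputs = lagged_inputs(frames)
print(inputs.shape)  # one 78-dimensional input vector per frame position
```

Each row of `inputs` is a 3 x 26 = 78-dimensional vector that gives the phoneme module a short temporal context rather than a single snapshot.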
The rationale behind this approach is that single phoneme ANN can be adapted
to different accents and pronunciations without necessarily retraining the whole
system (or the whole ANN in case of a single ANN that recognises all phonemes).
Very often, it is just few phonemes that distinguish one accent from another and
only these ANN modules need to be adjusted.
Figure 11.6 shows the activation of each of the seven ANN modules trained to
recognise different phonemes when a spoken word up is propagated through the
whole system over time. Although the // phoneme ANN gets rightly activated
when the phoneme // is spoken, and /p/ NN gets rightly activated when the /p/
phoneme is spoken, the /h/ and /n/ phoneme ANNs gets wrongly activated during
the silence between // and /p/ in the word up, and the /r/ and /d/ phoneme
ANNs get wrongly activated when /p/ is spoken.
This phoneme module misactivation problem can be overcome through analysis
of the sequence of recognised phonemes and forming the recognised word
through a matching process using a dictionary of words. In order to improve
the recognition rate, the wrongly activated phoneme NN modules can be further
trained not to react positively to the sounds that are problematic for their phoneme.
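One simple realisation of the dictionary-matching step is to collapse the frame-level winners into a phoneme sequence (dropping repeats and silence) and choose the dictionary word whose transcription is most similar. The labels and the two-word dictionary below are toy assumptions for illustration, not the book's data:

```python
from difflib import SequenceMatcher

DICTIONARY = {"up": ["V", "p"], "cat": ["k", "ae", "t"]}   # toy transcriptions

def collapse(frame_labels):
    """Drop repeats and silence from a frame-level label sequence."""
    out = []
    for lab in frame_labels:
        if lab != "sil" and (not out or out[-1] != lab):
            out.append(lab)
    return out

def best_word(frame_labels):
    """Pick the dictionary word most similar to the collapsed sequence."""
    seq = collapse(frame_labels)
    return max(DICTIONARY,
               key=lambda w: SequenceMatcher(None, seq, DICTIONARY[w]).ratio())

# Spurious /h/ and /r/ activations are absorbed by the matching step.
print(best_word(["V", "V", "h", "sil", "p", "r", "p"]))
```

Even with the wrong modules firing between the true phonemes, the similarity score for the correct word dominates, which is the effect the matching process above relies on.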

Fig. 11.5 Four ANN modules, each of them trained to recognize one phoneme (from Kasabov (1996), MIT
Press, reproduced with permission).




Fig. 11.6 The activation of seven phoneme ANN modules, trained on their corresponding phoneme data, when
an input signal of a pronounced word up is submitted. Some of the NN modules are wrongly activated at
times, showing the dynamic features of the phonemes.

Each of the phoneme NN modules, once trained on data of one accent, can be
further adapted to a new accent, e.g. Australian English. In order to do that, the
NN have to be of a type that allows for such adaptation. Such NN are the evolving
connectionist systems ECOS.
In one experiment, one EFuNN was used as the recogniser for each phoneme.
Each EFuNN from the multimodular ECOS can then be further adapted to any
new pronunciation of its phoneme (Ghobakhlou et al., 2003).


Using Evolving Self-Organising Maps (ESOMs) as Adaptive Phoneme Classifiers

An evolving self-organised map (ESOM; Chapter 2) is used for the classification of
phoneme data. The advantage of ESOMs as classifiers is that they can be trained
(evolved) in a lifelong mode, thus providing adaptive, online classification.
Here, an ESOM is evolved on phoneme frames from the vowel benchmark
dataset from the CMU Repository (see also Robinson (1989)). The dataset consists
of 990 frames of speech vowels articulated by four male and four female speakers.
In traditional experiments 528 frames are used for training and 462 for testing
(Robinson, 1989). Here, several models of ESOM are evolved on the training data
with parameter values of 0.5 and 0.05.
The test results of ESOM and of other classification systems on the same test data
are shown in Table 11.1. With tenfold cross-validation on the whole dataset,
much better classification results are obtained with the ESOM models (Table 11.2);
here the parameter value is 1.2.
When online learning is applied to the whole stream of the vowel data, each time
testing the classification accuracy on the following incoming data, the error
rate decreases with time, as can be seen from Fig. 11.7.
The figure illustrates that after a certain number of examples drawn from a closed
and bounded problem space, the online learning procedure of ECOS can converge to
a desired level of accuracy and the error rate decreases (Chapters 2, 3, 5, and 7).



Table 11.1 Classification test results on the CMU vowel data with the use of different classification techniques
(Deng and Kasabov, 2000, 2003). The columns report the number of weights, the best % correct, and the
average % correct for: 5-nearest neighbour with local weighting; squared MLP; 5D growing cell structure
(Fritzke, 1994), 80 epochs; DCS-GCS (Bruske and Sommer, 1995); and ESOM (one epoch only).

Table 11.2 Tenfold cross-validation classification results on the whole vowel dataset (Deng and Kasabov,
2000, 2003).

Method                                        % Correct (average)
CART (classification and regression tree)
ESOM (average number of nodes: 233)           95.0 +/- 0.5

(Fig. 11.7 plot: error occurrence in online classification versus learning time.)
Fig. 11.7 Error rate of an ESOM system trained in an online mode of learning and subsequent classification of
frames from the vowel benchmark dataset available from the CMU repository (see explanation in the text). The
longer the ESOM is trained on the input stream of data, the lower the error rate; the system reaches error
convergence (Deng and Kasabov, 2000, 2003).


Adaptive Whole Word and Phrase Recognition


Problem Definition

In this case, the speech signal is processed so that the segment that represents a
spoken word is extracted from the rest of the signal (usually it is separated by
silence). Extracting words from a speech signal means identifying the beginning
and the end of the spoken word.
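Finding the beginning and the end of a spoken word is often done with a short-time energy threshold: frames whose energy exceeds some fraction of the peak energy are treated as speech, and the first and last such frames bound the word. The sketch below uses this approach; the frame length and the 10% threshold are illustration values, not ones taken from the text:

```python
import numpy as np

def word_boundaries(signal, frame_len=160, threshold=0.1):
    """Return (start, end) sample indices bounding the spoken word."""
    n = len(signal) // frame_len
    energy = np.array([np.sum(signal[i * frame_len:(i + 1) * frame_len] ** 2)
                       for i in range(n)])
    active = np.where(energy > threshold * energy.max())[0]
    return int(active[0]) * frame_len, int(active[-1] + 1) * frame_len

# Silence, then a tone burst, then silence: boundaries bracket the burst.
sig = np.concatenate([np.zeros(800),
                      np.sin(2 * np.pi * 440 * np.arange(1600) / 8000),
                      np.zeros(800)])
print(word_boundaries(sig))  # (800, 2400)
```

Real recordings would need a noise-adaptive threshold rather than a fixed fraction of the peak, but the structure of the endpoint decision is the same.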



There are many problems that need to be addressed when creating a whole
word speech recognition system, for example the problems that relate to the
ambiguity of speech. This ambiguity is resolved by humans through some
higher-level knowledge. Ambiguity can be caused by:

Homophones: words with different spellings and meanings that sound the
same (e.g. to, too, two, or hear, hair, here). It is necessary to resort to a higher
level of linguistic analysis for distinction.

Word boundaries: extracting whole words from a continuous speech signal may
lead to ambiguities; for example /greiteip/ could be interpreted as "grey tape"
or "great ape". It is necessary to resort to higher-level linguistic knowledge to
properly set the boundaries.

Syntactic ambiguity: the ambiguity arising before all the words of a
phrase or a sentence are properly grouped into their appropriate syntactic units.
For example, the phrase "the boy jumped over the stream with the fish" means
either that the boy with the fish jumped over the stream, or that the boy jumped
over the stream with a fish in it. The correct interpretation requires more
contextual information.
All speech recognition tasks have to be constrained in order to be solved. Through
placing constraints on the speech recognition system, the complexity of the speech
recognition task can be considerably reduced. The complexity is basically affected by:
1. The vocabulary size and word complexity. Many tasks can be performed with
the use of a small vocabulary, although ultimately the most useful systems will
have a large vocabulary. In general, the vocabulary size of a speech recognition
system can vary as follows:

Small, tens of words

Medium, hundreds of words
Large, thousands of words
Very large, tens of thousands of words

2. The format of the input speech data entered into the system:

Isolated words (phrases)
Connected words; this represents fluent speech but with a highly constrained
vocabulary, e.g. digit dialling
Continuous speech

3. The degree of speaker dependence of the system:

Speaker-dependent (trained to the speech patterns of an individual user)
Multiple speakers (trained to the speech patterns of a limited group of people)
Speaker-independent (such a system should work reliably with speakers who
have never or seldom used the system)



Sometimes a form of task constraint, such as a formal syntax and formal semantics,
is required to make the task more manageable. This is because as the vocabulary
size increases, the number of possible combinations of words to be recognised
grows rapidly.
Figure 11.8 illustrates the idea of using a NN for the recognition of a whole
word. As inputs, 26 Mel-scale cepstrum coefficients taken from the whole word
signal are used. Each word is an output in the classification system.


A Case Study on Adaptive Spoken Digit Recognition: English Digits

The task is the recognition of speaker-independent pronunciations of
English digits. The English digits are taken from the Otago Speech Corpus
database. Seventeen speakers (12 males and 5 females) are used
for training, and another 17 speakers (12 males and 5 females) are used for testing
an EFuNN-based classification system. Each speaker utters 30 instances of English
digits during a recording session in a quiet room (clean data) for a total of 510
training and 510 testing utterances (for details see Kasabov and Iliev (2000)).
In order to assess the performance of the evolved EFuNN in this application,
a comparison with the linear vector quantization (LVQ) method (Chapter 2,
Kohonen (1990, 1997)) is presented. Clean training speech data is used to train
both the LVQ and the EFuNN models. Noise is introduced to the clean speech test
data to evaluate the behaviour of the recognition systems in a noisy environment.
Two different experiments are conducted with the use of the standard EFuNN
learning method from Chapter 3. In the first instance, car noise is added to the
clean speech. In the second instance office noise is introduced over the clean
signal. In both cases, the signal-to-noise ratio SNR ranges from 0 dB to 18 dB.
The results for car noise are shown in Fig. 11.9. The word recognition rate
(WRR) ranges from 86.87% at 18 dB to 83.33% at 0 dB. The EFuNN method
outperforms the LVQ method, which achieves WRR = 82.16% at 0 dB.
The results for office noise are presented in Fig. 11.10. The WRR of the evolved
EFuNN system ranges from 78.63% at 18dB to 71.37% at 0 dB, and is significantly
higher than the WRR of LVQ (21.18% at 0 dB).





Fig. 11.8 An illustration of an ANN for a whole word recognition problem: the recognition of two words,
yes and no (from Kasabov (1996), MIT Press, reproduced with permission).










Fig. 11.9 Word recognition rate (WRR) of two speech recognition systems when car noise is added. LVQ: codebook
vectors, 396; training iterations, 15,840. EFuNN: 3 MF; rule nodes, 157; sensitivity threshold Sthr = 0.9; error
threshold Errthr = 0.1; learning rates lr1 = 0.01 and lr2 = 0.01; aggregation thresholds thrw1 = 0.2,
thrw2 = 0.2; number of examples for aggregation Nexa = 100; 1 training iteration (Kasabov and Iliev, 2000).











Fig. 11.10 Word recognition rate (WRR) of two speech recognition systems when office noise is added. LVQ:
codebook vectors, 396; training iterations, 15,840. EFuNN: 3 MF; rule nodes, 157; Sthr = 0.9, Errthr = 0.1,
lr1 = 0.01, lr2 = 0.01, thrw1 = 0.2, thrw2 = 0.2, Nexa = 100, 1 training iteration (Kasabov and Iliev, 2000).



A significant difference between the two compared systems, EFuNN and LVQ, is
that EFuNN can be further trained on new data in an online mode.


Adding New Words to Adapt an ECOS Classifier

When a NN is trained on a certain number of words (either in an off-line or in an

online mode) at a certain time of its operation there might be a need to add new
words to it, either of the same language, or of a different language. For example, a
command avanti in Italian may be needed to be added to a system that is trained
on many English commands, among them, a go command. The two commands,
although from different languages, have the same meaning and should trigger the
same output action after having been recognized by the system.
Adding new words (meaning new outputs) to a trained NN is not easy in many
conventional NN models. The algorithm for adding new outputs to an EFuNN,
given in Chapter 3, can be used for this purpose, supposing an EFuNN module is
trained on whole words (e.g. commands).
Experiments on adding new English words, and new words from the Maori
language, to an EFuNN already trained (evolved) on a preliminary set of English
words only are presented in Ghobakhlou et al. (2003). A simple ECOS (a three-layer evolving NN without the fuzzy layers used in EFuNN) was initially evolved
on the digit words (from the Otago Speech Corpus),
and then new words were added in an incremental way to make the system
and then new words were added in an incremental way to make the system
work on both old and new commands. The system was tested on a test set of
data. It manifested very little forgetting (less than 2%) of the previously learned
digit words, increased generalisation on new pronunciations (10% increase), and
very good adaptation to the new words (95.5% recognition rate), with an overall
increase of the generalisation capability of the system. This is in contrast to many
traditional NN models whose performance deteriorates dramatically when trained
on new examples (Robins, 1996).
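The idea behind adding new output words without catastrophic forgetting can be illustrated with a minimal nearest-prototype classifier in Python: learning a new word only appends prototypes, leaving the previously learned ones untouched. This is a hedged sketch of the principle only, not the EFuNN output-addition algorithm itself; the class and its methods are hypothetical.

```python
class PrototypeClassifier:
    """Minimal nearest-prototype classifier that can grow new output
    classes online, keeping previously learned prototypes untouched."""

    def __init__(self):
        self.prototypes = []   # list of (feature_vector, class_label)

    def learn(self, x, label):
        # a sample for a new word simply appends a prototype; old
        # prototypes (old words) are never modified, so old words
        # are forgotten very little
        self.prototypes.append((x, label))

    def classify(self, x):
        dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, x))
        return min(self.prototypes, key=lambda pl: dist(pl[0]))[1]
```

For example, a system evolved on the English command "go" can later learn "avanti" as a new output while still recognising "go", mirroring the behaviour reported for the EFuNN experiments above.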


Adaptive, Spoken Language

Human-Computer Interfaces

Speech recognition and language modelling systems can be developed as main

parts of an intelligent human-computer interface to a database. Both data entry
and a query to the database can be done through a voice input.
Using adaptive online speech recognition systems means that the system can
be further trained on new users, new accents, and new languages in a continuous
online way. Such a system contains a language analysis module that can vary from
simple semantic analysis to natural language understanding.
Natural language understanding is an extremely complex phenomenon. It
involves recognition of sounds, words, and phrases, as well as their comprehension
and usage. There are various levels in the process of language analysis:
Prosody deals with rhythm and intonation.
Phonetics deals with the main sound units of speech (phonemes) and their
correct combination.



Lexicology deals with the lexical content of a language.

Semantics deals with the meaning of words and phrases seen as a function of
the meaning of their constituents.
Morphology deals with the semantic components of words (morphemes).
Syntax deals with the rules, which are applied to form sentences.
Pragmatics deals with the language usage and its impact on the listener.
It is the importance of language understanding in communication between humans
and computers that was the essence of Alan Turing's test for AI (see Introduction).
Computer systems for language understanding require methods that can
represent ambiguity, common-sense knowledge, and hierarchical structures.
Humans, when communicating with each other, share a lot of common-sense
knowledge which is inherited and learned in a natural way. This is a problem for
a computer program. Humans use facial expressions, body language, gestures, and
eye movement when they communicate with each other. They communicate in a
multimodal manner. Computer systems that analyse speech signals, gestures, and
facial expressions when communicating with users are called multimodal systems.
An example of such systems is presented in Chapter 13.



Task: A small EIS for adaptive signal recognition, written in MATLAB

1. Record or download wave data related to two categories of speech or sound
(e.g. male versus female, bird song versus noise, Mozart versus Heavy Metal,
Yes versus No).
2. Transform the wave data into features.
3. Prepare and label the samples for training an adaptive neural network model.
4. Train the model on the data.
5. Test the model/system on new data.
6. Adapt the system (add new data) and test its accuracy on both the new and the
old data.
7. Explain what difficulties you have overcome when creating the system.
A simple MATLAB program that implements only the first part of the task is given
in Appendix B; screen shots of the printed results from a run of the program are
shown there as well.


Summary and Open Problems

The applicability of evolving, adaptive speech recognition systems is broad and

spans all application areas of computer and information science where systems



that communicate with humans in a spoken language (hands-free and eyes-free

environment) are needed. This includes:
Voice dialling, especially when combined with hands-free operation of a
telephone system (e.g. a cell phone) installed in a car. Here a simple vocabulary
that includes spoken digits and some other commands would be sufficient.
Voice control of industrial processes.
Voice command execution, where the controlled device could be any terminal in
an office. This provides a means for people with disabilities to perform simple
tasks in an office environment.
Voice control in an aircraft.
There are several open problems in the area of adaptive speech recognition, some
of them discussed in this chapter. They include:
1. Comprehensive speech and language systems that can quickly adapt to every
new speaker.
2. Multilingual systems that can learn new languages as they operate. The ultimate
speech recognition system would be able to speak any spoken language in the world.
3. Evolving systems that would learn continuously and incrementally spoken
languages from all available sources of information (electronic, human voice,
text, etc.).


Further Reading

Reviews on Speech Recognition Problems, Methods, and Systems (Cole et al.,

1995; Lippman, 1989; Rabiner, 1989; Kasabov, 1996)
Signal Processing (Owens, 1993; Picone, 1993)
Neural Network Models and Systems for Speech Recognition (Morgan and
Scofield, 1991)
Phoneme Recognition Using Time-Delay Neural Networks (Waibel et al., 1997)
Phoneme Classification Using Radial Basis Functions (Renals and Rohwer, 1989)
Hybrid NN-HMM Models for Speech Recognition (Trentin, 2001)
A Study on Acoustic Difference Between RP English, Australian English, and NZ
English (Maclagan, 1982)
Evolving Fuzzy Neural Networks for Phoneme Recognition (Kasabov, 1998b)
Evolving Fuzzy Neural Networks for Whole Word Recognition, English and
Italian Digits (Kasabov and Iliev, 2000)
Evolving Self-organising Maps for Adaptive Online Vowel Classification (Deng
and Kasabov, 2000, 2003)
Adaptive Speech and Multimodal Word-Based Speech Recognition Systems
(Ghobakhlou et al., 2003)

12. Evolving Intelligent Systems

for Adaptive Image Processing

In adaptive processing of image data it is assumed that a continuous stream of

images or videos flows to the system and the system always adapts and improves
its ability to classify, recognise, and identify new images. There are many tasks in
the image recognition area that require EIS. Some application-oriented models and
experimental results of using ECOS, along with other specific or generic methods
for image processing, are presented in this chapter. The material here is presented
in the following sections.

Image analysis and feature selection

Online colour quantisation
Adaptive image classification
Online camera operation recognition
Adaptive face recognition and face membership identification
Summary and open problems
Further reading


Image Analysis and Feature Selection


Image Representation

A 2D image is usually represented as a set of pixels (picture elements), each of
them defined by a triplet (x, y, u), where x and y are the coordinates of the pixel and
u is its intensity. An image is characterised by spatial and spectral characteristics.
The latter represent the colour of a pixel, identified uniquely through its three
components red, green, and blue (RGB), each of them reflecting white
light at a different wavelength (in nm: 465 blue, 500 green, 570 red).
The RGB model, with 256 levels in each dimension, can represent
16,777,216 colours. The visible spectrum covers the range of 400 to 750 nm wavelengths.
Images are represented in computers as numerical objects (rather than perceived
objects) and similarity between them is measured as a distance between their
corresponding pixels, usually measured as Euclidean distance.



Grey-level images have one number per pixel that represents the parameter u
(usually between 0 and 255), rather than the three numbers used in colour images.


Image Analysis and Transformations

Different image analysis and transformation techniques can be applied to an image

in order to extract useful information and to process the image in an information
system. Some of them are listed below and are illustrated with a MATLAB program
in Appendix C, along with prints of resulted images.
Filtering, using kernels, e.g. Ker = [1 1 1; 1 -7 1; 1 1 1], where each pixel (or
a segment) Im_j of an image Im is convolved, together with its 8 neighbouring pixels, with the kernel:

Conv(Im_j, Ker) = Sum(Im_j * Ker)

where * means vector (element-by-element) multiplication.

Statistical characteristics: e.g. histograms.
Adding noise:

Im_j = Im_j + random(N_j)


A benchmark image Lena is used to illustrate the above techniques; see the
MATLAB demo program and the figures in Appendix C.
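The kernel-filtering operation above can be sketched in Python (the book's own demo is in MATLAB). The function below convolves one pixel and its 8 neighbours with a 3 × 3 kernel; treating out-of-image pixels as zero is an assumption of this sketch.

```python
def filter_pixel(image, i, j, ker):
    """Convolve pixel (i, j) with a 3x3 kernel: the weighted sum of the
    pixel and its 8 neighbours (out-of-image pixels count as zero)."""
    h, w = len(image), len(image[0])
    total = 0.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ii, jj = i + di, j + dj
            if 0 <= ii < h and 0 <= jj < w:
                total += image[ii][jj] * ker[di + 1][dj + 1]
    return total

# the edge-enhancing kernel from the text, written as a 3x3 matrix
KER = [[1, 1, 1],
       [1, -7, 1],
       [1, 1, 1]]
```

Note that on a uniform region this kernel preserves the intensity (8v - 7v = v), while it amplifies any pixel that differs from its neighbours, which is why it acts as an edge enhancer.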
Different image analysis and image transformation techniques are required for
different tasks.

For a task of counting objects (e.g. molecules, atoms, sheep, aircraft, etc.) and
their location from a camera image, the following image analysis techniques may
be needed as illustrated in Fig. 12.1 in the case of counting sheep.
Texture analysis
Finding boundaries of objects (areas of contrasts)
Finding spatial object location


Image Feature Extraction and Selection

An image can be represented as a set of features forming a feature vector.
Some of the most commonly used features are the following.

EIS for Adaptive Image Processing






Fig. 12.1 Image transformations on a problem example of counting sheep in a paddock: (a) original image;
(b) texture analysis; (c) contour detection; (d) object location identification.

Raw pixels: Each pixel intensity is a feature (input variable).

Horizontal profile: The sum (or average) intensity of each row of the image, in
the same order for all images.
Vertical profile: The sum (or average) intensity of each column of the image, in
the same order for all images.
Composite profile: The sum (or average) intensity of each column plus each
row of the image, in the same order for all images.
Grey histogram.
Colours as features and/or object shapes.
Specialised features, e.g. for face recognition.
FFT frequency coefficients.
Wavelet features: see Fig. 12.2 for an example and a comparison between
a wavelet function and a periodic function (sine).

A small program in MATLAB for extracting a composite profile from a raw image
is given in Appendix C.
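The profile features listed above can be sketched in Python as follows (the book's own program is in MATLAB); sums are used here, though averages work equally well.

```python
def profiles(image):
    """Return the horizontal, vertical, and composite intensity profiles
    of a grey-level image given as a list of rows of pixel intensities."""
    horizontal = [sum(row) for row in image]       # one value per row
    vertical = [sum(col) for col in zip(*image)]   # one value per column
    composite = horizontal + vertical              # rows first, then columns
    return horizontal, vertical, composite
```

Taken in the same order for all images, each profile forms a fixed-length feature vector suitable as input to a classifier.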





Fig. 12.2 Image transformation functions: (a) Meyer wavelet; (b) sine wave.


Online Colour Quantisation


The Colour Quantisation Task

This task is concerned with reducing the number of colours n in which an image is
represented to a smaller number m, without degrading the quality
of the image (Chen and Smith, 1977). This is necessary in many cases, as images
might be represented in thousands of colours, which makes image transmission
over long distances and image processing in a computer unacceptably slow. In
many cases keeping the thousands of colours may not be necessary at all.
As each colour is a mixture of the three main colours, red, green, and blue
(RGB), each colour is a point in the 3D space of the RGB. Mapping all n colours
of the pixels of an image into the RGB space and clustering all the data points
into a smaller number of clusters (e.g. 256; colour prototypes) that best represent
each colour of the original picture, is the first step of the colour quantisation.
The second step is replacing the original colour of each pixel with its closest
prototype. That gives the quantised image.
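This second step can be sketched in Python as a nearest-prototype lookup (a naive linear search over the palette; real implementations use faster indexing, and the function name is illustrative).

```python
def quantise(pixels, palette):
    """Replace each RGB pixel by the nearest colour prototype from the
    palette, using squared Euclidean distance in RGB space."""
    def nearest(p):
        return min(palette,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c)))
    return [nearest(p) for p in pixels]
```

With a 256-colour palette, each pixel of the quantised image is one of the 256 prototypes produced by the clustering step.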
Many methods are known for colour quantisation (Chen and Smith, 1977;
Chaudhuri et al., 1992). Most of them require many iterations. In the next
section, online colour clustering and quantisation is achieved through applying
the ESOM from Chapter 2.


Online Colour Quantisation Using Evolving Self-Organising

Maps (ESOM)

Here, the ESOM algorithm from Chapter 2 is applied to the problem of online
colour quantisation. Results are compared with those achieved by applying other
methods, including median-cut, octree, and Wu's method. Three test images are
chosen: Pool Balls, Mandrill, and Lena, as shown in Figs. 12.3 through 12.5.
The Pool Balls image is artificial and contains smooth colour tones and shades.
The Mandrill image is of 262,144 (512 × 512) pixels but has a very large number of
colours (230,427). The Lena image is widely used in the image processing literature
and contains both smooth areas and fine details.
Test images are quantised to 256 colours. For the different images, different
ESOM parameters are used as follows: (a) Pool Balls, e = 1.86; (b) Mandrill,
e = 2.04; (c) Lena, e = 3.19. In all three cases Tp = 2000 and  = 0.05.

EIS for Adaptive Image Processing


Fig. 12.3 Pool Balls benchmark colour image.

Fig. 12.4 Mandrill benchmark colour image.

Fig. 12.5 Lena colour benchmark image.



Table 12.1 Quantisation performance of different methods over the three
benchmark images from Figs. 12.3, 12.4, and 12.5, where the quantisation
error / quantisation variance are shown (Deng and Kasabov, 2003); the values
for the Pool Balls image are:

Median cut: 11.32 / 5.59
Octree: 13.17 / 4.98
Wu's method: 9.89 / 4.56
ESOM: 9.47 / 3.86

Online clustering is applied directly to the RGB colour space. Here we denote
the image as I, with a pixel number of N. The input vector to the ESOM algorithm
is now a three-dimensional one: Ii = (Ri, Gi, Bi).
The online clustering process of ESOM constructs a colour map C = {cj | j =
1, ..., 256}. Each image pixel is then quantised to the best-matching palette colour
cm, a process denoted Q: Ii → cm. To speed up the calculation process, the
L-norm (see Chaudhuri et al. (1992)) is adopted as an approximation of the
Euclidean metric used in ESOM. The quantisation root mean square error between
the original and the quantised images is calculated pixelwise.
Apart from the quantisation error, quantisation error variance is another factor
which influences the visual quality of the quantised image.
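Both statistics can be computed pixelwise as in the following Python sketch (representing images as flat lists of RGB tuples is an assumption of this illustration).

```python
import math

def quantisation_stats(original, quantised):
    """Pixelwise quantisation error statistics between an original and a
    quantised image (each a flat list of RGB tuples): returns the root
    mean square error and the variance of the per-pixel errors."""
    errs = [math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
            for p, q in zip(original, quantised)]
    mean = sum(errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    variance = sum((e - mean) ** 2 for e in errs) / len(errs)
    return rmse, variance
```

A low error with a high variance means a few pixels are quantised very badly, which the eye notices; this is why the error variance matters alongside the RMSE.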
Quantisation performance of different methods is compared in Table 12.1, where
the quantisation error and the quantisation variance are shown.
Generally speaking, ESOM not only achieves a very small value of average
quantisation error; its error variance is also the smallest. This explains why images
quantised by ESOM have better visual quality than those done by other methods.
Figures 12.6 through 12.9 show the quantized Lena image with the use of the
median cut method (Heckbert, 1982), octree (Gervautz and Purgathofer, 1990),
Wu's method (Wu, 1992), and ESOM, respectively (Deng and Kasabov, 2003). The

Fig. 12.6 Off-line quantized Lena image with the use of the median cut method.



Fig. 12.7 Off-line quantized Lena image with the use of the octree.

Fig. 12.8 Off-line quantized Lena image with the use of Wu's method.

accuracy of the ESOM model is comparable with that of the other methods, and in
several cases, e.g. the Lena image, it is the best one achieved.
Using ESOM takes only one epoch of propagating pixels through the ESOM
structure, whereas the other methods require many iterations. With the 512 ×
480-sized Lena image, it took two seconds for the ESOM method to construct
the quantisation palette on a Pentium-II system running Linux 2.2. By using an
evolving model, the time searching for best matching colours is much less than
using a model with a fixed number of prototypes. In addition to that, there is a
potential of hardware parallel implementation of ESOM, which will increase greatly
the speed of colour quantisation and will make it applicable for online real-time
applications to video streams.



Fig. 12.9 Online quantized Lena image with the use of ESOM (Deng and Kasabov, 2003).

As ESOM can be trained in an incremental online mode, an ESOM already evolved
on a set of images can be further tuned and modified according to new images.


Adaptive Image Classification


Problem Definition

Some connectionist and hybrid neuro-fuzzy connectionist methods for image

classification have been presented in Pal et al. (2000) and Hirota (1984). The
classification procedure consists of the following steps.
1. Feature extraction from images. Different sets of features are used depending on
the classification task. Filtering, fast Fourier transformation (FFT), and wavelet
transformations (Wang et al., 1996; Szu and Hsu, 1999) are among the most
popular ones.
2. Pattern matching of the feature vectors to a trained model. For pattern matching,
different NN and hybrid neuro-fuzzy-chaos techniques have been used (see e.g.
Bezdek (1993) and Szu and Hsu (1999)).
3. Output formation. After the pattern matching is achieved, it may be necessary
to combine the calculated output values from the pattern classifier with other
sources of information to form the final output results. A simple technique is to
take the maximum value of several NN modules, each of them trained on the data
of a particular class, as is the case in the experiment below.
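The max-rule output formation can be sketched in Python; each per-class module is represented here by a hypothetical scoring function standing in for a trained NN module.

```python
def classify_by_modules(feature_vector, modules):
    """Output formation by the max rule: each module scores one class;
    the class whose module gives the highest activation wins.
    `modules` maps class labels to scoring callables (stand-ins for
    per-class trained NN modules)."""
    return max(modules, key=lambda label: modules[label](feature_vector))
```

The same pattern applies in the pest-identification experiment below, where one EFuNN module is trained per pest class and the class of the most strongly activated module is the output.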
EIS for adaptive image classification are concerned with the process of incremental
model creation when labelled data are made available in a continuous way. The



classifier is updated with each new labelled datum entered and is used to classify
new, unlabelled data, in a continuous process.


A Case Study of Pest Identification from Images

of Damaged Fruit

The case study here is the analysis of damage to pip fruit in orchards with the goal
of identifying what pest caused the damage (Wearing, 1998). An image database
is also built, allowing for content-based retrieval of damage images using wavelet
features (Wearing, 1998). The problem is normally compounded by the fact that
the images representing a damaged fruit vary a lot. Images are taken either from
the fruit or from the leaves, and are taken at different orientations and distances as
shown in Figure 12.10a,b. As features, wavelet coefficients are extracted from the
images (see Fig. 12.2 for the difference between wavelet functions and the sine
functions used in the FFT).
Using Daubechies wavelets for image analysis and image comparison has already
been shown to be a successful technique in the analysis of natural images (Wang
et al., 1996; Szu and Hsu, 1999). In the experiment here the coefficients resulting
from the wavelet analysis are used as inputs to an EFuNN module (see Chapter 3)
for image classification.
This section suggests a methodology for classification of images based on
evolving fuzzy neural networks (EFuNNs) and compares the results with the use
of a fixed size, off-line fuzzy neural network (FuNN; Kasabov et al. (1997)) and
with some other techniques.
For the experimental modelling, a set of 67 images is used to train five EFuNNs,
one for the identification of each of the five pests in apples, denoted as: alm_l,
alm_f, cm, lr_l, and lr_f. The initial sensitivity threshold is selected as Sthr =
0.95 and the error threshold used is Errthr = 0.01. The EFuNNs are trained for
one epoch. The number of rule nodes generated (rn) after training for each of the
EFuNN models is as follows: EFuNN alm_l: rn = 61; EFuNN alm_f: rn = 61;
EFuNN cm: rn = 51; EFuNN lr_l: rn = 62; and EFuNN lr_f: rn = 61. The
results of the confusion matrix are presented in Table 12.2.
The evolving EFuNN models are significantly better at identifying pests on new
test data (what pest has caused the damage to the fruit) than the FuNNs (not


Fig. 12.10 Examples of codling moth damage: (a) on apples; (b) on leaves (Wearing, 1998).



Table 12.2 Test classification results of images of damaged fruit with the use of EFuNN, the
confusion matrix over five types of pests (Woodford et al., 1999).
(Confusion matrix over training and test data for the 5 EFuNNs, with the number
of rule nodes per EFuNN; parameters: lr = 0.0, prune = 0.1, errthr = 0.01,
sthr = 0.95, fr = 0.)
evolving and having a fixed structure; see Kasabov (1996)). Computing the kappa
coefficient for both the FuNN and EFuNN confusion matrices substantiates this,
with results of 0.10 for the FuNN and 0.45 for the EFuNN.
New images can be added to an EFuNN model in an online mode. Rules can be
extracted that represent the relationship between input features, encoding damage
on a fruit, and the class of pests that did the damage.


Incremental Face Membership Authentication and

Face Recognition

Face image recognition is a special case of the image recognition task. Here,
incremental adaptive learning and recognition in a transformed feature space of
PCA and LDA using IPCA and ILDA (see Chapter 1) are used for two problems.


Incremental Face Authentication Based on Incremental PCA

Membership authentication by face classification is considered a two-class
classification problem, in which a person trying to get authenticated is judged by
the system to be either a member or a nonmember. The difficulties of this problem
are as follows.



1. The size of the membership/nonmembership group changes dynamically.
2. Because face images have large dimensions, dimensionality reduction
must be carried out owing to processing-time limitations.
3. The size of the membership group is usually smaller than that of the nonmembership group.
4. There are few similarities within the same class.
In this research case, the first two difficulties are tackled by using the concept
of incremental learning. In real situations, only a small amount of face data are
given to learn a membership authentication system at a time. However, the system
must always make a decision as accurately as possible whenever the authentication
is needed. To do that, the system must learn given data incrementally so as to
improve the performance constantly. In this sense, we can say that membership
authentication problems essentially belong to the incremental learning problem
as well. However, if large-dimensional data are given to the system as its inputs,
it could be faced with the following problems.
1. Face images have large dimensions and the learning may continue for a long
time. Therefore, it is unrealistic to keep all, or even part, of the data in memory.
2. The system does not know what data will appear in the future. Hence, it is quite
difficult to determine appropriate dimensions of feature space in advance.
The first problem can be solved by introducing one-pass incremental learning.
On the other hand, for the second problem, we need some method to be able to
construct feature space incrementally. If we use principal component analysis
(PCA) as a dimensional reduction method, incremental PCA (IPCA; Ozawa
et al. (2004a,b, 2005a,b)) can be used. This is illustrated by Ozawa et al. with
a model that consists of three parts.
Incremental PCA
ECM (online evolving clustering method; see Chapter 2)
K-NN classifier (see Chapter 1)
In the first part, the dimensional reduction by IPCA is carried out every time a
new face image (or a small batch of face images) is given to the system. In IPCA,
depending on given face images, the following two operations can be carried out
(see Chapter 1),
Eigenaxes rotation
Dimensional augmentation
When only rotation is conducted, the prototypes obtained by ECM can be easily
updated; that is, we can calculate the new prototypes by multiplying it and
the rotation matrix. On the other hand, if dimensional augmentation is needed,
we should note that the dimensions of prototypes in ECM are also increased.



A simple way to cope with this augmentation is to define the following twofold
prototypes for ECM: (pNi, pDi), i = 1, ..., Pt, where pNi and pDi are, respectively, the
ith prototype in the N-dimensional image space and the D-dimensional eigenspace,
and Pt is the number of prototypes at time t. Keeping the information on prototypes in the original image space as well as in the eigenspace, it is possible to
calculate a new prototype in the augmented (D+1)-dimensional eigenspace exactly.
Therefore, we do not have to modify the original ECM algorithm at all, except
that the projection of all prototypes from the original space to the augmented
(D+1)-dimensional eigenspace must be carried out before clustering by ECM.
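The two prototype-update cases, rotation and augmentation, can be sketched in Python as follows (the function name, argument layout, and the `project` callback are illustrative assumptions, not the authors' API).

```python
def update_prototypes(protos_eig, protos_orig, rotation, project):
    """Prototype bookkeeping under IPCA updates. An eigenaxes rotation
    multiplies each eigenspace prototype by the rotation matrix; a
    dimensional augmentation re-projects the stored image-space
    prototypes into the enlarged eigenspace via `project`."""
    if rotation is not None:
        # rotation only: new prototype = R * old prototype
        rotated = []
        for p in protos_eig:
            rotated.append([sum(r * x for r, x in zip(row, p))
                            for row in rotation])
        return rotated
    # augmentation: re-project from the original N-dimensional space,
    # which is possible because image-space prototypes were kept
    return [project(p) for p in protos_orig]
```

This mirrors the text: rotation needs only a matrix multiply on the eigenspace prototypes, while augmentation is exact because the twofold prototypes retain the original image-space vectors.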
In the last part, the k-NN classifier does not need any modifications even if the
rotation and augmentation in the eigenspace are carried out, because it only
calculates the distance between a query and each of the prototypes.
To evaluate the performance of the proposed incremental authentication system,
we use (see Ozawa et al. (2005a,b, 2006)) a face dataset that consists of 1355
images (271 persons, 5 images for each person). Here, 4 of the 5 images are used
for training and the rest are used for testing. From this dataset, 5% of the persons'
images are randomly selected as the initial training set (i.e. 56 images in total). The
number of incremental stages is 51; hence, a batch of about 20 images is trained
at each stage. The original images are preprocessed by wavelets, and transformed
into 644-dimensional input vectors. To evaluate the average performance, fivefold
cross-validation is carried out. For comparative purposes, the performance is also
evaluated for the nonincremental PCA in which the eigenspace is constructed from
the initial dataset and the eigenvectors with over 10% power against the cumulative
eigenvalue are selected (only one eigenvector is selected in the experiment).
The two results are very similar, confirming the effectiveness of the incremental approach.


Incremental Face Image Learning and Classification Based

on Incremental LDA

In Pang et al. (2005a,b) a method for incremental LDA (ILDA) is proposed

along with its application for incremental image learning and classification (see
Chapter 1). It is shown that the performance of ILDA on a database with a large
number of classes and high-dimensional features is similar to the performance
of batch-mode learning and classification with the use of LDA. A benchmark
MPEG-7 face database is used, which consists of 1355 face images of 271 persons
(five different face images per person are taken), where each image has the size
of 56 × 46. The images have been selected from the AR (Purdue), AT&T, Yale, UMIST,
University of Berne, and some face images obtained from MPEG-7 news videos.
Incremental learning is applied on a database having 271 classes (faces)
and 2576 (56 × 46) dimensional features, where the first 30 eigenfeatures of ILDA
are taken to perform K-NN leave-one-out classification. The discriminability of
ILDA, specifically when bursts of new classes are presented at different times, was
evaluated, along with the execution time and memory costs of ILDA as new data
are added.
