UML Bible
by Tom Pender ISBN: 0764526049
John Wiley & Sons 2003 (940 pages)
For beginning to advanced users, this book provides
comprehensive coverage of the versatility of the 1.4 and 2.0 UML
specifications, and shows how to use UML to improve
timeliness, quality, and efficiency in development.
Companion Web Site
Table of Contents
UML Bible
Preface
Part I - An Introduction to UML
Chapter 1 - What Is UML?
Chapter 2 - UML Architecture
Chapter 3 - UML Diagrams and Extension Mechanisms
Chapter 4 - Object-Oriented Concepts
Part II - Modeling Object Structure
Chapter 5 - Capturing Rules about Objects in a Class Diagram
Chapter 6 - How to Capture Rules about Object Relationships
Chapter 7 - Testing with Objects
Part III - Modeling Object Interactions
Chapter 8 - Modeling Interactions in UML 1.4
Chapter 9 - Modeling Interactions in UML 2.0
Chapter 10 - Modeling an Object's Lifecycle in UML 1.4
Chapter 11 - Modeling an Object's Lifecycle in UML 2.0
Part IV - Modeling Object Behavior
Chapter 12 - Modeling the Use of a System with the Use Case Diagram
Chapter 13 - Modeling Behavior Using an Activity Diagram
Part V - Modeling the Application Architecture
Chapter 14 - Modeling the Application Architecture
Chapter 15 - Modeling Software Using the Component Diagram
Chapter 16 - Using Deployment Diagrams in UML 1.4
Chapter 17 - Representing an Architecture in UML 2.0
Part VI - Bringing Rigor to the Model
Chapter 18 - Applying Constraints to the UML Diagrams
Chapter 19 - Action Semantics
Part VII - Automating the UML Modeling Process
Chapter 20 - Using a Modeling Tool
Chapter 21 - Customizing UML Using Profiles
Chapter 22 - XML Metadata Interchange
Appendix A - UML 1.4 Notation Guide
Appendix B - UML 2.0 Notation Guide
Appendix C - Standard Elements
Glossary
Index
List of Figures
List of Tables
List of Listings
List of Sidebars
Back Cover
Today's economy demands top-quality software development in record time and maximum efficiency. UML arms you to meet
that challenge, and the UML Bible supplies the most comprehensive UML education you can get. One volume covers
everything from understanding and using UML and diagramming notation to the Object Constraint Language (OCL) and
profiles, in both the 1.4 and 2.0 UML specifications. It's the one resource you can rely on to virtually guarantee your success.
Learn to model object structure, interactions, behavior, and architecture using UML
Explore diagram structure and usage
Understand how to utilize the overlapping features of the UML diagrams to facilitate the modeling process
Learn to exploit the features of the UML diagrams to test them for consistency and accuracy
Learn to assess modeling tools to choose the one that suits your needs
Comprehend how the statechart diagram is used to model changes in an object over its lifetime
Apply the Object Constraint Language (OCL) and work with Action Semantics to specify behaviors that ultimately will be
implemented in code
Understand the XML Metadata Interchange (XMI) standard that helps enable model sharing between modeling tools and
other XMI-compatible applications
Customize UML to meet the needs of specific industries or application types
About the Author
Tom Pender is currently a teacher and mentor for UML courses offered through Sun Microsystems and DigitalThink, Inc.
Tom has worked as a software engineer for more than 20 years in a wide variety of industries. He has worked in just about
every position within software development from programmer to manager. His extensive and diverse experience brings the
real world into the classroom, where he has spent the past six very successful years teaching analysis and design using
UML. He has authored four online courses about UML through DigitalThink, Inc. and the book UML Weekend Crash Course,
which has been enthusiastically praised as a very practical, approachable, and comprehensive introduction to UML.
UML Bible
Tom Pender
WILEY
Wiley Publishing, Inc.
Published by
Wiley Publishing, Inc.
10475 Crosspoint Boulevard
Indianapolis, IN 46256
http://www.wiley.com
Copyright © 2003 Wiley Publishing, Inc., Indianapolis, Indiana
Library of Congress Control Number: 2003101942
0-7645-2604-9
Manufactured in the United States of America
10 9 8 7 6 5 4 3 2 1
1B/QZ/QZ/QT/IN
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any
means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under
Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the
Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center,
222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8700. Requests to the Publisher for
permission should be addressed to the Legal Department, Wiley Publishing, Inc., 10475 Crosspoint Blvd.,
Indianapolis, IN 46256, (317) 572-3447, fax (317) 572-4447, E-Mail: permcoordinator@wiley.com.
LIMIT OF LIABILITY/DISCLAIMER OF WARRANTY: WHILE THE PUBLISHER AND AUTHOR HAVE USED
THEIR BEST EFFORTS IN PREPARING THIS BOOK, THEY MAKE NO REPRESENTATIONS OR
WARRANTIES WITH RESPECT TO THE ACCURACY OR COMPLETENESS OF THE CONTENTS OF THIS
BOOK AND SPECIFICALLY DISCLAIM ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR
A PARTICULAR PURPOSE. NO WARRANTY MAY BE CREATED OR EXTENDED BY SALES
REPRESENTATIVES OR WRITTEN SALES MATERIALS. THE ADVICE AND STRATEGIES CONTAINED
HEREIN MAY NOT BE SUITABLE FOR YOUR SITUATION. YOU SHOULD CONSULT WITH A PROFESSIONAL
WHERE APPROPRIATE. NEITHER THE PUBLISHER NOR AUTHOR SHALL BE LIABLE FOR ANY LOSS OF
PROFIT OR ANY OTHER COMMERCIAL DAMAGES, INCLUDING BUT NOT LIMITED TO SPECIAL,
INCIDENTAL, CONSEQUENTIAL, OR OTHER DAMAGES.
For general information on our other products and services or to obtain technical support, please contact our
Customer Care Department within the U.S. at (800) 762-2974, outside the U.S. at (317) 572-3993, or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be
available in electronic books.
Trademarks: Wiley, the Wiley Publishing logo, and related trade dress are trademarks or registered trademarks of
John Wiley & Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without
written permission. UML is a trademark of Object Management Group, Inc. All other trademarks are the property of
their respective owners. Wiley Publishing, Inc., is not associated with any product or vendor mentioned in this book.
Diagrams identified by the text "OMG 1.4" are from the OMG UML specification v1.4 and are used with
permission of the OMG. Copyright © 2001, Object Management Group, Inc. http://www.omg.org.
Diagrams identified by the text "OMG 2.0" are from the OMG UML specification v2.0 and are used with
permission of the OMG. Copyright © 2003, Object Management Group, Inc. http://www.omg.org.
Figures identified by the text "No Magic" are printed with permission from No Magic, Inc., 1998-2003. All rights
reserved.
WILEY is a trademark of Wiley Publishing, Inc.
About the Authors
Tom Pender is currently a teacher and mentor for UML courses offered through Sun Microsystems and
DigitalThink, Inc. Tom has worked as a software engineer for more than 20 years in a wide variety of industries. He
has worked in just about every position within software development from programmer to manager. His extensive
and diverse experience brings the real world into the classroom, where he has spent the past six very successful
years teaching analysis and design using UML. He has authored four online courses about UML through
DigitalThink, Inc. (http://www.digitalthink.com/) and the book UML Weekend Crash Course, which has been
enthusiastically praised as a very practical, approachable, and comprehensive introduction to UML. When not
standing on the UML soapbox, Tom works with orphanages in Romania and with challenged families in his
community, trains his dogs, and periodically indulges in his hobby of collecting old comic books. Tom can be
reached at tom@pender.com or via the Wiley Web site, http://www.wiley.com/compbooks/pender.
Eugene McSheffrey has over sixteen years' experience in the software industry and is a senior consultant with
Popkin Software, a company that provides enterprise architecture modeling tools and services that help
companies to align their business and IT. Since joining Popkin in 1996, he has worked as a consultant and trainer,
helping clients in Europe, Asia, and North America to build effective architecture and development models using
UML and other modeling techniques. Eugene holds a B.Sc. degree from the University of Edinburgh and an M.Sc.
degree in Computing for Commerce and Industry from the Open University.
Lou Varveris is Director of Research & Communications at Popkin Software and also serves as UML Product
Manager for System Architect. He has been involved with the implementation of object methodologies for nine
years. He has published white papers, given seminars, and written and taught numerous training courses and
tutorials on UML and enterprise architecture. Prior to working at Popkin Software, he was an engineer at Unisys
Corporation for nine years. He holds a B.S. in Engineering Science from the College of Staten Island, CUNY, and
an M.S. in Specialized Journalism, with graduate engineering work in communication theory, from Polytechnic
University.
Credits
Senior Acquisitions Editor
Jim Minatel
Project Editor
Sara Shlaer
Technical Editors
Robert Rodes
Lou Varveris
Copy Editor
Maryann Steinhart
Editorial Manager
Mary Beth Wakefield
Vice President & Executive Group Publisher
Richard Swadley
Vice President and Executive Publisher
Bob Ipsen
Vice President and Publisher
Joseph B. Wikert
Executive Editorial Director
Mary Bednarek
Project Coordinator
Regina Snyder
Graphics and Production Specialists
Amanda Carter, Jennifer Click,
Michael Kruzil, Lynsey Osborn,
Mary Gillot Virgin
Quality Control Technicians
Charles Spencer
Carl William Pierce
Permissions Editor
Carmen Krikorian
Media Development Specialist
Kit Malone
Proofreading and Indexing
TECHBOOKS Production Services
I dedicate this book to my wife Jackie, my son Tom, and my daughter Tami, who have stood by me through
the demands and the chaos of work and writing, and through it all have managed to preserve our family life. I
love you all.
Acknowledgments
Thank you to my wife Jackie, my son Tom, and my daughter Tami who put up with my crazy work schedule and
my mood swings as I juggled work, writing, and my family life (not always very successfully). Their patience and
steadfast support keep me going. Thank you for standing by me through the challenges.
Sara Shlaer was indispensable. She held my hand through the first book, UML Weekend Crash Course, showing
me the ropes and showing me how little I knew (know) about writing well. UML Bible was a much bigger
undertaking, and once again Sara exemplified professionalism, talent, and patience. I sincerely appreciate the
tremendous effort Sara made to keep me on track despite the myriad specification changes, schedule revisions,
and missed deadlines. She always made an effort to keep our working relationship humane, balancing our
humanity and our obligations with grace and compassion. I could not have done this book without her.
Bob Rhodes labored diligently to understand my often cryptic text and sort out what I really meant to say (or so he
hoped I meant to say), always giving me the benefit of the doubt, deserved or not. Thank you very much, Bob, for
hanging in there through all the changes to the specification and the hundreds of detailed diagrams used to
illustrate this book.
Lou Varveris is the product manager for System Architect at Popkin Software. He helped me as technical editor
on my first book, UML Weekend Crash Course. Then, as here, he was a tremendous help. Lou is meticulous,
patient, and thorough. His insights were always dead on and saved me embarrassment on many occasions. Thank
you, Lou. Lou also authored the chapters on packages and automating the UML using XMI, profiles, and modeling
tools, taking a tremendous burden off my schedule and enhancing the quality of the book. Again, thank you very
much Lou.
Eugene McSheffrey works at Popkin Software with Lou Varveris. Eugene took precious time out of his incredibly
busy schedule to author the chapter on Action Semantics, relieving another burden from the schedule and
providing an excellent primer for the subject.
Maryann Steinhart took on a great deal of the editing responsibility. Knowing nothing about UML, she faithfully
labored through the morass of my technical jargon and cryptic examples to offer insights and alternatives so that
people could actually read the book. Her comments were always offered in a professional and understanding
manner, underscoring her commitment to the quality of the book and to the readers. Thank you very much for your
perseverance, Maryann.
Jim Minatel ran interference and politely, but steadfastly, worked to keep me on track. Jim could have played the
bully project manager, but he chose just the right balance of accountability and understanding.
My thanks to the Graphic Techs at Wiley who spent hours making faithful reproductions of my original art for the
book: Karl Brandt, Kelly Emkow, Lauren Goddard, Lynsey Osborn, Rashell Smith, and Mary Virgin. They did an
incredible job and never complained (at least, within my hearing) about the continual revisions I submitted.
Finally, thank you to all my students who have offered countless suggestions and feedback as I grappled with ways
to communicate the features and benefits of modeling. Their honesty and enthusiasm were at once an inspiration
and a motivation to keep trying. Thank you, to all of you, for speaking up, for asking questions, for challenging me,
and for earnestly seeking to understand.
Preface
In more than 20 years on projects and five years in the classroom, I've seen a lot of things change and a lot of
things stay the same. I've watched people get thrown in every direction by the speed and impact of changes in
technology, and I've listened to the same unchanging list of complaints:
Poorly defined requirements
Rapidly changing requirements
Difficulties in communication among IT and business team members and clients
Estimates that can't be trusted because they are really individual guesses, since no one has kept any
development metrics
The overwhelming challenge of maintaining existing systems with no documentation
So how can the Unified Modeling Language help solve these problems? UML has become an increasingly popular
and powerful component of software development strategies around the world. More than 90 percent of the
Fortune 500 companies use it in some form. Why? Because UML supplies some essential tools that support a
professional response to these challenges. UML provides
A consistent form of communication that works equally well in analysis and design
A visual presentation that works equally well for technical and non-technical team members
A formal yet flexible standard to ensure consistency and clarity
An extensible language that may be tailored to any industry or application type
A programming-language-independent way to specify software
A rigorous means to specify software structure and behavior to generate code or even generate complete
applications (executable UML)
Regardless of the method you use to apply it, UML supports the development of precise, consistent, and traceable
communication that will survive the chaotic and rapid pace of change.
The goal of every book in the Bible series is to provide you with a comprehensive explanation of a topic, and in
UML Bible you learn all about the Unified Modeling Language (UML): diagramming notations and semantics, the
Object Constraint Language, Action Semantics, the UML Metamodel (M2), the XML Metadata Interchange (XMI),
and the evolution of the standard from UML 1.4 to UML 2.0.
UML Bible presents each concept in the UML specification with an introduction that assumes no prior knowledge
and progresses through a complete description of its advanced application, while pointing out the changes from
UML 1.4 to UML 2.0. For example, the Class diagram includes a number of modeling elements. Each element is
explained from the basics to the complete specification. Then the entire Class diagram is explained, using the
individual modeling elements together to build a complete example.
In like manner, related concepts often come together, as in the relationships between Class diagrams and
Sequence diagrams. Here, too, the book presents an introduction to the basic relationships and their
consequences and then proceeds with a complete description of how those relationships might play out in your
software modeling process.
This is unlike many other books that progress from beginning to end, using the first chapters as an introduction and
later chapters for the more advanced topics. But I believe that you will find the approach in UML Bible most
effective for this particular subject. You will quickly be able to recognize the presentation pattern. As you become
accustomed to the approach, you can choose to skip over sections that you might already be familiar with and
focus on the new material that is most useful to you.
It is my hope that UML Bible will provide you with both an appreciation for the usefulness of the software modeling
resources of UML and a working knowledge of the tremendous potential that these tools and concepts have for
improving the way you build software.
Who Should Read This Book
This book presents everything you need to know in order to become proficient and productive at modeling software
with UML. It includes my own experience, and insights from many of the authors and teachers with whom I've had
the pleasure of working, as well as the works of many of the industry leaders.
UML Bible is aimed primarily at those people responsible for the countless hours of communication between users
and IT staff and at those tasked with turning the user's wishes into code. Participants at every level of the software
development process stand to gain from this book:
People who are new to OO modeling will get a step-by-step introduction to everything from basic concepts
of object orientation that support the modeling notation and process to insights into the use and application of
the models. You will quickly obtain a working knowledge of the diagrams used by your teammates so that you
can become an active participant in the review and enhancement of the models.
Experienced modelers will be able to dig a little deeper and discover the relationships between the various
diagrams and how they may be used to test the quality and completeness of the whole model. If you want, you
will be able to exploit the rigors of OCL and Action Semantics as well.
Programmers will gain a working knowledge of the models that the analysts and designers are asking you to
implement. Communication will improve as you become comfortable with the vocabulary used by both
technical and non-technical team members to describe the software requirements. The models will become
your roadmap for understanding existing systems, new systems, and ideas for enhancements in a way that
code alone cannot, while leaving open the freedom to incorporate new techniques and technologies in your
implementation.
People evaluating modeling tools will gain a complete understanding of the features and benefits that you
should expect from such a tool.
Managers and project leaders will gain an appreciation for the power and value of the modeling process as
a means to solve many of the problems that plague the software development process. Hopefully, you will
also come to appreciate the value of having a rigorous and testable tool to understand and solve a problem
before committing to the expense and uncertainty of code. In short, I hope you come to see that "the model is
the code" (more of that sermon later…).
Why You Need This Book
To get the most out of this book, you need to appreciate the intense challenges inherent in today's software
development environment. Smarter people than I have worked long years at developing real-world strategies to
address these challenges. UML is the distillation of a key component of many of these road-tested strategies,
namely, communication.
Communication of needs and ideas is the substance of the software development process. The quality of
communication between analysts, developers, trainers, testers, managers, and users can make or break a project.
As a software professional, you are used to dealing with most, if not all, of these classic communication
challenges. See whether these examples sound familiar.
Poor communication leads to delays and extra cost
As systems change, all project participants must keep informed of the nature and impact of those changes on the
requirements and solutions already in progress or already implemented. Without a standard way to communicate,
everyone is left to his own creativity. You've probably seen teams where more time is spent in meetings than
actually working. The meetings use up valuable time in an effort to communicate. One person uses flowcharts;
another person uses screen layouts; others use volumes of written specifications. The project incurs the overhead
of becoming familiar with a variety of communication styles and techniques.
Meanwhile, individual developers who need to apply badly needed changes have to instead labor with widely
varied types of documentation to try to understand and repair complex systems. In a straw poll that I conduct in
each class, students are asked how they handle a request to change an existing system when the documentation
is difficult to understand or nonexistent. Almost without exception, the students say they simply rewrite the code.
For those of you managing the budget, that means that they are throwing away a corporate asset simply because
they can't understand it and didn't have the time to try. Then they are spending more time creating an entirely new
and untried solution than they would in making a simple correction, if they only knew where to make it.
Translation: Poor communication = slower projects and higher cost of maintenance
The volatile nature of the IT work environment
Between corporate restructuring, economic ups and downs, and team members' personal lives, teams are
constantly changing. The typical mode of each developer keeping everything in her head or in what she believes is
well-documented code produces a tremendous liability. As people move from project to project and as priorities
shift and budgets are cut, the knowledge of the system goes away with the well-meaning people who last worked
on it.
Many of you are living with this problem today. Two years ago, when companies were throwing money at IT, you
had plenty of staff to cover each system. Since the summer of 2001 the situation has changed substantially.
Where you once had people dedicated to each system, you now have one person covering five systems that he's
never seen before. Without a description of the workings of the system, the only way to know how to make
changes is to read thousands of lines of code. While the staff is reading, it isn't making the badly needed changes.
When changes are made, no one can be certain that they are the right changes.
A system is a corporate asset and as such it must live beyond the participation of any one individual, and it must do
so without incurring the overhead of rewriting with every change in personnel.
Lack of control over the development process and product quality
Another consequence of developers keeping everything in their heads or in the code itself is the complete trust
demanded of management. Unless a manager is willing to learn and read the code herself, she cannot assess the
quality of the product. Testing is the standard response to this problem. But any programmer knows that it is easy
to get an application to pass the test suite. After all, in many cases it is the developer who writes the tests.
Consequently, the application works "just like I said it would." But does it work like it really should?
Quality is more than passing a test. Quality implies durability, flexibility, maintainability, low cost of maintenance,
and much more. Unfortunately, because code is difficult to understand and poorly documented, if documented at
all, over time the code becomes a collage of patches and add-ons in response to change requests. I suspect that if
you would poll your team and evaluate your own experience, you'd find that most rewrites of systems result in
radically reduced amounts of code yet increased functionality specifically because rewriting enables you to
eliminate all of those patches and focus back on the real requirements.
The foremost lesson that I hope you take away from this book is that every participant on a project has to go
through the same thought process to decide what code to write. Modeling is simply a visual and standardized way
to step through that thought process in a way that all participants can understand, verify, and discuss.
With the current modeling tools available, there is every opportunity to keep your models in sync with your code, or
even generate and maintain your code from the models, while never actually touching the code.
Changing requirements
The longstanding rift between IT project members and clients nearly always centers on requirements. In fact,
Capers Jones of Software Productivity Research, author of Software Systems Failure and Success, did a
study on litigation over software. Almost without exception, software litigation is based on a debate over
requirements. The clients claim that their requirements were not satisfied. IT says that the clients kept changing
their requirements. What is even more disturbing about the results of this study is the fact that all of the projects
studied were bound by contract. How much worse is the problem where there is no contract that formalizes the
requirements?
Without means to track requirements and means to trace those requirements into the finished product, there is no
way to find out where the problems crop up. Is the client really changing his mind? Did IT really fail to include
requirements? In my experience, I have found that when an analyst or designer does not have a disciplined
approach to gathering requirements, the project hinges on intangible and highly risky factors: the developer's
memory, work habits, personal life, communication skills, and his relationship with the client. If any of these factors
is less than ideal, the project is at risk.
Very often it is not true that the client changed her mind. Instead, the developer simply did not ask the right
questions, did not ask the right people, or simply did not challenge what the client was telling him. The developer
may be hurried and have no interview plan and no way to capture the client's knowledge except with notes and
nonstandard drawings, neither of which may be tested or traced.
A standardized approach to analysis and to capturing the work products of analysis can rapidly eliminate these
problems and just as rapidly elevate the skill level of the people using them. Using the same techniques over and
over, sharing the results with others using the same techniques, and using the work products to communicate with
all participants in the development effort fosters proficiency and facility and reduces the possibility of
misunderstandings and misrepresentations.
The debate over methods and tools
It is easy lately to get caught up in the feverish debate over development methodologies. Tool vendors know this
best because they have to support these warring factions. Every software development method has at least three
parts: a process, process management, and a vocabulary for expressing the work products of the process.
Process management follows many of the same principles used for managing other business processes. The
process itself is a still greater challenge. Software development is far too diversified to allow just one process.
Transaction-oriented systems simply are not the same as real-time systems, which are different still from game
software or e-commerce. Even the size of the specific project changes the process requirements.
What can be standardized is the vocabulary for expressing the work products of the process. UML offers a solution
for standardizing the way you describe your work products no matter what method you follow. It does not dictate
that you use all of the features or how you use them. But it does help ensure that we can all express our
knowledge and ideas in a consistent manner that we can all understand equally, which radically reduces the
learning curve and the time needed to share requirements and ideas about solutions.
This standardization has opened the door for a wealth of tools that automate the creation, maintenance, and
tracking of these work products. With the standard in place, the vendors have been able to focus more effort on
feature-rich development environments, customization, and integration with valuable technologies such as
database management systems, change management, integrated development environments (IDE), and
frameworks.
The demand for cost-effective and high-quality systems
A particular strength of modeling is that it reveals our assumptions. Without reading mountains of code we can
measure communication traffic, evaluate quality by assessing coupling and cohesion, and test our models before
we write faulty code. Modeling accomplishes these objectives by raising the level of abstraction. Years ago we
wrote Assembler code, just one level above machine language. Then we moved to third-generation languages
such as COBOL. The assembly programmers of the time thought COBOL just couldn't do what Assembler could
do and would eventually go away. They insisted that to write applications, you had to know how the machine really
works. How many of you (other than game programmers) write any Assembler today, let alone develop systems
with it? In contrast, how many hundreds of millions of lines of COBOL code are in production today?
The Assembler programmers were missing an essential fact in their machine-centric perspective. That is, the only
reason to write code is to solve a problem. The machine and the languages are tools to solve a problem, not the
end itself. Still today, when our focus is forced onto the technology (Java versus C++, JSP versus ASP, and so
forth) and away from the problem, we risk losing sight of our real purpose and the factors that define true success:
not whether the code is finished, but whether the application does what we need it to do and can survive the
inevitable changes.
UML is an attempt to express in precise, familiar, and consistent language everything needed to generate
complete, stable, maintainable systems. UML is a level of abstraction above the current programming languages,
just like COBOL was a level above Assembler. People like Leon Starr and Stephen Mellor have been generating
complete systems from models since the mid-1980s. I'm not talking about code generators that belched out
bloated low-performance code, or CASE tools that generate Java declarations but not the method body. I'm
talking about customized generation of the complete application, even generating a single model in multiple
implementation environments.
The question is not whether this is possible. The question is, when will we adopt modeling as the next generation of
coding? Oops, I tripped over my soapbox again.
How This Book Is Organized
This book is organized into Parts, which are groups of chapters that deal with a common theme. Here's what you'll
find:
Part I: An Introduction to UML
UML is actually the culmination of years of effort to isolate and standardize the tools used to express business and
software concepts. Chapter 1 explains the development of UML so that you can understand what it looks like today
and why. Other chapters introduce you to UML architecture, the UML diagrams and extensions, and the basic
concepts of object orientation that provide the foundation for the modeling concepts captured in UML diagrams.
Part II: Modeling Object Structure
UML defines a number of diagrams suited to capturing unique aspects of software requirements. One such aspect
includes definitions of the resources used by the application and the resources that make up the application itself.
Part II covers the Class and Object diagrams, as well as the Composite Structure diagram and collaborations,
including their structure and usage. The modeling elements explained include classes, attributes, operations,
associations, objects, links, inheritance, and patterns.
Part III: Modeling Object Interactions
Once the resources have been identified and you have scoped their purpose and role within the design, you need
to put the resources to work. Work implies cooperation. Cooperation requires communication. Part III presents the
many different interaction diagrams used to model how objects talk to one another when you run the application. It
also explains the use of the Statechart diagram to model the changes in an object over its lifetime. The modeling
elements explained include messages, events, and states.
Part IV: Modeling Object Behavior
It is one thing to say that a system or object does something. It is another thing entirely to explain how it is
supposed to do that something. Part IV explains how the UML Use Case diagram models the behavior of a system
from the perspective of the users, while the Activity diagram can model behavior at any level of abstraction from
workflow to method implementation. The modeling elements explained in this part include use cases, actors,
dependencies, activities, decisions, object flow, and partitions.
Part V: Modeling the Application Architecture
When you are ready to implement your system, or understand an existing implementation, you'll need a way to
model the software and hardware elements that make up the system configuration. Part V illustrates the use of the
Component and Deployment diagrams for modeling the implementation environment. The modeling elements
explained include packages, components and artifacts, nodes, interfaces, and ports.
Part VI: Bringing Rigor to the Model
Part VI takes you beyond the diagrams of the UML standard to the syntax and semantics for implementing
requirements regarding rules and behavior. The Object Constraint Language enables you to model the rules that
define the correctness of the relationships between model elements and the validity of values for a model element.
Action Semantics enable you to specify, in an implementation-language-independent manner, behaviors that are
ultimately implemented in code.
Part VII: Automating the UML Modeling Process
The UML standard has made it easier for modeling tool vendors to support the diagramming process and code
generation. Part VII offers a description of the capabilities of today's modeling tools. Beyond the tools, or rather
beneath the tools, lie the infrastructure elements that make using the models and exchanging them possible. This
part presents the XML Model Interchange (XMI) standard that helps make it possible to share models between
modeling tools and other XMI-compatible applications. Finally, it explains how the UML can be customized using
Profiles that tailor the features of the UML diagramming standard to specific industries or application types.
The Companion Web Site
The companion Web site for this book, located at http://www.wiley.com/compbooks/pender, includes the following
elements:
A list of UML resources so that you can keep up with the latest news on UML developments, tools, vendors,
and forums.
Modeling tool vendor links so that you can investigate the available tools.
Links to online courses about using UML.
The complete set of diagrams for the Ticketing System in PDF format and the original modeling tool files.
Conventions Used in This Book
Every chapter in this book opens with a quick look at what's in the chapter and closes with a summary. Along the
way, you also find icons in the margins to draw your attention to specific topics and items of interest.
Here's what the icons mean:
Cross-Reference: These icons point you to chapters or other sources for more information on the topic
under discussion.
Note: Notes provide extra information about a topic, perhaps some technical tidbit or background explanation.
Tip: Expert Tips offer ideas for the advanced user who wants to get the most out of UML.
Caution: Cautions point out how to avoid the pitfalls that beginners commonly encounter.
Since one purpose of this book is to highlight the changes between UML 1.4 and UML 2.0, I often use figures that
highlight the new or modified portions of the UML metamodel. Figure FM-1 is an example of a diagram that uses
gray shading to identify differences between the two versions of UML. The gray rounded rectangle is not part of the
UML notation. It is there only to help you quickly identify the details that have changed.
Figure FM-1: Using gray shaded rounded rectangles to highlight changes in the metamodel.
I use the same convention when I need to focus attention on a specific item in a diagram, especially when
explaining examples for concepts and notations, as shown in Figure FM-2. Here, as in the previous example, the
gray rounded rectangles are used only to help identify key elements of the figure. They are not part of the UML
notation.
Figure FM-2: Using gray shaded rounded rectangles to highlight elements of an illustration.
Part I: An Introduction to UML
In This Part
Chapter 1: What is UML?
Chapter 2: UML Architecture
Chapter 3: UML Diagrams and Extension Mechanisms
Chapter 4: Object-Oriented Concepts
Chapter 1: What Is UML?
Overview
The Unified Modeling Language (UML) has been formally under development since 1994. UML is a distillation of
three major notations and a number of modeling techniques drawn from widely diverse methodologies that have
been in practice over the previous two decades. During this time it has had an undeniable impact on the way we
view systems development. Despite early competition from existing modeling notations, UML has become the de
facto standard for modeling object-oriented software for nearly 70 percent of IT shops. UML has been adopted by
companies throughout the world, and today more than 50 commercial and academic modeling tools support
software and business modeling using UML.
UML enables system developers to specify, visualize, and document models in a manner that supports scalability,
security, and robust execution. Because UML modeling raises the level of abstraction throughout the analysis and
design process, it is easier to identify patterns of behavior and thus define opportunities for refactoring and reuse.
Consequently, UML modeling facilitates the creation of modular designs resulting in components and component
libraries that expedite development and help ensure consistency across systems and implementations.
Unlike previous methodologies, UML does not force you to change the way you work just to suit the demands of a
vendor or methodologist. It uses extension mechanisms to customize UML models to a particular application type or
technology. While the extension mechanisms are a bit limited today, they do provide substantial support for
tailoring UML to the needs of a specific project, whether the project's goal is a transaction-oriented application,
real-time or fault-tolerant system, or e-commerce or Web service, and regardless of the subject domain.
UML profiles collect predefined sets of extension mechanisms for a specific environment. For example, in the UML
specification itself you will find profiles for J2EE, COM, .NET, and CCM development. Each profile provides
customized modeling elements that map to the common elements and features in each of these architectures.
This approach enables the modeler to focus time and energy on the project content instead of the unique
modeling features of the implementation domain.
The standardized architecture of UML is based on the Meta-Object Facility (MOF). The MOF defines the
foundation for creating modeling languages used for object modeling, such as UML, and for data modeling, such
as the Common Warehouse Model (CWM). The MOF defines standard formats for the key elements of a model
so that they can be stored in a common repository and exchanged between modeling tools and languages. XML
Metadata Interchange (XMI) provides the mechanism to implement the sharing of these modeling elements
between modeling tool vendors and between repositories. This means, for example, that a project can use one
tool for developing a platform-independent model (PIM) using UML diagrams and switch to another tool to refine
the model into a platform-specific model (PSM) using a CWM model to generate the database schemas. This
standards-based approach places the choice of tools in the hands of the modelers instead of the tool vendors.
UML models can be precise enough to generate code or even the entire application. Automated test suites can
verify the accuracy of the model. When coupled with tools to compile the UML model, the model can even be
executed before any code exists. Vendors (for example, Kabira Technologies http://www.kabira.com and Project
Technology, Inc. http://www.projtech.com) are already providing compilers that are being used in projects today. A
fully executable UML model may be deployed to multiple platforms that each use different technologies. A model
might be deployed in one place using one language, middleware, and database configuration, and at another
location with an entirely different configuration. The mapping of the model to an implementation configuration is
accomplished using a profile, a separate layer that maps between the model and the
implementation environment. To use the model in other implementation environments, simply create a new profile.
Thus the UML profile represents a level of indirection between the model and the implementation environment,
freeing each to be created independently of the other.
All of these features didn't appear overnight. A great deal of collaborative effort was invested to create the current
standard, and not without conflict. You may have already heard people taking sides on a variety of modeling issues
and practices that arise when they try to use UML. To clarify some of the reasons behind these debates, I begin
with a brief history explaining how UML first came to be. If you can understand the process behind the ongoing
development of the standard, you will be better equipped to follow the changes between version 1.4 and the new
developments in version 2.0 described throughout the rest of this book. Even more important, you need to
understand how UML fits into the much larger plan by the Object Management Group (OMG) to standardize
systems development with Model-Driven Architecture (MDA).
Since UML 2.0 is pending as of the writing of this book, I have included both UML 1.4.1 and UML 2.0. I hope this
will help those of you who might be using modeling tools based on UML 1.4.1 until you are able to upgrade. At the
same time it should give you some insights for evaluating either your existing vendor's implementation of UML 2.0
or other new modeling products. For a more complete explanation of this approach, refer to "How to read this
book" in the Preface.
Note: When the OMG added Action Semantics to the 1.4 specification, it originally called it "UML 1.4 with
Action Semantics." It later appeared on the OMG site as UML 1.5. So don't be surprised if I accidentally
bounce between references to 1.4 and 1.5.
The rest of this chapter discusses
The history of the UML through version 1.4
The goals, scope, and features of the UML
The objectives of UML 2.0
The role of the Object Management Group (OMG)
How UML fits into the bigger picture: The OMG's Model-Driven Architecture (MDA) initiative
Understanding the History Behind UML
UML is designed specifically to represent object-oriented (OO) systems. Object-oriented development techniques
describe software as a set of cooperating blocks of information and behavior. For example, a performance at a
theater would be coded as a discrete module with its own data about dates and time, and behavior such as
schedule, cancel, or reschedule all rolled together. This was a stark departure from the old notion that data
resides in files and behavior lives in programs.
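The theater-performance idea can be sketched in a few lines of Java. This is only my illustration of the principle, not code from the book; the class and method names are invented.

```java
import java.time.LocalDateTime;

// A sketch of the theater-performance idea: the object's data (its date
// and time) and its behavior (schedule, cancel, reschedule) live together
// in one class. All names here are illustrative.
class Performance {
    private LocalDateTime dateTime; // data owned by the object
    private boolean scheduled;      // current lifecycle state

    public void schedule(LocalDateTime when) {
        this.dateTime = when;
        this.scheduled = true;
    }

    public void cancel() {
        this.scheduled = false;
    }

    public void reschedule(LocalDateTime newTime) {
        if (!scheduled) {
            throw new IllegalStateException("cannot reschedule a cancelled performance");
        }
        this.dateTime = newTime;
    }

    public boolean isScheduled() { return scheduled; }
    public LocalDateTime getDateTime() { return dateTime; }
}
```

Contrast this with the pre-object-oriented style described above, where the dates would sit in a data file and the scheduling logic in a separate program.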
This simple idea, the combining of data and behavior into objects, had a profound effect on
application design. As early as the 1970s, a number of methods were developed to exploit the new object-oriented
(OO) programming concepts. Developers quickly recognized that object orientation made possible a development
process in which the way that they talk about the application corresponds directly to how they code it. They also
found that it was relatively easy to draw (model) the objects so that they could talk about the design. Each object
was represented as an element on a diagram. Because the model elements were almost identical to the code
elements, the transition from model to code was simple and efficient. Moving design discussions up to the models
instead of the code helped the developers deal with design issues at a high level of abstraction without getting
caught up in the coding syntax.
Early Modeling Methodologies
Software developers weren't the only people who discovered the benefit of modeling. Other engineering disciplines
such as database management and design were also creating modeling techniques such as Entity Relationship
modeling (ER diagrams) and Specification and Description Language (SDL). It quickly became clear that modeling
provided a way to cope with complexity, encourage collaboration, and generally improve design in all aspects of
software development.
The need for modeling solutions increased with the growth in numbers and sophistication of software systems.
Systems were growing rapidly in complexity and required more and more collaboration and solid, durable design
quality. Modeling had proven itself in exactly these circumstances. Literally hundreds of people sprang to work
developing modeling methodologies to solve the growing problem. But the resulting proliferation of solutions
caused some problems. The widely diverse efforts were inefficient in that they lacked the necessary collaboration
to produce results that could be widely applied by the IT community. In fact, the diversified approach resulted in
what were affectionately called the "method wars", battles between method authors with their loyal followers pitted
against one another over who had the best solution. Authors of each method vied for support for their methods.
Tool vendors labored to support many different notations in the same tool. Companies struggled to identify and
follow a single "best" method, train their people, and support the method only to find that no one method could
fully meet their needs.
The proliferation of isolated solutions and the associated battles were signals that the need for a comprehensive
solution for software modeling was a priority. The solution needed to be flexible, scalable, secure, and robust
enough to handle the diverse software and business environments of the present and the future.
The Creation of UML
By the early 1990s, a few leaders had emerged from the field of methods and notations. Object-Oriented Software
Engineering (OOSE), developed by Ivar Jacobson, is built around the use-case concept, which proved itself by
achieving high levels of reuse through better communication between projects and users, a key success factor for
IT projects. James Rumbaugh developed the Object-Modeling Technique (OMT) with an emphasis on the analysis
of business and data intensive systems for defining a target problem, a second key success factor for IT projects.
The Booch method, developed by Grady Booch, had particular strengths in design and implementation, defining
and mapping a solution to the target problem, a third key to successful IT projects. These significant contributions
are like the legs on a three-legged stool: the combination of the three methods and their notations supported the
entire range of requirements needed to create a single, comprehensive software-modeling standard.
It is important to point out that many other methods provided some of the same three key factors. The difference is
that they did not aggressively seek to combine their efforts to address the bigger picture, a standards-based
approach to modeling software. In October 1994, Grady Booch and Jim Rumbaugh, working at Rational Software
Corp., started merging their two methods. The independent evolution of their two products was bringing the
methods closer together anyway; Booch was adopting more of an analysis focus and Rumbaugh was assuming
more of a design focus. Now the deliberate reconciliation began in earnest. The effort resulted in a greatly
simplified notation and a deliberate effort to address the need for a true language architecture rather than simply a
notation. An architectural approach would bring the needed semantic integrity and consistency for a durable
standard.
A year later, in the fall of 1995, Booch and Rumbaugh had completed the first draft of the merged method referred
to as Unified Modeling Language version 0.8. About the time that the draft was completed, Ivar Jacobson and his
company, called Objectory, joined Rational Software Corp., and the "three amigos"-Booch, Rumbaugh, and
Jacobson-began integrating OOSE into the UML standard. The use-case concept brought to UML the essential
user-centric elements that completed the range of features to make UML the comprehensive standard that it
needed to be to gain wide acceptance.
Booch, Rumbaugh, and Jacobson established four goals for the Unified Modeling Language:
1. Enable the modeling of systems (not just software) using object-oriented concepts
2. Establish an explicit coupling to conceptual as well as executable artifacts
3. Address the issues of scale inherent in complex, mission-critical systems
4. Create a modeling language usable by both humans and machines (UML 1.4, pgs. 1-12,13)
The result of the collaborative effort of the three amigos was the release of UML versions 0.9 and 0.9.1 in the fall
of 1996. However, despite the fact that they sought feedback from the development community, they recognized
the need for broader involvement if the UML was truly to be a standard.
Enter the Object Management Group (OMG), the standards body that brought us CORBA, Interface Definition
Language (IDL), and the CORBA Internet Inter-ORB Protocol (IIOP). By this time UML was being recognized as
vital to the goals of many companies. It was in their best interest to see that the standard got the support it needed
to be completed. In response to this overwhelming need, the OMG published a Request for Proposal (RFP), and
then the Rational Software Corporation created the UML Partners consortium, which was committed to finishing
what the three amigos had started. Contributing members of the consortium included a mix of vendors and system
integrators: Digital Equipment Corporation, HP, i-Logix, IntelliCorp, IBM, ICON Computing, MCI Systemhouse,
Microsoft, Oracle, Rational Software, TI, and Unisys. The result of their efforts was published in January 1997 as
UML 1.0.
At the same time, another group of companies (IBM & ObjecTime, Platinum Technologies, Ptech, Taskon & Reich
Technologies, and Softteam) was working on and submitted another proposal for UML. And, exemplary of the
UML history, the alternative proposal was viewed not as competitive but as collaborative. The new team joined the
UML Partners consortium and the work of the two groups was merged to produce UML 1.1 in September 1997.
Since then, the OMG has assumed formal responsibility for the ongoing development of the standard, but most of
the original consortium members still participate.
In reading this brief history you've probably noticed that this all happened pretty fast. The drive to deliver the final
version so quickly had its consequences. While the architecture infrastructure and even the superstructure were
relatively well defined, some problems remained. For example, the Activity diagram did not have the ties to the
state machine semantics required to support all of the features and notations needed for real business modeling.
Also, many of the Standard Elements were added hastily and had not been fully defined. Most important, the
meta-modeling approach fell short of the desired implementation, making it difficult to align UML with the Meta-
Object Facility (MOF), a foundation technology in the OMG's MDA strategy. Fortunately, the standard is still
evolving.
The OMG set up a Revision Task Force (RTF) to oversee the ongoing evolution of the UML standard. The RTF is
responsible for addressing all questions, changes, and enhancements to UML and for publishing subsequent
releases. To date, the RTF has taken up more than 500 formal usage and implementation issues submitted to the
OMG for consideration. In fact, you can submit your own suggestions and comments on existing issues to
uml-rtf@omg.org.
The standard has since progressed through version 1.3 (1.2 was a purely editorial revision) and on to version 1.4.
The most recently adopted specification (September 2002) is version 1.4.1 with Action Semantics, which, as the
name implies, added action semantics to the 1.4 specification. Action Semantics is a critical element in the creation
of executable UML models.
Cross-Reference: To learn more about Action Semantics, refer to Chapter 19.
The Goals and Features of UML
UML is designed to meet some very specific objectives so that it can truly be a standard that addresses the
practical needs of the software development community. Any effort to be all things to all people is doomed to fail,
so the UML authors have taken care to establish clear boundaries for the features of the UML.
The next sections explain the objectives and scope of the UML and the fundamental features it provides, and
discuss the role of the OMG in the management and ongoing development of the UML as part of its MDA
strategy.
The goals of UML
The OMG knows that the success of UML hinges on its ability to address the widely diverse real-world needs of
software developers. The standard will fail if it is too rigid or too relaxed, too narrow in scope or too all-
encompassing, too bound to a particular technology or so vague that it cannot be applied to real technologies. To
ensure that the standard will, in fact, be both practical and durable, the OMG established a list of goals.
UML will
Provide modelers with a ready-to-use, expressive, and visual modeling language to develop and exchange
meaningful models.
Furnish extensibility and specialization mechanisms to extend the core concepts.
Support specifications that are independent of particular programming languages and development
processes.
Provide a formal basis for understanding the modeling language.
Encourage the growth of the object tools market.
Support higher-level development concepts such as components, collaborations, frameworks, and patterns.
(UML 1.4 specification)
Each of these goals is discussed in detail in the following sections.
Goal 1: Provide modelers with a ready-to-use, expressive, and visual modeling
language to develop and exchange meaningful models
UML must be defined at a level that allows it to be used as-is off the shelf. Modelers should be able to start
building diagrams without first customizing the notation to their development environment, programming language,
or application. The modeling language should work equally well for Java and C++, for accounting and aviation.
To accomplish this, the standard has to define the semantics of the modeling language as well as the visual
representation of the language. Semantics provide the rigor that ensures the consistent application of the models
and model elements. A consistent visual representation of the model elements facilitates adoption and use of the
modeling technique.
The standard also must be comprehensive but not exhaustive. It must include all the core modeling elements
common to most, not all, software projects. If it is not complete, modelers will not be able to use it without
customization. If it is exhaustive-well, it just can't be. Instead, the OMG adopted the second goal.
Goal 2: Furnish extensibility and specialization mechanisms to extend the core
concepts
In overly simplified terms, the core concepts should represent the old 80/20 rule. We should be able to build 80
percent of the systems out there with 20 percent of the conceivable concepts. When these core concepts are not
enough, there should be a way to build on them to get what we need.
Wherever possible a modeler should not have to invent entirely new concepts. Users should be able to use
concepts already defined by UML. There are at least three ways that UML enables modelers to create new model
elements:
The core defines a number of fundamental concepts that may be combined to create the new concept.
The core provides multiple definitions for a concept.
UML supports the ability to customize a concept by specializing one or more of its definitions. (To specialize
means to use an existing definition and then override and/or add elements to it.)
A UML-defined solution for wholesale extensibility is a profile. A profile is basically an implementation of UML for a
specific domain, such as a particular technology platform or a specific line of business. A profile predefines a set of
model elements that are unique or simply common to the target environment. In this manner, profiles tailor the
modeling elements so that the modeler can represent his environment more accurately than is possible with
generic UML but without losing any of the semantic clarity of UML concepts.
Goal 3: Support specifications that are independent of particular programming
languages and development processes
One very valuable reason for modeling is to separate the requirements from the implementation. Tying UML to a
particular language automatically alienates everyone not using that language. An implementation also ties UML to
a point in time. For example, when the programming language changes, UML becomes obsolete until it can be
brought up to date.
However, UML must map to the common object-oriented design constructs defined in most OO languages. This
alignment will support code generation and reverse engineering, the integration of the modeling and coding
environments. But rather than alter UML to conform to languages, the mapping is accomplished through profiles
that define the relationships between the model elements and the implementation constructs. Using a separate
mapping layer effectively decouples, or separates, UML from the implementation languages, allowing both to
evolve at their own pace.
Goal 4: Provide a formal basis for understanding the modeling language
The language must be defined at a level that is precise yet accessible. Without precision, the models do not help
define a real solution. Without accessibility, no one will use it. The UML standard uses Class diagrams to represent
the formal definitions of objects and their relationships. Each Class diagram is supplemented with text detailing the
semantics and the notation options. The constraints that define the integrity of the model elements are expressed
using Object Constraint Language (OCL). (See Chapter 18.)
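To give a taste of what such a constraint looks like, here is a small OCL invariant of the kind Chapter 18 covers in detail. The class and attribute names are hypothetical, invented for this illustration rather than taken from the UML specification.

```
-- Hypothetical invariant: a Performance may never sell more seats
-- than its theater holds.
context Performance
inv seatsWithinCapacity:
    self.seatsSold <= self.theater.capacity
```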
Goal 5: Encourage the growth of the object tools market
The modeling tool market is dependent on a unified standard for modeling, for the model repository, and for model
interchange. To the extent that vendors can rely on a stable standard, they can quickly and effectively implement
all three fundamental tool features. As the vendors' cost to provide the core functionality decreases, vendors are
freed to pursue value-added modeling-environment enhancements such as integration with coding environments,
database management tools, syntax checking, model verification, and more.
We are seeing the effect of the standard today. The number of tools has mushroomed, and the feature sets
offered in the tools have exploded. Where tools used to focus almost exclusively on just being able to draw
diagrams, today they are performing syntax checking of OCL statements, diagram synchronization, code
generation and reverse engineering, importing from various other tools, exporting HTML or XML reports,
supporting integration with one or more coding environments, and much more.
Goal 6: Support higher-level development concepts such as components,
collaborations, frameworks, and patterns
The standard needs to support the modeling of higher-level concepts such as frameworks, patterns, and
collaborations. Doing so supports the advancement of modeling and systems development. By ensuring this future
potential, UML becomes an asset that facilitates technological evolution rather than being one more legacy that
has to be dragged into the future with all the other old technologies.
The scope of the UML
UML is designed to be the merging of best development practices and the leading modeling concepts of the past
30 years. There is also a deliberate effort to take into account the fact that development technologies and
techniques are always changing.
With such an ambitious goal, it would be easy to fall into the trap of making UML define everything about the
software development process: modeling, development methodology, project management, systems integration, and
so forth. So the first and most visible boundary established by the OMG was to define only the
modeling language, including the semantics and the notation for creating models. Therefore, UML defines only the
modeling elements used to describe the artifacts of software development. It does not describe any process for
creating those artifacts. In fact, the intent of the standard is to create a language that may be used with any
process, much like you could hand someone a hammer and say, "Hang this picture" or "Build a house." The same
tool may be used for very different tasks. In the same manner, UML might be (and is) used with the Rational
Unified Process, Shlaer/Mellor, Agile Modeling, or any number of proprietary methodologies.
UML also says nothing about programming languages. The object-oriented concepts applied in modeling are the
same concepts applied in OO programming languages, but the relationship begins and ends with this common
foundation. For example, Java does not support multiple inheritance but UML does. Neither Java nor UML is going
to change because of this inconsistency. They each have their own goals and audiences that drive their choices
regarding how to support OO concepts. Again, the UML standard cannot be tied to a particular technology without
losing its ability to keep pace with advancements in technology.
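To make the multiple-inheritance difference concrete, here is a small Python sketch. The class names are my own invention, not part of any standard; the point is simply that a UML generalization to two superclasses, which Java cannot express directly, maps naturally onto a language that does support it:

```python
# Hypothetical classes: a UML model may declare AmphibiousVehicle as a
# specialization of both LandVehicle and WaterVehicle. Java cannot express
# this directly; Python can, which makes the UML semantics concrete.

class LandVehicle:
    def drive(self):
        return "driving"

class WaterVehicle:
    def sail(self):
        return "sailing"

class AmphibiousVehicle(LandVehicle, WaterVehicle):
    """Inherits features from both parents, as a UML generalization
    to two superclasses would specify."""
    pass

duck_boat = AmphibiousVehicle()
print(duck_boat.drive(), duck_boat.sail())  # driving sailing
```

A Java design would instead approximate the second parent with an interface, which is exactly the kind of platform-specific decision UML deliberately leaves open.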
Finally, UML does not seek to usurp other modeling techniques such as Business Process Re-engineering (BPR)
flowcharts or entity-relationship modeling. However, it has proven itself to be robust enough to bring added
precision, comprehensiveness, and flexibility to the same modeling domains. The infrastructure of UML (covered
later in this chapter) is actually designed to be the basis for defining any number of modeling languages. In the
event, and to the extent that, these other modeling techniques conform to the infrastructure, it will be possible to
exchange model elements between the various techniques. For example, a UML model could be input to an entity-
relationship model and vice versa. In fact, this is already implemented in some tools.
Features of UML
In addition to the set of diagrams defined in the UML specification, UML provides a set of features that derive from
a variety of sources but have all proven valuable in real-world modeling:
Extensibility mechanisms (stereotypes, tagged values, and constraints): No standard, language, or tool
will ever be able to address 100 percent of the users' needs. Trying to deliver such a tool would result in a
never-ending project without any product. Instead, UML authors focused on a core set of functionality and
features. Then they added a set of mechanisms (stereotypes, tagged values, and constraints) that may be used
to augment or tailor the core concepts without corrupting the integrity of those core concepts. Applying a
stereotype to a model element is like getting dressed up for a special occasion. You remain the same person
regardless of what you wear, but the attire helps you fit into the particular situation.
Stereotypes help identify the role of an element within the model without defining or altering its fundamental
purpose or function. Stereotypes for model elements work much the same as stereotypes for business
descriptions; that is, you might point out a company and identify it as an accounting firm, a shoe distributor, or
a grocery store. But it is still a company. Tagged values provide the means to add new model elements that
hold values; for example, author="Tom Pender". Constraints allow you to define rules regarding the
integrity or use of a model element, such as: the attribute "name" must be between 1 and 40 characters,
including spaces and punctuation, but no special characters. UML added OCL (see Chapter 18) for formally
specifying constraints.
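The three mechanisms can be sketched in a few lines of Python. This is a toy illustration, not the UML metamodel; the ModelElement class and its fields are my own invention, chosen to mirror the examples just given:

```python
# Toy sketch of the three UML extension mechanisms.

class ModelElement:
    def __init__(self, name, stereotype=None, tagged_values=None):
        self.name = name
        self.stereotype = stereotype              # role label, e.g. "entity"
        self.tagged_values = tagged_values or {}  # e.g. {"author": "Tom Pender"}
        self.constraints = []                     # rules the element must satisfy

    def add_constraint(self, predicate):
        self.constraints.append(predicate)

    def is_valid(self):
        return all(check(self) for check in self.constraints)

# The "name must be 1 to 40 characters" constraint from the text:
customer = ModelElement("Customer", stereotype="entity",
                        tagged_values={"author": "Tom Pender"})
customer.add_constraint(lambda e: 1 <= len(e.name) <= 40)
print(customer.is_valid())  # True
```

Note how the stereotype and tagged values decorate the element without changing what it fundamentally is, while the constraint guards its integrity.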
Threads and processes: Threads and processes are an increasingly common aspect of applications. UML
supports the modeling of threads and processes in all of the behavioral models, including the enhanced
Activity diagram. (See Chapters 8 through 13.)
Patterns and collaborations: In recent years developers have come to appreciate more and more the value
of designs based on proven solutions. Patterns and collaborations allow the modelers to define standard
approaches to solving common problems. A pattern may then be applied to a variety of specific situations,
bringing with it a combination of predefined roles and interactions. Patterns and collaborations may be
identified and defined at many levels of abstraction, cataloged, and documented for others to use. This
approach brings reuse out of the realm of pure code and into every phase of the modeling effort, from
requirements and architecture through implementation.
Activity diagrams (for business process modeling): For years business and technical staff have relied on
the flowchart. UML renamed the flowchart to Activity diagram. The Activity diagram is a simple yet effective
tool to model logic. Logic appears throughout the development process in workflow, method design, screen
navigation, calculations, and more. The value of the Activity diagram should not be overlooked, so it has been
incorporated into the UML standard since the earliest versions. To bring it up to date, it has recently been
enhanced with its own semantics, distinct from state machines, to represent control flow and/or object flow.
Refinement (to handle relationships between levels of abstraction): Many concepts, such as classifiers
and relationships, permeate all layers of systems development, and the semantics for these concepts hold
true regardless of the business or technical environment. Each abstraction layer adds to, customizes, and
otherwise refines the original definition. This approach supports and in some ways encourages the
development of varying applications of the concepts at each new level of abstraction. The result of this
approach has been the development of an increasingly holistic set of models for systems development, all
founded on the same conceptual standard, but each tailored to a unique perspective.
Interfaces and components: One advantage of modeling is the ability to work at different levels of
abstraction instead of always working at the code level.
Interfaces and components allow the modeler to work on a problem by focusing on the connectivity and
communication issues that can help solve that problem. The implementation or even the internal design of a
component can be ignored temporarily until the bigger issues of policy, protocol, interface, and communication
requirements are resolved. Working at this higher level of abstraction produces a model that later can be, and
often is, implemented in multiple environments.
Constraint language: The Object Constraint Language (OCL) provides the syntax to define rules that ensure
the integrity of the model. Much of the constraint concept is borrowed from programming by contract, in which
relationships between model elements are defined in terms of the rules that govern an interaction. When two
parties enter into a contract, the terms of the contract place obligations on the client (the person asking for a
product or service) and the supplier (the one providing the product or service). Constraints called pre-
conditions define what the client must do in order to have the right to receive the product or service.
Constraints also define the obligations the supplier must fulfill if the client fulfills her part. These constraints are
called post-conditions or guarantees. Constraints can also apply to individual elements to define the domain of
valid values. (See Chapter 18.)
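The contract idea is easy to see in code. Here is a minimal design-by-contract sketch in Python (the function and its rules are hypothetical, standing in for a constraint a modeler might state in OCL): pre-conditions state what the client owes the supplier, and post-conditions state what the supplier guarantees in return.

```python
# Minimal design-by-contract sketch using plain assertions.

def withdraw(balance, amount):
    # Pre-conditions: obligations on the client making the request.
    assert amount > 0, "pre-condition violated: amount must be positive"
    assert amount <= balance, "pre-condition violated: insufficient funds"

    new_balance = balance - amount

    # Post-condition: the supplier's guarantee, given the client complied.
    assert new_balance >= 0, "post-condition violated: balance went negative"
    return new_balance

print(withdraw(100, 40))  # 60
```

If the client violates a pre-condition, the supplier owes nothing; that division of obligation is exactly what OCL pre- and post-condition constraints capture on a model element.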
Action semantics: The goal of UML has always been to model software as accurately as possible. Modeling
software means modeling behavior. The action semantics extensions enable you to express discrete behaviors
as actions. Actions can transform information and/or change the system. Furthermore, UML models actions as
individual objects. As such, actions may execute concurrently. In fact, that is their normal mode of execution
unless chained together to enforce sequential execution. Settling on concurrent execution as the norm better
supports today's distributed environments. Action semantics is also a major contribution toward executable
UML. (See Chapter 19.)
Introducing UML 2.0
The next version of UML, 2.0, is due to be released sometime in 2003. Three proposals have been submitted. I
have based the content of this book on those submissions and my expectation that they will be adopted in whole or
in part. Version 2.0 is a substantial improvement of the underlying architecture, cleaning up many of the
fundamental definitions and improving the alignment with the other key technologies sponsored by the OMG.
I've outlined some of the specific objectives for version 2.0 set forth in the RFP. I don't expect beginners to UML to
understand them from these very brief descriptions. The rest of this chapter explains many of the new terms. The
rest of the book is devoted to explaining how these concepts have been addressed in the diagrams and in the
semantics that support the diagrams. For those of you who have been working with UML for a while, these items
should demonstrate the OMG's commitment to the long-term success of UML.
Improve the architecture: Rework the physical metamodel so that it is more tightly aligned with the MOF meta-
metamodel. Improve the guidelines that establish what constructs should be defined in the kernel language and
what constructs should be defined in UML profiles or standard model libraries. (See Chapter 2.)
Provide improved extensibility: Enhance the extensibility mechanisms to align them more closely to a true "four-
layer architecture." Profiles provide much of the customization support, at least in concept. But the extensibility
features used to create them (stereotypes, tagged values, and constraints) are still rather low-level. UML
extensibility features should align more closely with the MOF extensibility features, that is, metaclasses. (See
Chapters 2 and 3.)
Improve support for component-based development: Current technologies such as EJB and COM+ require a
means to model and manage component-based designs. The current semantics and notation are not quite up to
the task. (See Chapters 15 through 17.)
Improve the modeling of relationships: Improve the semantics for refinement and trace
dependencies. Today it is difficult to support refinement of the models through the life cycle of a project, that is,
analysis to design or design to implementation. (See Chapter 6.)
Separate the semantics of statecharts and activity graphs: The initial UML specification tried to define activity
graphs as a specialization of a statechart. The overlap has created obstacles to business modeling and has
prevented the addition of valuable business modeling features. Support more relaxed concurrency in both
diagrams. Support specialization of state machines. (See Chapters 11 and 13.)
Improve model management: Update the notation and semantics for models and subsystems to improve support
for enterprise architecture views.
General mechanisms: Define support for model versioning.
The Object Management Group
The organization responsible for developing the UML goals described previously is the Object Management Group
(OMG). The OMG is the official steward of the UML standard. This is not simply because the OMG likes standards
or likes to take on work, but because it is the driving force behind a much larger plan for software development
called Model-Driven Architecture (MDA). MDA is a genuinely ambitious effort to standardize systems development.
The goal is to create a complete standard for the creation of implementation-independent models that may be
mapped to any platform, present or future. Did I say it was ambitious?
UML plays an integral role in the development and use of the MDA approach. UML is the language used to
describe the key standards of MDA, namely UML itself, the Meta-Object Facility (MOF), and the Common
Warehouse Metamodel (CWM). UML is also used to create the work products of the MDA process, specifically the
business and implementation models.
Model-Driven Architecture (MDA)
Developers usually find that there is a division in most applications between the business logic and the
implementation mechanisms to support that logic. For example, selling tickets to a performance is a business
practice that could be implemented using any of dozens of technologies and techniques. But no matter how it is
implemented, there are fundamental rules that must hold true. If the rules for conducting the business transaction
are bound to specific implementation technologies, then changes in those technologies require changes to the
rules of the transaction. Such changes incur the risk either of corrupting the transaction or of causing delays while
you untangle the business from the technology. This makes even changing the application to take advantage of
technological advancements a risk to the business.
Model-Driven Architecture (MDA) separates the two fundamental elements of an application into two distinct
models. The platform-independent model (PIM) defines business functionality and behavior, the essence of the
system apart from implementation technologies. The platform-specific model (PSM) maps the PIM to a specific
technology without altering the PIM. That last phrase is critical. Defining a PIM is like defining the job description
"bookkeeper". We can define the purpose, responsibilities, qualifications, and skills for the job, that is, the PIM,
without knowing who will actually do the job. The PSM corresponds to hiring someone to do the job defined by the
bookkeeper job description.
This example highlights the power of the MDA approach. I can hire different people over time. I can even hire
multiple people at the same time to perform the bookkeeping duties. I can even take the job description to another
company and use it there. In the same manner, I should be able to take the same PIM and deploy it in many
technologies or even in different businesses.
The division of the two models also supports interoperability. A business function does not need to know the
implementation of another business function in order to access it. The interface is defined in PIM fashion. The PSM
takes care of the mapping to the implementation. So the calling function is unaffected by changes to the
implementation of the called function. In the bookkeeper example, the bookkeeper PIM/job description can define
an interface to the general ledger. Where or how the general ledger is implemented is irrelevant. The interface is
always the same. Whether I implement the bookkeeping system at a department store, software consulting firm, or
insurance company, and whether the system is implemented in .NET or Java, the interaction between the
bookkeeper and the general ledger is the same. So, as technologies change over time, as they inevitably will, the
business remains stable and relatively unaffected by the changes.
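The bookkeeper analogy translates directly into code. Here is a small Python sketch of the PIM/PSM split; all the names are hypothetical, invented to mirror the example, with the abstract interface playing the role of the job description (PIM) and each concrete class playing the role of a hire (PSM):

```python
# Sketch of the PIM/PSM split: a platform-independent interface plus a
# platform-specific binding that never alters the interface.
from abc import ABC, abstractmethod

class GeneralLedger(ABC):                # PIM: the "job description"
    @abstractmethod
    def post(self, account, amount): ...

class InMemoryLedger(GeneralLedger):     # one PSM: a specific "hire"
    def __init__(self):
        self.entries = []
    def post(self, account, amount):
        self.entries.append((account, amount))

def bookkeeper_posts_rent(ledger: GeneralLedger):
    # The bookkeeper is written against the PIM; any PSM will do.
    ledger.post("rent", 1200)

ledger = InMemoryLedger()
bookkeeper_posts_rent(ledger)
print(ledger.entries)  # [('rent', 1200)]
```

Swapping InMemoryLedger for a database-backed or remote implementation leaves bookkeeper_posts_rent untouched, which is the interoperability claim in miniature.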
This same concept applies to functions that should be globally available to systems such as transaction
management, domain specific services, and application services. Having learned from its experience with the
CORBA-based Object Management Architecture, OMG recognizes the need for three levels of MDA-based
specifications built on the standardized technologies already defined by the OMG (see Figure 1-1):
Pervasive Services include security, transaction management, directory support, and event generation and
handling common to most systems.
Domain Facilities include standardized models for subject areas such as telecom, space sciences,
biotechnology, and finance.
Applications are within a domain, such as a heart monitor in the biotechnology domain or a funds transfer
application in a financial domain.
Figure 1-1: The Model-Driven Architecture.
The core of the MDA is the set of standards (MOF, UML, CWM, and XMI) and technologies (CORBA, .NET, Java,
and so on). The pervasive services are built on that core. Then based on these standards and services, businesses
can build domain specific profiles for finance, e-commerce and so on. Within each domain, businesses may then
build specific applications that conform to the supporting standards.
The Pervasive Services
The features included in the Pervasive Services level are those commonly found in the existing list of CORBA
services:
Directory services
Transactions
Event handling/notification
Security
The list is sure to grow in time with additions from the OMG itself, based on CORBA, and from OMG members.
Work has already started on mapping these services to PIMs so that they can be applied to all platforms through
the MDA development approach.
Domain Facilities
A domain is simply a subject area, such as warehousing and distribution, or biotechnology. Each domain has its
own peculiar set of problems and concepts. For example, there are many banks but they all conduct the same
type of business. The resources they use, the behaviors they support, and even many of the regulations that
govern their performance are the same. The differences arise in how they choose to embellish the basic business
to appeal to their customers and to improve profitability. A domain is the description of the fundamental elements
common to all systems in the same subject area. The uses of those fundamental elements define the applications
within the domain. These applications are covered in the next section.
Work has already begun on a number of domain models. Even though MDA-based standards for specific domains
are still under development, OMG Domain Task Forces (DTF) have started to apply MDA to existing projects. For
example, OMG's Life Science Research DTF, working in biotechnology, has already modified its Mission and
Goals Statement to reflect its work in MDA. In mid-2000, even before MDA, OMG's Healthcare DTF (formerly
known by its nickname, CORBAmed) published its Clinical Image Access Service (CIAS) (http://www.omg.org/cgi-
bin/doc?dtc/01-07-01) including a nonstandard UML model that describes the specification written in OMG IDL.
The document provides a good example of what a future MDA specification might look like.
Note: In a true MDA specification, the model follows the UML standard and is fully developed, defining all
interfaces and operations including parameters and types, and specifying pre- and post-conditions in
OCL.
MDA Success Stories
Companies that have applied or are applying MDA include:
Regions Bank of Birmingham, Alabama
Swedish Parliament
Deutsche Bank Bauspar AG
U.S. Government Intelligence Agency
The Open System Architecture for Condition Based Monitoring (OSA-CBM) Project
CGI
ff-eCommerce
Swisslog Software AG
Adaptive; Adaptive Framework
Financial Systems Architects
Headway Software; Headway review
IKV++ GmbH; m2c(tm)
Applications
For years, businesses have started projects by modeling the business application requirements. As the projects
proceeded, they fell deeper and deeper into implementation-dependent modeling, often losing sight of the original
business requirements in the midst of the overwhelming task of working with ever-changing implementation
technologies. As MDA-based development tools become more widely available, projects can be focused more on
the platform independent model of the business requirements. In fact, the focus throughout the project will remain
on the original requirements while the implementation becomes more and more automated through the
application of platform specific models.
Lest you think that this is a pipe dream, take a look at the list of companies in the sidebar who are already using
this technique successfully. Many more companies are listed at http://www.omg.org/mda/products_success.htm.
Meta-Object Facility (MOF)
The Meta-Object Facility (MOF) is at the heart of the MDA strategy along with the UML, CWM, CORBA, and XMI. It
is the starting point, the standard that defines the languages used to describe systems and MDA itself. The MOF is
a metamodel (often called M2), a model defining the concepts required to build a model and to store the model in
a repository. The model is stored by representing the metadata as CORBA objects.
Cross-Reference: Models, metamodels, and meta-metamodels are more fully explained in Chapter 2.
Currently the MOF defines all the foundation concepts needed to build the two modeling languages UML and
CWM. Now just to make this a little more confusing, both UML and CWM are themselves metamodels. They are
models that define modeling languages. When a metamodel like MOF is used to define another metamodel, it
becomes a meta-metamodel, or M3 for short. Since all elements defined by UML or CWM conform to the MOF
standard, it is possible to define a standardized repository for all data generated in UML or CWM or, in the future,
any other languages derived from MOF.
The model elements in the UML are created, or instantiated, from model elements defined in the MOF. For
example, the MOF defines the concept "Classifier." UML defines a concept called "Classifier" that inherits the
description in the MOF and then adds to it for the purpose of modeling objects. CWM also inherits "Classifier" but
for a different reason: CWM adds to the "Classifier" definition to support modeling data. Figure 1-2 illustrates this
relationship between the three models.
Figure 1-2: The relationship between the MOF and the UML and CWM languages.
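The relationship in Figure 1-2 can be sketched as ordinary inheritance. These classes are illustrative only, not the real metamodels; they simply show UML and CWM each extending the shared MOF definition for their own purpose:

```python
# Toy sketch of Figure 1-2: UML and CWM both inherit the MOF "Classifier"
# definition and extend it for different modeling purposes.

class MOFClassifier:
    purpose = "core definition shared by all MOF-derived languages"

class UMLClassifier(MOFClassifier):
    purpose = "extends Classifier for modeling objects"

class CWMClassifier(MOFClassifier):
    purpose = "extends Classifier for modeling data"

# Both languages stand on the same MOF foundation:
print(issubclass(UMLClassifier, MOFClassifier),
      issubclass(CWMClassifier, MOFClassifier))  # True True
```

Because both languages conform to the same foundation, anything built from either can live in a common MOF-based repository.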
MOF is also part of the long-term OMG strategy to support the creation and exchange of a variety of metamodels
across diverse repositories. For example, using the MOF, a UML model might be transmitted between tools by
different vendors. Likewise, a UML object model might be ported to a data-modeling tool in order to derive a logical
data model from the object model.
MOF supports this long-term strategy by providing
The infrastructure for implementing CORBA-based design and reuse repositories
The definition for a set of CORBA IDL interfaces to define and manipulate metamodels and the models
created using them
The rules for automatically generating the CORBA interfaces for metamodels, thus ensuring consistency
Common Warehouse Metamodel (CWM)
The Common Warehouse Metamodel (CWM) was developed in cooperation with the Meta-Data Coalition (MDC). The
goal of CWM was to provide to the data modeling community the same type of solution that UML provided to the
object modeling community. In the same way that UML describes a common modeling language for building
systems, CWM describes metadata interchange among data warehousing, business intelligence, knowledge
management, and portal technologies. Like UML, CWM is a language derived from the MOF. CWM provides the
mapping from MDA PIMs to database schemas. CWM covers the full life cycle of designing, building, and
managing data warehouse applications.
You can find the specifications for CWM at http://www.omg.org/technology/documents/formal/cwm.htm. Two other
specifications to extend CWM to the Internet are also currently under way: CWM Web Services
(http://www.omg.org/techprocess/meetings/schedule/CWM_Web_Services_RFP.html) and CWM Metadata
Interchange Patterns (MIP) (http://www.omg.org/techprocess/meetings/schedule/CWM_MIP_RFP.html).
XML Metadata Interchange (XMI)
At its simplest level, XMI defines a mapping from UML to XML. It defines standard formats and Document Type
Definitions (DTD) to capture UML models (and metamodels). This makes it possible to then convert a UML model
into XML, distribute it pretty much anywhere, and then convert it back to UML. The mapping also makes it possible
to exchange UML models between tools and across platforms.
Cross-Reference: You can read more about XMI in Chapter 22.
Technically, XMI mapping uses MOF metadata, not UML. But since UML is based on the MOF metadata, anything
defined by UML is compatible with XMI mapping features. Additional work is being done to extend XMI to support
W3C-standard XML schema.
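A hand-rolled round trip shows the idea, though the element names below are my own and not the real XMI DTD: a toy model goes out to XML and comes back intact, which is what lets tools exchange models.

```python
# Toy XMI-style round trip: model -> XML -> model.
import xml.etree.ElementTree as ET

model = {"class": "Person", "attributes": ["name", "age"]}

# Model -> XML
root = ET.Element("Model")
cls = ET.SubElement(root, "Class", name=model["class"])
for attr in model["attributes"]:
    ET.SubElement(cls, "Attribute", name=attr)
xml_text = ET.tostring(root, encoding="unicode")

# XML -> model (any tool reading the same format could do this step)
parsed = ET.fromstring(xml_text)
restored = {"class": parsed.find("Class").get("name"),
            "attributes": [a.get("name") for a in parsed.iter("Attribute")]}
print(restored == model)  # True
```

Real XMI standardizes that intermediate format against the MOF metadata, so the producing and consuming tools need agree on nothing but the standard itself.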
Summary
UML grew out of the increasingly complex challenge to build systems that not only met users' requirements but
that could withstand the ever-changing technological environment. Change, complexity, and speed conspired to
focus critical attention on how to build robust, durable systems. One result was a standard language for modeling
systems, the Unified Modeling Language (UML).
But the desire for truly industry-capable tools to build systems did not stop there. The OMG has continued to
spearhead the effort to build a comprehensive strategy in the form of Model-Driven Architecture (MDA).
There are a lot of languages involved in these strategies. Here's how they all relate:
The Meta Object Facility (MOF) defines a common meta-language for building other languages.
UML defines a meta-language, derived from the MOF, for describing object-oriented systems.
The Common Warehouse Metamodel defines a meta-language, derived from the MOF, for describing data
warehousing and related systems.
XML Metadata Interchange defines the means to share models derived from the MOF.
Chapter 2: UML Architecture
In every version of the UML, the authors have applied a four-layer metamodel architecture. Although time
pressures kept the original specification's implementation from being all it needed to be for the long term, UML 1.4
successfully adheres to the four-layer concept. The authors of 2.0 have taken great pains to expand on the four-
layer approach, and their effort results in an improved implementation of it. I'll step through both UML 1.4 and 2.0
versions after I discuss the significance of the four-layer metamodel architecture.
The Four-Layer Metamodel Architecture
Understanding the four-layer model will be easier if we start with an example using two layers and build up. To do
this, though, I need to use Class diagram notation. If you are unfamiliar with this Class diagram notation, you may
want to read Chapters 5 and 6 first, and then come back here.
Chapter 1 explained that the MOF is a metamodel, a model that defines the concepts used to build models. When
you show the relationship between the metamodel and the model, as in Figure 2-1, you get two layers, the
metamodel and the model. The metamodel layer defines what a Class is. It tells us that a class may contain
attributes and operations and that it may participate in associations. The Class in the metamodel is a metaclass, a
concept that describes what a class is and how to use it. An instance of the Class metaclass is a class that you can
see on a diagram. It contains attributes about a type of object, operations that the type of object can support, and
information about the associations that the type of object participates in. Different instances of the metaclass Class
describe different types of objects. In Figure 2-1, Person and Car are both model-level classes that are instances
of the metaclass Class.
Figure 2-1: A metamodel defining Class and Association instantiated in a model containing two instances of
class and an instance of association (the arrow between Person and Car).
The metamodel also defines an Association and how to represent it; that is, structures and relationships that
describe any association. The Association in the metamodel is a metaclass just like Class. An instance of the
metaclass Association is an association on a class diagram, modeled as the arrow between Person and Car in
Figure 2-1.
In this two-layer example, the metamodel layer defines the symbols, such as classes and associations, that can be
used to create a model. The model layer describes information such as people and cars and their relationships,
using the symbols defined in the metamodel. The model layer is where all UML diagrams drawn by developers
exist. The UML (metamodel) defines the rules that govern how modelers draw the diagrams and define the
elements of the diagram.
Earlier I said that a model element is an instance of a metamodel element. But that seems to conflict with the
common object-oriented terminology that says that an object is an instance of a class. The concept of instantiation
can become confusing in this context. To instantiate a metamodel means to create a model, like creating a Person
class from the Class metaclass. Instantiating a model class means creating an object of that type. In Figure 2-2
the object Mike is an instance of the class Person. Both are model level elements.
Figure 2-2: A metamodel defining a Class and an Instance Specification instantiated by a diagram containing
a class and an object.
But in order to model the object Mike we need a definition for modeling an object. Figure 2-2 illustrates that it is
valid and useful to define a metamodel for instances, in this case a metaclass called InstanceSpecification.
InstanceSpecification defines the modeling elements needed to describe an instance. Instantiating
InstanceSpecification produces a modeled object like Mike:Person, that is, an instance called Mike of type Person.
Note that Mike:Person is still a model element, not the actual object Mike. That would be yet a third layer.
Using the layering concepts similar to those illustrated in the previous examples, the UML authors set out to
employ a multi-layered architecture like the one shown in Table 2-1.
Table 2-1: The Four-Layer Architecture (OMG 1.4)

Meta-metamodel (M3)
  Description: The infrastructure for a metamodeling architecture. Defines the language for specifying metamodels.
  Example: MetaClass, MetaAttribute, MetaOperation

Metamodel (M2)
  Description: An instance of a meta-metamodel. Defines the language for specifying a model.
  Example: Class, Property, Operation, Component

Model (M1)
  Description: An instance of a metamodel. Defines a language to describe an information domain.
  Example: StockShare, askPrice, sellLimitOrder, StockQuoteServer

User object (user data) (M0)
  Description: An instance of a model. Defines the values of a specific domain.
  Example: <Acme_SW_Share_98789>, 654.56, sell_limit_order, <Stock_Quote_Svr_32123>
Layer M3 (meta-metamodel) is a model that describes the artifacts of and rules for a metamodel. In Table 2-1, the
M3 layer defines the rules for defining a metaclass. This level of abstraction supports the creation of many different
models from the same set of basic concepts. For example, both UML and CWM derive from the same MOF
model. MOF exemplifies M3, and UML exemplifies M2.
Layer M2 (metamodel) is a model that describes the artifacts of and rules for a model. In the context of this book,
it is UML that defines the model for elements like attributes, classes, and instances. (These are just samples. The
actual specification includes many more model elements.) The UML definition of a class extends the core
definition of the Class metaclass found in the M3 layer. The UML definition for an attribute also extends the MOF
Class definition. It is valid, and useful, to allow a class at one level to inherit from multiple classes in the level
above. This encourages a model in which the higher layers are extremely cohesive and loosely coupled to support
the widest range of application.
Layer M1 (model) is the model that describes the artifacts and rules for the problem domain. This is the level at
which we draw Class diagrams, Sequence diagrams, and so forth. Classes and objects, associations, attributes,
and all other elements of the model layer depend on the M2 layer definitions.
Layer M0 consists of the runtime elements created by executing the model.
To sum up, layer M0 represents the actual artifacts of the problem domain, consisting of the elements created and
used at runtime. Layer M1 is a model of layer M0, layer M2 is a model of layer M1, and finally, layer M3 is a model
of M2.
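Python's own type system mirrors the lower layers and makes a handy analogy (only an analogy; the real M2 layer is UML, not Python's `type`): the built-in metaclass plays the M2 role, a class is an M1 model element, and an object is M0 user data.

```python
# M2 -> M1: the metaclass `type` instantiates a class.
Person = type("Person", (), {"greet": lambda self: "hello"})

# M1 -> M0: the class instantiates a runtime object.
mike = Person()

print(isinstance(Person, type),   # Person is an instance of the metaclass
      isinstance(mike, Person))   # mike is an instance of the model class
# True True
```

Each layer is an instance of the one above it, exactly the relationship the table describes from M3 down to M0.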
Note: This layered approach to defining concepts leads to the possibility that the same concept (class name)
may appear at multiple levels. Each lower level inherits the concept from the layer above and adds to or
overrides the definition. For example, Classifier appears in both M3 and M2. The definition in M2 adds
new features not defined in M3.
Although this is as far as the UML architecture goes, multilayer architecture may actually have an infinite number
of layers. Successive higher layers come from the process of abstraction, a natural process of refining a set of
rules. It is a bit like mathematics in the sense that as we progress through our education in math, we discover first
the basic concepts like adding and subtracting. Then we learn that there are rules that govern why addition and
subtraction actually work. As we continue our math studies we encounter higher and higher level principles that
govern broader principles that can be applied to many different types of math.
Moving to lower levels of abstraction, the application of the principles can be layered to define rules for converting
a visual model to a programming language model. That language can be mapped to other lower-level languages,
and finally to ones and zeros in computer memory.
This layering approach is at the heart of MDA (Model Driven Architecture). You could think of it as the old
divide-and-conquer approach. As we isolate the different levels of problems to solve and abstract the principles further
away from the implementation, we create the ability to mix and match solutions and principles. We have already
done something similar for years, separating interface from implementation in object-oriented design. A business
problem solved at one layer may be implemented in any number of solution environments while preserving
consistent definitions throughout by mapping each layer to the next.
UML Version 1.4
UML 1.4 was published in September of 2001. UML 1.4 with Action Semantics (also known as UML 1.5) was
published a year later. UML 1.4 was developed alongside MOF and OMG's CORBA technologies. The complete
integration of these three standards would later be realized by UML 2.0. By striving to align MOF, UML, and
CORBA, the authors were trying hard to pave the way for future extensibility. The 1.4 specification includes
Formal definition of a common Object-Oriented Analysis and Design (OOA&D) metamodel semantics
Graphic notation for OOA&D
Model interchange using XMI
Model interchange using CORBA IDL (this feature might be dropped in UML 2.0 due to lack of interest)
Language architecture
UML 1.4 consists of three top-level packages and additional specifications for Action Semantics and the Object
Constraint Language. The three top-level packages each contain a unique set of resources needed to define a
UML model. Figure 2-3 represents the three packages: the Behavioral Elements and Model Management packages,
both of which depend on the Foundation package (in UML, the dashed arrow represents a dependency).
Figure 2-3: Top-level packages of UML 1.4.
OMG 1.4
Cross-Reference: Packages are UML's way of organizing information, much like directories. Packages are
fully explained in Chapter 14.
A dependency arrow between packages simply means that the thing at the source of the arrow needs something
that is owned by the thing at the target end of the arrow. For example, the Behavioral Elements package contains
the Instance class that inherits from the Classifier class in the Foundation package. Without access to the
Classifier class, the Instance class would be incomplete. Because Classifier is required by many classes in both
Behavioral Elements and Model Management packages, it makes sense to put it in a common package, namely
the Foundation package.
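The package dependency described above can be sketched in a few lines. This is a hypothetical illustration (the constructor signatures are invented); it shows only the structural point that a class in one package is incomplete without a class owned by another.

```python
# Hypothetical sketch: the Behavioral Elements package depends on the
# Foundation package because Instance inherits from Classifier.

# --- Foundation package (target of the dependency arrow) ---
class Classifier:
    def __init__(self, name):
        self.name = name

# --- Behavioral Elements package (source of the dependency arrow) ---
class Instance(Classifier):
    """Incomplete without access to Classifier in Foundation."""
    def __init__(self, name, classifier):
        super().__init__(name)
        self.classifier = classifier   # the classifier this instance conforms to

c = Classifier("Customer")
i = Instance("customer_01", c)
assert i.name == "customer_01" and i.classifier.name == "Customer"
```

Putting `Classifier` in a common package means both dependent packages can reuse the same definition rather than duplicating it.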
Foundation package
The Foundation package provides those model elements that are required throughout the metamodel in the
construction of other elements. Figure 2-4 identifies three sub-packages: Core elements, Extension Mechanisms,
and Data Types.
Figure 2-4: UML 1.4 Foundation packages.
OMG 1.4
Nearly all of the diagram elements in UML derive their basic features from the elements defined in these three
packages, so it is important that you be familiar with their basic features. You don't need to memorize the
descriptions that follow, but you will probably want to mark these pages so that you can refresh your memory when
these elements are used later to explain the features of each diagram notation.
The Core package
The Core package provides the bulk of the fundamental constructs of the UML metamodel. It contains some
classes that cannot be instantiated but which define a fundamental set of features. A class that cannot be
instantiated is called abstract. Some abstract classes are ModelElement, GeneralizableElement, and Classifier. I'll
explain these classes in more depth in just a moment. These abstract classes serve as the basis for a number of
other, more specialized classes that can be instantiated.
A class that can be instantiated is called concrete. Core concrete classes include Class, Property, and Association.
Other concrete classes include Instance, Operation, Link, and many more. These are the classes that define the
concepts that appear either as notation on UML diagrams or as description elements for notations on the
diagrams.
Take a brief look at three Core abstract classes:
ModelElement: ModelElement is the most basic definition of a modeling entity. It is the definition from which
all other modeling metaclasses derive. A model element may have constraints, may be derived, has an
associated set of zero or more presentation options, may be stereotyped, and may contain any number of
tagged values. Hence, any other metaclass deriving from ModelElement already possesses all of these same
features.
GeneralizableElement: A GeneralizableElement may be specialized into any number of other elements. For
example, a Classifier may be specialized into Class, Property, and Association. When specialized like this, the
GeneralizableElement contains the features that all the specialized elements have in common. It may also be
a specialization of another GeneralizableElement. This concept makes it possible to construct a hierarchy of
elements in which each element above contains shared or generalized properties, and each element below in
the hierarchy contains only those properties unique to that new type of element.
Classifier: A classifier describes a named element with features. As a subclass of ModelElement, a classifier
may have constraints, may be derived, has an associated set of zero or more presentation options, may be
stereotyped, and may contain any number of tagged values. It describes an element that may be named
uniquely within a namespace like a package.
A classifier is itself a namespace. As such, it can contain other, nested classifiers. A classifier declares a collection
of features, both structural and behavioral, like attributes and operations, respectively. It may be generalizable, that
is, it may inherit from GeneralizableElement. A classifier may own behavioral models such as state machines, and
collaborations that are used to explain the classifier's lifecycle and behaviors.
The classifier metaclass is specialized to define many other metaclasses such as Class, Object, Association, Link,
Use Case, Collaboration, and many more common UML model elements. So in some of the book narrative I refer
to "classifier" instead of the specific sub-metaclass, to make it clear that I am describing a concept that applies to
all classifiers.
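The distinction between abstract and concrete metaclasses can be sketched directly. This is a loose, hypothetical analogy (the attribute names and the `kind` method are inventions of this sketch, not UML definitions), using Python's `abc` module to make the "cannot be instantiated" rule enforceable.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of abstract vs. concrete metaclasses in the Core package.

class ModelElement(ABC):
    """Most basic modeling entity; all other metaclasses derive from it."""
    def __init__(self, name=None):
        self.name = name
        self.constraints = []         # may have constraints
        self.tagged_values = {}       # may contain tagged values

    @abstractmethod
    def kind(self):
        """Unimplemented here, so Python blocks direct instantiation."""

class GeneralizableElement(ModelElement):
    """May be specialized into any number of other elements."""

class Classifier(GeneralizableElement):
    """A named element that declares structural and behavioral features."""
    def __init__(self, name=None):
        super().__init__(name)
        self.features = []

class Class(Classifier):              # concrete: may be instantiated
    def kind(self):
        return "Class"

c = Class("Customer")
c.features.append("placeOrder()")     # a declared behavioral feature
assert c.kind() == "Class" and isinstance(c, ModelElement)

try:
    Classifier("X")                   # abstract: raises TypeError
except TypeError:
    pass
```

Because `Class` sits at the bottom of the hierarchy, it automatically carries everything `ModelElement`, `GeneralizableElement`, and `Classifier` define, which is exactly the reuse the metamodel is after.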
The Extension Mechanisms package
The Extension Mechanisms package provides a means to tailor the use and appearance of existing elements for
specific application domains or technologies. Extension mechanisms include stereotypes, constraints, and tagged
values. (A complete description of these mechanisms is provided in Chapter 3.)
The Data Types package
The Data Types package defines a common set of valid data types and enumerations for use in defining the
metamodel. They are the data types used in the diagrams that describe the UML metamodel, not the data types
that are used in UML modeling. The available data types are defined in the following list. You might want to skip
this section for now and refer back to it when you encounter the terms in the chapters that describe the diagrams
that use the data types.
AggregationKind: Defines the aggregation semantics of an association end.
none: The association end is not an aggregate.
aggregate: The association end is an aggregate so the object on the other end is part of it. The part must
have the aggregate value of "none."
composite: The association end is a composite so the other end is part of it. The part must have the
aggregate value of "none." The distinction between aggregation and composition is covered fully in
Chapter 6.
ArgListsExpression: In the metamodel an ArgListsExpression defines a statement that evaluates to a set of
object lists.
Boolean: A set of logical alternatives.
true: The condition is satisfied.
false: The condition is not satisfied.
BooleanExpression: A statement that evaluates to a Boolean value.
CallConcurrencyKind: Used to describe how calls may be made to an instance and how they will be
processed when received.
sequential: Calls to the instance must be coordinated so that no more than one call is handled at a time.
Attempts to do otherwise put the integrity of the system at risk.
guarded: Multiple calls are allowed but only one is processed at a time.
concurrent: Multiple calls may occur simultaneously and all may proceed simultaneously.
ChangeableKind: Defines the allowed modifications for an attribute value (via an AttributeLink) or the end of a
link (LinkEnd).
changeable: No restrictions. All modifications are allowed.
frozen: Once values have been initialized, they may not be altered.
addOnly: Once the values have been initialized, new values may be added but values may not be
deleted.
Expression: A statement that evaluates elements of the environment but does not alter the environment.
Evaluation of the statement results in a set of instances. It is valid to get an empty result set. (Compare with
ProcedureExpression.)
name: An identifier for the expression.
language: The name of the language used to write the expression. The predefined languages are Object
Constraint Language (OCL) and the default signified by a blank. A blank language is interpreted to mean
that the language is natural language intended for human use. In other words the expression is written in
free-form text. The language may be any programming language or specification language.
body: The text of the expression.
Geometry: Geometry is defined outside the UML in vendor tools. The attribute is used to hold the values
defined by the vendor to describe the shape of the icon associated with a model element like a class or
decision node.
Integer: In the metamodel, Integer is a classifier that is an instance of the Primitive class, representing the set
of integers.
LocationReference: A means to identify where to make a reference to another element during a behavior
sequence, such as an extension use case.
Mapping: (Identified but not actually defined in UML 1.4.) A text string that describes how elements in one
model map to elements in another model.
MappingExpression: A statement that evaluates to a mapping.
MultiplicityRange: An upper and lower limit on the cardinality that may be assigned to an element. The lower
limit may be zero but not negative. The upper limit must be equal to or greater than the lower limit and may be
infinity.
Name: A token that is assigned to a model element.
OrderingKind: Used in conjunction with elements that may have a multiplicity of greater than one. The values
designate the sequencing requirements for the members of the set.
unordered: The members of the set are maintained in no particular order.
ordered: The members of the set are kept in order as created.
Other options (sorted, for example) may be created using stereotypes.
ParameterDirectionKind: Defines the usage of a parameter on behavioral features such as an operation.
in: The value is for input only and may not be modified.
out: The value is for output and may be modified.
inout: The value is provided as input to the behavior and may be modified as part of the output.
return: The return value of a call.
ProcedureExpression: A statement that can modify the environment when it is evaluated. (Compare with
Expression.)
PseudostateKind: Within a Statechart diagram, states define the condition of an object. Pseudostates define
mechanisms that support navigation through the Statechart diagram.
choice: A decision point from which there may be any number of alternative transitions.
deepHistory: When a transition ends in a deepHistory pseudostate, the state of the object is fully restored
to the state it was in before it last exited. (Contrast with shallowHistory.)
shallowHistory: When a transition ends in a shallowHistory pseudostate, the state of the object is restored
to the state it was in before it last exited, but without resetting any substates that might have applied.
(Contrast with deepHistory.)
fork: Identifies a point where a single transition generates multiple concurrent transitions. (See join.)
join: Identifies a point where multiple concurrent transitions end and become a single transition. (See
fork.)
initial: The default transition when entering a composite state or the starting transition on a Statechart
diagram.
junction: Defines a focal point at which many incoming and outgoing transitions intersect. Only one
combination of one incoming and one outgoing path fires at any one execution.
ScopeKind: Defines the governing boundaries for the definition of an element.
instance: The element is contained within an instance of a classifier, that is, an attribute value is
contained within an object.
classifier: The element is contained within a classifier, that is, an attribute value is contained within a
class, common to all instances of the class.
String: A classifier element that contains text.
TimeExpression: A statement that defines the occurrence of an event. However, UML does not define the
format for the expression. Instead, it defers to the constraints of the implementation environment.
TypeExpression: The encoding of a programming language data type, like Java short, long, or float, used with
an instance of ProgrammingLanguageDataType.
UnlimitedInteger: A reference to the symbol used to mean that there is no upper limit to a value, for example,
the asterisk (*) used in multiplicity ranges.
Uninterpreted: In the UML metamodel, an Uninterpreted element is a blob, a domain specific concept that is
not defined within the UML. The designation is interpreted by the domain into which the model is mapped.
VisibilityKind: Defines the allowed access to a model element such as an attribute or operation. The types of
visibility are private, public, package, and protected. The meaning of each visibility is fully explained in Chapter
5.
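Two of the data types above lend themselves to a short sketch. The following is a hypothetical rendering (the class layout and the use of `math.inf` to stand in for the "*" symbol are choices of this sketch, not UML definitions) of VisibilityKind as an enumeration and MultiplicityRange with its validity rules.

```python
from enum import Enum
from dataclasses import dataclass
import math

# Hypothetical sketch of two of the metamodel data types listed above.

class VisibilityKind(Enum):
    PRIVATE = "private"
    PUBLIC = "public"
    PACKAGE = "package"
    PROTECTED = "protected"

@dataclass
class MultiplicityRange:
    lower: int
    upper: float                      # math.inf stands in for the "*" symbol

    def __post_init__(self):
        # Lower limit may be zero but not negative.
        if self.lower < 0:
            raise ValueError("lower limit may not be negative")
        # Upper limit must be >= lower limit (and may be infinity).
        if self.upper < self.lower:
            raise ValueError("upper limit must be >= lower limit")

r = MultiplicityRange(0, math.inf)    # the familiar 0..* range
assert r.lower == 0 and r.upper == math.inf

try:
    MultiplicityRange(2, 1)           # invalid: upper below lower
except ValueError:
    pass
```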
Behavioral Elements package
The Behavioral Elements package contains the model elements used to represent how the system works in terms
of controlling actions and interactions between elements. As Figure 2-5 shows, behavior is described from a
number of perspectives using different diagrams, namely Collaboration diagram, Use Case diagram, State
Machine, and Activity graph. But all of these diagrams depend on the same set of core-behavior-related concepts
to build their unique description of system behavior. The Use Case diagram models user interactions. The State
Machines model object lifecycles. Activity graphs model logic sequences. Collaboration diagrams model standard
patterns of interactions that appear through the system design. All describe behavior but for a different audience
and to reveal a different aspect of the system's behavior.
Figure 2-5: UML 1.4 Behavioral Elements packages.
OMG 1.4
Collaborations
Collaborations explain how classifiers work together to perform an operation or an interaction between elements.
Collaborations include two key concepts: the structure of the participating elements and the pattern of messages
exchanged between the elements. Collaboration may be modeled at the classifier or instance level. In fact, it may
be modeled at just about any level of abstraction all the way up to systems.
Collaboration is also a common way to model design and analysis level patterns. Patterns define common ways
that model elements may be configured to accomplish a type of work. Work requires interaction, and a
collaboration provides the needed concepts to appropriately represent the pattern requirements.
Collaborations are defined fully in Chapter 7.
Use cases
Use cases represent how clients interact with the system. A Use Case diagram is like an encapsulated view of the
entire system in that the client can only see and interact with the interface provided by the system. The internal
workings, the implementations, are inaccessible to the client except through the published interfaces, the use
cases.
Use cases are defined fully in Chapter 12.
State Machines
State Machines model the transformations that take place within an object over time. The transformations are
responses to stimuli from outside the object. State is described by the values of the properties of an object at a
point in time. Transformations in the values redefine the state of the object. A State Machine reveals that two key
elements are needed to understand and manage the life of the object: the events that trigger the changes, and the
behaviors that accompany the events and actually make the changes.
State Machines are defined fully in Chapter 11.
Note: State Machines are formally implemented in UML 1.4 as Statechart diagrams. In UML 1.4, the State
Machine package is the parent package for both Statecharts and Activity graphs.
Activity graph
An Activity graph is basically the old flowchart. It models logic, any logic, from workflow to the sequence of
behaviors in a single method. The authors of UML 1.4 tried to fit the Activity graph into the State Machine
metamodel as a refinement of a state machine. UML 2.0 has chosen to separate the two to more fully support the
business modeling potential of the Activity graph.
Activity graphs are defined fully in Chapter 13.
Model Management package
Model Management refers to the means to model the organization of modeling artifacts. Artifacts may be
organized along very general lines such as project phases, application incremental builds, subject matter, and
utility versus business models, using packages. Packages may also be specialized to represent more refined
views. For example, artifacts may be organized to represent the partitioning of a system into subsystems at any
number of levels. Finally, the artifacts may represent a physical system such as a billing or receiving system. Views
of the physical system are called models.
Packages, subsystems, and models are fully defined in Chapter 14.
Object Constraint Language
Object Constraint Language (OCL) provides the semantics for declaring static requirements for attributes and
operations. A constraint on an attribute is called an invariant, which is a rule that must never be violated during the
life of the system. For example, a phone number must always have 10 digits (not including country code).
Constraints on an operation define what must be true in order to invoke the operation, called a pre-condition, and
what must be true when the operation is completed, called a post-condition. For example, to place an order you
must provide a valid customer account number. When the order is placed, all items are reserved in inventory and
the order value is posted to the customer account. Together these constraints ensure that an operation is always
used properly and always yields a proper result. Constraints define the static requirements of the system. Contrast
this with the dynamic requirements defined by the Action Semantics discussed next.
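The three kinds of constraint just described map naturally onto runtime checks. The sketch below is hypothetical: the phone-number invariant and the order pre-/post-conditions come from the examples in the text, but the class, method, and parameter names are invented for illustration.

```python
# Hypothetical sketch of OCL-style constraints as runtime assertions.

class Account:
    def __init__(self, number, phone):
        self.number = number
        self.phone = phone
        self.balance = 0.0
        self._check_invariant()

    def _check_invariant(self):
        # Invariant: a phone number must always have 10 digits.
        assert len(self.phone) == 10, "invariant violated"

    def place_order(self, items, reserve):
        # Pre-condition: a valid customer account number must be provided.
        assert self.number is not None, "pre-condition violated"
        for item in items:
            reserve(item)             # reserve each item in inventory
        self.balance += sum(price for _, price in items)
        # Post-condition: the order value is posted to the customer account.
        assert self.balance > 0, "post-condition violated"

reserved = []
acct = Account("ACCT-1", "5551234567")
acct.place_order([("widget", 9.99)], reserved.append)
assert reserved == [("widget", 9.99)] and acct.balance == 9.99
```

Together the checks enforce the contract the text describes: the operation can only be invoked legitimately, and it must leave the system in a proper state.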
Action Semantics
Action Semantics define the rules that govern the dynamic aspects of a system. An action is a class that defines a
behavior. For example, the instance of CreateObjectAction in Figure 2-6 defines how to create an object. An action
also defines the classes that participate in a behavior. In Figure 2-6 the participants include six instances of Action
metaclasses and one of the Class metaclass.
Figure 2-6: UML 1.4 Action example.
OMG 1.4
Here's an explanation of the roles of each element:
1. The CreateObjectAction object generates an instance, labeled customer, of the Class metaclass.
2. This new customer object is attached to an OutputPin object that holds the result of the action.
3. A Dataflow object connects the OutputPin object to an InputPin object so that the customer object can be
passed through it to another action.
4. The InputPin provides the customer object as an input value to the next action, called WriteVariableAction.
5. WriteVariableAction assigns the customer object to the variable called newCustomer.
Actions can be used to define method implementations, handling of calls and signals between objects, and all
other behaviors that define the proper operation of a system. Action Semantics combined with Object Constraint
Language provide all of the precision needed to generate a complete platform independent model of a system.
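The action chain of Figure 2-6 can be sketched as plain objects. This is a hypothetical illustration only: the pin and action classes below are stand-ins for the UML metaclasses, and the method names (`execute`, `propagate`) are inventions of this sketch.

```python
# Hypothetical sketch of the Figure 2-6 action chain.

class OutputPin:
    def __init__(self): self.value = None

class InputPin:
    def __init__(self): self.value = None

class DataFlow:
    """Connects an OutputPin to an InputPin and passes the value through."""
    def __init__(self, source, target):
        self.source, self.target = source, target
    def propagate(self):
        self.target.value = self.source.value

class CreateObjectAction:
    """Creates an instance of a classifier and places it on a result pin."""
    def __init__(self, classifier, result):
        self.classifier, self.result = classifier, result
    def execute(self):
        self.result.value = self.classifier()

class WriteVariableAction:
    """Assigns the value on an input pin to a named variable."""
    def __init__(self, variable, value_pin, variables):
        self.variable, self.value_pin, self.variables = variable, value_pin, variables
    def execute(self):
        self.variables[self.variable] = self.value_pin.value

class Customer: pass

out_pin, in_pin, variables = OutputPin(), InputPin(), {}
CreateObjectAction(Customer, out_pin).execute()      # step 1: create customer
DataFlow(out_pin, in_pin).propagate()                # steps 2-4: pass it along
WriteVariableAction("newCustomer", in_pin, variables).execute()  # step 5
assert isinstance(variables["newCustomer"], Customer)
```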
Diagrams of UML 1.4
UML 1.4 defines nine diagrams for describing a system and one for organizing the various artifacts of the
development process. The diagrams may be categorized in a variety of ways. For description purposes I find the
following groupings helpful. There is nothing standard about the groupings, but they have helped many of my students
grasp the different types of diagrams and their relationships to one another. Figure 2-7 shows the organization of
the UML diagrams into three categories.
Figure 2-7: Three complementary sets of diagramming tools.
Static (or structural) view
The static view includes those diagrams that model resources used in the construction of the system. Class
diagrams define the resources in terms of their allowed features and relationships. Object diagrams model facts or
examples about resources. The Object diagram may be used either to figure out what the Class diagram should
look like or to verify that the Class diagram is correct.
The Component diagram models the physical pieces of software in a system, including applications, files, user
interfaces, and pretty much anything that can execute on a processor, be stored in memory, or be performed by a
person.
The Deployment diagram models the hardware environment, namely processors where components may run. The
UML's loose definition of processor allows for human beings to be processors so that manual processes can be
modeled as well.
Dynamic view
The dynamic view includes diagrams that model the behavior of objects in terms of interactions. The Sequence
diagram and Collaboration diagram use slightly different means to model objects passing messages back and
forth to accomplish a task. The dynamic view is particularly useful for discovering the interface requirements to
support the interactions. The interactions also reveal the data that is passed and that has to be owned and
managed by the objects. Both the interfaces and the data reveal updates to the structure of the objects defined in
the Class diagram.
The Statechart diagram examines the effect of the interaction in terms of the inner workings of a single object. It
tracks the changes in an object's state and the reasons for the changes. The reasons for the changes are often
messages from other objects seen clearly on the interaction diagrams. Here again, the changes to an object's
state reveal changes in data within the object, which can reveal changes for the Class diagram.
Functional view
Functionality drives the requirements for most applications. The users want the system to provide information or
behavior to support the business process or goals. The Use Case diagram specifically models what the users
expect to see when they interact with the system. A use case captures the dialog between a user and the system in
performing a specific task. For example, a user at an ATM requesting a withdrawal will answer a series of
questions in response to prompts from the system. The end result is already defined to be either one of a number
of predefined error messages, or money and, optionally, a receipt.
The Activity diagram (also referred to as an Activity graph) models logic. Since logic appears throughout the
design of a system, the Activity diagram has broad application. Workflow, use cases, collaborations, and
operations all involve logic and may be represented in an Activity diagram.
UML 2.0
UML 1.4 was the culmination of a concerted effort to create a practical tool for modeling systems. The focus was
on "practical." The tool had to be useful for a broad spectrum of projects and easily implemented by users and tool
vendors alike.
UML 2.0 provided the opportunity to go back over the tool and fine-tune the definitions, clean up the architecture,
and generally refine the tool to ensure its long-term success. Also, while UML was gaining acceptance, OMG was
hard at work promoting Model Driven Architecture (MDA). UML is a major component of MDA, so complete
alignment with the other elements of MDA is essential.
UML 2.0 comprises two libraries, the Infrastructure and the Superstructure. The Infrastructure defines the
core metamodel used to define the MOF, UML, CWM, and Profiles. Now that probably sounds strange. Didn't I say
earlier that UML derived from the MOF? Well, yes. But one of the requirements for UML 2.0 was to go back and
improve the alignment of MDA components. The Infrastructure is now the top metamodel. The MOF, UML, and
CWM all derive from the Infrastructure. But the UML also still derives from the MOF. Confused? Well, read on and
it should become clearer.
The Superstructure extends and customizes the Infrastructure to define the UML metamodel. The Superstructure
defines the elements that make up the modeling notations of the UML, created by extending and adding to the
basic elements defined in the Infrastructure.
As part of the effort to fine-tune the standard and align it more closely with MDA, the authors established a set of
design principles to guide their work. The principles reflect some of the objectives set forth by the RFP but they
apply beyond specific corrections to the whole approach to the revision process. The design principles include
modularity, layering, and extensibility. These principles were chosen specifically (though not exclusively) because
they yield a language structure that facilitates reuse.
Modularity/Partitioning: To maximize reuse, the modeling elements are isolated into highly cohesive and
loosely coupled packages. The use of small, well-defined units enables a "cookbook" approach to assembling
new model elements at each successive layer of the architecture, taking from separate modules as needed to
build a new concept. For example, the Infrastructure defines the model elements Classifier and Relationship,
among others. The Superstructure combines these two metaclasses and three others from five separate
packages to put together all of the features needed to define an Association.
Layering: Layering refers to two means of separating concerns in the way the models are organized. The first
type is seen in the four-layer architecture explained earlier. The second type is within each layer. The
packages in each layer may also separate model elements to define levels of abstraction within the layer. For
example, within the Infrastructure, the Abstractions package provides the foundation for the concepts defined
in the Constructs package. You might call them layers within layers, providing successive refinements of the
model elements in each new layer.
A layer may use different degrees of modularity or partitioning to coincide with the purpose of the layer. For
instance, the Infrastructure uses very fine-grained partitioning to maximize reuse across diverse
implementations, and the Superstructure uses more coarse-grained partitioning to support the use of the
modeling concepts in context. In the Superstructure, for example, a partition might correspond to a type of
diagram, so the partition contains all of the elements that support the diagram, but those concepts have been
constructed from metaclasses defined in many different Infrastructure partitions.
Extensibility: The specification must support two types of extension. The first uses profiles, which use
adornments to customize UML for a specific platform or domain. The second allows the Infrastructure to be
specialized to create a new language like UML, as has been done to create the CWM. (Technically, CWM
existed before the Infrastructure, but it is currently being aligned with the Infrastructure.)
All three of these principles were chosen to support high reuse. Modularity provides small, well-defined, easy-to-
use units. Layering organizes the units for ease of use. Extensibility supports customization of existing model
elements so that new elements do not have to be created to solve new problems.
Links to the UML 2.0 RFPs
The four UML 2.0 RFPs are available at the following locations:
UML Infrastructure:
http://www.omg.org/techprocess/meetings/schedule/UML_2.0_Infrastructure_RFP.html
UML Superstructure:
http://www.omg.org/techprocess/meetings/schedule/UML_2.0_Superstructure_RFP.html
Object Constraint Language:
http://www.omg.org/techprocess/meetings/schedule/UML_2.0_OCL_RFP.html
UML Diagram Interchange:
http://www.omg.org/techprocess/meetings/schedule/UML_2.0_Diagram_Interchange_RFP.html
Many of the issues that came out of the development of UML 1.4 were simply too large to be addressed in UML
1.4 or even small revisions. So in mid-2001, even as UML 1.4 was being released, OMG members started work on
a major upgrade from UML 1.4 to UML 2.0. The OMG sent out separate RFPs for the Infrastructure, the
Superstructure, the Object Constraint Language, and Diagram Interchange.
Infrastructure library
The Infrastructure library contains the Core and Profiles packages. The Core package is the metamodel at the
heart of the MDA architecture. The Profiles package defines UML extension mechanisms for creating a UML
dialect, variations on the basic language that are customized to specific environments or application subject areas.
Core package
The Core package provides the foundation on which to build the MOF, UML, CWM, profiles, and future languages.
The common metamodel makes possible model interchange via XML Metadata Interchange (XMI). It also makes it
possible to customize UML variations using profiles and to create other languages like UML, but for other domains.
The Common Warehouse Metamodel (CWM) is one such language. The same Infrastructure that supports UML is
used to define a language for specifying data structures for database design. Figure 2-8 models the relationship
between the Core package and the other major components of the MDA. The dependency arrows represent the
fact that each of the four packages, UML, CWM, MOF, and Profiles, need help from the Core package. They each
use information defined in the Core. So they depend on the contents of the Core in order to complete their own
sets of definitions.
Figure 2-8: UML 2.0 Infrastructure defines the core metamodel for the rest of the MDA components.
The Core package contains three other packages, Abstractions, Basic, and Constructs, all of which depend on the
data types defined in the PrimitiveTypes package (see Figure 2-9).
Figure 2-9: UML 2.0 Core package.
The PrimitiveTypes package defines a small set of data types that are used to specify the core metamodel. These
are not data types used for modeling application domains. They are used to create the models that express the
core metamodel. The three types are Integer, Boolean, and String. An instance of an Integer is any valid integer
value. An instance of a Boolean is either true or false. An instance of a String is some text.
The Abstractions package defines the common concepts needed to build modeling elements such as classifiers,
behavioral elements, comments, generalizations, and visibilities. Nearly all of these metaclasses are abstract,
meaning that they may not be instantiated. The purpose of the metaclasses is to define the fundamental concepts
common to most modeling languages. The metaclasses are generalizations of concepts used throughout the
language and in many different ways. Defining them in this generalized form makes them available to any package
that chooses to build on the concepts and customize them for use in a specific setting.
The Basic package defines the common characteristics of classifiers, classes, data types, and packages. The
Constructs package refines the contents of the Abstractions and Basic packages, merging and refining abstract
concepts to create a set of common modeling elements. The sub-packages under the Constructs package reflect
the progressive refinement from abstract concepts to concrete, implementable concepts, with package
names like Classes, Attributes, Associations, and Packages. All of these package names reflect notation elements
for diagramming languages rather than abstract concepts.
Now, to clarify my earlier statement about how the MOF uses the concepts defined by the Infrastructure, refer to
Figure 2-10. The MOF defines a set of concepts needed to define the elements of the MDA, for example, models,
primitive data types, and the means to provide identity for elements in a model. These concepts are created by
utilizing and extending the abstract concepts defined within the Infrastructure.
Figure 2-10: The relationship between MOF and the Infrastructure Core package.
Profiles package
The Profiles package contains mechanisms to adapt existing metaclasses to a specific subject or platform. Profiles
are fully explained in Chapter 21, including some profiles that have already been created and standardized.
Superstructure library: The UML package
The Superstructure library, shown in Figure 2-11, is really the UML package, containing all the elements used to
construct the UML diagrams.
Figure 2-11: The SuperstructureLibrary package.
The Superstructure (or UML) defines all the diagramming elements of the UML. Within the specification document,
the elements are organized by the type of diagram that they support. The three categories are structure, behavior,
and supplemental.
The Structure section defines Class, Object, Composite Structure, Component, Deployment, and Package
diagrams that model various elements and the relationships between them.
The Behavioral section defines Sequence, Interaction Overview, Timing, Communication, and State Machine
diagrams, as well as Action Semantics.
The Supplemental section defines auxiliary concepts like information flows and class templates, and profiles
as defined within the UML (versus profiles as defined in a more general way by the Infrastructure).
Diagrams of UML 2.0
UML 2.0 kept most of the diagrams of UML 1.4 and added some of its own. Table 2-2 lists the old and the new with
a few notes to identify the differences.
Table 2-2: Comparison of UML 1.4 and 2.0 Diagrams
UML 1.4 | UML 2.0 | Changes
Class diagram | Class diagram |
Object diagram | Object diagram | The Object diagram is drawn in the Class diagram canvas, not in its own diagram space. (See Composite Structure diagram.)
Composite Object diagram | | (See Composite Structure diagram.)
Deployment diagram | Deployment diagram |
Combined Deployment and Component diagram | Combined Deployment and Component diagram |
| Protocol State Machine diagram | This is a State Machine at a higher level of abstraction.
Activity graph | Activity diagram | The Activity diagram has been substantially refined and improved with its own metamodel, independent of the state machine.
| Composite Structure diagram | This is kind of a combination of an Object diagram and a Composite Object diagram.
| Communication diagram | (See Interaction diagrams.)
| Interaction Overview diagram | (See Interaction diagrams.)
In UML 2.0:
The Sequence diagram remains the Sequence diagram.
The Collaboration diagram becomes the Communication diagram.
The Statechart diagram becomes the State Machine diagram.
1. Identify the use cases, the behaviors of the system, in terms of specific goals and/or results that must
be produced.
2. Evaluate the actors and use cases to find opportunities for refinement, such as splitting or merging
definitions.
3. Evaluate the use cases to find include type relationships.
4. Evaluate the use cases to find extend type relationships.
5. Evaluate the actors and use cases for generalization opportunities (shared properties).
Modeling actors
In UML, the term actor refers to a type of user. Users, in the classic sense, are people who use the system. But
users may also be other systems, devices, or even businesses that trade information. In Use Case diagrams,
people, systems, devices, and even enterprises are all referred to as actors. The icons to model them may vary,
but the concept remains the same.
Figure 12-5 models the most common icons for actors. People are typically represented using stick figures. Other
types of actors are normally represented with a rectangle stereotyped as an actor and named for the type of actor.
Figure 12-5: UML suggested icons for actors.
Any icon may be used to replace these. Figure 12-6 offers some alternatives. A company logo might represent an
enterprise. A cartoon image might represent a device. A graphic may be used to represent a system. Often, using
alternative icons in a modeling tool is as simple as importing the graphics to a specific directory.
Figure 12-6: Alternative actor icon examples.
Actors in a low-level use case may even be elements of the physical system, such as a class or a component.
An actor is a role that an entity, external to the system, plays in relation to the system. An actor is not necessarily a
specific person or system. For example, a person may act in the role of a venue manager scheduling a new event.
Later that day the same person might work on setting up the pricing for a series of performances. The same
person can play two different roles, and therefore can function as two different actors when interacting with the
system. Likewise, many people can function in the same role. For example, many people function as agents for
the theater, all performing the same set of duties and having the same relationship with the theater's system.
Using roles helps keep you focused on how the system is being used rather than on the current organization of job
titles and responsibilities. The things that people do should be separated from their current job titles if the system is
to be able to cope with the changes that are inevitable in any organization. In fact, reevaluating roles in an
organization often provides valuable insights into improving job descriptions.
How do you identify actors? Listen to descriptions of the system. Listen for the ways in which people interact with
the system. Ask why they are using the system in that manner. The answer to the why question often describes a
duty they are performing that supports a function or the creation of some result. When multiple people perform the
same function, try to name the role that they all play while performing the particular function.
Tip: Throughout the modeling effort, the vocabulary of the users will reveal most of the key elements of the
model. Watch for how parts of speech translate into model elements; actor names often show up as the
subject in sentences describing how people use the system.
Initially actors are modeled as communicating with the system behaviors, the use cases. As the project
progresses, use cases are realized or implemented by classes and later components. As the project progresses,
actors also evolve. Instead of representing roles that people perform, they transform into the user interfaces that
people playing these roles use to interact with the system. For example, in the system-level Use Case diagram, a
Customer actor buying seats to a performance at the theater interacts with the PlaceOrder use case. In the design-
level Use Case diagram, the actor becomes two elements: the role of customer, and a user interface used by a
customer. The use case becomes one or more objects that manage the behavior of the application to interact with
the user interface and the rest of the system.
This relationship between the high-level description in the use cases and the low-level description in the design
and implementation models provides traceability.
Traceability describes how a requirement can be identified in the beginning of the project and followed through its
evolution to its ultimate implementation. Traceability helps ensure that requirements are not lost or corrupted during
development.
Furthermore, project members working at the enterprise level can use the same tools to describe requirements as
the members who work on the detailed design of the system components. Changes made on one level more
easily translate to changes on other levels, improving communication and coordination of project tasks and
deliverables.
Actor descriptions may also be refined using generalization. Conceptually the process of refinement of actors is
the same as for classes. Actors have a purpose and one or more interfaces. Evaluation of the similarities and
differences between actors can identify opportunities to merge, and to specialize, their descriptions.
For example, in Figure 12-7, the VenueManager actor is responsible for scheduling. However, interviews with our
clients reveal that only certain venue managers have the authority to reschedule performances and events.
Rescheduling can influence customer relations, and may involve the decision whether or not to offer refunds. The
two roles have a lot of similarity, and few differences. So, a second actor, ExecutiveVenueManager, is defined to
describe the higher level of authority. An executive venue manager has (inherits) all of the responsibilities of a
regular venue manager, plus a few unique to the executive role.
Figure 12-7: Using generalization to refine the definitions of actors.
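In code terms, actor generalization behaves like class inheritance: the specialized actor carries everything the general actor can do, plus its own additions. A minimal sketch, with class and method names invented for illustration:

```python
class VenueManager:
    """Role responsible for scheduling performances and events."""

    def schedule_event(self, event):
        return f"scheduled {event}"


class ExecutiveVenueManager(VenueManager):
    """Inherits all VenueManager responsibilities, plus the authority
    to reschedule, which may involve customer-relations decisions
    such as offering refunds."""

    def reschedule_performance(self, performance):
        return f"rescheduled {performance}"
```

An ExecutiveVenueManager instance can do everything a VenueManager can, which mirrors the inheritance arrow in Figure 12-7.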
Generalization may be used for more than just a single specialization, as in the venue manager example. For
example, in the original system description a customer is defined as someone who is interested in purchasing
tickets for performances at the theater. The definition of a customer includes people who have already purchased
and those who simply want to browse but don't yet know if there is anything they want to buy.
While working on the purchasing rules, we discover that some customers have access to special pricing that other
customers do not, and that some customers are extended a line of credit, while others are not. The rules also draw
a distinction between customers who are individuals and customers who are companies (corporate customers).
Figure 12-8 models a generalization structure for customers. At the top is a basic definition of the customer actor.
Below that, and attached by generalization relationships, are the specializations of the customer actor (role).
Figure 12-8: Using generalization to refine the definitions of customers.
Modeling use cases
A use case defines a behavioral feature of a system (or enterprise, subsystem, and so on). Without these features,
the system cannot be used successfully. Each use case is named using a verb phrase that expresses a goal the
system must accomplish. For example, use cases for the theater system include Create Agent Contract,
Reschedule Show, and Schedule Event. Although each use case implies a supporting process, the focus is on the
goal, not the process. Figure 12-9 illustrates notations for use cases. The name may appear inside or outside of
the ellipse.
Figure 12-9: Use case notation alternatives.
One very common question about use cases is, "What requirements belong on the Use Case diagram and what
requirements should be explained elsewhere?" The simplest answer I've found is to model only the behavioral
features of the system that can be seen by an actor. For example, most systems must save data to a database, but
the actors can't actually see this happening. The most they typically see is a message indicating that the system
did, or did not, save their data. In this situation, the use case-level requirement is a message indicating the success
or failure of the save function, not a description of the save process. The implementers can use the success or
failure message as a requirement that defines the type of information that needs to be produced by the process
used for saving the data.
This applies even though use cases might describe different levels of detail. A use case drafted at the enterprise
level would describe interactions with people or other companies, while those drafted at the component level might
describe operations on a class, or a set of operations performed in sequence. Regardless of the level, the focus is
on the purpose and interfaces of the entity, not the implementation. The sequences of actions performed by a use
case are the interactions with the actors, not the internal processes.
By defining use cases in this manner, the model defines a set of requirements, not a solution. It does not describe
how the system must work. It describes what the system must be able to do. For example, when I decide that the
theater is going to contract with agents, I define what it means to successfully complete negotiations with an agent.
In our theater example, the result of this system feature is a complete contract (with the term "complete" fully
defined) and a set of zero or more sales agreements that define what seats they are allowed to sell for discrete
periods of time within the contract.
The solutions for achieving these results could include manual processes and a simple data-entry feature, support
for automated calculation of projected commissions (for the agent) and profitability (for the theater), or even
interactive, Internet-based collaboration between venue managers at the theater and agents at remote locations to
negotiate and finalize the terms of the contract. The merits of each of these alternatives would have to be
measured against
1. How well it satisfies the established objectives (the desired results of the use case).
2. How well it can be supported within the constraints for the project (performance, cost, time to deliver,
quality, and maintainability).
Keeping these two principles in mind (goal-oriented definitions and an actor-centered perspective) will help you
avoid functional decomposition, the breaking down of procedures and tasks into smaller and smaller processes
until you have described all the internal workings of the system.
Caution: One of the pitfalls of systems development is going over budget, which happens when we don't limit
the scope of each task or we make a model too inclusive. UML provides 12 other diagrams, in
addition to the Use Case diagram, for fully describing the solution for a system. You don't have to
explain everything in the Use Case diagram.
The complete set of use cases for an entity describes all of the behaviors of that entity. Such a list can be large. To
organize modeling information, UML provides packages, which function like directories. Use cases pertaining to a
category of behaviors may be grouped together within a package for ease of use. For example, within the theater
there are many functions, some related to marketing, others to scheduling, and still others to contract
administration and to sales. Packages provide a means to scope information and effort. Information of a given type
may be kept together, separated from other topics. This means that those who need the information to implement
the topic have a single, well-identified place to look for the information. Figure 12-10 models the subsystems of the
theater system in a Package diagram.
Figure 12-10: Package diagram for the theater example.
The packages here are the same packages modeled in Figure 12-3. In Figure 12-3, the packages are modeled
from the perspective of package containment. Here, the model emphasizes package dependencies rather than a
containment hierarchy.
Use cases may also be viewed much like the services provided by classes or states. That is, services may be
provided or required. A use case may provide or offer a service to an actor. An offered service is triggered by the
user and implemented by the use case. For example, a use case could query the scheduled performances at the
theater, or complete a ticket purchase. In an offered service the actor is making a request of the system. The
system is responsible for fulfilling the request by asking for input, enforcing integrity rules, and managing the
information.
A use case may also require a service from the actor. For example, a use case might support entering mailing or
contact information, or entering contract details. In this type of use case, the system depends on the actor to
provide all of the information. It functions a bit like a getSomeInfo() call from the system to the actor.
Both types of use cases may be implemented in the form of a dialogue between the actor and the system.
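The two directions can be sketched as calls. In an offered service the actor invokes the system and the system fulfills the request; in a required service the system calls back to the actor for input, the getSomeInfo() pattern described above. All class and method names here are invented for the sketch:

```python
class Agent:
    """An actor the system depends on for input."""

    def get_contact_info(self):
        return {"name": "A. Agent", "phone": "555-0100"}


class TheaterSystem:
    def __init__(self, actor):
        self.actor = actor
        self.performances = ["Hamlet", "Tosca"]

    # Offered service: the actor makes a request, the system fulfills it.
    def query_performances(self):
        return list(self.performances)

    # Required service: the system depends on the actor to provide the
    # information, a bit like a getSomeInfo() call from system to actor.
    def enter_contact_info(self):
        return self.actor.get_contact_info()
```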
Finally, use cases often reflect a business entity's need to manage resources. Managing resources includes
acquiring, using, and disposing of those resources. The business often reports on the status and use of the
resource throughout its life. These tasks have been summed up in the humorous acronym CRUD,
for Create, Read, Update, and Delete. CRUD can serve as a checklist to remind you of the tasks to consider when
brainstorming use cases.
Since the CRUD behaviors are so common, you can save yourself a lot of work by combining them into one or a
few use cases that provide all the maintenance features related to the resource. You might also find that the
dependencies between resources require you to maintain them together in the same use case.
Be careful not to follow the CRUD checklist too literally. Consider that a resource might be acquired (created) in
many different ways. It might be queried (read) from a variety of perspectives. A resource can be altered (updated)
in many ways. A resource may be disposed of (deleted) in many ways, too. Each different way of interacting with
the resource might reveal the need for a different use case.
So, a good approach is to brainstorm the use cases with the CRUD checklist as a starting point. Then evaluate
what you find and consolidate where it makes sense from the users' perspective and where the rules that govern
the resource make it prudent to do so.
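A consolidated maintenance use case of the kind described above can be sketched as a single class bundling all four CRUD behaviors for one resource. The class and method names are invented for illustration:

```python
class MaintainResource:
    """One use case bundling the common CRUD behaviors for a resource."""

    def __init__(self):
        self._store = {}
        self._next_id = 1

    def create(self, data):
        # Acquire the resource and assign it an identity.
        rid = self._next_id
        self._next_id += 1
        self._store[rid] = data
        return rid

    def read(self, rid):
        # Query the resource; returns None if it does not exist.
        return self._store.get(rid)

    def update(self, rid, data):
        # Alter an existing resource.
        if rid in self._store:
            self._store[rid] = data

    def delete(self, rid):
        # Dispose of the resource.
        self._store.pop(rid, None)
```

Whether one class like this suffices, or each way of creating, querying, altering, and disposing deserves its own use case, is exactly the judgment call the text describes.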
Adding classifiers
UML 2.0 adds an association, in the metamodel, between a use case and a classifier. Classifiers may be classes,
associations, collaborations, interfaces, and more. A classifier is basically an entity that can own behaviors. A use
case describes a behavior. Consequently, a classifier may own, or be described by, any number of use cases.
Figure 12-11 highlights the modification to the metamodel, making use cases an optional owned element of a
classifier. This relationship means that the classifier provides the context for the use case.
Figure 12-11: Metamodel update supporting classifier ownership of use cases.
An enterprise might own dozens of use cases organized into a hierarchy of subsystems. A class might have a use
case for each interface. A collaboration may be represented as a single use case or a hierarchy of use cases,
depending on the level of the collaboration description.
Alternatively, a use case may be used to explain the behavior of more than one classifier. This highlights the UML
emphasis on encouraging reuse. Reuse is valuable not only in the code, but in every artifact of the development
cycle, from requirements through implementation.
Modeling associations
An association is a relationship between an actor and a use case. It is an instance of the same Association
metaclass used to model relationships between classes on a Class diagram. The relationship is represented by a
line between an actor and a use case. The association represents the fact that the actor communicates with the
use case. In fact, in earlier versions of the UML specification, the line was called a Communicates With
relationship. This is the only relationship that exists between an actor and a use case. Figures 12-12 and 12-13
model associations between the customer actor, the agent actor, and the use cases with which they each interact.
Figure 12-12: Modeling associations between agents and use cases.
Figure 12-13: Modeling associations between customers and use cases.
Notice that different actors may access the same use case. This typically means that they interact with the use
case in different ways. If their interactions are identical, it might mean that their roles are the same. I emphasize
might because the purpose of two interactions also has to be the same in order to merge them. For example, in
Figure 12-13, a person in the customer role uses the PlaceOrder use case because he wants to place an order for
himself. In Figure 12-12, the agent uses the PlaceOrder use case because she is helping a customer place an
order. The result is the same in both cases; an order is created for a customer. But the relationship to the result is
different. The completed order belongs to the customer in both cases, but the person who entered the order is
different.
Tip: Some tools, like Rational Rose, place a navigation arrow on one end of the association (depending on the
direction you draw it). All of the details of the interaction between an actor and a use case are explained
either in the use case description or in Sequence diagrams. So the navigation arrow does not provide any
additional information. Modeling tools typically provide the option to turn the navigation arrows on or off.
The important thing to remember is to identify what use cases the actors need to access. These
connections will form the basis for the interfaces of the system and subsequent modeling efforts.
Use case associations may also be adorned with multiplicity. Despite the many times I have seen multiplicity used
in Use Case diagrams, I have never seen a good explanation for why to use it or what it adds to the model. To
reiterate, all of the details of the relationship between the actor and the use case are explained in either the use
case description and/or a set of Sequence diagrams and/or collaborations modeled within Composite Structure
diagrams.
Modeling use case relationships
Use cases define discrete behaviors. It is possible for a system to use the same behavior under a variety of
circumstances and as part of many larger, more comprehensive behaviors. In other words, behaviors can be
reused; and we need some notation to show that use cases can be reused. UML defines two standard stereotypes
to represent two common use case relationships: include and extend.
The include relationship
While researching systems it is common to find a behavior that can be performed under many different
circumstances. In code, we tend to make reusable components such as class libraries, utility classes, subroutines,
and functions that we can simply reference or call from within other code. UML supports the same practice when
identifying common features in the use case approach.
The include relationship is analogous to a call between objects. One use case requires some type of behavior.
That behavior is already fully defined in another use case. Within the logic of the executing use case, there is a call
to the previously defined use case. The distinguishing characteristic of the include relationship is that the
decision to incorporate the second use case is in the calling use case. The called use case is unaware of the
calling use case and has no participation in the choice to execute.
Included use cases can be identified in at least two ways. Included use cases might be pre-existing. They are
defined for one purpose, but in the development of another use case, the same behavior is required. Rather than
define the behavior again within the new use case, the behavior can simply be included in the logic of the new use
case. Another way to identify included use cases is to pull functionality out of existing use cases to form a new use
case. This happens most often when merging use cases from the efforts of multiple developers. When comparing
their work, they find the same behaviors defined within a number of their use cases. One option is to pull those
behaviors out, encapsulate them as discrete use cases, and then replace their former location within the use case
description with a call to the new use case using the include relationship.
To use the include relationship, the use cases must conform to two constraints:
The calling use case may only depend on the result from the called use case. It can have no knowledge of the
internal structure of the use case.
The calling use case must always require the execution of the called use case. The use of the called use case
is unconditional.
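The analogy to a call between objects can be made literal: the calling use case invokes the included behaviors unconditionally and depends only on their results. The use case names below come from the theater example; the function bodies are invented for the sketch:

```python
def select_performance():
    # Included use case: fully defined on its own, unaware of any caller.
    return "Hamlet, Friday 8pm"


def select_seats():
    # Another independently usable included use case.
    return ["A1", "A2"]


def place_order():
    # <<include>>: both calls are unconditional, satisfying the second
    # constraint. PlaceOrder depends only on the results, not on the
    # internal structure of the included use cases (first constraint).
    performance = select_performance()
    seats = select_seats()
    return {"performance": performance, "seats": seats}
```

Note that select_performance and select_seats remain callable on their own, just as the text says a customer can browse performances without placing an order.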
Figure 12-14 models two include relationships:
Between PlaceOrder and SelectPerformance
Between PlaceOrder and SelectSeats
Figure 12-14: include notation for the Use Case diagram.
Figure 12-14 shows how the include relationship is modeled as a dashed open arrow pointing from the calling
use case to the called use case: from PlaceOrder to SelectPerformance, for example, where PlaceOrder calls the
SelectPerformance use case. The direction of the arrow helps to reinforce visually that the call is initiated by the
use case at the base of the arrow.
This example tells us that when a customer interacts with the PlaceOrder use case, he will always be asked to
select a performance and to select the seats at that performance that he wants to order. The diagram does not tell
us when, in the execution of the use case, the calls will be made, or even the order of the calls. For those details we
need a use case narrative.
Note that both the SelectPerformance and SelectSeats use cases may be called independently of the PlaceOrder
use case. These are examples of using an existing use case in the context of another use case. A customer can
simply look up performances or seats at a performance without placing an order.
The extend relationship
The extend relationship says that one use case might augment the behavior of another use case. The extension
use case provides a discrete behavior that might need to insert itself into the base use case. The arrow is drawn
from the extension to the executing use case. Drawing the arrow with the base at the extension use case indicates
that the extension, not the executing use case, decides whether to impose itself on the executing use case. The
executing use case is unaware of the extension.
This might sound strange at first, but consider the impact on changes to the system. As the base use case evolves
and new extensions are developed, the base use case does not have to be changed with each new or revised
extension. In this respect, the extend relationship functions much like the observer pattern. That is, the extension
watches for circumstances that would require it to jump into the execution of the base use case. Using extensions
enables us to leave the base use case untouched while freely adding behaviors as the system evolves.
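The observer-like behavior can be sketched directly: the extension registers itself with the base use case, which merely announces its extension points and never names its extensions. All class, method, and extension point names below are invented for the sketch:

```python
class RescheduleEvent:
    """Base use case: announces extension points, knows no extensions."""

    def __init__(self):
        self._observers = []

    def register(self, observer):
        self._observers.append(observer)

    def run(self, dates_changed):
        # Extension point "event moved": the base simply broadcasts its
        # state; any registered extension decides for itself whether to act.
        for observer in self._observers:
            observer.on_extension_point("event moved", dates_changed)
        return "event rescheduled"


class ReschedulePerformance:
    """Extension use case: watches for its extension point and decides
    on its own when to insert itself."""

    def __init__(self):
        self.executed = False

    def on_extension_point(self, point, dates_changed):
        if point == "event moved" and dates_changed:
            self.executed = True  # reschedule the affected performances
```

New extensions can be registered without touching RescheduleEvent, which is the maintainability benefit the text describes.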
The extend relationship is modeled in Figure 12-15 as a dashed stick arrow from the extension use case to the
base use case. The direction of the arrow helps to reinforce visually that the call is initiated by the use case at the
base of the arrow. The extension use case decides when it is time to execute. The base use case has no part in
the decision.
Figure 12-15: Notation for the extend relationship.
Applying an extend relationship requires four elements:
The base use case: The use case that will be augmented by the extension use case (the RescheduleEvent
use case, for example).
The extension use case: The use case that provides the added behavior (CancelPerformance and
ReschedulePerformance are extension use cases in this example).
The extend relationship: A dashed arrow with the base attached to the extension use case and the arrow
attached to the base use case.
Extension points: One or more locations in the base use case where a condition is evaluated to determine
whether the extension should interrupt the base use case to execute. The extension points may be listed within
the use case icon or simply identified within the use case narrative.
The extension point is a condition that determines whether the extension should be used. There is no such
condition in an include relationship. The extension point defines what the extension use case is watching for in
order to know when it needs to insert itself into the executing use case.
For example, an extension point may be an error condition. During a PayForOrder use case, for instance, the
connection to the credit card company could be down. The base use case would trigger an error to the system. It
would be up to the extension use case to watch for the error and to execute (that is, handle the error) when the
error occurs. Once it completes, the extension would notify the system and the base use case would be allowed to
resume execution from the point where it was suspended.
Caution: The extend relationship can be confusing for Java programmers who use the extends keyword on a
class declaration to define an inheritance relationship. These two concepts have nothing in common.
UML provides a separate notation for inheritance/generalization.
The extension point notation is added inside the use case ellipse in the following format:
<extension point> ::= <name> [: <explanation>]
(Note that not all tools use the recommended format, as can be seen in Figure 12-15 where only the explanation is
used.)
The name follows the normal rules for identifiers and describes a location within the logic of the use case. Since a
use case describes a behavior, the location is often a state of the object some time during the execution of the
behavior. The explanation, which is optional, may be any informal text adequate to describe the condition that
governs the execution of the extending use case. Figure 12-16 models the notation for an extend relationship,
adding the extension point explanation in a compartment below the name within the use case ellipse.
Figure 12-16: Extension point notation using a comment.
Extension points may also be documented in a comment attached to the extend relationship, as shown in Figure
12-16. The comment uses a page icon with the top right corner folded down. It contains the condition that the
extension is watching for and the label (extension point) that identifies the location in the base use case where the
decision would take place. In this example, the condition tests to see whether both the start and end dates
changed. It is possible that only one of the dates changed, in which case no performances would be rescheduled.
They would either be deleted (if the start date moved later or the end date moved earlier) or added (if the end date
moved out). The comment is attached with a binary constraint line, a fancy name for a dashed line connecting two
model elements.
Yet another approach to modeling extension points is to represent a use case as a class or object, as shown in
Figure 12-17. The extension points may then be listed in a user-defined compartment. The ellipse in the top right
corner of the name compartment identifies the class as a use case.
Figure 12-17: Using class notation to model a use case with extension points.
In this example, there are two extension points. The first was modeled in Figure 12-15 between
ReschedulePerformance and RescheduleEvent. The second extension refers to the relationship between
CancelPerformance and CancelEvent in Figure 12-15.
In addition to the condition that defines when an extension is needed, the extension use case may itself be
conditional. The condition attribute on the extension use case functions as a Constraint. If the constraint expression
is satisfied when the extension point condition is true, the extension use case will execute. For example, if the
extension point "event moved" is true, the extension use case will test its constraint "reschedule performances
authorized". If this further constraint is satisfied, the extension use case will execute, otherwise it will not.
The contrast between include and extend relationships is sometimes confusing. Table 12-1 sets them side by
side to highlight the similarities and differences.
Table 12-1: Include versus Extend

Include: Augments the behavior of the base use case.
Extend: Augments the behavior of the base use case.

Include: The included use case is always used to augment the executing use case.
Extend: The extension use case might be used to augment the executing use case.

Include: The executing use case decides when to call the included use case. The included use case is unaware of the base use case.
Extend: The extension use case decides when it will insert itself into the execution of the base use case. The base use case is unaware of the extension.

Include: The relationship arrow is drawn from the executing use case to the included use case. The base of the arrow indicates that the base use case directs the included use case to execute.
Extend: The relationship arrow is drawn from the extension use case to the executing use case. The base of the arrow indicates that the extension use case is making the decision whether to interrupt the executing use case.
Writing a Use Case Narrative
A Use Case diagram is very descriptive of the relationship between the actors and the features of the system, but it
lacks the details needed to support the system behaviors. A use case narrative is a written document that describes
a use case as a behavior of the system with a beginning (trigger), middle (dialog), and end (termination). To do so,
the use case narrative often includes the following elements.
Assumptions
Preconditions
Use case initiation/triggers
Dialog
Use case termination
Post conditions
And a couple of interesting alternative concepts:
Minimal guarantees
Success guarantees
The names may be different in various methods but in general the same details are covered. In addition to these
main items, it is common to find audit details such as update logs, status, author, unique identifier and/or name,
open issues, future enhancements, and more. There is nothing standard about these items. Many people have
contributed to the concepts and many people use entirely different approaches. Here I present what I believe to be
the minimum concepts that can make or break the use case definition.
Much of this language is borrowed from the programming by contract concept developed and implemented by
Bertrand Meyer in the creation of the Eiffel programming language. One chief goal of the programming by contract
concept is that each relationship, whether between actors and use cases, or between different use cases, should
be described much like a contract. A contract states terms to which both parties must comply. If one party fails to
comply with its part of the contract, then the other party is not obligated to fulfill its part of the contract.
Each unit should remain as loosely coupled as possible. Loosely coupled entities are connected in such a way as
to minimize their dependence upon one another. Unit independence allows each unit to be maintained without
requiring corresponding changes in the other unit (or at least the fewest changes possible). Loose coupling
reduces the time and cost required to develop and maintain the system.
Assumptions
In order for a use case to work properly, certain conditions must be true within the system. The system agrees, or
contracts, never to invoke the use case unless it knows that all of the needed conditions have been met.
Assumptions describe a state of the system that must be true before the system may use the use case. These
conditions are not tested by the use case; the use case simply assumes them to be true. (Contrast this with
preconditions, which I take up later).
For example, consider authentication and authorization. A standard security feature typically handles these
functions. Each subsequent use case assumes that the user could not have reached it without first passing the
security check. Consequently, you would rarely, if ever, include the security check in each use
case.
So how does this help you with the design of the system? Well, if one use case can't work, and should not even be
accessed, unless another use case has first done its job, then this condition dictates the order of execution. In
other words, the assumptions give you explicit clues about the sequence of execution, or the workflow, for use
cases.
An assumption for the Select Show Seats use case might read:
Assumption: The user must have authority to access this transaction.
Tip: The Select Show Seats assumption provides a simple example that is almost too common. If an
assumption such as this one appears in many use cases, it can be redundant and tedious to document.
Instead, document it at the system level, in a document separate from the individual use cases.
Workflow is often established by interviewing users to find out how they do their jobs. A liability created by this
approach is that it brings the focus back to the process the users currently apply instead of the goals and the rules
that define success. In a very generic example, Figure 12-18 represents the assumptions for a set of use cases.
Use Case 3 assumes that use case 1 has completed its task. Use Case 4 assumes that Use Case 2 has
completed its task. The current workflow (shown on the left) dictates that the order of execution is 1, 2, 3, and 4.
But is that really the only option given the documented assumptions?
Figure 12-18: Evaluating assumptions to establish workflow options.
Would it not be equally valid to perform use cases 2 and 4 concurrent with 1 and 3 since they have no
dependencies? Or how about a workflow that executes in the order 1, 3, 2, and 4? The point of the example is that
workflow options are often more flexible than we might at first realize. Evaluate the true dependencies and
discover the available alternatives. It might open doors to more efficient work practices.
Granted there might be other factors involved in the final decision regarding workflow. But assumptions provide a
very specific and valuable insight into the hard dependencies that constrain workflow options.
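The dependency reasoning in Figure 12-18 can even be checked mechanically. The sketch below is illustrative only; the use case numbers come from the figure (use case 3 assumes 1, use case 4 assumes 2), and the function name is hypothetical.

```python
# Illustrative sketch: validating a proposed workflow order against the
# assumption dependencies documented for each use case.

def order_is_valid(order, assumes):
    """True if every use case runs only after all use cases it assumes."""
    completed = set()
    for uc in order:
        # The system contracts never to invoke a use case until all of
        # the use cases it assumes have completed their tasks.
        if not assumes.get(uc, set()) <= completed:
            return False
        completed.add(uc)
    return True

# Per Figure 12-18: use case 3 assumes 1; use case 4 assumes 2.
assumes = {3: {1}, 4: {2}}
```

Both the current order 1, 2, 3, 4 and the alternative 1, 3, 2, 4 pass this check, confirming that the documented assumptions allow more workflows than current practice might suggest.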
Preconditions
Like assumptions, preconditions describe a state of the system that must be true before you can use the use case.
But unlike assumptions, these conditions are tested by the use case before doing anything else. If the conditions
are not true, the use case will not execute.
If you have a programming background, you have probably already encountered preconditions even if you didn't
call them by that name. Whenever you call a behavior (function or operation) that has parameters, the first section
of code in the behavior checks the values of the parameters. If any of the values fails to pass the validity checks,
the request for the behavior is rejected. Simply stated, the behavior cannot work with bad information.
The same is true of a use case. For example, when placing an order at the theater, you need to invoke the use
case to view the seating chart and select seats at a performance. If you don't first decide what show you want to
view, the use case cannot pull up the right seating chart. Furthermore, if you choose a performance that has
ended, or is not in the appropriate status, then the use case should not allow you to view the seating chart.
Preconditions for the Select Show Seats use case might read:
Precondition: The requestor must provide a valid Performance reference. A valid performance is defined as
a performance in Available for Sale status.
The preconditions need to be published along with the interface to your use case, and later to the interface of the
class or classes that implement the use case. This is because the implementation interface only tells the client to
send two integers and a character string. It can't tell them, for example, that the first integer must be a value
between 1 and 10, the second must be an integer greater than 100, and the character string can only be 30
characters in length. By publishing these preconditions, anyone who wants to use your use case (or object) is sure
of the correct set of values and is able to fulfill his part of the contract, providing good input values with his request.
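As a sketch of what such a published contract might look like in code (the behavior name is hypothetical, but the value ranges are the ones used in the example above), the preconditions are tested before any real work begins:

```python
# Hypothetical behavior that enforces its published preconditions:
# first integer in 1..10, second integer greater than 100, and a
# string of at most 30 characters.

def request_behavior(first, second, text):
    if not 1 <= first <= 10:
        raise ValueError("first must be between 1 and 10")
    if second <= 100:
        raise ValueError("second must be greater than 100")
    if len(text) > 30:
        raise ValueError("text must be at most 30 characters")
    return "accepted"  # preconditions hold; the real work would start here
```

A caller who knows the published preconditions can fulfill his part of the contract; a caller who only knows the signature (two integers and a string) cannot.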
Tip: Notice how rapidly we bring precision to the model from the simple beginnings of the Use Case diagram.
You'll find the analysis process akin to pulling a thread on an old sweater. If you keep tracking down each
new discovery, eventually you'll unravel the whole complex problem. Using simple checklists to remind you
of the questions to ask can expedite the process and build a successful pattern of thought for problem
solving. As you gain experience, modify the list of questions and tasks to improve the process and to make
it your own.
Remember, the goal is not to become a disciple of a particular technique, but to evolve a technique that
works for you.
Use case initiation/triggers
A use case has to start somehow. Use case initiation simply identifies how. For example, a use case might start
because
An actor selects an option on a menu.
A time alarm goes off.
A device or another system sends a signal.
A specific system condition occurs.
Any of these events would start (or trigger) the use case, so they are often called triggers. The use case triggers
for the Select Show Seats use case might read:
Trigger: The user selects the Select Show Seats option from the menu.
Trigger: PlaceOrder invokes SelectShowSeats.
In the first example, a user accesses the use case directly to see what seats are available. In the second example,
another use case invokes SelectShowSeats as part of the dialog for placing an order.
Use case initiation provides a place to think through all the possible triggers that could launch the use case. This is
critical when you start thinking about reusing use cases. If five actors and/or use cases plan to use the same use
case, you need to know how each user plans to kick it off. If each has different expectations, you could be creating
a problem. Multiple triggering mechanisms lead to tight coupling and low cohesion. In other words, every time you
change one of the triggers you need to change the corresponding use case and make certain that you haven't
created problems with the other triggering mechanisms. More triggers mean more complicated and more costly
maintenance.
Use case dialog
The use case dialog refers to a step-by-step description of the interaction between the user (an actor or another
use case) and the executing use case (the system implementing the use case). Very often, it is helpful to model
this sequence of events using an Activity diagram, or an Interaction Overview diagram, just as you might model a
procedure for communication between two business units.
Granted, some use cases are simple queries with a request as input and a response as output. In fact, a use case
may be triggered by an event within the system and simply send a signal to an actor. But for those use cases that
are a bit more complex (an online transaction, for example), the dialog helps to identify clearly the responsibilities
of each participant and the expectations the users have regarding how they interact with the system to accomplish
the goal/s defined by the use case.
For example, when an actor invokes the SelectShowSeats use case, the following dialog ensues.
Note: The first step in the dialog is to test the preconditions because those preconditions are a responsibility of
the use case (as opposed to assumptions, which are the responsibility of some other use case).
1. The system verifies that the user provided a valid Performance reference. A valid performance is defined as
a performance in Available for Sale status. If the test fails, the actor is informed of the failed request and
directed back to the menu or to the option to select a performance (outstanding issue for the clients to
decide).
2. The system provides default lists of all currently scheduled events and performances scheduled within the
next 20 days.
3. The user may choose one of the following options:
   a. Cancel out of the transaction.
   b. Select an event. The system responds by replacing the existing list of performances with a list of
   performances for the selected event.
   c. Select a performance. The system responds by asking the customer to confirm his request. The
   actor may either confirm or reject the choice. Rejecting the choice allows him access to the original
   list of options. If the actor confirms the choice, the system saves the selected performance. The
   system provides confirmation before terminating the use case.
   d. Provide a date range. The system responds by replacing the existing list of performances with a list
   of performances for the selected date range.
4. If the system is interrupted during the use case, no action is required of the use case. The system provides
confirmation before terminating the use case.
When the dialog is defined separately from the implementation, you can evolve the implementation without
affecting the participants, because the interface (the way they communicate) remains stable. For example, this
conversation could just as easily have taken place between a customer and clerk at the theater ticket office,
between a customer and an agent over the phone, or between a customer and the system over the Internet. Also,
you begin to see that some of the steps don't necessarily have to happen in the sequence presented here. The
goal of the dialog is to uncover just what really must happen in a specific sequence. Based on those facts, it is
easier to determine what variations could be valid.
Use case termination
Although there is usually only one triggering event to start a use case, there are often many ways to end one. You
can pretty much count on some kind of normal termination where everything goes as planned and you get the
result you anticipated. There may even be more than one successful termination. But things do go wrong. This
could mean shutting down the use case with an error message, rolling back a transaction, or simply canceling the
transaction. Each termination mechanism in the list of termination options should be addressed in the use case
dialog. The list is separate from the dialog but part of the complete narrative.
The list of termination options is a bit redundant with the dialog, but as with preconditions, this redundancy provides
some good checks and balances. The termination options list for the SelectShowSeats use case would look like
this:
Success:
A performance is selected and saved. The transaction is logged.
The actor cancels without making a selection. The transaction is logged.
Failure:
The precondition tests false. The actor is informed of the failed request and directed back to the menu or
to the option to select a performance (outstanding issue for the clients to decide).
The system is interrupted. No action is required outside of the default behavior (resetting the application
to the menu).
Post conditions
Post conditions provide the system portion of the contract. Post conditions describe
What the system must do if the preconditions are satisfied.
The state that the system must be in when the use case ends.
Workflow Requirements
A common question about use cases is, "How do I show workflow or screen flow?" The short answer is that
you don't. A more appropriate question would be, "How do I use the use cases to determine screen flow and
workflow requirements?"
Workflow is often a difficult problem in system design. Personal opinion, personal preferences, and legacy
processes often get included as requirements (remember the rocket story?). Business practices are prone to
faulty assumptions and unquestioned repetition. New systems often contain the same deficiencies as the old
ones because those deficiencies were never critically evaluated.
To determine workflow, check out the preconditions and assumptions. If one use case requires the user to
provide data that is the result of a second use case, or even multiple use cases, or do something that another
use case is responsible for, then logically, the second use case must come first.
These clues are a tremendous help when you recognize that many workflows were designed based on user
preferences or experience and have not been checked against the rules and constraints that define the
successful operation of the system (assumptions and preconditions). Assumptions explicitly define the
precedence dependencies between use cases that ensure that all the rules and constraints will be enforced.
Quite often, screen flow and workflows are far more flexible than you might think. Let the use case
assumptions and preconditions tell you what the flow options are. Then design the workflows that are possible,
letting the users decide what works best for them.
You may never know what comes after the use case terminates, so you must guarantee that the system is in a
stable state when it does end. In fact, some people use the term guarantee for just this reason. You guarantee
certain things to be true when this use case completes its job. For instance, you might
Guarantee to give the user a confirmation at the end of the transaction, whether it succeeded or failed.
Promise to notify the user of the result of an attempted save to the database.
Log every transaction.
There is overlap between the post conditions and the dialog. Although this overlap is a bit redundant, the added
visibility of the post conditions has proven to be an excellent check-and-balance mechanism as well as very helpful
in reviews with clients who want to know how the system will behave under every situation. In fact, it can sometimes
work best to define the termination options first, and then address how the dialog would bring about each option.
Minimal guarantees
I have found two additional ways to look at use case requirements: minimal guarantees and success
guarantees. Both concepts come from Alistair Cockburn's book, Writing Effective Use Cases, which brings this
important facet of analysis to the forefront. These methods are very comparable to post conditions except that they
break the conditions down a bit more and speak directly to the interests of the users. According to Cockburn:
The minimal guarantees are the fewest promises the system makes to the stakeholders, particularly when
the primary actor's goal cannot be delivered. They hold when the goal is delivered, of course, but they are of
real interest when the main goal is abandoned. Most of the time, two or more stakeholders have to be
addressed in the minimal guarantees, examples being the user, the company providing the system, and
possibly a government regulatory body.
He goes on to say that the most common minimal guarantees include activities such as logging the results of the
transaction. Logging is often a background task that can provide valuable information about why a transaction
failed, how the failure was handled, and how the system was restored to a stable condition to prevent cascading
effects on other transactions or on other parts of the system.
The proper use of minimal guarantees can be particularly valuable because it brings these important issues to the
front of the project where they can be handled strategically. When they are not addressed up front, they are often
addressed tactically by individual developers who encounter the issues when writing the code. As in nearly all
modeling concepts, the goal is to protect the integrity of the system at all times, and to protect the interests of the
users.
Here's an example of a minimal guarantee for the PlaceOrder use case:
Minimal guarantee: If the order is incomplete, all selected seats must be returned to available status, the
order deleted, and any charges to the customer's credit card reversed. The reason for the incomplete order
is recorded, such as the type of system interrupt (failure) or a user canceling the transaction.
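In code, a minimal guarantee of this kind often surfaces as cleanup logic that runs no matter how the transaction ends. The following is a hedged sketch only; all names are hypothetical, and the credit card reversal is omitted for brevity.

```python
# Hypothetical sketch of the PlaceOrder minimal guarantee: if the order
# does not complete, the selected seats return to available status, the
# order is deleted, and the reason is recorded.

def place_order(seats, selection, complete_order):
    log = []
    for s in selection:
        seats[s] = "held"              # tentatively claim the seats
    try:
        complete_order()               # the main success path
        log.append("order completed")
    except Exception as reason:
        for s in selection:            # minimal guarantee: release the seats,
            seats[s] = "available"
        selection.clear()              # delete the order,
        log.append("order deleted: %s" % reason)  # and record why
    return log
```

Whether the failure is a system interrupt or a user canceling, the guarantee holds: the system returns to a stable state and the reason is logged for later analysis.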
Success guarantees
Success guarantees function like minimal guarantees but they have a different focus. Minimal guarantees focus
on the "no matter what happens" view of the use case. Success guarantees focus on what must be true when
everything works properly. Cockburn says of success guarantees:
The success guarantee states what interests of the stakeholders are satisfied after a successful conclusion of the
use case, either at the end of the main success scenarios or at the end of a successful alternative path. It is
generally written to be added to the minimal guarantees: The minimal guarantees are delivered, and some extra
conditions are true; those additional conditions include at least the goal stated in the use case title.
Here's an example of a success guarantee for the PlaceOrder use case:
Success guarantee: If the order is completed successfully, then the order is saved, the seats at the show are
updated to Reserved status and associated with the order, the customer's credit card is charged for the total
order, and the credit card information is deleted. The transaction is logged with applicable stats for reporting
transaction duration and amount.
Describing Use Case Scenarios
Scenarios describe each possible outcome of an attempt to accomplish a use case goal. A use case identifies a
primary goal of the system. When an actor attempts to accomplish a goal using the system, there are usually
decisions and rules that influence the outcome of the use case. For example, the user may be able to choose from
a list of alternatives, or exception conditions may hinder the accomplishment of the goal.
A scenario is a single logical path through a use case. UML even defines a scenario as an instance of a use case
in that a scenario is one realization, or execution, of the conceptual use case. In other words, a use case defines
what could happen, and a scenario defines what does happen under a given set of conditions. A scenario may be
modeled using a Sequence diagram.
Caution: The word scenario is used a number of ways. In the context of UML use cases, scenarios have a very
specific meaning. Be careful not to confuse the more general usage of the term scenario, as an
example or situation, with the explicit definition used here.
There are many ways to work with scenarios. You can simply read the narrative and extract each logical path from
the text. You can draw out the logic with an Activity diagram so that the flow of logic can be visualized and more
easily segmented. Whatever the means, the scenarios start to reveal the inner workings of the system and the
expectations of the users in a way that the use case alone cannot. This closer look can open doors to further
analysis of the system and ultimately to the design.
Probably the key lesson in scenarios is the necessity of tackling the important questions and issues early, when
you have the best chance and the most time to come up with options and solutions. All too often, project teams
leave these questions until they're working on the code, when many of the big issues are easily lost in the mountain
of details. This causes the requirements to wind up being expressed only as code (rather than in prose), which is
alien to most users and difficult for them to evaluate.
Why you should care about use case scenarios
In some situations a use case is simple enough that the narrative is more than ample to explain all the issues that
define its proper execution. But in many other use cases, the logic can become troublesome. Many of the
applications we work on today are complex and require significant scrutiny.
In addition to addressing complexity, we need some way to test the accuracy and completeness of the use cases.
Unfortunately, for many projects, developers often hold off testing until the end of the project, when they're short
on time and focused on the solution rather than the requirements. Or worse yet, there is no time for testing at all,
so final testing happens in production.
Speaking of requirements, did you know that the overwhelming majority of litigation regarding software projects is
based on misunderstandings over requirements? In a recent abstract, Capers Jones, of Software Productivity
Research, had this to say:
The clients charge that the development group has failed to meet the terms of the contract and failed to
deliver the software on time, fully operational, or with acceptable quality. The vendors charge that the clients
have changed the terms of the agreement and expanded the original work requirements.
Furthermore, the problems that Jones refers to here are on projects where there is a contract. Consider how much
worse the situation can become where the requirements process is less formal!
If you've ever worked in a quality-assurance group, or even worked with one, you know how frustrating the tester's
role can be. Think about how the tester's challenge changes when she's able to create a test plan at the beginning
of the project instead of waiting until the end. Then testing can take place in small increments throughout the
project instead of in a massive, difficult, and frustrating process at the end (if there is time to test at all).
This is why scenarios have taken on an increasingly important role in the requirements phase. What happened to
other tools such as screen layouts and prototypes? They are still valuable, and can provide a great way for users
to visualize what you are trying to express in words and diagrams. The challenge is that the layouts and prototypes
themselves don't explain how they'll be used and why.
Avoiding Analysis Paralysis
Having said this, I want to voice a caution: You can dissect use cases endlessly looking for anything and
everything possible. To avoid analysis paralysis, recognize that you won't get everything right the first time
through. It just isn't possible. Here are some tips to keep things moving:
Allow for a number of passes through the problem.
Set limited time frames for each pass (time boxes).
Move on to other diagrams, and then revisit the use cases and scenarios after looking at the problem from
the unique perspectives that the other diagrams provide. Let the diagrams reveal information and
inconsistencies, or prove you are right.
Above all, allow time for practice. Trial and error can be excellent teachers.
Another liability of prototypes is that they give the false impression that the application is nearly complete. This is
understandable because the user only works with the interface, and to users, the interface is the application. But
you and I know that there is a lot more to it, including the logic expressed in the use case narrative and the
scenarios; the places where we grapple with business objectives; everything that can and probably will go wrong;
and our plans to cope with all of these issues to ensure the success of the system.
How to find use case scenarios
Reading a use case narrative and following every possible path can be difficult. One very simple and practical tool
to use to find scenarios visually is an Activity diagram. Visual models provide a valuable complement to text.
Together, the two perspectives can reveal insights not visible from only one of these perspectives. Given that
people learn in different ways, having different tools to explain the same problem can help everyone grasp the
important issues more easily.
One major benefit of an Activity diagram is its ability to quickly reveal dead-end segments and incomplete paths.
The Activity diagram is still grounded in textual description for each activity, so there is significant overlap with the
narrative, which helps ensure that the two representations, the narrative and the Activity diagram, are consistent.
To find a scenario, start at the beginning of the use case narrative. It usually works best to determine the path of
the successful scenario first, since it's usually the most commonly encountered path in practice. Trace the steps
until you come to a decision, represented by a diamond. Now you have to make a choice. Select one of the
possible paths leading out of the diamond, preferably the path that leads to the successful completion of the use
case, and continue to trace the steps. Continue the process until you reach an end point. That completes scenario
1. Now return to the top and retrace the first scenario to the first branching point (decision). Start identifying the
unique portion of the second scenario at the branch by following a different path leading out of the decision.
Continue tracing the steps as before. If you loop back to a place you have already been, then stop. Avoid creating
redundant segments.
Repeat the process until you have traced a path through every segment of the diagram. You should now have a
set of traces that together account for every element of the Activity diagram. The set of use case scenarios is then
derived from a unique combination of these segments. For example, the successful path is one use case scenario.
The first three steps of the successful scenario and the first exception segment make up one exception scenario
for the use case.
The next section steps through the process of finding scenarios for a theater system use case.
Finding use case scenarios for the case study
Now that you have an understanding of what a scenario is and how to find one, this section provides an example of
the process for finding scenarios using the theater system's SelectPerformance use case. Table 12-2 contains
the use case narrative that is the basis for the Activity diagram to follow. Please read through the narrative to get
familiar with how the use case works. Note the assumptions, preconditions, and post conditions, and how they are
integrated into the use case dialog.
Table 12-2: Use Case Narrative for SelectPerformance

Use Case Name: SelectPerformance
Use Case Number: 12
Author: Jane Analyst and Joe Client
Last Updated: April 1, 2003
Assumptions: The actor has the appropriate authority to use this feature.
Preconditions: None.
Use Case Initiation: This use case starts on demand.
Use Case Dialog:
The user should be given a default set of information about the shows scheduled within
the next 20 days. The user should also be provided with all the events currently
scheduled at the venue.
When the user selects an event, the system should provide the set of shows scheduled
for that event (the event display should remain unchanged).
When the user selects a show, the system should prompt him for a confirmation of his
selection in order to avoid mistakes.
The user should also be able to request a list of shows for a date range and get a new
list of shows (the event display should remain unchanged).
The user may cancel out of this use case without making a selection.
Post conditions: The selected show must be saved so that it can be passed on to the next step in the
workflow. One selected show is the net output from this use case.
Unresolved issues: Should we allow the users to establish their own defaults? How many items should we
allow them to view at one time?

Figure 12-19 models the Activity diagram for the dialog in Table 12-2 so that you can more easily see the logical
steps involved in the execution of the use case.
Figure 12-19: Activity diagram for the SelectShow use case.
Figure 12-20 traces the logic for the successful scenario first. It traces each step, following the choices we make at
each decision point. On this-first pass we want the choices that will lead us through the successful scenario.
Figure 12-20: Trace the successful scenario from beginning to end.
Note: Notice how the concurrent activities (between the two horizontal bars near the top of the diagram) are
treated as a single path.
Figure 12-21 selects the "No" path out of the confirm decision, representing the fact that the user decided not to go
with the show they selected (accidents happen). (This segment will reveal a unique path through the use case
narrative that we can use to build the second scenario.) This time the logic leaves the decision and loops back to
the merge point just before the first decision. The scenario segment stops here. We can later put it together with
the steps of scenario 1 that led up to the decision point to complete the definition of scenario 2.
Figure 12-21: In scenario 2, a performance is selected and the confirmation is declined.
Figure 12-22 selects the second path out of the first decision, representing the fact that the user selected an event.
This time the logic leaves the decision and loops back to the decision point again. The third scenario segment
stops here. We can later put it together with the steps of scenario 1 that led up to the decision point to complete the
definition of scenario 3.
Figure 12-22: Scenario 3.
Figure 12-23 identifies the fourth scenario by continuing from the first decision when the user chooses to get a new
list of performances based on a date range. Once again the scenario segment loops back to the decision point.
Figure 12-23: Scenario 4.
Figure 12-24 identifies the fifth scenario, in which the user chooses to cancel out of the use case without selecting
a performance.
Figure 12-24: Scenario 5.
The goal of developing scenarios is to account for every logical possibility in the flow of the use case. Every
segment identifies a unique line of logic within the total use case.
But you're not done yet.
Applying use case scenarios
Technically, the definition of a use case scenario says that each scenario describes a single logical path through a
use case, from start to finish. Using the Activity diagram, you can visually identify each path simply by following
lines in the diagram. But in the SelectPerformance use case example, each individual arrow traces a separate
logical segment, not necessarily a complete logical path, from the beginning of the use case to the end of the use
case. For example, alternative scenario 3 only identifies the steps that were unique from those already identified by
scenario 1. In fact, in this use case, a scenario could involve multiple loops through each segment in any order
before the user finally cancels or confirms a selection.
The diagram did not show repeated segments of paths already singled out by a previous scenario. This convention
is a common one employed to avoid redundancy and extra work. When writing the formal scenarios, or test cases,
simply build the test case from the scenario segments. By doing a little mixing and matching, you can provide
comprehensive coverage of every combination. For example, to fully specify scenario 2, include the first two steps
and the decision from scenario 1 plus the unique steps of scenario 2.
Note, too, that whenever you encounter a loop, the scenario maps out only a single pass through the loop. To test
the loop thoroughly, run the scenario segment multiple times, remembering to test the boundary conditions.
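The mixing and matching of scenario segments described above can be sketched in code. In this illustrative Python sketch (the segment names and step wording are hypothetical stand-ins, not taken verbatim from the narrative), a full test case is built from the shared prefix plus one or more unique segments, and a looping segment can simply be included more than once:

```python
# Illustrative sketch: composing full test-case paths from scenario segments.
# Step wording is a hypothetical stand-in for the SelectPerformance narrative.

shared_prefix = ["display default shows", "display events"]  # steps common to all scenarios
segment_1 = ["select show", "confirm selection"]             # successful scenario
segment_2 = ["select show", "decline confirmation"]          # scenario 2: loops back to the decision
segment_3 = ["select event", "display shows for event"]      # scenario 3: loops back to the decision

def build_test_case(*segments):
    """A full test case is the shared prefix plus one or more segments, in order."""
    steps = list(shared_prefix)
    for seg in segments:
        steps.extend(seg)
    return steps

# Scenario 2 fully specified: the shared steps, the declined confirmation,
# then a second pass through the loop that ends in a confirmed selection.
scenario_2 = build_test_case(segment_2, segment_1)
print(scenario_2)
```

The same helper covers loop testing: passing the same segment several times produces a test case that exercises repeated passes through the loop.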
The result of completing the description of all the scenarios should be a reasonably complete acceptance-level (or
user-level) test plan for each use case. Remember, though, that so far you have only modeled the system at the
use case level. That means that the test plan you have so far is really only an acceptance-level test plan, not full
system or integration testing. But the use case level test plan does provide the framework for all of the test plans
for successive phases in the project.
Summary
Use case diagrams, together with use case narratives and scenarios, define the goals of a system, or other
classifier, such as an enterprise, subsystem, or component. The concept came from the work of Ivar Jacobson
and his associates on a methodology called Object-Oriented Software Engineering (OOSE). The purpose of the
use case approach is to focus the development effort on the essential objectives of the system without getting lost
in, or driven by, particular implementations or practices.
The elements of a Use Case diagram include:
Actors define entities outside the system that will use the system in some way.
Associations indicate which actors will interact with features (use cases) of the system.
include and extend relationships describe the nature of the interactions between use cases.
Generalization defines inheritance relationships between use cases or between actors.
The Use Case diagram is complemented by the use case narrative and use case scenarios.
The goal of the Use Case diagram is to define the expectations of the external actors. Those actors may be
people, systems, or devices that need to interact with the system. Their interactions may be to provide input, to
receive output, or to dialog with the system in order to cooperate in the completion of a task. All of these
interactions are focused through a set of specific features of the system called use cases. Each use case defines
one specific goal that the system can achieve.
The construction of a Use Case diagram employs the following steps:
1. Define the context of the system:
1.1 Identify the actors and their responsibilities.
1.2 Identify the use cases, the features of the system, in terms of specific goals and/or results that must be
produced.
2. Evaluate the actors and use cases to find opportunities for refinement (splitting or merging definitions).
3. Evaluate the use cases to find include type relationships.
4. Evaluate the use cases to find extend type relationships.
5. Evaluate the actors and use cases for generalization opportunities (shared properties).
The use case narrative describes what the actors expect from the use case. The use case is a behavior of the
system with a beginning (trigger), middle (dialog), and end (termination). As such, it needs to be explained in terms
plain enough for the actors to understand and verify, but precise enough for analysts and designers to rely on in
order to build the system.
The features of a use case narrative aren't standardized, but these are common elements in wide use:
Use case initiation, or trigger, describes how to start a use case.
Assumptions define conditions that must be true before the use case may execute, but are not tested by the
use case.
Preconditions define conditions that must be true before the use case may execute, and are tested by the use
case.
The use case dialog explains how the user (whether an actor or another use case) interacts with the system
during the execution of the use case.
Use case terminations define the different mechanisms that can cause the use case to stop execution.
Post conditions define the state of the system that must be true when the use case ends. This helps prevent
use cases from leaving the system in an unstable condition for other use cases that follow.
Minimum guarantees describe what the actors can expect from a use case, no matter what happens during
the execution of the use case.
Successful guarantees describe what the actors can expect from a use case when it completes successfully.
Although these elements are valuable, they are by no means exhaustive. Definitely look into other books and online
resources on use cases, and augment the narrative to support your own method of development.
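Because the narrative elements listed above are just structured text, they are easy to capture in a lightweight template. The following Python sketch (the field names simply mirror the list above; they are not any standard schema) models a narrative record that a team could fill in for each use case:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCaseNarrative:
    """One record per use case, mirroring the common narrative elements."""
    name: str
    initiation: str = ""                                     # trigger: how the use case starts
    assumptions: List[str] = field(default_factory=list)     # must be true, but not tested by the use case
    preconditions: List[str] = field(default_factory=list)   # must be true, and tested by the use case
    dialog: str = ""                                         # how the user interacts with the system
    terminations: List[str] = field(default_factory=list)    # mechanisms that stop execution
    postconditions: List[str] = field(default_factory=list)  # required system state at the end
    minimum_guarantees: List[str] = field(default_factory=list)
    success_guarantees: List[str] = field(default_factory=list)

# Partially filled in from the SelectPerformance narrative in Table 12-2.
select_performance = UseCaseNarrative(
    name="SelectPerformance",
    initiation="This use case starts on demand.",
    assumptions=["The actor has the appropriate authority to use this feature."],
    postconditions=["The selected show is saved for the next step in the workflow."],
)
print(select_performance.name)
```

A record like this keeps the narrative verifiable by the actors while remaining precise enough for analysts to trace into scenarios and test cases.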
Use cases express what the users expect the system to provide. Use case narratives explain in detail how the
users expect to interact with the system when they invoke the use case. Scenarios break down the narrative
explanation to provide a detailed examination of every possible outcome of the use case; why each outcome
happens, and how the system is supposed to respond.
The Activity diagram provides a visual evaluation of the use case narrative. Although it isn't necessary to use an
Activity diagram, it can be very helpful, especially for complex use cases.
A scenario is a single logical path through a use case, expressing one possible outcome. Finding use case
scenarios requires that you follow each unique series of activities and decisions from the beginning of the use case
to a single end point. Together, the scenarios should account for every possible way that a use case could
execute.
When the scenarios have been identified, they may be used to develop a comprehensive acceptance-level test
plan. They may also be used to test the results of subsequent analysis and design efforts.
Chapter 13: Modeling Behavior Using an Activity Diagram
Overview
The Activity diagram is often seen as part of the functional view of a system because it describes logical processes,
or functions. Each process describes a sequence of tasks and the decisions that govern when and how they are
performed. You must understand a process in order to write or generate correct code for a behavior.
Note: Some authors lump functional and dynamic aspects of modeling together because they both express
behavior. However, in teaching these concepts, I find it useful to distinguish logic from interaction.
Interactions address net results of processes, that is, net input and output. Functional or logical models
address the mechanics of transforming input to output.
Also, functional modeling acquired a poor reputation with the onset of object-oriented (OO) modeling. After all,
object-orientation addresses the deficiencies of the earlier modeling approaches such as functional and data
modeling. But both functional modeling and data modeling still provide valuable insight into software development.
OO development methods do not eliminate the need for these valuable perspectives; they simply bring the two
concepts together to provide a more comprehensive and accurate model of how things work. Functional modeling
is still a very basic part of any application design.
So UML has preserved functional modeling in the form of the Activity diagram, which is designed to support the
description of behaviors that depend upon the results of internal processes, as opposed to external events as in
the interaction diagrams. The flow in an Activity diagram is driven by the completion of an action. In a state
machine, the flow is driven by external events or conditions in the associated classifier. Consequently, Activity
diagrams are useful for defining operations, use cases, and workflow that links a series of use cases.
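The distinction between completion-driven and event-driven flow can be made concrete. In this hedged Python sketch (a toy illustration, not UML tooling), each action node runs as soon as all of its predecessors have completed, which is exactly how control passes along the edges of an Activity diagram with a fork and join, and no external events are involved:

```python
# Minimal completion-driven executor: an action fires when every predecessor
# has completed, mirroring token flow in an Activity diagram.

edges = {                     # predecessor -> successors (a simple fork/join)
    "start": ["a", "b"],      # fork into two concurrent actions
    "a": ["join"],
    "b": ["join"],            # "join" waits for both a and b to complete
    "join": ["end"],
}

def run(edges, initial="start"):
    # Count incoming edges for each node; a node is ready when the count hits zero.
    pending = {}
    for src, dsts in edges.items():
        for d in dsts:
            pending[d] = pending.get(d, 0) + 1
    order, ready = [], [initial]
    while ready:
        node = ready.pop(0)
        order.append(node)               # "execute" the action
        for nxt in edges.get(node, []):  # completion of this node enables successors
            pending[nxt] -= 1
            if pending[nxt] == 0:        # all predecessors have completed
                ready.append(nxt)
    return order

print(run(edges))  # start first, then a and b, then join, then end
```

Contrast this with a state machine, where a transition would wait on an external event rather than on the completion of the preceding action.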
Activity Diagram Changes from UML 1.4 to 2.0
The changes between UML 1.4 and UML 2.0 metamodels are substantial, although the net impact on the diagram
notation is not very dramatic. This chapter provides an explanation of the diagrams in both versions and more in-
depth coverage of the UML 2.0 metamodel for those who need to support the new concepts in a modeling tool, or
who want to define their own extensions to the model.
State Machine versus standalone model
In UML 1.4, the Activity diagram (ActivityGraph) was defined as a subclass of StateMachine. Thus, most of the
information needed to describe an Activity diagram was already defined as part of the state machine specification.
The Activity diagram simply added a few new classes, namely ActionState, ObjectFlowState, SubactivityState, and
Partition.
UML 2.0 views an Activity diagram as entirely distinct from a State Machine. The metamodel is
substantially expanded and there is virtually no overlap with the state machine metamodel.
Even with this rather substantial change to the underlying metamodel, the notation is still remarkably similar. Table
13-1 identifies the notation from both versions, revealing the similarities and differences.
Table 13-1: Activity Diagram Notation in UML 1.4 and 2.0
UML 1.4 | UML 2.0 | Description
Action state | Action, with preconditions, post conditions, and parameter set | UML 2.0 expands the description of an action.
Subactivity state | Activity, with preconditions and post conditions; can belong to an ActivityGroup | UML 2.0 expands the description of an activity.
Decision and merge | DecisionNode and MergeNode | UML 2.0 also allows the two icons to be consolidated into one.
Call state | Simply another action |
Swimlanes (called Partition in the metamodel) | Partitions (called ActivityPartition in the metamodel) | New name. Same basic function.
ObjectFlowState | ObjectNode and ObjectFlow |
Deferred events | |
Fork and join | ForkNode and JoinNode | UML 2.0 also allows the two icons to be consolidated into one.
Final state | ActivityFinalNode and FlowFinalNode | UML 2.0 refines the terminator notation to support terminating an individual flow versus the entire Activity diagram.
Transition | ActivityEdge |
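For anyone updating a modeling tool or writing a model migration script, the renames in Table 13-1 boil down to a simple lookup. This Python sketch (the dictionary is derived from the table; treat it as illustrative rather than an exhaustive metamodel mapping) maps UML 1.4 notation names to their UML 2.0 counterparts:

```python
# Illustrative rename map derived from Table 13-1 (UML 1.4 -> UML 2.0).
NOTATION_1_4_TO_2_0 = {
    "ActionState": "Action",
    "SubactivityState": "Activity",
    "Decision": "DecisionNode",
    "Merge": "MergeNode",
    "Partition": "ActivityPartition",
    "ObjectFlowState": "ObjectNode",    # the flow itself becomes ObjectFlow
    "Fork": "ForkNode",
    "Join": "JoinNode",
    "FinalState": "ActivityFinalNode",  # FlowFinalNode ends a single flow only
    "Transition": "ActivityEdge",
}

def to_uml2(name):
    """Return the UML 2.0 name, or the input unchanged if no rename applies."""
    return NOTATION_1_4_TO_2_0.get(name, name)

print(to_uml2("Transition"))  # ActivityEdge
print(to_uml2("CallState"))   # unchanged: a call state is simply another action
```

Names not in the map fall through unchanged, which matches rows in the table (such as the call state) that have no distinct UML 2.0 element.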