
Practical Guide to

Enterprise Architecture
Volume One
James McGovern
Scott Ambler
Michael Stevens
James Linn
Vikas Sharan
Elias Jo

Prentice Hall PTR


Upper Saddle River, New Jersey 07458
http://www.phptr.com

Library of Congress Cataloging-in-Publication Data

Acquisitions Editor: Paul Petralia


Cover Design Director: Jerry Votta
Manufacturing Manager: Alexis R. Heydt
© 2001 by Prentice Hall PTR
Prentice-Hall, Inc.
A Simon & Schuster Company
Upper Saddle River, New Jersey 07458

All products mentioned herein are trademarks or registered trademarks of their respective owners.
Prentice Hall books are widely used by corporations and
government agencies for training, marketing, and resale.
The publisher offers discounts on this book when ordered in bulk quantities.
For more information, contact Corporate Sales Department, Phone: 800-382-3419;
FAX: 201-236-7141; E-mail: corpsales@prenhall.com
Prentice Hall PTR, One Lake Street, Upper Saddle River, NJ 07458.
All rights reserved. No part of this book may be reproduced, in any form or by any means, without
permission in writing from the publisher.
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
ISBN 0-13-xxxxxx-x
Prentice-Hall International (UK) Limited, London
Prentice-Hall of Australia Pty. Limited, Sydney
Prentice-Hall Canada Inc., Toronto
Prentice-Hall Hispanoamericana, S.A., Mexico
Prentice-Hall of India Private Limited, New Delhi

Prentice-Hall of Japan, Inc., Tokyo


Simon & Schuster Asia Pte. Ltd., Singapore
Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro

To those who are savage in pursuit of excellence

Preface

To consider what enterprise architecture means, it is important to understand its origin. All architecture within information technology can trace its ancestry back to the lessons learned from building architecture. Throughout time, there have been building architects who guided the construction of buildings, even though the profession itself is less than 200 years old. In cultures past, people used the same term for builder and architect. Craftsmen such as carpenters or masons designed building structures, hired laborers, bought materials and managed the development from digging the trench for the foundation to building the roof.
Like all industries over time, a revolution came that brought new techniques, regulations and so on. With each revolution also come highly specialized experts who have mastered a particular aspect of construction, each of whom is responsible for solving specialized problems. The details of construction became the concern of experts, while the role of the architect (general contractor) became primarily focused on providing the overall blueprint for the structure and the relationships that exist between the client and the specialists.
The role of the IT architect has many parallels to the building trade. Many systems are created by small teams that can envisage the entire application. As organizations and their systems grow larger, the complexity of incorporating business specifications, the use of distributed teams and integration with external parties has changed the roles associated with creating the nervous system of an organization. Many organizations have created the role of Enterprise Architect, whose primary responsibility is to ensure the integrity of information systems and the development process.
Enterprise architecture is one of the most challenging roles in information technology today. Many aspects of the role are technical, while much more of it is about interaction. Many people who hold this position have significant responsibility but do not have authority or control. Enterprise architecture as an area of study allows one to focus on interesting, complex problems, advance up the corporate ladder and maintain technical competencies, all while making a colossal difference to the enterprise.
Enterprise architecture requires skill sets not normally taught in university curricula or acquired on the job. Even good architectures are rarely accepted without formidable challenges. The successful architect has to dissolve any distaste for politics and become the evangelist of architecture in the eyes of its various stakeholders. Architects cannot simply buy into any vision that is articulated. Any involvement in the implementation of enterprise architecture requires one to understand it. The most successful architecture will have an architect who can describe the motivation behind architectural choices. Likewise, a successful architecture has an architect who leads the architecture team, the development team, the technical direction of the enterprise and sometimes the organization itself. The primary focus of this book is to be a guide and trusted advisor to those who want to be successful in this role.

CONTENT OF THIS BOOK


The goal of this book is to share insight gathered by industry thought leaders in an easy-to-read, practical manner. This book contains many leading-edge examples that illustrate how enterprise architecture can be applied to existing business and technology issues. It will help one focus on how to think concretely about enterprise architecture while providing solutions to today's problems. Within the covers of this book, we will cover the following topics:

Systems Architecture
Service Oriented Architecture
Software Architecture
Product Line Practice
Enterprise Unified Process
Agile Modeling
Agile Architecture
Presentation Tier Architecture
User Experience and Usability
Data Architecture

It is recommended that the chapters in this book be read in order as each chapter builds on
the previous chapter. If reading the chapters in order is not a viable option, then reading a
particular section may be a better choice.

ABOUT THE BOOK


The idea for this book came about during a lunchtime conversation between two of the authors, who were complaining about how the computer field was the only field in which you could become a manager, director, or even vice president without any computing experience. During this lunchtime conversation, we compared the computer field to other professional fields such as accounting, law enforcement and even the dreaded legal field.
Imagine if the town you lived in were looking to hire a new police chief. Could you have confidence in the chief if he had never been a police officer at any point in his life? Furthermore, what if the chief did not know how to use a gun or had no idea about proper investigation procedures? Let's extend this thought to accountants and lawyers: imagine a former McDonald's manager applying to be a partner at one of the firms simply because he was a leader. Maybe this is the reason why we have so many accounting scandals these days.
Partners in the legal and accounting fields are leaders, but they also retain current knowledge of their practice and could argue a case or balance your books. The same police chief is a leader but most likely still remembers how to write a parking ticket.
The root cause of this problem is one that cannot be addressed quickly. Unlike other fields, the computer field is one where you can have the title of architect but not necessarily know your job. The information technology field is constantly evolving and ever changing. For the motivated, it requires long hours reading multiple books by the thought leaders of our industry to figure out the next minute. Even for the most diligent of us, the goal of seeking knowledge and enlightenment may never be reached.
Many businesses are faced with challenging problems such as rapidly changing technology, employees with limited skill sets, smaller budgets and less tolerance for failure. The only way one can be successful in this new order is to learn a new strategy. Today's businesses no longer have the opportunity to learn from their failures.
This book presents several provocative alternatives and serves as a leader, teacher and guide to help you manage the chaos. It will challenge both conventional and contrarian wisdom. This book is written by some of the brightest information technology thought leaders and can help you not only learn from a single source but also create an agile enterprise using techniques learned on the battlefield of life.
Here's to your success.

AUDIENCE
This book is suitable for all information technology people who have at least five years of experience in the field. Ideally, the reader of this book is a project manager, senior developer, software architect, enterprise architect or director at a large Fortune 1000 organization. This book is also suitable for information technology executives such as Chief Information Officers (CIOs) and Chief Technology Officers (CTOs).
This book could also be used for discussion purposes in college classrooms. For professors focusing on Management Information Systems (MIS) at the undergraduate or graduate level, this book could be used in any strategic planning or architectural course.

WHAT THIS BOOK IS NOT!


This book assumes from the beginning that its readers have been in the information technology field for at least five years and have deployed systems using one or more technologies (or at least attempted to). We also assume that the reader has basic familiarity with emerging technologies and the problems they attempt to solve.
This book may reference certain topics that are relevant for database administrators, developers and project managers, but it in no way attempts to serve as a complete reference for those audiences. We have provided references at the end of each chapter that readers can consult if they are interested in drilling deeper into any particular topic.

COMPANION WEB SITE


Be sure to visit the companion Web Site for this book at
http://www.architectbook.com. Here you will be able to download supplemental
information related to Enterprise Architecture.

CONTACTING THE AUTHOR TEAM


The authors of this fine guide welcome your questions, comments and constructive
criticism regarding the book. The authors frequent multiple Yahoo discussion groups on
Agile processes and can usually be found there. We believe it is important to answer
questions that are asked in a forum as others may also benefit from the answers we
provide.

DISCLAIMER
The advice, diagrams and recommendations contained within this book may be used as your heart desires, with the sole restriction that you may not claim that you were the author. The publisher, authors and their respective employers do not provide any form of warranty or guarantee of its usefulness for any particular purpose.
This book is 100% error free! Not! If you believed us even for a second, we have a suspension bridge in our backyard that we would like to sell you. The author team and editors have worked hard to bring you an easy-to-understand, accurate guide on enterprise architecture. If you find any mistakes in this book, we would appreciate your contacting us.
This book will use for its examples a fictitious auto manufacturer. Any example
companies, organizations, products, domain names, email addresses, people, places,
events depicted herein are fictitious. No association with any real company, organization,
product, domain name, email address, person, places or events is intended or should be
inferred.

MISCELLANEOUS RAMBLINGS
Authors and readers alike have their own religion when it comes to the best way a given topic should be explained. The examples contained within this book will not be a blow-by-blow description of the subject; there are books that are better suited for that goal. We shall strictly focus on content that provides an additional, value-added experience.
Enterprise architecture is a new, immature discipline in our industry. The author team struggled for countless hours over the best way to help others build solutions that are highly available, extremely scalable, cost effective and easily maintainable. Many late nights have resulted in what we believe is the right formula of architectural advice and implementation details.

ABOUT THE AUTHORS

James McGovern
He is the lead author of three recent books: Java Web Services Architecture (Morgan Kaufmann), XQuery Kick Start (Sams Publishing) and the J2EE Bible (Wiley). He has served as technical editor for five different books on various computer topics. He has sixteen years of experience in information technology. He is employed as an Enterprise Architect for Hartford Financial Services. He holds industry certifications from Microsoft, Cisco and Sun. He is a member of the Java Community Process and is currently working on the Performance Metric Instrumentation (JSR 138) specification. He is a member of the Worldwide Institute of Software Architects.

Scott Ambler
He is the President and senior object-oriented (OO) development consultant with Ronin International and specializes in OO analysis and design, Agile Modeling, training and mentoring, and architectural audits of mission-critical systems. He is the author of the best-selling books Elements of UML Style, Agile Modeling, Mastering Enterprise JavaBeans, The Elements of Java Style, Building Object Applications That Work, The Object Primer and Process Patterns. He has a Master of Information Science from the University of Toronto. He is a Senior Contributing Editor with Software Development magazine, a columnist with IBM developerWorks and a columnist with Computing Canada.

Michael Stevens
Michael Stevens is employed as a software architect for The Hartford. He specializes in service-oriented architecture and is a co-author of a recent book, Java Web Services Architecture (Morgan Kaufmann). He is a columnist for Developer.com on various software development topics. He received his B.S. degree in computer science from Central Connecticut State University and is a candidate for a master's degree in computer science from Rensselaer Polytechnic Institute. He has over fourteen years of experience in information technology. He founded a software company that developed solutions for the mailing industry. He is a certified Java programmer, a member of the IEEE, the IEEE Computer Society, the ACM and ACM SIGSOFT.

James Linn
He is the co-author of a recent book, XQuery Kick Start (Sams Publishing). He has eight years of experience in software development and has worked with Java, C++ and MFC. He possesses a master's in biochemistry and a master's in software engineering. He is a consultant with Hartford Technology Services Company.

Vikas Sharan
Vikas Sharan is currently Lozoic's co-founder and Managing Partner. He has spent ten years in various technical management and entrepreneurial positions defining, developing and selling software and Internet services. Before starting Lozoic, Vikas was a Director with a services firm in Silicon Valley. Prior to this, he founded and set up a successful usability practice for PRT Group, a large system integrator on the East Coast with a development center in Barbados.
Vikas is currently a part of the architecture team at Baypackets, a telecom application platform engineering company. Vikas has a strong telecom, supply-chain and financial services background, and has worked on computer-based training systems and mobile communication products. Vikas has a background in design and human factors and holds a master's degree in Industrial Design from the Indian Institute of Technology, Bombay.

Elias Jo
Elias Jo is a Systems Architect for New York Times Digital and is responsible for the technical direction of their enterprise Web site. He has a background in deploying enterprise-scale Web applications and middleware products for both the media and financial services industries. Prior to the Times, Elias was the architect/lead developer for multi-million dollar project deployments at Deutsche Bank, Citibank, Standard & Poor's, Guardian, AMKOR and ADP. He has been published in ColdFusion Developer's Journal and has served as a technical editor for O'Reilly's technical publications. Elias is a graduate of Columbia University and holds a degree in Industrial Engineering with a minor in Computer Science.

ABOUT THE REVIEWERS

Chris Caserio, Aetna
Richard Maynard, The Hartford
Nitin Narayan, Accenture
Alan Williamson, Editor, JDJ

ACKNOWLEDGEMENTS
This book would not be possible without the keen eyes of our acquisitions editor Paul Petralia and of Peter Coad. Many thanks go to our development editor, who spent many hours refining our manuscript and performing the delicate procedure of linguistic hygiene. We would also like to acknowledge the marketing and production staff of Pearson Education for making this book an Amazon.com bestseller.
Many of the thoughts contained within this book are about creating agile processes for software development. A debt of gratitude is owed to the founding fathers of the Agile Alliance (www.agilealliance.org), including Kent Beck, Mike Beedle, Arie van Bennekum, Alistair Cockburn, Ward Cunningham, Martin Fowler, James Grenning, Jim Highsmith, Andrew Hunt, Ron Jeffries, Jon Kern, Brian Marick, Robert C. Martin, Steve Mellor, Ken Schwaber and Jeff Sutherland.
The team would also like to thank other writers we have worked with in the past and hope to work with in the future: Vaidynathan Nagarajan, Marty Biggs, Steven Spewak, Anita Cassidy, Dimitris Chorafas, Marc Goodyear, Tony Beveridge, Alan Williamson, Col Perks, Fred Cummins, Sameer Tyagi, Sunil Mathew, Kurt Cagle, Per Bothner, John Crupi, Graham Glass, Walter Hurst, Richard Stallman and Anne Thomas Manes.
The team would also like to thank our unfortunate reviewers for their sterling efforts and apologize profusely for all of the hate mail we composed but never sent. Finally, we would like to thank the I.T. community as a whole for providing the motivation for writing this book. We wouldn't still be here if it weren't for you.

James McGovern
It is much easier to write a book when others believe you can. My deepest thanks to
Sherry: my true love and constant source of inspiration. To my parents James and Mattie
Lee for bringing me up and always supporting me. Universal greetings to my family:
Daisy, Annika, Pamela, Demesha, Aunt Jesse Soogia, Kenrick, Kiley, Nicholas, Ian, Alex,
Kelon and Keifer. To my perfect son Little James, I am going to get you.
Thanks to all of my co-workers at Hartford Financial Services who never stopped
being interested in Enterprise Architecture and the progress of this book. Furthermore, I
would like to thank the readers of my previous works for their suggestions, praise, letters
of admiration and gratitude.

Scott Ambler
XXX

Michael Stevens
XXX

James Linn
I thank my dog Trabbie, my cat Harlie (aka Wanda Widebody) and most especially
my wife Terria for her acceptance of this project and her forbearance in not hurting James
McGovern like she wanted to.

Vikas Sharan
XXX

Elias Jo
XXX

FUTURE VOLUMES
Enterprise architecture spans many topics. To cover all of them in a single book is not realistic. This is the first volume of many more to come. Listed below are the future volumes in this series.
Volume Two: Methodology
Publication Date: Summer 2004
The second volume on the topic of enterprise architecture will bring you methodologies and best practices for implementing enterprise architecture within your organization, and will also provide guidance on increasing the productivity of other disciplines. Some of the topics that will be covered are:
o Business Architecture
o I.T. Governance
o Project Management
o Quality Assurance
o Defect Management
o Testing Strategies

Volume Three: Industry Verticals
Publication Date: Spring 2005
The third volume on the topic of enterprise architecture will take an industry-specific approach to architecture and will focus on the disruptive technologies within each vertical. This book will dive into the following industry verticals:
o Insurance
o Healthcare
o Retail
o Media / Publishing
o Telecommunications
o Investment / Brokerage
o Manufacturing

WHY ENTERPRISE ARCHITECTURE?


A wise Enterprise Architect once observed a conversation between two individuals, one of whom also held the title of Enterprise Architect and worked for a very well-respected organization (name intentionally withheld). He was passionately engaged in a conversation with a senior executive about what direction the organization should take with its information systems. His recommendations fell on deaf ears until he asked the executive three very simple questions:
How can we have a viable customer relationship management strategy when we do not even know where all of our customers' data resides?
How does our technology spending equate to enabling our strategic business goals?
Does our organization have an operational process model?
As you can surmise, the answers to these questions were not known. Many
executives in Information Technology are not knowledgeable about what it takes to guide
the architecture in the direction that results in the biggest bang for the buck. Sadly, many
executives have lost control of the ship they steer, which has resulted in bloated
infrastructures and conclusions being made on whims versus sound principle-based judgment and the experience of others.
In many organizations, the annual budget cycle starts with clueless, disruptive questions and statements such as "How many servers do we have?" or "Let's get the new Intel 3333 GHz machines as desktops." None of these is appropriate for a discussion about enterprise architecture. A good architecture addresses:
The organization's business and information needs.
The synergistic relationship between Return on Investment (ROI) and Total Cost of Ownership (TCO).
Migration from the current (as-is) state.
Easy migration to the organization's desired future state.
Technology change over time, which affords the opportunity to do things faster, smarter and, hopefully, cheaper.
Many organizations will claim to have an architect who creates architecture. Others
will attempt to buy a nicely packaged architecture from one of the Big Five consulting
firms without having a clue as to what they are getting. If your architecture does not address
the following six principles, then you may be doing more harm than good:

1. Ensure that the architecture can be measured for effectiveness.
2. Make sure that data is separated from logic (business functions).
3. Observe the principles of modularity by making sure that each component within the enterprise architecture performs only one or two discrete tasks.
4. Make sure that functions are separated into differentiated tasks that are generic and self-contained.
5. The architecture should be self-documenting. If it isn't, then something is wrong.
6. Require that all data, artifacts and so on that are generated by an organization are managed and maintained by that same organization.

[Layered stack, top to bottom: Business, Information, Operational, Organizational, Architectural, Infrastructure]
Figure 1: Enterprise Architecture Model
Sound enterprise architecture can be described (modeled) using a layered approach, as pictured above. Each layer brings enlightenment to the mission of the organization, allowing all parties to become active participants in defining the next-minute architecture.
The business model specifies the enterprise's business goals, the products and services it produces, what additional products and services it desires to produce in the future, and the constraints that restrict how these goals can be accomplished, including such factors as time, budget, and legal and regulatory restrictions. The ultimate goal of the business model is to define in a single location the near-term (next-minute) and strategic intent of the organization.

[Business, Technology, Management]
Figure 2: Business Technology Integration


Information models demonstrate how the enterprise as a business functions as a cohesive unit. Business analysts and executives review the output of the business model and decide on what processes and integration are required to achieve the goals of the business. The key to success in this layer is having sufficient information to make valid conclusions. This model contains information about processes, resources, how processes interact and the associations amongst the data elements being used to drive the business. Lack of information within this model points to a weak data architecture within an enterprise.

[The Information Technology Lifecycle, showing each role and its activities:
Program Management: Idea Generation; Assemble Team
Business Analysts: Capture Customer Requirements; Evaluate Industry Trends
Architects: Evaluate Technology Trends; Determine Feasibility; Determine Architecture; Prototype
Developers: Design Components; Develop System; Refactor
QA: Design Test Plan; Test Functional Requirements; Test Non-Functional Requirements
Go / No Go? decision leading to PRODUCTION]
Figure 3: Roles and Responsibilities


Operational models describe the structure of the business and provide for allocation of resources such that it optimally addresses the elements of an enterprise's information model. This model creates the organization chart and assigns resources (people and materials) to the specified hierarchy. Many organizations have struggled with creating the right organization chart. This model, if done correctly, will cause cultural pain. Paradigm shifts occur at this level, primarily because many organizations adopt a traditional organizational structure instead of aligning to their stated information model.
Organizational models scrutinize the processes described within the information model and endeavor to provide optimal execution of those processes by creating diverse operational entities. This model contains an enterprise's various departments and traditionally includes functional areas such as engineering, sales & marketing, administration and research & development.
Architectural models study the previous models and define the optimal technology topology for information systems. They take into account services that are performed by humans today but that could allow the business to increase its agility if they were automated using systems. This usually results in the creation of an architecture that includes distributed processing, middleware and open systems.
Infrastructure models capture the current state of technology decisions that have been made in the past and have been implemented. The model at this layer is constantly evolving and has the least stability of all the models. Some components of an infrastructure architecture have lives of only several weeks or months.
No enterprise architecture can be considered complete unless it addresses all of the above layers. To be successful with enterprise architecture may require a divide-and-conquer approach, as no one individual can do all of the work. Figure out how to engage other resources within your organization to assist you with the task.

BUILDING AN ENTERPRISE ARCHITECTURE


Let's revisit the analogy of the building trade and the architect of a building, and compare it to the enterprise architect of a corporation. The building of a corporation's nervous system is directly comparable to building a house. One may ask, "I have no clue about how to build a $50 million skyscraper, let alone a $50,000 house." The answer to both problems is simple; it is accomplished by creating blueprints.
Blueprints show how a house will be constructed and take on multiple views, each expressing its own level of detail. One view of the house may include all the electrical circuits, which are of extreme value to an electrician. Likewise, the plumber will want to know where all the pipes and water should go. Enterprise architecture provides the blueprints for all of the people within an organization so that they also know how to build an agile enterprise. The blueprint, as with a house, is meant to provide sufficient detail to allow the idea to become a reality when put in the hands of skilled professionals.
In the above example, we do not mean to imply that building a skyscraper is as easy as building a house. A decade ago, Garlan stated that "as the size and complexity of software systems increases, the design problem goes beyond the algorithms and data structures of the computation: designing and specifying the overall system structure emerges as a new kind of problem. This is the software architecture level of design" [Garlan92].
Enterprise architecture within this context seeks to solve for intellectual manageability. The complexity of large things arises from broad scope, massive size, novelties of the minute, currently deployed technologies and other factors. The goal of enterprise architecture in this situation is to provide a manner to hide unnecessary detail, simplify difficult-to-understand concepts and break the problem down into better-understood components (decomposition).

[A spectrum from Architecture to Construction, in three bands:
Meta Architecture: Concentration is on high-level decisions that influence the structure of the system. Selection and tradeoffs occur here.
Architecture: Definition of the structure, relationships, views, assumptions and rationale are created for the system. Decomposition and separation of concerns occur here.
Guidelines: Creation of models, guides, templates and usage of design patterns for the system. At this stage, frameworks are used and standards are employed. Providing guidance to engineers, so that the integrity of the architecture is maintained, occurs here.]
Figure 4: Architectural Composition


Complexity also arises within organizations due to the processes used in building the system (the "let's do RUP" party train), the sheer number of people involved with construction, the introduction of outsourcing and geographically dispersed teams, and the organizational funding model. Ideally, enterprise architecture can assist in (but not eliminate) the management of the process by providing clearer separation of concerns, enhancing communications and making dependencies more manageable.
Building enterprise architecture is difficult but not impossible, as long as you stick to architectural principles. The architectural blueprint defines what should be built, not how or why. Project plans document the how, and their creation is best left to individuals who have studied the project management discipline. The why is best left to your business customer, as all architecture should be business-driven.
One word of advice: a good architecture provides agility but also imposes constraints. A highly successful architecture will remove points of decision from those who are traditionally accustomed to a free-for-all, making decisions impulsively or reacting to the opinions of those who are uninformed. Good enterprise architects will see things that make their heads hurt, but their aspirin is in knowing that their hard work and persistence will result in the reduction of future problems. Helping others see the value of enterprise architecture is crucial, as it brings benefit to the enterprise even if a given decision may have local consequences.
CLOSING THOUGHTS
The development of enterprise architecture is difficult. It requires refinement of overlapping and conflicting requirements; invention of good abstractions from those requirements; formation of an efficient, cost-effective implementation; and ingenious solutions to isolated coding and abstraction evils. Additionally, we must manage all this work to a successful finish, all at the lowest possible cost in time and money. The only way this can be truly accomplished is by staying agile.
Before continuing, if you are not familiar with the agile manifesto, we have included a copy in the appendix. Please read it before continuing. Many of the thoughts contained within the chapters take an agile approach to explaining the solutions to the problems presented. An understanding of agile development will help you become more successful in implementing practical enterprise architecture.
REFERENCES
Bass, L., Clements, P. and Kazman, R. Software Architecture in Practice. Addison-Wesley, 1997.
Garlan, D. and Perry, D. Guest editorial, IEEE Transactions on Software Engineering, April 1995.
Merriam-Webster's Collegiate Dictionary.
Morris, C. and Ferguson, C. "How Architecture Wins Technology Wars." Harvard Business Review, March 1993.

Service Oriented Architecture

Service-oriented architecture (SOA) is a specialized architecture that formally separates services, which are the functionality that a system can provide, from the consumers of those services. SOA has generated enormous interest because it can provide a major increase in the value of a company's data and application resources by enabling the integration of these assets.

CHARACTERISTICS OF A SERVICE-ORIENTED
ARCHITECTURE

In general, SOA has the following characteristics:


Design by contract.

Usually represents a business function or domain.

Modular design.

Loosely coupled.

Connections between modules are dynamically established.

The location of the service is transparent to the client.

Transport independence.

Platform agnostic.

Language agnostic.
Let us examine each of these characteristics in further detail.

Design by Contract
Design by contract is one of the most important aspects of SOA. In an SOA, all services
publish a contract. This contract completely encapsulates the agreement between the
client of the service and the service. The details of the contract encompass:
Required inputs
Expected outputs
Preconditions
Post conditions
Error messages
Many software engineers are used to thinking of applications as a set of classes that provide the functionality. The classes interact via their functions. These functions consist of a (hopefully) descriptive name and a function signature. The function signature states what datatype the class will return and what data it requires as input. Function signatures imply synchronous, tightly coupled, procedure-call interactions.
A similar but more useful software engineering concept is that of the interface. In object-oriented design, a class that subclasses another inherits data and functionality from its parent. If changes are made in the parent, they can affect the functionality of the children. Interfaces, on the other hand, have no associated functionality. Another way of thinking about interfaces is that they specify a datatype, not a function. This places an important level of indirection between the consumer and the provider. Any class can declare that it implements an interface, making it much easier to change the subunit that is providing the service. An interface is more closely aligned to the SOA concept of a contract. It provides to the consumer (a sketch follows the list below):
Expected outputs
Required inputs
Error messages (sometimes)
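
To make this concrete, below is a minimal sketch, in Java, of a contract expressed as an interface. All of the names (AccountService, TransferRequest, TransferResult) are invented for this illustration and are not drawn from any particular product or from the examples later in this book.

// Hypothetical sketch: a service contract expressed as a Java interface.
public interface AccountService {

    /**
     * Transfers funds between two accounts.
     *
     * Precondition:  the caller has been authenticated and authorized and
     *                supplies a valid security token.
     * Postcondition: the transfer is recorded but not committed until the
     *                transaction coordinator signals completion.
     *
     * @param request the required inputs: source, target and amount
     * @return the expected output, including status and any error details
     */
    TransferResult transfer(TransferRequest request);
}

/** The required inputs for the transfer operation. */
class TransferRequest {
    String sourceAccountId;
    String targetAccountId;
    long amountInCents;   // integral cents avoid floating-point rounding
}

/** The expected output; errors are returned as data rather than thrown. */
class TransferResult {
    boolean success;
    String confirmationNumber;   // set when success is true
    String errorCode;            // set when success is false
    String errorDescription;
}

Note how the interface carries the full agreement (inputs, outputs, preconditions, post conditions and error reporting), not just a function signature.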

SOA extends the concept of interfaces to produce the idea of a consumer and a service being tied by a contract. The contract specifies what the service provides; the consumer peruses a catalog of contracts and chooses the service provider that it wishes to use. In addition to the usual interface semantics, the concept of contracts in an SOA also involves the following optional elements:
Preconditions. These are inputs or application states that must exist prior to the service being invoked. A common precondition is that the client has been authenticated and authorized through a security mechanism. This could be represented as a requirement for a security token to be part of the input or for required security information in a SOAP header.

Post conditions. Post conditions are restrictions on what the service consumer can do after it has invoked the service. A post condition could be that, if the interaction between the client and the service is part of a transaction, the service should not commit the transaction until notified to do so by a transaction coordinator.

Error messages. Error messages are another area that should be established in the contract. In modern object-oriented programming languages, the paradigm is to throw exceptions when an error condition is encountered. This is acceptable when modules are running in the same memory space. However, the concept of exception throwing does not translate well to the communication mechanisms in an SOA. SOAs impose no conditions on the mechanisms used to communicate between service provider and service consumer; therefore, throwing exceptions is inappropriate in an SOA. If an error condition is encountered in the operation of the service, the client will expect to receive some sort of structure that contains a useful description of the error or errors encountered. Usually that error structure will consist of an XML string or document. If any error messages are to be returned by a service provider, that has to be specified in the contract too (a sketch of returning errors as data follows this list).

Quality of service agreements. These are specifications on matters such as performance, multithreading, fault tolerance, etc.
In addition to the contract, SOA specifies that there be a registry for these contracts and a mechanism for consumers to find contracts in this registry and bind to them.
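
As a small illustration of the error-message element, the following sketch returns a failure as a structure the client can inspect rather than as a thrown exception. The XML shape and all names here are assumptions made only for this example.

// Hypothetical sketch: serializing a failure into an agreed error structure
// instead of propagating an exception across the transport.
public class ErrorStructureExample {

    /** Builds the kind of XML error document a contract might specify. */
    static String describeFailure(String code, String message) {
        return "<serviceError>"
             + "<code>" + code + "</code>"
             + "<description>" + message + "</description>"
             + "</serviceError>";
    }

    public static void main(String[] args) {
        // The client receives this structure in the normal reply and
        // inspects it; no exception ever crosses the service boundary.
        System.out.println(describeFailure("INSUFFICIENT_FUNDS",
                "Withdrawal exceeds available balance"));
    }
}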

[The service provider registers with the registry; the service consumer searches the registry, then binds to and uses the provider.]
Fig 1.xxx: SOA Contract Mechanism

The concept of a registry is central to the SOA. A registry is a well-known place, external to the service, which provides a catalog of available service providers, a mechanism to iterate through and examine what a service provider has to offer and a facility to bind the consumer to a service provider. This registry can be maintained and
provided by the enterprise, by an independent source, or by other enterprises that have
services to provide. All registries must implement an API that allows services to register
themselves and consumers to discover service providers and bind to them. An example of such an API is Universal Description, Discovery and Integration (UDDI), used by many Web services. The important thing about a registry API is that it must conform to open standards. Enterprises setting up their own registry should select one of the open standards for registries, and enterprises building clients should build them to use only the open-standards registry APIs.
Registries don't contain the contracts. They contain descriptions of what the service provider offers. If the consumer is interested in the service, it binds to the resource locator provided by the registry and gets the contract from the service provider.
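
The following toy sketch shows that division of labor: the registry holds only descriptions and locators, while the contract itself stays with the provider. The API shown is invented for illustration and is far simpler than a real registry standard such as UDDI.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a minimal registry; all names are invented.
public class RegistrySketch {

    /** What the registry stores: a description and a locator, not the contract. */
    record ServiceDescription(String name, String description, String contractUrl) {}

    static class Registry {
        private final List<ServiceDescription> entries = new ArrayList<>();

        /** Service providers register a description of what they offer. */
        void register(ServiceDescription d) { entries.add(d); }

        /** Consumers search the descriptions for candidate providers. */
        List<ServiceDescription> search(String keyword) {
            List<ServiceDescription> hits = new ArrayList<>();
            for (ServiceDescription d : entries) {
                if (d.description().contains(keyword)) hits.add(d);
            }
            return hits;
        }
    }

    public static void main(String[] args) {
        Registry registry = new Registry();
        registry.register(new ServiceDescription(
                "FundsTransfer",
                "transfers funds between accounts",
                "http://provider.example.com/contracts/funds-transfer"));

        // The consumer finds a description, then follows the locator to the
        // provider to retrieve the contract and bind to the service.
        for (ServiceDescription d : registry.search("funds")) {
            System.out.println("Found " + d.name() + "; contract at " + d.contractUrl());
        }
    }
}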

Services Represent a Business Domain


It is true that not all services represent a business domain. There are a host of
infrastructure and utility functions that an enterprise will want to move into their SOA as
services. Registries, security providers and logging are a few of the infrastructure and
utility applications that make excellent services.
When architecting business processes as services, it is useful to think of them in terms of a business problem or use case. As part of employing modular design, each service should completely encapsulate a business domain or problem. Again, this will reduce or eliminate dependencies between different services. If your services have
dependencies on other services, then changes in the other service can potentially cascade
to your service. Changes in the other service could force you to redesign some of the
modules in your service. At the very least it will force you to perform a complete
regression test on your service. But even beyond the practical considerations of
application maintenance, forcing services to represent a business domain greatly facilitates
the mapping of application architecture to business architecture.
Modular Design
Services are built up of modules. Modular design is important to SOAs. A module can be
thought of as a software subunit or subsystem that performs a specific, well-defined function. For the purposes of our discussion, a module is a set of software units that provide value to a service consumer. A module should perform only one function, but it should perform that function completely. That is, the module should contain all of the subunits or sub-modules necessary to completely perform the required function.
Application of this rule brings up questions of granularity. Granularity is a measurement of the size of the functionality that a module performs. Fine-grained modules
perform a single, small function. Coarse-grained components perform a multitude of
functions. For example, a transfer of money in a banking system involves withdrawal
from one account and deposit into another account. A fine-grained module would be one
that only performs the withdrawal functionality. A coarser-grained module would be one
that indivisibly combines both the withdrawal and deposit functions. Keeping modules
fine-grained promotes flexibility and reuse at the expense of performance. Coarse-grained
modules are difficult to mix and match with other modules to produce new services.
Modules should be built at the use-case level from fine-grained components. To take the
funds-transfer example, the module should be composed of fine-grained withdrawal and
deposit subunits with the logic necessary for them to work together. Using this design, if
the withdrawal function changes, all that is necessary to deal with that change is to swap in
a new withdrawal subunit. The rest of the module can remain unaffected.
Keeping these rules in mind will make it easier to design modules or to work with
the existing applications to prepare them to fit into a SOA framework. Putting too many
functions into a module will make it awkward and difficult to replace or modify. If a
module does not have all the functionality required to completely fulfill its purpose, it could introduce dependencies into the framework that will make it difficult to modify.
We suggest keeping subunits fine-grained and using use cases to help specify the
functionality of the module and which subunits should be included within it.
Breaking functionality into modules allows for tremendous freedom in swapping
modules in and out of a service. Being able to swap modules freely without requiring any
changes to any of the other modules in the application or even a recompile of the other
modules, is one of the things that gives the framework built on SOA principles a
tremendous amount of reusability and room for growth.
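
To ground the funds-transfer example, here is a minimal sketch, with invented names, of fine-grained withdrawal and deposit subunits composed into a use-case-level module. If the withdrawal rules change, only the withdrawal subunit is swapped; the rest of the module is untouched.

// Hypothetical sketch of module granularity; all names are invented.
public class GranularitySketch {

    /** Fine-grained subunit: performs one small function completely. */
    interface WithdrawalUnit { void withdraw(String accountId, long cents); }

    /** Fine-grained subunit: the matching deposit function. */
    interface DepositUnit { void deposit(String accountId, long cents); }

    /** Use-case-level module composing the fine-grained subunits. */
    static class FundsTransferModule {
        private final WithdrawalUnit withdrawal;
        private final DepositUnit deposit;

        FundsTransferModule(WithdrawalUnit w, DepositUnit d) {
            this.withdrawal = w;
            this.deposit = d;
        }

        /** The complete business function the module exposes to consumers. */
        void transfer(String from, String to, long cents) {
            withdrawal.withdraw(from, cents);   // logic tying the subunits together
            deposit.deposit(to, cents);
        }
    }

    public static void main(String[] args) {
        // Swapping in a new withdrawal rule means replacing only this lambda.
        FundsTransferModule module = new FundsTransferModule(
                (acct, cents) -> System.out.println("withdrew " + cents + " from " + acct),
                (acct, cents) -> System.out.println("deposited " + cents + " to " + acct));
        module.transfer("A-100", "B-200", 5_000);
    }
}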
Loosely Coupled
Service clients and service providers should be loosely coupled. Loose coupling
means that there are no static, compile-time or dynamic run-time dependencies between
clients and providers. It also means that the dependencies that do exist between client and
provider are few and of such a nature that the service provider can be freely modified or replaced without affecting the client. In particular, it is crucial that the client need no knowledge of the physical location of the provider. Binding a provider to a physical location such as a particular URL or socket address greatly limits the flexibility of any architecture.
Loose coupling is also provided by having the service hide the details of how it
functions internally in fulfilling its duties. Knowledge of internal implementation details

can lead to unnecessary dependencies between client and service. One very effective way
to manage the exposure of internal service details to clients is by using a proxy or a façade. Proxies are particularly useful in providing the correct granularity for the service. As we have indicated above, providing the correct granularity can be difficult. Forcing the user to interact with a myriad of fine-grained services is poor design and leads to poor network performance due to the many network calls required to complete a unit of work. Coarse-grained design, by aggregating too much functionality in a single service, reduces the flexibility and adaptability of an architecture. The use of a proxy allows the flexible aggregation of fine-grained components into what appears to be a coarse-grained component of just the right functionality. In the case of the customer management function of a CRM application, a fine-grained design would provide the client with a function to edit the customer's first name, one to edit the middle initial, one to edit the last name, one to edit the street address and so forth. A coarse-grained design would provide a single function that would handle all of the possible edits for a client. A proxy could provide a client with a function to edit the name, another function to edit the address and other functions to edit subparts of the customer profile. Behind the scenes, the proxy would be talking to many fine-grained modules operating in the proxy's memory space to accomplish the various edits. By keeping the actual modules that carry out the edits on the customer profile fine-grained, the architecture gains flexibility and modifiability. A brief sketch of this idea appears below.
Another way to resolve the issue of granularity is the concept of multi-grain. Multi-grained services combine fine-grained components to produce a coarse-grained result. See Michael Stevens' article at www.developer.com/design/article.php/1142661.

The use of proxies is one of the patterns in the seminal book on software design patterns, Design Patterns: Elements of Reusable Object-Oriented Software, by Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides.
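
Below is a minimal sketch of the proxy/façade idea from the CRM example above. Every name is invented; the point is only that the client sees one right-grained operation while fine-grained editors do the work inside the proxy's memory space.

// Hypothetical sketch of a façade over fine-grained modules.
public class CustomerFacadeSketch {

    // Fine-grained modules, each performing one small edit completely.
    static class NameEditor {
        void editName(String customerId, String name) {
            System.out.println(customerId + ": name -> " + name);
        }
    }

    static class AddressEditor {
        void editAddress(String customerId, String address) {
            System.out.println(customerId + ": address -> " + address);
        }
    }

    /** What the client sees: a single coarse-looking operation. */
    static class CustomerProfileFacade {
        private final NameEditor nameEditor = new NameEditor();
        private final AddressEditor addressEditor = new AddressEditor();

        /** One call from the client updates the whole profile. */
        void updateProfile(String customerId, String name, String address) {
            nameEditor.editName(customerId, name);          // in-memory, cheap
            addressEditor.editAddress(customerId, address);
        }
    }

    public static void main(String[] args) {
        new CustomerProfileFacade().updateProfile("C-42", "Ada Lovelace", "12 Main St");
    }
}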

Connections Are Dynamically Discovered and Bound


Another key to the flexibility and reusability of an SOA is the concept of dynamic discovery and binding. When the modules of the client are compiled, they have no static linkages with the service. Similarly, when the modules of the service are compiled, they have no static linkages with any clients. Rather than use static linkages built in at compile time, clients of an SOA use a registry to look up the functionality they desire. Then, using the access pointer obtained from the registry, the client gets the contract from the service and uses the information in the contract to access the service. This means that the service is completely free to modify the manner in which it fulfills the contract. Conversely, if the needs of the client change, or a service provider withdraws its service offering, the client is completely free to choose another service. As long as the client is using the same contract, it can change service providers without having to make any changes in its internal workings. It is incumbent on the service provider not to change its contract. Contracts, once published, must not be changed. To do so risks breaking any clients that are counting on using that service. A sketch of this lookup-then-bind flow follows.
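
The toy sketch below compresses the idea: the registry is just a map, and the client is compiled against the contract alone, so a provider can be swapped at runtime without client changes. All names are invented for illustration.

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of runtime discovery and binding.
public class DynamicBindingSketch {

    /** The published contract: the only type the client knows at compile time. */
    interface QuoteService { long quoteInCents(String symbol); }

    public static void main(String[] args) {
        // Providers register under a service name at runtime.
        Map<String, QuoteService> registry = new HashMap<>();
        registry.put("quotes", symbol -> 12_345);    // provider A
        // Later, provider B replaces provider A; the client never recompiles.
        registry.put("quotes", symbol -> 12_350);    // provider B

        // The client discovers and binds through the registry at runtime.
        QuoteService service = registry.get("quotes");
        System.out.println("XYZ = " + service.quoteInCents("XYZ"));
    }
}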
Transport Independence
Clients access and utilize services using a network connection. SOAs are totally silent on
the particular kind of network connection to use to access services. This means that
services are independent of the transport mechanism used to access them. In practice this
usually means that an adapter is created to allow access to the service via the various
transports. Normally adapters are created as needed. If there are clients that wish to
access the service via HTTP, then an HTTP adapter is created. If another set of clients comes along that needs to access the service via RMI, then the necessary RMI adapter is created. The same adapter can be used with multiple services, allowing reuse of this component. In addition, this architecture allows the service to transparently utilize new connection protocols as they are developed. A minimal sketch of the adapter idea follows.
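
Here is the adapter idea in miniature, with invented names and deliberately simplified transports. Each adapter translates one transport's requests into calls on the same transport-neutral service, so new transports are added without touching the service.

// Hypothetical sketch of transport adapters around one service.
public class TransportAdapterSketch {

    /** The transport-neutral service: it never sees HTTP, RMI or queues. */
    interface GreetingService { String greet(String name); }

    /** A toy HTTP adapter: parse the request, delegate, render a reply. */
    static class HttpAdapter {
        private final GreetingService service;
        HttpAdapter(GreetingService s) { this.service = s; }

        String handle(String queryString) {
            // e.g. "name=Ada" -> "Ada"; a real adapter would parse properly.
            String name = queryString.substring("name=".length());
            return "HTTP/1.1 200 OK\n\n" + service.greet(name);
        }
    }

    /** A second adapter for a message queue reuses the same service. */
    static class QueueAdapter {
        private final GreetingService service;
        QueueAdapter(GreetingService s) { this.service = s; }

        String onMessage(String body) { return service.greet(body); }
    }

    public static void main(String[] args) {
        GreetingService service = name -> "Hello, " + name;
        System.out.println(new HttpAdapter(service).handle("name=Ada"));
        System.out.println(new QueueAdapter(service).onMessage("Grace"));
    }
}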
Location of Service is Transparent to Client
Making the location of the service transparent to the client allows tremendous flexibility in
the implementation of an SOA. Since the client doesn't know or care where the service is
located, the group tasked with providing the service can be located wherever it is most
convenient. Instances of the service can be spread across several machines in a cluster.
Enterprise reorganizations can move hosting of the service to a third party provider.
Outages and disruptions to one location can be concealed from the client by forwarding
service requests to instances of the service running it entirely different locations. By
making it unnecessary for the client to know the location of the service allows for an
architecture that is much more flexible, maintainable, cheaper and resistant to failure.
Platform Agnostic
For the greatest flexibility, a service shouldn't care upon which platform it is running. Therefore, the ideal service should theoretically be operating-system independent.
In the real world, making services independent of the transport renders the ability to have your service run on a variety of operating systems much less important. A .NET service
running on Windows can, in practice, have access to clients running on any operating
system. Also, since any third party provider or Application Service Provider will be able
to provide servers running Windows, a service written using .NET will be as flexible as a
service written using Java.
Many large enterprises run on a variety of platforms. Services developed for these corporations should be platform independent to provide the flexibility in hosting that is required for maximum business functionality. Services developed for smaller organizations, businesses that are utilizing a single OS, will not find this requirement as important as the preceding ones.
Language Agnostic
Being language agnostic can be interpreted in two ways. One way is that the client is not
affected by the language in which the service is written. The other way is that the
platform hosting a service is capable of running modules written in many different
programming languages. While situations can be envisioned in which having the operating system be language agnostic would be useful, the SOA requires only that the client be totally unaffected by the language chosen to implement the service.
The purpose of insisting on building services that can be accessed by clients written in any language is to provide the high degree of flexibility that is required of any service. When the services are written, the limits of the client population are not known. Hence a
service must be accessible from all platforms and from all clients. Currently, the
transports that provide access from clients written in any language are FTP/FTPS, email,
SOAP and HTTP/HTTPS.

BENEFITS OF AN SOA
The benefits of a Service-Oriented Architecture lie in the following areas:
Decreased application development costs
Decreased maintenance costs
Increased corporate agility
Increased overall reliability, by producing systems that are more resistant to application and equipment failure and disruption
Let's examine these areas in detail. Application development costs are reduced for the following reasons:
Code reuse is extensive.
Most of the code has been thoroughly tested.
The presentation layer is the only layer requiring customization for different clients.
RAD development via automated code generation becomes a possibility.


Once the infrastructure of a SOA has been built, and your application development
staff has been educated in its use, you'll see extensive reuse of it. The GUI interface layer,
of course, will be unique for each application. The transport layer and resource access

layer adapters will be usable right out of the box, requiring no customization or changes.
Remarkably enough, the service itself, once the business logic has been turned into
application code, will be usable without modification.
Reuse will also substantially reduce the time and expense involved in testing new
applications and certifying them ready for production. Since the bulk of the code and logic has already been extensively tested, the testing effort involved here will probably be limited to regression testing. Even more important than reducing the quantity of code that must be tested is the fact that the really difficult stuff (the transport, resource access and business logic) has not been changed and hence needs little or no attention from your Quality Assurance team. A solid, well-tested framework also substantially reduces the
risks involved in developing new applications. You should be aware that as the system
becomes more complex, maintenance or enhancements could carry considerable risk. We
strongly recommend that you couple all development, maintenance and enhancements
with the generation of automated test scripts so that a thorough regression test can be
quickly run; a minimal sketch of such a script follows below. This is the best way to give the business side confidence when an SOA is being evolved.
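
As one possible shape for such scripts, here is a minimal JUnit 5 sketch. The service interface and the expected value are invented; a real suite would bind to the deployed service and exercise each published contract end to end.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical regression test pinning down contract behavior so that
// maintenance changes that would break consumers are caught early.
class QuoteServiceRegressionTest {

    interface QuoteService { long quoteInCents(String symbol); }

    // Stand-in for binding to the service under test.
    private final QuoteService service = symbol -> 12_345;

    @Test
    void quoteMatchesContractForKnownSymbol() {
        assertEquals(12_345, service.quoteInCents("XYZ"));
    }
}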
Development of user interfaces, particularly browser-based GUIs, is only minimally enhanced by an SOA. There are many excellent RAD tools on the market that greatly speed GUI development. The server-side segment of browser-based presentation-layer development, the JSP, servlet and ASP components, will require a fair level of skill to develop. It is in the server-side layer that the data input from the user must be extracted, the proper XML message built, and the message inserted into the data transfer object. After these tasks are finished, an SOA system like the one outlined above will only take a few lines of code to select the proper transports and pass the data transfer object to the correct service (a sketch of this glue code follows).
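
The sketch below, with invented names, shows the shape of that server-side glue: extract the input, build the XML message, wrap it in a data transfer object and dispatch it. In a real system this would live in a servlet, JSP helper or ASP page.

// Hypothetical sketch of the server-side presentation glue.
public class PresentationGlueSketch {

    static class DataTransferObject {
        String payloadXml;
        String targetService;
    }

    interface ServiceDispatcher { void dispatch(DataTransferObject dto); }

    static void onFormSubmit(String customerName, ServiceDispatcher dispatcher) {
        // Build the proper XML message from the extracted form input.
        DataTransferObject dto = new DataTransferObject();
        dto.payloadXml = "<updateCustomer><name>" + customerName
                       + "</name></updateCustomer>";
        dto.targetService = "CustomerManagement";

        // The few lines promised above: select the service, pass the DTO.
        dispatcher.dispatch(dto);
    }

    public static void main(String[] args) {
        onFormSubmit("Ada Lovelace", dto ->
                System.out.println("to " + dto.targetService + ": " + dto.payloadXml));
    }
}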
SOA designs like the one outlined above are amenable to the development of RAD tools to speed the development of new services. Most modern IDEs have facilities for the insertion of user-built modules. It is quite possible to develop a module that will allow RAD development of projects using your SOA design. Even if you don't develop a GUI interface that allows dragging and dropping of transport, service and resource adapter modules into a project, you should seriously look at developing modules that allow developers to input the names of adapters and services that they wish to utilize and which will generate the skeleton code to realize those connections. Code generation modules are relatively simple to develop and will quickly pay for themselves due to the great increase in developer productivity that they produce. RAD development is made possible by the modular, layered architecture of the SOA and by the lack of dependencies among the various layers.
An SOA framework reduces maintenance costs due to the tested nature of the code
and to the fact that the production support staff is intimately familiar with the back end
code. The modular nature of the components in the various layers means that once they
have been tested and debugged, they can be plugged in where needed with confidence.
The lack of dependencies between the various components in the different layers is also
very important in reducing the cost of maintenance of applications developed under an

SOA framework. The fact that the dependencies are so few and are well known makes it
easy to state with confidence that a given change will have no unexpected side effects.
New application development carries a maintenance risk as well as a development risk.
Production support staff will soon become familiar with, and confident in, the framework
code. Since this code is well tested and debugged you'll greatly reduce the code that the
production support people have to comb when a defect appears in a new application. We
must reiterate that it is vital to have an exhaustive set of test scripts available to the
production side that they can use to identify any problems in the SOA system that may be
introduced by maintenance activities.
Instituting an SOA framework is an important step in the direction of building a product
line development system. See the Product Line Practice chapter for further details.
What we're talking about here is a measurable and significant reduction in recurring IT costs. This is not a one-time reduction in costs but savings that will persist.
For many enterprises a major value added for an SOA will be that it can greatly
increase the agility of the corporation. This increase in agility is due to the way in which
SOA makes it easier to respond to new conditions and opportunities. The adapter layer
allows easy access to the services from all clients that are known now and makes it easy to
open the service layer to client modalities that haven't yet been thought of. The fact that
the services themselves expose a simple, well-developed contract to the world makes it
easy to change the implementation that lies behind that contract without affecting any
clients. This provides a relatively simple and painless evolutionary pathway for the
applications that satisfy the business needs of the organization. In addition to allowing
evolutionary growth and change, the SOA makes the possibility of revolutionary change
much simpler. Perhaps you want to outsource your CRM, yet certain data from this CRM
system is required at several other places in your business processes. If this access to the
CRM has been encapsulated in a service, the location of the actual CRM data has become
irrelevant to the functioning of the service. So, you may choose to outsource the CRM
functionality to firm A. A few years later another CRM outsourcing configuration
becomes more attractive to the corporation. Making this change is a relatively minor
operation that has no real impact on the operation of your business processes. This same
agility also applies to the issue of legacy applications. Once legacy application
functionality has been wrapped in an interface, modifying them, replacing them or
outsourcing them becomes a much more manageable proposition. The reduction in IT
costs brought about by the cheaper application development and maintenance that comes
from an SOA also helps to make a corporation more agile.
Service Level Agreements (SLAs) have become an important and common element
in modern IT reality. Customers of IT services have come to expect and are demanding
certain guaranteed levels of service. The systems architecture segment of IT departments
has been given most of the burden of satisfying SLAs. However, certain architectures and
applications make satisfying SLAs much easier.
It is relatively easy to write an SLA for a stovepipe application. Tightly self-contained, they are like a watch: you know pretty well how they will perform. They are
relatively immune to network issues. Database changes are one of the few things that can
slow down or speed up a stovepipe application.
Distributed architectures such as SOA are at the other end of the spectrum when it
comes to specifying and satisfying SLAs. On the plus side:
• The modular, layered nature of an SOA framework naturally lends itself to
setting up parallel, redundant pathways of communication between the various
components that are required for the functioning of the service. This can help
make a service resistant to network problems.

• The simple, modular nature of the components in the service layer lends itself
to achieving stated reliability levels through clustering of servers.

• Asynchronous communication is much more forgiving of machine and
network disruption than synchronous communication.

• Since services are located and connected to at runtime, it is possible for
System Architects to change the location of components, and applications can
recover from the unavailability of a component by binding to an alternative
component, perhaps at an alternative location.

On the negative side:

• The distributed nature of SOAs makes them vulnerable to network issues:
not just easy-to-spot network failures, but the slow, easy-to-overlook network
slowdowns and intermittent congestion.

• SOAs are hosted on many machines. A seemingly innocuous change on
any one of a number of computers has the potential to disrupt a service.

• The complex nature of some systems built on an SOA makes it very difficult to
mathematically derive SLA parameters. Unless you have an extremely
thorough system to monitor the elements of your system architecture, initial
SLA numbers will be largely guesswork.

• Yes, there are numerous ways to tune and tweak SOA systems. However,
that tuning and tweaking will cost time and money.

In summary, SOAs live in a complex environment. The layered, modular nature of
the systems plus the fault-tolerant ability of the find-and-bind service location system help
SOAs satisfy SLAs.
SOA IN PRACTICE
To put these concepts into sharper focus, let us take a look at an implementation of the
SOA.
Object-oriented programming has slowly become the main software development
paradigm. Its initial promise was to make software development easier, programmers
more productive and the resulting product more error-free and easier to maintain. To a
certain extent it has delivered on all of those promises. However there have been
disappointments in the area of increased productivity. One of the big productivity-boosting features of object-oriented programming, reuse of objects, has not been as
extensively utilized as the early object-oriented programming evangelists had hoped.
As experience has been gained with object-oriented programming, it has become
evident that there is only limited reuse of most classes. It is only when these classes
have been assembled into more comprehensive structures that there has been serious and
effective reuse of the objects. One of the prime examples of a system that has had a very
significant impact on programmer productivity is the Microsoft Foundation Classes
(MFC). This framework has allowed the relatively easy (when compared with the old
Microsoft Software Development Kit) development of rich Graphical User Interface (GUI)
applications. The vast majority of the GUI applications running on Windows have been
built using the MFC framework.
When we speak of frameworks we mean a large-scale, reusable design that is made up
of a set of classes designed for reusability.
In this section we will describe a framework built on SOA principles. To get a better
idea of the power of the service-oriented architecture, we will contrast it with an application
built using classic object-oriented design.
This application is not built using a framework of any kind. It consists of a system of
classes with other classes subclassed off the parent class. Subclassing allows the reuse of
data members and of functionality of the parent class(es). In addition, some of the classes
utilize the services of other classes by having them as member variables. This is the only
mechanism used to allow access to the services provided by other classes. The entire
application is designed to run as a unit in a single memory space. All function calls are
synchronous and call components in the local memory space. The individual classes in the
application are highly dependent upon each other and are compiled as a unit. Changes in
any of the classes require a recompile of all of the classes that are dependent on the
changed class. The user interacts with the application in a client/server manner or by using
a thin client in a browser. Schematically, the application looks like the following:
*** Insert ConventionalOODApp.pcx here ***

Figure 4.1 Classic object-oriented application. (The diagram shows a fat client and a browser driving the business logic, which reaches the persistence layer through persistence classes.)

Some effort has been expended to isolate the components of the application from
each other. Changes in the persistence layer probably will not affect the business logic or
the user interface components. Conversely, changes in the user interface components or
the business logic probably won't affect the persistence layer. However, the business logic
and the user interface are tightly coupled. If this application has a very long run, this part
most assuredly will be the focus of substantial enhancement and modification.
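To make that coupling concrete, here is a fragment in the style of the classic design (the class names are illustrative):

// Classic tightly coupled design: the user interface knows the
// concrete business logic class, its constructor arguments and its
// location (the local memory space). Any change to OrderManager
// forces a recompile of every class written like this one.
public class OrderEntryScreen {
    private final OrderManager manager = new OrderManager("PRODDB");

    public void onSubmit(String customerId, String partNumber) {
        manager.placeOrder(customerId, partNumber); // synchronous, in-process call
    }
}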
Let us rearrange this functionality using an SOA. To begin with, the loose coupling
of the SOA allows you to arrange functionality into layers. Here is a schematic of the
layers built using service-oriented architecture principles:
***Insert LayeredArch.pcx here. ***

Figure 4.2 This design demonstrates the layering that is possible with SOA design. (User Interface Layer: Java Swing application, browser-based thin client, mobile phone. Transport Layer: Web service, CORBA client. Service Layer: Service A components 1 and 2, Service B. Resource Access Layer: JDBC driver, email adapter, LDAP adapter. System Resources Layer: email server, enterprise directory, relational database.)

Let us spend a few moments examining this diagram and discussing its implications.
To begin with, this is not the one and only system that can be built using SOA; it is just one
example. This five-layered design is used because it offers the maximum flexibility and
adaptability with a minimum of complexity. Let us now examine the functionality of each
layer and the rationale for its existence.

User Interface Layer

The user interface or presentation layer contains the objects that interact with the user. It
presents data to the user and accepts data input from him or her. Using the layered
design, the presentation layer can be represented by a number of applications: a
browser-based application, a rich GUI client application, a wireless device such as a cell
phone or a PDA, and so on. The object of the presentation layer is to abstract all of the user
interaction modalities away from the rest of the system. In doing so it allows presentation
applications to be swapped in and out without affecting the other layers. This greatly
simplifies moving from, say, a client/server to a thin-client architecture. In addition, it
allows the simple and easy integration of new client interaction devices, such as cell
phones.

It is not a requirement to have a user interface layer. In many cases services will be
directly accessed via the transport layer.

Transport Layer
The transport layer encapsulates all of the different transportation mechanisms that can be
used to connect the business logic components to the presentation layer. The
transportation mechanism can be HTTP or HTTPS from a browser to a servlet or ASP
page. It can be a socket connection or a CORBA connection from a fat client. It can be an
RMI/IIOP connection to an Enterprise Java Bean. This is where SOAP messages come into
your Web service. Messaging systems such as IBM's MQSeries or Tibco's Rendezvous are
extremely powerful transport mechanisms.
Enterprise Java Beans
Enterprise Java Beans (EJBs) are objects that run in specialized containers. The
containers take on responsibility for difficult programming aspects such as concurrency control,
security, transactions and many others. EJBs are accessed via the Remote Method
Invocation/Internet InterOrb Protocol (RMI/IIOP).
Proper design of the transport layer is crucial to enabling complete freedom of the
presentation layer from the rest of the layers in the system. Properly designed
presentation, transport and business logic layers will completely isolate presentation from
business components. This means that the location, type, internal logic, and other
application details of the business logic components can be freely changed without
necessitating any changes in the presentation layer. In this case proper design means that
all details that the client requires to establish contact with the business layer must be
obtainable from some sort of registry or directory. If that information is hard-coded into a
component in the presentation layer, as it is in the OOD example preceding this, an
unnecessary dependency has been built into the system that will be a recurring
maintenance problem.
Registries
Registries can exist in many forms, from the very simple to the very complex. An
example of a simple registry is a configuration file containing name-value pairs. The name is
the name of the service to be accessed and the value is the URI that will allow connection to
the service.
An example of a complex registry is the UDDI specification for access to Web Services.
The important point is that some sort of registry scheme is crucial to a SOA. The
registry enables the dynamic discovery and binding aspects of the SOA.
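To illustrate the simple end of that spectrum, here is a minimal sketch of a name-value registry backed by a standard Java properties file, say services.properties containing a line such as OrderStatusService=http://services.canaxia.example/orderStatus (the file name and entries are illustrative):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// A minimal registry: maps a service name to the URI that allows
// connection to the service, loaded from a name-value configuration file.
public class SimpleRegistry {
    private final Properties services = new Properties();

    public SimpleRegistry(String configFile) throws IOException {
        try (FileInputStream in = new FileInputStream(configFile)) {
            services.load(in);
        }
    }

    // Returns the URI registered for the named service, or null if absent.
    public String lookup(String serviceName) {
        return services.getProperty(serviceName);
    }
}

Swapping a service implementation then means editing one line of the configuration file; no client needs to be recompiled.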

Proper transport and presentation layer design can also introduce the option of using
asynchronous communication between presentation and business logic. With asynchronous
communication the message is posted and control returns immediately to the user. The
user can then proceed with other tasks while the application periodically checks for a reply
from the message that it posted. Perhaps the network is congested due to momentary
heavy loads or the target URL is currently swamped with requests. The user will still have
a responsive presentation layer. Then, when the results of the message are available, they
can be presented to the user.
Asynchronous communication is an effective way to adapt the continuous interaction
semantics of most presentation layers to the batch processing preferences of many legacy
systems. In addition, asynchronous communications provide a possible means to insulate
presentation layers from network problems or server congestion issues.
Asynchronous communication is not always a possible design option; in many cases
synchronous communication, i.e., a remote procedure call (RPC), is required for acceptable
presentation layer response. However, "pseudo-RPC" is possible using asynchronous communication: the
presentation layer posts a message to the business logic and waits for a reply. Using a fast
messaging system, one of the authors has built systems with round-trip messaging times of
30 to 50 milliseconds. The communication system was strictly asynchronous, but it
appeared to the user as synchronous.
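As a sketch of such a pseudo-RPC, the standard JMS API provides a QueueRequestor helper that posts a request and blocks on a temporary queue for the reply (the JNDI names are illustrative; error handling is trimmed for brevity):

import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueRequestor;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

// Pseudo-RPC over asynchronous messaging: the transport is a message
// queue, but the caller sees a simple blocking call.
public class PseudoRpcClient {
    public static String call(String requestXml) throws Exception {
        InitialContext jndi = new InitialContext();
        QueueConnectionFactory factory =
                (QueueConnectionFactory) jndi.lookup("jms/ConnectionFactory");
        Queue requestQueue = (Queue) jndi.lookup("jms/ServiceRequestQueue");

        QueueConnection connection = factory.createQueueConnection();
        try {
            connection.start();
            QueueSession session =
                    connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            // QueueRequestor posts the request and waits on a temporary
            // reply queue for the response.
            QueueRequestor requestor = new QueueRequestor(session, requestQueue);
            TextMessage request = session.createTextMessage(requestXml);
            TextMessage reply = (TextMessage) requestor.request(request);
            return reply.getText();
        } finally {
            connection.close();
        }
    }
}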
This is just a partial list of the possible transport mechanisms that can be worked into
this architecture. The various transport mechanisms are abstracted into adapter objects
that the presentation layer can use to converse with the business logic layer. Once more,
the object is to allow the quick and easy swapping in and out of different transport
mechanisms without substantially affecting any of the components in the other layers
of the framework.

Service Layer
The service layer is usually where the business logic resides. In some cases there will be
substantial business logic in the resource layer, in which case the service layer will
function to integrate that business functionality with clients. Some services will not
involve business logic but will encapsulate things such as security or transaction
management.
The service layer will consist of all of the services that have been defined for the
enterprise. If the service represents a business problem, it should encapsulate a distinct,
discrete, business domain. This is where the business processes of the enterprise are
mapped to application logic.
These business processes may be COBOL routines wrapped in an object and running
on a mainframe. A SOA is an excellent way to integrate legacy business logic into modern
presentation modalities.
See the System Architecture chapter for a discussion of wrapping legacy application
logic in objects. As is explained in that chapter, wrapping a legacy application primarily
involves abstracting discrete functions into interfaces. It is the interface, or contract, that allows
the business logic of the legacy application to integrate into the SOA.
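As a sketch, the wrapping amounts to publishing an interface and hiding the legacy call behind it. The names here are illustrative, and LegacyGateway stands in for whatever connector (a CICS gateway, screen scraper or message bridge) actually reaches the legacy routine:

// The contract exposed to the SOA: callers see only this interface.
public interface CreditCheckService {
    boolean isCreditworthy(String customerId);
}

// The wrapper translates the contract into the legacy call. The legacy
// detail stays entirely behind the interface.
public class LegacyCreditCheck implements CreditCheckService {
    public boolean isCreditworthy(String customerId) {
        String reply = LegacyGateway.invoke("CREDCHK", customerId); // hypothetical connector
        return "Y".equals(reply);
    }
}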
The service could be EJBs running in an application server. It could be a Web
service exposed by another part of the enterprise or by another enterprise. It could be part
of the functionality of the server part of a client/server application written in C++ and
externalized via an interface. It could be, and often is, a combination of many different
components containing business logic.
The service will normally consist of multiple cooperating components. Even if the
service is really an object wrapper for a single piece of monolithic legacy functionality, the
design freedom that the SOA provides via the concepts of the contract and dynamic
discovery and binding will normally mean that it will evolve into a more complex system
as various bits of the legacy logic are put into components written in more modern
languages. A SOA is an excellent place to start when you wish to replace a legacy
application. The modular nature of the service layer design allows the slow, controlled
migration of functionality from legacy applications. The layered architecture of the SOA
allows architects to concentrate on the functionality of the legacy application. Business
logic design is free from transport layer, presentation layer and resource access issues. By
developing a contract, or interface, that encapsulates the desired legacy application
functionality, it is possible to simply plug a legacy application or just part of its
functionality into an SOA framework. When doing legacy application re-engineering, if
the application is first wrapped into a service, which is a relatively straightforward
procedure, the risk involved in the re-engineering process is much less than the risk
involved in attempting a complete rewrite of the legacy functionality.
SOA lends itself to Enterprise Application Integration (EAI) at the following levels:

• Service layer. Services can be built that connect legacy stovepipe
applications.

• Transport layer. The flexibility of the transport layer can be used to connect
legacy applications together.

• Resource layer. The adapters to the various legacy resources can be used to
allow different legacy applications to communicate with each other.

EAI via the service layer takes advantage of the fact that what the consumers of the
service see is the contract that it presents to the world. Behind that façade can
lie modules that access the required information and functionality from many disparate,
stovepipe applications. The components in the service layer will be used to mediate the
data moving between the various stovepipes. The service can be in the role of a message
broker, if it does little more than route data, or act as a Messaging Oriented Middleware
(MOM) application, if it performs extensive reformatting and logical operations on the
data.
EAI via the transport layer leverages the ability of the transport layer to provide a
simple, language and platform-independent method for the various stovepipe applications

to make connections. In some cases a small amount of work is all that is required to
encapsulate data from one legacy application into a form that can be consumed by another
legacy application. In this case the value that the SOA adds is in providing quick and easy
access to transport objects.
In certain situations the resource access layer can be utilized to provide EAI
functionality. It may be possible to tap directly into the data store of a stovepipe or of a
legacy application. The service layer can then integrate this data with data and inputs
from other applications to carry out its business functionality.

Resource Access Layer

The resource access layer is tasked with encapsulating all of the various mechanisms for
accessing system resources. The important issue here is the contract or interface that these
adapters offer to objects in the service layer. All resource access objects should offer the
same contract. In this way the data sink for a service can be changed without disturbing
the service itself. The object of this exercise, as with the transport layer, is to provide
maximum flexibility. A service that writes to a relational database today may need to send
the data to another division of the company via a messaging service tomorrow. The
change should come via the service dynamically plugging into a different resource adapter.
It should not require any modification to any of the modules that comprise the service.
The layering and modularization of the SOA builds that kind of flexibility into any
framework created from it.
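One way to realize that uniform contract, sketched here with illustrative names and stubbed bodies, is a single adapter interface that every resource access object implements; a service can then be rebound from a database to a message queue through registry configuration alone:

// The single contract offered by every resource access object.
public interface ResourceAdapter {
    void write(DataEnvelope data);    // send data to the resource (data sink)
    DataEnvelope read(String query);  // retrieve data from the resource (data source)
}

// One adapter per resource type; the service layer sees no difference.
public class JdbcAdapter implements ResourceAdapter {
    public void write(DataEnvelope data) { /* stub: insert or update via a JDBC driver */ }
    public DataEnvelope read(String query) { /* stub: select via a JDBC driver */ return null; }
}

public class MessagingAdapter implements ResourceAdapter {
    public void write(DataEnvelope data) { /* stub: put the data on an outbound queue */ }
    public DataEnvelope read(String query) { /* stub: take a message from a reply queue */ return null; }
}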
A very important class of resource access objects is those that handle database access.
Relational databases are an easy target, because of the plethora of JDBC and ODBC
drivers to access them. Objects that wrap these drivers and offer a uniform interface will
be major parts of any SOA implementation. Most organizations are dealing with relational
database access and will need to build adapters for all of their database access drivers. As
we have previously indicated, this should not be done in isolation but as part of the overall
SOA framework. All adapters should present the same contract to the service layer.
An interesting issue arises with J2EE development when container managed
persistence rather than bean managed persistence is chosen. By using container managed
persistence the service gains independence from the details of relational database access.
This makes the application easily portable between different application servers and/or
different relational databases. However, it firmly weds the service to using a relational
database as its data sink. Even though this does contradict SOA principles, it is
perfectly acceptable for those applications that are known to always be sending their
data to a network-attached relational database.
Objects that give access to messaging services or to Messaging Oriented Middleware
(MOM) can be found in the adapter layer as well as the transport layer. When messaging
services are functioning as resource adapters they are being used as data sources, not as
data sinks. IBM's MQSeries, Tibco Rendezvous and SonicMQ are examples of messaging
services. IBM's MQSeries Integrator or Mercator's Integration Broker are examples of

MOM. Once more, when accessing messaging services as resource adapters we should be
dealing with the same interface as with other adapters. The paradigm is a little different
when using messaging services for resource access. Rather than listening for messages on a
listening queue, as is done when the messaging service is functioning as a transport
adapter, when used as a resource adapter the data will typically be put on one messaging
queue and replies will be extracted from another messaging queue. All of these
implementation details will have to be hidden from the service layer clients. When using
an SOA for EAI, quite often services will be receiving data from a messaging adapter in
the transport layer and outputting data to another messaging adapter in the resource access
layer.
Messaging services are also extremely powerful mechanisms to provide location
independence for the components involved in providing a service. All a component needs
to know about the component that will send data to it, or to which it will send data,
is a queue name. This, of course, can easily be obtained from a registry service. This
completely obviates the hard coding of component locations. A service component putting
data on a message service queue has no idea of where the recipient of this data is at the
current moment. In addition, messaging services inherently provide asynchronous
communication. So a component communicating on a message queue has no idea if the
recipient is even functioning at that moment. If the business needs of the service support
asynchronous communication, you'll make the service much more resistant to network
problems or server outages.
An email adapter is another useful addition to a resource access layer. Perhaps you
currently have a sales management system that uses a Web form to gather
data from salespeople and put it into an email that is sent to a central location for
processing. If you build or convert this into a service and create an email adapter for it,
changing the back end from representatives entering data manually into a computer to a
fully automated system that directly inputs the data into the correct data store will be
relatively straightforward.
The architectural purpose of the resource access layer is to completely encapsulate
details of system resource access. Therefore, it makes it easy to change the resources into
which a service is plugged.

System Resource Layer

The system resource layer signifies all of the various system resources to which services
will be sending data and from which services will be retrieving data.
Databases, enterprise directories, messaging services, email transports and legacy
applications are just a few examples of the objects that belong in the system resource
layer. This is a heterogeneous, ever-changing collection of objects and systems. The
mutable nature of the system resource layer is managed by the resource adapter layer.
Suppose a business system was built to gather its data from a directory. Then the data is
moved to a relational database. Now it becomes necessary for that service to point to that
relational database. If the application is built using a conventional architecture this could
be a painful, expensive process. If the service is part of an SOA, this change could involve
no more than changing some information in a configuration repository. At worst it could
require writing a new adapter. But this piece of work would be totally reusable by any
service that needed to access that system resource and would represent an enhancement, an
increase in the value of the SOA resource.

Data Transport
When architecting an SOA framework considerable thought should be spent on the system
that will be used to move data through the layers of the SOA architecture. Should the data
be passed in the form of objects, which are strongly typed, or in the form of messages,
which are strongly tagged?
Legacy applications and client/server applications passed data back and forth in what
were basically byte streams. The format of these byte streams adhered to a protocol that
was invariably proprietary. Java turned its back on that system and used serializable
objects to pass data back and forth.
Using a strongly typed object to move data between the various layers
in an SOA framework pins all of the layers together with a strong dependency. This is not
desirable. On the other hand, attempting to develop a messaging protocol that will fit
all situations and services in the enterprise is impossible. What is needed is the best of
both worlds: one single object whose type will never change and which can carry any and all
possible assortments of data, using a message format that all company applications, now
and in the future, can understand. What we are talking about is an object that will wrap XML.
XML inside an object wrapper is strongly recommended to move data between the various
layers in an SOA framework. Web services use this design to move data: XML inside a
SOAP package.
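A sketch of such a transport object, using the DataEnvelope name from the earlier examples (an illustrative name, not a standard class). The type never changes, while the XML it carries can evolve freely:

import java.io.Serializable;

// A single, strongly typed wrapper whose only job is to carry XML
// between layers. All layers compile against this one type; the
// message content itself is free to change.
public class DataEnvelope implements Serializable {
    private final String xml;

    public DataEnvelope(String xml) {
        this.xml = xml;
    }

    public String getXml() {
        return xml;
    }
}

Note that the class deliberately contains no XML processing capability, in line with the recommendation below.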
XML works so well for this purpose because it carries its metadata, the data that
describes the data, along with the actual data itself. In addition to the metadata in the XML
tags, Document Type Definitions (DTDs) or schemas can be used to specify additional
information, such as the types of the various data elements carried in the XML message.
The metadata in the XML in the transport object, along with supplemental
information such as schemas, is all that the service requires to understand and deal with the
data.
We do not recommend attempting to use a complicated scheme of introspection or
reflection in the transport, service, or resource layer. Rather, use a strongly typed object
that all layers are compiled to accept and use it to transport an XML document.
Furthermore, we do not recommend building a large amount of XML processing
capability into this object. Just use it to carry the XML. We recommend using well-established external XML tools to create and to manipulate the XML data. We think that
you will find that attempting to duplicate such well-established functionality in your data
transport object will turn out to be an expensive proposition.

ISSUES IN IMPLEMENTING AN SOA

Implementing an SOA framework will cost you time and money. Some organizations have
no budgetary process to finance infrastructure development. In these organizations all
information technology development projects are developed for and managed by a specific
customer. This customer assumes all of the costs associated with the development of the
application. Under this model if it were decided to develop an application for a customer
using an SOA framework, that customer would have to assume the entire cost of
developing the framework as well as the cost of using it to build their application. This
system of financing for IT projects punishes infrastructure development.
A much better financial model is the one used by car manufacturers, such as
Canaxia. Every four years all of Canaxia's car models undergo a major rework. This
rework requires an investment of several billion dollars. In spite of the size of the outlay
required, it is accepted because it is known that that is the cost of doing business. Therefore,
the initial retooling outlay is amortized over the four years of the product run by adding a
small amount to the cost of each car. Organizations that adopt a similar approach to
financing IT infrastructure will have a competitive advantage in today's marketplace.
Applications built using an SOA will not only be cheaper to develop and have a
faster time to market, but will be significantly cheaper to maintain. Unfortunately, it is a
common financial procedure in most enterprises to separate the cost of developing an
application from the cost of maintaining it. It is well known to software engineers that the
cost of maintaining an application is several times more than the cost of developing it.
However, most businesspeople are either unaware of that fact or don't believe it. That is
because the cost of maintaining their application is lumped in with the cost of maintaining
all the other applications and buried deep inside the IT budgets. The much more effective
method of financing application development is to have the customer pay for both
development and maintenance costs. Using that financial model, what maintenance is
really costing will become painfully evident to the business side of the enterprise. When
cradle-to-grave financing is used, the lower maintenance costs of SOA applications will
become quickly evident. Building an SOA and using it to develop applications will
demonstrate a positive ROI that will more than justify the initial outlay required.

WEB SERVICES
Web services have been chosen for particular discussion because they are a well-known
and extremely important example of an SOA implementation. We have no intention of
going into a nuts and bolts discussion of Web Services here. However, examining this
concrete example of an SOA can give you ideas of ways to apply SOA to your
architecture.

The Web services paradigm is rapidly becoming the preferred way to implement
new business interconnections, both internally and externally. In addition to
being used for new connections, Web Services offer such compelling advantages that
many existing applications have or will be ported to them.
One of the compelling attractions of Web Services is that they adhere to open standards. Open standards:

• Bring write-once, use-everywhere to the provision of software services.
There is no difference between accessing internal services and external
services.

• Eliminate vendor lock-in.

• Streamline the development process by turning the provision of services into
a Product Line Practice.

The open standards behind Web Services have gained allegiance from all of the big
players: IBM, Microsoft, Sun and Oracle, to name a few. The open standards utilized by
Web Services are:

• The transport protocols HTTP, FTP and SMTP.

• The messaging protocol SOAP.

• The contract description language WSDL.

• Registry protocols such as UDDI and ebXML.

In addition to seamless interoperability, considerable agility is
gained because it is now easy to replace the current implementation of a service with an
implementation provided by another group in the enterprise or by an external party. Web
Services fit extremely well with the process of enhancing the value of a systems
architecture via enhancements. In this case the enhancement is exposing functionality as a
Web Service or consuming a Web Service to provide functionality that is missing.
See the Systems Architecture chapter for a more thorough discussion of using
maintenance and enhancement activities to enhance the value of your systems architecture.
Let's examine the parameters that have been laid down for a service-oriented
architecture and see how they have been implemented by Web services.
Most Web services are modular in design. It is true that it is impossible to judge the
modularity of a Web service from the outside. It is entirely possible to expose a
monolithic, inflexible application as a Web service; however, all modern software
architects recognize this as a poor design choice. In any case, how modular the Web
service is remains an internal design choice that is invisible to the consumer of the Web service.
Modularity, or lack thereof, is of importance only to the organization that has built and
has to maintain the Web service.
We have indicated that a service can represent a business function or a utility. While,
from an enterprise perspective, requiring services to encapsulate a business process is
much more useful than just exposing utility functionality, currently the vast majority of

Web services fall under the category of utilities. As the novelty of Web services wears off
and they come into the mainstream of corporate information technology we expect to see
more and more complete business functions put out as Web services.
Services should be independent of the means of transport. At the current time all
Web services communicate using a single transport: HTTP. There is nothing about Web
services that inherently limits them to using HTTP and we expect to see other transport
modalities used in the future. However, HTTP will continue to be the most popular
transport for Web services messages for some time to come. One of the big reasons for
HTTP's popularity is that HTTP messages can readily pass through firewalls.
Difficulties in traversing firewalls are one of the reasons that Java applets and CORBA are
seldom used in Internet scenarios. Web Services notwithstanding, it is still our
recommendation that, when building internal SOA frameworks, time and attention be
devoted to making the service transport independent.
Web Services utilize the Simple Object Access Protocol (SOAP) messaging
protocol. This protocol is central to Web Services and it needs to be discussed. The
SOAP protocol, among other things, defines how to represent a message in a standard
manner. In addition, it defines data types to be used when using remote procedure calls
(RPC) via Web Services. XML is the language used to compose SOAP messages. A
SOAP message consists of three parts:

• The Envelope, which basically identifies the XML as a SOAP message.

• The Header, which contains information for activities such as transaction
management and security, processing directives, etc. This is information for
the systems processing and routing the message.

• The Body, which contains the actual data.
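As a concrete illustration, here is a minimal SOAP 1.1 message showing the three parts (the body elements are illustrative):

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- transaction, security and routing information goes here -->
  </soap:Header>
  <soap:Body>
    <getOrderStatus xmlns="http://canaxia.example/orders">
      <orderId>12345</orderId>
    </getOrderStatus>
  </soap:Body>
</soap:Envelope>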

Web services must be designed to a contract. This contract must clearly state the
services that they provide and what they require of the consumer of the Web service. This
contract is in the form of WSDL. WSDL has been developed for the purpose of
discovering Web services that suit the consumer's criteria. Using WSDL, the provider of
the service describes that service. That description consists of:

• The functionality that the service provides.

• A description of the messages that are acceptable to the service and their
format.

• The protocol that the consumer must use to utilize the service.

In conjunction with the contract specified by the service in WSDL, Web services
provide a mechanism to dynamically build the proxy that allows connection to the service.
Web services offer two methods of establishing connections between modules: one
dynamic, the other static. It is possible to provide to the consumer of the Web service an
object, a "stub," to which it can bind at compile time. Some Web Services specify a proxy
to which the consumer must bind to access the service. These static methods are limited
and inflexible, and it is not clear whether they will ever be used by more than a small percentage
of Web service consumers. Of much more interest is the fact that Web services have
evolved dynamic methods of service discovery. Currently the most popular method for the
dynamic discovery of services is Universal Description, Discovery and Integration (UDDI).
This interface provides a standard publish API that can be used by Web Services to
register themselves and a search API that clients can use to search UDDI registries for
services of interest. However, UDDI is not the only registry solution. Another standard
that is gaining considerable acceptance is Electronic Business using eXtensible Markup
Language (ebXML). ebXML is being developed under the auspices of the international
organizations UN/CEFACT and OASIS. It is a suite of specifications whose
purpose is to provide a set of standard specifications for XML messages between
businesses. As part of that set of specifications, an ebXML Registry specification has been
developed. ebXML registries can be created for a variety of purposes, one of which is
to allow the publication and discovery of Web services. Microsoft has created an XML
standard for business interactions called BizTalk. A Web Services adapter for BizTalk has
been developed; as part of this Web Services functionality, registry services have been
implemented.
For further information on ebXML see www.ebxml.org. For further information on
BizTalk, see www.biztalk.org.
This competition between different registry schemes may slow the growth of the
adoption of Web services. However, we believe that the competition between standards
will eventually result in a stronger, more robust registry solution or set of registry
solutions. The future world of Web services is certain to contain many different registries,
each with its own API. Small, consumer-oriented Web services will probably be
concentrated in one or a few registries. However, the location of business services will
most likely be much more diffused. In fact, many or most enterprises may well choose to
implement their own registry. Having a registry of their own gives corporations the level
of control and security that most of them need to feel comfortable with Web services.
Having a plethora of Web services registries is not a problem. Having a plethora of APIs
to access a plethora of registries is a problem. Standardization on a small set of Web
service registry access APIs is important for the widespread adoption of Web services.
Having to change the registry access code when changing a Web service from one
provider to the other greatly reduces the agility that Web services offer.
Web services are definitely language and platform agnostic. In fact, this is one of
their big attractions. Web services actually do deliver on the promise of a simple,
universal, effective method to allow any application, written in any language and
running on any platform, to communicate with any other application. This versatility is
achieved by the universal nature of the transport mechanism, HTTP, and the universal
nature of the message format, XML. Enterprises that effectively implement a Web
services strategy will become considerably more agile due to the greatly increased number
of solutions that will be available to solve their IT problems.
Support for transactions and security, two must-haves for anything that is to be seriously
considered as a provider of enterprise-level business services, are still under
development for Web services. Once these have become solidified and tested, it is our
prediction that the number of full-scale business services provided will
increase dramatically. In any case, these topics are outside the subject matter domain of
this chapter.
This discussion brushes the surface of Web Services very lightly. There are many
crucial architectural issues involved in Web Services that we do not have the space to treat.
See Java Web Services Architecture by McGovern, Tyagi, Stevens and Mathew for an in-depth discussion.

SERVICES AT CANAXIA

Internal Services
The architecture team at Canaxia recognized the potential of SOAs and has implemented
the following services internally:

• A set of utility services. This includes utilities important in application
development, such as logging. It also includes the registry that is used to locate
services.

• Security.

• A financial reporting service.


For these internal services, Canaxia has developed a simple registry system. The
registry operates as a service. It consists of an in-memory data structure that allows clients
to dynamically link with a service provider. All that is required when it is desired to
change a service is to change the URI for that service in the registry file. Transports are
HTTP and RMI/IIOP.

Resource adapters exist for JDBC and the Java Message Service (JMS). Since database
access is practically always via stored procedures, JDBC's facilities for working with
stored procedures are utilized. A particular service retrieves the information it requires to
connect to the database from the Registry Service. Modern messaging products are JMS
compliant, and existing or legacy messaging products such as IBM's MQSeries or Tibco
Rendezvous have JMS wrappers, so JMS is the most universal messaging choice.
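A sketch of the stored procedure access performed inside the JDBC adapter, using JDBC's standard CallableStatement (the procedure name and parameters are illustrative; the connection itself would be obtained using details from the Registry Service):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;

// Calling a stored procedure through JDBC's CallableStatement.
public class InventoryDao {
    public int quantityOnHand(Connection conn, String partNumber)
            throws SQLException {
        CallableStatement stmt = conn.prepareCall("{call GET_INVENTORY(?, ?)}");
        try {
            stmt.setString(1, partNumber);               // IN parameter
            stmt.registerOutParameter(2, Types.INTEGER); // OUT parameter
            stmt.execute();
            return stmt.getInt(2);
        } finally {
            stmt.close();
        }
    }
}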

Web Services
In addition to the internal services listed above, Canaxia is creating a
set of Web Services to allow closer coordination between Canaxia and some of its
suppliers. The initial Web Service allows visibility into Canaxia's inventory and can
produce figures outlining current consumption patterns.

The registry is UDDI compliant and is built and owned by Canaxia. All clients will
be internal until Web Services security has been finalized.

Internationalization
Canaxia is an international company with business units in several countries.
Internationalization is handled on several levels. For the presentation layer, Canaxia
follows Java and J2EE standards in that all of the messages and labels presented to the user
are kept in resource bundles. The locale setting on the user's workstation brings up the
proper language resource bundle to use for the UI.
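A sketch of that standard mechanism (the bundle and key names are illustrative): labels live in per-locale property files such as Messages.properties, Messages_es.properties and Messages_de.properties, and the user's locale selects the proper bundle at runtime.

import java.util.Locale;
import java.util.ResourceBundle;

// Standard Java internationalization: the locale picks the bundle.
public class UiLabels {
    public static String label(String key, Locale userLocale) {
        ResourceBundle bundle = ResourceBundle.getBundle("Messages", userLocale);
        return bundle.getString(key);
    }
}

For example, label("order.submit", new Locale("es")) returns the Spanish text for the submit button.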
The UDDI-compliant Web Services registry that Canaxia created for its Web
Services takes advantage of the fact that consumers can specify a language when getting a
bindingTemplate from the registry.
bindingTemplates are part of the way UDDI registries organize data pertaining to
services. See Java Web Services Architecture by McGovern, Tyagi, Stevens and Mathew or
Building Web Services with Java: Making Sense of XML, SOAP, WSDL and UDDI
by Graham, Simeonov, Boubez, Daniels, Davis, Nakamura and Neyama for more detail.
In addition, XML tags can have a language attribute. Therefore, the XML data
generated from Canaxia locations in the Caribbean specifies Spanish as the language for
the tags. If the locale specified for the computer displaying the data is not Spanish, a
translation of the tags is performed.

In this way data is generated in the local language, eliminating the expensive and
error-prone process of forcing all Canaxia locations to work in English.

SOA BEST PRACTICES

1. Think big: develop for the enterprise. Do your architecture work first.
Understand the enterprise architecture and use that knowledge to plan your
SOA.

2. Start small: work incrementally. Use the first project to build the
component infrastructure for your SOA. Add to your SOA on an ad hoc
basis and document what you have learned from each SOA project.

3. When encapsulating existing legacy functionality, use the SOA.

4. Wire in what you have. Leverage the standards-based interoperability of the
SOA to allow you to integrate your application infrastructure.

5. Architecture is as important to your Web Services as to your SOA. Use
architectural principles to set the contracts that your services offer to
consumers. One of the things this means is to get the grain, the size, of
the service that you expose in the contract correct. The contract should
expose a relatively coarse-grained contract and fulfill it by
aggregating several fine-grained components.

6. Think agile when building services. Make the creation of services an
opportunity to increase the adaptability and efficiency of your organization.

7. Maximize reuse of components by making them small and cohesive.

SUMMARY
By separating the business logic from presentation, transport, and resource access issues,
the SOA makes legacy application integration and EAI a straightforward process.
In addition, service-oriented architectures greatly improve the flexibility and
reusability of applications. Building a framework on an SOA is an effort that has a
significant positive ROI.

Having a large variety of choices, and being able to move quickly and inexpensively
between those choices, is a hallmark of an agile enterprise. Service-Oriented Architecture
is an excellent tool to provide agility to business processes.

Systems Architecture

In this chapter we will discuss systems architecture. Systems architecture can best
be thought of as both a process and a discipline to produce efficient and effective
information systems.

• It is a process because there is a set of steps to be followed to produce or
change the architecture of a system.

• It is a discipline because there is a body of knowledge that informs people as
to the most effective way to architect a system.

By systems we mean an interconnected set of machines, applications and network
resources. Systems architecture unifies that set by imposing structure on the system.
More importantly, this structure aligns the functionality of the system with the goals of the
business.
We are in a meeting with some of Canaxia's top management and Myles Standish,
the new System Architect is being introduced to a semi-skeptical audience. Myles is
Canaxia's first System Architect. The decision to bring on board a System Architect was
controversial and there are some individuals present who are still questioning the need for a
System Architect. However, Canaxia is in the process of building an Architectural Team
and Kello James was very clear that he considered a System Architect a vital part of the
team.
Myles begins his remarks by giving a quick rundown on what he has discovered
about the state of Canaxia's system architecture. He points out that:

• A large part of Canaxia's fixed assets, as at most large corporations, is in the
applications and machines that comprise Canaxia's system architecture. The
corporations in Canaxia's space that manage their system architecture will gain
a competitive advantage.

• The majority of the budget of the Information Technology (IT) department is
swallowed by costs related to Canaxia's infrastructure. And, at the present
time, most of these costs are fixed. If Canaxia wishes to gain any agility in
managing its information technology resources, it will have to put the time,
effort and attention into its system architecture necessary to sharply reduce
those costs, Myles points out.

• Major business information technology functionality at Canaxia is divided
among silo applications that were, in some cases, created three decades ago.
At this point Myles was interrupted by an executive vice president who asked,
"You are not going to tell us that we have to rewrite or replace these
applications are you? We tried that!" Myles firmly replied that he had no
plans to replace any of the legacy applications. Rather than replace, he was
going to break down the silos and get these applications talking to each other.

• The terrorist attack that completely destroyed the World Trade Center in New
York City has put disaster recovery on the front burner for Canaxia. Yet Canaxia's
current backup procedures were outdated. Myles told the executives that he
concurred with the storage architecture that Kello and the other architects had
crafted for disaster recovery. Myles then went into a discussion of the issues
surrounding storage. Myles asked the executives if they realized that, at the
rate that Canaxia's data storage costs were rising, the cost of storage will equal
the current IT budget by 2006. The stunned silence from the executives in the
room told Myles that this was new information for the executive team. Myles
then told them that he would also ask them for their support in controlling
those costs and thereby gaining a further competitive advantage for Canaxia.
Data storage and the issues surrounding it will be discussed later on in this chapter.

• The recent flurry of bankruptcies among large, high-profile companies has
led to the imposition of stringent reporting rules on the top tier of publicly
owned companies in the U.S. Rules that demand almost real-time financial
reporting capabilities from their IT departments. Canaxia is one of the
corporations under the new reporting stringencies. Myles bluntly told the
gathered managers that the current financial systems were too inflexible to
provide anything more than the current set of reports and monthly is the fastest
that they could be turned out. Myles continued by stating that accelerated
financial reporting would be a high priority on his agenda.

• Another issue that he brought up was that visibility into the state of Canaxia's
infrastructure is an important component of a complete view of the business.
Myles intended to build the reporting infrastructure so that line managers and
upper level management had the business intelligence at their disposal to be
able to ascertain if the systems infrastructure was negatively impacting
important business metrics such as order fulfillment. In the back of his mind

Myles was certain that Canaxia's antiquated system architecture was the root
cause of some of the dissatisfaction that customers were feeling with the auto
manufacturer. He knew that bringing metrics on the performance of the
company's infrastructure into the existing business intelligence (BI) system
was an important component in restoring customer satisfaction levels.
"It is my intention," Myles stated, "to move Canaxia's system architecture into
alignment with the business goals of the enterprise and keep it there". Looking at the men
and women in the room he pointed out that the biggest liability that comes from having no
system architecture and System Architect to craft it is that the majority of infrastructure
decisions are made at almost the shop-floor level. The cumulative impact of thousands of
these low-level decisions has encumbered Canaxia with an infrastructure that doesn't
server business process needs and yet devours the majority of the IT budget. In many
cases purchase decisions were made just because the technology was "neat" and "the latest
thing". Under my tenure that will stop," Myles stated firmly. "We are talking about far to
much money to allow that. Moving forward, how well the project matches with the
underlying business process and its ROI will determine where the system infrastructure
budget goes."
Then the subject of money, the budget issue, came up. "You are not expecting a big
increase in budget to accomplish all of this are you? There is really no money available
for budget increases of any size." Myles was firmly informed. Myles responded to this by
pointing out that the system infrastructure of an enterprise should be looked at as an
investment. A major investment. As an investment whose value should increase with
time, not as an object whose value depreciates. The underperforming areas need to be
revamped or replaced. The areas that represent income, growth or cost-reduction
opportunities need to be concentrated on. "My intent is to identify these areas and present
to you the information necessary for you to make informed decisions about how to align
Canaxia's system architecture with Canaxia's business goals. Moving forward, for the next
two to three years I expect to do what needs to be done within the current budget. After
that time I expect to see infrastructure charges shrink at a rate of three to four percent a
year." is how Myles ended his presentation.
As part of his charter as System Architect for Canaxia, Myles was given oversight
responsibility for recommended changes in the physical infrastructure and to the
application structure of the enterprise. That meant that he would be part of the
architecture team for all application purchase and development decisions at Canaxia. It
was his intent to team with Kello James to bring agile architecture processes into Canaxia.
Some of you in the reading audience may find yourselves in a situation similar to
Myles's. Improvements are demanded of you, and you have to meet them with the same or less
money than you received last year. Bigger, better, faster and cheaper is all you ever hear.
In this chapter we will expose a methodology that will allow you to meet those demands.
Better than that, we will demonstrate that these demands to cut costs often conceal
opportunities to move the architecture of your systems in a positive direction. Our purpose

is to help you understand what needs to be done to deal with your systems architecture
problems on time and within budget.
The basic purpose of system architecture is to support the higher layers of the
Enterprise Architecture. The enterprise hardware and software infrastructure is the domain of
the System Architect. Since, in many companies, the software and hardware represent a
significant fraction of the enterprise's total assets, the position of Systems Architect is a
significant one. It is important for Systems Architects not to equate their duties with the
objects, the applications and the machines that comprise their domain. The Systems
Architect's fundamental purpose is to support and further the business objectives of the
enterprise. Hardware and software objects are fundamentally transient and exist only to
further the purposes of the business.
It is important for individuals that are attempting to function in the Systems
Architecture space to take a holistic view of all of the parts in a system and all of their
interactions. When we are speaking of all of the interactions between the parts we mean
not only the existing interactions but all of the possible interactions too. We are aware that
attempting to understand how all the objects in the enterprise interact can be a challenging,
perhaps overwhelming, task. That is a good thing because the impossibility of the task
will force you to forgo trying to understand exactly how the application produces the
desired result and put your attention on what the application promises to do. In other
words, we urge System Architects to focus on the contracts the application promises to
fulfill and interactions between the components rather than getting bogged down in the
details of exactly how objects do their work. Put in Object-Oriented Design terms, we
suggest that the most effective path to understanding a system is to build a catalogue of the
interfaces the system exposes or the contracts the system fulfills. Focusing on the contract
or the interface makes it much easier to move away from a preoccupation with machines
and programs and move to a focus on the enterprise functionality that the systems must
provide. Taken higher up the architecture stack, the contract should map directly to the
business process that the system objects are supposed to enable.
When done properly, creating and implementing a new Systems Architecture or
changing an existing one involves managing the process by which the organization gets to
the new architecture as well as managing the assembly of the components that make up
that architecture. It involves managing the disruption to the stakeholders as well as
managing the pieces and parts. The goal is to improve the process of creating and/or
selecting new applications and the process of integrating them into your existing systems.
The payoff is to reduce IT operating costs by improving the efficiency of the existing
infrastructure.
Some of the areas that need to be considered when building a system architecture to
support the enterprise are:
• The business processes that the applications, machines and network are
supposed to support. This is the primary concern of the systems architecture
for an enterprise. Hence, a Systems Architect needs to map all of the
company's business processes to the applications and infrastructure that are
supposed to support them. In all likelihood there will not be a 100% fit. The
questions to ask yourself when looking at the discrepancies are:

    • How important is this process to the company? Deal with the big
    problems first. A substantial misfit in a vital business process must
    be treated with a high level of concern. If you do identify such a
    problem, particularly in a system that has been in use for several
    years, we recommend examining the work-arounds that the users of
    the system have crafted. When an adequate work-around exists, you
    may be able to put this problem on the back burner. In many cases
    you will find that the side effects of the compensatory mechanism
    are causing major problems in the business.

    • How important is the discrepancy? As above, the work-around
    for a large discrepancy in a minor process can have serious side
    effects.

    • What part of the overall architecture is being compromised? Areas that often
    suffer are the data and security architectures and data quality. The data
    architecture suffers because users don't have the knowledge and expertise to
    understand the effects of their changes to procedure; the security architecture
    because security is often viewed by employees as an unnecessary nuisance.
    Data integrity falls victim because people are pressured to produce more in less
    time and the proper crosschecks on the data are not performed.

• The stakeholders in the system architecture: the people involved. This
includes:

    • The individuals who have created the existing architecture.

    • The people tasked with managing and enhancing that architecture.

    • The people in the business who will be affected, positively and
    negatively, by any changes in the current system architecture.

    • The business's trading partners and other enterprises that have a
    stake in the effective functioning and continued existence of the
    corporation.

The needs and concerns of the stakeholders will have a large impact on what
can be attempted in a system architecture change and how successful any new
architecture will be.

• The context that the new System Architecture encompasses. When we use
context, we are referring to the global enterprise realities in which the System
Architect finds him or herself.

• The data with which the system architecture deals.
See the Data Architecture chapter for full details.

Working With Existing System Architectures
All companies have a Systems Architect, even if their name is Ad Hoc. This is the
situation that Myles found himself in. Canaxia's system architecture wasn't planned. The
company had gone through years of growth without spending any serious time or effort on
the architecture of its applications, network or machines. Companies that have not had a
system architecture consistently applied during their growth will probably have what we
call shantytown architecture, stovepipe architecture or, more commonly, a mixture of
both.
Shantytown architecture is characterized by a hodge-podge of equipment and
software scattered throughout the company's physical plant. This equipment and software
was obtained for short-term, tactical solutions to the corporate problem of the moment.
Interconnectivity was an afterthought at best. Perhaps the worst aspect of the shantytown
architecture is that it is extremely difficult or perhaps impossible to get an accurate picture
of exactly what the enterprise's system resources are.
Stovepipe architecture is usually the result of large, well-planned projects that
were designed to provide a specific piece of functionality for the enterprise. These systems
will quite often be "mission-critical" in that the effective functioning of the company is
dependent on them. Normally these systems involve both an application and the hardware
that runs it. The replacement of these systems is usually considered prohibitively expensive. The
issues with stovepipe architecture are:
• They don't fit well with the business process that they are tasked with
assisting. This can be a result of the development process used to construct
them. In most cases a "heavyweight," inflexible development process such as
the waterfall process was used. The waterfall process is predicated on
capturing all of the requirements perfectly up front; to a greater or lesser
degree this always fails. Or, it can be a result of the inevitable change of
business requirements as the business's competitive landscape changes.
Either way, the business is left with a less than perfect fit between the
business process and the stovepipe application.
See the Methodology Overview, Enterprise Unified Process and Agile Modeling
chapters for more depth on software development processes.
• Quite often their data doesn't integrate with the enterprise data model. This is
usually due to vocabulary, data format or data dictionary issues.
See the Data Architecture chapter for a more complete discussion of the data
architecture problems that can arise among stovepipe applications.
Canaxia's system architecture was largely stovepipe with a minor shantytown
component.
It is sometimes useful to treat the creation of an enterprise systems architecture as a
domain-definition problem. In the beginning you should divide the problem set so as to
exclude the less-critical areas from it. This will allow you to concentrate your efforts on
critical areas or on areas that can yield an immediate return: low-hanging fruit, in other
words. This can yield additional benefit because problems can solve themselves or
become moot if ignored. In the modern information services world the only thing that is
constant is change. In such a world, time can be your ally and patience the tool to reap its
benefits.

System Architecture Types
Very few Systems Architects will be given a blank slate and told to create a new system
architecture. Like most System Architects, Myles inherited an existing architecture, along
with the people charged with supporting and enhancing it and the full set of
dependencies between the systems and the parts of the company that depend on them.
Myles knew that the existing system architectures and the people who own them
would have to be worked with. It must be strongly emphasized that the existing personnel
infrastructure is an extremely valuable resource. These are people to bring onto your
team, the individuals who can spell success or failure for your Systems Architecture plans. It is
entirely possible that the Systems Architect will encounter people who have decided to be a
hindrance rather than a help. The first thing to do when such an individual is encountered
is to listen to them very carefully. Perhaps they have a valid case; perhaps they are
warning of a very real danger. If, on further investigation and discussion, you decide that
their position is without merit and that bringing them on board is not going to work, then
you will have to work around them. In this situation the dynamic nature of the modern
information technology environment may be your greatest ally. Look to internal or
external resources that can be used to bypass the recalcitrant individual(s).
Following is an enumeration of the System Architecture types, their strengths, their
weaknesses and how best to work with them.

Legacy Applications
Neither you nor we have the time to go into an exhaustive analysis of the concept of
Legacy Applications. Those of you who have them know what they are and what they do for your
company.
However, it is useful to consider the following characteristics of legacy
applications:
• Security is easier to set up and enforce. The machines are physically isolated
from the rest of the company, often in their own building. Even if they share
building space with other departments, the physical machinery exists behind
locked doors and/or glass walls; hence physical security has already been
taken care of. Network connections to legacy applications are usually few,
and it is often possible to tightly control them. This, too, makes network
security easier.
• Components and routines of the application all run on a single machine.
• These applications are either created by IT, or their purchase and installation
is under the control of IT.
• As originally conceived, input and output were handled by batch processes.
These processes were often set up to run at night. This does not fit well with
the current model of direct, on-demand interactions with applications and is
one of the issues that Systems Architects sometimes have to deal with.
• Some legacy applications do support direct user interactions. These
interactions will be through "dumb terminals" (machines with no memory or
CPU), or through client-server applications and/or thin clients.
• Lines of responsibility for the applications and the infrastructure are clear and
sharp. Hence the process of upgrade and change to these applications is
usually clear and well established.
While many legacy applications have been discarded, quite a few have been carried
forward to the present time. The surviving legacy applications will usually be both vital to
the enterprise and considered impossibly difficult and expensive to replace.
This was Canaxia's situation. Much of its IT department involved legacy
applications. Several mainframe applications controlled the car manufacturing process.
The Enterprise Resource Planning (ERP) system dated from the early 1990s. Fortunately, the
Customer Relationship Management (CRM) application was newer and built on modular
principles. But the legacy application that did the financials for Canaxia was of more
immediate concern. It was a set of COBOL programs hosted on a mainframe. One of the
big pluses was that all of the certification of these applications by the Federal and State
Departments of Internal Revenue had been accomplished long ago. The process to gain
these certifications is long and expensive, as some of you well know. One of the negatives
is that these applications are not suited to the frequency and level of detail mandated by
the new financial reporting requirements. Since Canaxia was one of the 945 companies
that fell under Securities and Exchange Commission (SEC) Order 4-460, the architecture
team knew that they had to produce financial reports that were much more detailed than
the ones the system currently produced. In addition, the reports would have to be produced
pretty much on demand. The architecture team knew that they could take one of the
following approaches:
• They could replace the legacy financial application. The risk involved in this
course may be less than the risk in trying to prop up the old application.
Order 4-460 opens the possibility of jail time and fines against the personal
fortunes of Chief Executive Officers (CEOs) and Chief Financial Officers
(CFOs) who are deemed to have released incorrect financial information. The
team knew that if they decided to go the replace route, they would have to
reduce the risk of failure by costing it out at a high level. However, they had
other areas that were going to require substantial resources, and they did not
want to spend all of their capital on the financial reporting situation.
• Another route they could take would be to extract the data necessary for the
required reports from the legacy application into a data warehouse. That
would leave the legacy application to do what it is used to doing, when it is
used to doing it. On the down side, this will lead to a duplication of data and
effort. The applications mining the warehouse will be turning out reports that
are identical or very similar to reports that the legacy application can turn
out. What this approach buys Canaxia is:

Greater responsiveness. Using a data warehouse it is possible to


produce an accurate view of the company's financials on a weekly basis.

Much more granular level of detail can be obtained. Access to data in a


data warehouse is much more granular than is available in a legacy
financial application. Granular access to data is one of the prime
functions of data warehouses. Reconciliation of the Chart of Accounts
according to the accounting principles of the enterprise is the prime
function of legacy financial applications, not granular access to data.
Hence it was a natural to move the expanded financial reporting into a
situation that could handle it.

Ad hoc querying. The data in the warehouse will be available in


multidimensional cubes that users can drill down into according to their
needs. This greatly lightens the burden of producing reports. While
producing reports is not normally in the province of the System
Architect, indirectly he or she will benefit from lightening the load on
other IT sections.
The fact that the data warehouse approach leads to a duplication of effort is not
necessarily a bad thing. It does provide the following benefits:
• This approach is in line with the Agile Architecture principle of only doing
what is necessary. You only have to worry about implementing the data
warehouse or data mart. You can rest assured that the financial reports that the
user community is accustomed to will appear when needed and in the form that
they are used to.
• It buys you time, and time can be your friend.
In terms of the legacy applications that Myles found at Canaxia, he concurred
with the decision to leave them intact. In particular, he was 100% behind the
plan to leave the legacy financial application intact and to implement a data
warehouse for Canaxia's financial information.
In general terms, the good news with legacy applications is that they are extremely
stable and they do what they do very well. Physical and data security is usually excellent.
The less-good news is that they are extremely inflexible. They expect a very specific input
and they produce a very specific output. Modifications to their behavior usually are a
major task. In many cases, the burden of legacy application support is one of the reasons
that there is so little discretionary money available in most IT budgets.
Most System Architects will have to deal with legacy applications. A key to working
with them is to forgo understanding exactly how they work. Also, we don't suggest
attempting to rewrite them. Rather, concentrate on ways to exploit their strengths by
modifying their behavior. Focus on the contracts that they fulfill and the interfaces that
they can be considered to expose. In a nutshell, whenever possible work to break down
the silo walls and to integrate them into the global system architecture. Later in this
chapter we will discuss ways to accomplish that feat.

Client/Server
Client/server architecture is based on dividing effort into a client application, which
requests data or a service, and a server application, which fulfills those requests. The client
and the server can be on the same machine or they can be on different machines.
It is useful to review the hole that client/server architecture filled and why that
architecture became popular.
• Access to knowledge was the big driver in the rise of client/server. On the
global level, with mainframe applications, users were given data, but they
wanted knowledge. On the micro level it came down to reports. Reports are a
particular view of the data on paper. It was easy to get the standard set of
reports from the mainframe. Getting a different view of corporate data could
take months of report preparation. Also, reports were run on a standard
schedule, not on the user's schedule.
• Cost savings were another big factor in the initial popularity of client/server
applications. Mainframe computers cost in the six- to seven-figure range, as
did the applications that ran on them. Modifying existing mainframe
applications or acquiring new ones were big deals in terms of the company's
resources. It was much cheaper to build or buy something that ran on the
client's desktop.
• The rise of fast, cheap network technology. Normally the client and the
server applications are on separate machines, connected by some sort of
network. Hence, to be useful, client/server requires a good network.
Originally, businesses constructed Local Area Networks (LANs) to enable file
sharing. Client/server was able to utilize these networks. Enterprise
networking has grown to include Wide Area Networks (WANs) and the
Internet. Bandwidth has grown from 10 Mbps Ethernet and 4/16 Mbps Token
Ring networks to 100 Mbps Ethernet, with Gigabit Ethernet starting to make
an appearance. While bandwidth has grown, so has the number of applications
chatting on a given network. Network congestion is an ever-present issue for
Systems Architects.
• Client/server led to the development and marketing of some very powerful
desktop applications that have become an integral part of the corporate world.
Spreadsheets and personal databases are two desktop applications that have
become ubiquitous in the modern enterprise.
• Client/server application development tapped the contributions of a large
number of non-programmers. Corporate employees at all levels developed
some rather sophisticated applications without burdening an IT staff.
The initial hype that surrounded the client/server revolution has pretty well died down.
It is a very mature architecture. The following facts about it have emerged:
• The support costs involved in client/server architecture have turned out to be
considerable. A large number of PCs had to be bought and supported. The
cost of keeping client/server applications current, of physically distributing
new releases and new applications to all of the corporate desktops that
required them, came as a shock to many IT departments. Client/server
applications are another one of the reasons that such a large portion of most
IT budgets is devoted to fixed costs.
• Many of the user-built client/server applications were not well designed. In
particular, the data models used in the personal databases were often
completely unnormalized. In addition, in a significant number of cases the
data in the desktop data store was not 100% clean.
• The huge numbers of these applications and their extreme decentralization
make it extremely difficult for a System Architect to locate and catalogue
them.
Microsoft Access deserves its own section in any System Architect's handbook.
Since it came almost free with Microsoft Office, practically every desktop has a copy of it.
Building an application in Access is extremely easy for non-programmers. As a result,
Access databases have become ubiquitous in the modern corporation and, in many cases,
a significant part of vital corporate data is scattered across a multitude of Access databases.
Most IT departments have gotten involved in maintaining these applications. As just one
example, many IT person-hours were spent bringing corporate Access databases into Y2K
compliance. The point is that many IT departments have, by default, become involved in
supporting user-developed applications. This situation brings to mind a price list sign in
an auto-repair shop that I once saw. It went something like this:
o Basic rate: $25/hour.
o $50/hour if you watch.
o $75/hour if you help.
o $125/hour if you worked on it first.
One really important client/server helper application is the spreadsheet. Microsoft
Excel and Lotus 1-2-3 have taken on a life of their own. At all levels of the corporate
hierarchy, the spreadsheet is the premier data analysis tool.
In regard to client/server applications, the Canaxia architecture team was in the
same position as most architects of large corporations, in that:
• Canaxia had a large number of client/server applications to be supported.
Furthermore, they would have to be supported for many years into the future.
• Canaxia was still doing significant amounts of client/server application
development. Some of it was just enhancements to existing applications, but
there were several proposals to develop entirely new client/server
applications.
• One of the problems that Myles had with client/server was that it is basically
one-to-one: a particular application on a particular desktop talking to a
particular server, usually a database server. This architecture does not
provide for groups of corporate resources to access the same application at
the same time. The architecture team wanted to move the corporation in the
direction of distributed applications to gain the cost savings and scalability
that they provided.
The architecture team knew that they had to get some sort of grip on Canaxia's
client/server applications. They knew of several approaches that could be taken:
• Ignore them. This is the recommended approach for applications that
have the following characteristics:
o They are complex and would be difficult and/or expensive to move to
a thin-client architecture.
o They have a small user base. Normally this means that the application
is not an IT priority.
o They are stable, i.e., they don't require much maintenance or a steady
stream of updates. This normally means that support of the
application is not a noticeable drain on your budget.
o They involve spreadsheet functionality. Attempting to reproduce even
a small subset of a spreadsheet's power in a thin client would be a
very difficult and expensive project. In this case we suggest accepting
that the rich Graphical User Interface (GUI) the spreadsheet brings to
the table is the optimal solution.
• Put the application on a server and turn the users' PCs into dumb terminals.
Microsoft Windows 2000 Server is capable of operating in multi-user mode.
This allows the IT department to run Windows applications on the server,
providing just screen updates to the individual machines attached to it.
Windows is singled out here because the vast majority of client/server
applications are Windows applications, and heretofore Windows has been a
single-user, not a multi-user, operating system. Putting the application on a
server makes the job of managing client/server applications several orders of
magnitude easier.
• Extract the functionality of the application into an interface or a series of
interfaces. The point of this effort is to turn the client/server application into
a set of components: components that can be plugged into other applications,
components that can fit into a Service-Oriented Architecture (SOA),
components that can be easily replaced. Think about that last point for a
moment. If the client/server application has been broken down into pieces, it
can be replaced a piece at a time. Or, the bits that are the biggest drain on
your budget can be replaced, leaving the rest still functional. A sketch of this
approach follows this list. For more information on SOA, visit the SOA
chapter.
• Encourage them. Client/server applications are still the preferred solution in
a number of situations, such as when the application requires a complex GUI.
When it comes to building elaborate, responsive GUIs, Visual Basic is
substantially more productive than attempting the same thing in HTML.
Another set of applications that are excellent candidates for client/server
architecture are ones that are computation-intensive and/or do a lot of
graphing. You do not want to burden your server with calculations or use up
your network bandwidth pumping images to a client. It is far better to give
access to the data and let the CPU on the desktop do the work. Applications
that involve spreadsheet functionality are best done in client/server. Excel
exposes all of its functionality in the form of Component Object Model
(COM) components. These components can be transparently and easily
incorporated into VB or Microsoft Foundation Classes (MFC) applications.
In any case, all new client/server development should be done using
interfaces to the functionality rather than monolithic applications.
• Discourage them. This is probably not a 100% viable solution. In any case,
this is not for the System Architect to decide. You may well be in the
position of having a large part of your budget swallowed by client/server
support and have established that there is an excellent ROI in replacing all of
your client/server applications with thin-client applications. However, the
political realities of the needs of your stakeholders may make this unfeasible.
• Move them to a thin-client architecture. At this time this usually means
creating browser-based applications to replace them. This is the ideal
approach for those client/server applications that have an important impact on
the corporate data model. It has the important side effect of helping to
centralize the data resources contained in the applications.
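As promised above, here is a hedged sketch of the extraction approach; all of the names are invented. The point is that callers depend only on the interface, so the legacy implementation behind it can be replaced a piece at a time.

    // A client/server routine hidden behind an interface. Today the legacy
    // code answers; tomorrow a new component can, with no change to callers.
    public interface CreditCheck {
        boolean approve(String customerId, double amount);
    }

    // Current implementation: delegates to the existing client/server logic.
    class LegacyCreditCheck implements CreditCheck {
        public boolean approve(String customerId, double amount) {
            // Placeholder for the call into the existing client/server code.
            return amount < 10_000;
        }
    }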
The Canaxia architecture team decided to take a multipronged approach to their
library of client/server applications:
• About a third of them clearly fell into the "leave them be" category. They
would be supported but not enhanced.
• Ten to fifteen percent of them would be cut loose. All IT support would
cease. The architecture team accepted the possibility that, at some future time,
they would be forced to accept several of these applications back into the IT
fold.
• The rest of them were candidates for replacement by thin-client architecture,
which will be discussed next.
When ordering client/server applications for replacement, the architecture team
looked to Myles's strategic System Architecture Plan. That plan was based on the premise
that the synergy an application had with the other components of Canaxia's system
architecture was the important factor. It is true that Myles didn't totally ignore the burden
that an application put on the IT budget. However, the architecture team knew that
reactive chasing after big budget savings would initially look good but would fail to
produce any long-term structural savings.
One thing that is often overlooked in client/server discussions is the enterprise data
sequestered in the desktop databases scattered throughout the business. There are several
approaches the System Architect can take to dealing with the important corporate data
contained in user data stores.
• Let it lie. This merits strong consideration under the following circumstances:
o You are not going to move the functionality of the application that
drives this data store to a browser-based application.
o The data is extraneous to the Enterprise Data Model; in other words,
the data has little or no relevance outside of the application(s) that
use it.
o The data is dirty. In that case you may have no choice but to let it lie.
• Extract it from the desktop and move it into one of the corporate databases.
The user then gets the access that he or she is used to via Open Database
Connectivity (ODBC) or JDBC, as sketched below. This is an excellent
solution, but it will take a significant amount of time, energy and, of course,
money. For the client/server applications that are moved into the browser,
extraction will be necessary.
• A combination of the above.
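To illustrate the extraction option, here is a minimal JDBC sketch; the data source name, table and column are hypothetical. Once the desktop data has been moved into a corporate database, users and applications reach it through this standard API.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Query data that formerly lived in a desktop database but now
    // resides in a central, corporate data store.
    public class CustomerLookup {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:odbc:CorporateDB";   // hypothetical ODBC DSN
            try (Connection con = DriverManager.getConnection(url, "user", "pw");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM customers WHERE region = ?")) {
                ps.setString(1, "Northeast");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }
        }
    }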
When asked about client/server, Myles said that he had no sweeping
recommendations. Instead, his advice was to treat each application on its own merits.
While client/server is not a good fit for the current architecture of distributed applications,
it still has its place in the modern corporate IT world. We are staunch advocates of using
interfaces rather than monolithic applications. As we have said before, concentrate on the
biggest and most immediate of your problems. When in doubt, let it lie.

Thin Client Architecture
Thin-client is currently the most popular architecture. It has progressed from the
architecture du jour into a mainstream architecture. Originally thin clients were a rehash
of time-share computing with a browser replacing the dumb terminal. As the architecture
has matured the client has, in some cases, gained responsibility.
As we have pointed out, client/server applications turned out to have substantial
hidden costs in the form of the effort involved in maintaining and upgrading them.
Dissatisfaction with these hidden costs was one of the impetuses that led to the concept of
a "thin client". With thin-client systems architecture, all work is performed on a server.
All applications required to support the task at hand run on a server too. All data is stored
on a server or on resources connected to a server by the network. The only thing the client
is charged with is displaying information sent from the server and gathering information
from the user.
The keys to thin-client architecture are the network connection between the thin
client and the server and the Internet browser. The appearance of cheap and effective
network connections is as important to thin-client architecture as it is to client/server. The
appearance of Internet browsers on all desktops turned every corporate PC into a target for
thin-client applications.
As originally conceived, thin clients would realize additional economies because,
since they stored no data and did no work, they could be machines much cheaper than the
desktop computers required for a client/server architecture: little more than dumb
terminals. The idea of a thin client as a special machine never took hold because, by the
time the first thin-client machines appeared (the ill-fated Network Computers, or NCs, of
the 1990s), the cost of a basic WinTel desktop had dropped to the point that such desktops
were in some cases cheaper than NCs.
It is the appearance of Internet browsers on all desktops that has firmly ensconced
the thin-client architecture in the modern enterprise. The only things the client machine
requires are an application that is a given on everybody's machine, the browser, and a
network connection to the machines that will serve up the applications. One of the
really attractive parts of thin-client architectures is that the client software component is
provided by a third party and requires no expense or effort on the part of the business. The
server, in this case a Web server, serves up content to the user's browser and gathers
responses from him or her. All applications run on either the Web server or on dedicated
Application Servers. All of the data resides on machines on the business's network.
Thin-client architecture has several problematical areas:
• One problem area can be the network. Normally a thin-client architecture
produces a lot of network traffic. The amount of network traffic can
overwhelm 10 Mbps Local Area Networks (LANs). When the connection is
over the Internet, particularly via a modem, round trips from the client to the
server can become unacceptably long.
• Another is that the foundation upon which most thin-client applications are
built, the Web browser, is inherently stateless. The entire Web architecture
was designed to serve static pages. The user requests a page and the server
gives it to him or her. When the next request for a static page comes in, the
server doesn't care about the browsing history of the client. Many schemes,
most of them quite elaborate, have been developed to maintain state for a
particular client between calls to the Web server; one common scheme is
sketched later in this section. All of these methods of maintaining state come
with overhead.
• Control of which browser is used to access the thin-client application is
sometimes outside the control of the team developing the application. If the
application is only used within a corporation, then the browser can be
dictated by the IT department. If the application is to be accessed by
outside parties or over the Internet, then there is a wide spectrum of browsers
that may have to be programmed for. Applications to be accessed via the
Internet are the most problematical. WebTV and old versions of Microsoft's
Internet Explorer and Netscape Navigator are the problem. Many users have
clung to old versions of these applications, seeing no point in upgrading. These
antiquated browsers make the development of rich applications difficult or
impossible. Your choices in this situation are to give up functionality by
programming to the lowest common denominator or to give up audience by
developing only for modern browsers.
• Users usually want rich, interactive applications. It has been our experience
that pages that just provide information are rarely visited more than once. The
pages that allow a user to do something useful are the pages that will get the
hits. HTML is not the best language in which to attempt to build rich,
interactive GUI applications. It can be done, but it tends to be labor-intensive.
When developing thin-client applications, the development team must strive to
provide just enough functionality for the user to accomplish what they want
and no more. Agile application development methodologies have the correct
approach to this problem: do no more than what the client asks for. The test to
apply to extensive, expensive GUI functionality in thin-client applications
is to demonstrate the ROI that the functionality will generate. If it is
not going to augment the bottom line, it shouldn't be in.
• Data validation. The data input by the user can be validated on the client,
with the user alerted to any problems by a popup. Or, the data can be sent to
the server and validated there; if there are problems, the page is resent to the
user along with a message identifying where the problems are. One of the
architecture team's rules is that verification of input data is always done on
the server, not on the client; a sketch of this rule follows this list. Do not fall
into the seductive trap of relying on JavaScript on the client for data
verification. Client-side verification is more popular with users; however,
using Java or VB on the server is much more productive, and the applications
produced are easier to maintain.
• The paucity of effective Rapid Application Development (RAD) tools for
thin-client development. The most common Web development tools are
usually enhanced text editors. There are more advanced Web RAD tools, such
as DreamWeaver and FrontPage; however, they all fall short on some level,
usually because they only automate the production of HTML. They give little
or no help with JavaScript or with the development of server-side HTML
generators such as JavaServer Pages (JSPs) and Java servlets.
• Server resources can be stretched thin. A single Web server can handle a
surprising number of connections when all it is serving up are static Web
pages. When the session becomes interactive, the resources consumed by
a client can go up dramatically.
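Here is the validation sketch promised above. It assumes the standard servlet API and an invented form field; any checks done in the browser are a convenience only and are repeated here, where they cannot be bypassed.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Server-side validation: the server is the final authority on input,
    // whatever the client may or may not have checked.
    public class OrderEntryServlet extends HttpServlet {
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String raw = req.getParameter("quantity");   // hypothetical field
            try {
                int quantity = Integer.parseInt(raw);
                if (quantity < 1 || quantity > 999) {
                    resp.sendError(HttpServletResponse.SC_BAD_REQUEST,
                                   "Quantity must be between 1 and 999");
                    return;
                }
                // ...process the validated order...
            } catch (NumberFormatException e) {
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST,
                               "Quantity must be a number");
            }
        }
    }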
The keys to a strong thin-client systems architecture, beyond an adequate number of
servers and large network pipes, lie in application design. For example, the amount of
information carried in session objects, the opening of database connections, the time spent
in SQL queries: all of these application issues can have a big impact on the performance of
a thin-client architecture. If the Web browser is running on a desktop machine, it may be
possible to greatly speed performance by enlisting the computing power of the client
machine. Data analysis, especially the type that involves graphing, can heavily burden the
network and the server. Each new request for analysis involves a network trip to send the
request to the server. Then the server may have to get the data, usually from the machine
hosting the database. Once the data is in hand, there are CPU-intensive processing steps
required to create the graph. All of these operations take lots of server time and resources.
Then the new graph has to be sent over the network, usually as a fairly bulky object such
as a GIF or JPEG. If the data and an application to analyze and display the data can be
sent over the wire, the user can play with the data to his or her heart's content and the server
can be left to service other client requests. This is a "semi-thin client" architecture, if you
wish.
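Since session objects came up both in the statelessness discussion and in the paragraph above, here is a minimal sketch of the standard servlet scheme for keeping per-client state across stateless HTTP requests. The overhead noted earlier is visible: every attribute stored here consumes server memory for the life of the session.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // HttpSession hides the state-maintenance machinery (typically a cookie
    // or URL rewriting) behind a simple per-client map held on the server.
    public class VisitCounterServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            HttpSession session = req.getSession(true);  // create if absent
            Integer visits = (Integer) session.getAttribute("visits");
            visits = (visits == null) ? 1 : visits + 1;
            session.setAttribute("visits", visits);      // state lives server-side
            resp.getWriter().println("Request number " + visits
                                     + " in this session");
        }
    }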
Mobile phones and Personal Digital Assistants (PDAs) can also be considered thin
clients. They have browsers that allow interaction between these mobile devices and
systems within the enterprise.
Two intriguing new architectures are available that make the production of rich
thin-client applications quick and easy. Both are based on replacing the browser
with a lightweight, easily downloaded client. These clients communicate with the server
via HTTP or HTTPS. Microsoft's .NET architecture has Web Forms. The Java
community is developing JavaServer Faces, which should be available sometime in the
next couple of years. We predict that in the future most businesses will move all
applications that do not interface with the public to Web Forms and/or JavaServer Faces.
The ROI is quite compelling. However, for Canaxia, .NET is out and JavaServer Faces
is too far in the future to be seriously considered.
The rise of the browser client has created the ability, and the expectation, that
businesses expose a single, unified face to external and internal users. Inevitably this will
mean bringing the business's legacy applications into the thin-client architecture.
Integrating legacy applications into a thin-client architecture will usually take more work
than integrating client/server applications. Exposing the functionality of the legacy
systems via interfaces and object wrappering is the essential first step. Adapting the
asynchronous, batch-processing model of the mainframe to the attention span of the
average Internet user can be difficult. We will discuss this in greater detail later in this
chapter.

Using Systems Architecture to Enhance System Value
As software and hardware systems age, they evolve as maintenance and enhancement
activities change them. Maintenance and enhancement can be used as an opportunity to
increase the value of a system to the enterprise. The key to enhancing the value of a
systems architecture via maintenance and enhancement is to use the changes as an
opportunity to modify stovepipe applications into reusable applications and components.
This will make your system more agile, because you will then be free to modify and/or
replace just a few components rather than the entire application. In addition, it will tend to
future-proof the application, because the greater flexibility offered by components will
greatly increase the chances that existing software will be able to fulfill the requirements
of the moment.
The best place to begin the process is by understanding all of the contracts that the
stovepipe fulfills. By contracts we mean all of the agreements that it makes with
applications that wish to use it. Then, as part of the enhancement or maintenance,
externalize the contract via an interface. As this process is repeated with a monolithic
legacy application it begins to function as a set of components.
As an example of the process involved in externalizing interfaces, there are tools that
consume COBOL copybooks and produce an object wrapper for the modules described
in the copybooks.
Even better is to access via messaging. By messaging we mean products such as
MQ Messaging from IBM, Rendezvous from TIBCO or any of the many products that
implement the Java Message Service (JMS). With messaging, the information required for
the function call is put on the wire and the calling application either waits for a reply
(pseudo-synchronous) or goes on about its business (asynchronous). Messaging is very
powerful in that it completely abstracts the caller from the called functionality. The

language in which the server program is written, the actual datatypes it uses, even the
physical machine upon which it runs is immaterial to the client. With asynchronous
messaging calls the server program can even be offline when the request is made and it
will fulfill it when it is returned to service. Asynchronous function calls are very attractive
when accessing legacy applications. They can allow the legacy application to fulfill the
requests in the mode with which it is most comfortable, batch mode.
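A minimal JMS sketch of the asynchronous style just described; the queue and payload are hypothetical. The caller puts the request on the wire and goes about its business; the legacy application can consume the message later, even in its preferred batch window.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    // Fire-and-forget request to a legacy system via a message queue. The
    // caller never learns what language, machine or schedule serves it.
    public class LegacyRequestSender {
        public static void send(ConnectionFactory factory, Queue queue,
                                String payload) throws JMSException {
            Connection con = factory.createConnection();
            try {
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage(payload));
            } finally {
                con.close();  // closing the connection closes session and producer
            }
        }
    }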
To be of the greatest utility, object wrappers should expose a logical set of the
underlying functions. We think that you will find that exposing every function a
monolithic application has is a waste of time and effort. The functionality to be exposed
should be grouped into logical units of work and that unit of work is what should be
exposed.
What is the best way to expose this functionality? When creating an object wrapper,
the major consideration should be the manner in which you expect the service to be
accessed in the future. In practically all cases this will be a service that is exposed
over a network connection of some sort. This network may be as fast as Gigabit Ethernet
or as slow as Simple Object Access Protocol (SOAP) over a Wide Area
Network (WAN). In any case, the effort should be made to minimize the number of
network trips, as the sketch below illustrates.
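All of the names here are invented; the design point is the real content: prefer one coarse-grained call that returns a whole unit of work over several fine-grained calls that each cost a network round trip.

    // Coarse-grained wrapper interface: one network trip returns the whole
    // unit of work, instead of separate calls for name, balance and status.
    public interface AccountService {

        AccountSummary summaryFor(String accountId);   // one round trip

        class AccountSummary {
            public final String name;
            public final double balance;
            public final String status;

            public AccountSummary(String name, double balance, String status) {
                this.name = name;
                this.balance = balance;
                this.status = status;
            }
        }
    }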
As you can see, wrappering depends on distributed object technology for its
effectiveness. Distributed objects are available under Common Object Request Broker
Architecture (CORBA), Java, .NET and Message-Oriented Middleware (MOM)
technology. Distributed objects rely upon an effective network (Internet, intranet or
WAN) to operate effectively.
In addition to increasing the value of a stovepipe application to other segments of the
enterprise, exposing the components of that application provides the following
opportunities:
• The possibility of sunsetting stagnant or obsolete routines and incrementally
replacing them with components that make more economic sense for the
enterprise.
• The possibility of plugging in an existing component that is cheaper to
maintain in place of the legacy application routine.
• Outsourcing that functionality to an Application Service Provider (ASP) or
Web Service.
If an interface is properly designed, any application accessing it will be
kept totally ignorant of how the business contract is being carried out. It is therefore
extremely important to design these interfaces properly. The Canaxia architecture group has an
ongoing project to define interfaces for legacy applications and important client/server
applications that were written in-house.
The best place to start when seeking an understanding of the interfaces that can be
derived from an application is not the source code but the user manuals. Another good
source of knowledge of a legacy application's contracts is the application's customers, its
users. Normally they know exactly what it is supposed to do and the business rules
surrounding its operation. Occasionally there is documentation for the application that is
useful. Examining the source code has to be done at some point; but the more
information you have gathered beforehand, the faster and easier the job of deciphering
the source will be.

Network Protocols
In a nutshell, a network protocol is a set of agreements and specifications for sending data
over a network. There are many network protocols in use today. System Architects need
an understanding of the various network protocols in common usage in business and
industry today so that they can optimize system architecture design for the needs of their
particular situation.
Transmission Control Protocol/Internet Protocol (TCP/IP) is the network protocol
with the widest use in industry:
• TCP/IP protocol stacks exist for all operating systems currently in use.
• It is an extremely robust and reliable protocol.
• It is reasonably fast.
• It is routable, which means that it can be sent between subnets using routers.
• The majority of current business networks use TCP/IP for file sharing and
communication.
• It is available for free with all operating systems. Even NetWare, the home of
the once very popular network protocol SPX, offers TCP/IP as well as its
proprietary protocol.
• An extremely robust suite of security functionality has been developed for
communication via TCP/IP.
• Internet communication by and large uses TCP/IP. There are several other
network protocols that can be and are used over the Internet; they will be
discussed later in this section. However, the Internet communication
protocols HTTP, HTTPS, SMTP and FTP use IP.
• Web services use TCP/IP.
• Messaging protocols use TCP/IP.
• All remote object communication, such as DCOM, Java RMI and CORBA,
uses TCP/IP.
Subnets
Subnets are segments of a network whose members can communicate with each other
directly: a computer on a subnet can reach all of the other computers on that subnet without
an intermediary. Networks are split into subnets by the subnet mask part of the TCP/IP
properties. Subnets simplify network administration and security. Computers on one
subnet require routers to communicate with computers on other subnets, as the example
below illustrates.
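A small worked example (the addresses are illustrative) of how the subnet mask does the partitioning: two hosts are on the same subnet exactly when their addresses agree after the mask is applied.

    // Apply a 255.255.255.0 mask: hosts that match on the masked bits can
    // talk directly; hosts that differ need a router.
    public class SubnetCheck {
        static int toBits(String dotted) {
            String[] p = dotted.split("\\.");
            return (Integer.parseInt(p[0]) << 24) | (Integer.parseInt(p[1]) << 16)
                 | (Integer.parseInt(p[2]) << 8)  |  Integer.parseInt(p[3]);
        }

        public static void main(String[] args) {
            int mask = toBits("255.255.255.0");
            int a = toBits("192.168.1.10");
            int b = toBits("192.168.1.200");
            int c = toBits("192.168.2.10");
            System.out.println((a & mask) == (b & mask));  // true: same subnet
            System.out.println((a & mask) == (c & mask));  // false: router needed
        }
    }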
To be completely accurate, TCP/IP is a suite of protocols built on a four-layer
model:
• The first layer is the network interface. These are the LAN technologies,
such as Ethernet, Token Ring and FDDI, or the WAN technologies, such as
Frame Relay and serial lines. The first layer puts frames onto and gets them
off of the wire.
• The second layer is the IP protocol. This layer is tasked with encapsulating
data into Internet datagrams. It also runs all of the algorithms for routing
packets between switches. Sub-protocols of IP provide functions for Internet
address mapping, communication between hosts and multicasting support.
• Above the IP layer is the transport layer. This layer provides the actual
communication. There are two transport protocols in the TCP/IP suite, TCP
and User Datagram Protocol (UDP). These will be explained in greater detail
in the next section.
• The topmost layer is the application layer. This is where any application that
interacts with a network accesses the TCP/IP stack. Protocols such as HTTP
and FTP reside in the application layer. Under the covers, all applications that
use TCP/IP use an interface called sockets. Sockets can be thought of as the
endpoints of the data pipe that connects applications; a minimal example
follows this list. Sockets have been built into all flavors of UNIX and have
been a part of Windows operating systems since Windows 3.1.
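As promised in the list above, a minimal illustration of a socket as the endpoint of the data pipe between two applications: open a TCP connection, speak one line of HTTP, read the first line of the reply. The host is illustrative.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // The socket is the application's end of the TCP data pipe; the layers
    // beneath it (IP, network interface) are invisible here.
    public class SocketDemo {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("www.example.com", 80)) {
                PrintWriter out = new PrintWriter(socket.getOutputStream());
                out.print("GET / HTTP/1.0\r\n\r\n");
                out.flush();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));
                System.out.println(in.readLine());  // e.g. "HTTP/1.0 200 OK"
            }
        }
    }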
The transport protocol is the actual mechanism of data delivery between
applications. TCP is the most widely used of the transport protocols in the TCP/IP
suite. It can be described as follows:
• It is a packet-oriented protocol. That means that the data is split into packets;
a header is attached to each packet and it is sent off. This process is repeated
until all of the information has been put on the wire.
• It is connection-oriented, in that it requires an endpoint, an Internet address.
• It offers guaranteed delivery. All packets will get to the destination, and there
will be no duplication of packets.
• It provides a means to properly sequence the packets once they have all
arrived.
• It provides a simple checksum feature to give a basic level of validation to the
packet header and the data.
• It does not guarantee the order in which the packets will arrive on the wire,
just that they will all get there and that they will be delivered to the
application in the proper order.
UDP is used much less than TCP, yet it has an important place in IP
communication. UDP:
• Provides broadcast communication of information. It does not require an
endpoint address. Since an endpoint address is not necessary, UDP can be used
to establish communication between two applications that don't know each
other's Internet addresses.
• Does not have the built-in facilities to recover from failure that TCP has.
If a problem such as a network failure prevents a particular application from
receiving a datagram, the sending application will never know it. If reliable
delivery is necessary, TCP should be used, or the sending application will
have to provide mechanisms to overcome the inherently unreliable nature of
UDP.
• Is faster and requires less overhead than TCP.
• Is normally used just for small data packets.
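A hedged sketch of UDP's fire-and-forget character (the address and port are illustrative): the datagram goes out with no connection, no acknowledgment and no delivery guarantee.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    // Send a small datagram to a broadcast address; no reply is expected
    // and no failure will ever be reported to the sender.
    public class UdpSend {
        public static void main(String[] args) throws Exception {
            byte[] data = "status ping".getBytes();
            DatagramPacket packet = new DatagramPacket(
                data, data.length,
                InetAddress.getByName("192.168.1.255"), 9876);
            DatagramSocket socket = new DatagramSocket();
            socket.setBroadcast(true);   // UDP supports broadcast; TCP does not
            socket.send(packet);         // fire and forget
            socket.close();
        }
    }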
Asynchronous Transfer Mode (ATM) is another protocol that should be mentioned.
TCP/IP is packet-switched rather than connection-oriented in the circuit sense. In the case
of video or voice transmission, you will need a connection between the communicating
parties, and ATM can provide that. With video and voice communication the packets have
to arrive in the order that they were sent, and ATM is better suited to that task than TCP.
ATM allows the creation of a virtual circuit connecting the sender and receiver; hence all
of the packets sent are received in the order they were sent.
We do not wish to leave you with the impression that TCP/IP cannot be used for
connection-oriented transmission such as is required for Voice over IP (VoIP). It can be,
and it is used for those purposes every day. However, the compromises required to force
TCP/IP into that role can substantially impact network performance.
Another protocol that has emerged is Multiprotocol Label Switching (MPLS). MPLS
competes with ATM in that it allows the establishment of labeled paths between the
sender and the receiver. This ability to establish paths has made MPLS of interest to the
creators of Virtual Private Networks. Unlike ATM, with MPLS it is relatively easy to
establish paths across multiple layer 2 transports such as Ethernet and FDDI. It also
outperforms ATM and offers some very useful path-control mechanisms. Like ATM,
MPLS uses IP to send data over the Internet.
A discussion of MPLS is beyond the scope of this book. For further information, see the
MPLS FAQ at www.mplsrc.com/mplsfaq.shtml.
The bottom line is that, if you have substantial video and/or VoIP needs, you should
move those applications onto ATM or, even better, MPLS.
The big question about TCP/IP that you should consider is whether to stay with IPv4,
which was released way back in 1980, or move to implementing IPv6. IPv6 has many
features of importance to System Architects. It provides:
• Virtually unlimited address space.
• A different addressing scheme that allows individual addresses for every
device in your enterprise, everywhere in the world. This addressing scheme
does not rely on subnet masking and does not require the use of Domain Name
Servers (DNSs).
• A fixed header length and improved header format that improves the
efficiency of routing.
• Flow labeling of packets. Labeling packets allows routers to sort packets into
streams. This makes IPv6 much more capable of handling stream-oriented
traffic such as VoIP or video streams than IPv4 is.
• Improved security and privacy. Secure communications are compulsory with
v6. This means that all communications exist in a secure tunnel. This can be
compelling in some circumstances. The increased security provided by IPv6
will go a long way toward providing a Secure Cyberspace (see sidebar for
details).

Secure Cyberspace
The Internet was designed with a highly distributed architecture to enable it to withstand
catastrophic events such as a nuclear war. The infrastructure, the "net" part, is secure from
attack. However, the structure of the Internet provides no security for a single node: a router or
computer connected to the Internet.
Unprotected computers are the key to one of the greatest current threats to the Internet,
denial-of-service attacks. The others are worms and email viruses that eat up bandwidth.
The Federal Government has outlined a national strategy to protect the Internet. See
www.whitehouse.gov/pcipb, the draft paper called National Strategy to Secure Cyberspace,
for details.
Converting to v6, from a hardware and operating system point of view, is cost-free
or very close to cost-free. This is based on the fact that all of the major operating systems
have v6 IP stacks built into them and that adding v6 support to devices such as routers only
requires a software update.
There is also the issue of the exhaustion of the IPv4 address space. It is a fact that the
current IP address space is facing exhaustion sooner or later. Innovations such as Network
Address Translation (NAT) have postponed the day when the last address is assigned,
buying time for an orderly transition from IPv4 to v6.

Network Address Translation is a technology that allows all of the computers inside a
business to use one of the sets of IP addresses that are private and cannot be routed to
the Internet. Since these addresses cannot be seen on the Internet, there is no chance of an
address conflict. The machine acting as the Network Address Translator is connected
to the Internet with a normal, routable IP address and acts as a router, allowing the computers
using the private IP addresses to connect to and use the Internet.
Connecting devices to the Internet
IPv6 contains numerous incremental improvements. The enormous expansion of the naming
space and the addressing changes that make it possible for every device on the planet to be
connected to the Internet are truly revolutionary for most enterprises. For Canaxia, not only can
every machine in every one of its plants be connected to a central location via the Internet, but
every sub-component of that machine can have its own connection. All sections of all
warehouses can be managed via the Internet. Dealers that sell Canaxia cars can have their
sales floors, their parts departments and their service departments directly and securely
connected via the Internet to a central Canaxia management facility.
The conversion of an existing enterprise to IPv6 is not going to be cost-free. There
will be training costs for your network staff. Older versions of operating systems may not
have an IPv6 protocol stack available, perhaps necessitating their replacement. In a large
corporation, ascertaining that all existing applications will work seamlessly with v6 will
probably be a substantial undertaking. In theory, IPv4 can coexist with v6. But we say
that, if you have to mix IPv4 with v6, you will have to prove that there are no coexistence
problems.
We recommend the following template when contemplating conversion to IPv6:
• If you are part of a startup, by all means take a hard look at going with IPv6.
The only thing that might dissuade you could be incompatibility with a critical
application.
• If you are a small or medium-sized firm without any international presence,
wait until you have to. In any case, you will be able to get by with just
implementing v6 on the edge, where you interface with the Internet.
• If you do decide to convert, start at the edge and try to practice just-in-time
conversion techniques.
• If you are a multinational corporation or a firm that has a large number of
devices that need Internet addresses, then you should formulate a v6 conversion
plan. Start at the edge and always look for application incompatibilities.
Canaxia, because of its size and international reach, has formulated a strategy to
convert to IPv6. The effort will be spaced over a decade. It will start at the edge, where
Canaxia interfaces with the Internet and with its WAN. New sites will be v6 from the very
start. Existing sites will be slowly migrated, with this effort not due to begin for another
five years. As applications are upgraded, replaced or purchased, compatibility with IPv6
will be a requirement, as much as is economically feasible. Canaxia's architecture team is
confident that there will be no crisis due to lack of available Internet addresses anytime in
the next ten years.

Systems Architecture and Business Intelligence
If your enterprise runs on top of a distributed and heterogeneous infrastructure, the health
of that infrastructure can be vital to the success of your business. In that case the System
Architect will need to provide visibility into the status of the enterprise systems. By
visibility we don't mean standing up in a staff meeting and saying the Web server was up
99% of the time last month. You will need to provide to the decision makers in the
enterprise detailed, near real-time data on system performance. For example:
• The Web server was up 99% of the time, which means that it was down 7.3
hours last month. What happened to sales or customer satisfaction during
those hours? If the customers didn't seem to care, should we spend the money
to go to 99.9% uptime?
• The Web server was up, but what was the average time to deliver a page?
What did customer satisfaction look like when the page delivery time was the
slowest?
• How about the network? Part of the system is still on 10 Mbps wiring.
How often is it afflicted with packet storms? With what other variables do
those storm times correlate?
• Part of the service staff is experimenting with wireless devices to gather
data on problem systems. What was their productivity last week?
In terms of Quality Attributes, these figures for uptime relate to Availability:
• 99% uptime per year equates to 3.65 days of downtime.
• 99.9% uptime equates to 0.365 days, or 8.76 hours, of downtime per year.
• 99.99% uptime means that the system is down no more than about 53 minutes in any
given year.
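The arithmetic behind those figures is easy to check; this short program computes the allowed downtime per year for each uptime percentage.

    // Allowed downtime per year = hours in a year * (1 - uptime fraction).
    public class Downtime {
        public static void main(String[] args) {
            double hoursPerYear = 365 * 24;                 // 8,760 hours
            for (double uptime : new double[] {99.0, 99.9, 99.99}) {
                double downHours = hoursPerYear * (1 - uptime / 100);
                System.out.printf("%.2f%% uptime = %.2f hours down per year%n",
                                  uptime, downHours);
            }
        }
    }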
It should be possible for a manager in marketing, after getting an angry complaint
from an important client, to drill down into the data stream from the systems underpinning
the enterprise and see if there is a correlation between the time it took to display the Web
pages the customer needed to work with and the substance of his or her complaints.
Perhaps several of the times that the Web server was down coincide with times when this
customer was attempting to do business. Perhaps there is a positive ROI in the upgrade
necessary to get to 99.9% server uptime. Or perhaps the Web system is not the culprit, as
everybody assumes. Perhaps the database is the choke point, or that creaky old
application running on a hand-me-down server is causing the system problems. Without
all of the data from throughout the enterprise, it will be impossible to pin down the real
source of the problems.

You can see where we are going here. We urge Systems Architects to make the
effort necessary to integrate their systems monitoring tools with the overall business
intelligence (BI) application. In most organizations, the tools that monitor the health of
networks, databases and the Web infrastructure are not plugged into the enterprise's overall
BI system. They are usually stand-alone applications that either provide snapshots of the
health of a particular system or dump everything into a log that is examined on an
irregular basis. Often they are little homegrown scripts or applications thrown together to
solve a particular problem, or they are inherited from a past when the systems were much
simpler and much less interdependent.
This is not a small project we are talking about, or an easy sell to upper management.
But the alternative is much worse. All systems support personnel are familiar with the
emergency beeper calls in the middle of the night informing you that "the system is down"
and "you have to fix it. Now." This is unpleasant, but even worse is the finger pointing
that can develop after a systems crash. Problems occurred after the crash, to be sure. But
was the crash the problem, or is there no correlation whatsoever? The System Architect
will need data to answer that. He or she needs a complete picture of the health of the
system at all times. And, if the System Architect has that at his or her fingertips, wouldn't it
be better if that data were available to the executive team so that they can become familiar
with the health of the enterprise infrastructure themselves?
The key is to remember number seven in the seven habits of successful architects:
crawl before you walk. Start with the most important performance indicators for your
system. Perhaps they are page hits per second or percent bandwidth utilization. Integrate
this data into the BI system first. Or perhaps the segment of the enterprise infrastructure
that you architect aligns with one of the company's business units. Fit the performance
metrics to that unit's output and present them to the decision makers for your unit.
The time when a new system is being planned is an excellent time to build performance
metrics measurement into its architecture. New systems are often conceived using an
optimistic set of assumptions about metrics such as performance and system uptime. When
the new system comes on line, you will be grateful for the ability to generate the data to
quantify how accurate the initial assumptions were. Then, if problems crop up when the
system becomes operational, you will have the data necessary to identify where the
problem lies at your fingertips.
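As one hedged illustration of building measurement in from the start, a servlet filter along the following lines (the class name and the logging destination are our assumptions) records the response time of every page without touching the application code itself:

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;

    // Records the elapsed time of every request so that page response times
    // can later be fed into the enterprise BI system.
    public class ResponseTimeFilter implements Filter {
        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res,
                             FilterChain chain) throws IOException, ServletException {
            long start = System.currentTimeMillis();
            try {
                chain.doFilter(req, res);
            } finally {
                long elapsed = System.currentTimeMillis() - start;
                String uri = ((HttpServletRequest) req).getRequestURI();
                // A real system would write to a metrics store, not standard out.
                System.out.println(uri + " took " + elapsed + " ms");
            }
        }
    }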
Service Level Agreements (SLAs) are a formalization of the Quality Attributes of
availability and performance, externalized in a document. They are becoming more and
more important in the System Architect's world. If you are not now subject to an SLA, you
will be later. The crucial part of dealing with an SLA is to think carefully about the metrics
required to support it. If you are required to provide a page response time of less than 3
seconds, you will have to measure page response times,
of course. But what happens when response times deteriorate and you can no longer meet
your SLA? At that point you had best have gathered the data necessary to figure out why
the problem occurred. Has the overall usage of the site increased to the point where the
existing Web server architecture is overloaded? What does the memory usage on the
server look like? For example, Java garbage collection is a very expensive operation. To
reduce garbage collection on a site running Java programs, examine the Java code in the
applications. In particular, look at the number of Strings and how they are handled. It has
been our experience that the majority of response time problems in Java applications can
be traced to Strings and how they are handled. From the data on why the problem occurred,
the correction should flow.
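To make the String point concrete, here is a sketch (ours, not from any particular application). The first method allocates a throwaway String on every pass through the loop, all of which must later be garbage collected; the second reuses a single buffer:

    public class ReportBuilder {
        // Allocates a new String on every pass; heavy garbage collection load.
        static String slow(String[] rows) {
            String report = "";
            for (int i = 0; i < rows.length; i++) {
                report = report + rows[i] + "\n";
            }
            return report;
        }

        // Appends into one growable buffer; far less garbage to collect.
        static String fast(String[] rows) {
            StringBuffer buf = new StringBuffer();
            for (int i = 0; i < rows.length; i++) {
                buf.append(rows[i]).append('\n');
            }
            return buf.toString();
        }
    }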
The bottom line is: when architecting the system you will have to think beyond the
metric in the agreement to measuring all of the variables that impact that metric. We
realize that this is easy to say but hard to do. Don't expect to get it totally right the first
time, or even the tenth time. Your experience will grow, and so will the richness of your
system of metric measurements. You will probably reach the point where 90% of the
metrics required for the SLA for a new project are already in place.
We suggest that the introduction of SLAs in your organization be looked on as an
opportunity rather than a threat. It is an excellent prod to force organizations to update,
rationalize and above all integrate their infrastructure measurements. If a new project does
not have an SLA, perhaps you might want to write up a private SLA of your own. The
discipline will pay off in the future.
If you have surfaced the proper measurements to ensure that you can meet an SLA,
we suggest that you consider integrating them into the BI system. The vice president of
marketing should not have to call the System Architect to get the current average response
time. At the very least the measurements specified in the SLA should be available to all
interested parties from their desktop.

Systems Architecture and Storage


Managing storage costs is a critical systems architecture issue because, for most
businesses, data storage costs are increasing exponentially. The costs lie both in the
physical devices and in the personnel needed to manage, upgrade and enhance the storage
devices.
One of the problems with storage costs is that the real cost to the enterprise is hidden
in hundreds or thousands of places. Most businesses do storage a machine and a hard drive
at a time.
*Insert StorageArch.eps here

Figure 4.1: The three storage architectures. The figure contrasts conventional storage
(storage devices attached directly to each computer), the storage area network (computers
sharing a central storage device over a dedicated network), and network attached storage
(a storage device attached directly to the LAN).

The important thing is not that you adopt a particular architecture, but that you
understand the benefits and tradeoffs associated with the different architectural
approaches.
The conventional approach is to attach the storage devices, usually hard drives,
directly to the machine. This is an excellent choice for situations when tight security must
be maintained over the data in the storage device. Its downside is that it is expensive to
upgrade and maintain. Upgrade costs arise more from the cost of the people needed to
move applications and data from older, smaller storage devices to the newer, bigger
device than from the cost of the storage device itself. In addition, this storage mechanism
makes it difficult to establish exactly what the storage costs of the organization are and to
manage those costs.
Storage Area Networks (SANs) offer basically unlimited storage that is centrally
located and maintained. Using extremely high-speed data transfer methods such as FDDI
or Gigabit Ethernet, it is possible to remove the storage media from direct connection to
the computer's backplane without affecting performance. SANs offer economical
enterprise-level storage that is relatively easy to manage and to grow. Implementing a
SAN is a large operation that requires thorough architectural analysis and substantial
up-front costs. However, the ROI for most enterprises is very compelling. SANs should not
be considered unless it is possible to establish the very high-speed data connection that is
required. If the machines to be connected to the SAN are geographically far removed, then
a different storage architecture is probably required.
Network Attached Storage (NAS) devices are cheap, very low-maintenance devices that
do storage and nothing else. With a NAS, you plug it into the network, turn it on and you
have storage. Setup and maintenance costs are almost nonexistent; your main cost is
relocating the existing attached storage to the NAS. These devices are perfect for adding
storage to geographically distributed machines.
The following matrix will help you choose between NAS and SAN. The major
determinant should be the architectural design.

Disaster recovery strongly favors the SAN because of the ease with which it can
be distributed over large distances. As of this writing, fiber can extend up to
95 miles without amplification, which means that data can transparently be
mirrored between devices that are 180 miles apart.

Distributed performance is better with SANs.

Function of the device:

o Very large database applications definitely favor SANs.
o File storage duties definitely favor NASs.
o Functioning as an application server favors SANs.

Ease of administration is a tie. In their own ways, both storage solutions are
easy to administer. With a NAS, you plug it into the wall and into the network
and you have storage; the work is in partitioning the storage, moving data onto
the device and pointing users to the device. SANs require quite a bit of
up-front design and setup, but once on line, adding storage is very simple and
more transparent than with a NAS.

High availability is pretty much a tie. The SAN can mirror data for recovery,
and NASs have hot-swappable hard drives in a RAID 5 configuration.

Cost favors NASs initially. Initial costs for a NAS storage installation are about
one-fifth those of a similarly sized SAN storage installation. However, the
ongoing costs of a SAN are usually less than those of a similar NAS, due to
the following factors:
o Staffing costs are less because of the centralization of the storage
devices, which makes the task of adding storage much quicker and
easier.
o Hardware costs are less. Less is spent on storage devices and tape
backup units due to the increase in the utilization of existing storage.
Normally SAN storage will average 75-85% utilization.
o LAN infrastructure spending is decreased. NASs are accessed over
the production network, and backups are usually conducted over it as
well; this increase in network traffic can force costly network upgrades
that a SAN avoids.

Canaxia has a SAN that hosts the company's mission-critical databases and data.
Portions of that data are mirrored to data centers physically removed from the operations
center itself. In addition, each company campus has a central file storage facility that
consists of large numbers of rack-mounted NAS arrays. Most application servers are
connected to the SAN. Storage requirements continue to grow at a steady rate, but the cost
of administering this storage has actually dropped as the data has been centralized onto
the SAN and the various NASs.
Most of you will use a mix of the three storage modalities. The important thing is
that this mix follows an architectural plan and is easy for managers to cost and to justify.

Systems Architecture Aspects of Security


Effective enterprise security consists of:

Effective, well-thought-out, clearly communicated security policies.

Effective, consistent implementation of these policies by a company staff that
is motivated to protect the business's security.

A systems architecture that has the proper security considerations built in at
every level.

Ensuring enterprise security is a wide-ranging operation that touches on almost every
area of the business. As such, it has to grow out of extensive interactions between the
company's management and the architecture team. One of the first issues that has to be
on the table is how to build a security architecture that increases the company's
competitive advantage. Since security is an enterprise-wide endeavor, the input of the
entire architecture team is required. Apply the proper security levels at the proper spots in
the enterprise.
When discussing enterprise security, one of the first things to establish is what needs
to be protected and what level of protection is appropriate for each particular item. A
useful analogy can be borrowed from inventory management. Proper inventory
management divides articles into three levels: A, B and C. The A-level items are the most
valuable, and extensive security provisions are appropriate for them. Trade secrets, credit
card numbers and update and delete access to financial systems are just a few of the items
that would be on the A list. B-list items need to be protected, but security considerations
definitely need to be balanced against operational needs. C items require little or no
security protection. As a rule of thumb, five percent of the list should be A items, 10 to 15
percent should be B items and the rest should be C items. A similar analysis is appropriate
for the enterprise's systems architecture. To begin with, the business's data assets are
located on machines that are part of its systems architecture. In addition, elements of the
systems architecture, such as the network, will most often be used in attacks upon the
business's vital data resources. The following is a list of the major classes of security
attacks that must be considered when building the security part of the systems architecture.
They are listed in rough order of frequency:

Viruses and worms.

Attacks by dishonest or malicious employees.

Destruction or compromise of the important data resources due to employee


negligence or ignorance.

Denial of service attacks.

Attacks from the outside by "hackers".

Viruses and worms are the most common IT security problem. The vast majority of
the viruses and worms that have appeared in the last few years do not actually damage data
that is resident on the computer that has the virus. However, in the process of replicating
themselves and sending a copy to everyone in a corporate mailing list, they consume a
large amount of your network bandwidth and usually cause a noticeable impact on
productivity. For the last couple of years, the most common viruses have been e-mail
viruses. Worms like the "Love" family of viruses depend on extremely well-known
psychological vulnerabilities as well as security holes in the software. A patch can be
downloaded and applied to the software, but the only "patch" available for the
psychological vulnerabilities is ensuring that all employees are aware of e-mail security
policies and that steps are taken to motivate employees to follow those policies. In addition,
some person or group of persons in corporate infrastructure support will have to be
required to regularly monitor computer news sites for the appearance of new e-mail viruses
and to immediately obtain and apply the proper antivirus updates, hopefully before the new
virus finds an employee in your business whom it can exploit to launch it. Unfortunately, the
current state of antivirus technology is such that defenses for viruses can only be
created after the virus appears.
The majority of attacks against corporate data and resources are perpetrated by
employees. To protect your organization against inside attack, you need:

User groups and effective group-level permissions.

Effective password policies to protect access to system resources.

Thorough, regular security audits.

Effective security logging.

Assigning users to the proper group, and designing security policies for that group
that restrict their access to only the data and resources that they require to perform
their functions, is the first line of defense against attacks by insiders. Don't forget to
remove the accounts of users who have left the firm.
For those enterprises that still rely on passwords, another area for effective security
intervention at the systems architecture level is effective password policies. In addition to
mandating the use of effective passwords, make sure that no resources, such as
databases, are left with default or well-known passwords in place. All passwords and user
IDs that are used by applications to access system resources should be kept in encrypted
form, not as plain text in the source code.
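As a minimal sketch of keeping credentials out of the source code (the file path and property keys are hypothetical), an application can load them from a protected configuration file at startup:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    // Loads database credentials from a file outside the source tree. The
    // file itself should be access-controlled and, ideally, stored encrypted;
    // decryption is elided in this sketch.
    public class Credentials {
        public static Properties load(String path) throws IOException {
            Properties props = new Properties();
            FileInputStream in = new FileInputStream(path);
            try {
                props.load(in);
            } finally {
                in.close();
            }
            return props;
        }

        public static void main(String[] args) throws IOException {
            Properties p = load("/etc/canaxia/db.properties"); // hypothetical path
            String user = p.getProperty("db.user");
            String password = p.getProperty("db.password");
            // Open the database connection with user and password here;
            // never hard-code them in the application source.
        }
    }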
You need to know where you are vulnerable and what attacks can exploit these
vulnerabilities. If you have not run a security audit, make plans to do so. Large enterprises
may want to engage outside consultants to audit the entire firm. As an alternative, there are
software packages that can effectively probe your system for vulnerabilities. You might
also consider building up a security cadre of individuals in your organization. They would
be tasked with knowing and understanding all of the current "hacker" attacks that can be
mounted against systems of your type and with running any security auditing software that
you might purchase.
Finally, effective, tamperproof audit systems will allow you to detect that an attack
has occurred and will provide you with the identity of the attacker. Here is where
scrupulously removing expired user accounts is important. If Bob Smith has left the
company but his user account still exists, that account can be compromised and used in an
attack, and you will have no clue who really perpetrated it.
In any case, the cost of implementing the patch to the vulnerability has to be
balanced against the seriousness of the threat and the probability that someone in your
organization would have the sophistication necessary to carry out such an assault. Lock
the most important doors first and most effectively.
Inadvertent damage to or compromise of business data by well-meaning or ignorant
employees causes substantial business losses annually. This can be prevented in a variety
of ways. Training is first and foremost. When new corporate applications are being
brought on-line, it is crucial that the people who will be using them are thoroughly trained.
After an application is in regular operation it is important that experienced operators be
given the time and resources to train and mentor new operators. But, even the best
training programs are not 100 percent effective. It is also important to make sure that
people are in the proper roles and that the security parameters of these roles are so crafted
that they have the minimum opportunity to do damage while not restricting their ability to
carry out their assigned functions. Audit log trails are useful to establish exactly what was
damaged or compromised.
Of the attacks that can occur from the outside, from the Internet, denial of service
(DOS) attacks are potentially the most destructive. DOS attacks involve the recruitment
of large numbers of outside computer systems and their synchronization to flood
your system with such a large number of requests that it either crashes or becomes unable
to respond to legitimate service requests from your customers. Fortunately, due to the
large-scale effort involved on the part of the attackers, DOS attacks are rare and are normally
directed against high-profile targets. DOS attacks are extremely difficult to combat, and it
is beyond the scope of this book to discuss methods to deal with them. It is important,
though, that if your company is large enough and important enough to be a possible target
for a DOS attack, you begin now to research and to put into place countermeasures to a
DOS attack.
Last, and least, on the list of security threats about which you should be concerned
are so-called "hacker" attacks. Hacker attacks are put last on the list not because they are
uncommon; all companies can expect to see scores or even hundreds of hacker attacks
against them in the course of a year. However, the vast majority of these exploits are well
known, and the measures necessary to defeat them are standard parts of all companies'
Internet defense systems.
All security mechanisms, such as encryption and applying security policies, will have
an impact on system performance. Providing certificate services costs money. By
applying agile architecture principles to the design of the security part of your systems
architecture, you can substantially mitigate these impacts. To be agile in the area of systems
architecture security design means applying just the right level of security at just the right
time. It means being realistic about your company's security needs and limiting the
intrusion of security provisions into the applications and machines that make up your
systems architecture. Just enough security should be applied to give the maximum ROI,
and no more.
As a final note, do not forget physical security for any machine that hosts sensitive
business data. There are programs that allow anyone with access to a computer and its
boot device to grant themselves administrator rights. Once more we recommend the A, B,
C level paradigm. C-level data and resources need no special physical security. B-level
data and resources have a minimal level of physical security applied. For A-level data and
resources, we suggest that you set up security containment that also audits exactly who is
in the room with the resource at any time.

Systems Architecture Summary


The following should be considered as system architecture Best Practices:
1. Know the business processes that the system architecture is supporting.
Know them inside and out.
2. Keep support of those business processes first and foremost on your agenda.
Know what the business needs and keep the business side aware of your
accomplishments.
3. Know the components in your system architecture. All of the machines,
applications, network connections, etc.
4. Instrument your system. This allows you to find the problem areas, the
bottlenecks.
5. Attack the cheap and easy problems first. It will build and help maintain
credibility and trust with the business side of your company.
6. Prioritize and be proactive. If you are not constrained to produce
immediate results, identify the most important system architecture problems
and attack them first, even before they become problems. Good system
measurements are key to being able to identify problem areas in your
system architecture and to establish the most effective means to deal with
them.
7. Know all of your stakeholders. The people aspects of your system
architecture are at least as important as the machines.
8. Only buy as much security as you need. Give up on the idea of becoming
invulnerable. Prioritize your security issues into A, B and C lists.
There are three important things to emphasize about systems architecture. It must
align with the business goals of the organization. It must provide what the stakeholders
need to perform their functions. And finally, the software and hardware infrastructure
of an enterprise is a major investment that must be managed to provide the greatest return
on that investment.

Presentation Tier Architecture

The presentation tier is the glass where the users and the system meet. This
tier is responsible for generating the user interface (UI) for the users of the system.
Presentation tier architecture is the act of defining the blueprint, or the underlying
schematics, used to map out or design a system. Since the UI is the most visible part of the
system, it ends up being the most frequently changed part. This is perhaps the main reason
for separating the presentation tier from the other back-end code in a multi-tiered software
architecture.
The basic concept of a tiered architecture involves breaking up an application into
logical chunks, or tiers, each of which is assigned general or specific roles. Tiers can be
located on different machines or on the same machine where they are virtually or
conceptually separate from one another. The more tiers used, the more specific each tier's
role. Multi-tier architecture concepts are fast evolving, but in most architectures there is a
clear separation between the front end, which contains the code for generating the UI, and
the back end, which contains the code for the business logic and the databases. Figure 1
illustrates an example of a multi-tiered software architecture and the key components
involved in the overall presentation tier architecture framework. (Note: The term
component is used in the sense of a piece or part of the overall solution.)
The sample architecture illustrates the three-tier architecture that has become a model
for many enterprise applications. One of the major benefits of this architecture is its
potential for a high level of maintainability and code reuse. The architecture is composed
of the user system interface (the presentation tier), the process management (the business
logic tier), and the database management (the data tier). In this architecture the
presentation tier has been separated into a primary and a secondary tier, corresponding to
the activities related to the client and the server processes. The processes on the business
and data tiers have not been elaborated.

Component Types
An examination of most business solutions based on a layered component model
reveals several common component types. Figure 1 shows these component types in one
comprehensive illustration. Although the list of component types shown in Figure 1 is not
exhaustive, it represents the common kinds of software components found in most
distributed solutions. These component types are described in depth throughout the
remainder of this chapter.
Figure 1: Multi-tiered software architecture. The primary presentation tier (the client)
includes browsers (HTML, JScript, VBScript, XML/XSLT), fat clients (applets, ActiveX,
third-party plug-ins, built with Java or Microsoft technologies) and mobile devices such as
PDAs, phones and smart devices (WAP, WML, VoXML). The secondary presentation tier
(the server) contains UI components and UI processes together with device management,
caching, security, communication and management services. Behind it sit the protocol
gateway tier, the business tiers (messaging, JSP, EJB, COM, ASP, etc.) and the data tiers
(RDBMS, ODBMS, LDAP, XML, etc.).

The component types identified in the sample architecture scenario are:


1. Primary presentation tier components. Most solutions today require users
to interact with the system via a user interface. The primary presentation
tier, called the client, consists of software that renders the user interface for
the users to interact with the system. For smaller and less complex systems,
users may be provided with only one client type. In recent years, many
different types of client software, devices and technologies have become
increasingly popular, and users now expect systems to offer a user interface
in the client of their choice. Amongst the many client options that are available
today, the choices can broadly be grouped into the following categories:
Internet browser. This client software is built based upon World Wide
Web Consortium (W3C) specifications and is used for rendering web-based
applications. The Internet browser is based upon a request-response
paradigm that requires accessing the web server and downloading the client
code to render the user interface. In most cases the UI is updated only after
an explicit request from the user. For the UI to be quickly available to the
users, the code that is downloaded needs to be small in size and, therefore,
allows for little client-side processing. The browser is also called the thin
client. Internet Explorer from Microsoft and Netscape from AOL-Time
Warner are the main Internet browsers available today.
Fat-client. When the software is pre-installed on the physical computer of
the user for them to access the system, it is called a fat-client. Since the
code actually resides on the user's computer, the size of the code can
potentially be larger than that used for creating the UI via a thin client. This
makes it possible for the UI to be richer in both user experience and
processing. The fat-client also makes it possible for users to do a large part
(and sometimes all) of the work offline, without needing to send or receive
information from a remote server. The main difference in paradigm between
the Internet browser and a fat-client is in the ability of the fat-client to
persist connection information, which makes it possible for the server to
send and update information on the client without an explicit request from
the user. Java, a technology from Sun Microsystems, and several proprietary
technologies from various companies, including the popular Visual Basic
and VC++ from Microsoft, are used for creating fat-clients.
Mobile devices. Mobile access technology has gained a lot of attention
recently. There are strong interests in mobile access among a wide range of
people and organizations involved in mobile access technology, such as
hardware manufacturers, software providers, communication service
providers, content providers, and end-user organizations. The W3C is working
towards making information on the World Wide Web accessible to mobile
devices. The key challenge is in the device itself, which is characterized by a
small screen, limited keyboard, low-bandwidth connection, small memory and
so on. These devices are highly portable and connect to the server using a
wireless network. On one end of the spectrum are devices that are primarily
phones; on the other are smart devices that are primarily miniaturized
computers. As the devices continue to grow in their telephony and
computational power, their adoption is increasing manyfold. There are
now several critical applications that are offered to users on their mobile
devices, ranging from content serving, such as news, to actual transaction
processing, such as banking.
2. Secondary presentation tier components. This tier consists of
components that run on the server and prepare the presentation of the UI
that is sent to the client side for display to the users. Some of the key
components discussed in this architectural scenario are:
User interface (UI) components. Most solutions need to provide a way for
users to interact with the application. In the retail application example, a
Web site lets customers view products and submit orders, and an application
lets sales representatives enter order data for customers who have
telephoned the company. User interfaces are implemented using forms, web
pages, controls, or any other technology to render and format data for users
and to acquire and validate data coming in from them.
User process components. In many cases, a user interaction with the
system follows a predictable process. For example, in the retail application
you could implement a procedure for viewing product data that has the user
select a category from a list of available product categories and then select
an individual product in the chosen category to view its details. Similarly,
when the user makes a purchase, the interaction follows a predictable
process of gathering data from the user, in which the user first supplies
details of the products to be purchased, then provides payment details, and
then enters delivery details. To help synchronize and orchestrate these user
interactions, it can be useful to drive the process using separate user process
components. This way the process flow and state management logic is not
hard-coded in the user interface elements themselves, and the same basic
user interaction engine can be reused by multiple user interfaces.
Protocol gateway components. When the presentation tier needs to
communicate with the business and data components, it needs to use
functionality provided in some code to manage the semantics of
communicating with those particular components. These components require
interfaces that support the communication (message-based communication,
formats, protocols, security, exceptions, and so on) that the different
consumers require.
Components for security, operational management, communication,
caching, and device management. The application will probably also use
components to perform exception management, to authorize users to
perform certain tasks, to communicate with other components and
applications, to persist data, and to map UI components to a particular
client/device platform.
3. Business tier components. After the required data is collected by a user
process, the data can be used to perform a business process. For example,
after the product, payment, and delivery details are submitted to the retail
application, the process of taking payment and arranging delivery can
begin. Many business processes involve multiple steps that must be
performed in the correct order and orchestrated. For example, the retail
system would need to calculate the total value of the order, validate the
credit card details, process the credit card payment, and arrange delivery of
the goods. This process could take an indeterminate amount of time to
complete, so the required tasks and the data required to perform them would
have to be managed. Regardless of whether a business process consists of a
single step or an orchestrated workflow, the application will probably
require components that implement business rules and perform business
tasks. For example, in the retail application, you would need to implement
the functionality that calculates the total price of the goods ordered and adds
the appropriate delivery charge. Business components implement the
business logic of the application.
4. Data tier components. Most applications and services need to access a
data store at some point during a business process. Although in a
multi-tiered architecture the presentation tier is not expected to interact
directly with the data tiers, it needs data that mirrors the real-life needs of
the users. For example, the retail application needs to retrieve product data
from a database to display product details to the user, and it needs to insert
order details into the database when a user places an order. It makes sense
to abstract the logic necessary to access data into a separate layer of data
access logic components; doing so centralizes data access functionality and
makes it easier to configure and maintain. Most applications require data to
be passed between components. For example, in the retail application a list
of products must be passed from the data access logic components to the
user interface components so that the product list can be displayed to the
users. The data is used to represent real-world business entities, such as
products or orders.
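Such a layer of data access logic components can be as thin as an interface that hides the database from everything above it, as in this sketch (all names are ours; a JDBC implementation would sit behind the interface):

    import java.util.List;

    // Placeholder entity passed between tiers.
    class Order {
        String customerId;
        String[] productIds;
    }

    // Data access logic component for products and orders. The presentation
    // and business tiers depend only on this interface, never on the database.
    interface ProductDao {
        List getProductsInCategory(String categoryId);
        void insertOrder(Order order);
    }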

General Design Recommendations


When designing the multi-tier architecture for the application, you should consider
the following recommendations:
Identify the kinds of components you will need in your application. Some
applications do not require certain components. For example, a smaller
application that has only one user interface with a small number of elements
may not require user process components.
Design all components of a particular type to be as consistent as possible,
using one design model or a small set of design models. This helps to preserve
the predictability and maintainability of the design and implementation for all
teams. In some cases, it may be hard to maintain a logical design due to the
constraints of the technical environment.
Understand how components communicate with each other before choosing
physical distribution boundaries. Keep coupling low and cohesion high by
choosing coarse-grained, rather than chatty, interfaces for remote
communication (see the sketch after this list).
Keep the format used for data exchange consistent within the application or
service. If you must mix data representation formats, keep the number of
formats low.
Keep code that enforces policies (such as security, operational management,
and communication restrictions) abstracted as much as possible from the
application business logic. Try to rely on attributes, platform application
programming interfaces (APIs), or utility components that provide single line
of code access to functionality related to the policies, such as publishing
exceptions, authorizing users, and so on.
Determine at the outset what kind of layering you want to enforce. In a strict
layering system, components in layer A cannot call components in layer C;
they must always call components in layer B. In a more relaxed layering system,
components in a layer can call components in other layers that are not
immediately below them. In all cases, try to avoid upstream calls and
dependencies, in which a lower layer such as C invokes a layer above it such
as B. You may choose to implement relaxed layering to prevent cascading
effects throughout all layers whenever a layer close to the bottom changes, or
to prevent having components that do nothing but forward calls to the layers
underneath.
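The following sketch contrasts a chatty interface with a coarse-grained one for the retail example (all names are ours). Placing an order through the chatty interface costs three remote round trips; the coarse-grained interface carries the same data in one.

    // Placeholder value objects (illustrative only).
    class OrderRequest {
        String customerId;
        String[] productIds;
        int[] quantities;
    }

    class OrderConfirmation {
        String orderNumber;
    }

    // A chatty interface: placing one order costs three remote round trips.
    interface ChattyOrderService {
        void setCustomer(String customerId);
        void addItem(String productId, int quantity);
        void submit();
    }

    // A coarse-grained interface: one round trip carrying all the data,
    // which keeps remote coupling low.
    interface CoarseOrderService {
        OrderConfirmation placeOrder(OrderRequest request);
    }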

Designing Presentation Layers


The presentation layer contains the components that are required to enable user
interaction with the application. The simplest presentation layers contain only user
interface components, such as Swing components for Java clients. For more complex user
interactions, you can design user process components to orchestrate the user interface
elements and control the user interaction. User process components are especially useful
when the user interaction follows a predictable flow of steps, such as when a wizard is
used to accomplish a task.
In the case of the retail application, two user interfaces are required: one for the
e-commerce Web site that the customers use, and another for the Windows Forms-based
application that the sales representatives use. Both types of users perform similar
tasks through these user interfaces. For example, both user interfaces must provide the
ability to view the available products, add products to a shopping basket, and specify
payment details as part of a checkout process. This process can be abstracted in a separate
user process component to make the application easier to maintain.

Designing User Interface Components


You can implement user interfaces in many ways. For example, the retail application
requires a Web-based user interface and a Windows-based user interface. Other kinds of
user interfaces include voice rendering, document-based programs, mobile client
applications, and so on. User interface components manage interaction with the user. They
display data to the user, acquire data from the user, and interpret events that the user raises
to act on business data, change the state of the user interface, or help the user progress in
his or her task.
User interfaces usually consist of a number of elements on a page or form that
display data and accept user input. For example, a Windows-based application could
contain a DataGrid control displaying a list of product categories, and a command button
control used to indicate that the user wants to view the products in the selected category.
When a user interacts with a user interface element, an event is raised that calls code in a
controller function. The controller function, in turn, calls business components, data access
logic components, or user process components to implement the desired action and
retrieve any necessary data to be displayed. The controller function then updates the user
interface elements appropriately. Figure 2.3 shows the design of a user interface.
Figure 2.3 User interface design
<Add more for device management.>

User Interface Component Functionality


User interface components must display data to users, acquire and validate data from
user input, and interpret user gestures that indicate the user wants to perform an operation
on the data. Additionally, the user interface should filter the available actions to let users
perform only the operations that are appropriate at a certain point in time. User interface
components:
Do not initiate, participate in, or vote on transactions.
Have a reference to a current user process component if they need to display its
data or act on its state.
Can encapsulate both view functionality and a controller.
When accepting user input, user interface components:
Acquire data from users and assist in its entry by providing visual cues (such as
tool tips), validation, and the appropriate controls for the task.
Capture events from the user and call controller functions to tell the user
interface components to change the way they display data, either by initiating
an action on the current user process, or by changing the data of the current
user process.
Restrict the types of input a user can enter. For example, a Quantity field may
limit user entries to numerical values.
Perform data entry validation, for example by restricting the range of values
that can be entered in a particular field, or by ensuring that mandatory data is
entered.
Perform simple mapping and transformations of the information provided by
the user controls to values needed by the underlying components to do their
work (for example, a user interface component may display a product name
but pass the product ID to underlying components).
Interpret user gestures (such as a drag-and-drop operation or button clicks) and
call a controller function.
May use a utility component for caching. If your application contains visual
elements representing reference data that changes infrequently and is not used
in transactional contexts, and these elements are shared across large numbers
of users, you should cache them.
May use a utility component for paging. It is common, particularly in Web
applications, to show long lists of data as paged sets. It is common to have a
helper component that keeps track of the current page the user is on and
thus invokes the data access logic component's paged query functions with the
appropriate values for page size and current page. Paging can occur without
interaction of the user process component.
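A paging helper of the sort just described might look like the following sketch (class and method names are ours; the paged query itself is assumed to live in a data access logic component):

    // Tracks the user's current page and computes the values to pass to the
    // data access logic component's paged query functions.
    public class PagingHelper {
        private final int pageSize;
        private int currentPage = 1;

        public PagingHelper(int pageSize) {
            this.pageSize = pageSize;
        }

        public void nextPage() { currentPage++; }

        public void previousPage() {
            if (currentPage > 1) {
                currentPage--;
            }
        }

        // Zero-based index of the first row on the current page.
        public int firstRow() { return (currentPage - 1) * pageSize; }

        public int rowCount() { return pageSize; }

        public int getCurrentPage() { return currentPage; }
    }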
When rendering data, user interface components:
Acquire and render data from business components or data access logic
components in the application.
Perform formatting of values (such as formatting dates appropriately).
Perform any localization work on the rendered data (for example, using
resource strings to display column headers in a grid in the appropriate language
for the user's locale).
Typically render data that pertains to a business entity. These entities are
usually obtained from the user process component, but may also be obtained
from the data components. UI components may render data by data-binding
their display to the correct attributes and collections of the entity components,
if the entity is already available. If you are managing entity data as DataSets,
this is very simple to do. If you have implemented custom entity objects, you
may need to implement some extra code to facilitate the data binding.
Provide the user with status information, for example by indicating when an
application is working in disconnected or connected mode.
May customize the appearance of the application based on user preferences or
the kind of client device used.
May use a utility component to provide undo functionality. Many applications
need to let a user undo certain operations. This is usually performed by
keeping a fixed-length stack of old value/new value data for specific data
items or whole entities (see the sketch after this list). When the operation has
involved a business process, you should not expose the compensation as a
simple undo function, but as an explicit operation.
May use a utility component to provide clipboard functionality. In many
Windows-based applications, it is useful to provide clipboard capabilities for
more than just scalar values; for example, you may want to let your users
copy and paste a full customer object. Such functionality is usually
implemented by placing XML strings in the Clipboard in Windows, or by
having a global object that keeps the data in memory if the clipboard is
application-specific.
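The undo mechanism mentioned in the list above can be sketched as a fixed-length stack of old value/new value pairs (all names are illustrative):

    import java.util.LinkedList;

    // A fixed-length undo stack. When the stack is full, the oldest change
    // is discarded to make room for the newest one.
    public class UndoStack {
        public static class Change {
            public final Object oldValue;
            public final Object newValue;

            public Change(Object oldValue, Object newValue) {
                this.oldValue = oldValue;
                this.newValue = newValue;
            }
        }

        private final int maxDepth;
        private final LinkedList<Change> stack = new LinkedList<Change>();

        public UndoStack(int maxDepth) {
            this.maxDepth = maxDepth;
        }

        public void record(Object oldValue, Object newValue) {
            if (stack.size() == maxDepth) {
                stack.removeLast(); // discard the oldest change
            }
            stack.addFirst(new Change(oldValue, newValue));
        }

        // Returns the most recent change, or null if there is nothing to undo.
        public Change undo() {
            return stack.isEmpty() ? null : stack.removeFirst();
        }
    }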

Fat-Client User Interfaces


Fat-client user interfaces are used when you have to provide disconnected or offline
capabilities, rich user interaction, or even integration with the user interfaces of other
applications. Some technologies, such as Java, make it possible to create application
user interfaces that can run on any operating system, while others, such as Microsoft
Windows user interfaces, can run only in the Windows environment. They can, however,
take advantage of a wide range of state management and persistence options and can
access the local processing power of the native operating system and the computer. There
are three main families of standalone user interfaces: full-blown applications, applications
that include embedded HTML, and application plug-ins that can be used within a host
application's user interface:
Full-blown fat-client user interfaces provide all or most of the data rendering
functionality. This gives you a great deal of control over the user experience
and total control over the look and feel of the application. However, it might
tie you to a client platform, and the application needs to be deployed to the
users (even if the application is deployed by downloading it over an HTTP
connection).
Embedded HTML. You can use additional embedded HTML in your fat-client
applications. Embedded HTML allows for greater run-time flexibility (because
the HTML may be loaded from external resources or even a database in
connected scenarios) and user customization. However, you must carefully
consider how to prevent malicious script from being introduced in the HTML,
and additional coding is required to load the HTML, display it, and hook up
the events from the control with your application functions.
Application plug-ins. Your use cases may suggest that the user interface of
your application could be better implemented as a plug-in for other
applications, such as Microsoft Office, Customer Relationship Management
(CRM) solutions, engineering tools, and so on. In this case, you can leverage
all of the data acquisition and display logic of the host application and provide
only the code to gather the data and work with your business logic.
When creating a Windows Forms-based application, consider the following
recommendations:
Rely on data binding to keep data synchronized across multiple forms that are
open simultaneously. This alleviates the need to write complex data
synchronization code.
Try to avoid hard-coding relationships between forms, and rely on the user
process component to open them and synchronize data and events. You should
be especially careful to avoid hard-coding relationships from child forms to
parent forms. For example, a product details window can be reused from other
places in the application, not just from an order entry form, so you should
avoid implementing functionality in the product details form that links directly

to the order entry form. This makes your user interface elements more
reusable.
Implement error handlers in your forms. Doing so prevents the user from
seeing an unfriendly exception window and having the application fail if you
have not handled exceptions elsewhere. All event handlers and controller
functions should include exception catches. Additionally, you may want to
create a custom exception class for your user interface that includes metadata
to indicate whether the failed operation can be retried or canceled.
Validate user input in the user interface. Validation should occur at the stages
in the user's task or process that allow point-in-time validations (allowing the
user to enter some of the required data, continue with a separate task, and
return to the current task). In some cases, you should proactively enable and
disable controls and visually cue the user when invalid data is entered.
Validating user input in the user interface prevents unnecessary round trips to
server-side components when invalid data has been entered.
If you are creating custom user controls, expose only the public properties and
methods that you actually need. This makes the components more
maintainable.
Implement your controller functions as separate functions in your design that
will be deployed with your client. Do not implement controller functionality
directly in control event handlers. Writing controller logic in event handlers
reduces the maintainability of the application, because you may need to invoke
the same function from other events in the future.
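The last recommendation can be illustrated with a small Swing sketch (ours). The event handler merely delegates to a separately defined controller function, which can then be reused from a menu item, a toolbar button or a test harness:

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JButton;

    public class ProductForm {
        private final JButton viewButton = new JButton("View products");

        public ProductForm() {
            viewButton.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    showProductsForSelectedCategory(); // delegate, nothing more
                }
            });
        }

        // Controller function: holds the logic and is callable from any event.
        void showProductsForSelectedCategory() {
            // ...call user process or data access logic components here...
        }
    }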

Internet Browser User Interfaces


The retail application described in this chapter requires a Web-based user interface
to allow customers to place orders through the Internet. Web-based user interfaces allow
for standards-based user interfaces across many devices and platforms. Most applications
require a dynamic page creation technology, such as ASP.NET, JSP, ColdFusion, etc.
Most of these technologies provide a rich environment where you can create complex
Web-based interfaces with support for important features such as:
A consistent development environment that is also used for creating the other
components of the application.
User interface data binding.
Component-based user interfaces with controls.
Access to the integrated .NET security model.

Rich caching and state management options.


Availability, performance, and scalability of Web processing.
When you need to implement an application for a browser, you will use several of the
rich features offered by your technology of choice, and you should also consider the
following design recommendations:
Implement a custom error page and a global exception handler. This provides
you with a catch-all exception function that prevents the user from seeing
unfriendly pages in case of a problem.
Implement a validation framework that optimizes the task of making sure that
data entered by the user conforms to certain criteria. However, the client
validation performed at the browser relies on JavaScript being enabled on the
client, so you should validate data in your controller functions as well, just in
case a user has a browser with no JavaScript support (or with JavaScript
disabled). If your user process has a Validate control function, call it before
transitioning to other pages to perform point-in-time validation (see the sketch
after this list).
If you are creating Web user controls, expose only the public properties and
methods that you actually need. This improves maintainability.
Use view state to store page-specific state, and keep session and application
state for data with a wider scope. This approach makes the application easier
to maintain and improves scalability. Your controller functions should invoke
actions on a user process component to guide the user through the current
task, rather than redirecting the user to the next page directly.
Implement your controller functions as separate functions that will be deployed
with your Web pages.
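To make the validation and page-transition recommendations concrete, a server-side controller function might look like the following sketch (the user process component and its method names are our assumptions, not a real API):

    // Controller function for the payment details page. It never trusts
    // client-side JavaScript validation alone.
    public class PaymentController {
        private final CheckoutProcess userProcess;

        public PaymentController(CheckoutProcess userProcess) {
            this.userProcess = userProcess;
        }

        public String submitPayment(String cardNumber) {
            if (!userProcess.validatePayment(cardNumber)) {
                return "paymentPage"; // redisplay the page with error messages
            }
            userProcess.advanceToDelivery(); // the process decides what comes next
            return userProcess.currentPage();
        }
    }

    // Minimal stand-in so the sketch is self-contained.
    interface CheckoutProcess {
        boolean validatePayment(String cardNumber);
        void advanceToDelivery();
        String currentPage();
    }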

Mobile Device User Interfaces


Mobile devices such as handheld PCs, Wireless Application Protocol (WAP) phones
and iMode devices are becoming increasingly popular, and building user interfaces for a
mobile form factor presents its own unique challenges. In general, a user interface for a
mobile device needs to be able to display information on a much smaller screen than other
common applications, and it must offer acceptable usability for the devices being targeted.
Because user interaction can be awkward on many mobile devices, particularly mobile
phones, you should design your mobile user interfaces with minimal data input
requirements.
A common strategy is to combine the use of mobile devices with a fat-client or a
browser based application. The user can pre-register data through the fat-client, and then

select it when using the mobile client. For example, an e-commerce application may allow
users to register credit card details through the Web site, so that a pre-registered credit card
can be selected from a list when orders are placed from a mobile device (thus avoiding the
requirement to enter full credit card details using a mobile telephone keypad or personal
digital assistant [PDA] stylus).
Web user interfaces. A wide range of mobile devices support Internet browsing.
Some use micro browsers that support a subset of HTML 3.2, some require data to be sent
in Wireless Markup Language (WML), and some support other standards such as Compact
HTML (cHTML). Most development environments for creating the user interface support
a device manager that sends the appropriate markup standard to each client based on the
device type identified in the request header. Doing so allows you to create a single Web
application that targets a multitude of different mobile clients, including Pocket PC, WAP
phones, iMode phones, and others. As with other kinds of user interface, you should try to
minimize the possibility of users entering invalid data in a mobile Web page. You can use
the built-in functions of some devices for validating user data, or use the properties of input
fields such as Textbox controls to limit the kind of input accepted (for example, by
accepting only numeric input). However, you should always allow for client devices that
may not support client-side validation, and perform additional checks after the data has
been posted to the server.

Document-based User Interfaces


Rather than build a custom fat-client application to facilitate user interaction, you
might find that it makes more sense in some circumstances to allow users to interact with
the system through documents created in common productivity tools such as Microsoft
Word or Microsoft Excel. Documents are a common metaphor for working with data. In
some applications, you may benefit from having users enter or view data in document
form in the tools they commonly use.
Consider the following document-based solutions:
Reporting data. Your application (Windows- or Web-based) may provide the
user with a feature that lets him or her see data in a document of the
appropriate type; for example, showing invoice data as a Word document or a
price list as an Excel spreadsheet.
Gathering data. You could let sales representatives enter purchase information
for telephone customers in Excel spreadsheets to create a customer order
document, and then submit the document to your business process.

There are two common ways to integrate a document experience in your
applications, each broken down into two common scenarios: gathering data from users and
reporting data to users.
1. Working with Documents from the Outside.
You can work with documents from the outside, treating them as an
entity. In this scenario, your code operates on a document that has no
specific awareness of the application. This approach has the advantage that
the document file may be preserved beyond a specific session. This model
is useful when you have freeform areas in the document that your
application doesn't need to deal with but that you may need to preserve. For
example, you can choose this model to allow users to enter information in a
document on a mobile device and take advantage of the Pocket PC
ActiveSync capabilities to synchronize data between the document on the
mobile device and a document kept on the server. In this design model, your
user interface will perform the following functions:
Gathering data. A user can enter information in a document, starting with a
blank document, or most typically, starting with a predefined template that
has specific fields. The user then submits the document to a Windows-based
application or uploads it to a Web-based application. The application scans
the document's data and fields through the document's object model, and
then performs the necessary actions. At this point, you may decide either to
preserve the document after processing or to dispose of it. Typically,
documents are preserved to maintain a tracking history or to save additional
data that the user has entered in freeform areas.
Reporting data. In this case, a Windows- or Web-based user interface
provides a way to generate a document that shows some data, such as a
sales invoice. The reporting code will usually take data from the ongoing
user process, business process, and/or data access logic components and
either call macros on the document application to inject the data and format
it, or save a document with the correct file format and then return it to the
user. You can return the document by saving it to disk and providing a link
to it (you would need to save the document in a central store in load-balanced Web farms)
or by including it as part of the response. When
returning documents in Web-based applications, you have to decide whether
to display the document in the browser for the user to view, or to present the
user with an option to save the document to disk. In Web environments, you
need to follow file naming conventions carefully to prevent concurrent users
from overwriting each other's files.
2. Working with Documents from the Inside
When you want to provide an integrated user experience within the document, you
can embed the application logic in the document itself. In this design model, your user
interface performs the following functions:
Gathering data. Users can enter data in documents with predefined forms,
and then specific macros can be invoked on the template to gather the right
data and invoke your business or user process components. This approach
provides a more integrated user experience, because the user can just click a
custom button or menu option in the host application to perform the work,
rather than having to submit the entire document.
Reporting data. You can implement custom menu entries and buttons in
your documents that gather some data from the server and display it. You
can also choose to use smart tags in your documents to provide rich inline
integration functionality across all Microsoft Office productivity tools. For
example, you can provide a smart tag that lets users display full customer
contact information from the CRM database whenever a sales representative
types in a customer name in the document.
Regardless of whether you work with a document from the inside or from the
outside, you should provide validation logic to ensure that all user input is valid. You can
achieve this in part by limiting the data types of fields, but in most cases you will need to
implement custom functionality to check user input, and display error messages when
invalid data is detected. Microsoft Office-based documents can include custom macros to
provide this functionality.
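A minimal sketch of such custom validation, applied after the document's fields have been scanned; the field names and rules here are hypothetical:

    import java.util.ArrayList;
    import java.util.List;

    public class OrderValidator {
        // Returns error messages to display to the user; an empty list means
        // the scanned input is valid and can be submitted to the business process.
        public static List<String> validate(String customerId, double quantity) {
            List<String> errors = new ArrayList<>();
            if (customerId == null || customerId.trim().isEmpty()) {
                errors.add("Customer ID is required.");
            }
            if (quantity <= 0) {
                errors.add("Quantity must be greater than zero.");
            }
            return errors;
        }
    }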

Accessing Data Access Logic Components from the UI


Some applications' user interfaces need to render data that is readily available as
queries exposed by data access logic components. Regardless of whether your user
interface components invoke data access logic components directly, you should not mix
data access logic with business processing logic.
Accessing data access logic components directly from your user interface may seem to contradict the layering concept. However, it is useful in this case to adopt the perspective of your application as one homogeneous service: you call it, and it is up to it to decide what internal components are best suited to respond to a request.
You should allow user interface components to access data access logic components directly when:
You are willing to tightly couple data access methods and schemas with user
interface semantics. This coupling requires joint maintenance of user interface
changes and schema changes.

Your physical deployment places data access logic components and user
interface components together, allowing you to get data in streaming formats
from data access logic components that can be bound directly to the output of
the user interfaces for performance. If you deploy data access and business
process logic on different servers, you cannot take advantage of this capability.
From an operational perspective, allowing direct access to the data access logic
components to take advantage of streaming capabilities means that you will
need to provide access to the database from where the data access logic
components are deployed, possibly including access through firewall ports.
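As a sketch of what this direct access looks like, a data access logic component might expose a read-only query whose results the user interface binds to directly, with no business logic in between. The class and table names below are illustrative assumptions, not part of the book's sample application:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;

    public class CustomerDAL {
        private final String url;

        public CustomerDAL(String jdbcUrl) { this.url = jdbcUrl; }

        // Read-only query; a user interface component iterates the returned
        // list and binds it straight to a table or list control.
        public List<String> listCustomerNames() throws Exception {
            List<String> names = new ArrayList<>();
            try (Connection con = DriverManager.getConnection(url);
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM customer ORDER BY name");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    names.add(rs.getString("name"));
                }
            }
            return names;
        }
    }

Note that because the query result feeds the user interface directly, a change to the customer schema now implies a coordinated change to the user interface, which is exactly the coupling described in the first condition above.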

Designing User Process Components


A user interaction with your application may follow a predictable process; for
example, the retail application may require users to enter product details, view the total
price, enter payment details, and finally enter delivery address information. This process
involves displaying and accepting input from a number of user interface elements, and the
state for the process (which products have been ordered, the credit card details, and so on)
must be maintained between each transition from one step in the process to another. To
help coordinate the user process and handle the state management required when
displaying multiple user interface pages or forms, you can create user process components.
(Note: Implementing a user interaction with user process components is not a trivial task.)
Before committing to this approach, you should carefully evaluate whether or not
your application requires the level of orchestration and abstraction provided by user
process components. User process components are typically implemented as Java, .NET,
or similar classes that expose methods that can be called by user interfaces. Each method
encapsulates the logic necessary to perform a specific action in the user process. The user
interface creates an instance of the user process component and uses it to transition
through the steps of the process.
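The following is a minimal sketch in Java of such a class for the retail checkout described above; the class, method, and step names are illustrative rather than the book's actual sample code:

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    public class CheckoutProcess implements Serializable {
        // Business-related state carried across the steps of the process.
        private final List<String> productIds = new ArrayList<>();
        private String paymentDetails;
        private String deliveryAddress;
        // Business-agnostic state: where the user is in the flow.
        private String currentStep = "EnterProducts";

        public void addProduct(String productId) {
            productIds.add(productId);
            currentStep = "ViewTotal";
        }

        public void enterPaymentDetails(String details) {
            this.paymentDetails = details;
            currentStep = "EnterDeliveryAddress";
        }

        public void enterDeliveryAddress(String address) {
            this.deliveryAddress = address;
            currentStep = "ConfirmOrder";
        }

        // The user interface asks the process which form or page to show next.
        public String getCurrentStep() { return currentStep; }
    }

The user interface calls the action methods in order and consults getCurrentStep to decide which form or page to display next.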
The names of the particular forms to be displayed for each step in the process can be
hard-coded in the user process component (thus tightly binding it to specific user interface
implementations), or they can be retrieved from a metadata store such as a configuration
file (making it easier to reuse the user process component from multiple user interface
implementations). Designing user process components to be used from multiple user
interfaces will result in a more complex implementation in order to isolate device-specific
issues, but can help you distribute the user interface development work between multiple
teams, each using the same user process component.
User process components coordinate the display of user interface elements. They are
abstracted from the data rendering and acquisition functionality provided in the user interface components. You should design them with globalization in mind, to allow for
localization to be implemented in the user interface. For example, you should endeavor to
use culture-neutral data formats and use Unicode string formats internally to make it easier
to consume the user process components from a localized user interface.
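As a brief, hypothetical illustration of this guidance in Java, state can be held internally in a culture-neutral form and formatted only when a localized user interface displays it:

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;
    import java.util.Locale;

    public class CultureNeutralState {
        // Held internally in a culture-neutral ISO 8601 form, e.g. "2003-06-15".
        private final String orderDate = LocalDate.now().toString();

        // The localized user interface formats the value for display.
        public String displayDate(Locale locale) {
            DateTimeFormatter formatter =
                DateTimeFormatter.ofPattern("d MMMM yyyy", locale);
            return LocalDate.parse(orderDate).format(formatter);
        }
    }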
Separating the user interaction functionality into user interface and user process
components provides the following advantages:
Long-running user interaction state is more easily persisted, allowing a user
interaction to be abandoned and resumed, possibly even using a different user
interface. For example, a customer could add some items to a shopping cart
using the Web-based user interface, and then call a sales representative later to
complete the order.
The same user process can be reused by multiple user interfaces. For example,
in the retail application, the same user process could be used to add a product
to a shopping basket from both the Web-based user interface and the fat-client
application.
An unstructured approach to designing user interface logic may result in undesirable
situations as the size of the application grows or new requirements are introduced. If you
need to add a specific user interface for a given device, you may need to redesign the data
flow and control logic. Partitioning the user interaction flow from the activities of
rendering data and gathering data from the user can increase your application's
maintainability and provide a clean design to which you can easily add seemingly complex
features such as support for offline work. Figure 2.4 shows how the user interface and user
process can be abstracted from one another.

Figure 2.4 User interfaces and user process components


User process components help you to resolve the following user interface design
issues:
Handling concurrent user activities. Some applications may allow users to
perform multiple tasks at the same time by making more than one user
interface element available. For example, a Windows-based application may
display multiple forms, or a Web application may open a second browser
window. User process components simplify the state management of multiple
ongoing processes by encapsulating all the state needed for the process in a
single component. You can map each user interface element to a particular
instance of the user process by incorporating a custom process identifier into
your design.

Using multiple panes for one activity. If multiple windows or panes are used in
a particular user activity, it is important to keep them synchronized. In a Web
application, a user interface usually displays a set of elements on a single page
(which may include frames) for a given user activity. However, in rich client
applications, you may actually have many non-modal windows affecting just
one particular process. For example, you may have a product category selector
window floating in your application that lets you specify a particular category,
whose products will then be displayed in another window. User process
components help you to implement this kind of user interface by centralizing
the state for all windows in a single location. You can further simplify
synchronization across multiple user interface elements by using data-bindable
formats for state data.
Isolating long-running user activities from business-related state. Some user
processes can be paused and resumed later. The intermediate state of the user
process should generally be stored separately from the applications business
data. For example, a user could specify some of the information required to
place an order, and then resume the checkout process at a later time. The
pending order data should be persisted separately from the data relating to
completed orders, allowing you to perform business operations on completed
order data (for example, counting the number of orders placed in the current
month) without having to implement complex filtering rules to avoid operating
on incomplete orders.
User activities, just like business processes, may have a timeout specified, after which the activity must be cancelled and the appropriate compensating actions taken on the business process.
You can design your user process components to be serializable, or to store their
state separately from the application's business data.

Separating a User Process from the User Interface


To separate a user process from the user interface:
1. Identify the business process or processes that the user interface process will
help to accomplish. Identify how the user sees this as a task (you can
usually do this by consulting the sequence diagrams that you created as part
of your requirements analysis).
2. Identify the data needed by the business processes. The user process will
need to be able to submit this data when necessary.
3. Identify additional state you will need to maintain throughout the user
activity to assist rendering and data capture in the user interface.

4. Design the visual flow of the user process and the way that each user
interface element receives or gives control flow. You will also need to
implement code to map a particular user interface session to the related user
process:
A Web UI will have to obtain the current user process by getting a reference from the Session object, or by rehydrating the process from another storage medium, such as a database (see the sketch after this list). You will need this reference in event handlers
for the controls on your Web page.
Your windows or controls need to keep a reference to the current user
process component. You can keep this reference in a member variable. You
should not keep it in a global variable, though, because if you do,
composing user interfaces will become very complicated as your
application's user interface grows.
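For the Web case, here is a minimal sketch using the Java Servlet API and the hypothetical CheckoutProcess class sketched earlier; the session attribute name is an arbitrary choice:

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    public class CheckoutSessionHelper {
        // Returns the user process bound to this Web session, creating it on
        // first use; page event handlers call this to obtain their reference.
        public static CheckoutProcess currentProcess(HttpServletRequest request) {
            HttpSession session = request.getSession();
            CheckoutProcess process =
                (CheckoutProcess) session.getAttribute("checkoutProcess");
            if (process == null) {
                process = new CheckoutProcess();
                session.setAttribute("checkoutProcess", process);
            }
            return process;
        }
    }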

User Process Component Functionality


User process components:
Provide a simple way to combine user interface elements into user interaction
flows without requiring you to redevelop data flow and control logic.
Separate the conceptual user interaction flow from the implementation or
device where it occurs.
Encapsulate how exceptions may affect the user process flow.
Keep track of the current state of the user interaction.
Should not start or participate in transactions. They keep internal data related
to application business logic and their internal state, persisting the data as
required.
Maintain internal business-related state, usually holding on to one or more
business entities that are affected by the user interaction. You can keep
multiple entities in private variables or in an internal array or appropriate
collection type.
May provide a "save and continue later" feature by which a particular user
interaction may be restarted in another session. You can implement this
functionality by saving the internal state of the user process component in
some persistent form and providing the user with a way to continue a particular
activity later. You can create a custom task manager utility component to
control the current activation state of the process.
The user process state can be stored in one of a number of places:

If the user process can be continued from other devices or computers, you will
need to store it centrally in a location such as a database.
If you are running in a disconnected environment, the user process state will
need to be stored locally on the user device.
If your user interface process is running in a Web farm, you will need to store
any required state on a central server location, so that it can be continued from
any server in the farm.
May initialize internal state by calling a business process component or data
access logic components.
Typically will not be implemented as Enterprise Services components. The
only reason to do so would be to use the Enterprise Services role-based
authorization capabilities.
Can be started by a custom utility component that manages the menus in your
application.

User Process Component Interface Design


The interface of your user process components can expose the following kinds of
functionality, as shown in Figure 2.5.
User process actions (1). These actions typically trigger a change in the state of the user process. Actions are implemented in
user process component methods, as demonstrated by the ShowOrder,
EnterPaymentDetails, PlaceOrder, and Finish methods in the code sample
discussed earlier. You should try to encapsulate calls to business components
in these action methods (6).
Figure 2.5 User process component design
State access methods (2). You can access the business-specific and business-agnostic state of the user process by using fine-grained get and set properties
that expose one value, or by exposing the set of business entities that the user
process deals with (5). For example, in the code sample discussed earlier, the
user process state can be retrieved through public DataSet properties.
State change events (3). These events are fired whenever the business-related
state or business-agnostic state of the user process changes. Sometimes you
will need to implement these change notifications yourself. In other cases, you
may be storing your data through a mechanism that already does this
intrinsically (for example, a DataSet fires events whenever its data changes).

Control functions that let you start, pause, restart, and cancel a particular user process (4). These functions may be kept separate from the user process actions or intermixed with them. For example, the code sample discussed earlier
contains SaveToDataBase and ResumeCheckout methods. Control methods
could load required reference data for the UI (such as the information needed
to fill a combo box) from data access logic components (7) or delegate this
work to the user interface component (forms, controls, ASP.NET pages) that
needs the data.
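A compact sketch of these four kinds of interface members, using illustrative names; the numbers in the comments correspond to the numbered items above:

    import java.util.ArrayList;
    import java.util.List;

    public class OrderProcess {
        public interface StateChangeListener { void stateChanged(); }

        private final List<StateChangeListener> listeners = new ArrayList<>();
        private String orderId;

        // (1) A user process action that moves the process forward; calls to
        // business components (6) would be encapsulated here.
        public void placeOrder() { fireChange(); }

        // (2) A state access method.
        public String getOrderId() { return orderId; }

        // (3) State change events.
        public void addStateChangeListener(StateChangeListener l) { listeners.add(l); }
        private void fireChange() {
            for (StateChangeListener l : listeners) l.stateChanged();
        }

        // (4) Control functions that start, pause, restart, or cancel the process.
        public void start() { fireChange(); }
        public void cancel() { fireChange(); }
    }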

General Recommendations for User Process Components


When designing user process components, consider the following recommendations:
Decide whether you need to manage user processes as components that are
separate from your user interface implementation. Separate user processes are
most needed in applications with a high number of user interface dialog boxes,
or in applications in which the user processes may be subject to customization
and may benefit from a plug-in approach.
Choose where to store the state of the user process:
If the process is running in a connected fashion, store interim state for long-running processes in a central database; in disconnected scenarios, store it in
local XML files, isolated storage, or local databases.
If the process is not long-running and does not need to be recovered in case
of a problem, you should persist the state in memory. For user interfaces
built for rich clients, you may want to keep the state in memory. For Web
applications, you may choose to store the user process state in the Session
object.
Design your user process components so that they are serializable. This will help you implement any persistence scheme (see the sketch after this list).
Include exception handling in user process components, and propagate
exceptions to the user interface. Exceptions that are thrown by the user process
components should be caught by user interface components and published.
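A minimal sketch of the serialization recommendation, using standard Java object serialization to a file; a central database would work the same way through a different storage medium:

    import java.io.*;

    public class ProcessStore {
        // Persists any serializable user process component.
        public static void save(Serializable process, File file) throws IOException {
            try (ObjectOutputStream out =
                     new ObjectOutputStream(new FileOutputStream(file))) {
                out.writeObject(process);
            }
        }

        // Rehydrates the component so the user can continue the activity later.
        public static Object load(File file) throws IOException, ClassNotFoundException {
            try (ObjectInputStream in =
                     new ObjectInputStream(new FileInputStream(file))) {
                return in.readObject();
            }
        }
    }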

Network Connectivity and Offline Applications


In many cases, your application will require support for offline operations when
network connectivity is unavailable. For example, many mobile applications, including
rich clients for Pocket PC or Tablet PC devices, must be able to function when the user is
disconnected from the corporate network. Offline applications must rely on local data and

user process state to perform their work. When designing offline applications, follow the
general guidelines described below. The online and offline status should be displayed to
the user. This is usually done in status bars or title bars or with visual cues around user
interface elements that require a connection to the server. Most of the application's user interface should be reusable, with little or no modification needed to
support offline scenarios. While offline, your application will not have:
Access to online data returned by data access logic components.
The ability to invoke business processes synchronously. As a result, the
application will not know whether the call succeeded or be able to use any
returned data.
If your application does not implement a fully message-based interface to your
servers but relies on synchronously acquiring data and knowing the results of business
processes (as most of today's applications do), you should do the following to provide the
illusion of connectivity:
Implement a local cache for read-only reference data that relates to the users
activities. You can then implement an offline data access logic component that
implements exactly the same queries as your server-side data access logic
components but accesses the local storage.
Implement an offline business component that has the same interface as your business components, but takes the submitted data and places it in a store-and-forward, reliable messaging system such as Message Queuing. This offline component may then return nothing or a preset value to its caller (see the sketch at the end of this section).
Implement UI functionality that provides a way to inspect the business action
outbox and possibly delete messages in it. If Message Queuing is used to
queue offline messages, you will need to set the correct permissions on the
queue to do this from your application.
Design your application's transactions to accommodate message-based UI
interactions. You will have to take extra care to manage optimistic locking and
the results of transactions based on stale data. A common technique for
performing updates is to submit both the old and new data, and to let the
related business process or data access logic component eventually resolve any
conflicts. For business processes, the submission may include critical reference
data that the business logic uses to decide whether or not to let the data
through. For example, you can include product prices alongside product IDs
and quantities when submitting an order.
Let the user persist the state of the applications user processes to disk and resume
them later.
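As a sketch of the offline business component described above, the class below mirrors the online component's interface but only queues the submission. JMS stands in here for a store-and-forward messaging system such as Message Queuing, and the class and method names are illustrative assumptions:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class OfflineOrderComponent {
        private final ConnectionFactory factory;
        private final Queue outbox;

        public OfflineOrderComponent(ConnectionFactory factory, Queue outbox) {
            this.factory = factory;
            this.outbox = outbox;
        }

        // Same signature as the online business component, but the call
        // returns as soon as the message is placed in the outbox queue.
        public void submitOrder(String orderXml) throws Exception {
            Connection con = factory.createConnection();
            try {
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(outbox);
                TextMessage message = session.createTextMessage(orderXml);
                producer.send(message);
            } finally {
                con.close();
            }
        }
    }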

Software Product Lines


One of my primary objects is to form the tools so the tools themselves
shall fashion the work and give to every part its just proportion.
- Eli Whitney

In the previous chapter, we talked about software architecture. Software architecture is the high-level design of a software product. Most organizations don't have one or a few software systems; they have many. Some of these software systems have
many characteristics in common. They might all share a single execution platform or a set
of reusable components. The designs may be similar and the processes that were used to
create them might be similar. In almost every organization, there is a set of common traits
that exist between software systems that the organization uses.
For example, Canaxia, the auto manufacturer, has six intranet web sites: three use .NET, one uses J2EE, and the other two use PHP. They all access a database. Some access an Oracle database, some access a DB2 database, and the .NET sites access a SQL Server database. Canaxia also has three Internet web sites: one for corporate, one for sales, and one for dealerships. Each one has a different user interface. Two were developed in J2EE and the other was developed in .NET. Each project team used a different process to develop the sites. One team used the Rational Unified Process, one used eXtreme Programming, and the last used its own ad hoc process. In addition, each site has its own customer database. Users must remember different user IDs and passwords to access each
site. Other components were developed to support the sites such as a logging component
and a component to store reference and configuration data. Of course, each project team
created different versions of these components as well.
The situation at Canaxia is probably familiar to most people who work in
information technology. Given this situation, how can it be corrected? How does an
organization take advantage of the opportunities to reuse infrastructure, process, architecture and software assets across software systems? Is this how manufacturers create
products? When you order a car from Canaxia, Canaxia does not custom build a new car
just for you from scratch. Canaxia has several product lines. It has a product line for
compact cars, mid-size cars, and sport utility vehicles. At Canaxia, there are four common aspects to building cars within a product line:
1. Shared process
2. Shared components
3. Shared infrastructure
4. Shared knowledge
Each product line has a different process that each car goes through during
manufacture. In the compact car product line, the seats are placed in before the steering wheel, whereas in the sport utility vehicle product line the steering wheel goes in first. Not only is the process the same within a product line, but many of the components of a car are also shared within a product line. For instance, the mid-size car line shares some
common components. The steering wheel, tires, visors and other components are all the
same in every Canaxia mid-size car. In addition, within a product line, there is a single
infrastructure for producing the cars. The mid-size car line has a single assembly line
consisting of robots and other machinery that builds a mid-size car with minor variations
such as paint color. Finally, all of the workers on the assembly line understand the
process, understand the components and infrastructure and have the skills necessary to
build any type of mid-size car within Canaxia. By having shared product line assets,
Canaxia has the flexibility to build similar car models with the same people because they
are familiar with the line, not with a particular model.
The idea of product lines is not new. Eli Whitney created the idea of
interchangeable parts for rifles in 1798 to fill an order for ten thousand muskets for the U.S. Government. He did this because in Connecticut at the time, there were only a handful of skilled machinists available. Over a century later, Henry Ford perfected the
assembly line to create Model T cars. His process was instrumental in achieving orders of
magnitude increases in productivity in the creation of automobiles. He was able to provide
a complex product at a relatively inexpensive price to the American public through
increases in productivity.
The Manufacturing Metaphor
The manufacturing metaphor for software development has been around for a while. In
some respects, the metaphor works. Shared components, process and infrastructure will
increase productivity and quality. However, software is a much more complex and thus
creative endeavor. A car designer has to work in the physical world. He or she is bound by
physics. It is easy to measure the rollover potential of a vehicle based on its center of gravity
and wheelbase. It is also obvious that a car that has four wheels is better than a car that has two wheels. These properties are easily tested and quantitative evaluations can be made
between different designs. Simple systems can be measured and compared using quantitative
methods in an objective fashion.
In software, there is no easy quantitative measurement that states that a component
that has four interfaces is better than another component that has two interfaces. In software,
we say that source code modules should not be more than 1000 lines of code or that the
design should exhibit loose coupling or high cohesion. Does this mean that a source file
that is over 1000 lines is a bad one? All one can say is that in general it's not a good idea.
While we know that a car with one tire will fall over, we cannot say that a source code module
with over 1000 lines is always a bad one. Also, how does one measure coupling and
cohesion? The software process and design are less quantitative (measurable) and more
qualitative. Qualitative analysis requires analysis and judgment based on contrasting and
interpreting meaningful observable patterns or themes.
When it comes to the process of manufacturing a product versus creating a software
product there are several key differences:
Software development is much less predictable than manufacturing a product.
Software is not mass-produced
Not all software faults result in failures
Software doesn't wear out
Software is not bound by physics
The creation of software is an intellectual human endeavor. Good software relies on
the personalities and the intellect of the members of the teams that create the software. A
process that delivers great software with one team of developers when applied to a different
team of developers might fail to deliver anything at all. Software development is more akin to
the disciplines of psychology and sociology and less similar to disciplines such as
mathematics, physics and chemistry.
When it comes to the creation of product lines for manufactured products vs.
software products, the similarities between hardware-based products are greater than the
similarities between software-based products. Software is larger-grained, and in order to achieve reuse, the principles of modularity and small-grained components must be adhered
to. In a vehicle, it is natural to create a standard interface for a steering wheel.

[Publisher, please place hardware similarities here]

Figure 6-1

Hardware Product Commonality.

[Publisher, please place software similarities here]

Figure 6-2

Software Product Commonality.

In software it is much more difficult to create standard interfaces between components because it is much less intuitively obvious how to modularize software. As a
result, there is much more commonality between different models in a hardware-based
product line than there is in a software-based product line.

WHAT IS A SOFTWARE PRODUCT LINE?

[Publisher, please insert Product Line Process here, requires permission from
Clements, Northrop, Software Product Lines]

Figure 6-3

Product Line Development.

A software product line is a set of software-intensive systems sharing a common, managed set of features that satisfy the specific needs of a particular market segment or
mission and that are developed from a common set of core assets in a prescribed way.
- Clements and Northrop, Software Product Lines, 2002.
The main purpose of a software product line is to achieve reuse across software
systems. Any organization that has many software systems will notice that many of those
software systems have characteristics in common. When a set of systems have common
characteristics, they can be thought of as being part of a product family or product line.

A product line consists of the core assets that a shared family of systems is built
upon. It includes shared components, infrastructure, tools, process, documentation and
above all else, a shared architecture.
A product line is a decomposition of the entire system portfolio of an organization.
This decomposition occurs according to the architectural similarities of the systems in the organization. For example, Canaxia hired Scott Roman as its new CIO of Internet systems. He decided to create an Internet product line that included all Internet-based systems
because they all share a common architecture. From the previous chapter, we learned that
software architecture is built based on the quality attribute requirements of the project. In
product line development, the software architecture that belongs to the product line is
based on the common quality attribute requirements for all of the systems in the product
line. Scott realized that all of the Internet systems require 24x7 availability, have the same
performance and scalability requirements and generally need the same software
architecture. These requirements drove him to create a set of common core assets to meet
the quality attribute requirements for all of the Internet systems at Canaxia.
The diagram in Figure 6-3 shows that product line development is cooperation among three different constituencies: core asset development, product development, and
management.
Core asset development is the creation and maintenance of the artifacts or core assets
in the product line. These core assets are used to create systems that match the quality
criteria of the product line.
The second constituency is product development. Product development involves the
creation of products or systems from the core assets of the product line. If a system
requires an asset that is not included in the core assets, then the core asset must be created.
Also, if the core asset that exists in the product line does not match the quality
requirements of the product under development then the core asset must be enhanced.
There are several models for organizing core asset development that we will talk about
later in the chapter.
Management must be involved to make sure that the two constituencies are
interacting correctly. Instituting a product line practice at an organization requires a
strong commitment from management. It is also important to identify which assets are
part of the product line and which assets are part of the development of the individual products of the system. The role of product line manager is one of a product line champion. The champion is a strong, visionary leader who can keep the organization
working towards the creation of core assets while limiting the impact on project
development.

PRODUCT LINE BENEFITS

Cost
Just like Eli Whitney and Henry Ford, an organization that takes a product line approach to
their IT systems can realize cost savings. It is always cheaper to reuse a shared component
than to build a new one.

Time-to-market
For many organizations, cost is not the primary driver for product line adoption.
Reusable components speed the time it takes to get a product out of the door. Product
lines allow the organization to take advantage of the shared features of the product line
and add features particular to the product they are building.

Staff
There is much more flexibility when moving people around the organization since
they become familiar with the set of shared tools, components and processes of each
product line. A critical product development effort can benefit by using people from other
product development teams within the overall product line. The learning curve is
shortened because the shared assets of the product line are familiar to both projects.

Increased Predictability
In a shared product line, there are several products that are developed using a set of
common core assets: a shared architecture, a production plan, and people with experience creating products within the product line. These core assets and
architecture are proven on several products. Project managers and other stakeholders will
have more confidence in the success of new projects within the product line because of the
proven set of core assets and people.

Higher Quality
Because shared assets serve the needs of more than one project, they necessarily
need to be of higher quality. Also, multiple projects exercise the shared assets in more
ways than a single project would. This leads to shared assets of much higher quality. In addition, the shared architecture of the product line is also proven through
implementation by several projects.

PRODUCT LINE ASPECTS


There are four aspects to a product line:

Related business benefit


The product line should support an established business benefit. At Canaxia, the
Internet sites provide timely information and electronic business functionality to a variety
of users. There are tangible business benefits to being able to transact business with
customers, dealers and sales agents across the Internet.

Core Assets
Core assets are the basis for the creation of products in the software product line.
They include the architecture that the products in the product line will share as well as the
components that are developed for systematic reuse across the product line or across
multiple product lines. Core assets are the main aspect of a software product line. They
also include the infrastructure, tools, hardware and other assets that enable the
development, execution and maintenance of the systems in the product line.
The core assets at Canaxia are grouped according to technology. At Canaxia, there
is a central IT department and it makes most sense to group the products together
according to technology. At other organizations, it might make more sense to group the
core assets and product lines differently. For instance, a company that sells software might
create product lines according to their business segment.

Supporting Organization
The product line has an organization to design, construct, support and train
developers and other users on it. The organization that supports the product line is called
the domain engineering unit.
This group is responsible for the product line assets. Groups that use the product line
assets rely on this group to handle upgrades and other reuse issues that come up when
shared assets are employed on a project. Some organizations do not use a central group to
maintain core assets. Project teams develop the core assets and they have to coordinate the
development of common core assets. For example, Canaxia initially did not have a central domain-engineering unit. The logging component was first created for use in the corporate Internet site. When the dealer and sales Internet site projects were started, they called the corporate site developers and asked them if they had any components that they could use for their sites. They installed and used the logging component from the corporate site and got periodic updates of the component from the corporate site team.

Reusable Process
The core assets of the product line are components, systems and architecture. There
are optimal processes that can be created to use these artifacts in a new project. For
instance, at Canaxia, the teams that use the customer database know that it is best to define
the new attributes that have to be captured for a customer early in the elaboration phase of
the project. The domain team has to be notified early in the process of the additional
attributes that a new project team needs for a customer. That way, they can be factored
into the schedule for the new system. Some processes that can be reused across projects
include:
1. Requirements gathering
2. Design and development
3. Software configuration management processes
4. Software maintenance
5. Release management
6. Quality assurance

CORE ASSETS
In a software product line, anything in the product line that can be reused is called a core asset. Within a product line, there are core assets and a
production plan that outlines how to build a system using the core assets. The following
diagram outlines the kinds of assets that can be included in a product line:

[Publisher, please place product line decomposition here]

Figure 6-4

Product Line Decomposition

There are several categories of assets that exist in the product line.

Shared Architecture
The shared architecture (Bosch) for a family of systems is the main core asset for a
product line. For instance, at Canaxia, they have three Internet systems. Each system uses
a distributed Model-View-Controller architectural style. Each one is also a multi-layer
system, accesses a database and sends markup to a browser client. All three systems also share the same quality attributes. For example, each one requires a similar level of
performance, scalability and security. By considering all three systems as part of a
product family, an architect can construct a common conceptual architecture for all three
systems. When new systems are developed that match the characteristics of this product
line, then the new system can reuse the architecture of the product line.
The problem at Canaxia is that although each system has the same quality attributes,
each one was developed separately. It is fairly obvious to Canaxia management now that
these projects should have all been part of the same product line, been built on the same
architecture, used the same tools and used the same process during development. Now,
management has a problem. Do they continue with three separate systems or merge them
together into a single product line? Canaxia spent millions on each project separately. In
order to merge them into a single product line, it would take years and additional millions.
For now, Canaxia is considering an evolutionary approach to merging the systems into a
single product line.
This situation is familiar to anyone who works in IT. It is difficult to make the
creation of a product line a big bang process. Product lines usually evolve over time,
starting with one large project that becomes the basis for the product line. At Canaxia,
they have to figure out ways to synchronize their architectures over time. Perhaps this
means that as enhancements are requested of each system, they can migrate towards shared
components. Web services and middleware play a role in sharing components across
different technology types. When a major revision of each site is needed, that might be
the time to move to the shared product line architecture. This is a difficult problem and if
an organization has the courage to move to a shared architecture quickly, they will save
time and money in the long run. Sometimes big change is necessary and an organization
must recognize when it is best to do a wholesale move to a new architecture.

Don't be afraid to take a big step if one is indicated. You can't cross a chasm in two small jumps. - David Lloyd George

The architecture that is developed for the product line is built from the quality
attribute requirements that are known to the product line at the time it is created. New
systems in the product line will require changes to the architecture because each new
system has slightly (hopefully) different requirements. For example, if a new system must
allow scalability to millions of users rather than the current thousands that the product line
supports, then the architecture must be changed to support that requirement. However, if
the requirements are so different that the architecture must be overhauled then the new
system is probably not a candidate for inclusion in the product line.
This is one of the negative aspects of product lines and reuse in general. When a
core asset is updated to meet the requirement of a new system, any existing systems that
use that asset might be impacted. For example, if a logging service must be updated to
include asynchronous messaging, then any system that uses the service might have to be

updated. Even if the interface did not change, any system using the logging service would
have to be thoroughly tested with the new version.
How should an organization document the architecture? The architecture artifacts
that are relevant at the product line level make up the reference architecture, which consists of reference models and reference implementations. The reference model is a conceptual model for the ideal system in the product line. It is a diagram that shows the basic layers of the system, the kinds of data transferred between the layers, and the basic responsibilities of the major layers and components in the system. Here is an example of a
basic reference model for an application:

[Publisher, please place Reference Model here]

Figure 6-5

Sample Reference Model

Another classic reference model for depicting the seven layers of a protocol stack for
networking is the ISO OSI 7-layer model:

[Publisher, please place ISO Model here]

Figure 6-6

ISO OSI 7 Layer Reference Model

The model can be as detailed as necessary and show different views of the
architecture such as execution, module or code views.
In addition to the reference model, the reference architecture should also contain a reference implementation. The ideal reference implementation is a sample application that shows how a working application is built using the architecture and the tools provided by the product line. A great example of a reference implementation is the Pet Store sample application that comes with the J2EE reference implementation. The Pet Store application shows how an application should be constructed using the J2EE platform. In addition to working code, the reference architecture also includes guides, and people who are familiar with the architecture are available to help developers and others use the materials provided by the product line.

Shared Components
In the software industry, the term component is used in many different ways.
Components come in many forms. Most people think of components as JavaBeans or
ActiveX controls. Some people think of a component as any software artifact that can be
reused. The term component as it is used in the context of this book is a self-contained
software module that has a well-defined interface or set of interfaces (Herzum 2000). A
component has run-time and design-time interfaces. It is easily composed into an
application to provide useful technical or business functions. A component is delivered
and installed and used by connecting and executing an interface at run-time. It is also
considered a software black box. No other process can view or change the internal state
of the component except through the component's published interface. A component can
be a shared run-time software entity such as a logging component, a billing service or an
ActiveX control.
There are two types of components: the business component and the technical component. The component types are differentiated because even the components
themselves are part of their own product lines. They each serve a different purpose. A
business component serves a business function such as a customer service (as we learned
in the Service Oriented Architecture chapter, a service can be a service-enabled
component or a component based service (CBS)). The customer service has a set of
interfaces that manage customers for the company. A technical component has a technical
purpose such as logging or configuration. Both the business components and the technical
components have a distinct set of quality attributes. Each component type is supported by a different product line, and each product line includes the core assets and production plans
necessary to create and maintain components of each type.
At Canaxia, every Internet system has its own logging component, customer
database and configuration service. Rather than creating three separate versions of each of
these components, Canaxia could create a single version of all three of these components
and share them between systems. There are two ways to share components between different systems in a product line. Different instances of the components could be deployed into each application and used separately, or a single component or service could be used for the entire product line. The method of distributing the components will vary based on the needs of the project. Some product lines might even allow some systems within the product line to use a central component while allowing other systems to deploy and use an instance of the component within the system that needs it.
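A brief sketch of how a single contract can front either sharing model; the interface and class names are illustrative:

    // The shared contract every system in the product line codes against.
    public interface LoggingComponent {
        void log(String system, String severity, String message);
    }

    // Instance-per-application deployment: each system runs its own copy.
    class LocalLogger implements LoggingComponent {
        public void log(String system, String severity, String message) {
            System.out.println(system + " [" + severity + "] " + message);
        }
    }

    // Product-line-wide deployment: every system calls one central service.
    class CentralLoggerClient implements LoggingComponent {
        public void log(String system, String severity, String message) {
            // forward over middleware or a Web service to the shared logger
        }
    }

Because both deployments satisfy the same interface, individual systems can switch between a local instance and the central service without changing their own code.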

Systems
The systems (Bosch) that are constructed within the product line are also core assets.
These systems are constructed using the architecture of the product line and the
components that are part of the product line. There are two types of systems: internally
developed and commercial off-the-shelf systems (COTS). At Canaxia, the three Internet
systems for corporate, sales, and dealerships are considered part of the same product line
because they share similar characteristics: they use the same software architecture and the same components in their
implementation.
COTS systems can share some aspects of architecture, components and other core
assets with internally developed systems within the product line. Some examples of core
assets that a COTS system can take advantage of include a logging service, middleware and
infrastructure such as hardware, networks and application servers.
Each system becomes part of the system portfolio for the product line. The
product portfolio should be managed just like an investment portfolio. Determinations of
"buy," "sell," and "hold" are relevant to products within the product line. Each system
should exhibit some return on the investment made in its development and ongoing
maintenance. Once the benefits are outweighed by the costs of maintaining the system,
then a "sell" decision should be made. By creating a product line mindset, the return on investment of systems in the portfolio should improve, because the cost of shared assets is always lower than the cost of multiple assets.

Shared Technology and Tools


A product line can share technology and tools for building and executing systems.
The technology and tools that are selected have a dramatic effect on making sure that the
systems built within the product line conform to the architecture of the product line.
Source Code Reuse
Software reuse is usually identified with the reuse of source code across software
systems. Source code is extremely difficult to reuse on a large scale. The software development community has tried and mostly failed to reuse source code. Bob Frankston, a
programmer from VisiCalc, the first spreadsheet program, said (Frankston 2002):
...that you should have huge catalogs of code lying around. That was in the '70s; reusable code was the big thing. But reusing pieces of code is like picking off sentences from other people's stories and trying to make a magazine article.
While you can write a magazine article using sentences from other works, it is a little
like creating a ransom note from newspaper clippings. You might make your point, but not
very clearly. It is usually easier to just write the whole thing yourself. As a result, source code
reuse is extremely difficult and has mostly failed to materialize, even with the widespread adoption of object-oriented programming in the 1990s.
Technology includes hardware, system software, application servers, database
servers and other commercial products that are required to run the systems in the product
line. The most obvious technology that can be reused within a product line is the
execution platform for all of the systems and components that run in the product line.
Some obvious (and not so obvious) technologies that can be leveraged across projects
include (McGovern, et al. 2003):
1. Application servers
2. Database servers
3. Security servers
4. Networks, machines
5. Modeling tools
6. Traceability tools
7. Compilers and code generators
8. Editors
9. Prototyping tools
10. Integrated development environments
11. Version control software
12. Test generators
13. Test execution software
14. Performance analysis software
15. Code inspection and static analysis software
16. Configuration management software
17. Defect tracking software
18. Release management and versioning software
19. Project planning and measurement software
20. Frameworks and libraries
At Canaxia, one of the first steps toward creating a shared product line for Internet
systems was to move all of the applications onto a shared infrastructure. Each system
separately used approximately 20% of its capacity. By pooling their servers, they shared
their excess capacity for handling peak loads. Moving to a shared infrastructure also required each project team to modify parts of its system to bring it into
conformance with the product line. It also highlighted the areas in each application where
commonalities existed.
In addition to hardware, frameworks, libraries and an extended IDE provide the
developers with a single environment for the product line. The advantage to having a
single development environment is that it gives the organization the flexibility to move
developers from project to project without a steep learning curve before they become
productive.

eXtended IDE (XIDE)


Peter Herzum and Oliver Sims in their book Business Component Factory
(Herzum 2000) explain the concept of an extended IDE. XIDE stands for extended integrated development environment. It is the set of tools, development
environments, compilers, editors, testing tools, modeling tools, version control tools and
other software that a developer needs to perform his or her job. The concept of the XIDE
is to include all of these tools in a single pluggable IDE, or at least to have a standard
developer desktop environment that includes all of the standard tools for the product line.
Having a standard XIDE is crucial to providing developer portability from system to
system within the product line. In addition to managing core assets that make their way
into the systems within the product line, the core asset developers also create tools for
other developers that can be plugged into their development environment such as code
generators, configuration tools and other tools that are specific to the environment of the
product line.

SUPPORTING ORGANIZATION
An organization can adopt several organizational structures to support the product
line. We already talked about the possibility of creating a domain engineering unit that
creates and supports the core assets of the product line separate from the products that use
them. However, there are other forms of organizational structure that may be adopted
(Bosch):

Development Department
This structure is one where all software development is performed by a single
department. This department creates products and core assets together. Staff can be
assigned to work on any number of products or develop and evolve core assets. This type
of structure is common and most useful in small organizations (fewer than 30 staff members)
that cannot justify the expense of specialized areas for core asset development.
These organizations benefit from good communication between staff members
because all of the members work for a single organization. There is also less
administrative overhead. This type of organization can be considered to be agile.
The primary disadvantage of this structure is scalability of the organization. When
the number of staff members grows past 30, specialized roles and groups should be
created that focus on the different functional and technical areas of the project, which
leads to the next type of organizational structure.

Business Units (Bosch, 2000)


This structure specializes development around a type of product. Each business unit
is responsible for one or several products in the product line. The business unit is
responsible for the development of the product as well as the development, evolution and
maintenance of core assets that they need to create the product. Multiple business units
share these core assets and all business units are responsible for the development,
maintenance and evolution of the core assets.

[Publisher, please place Business Unit Model here]

Figure 6-7

Business Unit Model

Core assets can be managed in three ways. In an unconstrained model, all business
units are responsible for all of the core assets. With this type of structure, it is difficult to
determine which group is responsible for what parts of the shared assets. Each business
unit is responsible for funding the features required by its individual product. Features of
the core assets that do not directly apply to the product under development usually fall to
the business unit that needs the feature. A large amount of coordination is necessary for
this model to work. Typically, components will suffer from an inconsistent evolution by
multiple project teams with different feature goals. Components will become corrupted
and eventually lose the initial benefit of becoming a shared asset.
A second type of structure is an "asset responsibles" structure, in which a single developer (the "responsible") is tasked with the development, evolution and maintenance of a single shared asset. In theory, this model should solve the problem, but the developer is part of a business area and subject to the management priorities of the project rather than those of the shared assets.
A third type of organizational model is called "mixed responsibility." In this model,
a whole unit is responsible for different core assets. The other business units do not have
authority to implement changes in a shared component that they are not responsible for.
This model has advantages over the other two. However, business priorities will still
trump core asset priorities. A high-priority request from one group for changes to a core asset will be a lower priority for the business unit that is responsible for the core asset and is attempting to deliver a product as well. The next model addresses these concerns for large
organizations.

Domain Engineering Unit


This structure contains several business units and a central domain-engineering unit.
The domain-engineering unit is responsible for the development, maintenance and
evolution of the core assets. This model is the recommended structure for product line
adoption in product line literature for large organizations. This structure benefits from a
separate group that is focused on creating the core assets, process and architecture for the
product line. They can focus on high quality components that support multiple projects.
Many organizations are skeptical of this structure. It is believed that separate groups
that perform core asset development will not be responsive to the business needs of the
organization and will focus only on interesting technical problems. However, when the
number of software developers reaches a level of about 100, a central domain-engineering
unit is the best way to scale the organization and take advantage of the benefits of shared
core assets. It helps to improve communication and core assets will be built to include
requirements from multiple projects instead of focusing on the requirements of a single
project. Although, it can be difficult to balance the requirements of multiple projects and
some projects might feel that the domain-engineering unit is less responsive than if the
assets were built as part of the project.

[Publisher, please place Domain Engineering Unit Model here]

Figure 6-8

Domain Engineering Unit Model

Hierarchical Domain Engineering Unit Model


In extremely large organizations, it is likely that multiple domain-engineering units
are necessary to provide specialized core assets to projects. To accomplish this, each
product line can have a separate domain-engineering unit that is responsible for
specialized core assets for the product line, in addition to a single central domain-engineering unit that is responsible for core assets that span product lines.

[Publisher, please place Hierarchical Domain Engineering here]

Figure 6-9

Hierarchical Domain Engineering

SHARED PROCESS (PRODUCTION PLAN)


Since the characteristics of all of the projects within the product line are similar, they
will benefit by having a similar process for their development, maintenance and evolution.
As described under Reusable Process earlier in this chapter, there are optimal processes that can be created to employ the core assets in a new project, covering everything from requirements gathering through quality assurance.
This shared process is called a Production Plan. It contains all of the processes
necessary for building a project using the core assets, and it outlines which groups should be
involved, what their roles are and when they should be involved. Project development is
still up to the business units, but because core assets may now be out of the business units'
control, more coordination with the domain-engineering units is necessary to update
core assets for use on new projects.
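
Returning to the Canaxia customer database example, the listing below is a minimal
sketch, not drawn from any real Canaxia system, of how a shared customer component
might accept project-specific attributes while the domain team decides which ones to
promote into the core model. All class and method names here are illustrative.

import java.util.HashMap;
import java.util.Map;

public class Customer {
    private final String id;
    private String name; // a core attribute owned by the domain team

    // Attributes requested by individual project teams; the domain team
    // reviews these and folds widely used ones into the core model.
    private final Map<String, String> extendedAttributes = new HashMap<>();

    public Customer(String id, String name) {
        this.id = id;
        this.name = name;
    }

    public String getId() { return id; }
    public String getName() { return name; }

    public void setExtendedAttribute(String key, String value) {
        extendedAttributes.put(key, value);
    }

    public String getExtendedAttribute(String key) {
        return extendedAttributes.get(key);
    }
}

A production plan would then state when in the lifecycle a project team must submit its
new attribute requests to the domain team, so that promoting them into the core model
can be factored into the release schedule.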

SCOPING A PRODUCT LINE


When setting up product lines within an organization, figuring out how to scope the
product lines is difficult. Should the product line support a particular architectural style or
should it map more to the business segment? A typical organization will already have
many applications. Some of these will be related. They will share a common architecture,
a shared infrastructure or be managed by a single organization. These applications will
logically belong to a single product line. But what about applications that belong to other
organizations or have the same quality attribute profile? What if they use different
architectures, processes and infrastructure? How does one decide how to group these
applications into a set of product lines?


To group product lines, there are two main determining factors: the business segment
and the architectural style. A product line can be grouped according to business segment;
for instance, Canaxia could group its product lines by corporate vs. consumer
sales groups. Product lines can be further decomposed according to
architectural style; for instance, the Internet, mid-tier and mainframe systems
could each become part of separate product lines.
In order to analyze how to group products into product lines, the existing inventory of
the applications within the organization should be analyzed. One method of doing this is
to create a grid with the business segments or organizations across the top and the existing
architectural styles on the right. Business segments or organizations are aligned to
the business need. The architectural styles are classes of technologies such as Web, client-server, etc. At Canaxia, Scott and his team went through this exercise and came up with
the following grid:

[Publisher, please place Existing Product Line Grid here]

Figure 6-10

Grouping existing systems into a product line
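
For teams that prefer to capture the grid in a simple tool rather than on paper, the
following is a minimal sketch of the grid as a data structure. The class and method names
are our own invention for illustration, not part of any product line method.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ProductLineGrid {
    // business segment -> (architectural style -> applications in that cell)
    private final Map<String, Map<String, List<String>>> cells = new HashMap<>();

    public void add(String segment, String style, String application) {
        cells.computeIfAbsent(segment, s -> new HashMap<>())
             .computeIfAbsent(style, s -> new ArrayList<>())
             .add(application);
    }

    // The applications in one cell are the candidates for one product line.
    public List<String> candidates(String segment, String style) {
        return cells.getOrDefault(segment, new HashMap<>())
                    .getOrDefault(style, new ArrayList<>());
    }
}

Each homegrown intranet site, for example, would be added under the segment that owns
it and the Web architectural style; reading the populated cells then suggests candidate
product lines, as the following discussion of the intranet systems illustrates.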

After all of the products within the organization were mapped, each application was
grouped with other applications into a product line based on the business need and the
architectural style of the application. For instance, in the above grid, it was clear that the
20 homegrown intranet sites and the purchased intranet software should be grouped into a
single product line. This is reasonable if the same organization maintains each of the
homegrown sites. If not, either the organization must change or there will be
separate product lines that align to each organization.
Once the products are grouped into their respective product lines, each product line
must be managed by a single organization. The line manager and his or her team should
conduct a quality attribute analysis to understand the architectural needs of the
product line. They should then create a shared architecture, infrastructure
and process for the product line and begin working toward that common
vision for the line.

Conclusion
Every organization of any size has tried to implement reuse plans. These plans usually
suffer from a focus on source or component reuse and from a lack of organizational will
when project deadlines are at hand. Structuring an organization around the use of product
lines can help make reusable core assets a focus of the organization, not an afterthought.
Not only should source code and components be reused, but the processes,
infrastructures and especially the people who understand these systems should be leveraged
across projects. This chapter is an introduction to the topic of product lines. A
tremendous amount of research has been and is being done on the topic, and we
encourage you to investigate the additional information and references section of this
chapter.

Additional Information
The SEI Product Line Practice Initiative (http://www.sei.cmu.edu/plp/)
Jan Bosch, Design and Use of Software Architectures, 2000, Addison-Wesley.
Paul Clements, Linda Northrop, Software Product Lines: Practices and Patterns, 2001,
Addison-Wesley.

Chapter 6: The Enterprise Unified Process


Enterprise architecture efforts don't occur in a vacuum; they exist within the cultural
framework defined by your IT department, its environment, and the processes that it
follows. To succeed, your enterprise architecture strategy and the software process(es)
followed by your project teams must reflect one another. If they do not, with your enterprise
architecture team following one approach and the development teams another, you are likely
to find yourself in serious trouble.
In Chapter 5 you learned that the Rational Unified Process (RUP) (Kruchten 2000) is
swiftly becoming an industry standard and is quickly being adopted in organizations
whose culture leans towards a well-defined, prescriptive approach. This makes sense
when you consider the RUP's strengths:
• The RUP is based on sound software engineering principles, such as taking an
iterative, requirements-driven, and architecture-based best-practices
approach to development.
• The RUP provides opportunities for concrete management visibility. The most
important opportunities include the creation of a working architectural
prototype at the end of the Elaboration phase, the creation of a working
(partial) system at the end of each Construction iteration, and the go/no-go
decision point at the end of each phase.
• Rational Corporation (2003) has made, and continues to make, a significant
investment in its RUP product, a collection of HTML pages and artifacts that
your organization can tailor to meet its exact needs.

Unfortunately, the RUP suffers from several weaknesses:
• It is only a development process. The current version of the RUP does not
cover the entire software lifecycle; it is very obviously missing the concept of
operating and supporting systems once they are in production, and it does not
include the concept of eventually retiring systems.
• Multi-system infrastructure development is missing. This includes enterprise
architecture, enterprise business modeling, and strategic reuse efforts.
• Multi-system management is missing. IT departments need to manage both
their staff and their portfolio of projects.
Although the RUP may be adequate for the development of a single system, it is not
adequate to meet the real-world needs of modern organizations. IT departments in
medium to large-sized organizations need to deal with far more than just the issues
involved in developing a single system. Experience shows that to succeed at these other
issues you need to explicitly include them in your software process. Therefore, for
your enterprise architecture efforts to succeed in an organization that has adopted the RUP,
you will need to enhance the RUP to reflect the required activities. The bottom line is that
the RUP may be part of your overall process solution, but it isn't the entire solution.

THE ENTERPRISE UNIFIED PROCESS (EUP)
To enhance the RUP to meet the real-world needs of typical organizations, you need
to expand its scope to include the entire software process, not just the development
process. Figure 6.1 depicts an advanced instantiation of the Unified Process called the
Enterprise Unified Process (EUP). This lifecycle is an extension of the one originally
described by Scott Ambler and Larry Constantine (2000a; 2000b; 2000c; 2002). The EUP
home page is www.enterpriseunifiedprocess.info.

Figure 6.1. The Lifecycle of the Enterprise Unified Process (EUP).

Figure 6.2 shows how the EUP extends the RUP. As Rational Corporation suggests,
each project team will follow a tailored version of the RUP, called a development case,
that meets its unique needs. The twist is that this is done within the scope of another
process, the EUP, which defines a complete software lifecycle. It is important to
understand that the EUP doesn't require you to follow the RUP; it just requires you to
follow a development lifecycle that achieves similar goals (e.g. produces working
software). You could use a process such as eXtreme Programming (XP) (Beck 2000) or
Feature Driven Development (FDD) (Palmer and Felsing 2002) in place of the RUP if
appropriate. The important thing is that the team has a software process to follow that
reflects its situation.
The EUP makes four major additions to the RUP lifecycle:
• The Production Phase
• The Retirement Phase
• The Operations & Support Discipline
• The Enterprise Management Discipline

Figure 6.2. The EUP additions to the RUP lifecycle.

The Production Phase

The EUP's Production phase represents the portion of the software lifecycle in which
a system is in actual use. As the name implies, the goal of this phase is to keep your
software in use until it is either replaced with a new version or retired and removed
from service. Note that there are no iterations during this phase (or one iteration,
depending on how you look at it) because this phase applies to the production lifetime of a
single release of your system. To develop and deploy a new release of your software you
will need to run through the four development phases again, a concept depicted in Figure
6.3.

Figure 6.3. The Augmented view of the EUP lifecycle.

Figure 6.3 is interesting because it depicts critical concepts that aren't always clear
to people new to the Unified Process. The large arrows across the top of the diagram
show that once a system is released into production, you often continue working on a
new release of it. Depending on your situation, work on the next release will begin
in the Inception, Elaboration, or Construction phase.
Another interesting feature is the phase milestones indicated along the bottom of
the diagram to make them more explicit. A serious problem that project teams face is
political pressure to move into the next phase even though they haven't achieved the
appropriate milestone. Explicitly showing the milestones makes it that much harder to
get around them.

The Retirement Phase

The focus of the Retirement Phase is the successful removal of a system from
production. Systems are removed from production for two basic reasons:
• They are no longer needed. An example of this is a system that was put in
production to fulfill the legal requirements imposed by government legislation;
now that the legislation has been repealed, the system is no longer needed.
• The system is being replaced. It is common to see home-grown systems for
human resource functions replaced by commercial off-the-shelf (COTS)
systems, or sometimes even rewritten from scratch.

The retirement, or sunsetting, of a system is often a difficult task that should not be
underestimated. Activities of the Retirement Phase include:
1. A comprehensive analysis of the existing system to identify its
coupling to other systems.
2. A redesign and rework of other, existing systems so that they no
longer rely on the system being retired. These efforts will typically be
treated as projects in their own right.
3. Transformation of existing legacy data, perhaps via database
refactoring (Ambler 2003b), because it will no longer be required or
manipulated by the system being retired.
4. Archival of data previously maintained by the system that is no longer
needed by other systems (a sketch of this activity follows the list).
5. Configuration management of the removed software so that it may be
reinstalled if required at some point in the future (this is easier said
than done).
6. System integration testing of the remaining systems to ensure that
they have not been broken via the retirement of the system in
question.
7. Notification and training for people who use the old system on how to
conduct business after the system is retired.
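
As a concrete illustration of the archival activity (item 4 above), the following is a
minimal sketch using plain JDBC. The connection URL, credentials, and table names are
hypothetical; a real archival effort would be driven by the organization's data retention
policies and would likely be scripted by the database group.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RetirementArchiver {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; substitute your own driver and URL.
        try (Connection con = DriverManager.getConnection(
                "jdbc:yourdb://host/legacydb", "user", "password")) {
            con.setAutoCommit(false); // archive and delete in one transaction
            try (Statement stmt = con.createStatement()) {
                // Copy rows the retired system maintained, but that no other
                // system needs online, into an archive table created in advance.
                stmt.executeUpdate(
                    "INSERT INTO orders_archive SELECT * FROM orders");
                // Remove the originals now that they are safely archived.
                stmt.executeUpdate("DELETE FROM orders");
                con.commit();
            } catch (Exception e) {
                con.rollback();
                throw e;
            }
        }
    }
}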

The Operations and Support Discipline

The goal of the Operations & Support discipline is exactly what the name implies: to
operate and support your software. Operations and support can be very complex
endeavors, endeavors that need processes defined for them. During the Construction
phase, and perhaps as early as the Elaboration phase, you will need to develop operations
and support plans, documents, and training manuals (some of this effort is currently
included in the RUP's Implementation discipline). During the Transition phase you will
continue to develop these artifacts, reworking them based on the results of testing, and you
will train your operations and support staff to work effectively with your software. During
the Production phase your operations staff will keep your software running, performing
necessary backups and batch jobs as needed. Your support staff will work with your user
community, helping them understand the system and overcome potential problems, and
gathering potential change requests pertaining to your system. These activities continue
during the Retirement phase, up to the point where you actually remove the system.

The Enterprise Management Discipline


The Enterprise Management (EM) discipline focuses on the activities required to
develop, evolve, and support your organization's cross-system artifacts, such as your
organization-wide models, software process, standards, guidelines, and reusable
artifacts. Table 1 summarizes the sub-disciplines of Enterprise Management. Although
each sub-discipline is arguably worthy of being a discipline in its own right, you wouldn't want to
complicate the lifecycle with eight extra disciplines, particularly when the RUP lifecycle is
already quite complex.
It is important to understand that although there is overlap with existing RUP
activities, the scope is different: the scope of the activities within the EM sub-disciplines is
many systems, whereas the scope of the corresponding RUP activities is a single system.
When you allocate requirements within the RUP, it is to an iteration or
release of a single system; when you do it as part of the Program Management sub-discipline,
you are assigning them to a system. The implication is that when you are using the
EUP to extend the RUP, you will need to tailor the RUP activities to reflect the
corresponding EUP activities. The type of RUP process tailorings that you need to
consider for each sub-discipline is also indicated in Table 1.

Table 1. The sub-disciplines of Enterprise Management.

Program Management
Description: The definition, planning, management, and tracking of your
organization's entire software portfolio. This sub-discipline includes efforts that
occur before development begins as well as the oversight of all systems (either in
development, in production, or being retired). Activities include identifying
potential systems or families of systems, assigning requirements to these
systems/families, managing your overall IT release schedule (to ensure that
system deployments don't conflict with one another), prioritizing projects,
assigning resources to projects, initiating new projects, and initiating the
retirement of existing systems.
Potential RUP tailorings: Your project management efforts will need to reflect
your overall program management efforts.

Enterprise Business Modeling
Description: The identification, modeling, and communication to others of the
business that your organization is engaged in. It defines both the business
processes that your organization engages in and an overarching model of the
requirements for the software applications that support it. In many ways this is
the business modeling discipline with an enterprise scope instead of a system
scope.
Potential RUP tailorings: Your enterprise business models will provide input for
your project-level business modeling efforts, which in turn provide detailed
feedback to your enterprise-level efforts.

Enterprise Architecture
Description: The activities required to define, communicate, support, and evolve
the enterprise architecture of your organization.
Potential RUP tailorings: See the Adding Enterprise Architecture Activities to the
RUP section.

Enterprise Configuration Management
Description: The identification of all of your organization's hardware and
software assets and the relationships between them.
Potential RUP tailorings: The RUP's Configuration and Change Management
discipline is both enhanced and constrained by the addition of this sub-discipline
because your project-level configuration management efforts should reflect your
enterprise-level efforts.

People Management
Description: This sub-discipline includes basic human resource activities such as
staffing, career management, succession planning, and education (including both
training and mentoring).
Potential RUP tailorings: People management is also a project management
issue. For example, some training can be project specific and someone can be
promoted within the scope of a project team. Therefore your people management
activities inside a project must reflect your activities outside of the project (and
vice versa).

Strategic Reuse Management
Description: The implementation and management of an effective software reuse
program. This can include a reuse team that specializes in harvesting existing
artifacts, developing new artifacts, and then supporting their use on project
teams. It can also include the management and support of a reuse repository.
Potential RUP tailorings: This sub-discipline potentially affects the
modeling-oriented disciplines and the Implementation discipline because the
reuse engineers will work closely with the developers.

Standards and Guidelines Management and Support
Description: The focus of this sub-discipline is the identification,
purchase/development, and support of applicable standards and guidelines for the
enterprise. Potential standards/guidelines include modeling notation (e.g. UML),
modeling style guidelines (www.modelingstyle.info), coding style guidelines,
user documentation guidelines, and data naming conventions. The RUP product
includes some guidelines, but you will find that it may not include everything
your organization requires (and you will definitely find that you need to tailor
them to meet your unique needs).
Potential RUP tailorings: This sub-discipline provides artifacts for a project
team's Environment discipline efforts.

Software Process Management
Description: The management of your software process efforts within your
organization. This includes software process improvement (SPI) and support of
educational activities.
Potential RUP tailorings: This sub-discipline provides the software process that
the Environment discipline tailors. In turn, your Environment efforts provide
feedback to this sub-discipline.

ADDING ENTERPRISE ARCHITECTURE ACTIVITIES TO THE RUP
Figure 6.4 presents a RUP-like workflow diagram for the Enterprise Architecture
sub-discipline. The Define the Enterprise Architecture activity produces two main
deliverables: an Enterprise Architecture Document (EAD) that consists of the enterprise
architectural models and their supporting documentation, and an Enterprise
Architecture Vision that defines the goals and direction of the enterprise architecture. In
many ways this is the "design and document" aspect of this sub-discipline, whereas the bulk
of the analysis occurs in the Perform Architectural Synthesis activity. As you would
expect, its inputs include the Enterprise Business Model that forms the base requirements,
change requests from project teams, and industry research. There are also activities for
developing reference architectures, which are accepted patterns and solutions to common
architectural issues. For example, a J2EE reference architecture would include an
overview of the technology, suggested ways to work with it, and a small working sample
of how to build an application with it. Because effective enterprise architects work with
their stakeholders to understand the enterprise architecture (see Chapter 7 for a detailed
discussion of this), this sub-discipline includes activities for communicating the
architecture to the rest of the organization and mentoring project teams.

Figure 6.4 Enterprise architecture activities.
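
To make the reference architecture idea concrete, the listing below sketches the sort of
small working sample such a package might include: a single servlet built with the
standard javax.servlet API. The class name and URL parameter are hypothetical, and a
full reference architecture would surround this with the overview and usage guidance
described above.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ReferenceSampleServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request,
                         HttpServletResponse response) throws IOException {
        // A reference architecture would direct developers to delegate to a
        // service layer here rather than embed business logic in the servlet.
        String name = request.getParameter("name");
        response.setContentType("text/plain");
        PrintWriter out = response.getWriter();
        out.println("Hello, " + (name == null ? "world" : name));
    }
}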

A fundamental question that you're likely asking is how enterprise architecture
fits into the RUP. The answer is quite easily or very hard, depending on how you choose to
do so. The problem with enterprise architecture, as you'll see in Chapter 7, is that it can
often become quite dysfunctional, particularly when the enterprise architects wish to take a
traditional command-and-control approach as opposed to a more cooperative, agile
approach. Those issues are covered in detail in Chapter 7, so let's focus on potential
changes to the RUP instead.
The Requirements discipline will need to explicitly call out the process by which
enterprise architects share architectural requirements with system analysts. This can be
done simply by including the enterprise architectural models and guidelines with the
requirements, or by including one or more enterprise architects (in the role of stakeholder)
in the requirements gathering process. System analysts will need to take enterprise
architectural requirements into account by referencing them in their supplementary
specification. These requirements will typically focus on technical issues, such as the need
to integrate with existing infrastructure like security frameworks or standard technical
platforms.
The Analysis & Design discipline can also be affected by the introduction of
enterprise architecture artifacts. First, this sub-discipline is the source of the reference
architectures that the A&D discipline already uses but for which no source is currently
given. Second, project-level software architects and designers will need to work with the
enterprise architecture models to ensure that their work fits into the existing architecture
and reflects the future architectural vision for your organization. In many ways the
enterprise architecture models can be thought of as a highly suggested road map for
the project. Third, this effort should be supported by the enterprise architects, who will
often act as mentors to project-level software architects and designers. In fact, enterprise
architects may even find themselves in project-level roles, depending on the way you
decide to formulate your team.
There will also be a slight modification to your Configuration and Change
Management discipline. Project teams will need to be able to submit change requests
against the enterprise architecture artifacts, and in general be able to do so against all
Enterprise Management artifacts. These change requests will then need to be assigned to
the enterprise-level people.
Your enterprise architecture efforts affect other Enterprise Management efforts as
well. For example, the enterprise architects should be reviewing and providing input to
the Program Management effort so that the enterprise architecture can be ready when new
releases are rolled out. They can also provide feedback to the folks in Program
Management as to what can be supported and when.

The Enterprise Unified Process (EUP) defines extensions to the Rational Unified
Process (RUP) that enhance it to address the real-world concerns of medium to large-sized
organizations. From our point of view its most important feature is the addition of an
Enterprise Management discipline that addresses multi-system, enterprise issues. One
such issue is enterprise architecture. To succeed at enterprise architecture you need to
tailor the RUP to include activities for developing, evolving, and supporting it. You also
need to modify existing activities to use your enterprise architecture artifacts and to take
advantage of the expertise of your enterprise architects. The basic strategy is to make
enterprise architecture explicit within the RUP, and the extensions defined by the EUP
clearly do this.
